Handwriting generates variable visual output to facilitate symbol learning.
Li, Julia X; James, Karin H
2016-03-01
Recent research has demonstrated that handwriting practice facilitates letter categorization in young children. The present experiments investigated why handwriting practice facilitates visual categorization by comparing 2 hypotheses: that handwriting exerts its facilitative effect because of the visual-motor production of forms, resulting in a direct link between motor and perceptual systems, or because handwriting produces variable visual instances of a named category in the environment that then changes neural systems. We addressed these issues by measuring performance of 5-year-old children on a categorization task involving novel, Greek symbols across 6 different types of learning conditions: 3 involving visual-motor practice (copying typed symbols independently, tracing typed symbols, tracing handwritten symbols) and 3 involving visual-auditory practice (seeing and saying typed symbols of a single typed font, of variable typed fonts, and of handwritten examples). We could therefore compare visual-motor production with visual perception both of variable and similar forms. Comparisons across the 6 conditions (N = 72) demonstrated that all conditions that involved studying highly variable instances of a symbol facilitated symbol categorization relative to conditions where similar instances of a symbol were learned, regardless of visual-motor production. Therefore, learning perceptually variable instances of a category enhanced performance, suggesting that handwriting facilitates symbol understanding by virtue of its environmental output: supporting the notion of developmental change through brain-body-environment interactions. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Dasgupta, Aritra; Poco, Jorge; Wei, Yaxing; ...
2015-03-16
Evaluation methodologies in visualization have mostly focused on how well the tools and techniques cater to the analytical needs of the user. While this is important in determining the effectiveness of the tools and advancing the state-of-the-art in visualization research, a key area that has mostly been overlooked is how well established visualization theories and principles are instantiated in practice. This is especially relevant when domain experts, and not visualization researchers, design visualizations for analysis of their data or for broader dissemination of scientific knowledge. There is very little research on exploring the synergistic capabilities of cross-domain collaboration between domain experts and visualization researchers. To fill this gap, in this paper we describe the results of an exploratory study of climate data visualizations conducted in tight collaboration with a pool of climate scientists. The study analyzes a large set of static climate data visualizations for identifying their shortcomings in terms of visualization design. The outcome of the study is a classification scheme that categorizes the design problems in the form of a descriptive taxonomy. The taxonomy is a first attempt at systematically categorizing the types, causes, and consequences of design problems in visualizations created by domain experts. We demonstrate the use of the taxonomy for a number of purposes, such as improving the existing climate data visualizations, reflecting on the impact of the problems for enabling domain experts in designing better visualizations, and also learning about the gaps and opportunities for future visualization research. We demonstrate the applicability of our taxonomy through a number of examples and discuss the lessons learnt and implications of our findings.
Fine-grained visual marine vessel classification for coastal surveillance and defense applications
NASA Astrophysics Data System (ADS)
Solmaz, Berkan; Gundogdu, Erhan; Karaman, Kaan; Yücesoy, Veysel; Koç, Aykut
2017-10-01
The need for automated visual content analysis has substantially increased due to the large number of images captured by surveillance cameras. With a focus on the development of practical methods for extracting effective visual data representations, deep neural network based representations have received great attention due to their success in visual categorization of generic images. For fine-grained image categorization, a closely related yet more challenging problem than generic image categorization because of high visual similarity within subgroups, diverse applications have been developed, such as classifying images of vehicles, birds, food and plants. Here, we propose the use of deep neural network based representations for categorizing and identifying marine vessels for defense and security applications. First, we gather a large number of marine vessel images via online sources, grouping them into four coarse categories: naval, civil, commercial and service vessels. Next, we subgroup naval vessels into fine categories such as corvettes, frigates and submarines. For distinguishing images, we extract state-of-the-art deep visual representations and train support vector machines. Furthermore, we fine-tune the deep representations for marine vessel images. Experiments address two scenarios: classification and verification of naval marine vessels. The classification experiment aims at coarse categorization as well as at learning models of the fine categories. The verification experiment involves identifying specific naval vessels by using the learnt deep representations to determine whether a pair of images shows the same vessel. Obtaining promising performance, we believe the presented capabilities would be essential components of future coastal and on-board surveillance systems.
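As a rough illustration of the final classification step described above (deep representations fed to support vector machines), the sketch below trains a linear SVM on feature vectors. It is not the authors' pipeline: the random feature matrix stands in for deep CNN representations, and the four-way label coding and train/test split are assumptions made for the example.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
# Stand-ins for deep CNN representations of vessel images (e.g. 2048-D pooled
# features); in practice these would come from a pretrained network.
X = rng.normal(size=(400, 2048))
y = rng.integers(0, 4, size=400)        # 0=naval, 1=civil, 2=commercial, 3=service

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
clf.fit(X_tr, y_tr)
print("coarse-category accuracy:", clf.score(X_te, y_te))
```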
Scene and human face recognition in the central vision of patients with glaucoma
Aptel, Florent; Attye, Arnaud; Guyader, Nathalie; Boucart, Muriel; Chiquet, Christophe; Peyrin, Carole
2018-01-01
Primary open-angle glaucoma (POAG) initially affects mainly peripheral vision. Current behavioral studies support the idea that visual defects of patients with POAG extend into parts of the central visual field classified as normal by static automated perimetry analysis. This is particularly true for visual tasks involving processes of a higher level than mere detection. The purpose of this study was to assess visual abilities of POAG patients in central vision. Patients were assigned to two groups following a visual field examination (Humphrey 24–2 SITA-Standard test). Patients with both peripheral and central defects and patients with peripheral but no central defect, as well as age-matched controls, participated in the experiment. All participants had to perform two visual tasks where low-contrast stimuli were presented in the central 6° of the visual field. A categorization task of scene images and human face images assessed high-level visual recognition abilities. In contrast, a detection task using the same stimuli assessed low-level visual function. The difference in performance between detection and categorization revealed the cost of high-level visual processing. Compared to controls, patients with a central visual defect showed a deficit in both detection and categorization of all low-contrast images. This is consistent with the abnormal retinal sensitivity as assessed by perimetry. However, the deficit was greater for categorization than detection. Patients without a central defect showed similar performances to the controls concerning the detection and categorization of faces. However, while the detection of scene images was well-maintained, these patients showed a deficit in their categorization. This suggests that the simple loss of peripheral vision could be detrimental to scene recognition, even when the information is displayed in central vision. This study revealed subtle defects in the central visual field of POAG patients that cannot be predicted by static automated perimetry assessment using the Humphrey 24–2 SITA-Standard test. PMID:29481572
Dima, Diana C; Perry, Gavin; Singh, Krish D
2018-06-11
In navigating our environment, we rapidly process and extract meaning from visual cues. However, the relationship between visual features and categorical representations in natural scene perception is still not well understood. Here, we used natural scene stimuli from different categories and filtered at different spatial frequencies to address this question in a passive viewing paradigm. Using representational similarity analysis (RSA) and cross-decoding of magnetoencephalography (MEG) data, we show that categorical representations emerge in human visual cortex at ∼180 ms and are linked to spatial frequency processing. Furthermore, dorsal and ventral stream areas reveal temporally and spatially overlapping representations of low and high-level layer activations extracted from a feedforward neural network. Our results suggest that neural patterns from extrastriate visual cortex switch from low-level to categorical representations within 200 ms, highlighting the rapid cascade of processing stages essential in human visual perception. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
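For readers unfamiliar with representational similarity analysis (RSA) as used above, the sketch below shows its core computation: build a representational dissimilarity matrix (RDM) for each data source and correlate the two. It is a generic illustration rather than the study's analysis code; the array shapes and random data are placeholders.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
meg_patterns = rng.normal(size=(20, 300))    # 20 scene exemplars x 300 sensor/time features
model_features = rng.normal(size=(20, 512))  # same exemplars in a network layer

meg_rdm = pdist(meg_patterns, metric="correlation")      # condition-by-condition dissimilarities
model_rdm = pdist(model_features, metric="correlation")
rho, p = spearmanr(meg_rdm, model_rdm)
print(f"RDM correlation: rho={rho:.2f}, p={p:.3f}")
```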
NICE: A Computational Solution to Close the Gap from Colour Perception to Colour Categorization
Parraga, C. Alejandro; Akbarinia, Arash
2016-01-01
The segmentation of visible electromagnetic radiation into chromatic categories by the human visual system has been extensively studied from a perceptual point of view, resulting in several colour appearance models. However, there is currently a void when it comes to relate these results to the physiological mechanisms that are known to shape the pre-cortical and cortical visual pathway. This work intends to begin to fill this void by proposing a new physiologically plausible model of colour categorization based on Neural Isoresponsive Colour Ellipsoids (NICE) in the cone-contrast space defined by the main directions of the visual signals entering the visual cortex. The model was adjusted to fit psychophysical measures that concentrate on the categorical boundaries and are consistent with the ellipsoidal isoresponse surfaces of visual cortical neurons. By revealing the shape of such categorical colour regions, our measures allow for a more precise and parsimonious description, connecting well-known early visual processing mechanisms to the less understood phenomenon of colour categorization. To test the feasibility of our method we applied it to exemplary images and a popular ground-truth chart obtaining labelling results that are better than those of current state-of-the-art algorithms. PMID:26954691
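As a loose illustration of categorization by ellipsoidal isoresponse surfaces in a cone-contrast space, the sketch below assigns a point to the colour category whose ellipsoid score is smallest. This is not the NICE model itself; the category centres, shape matrices, and three-dimensional space are invented placeholders.

```python
import numpy as np

# Hypothetical category ellipsoids: (centre, shape matrix) in a 3-D cone-contrast space.
categories = {
    "reddish":  (np.array([0.6, 0.1, 0.0]), np.diag([8.0, 20.0, 15.0])),
    "greenish": (np.array([-0.5, 0.2, 0.1]), np.diag([10.0, 18.0, 12.0])),
}

def ellipsoid_score(x, centre, shape):
    d = x - centre
    return d @ shape @ d          # quadratic form; values <= 1 lie inside the ellipsoid

def categorize(x):
    return min(categories, key=lambda c: ellipsoid_score(x, *categories[c]))

print(categorize(np.array([0.55, 0.05, 0.02])))   # -> "reddish"
```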
Space-by-time manifold representation of dynamic facial expressions for emotion categorization
Delis, Ioannis; Chen, Chaona; Jack, Rachael E.; Garrod, Oliver G. B.; Panzeri, Stefano; Schyns, Philippe G.
2016-01-01
Visual categorization is the brain computation that reduces high-dimensional information in the visual environment into a smaller set of meaningful categories. An important problem in visual neuroscience is to identify the visual information that the brain must represent and then use to categorize visual inputs. Here we introduce a new mathematical formalism—termed space-by-time manifold decomposition—that describes this information as a low-dimensional manifold separable in space and time. We use this decomposition to characterize the representations used by observers to categorize the six classic facial expressions of emotion (happy, surprise, fear, disgust, anger, and sad). By means of a Generative Face Grammar, we presented random dynamic facial movements on each experimental trial and used subjective human perception to identify the facial movements that correlate with each emotion category. When the random movements projected onto the categorization manifold region corresponding to one of the emotion categories, observers categorized the stimulus accordingly; otherwise they selected “other.” Using this information, we determined both the Action Unit and temporal components whose linear combinations lead to reliable categorization of each emotion. In a validation experiment, we confirmed the psychological validity of the resulting space-by-time manifold representation. Finally, we demonstrated the importance of temporal sequencing for accurate emotion categorization and identified the temporal dynamics of Action Unit components that cause typical confusions between specific emotions (e.g., fear and surprise) as well as those resolving these confusions. PMID:27305521
Universal and adapted vocabularies for generic visual categorization.
Perronnin, Florent
2008-07-01
Generic Visual Categorization (GVC) is the pattern classification problem which consists in assigning labels to an image based on its semantic content. This is a challenging task as one has to deal with inherent object/scene variations as well as changes in viewpoint, lighting and occlusion. Several state-of-the-art GVC systems use a vocabulary of visual terms to characterize images with a histogram of visual word counts. We propose a novel practical approach to GVC based on a universal vocabulary, which describes the content of all the considered classes of images, and class vocabularies obtained through the adaptation of the universal vocabulary using class-specific data. The main novelty is that an image is characterized by a set of histograms - one per class - where each histogram describes whether the image content is best modeled by the universal vocabulary or the corresponding class vocabulary. This framework is applied to two types of local image features: low-level descriptors such as the popular SIFT and high-level histograms of word co-occurrences in a spatial neighborhood. It is shown experimentally on two challenging datasets (an in-house database of 19 categories and the PASCAL VOC 2006 dataset) that the proposed approach exhibits state-of-the-art performance at a modest computational cost.
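The abstract builds on the standard bag-of-visual-words representation (a histogram of visual word counts over a learned vocabulary). A minimal sketch of that baseline step follows; the universal-plus-adapted vocabulary scheme that is the paper's actual contribution is not reproduced here, and the random descriptors stand in for SIFT features.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
train_descriptors = rng.normal(size=(5000, 128))   # stand-in for pooled SIFT descriptors
vocab = KMeans(n_clusters=64, n_init=10, random_state=0).fit(train_descriptors)

def bovw_histogram(descriptors, vocab):
    words = vocab.predict(descriptors)                       # nearest visual word per descriptor
    hist = np.bincount(words, minlength=vocab.n_clusters)
    return hist / hist.sum()                                 # normalized word-count histogram

image_descriptors = rng.normal(size=(300, 128))              # descriptors from one image
print(bovw_histogram(image_descriptors, vocab)[:8])
```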
Cue Integration in Categorical Tasks: Insights from Audio-Visual Speech Perception
Bejjanki, Vikranth Rao; Clayards, Meghan; Knill, David C.; Aslin, Richard N.
2011-01-01
Previous cue integration studies have examined continuous perceptual dimensions (e.g., size) and have shown that human cue integration is well described by a normative model in which cues are weighted in proportion to their sensory reliability, as estimated from single-cue performance. However, this normative model may not be applicable to categorical perceptual dimensions (e.g., phonemes). In tasks defined over categorical perceptual dimensions, optimal cue weights should depend not only on the sensory variance affecting the perception of each cue but also on the environmental variance inherent in each task-relevant category. Here, we present a computational and experimental investigation of cue integration in a categorical audio-visual (articulatory) speech perception task. Our results show that human performance during audio-visual phonemic labeling is qualitatively consistent with the behavior of a Bayes-optimal observer. Specifically, we show that the participants in our task are sensitive, on a trial-by-trial basis, to the sensory uncertainty associated with the auditory and visual cues, during phonemic categorization. In addition, we show that while sensory uncertainty is a significant factor in determining cue weights, it is not the only one and participants' performance is consistent with an optimal model in which environmental, within category variability also plays a role in determining cue weights. Furthermore, we show that in our task, the sensory variability affecting the visual modality during cue-combination is not well estimated from single-cue performance, but can be estimated from multi-cue performance. The findings and computational principles described here represent a principled first step towards characterizing the mechanisms underlying human cue integration in categorical tasks. PMID:21637344
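A small numerical sketch of the normative idea stated above: in a categorical task, each cue's weight reflects its sensory variance plus the within-category (environmental) variance, rather than sensory variance alone. The formula and numbers are illustrative, not the paper's fitted model.

```python
def cue_weights(sigma_sensory_a, sigma_sensory_v, sigma_category):
    # Effective variance per cue = sensory variance + within-category variance.
    var_a = sigma_sensory_a**2 + sigma_category**2
    var_v = sigma_sensory_v**2 + sigma_category**2
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)   # reliability-proportional weighting
    return w_a, 1 - w_a

# Auditory cue noisier than visual; category variance shrinks the difference in weights.
print(cue_weights(sigma_sensory_a=2.0, sigma_sensory_v=1.0, sigma_category=3.0))
```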
Visual Cortical Entrainment to Motion and Categorical Speech Features during Silent Lipreading
O’Sullivan, Aisling E.; Crosse, Michael J.; Di Liberto, Giovanni M.; Lalor, Edmund C.
2017-01-01
Speech is a multisensory percept, comprising an auditory and visual component. While the content and processing pathways of audio speech have been well characterized, the visual component is less well understood. In this work, we expand current methodologies using system identification to introduce a framework that facilitates the study of visual speech in its natural, continuous form. Specifically, we use models based on the unheard acoustic envelope (E), the motion signal (M) and categorical visual speech features (V) to predict EEG activity during silent lipreading. Our results show that each of these models performs similarly at predicting EEG in visual regions and that respective combinations of the individual models (EV, MV, EM and EMV) provide an improved prediction of the neural activity over their constituent models. In comparing these different combinations, we find that the model incorporating all three types of features (EMV) outperforms the individual models, as well as both the EV and MV models, while it performs similarly to the EM model. Importantly, EM does not outperform EV and MV, which, considering the higher dimensionality of the V model, suggests that more data is needed to clarify this finding. Nevertheless, the performance of EMV, and comparisons of the subject performances for the three individual models, provides further evidence to suggest that visual regions are involved in both low-level processing of stimulus dynamics and categorical speech perception. This framework may prove useful for investigating modality-specific processing of visual speech under naturalistic conditions. PMID:28123363
Taxonomic and ad hoc categorization within the two cerebral hemispheres.
Shen, Yeshayahu; Aharoni, Bat-El; Mashal, Nira
2015-01-01
A typicality effect refers to categorization which is performed more quickly or more accurately for typical than for atypical members of a given category. Previous studies reported a typicality effect for category members presented in the left visual field/right hemisphere (RH), suggesting that the RH applies a similarity-based categorization strategy. However, findings regarding the typicality effect within the left hemisphere (LH) are less conclusive. The current study tested the pattern of typicality effects within each hemisphere for both taxonomic and ad hoc categories, using words presented to the left or right visual fields. Experiment 1 tested typical and atypical members of taxonomic categories as well as non-members, and Experiment 2 tested typical and atypical members of ad hoc categories as well as non-members. The results revealed a typicality effect in both hemispheres and in both types of categories. Furthermore, the RH categorized atypical stimuli more accurately than did the LH. Our findings suggest that both hemispheres rely on a similarity-based categorization strategy, but the coarse semantic coding of the RH seems to facilitate the categorization of atypical members.
Graewe, Britta; De Weerd, Peter; Farivar, Reza; Castelo-Branco, Miguel
2012-01-01
Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200–250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components. PMID:22363479
Modelling individual difference in visual categorization
Shen, Jianhong; Palmeri, Thomas J.
2016-01-01
Recent years have seen growing interest in understanding, characterizing, and explaining individual differences in visual cognition. We focus here on individual differences in visual categorization. Categorization is the fundamental visual ability to group different objects together as the same kind of thing. Research on visual categorization and category learning has been significantly informed by computational modeling, so our review focuses both on how formal models of visual categorization have captured individual differences and on how individual differences have informed the development of formal models. We first examine the potential sources of individual differences in leading models of visual categorization, providing a brief review of a range of different models. We then describe several examples of how computational models have captured individual differences in visual categorization. This review also provides a bit of an historical perspective, starting with models that predicted no individual differences, to those that captured group differences, to those that predict true individual differences, and to more recent hierarchical approaches that can simultaneously capture both group and individual differences in visual categorization. Via this selective review, we see how considerations of individual differences can lead to important theoretical insights into how people visually categorize objects in the world around them. We also consider new directions for work examining individual differences in visual categorization. PMID:28154496
Mira: Libro de apresto (Look: Preparatory Book).
ERIC Educational Resources Information Center
Martinez, Emiliano; And Others
This primer picture book may be used in various games and activities to extend the child's vocabulary and to provide pre-reading practice in letter and sound identification, categorization, and audio-visual discrimination. (Author/SK)
PATTERNS OF CLINICALLY SIGNIFICANT COGNITIVE IMPAIRMENT IN HOARDING DISORDER.
Mackin, R Scott; Vigil, Ofilio; Insel, Philip; Kivowitz, Alana; Kupferman, Eve; Hough, Christina M; Fekri, Shiva; Crothers, Ross; Bickford, David; Delucchi, Kevin L; Mathews, Carol A
2016-03-01
The cognitive characteristics of individuals with hoarding disorder (HD) are not well understood. Existing studies are relatively few and somewhat inconsistent but suggest that individuals with HD may have specific dysfunction in the cognitive domains of categorization, speed of information processing, and decision making. However, there have been no studies evaluating the degree to which cognitive dysfunction in these domains reflects clinically significant cognitive impairment (CI). Participants included 78 individuals who met DSM-5 criteria for HD and 70 age- and education-matched controls. Cognitive performance on measures of memory, attention, information processing speed, abstract reasoning, visuospatial processing, decision making, and categorization ability was evaluated for each participant. Rates of clinical impairment for each measure were compared, as were age- and education-corrected raw scores for each cognitive test. HD participants showed greater incidence of CI on measures of visual memory, visual detection, and visual categorization relative to controls. Raw-score comparisons between groups showed similar results, with HD participants showing lower raw-score performance on each of these measures. In addition, in raw-score comparisons HD participants also demonstrated relative strengths compared to control participants on measures of verbal and visual abstract reasoning. These results suggest that HD is associated with a pattern of clinically significant CI in some visually mediated neurocognitive processes including visual memory, visual detection, and visual categorization. Additionally, these results suggest HD individuals may also exhibit relative strengths, perhaps compensatory, in abstract reasoning in both verbal and visual domains. © 2015 Wiley Periodicals, Inc.
Object Categorization in Finer Levels Relies More on Higher Spatial Frequencies and Takes Longer
Ashtiani, Matin N.; Kheradpisheh, Saeed R.; Masquelier, Timothée; Ganjtabesh, Mohammad
2017-01-01
The human visual system contains a hierarchical sequence of modules that take part in visual perception at different levels of abstraction, i.e., superordinate, basic, and subordinate levels. One important question is to identify the “entry” level at which the visual representation is commenced in the process of object recognition. For a long time, it was believed that the basic level had a temporal advantage over two others. This claim has been challenged recently. Here we used a series of psychophysics experiments, based on a rapid presentation paradigm, as well as two computational models, with bandpass filtered images of five object classes to study the processing order of the categorization levels. In these experiments, we investigated the type of visual information required for categorizing objects in each level by varying the spatial frequency bands of the input image. The results of our psychophysics experiments and computational models are consistent. They indicate that the different spatial frequency information had different effects on object categorization in each level. In the absence of high frequency information, subordinate and basic level categorization are performed less accurately, while the superordinate level is performed well. This means that low frequency information is sufficient for superordinate level, but not for the basic and subordinate levels. These finer levels rely more on high frequency information, which appears to take longer to be processed, leading to longer reaction times. Finally, to avoid the ceiling effect, we evaluated the robustness of the results by adding different amounts of noise to the input images and repeating the experiments. As expected, the categorization accuracy decreased and the reaction time increased significantly, but the trends were the same. This shows that our results are not due to a ceiling effect. The compatibility between our psychophysical and computational results suggests that the temporal advantage of the superordinate (resp. basic) level to basic (resp. subordinate) level is mainly due to the computational constraints (the visual system processes higher spatial frequencies more slowly, and categorization in finer levels depends more on these higher spatial frequencies). PMID:28790954
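To make the bandpass manipulation above concrete, the sketch below splits an image into low and high spatial frequency content with a Gaussian filter. It illustrates the general technique only; the cutoff (sigma), the synthetic image, and the exact filtering scheme are assumptions rather than the study's stimulus-preparation code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_spatial_frequencies(image, sigma=4.0):
    low = gaussian_filter(image.astype(float), sigma=sigma)   # low spatial frequencies
    high = image - low                                         # residual = high spatial frequencies
    return low, high

rng = np.random.default_rng(0)
img = rng.random((256, 256))            # stand-in for a grayscale object photograph
low_sf, high_sf = split_spatial_frequencies(img)
print(low_sf.std(), high_sf.std())
```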
A physiologically based nonhomogeneous Poisson counter model of visual identification.
Christensen, Jeppe H; Markussen, Bo; Bundesen, Claus; Kyllingsbæk, Søren
2018-04-30
A physiologically based nonhomogeneous Poisson counter model of visual identification is presented. The model was developed in the framework of a Theory of Visual Attention (Bundesen, 1990; Kyllingsbæk, Markussen, & Bundesen, 2012) and meant for modeling visual identification of objects that are mutually confusable and hard to see. The model assumes that the visual system's initial sensory response consists in tentative visual categorizations, which are accumulated by leaky integration of both transient and sustained components comparable with those found in spike density patterns of early sensory neurons. The sensory response (tentative categorizations) feeds independent Poisson counters, each of which accumulates tentative object categorizations of a particular type to guide overt identification performance. We tested the model's ability to predict the effect of stimulus duration on observed distributions of responses in a nonspeeded (pure accuracy) identification task with eight response alternatives. The time courses of correct and erroneous categorizations were well accounted for when the event-rates of competing Poisson counters were allowed to vary independently over time in a way that mimicked the dynamics of receptive field selectivity as found in neurophysiological studies. Furthermore, the initial sensory response yielded theoretical hazard rate functions that closely resembled empirically estimated ones. Finally, supplied with a Naka-Rushton type contrast gain control, the model provided an explanation for Bloch's law. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
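A toy sketch of the counter logic described above: independent Poisson counters accumulate tentative categorizations, and the overt response is the alternative with the largest count at stimulus offset. For simplicity the rates below are constant over time, whereas the published model uses nonhomogeneous (time-varying) rates with transient and sustained components; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_trial(rates_hz, duration_s):
    # One Poisson counter per response alternative; counts grow with rate x duration.
    counts = rng.poisson(np.array(rates_hz) * duration_s)
    return int(np.argmax(counts))           # identification = counter with most counts

rates = [9.0, 3.0, 3.0, 2.0]                # the target's counter has the highest rate
responses = [simulate_trial(rates, 0.08) for _ in range(2000)]
print("p(correct) at 80 ms exposure:", np.mean(np.array(responses) == 0))
```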
Koivisto, Mika; Kahila, Ella
2017-04-01
Top-down processes are widely assumed to be essential in visual awareness, the subjective experience of seeing. However, previous studies have not tried to separate directly the roles of different types of top-down influences in visual awareness. We studied the effects of top-down preparation and object substitution masking (OSM) on visual awareness during categorization of objects presented in natural scene backgrounds. The results showed that preparation facilitated categorization but did not influence visual awareness. OSM reduced visual awareness and impaired categorization. The dissociations between the effects of preparation and OSM on visual awareness and on categorization imply that they operate at different stages of cognitive processing. We propose that preparation exerts its influence at the top of the visual hierarchy, whereas OSM interferes with processes occurring at lower levels of the hierarchy. These lower level processes play an essential role in visual awareness. Copyright © 2017 Elsevier Ltd. All rights reserved.
Interval-level measurement with visual analogue scales in Internet-based research: VAS Generator.
Reips, Ulf-Dietrich; Funke, Frederik
2008-08-01
The present article describes VAS Generator (www.vasgenerator.net), a free Web service for creating a wide range of visual analogue scales that can be used as measurement devices in Web surveys and Web experimentation, as well as for local computerized assessment. A step-by-step example for creating and implementing a visual analogue scale with visual feedback is given. VAS Generator and the scales it generates work independently of platforms and use the underlying languages HTML and JavaScript. Results from a validation study with 355 participants are reported and show that the scales generated with VAS Generator approximate an interval-scale level. In light of previous research on visual analogue versus categorical (e.g., radio button) scales in Internet-based research, we conclude that categorical scales only reach ordinal-scale level, and thus visual analogue scales are to be preferred whenever possible.
Similarity relations in visual search predict rapid visual categorization
Mohan, Krithika; Arun, S. P.
2012-01-01
How do we perform rapid visual categorization? It is widely thought that categorization involves evaluating the similarity of an object to other category items, but the underlying features and similarity relations remain unknown. Here, we hypothesized that categorization performance is based on perceived similarity relations between items within and outside the category. To this end, we measured the categorization performance of human subjects on three diverse visual categories (animals, vehicles, and tools) and across three hierarchical levels (superordinate, basic, and subordinate levels among animals). For the same subjects, we measured their perceived pair-wise similarities between objects using a visual search task. Regardless of category and hierarchical level, we found that the time taken to categorize an object could be predicted using its similarity to members within and outside its category. We were able to account for several classic categorization phenomena, such as (a) the longer times required to reject category membership; (b) the longer times to categorize atypical objects; and (c) differences in performance across tasks and across hierarchical levels. These categorization times were also accounted for by a model that extracts coarse structure from an image. The striking agreement observed between categorization and visual search suggests that these two disparate tasks depend on a shared coarse object representation. PMID:23092947
Learning Category-Specific Dictionary and Shared Dictionary for Fine-Grained Image Categorization.
Gao, Shenghua; Tsang, Ivor Wai-Hung; Ma, Yi
2014-02-01
This paper targets fine-grained image categorization by learning a category-specific dictionary for each category and a shared dictionary for all the categories. Such category-specific dictionaries encode subtle visual differences among different categories, while the shared dictionary encodes common visual patterns among all the categories. To this end, we impose incoherence constraints among the different dictionaries in the objective of feature coding. In addition, to make the learnt dictionary stable, we also impose the constraint that each dictionary should be self-incoherent. Our proposed dictionary learning formulation not only applies to fine-grained classification, but also improves conventional basic-level object categorization and other tasks such as event recognition. Experimental results on five data sets show that our method can outperform the state-of-the-art fine-grained image categorization frameworks as well as sparse coding based dictionary learning frameworks. All these results demonstrate the effectiveness of our method.
The Timing of Visual Object Categorization
Mack, Michael L.; Palmeri, Thomas J.
2011-01-01
An object can be categorized at different levels of abstraction: as natural or man-made, animal or plant, bird or dog, or as a Northern Cardinal or Pyrrhuloxia. There has been growing interest in understanding how quickly categorizations at different levels are made and how the timing of those perceptual decisions changes with experience. We specifically contrast two perspectives on the timing of object categorization at different levels of abstraction. By one account, the relative timing implies a relative timing of stages of visual processing that are tied to particular levels of object categorization: Fast categorizations are fast because they precede other categorizations within the visual processing hierarchy. By another account, the relative timing reflects when perceptual features are available over time and the quality of perceptual evidence used to drive a perceptual decision process: Fast simply means fast, it does not mean first. Understanding the short-term and long-term temporal dynamics of object categorizations is key to developing computational models of visual object recognition. We briefly review a number of models of object categorization and outline how they explain the timing of visual object categorization at different levels of abstraction. PMID:21811480
The Characteristics and Limits of Rapid Visual Categorization
Fabre-Thorpe, Michèle
2011-01-01
Visual categorization appears both effortless and virtually instantaneous. The study by Thorpe et al. (1996) was the first to estimate the processing time necessary to perform fast visual categorization of animals in briefly flashed (20 ms) natural photographs. They observed a large differential EEG activity between target and distracter correct trials that developed from 150 ms after stimulus onset, a value that was later shown to be even shorter in monkeys! With such strong processing time constraints, it was difficult to escape the conclusion that rapid visual categorization was relying on massively parallel, essentially feed-forward processing of visual information. Since 1996, we have conducted a large number of studies to determine the characteristics and limits of fast visual categorization. The present chapter will review some of the main results obtained. I will argue that rapid object categorizations in natural scenes can be done without focused attention and are most likely based on coarse and unconscious visual representations activated with the first available (magnocellular) visual information. Fast visual processing proved efficient for the categorization of large superordinate object or scene categories, but shows its limits when more detailed basic representations are required. The representations for basic objects (dogs, cars) or scenes (mountain or sea landscapes) need additional processing time to be activated. This finding is at odds with the widely accepted idea that such basic representations are at the entry level of the system. Interestingly, focused attention is still not required to perform these time consuming basic categorizations. Finally we will show that object and context processing can interact very early in an ascending wave of visual information processing. We will discuss how such data could result from our experience with a highly structured and predictable surrounding world that shaped neuronal visual selectivity. PMID:22007180
Rajaei, Karim; Khaligh-Razavi, Seyed-Mahdi; Ghodrati, Masoud; Ebrahimpour, Reza; Shiri Ahmad Abadi, Mohammad Ebrahim
2012-01-01
The brain mechanism of extracting visual features for recognizing various objects has consistently been a controversial issue in computational models of object recognition. To extract visual features, we introduce a new, biologically motivated model for facial categorization, which is an extension of the Hubel and Wiesel simple-to-complex cell hierarchy. To address the synaptic stability versus plasticity dilemma, we apply the Adaptive Resonance Theory (ART) for extracting informative intermediate level visual features during the learning process, which also makes this model stable against the destruction of previously learned information while learning new information. Such a mechanism has been suggested to be embedded within known laminar microcircuits of the cerebral cortex. To reveal the strength of the proposed visual feature learning mechanism, we show that when we use this mechanism in the training process of a well-known biologically motivated object recognition model (the HMAX model), it performs better than the HMAX model in face/non-face classification tasks. Furthermore, we demonstrate that our proposed mechanism is capable of following similar trends in performance as humans in a psychophysical experiment using a face versus non-face rapid categorization task.
Weinreich, Spencer J
2015-01-01
This paper seeks to explore how culturally and religiously significant animals could shape discourses in which they were deployed, taking the crocodile as its case study. Beginning with the textual and visual traditions linking the crocodile with Africa and the Middle East, I read sixteenth- and seventeenth-century travel narratives categorizing American reptiles as "crocodiles" rather than "alligators," as attempts to mitigate the disruptive strangeness of the Americas. The second section draws on Ann Blair's study of "Mosaic Philosophy" to examine scholarly debates over the taxonomic identity of the biblical Leviathan. I argue that the language and analytical tools of natural philosophy progressively permeated religious discourse. Finally, a survey of more than 25 extant examples of the premodern practice of displaying crocodiles in churches, as well as other crocodilian elements in Christian iconography, provides an explanation for the ubiquity of crocodiles in Wunderkammern, as natural philosophy appropriated ecclesial visual vocabularies.
The functional architecture of the ventral temporal cortex and its role in categorization
Grill-Spector, Kalanit; Weiner, Kevin S.
2014-01-01
Visual categorization is thought to occur in the human ventral temporal cortex (VTC), but how this categorization is achieved is still largely unknown. In this Review, we consider the computations and representations that are necessary for categorization and examine how the microanatomical and macroanatomical layout of the VTC might optimize them to achieve rapid and flexible visual categorization. We propose that efficient categorization is achieved by organizing representations in a nested spatial hierarchy in the VTC. This spatial hierarchy serves as a neural infrastructure for the representational hierarchy of visual information in the VTC and thereby enables flexible access to category information at several levels of abstraction. PMID:24962370
Pavlides, Michael; Birks, Jacqueline; Fryer, Eve; Delaney, David; Sarania, Nikita; Banerjee, Rajarshi; Neubauer, Stefan; Barnes, Eleanor; Fleming, Kenneth A; Wang, Lai Mun
2017-04-01
The aim of the study was to investigate the interobserver agreement for categorical and quantitative scores of liver fibrosis. Sixty-five consecutive biopsy specimens from patients with mixed liver disease etiologies were assessed by three pathologists using the Ishak and nonalcoholic steatohepatitis Clinical Research Network (NASH CRN) scoring systems, and the fibrosis area (collagen proportionate area [CPA]) was estimated by visual inspection (visual-CPA). A subset of 20 biopsy specimens was analyzed using digital imaging analysis (DIA) for the measurement of CPA (DIA-CPA). The bivariate weighted κ between any two pathologists ranged from 0.57 to 0.67 for Ishak staging and from 0.47 to 0.57 for the NASH CRN staging. Bland-Altman analysis showed poor agreement between all possible pathologist pairings for visual-CPA but good agreement between all pathologist pairings for DIA-CPA. There was good agreement between the two pathologists who assessed biopsy specimens by visual-CPA and DIA-CPA. The intraclass correlation coefficient, which is equivalent to the κ statistic for continuous variables, was 0.78 for visual-CPA and 0.97 for DIA-CPA. These results suggest that DIA-CPA is the most robust method for assessing liver fibrosis followed by visual-CPA. Categorical scores perform less well than both the quantitative CPA scores assessed here. © American Society for Clinical Pathology, 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
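A hedged sketch of the quantitative measure discussed above (not the study's imaging pipeline): the collagen proportionate area (CPA) can be approximated as the fraction of pixels whose staining intensity exceeds a threshold. The threshold value and the synthetic "stain" image are placeholders.

```python
import numpy as np

def collagen_proportionate_area(stain_intensity, threshold=0.5):
    collagen_mask = stain_intensity > threshold      # pixels counted as fibrous tissue
    return collagen_mask.mean() * 100                # CPA as a percentage of biopsy area

rng = np.random.default_rng(0)
biopsy = rng.random((512, 512))                      # stand-in for a stained section image
print(f"CPA = {collagen_proportionate_area(biopsy):.1f}%")
```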
Hemispheric Predominance Assessment of Phonology and Semantics: A Divided Visual Field Experiment
ERIC Educational Resources Information Center
Cousin, Emilie; Peyrin, Carole; Baciu, Monica
2006-01-01
The aim of the present behavioural experiment was to evaluate the most lateralized among two phonological (phoneme vs. rhyme detection) and the most lateralized among two semantic ("living" vs. "edible" categorization) tasks, within the dominant hemisphere for language. The reason of addressing this question was a practical one: to evaluate the…
The effect of integration masking on visual processing in perceptual categorization.
Hélie, Sébastien
2017-08-01
Learning to recognize and categorize objects is an essential cognitive skill allowing animals to function in the world. However, animals rarely have access to a canonical view of an object in an uncluttered environment. Hence, it is essential to study categorization under noisy, degraded conditions. In this article, we explore how the brain processes categorization stimuli in low signal-to-noise conditions using multivariate pattern analysis. We used an integration masking paradigm with mask opacity of 50%, 60%, and 70% inside a magnetic resonance imaging scanner. The results show that mask opacity affects blood-oxygen-level dependent (BOLD) signal in visual processing areas (V1, V2, V3, and V4) but does not affect the BOLD signal in brain areas traditionally associated with categorization (prefrontal cortex, striatum, hippocampus). This suggests that when a stimulus is difficult to extract from its background (e.g., low signal-to-noise ratio), the visual system extracts the stimulus and that activity in areas typically associated with categorization are not affected by the difficulty level of the visual conditions. We conclude with implications of this result for research on visual attention, categorization, and the integration of these fields. Copyright © 2017 Elsevier Inc. All rights reserved.
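As a concrete reading of the mask-opacity manipulation described above, the sketch below alpha-blends a stimulus with a noise mask at 50%, 60%, and 70% opacity and reports how much of the original stimulus survives. It illustrates integration masking in general, not the study's stimulus code; the random images are placeholders.

```python
import numpy as np

def integration_mask(stimulus, mask, opacity):
    # opacity = 0.7 means the mask contributes 70% of each pixel's value
    return (1.0 - opacity) * stimulus + opacity * mask

rng = np.random.default_rng(0)
stimulus = rng.random((128, 128))
noise_mask = rng.random((128, 128))
for opacity in (0.5, 0.6, 0.7):
    blended = integration_mask(stimulus, noise_mask, opacity)
    print(opacity, np.corrcoef(blended.ravel(), stimulus.ravel())[0, 1])
```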
Harel, Assaf; Ullman, Shimon; Harari, Danny; Bentin, Shlomo
2011-07-28
Visual expertise is usually defined as the superior ability to distinguish between exemplars of a homogeneous category. Here, we ask how real-world expertise manifests at basic-level categorization and assess the contribution of stimulus-driven and top-down knowledge-based factors to this manifestation. Car experts and novices categorized computer-selected image fragments of cars, airplanes, and faces. Within each category, the fragments varied in their mutual information (MI), an objective quantifiable measure of feature diagnosticity. Categorization of face and airplane fragments was similar within and between groups, showing better performance with increasing MI levels. Novices categorized car fragments more slowly than face and airplane fragments, while experts categorized car fragments as fast as face and airplane fragments. The experts' advantage with car fragments was similar across MI levels, with similar functions relating RT with MI level for both groups. Accuracy was equal between groups for cars as well as faces and airplanes, but experts' response criteria were biased toward cars. These findings suggest that expertise does not entail only specific perceptual strategies. Rather, at the basic level, expertise manifests as a general processing advantage arguably involving application of top-down mechanisms, such as knowledge and attention, which helps experts to distinguish between object categories. © ARVO
Priming and the guidance by visual and categorical templates in visual search.
Wilschut, Anna; Theeuwes, Jan; Olivers, Christian N L
2014-01-01
Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity toward the target feature, i.e., the extent to which observers searched selectively among items of the cued vs. uncued color, was found to be relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in overall performance or selectivity. Altogether, the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, if the priming is controlled for, categorical- and visual-based templates similarly enhance search guidance.
Rennels, Jennifer L.; Kayl, Andrea J.; Langlois, Judith H.; Davis, Rachel E.; Orlewicz, Mateusz
2015-01-01
Infants typically have a preponderance of experience with females, resulting in visual preferences for female faces, particularly high attractive females, and in better categorization of female relative to male faces. We examined whether these abilities generalized to infants’ visual preferences for and categorization of perceptually similar male faces (i.e., low masculine males). Twelve-month-olds visually preferred high attractive relative to low attractive male faces within low masculine pairs only (Exp. 1), but did not visually prefer low masculine relative to high masculine male faces (Exp. 2). Lack of visual preferences was not due to infants’ inability to discriminate between the male faces (Exps. 3 & 4). Twelve-month-olds categorized low masculine, but not high masculine, male faces (Exp. 5). Infants could individuate male faces within each of the categories (Exp. 6). Twelve-month-olds’ attention toward and categorization of male faces may reflect a generalization of their female facial expertise. PMID:26547249
Banno, Hayaki; Saiki, Jun
2015-03-01
Recent studies have sought to determine which levels of categories are processed first in visual scene categorization and have shown that the natural and man-made superordinate-level categories are understood faster than are basic-level categories. The current study examined the robustness of the superordinate-level advantage in a visual scene categorization task. A go/no-go categorization task was evaluated with response time distribution analysis using an ex-Gaussian template. A visual scene was categorized as either superordinate or basic level, and two basic-level categories forming a superordinate category were judged as either similar or dissimilar to each other. First, outdoor/indoor and natural/man-made were used as superordinate categories to investigate whether the advantage could be generalized beyond the natural/man-made boundary. Second, a set of images forming a superordinate category was manipulated. We predicted that decreasing image set similarity within the superordinate-level category would work against the speed advantage. We found that basic-level categorization was faster than outdoor/indoor categorization when the outdoor category comprised dissimilar basic-level categories. Our results indicate that the superordinate-level advantage in visual scene categorization is labile across different categories and category structures. © 2015 SAGE Publications.
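The ex-Gaussian response time analysis mentioned above can be sketched as follows. This is only an illustrative fit to simulated RTs using SciPy's exponentially modified normal distribution, not the authors' analysis pipeline.

```python
# Illustrative sketch: fitting an ex-Gaussian to response times.
# RTs are simulated, not the study's data.
import numpy as np
from scipy.stats import exponnorm

rng = np.random.default_rng(0)
# Simulated RTs (ms): Gaussian component (mu, sigma) plus exponential tail (tau)
mu, sigma, tau = 450.0, 40.0, 120.0
rts = rng.normal(mu, sigma, 500) + rng.exponential(tau, 500)

# SciPy parameterizes the ex-Gaussian as exponnorm(K, loc, scale),
# with mu = loc, sigma = scale, tau = K * scale
K, loc, scale = exponnorm.fit(rts)
print(f"mu ~ {loc:.0f} ms, sigma ~ {scale:.0f} ms, tau ~ {K * scale:.0f} ms")
```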
A comparison of haptic material perception in blind and sighted individuals.
Baumgartner, Elisabeth; Wiebel, Christiane B; Gegenfurtner, Karl R
2015-10-01
We investigated material perception in blind participants to explore the influence of visual experience on material representations and the relationship between visual and haptic material perception. In a previous study with sighted participants, we had found participants' visual and haptic judgments of material properties to be very similar (Baumgartner, Wiebel, & Gegenfurtner, 2013). In a categorization task, however, visual exploration had led to higher categorization accuracy than haptic exploration. Here, we asked congenitally blind participants to explore different materials haptically and rate several material properties in order to assess the role of the visual sense for the emergence of haptic material perception. Principal components analyses combined with a procrustes superimposition showed that the material representations of blind and blindfolded sighted participants were highly similar. We also measured haptic categorization performance, which was equal for the two groups. We conclude that haptic material representations can emerge independently of visual experience, and that there are no advantages for either group of observers in haptic categorization. Copyright © 2015 Elsevier Ltd. All rights reserved.
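A minimal sketch of the PCA-plus-Procrustes comparison of two groups' rating spaces follows, assuming invented rating matrices; it illustrates the general technique only, not the study's data or preprocessing.

```python
# Illustrative sketch: compare two groups' material-property rating spaces with
# PCA followed by Procrustes superimposition. Ratings are random placeholders.
import numpy as np
from sklearn.decomposition import PCA
from scipy.spatial import procrustes

rng = np.random.default_rng(1)
# Rows = materials, columns = rated properties (e.g., roughness, hardness, ...)
ratings_blind   = rng.normal(size=(12, 6))
ratings_sighted = ratings_blind + rng.normal(scale=0.3, size=(12, 6))  # similar space

pcs_blind   = PCA(n_components=2).fit_transform(ratings_blind)
pcs_sighted = PCA(n_components=2).fit_transform(ratings_sighted)

# Procrustes rescales, rotates, and reflects one configuration onto the other;
# disparity is the residual sum of squared differences (0 = identical spaces).
_, _, disparity = procrustes(pcs_blind, pcs_sighted)
print(f"Procrustes disparity = {disparity:.3f}")
```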
Wen, Haiguang; Shi, Junxing; Chen, Wei; Liu, Zhongming
2018-02-28
The brain represents visual objects with topographic cortical patterns. To address how distributed visual representations enable object categorization, we established predictive encoding models based on a deep residual network, and trained them to predict cortical responses to natural movies. Using this predictive model, we mapped human cortical representations of 64,000 visual objects from 80 categories with high throughput and accuracy. Such representations covered both the ventral and dorsal pathways, reflected multiple levels of object features, and preserved semantic relationships between categories. In the entire visual cortex, object representations were organized into three clusters of categories: biological objects, non-biological objects, and background scenes. In a finer scale specific to each cluster, object representations revealed sub-clusters for further categorization. Such hierarchical clustering of category representations was mostly contributed by cortical representations of object features from middle to high levels. In summary, this study demonstrates a useful computational strategy to characterize the cortical organization and representations of visual features for rapid categorization.
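The encoding-model idea can be illustrated with a simple regularized regression from image features to voxel responses. The sketch below uses random stand-ins for the deep-network activations and the cortical data; it is not the authors' pipeline.

```python
# Illustrative sketch of a voxel-wise encoding model: ridge regression from
# image features to simulated cortical responses. Data are synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_images, n_features, n_voxels = 200, 50, 10
features = rng.normal(size=(n_images, n_features))   # stand-in for network activations
weights = rng.normal(size=(n_features, n_voxels))
responses = features @ weights + rng.normal(scale=2.0, size=(n_images, n_voxels))

X_tr, X_te, y_tr, y_te = train_test_split(features, responses, test_size=0.25, random_state=0)
model = Ridge(alpha=10.0).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Prediction accuracy per voxel: correlation between predicted and observed responses
acc = [np.corrcoef(pred[:, v], y_te[:, v])[0, 1] for v in range(n_voxels)]
print(f"median voxel prediction r = {np.median(acc):.2f}")
```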
Rossion, Bruno; Jacques, Corentin; Jonas, Jacques
2018-02-26
The neural basis of face categorization has been widely investigated with functional magnetic resonance imaging (fMRI), identifying a set of face-selective local regions in the ventral occipitotemporal cortex (VOTC). However, indirect recording of neural activity with fMRI is associated with large fluctuations of signal across regions, often underestimating face-selective responses in the anterior VOTC. While direct recording of neural activity with subdural grids of electrodes (electrocorticography, ECoG) or depth electrodes (stereotactic electroencephalography, SEEG) offers a unique opportunity to fill this gap in knowledge, these studies rather reveal widely distributed face-selective responses. Moreover, intracranial recordings are complicated by interindividual variability in neuroanatomy, ambiguity in the definition and quantification of responses of interest, as well as limited access to sulci with ECoG. Here, we propose to combine SEEG in large samples of individuals with fast periodic visual stimulation to objectively define, quantify, and characterize face categorization across the whole VOTC. This approach reconciles the wide distribution of neural face categorization responses with their (right) hemispheric and regional specialization, and reveals several face-selective regions in anterior VOTC sulci. We outline the challenges of this research program to understand the neural basis of face categorization and high-level visual recognition in general. © 2018 New York Academy of Sciences.
The Role of Age and Executive Function in Auditory Category Learning
Reetzke, Rachel; Maddox, W. Todd; Chandrasekaran, Bharath
2015-01-01
Auditory categorization is a natural and adaptive process that allows for the organization of high-dimensional, continuous acoustic information into discrete representations. Studies in the visual domain have identified a rule-based learning system that learns and reasons via a hypothesis-testing process that requires working memory and executive attention. The rule-based learning system in vision shows a protracted development, reflecting the influence of maturing prefrontal function on visual categorization. The aim of the current study is two-fold: (a) to examine the developmental trajectory of rule-based auditory category learning from childhood through adolescence, into early adulthood; and (b) to examine the extent to which individual differences in rule-based category learning relate to individual differences in executive function. Sixty participants with normal hearing, 20 children (age range, 7–12), 21 adolescents (age range, 13–19), and 19 young adults (age range, 20–23), learned to categorize novel dynamic ripple sounds using trial-by-trial feedback. The spectrotemporally modulated ripple sounds are considered the auditory equivalent of the well-studied Gabor patches in the visual domain. Results revealed that auditory categorization accuracy improved with age, with young adults outperforming children and adolescents. Computational modeling analyses indicated that the use of the task-optimal strategy (i.e. a conjunctive rule-based learning strategy) improved with age. Notably, individual differences in executive flexibility significantly predicted auditory category learning success. The current findings demonstrate a protracted development of rule-based auditory categorization. The results further suggest that executive flexibility coupled with perceptual processes play important roles in successful rule-based auditory category learning. PMID:26491987
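A conjunctive rule-based strategy of the kind identified by the computational modeling can be sketched as a classifier that requires a stimulus to exceed criteria on both dimensions. The stimuli, dimensions, and criteria below are hypothetical, not the ripple sounds or model fits from the study.

```python
# Illustrative sketch of a conjunctive rule-based categorization strategy:
# respond "A" only if a stimulus exceeds a criterion on both dimensions.
import numpy as np

def conjunctive_rule(stimuli, c1, c2):
    """stimuli: (n, 2) array; returns 1 ('A') when dim 1 > c1 AND dim 2 > c2."""
    return ((stimuli[:, 0] > c1) & (stimuli[:, 1] > c2)).astype(int)

rng = np.random.default_rng(3)
stimuli = rng.uniform(0, 10, size=(100, 2))
true_labels = ((stimuli[:, 0] > 5) & (stimuli[:, 1] > 4)).astype(int)

# Grid-search the two criteria, loosely analogous to a learner testing rules
best = max(((c1, c2) for c1 in np.linspace(0, 10, 21) for c2 in np.linspace(0, 10, 21)),
           key=lambda c: np.mean(conjunctive_rule(stimuli, *c) == true_labels))
print("best-fitting criteria:", best)
```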
Prete, Giulia; Fabri, Mara; Foschi, Nicoletta; Tommasi, Luca
2016-12-17
We investigated hemispheric asymmetries in categorization of face gender by means of a divided visual field paradigm, in which female and male faces were presented unilaterally for 150 ms each. A group of 60 healthy participants (30 males) and a male split-brain patient (D.D.C.) were asked to categorize the gender of the stimuli. Healthy participants categorized male faces presented in the right visual field (RVF) better and faster than when presented in the left visual field (LVF), and female faces presented in the LVF than in the RVF, independently of the participants' sex. Surprisingly, the recognition rates of D.D.C. were at chance levels - and significantly lower than those of the healthy participants - for both female and male faces presented in the RVF, as well as for female faces presented in the LVF. His performance was higher than expected by chance - and did not differ from controls - only for male faces presented in the LVF. The residual right-hemispheric ability of the split-brain patient in categorizing male faces reveals an own-gender bias lateralized in the right hemisphere, in line with the rightward own-identity and own-age bias previously shown in split-brain patients. The gender-contingent hemispheric dominance found in healthy participants confirms the previously shown right-hemispheric superiority in recognizing female faces, and also reveals a left-hemispheric superiority in recognizing male faces, adding important evidence of hemispheric imbalance in the field of face and gender perception. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
Rossion, Bruno; Dricot, Laurence; Goebel, Rainer; Busigny, Thomas
2011-01-01
How a visual stimulus is initially categorized as a face in a network of human brain areas remains largely unclear. Hierarchical neuro-computational models of face perception assume that the visual stimulus is first decomposed in local parts in lower order visual areas. These parts would then be combined into a global representation in higher order face-sensitive areas of the occipito-temporal cortex. Here we tested this view in fMRI with visual stimuli that are categorized as faces based on their global configuration rather than their local parts (two-tone Mooney figures and Arcimboldo's facelike paintings). Compared to the same inverted visual stimuli that are not categorized as faces, these stimuli activated the right middle fusiform gyrus (“Fusiform face area”) and superior temporal sulcus (pSTS), with no significant activation in the posteriorly located inferior occipital gyrus (i.e., no “occipital face area”). This observation is strengthened by behavioral and neural evidence for normal face categorization of these stimuli in a brain-damaged prosopagnosic patient whose intact right middle fusiform gyrus and superior temporal sulcus are devoid of any potential face-sensitive inputs from the lesioned right inferior occipital cortex. Together, these observations indicate that face-preferential activation may emerge in higher order visual areas of the right hemisphere without any face-preferential inputs from lower order visual areas, supporting a non-hierarchical view of face perception in the visual cortex. PMID:21267432
ERP signs of categorical and supra-categorical processing of visual information.
Zani, Alberto; Marsili, Giulia; Senerchia, Annapaola; Orlandi, Andrea; Citron, Francesca M M; Rizzi, Ezia; Proverbio, Alice M
2015-01-01
The aim of the present study was to investigate to what extent shared and distinct brain mechanisms are possibly subserving the processing of visual supra-categorical and categorical knowledge as observed with event-related potentials of the brain. Access time to these knowledge types was also investigated. Picture pairs of animals, objects, and mixed types were presented. Participants were asked to decide whether each pair contained pictures belonging to the same category (either animals or man-made objects) or to different categories by pressing one of two buttons. Response accuracy and reaction times (RTs) were also recorded. Both ERPs and RTs were grand-averaged separately for the same-different supra-categories and the animal-object categories. Behavioral performance was faster for more endomorphic pairs, i.e., animals vs. objects and same vs. different category pairs. For ERPs, a modulation of the earliest C1 and subsequent P1 responses to the same vs. different supra-category pairs, but not to the animal vs. object category pairs, was found. This finding supports the view that early afferent processing in the striate cortex can be boosted as a by-product of attention allocated to the processing of shapes and basic features that are mismatched, but not to their semantic quintessence, during same-different supra-categorical judgment. Most importantly, the fact that this processing accrual occurred independent of a traditional experimental condition requiring selective attention to a stimulus source out of the various sources addressed makes it conceivable that this processing accrual may arise from the attentional demand deriving from the alternate focusing of visual attention within and across stimulus categorical pairs' basic structural features. Additional posterior ERP reflections of the brain more prominently processing animal category and same-category pairs were observed at the N1 and N2 levels, respectively, as well as at a late positive complex level, overall most likely related to different stages of analysis of the greater endomorphy of these shape groups. Conversely, an enhanced fronto-central and fronto-lateral N2 as well as a centro-parietal N400 to man-made objects and different-category pairs were found, possibly indexing processing of these entities' lower endomorphy and isomorphy at the basic features and semantic levels, respectively. Overall, the present ERP results revealed shared and distinct mechanisms of access to supra-categorical and categorical knowledge in the same way in which shared and distinct neural representations underlie the processing of diverse semantic categories. Additionally, they outlined the serial nature of categorical and supra-categorical representations, indicating the sequential steps of access to these separate knowledge types. Copyright © 2014 Elsevier B.V. All rights reserved.
Impact of feature saliency on visual category learning.
Hammer, Rubi
2015-01-01
People have to sort numerous objects into a large number of meaningful categories while operating in varying contexts. This requires identifying the visual features that best predict the 'essence' of objects (e.g., edibility), rather than categorizing objects based on the most salient features in a given context. To gain this capacity, visual category learning (VCL) relies on multiple cognitive processes. These may include unsupervised statistical learning, which requires observing multiple objects for learning the statistics of their features. Other learning processes enable incorporating different sources of supervisory information, alongside the visual features of the categorized objects, from which the categorical relations between a few objects can be deduced. These deductions enable inferring that objects from the same category may differ from one another in some high-saliency feature dimensions, whereas lower-saliency feature dimensions can best differentiate objects from distinct categories. Here I illustrate how feature saliency affects VCL, and also discuss the kinds of supervisory information that enable reflective categorization. Arguably, the principles debated here are often ignored in categorization studies.
Impact of feature saliency on visual category learning
Hammer, Rubi
2015-01-01
People have to sort numerous objects into a large number of meaningful categories while operating in varying contexts. This requires identifying the visual features that best predict the ‘essence’ of objects (e.g., edibility), rather than categorizing objects based on the most salient features in a given context. To gain this capacity, visual category learning (VCL) relies on multiple cognitive processes. These may include unsupervised statistical learning, which requires observing multiple objects for learning the statistics of their features. Other learning processes enable incorporating different sources of supervisory information, alongside the visual features of the categorized objects, from which the categorical relations between a few objects can be deduced. These deductions enable inferring that objects from the same category may differ from one another in some high-saliency feature dimensions, whereas lower-saliency feature dimensions can best differentiate objects from distinct categories. Here I illustrate how feature saliency affects VCL, and also discuss the kinds of supervisory information that enable reflective categorization. Arguably, the principles debated here are often ignored in categorization studies. PMID:25954220
ERIC Educational Resources Information Center
Diesendruck, Gil; Peretz, Shimon
2013-01-01
Visual appearance is one of the main cues children rely on when categorizing novel objects. In 3 studies, testing 128 3-year-olds and 192 5-year-olds, we investigated how various kinds of information may differentially lead children to overlook visual appearance in their categorization decisions across domains. Participants saw novel animals or…
Rioux, Camille; Lafraire, Jérémie; Picard, Delphine
2018-01-01
The present research focuses on the effectiveness of visual exposure to vegetables in reducing food neophobia and pickiness among young children. We tested the hypotheses that (1) simple visual exposure to vegetables leads to an increase in the consumption of this food category, (2) diverse visual exposure to vegetables (i.e., vegetables varying in color are shown to children) leads to a greater increase in the consumption of this food category than classical exposure paradigms (i.e., the same mode of presentation of a given food across exposure sessions) and (3) visual exposure to vegetables leads to an increase in the consumption of this food category through a mediating effect of an increase in ease of categorization. We recruited 70 children aged 3-6 years who performed a 4-week study consisting of three phases: a 2-week visual exposure phase where place mats with pictures of vegetables were set on tables in school cafeterias, and pre- and post-intervention phases where willingness to try vegetables as well as cognitive performances were assessed for each child. Results indicated that visual exposure led to an increased consumption of exposed and non-exposed vegetables after the intervention period. Nevertheless, the exposure intervention where vegetables varying in color were shown to children was no more effective than simple exposure. Finally, results showed that greater ease of categorization was associated with a larger impact of the exposure manipulation. The findings suggest that vegetable pictures might help parents to deal with some of the difficulties associated with the introduction of novel vegetables and furthermore that focusing on conceptual development could be an efficient way to tackle food neophobia and pickiness. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kosmowski, Frédéric; Stevenson, James; Campbell, Jeff; Ambel, Alemayehu; Haile Tsegay, Asmelash
2017-10-01
Maintaining permanent coverage of the soil using crop residues is an important and commonly recommended practice in conservation agriculture. Measuring this practice is an essential step in improving knowledge about the adoption and impact of conservation agriculture. Different data collection methods can be implemented to capture the field level crop residue coverage for a given plot, each with its own implications for survey budget, implementation speed, and respondent and interviewer burden. In this paper, six alternative methods of crop residue coverage measurement are tested among the same sample of rural households in Ethiopia. The relative accuracy of these methods is compared against a benchmark, the line-transect method. The alternative methods compared against the benchmark include: (i) interviewee (respondent) estimation; (ii) enumerator estimation visiting the field; (iii) interviewee with visual-aid without visiting the field; (iv) enumerator with visual-aid visiting the field; (v) field picture collected with a drone and analyzed with image-processing methods and (vi) satellite picture of the field analyzed with remote sensing methods. Results of the methodological experiment show that survey-based methods tend to underestimate field residue cover. When quantitative data on cover are needed, the best estimates are provided by visual-aid protocols. For categorical analysis (i.e., >30% cover or not), visual-aid protocols and remote sensing methods perform equally well. Among survey-based methods, the strongest correlates of measurement errors are total farm size, field size, distance, and slope. Results deliver a ranking of measurement options that can inform survey practitioners and researchers.
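A minimal sketch of how alternative measurements might be compared against the line-transect benchmark follows, using mean bias and the >30% categorical agreement mentioned above. The cover values are invented, and the analysis is far simpler than the paper's.

```python
# Illustrative sketch: comparing alternative residue-cover estimates against a
# benchmark (line-transect). All cover values (%) are invented.
import numpy as np

benchmark = np.array([12, 35, 48, 22, 60, 8, 31, 55, 18, 40])   # line-transect
methods = {
    "respondent estimate": np.array([5, 20, 40, 10, 45, 5, 20, 40, 10, 30]),
    "enumerator visual aid": np.array([10, 33, 45, 20, 58, 7, 29, 52, 16, 38]),
}

for name, est in methods.items():
    bias = np.mean(est - benchmark)                              # negative = underestimate
    cat_agree = np.mean((est > 30) == (benchmark > 30))          # >30% cover or not
    print(f"{name}: mean bias = {bias:+.1f} pts, categorical agreement = {cat_agree:.0%}")
```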
Differential effect of visual masking in perceptual categorization.
Hélie, Sébastien; Cousineau, Denis
2015-06-01
This article explores the visual information used to categorize stimuli drawn from a common stimulus space into verbal and nonverbal categories using 2 experiments. Experiment 1 explores the effect of target duration on verbal and nonverbal categorization using backward masking to interrupt visual processing. With categories equated for difficulty for long and short target durations, intermediate target duration shows an advantage for verbal categorization over nonverbal categorization. Experiment 2 tests whether the results of Experiment 1 can be explained by shorter target duration resulting in a smaller signal-to-noise ratio of the categorization stimulus. To test for this possibility, Experiment 2 used integration masking with the same stimuli, categories, and masks as Experiment 1 with a varying level of mask opacity. As predicted, low mask opacity yielded similar results to long target duration while high mask opacity yielded similar results to short target duration. Importantly, intermediate mask opacity produced an advantage for verbal categorization over nonverbal categorization, similar to intermediate target duration. These results suggest that verbal and nonverbal categorization are affected differently by manipulations affecting the signal-to-noise ratio of the stimulus, consistent with multiple-system theories of categorization. The results further suggest that verbal categorization may be more digital (and more robust to low signal-to-noise ratio) while the information used in nonverbal categorization may be more analog (and less robust to lower signal-to-noise ratio). This article concludes with a discussion of how these new results affect the use of masking in perceptual categorization and multiple-system theories of perceptual category learning. (c) 2015 APA, all rights reserved.
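Integration masking amounts to alpha-blending a noise mask with the stimulus, so mask opacity directly controls the stimulus signal-to-noise ratio. The sketch below illustrates this with synthetic arrays standing in for grayscale images; the values and opacity levels are arbitrary choices, not the study's stimuli.

```python
# Illustrative sketch of integration masking: the mask is alpha-blended with
# the stimulus, so higher opacity lowers the stimulus signal-to-noise ratio.
import numpy as np

rng = np.random.default_rng(4)
stimulus = rng.uniform(0, 1, size=(64, 64))     # categorization stimulus (stand-in image)
mask = rng.uniform(0, 1, size=(64, 64))         # noise mask

def integrate(stimulus, mask, opacity):
    """Composite the mask over the stimulus at the given opacity (0-1)."""
    return opacity * mask + (1.0 - opacity) * stimulus

for opacity in (0.5, 0.6, 0.7):
    composite = integrate(stimulus, mask, opacity)
    # Correlation with the original stimulus as a rough signal-strength index
    r = np.corrcoef(composite.ravel(), stimulus.ravel())[0, 1]
    print(f"opacity {opacity:.1f}: stimulus-composite correlation = {r:.2f}")
```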
Post-decision biases reveal a self-consistency principle in perceptual inference.
Luu, Long; Stocker, Alan A
2018-05-15
Making a categorical judgment can systematically bias our subsequent perception of the world. We show that these biases are well explained by a self-consistent Bayesian observer whose perceptual inference process is causally conditioned on the preceding choice. We quantitatively validated the model and its key assumptions with a targeted set of three psychophysical experiments, focusing on a task sequence where subjects first had to make a categorical orientation judgment before estimating the actual orientation of a visual stimulus. Subjects exhibited a high degree of consistency between categorical judgment and estimate, which is difficult to reconcile with alternative models in the face of late, memory related noise. The observed bias patterns resemble the well-known changes in subjective preferences associated with cognitive dissonance, which suggests that the brain's inference processes may be governed by a universal self-consistency constraint that avoids entertaining 'dissonant' interpretations of the evidence. © 2018, Luu et al.
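The self-consistency idea can be sketched as an estimator whose posterior is truncated at the category boundary implied by the preceding choice. The toy model below uses arbitrary numbers and a simplified one-dimensional setup; it only shows how conditioning on the choice biases the estimate away from the boundary, and is not the authors' model.

```python
# Illustrative sketch: a self-consistent orientation estimate conditioned on a
# preceding categorical ("clockwise"/"counterclockwise") judgment.
import numpy as np

theta = np.linspace(-45, 45, 901)               # orientation (deg), category boundary at 0
prior = np.exp(-0.5 * (theta / 20.0) ** 2)      # broad prior centered on vertical

def estimate(measurement, sigma, choice=None):
    """Posterior-mean orientation estimate, optionally conditioned on the choice."""
    likelihood = np.exp(-0.5 * ((theta - measurement) / sigma) ** 2)
    post = prior * likelihood
    if choice == "cw":                          # keep only orientations consistent with the choice
        post = post * (theta > 0)
    elif choice == "ccw":
        post = post * (theta < 0)
    post = post / post.sum()
    return np.sum(theta * post)

# A measurement barely clockwise of vertical: conditioning on the "cw" judgment
# pushes the estimate away from the category boundary, producing a repulsive bias.
print(f"unconditioned: {estimate(2.0, 8.0):+.1f} deg, "
      f"conditioned on 'cw': {estimate(2.0, 8.0, 'cw'):+.1f} deg")
```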
The effect of category learning on attentional modulation of visual cortex.
Folstein, Jonathan R; Fuller, Kelly; Howard, Dorothy; DePatie, Thomas
2017-09-01
Learning about visual object categories causes changes in the way we perceive those objects. One likely mechanism by which this occurs is the application of attention to potentially relevant objects. Here we test the hypothesis that category membership influences the allocation of attention, allowing attention to be applied not only to object features, but to entire categories. Participants briefly learned to categorize a set of novel cartoon animals after which EEG was recorded while participants distinguished between a target and non-target category. A second identical EEG session was conducted after two sessions of categorization practice. The category structure and task design allowed parametric manipulation of number of target features while holding feature frequency and category membership constant. We found no evidence that category membership influenced attentional selection: a postero-lateral negative component, labeled the selection negativity/N250, increased over time and was sensitive to number of target features, not target categories. In contrast, the right hemisphere N170 was not sensitive to target features. The P300 appeared sensitive to category in the first session, but showed a graded sensitivity to number of target features in the second session, possibly suggesting a transition from rule-based to similarity based categorization. Copyright © 2017. Published by Elsevier Ltd.
Brown, Angela M; Lindsey, Delwin T; Guckes, Kevin M
2011-01-01
The relation between colors and their names is a classic case-study for investigating the Sapir-Whorf hypothesis that categorical perception is imposed on perception by language. Here, we investigate the Sapir-Whorf prediction that visual search for a green target presented among blue distractors (or vice versa) should be faster than search for a green target presented among distractors of a different shade of green (or for a blue target among different blue distractors). Gilbert, Regier, Kay & Ivry (2006) reported that this Sapir-Whorf effect is restricted to the right visual field (RVF), because the major brain language centers are in the left cerebral hemisphere. We found no categorical effect at the Green|Blue color boundary, and no categorical effect restricted to the RVF. Scaling of perceived color differences by Maximum Likelihood Difference Scaling (MLDS) also showed no categorical effect, including no effect specific to the RVF. Two models fit the data: a color difference model based on MLDS and a standard opponent-colors model of color discrimination based on the spectral sensitivities of the cones. Neither of these models, nor any of our data, suggested categorical perception of colors at the Green|Blue boundary, in either visual field. PMID:21980188
Drawing-to-Learn: A Framework for Using Drawings to Promote Model-Based Reasoning in Biology
Quillin, Kim; Thomas, Stephen
2015-01-01
The drawing of visual representations is important for learners and scientists alike, such as the drawing of models to enable visual model-based reasoning. Yet few biology instructors recognize drawing as a teachable science process skill, as reflected by its absence in the Vision and Change report’s Modeling and Simulation core competency. Further, the diffuse research on drawing can be difficult to access, synthesize, and apply to classroom practice. We have created a framework of drawing-to-learn that defines drawing, categorizes the reasons for using drawing in the biology classroom, and outlines a number of interventions that can help instructors create an environment conducive to student drawing in general and visual model-based reasoning in particular. The suggested interventions are organized to address elements of affect, visual literacy, and visual model-based reasoning, with specific examples cited for each. Further, a Blooming tool for drawing exercises is provided, as are suggestions to help instructors address possible barriers to implementing and assessing drawing-to-learn in the classroom. Overall, the goal of the framework is to increase the visibility of drawing as a skill in biology and to promote the research and implementation of best practices. PMID:25713094
ERIC Educational Resources Information Center
Roberson, Debi; Pak, Hyensou; Hanley, J. Richard
2008-01-01
In this study we demonstrate that Korean (but not English) speakers show categorical perception (CP) on a visual search task for a boundary between two Korean colour categories that is not marked in English. These effects were observed regardless of whether target items were presented to the left or right visual field. Because this boundary is…
Groen, Iris I. A.; Ghebreab, Sennay; Lamme, Victor A. F.; Scholte, H. Steven
2012-01-01
The visual world is complex and continuously changing. Yet, our brain transforms patterns of light falling on our retina into a coherent percept within a few hundred milliseconds. Possibly, low-level neural responses already carry substantial information to facilitate rapid characterization of the visual input. Here, we computationally estimated low-level contrast responses to computer-generated naturalistic images, and tested whether spatial pooling of these responses could predict image similarity at the neural and behavioral level. Using EEG, we show that statistics derived from pooled responses explain a large amount of variance between single-image evoked potentials (ERPs) in individual subjects. Dissimilarity analysis on multi-electrode ERPs demonstrated that large differences between images in pooled response statistics are predictive of more dissimilar patterns of evoked activity, whereas images with little difference in statistics give rise to highly similar evoked activity patterns. In a separate behavioral experiment, images with large differences in statistics were judged as different categories, whereas images with little differences were confused. These findings suggest that statistics derived from low-level contrast responses can be extracted in early visual processing and can be relevant for rapid judgment of visual similarity. We compared our results with two other well-known contrast statistics: Fourier power spectra and higher-order properties of contrast distributions (skewness and kurtosis). Interestingly, whereas these statistics allow for accurate image categorization, they do not predict ERP response patterns or behavioral categorization confusions. These converging computational, neural and behavioral results suggest that statistics of pooled contrast responses contain information that corresponds with perceived visual similarity in a rapid, low-level categorization task. PMID:23093921
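As a rough illustration of the kinds of statistics compared above, the sketch below computes pooled local-contrast statistics, the skewness and kurtosis of the contrast distribution, and total Fourier power for a synthetic image. The gradient-magnitude contrast measure is an assumption for illustration; it does not reproduce the authors' contrast model.

```python
# Illustrative sketch: pooled local-contrast statistics, higher-order contrast
# distribution statistics, and Fourier power for a synthetic "image".
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(5)
image = rng.normal(size=(128, 128))

# Local contrast approximated by gradient magnitude, then spatially pooled
gy, gx = np.gradient(image)
contrast = np.hypot(gx, gy)
pooled_mean, pooled_sd = contrast.mean(), contrast.std()

# Higher-order statistics of the contrast distribution
contrast_skew, contrast_kurt = skew(contrast.ravel()), kurtosis(contrast.ravel())

# Total Fourier power, excluding the DC component
spectrum = np.abs(np.fft.fft2(image)) ** 2
fourier_power = spectrum.sum() - spectrum[0, 0]

print(f"pooled contrast: mean={pooled_mean:.2f}, sd={pooled_sd:.2f}, "
      f"skew={contrast_skew:.2f}, kurtosis={contrast_kurt:.2f}, "
      f"Fourier power={fourier_power:.1f}")
```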
Emerging category representation in the visual forebrain hierarchy of pigeons (Columba livia).
Azizi, Amir Hossein; Pusch, Roland; Koenen, Charlotte; Klatt, Sebastian; Bröcker, Franziska; Thiele, Samuel; Kellermann, Janosch; Güntürkün, Onur; Cheng, Sen
2018-06-06
Recognizing and categorizing visual stimuli are cognitive functions vital for survival, and an important feature of visual systems in primates as well as in birds. Visual stimuli are processed along the ventral visual pathway. At every stage in the hierarchy, neurons respond selectively to more complex features, transforming the population representation of the stimuli. It is therefore easier to read out category information in higher visual areas. While explicit category representations have been observed in the primate brain, less is known about equivalent processes in the avian brain. Even though their brain anatomies are radically different, it has been hypothesized that visual object representations are comparable across mammals and birds. In the present study, we investigated category representations in the pigeon visual forebrain using recordings from single cells responding to photographs of real-world objects. Using a linear classifier, we found that the population activity in the visual associative area mesopallium ventrolaterale (MVL) distinguishes between animate and inanimate objects, although this distinction is not required by the task. By contrast, a population of cells in the entopallium, a region that is lower in the hierarchy of visual areas and that is related to the primate extrastriate cortex, lacked this information. A model that pools responses of simple cells, which function as edge detectors, can account for the animate vs. inanimate categorization in the MVL, but performance in the model is based on different features than in MVL. Therefore, processing in MVL cells is very likely more abstract than simple computations on the output of edge detectors. Copyright © 2018. Published by Elsevier B.V.
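The linear read-out described above can be illustrated with a cross-validated classifier applied to simulated population activity. The data below are random stand-ins, not recordings from MVL or the entopallium.

```python
# Illustrative sketch of a linear-classifier readout: decode animate vs.
# inanimate from simulated single-unit firing rates with cross-validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_stimuli, n_cells = 120, 40
labels = rng.integers(0, 2, n_stimuli)                 # 1 = animate, 0 = inanimate
rates = rng.normal(size=(n_stimuli, n_cells))
rates[labels == 1, : n_cells // 4] += 0.8              # some cells carry category information

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, rates, labels, cv=5)
print(f"decoding accuracy = {scores.mean():.2f} (chance = 0.50)")
```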
Color Categories: Evidence for the Cultural Relativity Hypothesis
ERIC Educational Resources Information Center
Roberson, D.; Davidoff, J.; Davies, I.R.L.; Shapiro, L.R.
2005-01-01
The question of whether language affects our categorization of perceptual continua is of particular interest for the domain of color where constraints on categorization have been proposed both within the visual system and in the visual environment. Recent research (Roberson, Davies, & Davidoff, 2000; Roberson et al., in press) found…
The picture superiority effect in categorization: visual or semantic?
Job, R; Rumiati, R; Lotto, L
1992-09-01
Two experiments are reported whose aim was to replicate and generalize the results presented by Snodgrass and McCullough (1986) on the effect of visual similarity in the categorization process. For pictures, Snodgrass and McCullough's results were replicated because Ss took longer to discriminate elements from 2 categories when they were visually similar than when they were visually dissimilar. However, unlike Snodgrass and McCullough, an analogous increase was also observed for word stimuli. The pattern of results obtained here can be explained most parsimoniously with reference to the effect of semantic similarity, or semantic and visual relatedness, rather than to visual similarity alone.
Out of sight, out of mind: Categorization learning and normal aging.
Schenk, Sabrina; Minda, John P; Lech, Robert K; Suchan, Boris
2016-10-01
The present combined EEG and eye tracking study examined the process of categorization learning at different age ranges and aimed to investigate to which degree categorization learning is mediated by visual attention and perceptual strategies. Seventeen young subjects and ten elderly subjects had to perform a visual categorization task with two abstract categories. Each category consisted of prototypical stimuli and an exception. The categorization of prototypical stimuli was learned very early during the experiment, while the learning of exceptions was delayed. The categorization of exceptions was accompanied by higher P150, P250 and P300 amplitudes. In contrast to younger subjects, elderly subjects had problems in the categorization of exceptions, but showed an intact categorization performance for prototypical stimuli. Moreover, elderly subjects showed higher fixation rates for important stimulus features and higher P150 amplitudes, which were positively correlated with categorization performance. These results indicate that elderly subjects compensate for cognitive decline through enhanced perceptual and attentional processing of individual stimulus features. Additionally, a computational approach has been applied and showed a transition away from purely abstraction-based learning to exemplar-based learning in the middle block for both groups. However, the calculated models provide a better fit for younger subjects than for elderly subjects. The current study demonstrates that human categorization learning is based on early abstraction-based processing followed by an exemplar-memorization stage. This strategy combination facilitates the learning of real world categories with a nuanced category structure. In addition, the present study suggests that categorization learning is affected by normal aging and modulated by perceptual processing and visual attention. Copyright © 2016 Elsevier Ltd. All rights reserved.
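The abstraction-based versus exemplar-based distinction can be sketched with simple prototype and exemplar (GCM-style) models that assign category probabilities from distances in a feature space. The stimuli and the similarity parameter below are invented; these are generic textbook models, not the ones fitted in the study.

```python
# Illustrative sketch: prototype (abstraction-based) vs. exemplar-based category
# models computing P(category A) for a probe item. Data are made up.
import numpy as np

rng = np.random.default_rng(7)
train_a = rng.normal(loc=0.0, size=(10, 4))            # studied category-A exemplars
train_b = rng.normal(loc=1.5, size=(10, 4))            # studied category-B exemplars
probe = rng.normal(loc=0.2, size=4)                    # new item to classify

def prototype_p_a(probe, a, b, c=2.0):
    # Similarity to each category prototype (the mean of its exemplars)
    sim_a = np.exp(-c * np.linalg.norm(probe - a.mean(axis=0)))
    sim_b = np.exp(-c * np.linalg.norm(probe - b.mean(axis=0)))
    return sim_a / (sim_a + sim_b)

def exemplar_p_a(probe, a, b, c=2.0):
    # Summed similarity to every stored exemplar (GCM-style)
    sim_a = np.exp(-c * np.linalg.norm(a - probe, axis=1)).sum()
    sim_b = np.exp(-c * np.linalg.norm(b - probe, axis=1)).sum()
    return sim_a / (sim_a + sim_b)

print(f"P(A): prototype model = {prototype_p_a(probe, train_a, train_b):.2f}, "
      f"exemplar model = {exemplar_p_a(probe, train_a, train_b):.2f}")
```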
Categorically Defined Targets Trigger Spatiotemporal Visual Attention
ERIC Educational Resources Information Center
Wyble, Brad; Bowman, Howard; Potter, Mary C.
2009-01-01
Transient attention to a visually salient cue enhances processing of a subsequent target in the same spatial location between 50 and 150 ms after cue onset (K. Nakayama & M. Mackeben, 1989). Do stimuli from a categorically defined target set, such as letters or digits, also generate transient attention? Participants reported digit targets among…
Dorsal hippocampus is necessary for visual categorization in rats.
Kim, Jangjin; Castro, Leyre; Wasserman, Edward A; Freeman, John H
2018-02-23
The hippocampus may play a role in categorization because of the need to differentiate stimulus categories (pattern separation) and to recognize category membership of stimuli from partial information (pattern completion). We hypothesized that the hippocampus would be more crucial for categorization of low-density (few relevant features) stimuli, due to the higher demand on pattern separation and pattern completion, than for categorization of high-density (many relevant features) stimuli. Using a touchscreen apparatus, rats were trained to categorize multiple abstract stimuli into two different categories. Each stimulus was a pentagonal configuration of five visual features; some of the visual features were relevant for defining the category whereas others were irrelevant. Two groups of rats were trained with either a high (dense, n = 8) or low (sparse, n = 8) number of category-relevant features. Upon reaching criterion discrimination (≥75% correct, on 2 consecutive days), bilateral cannulas were implanted in the dorsal hippocampus. The rats were then given either vehicle or muscimol infusions into the hippocampus just prior to various testing sessions. They were tested with: the previously trained stimuli (trained), novel stimuli involving new irrelevant features (novel), stimuli involving relocated features (relocation), and a single relevant feature (singleton). In training, the dense group reached criterion faster than the sparse group, indicating that the sparse task was more difficult than the dense task. In testing, accuracy of both groups was equally high for trained and novel stimuli. However, both groups showed impaired accuracy in the relocation and singleton conditions, with a greater deficit in the sparse group. The testing data indicate that rats encode both the relevant features and the spatial locations of the features. Hippocampal inactivation impaired visual categorization regardless of the density of the category-relevant features for the trained, novel, relocation, and singleton stimuli. Hippocampus-mediated pattern completion and pattern separation mechanisms may be necessary for visual categorization involving overlapping irrelevant features. © 2018 Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Harel, Assaf; Bentin, Shlomo
2009-01-01
The type of visual information needed for categorizing faces and nonface objects was investigated by manipulating spatial frequency scales available in the image during a category verification task addressing basic and subordinate levels. Spatial filtering had opposite effects on faces and airplanes that were modulated by categorization level. The…
The visual attention span deficit in dyslexia is visual and not verbal.
Lobier, Muriel; Zoubrinetzky, Rachel; Valdois, Sylviane
2012-06-01
The visual attention (VA) span deficit hypothesis of dyslexia posits that letter string deficits are a consequence of impaired visual processing. Alternatively, some have interpreted this deficit as resulting from a visual-to-phonology code mapping impairment. This study aims to disambiguate between the two interpretations by investigating performance in a non-verbal character string visual categorization task with verbal and non-verbal stimuli. Results show that VA span ability predicts performance for the non-verbal visual processing task in normal reading children. Furthermore, VA span impaired dyslexic children are also impaired for the categorization task independently of stimuli type. This supports the hypothesis that the underlying impairment responsible for the VA span deficit is visual, not verbal. Copyright © 2011 Elsevier Srl. All rights reserved.
Tsai, Wen-Ting; Hassan, Ahmed; Sarkar, Purbasha; Correa, Joaquin; Metlagel, Zoltan; Jorgens, Danielle M.; Auer, Manfred
2014-01-01
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin-embedded stained electron tomography, focused ion beam- and serial block face-scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets. PMID:25145678
On the adequacy of current empirical evaluations of formal models of categorization.
Wills, Andy J; Pothos, Emmanuel M
2012-01-01
Categorization is one of the fundamental building blocks of cognition, and the study of categorization is notable for the extent to which formal modeling has been a central and influential component of research. However, the field has seen a proliferation of noncomplementary models with little consensus on the relative adequacy of these accounts. Progress in assessing the relative adequacy of formal categorization models has, to date, been limited because (a) formal model comparisons are narrow in the number of models and phenomena considered and (b) models do not often clearly define their explanatory scope. Progress is further hampered by the practice of fitting models with arbitrarily variable parameters to each data set independently. Reviewing examples of good practice in the literature, we conclude that model comparisons are most fruitful when relative adequacy is assessed by comparing well-defined models on the basis of the number and proportion of irreversible, ordinal, penetrable successes (principles of minimal flexibility, breadth, good-enough precision, maximal simplicity, and psychological focus).
Visual Categorization of Natural Movies by Rats
Vinken, Kasper; Vermaercke, Ben
2014-01-01
Visual categorization of complex, natural stimuli has been studied for some time in human and nonhuman primates. Recent interest in the rodent as a model for visual perception, including higher-level functional specialization, leads to the question of how rodents would perform on a categorization task using natural stimuli. To answer this question, rats were trained in a two-alternative forced choice task to discriminate movies containing rats from movies containing other objects and from scrambled movies (ordinate-level categorization). Subsequently, transfer to novel, previously unseen stimuli was tested, followed by a series of control probes. The results show that the animals are capable of acquiring a decision rule by abstracting common features from natural movies to generalize categorization to new stimuli. Control probes demonstrate that they did not use single low-level features, such as motion energy or (local) luminance. Significant generalization was even present with stationary snapshots from untrained movies. The variability within and between training and test stimuli, the complexity of natural movies, and the control experiments and analyses all suggest that a more high-level rule based on more complex stimulus features than local luminance-based cues was used to classify the novel stimuli. In conclusion, natural stimuli can be used to probe ordinate-level categorization in rats. PMID:25100598
Roldan, Stephanie M
2017-01-01
One of the fundamental goals of object recognition research is to understand how a cognitive representation produced from the output of filtered and transformed sensory information facilitates efficient viewer behavior. Given that mental imagery strongly resembles perceptual processes in both cortical regions and subjective visual qualities, it is reasonable to question whether mental imagery facilitates cognition in a manner similar to that of perceptual viewing: via the detection and recognition of distinguishing features. Categorizing the feature content of mental imagery holds potential as a reverse pathway by which to identify the components of a visual stimulus which are most critical for the creation and retrieval of a visual representation. This review will examine the likelihood that the information represented in visual mental imagery reflects distinctive object features thought to facilitate efficient object categorization and recognition during perceptual viewing. If it is the case that these representational features resemble their sensory counterparts in both spatial and semantic qualities, they may well be accessible through mental imagery as evaluated through current investigative techniques. In this review, methods applied to mental imagery research and their findings are reviewed and evaluated for their efficiency in accessing internal representations, and implications for identifying diagnostic features are discussed. An argument is made for the benefits of combining mental imagery assessment methods with diagnostic feature research to advance the understanding of visual perceptive processes, with suggestions for avenues of future investigation.
Roldan, Stephanie M.
2017-01-01
One of the fundamental goals of object recognition research is to understand how a cognitive representation produced from the output of filtered and transformed sensory information facilitates efficient viewer behavior. Given that mental imagery strongly resembles perceptual processes in both cortical regions and subjective visual qualities, it is reasonable to question whether mental imagery facilitates cognition in a manner similar to that of perceptual viewing: via the detection and recognition of distinguishing features. Categorizing the feature content of mental imagery holds potential as a reverse pathway by which to identify the components of a visual stimulus which are most critical for the creation and retrieval of a visual representation. This review will examine the likelihood that the information represented in visual mental imagery reflects distinctive object features thought to facilitate efficient object categorization and recognition during perceptual viewing. If it is the case that these representational features resemble their sensory counterparts in both spatial and semantic qualities, they may well be accessible through mental imagery as evaluated through current investigative techniques. In this review, methods applied to mental imagery research and their findings are reviewed and evaluated for their efficiency in accessing internal representations, and implications for identifying diagnostic features are discussed. An argument is made for the benefits of combining mental imagery assessment methods with diagnostic feature research to advance the understanding of visual perceptive processes, with suggestions for avenues of future investigation. PMID:28588538
Saneyoshi, Ayako; Michimata, Chikashi
2009-12-01
Participants performed two object-matching tasks for novel, non-nameable objects consisting of geons. For each original stimulus, two transformations were applied to create comparison stimuli. In the categorical transformation, a geon connected to geon A was moved to geon B. In the coordinate transformation, a geon connected to geon A was moved to a different position on geon A. The Categorical task consisted of the original and the categorically transformed objects. The Coordinate task consisted of the original and the coordinately transformed objects. The original object was presented to the central visual field, followed by a comparison object presented to the right or left visual half-fields (RVF and LVF). The results showed an RVF advantage for the Categorical task and an LVF advantage for the Coordinate task. The possibility that categorical and coordinate spatial processing subsystems would be basic computational elements for between- and within-category object recognition was discussed.
V K, Shinoj; Hong, Xun Jie Jeesmond; V M, Murukeshan; M, Baskaran; Tin, Aung
2016-12-01
The visualization capabilities of various ocular imaging instruments can generally be categorized into photographic (e.g. gonioscopy, Pentacam, RetCam) and optical tomographic (e.g. optical coherence tomography (OCT), photoacoustic (PA) imaging, ultrasound biomicroscopy (UBM)) methods. These imaging instruments allow vision researchers and clinicians to visualize the iridocorneal angle, and are essential in the diagnosis and management of glaucoma. Each of these imaging modalities has particular benefits and associated drawbacks in obtaining repeatable and reliable measurement in the evaluation of the angle. In this context, this review article summarizes recent progress in anterior chamber imaging techniques for glaucoma diagnosis and follow-up procedures. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
Xu, Yang; D'Lauro, Christopher; Pyles, John A.; Kass, Robert E.; Tarr, Michael J.
2013-01-01
Humans are remarkably proficient at categorizing visually similar objects. To better understand the cortical basis of this categorization process, we used magnetoencephalography (MEG) to record neural activity while participants learned, with feedback, to discriminate two highly similar, novel visual categories. We hypothesized that although prefrontal regions would mediate early category learning, this role would diminish with increasing category familiarity and that regions within the ventral visual pathway would come to play a more prominent role in encoding category-relevant information as learning progressed. Early in learning we observed some degree of categorical discriminability and predictability in both prefrontal cortex and the ventral visual pathway. Predictability improved significantly above chance in the ventral visual pathway over the course of learning with the left inferior temporal and fusiform gyri showing the greatest improvement in predictability between 150 and 250 ms (M200) during category learning. In contrast, there was no comparable increase in discriminability in prefrontal cortex with the only significant post-learning effect being a decrease in predictability in the inferior frontal gyrus between 250 and 350 ms (M300). Thus, the ventral visual pathway appears to encode learned visual categories over the long term. At the same time these results add to our understanding of the cortical origins of previously reported signature temporal components associated with perceptual learning. PMID:24146656
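Time-resolved decoding of the sort described above can be sketched with a classifier cross-validated within successive time windows. The sketch below uses synthetic sensor data rather than MEG recordings, and the window sizes, sampling rate, and signal timing are arbitrary assumptions.

```python
# Illustrative sketch of time-resolved category decoding: train and
# cross-validate a classifier within successive time windows of simulated
# multichannel data, asking when category information becomes decodable.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
n_trials, n_sensors, n_times = 160, 30, 60             # 60 samples = 0-600 ms at 100 Hz
labels = rng.integers(0, 2, n_trials)
data = rng.normal(size=(n_trials, n_sensors, n_times))
data[labels == 1, :5, 20:35] += 0.5                    # signal appears around 200-350 ms

for start in range(0, n_times, 10):                    # 100-ms windows
    window = data[:, :, start:start + 10].reshape(n_trials, -1)
    acc = cross_val_score(LogisticRegression(max_iter=1000), window, labels, cv=5).mean()
    print(f"{start * 10:3d}-{(start + 10) * 10:3d} ms: accuracy = {acc:.2f}")
```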
Visual context modulates potentiation of grasp types during semantic object categorization.
Kalénine, Solène; Shapiro, Allison D; Flumini, Andrea; Borghi, Anna M; Buxbaum, Laurel J
2014-06-01
Substantial evidence suggests that conceptual processing of manipulable objects is associated with potentiation of action. Such data have been viewed as evidence that objects are recognized via access to action features. Many objects, however, are associated with multiple actions. For example, a kitchen timer may be clenched with a power grip to move it but pinched with a precision grip to use it. The present study tested the hypothesis that action evocation during conceptual object processing is responsive to the visual scene in which objects are presented. Twenty-five healthy adults were asked to categorize object pictures presented in different naturalistic visual contexts that evoke either move- or use-related actions. Categorization judgments (natural vs. artifact) were performed by executing a move- or use-related action (clench vs. pinch) on a response device, and response times were assessed as a function of contextual congruence. Although the actions performed were irrelevant to the categorization judgment, responses were significantly faster when actions were compatible with the visual context. This compatibility effect was largely driven by faster pinch responses when objects were presented in use-compatible, as compared with move-compatible, contexts. The present study is the first to highlight the influence of visual scene on stimulus-response compatibility effects during semantic object processing. These data support the hypothesis that action evocation during conceptual object processing is biased toward context-relevant actions.
Visual context modulates potentiation of grasp types during semantic object categorization
Kalénine, Solène; Shapiro, Allison D.; Flumini, Andrea; Borghi, Anna M.; Buxbaum, Laurel J.
2013-01-01
Substantial evidence suggests that conceptual processing of manipulable objects is associated with potentiation of action. Such data have been viewed as evidence that objects are recognized via access to action features. Many objects, however, are associated with multiple actions. For example, a kitchen timer may be clenched with a power grip to move it, but pinched with a precision grip to use it. The present study tested the hypothesis that action evocation during conceptual object processing is responsive to the visual scene in which objects are presented. Twenty-five healthy adults were asked to categorize object pictures presented in different naturalistic visual contexts that evoke either move- or use-related actions. Categorization judgments (natural vs. artifact) were performed by executing a move- or use-related action (clench vs. pinch) on a response device, and response times were assessed as a function of contextual congruence. Although the actions performed were irrelevant to the categorization judgment, responses were significantly faster when actions were compatible with the visual context. This compatibility effect was largely driven by faster pinch responses when objects were presented in use- compared to move-compatible contexts. The present study is the first to highlight the influence of visual scene on stimulus-response compatibility effects during semantic object processing. These data support the hypothesis that action evocation during conceptual object processing is biased toward context-relevant actions. PMID:24186270
Visual categorization of natural movies by rats.
Vinken, Kasper; Vermaercke, Ben; Op de Beeck, Hans P
2014-08-06
Visual categorization of complex, natural stimuli has been studied for some time in human and nonhuman primates. Recent interest in the rodent as a model for visual perception, including higher-level functional specialization, leads to the question of how rodents would perform on a categorization task using natural stimuli. To answer this question, rats were trained in a two-alternative forced choice task to discriminate movies containing rats from movies containing other objects and from scrambled movies (ordinate-level categorization). Subsequently, transfer to novel, previously unseen stimuli was tested, followed by a series of control probes. The results show that the animals are capable of acquiring a decision rule by abstracting common features from natural movies to generalize categorization to new stimuli. Control probes demonstrate that they did not use single low-level features, such as motion energy or (local) luminance. Significant generalization was even present with stationary snapshots from untrained movies. The variability within and between training and test stimuli, the complexity of natural movies, and the control experiments and analyses all suggest that a more high-level rule based on more complex stimulus features than local luminance-based cues was used to classify the novel stimuli. In conclusion, natural stimuli can be used to probe ordinate-level categorization in rats. Copyright © 2014 the authors 0270-6474/14/3410645-14$15.00/0.
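The low-level controls mentioned in the abstract above, motion energy and (local) luminance, can be approximated from raw frames with very simple statistics. The sketch below is not the authors' pipeline; the array shapes, synthetic movie, and helper name are assumptions made purely to illustrate the kind of features the control probes ruled out.

```python
import numpy as np

def low_level_features(movie):
    """Compute simple low-level descriptors of a grayscale movie.

    movie: array of shape (n_frames, height, width), values in [0, 1].
    Returns the mean luminance and a crude motion-energy estimate
    (mean squared frame-to-frame difference).
    """
    movie = np.asarray(movie, dtype=float)
    mean_luminance = movie.mean()
    frame_diffs = np.diff(movie, axis=0)        # temporal derivative between frames
    motion_energy = np.mean(frame_diffs ** 2)   # energy of change over time
    return mean_luminance, motion_energy

# Example with a synthetic movie: 30 frames of 64x64 noise
rng = np.random.default_rng(0)
movie = rng.random((30, 64, 64))
lum, energy = low_level_features(movie)
print(f"mean luminance: {lum:.3f}, motion energy: {energy:.3f}")
```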
Games people play: How video games improve probabilistic learning.
Schenk, Sabrina; Lech, Robert K; Suchan, Boris
2017-09-29
Recent research suggests that video game playing is associated with many cognitive benefits. However, little is known about the neural mechanisms mediating such effects, especially with regard to probabilistic categorization learning, which is a widely unexplored area in gaming research. Therefore, the present study aimed to investigate the neural correlates of probabilistic classification learning in video gamers in comparison to non-gamers. Subjects were scanned in a 3T magnetic resonance imaging (MRI) scanner while performing a modified version of the weather prediction task. Behavioral data yielded evidence for better categorization performance in video gamers, particularly under conditions characterized by stronger uncertainty. Furthermore, a post-experimental questionnaire showed that video gamers had acquired higher declarative knowledge about the card combinations and the related weather outcomes. Functional imaging data revealed stronger activation clusters for video gamers in the hippocampus, the precuneus, the cingulate gyrus and the middle temporal gyrus, as well as in occipital visual areas and in areas related to attentional processes. All these areas are connected with each other and represent critical nodes for semantic memory, visual imagery and cognitive control. Apart from this, and in line with previous studies, both groups showed activation in brain areas related to attention and executive functions, as well as in the basal ganglia and in memory-associated regions of the medial temporal lobe. These results suggest that playing video games might enhance the use of declarative knowledge and hippocampal involvement, thereby enhancing overall learning performance during probabilistic learning. In contrast to non-gamers, video gamers showed better categorization performance, independently of the uncertainty of the condition. Copyright © 2017 Elsevier B.V. All rights reserved.
Bag-of-visual-ngrams for histopathology image classification
NASA Astrophysics Data System (ADS)
López-Monroy, A. Pastor; Montes-y-Gómez, Manuel; Escalante, Hugo Jair; Cruz-Roa, Angel; González, Fabio A.
2013-11-01
This paper describes an extension of the Bag-of-Visual-Words (BoVW) representation for image categorization (IC) of histopathology images. This representation is one of the most widely used approaches in several high-level computer vision tasks. However, the BoVW representation has an important limitation: it disregards spatial information among visual words. This information may be useful for capturing discriminative visual patterns in specific computer vision tasks. To overcome this problem, we propose the use of visual n-grams. N-gram-based representations are very popular in the field of natural language processing (NLP), in particular within text mining and information retrieval. We propose building a codebook of n-grams and then representing images by histograms of visual n-grams. We evaluate our proposal in the challenging task of classifying histopathology images. The novelty of our proposal lies in the fact that we use n-grams as attributes for a classification model (together with visual words, i.e., 1-grams). This is common practice within NLP, although, to the best of our knowledge, this idea has not yet been explored within computer vision. We report experimental results on a database of histopathology images where our proposed method outperforms the traditional BoVW formulation.
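To make the representation concrete, the following sketch (with hypothetical helper names, patch size, and codebook size; it is not the authors' implementation) builds a k-means codebook of visual words and then represents an image by a histogram of visual words plus horizontal visual bigrams, i.e., 2-grams of spatially adjacent words.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_patch_descriptors(image, patch=8):
    """Flatten non-overlapping patches (raster order) into descriptors."""
    h, w = image.shape
    rows, cols = h // patch, w // patch
    desc = [image[r*patch:(r+1)*patch, c*patch:(c+1)*patch].ravel()
            for r in range(rows) for c in range(cols)]
    return np.array(desc), rows, cols

def bovw_ngram_histogram(image, codebook, n_words, patch=8):
    """Histogram of visual words (1-grams) plus horizontal visual bigrams."""
    desc, rows, cols = extract_patch_descriptors(image, patch)
    words = codebook.predict(desc).reshape(rows, cols)
    hist_1 = np.bincount(words.ravel(), minlength=n_words)
    # bigrams: pairs of horizontally adjacent visual words, encoded as a single index
    bigrams = words[:, :-1] * n_words + words[:, 1:]
    hist_2 = np.bincount(bigrams.ravel(), minlength=n_words * n_words)
    return np.concatenate([hist_1, hist_2]).astype(float)

# Toy usage: learn a codebook from random "training" images, then represent one image
rng = np.random.default_rng(0)
train_images = rng.random((20, 64, 64))
all_desc = np.vstack([extract_patch_descriptors(im)[0] for im in train_images])
n_words = 16
codebook = KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(all_desc)
feature_vector = bovw_ngram_histogram(train_images[0], codebook, n_words)
print(feature_vector.shape)  # (16 + 256,): 1-gram histogram followed by bigram histogram
```

The resulting feature vector can then be fed to any standard classifier, which is the role the n-gram attributes play in the paper's classification model.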
Semantic Categorization Precedes Affective Evaluation of Visual Scenes
ERIC Educational Resources Information Center
Nummenmaa, Lauri; Hyona, Jukka; Calvo, Manuel G.
2010-01-01
We compared the primacy of affective versus semantic categorization by using forced-choice saccadic and manual response tasks. Participants viewed paired emotional and neutral scenes involving humans or animals flashed rapidly in extrafoveal vision. Participants were instructed to categorize the targets by saccading toward the location occupied by…
Retter, Talia L; Jiang, Fang; Webster, Michael A; Rossion, Bruno
2018-04-01
Fast periodic visual stimulation combined with electroencephalography (FPVS-EEG) has unique sensitivity and objectivity in measuring rapid visual categorization processes. It constrains image processing time by presenting stimuli rapidly through brief stimulus presentation durations and short inter-stimulus intervals. However, the selective impact of these temporal parameters on visual categorization is largely unknown. Here, we presented natural images of objects at a rate of 10 or 20 per second (10 or 20 Hz), with faces appearing once per second (1 Hz), leading to two distinct frequency-tagged EEG responses. Twelve observers were tested with three squarewave image presentation conditions: 1) with an ISI, a traditional 50% duty cycle at 10 Hz (50-ms stimulus duration separated by a 50-ms ISI); 2) removing the ISI and matching the rate, a 100% duty cycle at 10 Hz (100-ms duration with 0-ms ISI); 3) removing the ISI and matching the stimulus presentation duration, a 100% duty cycle at 20 Hz (50-ms duration with 0-ms ISI). The face categorization response was significantly decreased in the 20 Hz 100% condition. The conditions at 10 Hz showed similar face-categorization responses, peaking maximally over the right occipito-temporal (ROT) cortex. However, the onset of the 10 Hz 100% response was delayed by about 20 ms over the ROT region relative to the 10 Hz 50% condition, likely due to immediate forward-masking by preceding images. Taken together, these results help to interpret how the FPVS-EEG paradigm sets temporal constraints on visual image categorization. Copyright © 2018 Elsevier Ltd. All rights reserved.
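The three conditions differ only in how each stimulation cycle is split between stimulus and inter-stimulus interval (duty cycle = stimulus duration / cycle period). The short sketch below just makes that arithmetic explicit, using the values quoted in the abstract; the helper function is purely illustrative.

```python
# Duty-cycle arithmetic for the three squarewave presentation conditions described above.
def cycle_parameters(rate_hz, duty_cycle):
    period_ms = 1000.0 / rate_hz
    stim_ms = period_ms * duty_cycle
    isi_ms = period_ms - stim_ms
    return stim_ms, isi_ms

conditions = {
    "10 Hz, 50% duty cycle": (10.0, 0.5),   # 50 ms stimulus + 50 ms ISI
    "10 Hz, 100% duty cycle": (10.0, 1.0),  # 100 ms stimulus, no ISI
    "20 Hz, 100% duty cycle": (20.0, 1.0),  # 50 ms stimulus, no ISI
}
for name, (rate, duty) in conditions.items():
    stim, isi = cycle_parameters(rate, duty)
    print(f"{name}: {stim:.0f} ms stimulus, {isi:.0f} ms ISI")
```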
The dynamics of categorization: Unraveling rapid categorization.
Mack, Michael L; Palmeri, Thomas J
2015-06-01
We explore a puzzle of visual object categorization: Under normal viewing conditions, you spot something as a dog fastest, but at a glance, you spot it faster as an animal. During speeded category verification, a classic basic-level advantage is commonly observed (Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976), with categorization as a dog faster than as an animal (superordinate) or Golden Retriever (subordinate). A different story emerges during ultra-rapid categorization with limited exposure duration (<30 ms), with superordinate categorization faster than basic or subordinate categorization (Thorpe, Fize, & Marlot, 1996). These two widely cited findings paint contrary theoretical pictures about the time course of categorization, yet no previous study has investigated them together. We systematically examined two experimental factors that could explain the qualitative difference in categorization across the two paradigms: exposure duration and category trial context. Mapping out the time course of object categorization by manipulating exposure duration and the timing of a post-stimulus mask revealed that brief exposure durations favor superordinate-level categorization, but with more time a basic-level advantage emerges. However, these advantages were modulated by category trial context. With randomized target categories, the superordinate advantage was eliminated; and with only four repetitions of superordinate categorization within an otherwise randomized context, the basic-level advantage was eliminated. Contrary to theoretical accounts that dictate a fixed priority for certain levels of abstraction in visual processing and access to semantic knowledge, the dynamics of object categorization are flexible, depending jointly on the level of abstraction, time for perceptual encoding, and category context. (c) 2015 APA, all rights reserved.
THE DYNAMICS OF CATEGORIZATION: UNRAVELING RAPID CATEGORIZATION
Mack, Michael L.; Palmeri, Thomas J.
2015-01-01
We explore a puzzle of visual object categorization: Under normal viewing conditions, you spot something as a dog fastest, but at a glance, you spot it faster as an animal. During speeded category verification, a classic basic-level advantage is commonly observed (Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976), with categorization as a dog faster than as an animal (superordinate) or Golden Retriever (subordinate). A different story emerges during ultra-rapid categorization with limited exposure duration (<30ms), with superordinate categorization faster than basic or subordinate categorization (Thorpe, Fize, & Marlot, 1996). These two widely cited findings paint contrary theoretical pictures about the time course of object categorization, yet no study has previously investigated them together. Over five experiments, we systematically examined two experimental factors that could explain the qualitative difference in categorization across the two paradigms: exposure duration and category trial context. Mapping out the time course of object categorization by manipulating exposure duration and the timing of a post-stimulus mask revealed that brief exposure durations favor superordinate-level categorization, but with more time a basic-level advantage emerges. But this superordinate advantage was modulated significantly by target category trial context. With randomized target categories, the superordinate advantage was eliminated; and with “blocks” of only four repetitions of superordinate categorization within an otherwise randomized context, the advantage for the basic-level was eliminated. Contrary to some theoretical accounts that dictate a fixed priority for certain levels of abstraction in visual processing and access to semantic knowledge, the dynamics of object categorization are flexible, depending jointly on the level of abstraction, time for perceptual encoding, and category context. PMID:25938178
Stienen, Bernard M C; Schindler, Konrad; de Gelder, Beatrice
2012-07-01
Given the presence of massive feedback loops in brain networks, it is difficult to disentangle the contribution of feedforward and feedback processing to the recognition of visual stimuli, in this case, of emotional body expressions. The aim of the work presented in this letter is to shed light on how well feedforward processing explains rapid categorization of this important class of stimuli. By means of parametric masking, it may be possible to control the contribution of feedback activity in human participants. A close comparison is presented between human recognition performance and the performance of a computational neural model that exclusively modeled feedforward processing and was engineered to fulfill the computational requirements of recognition. Results show that the longer the stimulus onset asynchrony (SOA), the closer the performance of the human participants was to the values predicted by the model, with an optimum at an SOA of 100 ms. At short SOA latencies, human performance deteriorated, but the categorization of the emotional expressions was still above baseline. The data suggest that, although theoretically, feedback arising from inferotemporal cortex is likely to be blocked when the SOA is 100 ms, human participants still seem to rely on more local visual feedback processing to equal the model's performance.
Jozwik, Kamila M.; Kriegeskorte, Nikolaus; Mur, Marieke
2016-01-01
Object similarity, in brain representations and conscious perception, must reflect a combination of the visual appearance of the objects on the one hand and the categories the objects belong to on the other. Indeed, visual object features and category membership have each been shown to contribute to the object representation in human inferior temporal (IT) cortex, as well as to object-similarity judgments. However, the explanatory power of features and categories has not been directly compared. Here, we investigate whether the IT object representation and similarity judgments are best explained by a categorical or a feature-based model. We use rich models (>100 dimensions) generated by human observers for a set of 96 real-world object images. The categorical model consists of a hierarchically nested set of category labels (such as “human”, “mammal”, and “animal”). The feature-based model includes both object parts (such as “eye”, “tail”, and “handle”) and other descriptive features (such as “circular”, “green”, and “stubbly”). We used non-negative least squares to fit the models to the brain representations (estimated from functional magnetic resonance imaging data) and to similarity judgments. Model performance was estimated on held-out images not used in fitting. Both models explained significant variance in IT and the amounts explained were not significantly different. The combined model did not explain significant additional IT variance, suggesting that it is the shared model variance (features correlated with categories, categories correlated with features) that best explains IT. The similarity judgments were almost fully explained by the categorical model, which explained significantly more variance than the feature-based model. The combined model did not explain significant additional variance in the similarity judgments. Our findings suggest that IT uses features that help to distinguish categories as stepping stones toward a semantic representation. Similarity judgments contain additional categorical variance that is not explained by visual features, reflecting a higher-level more purely semantic representation. PMID:26493748
Jozwik, Kamila M; Kriegeskorte, Nikolaus; Mur, Marieke
2016-03-01
Object similarity, in brain representations and conscious perception, must reflect a combination of the visual appearance of the objects on the one hand and the categories the objects belong to on the other. Indeed, visual object features and category membership have each been shown to contribute to the object representation in human inferior temporal (IT) cortex, as well as to object-similarity judgments. However, the explanatory power of features and categories has not been directly compared. Here, we investigate whether the IT object representation and similarity judgments are best explained by a categorical or a feature-based model. We use rich models (>100 dimensions) generated by human observers for a set of 96 real-world object images. The categorical model consists of a hierarchically nested set of category labels (such as "human", "mammal", and "animal"). The feature-based model includes both object parts (such as "eye", "tail", and "handle") and other descriptive features (such as "circular", "green", and "stubbly"). We used non-negative least squares to fit the models to the brain representations (estimated from functional magnetic resonance imaging data) and to similarity judgments. Model performance was estimated on held-out images not used in fitting. Both models explained significant variance in IT and the amounts explained were not significantly different. The combined model did not explain significant additional IT variance, suggesting that it is the shared model variance (features correlated with categories, categories correlated with features) that best explains IT. The similarity judgments were almost fully explained by the categorical model, which explained significantly more variance than the feature-based model. The combined model did not explain significant additional variance in the similarity judgments. Our findings suggest that IT uses features that help to distinguish categories as stepping stones toward a semantic representation. Similarity judgments contain additional categorical variance that is not explained by visual features, reflecting a higher-level more purely semantic representation. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
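The central fitting step described in both records above, predicting measured dissimilarities from a non-negative combination of model dimensions, can be sketched with SciPy's non-negative least squares. The synthetic data, split sizes, and evaluation metric below are simplified assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np
from scipy.optimize import nnls

# Toy setup: dissimilarities between pairs of images (e.g., estimated from fMRI
# response patterns), predicted by a non-negative combination of model dimensions
# (category labels or visual features). All data here are synthetic.
rng = np.random.default_rng(1)
n_pairs = 500          # number of image pairs
n_dims = 20            # number of model dimensions (labels/features)

X = rng.random((n_pairs, n_dims))                  # per-dimension predicted dissimilarities
true_w = np.abs(rng.normal(size=n_dims))           # non-negative ground-truth weights
y = X @ true_w + 0.1 * rng.normal(size=n_pairs)    # "measured" dissimilarities

# Fit non-negative weights on a training split, evaluate on held-out pairs
train, test = np.arange(0, 400), np.arange(400, 500)
weights, _ = nnls(X[train], y[train])
pred = X[test] @ weights
r = np.corrcoef(pred, y[test])[0, 1]
print(f"held-out correlation between predicted and measured dissimilarities: {r:.2f}")
```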
Pragmatic precision oncology: the secondary uses of clinical tumor molecular profiling
Thota, Ramya; Staggs, David B; Johnson, Douglas B; Warner, Jeremy L
2016-01-01
Background: Precision oncology increasingly utilizes molecular profiling of tumors to determine treatment decisions with targeted therapeutics. The molecular profiling data is valuable in the treatment of individual patients as well as for multiple secondary uses. Objective: To automatically parse, categorize, and aggregate clinical molecular profile data generated during cancer care, as well as use this data to address multiple secondary use cases. Methods: A system to parse, categorize and aggregate molecular profile data was created. A naïve Bayesian classifier categorized results according to clinical groups. The accuracy of these systems was validated against a published expertly curated subset of molecular profiling data. Results: Following one year of operation, 819 samples have been accurately parsed and categorized to generate a data repository of 10,620 genetic variants. The database has been used for operational, clinical trial, and discovery science research. Conclusions: A real-time database of molecular profiling data is a pragmatic solution to several knowledge management problems in the practice and science of precision oncology. PMID:27026612
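As a rough illustration of the categorization step, a naive Bayes text classifier over tokenized result strings could look like the sketch below. The example reports, labels, and scikit-learn pipeline are assumptions made for illustration; the paper's parser and category scheme are more elaborate.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented training set: free-text variant results mapped to clinical categories.
reports = [
    "BRAF V600E mutation detected",
    "EGFR exon 19 deletion detected",
    "no clinically significant variants detected",
    "KRAS G12D mutation detected",
    "variant of uncertain significance in TP53",
]
labels = ["actionable", "actionable", "negative", "actionable", "uncertain"]

classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(reports, labels)

print(classifier.predict(["EGFR L858R mutation detected"]))   # likely 'actionable'
print(classifier.predict(["no variants detected"]))           # likely 'negative'
```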
Monroe, Scott; Cai, Li
2015-01-01
This research is concerned with two topics in assessing model fit for categorical data analysis. The first topic involves the application of a limited-information overall test, introduced in the item response theory literature, to structural equation modeling (SEM) of categorical outcome variables. Most popular SEM test statistics assess how well the model reproduces estimated polychoric correlations. In contrast, limited-information test statistics assess how well the underlying categorical data are reproduced. Here, the recently introduced C2 statistic of Cai and Monroe (2014) is applied. The second topic concerns how the root mean square error of approximation (RMSEA) fit index can be affected by the number of categories in the outcome variable. This relationship creates challenges for interpreting RMSEA. While the two topics initially appear unrelated, they may conveniently be studied in tandem since RMSEA is based on an overall test statistic, such as C2. The results are illustrated with an empirical application to data from a large-scale educational survey.
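For reference, the sample RMSEA can be computed from any overall test statistic T with df degrees of freedom and sample size N via the standard formula RMSEA = sqrt(max((T - df) / (df (N - 1)), 0)); the point made above is that T may be a limited-information statistic such as C2 rather than a full-information statistic. A minimal sketch with invented example values:

```python
import math

def rmsea(T, df, N):
    """Sample RMSEA from an overall fit statistic T with df degrees of freedom
    and sample size N (standard formula; T could be, e.g., the C2 statistic)."""
    return math.sqrt(max((T - df) / (df * (N - 1)), 0.0))

# Example: T = 132.4 with df = 90 and N = 1000 gives RMSEA of about 0.022
print(round(rmsea(132.4, 90, 1000), 3))
```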
Peykarjou, Stefanie; Hoehl, Stefanie; Pauen, Sabina; Rossion, Bruno
2017-10-02
This study investigates categorization of human and ape faces in 9-month-olds using a Fast Periodic Visual Stimulation (FPVS) paradigm while measuring EEG. Categorization responses are elicited only if infants discriminate between different categories and generalize across exemplars within each category. In study 1, human or ape faces were presented as standard and deviant stimuli in upright and inverted trials. Upright ape faces presented among humans elicited strong categorization responses, whereas responses for upright human faces and for inverted ape faces were smaller. Deviant inverted human faces did not elicit categorization. Data were best explained by a model with main effects of species and orientation. However, variance of low-level image characteristics was higher for the ape than the human category. Variance was matched to replicate this finding in an independent sample (study 2). Both human and ape faces elicited categorization in upright and inverted conditions, but upright ape faces elicited the strongest responses. Again, data were best explained by a model of two main effects. These experiments demonstrate that 9-month-olds rapidly categorize faces, and unfamiliar faces presented among human faces elicit increased categorization responses. This likely reflects habituation for the familiar standard category, and stronger release for the unfamiliar category deviants.
Lança, Carla
2013-09-01
Screening programs to detect visual abnormalities in children vary among countries. The aim of this study is to describe experts' perceptions of best practice guidelines and a competency framework for visual screening in children. A qualitative focus group technique was applied during the Portuguese national orthoptic congress to obtain the perception of an expert panel of 5 orthoptists and 2 ophthalmologists with experience in visual screening for children (mean age 53.43 years, SD ± 9.40). The panel received in advance a script describing three tuning competency dimensions (instrumental, systemic, and interpersonal) for visual screening. The session was recorded on video and audio. Qualitative data were analyzed using a categorical technique. According to experts' views, six tests (35.29%) have to be included in a visual screening: distance visual acuity test, cover test, bi-prism or 4/6(Δ) prism, fusion, ocular movements, and refraction. Screening should be performed according to the child's age, before and after 3 years of age (17.65%). The expert panel highlighted the influence of professional experience on the application of a screening protocol (23.53%). They also showed concern about the control of false negatives (23.53%). Instrumental competencies were the most cited (54.09%), followed by interpersonal (29.51%) and systemic (16.4%). Orthoptists should have professional experience before starting to apply a screening protocol. False negative results are a concern that has to be more thoroughly investigated. The proposed framework focuses on core competencies highlighted by the expert panel. Competency programs could be important to develop better screening programs.
Zoubrinetzky, Rachel; Collet, Gregory; Serniclaes, Willy; Nguyen-Morel, Marie-Ange; Valdois, Sylviane
2016-01-01
We tested the hypothesis that the categorical perception deficit of speech sounds in developmental dyslexia is related to phoneme awareness skills, whereas a visual attention (VA) span deficit constitutes an independent deficit. Phoneme awareness tasks, VA span tasks and categorical perception tasks of phoneme identification and discrimination using a d/t voicing continuum were administered to 63 dyslexic children and 63 control children matched on chronological age. Results showed significant differences in categorical perception between the dyslexic and control children. Significant correlations were found between categorical perception skills, phoneme awareness and reading. Although VA span correlated with reading, no significant correlations were found between either categorical perception or phoneme awareness and VA span. Mediation analyses performed on the whole dyslexic sample suggested that the effect of categorical perception on reading might be mediated by phoneme awareness. This relationship was independent of the participants' VA span abilities. Two groups of dyslexic children with a single phoneme awareness or a single VA span deficit were then identified. The phonologically impaired group showed lower categorical perception skills than the control group but categorical perception was similar in the VA span impaired dyslexic and control children. The overall findings suggest that the link between categorical perception, phoneme awareness and reading is independent from VA span skills. These findings provide new insights on the heterogeneity of developmental dyslexia. They suggest that phonological processes and VA span independently affect reading acquisition.
Zoubrinetzky, Rachel; Collet, Gregory; Serniclaes, Willy; Nguyen-Morel, Marie-Ange; Valdois, Sylviane
2016-01-01
We tested the hypothesis that the categorical perception deficit of speech sounds in developmental dyslexia is related to phoneme awareness skills, whereas a visual attention (VA) span deficit constitutes an independent deficit. Phoneme awareness tasks, VA span tasks and categorical perception tasks of phoneme identification and discrimination using a d/t voicing continuum were administered to 63 dyslexic children and 63 control children matched on chronological age. Results showed significant differences in categorical perception between the dyslexic and control children. Significant correlations were found between categorical perception skills, phoneme awareness and reading. Although VA span correlated with reading, no significant correlations were found between either categorical perception or phoneme awareness and VA span. Mediation analyses performed on the whole dyslexic sample suggested that the effect of categorical perception on reading might be mediated by phoneme awareness. This relationship was independent of the participants’ VA span abilities. Two groups of dyslexic children with a single phoneme awareness or a single VA span deficit were then identified. The phonologically impaired group showed lower categorical perception skills than the control group but categorical perception was similar in the VA span impaired dyslexic and control children. The overall findings suggest that the link between categorical perception, phoneme awareness and reading is independent from VA span skills. These findings provide new insights on the heterogeneity of developmental dyslexia. They suggest that phonological processes and VA span independently affect reading acquisition. PMID:26950210
Category learning increases discriminability of relevant object dimensions in visual cortex.
Folstein, Jonathan R; Palmeri, Thomas J; Gauthier, Isabel
2013-04-01
Learning to categorize objects can transform how they are perceived, causing relevant perceptual dimensions predictive of object category to become enhanced. For example, an expert mycologist might become attuned to species-specific patterns of spacing between mushroom gills but learn to ignore cap textures attributable to varying environmental conditions. These selective changes in perception can persist beyond the act of categorizing objects and influence our ability to discriminate between them. Using functional magnetic resonance imaging adaptation, we demonstrate that such category-specific perceptual enhancements are associated with changes in the neural discriminability of object representations in visual cortex. Regions within the anterior fusiform gyrus became more sensitive to small variations in shape that were relevant during prior category learning. In addition, extrastriate occipital areas showed heightened sensitivity to small variations in shape that spanned the category boundary. Visual representations in cortex, just like our perception, are sensitive to an object's history of categorization.
A conflict-based model of color categorical perception: evidence from a priming study.
Hu, Zhonghua; Hanley, J Richard; Zhang, Ruiling; Liu, Qiang; Roberson, Debi
2014-10-01
Categorical perception (CP) of color manifests as faster or more accurate discrimination of two shades of color that straddle a category boundary (e.g., one blue and one green) than of two shades from within the same category (e.g., two different shades of green), even when the differences between the pairs of colors are equated according to some objective metric. The results of two experiments provide new evidence for a conflict-based account of this effect, in which CP is caused by competition between visual and verbal/categorical codes on within-category trials. According to this view, conflict arises because the verbal code indicates that the two colors are the same, whereas the visual code indicates that they are different. In Experiment 1, two shades from the same color category were discriminated significantly faster when the previous trial also comprised a pair of within-category colors than when the previous trial comprised a pair from two different color categories. Under the former circumstances, the CP effect disappeared. According to the conflict-based model, response conflict between visual and categorical codes during discrimination of within-category pairs produced an adjustment of cognitive control that reduced the weight given to the categorical code relative to the visual code on the subsequent trial. Consequently, responses on within-category trials were facilitated, and CP effects were reduced. The effectiveness of this conflict-based account was evaluated in comparison with an alternative view that CP reflects temporary warping of perceptual space at the boundaries between color categories.
ERIC Educational Resources Information Center
Yamazaki, Y.; Aust, U.; Huber, L.; Hausmann, M.; Gunturkun, O.
2007-01-01
This study was aimed at revealing which cognitive processes are lateralized in visual categorizations of "humans" by pigeons. To this end, pigeons were trained to categorize pictures of humans and then tested binocularly or monocularly (left or right eye) on the learned categorization and for transfer to novel exemplars (Experiment 1). Subsequent…
Schmidt, K; Forkmann, K; Sinke, C; Gratz, M; Bitz, A; Bingel, U
2016-07-01
Compared to peripheral pain, trigeminal pain elicits higher levels of fear, which is assumed to enhance the interruptive effects of pain on concomitant cognitive processes. In this fMRI study we examined the behavioral and neural effects of trigeminal (forehead) and peripheral (hand) pain on visual processing and memory encoding. Cerebral activity was measured in 23 healthy subjects performing a visual categorization task that was immediately followed by a surprise recognition task. During the categorization task subjects received concomitant noxious electrical stimulation on the forehead or hand. Our data show that fear ratings were significantly higher for trigeminal pain. Categorization and recognition performance did not differ between pictures that were presented with trigeminal and peripheral pain. However, object categorization in the presence of trigeminal pain was associated with stronger activity in task-relevant visual areas (lateral occipital complex, LOC), memory encoding areas (hippocampus and parahippocampus) and areas implicated in emotional processing (amygdala) compared to peripheral pain. Further, individual differences in neural activation between the trigeminal and the peripheral condition were positively related to differences in fear ratings between both conditions. Functional connectivity between amygdala and LOC was increased during trigeminal compared to peripheral painful stimulation. Fear-driven compensatory resource activation seems to be enhanced for trigeminal stimuli, presumably due to their exceptional biological relevance. Copyright © 2016 Elsevier Inc. All rights reserved.
Vesker, Michael; Bahn, Daniela; Kauschke, Christina; Tschense, Monika; Degé, Franziska; Schwarzer, Gudrun
2018-01-01
In order to assess how the perception of audible speech and facial expressions influence one another for the perception of emotions, and how this influence might change over the course of development, we conducted two cross-modal priming experiments with three age groups of children (6-, 9-, and 12-year-olds), as well as college-aged adults. In Experiment 1, 74 children and 24 adult participants were tasked with categorizing photographs of emotional faces as positive or negative as quickly as possible after being primed with emotion words presented via audio in valence-congruent and valence-incongruent trials. In Experiment 2, 67 children and 24 adult participants carried out a similar categorization task, but with faces acting as visual primes and emotion words acting as auditory targets. The results of Experiment 1 showed that participants made more errors when categorizing positive faces primed by negative words versus positive words, and that 6-year-old children are particularly sensitive to positive word primes, giving faster correct responses regardless of target valence. Meanwhile, the results of Experiment 2 did not show any congruency effects for priming by facial expressions. Thus, audible emotion words seem to exert an influence on the emotional categorization of faces, while faces do not seem to influence the categorization of emotion words in a significant way.
Overcoming default categorical bias in spatial memory.
Sampaio, Cristina; Wang, Ranxiao Frances
2010-12-01
In the present study, we investigated whether a strong default categorical bias can be overcome in spatial memory by using alternative membership information. In three experiments, we tested location memory in a circular space while providing participants with an alternative categorization. We found that visual presentation of the boundaries of the alternative categories (Experiment 1) did not induce the use of the alternative categories in estimation. In contrast, visual cuing of the alternative category membership of a target (Experiment 2) and unique target feature information associated with each alternative category (Experiment 3) successfully led to the use of the alternative categories in estimation. Taken together, the results indicate that default categorical bias in spatial memory can be overcome when appropriate cues are provided. We discuss how these findings expand the category adjustment model (Huttenlocher, Hedges, & Duncan, 1991) in spatial memory by proposing a retrieval-based category adjustment (RCA) model.
Xiao, Youping; Kavanau, Christopher; Bertin, Lauren; Kaplan, Ehud
2011-01-01
Many studies have provided evidence for the existence of universal constraints on color categorization or naming in various languages, but the biological basis of these constraints is unknown. A recent study of the pattern of color categorization across numerous languages has suggested that these patterns tend to avoid straddling a region in color space at or near the border between the English composite categories of "warm" and "cool". This fault line in color space represents a fundamental constraint on color naming. Here we report that the two-way categorization along the fault line is correlated with the sign of the L- versus M-cone contrast of a stimulus color. Moreover, we found that the sign of the L-M cone contrast also accounted for the two-way clustering of the spatially distributed neural responses in small regions of the macaque primary visual cortex, visualized with optical imaging. These small regions correspond to the hue maps, where our previous study found a spatially organized representation of stimulus hue. Altogether, these results establish a direct link between a universal constraint on color naming and the cone-specific information that is represented in the primate early visual system.
Rossion, Bruno; Torfs, Katrien; Jacques, Corentin; Liu-Shuang, Joan
2015-01-16
We designed a fast periodic visual stimulation approach to identify an objective signature of face categorization incorporating both visual discrimination (from nonface objects) and generalization (across widely variable face exemplars). Scalp electroencephalographic (EEG) data were recorded in 12 human observers viewing natural images of objects at a rapid frequency of 5.88 images/s for 60 s. Natural images of faces were interleaved every five stimuli, i.e., at 1.18 Hz (5.88/5). Face categorization was indexed by a high signal-to-noise ratio response, specifically at an oddball face stimulation frequency of 1.18 Hz and its harmonics. This face-selective periodic EEG response was highly significant for every participant, even for a single 60-s sequence, and was generally localized over the right occipitotemporal cortex. The periodicity constraint and the large selection of stimuli ensured that this selective response to natural face images was free of low-level visual confounds, as confirmed by the absence of any oddball response for phase-scrambled stimuli. Without any subtraction procedure, time-domain analysis revealed a sequence of differential face-selective EEG components between 120 and 400 ms after oddball face image onset, progressing from medial occipital (P1-faces) to occipitotemporal (N1-faces) and anterior temporal (P2-faces) regions. Overall, this fast periodic visual stimulation approach provides a direct signature of natural face categorization and opens an avenue for efficiently measuring categorization responses of complex visual stimuli in the human brain. © 2015 ARVO.
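The frequency-domain readout described above can be sketched generically: take the amplitude spectrum of the EEG, read out the amplitude at the oddball frequency and its harmonics, and express each relative to neighboring frequency bins as a signal-to-noise ratio. The synthetic data, bin counts, and parameter choices below are assumptions for illustration, not the authors' analysis settings.

```python
import numpy as np

def snr_at_frequency(eeg, sfreq, target_hz, n_neighbors=10, skip=1):
    """SNR at a frequency of interest: amplitude at the target bin divided by the
    mean amplitude of neighboring bins (excluding `skip` bins on each side)."""
    n = len(eeg)
    amp = np.abs(np.fft.rfft(eeg)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / sfreq)
    target = int(np.argmin(np.abs(freqs - target_hz)))
    neighbors = np.r_[target - skip - n_neighbors: target - skip,
                      target + skip + 1: target + skip + 1 + n_neighbors]
    return amp[target] / amp[neighbors].mean()

# Synthetic sequence sampled at 512 Hz containing a small periodic "oddball" response
# at 1.18 Hz plus a harmonic (50 s chosen so these frequencies fall exactly on FFT bins).
sfreq, dur = 512, 50
t = np.arange(sfreq * dur) / sfreq
rng = np.random.default_rng(0)
eeg = (0.5 * np.sin(2 * np.pi * 1.18 * t)
       + 0.25 * np.sin(2 * np.pi * 2.36 * t)
       + rng.normal(size=t.size))

for freq in (1.18, 2.36):
    print(f"{freq:.2f} Hz: SNR = {snr_at_frequency(eeg, sfreq, freq):.1f}")
```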
ERIC Educational Resources Information Center
Roberts, Kenneth
1997-01-01
Infants (N=24) with history of otitis media and tube placement were tested for categorical responding within a visual familiarization-discrimination model. Findings suggest that even mild hearing loss may adversely affect categorical responding under specific input conditions, which may persist after normal hearing is restored, possibly because…
Visual laterality in dolphins: importance of the familiarity of stimuli
2012-01-01
Background: Many studies of cerebral asymmetries in different species lead, on the one hand, to a better understanding of the functions of each cerebral hemisphere and, on the other hand, to the development of an evolutionary history of hemispheric laterality. Our animal model is particularly interesting because of its original evolutionary path, i.e. return to aquatic life after a terrestrial phase. The rare reports concerning visual laterality of marine mammals investigated mainly discrimination processes. As dolphins are migrant species, they are confronted with a changing environment. Being able to categorize new versus familiar objects would allow dolphins a rapid adaptation to novel environments. Visual laterality could be a prerequisite to this adaptability. To date, no study, to our knowledge, has analyzed the environmental factors that could influence their visual laterality. Results: We investigated visual laterality expressed spontaneously at the water surface by a group of five common bottlenose dolphins (Tursiops truncatus) in response to various stimuli. The stimuli presented ranged from very familiar objects (known and manipulated previously) to familiar objects (known but never manipulated) to unfamiliar objects (unknown, never seen previously). At the group level, dolphins used their left eye to observe very familiar objects and their right eye to observe unfamiliar objects. However, either eye was used indifferently to observe familiar objects with intermediate valence. Conclusion: Our results suggest different visual cerebral processes based either on the global shape of well-known objects or on local details of unknown objects. Moreover, the manipulation of an object appears necessary for these dolphins to construct a global representation of an object enabling its immediate categorization for subsequent use. Our experimental results pointed out some cognitive capacities of dolphins that might be crucial for their life in the wild, given their fission-fusion social system and migratory behaviour. PMID:22239860
Visual laterality in dolphins: importance of the familiarity of stimuli.
Blois-Heulin, Catherine; Crével, Mélodie; Böye, Martin; Lemasson, Alban
2012-01-12
Many studies of cerebral asymmetries in different species lead, on the one hand, to a better understanding of the functions of each cerebral hemisphere and, on the other hand, to the development of an evolutionary history of hemispheric laterality. Our animal model is particularly interesting because of its original evolutionary path, i.e. return to aquatic life after a terrestrial phase. The rare reports concerning visual laterality of marine mammals investigated mainly discrimination processes. As dolphins are migrant species, they are confronted with a changing environment. Being able to categorize new versus familiar objects would allow dolphins a rapid adaptation to novel environments. Visual laterality could be a prerequisite to this adaptability. To date, no study, to our knowledge, has analyzed the environmental factors that could influence their visual laterality. We investigated visual laterality expressed spontaneously at the water surface by a group of five common bottlenose dolphins (Tursiops truncatus) in response to various stimuli. The stimuli presented ranged from very familiar objects (known and manipulated previously) to familiar objects (known but never manipulated) to unfamiliar objects (unknown, never seen previously). At the group level, dolphins used their left eye to observe very familiar objects and their right eye to observe unfamiliar objects. However, either eye was used indifferently to observe familiar objects with intermediate valence. Our results suggest different visual cerebral processes based either on the global shape of well-known objects or on local details of unknown objects. Moreover, the manipulation of an object appears necessary for these dolphins to construct a global representation of an object enabling its immediate categorization for subsequent use. Our experimental results pointed out some cognitive capacities of dolphins that might be crucial for their life in the wild, given their fission-fusion social system and migratory behaviour.
Werner, Sebastian; Noppeney, Uta
2010-02-17
Multisensory interactions have been demonstrated in a distributed neural system encompassing primary sensory and higher-order association areas. However, their distinct functional roles in multisensory integration remain unclear. This functional magnetic resonance imaging study dissociated the functional contributions of three cortical levels to multisensory integration in object categorization. Subjects actively categorized or passively perceived noisy auditory and visual signals emanating from everyday actions with objects. The experiment included two 2 x 2 factorial designs that manipulated either (1) the presence/absence or (2) the informativeness of the sensory inputs. These experimental manipulations revealed three patterns of audiovisual interactions. (1) In primary auditory cortices (PACs), a concurrent visual input increased the stimulus salience by amplifying the auditory response regardless of task-context. Effective connectivity analyses demonstrated that this automatic response amplification is mediated via both direct and indirect [via superior temporal sulcus (STS)] connectivity to visual cortices. (2) In STS and intraparietal sulcus (IPS), audiovisual interactions sustained the integration of higher-order object features and predicted subjects' audiovisual benefits in object categorization. (3) In the left ventrolateral prefrontal cortex (vlPFC), explicit semantic categorization resulted in suppressive audiovisual interactions as an index for multisensory facilitation of semantic retrieval and response selection. In conclusion, multisensory integration emerges at multiple processing stages within the cortical hierarchy. The distinct profiles of audiovisual interactions dissociate audiovisual salience effects in PACs, formation of object representations in STS/IPS and audiovisual facilitation of semantic categorization in vlPFC. Furthermore, in STS/IPS, the profiles of audiovisual interactions were behaviorally relevant and predicted subjects' multisensory benefits in performance accuracy.
Balikou, Panagiota; Gourtzelidis, Pavlos; Mantas, Asimakis; Moutoussis, Konstantinos; Evdokimidis, Ioannis; Smyrnis, Nikolaos
2015-11-01
The representation of visual orientation is more accurate for cardinal orientations compared to oblique, and this anisotropy has been hypothesized to reflect a low-level visual process (visual, "class 1" oblique effect). The reproduction of directional and orientation information also leads to a mean error away from cardinal orientations or directions. This anisotropy has been hypothesized to reflect a high-level cognitive process of space categorization (cognitive, "class 2," oblique effect). This space categorization process would be more prominent when the visual representation of orientation degrades, such as in the case of working memory with increasing cognitive load, leading to increasing magnitude of the "class 2" oblique effect, while the "class 1" oblique effect would remain unchanged. Two experiments were performed in which an array of orientation stimuli (1-4 items) was presented and then subjects had to realign a probe stimulus within the previously presented array. In the first experiment, the delay between stimulus presentation and probe varied, while in the second experiment, the stimulus presentation time varied. The variable error was larger for oblique compared to cardinal orientations in both experiments, reproducing the visual "class 1" oblique effect. The mean error also reproduced the tendency away from cardinal and toward the oblique orientations in both experiments (cognitive "class 2" oblique effect). The accuracy of the reproduced orientation degraded (increasing variable error) and the cognitive "class 2" oblique effect increased with increasing memory load (number of items) in both experiments and presentation time in the second experiment. In contrast, the visual "class 1" oblique effect was not significantly modulated by any one of these experimental factors. These results confirmed the theoretical predictions for the two anisotropies in visual orientation reproduction and provided support for models proposing the categorization of orientation in visual working memory.
Kéri, Szabolcs; Kiss, Imre; Kelemen, Oguz; Benedek, György; Janka, Zoltán
2005-10-01
Schizophrenia is associated with impaired visual information processing. The aim of this study was to investigate the relationship between anomalous perceptual experiences, positive and negative symptoms, perceptual organization, rapid categorization of natural images and magnocellular (M) and parvocellular (P) visual pathway functioning. Thirty-five unmedicated patients with schizophrenia and 20 matched healthy control volunteers participated. Anomalous perceptual experiences were assessed with the Bonn Scale for the Assessment of Basic Symptoms (BSABS). General intellectual functions were evaluated with the revised version of the Wechsler Adult Intelligence Scale. The 1-9 version of the Continuous Performance Test (CPT) was used to investigate sustained attention. The following psychophysical tests were used: detection of Gabor patches with collinear and orthogonal flankers (perceptual organization), categorization of briefly presented natural scenes (rapid visual processing), low-contrast and frequency-doubling vernier threshold (M pathway functioning), isoluminant colour vernier threshold and high spatial frequency discrimination (P pathway functioning). The patients with schizophrenia were impaired on tests of perceptual organization, rapid visual processing and M pathway functioning. There was a significant correlation between BSABS scores, negative symptoms, perceptual organization, rapid visual processing and M pathway functioning. Positive symptoms, IQ, CPT and P pathway measures did not correlate with these parameters. The best predictor of the BSABS score was the perceptual organization deficit. These results raise the possibility that multiple facets of visual information processing deficits can be explained by M pathway dysfunctions in schizophrenia, resulting in impaired attentional modulation of perceptual organization and of natural image categorization.
Direct versus indirect processing changes the influence of color in natural scene categorization.
Otsuka, Sachio; Kawaguchi, Jun
2009-10-01
Using a negative priming (NP) paradigm, we examined whether participants would categorize color and grayscale images of natural scenes that were presented peripherally and ignored. We focused on (1) attentional resources allocated to natural scenes and (2) direct versus indirect processing of them. We set up low and high attention-load conditions, based on the set size of the searched stimuli in the prime display (one and five). Participants were required to detect and categorize the target objects in natural scenes in a central visual search task, ignoring peripheral natural images in both the prime and probe displays. The results showed that, irrespective of attention load, NP was observed for color scenes but not for grayscale scenes. We did not observe any effect of color information in central visual search, where participants responded directly to natural scenes. These results indicate that, in a situation in which participants indirectly process natural scenes, color information is critical to object categorization, but when the scenes are processed directly, color information does not contribute to categorization.
Lewellen, Mary Jo; Goldinger, Stephen D.; Pisoni, David B.; Greene, Beth G.
2012-01-01
College students were separated into 2 groups (high and low) on the basis of 3 measures: subjective familiarity ratings of words, self-reported language experiences, and a test of vocabulary knowledge. Three experiments were conducted to determine if the groups also differed in visual word naming, lexical decision, and semantic categorization. High Ss were consistently faster than low Ss in naming visually presented words. They were also faster and more accurate in making difficult lexical decisions and in rejecting homophone foils in semantic categorization. Taken together, the results demonstrate that Ss who differ in lexical familiarity also differ in processing efficiency. The relationship between processing efficiency and working memory accounts of individual differences in language processing is also discussed. PMID:8371087
Active sensing in the categorization of visual patterns
Yang, Scott Cheng-Hsin; Lengyel, Máté; Wolpert, Daniel M
2016-01-01
Interpreting visual scenes typically requires us to accumulate information from multiple locations in a scene. Using a novel gaze-contingent paradigm in a visual categorization task, we show that participants' scan paths follow an active sensing strategy that incorporates information already acquired about the scene and knowledge of the statistical structure of patterns. Intriguingly, categorization performance was markedly improved when locations were revealed to participants by an optimal Bayesian active sensor algorithm. By using a combination of a Bayesian ideal observer and the active sensor algorithm, we estimate that a major portion of this apparent suboptimality of fixation locations arises from prior biases, perceptual noise and inaccuracies in eye movements, and the central process of selecting fixation locations is around 70% efficient in our task. Our results suggest that participants select eye movements with the goal of maximizing information about abstract categories that require the integration of information from multiple locations. DOI: http://dx.doi.org/10.7554/eLife.12215.001 PMID:26880546
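A greedy Bayesian active sensor of the general kind described above can be sketched as follows: maintain a posterior over categories and, at each step, fixate the location whose observation is expected to reduce posterior entropy the most. The two-category templates, Gaussian noise model, and Monte Carlo estimate below are simplified assumptions for illustration, not the authors' algorithm or task.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0                          # observation noise at a fixated location
templates = np.array([               # mean pattern value per location, one row per category
    [0.0, 0.0, 1.0, 1.0, 0.0],
    [0.0, 1.0, 0.0, 1.0, 1.0],
])
n_cat, n_loc = templates.shape

def normalize(log_p):
    p = np.exp(log_p - log_p.max())
    return p / p.sum()

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def log_lik(x, loc):
    """Log-likelihood (up to a constant) of value x at location loc under each category."""
    return -0.5 * ((x - templates[:, loc]) / sigma) ** 2

def expected_entropy(log_post, loc, n_samples=200):
    """Monte Carlo estimate of the expected posterior entropy after fixating loc."""
    post = normalize(log_post)
    total = 0.0
    for _ in range(n_samples):
        c = rng.choice(n_cat, p=post)              # sample a category from the current posterior...
        x = rng.normal(templates[c, loc], sigma)   # ...and a hypothetical observation there
        total += entropy(normalize(log_post + log_lik(x, loc)))
    return total / n_samples

# Greedy active-sensing loop: fixate the location with the lowest expected entropy
true_category = 1
log_post = np.zeros(n_cat)                         # uniform prior over the two categories
for step in range(3):
    scores = [expected_entropy(log_post, loc) for loc in range(n_loc)]
    loc = int(np.argmin(scores))                   # most informative location to look at next
    x = rng.normal(templates[true_category, loc], sigma)   # simulated observation
    log_post = log_post + log_lik(x, loc)
    print(f"step {step}: fixated location {loc}, posterior {np.round(normalize(log_post), 2)}")
```

Locations where the two templates agree carry no information and should rarely be selected, which is the behavior an ideal-observer comparison of the kind described above would reward.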
Using Prosopagnosia to Test and Modify Visual Recognition Theory.
O'Brien, Alexander M
2018-02-01
Biederman's contemporary theory of basic visual object recognition (Recognition-by-Components) is based on structural descriptions of objects and presumes 36 visual primitives (geons) people can discriminate, but there has been no empirical test of the actual use of these 36 geons to visually distinguish objects. In this study, we tested for the actual use of these geons in basic visual discrimination by comparing object discrimination performance patterns (when distinguishing varied stimuli) of an acquired prosopagnosia patient (LB) and healthy control participants. LB's prosopagnosia left her heavily reliant on structural descriptions or categorical object differences in visual discrimination tasks, whereas control participants could additionally use face recognition or coordinate systems (Coordinate Relations Hypothesis). Thus, when LB performed comparably to control participants with a given stimulus, her restricted reliance on basic or categorical discriminations meant that the stimuli must be distinguishable on the basis of a geon feature. By varying stimuli in eight separate experiments and presenting all 36 geons, we discerned that LB coded only 12 (vs. 36) distinct visual primitives (geons), apparently reflective of human visual systems generally.
Webly-Supervised Fine-Grained Visual Categorization via Deep Domain Adaptation.
Xu, Zhe; Huang, Shaoli; Zhang, Ya; Tao, Dacheng
2018-05-01
Learning visual representations from web data has recently attracted attention for object recognition. Previous studies have mainly focused on overcoming label noise and data bias and have shown promising results by learning directly from web data. However, we argue that it might be better to transfer knowledge from existing human labeling resources to improve performance at nearly no additional cost. In this paper, we propose a new semi-supervised method for learning via web data. Our method has the unique design of exploiting strong supervision, i.e., in addition to standard image-level labels, our method also utilizes detailed annotations including object bounding boxes and part landmarks. By transferring as much knowledge as possible from existing strongly supervised datasets to weakly supervised web images, our method can benefit from sophisticated object recognition algorithms and overcome several typical problems found in webly-supervised learning. We consider the problem of fine-grained visual categorization, in which existing training resources are scarce, as our main research objective. Comprehensive experimentation and extensive analysis demonstrate encouraging performance of the proposed approach, which, at the same time, delivers a new pipeline for fine-grained visual categorization that is likely to be highly effective for real-world applications.
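A common building block in this style of cross-domain learning is a discrepancy measure between the feature distributions of the strongly supervised data and the web data. The sketch below computes a maximum mean discrepancy (MMD) estimate in NumPy; it is an illustrative assumption, not the specific loss used in the paper.

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Squared maximum mean discrepancy between feature sets X and Y under an
    RBF kernel; small values mean the two domains look statistically alike."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(1)
strong = rng.normal(0.0, 1.0, (100, 64))   # features from curated, annotated data
web = rng.normal(0.5, 1.2, (100, 64))      # features from noisy web images
print("MMD^2 between domains:", rbf_mmd2(strong, web))
```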
Can invertebrates see the e-vector of polarization as a separate modality of light?
Labhart, Thomas
2016-12-15
The visual world is rich in linearly polarized light stimuli, which are hidden from the human eye. But many invertebrate species make use of polarized light as a source of valuable visual information. However, exploiting light polarization does not necessarily imply that the electric (e)-vector orientation of polarized light can be perceived as a separate modality of light. In this Review, I address the question of whether invertebrates can detect specific e-vector orientations in a manner similar to that of humans perceiving spectral stimuli as specific hues. To analyze e-vector orientation, the signals of at least three polarization-sensitive sensors (analyzer channels) with different e-vector tuning axes must be compared. The object-based, imaging polarization vision systems of cephalopods and crustaceans, as well as the water-surface detectors of flying backswimmers, use just two analyzer channels. Although this excludes the perception of specific e-vector orientations, a two-channel system does provide a coarse, categoric analysis of polarized light stimuli, comparable to the limited color sense of dichromatic, 'color-blind' humans. The celestial compass of insects employs three or more analyzer channels. However, that compass is multimodal, i.e. e-vector information merges with directional information from other celestial cues, such as the solar azimuth and the spectral gradient in the sky, masking e-vector information. It seems that invertebrate organisms take no interest in the polarization details of visual stimuli, but polarization vision grants more practical benefits, such as improved object detection and visual communication for cephalopods and crustaceans, compass readings to traveling insects, or the alert 'water below!' to water-seeking bugs. © 2016. Published by The Company of Biologists Ltd.
Can invertebrates see the e-vector of polarization as a separate modality of light?
2016-01-01
The visual world is rich in linearly polarized light stimuli, which are hidden from the human eye. But many invertebrate species make use of polarized light as a source of valuable visual information. However, exploiting light polarization does not necessarily imply that the electric (e)-vector orientation of polarized light can be perceived as a separate modality of light. In this Review, I address the question of whether invertebrates can detect specific e-vector orientations in a manner similar to that of humans perceiving spectral stimuli as specific hues. To analyze e-vector orientation, the signals of at least three polarization-sensitive sensors (analyzer channels) with different e-vector tuning axes must be compared. The object-based, imaging polarization vision systems of cephalopods and crustaceans, as well as the water-surface detectors of flying backswimmers, use just two analyzer channels. Although this excludes the perception of specific e-vector orientations, a two-channel system does provide a coarse, categoric analysis of polarized light stimuli, comparable to the limited color sense of dichromatic, ‘color-blind’ humans. The celestial compass of insects employs three or more analyzer channels. However, that compass is multimodal, i.e. e-vector information merges with directional information from other celestial cues, such as the solar azimuth and the spectral gradient in the sky, masking e-vector information. It seems that invertebrate organisms take no interest in the polarization details of visual stimuli, but polarization vision grants more practical benefits, such as improved object detection and visual communication for cephalopods and crustaceans, compass readings to traveling insects, or the alert ‘water below!’ to water-seeking bugs. PMID:27974532

Visual search and autism symptoms: What young children search for and co-occurring ADHD matter.
Doherty, Brianna R; Charman, Tony; Johnson, Mark H; Scerif, Gaia; Gliga, Teodora
2018-05-03
Superior visual search is one of the most common findings in the autism spectrum disorder (ASD) literature. Here, we ascertain how generalizable these findings are across task and participant characteristics, in light of recent replication failures. We tested 106 3-year-old children at familial risk for ASD, a sample that presents high ASD and ADHD symptoms, and 25 control participants, in three multi-target search conditions: easy exemplar search (look for cats amongst artefacts), difficult exemplar search (look for dogs amongst chairs/tables perceptually similar to dogs), and categorical search (look for animals amongst artefacts). Performance was related to dimensional measures of ASD and ADHD, in agreement with current research domain criteria (RDoC). We found that ASD symptom severity did not associate with enhanced performance in search, but did associate with poorer categorical search in particular, consistent with literature describing impairments in categorical knowledge in ASD. Furthermore, ASD and ADHD symptoms were both associated with more disorganized search paths across all conditions. Thus, ASD traits do not always convey an advantage in visual search; on the contrary, ASD traits may be associated with difficulties in search depending upon the nature of the stimuli (e.g., exemplar vs. categorical search) and the presence of co-occurring symptoms. © 2018 John Wiley & Sons Ltd.
Furl, N; van Rijsbergen, N J; Treves, A; Dolan, R J
2007-08-01
Previous studies have shown reductions of the functional magnetic resonance imaging (fMRI) signal in response to repetition of specific visual stimuli. We examined how adaptation affects the neural responses associated with categorization behavior, using face adaptation aftereffects. Adaptation to a given facial category biases categorization towards non-adapted facial categories in response to presentation of ambiguous morphs. We explored a hypothesis, posed by recent psychophysical studies, that these adaptation-induced categorizations are mediated by activity in relatively advanced stages within the occipitotemporal visual processing stream. Replicating these studies, we find that adaptation to a facial expression heightens perception of non-adapted expressions. Using comparable behavioral methods, we also show that adaptation to a specific identity heightens perception of a second identity in morph faces. We show both expression and identity effects to be associated with heightened anterior medial temporal lobe activity, specifically when perceiving the non-adapted category. These regions, incorporating bilateral anterior ventral rhinal cortices, perirhinal cortex and left anterior hippocampus are regions previously implicated in high-level visual perception. These categorization effects were not evident in fusiform or occipital gyri, although activity in these regions was reduced to repeated faces. The findings suggest that adaptation-induced perception is mediated by activity in regions downstream to those showing reductions due to stimulus repetition.
Edge co-occurrences can account for rapid categorization of natural versus animal images
NASA Astrophysics Data System (ADS)
Perrinet, Laurent U.; Bednar, James A.
2015-06-01
Making a judgment about the semantic category of a visual scene, such as whether it contains an animal, is typically assumed to involve high-level associative brain areas. Previous explanations require progressively analyzing the scene hierarchically at increasing levels of abstraction, from edge extraction to mid-level object recognition and then object categorization. Here we show that the statistics of edge co-occurrences alone are sufficient to perform a rough yet robust (translation, scale, and rotation invariant) scene categorization. We first extracted the edges from images using a scale-space analysis coupled with a sparse coding algorithm. We then computed the “association field” for different categories (natural, man-made, or containing an animal) by computing the statistics of edge co-occurrences. These differed strongly, with animal images having more curved configurations. We show that this geometry alone is sufficient for categorization, and that the pattern of errors made by humans is consistent with this procedure. Because these statistics could be measured as early as the primary visual cortex, the results challenge widely held assumptions about the flow of computations in the visual system. The results also suggest new algorithms for image classification and signal processing that exploit correlations between low-level structure and the underlying semantic category.
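The central statistic, the distribution of relative orientations between pairs of detected edges (the "association field"), can be approximated with a short sketch. Here edges come from simple image gradients rather than the authors' scale-space sparse coding, so treat this as an illustration of the idea only.

```python
import numpy as np

def edge_cooccurrence_histogram(image, n_bins=16, grad_thresh=0.1, max_edges=2000):
    """Histogram of relative orientations between pairs of strong edges.
    Curved, animal-like contours put more mass away from 0 (collinear)."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    theta = np.arctan2(gy, gx) % np.pi                 # edge orientation, 0..pi
    ys, xs = np.nonzero(mag > grad_thresh * mag.max())
    idx = np.random.default_rng(0).choice(len(ys), min(max_edges, len(ys)), replace=False)
    ori = theta[ys[idx], xs[idx]]
    d = np.abs(ori[:, None] - ori[None, :])            # all pairwise differences
    d = np.minimum(d, np.pi - d)                       # fold into 0..pi/2
    hist, _ = np.histogram(d, bins=n_bins, range=(0, np.pi / 2), density=True)
    return hist

# A category's signature could then be the average histogram over its images,
# with a new image assigned to the nearest signature (e.g., chi-squared distance).
```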
Drawing-to-learn: a framework for using drawings to promote model-based reasoning in biology.
Quillin, Kim; Thomas, Stephen
2015-03-02
The drawing of visual representations is important for learners and scientists alike, such as the drawing of models to enable visual model-based reasoning. Yet few biology instructors recognize drawing as a teachable science process skill, as reflected by its absence in the Vision and Change report's Modeling and Simulation core competency. Further, the diffuse research on drawing can be difficult to access, synthesize, and apply to classroom practice. We have created a framework of drawing-to-learn that defines drawing, categorizes the reasons for using drawing in the biology classroom, and outlines a number of interventions that can help instructors create an environment conducive to student drawing in general and visual model-based reasoning in particular. The suggested interventions are organized to address elements of affect, visual literacy, and visual model-based reasoning, with specific examples cited for each. Further, a Blooming tool for drawing exercises is provided, as are suggestions to help instructors address possible barriers to implementing and assessing drawing-to-learn in the classroom. Overall, the goal of the framework is to increase the visibility of drawing as a skill in biology and to promote the research and implementation of best practices. © 2015 K. Quillin and S. Thomas. CBE—Life Sciences Education © 2015 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
Self-supervised online metric learning with low rank constraint for scene categorization.
Cong, Yang; Liu, Ji; Yuan, Junsong; Luo, Jiebo
2013-08-01
Conventional visual recognition systems usually train an image classifier in a batch mode with all training data provided in advance. However, in many practical applications, only a small number of training samples are available at the beginning, and many more arrive sequentially during online recognition. Because the image data characteristics could change over time, it is important for the classifier to adapt to the new data incrementally. In this paper, we present an online metric learning method to address the online scene recognition problem via adaptive similarity measurement. Given a number of labeled data followed by a sequential input of unseen testing samples, the similarity metric is learned to maximize the margin of the distance among different classes of samples. By considering the low rank constraint, our online metric learning model not only provides competitive performance compared with the state-of-the-art methods, but also guarantees convergence. A bi-linear graph is also defined to model the pair-wise similarity, and an unseen sample is labeled depending on the graph-based label propagation, while the model can also self-update using the more confident new samples. With its online learning and incremental self-updating abilities, our methodology can handle large-scale streaming video data. We apply our model to online scene categorization; experiments on various benchmark datasets and comparisons with state-of-the-art methods demonstrate the effectiveness and efficiency of our algorithm.
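A minimal sketch of the general idea, online Mahalanobis metric updates followed by a low-rank positive semidefinite projection, is shown below; the update rule and rank are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def online_metric_update(M, xi, xj, same_class, lr=0.1, rank=10):
    """One online update of a Mahalanobis metric M. Pairs from the same class
    are pulled together, others pushed apart, then M is projected back onto
    low-rank positive semidefinite matrices."""
    d = xi - xj
    grad = np.outer(d, d) if same_class else -np.outer(d, d)
    M = M - lr * grad
    vals, vecs = np.linalg.eigh(M)                 # project onto the PSD cone
    vals = np.clip(vals, 0, None)
    top = np.argsort(vals)[::-1][:rank]            # keep top `rank` directions
    return (vecs[:, top] * vals[top]) @ vecs[:, top].T

rng = np.random.default_rng(2)
dim = 32
M = np.eye(dim)
for _ in range(200):                               # stream of labeled pairs
    a, b = rng.normal(size=(2, dim))
    M = online_metric_update(M, a, b, same_class=rng.random() < 0.5)

def learned_distance(x, y):
    """Squared distance under the learned metric."""
    return float((x - y) @ M @ (x - y))
```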
CREMA-D: Crowd-sourced Emotional Multimodal Actors Dataset
Cao, Houwei; Cooper, David G.; Keutmann, Michael K.; Gur, Ruben C.; Nenkova, Ani; Verma, Ragini
2014-01-01
People convey their emotional state in their face and voice. We present an audio-visual data set uniquely suited for the study of multi-modal emotion expression and perception. The data set consists of facial and vocal emotional expressions in sentences spoken in a range of basic emotional states (happy, sad, anger, fear, disgust, and neutral). 7,442 clips of 91 actors with diverse ethnic backgrounds were rated by multiple raters in three modalities: audio, visual, and audio-visual. Categorical emotion labels and real-value intensity values for the perceived emotion were collected using crowd-sourcing from 2,443 raters. The human recognition of intended emotion for the audio-only, visual-only, and audio-visual data are 40.9%, 58.2% and 63.6% respectively. Recognition rates are highest for neutral, followed by happy, anger, disgust, fear, and sad. Average intensity levels of emotion are rated highest for visual-only perception. The accurate recognition of disgust and fear requires simultaneous audio-visual cues, while anger and happiness can be well recognized based on evidence from a single modality. The large dataset we introduce can be used to probe other questions concerning the audio-visual perception of emotion. PMID:25653738
Pragmatic precision oncology: the secondary uses of clinical tumor molecular profiling.
Rioth, Matthew J; Thota, Ramya; Staggs, David B; Johnson, Douglas B; Warner, Jeremy L
2016-07-01
Precision oncology increasingly utilizes molecular profiling of tumors to determine treatment decisions with targeted therapeutics. The molecular profiling data is valuable in the treatment of individual patients as well as for multiple secondary uses. Our objective was to automatically parse, categorize, and aggregate clinical molecular profile data generated during cancer care, and to use these data to address multiple secondary use cases. A system to parse, categorize, and aggregate molecular profile data was created. A naïve Bayesian classifier categorized results according to clinical groups. The accuracy of these systems was validated against a published, expertly curated subset of molecular profiling data. Following one year of operation, 819 samples have been accurately parsed and categorized to generate a data repository of 10,620 genetic variants. The database has been used for operational, clinical trial, and discovery science research. A real-time database of molecular profiling data is a pragmatic solution to several knowledge management problems in the practice and science of precision oncology. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
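The classification step can be sketched as a bag-of-words naive Bayes model, as below; the example report strings and clinical group labels are hypothetical and are not taken from the authors' system.

```python
# Minimal sketch (hypothetical labels and phrasing, not the authors' system):
# categorize free-text variant results into clinical groups with naive Bayes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_text = [
    "BRAF V600E mutation detected",
    "EGFR exon 19 deletion detected",
    "no actionable variants identified",
    "KRAS G12D mutation detected",
]
train_label = ["targetable", "targetable", "no_actionable", "not_targetable"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(train_text, train_label)
print(clf.predict(["EGFR L858R mutation detected"]))   # -> predicted clinical group
```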
An enquiry into the process of categorization of pictures and words.
Viswanathan, Madhubalan; Childers, Terry L
2003-02-01
This paper reports a series of experiments conducted to study the categorization of pictures and words. Whereas some studies reported in the past have found a picture advantage in categorization, other studies have yielded no differences between pictures and words. This paper used an experimental paradigm designed to overcome some methodological problems to examine picture-word categorization. The results of one experiment were consistent with an advantage for pictures in categorization. To identify the source of the picture advantage in categorization, two more experiments were conducted. Findings suggest that semantic relatedness may play an important role in the categorization of both pictures and words. We explain these findings by suggesting that pictures simultaneously access both their concept and visually salient features whereas words may initially access their concept and may subsequently activate features. Therefore, pictures have an advantage in categorization by offering multiple routes to semantic processing.
Are Categorical Spatial Relations Encoded by Shifting Visual Attention between Objects?
ERIC Educational Resources Information Center
Yuan, Lei; Uttal, David; Franconeri, Steven
2016-01-01
Perceiving not just values, but relations between values, is critical to human cognition. We tested the predictions of a proposed mechanism for processing categorical spatial relations between two objects--the "shift account" of relation processing--which states that relations such as "above" or "below" are extracted…
Neurophysiological Evidence for Categorical Perception of Color
ERIC Educational Resources Information Center
Holmes, Amanda; Franklin, Anna; Clifford, Alexandra; Davies, Ian
2009-01-01
The aim of this investigation was to examine the time course and the relative contributions of perceptual and post-perceptual processes to categorical perception (CP) of color. A visual oddball task was used with standard and deviant stimuli from same (within-category) or different (between-category) categories, with chromatic separations for…
Color categories affect pre-attentive color perception.
Clifford, Alexandra; Holmes, Amanda; Davies, Ian R L; Franklin, Anna
2010-10-01
Categorical perception (CP) of color is the faster and/or more accurate discrimination of colors from different categories than equivalently spaced colors from the same category. Here, we investigate whether color CP at early stages of chromatic processing is independent of top-down modulation from attention. A visual oddball task was employed where frequent and infrequent colored stimuli were either same- or different-category, with chromatic differences equated across conditions. Stimuli were presented peripheral to a central distractor task to elicit an event-related potential (ERP) known as the visual mismatch negativity (vMMN). The vMMN is an index of automatic and pre-attentive visual change detection arising from generating loci in visual cortices. The results revealed a greater vMMN for different-category than same-category change detection when stimuli appeared in the lower visual field, and an absence of attention-related ERP components. The findings provide the first clear evidence for an automatic and pre-attentive categorical code for color. Copyright © 2010 Elsevier B.V. All rights reserved.
Conscious control over the content of unconscious cognition.
Kunde, Wilfried; Kiesel, Andrea; Hoffmann, Joachim
2003-06-01
Visual stimuli (primes) presented too briefly to be consciously identified can nevertheless affect responses to subsequent stimuli - an instance of unconscious cognition. There is a lively debate as to whether such priming effects originate from unconscious semantic processing of the primes or from reactivation of learned motor responses that conscious stimuli afford during preceding practice. In four experiments we demonstrate that unconscious stimuli owe their impact neither to automatic semantic categorization nor to memory traces of preceding stimulus-response episodes, but to their match with pre-specified cognitive action-trigger conditions. The intentional creation of such triggers allows actors to control the way unconscious stimuli bias their behaviour.
Search guidance is proportional to the categorical specificity of a target cue.
Schmidt, Joseph; Zelinsky, Gregory J
2009-10-01
Visual search studies typically assume the availability of precise target information to guide search, often a picture of the exact target. However, search targets in the real world are often defined categorically and with varying degrees of visual specificity. In five target preview conditions we manipulated the availability of target visual information in a search task for common real-world objects. Previews were: a picture of the target, an abstract textual description of the target, a precise textual description, an abstract + colour textual description, or a precise + colour textual description. Guidance generally increased as information was added to the target preview. We conclude that the information used for search guidance need not be limited to a picture of the target. Although generally less precise, to the extent that visual information can be extracted from a target label and loaded into working memory, this information too can be used to guide search.
Feedforward object-vision models only tolerate small image variations compared to human
Ghodrati, Masoud; Farzmahdi, Amirhossein; Rajaei, Karim; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi
2014-01-01
Invariant object recognition is a remarkable ability of the primate visual system whose underlying mechanism has constantly been under intense investigation. Computational modeling is a valuable tool toward understanding the processes involved in invariant object recognition. Although recent computational models have shown outstanding performances on challenging image databases, they fail to perform well in image categorization under more complex image variations. Studies have shown that making sparse representations of objects by extracting more informative visual features through a feedforward sweep can lead to higher recognition performance. Here, however, we show that when the complexity of image variations is high, even this approach results in poor performance compared to humans. To assess the performance of models and humans in invariant object recognition tasks, we built a parametrically controlled image database consisting of several object categories varied in different dimensions and levels, rendered from 3D planes. Comparing the performance of several object recognition models with human observers shows that the models perform similarly to humans in categorization tasks only under low-level image variations. Furthermore, the results of our behavioral experiments demonstrate that, even under difficult experimental conditions (i.e., briefly presented masked stimuli with complex image variations), human observers performed outstandingly well, suggesting that the models are still far from resembling humans in invariant object recognition. Taken together, we suggest that learning sparse informative visual features, although desirable, is not a complete solution for future progress in object-vision modeling. We show that this approach is not of significant help in solving the computational crux of object recognition (i.e., invariant object recognition) when the identity-preserving image variations become more complex. PMID:25100986
Common angle plots as perception-true visualizations of categorical associations.
Hofmann, Heike; Vendettuoli, Marie
2013-12-01
Visualizations are great tools of communication: they summarize findings and quickly convey main messages to our audience. As designers of charts, we have to make sure that information is shown with a minimum of distortion. We also have to consider illusions and other perceptual limitations of our audience. In this paper we discuss the effect and strength of the line width illusion, a Müller-Lyer type illusion, on designs related to displaying associations between categorical variables. Parallel sets and hammock plots are both affected by line width illusions. We introduce the common-angle plot as an alternative method for displaying categorical data in a manner that minimizes the effect from perceptual illusions. Results from user studies both highlight the need for addressing line-width illusions in displays and provide evidence that common angle charts successfully resolve this issue.
Mo, Lei; Xu, Guiping; Kay, Paul; Tan, Li-Hai
2011-01-01
Previous studies have shown that the effect of language on categorical perception of color is stronger when stimuli are presented in the right visual field than in the left. To examine whether this lateralized effect occurs preattentively at an early stage of processing, we monitored the visual mismatch negativity, which is a component of the event-related potential of the brain to an unfamiliar stimulus among a temporally presented series of stimuli. In the oddball paradigm we used, the deviant stimuli were unrelated to the explicit task. A significant interaction between color-pair type (within-category vs. between-category) and visual field (left vs. right) was found. The amplitude of the visual mismatch negativity component evoked by the within-category deviant was significantly smaller than that evoked by the between-category deviant when displayed in the right visual field, but no such difference was observed for the left visual field. This result constitutes electroencephalographic evidence that the lateralized Whorf effect per se occurs out of awareness and at an early stage of processing. PMID:21844340
Dissociation between recognition and detection advantage for facial expressions: a meta-analysis.
Nummenmaa, Lauri; Calvo, Manuel G
2015-04-01
Happy facial expressions are recognized faster and more accurately than other expressions in categorization tasks, whereas detection in visual search tasks is widely believed to be faster for angry than happy faces. We used meta-analytic techniques for resolving this categorization versus detection advantage discrepancy for positive versus negative facial expressions. Effect sizes were computed on the basis of the r statistic for a total of 34 recognition studies with 3,561 participants and 37 visual search studies with 2,455 participants, yielding a total of 41 effect sizes for recognition accuracy, 25 for recognition speed, and 125 for visual search speed. Random effects meta-analysis was conducted to estimate effect sizes at population level. For recognition tasks, an advantage in recognition accuracy and speed for happy expressions was found for all stimulus types. In contrast, for visual search tasks, moderator analysis revealed that a happy face detection advantage was restricted to photographic faces, whereas a clear angry face advantage was found for schematic and "smiley" faces. Robust detection advantage for nonhappy faces was observed even when stimulus emotionality was distorted by inversion or rearrangement of the facial features, suggesting that visual features primarily drive the search. We conclude that the recognition advantage for happy faces is a genuine phenomenon related to processing of facial expression category and affective valence. In contrast, detection advantages toward either happy (photographic stimuli) or nonhappy (schematic) faces are contingent on visual stimulus features rather than facial expression, and may not involve categorical or affective processing. (c) 2015 APA, all rights reserved.
Council for Exceptional Children: Standards for Evidence-Based Practices in Special Education
ERIC Educational Resources Information Center
TEACHING Exceptional Children, 2014
2014-01-01
In this article, the "Council for Exceptional Children (CEC)" presents Standards for Evidence-Based Practices in Special Education. The statement presents an approach for categorizing the evidence base of practices in special education. The quality indicators and the criteria for categorizing the evidence base of special education…
Kreeft, Davey; Arkenbout, Ewout Aart; Henselmans, Paulus Wilhelmus Johannes; van Furth, Wouter R.; Breedveld, Paul
2017-01-01
A clear visualization of the operative field is of critical importance in endoscopic surgery. During surgery the endoscope lens can get fouled by body fluids (eg, blood), ground substance, rinsing fluid, bone dust, or smoke plumes, resulting in visual impairment. As a result, surgeons spend part of the procedure on intermittent cleaning of the endoscope lens. Current cleaning methods that rely on manual wiping or a lens irrigation system are still far from ideal, leading to longer procedure times, dirtying of the surgical site, and reduced visual acuity, potentially reducing patient safety. With the goal of finding a solution to these issues, a literature review was conducted to identify and categorize existing techniques capable of achieving optically clean surfaces, and to show which techniques can potentially be implemented in surgical practice. The review found that the most promising method for achieving surface cleanliness consists of a hybrid solution, namely, that of a hydrophilic or hydrophobic coating on the endoscope lens and the use of the existing lens irrigation system. PMID:28511635
AutoBD: Automated Bi-Level Description for Scalable Fine-Grained Visual Categorization.
Yao, Hantao; Zhang, Shiliang; Yan, Chenggang; Zhang, Yongdong; Li, Jintao; Tian, Qi
Compared with traditional image classification, fine-grained visual categorization is a more challenging task, because it aims to classify objects belonging to the same species, e.g., classifying hundreds of birds or cars. In the past several years, researchers have made many achievements on this topic. However, most of them are heavily dependent on artificial annotations, e.g., bounding boxes, part annotations, and so on. The requirement of artificial annotations largely hinders scalability and application. Motivated to release such dependence, this paper proposes a robust and discriminative visual description named Automated Bi-level Description (AutoBD). "Bi-level" denotes two complementary part-level and object-level visual descriptions, respectively. AutoBD is "automated," because it only requires the image-level labels of training images and does not need any annotations for testing images. Compared with the part annotations labeled by humans, image-level labels can be easily acquired, which thus makes AutoBD suitable for large-scale visual categorization. Specifically, the part-level description is extracted by identifying the local region saliently representing the visual distinctiveness. The object-level description is extracted from object bounding boxes generated with a co-localization algorithm. Although only using the image-level labels, AutoBD outperforms recent studies on two public benchmarks, i.e., classification accuracy reaches 81.6% on CUB-200-2011 and 88.9% on Car-196, respectively. On the large-scale Birdsnap data set, AutoBD achieves an accuracy of 68%, which is currently the best performance to the best of our knowledge.
Semantic processing and response inhibition.
Chiang, Hsueh-Sheng; Motes, Michael A; Mudar, Raksha A; Rao, Neena K; Mansinghani, Sethesh; Brier, Matthew R; Maguire, Mandy J; Kraut, Michael A; Hart, John
2013-11-13
The present study examined functional MRI (fMRI) BOLD signal changes in response to object categorization during response selection and inhibition. Young adults (N=16) completed a Go/NoGo task with varying object categorization requirements while fMRI data were recorded. Response inhibition elicited increased signal change in various brain regions, including medial frontal areas, compared with response selection. BOLD signal in an area within the right angular gyrus was increased when higher-order categorization was mandated. In addition, signal change during response inhibition varied with categorization requirements in the left inferior temporal gyrus (lIT). lIT mediated response inhibition when inhibiting the response required only lower-order categorization, but it mediated both response selection and inhibition when selecting and inhibiting the response required higher-order categorization. The findings characterize the mechanisms mediating response inhibition associated with semantic object categorization in the 'what' visual object memory system.
Effects of Stimulus Duration and Choice Delay on Visual Categorization in Pigeons
ERIC Educational Resources Information Center
Lazareva, Olga F.; Wasserman, Edward A.
2009-01-01
We [Lazareva, O. F., Freiburger, K. L., & Wasserman, E. A. (2004). "Pigeons concurrently categorize photographs at both basic and superordinate levels." "Psychonomic Bulletin and Review," 11, 1111-1117] previously trained four pigeons to classify color photographs into their basic-level categories (cars, chairs, flowers, or people) or into their…
Spatial and temporal features of superordinate semantic processing studied with fMRI and EEG.
Costanzo, Michelle E; McArdle, Joseph J; Swett, Bruce; Nechaev, Vladimir; Kemeny, Stefan; Xu, Jiang; Braun, Allen R
2013-01-01
The relationships between the anatomical representation of semantic knowledge in the human brain and the timing of neurophysiological mechanisms involved in manipulating such information remain unclear. This is the case for superordinate semantic categorization-the extraction of general features shared by broad classes of exemplars (e.g., living vs. non-living semantic categories). We proposed that, because of the abstract nature of this information, input from diverse input modalities (visual or auditory, lexical or non-lexical) should converge and be processed in the same regions of the brain, at similar time scales during superordinate categorization-specifically in a network of heteromodal regions, and late in the course of the categorization process. In order to test this hypothesis, we utilized electroencephalography and event related potentials (EEG/ERP) with functional magnetic resonance imaging (fMRI) to characterize subjects' responses as they made superordinate categorical decisions (living vs. non-living) about objects presented as visual pictures or auditory words. Our results reveal that, consistent with our hypothesis, during the course of superordinate categorization, information provided by these diverse inputs appears to converge in both time and space: fMRI showed that heteromodal areas of the parietal and temporal cortices are active during categorization of both classes of stimuli. The ERP results suggest that superordinate categorization is reflected as a late positive component (LPC) with a parietal distribution and long latencies for both stimulus types. Within the areas and times in which modality independent responses were identified, some differences between living and non-living categories were observed, with a more widespread spatial extent and longer latency responses for categorization of non-living items.
ERIC Educational Resources Information Center
Janko, Tomáš; Knecht, Petr
2013-01-01
The article focuses on the issue of visuals in geography textbooks and their assessment. The issue is explained from the perspective of educational and cognitive psychological theory and the theory of textbooks. The general purpose of the presented research is to develop a research instrument for categorising the types of visuals in geography…
ERIC Educational Resources Information Center
Kisamore, April N.; Carr, James E.; LeBlanc, Linda A.
2011-01-01
It has been suggested that verbally sophisticated individuals engage in a series of precurrent behaviors (e.g., covert intraverbal behavior, grouping stimuli, visual imagining) to solve problems such as answering questions (Palmer, 1991; Skinner, 1953). We examined the effects of one problem solving strategy--visual imagining--on increasing…
Modelling eye movements in a categorical search task
Zelinsky, Gregory J.; Adeli, Hossein; Peng, Yifan; Samaras, Dimitris
2013-01-01
We introduce a model of eye movements during categorical search, the task of finding and recognizing categorically defined targets. It extends a previous model of eye movements during search (target acquisition model, TAM) by using distances from a support vector machine classification boundary to create probability maps indicating pixel-by-pixel evidence for the target category in search images. Other additions include functionality enabling target-absent searches, and a fixation-based blurring of the search images now based on a mapping between visual and collicular space. We tested this model on images from a previously conducted variable set-size (6/13/20) present/absent search experiment where participants searched for categorically defined teddy bear targets among random category distractors. The model not only captured target-present/absent set-size effects, but also accurately predicted for all conditions the numbers of fixations made prior to search judgements. It also predicted the percentages of first eye movements during search landing on targets, a conservative measure of search guidance. Effects of set size on false negative and false positive errors were also captured, but error rates in general were overestimated. We conclude that visual features discriminating a target category from non-targets can be learned and used to guide eye movements during categorical search. PMID:24018720
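The step of turning classifier outputs into a fixation-priority map can be sketched as follows: signed distances from an SVM boundary over a patch grid are squashed into a normalized map. The logistic squashing and random patch features are illustrative assumptions, not TAM's actual mapping.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Illustrative sketch: convert SVM decision values over image patches into a
# fixation-priority map (not the TAM implementation itself).
rng = np.random.default_rng(3)
X_train = rng.normal(size=(200, 50))           # patch features (stand-in values)
y_train = rng.integers(0, 2, 200)              # 1 = target-category patch
svm = LinearSVC(C=1.0).fit(X_train, y_train)

patch_feats = rng.normal(size=(24 * 24, 50))   # features for a 24x24 patch grid
scores = svm.decision_function(patch_feats)    # signed distance from the boundary
prob_map = 1.0 / (1.0 + np.exp(-scores))       # squash to (0, 1)
prob_map = (prob_map / prob_map.sum()).reshape(24, 24)
next_fixation = np.unravel_index(prob_map.argmax(), prob_map.shape)
```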
Residual perception of biological motion in cortical blindness.
Ruffieux, Nicolas; Ramon, Meike; Lao, Junpeng; Colombo, Françoise; Stacchi, Lisa; Borruat, François-Xavier; Accolla, Ettore; Annoni, Jean-Marie; Caldara, Roberto
2016-12-01
From birth, the human visual system shows a remarkable sensitivity for perceiving biological motion. This visual ability relies on a distributed network of brain regions and can be preserved even after damage of high-level ventral visual areas. However, it remains unknown whether this critical biological skill can withstand the loss of vision following bilateral striate damage. To address this question, we tested the categorization of human and animal biological motion in BC, a rare case of cortical blindness after anoxia-induced bilateral striate damage. The severity of his impairment, encompassing various aspects of vision (i.e., color, shape, face, and object recognition) and causing blind-like behavior, contrasts with a residual ability to process motion. We presented BC with static or dynamic point-light displays (PLDs) of human or animal walkers. These stimuli were presented either individually, or in pairs in two alternative forced choice (2AFC) tasks. When confronted with individual PLDs, the patient was unable to categorize the stimuli, irrespective of whether they were static or dynamic. In the 2AFC task, BC exhibited appropriate eye movements towards diagnostic information, but performed at chance level with static PLDs, in stark contrast to his ability to efficiently categorize dynamic biological agents. This striking ability to categorize biological motion provided top-down information is important for at least two reasons. Firstly, it emphasizes the importance of assessing patients' (visual) abilities across a range of task constraints, which can reveal potential residual abilities that may in turn represent a key feature for patient rehabilitation. Finally, our findings reinforce the view that the neural network processing biological motion can efficiently operate despite severely impaired low-level vision, positing our natural predisposition for processing dynamicity in biological agents as a robust feature of human vision. Copyright © 2016 Elsevier Ltd. All rights reserved.
Emerging Object Representations in the Visual System Predict Reaction Times for Categorization
Ritchie, J. Brendan; Tovar, David A.; Carlson, Thomas A.
2015-01-01
Recognizing an object takes just a fraction of a second, less than the blink of an eye. Applying multivariate pattern analysis, or “brain decoding”, methods to magnetoencephalography (MEG) data has allowed researchers to characterize, in high temporal resolution, the emerging representation of object categories that underlie our capacity for rapid recognition. Shortly after stimulus onset, object exemplars cluster by category in a high-dimensional activation space in the brain. In this emerging activation space, the decodability of exemplar category varies over time, reflecting the brain’s transformation of visual inputs into coherent category representations. How do these emerging representations relate to categorization behavior? Recently it has been proposed that the distance of an exemplar representation from a categorical boundary in an activation space is critical for perceptual decision-making, and that reaction times should therefore correlate with distance from the boundary. The predictions of this distance hypothesis have been borne out in human inferior temporal cortex (IT), an area of the brain crucial for the representation of object categories. When viewed in the context of a time varying neural signal, the optimal time to “read out” category information is when category representations in the brain are most decodable. Here, we show that the distance from a decision boundary through activation space, as measured using MEG decoding methods, correlates with reaction times for visual categorization during the period of peak decodability. Our results suggest that the brain begins to read out information about exemplar category at the optimal time for use in choice behaviour, and support the hypothesis that the structure of the representation for objects in the visual system is partially constitutive of the decision process in recognition. PMID:26107634
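The logic of the distance-to-boundary analysis can be sketched with simulated data: decode category from trial patterns with a linear classifier, take each trial's distance from the decision boundary, and correlate it with reaction time. The simulated sensors and reaction times below are purely illustrative, not the authors' MEG pipeline.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Simulated sketch: trials whose patterns lie farther from the category
# boundary should be categorized faster (more negative correlation with RT).
rng = np.random.default_rng(4)
n_trials, n_sensors = 300, 64
y = rng.integers(0, 2, n_trials)                          # e.g., animate vs inanimate
X = rng.normal(size=(n_trials, n_sensors)) + y[:, None] * 0.8

clf = LinearDiscriminantAnalysis().fit(X, y)
dist = np.abs(clf.decision_function(X))                   # distance from the boundary
rt = 500 - 40 * dist + rng.normal(0, 30, n_trials)        # simulated RTs in ms
rho, p = spearmanr(dist, rt)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")           # expect a negative rho
```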
Accuracy and speed of material categorization in real-world images.
Sharan, Lavanya; Rosenholtz, Ruth; Adelson, Edward H
2014-08-13
It is easy to visually distinguish a ceramic knife from one made of steel, a leather jacket from one made of denim, and a plush toy from one made of plastic. Most studies of material appearance have focused on the estimation of specific material properties such as albedo or surface gloss, and as a consequence, almost nothing is known about how we recognize material categories like leather or plastic. We have studied judgments of high-level material categories with a diverse set of real-world photographs, and we have shown (Sharan, 2009) that observers can categorize materials reliably and quickly. Performance on our tasks cannot be explained by simple differences in color, surface shape, or texture. Nor can the results be explained by observers merely performing shape-based object recognition. Rather, we argue that fast and accurate material categorization is a distinct, basic ability of the visual system. © 2014 ARVO.
Accuracy and speed of material categorization in real-world images
Sharan, Lavanya; Rosenholtz, Ruth; Adelson, Edward H.
2014-01-01
It is easy to visually distinguish a ceramic knife from one made of steel, a leather jacket from one made of denim, and a plush toy from one made of plastic. Most studies of material appearance have focused on the estimation of specific material properties such as albedo or surface gloss, and as a consequence, almost nothing is known about how we recognize material categories like leather or plastic. We have studied judgments of high-level material categories with a diverse set of real-world photographs, and we have shown (Sharan, 2009) that observers can categorize materials reliably and quickly. Performance on our tasks cannot be explained by simple differences in color, surface shape, or texture. Nor can the results be explained by observers merely performing shape-based object recognition. Rather, we argue that fast and accurate material categorization is a distinct, basic ability of the visual system. PMID:25122216
Wysham, Nicholas G.; Miriovsky, Benjamin J.; Currow, David C.; Herndon, James E.; Samsa, Gregory P.; Wilcock, Andrew; Abernethy, Amy P.
2016-01-01
Context: Measurement of dyspnea is important for clinical care and research. Objectives: To characterize the relationship between the 0–10 Numerical Rating Scale (NRS) and four-level categorical Verbal Descriptor Scale (VDS) for dyspnea assessment. Methods: This was a substudy of a double-blind randomized controlled trial comparing palliative oxygen to room air for relief of refractory breathlessness in patients with life-limiting illness. Dyspnea was assessed with both a 0–10 NRS and a four-level categorical VDS over the one-week trial. NRS and VDS responses were analyzed in cross section and longitudinally. Relationships between NRS and VDS responses were portrayed using descriptive statistics and visual representations. Results: Two hundred twenty-six participants contributed responses. At baseline, mild and moderate levels of breathlessness were reported by 41.9% and 44.6% of participants, respectively. NRS scores demonstrated increasing mean and median levels for increasing VDS intensity, from a mean (SD) of 0.6 (±1.04) for the VDS none category to 8.2 (1.4) for the VDS severe category. The Spearman correlation coefficient was strong at 0.78 (P < 0.0001). Based on the distribution of NRS scores within VDS categories, we calculated test characteristics of two different cutpoint models. Both models yielded 75% correct translations from NRS to VDS; however, Model A was more sensitive for moderate or greater dyspnea, with fewer misses downcoded. Conclusion: There is strong correlation between VDS and NRS measures for dyspnea. Proposed practical cutpoints for the relationship between the dyspnea VDS and NRS are 0 for none, 1–4 for mild, 5–8 for moderate, and 9–10 for severe. PMID:26004401
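The proposed cutpoints translate directly into a simple scoring rule; a sketch follows (the function name is ours, the cutpoints are the abstract's).

```python
def nrs_to_vds(nrs: int) -> str:
    """Map a 0-10 dyspnea Numerical Rating Scale score onto the four-level
    Verbal Descriptor Scale using the cutpoints proposed in the abstract:
    0 = none, 1-4 = mild, 5-8 = moderate, 9-10 = severe."""
    if not 0 <= nrs <= 10:
        raise ValueError("NRS must be between 0 and 10")
    if nrs == 0:
        return "none"
    if nrs <= 4:
        return "mild"
    if nrs <= 8:
        return "moderate"
    return "severe"

assert nrs_to_vds(3) == "mild" and nrs_to_vds(9) == "severe"
```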
ERIC Educational Resources Information Center
Quinn, Paul C.; Schyns, Philippe G.; Goldstone, Robert L.
2006-01-01
The relation between perceptual organization and categorization processes in 3- and 4-month-olds was explored. The question was whether an invariant part abstracted during category learning could interfere with Gestalt organizational processes. A 2003 study by Quinn and Schyns had reported that an initial category familiarization experience in…
Specific-Token Effects in Screening Tasks: Possible Implications for Aviation Security
ERIC Educational Resources Information Center
Smith, J. David; Redford, Joshua S.; Washburn, David A.; Taglialatela, Lauren A.
2005-01-01
Screeners at airport security checkpoints perform an important categorization task in which they search for threat items in complex x-ray images. But little is known about how the processes of categorization stand up to visual complexity. The authors filled this research gap with screening tasks in which participants searched for members of target…
Aphasic Patients Exhibit a Reversal of Hemispheric Asymmetries in Categorical Color Discrimination
ERIC Educational Resources Information Center
Paluy, Yulia; Gilbert, Aubrey L.; Baldo, Juliana V.; Dronkers, Nina F.; Ivry, Richard B.
2011-01-01
Patients with left hemisphere (LH) or right hemisphere (RH) brain injury due to stroke were tested on a speeded, color discrimination task in which two factors were manipulated: (1) the categorical relationship between the target and the distracters and (2) the visual field in which the target was presented. Similar to controls, the RH patients…
Spot-checks to measure general hygiene practice.
Sonego, Ina L; Mosler, Hans-Joachim
2016-01-01
A variety of hygiene behaviors are fundamental to the prevention of diarrhea. We used spot-checks in a survey of 761 households in Burundi to examine whether something we could call general hygiene practice is responsible for more specific hygiene behaviors, ranging from handwashing to sweeping the floor. Using structural equation modeling, we showed that clusters of hygiene behavior, such as primary caregivers' cleanliness and household cleanliness, explained the spot-check findings well. Within our model, general hygiene practice as overall concept explained the more specific clusters of hygiene behavior well. Furthermore, the higher general hygiene practice, the more likely children were to be categorized healthy (r = 0.46). General hygiene practice was correlated with commitment to hygiene (r = 0.52), indicating a strong association to psychosocial determinants. The results show that different hygiene behaviors co-occur regularly. Using spot-checks, the general hygiene practice of a household can be rated quickly and easily.
Chen, Yi-Chuan; Spence, Charles
2018-04-30
We examined the time-courses and categorical specificity of the crossmodal semantic congruency effects elicited by naturalistic sounds and spoken words on the processing of visual pictures (Experiment 1) and printed words (Experiment 2). Auditory cues were presented at 7 different stimulus onset asynchronies (SOAs) with respect to the visual targets, and participants made speeded categorization judgments (living vs. nonliving). Three common effects were observed across 2 experiments: Both naturalistic sounds and spoken words induced a slowly emerging congruency effect when leading by 250 ms or more in the congruent compared with the incongruent condition, and a rapidly emerging inhibitory effect when leading by 250 ms or less in the incongruent condition as opposed to the noise condition. Only spoken words that did not match the visual targets elicited an additional inhibitory effect when leading by 100 ms or when presented simultaneously. Compared with nonlinguistic stimuli, the crossmodal congruency effects associated with linguistic stimuli occurred over a wider range of SOAs and occurred at a more specific level of the category hierarchy (i.e., the basic level) than was required by the task. A comprehensive framework is proposed to provide a dynamic view regarding how meaning is extracted during the processing of visual or auditory linguistic and nonlinguistic stimuli, therefore contributing to our understanding of multisensory semantic processing in humans. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Enhanced HMAX model with feedforward feature learning for multiclass categorization.
Li, Yinlin; Wu, Wei; Zhang, Bo; Li, Fengfu
2015-01-01
In recent years, the interdisciplinary research between neuroscience and computer vision has promoted the development in both fields. Many biologically inspired visual models are proposed, and among them, the Hierarchical Max-pooling model (HMAX) is a feedforward model mimicking the structures and functions of V1 to posterior inferotemporal (PIT) layer of the primate visual cortex, which could generate a series of position- and scale- invariant features. However, it could be improved with attention modulation and memory processing, which are two important properties of the primate visual cortex. Thus, in this paper, based on recent biological research on the primate visual cortex, we still mimic the first 100-150 ms of visual cognition to enhance the HMAX model, which mainly focuses on the unsupervised feedforward feature learning process. The main modifications are as follows: (1) To mimic the attention modulation mechanism of V1 layer, a bottom-up saliency map is computed in the S1 layer of the HMAX model, which can support the initial feature extraction for memory processing; (2) To mimic the learning, clustering and short-term memory to long-term memory conversion abilities of V2 and IT, an unsupervised iterative clustering method is used to learn clusters with multiscale middle level patches, which are taken as long-term memory; (3) Inspired by the multiple feature encoding mode of the primate visual cortex, information including color, orientation, and spatial position are encoded in different layers of the HMAX model progressively. By adding a softmax layer at the top of the model, multiclass categorization experiments can be conducted, and the results on Caltech101 show that the enhanced model with a smaller memory size exhibits higher accuracy than the original HMAX model, and could also achieve better accuracy than other unsupervised feature learning methods in multiclass categorization task.
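For orientation, the S1 (Gabor filtering) and C1 (local max pooling) front end that HMAX-style models build on can be sketched in a few lines; the filter parameters below are arbitrary illustrative values, and none of the paper's enhancements (saliency, clustering, color encoding) are included.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor(size=11, wavelength=5.0, theta=0.0, sigma=3.0, gamma=0.5):
    """Simple Gabor kernel, the S1 unit of HMAX-style models."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def c1_max_pool(s1, pool=8):
    """C1 stage: local max pooling over space, giving position tolerance."""
    h, w = s1.shape
    h, w = h - h % pool, w - w % pool
    return s1[:h, :w].reshape(h // pool, pool, w // pool, pool).max(axis=(1, 3))

image = np.random.default_rng(5).random((64, 64))
s1_maps = [np.abs(convolve2d(image, gabor(theta=t), mode="same"))
           for t in np.linspace(0, np.pi, 4, endpoint=False)]
c1_maps = [c1_max_pool(m) for m in s1_maps]        # 4 orientation channels, 8x8 each
```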
Mostaghimi, Arash; Wanat, Karolyn; Crotty, Bradley H; Rosenbach, Misha
2015-10-16
In response to a perceived erosion of medical dermatology, combined internal medicine and dermatology (med/derm) programs have been developed that aim to train dermatologists who take care of medically complex patients. Despite the investment in these programs, there are currently no data regarding the potential impact of these trainees on the dermatology workforce. Our objective was to determine the experiences, motivations, and future plans of residents in combined med/derm residency programs. We surveyed residents at all United States institutions with both categorical and combined training programs in spring of 2012. Respondents used visual analog scales to rate clinical interests, self-assessed competency, career plans, and challenges. The primary study outcomes were comfort in taking care of patients with complex disease, future practice plans, and experience during residency. Twenty-eight of 31 med/derm residents (87.5%) and 28 of 91 (31%) categorical residents responded (overall response rate 46%). No significant differences were seen in self-assessed dermatology competency, or comfort in performing inpatient consultations, cosmetic procedures, or prescribing systemic agents. A trend toward less comfort in general dermatology was seen among med/derm residents. Med/derm residents were more likely to indicate career preferences for performing inpatient consultation and taking care of medically complex patients. Categorical residents rated their programs and experiences more highly. Med/derm residents have stronger interests in serving medically complex patients. Categorical residents are more likely to have a positive experience during residency. Future work will be needed to ascertain career choices among graduates once data are available.
Transfer learning for visual categorization: a survey.
Shao, Ling; Zhu, Fan; Li, Xuelong
2015-05-01
Regular machine learning and data mining techniques study the training data for future inferences under a major assumption that the future data are within the same feature space or have the same distribution as the training data. However, due to the limited availability of human-labeled training data, training data that stay in the same feature space or have the same distribution as the future data cannot be guaranteed to be sufficient to avoid the over-fitting problem. In real-world applications, apart from data in the target domain, related data in a different domain can also be included to expand the availability of our prior knowledge about the target future data. Transfer learning addresses such cross-domain learning problems by extracting useful information from data in a related domain and transferring it for use in target tasks. In recent years, with transfer learning being applied to visual categorization, some typical problems, e.g., view divergence in action recognition tasks and concept drifting in image classification tasks, can be efficiently solved. In this paper, we survey state-of-the-art transfer learning algorithms in visual categorization applications, such as object recognition, image classification, and human action recognition.
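One of the simplest transfer strategies covered by such surveys is feature reuse: features learned on a source domain are kept fixed and only a lightweight classifier is trained on the small target dataset. The sketch below stands in for that idea with a random projection playing the role of a pretrained source-domain feature extractor (an assumption for illustration only).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Minimal sketch of feature-reuse transfer: source-domain features are frozen
# and only a light classifier is fit on the (small) target-domain dataset.
rng = np.random.default_rng(6)
projection = rng.normal(size=(32 * 32, 128))        # stand-in for pretrained features

def source_feature_extractor(images):
    """Stand-in for a model trained on a related source domain."""
    return images.reshape(len(images), -1) @ projection

target_images = rng.random((60, 32, 32))            # few labeled target examples
target_labels = rng.integers(0, 3, 60)
feats = source_feature_extractor(target_images)
clf = LogisticRegression(max_iter=1000).fit(feats, target_labels)
```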
Categorical Working Memory Representations are used in Delayed Estimation of Continuous Colors
Hardman, Kyle O; Vergauwe, Evie; Ricker, Timothy J
2016-01-01
In the last decade, major strides have been made in understanding visual working memory through mathematical modeling of color production responses. In the delayed color estimation task (Wilken & Ma, 2004), participants are given a set of colored squares to remember and a few seconds later asked to reproduce those colors by clicking on a color wheel. The degree of error in these responses is characterized with mathematical models that estimate working memory precision and the proportion of items remembered by participants. A standard mathematical model of color memory assumes that items maintained in memory are remembered through memory for precise details about the particular studied shade of color. We contend that this model is incomplete in its present form because no mechanism is provided for remembering the coarse category of a studied color. In the present work we remedy this omission and present a model of visual working memory that includes both continuous and categorical memory representations. In two experiments we show that our new model outperforms this standard modeling approach, which demonstrates that categorical representations should be accounted for by mathematical models of visual working memory. PMID:27797548
Pitch perception deficits in nonverbal learning disability.
Fernández-Prieto, I; Caprile, C; Tinoco-González, D; Ristol-Orriols, B; López-Sala, A; Póo-Argüelles, P; Pons, F; Navarra, J
2016-12-01
The nonverbal learning disability (NLD) is a neurological dysfunction that affects cognitive functions predominantly related to the right hemisphere such as spatial and abstract reasoning. Previous evidence in healthy adults suggests that acoustic pitch (i.e., the relative difference in frequency between sounds) is, under certain conditions, encoded in specific areas of the right hemisphere that also encode the spatial elevation of external objects (e.g., high vs. low position). Taking this evidence into account, we explored the perception of pitch in preadolescents and adolescents with NLD and in a group of healthy participants matched by age, gender, musical knowledge and handedness. Participants performed four speeded tests: a stimulus detection test and three perceptual categorization tests based on colour, spatial position and pitch. Results revealed that both groups were equally fast at detecting visual targets and categorizing visual stimuli according to their colour. In contrast, the NLD group showed slower responses than the control group when categorizing space (direction of a visual object) and pitch (direction of a change in sound frequency). This pattern of results suggests the presence of a subtle deficit at judging pitch in NLD along with the traditionally-described difficulties in spatial processing. Copyright © 2016. Published by Elsevier Ltd.
Hue distinctiveness overrides category in determining performance in multiple object tracking.
Sun, Mengdan; Zhang, Xuemin; Fan, Lingxia; Hu, Luming
2018-02-01
The visual distinctiveness between targets and distractors can significantly facilitate performance in multiple object tracking (MOT), in which color is a commonly used feature. However, the processing of color can be more than "visual." Color is continuous in chromaticity, yet it is commonly grouped into discrete categories (e.g., red, green). Evidence from color perception suggests that color categories may have a unique role in visual tasks independent of chromatic appearance. Previous MOT studies have not examined the effects of chromatic and categorical distinctiveness on tracking separately. The current study aimed to reveal how the chromatic (hue) and categorical distinctiveness of color between the targets and distractors affect tracking performance. In four experiments, we showed that tracking performance was largely facilitated by increasing hue distance between the target set and the distractor set, suggesting that perceptual grouping based on hue distinctiveness was formed to aid tracking. However, we found no categorical effect of color: tracking performance was not significantly different when the targets and distractors were from the same or different categories. We conclude that the chromatic distinctiveness of color overrides category in determining tracking performance, suggesting a dominant role of perceptual features in MOT.
Categorical working memory representations are used in delayed estimation of continuous colors.
Hardman, Kyle O; Vergauwe, Evie; Ricker, Timothy J
2017-01-01
In the last decade, major strides have been made in understanding visual working memory through mathematical modeling of color production responses. In the delayed color estimation task (Wilken & Ma, 2004), participants are given a set of colored squares to remember, and a few seconds later asked to reproduce those colors by clicking on a color wheel. The degree of error in these responses is characterized with mathematical models that estimate working memory precision and the proportion of items remembered by participants. A standard mathematical model of color memory assumes that items maintained in memory are remembered through memory for precise details about the particular studied shade of color. We contend that this model is incomplete in its present form because no mechanism is provided for remembering the coarse category of a studied color. In the present work, we remedy this omission and present a model of visual working memory that includes both continuous and categorical memory representations. In 2 experiments, we show that our new model outperforms this standard modeling approach, which demonstrates that categorical representations should be accounted for by mathematical models of visual working memory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
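A minimal sketch of a mixture model combining continuous and categorical memory components is given below, assuming that responses, studied colors, and category prototypes are expressed as angles in radians on the color wheel. It fits, by maximum likelihood, a mixture of a von Mises component centered on the studied color (continuous memory), a von Mises component centered on that color's category prototype (categorical memory), and uniform guessing. The published model is richer (e.g., in how categories and precisions are parameterized), so this is only illustrative.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

def neg_log_lik(params, resp, target, prototype):
    """Mixture of continuous memory, categorical memory, and guessing (all angles in radians)."""
    p_cont, p_cat, kappa = params
    p_guess = 1.0 - p_cont - p_cat
    if min(p_cont, p_cat, p_guess) < 0 or kappa <= 0:
        return np.inf
    like = (p_cont * vonmises.pdf(resp - target, kappa)
            + p_cat * vonmises.pdf(resp - prototype, kappa)
            + p_guess / (2 * np.pi))
    return -np.sum(np.log(like))

def fit_mixture(resp, target, prototype):
    """Return [p_continuous, p_categorical, kappa] fitted by maximum likelihood."""
    res = minimize(neg_log_lik, x0=[0.5, 0.3, 5.0],
                   args=(resp, target, prototype), method="Nelder-Mead")
    return res.x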
The Neural Basis of Contextual Influences on Face Categorization
Freeman, Jonathan B.; Ma, Yina; Barth, Maria; Young, Steven G.; Han, Shihui; Ambady, Nalini
2015-01-01
From only brief exposure to a face, individuals spontaneously categorize another's race. Recent behavioral evidence suggests that visual context may affect such categorizations. We used fMRI to examine the neural basis of contextual influences on the race categorization of faces. Participants categorized the race of faces that varied along a White-Asian morph continuum and were surrounded by American, neutral, or Chinese scene contexts. As expected, the context systematically influenced categorization responses and their efficiency (response times). Neuroimaging results indicated that the retrosplenial cortex (RSC) and orbitofrontal cortex (OFC) exhibited highly sensitive, graded responses to the compatibility of facial and contextual cues. These regions showed linearly increasing responses as a face became more White when in an American context, and linearly increasing responses as a face became more Asian when in a Chinese context. Further, RSC activity partially mediated the effect of this face-context compatibility on the efficiency of categorization responses. Together, the findings suggest a critical role of the RSC and OFC in driving contextual influences on face categorization, and highlight the impact of extraneous cues beyond the face in categorizing other people. PMID:24006403
Myneni, Sahiti; Cobb, Nathan K; Cohen, Trevor
2016-01-01
Analysis of user interactions in online communities could improve our understanding of health-related behaviors and inform the design of technological solutions that support behavior change. However, to achieve this we would need methods that provide granular perspective, yet are scalable. In this paper, we present a methodology for high-throughput semantic and network analysis of large social media datasets, combining semi-automated text categorization with social network analytics. We apply this method to derive content-specific network visualizations of 16,492 user interactions in an online community for smoking cessation. Performance of the categorization system was reasonable (average F-measure of 0.74, with system-rater reliability approaching rater-rater reliability). The resulting semantically specific network analysis of user interactions reveals content- and behavior-specific network topologies. Implications for socio-behavioral health and wellness platforms are also discussed.
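A sketch of the network side of such an analysis is shown below: each user interaction is first assigned a content category (here by a placeholder categorize function standing in for the semi-automated text classifier), and a separate directed, weighted interaction graph is then built per category so that content-specific topologies can be compared. Function and variable names are illustrative assumptions, not the authors' pipeline.

import networkx as nx

def build_content_networks(interactions, categorize):
    """interactions: iterable of (sender, receiver, text); categorize: text -> content label."""
    graphs = {}
    for sender, receiver, text in interactions:
        label = categorize(text)
        g = graphs.setdefault(label, nx.DiGraph())
        if g.has_edge(sender, receiver):
            g[sender][receiver]["weight"] += 1
        else:
            g.add_edge(sender, receiver, weight=1)
    return graphs

# Example use: compare topology across content categories
# for label, g in build_content_networks(data, my_classifier).items():
#     print(label, g.number_of_nodes(), g.number_of_edges(), nx.density(g))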
A Behavioral Study of Distraction by Vibrotactile Novelty
ERIC Educational Resources Information Center
Parmentier, Fabrice B. R.; Ljungberg, Jessica K.; Elsley, Jane V.; Lindkvist, Markus
2011-01-01
Past research has demonstrated that the occurrence of unexpected task-irrelevant changes in the auditory or visual sensory channels captured attention in an obligatory fashion, hindering behavioral performance in ongoing auditory or visual categorization tasks and generating orientation and re-orientation electrophysiological responses. We report…
Emotional voice and emotional body postures influence each other independently of visual awareness.
Stienen, Bernard M C; Tanaka, Akihiro; de Gelder, Beatrice
2011-01-01
Multisensory integration may occur independently of visual attention as previously shown with compound face-voice stimuli. We investigated in two experiments whether the perception of whole body expressions and the perception of voices influence each other when observers are not aware of seeing the bodily expression. In the first experiment participants categorized masked happy and angry bodily expressions while ignoring congruent or incongruent emotional voices. The onset between target and mask varied from -50 to +133 ms. Results show that the congruency between the emotion in the voice and the bodily expressions influences audiovisual perception independently of the visibility of the stimuli. In the second experiment participants categorized the emotional voices combined with masked bodily expressions as fearful or happy. This experiment showed that bodily expressions presented outside visual awareness still influence prosody perception. Our experiments show that audiovisual integration between bodily expressions and affective prosody can take place outside and independent of visual awareness.
An Interactive Approach to Learning and Teaching in Visual Arts Education
ERIC Educational Resources Information Center
Tomljenovic, Zlata
2015-01-01
The present research focuses on modernising the approach to learning and teaching the visual arts in teaching practice, as well as examining the performance of an interactive approach to learning and teaching in visual arts classes with the use of a combination of general and specific (visual arts) teaching methods. The study uses quantitative…
Fu, Qiufang; Liu, Yong-Jin; Dienes, Zoltan; Wu, Jianhui; Chen, Wenfeng; Fu, Xiaolan
2017-01-01
It remains controversial whether visual awareness is correlated with early activation indicated by VAN (visual awareness negativity), as the recurrent process hypothesis theory proposes, or with later activation indicated by P3 or LP (late positive), as suggested by global workspace theories. To address this issue, a backward masking task was adopted, in which participants were first asked to categorize natural scenes of color photographs and line-drawings and then to rate the clarity of their visual experience on a Perceptual Awareness Scale (PAS). The interstimulus interval between the scene and the mask was manipulated. The behavioral results showed that categorization accuracy increased with PAS ratings for both color photographs and line-drawings, with no difference in accuracy between the two types of images for each rating, indicating that the experience rating reflected visibility. Importantly, the event-related potential (ERP) results revealed that for correct trials, the early posterior N1 and anterior P2 components changed with the PAS ratings for color photographs, but did not vary with the PAS ratings for line-drawings, indicating that the N1 and P2 do not always correlate with subjective visual awareness. Moreover, for both types of images, the anterior N2 and posterior VAN changed with the PAS ratings in a linear way, while the LP changed with the PAS ratings in a non-linear way, suggesting that these components relate to different types of subjective awareness. The results reconcile the apparently contradictory predictions of different theories and help to resolve the current debate on neural correlates of visual awareness.
Fu, Qiufang; Liu, Yong-Jin; Dienes, Zoltan; Wu, Jianhui; Chen, Wenfeng; Fu, Xiaolan
2017-01-01
It remains controversial whether visual awareness is correlated with early activation indicated by VAN (visual awareness negativity), as the recurrent process hypothesis theory proposes, or with later activation indicated by P3 or LP (late positive), as suggested by global workspace theories. To address this issue, a backward masking task was adopted, in which participants were first asked to categorize natural scenes of color photographs and line-drawings and then to rate the clarity of their visual experience on a Perceptual Awareness Scale (PAS). The interstimulus interval between the scene and the mask was manipulated. The behavioral results showed that categorization accuracy increased with PAS ratings for both color photographs and line-drawings, with no difference in accuracy between the two types of images for each rating, indicating that the experience rating reflected visibility. Importantly, the event-related potential (ERP) results revealed that for correct trials, the early posterior N1 and anterior P2 components changed with the PAS ratings for color photographs, but did not vary with the PAS ratings for line-drawings, indicating that the N1 and P2 do not always correlate with subjective visual awareness. Moreover, for both types of images, the anterior N2 and posterior VAN changed with the PAS ratings in a linear way, while the LP changed with the PAS ratings in a non-linear way, suggesting that these components relate to different types of subjective awareness. The results reconcile the apparently contradictory predictions of different theories and help to resolve the current debate on neural correlates of visual awareness. PMID:28261141
Indication criteria for cataract extraction and gender differences in waiting time.
Smirthwaite, Goldina; Lundström, Mats; Albrecht, Susanne; Swahnberg, Katarina
2014-08-01
The purpose of this study was to investigate the national indication criteria for cataract extraction (NIKE) tool, a clinical tool for establishing levels of indication for cataract surgery, in relation to gender differences in waiting times for cataract extraction (CE). Data were collected by the Swedish National Cataract Register (NCR). Eye clinics report to the NCR voluntarily and on a regular basis (98% coverage). Comparisons of gender differences in waiting times were performed between NIKE-categorized and non-NIKE-categorized patients, as well as between different indication groups within the NIKE system. All calculations were performed in SPSS version 20. Multivariate analyses were carried out using logistic regression, and single-variable analyses were carried out with Student's t-test or the chi-square test, as appropriate. Gender, age, visual acuity and NIKE categorization were associated with waiting time. Female patients had a longer waiting time to CE than male patients, both within and outside the NIKE system. The gender difference in waiting time was somewhat larger among patients who had not been categorized by NIKE. In the non-NIKE-categorized group, women waited 0.20 months longer than men; in the NIKE-categorized group, women waited 0.18 months longer than men. It is reasonable to assume that prioritizing patients by means of NIKE helps to reduce the gender differences in waiting time. Gender differences in waiting time have decreased since NIKE was introduced, and there may be a variety of explanations for this. However, with the chosen study design, we could not distinguish between effects related to NIKE and those due to other factors that occurred during the study period. © 2013 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
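A minimal sketch of the kind of multivariate analysis described (logistic regression of waiting time on gender, age, visual acuity, and NIKE categorization) is given below, using Python/statsmodels rather than SPSS. The column names and the median split used to dichotomize waiting time are assumptions for illustration, not details taken from the study.

import statsmodels.formula.api as smf

def fit_waiting_time_model(df):
    """df columns (assumed): wait_months, gender, age, visual_acuity, nike_categorized."""
    df = df.copy()
    # Dichotomize waiting time at the median, then model it on the study's predictors.
    df["long_wait"] = (df["wait_months"] > df["wait_months"].median()).astype(int)
    model = smf.logit("long_wait ~ C(gender) + age + visual_acuity + C(nike_categorized)", data=df)
    return model.fit()

# result = fit_waiting_time_model(registry_df); print(result.summary())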
Distributed encoding of spatial and object categories in primate hippocampal microcircuits
Opris, Ioan; Santos, Lucas M.; Gerhardt, Greg A.; Song, Dong; Berger, Theodore W.; Hampson, Robert E.; Deadwyler, Sam A.
2015-01-01
The primate hippocampus plays critical roles in the encoding, representation, categorization and retrieval of cognitive information. Such cognitive abilities may use the transformational input-output properties of hippocampal laminar microcircuitry to generate spatial representations and to categorize features of objects, images, and their numeric characteristics. Four nonhuman primates were trained in a delayed-match-to-sample (DMS) task while multi-neuron activity was simultaneously recorded from the CA1 and CA3 hippocampal cell fields. The results show differential encoding of spatial location and categorization of images presented as relevant stimuli in the task. Individual hippocampal cells encoded visual stimuli only on specific types of trials in which retention of either the Sample image, or the spatial position of the Sample image, indicated at the beginning of the trial, was required. Consistent with such encoding, patterned microstimulation applied during Sample image presentation facilitated selection of either Sample image spatial locations or types of images during the Match phase of the task. These findings support the existence of specific codes for spatial and numeric object representations in the primate hippocampus which can be applied on differentially signaled trials. Moreover, the transformational properties of hippocampal microcircuitry, together with the patterned microstimulation, support the practical importance of this approach for the cognitive enhancement and rehabilitation needed for memory neuroprosthetics. PMID:26500473
Rapid natural scene categorization in the near absence of attention
Li, Fei Fei; VanRullen, Rufin; Koch, Christof; Perona, Pietro
2002-01-01
What can we see when we do not pay attention? It is well known that we can be “blind” even to major aspects of natural scenes when we attend elsewhere. The only tasks that do not need attention appear to be carried out in the early stages of the visual system. Contrary to this common belief, we report that subjects can rapidly detect animals or vehicles in briefly presented novel natural scenes while simultaneously performing another attentionally demanding task. By comparison, they are unable to discriminate large T's from L's, or bisected two-color disks from their mirror images under the same conditions. We conclude that some visual tasks associated with “high-level” cortical areas may proceed in the near absence of attention. PMID:12077298
The Convolutional Visual Network for Identification and Reconstruction of NOvA Events
DOE Office of Scientific and Technical Information (OSTI.GOV)
Psihas, Fernanda
In 2016 the NOvA experiment released results for the observation of oscillations in the νμ and νe channels as well as νe cross section measurements using neutrinos from Fermilab's NuMI beam. These and other measurements in progress rely on the accurate identification and reconstruction of the neutrino flavor and energy recorded by our detectors. This presentation describes the first application of convolutional neural network technology for event identification and reconstruction in particle detectors like NOvA. The Convolutional Visual Network (CVN) algorithm was developed for the identification, categorization, and reconstruction of NOvA events. It increased the selection efficiency of the νe appearance signal by 40%, and studies show a potential impact on the νμ disappearance analysis.
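The sketch below shows the general shape of a convolutional classifier for detector images, in PyTorch. It is not the CVN architecture (which uses paired detector views and a much deeper network); the single-channel input size, layer sizes, and class count are illustrative assumptions.

import torch
import torch.nn as nn

class TinyEventClassifier(nn.Module):
    """Toy CNN over single-view detector 'pixel maps' assumed to be 1 x 80 x 100."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 80x100 -> 40x50
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 40x50 -> 20x25
        )
        self.classifier = nn.Linear(32 * 20 * 25, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# logits = TinyEventClassifier()(torch.randn(4, 1, 80, 100))   # (batch, n_classes)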
Visualizing Trumps Vision in Training Attention.
Reinhart, Robert M G; McClenahan, Laura J; Woodman, Geoffrey F
2015-07-01
Mental imagery can have powerful training effects on behavior, but how this occurs is not well understood. Here we show that even a single instance of mental imagery can improve attentional selection of a target more effectively than actually practicing visual search. By recording subjects' brain activity, we found that these imagery-induced training effects were due to perceptual attention being more effectively focused on targets following imagined training. Next, we examined the downside of this potent training by changing the target after several trials of training attention with imagery and found that imagined search resulted in more potent interference than actual practice following these target changes. Finally, we found that proactive interference from task-irrelevant elements in the visual displays appears to underlie the superiority of imagined training relative to actual practice. Our findings demonstrate that visual attention mechanisms can be effectively trained to select target objects in the absence of visual input, and this results in more effective control of attention than practicing the task itself. © The Author(s) 2015.
Attentional Modulation in Visual Cortex Is Modified during Perceptual Learning
ERIC Educational Resources Information Center
Bartolucci, Marco; Smith, Andrew T.
2011-01-01
Practicing a visual task commonly results in improved performance. Often the improvement does not transfer well to a new retinal location, suggesting that it is mediated by changes occurring in early visual cortex, and indeed neuroimaging and neurophysiological studies both demonstrate that perceptual learning is associated with altered activity…
Mishler, Ada D; Neider, Mark B
2017-11-01
The redundant signals effect, a speed-up in response times with multiple targets compared to a single target in one display, is well-documented, with some evidence suggesting that it can occur even in conceptual processing when targets are presented bilaterally. The current study was designed to determine whether or not category-based redundant signals can speed up processing even without bilateral presentation. Toward that end, participants performed a go/no-go visual task in which they responded only to members of the target category (i.e., they responded only to numbers and did not respond to letters). Numbers and letters were presented along an imaginary vertical line in the center of the visual field. When the single signal trials contained a nontarget letter (Experiment 1), there was a significant redundant signals effect. The effect was not significant when the single-signal trials did not contain a nontarget letter (Experiments 2 and 3). The results indicate that, when targets are defined categorically and not presented bilaterally, the redundant signals effect may be an effect of reducing the presence of information that draws attention away from the target. This suggests that redundant signals may not speed up conceptual processing when interhemispheric presentation is not available. Copyright © 2017 Elsevier B.V. All rights reserved.
Making perceptual learning practical to improve visual functions.
Polat, Uri
2009-10-01
Task-specific improvement in performance after training is well established. The finding that learning is stimulus-specific and does not transfer well between different stimuli, between stimulus locations in the visual field, or between the two eyes has been used to support the notion that neurons or assemblies of neurons are modified at the earliest stage of cortical processing. However, the mechanism underlying perceptual learning remains a matter of ongoing debate. Nevertheless, generalization of a trained task to other functions is an important key, both for understanding the neural mechanisms and for the practical value of the training. This manuscript describes a structured perceptual learning method that was previously used for amblyopia and myopia, together with a novel technique and results applied to presbyopia. In general, subjects were trained for contrast detection of Gabor targets under lateral masking conditions. Training improved contrast sensitivity and diminished lateral suppression when it existed (amblyopia). The improvement transferred to unrelated functions such as visual acuity. The new results for presbyopia show substantial improvement of spatial and temporal contrast sensitivity, leading to improved processing speed of target detection as well as reaction time. Consequently, the subjects benefited by being able to eliminate the need for reading glasses. This transfer of functions indicates that the specificity of improvement in the trained task can be generalized by repetitive practice of target detection covering a sufficient range of spatial frequencies and orientations, leading to improvement in unrelated visual functions. Thus, perceptual learning can be a practical method to improve visual functions in people with impaired or blurred vision.
Methodology Series Module 8: Designing Questionnaires and Clinical Record Forms
Setia, Maninder Singh
2017-01-01
As researchers, we often collect data on a clinical record form or a questionnaire. It is an important part of study design. If the questionnaire is not well designed, the data collected will not be useful. In this section of the module, we have discussed some practical aspects of designing a questionnaire. It is useful to make a list of all the variables that will be assessed in the study before preparing the questionnaire. The researcher should review all the existing questionnaires. It may be efficient to use an existing standardized questionnaire or scale. Many of these scales are freely available and may be used with an appropriate reference. However, some may be under copyright protection and permissions may be required to use the same questionnaire. While designing their own questionnaire, researchers may use open- or close-ended questions. It is important to design the responses appropriately as the format of responses will influence the analysis. Sometimes, one can collect the same information in multiple ways - continuous or categorical response. Besides these, the researcher can also use visual analog scales or Likert's scale in the questionnaire. Some practical take-home points are: (1) Use specific language while framing the questions; (2) write detailed instructions in the questionnaire; (3) use mutually exclusive response categories; (4) use skip patterns; (5) avoid double-barreled questions; and (6) anchor the time period if required. PMID:28400630
Methodology Series Module 8: Designing Questionnaires and Clinical Record Forms.
Setia, Maninder Singh
2017-01-01
As researchers, we often collect data on a clinical record form or a questionnaire. It is an important part of study design. If the questionnaire is not well designed, the data collected will not be useful. In this section of the module, we have discussed some practical aspects of designing a questionnaire. It is useful to make a list of all the variables that will be assessed in the study before preparing the questionnaire. The researcher should review all the existing questionnaires. It may be efficient to use an existing standardized questionnaire or scale. Many of these scales are freely available and may be used with an appropriate reference. However, some may be under copyright protection and permissions may be required to use the same questionnaire. While designing their own questionnaire, researchers may use open- or close-ended questions. It is important to design the responses appropriately as the format of responses will influence the analysis. Sometimes, one can collect the same information in multiple ways - continuous or categorical response. Besides these, the researcher can also use visual analog scales or Likert's scale in the questionnaire. Some practical take-home points are: (1) Use specific language while framing the questions; (2) write detailed instructions in the questionnaire; (3) use mutually exclusive response categories; (4) use skip patterns; (5) avoid double-barreled questions; and (6) anchor the time period if required.
Categorical Perception of Chinese Characters by Simplified and Traditional Chinese Readers
ERIC Educational Resources Information Center
Yang, Ruoxiao; Wang, William Shi Yuan
2018-01-01
Recent research has shown that the visual complexity of orthographies across writing systems influences the development of orthographic representations. Simplified and traditional Chinese characters are usually regarded as the most visually complicated writing systems currently in use, with the traditional system showing a higher level of…
Representing and Inferring Visual Perceptual Skills in Dermatological Image Understanding
ERIC Educational Resources Information Center
Li, Rui
2013-01-01
Experts have a remarkable capability of locating, perceptually organizing, identifying, and categorizing objects in images specific to their domains of expertise. Eliciting and representing their visual strategies and some aspects of domain knowledge will benefit a wide range of studies and applications. For example, image understanding may be…
Parametric embedding for class visualization.
Iwata, Tomoharu; Saito, Kazumi; Ueda, Naonori; Stromsten, Sean; Griffiths, Thomas L; Tenenbaum, Joshua B
2007-09-01
We propose a new method, parametric embedding (PE), that embeds objects together with their class structure into a low-dimensional visualization space. PE takes as input a set of class-conditional probabilities for given data points and tries to preserve the structure in an embedding space by minimizing a sum of Kullback-Leibler divergences, under the assumption that samples are generated by a Gaussian mixture with equal covariances in the embedding space. PE has many potential uses depending on the source of the input data, providing insight into the classifier's behavior in supervised, semisupervised, and unsupervised settings. The PE algorithm has a computational advantage over conventional embedding methods based on pairwise object relations, since its complexity scales with the product of the number of objects and the number of classes. We demonstrate PE by visualizing supervised categorization of Web pages, semisupervised categorization of digits, and the relations of words and latent topics found by an unsupervised algorithm, latent Dirichlet allocation.
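A minimal NumPy sketch of the PE objective is given below: given a matrix P of per-object class probabilities (rows summing to 1), object coordinates Y and class centers Phi are found by gradient descent on the sum of Kullback-Leibler divergences between P and the posteriors Q of an equal-covariance, equal-prior Gaussian mixture in the embedding space. The fixed learning rate, random initialization, and iteration count are simplifying assumptions relative to the published algorithm.

import numpy as np

def parametric_embedding(P, dim=2, lr=0.1, n_iter=500, seed=0):
    """P: (N, C) class-probability matrix. Returns object coordinates Y and class centers Phi."""
    rng = np.random.RandomState(seed)
    N, C = P.shape
    Y = rng.randn(N, dim) * 0.01
    Phi = rng.randn(C, dim) * 0.01
    for _ in range(n_iter):
        D = ((Y[:, None, :] - Phi[None, :, :]) ** 2).sum(-1)    # squared distances (N, C)
        logq = -0.5 * D
        logq -= logq.max(axis=1, keepdims=True)
        Q = np.exp(logq)
        Q /= Q.sum(axis=1, keepdims=True)                       # mixture posteriors in embedding space
        R = P - Q
        grad_Y = (R[:, :, None] * (Y[:, None, :] - Phi[None, :, :])).sum(axis=1)
        grad_Phi = (R[:, :, None] * (Phi[None, :, :] - Y[:, None, :])).sum(axis=0)
        Y -= lr * grad_Y                                        # descend the summed KL divergence
        Phi -= lr * grad_Phi
    return Y, Phi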
The perception of visual emotion: comparing different measures of awareness.
Szczepanowski, Remigiusz; Traczyk, Jakub; Wierzchoń, Michał; Cleeremans, Axel
2013-03-01
Here, we explore the sensitivity of different awareness scales in revealing conscious reports on visual emotion perception. Participants were exposed to a backward masking task involving fearful faces and asked to rate their conscious awareness in perceiving emotion in facial expression using three different subjective measures: confidence ratings (CRs), with the conventional taxonomy of certainty, the perceptual awareness scale (PAS), through which participants categorize "raw" visual experience, and post-decision wagering (PDW), which involves economic categorization. Our results show that the CR measure was the most exhaustive and the most graded. In contrast, the PAS and PDW measures suggested instead that consciousness of emotional stimuli is dichotomous. Possible explanations of the inconsistency were discussed. Finally, our results also indicate that PDW biases awareness ratings by enhancing first-order accuracy of emotion perception. This effect was possibly a result of higher motivation induced by monetary incentives. Copyright © 2012 Elsevier Inc. All rights reserved.
Basic level category structure emerges gradually across human ventral visual cortex.
Iordan, Marius Cătălin; Greene, Michelle R; Beck, Diane M; Fei-Fei, Li
2015-07-01
Objects can be simultaneously categorized at multiple levels of specificity ranging from very broad ("natural object") to very distinct ("Mr. Woof"), with a mid-level of generality (basic level: "dog") often providing the most cognitively useful distinction between categories. It is unknown, however, how this hierarchical representation is achieved in the brain. Using multivoxel pattern analyses, we examined how well each taxonomic level (superordinate, basic, and subordinate) of real-world object categories is represented across occipitotemporal cortex. We found that, although in early visual cortex objects are best represented at the subordinate level (an effect mostly driven by low-level feature overlap between objects in the same category), this advantage diminishes compared to the basic level as we move up the visual hierarchy, disappearing in object-selective regions of occipitotemporal cortex. This pattern stems from a combined increase in within-category similarity (category cohesion) and between-category dissimilarity (category distinctiveness) of neural activity patterns at the basic level, relative to both subordinate and superordinate levels, suggesting that successive visual areas may be optimizing basic level representations.
Medical examiner/death investigator training requirements in state medical examiner systems.
Prahlow, J A; Lantz, P E
1995-01-01
Comprehensive and properly performed investigation of suspicious, unusual, unnatural, and various natural deaths is necessary to maintain the health, safety, and well-being of society as a whole. Adequate investigation requires the combined efforts and cooperation of law-enforcement and other public-service agencies, medical professionals, and those within the forensic community. As such, the "death investigator" plays a crucial role in the investigation process. These front-line investigators, whether they be coroners, medical examiners, physicians, other medical professionals, or lay-people, are required to make important decisions which have far-reaching consequences on how death investigation cases proceed. Death investigation practices vary greatly among medico-legal jurisdictions. A recent publication has categorized state death investigation systems by type of system. In an attempt to better delineate death investigation practices with specific regard to investigators' training and continuing education requirements, we surveyed the 20 systems categorized as state medical examiner systems and the five states with combined state medical examiner and county coroner/medical examiner systems. We present our findings and make recommendations which address the attributes and deficiencies of current death investigation practices.
Clark, Thomas R
2008-09-01
To understand the importance of services provided by consultant pharmacists and to assess perception of their performance of services. Cross-sectional; nursing facility team. Random e-mail survey of consultant pharmacists; phone survey of team members. 233 consultant pharmacists (practicing in a nursing facility); 540 team members (practicing in a nursing facility, interacting with ≥1 consultant pharmacist): 120 medical directors, 210 directors of nursing, 210 administrators. Consultant pharmacists, directors of nursing, medical directors, and administrators rated the importance/performance of 21 services. The gap between the teams' ratings of importance and consultant pharmacists' performance was assessed to categorize services. Importance/performance were ranked on a five-point scale, and mean scores were used for gap analysis to cluster services into four categories. For the combined group, six services were categorized as "Keep It Up" (important, good performance), with consensus across the individual groups except for a discrepancy with medical directors on one service. Six services each were categorized as "Improve" (important, large gap) and "Improve Second" (lower importance, large gap), with varied responses by individual groups. Three different services were categorized as "Don't Worry," with consensus within individual groups. Consensus from all groups found 5 of 21 services are important and performed well by consultant pharmacists, indicating that performance of these services should be maintained. For three services, consultant pharmacists do not need to worry about their performance. Thirteen services require improvement in consultant pharmacists' performance; the groups differ on the extent of improvement needed. Results can serve as benchmark comparisons with results obtained by consultant pharmacists in their own facilities.
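A sketch of the quadrant-style gap analysis described above is given below; the mean-split thresholds on importance and on the importance-performance gap are assumptions for illustration, since the abstract does not state the exact cut points used.

import numpy as np

def gap_analysis(importance, performance, labels):
    """Classify services into four quadrants from mean importance and the importance-performance gap."""
    importance = np.asarray(importance, float)
    performance = np.asarray(performance, float)
    gap = importance - performance
    hi_importance = importance >= importance.mean()
    hi_gap = gap >= gap.mean()
    quadrant = {}
    for name, imp, g in zip(labels, hi_importance, hi_gap):
        if imp and not g:
            quadrant[name] = "Keep It Up"        # important, small gap (good performance)
        elif imp and g:
            quadrant[name] = "Improve"           # important, large gap
        elif not imp and g:
            quadrant[name] = "Improve Second"    # lower importance, large gap
        else:
            quadrant[name] = "Don't Worry"       # lower importance, small gap
    return quadrant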
Fuller, Alison
2010-01-01
Sure Start has been a flagship policy for the UK Labour Government since 1998. Its aim was to improve the life chances of children under five years of age who live in areas of socio-economic disadvantage by means of multi-agency, multidisciplinary Sure Start Local Programmes (SSLPs). Speech and language therapists have played a key part in many SSLPs, and have had the opportunity to extend their roles. Despite the scrutiny paid to Sure Start, there has been no comprehensive analysis of speech and language therapists' contribution to date. Studies have focused on individual programmes or small samples: there has been no attempt to collate the full range of practice. As Sure Start evolved and Children's Centres emerged, it became vital to learn from the Sure Start experience and inform the mainstreaming of practice, before the window of opportunity closed. The survey aims were, firstly, to identify the range of practice amongst speech and language therapists working in SSLPs, highlighting new practice, and, secondly, to categorize the practices according to the tiered model of UK health and social services of the Royal College of Speech and Language Therapists (RCSLT 2006). An online mixed-method, semi-structured survey was designed to elicit primarily quantitative and categorical data. A total of 501 Sure Start Local Programmes were invited to take part. A total of 128 speech and language therapists responded, giving a response rate of 26%. A descriptive analysis of the response data was undertaken. A total of 103 respondents (80%) reported maintaining a clinical role as well as extending their roles to include preventative services. Of those 103 respondents, 69% were able to see referred children at a younger average age and 80% saw them more quickly than before Sure Start. A wide variety of preventative practice was identified. A widening of access to speech and language therapy was reported in terms of venues used and hours offered. Respondents reported on their use of evaluation or outcome measures, which was at a higher rate for new practice than for established practice. A total of 121 respondents (95%) reported at least one example of new practice; 103 (80%) reported at least one use of evaluation or outcome measures. The tiered model of UK health and social services provided an effective way of categorizing practice. A categorized record of Sure Start speech and language therapy practice is presented that may contribute to establishing a broad curriculum of practice for speech and language therapists in the early years. The effectiveness of the practices is not investigated: suggestions are made for further research to develop the evidence base. © 2010 Royal College of Speech & Language Therapists.
Chokron, Sylvie; Dutton, Gordon N.
2016-01-01
Cerebral visual impairment (CVI) has become the primary cause of visual impairment and blindness in children in industrialized countries. Its prevalence has increased sharply, due to increased survival rates of children who sustain severe neurological conditions during the perinatal period. Improved diagnosis has probably contributed to this increase. As in adults, the nature and severity of CVI in children relate to the cause, location and extent of damage to the brain. In the present paper, we define CVI and how this impacts on visual function. We then define developmental coordination disorder (DCD) and discuss the link between CVI and DCD. The neuroanatomical correlates and aetiologies of DCD are also presented in relationship with CVI as well as the consequences of perinatal asphyxia (PA) and preterm birth on the occurrence and nature of DCD and CVI. This paper underlines why there are both clinical and theoretical reasons to disentangle CVI and DCD, and to categorize the features with more precision. In order to offer the most appropriate rehabilitation, we propose a systematic and rapid evaluation of visual function in at-risk children who have survived preterm birth or PA whether or not they have been diagnosed with cerebral palsy or DCD. PMID:27757087
Categorical clustering of the neural representation of color.
Brouwer, Gijs Joost; Heeger, David J
2013-09-25
Cortical activity was measured with functional magnetic resonance imaging (fMRI) while human subjects viewed 12 stimulus colors and performed either a color-naming or diverted attention task. A forward model was used to extract lower dimensional neural color spaces from the high-dimensional fMRI responses. The neural color spaces in two visual areas, human ventral V4 (V4v) and VO1, exhibited clustering (greater similarity between activity patterns evoked by stimulus colors within a perceptual category, compared to between-category colors) for the color-naming task, but not for the diverted attention task. Response amplitudes and signal-to-noise ratios were higher in most visual cortical areas for color naming compared to diverted attention. But only in V4v and VO1 did the cortical representation of color change to a categorical color space. A model is presented that induces such a categorical representation by changing the response gains of subpopulations of color-selective neurons.
Behavioral demand modulates object category representation in the inferior temporal cortex
Emadi, Nazli
2014-01-01
Visual object categorization is a critical task in our daily life. Many studies have explored category representation in the inferior temporal (IT) cortex at the level of single neurons and population. However, it is not clear how behavioral demands modulate this category representation. Here, we recorded from the IT single neurons in monkeys performing two different tasks with identical visual stimuli: passive fixation and body/object categorization. We found that category selectivity of the IT neurons was improved in the categorization compared with the passive task where reward was not contingent on image category. The category improvement was the result of larger rate enhancement for the preferred category and smaller response variability for both preferred and nonpreferred categories. These specific modulations in the responses of IT category neurons enhanced signal-to-noise ratio of the neural responses to discriminate better between the preferred and nonpreferred categories. Our results provide new insight into the adaptable category representation in the IT cortex, which depends on behavioral demands. PMID:25080572
Visualized analysis of mixed numeric and categorical data via extended self-organizing map.
Hsu, Chung-Chian; Lin, Shu-Han
2012-01-01
Many real-world datasets are of mixed type, having both numeric and categorical attributes. Even though difficult, analyzing mixed-type datasets is important. In this paper, we propose an extended self-organizing map (SOM), called MixSOM, which utilizes a distance-hierarchy data structure to facilitate the handling of numeric and categorical values in a direct, unified manner. Moreover, the extended model regularizes the prototype distance between neighboring neurons in proportion to their map distance so that the structure of the clusters can be portrayed better on the map. Extensive experiments on several synthetic and real-world datasets are conducted to demonstrate the capability of the model and to compare MixSOM with several existing models, including Kohonen's SOM, the generalized SOM and visualization-induced SOM. The results show that MixSOM is superior to the other models in reflecting the structure of the mixed-type data and facilitates further analysis of the data, such as exploration at various levels of granularity.
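The sketch below illustrates the general idea of a SOM over mixed-type data: numeric attributes are compared with Euclidean distance and categorical attributes with a simple mismatch count, and the best-matching unit's neighborhood is updated on each presentation. This is a simplified stand-in, not MixSOM itself: the distance hierarchy and the regularization of prototype distances between neighboring neurons are not implemented, and all parameters are illustrative.

import numpy as np

def mixed_distance(x_num, x_cat, w_num, w_cat):
    """Euclidean distance on numeric parts plus mismatch count on categorical parts."""
    return np.sqrt(((x_num - w_num) ** 2).sum()) + (x_cat != w_cat).sum()

def train_mixed_som(X_num, X_cat, grid=(5, 5), n_iter=2000, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.RandomState(seed)
    rows, cols = grid
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    W_num = X_num[rng.choice(len(X_num), rows * cols)].astype(float)   # numeric prototypes
    W_cat = X_cat[rng.choice(len(X_cat), rows * cols)].copy()          # categorical prototypes
    for t in range(n_iter):
        i = rng.randint(len(X_num))
        d = np.array([mixed_distance(X_num[i], X_cat[i], W_num[k], W_cat[k])
                      for k in range(rows * cols)])
        bmu = d.argmin()                                               # best-matching unit
        lr = lr0 * np.exp(-t / n_iter)
        sigma = sigma0 * np.exp(-t / n_iter)
        h = np.exp(-((coords - coords[bmu]) ** 2).sum(1) / (2 * sigma ** 2))
        W_num += (lr * h)[:, None] * (X_num[i] - W_num)                # numeric update
        flip = rng.rand(rows * cols) < lr * h                          # stochastic categorical update
        W_cat[flip] = X_cat[i]
    return W_num, W_cat, coords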
Zhong, Weifang; Li, You; Huang, Yulan; Li, He; Mo, Lei
2018-01-01
This study investigated whether and how a person's varied lexical categories corresponding to different discriminatory characteristics of the same colors affect his or her perception of those colors. In three experiments, Chinese participants were primed to categorize four graduated colors (dark green, light green, light blue, and dark blue) into green and blue; into light colors and dark colors; or into dark green, light green, light blue, and dark blue. The participants were then required to complete a visual search task. Reaction times in the visual search task indicated that different lateralized categorical perceptions (CPs) of color corresponded to the various priming conditions. These results suggest that all of the lexical categories corresponding to different discriminatory characteristics of the same colors can influence people's perception of colors, and that color perception can be influenced differently by distinct types of lexical categories depending on the context. Copyright © 2017 Cognitive Science Society, Inc.
Categorical Clustering of the Neural Representation of Color
Brouwer, Gijs Joost; Heeger, David J.
2013-01-01
Cortical activity was measured with functional magnetic resonance imaging (fMRI) while human subjects viewed 12 stimulus colors and performed either a color-naming or diverted attention task. A forward model was used to extract lower dimensional neural color spaces from the high-dimensional fMRI responses. The neural color spaces in two visual areas, human ventral V4 (V4v) and VO1, exhibited clustering (greater similarity between activity patterns evoked by stimulus colors within a perceptual category, compared to between-category colors) for the color-naming task, but not for the diverted attention task. Response amplitudes and signal-to-noise ratios were higher in most visual cortical areas for color naming compared to diverted attention. But only in V4v and VO1 did the cortical representation of color change to a categorical color space. A model is presented that induces such a categorical representation by changing the response gains of subpopulations of color-selective neurons. PMID:24068814
Nagai, Takehiro; Matsushima, Toshiki; Koida, Kowa; Tani, Yusuke; Kitazaki, Michiteru; Nakauchi, Shigeki
2015-10-01
Humans can visually recognize material categories of objects, such as glass, stone, and plastic, easily. However, little is known about the kinds of surface quality features that contribute to such material class recognition. In this paper, we examine the relationship between perceptual surface features and material category discrimination performance for pictures of materials, focusing on temporal aspects, including reaction time and effects of stimulus duration. The stimuli were pictures of objects with an identical shape but made of different materials that could be categorized into seven classes (glass, plastic, metal, stone, wood, leather, and fabric). In a pre-experiment, observers rated the pictures on nine surface features, including visual (e.g., glossiness and transparency) and non-visual features (e.g., heaviness and warmness), on a 7-point scale. In the main experiments, observers judged whether two simultaneously presented pictures were classified as the same or different material category. Reaction times and effects of stimulus duration were measured. The results showed that visual feature ratings were correlated with material discrimination performance for short reaction times or short stimulus durations, while non-visual feature ratings were correlated only with performance for long reaction times or long stimulus durations. These results suggest that the mechanisms underlying visual and non-visual feature processing may differ in terms of processing time, although the cause is unclear. Visual surface features may mainly contribute to material recognition in daily life, while non-visual features may contribute only weakly, if at all. Copyright © 2014 Elsevier Ltd. All rights reserved.
Matsumoto, Narihisa; Eldridge, Mark A G; Saunders, Richard C; Reoli, Rachel; Richmond, Barry J
2016-01-06
In primates, visual recognition of complex objects depends on the inferior temporal lobe. By extension, categorizing visual stimuli based on similarity ought to depend on the integrity of the same area. We tested three monkeys before and after bilateral anterior inferior temporal cortex (area TE) removal. Although mildly impaired after the removals, they retained the ability to assign stimuli to previously learned categories, e.g., cats versus dogs, and human versus monkey faces, even with trial-unique exemplars. After the TE removals, they learned in one session to classify members from a new pair of categories, cars versus trucks, as quickly as they had learned the cats versus dogs before the removals. As with the dogs and cats, they generalized across trial-unique exemplars of cars and trucks. However, as seen in earlier studies, these monkeys with TE removals had difficulty learning to discriminate between two simple black and white stimuli. These results raise the possibility that TE is needed for memory of simple conjunctions of basic features, but that it plays only a small role in generalizing overall configural similarity across a large set of stimuli, such as would be needed for perceptual categorical assignment. The process of seeing and recognizing objects is attributed to a set of sequentially connected brain regions stretching forward from the primary visual cortex through the temporal lobe to the anterior inferior temporal cortex, a region designated area TE. Area TE is considered the final stage for recognizing complex visual objects, e.g., faces. It has been assumed, but not tested directly, that this area would be critical for visual generalization, i.e., the ability to place objects such as cats and dogs into their correct categories. Here, we demonstrate that monkeys rapidly and seemingly effortlessly categorize large sets of complex images (cats vs dogs, cars vs trucks), surprisingly, even after removal of area TE, leaving a puzzle about how this generalization is done. Copyright © 2016 the authors 0270-6474/16/360043-11$15.00/0.
de la Rosa, Stephan; Ekramnia, Mina; Bülthoff, Heinrich H.
2016-01-01
The ability to discriminate between different actions is essential for action recognition and social interactions. Surprisingly previous research has often probed action recognition mechanisms with tasks that did not require participants to discriminate between actions, e.g., left-right direction discrimination tasks. It is not known to what degree visual processes in direction discrimination tasks are also involved in the discrimination of actions, e.g., when telling apart a handshake from a high-five. Here, we examined whether action discrimination is influenced by movement direction and whether direction discrimination depends on the type of action. We used an action adaptation paradigm to target action and direction discrimination specific visual processes. In separate conditions participants visually adapted to forward and backward moving handshake and high-five actions. Participants subsequently categorized either the action or the movement direction of an ambiguous action. The results showed that direction discrimination adaptation effects were modulated by the type of action but action discrimination adaptation effects were unaffected by movement direction. These results suggest that action discrimination and direction categorization rely on partly different visual information. We propose that action discrimination tasks should be considered for the exploration of visual action recognition mechanisms. PMID:26941633
The taste-visual cross-modal Stroop effect: An event-related brain potential study.
Xiao, X; Dupuis-Roy, N; Yang, X L; Qiu, J F; Zhang, Q L
2014-03-28
Event-related potentials (ERPs) were recorded to explore, for the first time, the electrophysiological correlates of the taste-visual cross-modal Stroop effect. Eighteen healthy participants were presented with a taste stimulus and a food image, and asked to categorize the image as "sweet" or "sour" by pressing the relevant button as quickly as possible. Accurate categorization of the image was faster when it was presented with a congruent taste stimulus (e.g., sour taste/image of lemon) than with an incongruent one (e.g., sour taste/image of ice cream). ERP analyses revealed a negative difference component (ND430-620) between 430 and 620ms in the taste-visual cross-modal Stroop interference. Dipole source analysis of the difference wave (incongruent minus congruent) indicated that two generators localized in the prefrontal cortex and the parahippocampal gyrus contributed to this taste-visual cross-modal Stroop effect. This result suggests that the prefrontal cortex is associated with the process of conflict control in the taste-visual cross-modal Stroop effect. Also, we speculate that the parahippocampal gyrus is associated with the process of discordant information in the taste-visual cross-modal Stroop effect. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
Idelevich, Evgeny A; Becker, Karsten; Schmitz, Janne; Knaack, Dennis; Peters, Georg; Köck, Robin
2016-01-01
Results of disk diffusion antimicrobial susceptibility testing depend on individual visual reading of inhibition zone diameters. Therefore, automated reading using camera systems might represent a useful tool for standardization. In this study, the ADAGIO automated system (Bio-Rad) was evaluated for reading disk diffusion tests of fastidious bacteria. 144 clinical isolates (68 β-haemolytic streptococci, 28 Streptococcus pneumoniae, 18 viridans group streptococci, 13 Haemophilus influenzae, 7 Moraxella catarrhalis, and 10 Campylobacter jejuni) were tested on Mueller-Hinton agar supplemented with 5% defibrinated horse blood and 20 mg/L β-NAD (MH-F, Oxoid) according to EUCAST. Plates were read manually with a ruler and automatically using the ADAGIO system. Inhibition zone diameters indicated by the automated system were visually controlled and adjusted, if necessary. Among 1548 isolate-antibiotic combinations, comparison of automated vs. manual reading yielded categorical agreement (CA) without visual adjustment of the automatically determined zone diameters in 81.4%. In 20% (309 of 1548) of tests it was deemed necessary to adjust the automatically determined zone diameter after visual control. After adjustment, CA was 94.8%; very major errors (false susceptible interpretation), major errors (false resistant interpretation) and minor errors (false categorization involving an intermediate result), calculated according to the ISO 20776-2 guideline, accounted for 13.7% (13 of 95 resistant results), 3.3% (47 of 1424 susceptible results) and 1.4% (21 of 1548 total results), respectively, compared to manual reading. The ADAGIO system allowed for automated reading of disk diffusion testing in fastidious bacteria and, after visual validation of the automated results, yielded good categorical agreement with manual reading.
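For reference, the sketch below computes categorical agreement and the three error rates from paired category calls, with denominators chosen to match the abstract (very major errors per reference-resistant result, major errors per reference-susceptible result, minor errors per total tests); the full ISO 20776-2 definitions contain further detail not captured here.

def agreement_stats(reference, test):
    """reference/test: equal-length lists of 'S', 'I', or 'R' calls (e.g., manual vs. automated)."""
    n = len(reference)
    n_res = sum(r == "R" for r in reference)
    n_sus = sum(r == "S" for r in reference)
    ca = sum(r == t for r, t in zip(reference, test)) / n
    very_major = sum(r == "R" and t == "S" for r, t in zip(reference, test)) / n_res  # false susceptible
    major = sum(r == "S" and t == "R" for r, t in zip(reference, test)) / n_sus       # false resistant
    minor = sum(r != t and "I" in (r, t) for r, t in zip(reference, test)) / n        # involves intermediate
    return {"CA": ca, "very_major": very_major, "major": major, "minor": minor}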
Bullich, Santiago; Seibyl, John; Catafau, Ana M; Jovalekic, Aleksandar; Koglin, Norman; Barthel, Henryk; Sabri, Osama; De Santi, Susan
2017-01-01
Standardized uptake value ratios (SUVRs) calculated from cerebral cortical areas can be used to categorize 18F-florbetaben (FBB) PET scans by applying appropriate cutoffs. The objectives of this work were, first, to generate FBB SUVR cutoffs using visual assessment (VA) as the standard of truth (SoT) for a number of reference regions (RRs) (cerebellar gray matter (GCER), whole cerebellum (WCER), pons (PONS), and subcortical white matter (SWM)); second, to validate the FBB PET scan categorization performed with SUVR cutoffs against the categorization made by post-mortem histopathological confirmation of the presence of Aβ; and, finally, to evaluate the added value of SUVR cutoff categorization over VA. SUVR cutoffs were generated for each RR using FBB scans from 143 subjects who were visually assessed by 3 readers. SUVR cutoffs were validated in 78 end-of-life subjects using VA from 8 independent blinded readers (3 expert readers and 5 non-expert readers) and histopathological confirmation of the presence of neuritic beta-amyloid plaques as SoT. Finally, the numbers of correctly and incorrectly classified scans according to pathology results using VA and SUVR cutoffs were compared. The composite SUVR cutoffs generated were 1.43 (GCER), 0.96 (WCER), 0.78 (PONS) and 0.71 (SWM). Accuracy values were high and consistent across RRs (range 83-94% for histopathology, and 85-94% for VA). SUVR cutoffs performed similarly to VA but did not improve VA classification of FBB scans read either by expert readers or by the majority read, although they provided higher accuracy than some non-expert readers. The accurate scan classification obtained in this study supports the use of VA as SoT to generate site-specific SUVR cutoffs. For an elderly end-of-life population, VA and SUVR cutoff categorization perform similarly in classifying FBB scans as Aβ-positive or Aβ-negative. These results emphasize the additional contribution that SUVR cutoff classification may make compared with VA performed by non-expert readers.
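A minimal sketch of SUVR-based scan categorization is shown below, using the composite cutoffs reported in the abstract; the masks, preprocessing, and compositing of cortical regions are placeholders for an actual FBB quantification pipeline.

import numpy as np

# Composite SUVR cutoffs per reference region, as reported in the abstract.
CUTOFFS = {"GCER": 1.43, "WCER": 0.96, "PONS": 0.78, "SWM": 0.71}

def suvr(pet, cortical_mask, reference_mask):
    """Standardized uptake value ratio: mean cortical uptake / mean reference-region uptake."""
    return pet[cortical_mask].mean() / pet[reference_mask].mean()

def classify_scan(pet, cortical_mask, ref_masks):
    """Return an amyloid-positive/negative call per reference region (keys must match CUTOFFS)."""
    return {name: ("positive" if suvr(pet, cortical_mask, mask) > CUTOFFS[name] else "negative")
            for name, mask in ref_masks.items()}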
No Role for Motor Affordances in Visual Working Memory
ERIC Educational Resources Information Center
Pecher, Diane
2013-01-01
Motor affordances have been shown to play a role in visual object identification and categorization. The present study explored whether working memory is likewise supported by motor affordances. Use of motor affordances should be disrupted by motor interference, and this effect should be larger for objects that have motor affordances than for…
ERP Evidence of Hemispheric Independence in Visual Word Recognition
ERIC Educational Resources Information Center
Nemrodov, Dan; Harpaz, Yuval; Javitt, Daniel C.; Lavidor, Michal
2011-01-01
This study examined the capability of the left hemisphere (LH) and the right hemisphere (RH) to perform a visual recognition task independently as formulated by the Direct Access Model (Fernandino, Iacoboni, & Zaidel, 2007). Healthy native Hebrew speakers were asked to categorize nouns and non-words (created from nouns by transposing two middle…
Sofer, Imri; Crouzet, Sébastien M.; Serre, Thomas
2015-01-01
Observers can rapidly perform a variety of visual tasks such as categorizing a scene as open, as outdoor, or as a beach. Although we know that different tasks are typically associated with systematic differences in behavioral responses, to date, little is known about the underlying mechanisms. Here, we implemented a single integrated paradigm that links perceptual processes with categorization processes. Using a large image database of natural scenes, we trained machine-learning classifiers to derive quantitative measures of task-specific perceptual discriminability based on the distance between individual images and different categorization boundaries. We showed that the resulting discriminability measure accurately predicts variations in behavioral responses across categorization tasks and stimulus sets. We further used the model to design an experiment, which challenged previous interpretations of the so-called “superordinate advantage.” Overall, our study suggests that observed differences in behavioral responses across rapid categorization tasks reflect natural variations in perceptual discriminability. PMID:26335683
System for pathology categorization and retrieval in chest radiographs
NASA Astrophysics Data System (ADS)
Avni, Uri; Greenspan, Hayit; Konen, Eli; Sharon, Michal; Goldberger, Jacob
2011-03-01
In this paper we present an overview of a system we have been developing for the past several years for efficient image categorization and retrieval in large radiograph archives. The methodology is based on local patch representation of the image content, using a bag of visual words approach and similarity-based categorization with a kernel-based SVM classifier. We show an application to pathology-level categorization of chest x-ray data, the most popular examination in radiology. Our study deals with pathology detection and identification of individual pathologies including right and left pleural effusion, enlarged heart and cases of enlarged mediastinum. The input from a radiologist provided a global label for the entire image (healthy/pathology), and the categorization was conducted on the entire image, with no need for segmentation algorithms or any geometrical rules. An automatic diagnostic-level categorization, even on such an elementary level as healthy vs. pathological, provides a useful tool for radiologists on this popular and important examination. This is a first step towards similarity-based categorization, which has major clinical implications for computer-assisted diagnostics.
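The bag-of-visual-words pipeline with a kernel SVM described above can be sketched in three steps: sample local patches, quantize them against a learned codebook, and classify the resulting word histograms. The following Python sketch is a minimal, generic version under stated assumptions (random stand-in images, a small k-means codebook, an RBF-kernel SVM); it is not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def extract_patches(image, patch_size=16, stride=16):
    """Slice a 2-D grayscale image into flattened local patches."""
    patches = []
    h, w = image.shape
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size].ravel())
    return np.array(patches)

def bow_histogram(patches, codebook):
    """Assign each patch to its nearest visual word and return a normalized histogram."""
    words = codebook.predict(patches)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

# Hypothetical data: 20 random 64x64 "radiographs", half labeled pathological.
rng = np.random.default_rng(0)
images = [rng.random((64, 64)) for _ in range(20)]
labels = np.array([0] * 10 + [1] * 10)

# Build the visual-word codebook from all training patches.
codebook = KMeans(n_clusters=50, n_init=10, random_state=0)
codebook.fit(np.vstack([extract_patches(im) for im in images]))

# Represent each image as a bag-of-words histogram and train a kernel SVM.
features = np.array([bow_histogram(extract_patches(im), codebook) for im in images])
clf = SVC(kernel='rbf').fit(features, labels)
print(clf.predict(features[:2]))
```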
Rakison, David H; Yermolayeva, Yevdokiya
2010-11-01
In this article, we review the principal findings on infant categorization from the last 30 years. The review focuses on behaviorally based experiments with visual preference, habituation, object examining, sequential touching, and inductive generalization procedures. We propose that although this research has helped to elucidate the 'what' and 'when' of infant categorization, it has failed to clarify the mechanisms that underpin this behavior and the development of concepts. We outline a number of reasons for why the field has failed in this regard, most notably because of the context-specific nature of infant categorization and a lack of ground rules in interpreting data. We conclude by suggesting that one remedy for this issue is for infant categorization researchers to adopt more of an interdisciplinary approach by incorporating imaging and computational methods into their current methodological arsenal. WIREs Cogn Sci 2010 1 894-905 For further resources related to this article, please visit the WIREs website. Copyright © 2010 John Wiley & Sons, Ltd.
Schiller, Claire; Winters, Meghan; Hanson, Heather M; Ashe, Maureen C
2013-05-02
Stakeholders, as originally defined in theory, are groups or individuals who can affect or are affected by an issue. Stakeholders are an important source of information in health research, providing critical perspectives and new insights on the complex determinants of health. The intersection of built and social environments with older adult mobility is an area of research that is fundamentally interdisciplinary and would benefit from a better understanding of stakeholder perspectives. Although a rich body of literature surrounds stakeholder theory, a systematic process for identifying health stakeholders in practice does not exist. This paper presents a framework of stakeholders related to older adult mobility and the built environment, and further outlines a process for systematically identifying stakeholders that can be applied in other health contexts, with a particular emphasis on concept mapping research. Informed by gaps in the relevant literature, we developed a framework for identifying and categorizing health stakeholders. The framework was created through a novel iterative process of stakeholder identification and categorization. The development entailed a literature search to identify stakeholder categories, representation of identified stakeholders in a visual chart, and correspondence with expert informants to obtain practice-based insight. The three-step, iterative creation process progressed from identifying stakeholder categories, to identifying specific stakeholder groups and soliciting feedback from expert informants. The result was a stakeholder framework comprised of seven categories with detailed sub-groups. The main categories of stakeholders were: (1) the Public, (2) Policy makers and governments, (3) Research community, (4) Practitioners and professionals, (5) Health and social service providers, (6) Civil society organizations, and (7) Private business. Stakeholders related to older adult mobility and the built environment span many disciplines and realms of practice. Researchers studying this issue may use the detailed stakeholder framework process we present to identify participants for future projects. Health researchers pursuing stakeholder-based projects in other contexts are encouraged to incorporate this process of stakeholder identification and categorization to ensure systematic consideration of relevant perspectives in their work.
2013-01-01
Background Stakeholders, as originally defined in theory, are groups or individuals who can affect or are affected by an issue. Stakeholders are an important source of information in health research, providing critical perspectives and new insights on the complex determinants of health. The intersection of built and social environments with older adult mobility is an area of research that is fundamentally interdisciplinary and would benefit from a better understanding of stakeholder perspectives. Although a rich body of literature surrounds stakeholder theory, a systematic process for identifying health stakeholders in practice does not exist. This paper presents a framework of stakeholders related to older adult mobility and the built environment, and further outlines a process for systematically identifying stakeholders that can be applied in other health contexts, with a particular emphasis on concept mapping research. Methods Informed by gaps in the relevant literature, we developed a framework for identifying and categorizing health stakeholders. The framework was created through a novel iterative process of stakeholder identification and categorization. The development entailed a literature search to identify stakeholder categories, representation of identified stakeholders in a visual chart, and correspondence with expert informants to obtain practice-based insight. Results The three-step, iterative creation process progressed from identifying stakeholder categories, to identifying specific stakeholder groups and soliciting feedback from expert informants. The result was a stakeholder framework comprised of seven categories with detailed sub-groups. The main categories of stakeholders were: (1) the Public, (2) Policy makers and governments, (3) Research community, (4) Practitioners and professionals, (5) Health and social service providers, (6) Civil society organizations, and (7) Private business. Conclusions Stakeholders related to older adult mobility and the built environment span many disciplines and realms of practice. Researchers studying this issue may use the detailed stakeholder framework process we present to identify participants for future projects. Health researchers pursuing stakeholder-based projects in other contexts are encouraged to incorporate this process of stakeholder identification and categorization to ensure systematic consideration of relevant perspectives in their work. PMID:23639179
Heier, Jeffrey S; Bressler, Neil M; Avery, Robert L; Bakri, Sophie J; Boyer, David S; Brown, David M; Dugel, Pravin U; Freund, K Bailey; Glassman, Adam R; Kim, Judy E; Martin, Daniel F; Pollack, John S; Regillo, Carl D; Rosenfeld, Philip J; Schachat, Andrew P; Wells, John A
2016-01-01
The Diabetic Retinopathy Clinical Research Network (DRCR Network), sponsored by the National Eye Institute, reported the results of a comparative effectiveness randomized clinical trial (RCT) evaluating the 3 anti-vascular endothelial growth factor (anti-VEGF) agents aflibercept (2.0 mg), bevacizumab (1.25 mg), and ranibizumab (0.3 mg) for treatment of diabetic macular edema (DME) involving the center of the retina and associated with visual acuity loss. The many important findings of the RCT prompted the American Society of Retina Specialists to convene a group of experts to provide their perspective regarding clinically relevant findings of the study. To describe specific outcomes of the RCT judged worthy of highlighting, to discuss how these and other clinically relevant results should be considered by specialists treating DME, and to identify unanswered questions that merit consideration before treatment. The DRCR Network-authored publication on primary outcomes of the comparative effectiveness RCT at 89 sites in the United States. The study period of the RCT was August 22, 2012, to August 28, 2013. On average, all 3 anti-VEGF agents led to improved visual acuity in eyes with DME involving the center of the retina and with visual acuity impairment, including mean (SD) improvements by +13.3 (11.1) letters with aflibercept vs +9.7 (10.1) letters with bevacizumab (P < .001) and +11.2 (9.4) letters with ranibizumab (P = .03). Worse visual acuity when initiating therapy was associated with greater visual acuity benefit of aflibercept (+18.9 [11.5]) over bevacizumab (+11.8 [12.0]) or ranibizumab (14.2 [10.6]) 1 year later (P < .001 for interaction with visual acuity as a continuous variable, and P = .002 for interaction with visual acuity as a categorical variable). It is unknown whether different visual acuity outcomes associated with the use of the 3 anti-VEGF agents would be noted with other treatment regimens or with adequately repackaged bevacizumab, as well as in patients with criteria that excluded them from the RCT, such as persistent DME despite recent anti-VEGF treatment. On average, all 3 anti-VEGF agents led to improved visual acuity in eyes with DME involving the center of the retina and visual acuity impairment. Worse visual acuity when initiating therapy was associated with greater visual acuity benefit of aflibercept over bevacizumab or ranibizumab 1 year later. Care needs to be taken when attempting to extrapolate outcomes of this RCT to differing treatment regimens. With access to adequately repackaged bevacizumab, many specialists might initiate therapy with bevacizumab when visual acuity is good (ie, 20/32 to 20/40 as measured in the DRCR Network), recognizing that the cost-effectiveness of bevacizumab outweighs that of aflibercept or ranibizumab.
Spatial and temporal features of superordinate semantic processing studied with fMRI and EEG
Costanzo, Michelle E.; McArdle, Joseph J.; Swett, Bruce; Nechaev, Vladimir; Kemeny, Stefan; Xu, Jiang; Braun, Allen R.
2013-01-01
The relationships between the anatomical representation of semantic knowledge in the human brain and the timing of neurophysiological mechanisms involved in manipulating such information remain unclear. This is the case for superordinate semantic categorization—the extraction of general features shared by broad classes of exemplars (e.g., living vs. non-living semantic categories). We proposed that, because of the abstract nature of this information, input from diverse input modalities (visual or auditory, lexical or non-lexical) should converge and be processed in the same regions of the brain, at similar time scales during superordinate categorization—specifically in a network of heteromodal regions, and late in the course of the categorization process. In order to test this hypothesis, we utilized electroencephalography and event related potentials (EEG/ERP) with functional magnetic resonance imaging (fMRI) to characterize subjects' responses as they made superordinate categorical decisions (living vs. non-living) about objects presented as visual pictures or auditory words. Our results reveal that, consistent with our hypothesis, during the course of superordinate categorization, information provided by these diverse inputs appears to converge in both time and space: fMRI showed that heteromodal areas of the parietal and temporal cortices are active during categorization of both classes of stimuli. The ERP results suggest that superordinate categorization is reflected as a late positive component (LPC) with a parietal distribution and long latencies for both stimulus types. Within the areas and times in which modality independent responses were identified, some differences between living and non-living categories were observed, with a more widespread spatial extent and longer latency responses for categorization of non-living items. PMID:23847490
Categorical organization in free recall across culture and age.
Gutchess, Angela H; Yoon, Carolyn; Luo, Ting; Feinberg, Fred; Hedden, Trey; Jing, Qicheng; Nisbett, Richard E; Park, Denise C
2006-01-01
Cross-cultural differences in cognition suggest that Westerners use categories more than Easterners, but these differences have only been investigated in young adults. The contributions of cognitive resource and the extent of cultural exposure are explored for free recall by investigating cross-cultural differences in categorical organization in younger and older adults. Cultural differences in the use of categories should be larger for elderly than young because categorization is a well-practiced strategy for Westerners, but age-related cognitive resource limitations may make the strategy difficult for elderly Easterners to implement. Therefore, we expect that cultural differences in categorization will be magnified in elderly adults relative to younger adults, with Americans categorizing more than Chinese. Across two studies, 112 young and 112 elderly drawn from two cultures (American and Chinese) encoded words presented in their native language. One word list contained categorically-unrelated words and the other, categorically-related words; both lists were presented in the participants' native language. In experiment 1, the words were strong category associates, and in experiment 2, the words were weak category associates. Participants recalled all the words they could remember, and the number of words recalled and degree of clustering by category were analyzed. As predicted, cultural differences emerged for the elderly, with East-Asians using categories less than Americans during recall of highly-associated category exemplars (experiment 1). For recall of low-associate exemplars, East-Asians overall categorized less than Americans (experiment 2). Surprisingly, these differences in the use of categories did not lead to cultural differences in the number of words recalled. The expected effects of age were apparent with elderly recalling less than young, but in contrast to previous studies, elderly also categorized less than young. These studies provide support for the notion that cultural differences in categorical organization are larger for elderly adults than young, although culture did not impact the amount recalled. These data suggest that culture and age interact to influence cognition.
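The abstract does not say which clustering index was used; as an illustration only, one widely used measure of categorical organization in free recall is the adjusted ratio of clustering (ARC), which compares observed adjacent same-category recalls with the number expected by chance. A minimal sketch, with an invented recall sequence:

```python
from collections import Counter

def arc(recall_sequence):
    """Adjusted ratio of clustering: 1 = perfect clustering, ~0 = chance-level.
    recall_sequence: list of category labels in the order items were recalled."""
    n = len(recall_sequence)
    counts = Counter(recall_sequence)
    k = len(counts)
    observed = sum(1 for a, b in zip(recall_sequence, recall_sequence[1:]) if a == b)
    expected = sum(c * c for c in counts.values()) / n - 1
    max_rep = n - k
    if max_rep == expected:          # no room for clustering above chance
        return float('nan')
    return (observed - expected) / (max_rep - expected)

# Hypothetical recall output labeled by category; fully grouped recall gives ARC = 1.0.
print(arc(['animal', 'animal', 'animal', 'tool', 'tool', 'fruit', 'fruit', 'fruit']))
```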
The Influence of Flankers on Race Categorization of Faces
Sun, Hsin-Mei; Balas, Benjamin
2012-01-01
Context affects multiple cognitive and perceptual processes. In the present study, we asked how the context of a set of faces affected the perception of a target face’s race in two distinct tasks. In Experiments 1 and 2, participants categorized target faces according to perceived racial category (Black or White). In Experiment 1, the target face was presented alone, or with Black or White flanker faces. The orientation of flanker faces was also manipulated to investigate how face inversion effect interacts with the influences of flanker faces on the target face. The results showed that participants were more likely to categorize the target face as White when it was surrounded by inverted White faces (an assimilation effect). Experiment 2 further examined how different aspects of the visual context affect the perception of the target face by manipulating flanker faces’ shape and pigmentation as well as their orientation. The results showed that flanker faces’ shape and pigmentation affected the perception of the target face differently. While shape elicited a contrast effect, pigmentation appeared to be assimilative. These novel findings suggest that the perceived race of a face is modulated by the appearance of other faces and their distinct shape and pigmentation properties. However, the contrast and assimilation effects elicited by flanker faces’ shape and pigmentation may be specific to race categorization, since the same stimuli used in a delayed matching task (Experiment 3) revealed that flanker pigmentation induced a contrast effect on the perception of target pigmentation. PMID:22825930
Challenging Cognitive Control by Mirrored Stimuli in Working Memory Matching
Wirth, Maria; Gaschler, Robert
2017-01-01
Cognitive conflict has often been investigated by placing automatic processing originating from learned associations in competition with instructed task demands. Here we explore whether mirror generalization as a congenital mechanism can be employed to create cognitive conflict. Past research suggests that the visual system automatically generates an invariant representation of visual objects and their mirrored counterparts (i.e., mirror generalization), and especially so for lateral reversals (e.g., a cup seen from the left side vs. right side). Prior work suggests that mirror generalization can be reduced or even overcome by learning (i.e., for those visual objects for which it is not appropriate, such as letters d and b). We, therefore, minimized prior practice on resolving conflicts involving mirror generalization by using kanji stimuli as non-verbal and unfamiliar material. In a 1-back task, participants had to check a stream of kanji stimuli for identical repetitions and avoid miscategorizing mirror-reversed stimuli as exact repetitions. Consistent with previous work, lateral reversals led to profound slowing of reaction times and lower accuracy in Experiment 1. Yet, different from previous reports suggesting that lateral reversals lead to stronger conflict, similar slowing for vertical and horizontal mirror transformations was observed in Experiment 2. Taken together, the results suggest that transformations of visual stimuli can be employed to challenge cognitive control in the 1-back task. PMID:28503160
The semantic category-based grouping in the Multiple Identity Tracking task.
Wei, Liuqing; Zhang, Xuemin; Li, Zhen; Liu, Jingyao
2018-01-01
In the Multiple Identity Tracking (MIT) task, categorical distinctions between targets and distractors have been found to facilitate tracking (Wei, Zhang, Lyu, & Li in Frontiers in Psychology, 7, 589, 2016). The purpose of this study was to further investigate the reasons for the facilitation effect, through six experiments. The results of Experiments 1-3 excluded the potential explanations of visual distinctiveness, attentional distribution strategy, and a working memory mechanism, respectively. When objects' visual information was preserved and categorical information was removed, the facilitation effect disappeared, suggesting that the visual distinctiveness between targets and distractors was not the main reason for the facilitation effect. Moreover, the facilitation effect was not the result of strategically shifting the attentional distribution, because the targets received more attention than the distractors in all conditions. Additionally, the facilitation effect did not come about because the identities of targets were encoded and stored in visual working memory to assist in the recovery from tracking errors; when working memory was disturbed by the object identities changing during tracking, the facilitation effect still existed. Experiments 4 and 5 showed that observers grouped targets together and segregated them from distractors on the basis of their categorical information. By doing this, observers could largely avoid distractor interference with tracking and improve tracking performance. Finally, Experiment 6 indicated that category-based grouping is not an automatic, but a goal-directed and effortful, strategy. In summary, the present findings show that a semantic category-based target-grouping mechanism exists in the MIT task, which is likely to be the major reason for the tracking facilitation effect.
Spectrophotometric reading of EUCAST antifungal susceptibility testing of Aspergillus fumigatus.
Meletiadis, J; Leth Mortensen, K; Verweij, P E; Mouton, J W; Arendrup, M C
2017-02-01
Given the increasing number of antifungal drugs and the emergence of resistant Aspergillus isolates, objective, automated and high-throughput antifungal susceptibility testing is important. The EUCAST E.Def 9.3 reference method for MIC determination of Aspergillus species relies on visual reading. Spectrophotometric reading was not adopted because of concern that non-uniform filamentous growth might lead to unreliable and non-reproducible results. We therefore evaluated spectrophotometric reading for the determination of MICs of antifungal azoles against Aspergillus fumigatus. Eighty-eight clinical isolates of A. fumigatus were tested against four medical azoles (posaconazole, voriconazole, itraconazole, isavuconazole) and one agricultural azole (tebuconazole) with EUCAST E.Def 9.3. The visually determined MICs (complete inhibition of growth) were compared with spectrophotometrically determined MICs and essential (±1 twofold dilution) and categorical (susceptible/intermediate/resistant or wild-type/non-wild-type) agreement was calculated. Spectrophotometric data were analysed with regression analysis using the Emax model, and the effective concentration corresponding to 5% (EC5) was estimated. Using the 5% cut-off, high essential (92%-97%) and categorical (93%-99%) agreement (<6% errors) was found between spectrophotometric and visual MICs. The EC5 also correlated with the visually determined MICs with an essential agreement of 83%-96% and a categorical agreement of 90%-100% (<5% errors). Spectrophotometric determination of MICs of antifungal drugs may increase objectivity, and allow automation and high-throughput of EUCAST E.Def 9.3 antifungal susceptibility testing of Aspergillus species. Copyright © 2016 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
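The Emax-based analysis described above can be illustrated with a short sketch: fit a sigmoid Emax model to growth (expressed as percent of the drug-free control) across the dilution series, then invert the fitted curve at the 5% level to obtain EC5. The concentrations and readings below are invented, and the exact model parameterization used in the study is not specified in the abstract, so this is only a plausible form.

```python
import numpy as np
from scipy.optimize import curve_fit

def emax_model(conc, e0, ec50, hill):
    """Sigmoid Emax model: growth (% of drug-free control) as a function of drug concentration."""
    return e0 * ec50**hill / (ec50**hill + conc**hill)

# Hypothetical two-fold dilution series (mg/L) and spectrophotometric readings
# expressed as percent of the drug-free growth control; not data from the study.
conc   = np.array([0.03, 0.06, 0.125, 0.25, 0.5, 1.0, 2.0, 4.0])
growth = np.array([98.0, 95.0, 90.0, 60.0, 20.0, 6.0, 2.0, 1.0])

params, _ = curve_fit(emax_model, conc, growth, p0=[100.0, 0.25, 2.0], bounds=(0, np.inf))
e0, ec50, hill = params

# EC5: concentration at which predicted growth falls to 5% of the control,
# obtained by inverting the fitted model (assumes e0 is well above 5%).
ec5 = ec50 * (e0 / 5.0 - 1.0) ** (1.0 / hill)
print(f"EC50 = {ec50:.3f} mg/L, EC5 = {ec5:.3f} mg/L")
```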
Coggan, David D; Baker, Daniel H; Andrews, Timothy J
2016-01-01
Brain-imaging studies have found distinct spatial and temporal patterns of response to different object categories across the brain. However, the extent to which these categorical patterns of response reflect higher-level semantic or lower-level visual properties of the stimulus remains unclear. To address this question, we measured patterns of EEG response to intact and scrambled images in the human brain. Our rationale for using scrambled images is that they have many of the visual properties found in intact images, but do not convey any semantic information. Images from different object categories (bottle, face, house) were briefly presented (400 ms) in an event-related design. A multivariate pattern analysis revealed that categorical patterns of response to intact images emerged ∼80-100 ms after stimulus onset and were still evident when the stimulus was no longer present (∼800 ms). Next, we measured the patterns of response to scrambled images. Categorical patterns of response to scrambled images also emerged ∼80-100 ms after stimulus onset. However, in contrast to the intact images, distinct patterns of response to scrambled images were mostly evident while the stimulus was present (∼400 ms). Moreover, scrambled images were able to account for all of the variance in the responses to intact images only at early stages of processing. This direct manipulation of visual and semantic content provides new insights into the temporal dynamics of object perception and the extent to which different stages of processing are dependent on lower-level or higher-level properties of the image.
ERIC Educational Resources Information Center
de la Rosa, Stephan; Choudhery, Rabia N.; Chatziastros, Astros
2011-01-01
Recent evidence suggests that the recognition of an object's presence and its explicit recognition are temporally closely related. Here we re-examined the time course (using a fine and a coarse temporal resolution) and the sensitivity of three possible component processes of visual object recognition. In particular, participants saw briefly…
Discrimination of Complex Human Behavior by Pigeons (Columba livia) and Humans
Qadri, Muhammad A. J.; Sayde, Justin M.; Cook, Robert G.
2014-01-01
The cognitive and neural mechanisms for recognizing and categorizing behavior are not well understood in non-human animals. In the current experiments, pigeons and humans learned to categorize two non-repeating, complex human behaviors (“martial arts” vs. “Indian dance”). Using multiple video exemplars of a digital human model, pigeons discriminated these behaviors in a go/no-go task and humans in a choice task. Experiment 1 found that pigeons already experienced with discriminating the locomotive actions of digital animals acquired the discrimination more rapidly when action information was available than when only pose information was available. Experiments 2 and 3 found this same dynamic superiority effect with naïve pigeons and human participants. Both species used the same combination of immediately available static pose information and more slowly perceived dynamic action cues to discriminate the behavioral categories. Theories based on generalized visual mechanisms, as opposed to embodied, species-specific action networks, offer a parsimonious account of how these different animals recognize behavior across and within species. PMID:25379777
Practice as Research: A Fine Art Contextual Study
ERIC Educational Resources Information Center
Adams, Suze
2014-01-01
This paper examines the dynamic interplay between practice and theory in practice-led research in the visual arts. Building on recent debate around the issue and following appropriately rigorous models, the importance of locating a suitable methodology to adequately reflect the integrated process of research practice in written as well as visual…
Fong, Allan; Harriott, Nicole; Walters, Donna M; Foley, Hanan; Morrissey, Richard; Ratwani, Raj R
2017-08-01
Many healthcare providers have implemented patient safety event reporting systems to better understand and improve patient safety. Reviewing and analyzing these reports is often time consuming and resource intensive because of both the quantity of reports and length of free-text descriptions in the reports. Natural language processing (NLP) experts collaborated with clinical experts on a patient safety committee to assist in the identification and analysis of medication related patient safety events. Different NLP algorithmic approaches were developed to identify four types of medication related patient safety events and the models were compared. Well performing NLP models were generated to categorize medication related events into pharmacy delivery delays, dispensing errors, Pyxis discrepancies, and prescriber errors with receiver operating characteristic areas under the curve of 0.96, 0.87, 0.96, and 0.81 respectively. We also found that modeling the brief without the resolution text generally improved model performance. These models were integrated into a dashboard visualization to support the patient safety committee review process. We demonstrate the capabilities of various NLP models and the use of two text inclusion strategies at categorizing medication related patient safety events. The NLP models and visualization could be used to improve the efficiency of patient safety event data review and analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
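The abstract does not specify the NLP models used; as a hedged illustration of the general approach (free-text event briefs mapped to a binary event type and evaluated with ROC AUC), a simple TF-IDF plus logistic-regression baseline might look like the sketch below. The example briefs and labels are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical free-text event briefs and a binary label for one event type
# (e.g., pharmacy delivery delay); these strings are illustrative only.
briefs = [
    "medication arrived two hours after scheduled dose",
    "pyxis count did not match the dispensing record",
    "patient received correct dose on time",
    "pharmacy delivery delayed, dose given late",
    "prescriber entered wrong dose in order",
    "no issues noted with medication administration",
] * 5  # replicate so cross-validation folds are populated
labels = [1, 0, 0, 1, 0, 0] * 5

# TF-IDF features over unigrams and bigrams, classified with logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
auc = cross_val_score(model, briefs, labels, cv=5, scoring="roc_auc")
print("ROC AUC per fold:", auc.round(2))
```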
Graded effects of regularity in language revealed by N400 indices of morphological priming.
Kielar, Aneta; Joanisse, Marc F
2010-07-01
Differential electrophysiological effects for regular and irregular linguistic forms have been used to support the theory that grammatical rules are encoded using a dedicated cognitive mechanism. The alternative hypothesis is that language systematicities are encoded probabilistically in a way that does not categorically distinguish rule-like and irregular forms. In the present study, this matter was investigated more closely by focusing specifically on whether the regular-irregular distinction in English past tenses is categorical or graded. We compared the ERP priming effects of regulars (baked-bake), vowel-change irregulars (sang-sing), and "suffixed" irregulars that display a partial regularity (suffixed irregular verbs, e.g., slept-sleep), as well as forms that are related strictly along formal or semantic dimensions. Participants performed a visual lexical decision task with either visual (Experiment 1) or auditory prime (Experiment 2). Stronger N400 priming effects were observed for regular than vowel-change irregular verbs, whereas suffixed irregulars tended to group with regular verbs. Subsequent analyses decomposed early versus late-going N400 priming, and suggested that differences among forms can be attributed to the orthographic similarity of prime and target. Effects of morphological relatedness were observed in the later-going time period, however, we failed to observe true regular-irregular dissociations in either experiment. The results indicate that morphological effects emerge from the interaction of orthographic, phonological, and semantic overlap between words.
Using Gemba Boards to Facilitate Evidence-Based Practice in Critical Care.
Bourgault, Annette M; Upvall, Michele J; Graham, Alison
2018-06-01
Tradition-based practices lack supporting research evidence and may be harmful or ineffective. Engagement of key stakeholders is a critical step toward facilitating evidence-based practice change. Gemba , derived from Japanese, refers to the real place where work is done. Gemba boards (visual management tools) appear to be an innovative method to engage stakeholders and facilitate evidence-based practice. To explore the use of gemba boards and gemba huddles to facilitate practice change. Twenty-two critical care nurses participated in interviews in this qualitative, descriptive study. Thematic analysis was used to code and categorize interview data. Two researchers reached consensus on coding and derived themes. Data were managed with qualitative analysis software. The code gemba occurred most frequently; a secondary analysis was performed to explore its impact on practice change. Four themes were derived from the gemba code: (1) facilitation of staff, leadership, and interdisciplinary communication, (2) transparency of outcome data, (3) solicitation of staff ideas and feedback, and (4) dissemination of practice changes. Gemba boards and gemba huddles became part of the organizational culture for promoting and disseminating evidence-based practices. Unit-based, publicly located gemba boards and huddles have become key components of evidence-based practice culture. Gemba is both a tool and a process to engage team members and the public to generate clinical questions and to plan, implement, and evaluate practice changes. Future research on the effectiveness of gemba boards to facilitate evidence-based practice is warranted. ©2018 American Association of Critical-Care Nurses.
Aphasic Patients Exhibit a Reversal of Hemispheric Asymmetries in Categorical Color Discrimination
Paluy, Yulia; Gilbert, Aubrey L.; Baldo, Juliana V.; Dronkers, Nina F.; Ivry, Richard B.
2010-01-01
Patients with left hemisphere (LH) or right hemisphere (RH) brain injury due to stroke were tested on a speeded, color discrimination task in which two factors were manipulated: 1) the categorical relationship between the target and the distracters and 2) the visual field in which the target was presented. Similar to controls, the RH patients were faster in detecting targets in the right visual field when the target and distracters had different color names compared to when their names were the same. This effect was absent in the LH patients, consistent with the hypothesis that injury to the left hemisphere handicaps the automatic activation of lexical codes. Moreover, the LH patients showed a reversed effect, such that the advantage of different target-distracter names was now evident for targets in the left visual field. This reversal may suggest a reorganization of the color lexicon in the right hemisphere following left hemisphere brain injury and/or the unmasking of a heightened right hemisphere sensitivity to color categories. PMID:21216454
Ma, Bosen; Wang, Xiaoyun; Li, Degao
2015-01-01
To separate the contribution of phonological from that of visual-orthographic information in the recognition of a Chinese word that is composed of one or two Chinese characters, we conducted two experiments in a priming task of semantic categorization (PTSC), in which length (one- or two-character words), relation, prime (related or unrelated prime-target pairs), and SOA (47, 87, or 187 ms) were manipulated. The prime was similar to the target in meaning or in visual configuration in Experiment A and in meaning or in pronunciation in Experiment B. The results indicate that the two-character words were similar to the one-character words but were less demanding of cognitive resources than the one-character words in the processing of phonological, visual-orthographic, and semantic information. The phonological primes had a facilitating effect at the SOA of 47 ms but an inhibitory effect at the SOA of 187 ms on the participants' reaction times; the visual-orthographic primes only had an inhibitory influence on the participants' reaction times at the SOA of 187 ms. The visual configuration of a Chinese word of one or two Chinese characters has its own contribution in helping retrieve the word's meanings; similarly, the phonological configuration of a one- or two-character word plays its own role in triggering activations of the word's semantic representations.
Vinken, Kasper; Van den Bergh, Gert; Vermaercke, Ben; Op de Beeck, Hans P.
2016-01-01
In recent years, the rodent has come forward as a candidate model for investigating higher level visual abilities such as object vision. This view has been backed up substantially by evidence from behavioral studies that show rats can be trained to express visual object recognition and categorization capabilities. However, almost no studies have investigated the functional properties of rodent extrastriate visual cortex using stimuli that target object vision, leaving a gap compared with the primate literature. Therefore, we recorded single-neuron responses along a proposed ventral pathway in rat visual cortex to investigate hallmarks of primate neural object representations such as preference for intact versus scrambled stimuli and category-selectivity. We presented natural movies containing a rat or no rat as well as their phase-scrambled versions. Population analyses showed increased dissociation in representations of natural versus scrambled stimuli along the targeted stream, but without a clear preference for natural stimuli. Along the measured cortical hierarchy the neural response seemed to be driven increasingly by features that are not V1-like and destroyed by phase-scrambling. However, there was no evidence for category selectivity for the rat versus nonrat distinction. Together, these findings provide insights about differences and commonalities between rodent and primate visual cortex. PMID:27146315
Feature saliency and feedback information interactively impact visual category learning
Hammer, Rubi; Sloutsky, Vladimir; Grill-Spector, Kalanit
2015-01-01
Visual category learning (VCL) involves detecting which features are most relevant for categorization. VCL relies on attentional learning, which enables effectively redirecting attention to object’s features most relevant for categorization, while ‘filtering out’ irrelevant features. When features relevant for categorization are not salient, VCL relies also on perceptual learning, which enables becoming more sensitive to subtle yet important differences between objects. Little is known about how attentional learning and perceptual learning interact when VCL relies on both processes at the same time. Here we tested this interaction. Participants performed VCL tasks in which they learned to categorize novel stimuli by detecting the feature dimension relevant for categorization. Tasks varied both in feature saliency (low-saliency tasks that required perceptual learning vs. high-saliency tasks), and in feedback information (tasks with mid-information, moderately ambiguous feedback that increased attentional load, vs. tasks with high-information non-ambiguous feedback). We found that mid-information and high-information feedback were similarly effective for VCL in high-saliency tasks. This suggests that an increased attentional load, associated with the processing of moderately ambiguous feedback, has little effect on VCL when features are salient. In low-saliency tasks, VCL relied on slower perceptual learning; but when the feedback was highly informative participants were able to ultimately attain the same performance as during the high-saliency VCL tasks. However, VCL was significantly compromised in the low-saliency mid-information feedback task. We suggest that such low-saliency mid-information learning scenarios are characterized by a ‘cognitive loop paradox’ where two interdependent learning processes have to take place simultaneously. PMID:25745404
Ye, Byoung Seok; Chin, Juhee; Kim, Seong Yoon; Lee, Jung-Sun; Kim, Eun-Joo; Lee, Yunhwan; Hong, Chang Hyung; Choi, Seong Hye; Park, Kyung Won; Ku, Bon D; Moon, So Young; Kim, SangYun; Han, Seol-Hee; Lee, Jae-Hong; Cheong, Hae-Kwan; Park, Sun Ah; Jeong, Jee Hyang; Na, Duk L; Seo, Sang Won
2015-01-01
We evaluate the longitudinal outcomes of amnestic mild cognitive impairment (aMCI) according to the modality of memory impairment involved. We recruited 788 aMCI patients and followed them up. aMCI patients were categorized into three groups according to the modality of memory impairment: Visual-aMCI, only visual memory impaired; Verbal-aMCI, only verbal memory impaired; and Both-aMCI, both visual and verbal memory impaired. Each aMCI group was further categorized according to the presence or absence of recognition failure. Risk of progression to dementia was compared with pooled logistic regression analyses while controlling for age, gender, education, and interval from baseline. Of the sample, 219 (27.8%) aMCI patients progressed to dementia. Compared to the Visual-aMCI group, Verbal-aMCI (OR = 1.98, 95% CI = 1.19-3.28, p = 0.009) and Both-aMCI (OR = 3.05, 95% CI = 1.97-4.71, p < 0.001) groups exhibited higher risks of progression to dementia. Memory recognition failure was associated with increased risk of progression to dementia only in the Visual-aMCI group, but not in the Verbal-aMCI and Both-aMCI groups. The Visual-aMCI without recognition failure group were subcategorized into aMCI with depression, small vessel disease, or accelerated aging, and these subgroups showed a variety of progression rates. Our findings underlined the importance of heterogeneous longitudinal outcomes of aMCI, especially Visual-aMCI, for designing and interpreting future treatment trials in aMCI.
Metacognitive deficits in categorization tasks in a population with impaired inner speech.
Langland-Hassan, Peter; Gauker, Christopher; Richardson, Michael J; Dietz, Aimee; Faries, Frank R
2017-11-01
This study examines the relation of language use to a person's ability to perform categorization tasks and to assess their own abilities in those categorization tasks. A silent rhyming task was used to confirm that a group of people with post-stroke aphasia (PWA) had corresponding covert language production (or "inner speech") impairments. The performance of the PWA was then compared to that of age- and education-matched healthy controls on three kinds of categorization tasks and on metacognitive self-assessments of their performance on those tasks. The PWA showed no deficits in their ability to categorize objects for any of the three trial types (visual, thematic, and categorial). However, on the categorial trials, their metacognitive assessments of whether they had categorized correctly were less reliable than those of the control group. The categorial trials were distinguished from the others by the fact that the categorization could not be based on some immediately perceptible feature or on the objects' being found together in a type of scenario or setting. This result offers preliminary evidence for a link between covert language use and a specific form of metacognition. Copyright © 2017 Elsevier B.V. All rights reserved.
To Sum or Not to Sum: Taxometric Analysis with Ordered Categorical Assessment Items
ERIC Educational Resources Information Center
Walters, Glenn D.; Ruscio, John
2009-01-01
Meehl's taxometric method has been shown to differentiate between categorical and dimensional data, but there are many ways to implement taxometric procedures. When analyzing the ordered categorical data typically provided by assessment instruments, summing items to form input indicators has been a popular practice for more than 20 years. A Monte…
Vanmarcke, Steven; Calders, Filip; Wagemans, Johan
2016-01-01
Although categorization can take place at different levels of abstraction, classic studies on semantic labeling identified the basic level, for example, dog, as entry point for categorization. Ultrarapid categorization tasks have contradicted these findings, indicating that participants are faster at detecting superordinate-level information, for example, animal, in a complex visual image. We argue that both seemingly contradictive findings can be reconciled within the framework of parallel distributed processing and its successor Leabra (Local, Error-driven and Associative, Biologically Realistic Algorithm). The current study aimed at verifying this prediction in an ultrarapid categorization task with a dynamically changing presentation time (PT) for each briefly presented object, followed by a perceptual mask. Furthermore, we manipulated two defining task variables: level of categorization (basic vs. superordinate categorization) and object presentation mode (object-in-isolation vs. object-in-context). In contradiction with previous ultrarapid categorization research, focusing on reaction time, we used accuracy as our main dependent variable. Results indicated a consistent superordinate processing advantage, coinciding with an overall improvement in performance with longer PT and a significantly more accurate detection of objects in isolation, compared with objects in context, at lower stimulus PT. This contextual disadvantage disappeared when PT increased, indicating that figure-ground separation with recurrent processing is vital for meaningful contextual processing to occur.
Harel, Assaf; Bentin, Shlomo
2013-01-01
A much-debated question in object recognition is whether expertise for faces and expertise for non-face objects utilize common perceptual information. We investigated this issue by assessing the diagnostic information required for different types of expertise. Specifically, we asked whether face categorization and expert car categorization at the subordinate level relies on the same spatial frequency (SF) scales. Fifteen car experts and fifteen novices performed a category verification task with spatially filtered images of faces, cars, and airplanes. Images were categorized based on their basic (e.g. "car") and subordinate level (e.g. "Japanese car") identity. The effect of expertise was not evident when objects were categorized at the basic level. However, when the car experts categorized faces and cars at the subordinate level, the two types of expertise required different kinds of SF information. Subordinate categorization of faces relied on low SFs more than on high SFs, whereas subordinate expert car categorization relied on high SFs more than on low SFs. These findings suggest that expertise in the recognition of objects and faces do not utilize the same type of information. Rather, different types of expertise require different types of diagnostic visual information.
Calders, Filip; Wagemans, Johan
2016-01-01
Although categorization can take place at different levels of abstraction, classic studies on semantic labeling identified the basic level, for example, dog, as entry point for categorization. Ultrarapid categorization tasks have contradicted these findings, indicating that participants are faster at detecting superordinate-level information, for example, animal, in a complex visual image. We argue that both seemingly contradictive findings can be reconciled within the framework of parallel distributed processing and its successor Leabra (Local, Error-driven and Associative, Biologically Realistic Algorithm). The current study aimed at verifying this prediction in an ultrarapid categorization task with a dynamically changing presentation time (PT) for each briefly presented object, followed by a perceptual mask. Furthermore, we manipulated two defining task variables: level of categorization (basic vs. superordinate categorization) and object presentation mode (object-in-isolation vs. object-in-context). In contradiction with previous ultrarapid categorization research, focusing on reaction time, we used accuracy as our main dependent variable. Results indicated a consistent superordinate processing advantage, coinciding with an overall improvement in performance with longer PT and a significantly more accurate detection of objects in isolation, compared with objects in context, at lower stimulus PT. This contextual disadvantage disappeared when PT increased, indicating that figure-ground separation with recurrent processing is vital for meaningful contextual processing to occur. PMID:27803794
Harel, Assaf; Bentin, Shlomo
2013-01-01
A much-debated question in object recognition is whether expertise for faces and expertise for non-face objects utilize common perceptual information. We investigated this issue by assessing the diagnostic information required for different types of expertise. Specifically, we asked whether face categorization and expert car categorization at the subordinate level relies on the same spatial frequency (SF) scales. Fifteen car experts and fifteen novices performed a category verification task with spatially filtered images of faces, cars, and airplanes. Images were categorized based on their basic (e.g. “car”) and subordinate level (e.g. “Japanese car”) identity. The effect of expertise was not evident when objects were categorized at the basic level. However, when the car experts categorized faces and cars at the subordinate level, the two types of expertise required different kinds of SF information. Subordinate categorization of faces relied on low SFs more than on high SFs, whereas subordinate expert car categorization relied on high SFs more than on low SFs. These findings suggest that expertise in the recognition of objects and faces do not utilize the same type of information. Rather, different types of expertise require different types of diagnostic visual information. PMID:23826188
Miniature Brain Decision Making in Complex Visual Environments
2008-07-18
The grantee investigated, using the honeybee (Apis mellifera) as a model...successful for understanding face processing in both human adults and infants. Individual honeybees (Apis mellifera) were trained with...for 30 bees (group 3) of the target stimuli. Bernard J, Stach S, Giurfa M (2007) Categorization of visual stimuli in the honeybee Apis mellifera
ERIC Educational Resources Information Center
Grané, Aurea; Romera, Rosario
2018-01-01
Survey data are usually of mixed type (quantitative, multistate categorical, and/or binary variables). Multidimensional scaling (MDS) is one of the most widely used methodologies for visualizing the profile structure of the data. Since the 1960s, MDS methods have been introduced in the literature, initially in publications in the psychometrics area.…
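To illustrate how MDS can be applied to mixed-type survey data of the kind described here, a common recipe is to compute Gower dissimilarities (range-scaled distances for quantitative variables, simple mismatches for categorical and binary ones) and embed them with metric MDS. The sketch below is a generic illustration with invented records, not the authors' method.

```python
import numpy as np
from sklearn.manifold import MDS

def gower_distance(X, is_categorical):
    """Pairwise Gower dissimilarities for mixed-type data.
    X: 2-D object array; is_categorical: one bool per column."""
    n, p = X.shape
    D = np.zeros((n, n))
    for j in range(p):
        col = X[:, j]
        if is_categorical[j]:
            dj = (col[:, None] != col[None, :]).astype(float)   # simple mismatch
        else:
            col = col.astype(float)
            span = col.max() - col.min()
            dj = np.abs(col[:, None] - col[None, :]) / (span if span else 1.0)
        D += dj
    return D / p

# Hypothetical survey records: age (quantitative), region (multistate), smoker (binary).
X = np.array([[34, 'north', 'yes'],
              [58, 'south', 'no'],
              [41, 'north', 'no'],
              [62, 'east',  'yes']], dtype=object)
D = gower_distance(X, is_categorical=[False, True, True])

# Embed the precomputed dissimilarities in two dimensions for visualization.
coords = MDS(n_components=2, dissimilarity='precomputed', random_state=0).fit_transform(D)
print(coords)
```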
ERIC Educational Resources Information Center
Bowers, Jeffrey S.; Davis, Colin J.; Hanley, Derek A.
2005-01-01
We assessed the impact of visual similarity on written word identification by having participants learn new words (e.g. BANARA) that were neighbours of familiar words that previously had no neighbours (e.g. BANANA). Repeated exposure to these new words made it more difficult to semantically categorize the familiar words. There was some evidence of…
Rapid recalibration of speech perception after experiencing the McGurk illusion.
Lüttke, Claudia S; Pérez-Bellido, Alexis; de Lange, Floris P
2018-03-01
The human brain can quickly adapt to changes in the environment. One example is phonetic recalibration: a speech sound is interpreted differently depending on the visual speech and this interpretation persists in the absence of visual information. Here, we examined the mechanisms of phonetic recalibration. Participants categorized the auditory syllables /aba/ and /ada/, which were sometimes preceded by the so-called McGurk stimuli (in which an /aba/ sound, due to visual /aga/ input, is often perceived as 'ada'). We found that only one trial of exposure to the McGurk illusion was sufficient to induce a recalibration effect, i.e. an auditory /aba/ stimulus was subsequently more often perceived as 'ada'. Furthermore, phonetic recalibration took place only when auditory and visual inputs were integrated to 'ada' (McGurk illusion). Moreover, this recalibration depended on the sensory similarity between the preceding and current auditory stimulus. Finally, signal detection theoretical analysis showed that McGurk-induced phonetic recalibration resulted in both a criterion shift towards /ada/ and a reduced sensitivity to distinguish between /aba/ and /ada/ sounds. The current study shows that phonetic recalibration is dependent on the perceptual integration of audiovisual information and leads to a perceptual shift in phoneme categorization.
Chen, Yi-Chuan; Spence, Charles
2013-01-01
The time-course of cross-modal semantic interactions between pictures and either naturalistic sounds or spoken words was compared. Participants performed a speeded picture categorization task while hearing a task-irrelevant auditory stimulus presented at various stimulus onset asynchronies (SOAs) with respect to the visual picture. Both naturalistic sounds and spoken words gave rise to cross-modal semantic congruency effects (i.e., facilitation by semantically congruent sounds and inhibition by semantically incongruent sounds, as compared to a baseline noise condition) when the onset of the sound led that of the picture by 240 ms or more. Both naturalistic sounds and spoken words also gave rise to inhibition irrespective of their semantic congruency when presented within 106 ms of the onset of the picture. The peak of this cross-modal inhibitory effect occurred earlier for spoken words than for naturalistic sounds. These results therefore demonstrate that the semantic priming of visual picture categorization by auditory stimuli only occurs when the onset of the sound precedes that of the visual stimulus. The different time-courses observed for naturalistic sounds and spoken words likely reflect the different processing pathways to access the relevant semantic representations.
Complementary and alternative medicine in multiple sclerosis
Moawad, Heidi; Shepard, Katie M.; Satya-Murti, Saty
2015-01-01
Summary This article identifies payment policy perspectives of the American Academy of Neurology's guideline on complementary and alternative medicine (CAM) in multiple sclerosis (MS). The guideline is a reliable repository of information for advocating or not recommending certain CAM treatments in MS. It eases the burden of searching for information on each separate CAM treatment. It frequently emphasizes the need for patient counseling. To provide such generally undervalued, but needed, cognitive services, neurologists could use advanced practice providers and patient-friendly visual aids during or between visits. They should also rely on evaluation and management codes that recognize time spent predominantly on counseling or coordination of care. The guideline's categorization of probable effectiveness of certain therapies will not influence coverage decisions because payers do not generally cover CAM therapies. PMID:29443184
Collet, Anne-Claire; Fize, Denis; VanRullen, Rufin
2015-01-01
Rapid visual categorization is a crucial ability for survival of many animal species, including monkeys and humans. In real conditions, objects (either animate or inanimate) are never isolated but embedded in a complex background made of multiple elements. It has been shown in humans and monkeys that the contextual background can either enhance or impair object categorization, depending on context/object congruency (for example, an animal in a natural vs. man-made environment). Moreover, a scene is not only a collection of objects; it also has global physical features (i.e phase and amplitude of Fourier spatial frequencies) which help define its gist. In our experiment, we aimed to explore and compare the contribution of the amplitude spectrum of scenes in the context-object congruency effect in monkeys and humans. We designed a rapid visual categorization task, Animal versus Non-Animal, using as contexts both real scenes photographs and noisy backgrounds built from the amplitude spectrum of real scenes but with randomized phase spectrum. We showed that even if the contextual congruency effect was comparable in both species when the context was a real scene, it differed when the foreground object was surrounded by a noisy background: in monkeys we found a similar congruency effect in both conditions, but in humans the congruency effect was absent (or even reversed) when the context was a noisy background. PMID:26207915
Jensen, Greg; Terrace, Herbert
2017-01-01
Humans are highly adept at categorizing visual stimuli, but studies of human categorization are typically validated by verbal reports. This makes it difficult to perform comparative studies of categorization using non-human animals. Interpretation of comparative studies is further complicated by the possibility that animal performance may merely reflect reinforcement learning, whereby discrete features act as discriminative cues for categorization. To assess and compare how humans and monkeys classified visual stimuli, we trained 7 rhesus macaques and 41 human volunteers to respond, in a specific order, to four simultaneously presented stimuli at a time, each belonging to a different perceptual category. These exemplars were drawn at random from large banks of images, such that the stimuli presented changed on every trial. Subjects nevertheless identified and ordered these changing stimuli correctly. Three monkeys learned to order naturalistic photographs; four others, close-up sections of paintings with distinctive styles. Humans learned to order both types of stimuli. All subjects classified stimuli at levels substantially greater than that predicted by chance or by feature-driven learning alone, even when stimuli changed on every trial. However, humans more closely resembled monkeys when classifying the more abstract painting stimuli than the photographic stimuli. This points to a common classification strategy in both species, one that humans can rely on in the absence of linguistic labels for categories. PMID:28961270
Jiang, Xiong; Chevillet, Mark A; Rauschecker, Josef P; Riesenhuber, Maximilian
2018-04-18
Grouping auditory stimuli into common categories is essential for a variety of auditory tasks, including speech recognition. We trained human participants to categorize auditory stimuli from a large novel set of morphed monkey vocalizations. Using fMRI-rapid adaptation (fMRI-RA) and multi-voxel pattern analysis (MVPA) techniques, we gained evidence that categorization training results in two distinct sets of changes: sharpened tuning to monkey call features (without explicit category representation) in left auditory cortex and category selectivity for different types of calls in lateral prefrontal cortex. In addition, the sharpness of neural selectivity in left auditory cortex, as estimated with both fMRI-RA and MVPA, predicted the steepness of the categorical boundary, whereas categorical judgment correlated with release from adaptation in the left inferior frontal gyrus. These results support the theory that auditory category learning follows a two-stage model analogous to the visual domain, suggesting general principles of perceptual category learning in the human brain. Copyright © 2018 Elsevier Inc. All rights reserved.
Algorithm Visualization in Teaching Practice
ERIC Educational Resources Information Center
Törley, Gábor
2014-01-01
This paper presents the history of algorithm visualization (AV), highlighting teaching-methodology aspects. A combined, two-group pedagogical experiment is also presented, which measured the efficiency of AV and its impact on abstract thinking. According to the results, students who learned with AV performed better in the experiment.
Sjövall, Johanna; Bitzén, Ulrika; Kjellén, Elisabeth; Nilsson, Per; Wahlberg, Peter; Brun, Eva
2016-04-01
The aim of this study was to determine whether PET scans after radiotherapy (RT), visually interpreted as equivocal regarding metabolic neck node response can be used to accurately categorize patients as responders or nonresponders using a Likert scale and/or maximum standardized uptake value (SUVmax). Other aims were to determine the performance of different methods for assessing post-RT PET scans (visual inspection, a Likert scale and SUVmax) and to establish whether any method is superior in predicting regional control (RC) and overall survival (OS). In 105 patients with neck node-positive head and neck cancer, the neck node response was evaluated by FDG PET/CT 6 weeks after RT. The scans were clinically assessed by visual inspection and, for the purposes of this analysis, re-evaluated using the Deauville criteria, a five-point Likert scale previously used in lymphoma studies. In addition, SUVmax was determined. All assessment methods were able to significantly predict RC but not OS. The methods were also able to significantly predict remission of tumour after completion of RT. Of the 105 PET scans, 19 were judged as equivocal on visual inspection. The Likert scale was preferable to SUVmax for grouping patients as responders or nonresponders. All methods (visual inspection, SUVmax and the Likert scale) identified responders and nonresponders and predicted RC. A Likert scale is a promising tool to reduce to a minimum the problem of PET scans judged as equivocal. Consensus regarding qualitative assessment would facilitate PET reporting in clinical practice.
Physician Dual Practice: A Descriptive Mapping Review of Literature.
Moghri, Javad; Arab, Mohammad; Rashidian, Arash; Akbari Sari, Ali
2016-03-01
Physician dual practice is a common phenomenon in almost all countries throughout the world and could potentially affect access, equity, and quality of services. This paper aims to review studies of physician dual practice and categorize them according to their main objectives and purposes. Comprehensive literature searches were undertaken to obtain the main papers and documents in the field of physician dual practice: systematic searches in Medline and Embase from 1960 to 2013, and general searches in popular search engines. Descriptive mapping review methods were then used to categorize eligible studies in this area. The searches obtained 404 titles, of which 81 full texts were assessed. Finally, 24 studies were eligible for inclusion in our review. These studies were categorized into four groups - "motivation and forces behind dual practice", "consequences of dual practice", "dual practice policies and their impacts", and "other studies" - based on their main objectives. Our findings showed a dearth of scientifically reliable literature in some areas of dual practice, such as the prevalence of the phenomenon, its real consequences, and the impacts of implemented policy measures. Rigorous empirical and evaluative studies should be designed to detect the real consequences of dual practice and assess the effects of the interventions and regulations that governments have implemented in this field.
ERIC Educational Resources Information Center
Kyllingsbaek, Soren; Markussen, Bo; Bundesen, Claus
2012-01-01
The authors propose and test a simple model of the time course of visual identification of briefly presented, mutually confusable single stimuli in pure accuracy tasks. The model implies that during stimulus analysis, tentative categorizations that stimulus i belongs to category j are made at a constant Poisson rate, v(i, j). The analysis is…
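As a minimal, hypothetical illustration of the Poisson-rate account sketched above (not the authors' own code or parameters), the following Python snippet simulates pure-accuracy trials in which tentative categorizations of stimulus i into category j arrive at constant Poisson rates v(i, j) during the exposure, and the response is the category that has accumulated the most tentative categorizations:

import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(rates, exposure):
    # rates: hypothetical Poisson rates v(i, j), one per candidate category j
    # exposure: stimulus exposure duration in seconds
    counts = rng.poisson(np.asarray(rates) * exposure)
    winners = np.flatnonzero(counts == counts.max())
    return int(rng.choice(winners))  # break ties at random

# Hypothetical rates: the correct category accrues tentative categorizations faster.
rates = [12.0, 8.0]  # v(i, correct), v(i, confusable alternative)
accuracy = np.mean([simulate_trial(rates, 0.05) == 0 for _ in range(10000)])
print(f"predicted accuracy at 50 ms exposure: {accuracy:.2f}")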
Grosu, Horiana B; Vial-Rodriguez, Macarena; Vakil, Erik; Casal, Roberto F; Eapen, George A; Morice, Rodolfo; Stewart, John; Sarkiss, Mona G; Ost, David E
2017-08-01
During diagnostic thoracoscopy, talc pleurodesis after biopsy is appropriate if the probability of malignancy is sufficiently high. Findings on direct visual assessment of the pleura during thoracoscopy, rapid onsite evaluation (ROSE) of touch preparations (touch preps) of thoracoscopic biopsy specimens, and preoperative imaging may help predict the likelihood of malignancy; however, data on the performance of these methods are limited. To assess the performance of ROSE of touch preps, direct visual assessment of the pleura during thoracoscopy, and preoperative imaging in diagnosing malignancy. Patients who underwent ROSE of touch preps during thoracoscopy for suspected malignancy were retrospectively reviewed. Malignancy was diagnosed on the basis of final pathologic examination of pleural biopsy specimens. ROSE results were categorized as malignant, benign, or atypical cells. Visual assessment results were categorized as tumor studding present or absent. Positron emission tomography (PET) and computed tomography (CT) findings were categorized as abnormal or normal pleura. Likelihood ratios were calculated for each category of test result. The study included 44 patients, 26 (59%) with a final pathologic diagnosis of malignancy. Likelihood ratios were as follows: for ROSE of touch preps: malignant, 1.97 (95% confidence interval [CI], 0.90-4.34); atypical cells, 0.69 (95% CI, 0.21-2.27); benign, 0.11 (95% CI, 0.01-0.93); for direct visual assessment: tumor studding present, 3.63 (95% CI, 1.32-9.99); tumor studding absent, 0.24 (95% CI, 0.09-0.64); for PET: abnormal pleura, 9.39 (95% CI, 1.42-62); normal pleura, 0.24 (95% CI, 0.11-0.52); and for CT: abnormal pleura, 13.15 (95% CI, 1.93-89.63); normal pleura, 0.28 (95% CI, 0.15-0.54). A finding of no malignant cells on ROSE of touch preps during thoracoscopy lowers the likelihood of malignancy significantly, whereas a finding of tumor studding on direct visual assessment during thoracoscopy only moderately increases the likelihood of malignancy. A positive finding on PET and/or CT increases the likelihood of malignancy significantly in a moderate-risk patient group and can be used as an adjunct to predict malignancy before pleurodesis.
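Likelihood ratios such as those above can be turned into post-test probabilities with Bayes' rule in odds form. The short Python sketch below is an illustrative calculation only, not the authors' analysis; it simply reuses the study's reported prevalence (26/44) as a pre-test probability together with a few of the reported likelihood ratios:

def post_test_probability(pre_test_prob, likelihood_ratio):
    """Update a pre-test probability with a likelihood ratio via Bayes' rule (odds form)."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Pre-test probability taken from the study's prevalence of malignancy (26/44, ~59%).
pre = 26 / 44
for label, lr in [("CT abnormal pleura", 13.15),
                  ("PET abnormal pleura", 9.39),
                  ("ROSE benign", 0.11)]:
    print(f"{label}: post-test probability ~ {post_test_probability(pre, lr):.2f}")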
Near-optimal integration of facial form and motion.
Dobs, Katharina; Ma, Wei Ji; Reddy, Leila
2017-09-08
Human perception consists of the continuous integration of sensory cues pertaining to the same object. While it has been fairly well shown that humans use an optimal strategy when integrating low-level cues proportional to their relative reliability, the integration processes underlying high-level perception are much less understood. Here we investigate cue integration in a complex high-level perceptual system, the human face processing system. We tested cue integration of facial form and motion in an identity categorization task and found that an optimal model could successfully predict subjects' identity choices. Our results suggest that optimal cue integration may be implemented across different levels of the visual processing hierarchy.
Gary H. Elsner
1979-01-01
Computers can analyze and help to plan the visual aspects of large wildland landscapes. This paper categorizes and explains current computer methods available. It also contains a futuristic dialogue between a landscape architect and a computer.
Iconic memory requires attention
Persuh, Marjan; Genzer, Boris; Melara, Robert D.
2012-01-01
Two experiments investigated whether attention plays a role in iconic memory, employing either a change detection paradigm (Experiment 1) or a partial-report paradigm (Experiment 2). In each experiment, attention was taxed during initial display presentation, focusing the manipulation on consolidation of information into iconic memory, prior to transfer into working memory. Observers were able to maintain high levels of performance (accuracy of change detection or categorization) even when concurrently performing an easy visual search task (low load). However, when the concurrent search was made difficult (high load), observers' performance dropped to almost chance levels, while search accuracy held at single-task levels. The effects of attentional load remained the same across paradigms. The results suggest that, without attention, participants consolidate in iconic memory only gross representations of the visual scene, information too impoverished for successful detection of perceptual change or categorization of features. PMID:22586389
Iconic memory requires attention.
Persuh, Marjan; Genzer, Boris; Melara, Robert D
2012-01-01
Two experiments investigated whether attention plays a role in iconic memory, employing either a change detection paradigm (Experiment 1) or a partial-report paradigm (Experiment 2). In each experiment, attention was taxed during initial display presentation, focusing the manipulation on consolidation of information into iconic memory, prior to transfer into working memory. Observers were able to maintain high levels of performance (accuracy of change detection or categorization) even when concurrently performing an easy visual search task (low load). However, when the concurrent search was made difficult (high load), observers' performance dropped to almost chance levels, while search accuracy held at single-task levels. The effects of attentional load remained the same across paradigms. The results suggest that, without attention, participants consolidate in iconic memory only gross representations of the visual scene, information too impoverished for successful detection of perceptual change or categorization of features.
van Weelden, Lisanne; Schilperoord, Joost; Swerts, Marc; Pecher, Diane
2015-01-01
Visual information contributes fundamentally to the process of object categorization. The present study investigated whether the degree of activation of visual information in this process is dependent on the contextual relevance of this information. We used the Proactive Interference (PI-release) paradigm. In four experiments, we manipulated the information by which objects could be categorized and subsequently be retrieved from memory. The pattern of PI-release showed that if objects could be stored and retrieved both by (non-perceptual) semantic and (perceptual) shape information, then shape information was overruled by semantic information. If, however, semantic information could not be (satisfactorily) used to store and retrieve objects, then objects were stored in memory in terms of their shape. The latter effect was found to be strongest for objects from identical semantic categories.
Emadi, Nazli; Rajimehr, Reza; Esteky, Hossein
2014-01-01
Spontaneous firing is a ubiquitous property of neural activity in the brain. Recent literature suggests that this baseline activity plays a key role in perception. However, it is not known how the baseline activity contributes to neural coding and behavior. Here, by recording from the single neurons in the inferior temporal cortex of monkeys performing a visual categorization task, we thoroughly explored the relationship between baseline activity, the evoked response, and behavior. Specifically we found that a low-frequency (<8 Hz) oscillation in the spike train, prior and phase-locked to the stimulus onset, was correlated with increased gamma power and neuronal baseline activity. This enhancement of the baseline activity was then followed by an increase in the neural selectivity and the response reliability and eventually a higher behavioral performance. PMID:25404900
Modulation of microsaccades by spatial frequency during object categorization.
Craddock, Matt; Oppermann, Frank; Müller, Matthias M; Martinovic, Jasna
2017-01-01
The organization of visual processing into a coarse-to-fine information processing based on the spatial frequency properties of the input forms an important facet of the object recognition process. During visual object categorization tasks, microsaccades occur frequently. One potential functional role of these eye movements is to resolve high spatial frequency information. To assess this hypothesis, we examined the rate, amplitude and speed of microsaccades in an object categorization task in which participants viewed object and non-object images and classified them as showing either natural objects, man-made objects or non-objects. Images were presented unfiltered (broadband; BB) or filtered to contain only low (LSF) or high spatial frequency (HSF) information. This allowed us to examine whether microsaccades were modulated independently by the presence of a high-level feature - the presence of an object - and by low-level stimulus characteristics - spatial frequency. We found a bimodal distribution of saccades based on their amplitude, with a split between smaller and larger microsaccades at 0.4° of visual angle. The rate of larger saccades (⩾0.4°) was higher for objects than non-objects, and higher for objects with high spatial frequency content (HSF and BB objects) than for LSF objects. No effects were observed for smaller microsaccades (<0.4°). This is consistent with a role for larger microsaccades in resolving HSF information for object identification, and previous evidence that more microsaccades are directed towards informative image regions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Perceptual Learning Selectively Refines Orientation Representations in Early Visual Cortex
Jehee, Janneke F.M.; Ling, Sam; Swisher, Jascha D.; van Bergen, Ruben S.; Tong, Frank
2013-01-01
Although practice has long been known to improve perceptual performance, the neural basis of this improvement in humans remains unclear. Using fMRI in conjunction with a novel signal detection-based analysis, we show that extensive practice selectively enhances the neural representation of trained orientations in the human visual cortex. Twelve observers practiced discriminating small changes in the orientation of a laterally presented grating over 20 or more daily one-hour training sessions. Training on average led to a two-fold improvement in discrimination sensitivity, specific to the trained orientation and the trained location, with minimal improvement found for untrained orthogonal orientations or for orientations presented in the untrained hemifield. We measured the strength of orientation-selective responses in individual voxels in early visual areas (V1–V4) using signal detection measures, both pre- and post-training. Although the overall amplitude of the BOLD response was no greater after training, practice nonetheless specifically enhanced the neural representation of the trained orientation at the trained location. This training-specific enhancement of orientation-selective responses was observed in the primary visual cortex (V1) as well as higher extrastriate visual areas V2–V4, and moreover, reliably predicted individual differences in the behavioral effects of perceptual learning. These results demonstrate that extensive training can lead to targeted functional reorganization of the human visual cortex, refining the cortical representation of behaviorally relevant information. PMID:23175828
Perceptual learning selectively refines orientation representations in early visual cortex.
Jehee, Janneke F M; Ling, Sam; Swisher, Jascha D; van Bergen, Ruben S; Tong, Frank
2012-11-21
Although practice has long been known to improve perceptual performance, the neural basis of this improvement in humans remains unclear. Using fMRI in conjunction with a novel signal detection-based analysis, we show that extensive practice selectively enhances the neural representation of trained orientations in the human visual cortex. Twelve observers practiced discriminating small changes in the orientation of a laterally presented grating over 20 or more daily 1 h training sessions. Training on average led to a twofold improvement in discrimination sensitivity, specific to the trained orientation and the trained location, with minimal improvement found for untrained orthogonal orientations or for orientations presented in the untrained hemifield. We measured the strength of orientation-selective responses in individual voxels in early visual areas (V1-V4) using signal detection measures, both before and after training. Although the overall amplitude of the BOLD response was no greater after training, practice nonetheless specifically enhanced the neural representation of the trained orientation at the trained location. This training-specific enhancement of orientation-selective responses was observed in the primary visual cortex (V1) as well as higher extrastriate visual areas V2-V4, and moreover, reliably predicted individual differences in the behavioral effects of perceptual learning. These results demonstrate that extensive training can lead to targeted functional reorganization of the human visual cortex, refining the cortical representation of behaviorally relevant information.
Rapid Presentation of Emotional Expressions Reveals New Emotional Impairments in Tourette’s Syndrome
Mermillod, Martial; Devaux, Damien; Derost, Philippe; Rieu, Isabelle; Chambres, Patrick; Auxiette, Catherine; Legrand, Guillaume; Galland, Fabienne; Dalens, Hélène; Coulangeon, Louise Marie; Broussolle, Emmanuel; Durif, Franck; Jalenques, Isabelle
2013-01-01
Objective: Based on a variety of empirical evidence obtained within the theoretical framework of embodiment theory, we considered it likely that motor disorders in Tourette’s syndrome (TS) would have emotional consequences for TS patients. However, previous research using emotional facial categorization tasks suggests that these consequences are limited to TS patients with obsessive-compulsive behaviors (OCB). Method: These studies used long stimulus presentations which allowed the participants to categorize the different emotional facial expressions (EFEs) on the basis of a perceptual analysis that might potentially hide a lack of emotional feeling for certain emotions. In order to reduce this perceptual bias, we used a rapid visual presentation procedure. Results: Using this new experimental method, we revealed different and surprising impairments on several EFEs in TS patients compared to matched healthy control participants. Moreover, a spatial frequency analysis of the visual signal processed by the patients suggests that these impairments may be located at a cortical level. Conclusion: The current study indicates that the rapid visual presentation paradigm makes it possible to identify various potential emotional disorders that were not revealed by the standard visual presentation procedures previously reported in the literature. Moreover, the spatial frequency analysis performed in our study suggests that emotional deficit in TS might lie at the level of temporal cortical areas dedicated to the processing of HSF visual information. PMID:23630481
Constable, Merryn D; Becker, Stefanie I
2017-10-01
According to the Sapir-Whorf hypothesis, learned semantic categories can influence early perceptual processes. A central finding in support of this view is the lateralized category effect-namely, the finding that categorically different colors (e.g., blue and green hues) can be discriminated faster than colors within the same color category (e.g., different hues of green), especially when they are presented in the right visual field. Because the right visual field projects to the left hemisphere, this finding has been popularly couched in terms of the left-lateralization of language. However, other studies have reported bilateral category effects, which has led some researchers to question the linguistic origins of the effect. Here we examined the time course of lateralized and bilateral category effects in the classical visual search paradigm by means of eyetracking and RT distribution analyses. Our results show a bilateral category effect in the manual responses, which is combined of an early, left-lateralized category effect and a later, right-lateralized category effect. The newly discovered late, right-lateralized category effect occurred only when observers had difficulty locating the target, indicating a specialization of the right hemisphere to find categorically different targets after an initial error. The finding that early and late stages of visual search show different lateralized category effects can explain a wide range of previously discrepant findings.
Orthographic versus semantic matching in visual search for words within lists.
Léger, Laure; Rouet, Jean-François; Ros, Christine; Vibert, Nicolas
2012-03-01
An eye-tracking experiment was performed to assess the influence of orthographic and semantic distractor words on visual search for words within lists. The target word (e.g., "raven") was either shown to participants before the search (literal search) or defined by its semantic category (e.g., "bird", categorical search). In both cases, the type of words included in the list affected visual search times and eye movement patterns. In the literal condition, the presence of orthographic distractors sharing initial and final letters with the target word strongly increased search times. Indeed, the orthographic distractors attracted participants' gaze and were fixated for longer times than other words in the list. The presence of semantic distractors related to the target word also increased search times, which suggests that significant automatic semantic processing of nontarget words took place. In the categorical condition, semantic distractors were expected to have a greater impact on the search task. As expected, the presence in the list of semantic associates of the target word led to target selection errors. However, semantic distractors did not significantly increase search times any more, whereas orthographic distractors still did. Hence, the visual characteristics of nontarget words can be strong predictors of the efficiency of visual search even when the exact target word is unknown. The respective impacts of orthographic and semantic distractors depended more on the characteristics of lists than on the nature of the search task.
Smile line assessment comparing quantitative measurement and visual estimation.
Van der Geld, Pieter; Oosterveld, Paul; Schols, Jan; Kuijpers-Jagtman, Anne Marie
2011-02-01
Esthetic analysis of dynamic functions such as spontaneous smiling is feasible by using digital videography and computer measurement for lip line height and tooth display. Because quantitative measurements are time-consuming, digital videography and semiquantitative (visual) estimation according to a standard categorization are more practical for regular diagnostics. Our objective in this study was to compare 2 semiquantitative methods with quantitative measurements for reliability and agreement. The faces of 122 male participants were individually registered by using digital videography. Spontaneous and posed smiles were captured. On the records, maxillary lip line heights and tooth display were digitally measured on each tooth and also visually estimated according to 3-grade and 4-grade scales. Two raters were involved. An error analysis was performed. Reliability was established with kappa statistics. Interexaminer and intraexaminer reliability values were high, with median kappa values from 0.79 to 0.88. Agreement of the 3-grade scale estimation with quantitative measurement showed higher median kappa values (0.76) than the 4-grade scale estimation (0.66). Differentiating high and gummy smile lines (4-grade scale) resulted in greater inaccuracies. The estimation of a high, average, or low smile line for each tooth showed high reliability close to quantitative measurements. Smile line analysis can be performed reliably with a 3-grade scale (visual) semiquantitative estimation. For a more comprehensive diagnosis, additional measuring is proposed, especially in patients with disproportional gingival display. Copyright © 2011 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
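For readers unfamiliar with the kappa statistic used above, the following Python sketch computes unweighted Cohen's kappa from a cross-classification of two raters' gradings. The 3-grade table is made up for illustration and is not the study's data:

import numpy as np

def cohens_kappa(confusion):
    """Unweighted Cohen's kappa from a k x k confusion matrix of two raters' gradings."""
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    observed = np.trace(confusion) / n
    expected = (confusion.sum(axis=0) @ confusion.sum(axis=1)) / n**2
    return (observed - expected) / (1.0 - expected)

# Hypothetical cross-classification of 122 smiles on a 3-grade scale (low / average / high).
table = [[30,  5,  1],
         [ 6, 45,  4],
         [ 1,  5, 25]]
print(f"kappa = {cohens_kappa(table):.2f}")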
Detecting and reacting to change: the effect of exposure to narrow categorizations.
Chakravarti, Amitav; Fang, Christina; Shapira, Zur
2011-11-01
The ability to detect a change, to accurately assess the magnitude of the change, and to react to that change in a commensurate fashion are of critical importance in many decision domains. Thus, it is important to understand the factors that systematically affect people's reactions to change. In this article we document a novel effect: decision makers' reactions to a change (e.g., a visual change, a technology change) were systematically affected by the type of categorizations they encountered in an unrelated prior task (e.g., the response categories associated with a survey question). We found that prior exposure to narrow, as opposed to broad, categorizations improved decision makers' ability to detect change and led to stronger reactions to a given change. These differential reactions occurred because the prior categorizations, even though unrelated, altered the extent to which the subsequently presented change was perceived as either a relatively large change or a relatively small one.
Fleischhauer, Monika; Strobel, Alexander; Diers, Kersten; Enge, Sören
2014-02-01
The Implicit Association Test (IAT) is a widely used latency-based categorization task that indirectly measures the strength of automatic associations between target and attribute concepts. So far, little is known about the perceptual and cognitive processes underlying personality IATs. Thus, the present study examined event-related potential indices during the execution of an IAT measuring neuroticism (N = 70). The IAT effect was strongly modulated by the P1 component indicating early facilitation of relevant visual input and by a P3b-like late positive component reflecting the efficacy of stimulus categorization. Both components covaried, and larger amplitudes led to faster responses. The results suggest a relationship between early perceptual and semantic processes operating at a more automatic, implicit level and later decision-related categorization of self-relevant stimuli contributing to the IAT effect. Copyright © 2013 Society for Psychophysiological Research.
Seeing your way to health: the visual pedagogy of Bess Mensendieck's physical culture system.
Veder, Robin
2011-01-01
This essay examines the images and looking practices central to Bess M. Mensendieck's (c.1866-1959) 'functional exercise' system, as documented in physical culture treatises published in Germany and the United States between 1906 and 1937. Believing that muscular realignment could not occur without seeing how the body worked, Mensendieck taught adult non-athletes to see skeletal alignment and muscular movement in their own and others' bodies. Three levels of looking practices are examined: didactic sequences; penetrating inspection and appreciation of physiological structures; and ideokinetic visual metaphors for guiding movement. With these techniques, Mensendieck's work bridged the body cultures of German Nacktkultur (nudism), American labour efficiency and the emerging physical education profession. This case study demonstrates how sport historians could expand their analyses to include practices of looking as well as questions of visual representation.
Flaisch, Tobias; Imhof, Martin; Schmälzle, Ralf; Wentz, Klaus-Ulrich; Ibach, Bernd; Schupp, Harald T
2015-01-01
The present study utilized functional magnetic resonance imaging (fMRI) to examine the neural processing of concurrently presented emotional stimuli under varying explicit and implicit attention demands. Specifically, in separate trials, participants indicated the category of either pictures or words. The words were placed over the center of the pictures and the picture-word compound-stimuli were presented for 1500 ms in a rapid event-related design. The results reveal pronounced main effects of task and emotion: the picture categorization task prompted strong activations in visual, parietal, temporal, frontal, and subcortical regions; the word categorization task evoked increased activation only in left extrastriate cortex. Furthermore, beyond replicating key findings regarding emotional picture and word processing, the results point to a dissociation of semantic-affective and sensory-perceptual processes for words: while emotional words engaged semantic-affective networks of the left hemisphere regardless of task, the increased activity in left extrastriate cortex associated with explicitly attending to words was diminished when the word was overlaid over an erotic image. Finally, we observed a significant interaction between Picture Category and Task within dorsal visual-associative regions, inferior parietal, and dorsolateral, and medial prefrontal cortices: during the word categorization task, activation was increased in these regions when the words were overlaid over erotic as compared to romantic pictures. During the picture categorization task, activity in these areas was relatively decreased when categorizing erotic as compared to romantic pictures. Thus, the emotional intensity of the pictures strongly affected brain regions devoted to the control of task-related word or picture processing. These findings are discussed with respect to the interplay of obligatory stimulus processing with task-related attentional control mechanisms.
Flaisch, Tobias; Imhof, Martin; Schmälzle, Ralf; Wentz, Klaus-Ulrich; Ibach, Bernd; Schupp, Harald T.
2015-01-01
The present study utilized functional magnetic resonance imaging (fMRI) to examine the neural processing of concurrently presented emotional stimuli under varying explicit and implicit attention demands. Specifically, in separate trials, participants indicated the category of either pictures or words. The words were placed over the center of the pictures and the picture-word compound-stimuli were presented for 1500 ms in a rapid event-related design. The results reveal pronounced main effects of task and emotion: the picture categorization task prompted strong activations in visual, parietal, temporal, frontal, and subcortical regions; the word categorization task evoked increased activation only in left extrastriate cortex. Furthermore, beyond replicating key findings regarding emotional picture and word processing, the results point to a dissociation of semantic-affective and sensory-perceptual processes for words: while emotional words engaged semantic-affective networks of the left hemisphere regardless of task, the increased activity in left extrastriate cortex associated with explicitly attending to words was diminished when the word was overlaid over an erotic image. Finally, we observed a significant interaction between Picture Category and Task within dorsal visual-associative regions, inferior parietal, and dorsolateral, and medial prefrontal cortices: during the word categorization task, activation was increased in these regions when the words were overlaid over erotic as compared to romantic pictures. During the picture categorization task, activity in these areas was relatively decreased when categorizing erotic as compared to romantic pictures. Thus, the emotional intensity of the pictures strongly affected brain regions devoted to the control of task-related word or picture processing. These findings are discussed with respect to the interplay of obligatory stimulus processing with task-related attentional control mechanisms. PMID:26733895
Thomas, Anthony; Eichenberger, Gary; Kempton, Curtis; Pape, Darin; York, Sarah; Decker, Ann Marie; Kohia, Mohamed
2009-01-01
This literature review evaluates current research articles pertinent to physical therapy treatment of osteoarthritis (OA) of the knee. Osteoarthritis of the knee is an increasingly common diagnosis, with a prognosis that can lead to loss of an individual's functional abilities. Literature on the subject of OA and its physical therapy treatment is vast and current; however, obtaining and analyzing it can be time-consuming and costly for a physical therapist. The primary aim of this paper is to review current trends for treatment of OA of the knee and to compare each intervention for effectiveness. This article provides a systematic categorization as well as recommendations for physical therapists based on current (1996 or later) literature. Twenty-two articles were located using various online databases, critically analyzed, and categorized using Sackett's levels of evidence. Recommendations for the treatment of OA of the knee by a physical therapist were then made. Two grade A recommendations, five grade B recommendations, and two grade C recommendations were made from the categorization of the articles. This article also contains recommendations outside the scope of a therapist's practice, which a physical therapist could consider when treating a patient with knee osteoarthritis. Further research recommendations are also provided.
Odour discrimination and identification are improved in early blindness.
Cuevas, Isabel; Plaza, Paula; Rombaux, Philippe; De Volder, Anne G; Renier, Laurent
2009-12-01
Previous studies showed that early blind humans develop superior abilities in the use of their remaining senses, hypothetically due to a functional reorganization of the deprived visual brain areas. While auditory and tactile functions have long been investigated, little is known about the effects of early visual deprivation on olfactory processing. However, blind humans make extensive use of olfactory information in their daily life. Here we investigated olfactory discrimination and identification abilities in early blind subjects and age-matched sighted controls. Three levels of cuing were used in the identification task, i.e., free-identification (no cue), categorization (semantic cues) and multiple choice (semantic and phonological cues). Early blind subjects significantly outperformed the controls in odour discrimination, free-identification and categorization. In addition, the largest group difference was observed in the free-identification condition as compared to the categorization and multiple-choice conditions. This indicated that better access to semantic information from odour perception accounted for part of the improved odour identification performance in the blind. We concluded that early blind subjects have both improved perceptual abilities and better access to information stored in semantic memory compared with sighted subjects.
The relationship between visual search and categorization of own- and other-age faces.
Craig, Belinda M; Lipp, Ottmar V
2018-03-13
Young adult participants are faster to detect young adult faces in crowds of infant and child faces than vice versa. These findings have been interpreted as evidence for more efficient attentional capture by own-age than other-age faces, but could alternatively reflect faster rejection of other-age than own-age distractors, consistent with the previously reported other-age categorization advantage: faster categorization of other-age than own-age faces. Participants searched for own-age faces in other-age backgrounds or vice versa. Extending the finding to different other-age groups, young adult participants were faster to detect young adult faces in both early adolescent (Experiment 1) and older adult backgrounds (Experiment 2). To investigate whether the own-age detection advantage could be explained by faster categorization and rejection of other-age background faces, participants in experiments 3 and 4 also completed an age categorization task. Relatively faster categorization of other-age faces was related to relatively faster search through other-age backgrounds on target absent trials but not target present trials. These results confirm that other-age faces are more quickly categorized and searched through and that categorization and search processes are related; however, this correlational approach could not confirm or reject the contribution of background face processing to the own-age detection advantage. © 2018 The British Psychological Society.
Unal, Melih H; Aydin, Ali; Sonmez, Murat; Ayata, Ali; Ersanli, Dilaver
2008-01-01
To evaluate the prognostic value of the Ocular Trauma Score (OTS) in cases of deadly weapon-related open-globe injuries with intraocular foreign bodies. A retrospective, interventional case series included 20 eyes of 20 patients who had deadly weapon-related open-globe injuries with intraocular foreign bodies. The OTS was calculated for each patient by adding the determined numbers of OTS variables at presentation (initial visual acuity, rupture, endophthalmitis, perforating injury, retinal detachment, and afferent pupillary defect). Patients were categorized based on their score (category 1 through 5). Final visual acuities in the OTS categories were calculated and compared to those in OTS study group. No statistically significant difference was found between the categorical distributions of the study patients and those in the OTS study group. No patient in the study was in category 5. The OTS, which was designed to predict visual outcomes of general ocular trauma, may also provide reliable information about the prognosis of deadly weapon-related open-globe injuries with intraocular foreign bodies.
Instance-based categorization: automatic versus intentional forms of retrieval.
Neal, A; Hesketh, B; Andrews, S
1995-03-01
Two experiments are reported which attempt to disentangle the relative contribution of intentional and automatic forms of retrieval to instance-based categorization. A financial decision-making task was used in which subjects had to decide whether a bank would approve loans for a series of applicants. Experiment 1 found that categorization was sensitive to instance-specific knowledge, even when subjects had practiced using a simple rule. L. L. Jacoby's (1991) process-dissociation procedure was adapted for use in Experiment 2 to infer the relative contribution of intentional and automatic retrieval processes to categorization decisions. The results provided (1) strong evidence that intentional retrieval processes influence categorization, and (2) some preliminary evidence suggesting that automatic retrieval processes may also contribute to categorization decisions.
Risk based meat hygiene--examples on food chain information and visual meat inspection.
Ellerbroek, Lüppo
2007-08-01
Regulation (EC) No. 854/2004 refers in particular to a broad spectrum of official supervision of products of animal origin. The supervision should be based on the most current relevant information available. The proposal presented includes forms for the implementation of Food Chain Information (FCI) as laid down in Regulation (EC) No. 853/2004, as well as suggestions for the implementation of visual inspection and criteria for a practical approach to assist the competent authority and the business operator. These suggestions are summarised in two forms comprising the FCI and the practical information from the farm of origin and from the slaughterhouse that the competent authority needs in order to permit visual meat inspection of fattening pigs. The information requested from the farm level includes, for example, the animal loss rate during the fattening period and diagnostic findings on carcasses and organs during the meat inspection procedure. The evaluation of findings in liver and pleura at the slaughterhouse level indicates a correlation with the general health status under the fattening conditions at farm level. The criteria developed are in principle suited to serve as a "gold standard" for the health status of fattening pigs, and they may serve as a practical implementation of the term Good Farming Practice in relation to consumer protection. Visual meat inspection shall be permitted only for fattening pigs deriving from farms fulfilling this "gold standard". It is proposed to integrate the two forms for FCI and the additional information for authorisation of visual meat inspection of fattening pigs into the working document of DG SANCO titled "Commission regulation of laying down specific rules on official controls for the inspection of meat" (SANCO/2696/2006).
In vivo spectral micro-imaging of tissue
Demos, Stavros G; Urayama, Shiro; Lin, Bevin; Saroufeem, Ramez; Ghobrial, Moussa
2012-11-27
In vivo endoscopic methods and apparatuses for the implementation of fluorescence and autofluorescence microscopy, with and without the use of exogenous agents, effectively visualize and categorize various abnormal tissue forms, with resolution sufficient to image nuclei.
Shimp, Charles P
2004-06-30
Research on categorization has changed over time, and some of these changes resemble how Wittgenstein's views changed from his Tractatus Logico-Philosophicus to his Philosophical Investigations. Wittgenstein initially focused on unambiguous, abstract, parsimonious, logical propositions and rules, and on independent, static, "atomic facts." This approach subsequently influenced the development of logical positivism and thereby may have indirectly influenced method and theory in research on categorization: much animal research on categorization has focused on learning simple, static, logical rules unambiguously interrelating small numbers of independent features. He later rejected logical simplicity and rigor and focused instead on Gestalt ideas about figure-ground reversals and context, the ambiguity of family resemblance, and the function of details of everyday language. Contemporary contextualism has been influenced by this latter position, some features of which appear in contemporary empirical research on categorization. These developmental changes are illustrated by research on avian local and global levels of visual perceptual analysis, categorization of rectangles and moving objects, and artificial grammar learning. Implications are described for peer review of quantitative theory in which ambiguity, logical rigor, simplicity, or dynamics are designed to play important roles.
Bangdiwala, Shrikant I
2017-01-01
When studying the agreement between two observers rating the same n units into the same k discrete ordinal categories, Bangdiwala (1985) proposed using the "agreement chart" to visually assess agreement. This article proposes that often it is more interesting to focus on the patterns of disagreement and visually understanding the departures from perfect agreement. The article reviews the use of graphical techniques for descriptively assessing agreement and disagreements, and also reviews some of the available summary statistics that quantify such relationships.
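A minimal sketch of the quantities behind such an agreement/disagreement display: cross-classify the two observers' ratings, read exact agreement off the diagonal, and summarize disagreements by how many ordinal categories they fall from the diagonal. The Python example below uses a made-up 4-category table rather than data from the article:

import numpy as np

# Hypothetical cross-classification of n units rated by two observers into k = 4 ordinal categories.
table = np.array([[20,  4,  1,  0],
                  [ 3, 25,  5,  1],
                  [ 0,  6, 18,  4],
                  [ 0,  1,  3, 15]])

n = table.sum()
agree = np.trace(table) / n                  # proportion of exact agreement (diagonal cells)
rows, cols = np.indices(table.shape)
steps = np.abs(rows - cols)                  # how many categories apart each cell lies
disagreement_by_step = {d: int(table[steps == d].sum()) for d in range(1, table.shape[0])}

print(f"exact agreement: {agree:.2f}")
print("disagreements by distance from the diagonal:", disagreement_by_step)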
Rapid recalibration of speech perception after experiencing the McGurk illusion
Pérez-Bellido, Alexis; de Lange, Floris P.
2018-01-01
The human brain can quickly adapt to changes in the environment. One example is phonetic recalibration: a speech sound is interpreted differently depending on the visual speech and this interpretation persists in the absence of visual information. Here, we examined the mechanisms of phonetic recalibration. Participants categorized the auditory syllables /aba/ and /ada/, which were sometimes preceded by the so-called McGurk stimuli (in which an /aba/ sound, due to visual /aga/ input, is often perceived as ‘ada’). We found that only one trial of exposure to the McGurk illusion was sufficient to induce a recalibration effect, i.e. an auditory /aba/ stimulus was subsequently more often perceived as ‘ada’. Furthermore, phonetic recalibration took place only when auditory and visual inputs were integrated to ‘ada’ (McGurk illusion). Moreover, this recalibration depended on the sensory similarity between the preceding and current auditory stimulus. Finally, signal detection theoretical analysis showed that McGurk-induced phonetic recalibration resulted in both a criterion shift towards /ada/ and a reduced sensitivity to distinguish between /aba/ and /ada/ sounds. The current study shows that phonetic recalibration is dependent on the perceptual integration of audiovisual information and leads to a perceptual shift in phoneme categorization. PMID:29657743
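As a worked illustration of the signal detection theoretic analysis mentioned above (with hypothetical response rates, not the study's data), the Python snippet below computes sensitivity d' and criterion c from the proportion of 'ada' responses to /ada/ sounds (hits) and to /aba/ sounds (false alarms); under this convention a more negative criterion after McGurk exposure corresponds to a shift towards responding 'ada':

from statistics import NormalDist

z = NormalDist().inv_cdf   # probit transform

def dprime_and_criterion(hit_rate, false_alarm_rate):
    """Sensitivity d' and criterion c from hit and false-alarm rates."""
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

# Hypothetical response rates before and after exposure to the McGurk stimulus.
for label, hr, far in [("baseline", 0.90, 0.10),
                       ("after McGurk", 0.92, 0.25)]:
    d, c = dprime_and_criterion(hr, far)
    print(f"{label}: d' = {d:.2f}, c = {c:+.2f}")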
ERIC Educational Resources Information Center
Aleman, Cheryl; And Others
1990-01-01
Compares auditory/visual practice to visual/motor practice in spelling with seven elementary school learning-disabled students enrolled in a resource room setting. Finds that the auditory/visual practice was superior to the visual/motor practice on the weekly spelling performance for all seven students. (MG)
Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Torralba, Antonio; Oliva, Aude
2016-01-01
The complex multi-stage architecture of cortical visual pathways provides the neural basis for efficient visual object recognition in humans. However, the stage-wise computations therein remain poorly understood. Here, we compared temporal (magnetoencephalography) and spatial (functional MRI) visual brain representations with representations in an artificial deep neural network (DNN) tuned to the statistics of real-world visual recognition. We showed that the DNN captured the stages of human visual processing in both time and space from early visual areas towards the dorsal and ventral streams. Further investigation of crucial DNN parameters revealed that while model architecture was important, training on real-world categorization was necessary to enforce spatio-temporal hierarchical relationships with the brain. Together our results provide an algorithmically informed view on the spatio-temporal dynamics of visual object recognition in the human visual brain. PMID:27282108
Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Torralba, Antonio; Oliva, Aude
2016-06-10
The complex multi-stage architecture of cortical visual pathways provides the neural basis for efficient visual object recognition in humans. However, the stage-wise computations therein remain poorly understood. Here, we compared temporal (magnetoencephalography) and spatial (functional MRI) visual brain representations with representations in an artificial deep neural network (DNN) tuned to the statistics of real-world visual recognition. We showed that the DNN captured the stages of human visual processing in both time and space from early visual areas towards the dorsal and ventral streams. Further investigation of crucial DNN parameters revealed that while model architecture was important, training on real-world categorization was necessary to enforce spatio-temporal hierarchical relationships with the brain. Together our results provide an algorithmically informed view on the spatio-temporal dynamics of visual object recognition in the human visual brain.
Lin, Nan; Guo, Qihao; Han, Zaizhu; Bi, Yanchao
2011-11-01
Neuropsychological and neuroimaging studies have indicated that motor knowledge is one potential dimension along which concepts are organized. Here we present further direct evidence for the effects of motor knowledge in accounting for categorical patterns across object domains (living vs. nonliving) and grammatical domains (nouns vs. verbs), as well as the integrity of other modality-specific knowledge (e.g., visual). We present a Chinese case, XRK, who suffered from semantic dementia with left temporal lobe atrophy. In naming and comprehension tasks, he performed better at nonliving items than at living items, and better at verbs than at nouns. Critically, a multiple regression method revealed that these two categorical effects could both be accounted for by the charade rating, a continuous measurement of the significance of motor knowledge for a concept or a semantic feature. Furthermore, charade rating also predicted his performance on the generation frequency of semantic features of various modalities. These findings consolidate the significance of motor knowledge in conceptual organization and further highlight the interactions between different types of semantic knowledge. Copyright © 2010 Elsevier Inc. All rights reserved.
General Analytical Schemes for the Characterization of Pectin-Based Edible Gelled Systems
Haghighi, Maryam; Rezaei, Karamatollah
2012-01-01
Pectin-based gelled systems have gained increasing attention for the design of newly developed food products. For this reason, the characterization of such formulas is necessary in order to present scientific data and to introduce an appropriate finished product to the industry. Various analytical techniques are available for the evaluation of systems formulated on the basis of pectin and the designed gel. In this paper, general analytical approaches for the characterization of pectin-based gelled systems were categorized into several subsections including physicochemical analysis, visual observation, textural/rheological measurement, microstructural image characterization, and psychorheological evaluation. Three-dimensional trials to assess correlations among microstructure, texture, and taste were also discussed. Practical examples of advanced objective techniques, including experimental setups for small and large deformation rheological measurements and microstructural image analysis, were presented in more detail. PMID:22645484
Common data manipulations with R in biological researches
Liu, Qin
2017-01-01
R is a computer language that has been widely used in the science community due to its powerful capabilities in data analysis and visualization; these functions are mainly provided by contributed packages. Because every package has strict format requirements for its input data, it is usually necessary to manipulate the original data appropriately in advance. Unfortunately, users, especially beginners, are often confused by R's extreme flexibility in data manipulation. In the present paper, we roughly categorize the common manipulations of biological data with R into four classes: overview of data, transformation, summarization, and reshaping. These manipulations are then exemplified with a sample dataset of clinical records of diabetic patients. Our main purpose is to provide a clearer landscape of data manipulation with R and hence facilitate its practical application in biological research. PMID:28840022
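The paper's four classes of manipulation are largely language-agnostic. As a rough analogue (in Python/pandas rather than R, and with a made-up stand-in for the clinical records rather than the paper's sample data), the sketch below walks through overview, transformation, summarization, and reshaping:

import pandas as pd

# Made-up stand-in for clinical records of diabetic patients (not the paper's data).
records = pd.DataFrame({
    "patient": ["p1", "p1", "p2", "p2", "p3"],
    "visit":   [1, 2, 1, 2, 1],
    "glucose": [7.8, 6.9, 9.1, 8.4, 5.6],   # mmol/L
    "hba1c":   [7.2, 6.8, 8.5, 8.1, 5.9],   # %
})

# 1. Overview of the data
print(records.describe())

# 2. Transformation: derive a new variable (mmol/L -> mg/dL)
records["glucose_mgdl"] = records["glucose"] * 18.0

# 3. Summarization: aggregate per patient
per_patient = records.groupby("patient")[["glucose", "hba1c"]].mean()
print(per_patient)

# 4. Reshaping: long -> wide, one column per visit
wide = records.pivot(index="patient", columns="visit", values="glucose")
print(wide)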
Care maps for children with medical complexity.
Adams, Sherri; Nicholas, David; Mahant, Sanjay; Weiser, Natalie; Kanani, Ronik; Boydell, Katherine; Cohen, Eyal
2017-12-01
Children with medical complexity require multiple providers and services to keep them well and at home. A care map is a patient/family-created diagram that pictorially maps out this complex web of services. This study explored what care maps mean for families and healthcare providers to inform potential for clinical use. Parents (n=15) created care maps (hand drawn n=10 and computer-generated n=5) and participated in semi-structured interviews about the process of developing care maps and their perceived impact. Healthcare providers (n=30) reviewed the parent-created care maps and participated in semi-structured interviews. Data were analysed for themes and emerging theory using a grounded theory analytical approach. Data analysis revealed 13 overarching themes that were further categorized into three domains: features (characteristics of care maps), functions (what care maps do), and emerging outcomes (benefits of care map use). These domains further informed a definition and a theoretical model of how care maps work. Our findings suggest that care maps may be a way of supporting patient- and family-centred care by graphically identifying and integrating experiences of the family as well as priorities for moving forward. Care maps were endorsed as a useful tool by families and providers. They help healthcare providers better understand parental priorities for care. Parents can create care maps to demonstrate the complex burden of care. They are a unique visual way to incorporate narrative medicine into practice. © 2017 Mac Keith Press.
How Fast is Famous Face Recognition?
Barragan-Jason, Gladys; Lachat, Fanny; Barbeau, Emmanuel J.
2012-01-01
The rapid recognition of familiar faces is crucial for social interactions. However the actual speed with which recognition can be achieved remains largely unknown as most studies have been carried out without any speed constraints. Different paradigms have been used, leading to conflicting results, and although many authors suggest that face recognition is fast, the speed of face recognition has not been directly compared to “fast” visual tasks. In this study, we sought to overcome these limitations. Subjects performed three tasks, a familiarity categorization task (famous faces among unknown faces), a superordinate categorization task (human faces among animal ones), and a gender categorization task. All tasks were performed under speed constraints. The results show that, despite the use of speed constraints, subjects were slow when they had to categorize famous faces: minimum reaction time was 467 ms, which is 180 ms more than during superordinate categorization and 160 ms more than in the gender condition. Our results are compatible with a hierarchy of face processing from the superordinate level to the familiarity level. The processes taking place between detection and recognition need to be investigated in detail. PMID:23162503
Paleolithic art and cognition.
Halverson, J
1992-05-01
In this article, I have explored some of the possible relationships between the first appearance of representational art in human history and the early development of human cognition. I argue that most Upper Paleolithic depictions directly represent generalized mental images of their animal subjects rather than percepts or recollected scenes from life and that these images, in turn, are representations of concepts at the basic level of categorization. A common feature of Paleolithic art forms is the salience of parts, and the treatment of parts indicates analytic and synthetic (recombinative) abilities. There are some indications of superordinate categorization. An expansion of conceptual thinking seems to be implied as well as the beginnings of operational thought. The presence and practice of depiction may have had the effect of bringing concepts into consciousness and thus inducing reflection in at least a partially abstract mode.
The influence of print exposure on the body-object interaction effect in visual word recognition.
Hansen, Dana; Siakaluk, Paul D; Pexman, Penny M
2012-01-01
We examined the influence of print exposure on the body-object interaction (BOI) effect in visual word recognition. High print exposure readers and low print exposure readers either made semantic categorizations ("Is the word easily imageable?"; Experiment 1) or phonological lexical decisions ("Does the item sound like a real English word?"; Experiment 2). The results from Experiment 1 showed that there was a larger BOI effect for the low print exposure readers than for the high print exposure readers in semantic categorization, though an effect was observed for both print exposure groups. However, the results from Experiment 2 showed that the BOI effect was observed only for the high print exposure readers in phonological lexical decision. The results of the present study suggest that print exposure does influence the BOI effect, and that this influence varies as a function of task demands.
Doors for memory: A searchable database.
Baddeley, Alan D; Hitch, Graham J; Quinlan, Philip T; Bowes, Lindsey; Stone, Rob
2016-11-01
The study of human long-term memory has for over 50 years been dominated by research on words. This is partly due to lack of suitable nonverbal materials. Experience in developing a clinical test suggested that door scenes can provide an ecologically relevant and sensitive alternative to the faces and geometrical figures traditionally used to study visual memory. In pursuing this line of research, we have accumulated over 2000 door scenes providing a database that is categorized on a range of variables including building type, colour, age, condition, glazing, and a range of other physical characteristics. We describe an illustrative study of recognition memory for 100 doors tested by yes/no, two-alternative, or four-alternative forced-choice paradigms. These stimuli, together with the full categorized database, are available through a dedicated website. We suggest that door scenes provide an ecologically relevant and participant-friendly source of material for studying the comparatively neglected field of visual long-term memory.
Student Researchers in the Middle: Using Visual Images to Make Sense of Inclusive Education
ERIC Educational Resources Information Center
Moss, Julianne; Deppeler, Joanne; Astley, Lesley; Pattison, Kevin
2007-01-01
Using "visual narrative" theoretically and practically, this paper explores issues of inclusive education, during a period of curriculum reform and renewal in Australia. In Australia, the middle years of schooling, Years 5 to 9, are well researched and known as a period when students disengage with learning and participation in…
Williams, Jennifer Stewart
2011-07-01
To show how fractional polynomial methods can usefully replace the practice of arbitrarily categorizing data in epidemiology and health services research. A health service setting is used to illustrate a structured and transparent way of representing non-linear data without arbitrary grouping. When age is a regressor, its effects on an outcome will be interpreted differently depending upon the placing of cutpoints or the use of a polynomial transformation. Although it is common practice, categorization comes at a cost. Information is lost, and accuracy and statistical power are reduced, leading to spurious statistical interpretation of the data. The fractional polynomial method is widely supported by statistical software programs, and deserves greater attention and use.
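As a rough illustration of the contrast the authors describe, the sketch below (synthetic data and hypothetical variable names, not taken from the paper) fits each power in the standard first-degree fractional polynomial set and compares the best one, by AIC, with a model that bins age into arbitrary decade bands.

```python
# A minimal sketch of first-degree fractional polynomial (FP1) selection versus
# arbitrary categorization of age, using synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
age = rng.uniform(20, 90, 500)
risk = 0.002 * age**2 + rng.normal(0, 2, 500)    # hypothetical non-linear outcome

def fp_term(x, p):
    """Fractional polynomial transform; p == 0 means log(x) by convention."""
    return np.log(x) if p == 0 else x**p

powers = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]         # standard FP1 power set
fits = {}
for p in powers:
    X = sm.add_constant(fp_term(age, p))
    fits[p] = sm.OLS(risk, X).fit()
best_p = min(fits, key=lambda p: fits[p].aic)    # lowest AIC = best-fitting power

# Arbitrary categorization: decade bands coded as dummy variables.
bands = np.digitize(age, [30, 40, 50, 60, 70, 80])
dummies = np.eye(bands.max() + 1)[bands][:, 1:]  # drop the reference band
cat_fit = sm.OLS(risk, sm.add_constant(dummies)).fit()

print(f"FP1 power {best_p}: AIC {fits[best_p].aic:.1f}")
print(f"Decade bands:   AIC {cat_fit.aic:.1f}")
```

On data with a genuinely curved age effect, the fractional polynomial model typically fits better with fewer parameters than the banded model, which is the cost of categorization the abstract refers to.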
Jozwik, Kamila M.; Kriegeskorte, Nikolaus; Storrs, Katherine R.; Mur, Marieke
2017-01-01
Recent advances in Deep convolutional Neural Networks (DNNs) have enabled unprecedentedly accurate computational models of brain representations, and present an exciting opportunity to model diverse cognitive functions. State-of-the-art DNNs achieve human-level performance on object categorisation, but it is unclear how well they capture human behavior on complex cognitive tasks. Recent reports suggest that DNNs can explain significant variance in one such task, judging object similarity. Here, we extend these findings by replicating them for a rich set of object images, comparing performance across layers within two DNNs of different depths, and examining how the DNNs’ performance compares to that of non-computational “conceptual” models. Human observers performed similarity judgments for a set of 92 images of real-world objects. Representations of the same images were obtained in each of the layers of two DNNs of different depths (8-layer AlexNet and 16-layer VGG-16). To create conceptual models, other human observers generated visual-feature labels (e.g., “eye”) and category labels (e.g., “animal”) for the same image set. Feature labels were divided into parts, colors, textures and contours, while category labels were divided into subordinate, basic, and superordinate categories. We fitted models derived from the features, categories, and from each layer of each DNN to the similarity judgments, using representational similarity analysis to evaluate model performance. In both DNNs, similarity within the last layer explains most of the explainable variance in human similarity judgments. The last layer outperforms almost all feature-based models. Late and mid-level layers outperform some but not all feature-based models. Importantly, categorical models predict similarity judgments significantly better than any DNN layer. Our results provide further evidence for commonalities between DNNs and brain representations. Models derived from visual features other than object parts perform relatively poorly, perhaps because DNNs more comprehensively capture the colors, textures and contours which matter to human object perception. However, categorical models outperform DNNs, suggesting that further work may be needed to bring high-level semantic representations in DNNs closer to those extracted by humans. Modern DNNs explain similarity judgments remarkably well considering they were not trained on this task, and are promising models for many aspects of human cognition. PMID:29062291
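The comparison of layer representations with similarity judgments rests on representational similarity analysis; the sketch below is a minimal, generic version of that idea using random stand-in data (the array shapes and variable names are assumptions, not the authors' pipeline).

```python
# A minimal sketch of RSA: correlate a model RDM built from layer activations
# with a human dissimilarity matrix for the same images.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_images = 92
layer_activations = rng.normal(size=(n_images, 4096))   # stand-in for one DNN layer
human_dissim = rng.uniform(size=(n_images, n_images))    # stand-in for judgments
human_dissim = (human_dissim + human_dissim.T) / 2       # symmetrize

# Model RDM: pairwise correlation distance between image activation patterns.
model_rdm = pdist(layer_activations, metric="correlation")

# Compare only the unique image pairs (upper triangle) using Spearman's rho,
# the rank correlation typically used in RSA.
iu = np.triu_indices(n_images, k=1)
rho, p = spearmanr(model_rdm, human_dissim[iu])
print(f"layer-to-behaviour RDM correlation: rho = {rho:.3f}")
```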
Joyce Clifford the Scholar: In Her Own Words.
Fulmer, Terry; Gibbons, M Patricia
2015-01-01
Dr Joyce H. Clifford was world renowned for her excellence in nursing administration and leadership. The purpose of this article is to examine her complete body of published scholarship and analyze her papers as a method of understanding her intellectual progression as a leader in the discipline, as well as to document how her conceptualization of professional practice and the practice environment advanced nursing practice and patient and family care. Using the qualitative method of narrative inquiry, a systematic analysis of her papers was conducted to describe the evolution of her scholarship and her impact on the discipline and patient care. We reviewed all known existing papers, categorized them into 3 stages, and discuss them here. Using quotes from her work, we have added her voice to the compelling professional practice issues she addressed in her lifetime.
Zelinsky, Gregory J; Peng, Yifan; Berg, Alexander C; Samaras, Dimitris
2013-10-08
Search is commonly described as a repeating cycle of guidance to target-like objects, followed by the recognition of these objects as targets or distractors. Are these indeed separate processes using different visual features? We addressed this question by comparing observer behavior to that of support vector machine (SVM) models trained on guidance and recognition tasks. Observers searched for a categorically defined teddy bear target in four-object arrays. Target-absent trials consisted of random category distractors rated in their visual similarity to teddy bears. Guidance, quantified as first-fixated objects during search, was strongest for targets, followed by target-similar, medium-similarity, and target-dissimilar distractors. False positive errors to first-fixated distractors also decreased with increasing dissimilarity to the target category. To model guidance, nine teddy bear detectors, using features ranging in biological plausibility, were trained on unblurred bears then tested on blurred versions of the same objects appearing in each search display. Guidance estimates were based on target probabilities obtained from these detectors. To model recognition, nine bear/nonbear classifiers, trained and tested on unblurred objects, were used to classify the object that would be fixated first (based on the detector estimates) as a teddy bear or a distractor. Patterns of categorical guidance and recognition accuracy were modeled almost perfectly by an HMAX model in combination with a color histogram feature. We conclude that guidance and recognition in the context of search are not separate processes mediated by different features, and that what the literature knows as guidance is really recognition performed on blurred objects viewed in the visual periphery.
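A minimal sketch of the modelling logic described above, using synthetic feature vectors rather than the study's actual HMAX or colour-histogram features: one SVM trained on clear exemplars is applied to blurred ones to model guidance, and a second SVM models recognition of the first-fixated object.

```python
# Guidance vs. recognition as two SVMs over the same object set (illustrative only).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n, d = 200, 50
X_clear = rng.normal(size=(n, d))
y = (rng.uniform(size=n) > 0.5).astype(int)               # 1 = teddy bear, 0 = distractor
X_clear[y == 1] += 0.8                                     # give targets a feature offset
X_blurred = X_clear + rng.normal(scale=1.0, size=(n, d))   # peripheral blur as added noise

guidance = SVC(kernel="linear", probability=True).fit(X_clear, y)
recognition = SVC(kernel="linear").fit(X_clear, y)

# Guidance: target probability for each (blurred) object determines which object
# would be fixated first; recognition then classifies the clear view of it.
p_target = guidance.predict_proba(X_blurred)[:, 1]
first_fixated = int(np.argmax(p_target))
label = recognition.predict(X_clear[first_fixated:first_fixated + 1])[0]
print(f"first-fixated object {first_fixated} classified as "
      f"{'bear' if label == 1 else 'distractor'}")
```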
Zelinsky, Gregory J.; Peng, Yifan; Berg, Alexander C.; Samaras, Dimitris
2013-01-01
Search is commonly described as a repeating cycle of guidance to target-like objects, followed by the recognition of these objects as targets or distractors. Are these indeed separate processes using different visual features? We addressed this question by comparing observer behavior to that of support vector machine (SVM) models trained on guidance and recognition tasks. Observers searched for a categorically defined teddy bear target in four-object arrays. Target-absent trials consisted of random category distractors rated in their visual similarity to teddy bears. Guidance, quantified as first-fixated objects during search, was strongest for targets, followed by target-similar, medium-similarity, and target-dissimilar distractors. False positive errors to first-fixated distractors also decreased with increasing dissimilarity to the target category. To model guidance, nine teddy bear detectors, using features ranging in biological plausibility, were trained on unblurred bears then tested on blurred versions of the same objects appearing in each search display. Guidance estimates were based on target probabilities obtained from these detectors. To model recognition, nine bear/nonbear classifiers, trained and tested on unblurred objects, were used to classify the object that would be fixated first (based on the detector estimates) as a teddy bear or a distractor. Patterns of categorical guidance and recognition accuracy were modeled almost perfectly by an HMAX model in combination with a color histogram feature. We conclude that guidance and recognition in the context of search are not separate processes mediated by different features, and that what the literature knows as guidance is really recognition performed on blurred objects viewed in the visual periphery. PMID:24105460
Dichoptic training in adults with amblyopia: Additional stereoacuity gains over monocular training.
Liu, Xiang-Yun; Zhang, Jun-Yun
2017-08-04
Dichoptic training is a recent focus of research on perceptual learning in adults with amblyopia, but whether and how dichoptic training is superior to traditional monocular training is unclear. Here we investigated whether dichoptic training could further boost visual acuity and stereoacuity in monocularly well-trained adult amblyopic participants. During dichoptic training, the participants used the amblyopic eye to practice a contrast discrimination task, while a band-filtered noise masker was simultaneously presented in the non-amblyopic fellow eye. Dichoptic learning was indexed by the increase of maximal tolerable noise contrast for successful contrast discrimination in the amblyopic eye. The results showed that practice tripled maximal tolerable noise contrast in 13 monocularly well-trained amblyopic participants. Moreover, the training further improved stereoacuity by 27% beyond the 55% gain from previous monocular training, but left the visual acuity of the amblyopic eyes unchanged. Therefore, our dichoptic training method may produce extra gains of stereoacuity, but not visual acuity, in adults with amblyopia after monocular training. Copyright © 2017 Elsevier Ltd. All rights reserved.
Blue-green color categorization in Mandarin-English speakers.
Wuerger, Sophie; Xiao, Kaida; Mylonas, Dimitris; Huang, Qingmei; Karatzas, Dimosthenis; Hird, Emily; Paramei, Galina
2012-02-01
Observers are faster to detect a target among a set of distracters if the targets and distracters come from different color categories. This cross-boundary advantage seems to be limited to the right visual field, which is consistent with the dominance of the left hemisphere for language processing [Gilbert et al., Proc. Natl. Acad. Sci. USA 103, 489 (2006)]. Here we study whether a similar visual field advantage is found in the color identification task in speakers of Mandarin, a language that uses a logographic system. Forty late Mandarin-English bilinguals performed a blue-green color categorization task, in a blocked design, in their first language (L1: Mandarin) or second language (L2: English). Eleven color singletons ranging from blue to green were presented for 160 ms, randomly in the left visual field (LVF) or right visual field (RVF). Color boundary and reaction times (RTs) at the color boundary were estimated in L1 and L2, for both visual fields. We found that the color boundary did not differ between the languages; RTs at the color boundary, however, were on average more than 100 ms shorter in the English compared to the Mandarin sessions, but only when the stimuli were presented in the RVF. The finding may be explained by the script nature of the two languages: Mandarin logographic characters are analyzed visuospatially in the right hemisphere, which conceivably facilitates identification of color presented to the LVF. © 2012 Optical Society of America
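Estimating a colour boundary from categorization responses is typically done by fitting a psychometric function; the sketch below shows one generic way to do this on simulated "green" response proportions (the data and the logistic form are assumptions, not the study's analysis).

```python
# Estimate a blue-green category boundary by fitting a logistic psychometric
# function to the proportion of "green" responses along an 11-step continuum.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, boundary, slope):
    return 1.0 / (1.0 + np.exp(-(x - boundary) / slope))

steps = np.arange(11)                        # 11 singletons from blue to green
true_boundary = 5.3
p_green = logistic(steps, true_boundary, 0.8)
rng = np.random.default_rng(3)
responses = rng.binomial(40, p_green) / 40   # 40 simulated trials per step

(boundary, slope), _ = curve_fit(logistic, steps, responses, p0=[5.0, 1.0])
print(f"estimated boundary at step {boundary:.2f} (slope {slope:.2f})")
```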
Individual differences in spatial relation processing: effects of strategy, ability, and gender
van der Ham, Ineke J. M.; Borst, Gregoire
2011-01-01
Numerous studies have focused on the distinction between categorical and coordinate spatial relations. Categorical relations are propositional and abstract, and often related to a left hemisphere advantage. Coordinate relations specify the metric information of the relative locations of objects, and can be linked to right hemisphere processing. Yet, not all studies have reported such a clear double dissociation; in particular the categorical left hemisphere advantage is not always reported. In the current study we investigated whether verbal and spatial strategies, verbal and spatial cognitive abilities, and gender could account for the discrepancies observed in hemispheric lateralization of spatial relations. Seventy-five participants performed two visual half field, match-to-sample tasks (Van der Ham et al., 2007; 2009) to study the lateralization of categorical and coordinate relation processing. For each participant we determined the strategy they used in each of the two tasks. Consistent with previous findings, we found an overall categorical left hemisphere advantage and coordinate right hemisphere advantage. The lateralization pattern was affected selectively by the degree to which participants used a spatial strategy and by none of the other variables (i.e., verbal strategy, cognitive abilities, and gender). Critically, the categorical left hemisphere advantage was observed only for participants that relied strongly on a spatial strategy. This result is another piece of evidence that categorical spatial relation processing relies on spatial and not verbal processes. PMID:21353361
User-Centered Evaluation of Visual Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scholtz, Jean C.
Visual analytics systems are becoming very popular. More domains now use interactive visualizations to analyze the ever-increasing amount and heterogeneity of data. More novel visualizations are being developed for more tasks and users. We need to ensure that these systems can be evaluated to determine that they are both useful and usable. A user-centered evaluation for visual analytics needs to be developed for these systems. While many of the typical human-computer interaction (HCI) evaluation methodologies can be applied as is, others will need modification. Additionally, new functionality in visual analytics systems needs new evaluation methodologies. There is a difference between usability evaluations and user-centered evaluations. Usability looks at the efficiency, effectiveness, and user satisfaction of users carrying out tasks with software applications. User-centered evaluation looks more specifically at the utility provided to the users by the software. This is reflected in the evaluations done and in the metrics used. In the visual analytics domain this is very challenging as users are most likely experts in a particular domain, the tasks they do are often not well defined, the software they use needs to support large amounts of different kinds of data, and often the tasks last for months. These difficulties are discussed more in the section on User-centered Evaluation. Our goal is to provide a discussion of user-centered evaluation practices for visual analytics, including existing practices that can be carried out and new methodologies and metrics that need to be developed and agreed upon by the visual analytics community. The material provided here should be of use for both researchers and practitioners in the field of visual analytics. Researchers and practitioners in HCI who are interested in visual analytics will find this information useful, as well as a discussion of changes that need to be made to current HCI practices to make them more suitable to visual analytics. A history of analysis and analysis techniques and problems is provided as well as an introduction to user-centered evaluation and various evaluation techniques for readers from different disciplines. The understanding of these techniques is imperative if we wish to support analysis in the visual analytics software we develop. Currently the evaluations that are conducted and published for visual analytics software are very informal and consist mainly of comments from users or potential users. Our goal is to help researchers in visual analytics to conduct more formal user-centered evaluations. While these are time-consuming and expensive to carry out, the outcomes of these studies will have a defining impact on the field of visual analytics and help point the direction for future features and visualizations to incorporate. While many researchers view work in user-centered evaluation as a less-than-exciting area in which to work, the opposite is true. First of all, the goal of user-centered evaluation is to help visual analytics software developers, researchers, and designers improve their solutions and discover creative ways to better accommodate their users. Working with the users is extremely rewarding as well. While we use the term “users” in almost all situations, there are a wide variety of users that all need to be accommodated. Moreover, the domains that use visual analytics are varied and expanding. Just understanding the complexities of a number of these domains is exciting.
Researchers are trying out different visualizations and interactions as well. And of course, the size and variety of data are expanding rapidly. User-centered evaluation in this context is rapidly changing. There are no standard processes and metrics and thus those of us working on user-centered evaluation must be creative in our work with both the users and with the researchers and developers.
Kozunov, Vladimir; Nikolaeva, Anastasia; Stroganova, Tatiana A.
2018-01-01
The brain mechanisms that integrate the separate features of sensory input into a meaningful percept depend upon the prior experience of interaction with the object and differ between categories of objects. Recent studies using representational similarity analysis (RSA) have characterized either the spatial patterns of brain activity for different categories of objects or described how category structure in neuronal representations emerges in time, but never simultaneously. Here we applied a novel, region-based, multivariate pattern classification approach in combination with RSA to magnetoencephalography data to extract activity associated with qualitatively distinct processing stages of visual perception. We asked participants to name what they see whilst viewing bitonal visual stimuli of two categories predominantly shaped by either value-dependent or sensorimotor experience, namely faces and tools, and meaningless images. We aimed to disambiguate the spatiotemporal patterns of brain activity between the meaningful categories and determine which differences in their processing were attributable to either perceptual categorization per se, or later-stage mentalizing-related processes. We have extracted three stages of cortical activity corresponding to low-level processing, category-specific feature binding, and supra-categorical processing. All face-specific spatiotemporal patterns were associated with bilateral activation of ventral occipito-temporal areas during the feature binding stage at 140–170 ms. The tool-specific activity was found both within the categorization stage and in a later period not thought to be associated with binding processes. The tool-specific binding-related activity was detected within a 210–220 ms window and was located to the intraparietal sulcus of the left hemisphere. Brain activity common for both meaningful categories started at 250 ms and included widely distributed assemblies within parietal, temporal, and prefrontal regions. Furthermore, we hypothesized and tested whether activity within face and tool-specific binding-related patterns would demonstrate oppositely acting effects following procedural perceptual learning. We found that activity in the ventral, face-specific network increased following the stimuli repetition. In contrast, tool processing in the dorsal network adapted by reducing its activity over the repetition period. Altogether, we have demonstrated that activity associated with visual processing of faces and tools during the categorization stage differ in processing timing, brain areas involved, and in their dynamics underlying stimuli learning. PMID:29379426
Kozunov, Vladimir; Nikolaeva, Anastasia; Stroganova, Tatiana A
2017-01-01
The brain mechanisms that integrate the separate features of sensory input into a meaningful percept depend upon the prior experience of interaction with the object and differ between categories of objects. Recent studies using representational similarity analysis (RSA) have characterized either the spatial patterns of brain activity for different categories of objects or described how category structure in neuronal representations emerges in time, but never simultaneously. Here we applied a novel, region-based, multivariate pattern classification approach in combination with RSA to magnetoencephalography data to extract activity associated with qualitatively distinct processing stages of visual perception. We asked participants to name what they see whilst viewing bitonal visual stimuli of two categories predominantly shaped by either value-dependent or sensorimotor experience, namely faces and tools, and meaningless images. We aimed to disambiguate the spatiotemporal patterns of brain activity between the meaningful categories and determine which differences in their processing were attributable to either perceptual categorization per se , or later-stage mentalizing-related processes. We have extracted three stages of cortical activity corresponding to low-level processing, category-specific feature binding, and supra-categorical processing. All face-specific spatiotemporal patterns were associated with bilateral activation of ventral occipito-temporal areas during the feature binding stage at 140-170 ms. The tool-specific activity was found both within the categorization stage and in a later period not thought to be associated with binding processes. The tool-specific binding-related activity was detected within a 210-220 ms window and was located to the intraparietal sulcus of the left hemisphere. Brain activity common for both meaningful categories started at 250 ms and included widely distributed assemblies within parietal, temporal, and prefrontal regions. Furthermore, we hypothesized and tested whether activity within face and tool-specific binding-related patterns would demonstrate oppositely acting effects following procedural perceptual learning. We found that activity in the ventral, face-specific network increased following the stimuli repetition. In contrast, tool processing in the dorsal network adapted by reducing its activity over the repetition period. Altogether, we have demonstrated that activity associated with visual processing of faces and tools during the categorization stage differ in processing timing, brain areas involved, and in their dynamics underlying stimuli learning.
The "wh" questions of visual phonics: what, who, where, when, and why.
Narr, Rachel F; Cawthon, Stephanie W
2011-01-01
Visual Phonics is a reading instructional tool that has been implemented in isolated classrooms for over 20 years. In the past 5 years, several experimental studies demonstrated its efficacy with students who are deaf or hard of hearing. Through a national survey with 200 participants, this study specifically addresses who, where, how, and why a sample of teachers use Visual Phonics in their everyday reading instruction. Through checklists of teaching practice, rating scales, and open-ended questions, teachers self-reported their use of Visual Phonics, reflected upon its efficacy, and what they think about using it with students with a diverse set of instructional needs. The majority reported that Visual Phonics was easy to use, engaging to students, and easy to integrate into a structured reading curriculum. The majority of respondents agreed that it helps increase phonemic awareness and decoding skills, build vocabulary, as well as increase reading comprehension. The implications of these findings in bridging the research-to-practice gap are discussed.
Schillinger, Dean; Machtinger, Edward L; Wang, Frances; Palacios, Jorge; Rodriguez, Maytrella; Bindman, Andrew
2006-01-01
Despite the importance of clinician-patient communication, little is known about rates and predictors of medication miscommunication. Measuring rates of miscommunication, as well as differences between verbal and visual modes of assessment, can inform efforts to more effectively communicate about medications. We studied 220 diverse patients in an anticoagulation clinic to assess concordance between patient and clinician reports of warfarin regimens. Bilingual research assistants asked patients to (1) verbalize their prescribed weekly warfarin regimen and (2) identify this regimen from a digitized color menu of warfarin pills. We obtained clinician reports of patient regimens from chart review. Patients were categorized as having regimen concordance if there were no patient-clinician discrepancies in total weekly dosage. We then examined whether verbal and visual concordance rates varied with patient's language and health literacy. Fifty percent of patients achieved verbal concordance and 66% achieved visual concordance with clinicians regarding the weekly warfarin regimen (P < .001). Being a Cantonese speaker and having inadequate health literacy were associated with a lower odds of verbal concordance compared with English speakers and subjects with adequate health literacy (AOR 0.44, 0.21-0.93, AOR 0.50, 0.26-0.99, respectively). Neither language nor health literacy was associated with visual discordance. Shifting from verbal to visual modes was associated with greater patient-provider concordance across all patient subgroups, but especially for those with communication barriers. Clinician-patient discordance regarding patients' warfarin regimen was common but occurred less frequently when patients used a visual aid. Visual aids may improve the accuracy of medication assessment, especially for patients with communication barriers.
High Tech Aids Low Vision: A Review of Image Processing for the Visually Impaired.
Moshtael, Howard; Aslam, Tariq; Underwood, Ian; Dhillon, Baljean
2015-08-01
Recent advances in digital image processing provide promising methods for maximizing the residual vision of the visually impaired. This paper seeks to introduce this field to the readership and describe its current state as found in the literature. A systematic search revealed 37 studies that measure the value of image processing techniques for subjects with low vision. The techniques used are categorized according to their effect and the principal findings are summarized. The majority of participants preferred enhanced images over the original for a wide range of enhancement types. Adapting the contrast and spatial frequency content often improved performance at object recognition and reading speed, as did techniques that attenuate the image background and a technique that induced jitter. A lack of consistency in preference and performance measures was found, as well as a lack of independent studies. Nevertheless, the promising results should encourage further research in order to allow their widespread use in low-vision aids.
Unfolding Visual Lexical Decision in Time
Barca, Laura; Pezzulo, Giovanni
2012-01-01
Visual lexical decision is a classical paradigm in psycholinguistics, and numerous studies have assessed the so-called “lexicality effect” (i.e., better performance with lexical than non-lexical stimuli). Far less is known about the dynamics of choice, because many studies measured overall reaction times, which are not informative about underlying processes. To unfold visual lexical decision in (over) time, we measured participants' hand movements toward one of two item alternatives by recording the streaming x,y coordinates of the computer mouse. Participants categorized four kinds of stimuli as “lexical” or “non-lexical”: high and low frequency words, pseudowords, and letter strings. Spatial attraction toward the opposite category was present for low frequency words and pseudowords. Increasing the ambiguity of the stimuli led to greater movement complexity and trajectory attraction to competitors, whereas no such effect was present for high frequency words and letter strings. Results fit well with dynamic models of perceptual decision-making, which describe the process as a competition between alternatives guided by the continuous accumulation of evidence. More broadly, our results point to a key role of statistical decision theory in studying linguistic processing in terms of dynamic and non-modular mechanisms. PMID:22563419
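Trajectory attraction in mouse-tracking studies is commonly quantified by the deviation of the cursor path from a straight start-to-response line; the sketch below computes two such generic indices on a made-up trajectory (not the authors' data or exact measures).

```python
# Two standard mouse-tracking indices of attraction toward the competing response:
# maximum deviation (MD) from the straight start-to-endpoint line, and a crude
# area-under-curve (AUC) estimate, both signed toward the unchosen alternative.
import numpy as np

xy = np.array([[0.0, 0.0], [0.05, 0.2], [0.18, 0.45],   # streamed x,y samples
               [0.12, 0.7], [0.4, 0.85], [0.8, 1.0]])

start, end = xy[0], xy[-1]
line = end - start
line_unit = line / np.linalg.norm(line)

# Signed perpendicular distance of every sample from the ideal straight path.
rel = xy - start
perp = rel[:, 0] * line_unit[1] - rel[:, 1] * line_unit[0]

max_dev = perp[np.argmax(np.abs(perp))]         # largest excursion, keeping its sign
auc = ((perp[1:] + perp[:-1]) / 2).sum()        # trapezoidal sum over samples
print(f"MD = {max_dev:.3f}, AUC = {auc:.3f}")
```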
Parietal substrates for dimensional effects in visual search: evidence from lesion-symptom mapping
Humphreys, Glyn W.; Chechlacz, Magdalena
2013-01-01
In visual search, the detection of pop-out targets is facilitated when the target-defining dimension remains the same compared with when it changes across trials. We tested the brain regions necessary for these dimensional carry-over effects using a voxel-based morphometry study with brain-lesioned patients. Participants had to search for targets defined by either their colour (red or blue) or orientation (right- or left-tilted), and the target dimension either stayed the same or changed on consecutive trials. Twenty-five patients were categorized according to whether they showed an effect of dimensional change on search or not. The two groups did not differ with regard to their performance on several working memory tasks, and the dimensional carry-over effects were not correlated with working memory performance. With spatial, sustained attention and working memory deficits as well as lesion volume controlled, damage within the right inferior parietal lobule (the angular and supramarginal gyri) extending into the intraparietal sulcus was associated with an absence of dimensional carry-over (P < 0.001, cluster-level corrected for multiple comparisons). The data suggest that these regions of parietal cortex are necessary to implement attention shifting in the context of visual dimensional change. PMID:23404335
Genonets server-a web server for the construction, analysis and visualization of genotype networks.
Khalid, Fahad; Aguilar-Rodríguez, José; Wagner, Andreas; Payne, Joshua L
2016-07-08
A genotype network is a graph in which vertices represent genotypes that have the same phenotype. Edges connect vertices if their corresponding genotypes differ in a single small mutation. Genotype networks are used to study the organization of genotype spaces. They have shed light on the relationship between robustness and evolvability in biological systems as different as RNA macromolecules and transcriptional regulatory circuits. Despite the importance of genotype networks, no tool exists for their automatic construction, analysis and visualization. Here we fill this gap by presenting the Genonets Server, a tool that provides the following features: (i) the construction of genotype networks for categorical and univariate phenotypes from DNA, RNA, amino acid or binary sequences; (ii) analyses of genotype network topology and how it relates to robustness and evolvability, as well as analyses of genotype network topography and how it relates to the navigability of a genotype network via mutation and natural selection; (iii) multiple interactive visualizations that facilitate exploratory research and education. The Genonets Server is freely available at http://ieu-genonets.uzh.ch. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
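The core construction, vertices for same-phenotype genotypes and edges for single-mutation neighbours, can be sketched with a general-purpose graph library; the toy genotypes below are assumptions and this is not the Genonets implementation.

```python
# Build a toy genotype network: vertices are genotypes sharing a phenotype,
# edges join genotypes that differ by a single point mutation.
import itertools
import networkx as nx

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Hypothetical set of binary genotypes mapped to the same phenotype.
genotypes = ["0000", "0001", "0011", "0111", "1111", "1011"]

G = nx.Graph()
G.add_nodes_from(genotypes)
for g1, g2 in itertools.combinations(genotypes, 2):
    if hamming(g1, g2) == 1:                 # single small mutation
        G.add_edge(g1, g2)

# Simple topology summaries of the kind such analyses report.
print("edges:", list(G.edges()))
print("connected components:", nx.number_connected_components(G))
print("average degree:", sum(d for _, d in G.degree()) / G.number_of_nodes())
```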
Modeling Image Patches with a Generic Dictionary of Mini-Epitomes
Papandreou, George; Chen, Liang-Chieh; Yuille, Alan L.
2015-01-01
The goal of this paper is to question the necessity of features like SIFT in categorical visual recognition tasks. As an alternative, we develop a generative model for the raw intensity of image patches and show that it can support image classification performance on par with optimized SIFT-based techniques in a bag-of-visual-words setting. Key ingredient of the proposed model is a compact dictionary of mini-epitomes, learned in an unsupervised fashion on a large collection of images. The use of epitomes allows us to explicitly account for photometric and position variability in image appearance. We show that this flexibility considerably increases the capacity of the dictionary to accurately approximate the appearance of image patches and support recognition tasks. For image classification, we develop histogram-based image encoding methods tailored to the epitomic representation, as well as an “epitomic footprint” encoding which is easy to visualize and highlights the generative nature of our model. We discuss in detail computational aspects and develop efficient algorithms to make the model scalable to large tasks. The proposed techniques are evaluated with experiments on the challenging PASCAL VOC 2007 image classification benchmark. PMID:26321859
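For readers unfamiliar with the bag-of-visual-words setting the paper evaluates against, the sketch below shows the generic pipeline with random patch descriptors (dictionary size and dimensions are arbitrary assumptions, and this is not the epitomic model itself).

```python
# Generic bag-of-visual-words encoding: cluster patch descriptors into a
# dictionary, then encode each image as a histogram of dictionary assignments.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
n_images, patches_per_image, dim, dict_size = 5, 100, 64, 16
patches = rng.normal(size=(n_images, patches_per_image, dim))

dictionary = KMeans(n_clusters=dict_size, n_init=10, random_state=0)
dictionary.fit(patches.reshape(-1, dim))

encodings = []
for img_patches in patches:
    words = dictionary.predict(img_patches)
    hist = np.bincount(words, minlength=dict_size).astype(float)
    encodings.append(hist / hist.sum())       # L1-normalized word histogram
print(np.array(encodings).shape)               # (n_images, dict_size)
```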
Disentangling multimodal processes in social categorization.
Slepian, Michael L
2015-03-01
The current work examines the role of sensorimotor processes (manipulating whether visual exposure to hard and soft stimuli encourage sensorimotor simulation) and metaphor processes (assessing whether participants have understanding of a pertinent metaphor: "hard" Republicans and "soft" Democrats) in social categorization. Using new methodology to disassociate these multimodal processes (i.e., semantic, metaphoric, and sensorimotoric), the current work demonstrates that both sensorimotor and metaphor processes, combined, are needed to find an effect upon conceptual processing, providing evidence in support of the combined importance of these two theorized components. When participants comprehended the metaphor of hard Republicans and soft Democrats, and when encouraged to simulate sensorimotor experiences of hard and soft stimuli, those stimuli influenced categorization of faces as Republican and Democrat. Copyright © 2014 Elsevier B.V. All rights reserved.
Drug-nutrient interactions: a broad view with implications for practice.
Boullata, Joseph I; Hudson, Lauren M
2012-04-01
The relevance of drug-nutrient interactions in daily practice continues to grow with the widespread use of medication. Interactions can involve a single nutrient, multiple nutrients, food in general, or nutrition status. Mechanistically, drug-nutrient interactions occur because of altered intestinal transport and metabolism, or systemic distribution, metabolism and excretion, as well as additive or antagonistic effects. Optimal patient care includes identifying, evaluating, and managing these interactions. This task can be supported by a systematic approach for categorizing interactions and rating their clinical significance. This review provides such a broad framework using recent examples, as well as some classic drug-nutrient interactions. Pertinent definitions are presented, as is a suggested approach for clinicians. This important and expanding subject will benefit tremendously from further clinician involvement. Copyright © 2012 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
Re-Assessing Practice: Visual Art, Visually Impaired People and the Web.
ERIC Educational Resources Information Center
Howell, Caro; Porter, Dan
The latest development to come out of ongoing research at Tate Modern, London's new museum of modern art, is i-Map art resources for blind and partially sighted people that are delivered online. Currently i-Map explores the work of Matisse and Picasso, their innovations, influences and personal motivations, as well as key concepts in modern art.…
ERIC Educational Resources Information Center
Choo, Suzanne S.
2010-01-01
The English curriculum tends to be framed in relation to two unconscious boundaries based on the dichotomies between writing and reading as well as print and image. This paper re-envisions the curriculum as comprising a hybrid space where students are involved in composing texts that integrate writing and reading practices while also considering…
Werner, Sebastian; Noppeney, Uta
2010-08-01
Merging information from multiple senses provides a more reliable percept of our environment. Yet, little is known about where and how various sensory features are combined within the cortical hierarchy. Combining functional magnetic resonance imaging and psychophysics, we investigated the neural mechanisms underlying integration of audiovisual object features. Subjects categorized or passively perceived audiovisual object stimuli with the informativeness (i.e., degradation) of the auditory and visual modalities being manipulated factorially. Controlling for low-level integration processes, we show higher level audiovisual integration selectively in the superior temporal sulci (STS) bilaterally. The multisensory interactions were primarily subadditive and even suppressive for intact stimuli but turned into additive effects for degraded stimuli. Consistent with the inverse effectiveness principle, auditory and visual informativeness determine the profile of audiovisual integration in STS similarly to the influence of physical stimulus intensity in the superior colliculus. Importantly, when holding stimulus degradation constant, subjects' audiovisual behavioral benefit predicts their multisensory integration profile in STS: only subjects that benefit from multisensory integration exhibit superadditive interactions, while those that do not benefit show suppressive interactions. In conclusion, superadditive and subadditive integration profiles in STS are functionally relevant and related to behavioral indices of multisensory integration with superadditive interactions mediating successful audiovisual object categorization.
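The superadditive/subadditive distinction reduces to comparing the audiovisual response with the sum of the unisensory responses; the sketch below applies that criterion to made-up response values (the numbers and labels are illustrative assumptions only).

```python
# Classify multisensory interactions by the additivity criterion: compare the
# audiovisual response (AV) with the sum of the unisensory responses (A + V).

# Hypothetical response estimates for one region.
conditions = {"intact": dict(A=1.0, V=1.2, AV=1.8),
              "degraded": dict(A=0.4, V=0.5, AV=1.1)}

for name, r in conditions.items():
    interaction = r["AV"] - (r["A"] + r["V"])
    kind = ("superadditive" if interaction > 0 else
            "subadditive" if r["AV"] > max(r["A"], r["V"]) else
            "suppressive")
    print(f"{name}: AV - (A + V) = {interaction:+.2f} -> {kind}")
```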
A computational account of the development of the generalization of shape information.
Doumas, Leonidas A A; Hummel, John E
2010-05-01
Abecassis, Sera, Yonas, and Schwade (2001) showed that young children represent shapes more metrically, and perhaps more holistically, than do older children and adults. How does a child transition from representing objects and events as undifferentiated wholes to representing them explicitly in terms of their attributes? According to RBC (Recognition-by-Components theory; Biederman, 1987), objects are represented as collections of categorical geometric parts ("geons") in particular categorical spatial relations. We propose that the transition from holistic to more categorical visual shape processing is a function of the development of geon-like representations via a process of progressive intersection discovery. We present an account of this transition in terms of DORA (Doumas, Hummel, & Sandhofer, 2008), a model of the discovery of relational concepts. We demonstrate that DORA can learn representations of single geons by comparing objects composed of multiple geons. In addition, as DORA is learning it follows the same performance trajectory as children, originally generalizing shape more metrically/holistically and eventually generalizing categorically. Copyright © 2010 Cognitive Science Society, Inc.
Categorical Perception Beyond the Basic Level: The Case of Warm and Cool Colors.
Holmes, Kevin J; Regier, Terry
2017-05-01
Categories can affect our perception of the world, rendering between-category differences more salient than within-category ones. Across many studies, such categorical perception (CP) has been observed for the basic-level categories of one's native language. Other research points to categorical distinctions beyond the basic level, but it does not demonstrate CP for such distinctions. Here we provide such a demonstration. Specifically, we show CP in English speakers for the non-basic distinction between "warm" and "cool" colors, claimed to represent the earliest stage of color lexicon evolution. Notably, the advantage for discriminating colors that straddle the warm-cool boundary was restricted to the right visual field, the same behavioral signature previously observed for basic-level categories. This pattern held in a replication experiment with increased power. Our findings show that categorical distinctions beyond the basic-level repertoire of one's native language are psychologically salient and may be spontaneously accessed during normal perceptual processing. Copyright © 2016 Cognitive Science Society, Inc.
2011-01-01
The goal of visual analytics is to facilitate the discourse between the user and the data by providing dynamic displays and versatile visual interaction opportunities with the data that can support analytical reasoning and the exploration of data from multiple user-customisable aspects. This paper introduces geospatial visual analytics, a specialised subtype of visual analytics, and provides pointers to a number of learning resources about the subject, as well as some examples of human health, surveillance, emergency management and epidemiology-related geospatial visual analytics applications and examples of free software tools that readers can experiment with, such as Google Public Data Explorer. The authors also present a practical demonstration of geospatial visual analytics using partial data for 35 countries from a publicly available World Health Organization (WHO) mortality dataset and Microsoft Live Labs Pivot technology, a free, general purpose visual analytics tool that offers a fresh way to visually browse and arrange massive amounts of data and images online and also supports geographic and temporal classifications of datasets featuring geospatial and temporal components. Interested readers can download a Zip archive (included with the manuscript as an additional file) containing all files, modules and library functions used to deploy the WHO mortality data Pivot collection described in this paper. PMID:21410968
Kamel Boulos, Maged N; Viangteeravat, Teeradache; Anyanwu, Matthew N; Ra Nagisetty, Venkateswara; Kuscu, Emin
2011-03-16
The goal of visual analytics is to facilitate the discourse between the user and the data by providing dynamic displays and versatile visual interaction opportunities with the data that can support analytical reasoning and the exploration of data from multiple user-customisable aspects. This paper introduces geospatial visual analytics, a specialised subtype of visual analytics, and provides pointers to a number of learning resources about the subject, as well as some examples of human health, surveillance, emergency management and epidemiology-related geospatial visual analytics applications and examples of free software tools that readers can experiment with, such as Google Public Data Explorer. The authors also present a practical demonstration of geospatial visual analytics using partial data for 35 countries from a publicly available World Health Organization (WHO) mortality dataset and Microsoft Live Labs Pivot technology, a free, general purpose visual analytics tool that offers a fresh way to visually browse and arrange massive amounts of data and images online and also supports geographic and temporal classifications of datasets featuring geospatial and temporal components. Interested readers can download a Zip archive (included with the manuscript as an additional file) containing all files, modules and library functions used to deploy the WHO mortality data Pivot collection described in this paper.
Utility of remotely sensed data for identification of soil conservation practices
NASA Technical Reports Server (NTRS)
Pelletier, R. E.; Griffin, R. H.
1986-01-01
Discussed are a variety of remotely sensed data sources that may have utility in the identification of conservation practices and related linear features. Test sites were evaluated in Alabama, Kansas, Mississippi, and Oklahoma using one or more of a variety of remotely sensed data sources, including color infrared photography (CIR), LANDSAT Thematic Mapper (TM) data, and aircraft-acquired Thermal Infrared Multispectral Scanner (TIMS) data. Both visual examination and computer-implemented enhancement procedures were used to identify conservation practices and other linear features. For the Kansas, Mississippi, and Oklahoma test sites, photo interpretations of CIR identified up to 24 of the 109 conservation practices from a matrix derived from the SCS National Handbook of Conservation Practices. The conservation practice matrix was modified to predict the possibility of identifying the 109 practices at various photographic scales based on the observed results as well as photo interpreter experience. Some practices were successfully identified in TM data through visual identification, but a number of existing practices were of such size and shape that the resolution of the TM could not detect them accurately. A series of computer-automated decorrelation and filtering procedures served to enhance the conservation practices in TM data with only fair success. However, features such as field boundaries, roads, water bodies, and the Urban/Ag interface were easily differentiated. Similar enhancement techniques applied to 5 and 10 meter TIMS data proved much more useful in delineating terraces, grass waterways, and drainage ditches as well as the features mentioned above, due partly to improved resolution and partly to thermally influenced moisture conditions. Spatially oriented data such as those derived from remotely sensed data offer some promise in the inventory and monitoring of conservation practices as well as in supplying parameter data for a variety of computer-implemented agricultural models.
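The decorrelation procedures mentioned above are, in essence, a rotation of the spectral bands into uncorrelated components followed by variance equalization; the sketch below performs a simple whitening of a synthetic three-band image as a stand-in (not the actual TM/TIMS processing chain).

```python
# A decorrelation-stretch-style enhancement on a synthetic 3-band image:
# rotate the bands into principal components, equalize their variances,
# and rotate back to band space.
import numpy as np

rng = np.random.default_rng(5)
h, w, bands = 64, 64, 3
img = rng.normal(size=(h, w, bands))
img[..., 1] = 0.8 * img[..., 0] + 0.2 * img[..., 1]    # make two bands correlated

flat = img.reshape(-1, bands)
mean = flat.mean(axis=0)
cov = np.cov(flat - mean, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)

# Whiten along the principal components, then rotate back to band space.
whitener = eigvecs @ np.diag(1.0 / np.sqrt(eigvals)) @ eigvecs.T
stretched = ((flat - mean) @ whitener).reshape(h, w, bands)
print("band correlations after stretch:\n",
      np.corrcoef(stretched.reshape(-1, bands), rowvar=False).round(2))
```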
Teaching Abstract Concepts: Keys to the World of Ideas.
ERIC Educational Resources Information Center
Flatley, Joannis K.; Gittinger, Dennis J.
1990-01-01
Specific teaching strategies to help hearing-impaired secondary students comprehend abstract concepts include (1) pinpointing facts and fallacies, (2) organizing information visually, (3) categorizing ideas, and (4) reinforcing new vocabulary and concepts. Figures provide examples of strategy applications. (DB)
Category Membership and Semantic Coding in the Cerebral Hemispheres.
Turner, Casey E; Kellogg, Ronald T
2016-01-01
Although a gradient of category membership seems to form the internal structure of semantic categories, it is unclear whether the 2 hemispheres of the brain differ in terms of this gradient. The 2 experiments reported here examined this empirical question and explored alternative theoretical interpretations. Participants viewed category names centrally and determined whether a closely related or distantly related word presented to either the left visual field/right hemisphere (LVF/RH) or the right visual field/left hemisphere (RVF/LH) was a member of the category. Distantly related words were categorized more slowly in the LVF/RH relative to the RVF/LH, with no difference for words close to the prototype. The finding resolved past mixed results showing an unambiguous typicality effect for both visual field presentations. Furthermore, we examined items near the fuzzy border that were sometimes rejected as nonmembers of the category and found both hemispheres use the same category boundary. In Experiment 2, we presented 2 target words to be categorized, with the expectation of augmenting the speed advantage for the RVF/LH if the 2 hemispheres differ structurally. Instead the results showed a weakening of the hemispheric difference, arguing against a structural in favor of a processing explanation.
Differential Visual Processing of Animal Images, with and without Conscious Awareness
Zhu, Weina; Drewes, Jan; Peatfield, Nicholas A.; Melcher, David
2016-01-01
The human visual system can quickly and efficiently extract categorical information from a complex natural scene. The rapid detection of animals in a scene is one compelling example of this phenomenon, and it suggests the automatic processing of at least some types of categories with little or no attentional requirements (Li et al., 2002, 2005). The aim of this study is to investigate whether the remarkable capability to categorize complex natural scenes exist in the absence of awareness, based on recent reports that “invisible” stimuli, which do not reach conscious awareness, can still be processed by the human visual system (Pasley et al., 2004; Williams et al., 2004; Fang and He, 2005; Jiang et al., 2006, 2007; Kaunitz et al., 2011a). In two experiments, we recorded event-related potentials (ERPs) in response to animal and non-animal/vehicle stimuli in both aware and unaware conditions in a continuous flash suppression (CFS) paradigm. Our results indicate that even in the “unseen” condition, the brain responds differently to animal and non-animal/vehicle images, consistent with rapid activation of animal-selective feature detectors prior to, or outside of, suppression by the CFS mask. PMID:27790106
Differential Visual Processing of Animal Images, with and without Conscious Awareness.
Zhu, Weina; Drewes, Jan; Peatfield, Nicholas A; Melcher, David
2016-01-01
The human visual system can quickly and efficiently extract categorical information from a complex natural scene. The rapid detection of animals in a scene is one compelling example of this phenomenon, and it suggests the automatic processing of at least some types of categories with little or no attentional requirements (Li et al., 2002, 2005). The aim of this study is to investigate whether the remarkable capability to categorize complex natural scenes exist in the absence of awareness, based on recent reports that "invisible" stimuli, which do not reach conscious awareness, can still be processed by the human visual system (Pasley et al., 2004; Williams et al., 2004; Fang and He, 2005; Jiang et al., 2006, 2007; Kaunitz et al., 2011a). In two experiments, we recorded event-related potentials (ERPs) in response to animal and non-animal/vehicle stimuli in both aware and unaware conditions in a continuous flash suppression (CFS) paradigm. Our results indicate that even in the "unseen" condition, the brain responds differently to animal and non-animal/vehicle images, consistent with rapid activation of animal-selective feature detectors prior to, or outside of, suppression by the CFS mask.
Learning, retention, and generalization of haptic categories
NASA Astrophysics Data System (ADS)
Do, Phuong T.
This dissertation explored how haptic concepts are learned, retained, and generalized to the same or different modality. Participants learned to classify objects into three categories either visually or haptically via different training procedures, followed by an immediate or delayed transfer test. Experiment I involved visual versus haptic learning and transfer. Intermodal matching between vision and haptics was investigated in Experiment II. Experiments III and IV examined intersensory conflict in within- and between-category bimodal situations to determine the degree of perceptual dominance between sight and touch. Experiment V explored the intramodal relationship between similarity and categorization in a psychological space, as revealed by MDS analysis of similarity judgments. Major findings were: (1) visual examination resulted in relatively higher performance accuracy than haptic learning; (2) systematic training produced better category learning of haptic concepts across all modality conditions; (3) the category prototypes were rated newer than any transfer stimulus following learning, both immediately and after a week delay; and (4) although they converged at the apex of two transformational trajectories, the category prototypes became more central to their respective categories and increasingly structured as a function of learning. Implications for theories of multimodal similarity and categorization behavior are discussed in terms of discrimination learning, sensory integration, and dominance relation.
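The psychological-space analysis mentioned above relies on multidimensional scaling of similarity judgments; the sketch below recovers a two-dimensional configuration from a random dissimilarity matrix as a generic illustration (not the dissertation's data or settings).

```python
# Recover a 2-D psychological space from pairwise dissimilarities with MDS.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(4)
n_objects = 9
d = rng.uniform(0.1, 1.0, size=(n_objects, n_objects))
d = (d + d.T) / 2
np.fill_diagonal(d, 0.0)                     # a valid dissimilarity matrix

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(d)
print(coords.round(2))                        # object coordinates in the 2-D space
```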
Visual Disability Among Juvenile Open-angle Glaucoma Patients.
Gupta, Viney; Ganesan, Vaitheeswaran L; Kumar, Sandip; Chaurasia, Abadh K; Malhotra, Sumit; Gupta, Shikha
2018-04-01
Juvenile onset primary open-angle glaucoma (JOAG), unlike adult onset primary open-angle glaucoma, presents with high intraocular pressure and diffuse visual field loss, which if left untreated leads to severe visual disability. The study aimed to evaluate the extent of visual disability among JOAG patients presenting to a tertiary eye care facility. Visual acuity and perimetry records of unrelated JOAG patients presenting to our Glaucoma facility were analyzed. Low vision and blindness were categorized by the WHO criteria, and percentage impairment was calculated as per the guidelines provided by the American Medical Association (AMA). Fifty-two (15%) of the 348 JOAG patients were bilaterally blind at presentation and 32 (9%) had low vision according to WHO criteria. Ninety JOAG patients (26%) had a visual impairment of 75% or more. Visual disability at presentation among JOAG patients is high. This entails a huge economic burden, given their young age and associated social responsibilities.
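Banding acuity into WHO categories is a simple thresholding step; the sketch below uses commonly cited cutoffs (worse than 6/18 for low vision, worse than 3/60 for blindness), which are my assumption here rather than values stated in the paper.

```python
# Band better-eye acuity into WHO-style categories.
# NOTE: the exact cutoffs below are assumed, not quoted from the study.
def who_category(acuity_decimal: float) -> str:
    """acuity_decimal: better-eye acuity as a decimal (6/18 ~ 0.33, 3/60 = 0.05)."""
    if acuity_decimal < 3 / 60:
        return "blindness"
    if acuity_decimal < 6 / 18:
        return "low vision"
    return "no visual impairment"

patients = {"P1": 0.02, "P2": 0.1, "P3": 0.8}   # hypothetical decimal acuities
for pid, va in patients.items():
    print(pid, who_category(va))
```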
Developing, deploying and reflecting on a web-based geologic simulation tool
NASA Astrophysics Data System (ADS)
Cockett, R.
2015-12-01
Geoscience is visual. It requires geoscientists to think and communicate about processes and events in three spatial dimensions and variations through time. This is hard(!), and students often have difficulty when learning and visualizing the three dimensional and temporal concepts. Visible Geology is an online geologic block modelling tool that is targeted at students in introductory and structural geology. With Visible Geology, students are able to combine geologic events in any order to create their own geologic models and ask 'what-if' questions, as well as interrogate their models using cross sections, boreholes and depth slices. Instructors use it as a simulation and communication tool in demonstrations, and students use it to explore concepts of relative geologic time, structural relationships, as well as visualize abstract geologic representations such as stereonets. The level of interactivity and creativity inherent in Visible Geology often results in a sense of ownership and encourages engagement, leading learners to practice visualization and interpretation skills and discover geologic relationships. Through its development over the last five years, Visible Geology has been used by over 300K students worldwide as well as in multiple targeted studies at the University of Calgary and at the University of British Columbia. The ease of use of the software has made this tool practical for deployment in classrooms of any size as well as for individual use. In this presentation, I will discuss the thoughts behind the implementation and layout of the tool, including a framework used for the development and design of new educational simulations. I will also share some of the surprising and unexpected observations on student interaction with the 3D visualizations, and other insights that are enabled by web-based development and deployment.
Olichney, John M; Riggins, Brock R; Hillert, Dieter G; Nowacki, Ralph; Tecoma, Evelyn; Kutas, Marta; Iragui, Vicente J
2002-07-01
We studied 14 patients with well-characterized refractory temporal lobe epilepsy (TLE), 7 with right temporal lobe epilepsy (RTE) and 7 with left temporal lobe epilepsy (LTE), in a word repetition ERP experiment. Much prior literature supports the view that patients with left TLE are more likely to develop verbal memory deficits, often attributable to left hippocampal sclerosis. Our main objectives were to test if abnormalities of the N400 or Late Positive Component (LPC, P600) were associated with a left temporal seizure focus, or left temporal lobe dysfunction. A minimum of 19 channels of EEG/EOG data were collected while subjects performed a semantic categorization task. Auditory category statements were followed by a visual target word; target words were 50% "congruous" (category exemplars) and 50% "incongruous" (non-category exemplars) with respect to the preceding semantic context. These auditory-visual pairings were repeated pseudo-randomly at time intervals ranging from approximately 10 to 140 seconds later. The ERP data were submitted to repeated-measures ANOVAs, which showed the RTE group had generally normal effects of word repetition on the LPC and the N400. Also, the N400 component was larger to incongruous than congruous new words, as is normally the case. In contrast, the LTE group did not have statistically significant effects of either word repetition or congruity on their ERPs (N400 or LPC), suggesting that this ERP semantic categorization paradigm is sensitive to left temporal lobe dysfunction. Further studies are ongoing to determine if these ERP abnormalities predict hippocampal sclerosis on histopathology, or outcome after anterior temporal lobectomy.
Securing recipiency in workplace meetings: Multimodal practices
Ford, Cecilia E.; Stickle, Trini
2013-01-01
As multiparty interactions with single courses of coordinated action, workplace meetings place particular interactional demands on participants who are not primary speakers (e.g. not chairs) as they work to initiate turns and to interactively coordinate with displays of recipiency from co-participants. Drawing from a corpus of 26 hours of videotaped workplace meetings in a midsized US city, this article reports on multimodal practices – phonetic, prosodic, and bodily-visual – used for coordinating turn transition and for consolidating recipiency in these specialized speech exchange systems. Practices used by self-selecting non-primary speakers as they secure turns in meetings include displays of close monitoring of current speakers’ emerging turn structure, displays of heightened interest as current turns approach possible completion, and turn initiation practices designed to pursue and, in a fine-tuned manner, coordinate with displays of recipiency on the parts of other participants as well as from reflexively constructed ‘target’ recipients. By attending to bodily-visual action, as well as phonetics and prosody, this study contributes to expanding accounts for turn taking beyond traditional word-based grammar (i.e. lexicon and syntax). PMID:24976789
What makes a visualization memorable?
Borkin, Michelle A; Vo, Azalea A; Bylinskii, Zoya; Isola, Phillip; Sunkavalli, Shashank; Oliva, Aude; Pfister, Hanspeter
2013-12-01
An ongoing debate in the Visualization community concerns the role that visualization types play in data understanding. In human cognition, understanding and memorability are intertwined. As a first step towards being able to ask questions about impact and effectiveness, here we ask: 'What makes a visualization memorable?' We ran the largest scale visualization study to date using 2,070 single-panel visualizations, categorized with visualization type (e.g., bar chart, line graph, etc.), collected from news media sites, government reports, scientific journals, and infographic sources. Each visualization was annotated with additional attributes, including ratings for data-ink ratios and visual densities. Using Amazon's Mechanical Turk, we collected memorability scores for hundreds of these visualizations, and discovered that observers are consistent in which visualizations they find memorable and forgettable. We find intuitive results (e.g., attributes like color and the inclusion of a human recognizable object enhance memorability) and less intuitive results (e.g., common graphs are less memorable than unique visualization types). Altogether our findings suggest that quantifying memorability is a general metric of the utility of information, an essential step towards determining how to design effective visualizations.
From Babies into Boys and Girls: The Acquisition of Gender Identity.
ERIC Educational Resources Information Center
Cahill, Spencer E.
Naturally occurring interactions recorded during participant observation in two preschools were analyzed in order to develop a distinctively sociological theory of gender identity acquisition. Attention was focused on the use of sex-categorical terms and the "grammar" of sex categorization practices underlying this usage. The analysis…
Dissimilarity in Creative Categorization
ERIC Educational Resources Information Center
Ranjan, Apara; Srinivasan, Narayanan
2010-01-01
Theories of categorization need to account for ways in which people use their creativity to categorize things, especially in the context of similarity. The current three-phase study is a preliminary attempt to understand how people group concepts together as well as to explore the role of similarity between concepts in creative categorization.…
Adeli, Hossein; Vitu, Françoise; Zelinsky, Gregory J
2017-02-08
Modern computational models of attention predict fixations using saliency maps and target maps, which prioritize locations for fixation based on feature contrast and target goals, respectively. But whereas many such models are biologically plausible, none have looked to the oculomotor system for design constraints or parameter specification. Conversely, although most models of saccade programming are tightly coupled to underlying neurophysiology, none have been tested using real-world stimuli and tasks. We combined the strengths of these two approaches in MASC, a model of attention in the superior colliculus (SC) that captures known neurophysiological constraints on saccade programming. We show that MASC predicted the fixation locations of humans freely viewing naturalistic scenes and performing exemplar and categorical search tasks, a breadth achieved by no other existing model. Moreover, it did this as well or better than its more specialized state-of-the-art competitors. MASC's predictive success stems from its inclusion of high-level but core principles of SC organization: an over-representation of foveal information, size-invariant population codes, cascaded population averaging over distorted visual and motor maps, and competition between motor point images for saccade programming, all of which cause further modulation of priority (attention) after projection of saliency and target maps to the SC. Only by incorporating these organizing brain principles into our models can we fully understand the transformation of complex visual information into the saccade programs underlying movements of overt attention. With MASC, a theoretical footing now exists to generate and test computationally explicit predictions of behavioral and neural responses in visually complex real-world contexts. SIGNIFICANCE STATEMENT The superior colliculus (SC) performs a visual-to-motor transformation vital to overt attention, but existing SC models cannot predict saccades to visually complex real-world stimuli. We introduce a brain-inspired SC model that outperforms state-of-the-art image-based competitors in predicting the sequences of fixations made by humans performing a range of everyday tasks (scene viewing and exemplar and categorical search), making clear the value of looking to the brain for model design. This work is significant in that it will drive new research by making computationally explicit predictions of SC neural population activity in response to naturalistic stimuli and tasks. It will also serve as a blueprint for the construction of other brain-inspired models, helping to usher in the next generation of truly intelligent autonomous systems. Copyright © 2017 the authors 0270-6474/17/371453-15$15.00/0.
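The model described above builds on the general idea of combining a bottom-up saliency map with a top-down target map into a single priority signal that drives fixation selection. The sketch below illustrates only that generic priority-map scheme, not MASC's collicular implementation; the maps, weights, and array sizes are hypothetical placeholders.

```python
import numpy as np

def next_fixation(saliency_map, target_map, w_saliency=0.5, w_target=0.5):
    """Combine a bottom-up saliency map and a top-down target map into a
    single priority map and return the location of its peak.

    Generic priority-map scheme for illustration only; not the MASC model."""
    priority = w_saliency * saliency_map + w_target * target_map
    # Index of the maximum-priority location (row, column) in image coordinates.
    return np.unravel_index(np.argmax(priority), priority.shape)

# Toy example: two 100x100 maps with random values standing in for real feature maps.
rng = np.random.default_rng(0)
fix = next_fixation(rng.random((100, 100)), rng.random((100, 100)))
print(fix)
```

MASC additionally applies foveal magnification, population averaging over distorted visual and motor maps, and competition between motor point images before a saccade is programmed; none of that is captured in this toy sketch.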
Visual body recognition in a prosopagnosic patient.
Moro, V; Pernigo, S; Avesani, R; Bulgarelli, C; Urgesi, C; Candidi, M; Aglioti, S M
2012-01-01
Conspicuous deficits in face recognition characterize prosopagnosia. Information on whether agnosic deficits may extend to non-facial body parts is lacking. Here we report the neuropsychological description of FM, a patient affected by a complete deficit in face recognition in the presence of mild clinical signs of visual object agnosia. His deficit involves both overt and covert recognition of faces (i.e. recognition of familiar faces, but also categorization of faces for gender or age) as well as the visual mental imagery of faces. By means of a series of matching-to-sample tasks we investigated: (i) a possible association between prosopagnosia and disorders in visual body perception; (ii) the effect of the emotional content of stimuli on the visual discrimination of faces, bodies and objects; (iii) the existence of a dissociation between identity recognition and the emotional discrimination of faces and bodies. Our results document, for the first time, the co-occurrence of body agnosia, i.e. the visual inability to discriminate body forms and body actions, and prosopagnosia. Moreover, the results show better performance in the discrimination of emotional face and body expressions with respect to body identity and neutral actions. Since FM's lesions involve bilateral fusiform areas, it is unlikely that the amygdala-temporal projections explain the relative sparing of emotion discrimination performance. Indeed, the emotional content of the stimuli did not improve the discrimination of their identity. The results hint at the existence of two segregated brain networks involved in identity and emotional discrimination that are at least partially shared by face and body processing. Copyright © 2011 Elsevier Ltd. All rights reserved.
Sereno, Anne B.; Lehky, Sidney R.
2011-01-01
Although the representation of space is as fundamental to visual processing as the representation of shape, it has received relatively little attention from neurophysiological investigations. In this study we characterize representations of space within visual cortex, and examine how they differ in a first direct comparison between dorsal and ventral subdivisions of the visual pathways. Neural activities were recorded in anterior inferotemporal cortex (AIT) and lateral intraparietal cortex (LIP) of awake behaving monkeys, structures associated with the ventral and dorsal visual pathways respectively, as a stimulus was presented at different locations within the visual field. In spatially selective cells, we find greater modulation of cell responses in LIP with changes in stimulus position. Further, using a novel population-based statistical approach (namely, multidimensional scaling), we recover the spatial map implicit within activities of neural populations, allowing us to quantitatively compare the geometry of neural space with physical space. We show that a population of spatially selective LIP neurons, despite having large receptive fields, is able to almost perfectly reconstruct stimulus locations within a low-dimensional representation. In contrast, a population of AIT neurons, despite each cell being spatially selective, provide less accurate low-dimensional reconstructions of stimulus locations. They produce instead only a topologically (categorically) correct rendition of space, which nevertheless might be critical for object and scene recognition. Furthermore, we found that the spatial representation recovered from population activity shows greater translation invariance in LIP than in AIT. We suggest that LIP spatial representations may be dimensionally isomorphic with 3D physical space, while in AIT spatial representations may reflect a more categorical representation of space (e.g., “next to” or “above”). PMID:21344010
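The population-based approach described here, recovering the implicit spatial map from neural activity with multidimensional scaling, can be pictured with a minimal sketch. The responses below are simulated placeholders rather than recorded AIT or LIP data, and scikit-learn's MDS stands in for whatever MDS implementation the authors used.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

# Simulated population responses: rows = stimulus locations, columns = neurons.
# Real analyses would use firing rates recorded in AIT or LIP.
rng = np.random.default_rng(1)
responses = rng.random((8, 50))          # 8 locations x 50 neurons

# Dissimilarity between population response vectors for each pair of locations.
dissim = squareform(pdist(responses, metric='euclidean'))

# Low-dimensional embedding of the neural representation of space.
embedding = MDS(n_components=2, dissimilarity='precomputed',
                random_state=0).fit_transform(dissim)
print(embedding.shape)                   # (8, 2): one recovered 2-D point per location
```

Comparing the recovered embedding with the physical stimulus locations (for example via a Procrustes fit) would then quantify how faithfully the population preserves spatial geometry, which is the kind of comparison the study reports for LIP versus AIT.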
Wills, A J; Lea, Stephen E G; Leaver, Lisa A; Osthaus, Britta; Ryan, Catriona M E; Suret, Mark B; Bryant, Catherine M L; Chapman, Sue J A; Millar, Louise
2009-11-01
Pigeons (Columba livia), gray squirrels (Sciurus carolinensis), and undergraduates (Homo sapiens) learned discrimination tasks involving multiple mutually redundant dimensions. First, pigeons and undergraduates learned conditional discriminations between stimuli composed of three spatially separated dimensions, after first learning to discriminate the individual elements of the stimuli. When subsequently tested with stimuli in which one of the dimensions took an anomalous value, the majority of both species categorized test stimuli by their overall similarity to training stimuli. However, some individuals of both species categorized them according to a single dimension. In a second set of experiments, squirrels, pigeons, and undergraduates learned go/no-go discriminations using multiple simultaneous presentations of stimuli composed of three spatially integrated, highly salient dimensions. The tendency to categorize test stimuli including anomalous dimension values unidimensionally was higher than in the first set of experiments and did not differ significantly between species. The authors conclude that unidimensional categorization of multidimensional stimuli is not diagnostic for analytic cognitive processing, and that any differences between humans' and pigeons' behavior in such tasks are not due to special features of avian visual cognition.
Vanmarcke, Steven; Wagemans, Johan
2015-01-01
In everyday life, we are generally able to dynamically understand and adapt to socially (ir)relevant encounters, and to make appropriate decisions about these. All of this requires an impressive ability to directly filter and obtain the most informative aspects of a complex visual scene. Such rapid gist perception can be assessed in multiple ways. In the ultrafast categorization paradigm developed by Simon Thorpe et al. (1996), participants receive a clear categorization task in advance and succeed at detecting the target object of interest (animal) almost perfectly (even with 20 ms exposures). Since this pioneering work, follow-up studies consistently reported population-level reaction time differences on different categorization tasks, indicating a superordinate advantage (animal versus dog) and effects of perceptual similarity (animals versus vehicles) and object category size (natural versus animal versus dog). In this study, we replicated and extended these separate findings by using a systematic collection of different categorization tasks (varying in presentation time, task demands, and stimuli) and focusing on individual differences in terms of, for example, gender and intelligence. In addition to replicating the main findings from the literature, we found subtle yet consistent gender differences (women faster than men). PMID:26034569
Tradespace Exploration for the Engineering of Resilient Systems
2015-05-01
world scenarios. The types of tools within the SAE set include visualization, decision analysis, and M&S, so it is difficult to categorize this toolset... overpopulated, or questionable. ERS Tradespace Workshop: Create predictive models using multiple techniques (e.g., regression, Kriging, neural nets
ERIC Educational Resources Information Center
Wagner, Elke
2004-01-01
Research has shown, and educators in the field have acknowledged, that students with visual impairments (those who are blind or have low vision) who are in general education, as well as in special education, settings, can lack social competence (Bauman, 1973; Davidow, 1974; Doll, 1953; Hatlen, 2000, 2003; Huebner, 1986; Sacks, Kekelis, &…
Myers, Dennis R; Sykes, Catherine; Myers, Scott
2008-01-01
This article offers practical guidance for educators as they prepare specialists to enhance the lives and communities of older persons through the strategic use of visual media in age-related courses. Advantages and disadvantages of this learning innovation are provided as well as seven approaches for enriching instruction. Resources are included for locating effective visual media, matching course content with video resources, determining fair use of copyrighted media, and inserting video clips into PowerPoint presentations. Strategies for accessing assistive services for implementing visual media in the classroom are also addressed. This article promotes the use of visual media for the purpose of enriching gerontological and geriatrics instruction for the adult learner.
Maxillary anterior papilla display during smiling: a clinical study of the interdental smile line.
Hochman, Mark N; Chu, Stephen J; Tarnow, Dennis P
2012-08-01
The purpose of this research was to quantify the visual display (presence) or lack of display (absence) of interdental papillae during maximum smiling in a patient population aged 10 to 89 years. Four hundred twenty digital single-lens reflex photographs of patients were taken and examined for the visual display of interdental papillae between the maxillary anterior teeth during maximum smiling. Three digital photographs were taken per patient from the frontal, right frontal-lateral, and left frontal-lateral views. The data set of photographs was examined by two examiners for the presence or absence of the visual display of papillae. The visual display of interdental papillae during maximum smiling occurred in 380 of the 420 patients examined in this study, equivalent to a 91% occurrence rate. Eighty-seven percent of all patients categorized as having a low gingival smile line (n = 303) were found to display the interdental papillae upon smiling. Differences were noted for individual age groups according to the decade of life as well as a trend toward decreasing papillary display with increasing age. The importance of interdental papillae display during dynamic smiling should not be left undiagnosed since it is visible in over 91% of older patients and in 87% of patients with a low gingival smile line, representing a common and important esthetic element that needs to be assessed during smile analysis of the patient.
Wu, Lin; Wang, Yang; Pan, Shirui
2017-12-01
It is now well established that sparse representation models work effectively for many visual recognition tasks and have pushed forward the success of dictionary learning therein. Recent studies of dictionary learning focus on learning discriminative atoms instead of purely reconstructive ones. However, the existence of intraclass diversity (i.e., data objects within the same category that exhibit large visual dissimilarities) and interclass similarity (i.e., data objects from distinct classes that share much visual similarity) makes it challenging to learn effective recognition models. To this end, a large number of labeled data objects are required to learn models that can effectively characterize these subtle differences. However, access to labeled data objects is limited, making it difficult to learn a monolithic dictionary that is discriminative enough. To address the above limitations, in this paper we propose a weakly-supervised dictionary learning method to automatically learn a discriminative dictionary by fully exploiting visual attribute correlations rather than label priors. In particular, the intrinsic attribute correlations are deployed as a critical cue to guide the process of object categorization, and a set of subdictionaries is jointly learned with respect to each category. The resulting dictionary is highly discriminative and leads to intraclass-diversity-aware sparse representations. Extensive experiments on image classification and object recognition are conducted to show the effectiveness of our approach.
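As context for the dictionary-learning framing, the sketch below shows the generic sparse-representation classification baseline: one sub-dictionary is learned per category, and a new sample is assigned to the category whose sub-dictionary reconstructs it with the smallest error. This is the baseline idea only, not the weakly-supervised, attribute-guided method proposed in the paper; the data and parameters are placeholders.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

def fit_subdictionaries(X_by_class, n_atoms=8):
    """Learn one sub-dictionary per category (generic sparse-representation
    classification; not the attribute-guided method of the paper)."""
    return {c: DictionaryLearning(n_components=n_atoms, random_state=0)
                  .fit(X).components_
            for c, X in X_by_class.items()}

def classify(x, dicts, n_nonzero=5):
    """Assign x to the category whose sub-dictionary reconstructs it best."""
    errors = {}
    for c, D in dicts.items():
        code = sparse_encode(x[None, :], D, algorithm='omp',
                             n_nonzero_coefs=n_nonzero)
        errors[c] = np.linalg.norm(x - code @ D)   # reconstruction error
    return min(errors, key=errors.get)

# Toy data: two classes of 64-dimensional feature vectors (placeholders).
rng = np.random.default_rng(2)
data = {'cat': rng.normal(0.0, 1, (30, 64)), 'dog': rng.normal(0.5, 1, (30, 64))}
dicts = fit_subdictionaries(data)
print(classify(rng.normal(0.5, 1, 64), dicts))
```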
MBSE-Driven Visualization of Requirements Allocation and Traceability
NASA Technical Reports Server (NTRS)
Jackson, Maddalena; Wilkerson, Marcus
2016-01-01
In a Model Based Systems Engineering (MBSE) infusion effort, there is usually a concerted effort to define the information architecture, ontologies, and patterns that drive the construction and architecture of MBSE models, but less attention is given to the logical follow-on of that effort: how to practically leverage the resulting semantic richness of a well-formed populated model to enable systems engineers to work more effectively, as MBSE promises. While ontologies and patterns are absolutely necessary, an MBSE effort must also design and provide practical demonstrations of value (through human-understandable representations of model data that address stakeholder concerns) or it will not succeed. This paper will discuss opportunities that exist for visualization in making the richness of a well-formed model accessible to stakeholders, specifically stakeholders who rely on the model for their day-to-day work. It will also discuss the value added by MBSE-driven visualizations in the context of a small case study of interactive visualizations created and used on NASA's proposed Europa Mission. The case study visualizations were created to understand and explore targeted aspects of requirements flow and allocation, and to compare the structure of that flow-down to a conceptual project decomposition. The work presented in this paper is an example of a product that leverages the richness and formalisms of our knowledge representation while also responding to the quality attributes SEs care about.
Impairments in part-whole representations of objects in two cases of integrative visual agnosia.
Behrmann, Marlene; Williams, Pepper
2007-10-01
How complex multipart visual objects are represented perceptually remains a subject of ongoing investigation. One source of evidence that has been used to shed light on this issue comes from the study of individuals who fail to integrate disparate parts of visual objects. This study reports a series of experiments that examine the ability of two such patients with this form of agnosia (integrative agnosia; IA), S.M. and C.R., to discriminate and categorize exemplars of a rich set of novel objects, "Fribbles", whose visual similarity (number of shared parts) and category membership (shared overall shape) can be manipulated. Both patients performed increasingly poorly as the number of parts required for differentiating one Fribble from another increased. Both patients were also impaired at determining when two Fribbles belonged in the same category, a process that relies on abstracting spatial relations between parts. C.R., the less impaired of the two, but not S.M., eventually learned to categorize the Fribbles but required substantially more training than normal perceivers. S.M.'s failure is not attributable to a problem in learning to use a label for identification nor is it obviously attributable to a visual memory deficit. Rather, the findings indicate that, although the patients may be able to represent a small number of parts independently, in order to represent multipart images, the parts need to be integrated or chunked into a coherent whole. It is this integrative process that is impaired in IA and appears to play a critical role in the normal object recognition of complex images.
Lindberg, Maria; Lindberg, Magnus; Skytt, Bernice
2017-05-01
Errors in infection control practices put patient safety at risk. The probability of errors can increase when care practices become more multifaceted. It is therefore fundamental to track risk behaviours and potential errors in various care situations. The aim of this study was to describe care situations involving risk behaviours for organism transmission that could lead to subsequent healthcare-associated infections. Unstructured nonparticipant observations were performed at three medical wards. Healthcare personnel (n=27) were shadowed for a total of 39 h on randomly selected weekdays between 7:30 am and 12 noon. Content analysis was used to inductively categorize activities into tasks and, based on their character, into groups. Risk behaviours for organism transmission were deductively classified into types of errors. A multiple-response crosstabs procedure was used to visualize the number and proportion of errors in tasks. One-way ANOVA with a Bonferroni post hoc test was used to determine differences among the three groups of activities. The qualitative findings give an understanding that risk behaviours for organism transmission go beyond the five moments of hand hygiene and also include the handling and placement of materials and equipment. The tasks with the highest percentage of errors were 'personal hygiene', 'elimination', and 'dressing/wound care'. The most common types of errors in all identified tasks were 'hand disinfection', 'glove usage', and 'placement of materials'. Significantly more errors (p<0.0001) were observed the more multifaceted (single, combined or interrupted) the activity was. The numbers and types of errors as well as the character of activities performed in care situations described in this study confirm the need to improve current infection control practices. It is fundamental that healthcare personnel practice good hand hygiene; however, effective preventive hygiene is complex in healthcare activities due to the multifaceted care situations, especially when activities are interrupted. A deeper understanding of infection control practices that goes beyond the sense of security by means of hand disinfection and use of gloves is needed, as materials and surfaces in the care environment might be contaminated and thus pose a risk for organism transmission. Copyright © 2017 Elsevier Ltd. All rights reserved.
Giglhuber, Katrin; Maurer, Stefanie; Zimmer, Claus; Meyer, Bernhard; Krieg, Sandro M
2017-02-01
In clinical practice, repetitive navigated transcranial magnetic stimulation (rTMS) is of particular interest for non-invasive mapping of cortical language areas. rTMS studies are now also attempting to map further cortical functions. Damage to the network underlying visuospatial attention can result in visual neglect, a severe neurological deficit and a factor contributing to significantly reduced functional outcome. This investigation aims to evaluate the use of rTMS for evoking visual neglect in healthy volunteers and its potential for specifically locating cortical areas that can be assigned to visuospatial attention. Ten healthy, right-handed subjects underwent rTMS visual neglect mapping. Repetitive trains of 5 Hz and 10 pulses were applied to 52 pre-defined cortical spots on each hemisphere; each cortical spot was stimulated 10 times. Visuospatial attention was tested time-locked to the rTMS pulses with a landmark task. Task pictures were displayed tachistoscopically for 50 ms. The subjects' performance was analyzed by video, and errors were referenced to cortical spots. We observed visual neglect-like deficits during stimulation of both hemispheres. Errors were categorized into leftward, rightward, and no-response errors. Rightward errors occurred significantly more often during stimulation of the right hemisphere than during stimulation of the left hemisphere (mean rightward error rate (ER) 1.6 ± 1.3 % vs. 1.0 ± 1.0 %, p = 0.0141). Within the left hemisphere, we observed predominantly leftward rather than rightward errors (mean leftward ER 2.0 ± 1.3 % vs. rightward ER 1.0 ± 1.0 %; p = 0.0005). Visual neglect can thus be elicited non-invasively by rTMS, and cortical areas eloquent for visuospatial attention can be detected. The correlation of this approach with clinical findings remains to be demonstrated in future studies.
Aggressive Bimodal Communication in Domestic Dogs, Canis familiaris.
Déaux, Éloïse C; Clarke, Jennifer A; Charrier, Isabelle
2015-01-01
Evidence of animal multimodal signalling is widespread and compelling. Dogs' aggressive vocalisations (growls and barks) have been extensively studied, but without any consideration of the simultaneously produced visual displays. In this study we aimed to categorize dogs' bimodal aggressive signals according to the redundant/non-redundant classification framework. We presented dogs with unimodal (audio or visual) or bimodal (audio-visual) stimuli and measured their gazing and motor behaviours. Responses did not qualitatively differ between the bimodal and two unimodal contexts, indicating that acoustic and visual signals provide redundant information. We could not further classify the signal as 'equivalent' or 'enhancing' as we found evidence for both subcategories. We discuss our findings in relation to the complex signal framework, and propose several hypotheses for this signal's function.
ERIC Educational Resources Information Center
Wilkerson-Jerde, Michelle Hoda
2014-01-01
There are increasing calls to prepare K-12 students to use computational tools and principles when exploring scientific or mathematical phenomena. The purpose of this paper is to explore whether and how constructionist computer-supported collaborative environments can explicitly engage students in this practice. The Categorizer is a…
ERIC Educational Resources Information Center
McGrath, Robert E.; Walters, Glenn D.
2012-01-01
Statistical analyses investigating latent structure can be divided into those that estimate structural model parameters and those that detect the structural model type. The most basic distinction among structure types is between categorical (discrete) and dimensional (continuous) models. It is a common, and potentially misleading, practice to…
Epstein, Arnold M.; Orav, E. John; Filice, Clara E.; Samson, Lok Wong; Joynt Maddox, Karen E.
2017-01-01
Importance: Medicare recently launched the Physician Value-Based Payment Modifier (PVBM) Program, a mandatory pay-for-performance program for physician practices. Little is known about performance by practices that serve socially or medically high-risk patients. Objective: To compare performance in the PVBM Program by practice characteristics. Design, Setting, and Participants: Cross-sectional observational study using PVBM Program data for payments made in 2015 based on performance of large US physician practices caring for fee-for-service Medicare beneficiaries in 2013. Exposures: High social risk (defined as practices in the top quartile of proportion of patients dually eligible for Medicare and Medicaid) and high medical risk (defined as practices in the top quartile of mean Hierarchical Condition Category risk score among fee-for-service beneficiaries). Main Outcomes and Measures: Quality and cost z scores based on a composite of individual measures. Higher z scores reflect better performance on quality; lower scores, better performance on costs. Results: Among 899 physician practices with 5 189 880 beneficiaries, 547 practices were categorized as low risk (neither high social nor high medical risk) (mean, 7909 beneficiaries; mean, 320 clinicians), 128 were high medical risk only (mean, 3675 beneficiaries; mean, 370 clinicians), 102 were high social risk only (mean, 1635 beneficiaries; mean, 284 clinicians), and 122 were high medical and social risk (mean, 1858 beneficiaries; mean, 269 clinicians). Practices categorized as low risk performed the best on the composite quality score (z score, 0.18 [95% CI, 0.09 to 0.28]) compared with each of the practices categorized as high risk (high medical risk only: z score, −0.55 [95% CI, −0.77 to −0.32]; high social risk only: z score, −0.86 [95% CI, −1.17 to −0.54]; and high medical and social risk: −0.78 [95% CI, −1.04 to −0.51]) (P < .001 across groups). Practices categorized as high social risk only performed the best on the composite cost score (z score, −0.52 [95% CI, −0.71 to −0.33]), low risk had the next best cost score (z score, −0.18 [95% CI, −0.25 to −0.10]), then high medical and social risk (z score, 0.40 [95% CI, 0.23 to 0.57]), and then high medical risk only (z score, 0.82 [95% CI, 0.65 to 0.99]) (P < .001 across groups). Total per capita costs were $9506 for practices categorized as low risk, $13 683 for high medical risk only, $8214 for high social risk only, and $11 692 for high medical and social risk. These patterns were associated with fewer bonuses and more penalties for high-risk practices. Conclusions and Relevance: During the first year of the Medicare Physician Value-Based Payment Modifier Program, physician practices that served more socially high-risk patients had lower quality and lower costs, and practices that served more medically high-risk patients had lower quality and higher costs. PMID:28763549
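The quality and cost composites reported above are z scores standardized across practices. A minimal sketch of that standardization step follows; the column names are illustrative rather than the actual PVBM measure set, and the example cost values simply reuse the per-capita figures quoted in the abstract.

```python
import pandas as pd

# Hypothetical practice-level data; column names are placeholders, not PVBM variables.
df = pd.DataFrame({
    'quality_composite': [0.72, 0.65, 0.80, 0.58],
    'cost_per_capita':   [9506, 13683, 8214, 11692],
})

# Standardize each measure across practices: z = (x - mean) / sd.
# Higher quality z is better; lower cost z is better, as in the abstract.
z = (df - df.mean()) / df.std(ddof=0)
print(z.round(2))
```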
Lost in Space: Future Force Leaders and Visualization of Space Operations
2004-05-26
use terms like “sixth sense” or ESP (extrasensory perception) when discussing acts that he categorizes as intuition. Intuition is a critical... on knowledge, judgment, experience, education, intelligence, boldness, perception, and character.” The manual states that intuitive decision
Population Education Accessions List, May-August 1999.
ERIC Educational Resources Information Center
United Nations Educational, Scientific and Cultural Organization, Bangkok (Thailand). Principal Regional Office for Asia and the Pacific.
This document comprises output from the Regional Clearinghouse on Population Education and Communication (RCPEC) computerized bibliographic database on reproductive and sexual health and geography. Entries are categorized into four parts: (1) "Population Education"; (2) "Knowledge-base Information"; (3) "Audio-Visual and IEC Materials"; and…
Exploration of SWRL Rule Bases through Visualization, Paraphrasing, and Categorization of Rules
NASA Astrophysics Data System (ADS)
Hassanpour, Saeed; O'Connor, Martin J.; Das, Amar K.
Rule bases are increasingly being used as repositories of knowledge content on the Semantic Web. As the size and complexity of these rule bases increases, developers and end users need methods of rule abstraction to facilitate rule management. In this paper, we describe a rule abstraction method for Semantic Web Rule Language (SWRL) rules that is based on lexical analysis and a set of heuristics. Our method results in a tree data structure that we exploit in creating techniques to visualize, paraphrase, and categorize SWRL rules. We evaluate our approach by applying it to several biomedical ontologies that contain SWRL rules, and show how the results reveal rule patterns within the rule base. We have implemented our method as a plug-in tool for Protégé-OWL, the most widely used ontology modeling software for the Semantic Web. Our tool can allow users to rapidly explore content and patterns in SWRL rule bases, enabling their acquisition and management.
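The lexical-analysis step can be pictured as extracting the predicates of each rule and grouping rules that share the same predicate signature, a crude stand-in for a "rule pattern". The toy sketch below illustrates that general idea only; it is not the plug-in's actual parser or heuristics, and the example rules are invented.

```python
from collections import defaultdict

def predicates(rule):
    """Extract predicate names from a SWRL-like rule string such as
    'Person(?p) ^ hasAge(?p, ?a) -> Adult(?p)'. Purely lexical toy parser."""
    body, _, head = rule.partition('->')
    atoms = [a.strip() for part in (body, head) for a in part.split('^')]
    return tuple(sorted(a.split('(')[0] for a in atoms if a))

rules = [
    'Person(?p) ^ hasAge(?p, ?a) -> Adult(?p)',
    'Person(?q) ^ hasAge(?q, ?b) -> Adult(?q)',
    'Drug(?d) ^ interactsWith(?d, ?e) -> Contraindicated(?d)',
]

# Group rules that share the same set of predicates: a crude "rule pattern".
groups = defaultdict(list)
for r in rules:
    groups[predicates(r)].append(r)
for pattern, members in groups.items():
    print(pattern, len(members))
```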
Cohn, Neil
2014-01-01
How do people make sense of the sequential images in visual narratives like comics? A growing literature of recent research has suggested that this comprehension involves the interaction of multiple systems: The creation of meaning across sequential images relies on a "narrative grammar" that packages conceptual information into categorical roles organized in hierarchic constituents. These images are encapsulated into panels arranged in the layout of a physical page. Finally, how panels frame information can impact both the narrative structure and page layout. Altogether, these systems operate in parallel to construct the Gestalt whole of comprehension of this visual language found in comics.
Test of the neurolinguistic programming hypothesis that eye-movements relate to processing imagery.
Wertheim, E H; Habib, C; Cumming, G
1986-04-01
Bandler and Grinder's hypothesis that eye-movements reflect sensory processing was examined. 28 volunteers first memorized and then recalled visual, auditory, and kinesthetic stimuli. Changes in eye-positions during recall were videotaped and categorized by two raters into positions hypothesized by Bandler and Grinder's model to represent visual, auditory, and kinesthetic recall. Planned contrast analyses suggested that visual stimulus items, when recalled, elicited significantly more upward eye-positions and stares than auditory and kinesthetic items. Auditory and kinesthetic items, however, did not elicit more changes in eye-position hypothesized by the model to represent auditory and kinesthetic recall, respectively.
Foley, Alan R; Masingila, Joanna O
2015-07-01
In this paper, the authors explore the use of mobile devices as assistive technology for students with visual impairments in resource-limited environments. This paper provides initial data and analysis from an ongoing project in Kenya using tablet devices to provide access to education and independence for university students with visual impairments in Kenya. The project is a design-based research project in which we have developed and are refining a theoretically grounded intervention--a model for developing communities of practice to support the use of mobile technology as an assistive technology. We are collecting data to assess the efficacy and improve the model as well as inform the literature that has guided the design of the intervention. In examining the impact of the use of mobile devices for the students with visual impairments, we found that the devices provide the students with (a) access to education, (b) the means to participate in everyday life and (c) the opportunity to create a community of practice. Findings from this project suggest that communities of practice are both a viable and a valuable approach for facilitating the diffusion and support of mobile devices as assistive technology for students with visual impairments in resource-limited environments. Implications for Rehabilitation The use of mobile devices as assistive technology in resource-limited environments provides students with visual impairments access to education and enhanced means to participate in everyday life. Communities of practice are both a viable and a valuable approach for facilitating the diffusion and support of mobile devices as assistive technology for students with visual impairments in resource-limited environments. Providing access to assistive technology early and consistently throughout students' schooling builds both their skill and confidence and also demonstrates the capabilities of people with visual impairments to the larger society.
Tree Colors: Color Schemes for Tree-Structured Data.
Tennekes, Martijn; de Jonge, Edwin
2014-12-01
We present a method to map tree structures to colors from the Hue-Chroma-Luminance color model, which is known for its well balanced perceptual properties. The Tree Colors method can be tuned with several parameters, whose effect on the resulting color schemes is discussed in detail. We provide a free and open source implementation with sensible parameter defaults. Categorical data are very common in statistical graphics, and often these categories form a classification tree. We evaluate applying Tree Colors to tree structured data with a survey on a large group of users from a national statistical institute. Our user study suggests that Tree Colors are useful, not only for improving node-link diagrams, but also for unveiling tree structure in non-hierarchical visualizations.
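The core idea, assigning hues by recursively partitioning the hue circle among the subtrees of a classification tree, can be sketched as below. The chroma/luminance adjustments and the shrink factor that separates sibling subtrees are illustrative parameter choices, not the published Tree Colors defaults, and no HCL-to-RGB conversion is shown.

```python
def tree_colors(tree, hue_range=(0.0, 360.0), chroma=60.0, luminance=70.0,
                fraction=0.75):
    """Assign each node an (H, C, L) triple by recursively splitting the
    parent's hue range among its children. Parameter choices are illustrative."""
    colors = {}

    def assign(name, children, lo, hi, depth):
        colors[name] = ((lo + hi) / 2.0,           # hue: middle of the range
                        chroma + 10.0 * depth,     # deeper nodes: more chroma
                        luminance - 5.0 * depth)   # deeper nodes: darker
        if not children:
            return
        # Keep only a fraction of the range so sibling subtrees stay separated.
        width = (hi - lo) * fraction / len(children)
        for i, (child, grandchildren) in enumerate(children.items()):
            start = lo + i * (hi - lo) / len(children)
            assign(child, grandchildren, start, start + width, depth + 1)

    for i, (root, kids) in enumerate(tree.items()):
        lo = hue_range[0] + i * (hue_range[1] - hue_range[0]) / len(tree)
        hi = lo + (hue_range[1] - hue_range[0]) / len(tree)
        assign(root, kids, lo, hi, 0)
    return colors

# Toy classification tree as nested dicts.
tree = {'economy': {'agriculture': {}, 'industry': {}},
        'society': {'health': {}, 'education': {}}}
print(tree_colors(tree))
```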
A conceptual framework of computations in mid-level vision
Kubilius, Jonas; Wagemans, Johan; Op de Beeck, Hans P.
2014-01-01
If a picture is worth a thousand words, as an English idiom goes, what should those words—or, rather, descriptors—capture? What format of image representation would be sufficiently rich if we were to reconstruct the essence of images from their descriptors? In this paper, we set out to develop a conceptual framework that would be: (i) biologically plausible in order to provide a better mechanistic understanding of our visual system; (ii) sufficiently robust to apply in practice on realistic images; and (iii) able to tap into underlying structure of our visual world. We bring forward three key ideas. First, we argue that surface-based representations are constructed based on feature inference from the input in the intermediate processing layers of the visual system. Such representations are computed in a largely pre-semantic (prior to categorization) and pre-attentive manner using multiple cues (orientation, color, polarity, variation in orientation, and so on), and explicitly retain configural relations between features. The constructed surfaces may be partially overlapping to compensate for occlusions and are ordered in depth (figure-ground organization). Second, we propose that such intermediate representations could be formed by a hierarchical computation of similarity between features in local image patches and pooling of highly-similar units, and reestimated via recurrent loops according to the task demands. Finally, we suggest to use datasets composed of realistically rendered artificial objects and surfaces in order to better understand a model's behavior and its limitations. PMID:25566044
A conceptual framework of computations in mid-level vision.
Kubilius, Jonas; Wagemans, Johan; Op de Beeck, Hans P
2014-01-01
If a picture is worth a thousand words, as an English idiom goes, what should those words-or, rather, descriptors-capture? What format of image representation would be sufficiently rich if we were to reconstruct the essence of images from their descriptors? In this paper, we set out to develop a conceptual framework that would be: (i) biologically plausible in order to provide a better mechanistic understanding of our visual system; (ii) sufficiently robust to apply in practice on realistic images; and (iii) able to tap into underlying structure of our visual world. We bring forward three key ideas. First, we argue that surface-based representations are constructed based on feature inference from the input in the intermediate processing layers of the visual system. Such representations are computed in a largely pre-semantic (prior to categorization) and pre-attentive manner using multiple cues (orientation, color, polarity, variation in orientation, and so on), and explicitly retain configural relations between features. The constructed surfaces may be partially overlapping to compensate for occlusions and are ordered in depth (figure-ground organization). Second, we propose that such intermediate representations could be formed by a hierarchical computation of similarity between features in local image patches and pooling of highly-similar units, and reestimated via recurrent loops according to the task demands. Finally, we suggest to use datasets composed of realistically rendered artificial objects and surfaces in order to better understand a model's behavior and its limitations.
Callahan-Flintoft, Chloe; Wyble, Brad
2017-11-01
The visual system is able to detect targets according to a variety of criteria, such as categorical attributes (letter vs. digit) or featural attributes (color). These criteria are often used interchangeably in rapid serial visual presentation (RSVP) studies, but little is known about how rapidly they are processed. The aim of this work was to compare the time course of attentional selection and memory encoding for different types of target criteria. We conducted two experiments where participants reported one or two targets (T1, T2) presented in lateral RSVP streams. Targets were marked either by being a singleton color (a red letter among black letters), by being categorically distinct (digits among letters), or by being a non-singleton color (a target-color letter among heterogeneously colored letters). Using event-related potentials (ERPs) associated with attention and memory encoding (the N2pc and the P3, respectively), we compared the relative latency of these two processing stages for these three kinds of targets. In addition to these ERP measures, we obtained convergent behavioral measures for attention and memory encoding by presenting two targets in immediate sequence and comparing their relative accuracy and proportion of temporal order errors. Both behavioral and EEG measures revealed that singleton color targets were attended much more quickly than either non-singleton color or categorical targets, and there was very little difference between attention latencies to non-singleton color and categorical targets. There was, however, a difference in the speed of memory encoding between non-singleton color and categorical targets in both behavioral and EEG measures, which shows that encoding latency differences do not always mirror attention latency differences. Copyright © 2017 Elsevier Ltd. All rights reserved.
Twenty-Two Good Educational Practices.
ERIC Educational Resources Information Center
Mark, Jorie Lester
1989-01-01
Twenty-two educational practices are drawn from nine settings: K-12, vocational and continuing education, proprietary schools, military training, corporate training, union-sponsored programs, second-chance training, job training, and adult education. They are categorized as process-type practices, techniques, and physical teaching props. (SK)
ERIC Educational Resources Information Center
Bulut, Ergin
2013-01-01
In this article, the author draws specifically on the work of Walter Benjamin and engages with the world of video games by focusing on the constitution of labor as it unfolds in modding practices, as well as approaching the very act of seeing labor in a highly visual culture where value is extracted not just through the labor process but also…
Wilson, Jefferson R; Radcliff, Kris; Schroeder, Gregory; Booth, Madison; Lucasti, Christopher; Fehlings, Michael; Ahmad, Nassr; Vaccaro, Alexander; Arnold, Paul; Sciubba, Daniel; Ching, Alex; Smith, Justin; Shaffrey, Christopher; Singh, Kern; Darden, Bruce; Daffner, Scott; Cheng, Ivan; Ghogawala, Zoher; Ludwig, Steven; Buchowski, Jacob; Brodke, Darrel; Wang, Jeffrey; Lehman, Ronald A; Hilibrand, Alan; Yoon, Tim; Grauer, Jonathan; Dailey, Andrew; Steinmetz, Michael; Harrop, James S
2018-06-01
Anterior cervical discectomy and fusion has a low but well-established profile of adverse events. The goal of this study was to gauge surgeon opinion regarding the frequency and acceptability of these events. A 2-page survey was distributed to attendees at the 2015 Cervical Spine Research Society (CSRS) meeting. Respondents were asked to categorize 18 anterior cervical discectomy and fusion-related adverse events as either: "common and acceptable," "uncommon and acceptable," "uncommon and sometimes acceptable," or "uncommon and unacceptable." Results were compiled to generate the relative frequency of these responses for each complication. Responses for each complication event were also compared between respondents based on practice location (US vs. non-US), primary specialty (orthopedics vs. neurosurgery) and years in practice. Of 150 surveys distributed, 115 responses were received (76.7% response rate), with the majority of respondents found to be US-based (71.3%) orthopedic surgeons (82.6%). Wrong level surgery, esophageal injury, retained drain, and spinal cord injury were considered by most to be unacceptable and uncommon complications. Dysphagia and adjacent segment disease occurred most often, but were deemed acceptable complications. Although surgeon experience and primary specialty had little impact on responses, practice location was found to significantly influence responses for 12 of 18 complications, with non-US surgeons found to categorize events more toward the uncommon and unacceptable end of the spectrum as compared with US surgeons. These results serve to aid communication and transparency within the field of spine surgery, and will help to inform future quality improvement and best practice initiatives.
Late maturation of visual spatial integration in humans
Kovács, Ilona; Kozma, Petra; Fehér, Ákos; Benedek, György
1999-01-01
Visual development is thought to be completed at an early age. We suggest that the maturation of the visual brain is not homogeneous: functions with greater need for early availability, such as visuomotor control, mature earlier, and the development of other visual functions may extend well into childhood. We found significant improvement in children between 5 and 14 years in visual spatial integration by using a contour-detection task. The data show that long-range spatial interactions—subserving the integration of orientational information across the visual field—span a shorter spatial range in children than in adults. Performance in the task improves in a cue-specific manner with practice, which indicates the participation of fairly low-level perceptual mechanisms. We interpret our findings in terms of a protracted development of ventral visual-stream function in humans. PMID:10518600
Mattys, Sven L; Scharenborg, Odette
2014-03-01
This study investigates the extent to which age-related language processing difficulties are due to a decline in sensory processes or to a deterioration of cognitive factors, specifically, attentional control. Two facets of attentional control were examined: inhibition of irrelevant information and divided attention. Younger and older adults were asked to categorize the initial phoneme of spoken syllables ("Was it m or n?"), trying to ignore the lexical status of the syllables. The phonemes were manipulated to range in eight steps from m to n. Participants also did a discrimination task on syllable pairs ("Were the initial sounds the same or different?"). Categorization and discrimination were performed under either divided attention (concurrent visual-search task) or focused attention (no visual task). The results showed that even when the younger and older adults were matched on their discrimination scores: (1) the older adults had more difficulty inhibiting lexical knowledge than did younger adults, (2) divided attention weakened lexical inhibition in both younger and older adults, and (3) divided attention impaired sound discrimination more in older than younger listeners. The results confirm the independent and combined contribution of sensory decline and deficit in attentional control to language processing difficulties associated with aging. The relative weight of these variables and their mechanisms of action are discussed in the context of theories of aging and language. (c) 2014 APA, all rights reserved.
Crews, John E; Chou, Chiu-Fang; Zack, Matthew M; Zhang, Xinzhi; Bullard, Kai McKeever; Morse, Alan R; Saaddine, Jinan B
2016-06-01
Our aim was to examine the association of health-related quality of life (HRQoL) with severity of visual impairment among people aged 40-64 years. We used cross-sectional data from the 2006-2010 Behavioral Risk Factor Surveillance System to examine six measures of HRQoL: self-reported health, physically unhealthy days, mentally unhealthy days, activity limitation days, life satisfaction, and disability. Visual impairment was categorized as no, a little, or moderate/severe. We examined the association between visual impairment and HRQoL using logistic regression accounting for the survey's complex design. Overall, 23.0% of the participants reported a little difficulty seeing, while 16.8% reported moderate/severe difficulty seeing. People aged 40-64 years with moderate/severe visual impairment had more frequent (≥14) physically unhealthy days, mentally unhealthy days, and activity limitation days in the last 30 days, as well as greater life dissatisfaction, greater disability, and poorer health compared to people reporting no or a little visual impairment. After controlling for covariates (age, sex, marital status, race/ethnicity, education, income, state, year, health insurance, heart disease, stroke, heart attack, body mass index, leisure-time activity, smoking, and medical care costs), and compared to people with no visual impairment, those with moderate/severe visual impairment were more likely to have fair/poor health (odds ratio, OR, 2.01, 95% confidence interval, CI, 1.82-2.23), life dissatisfaction (OR 2.06, 95% CI 1.80-2.35), disability (OR 1.95, 95% CI 1.80-2.13), and frequent physically unhealthy days (OR 1.69, 95% CI 1.52-1.88), mentally unhealthy days (OR 1.84, 95% CI 1.66-2.05), and activity limitation days (OR 1.94, 95% CI 1.71-2.20; all p < 0.0001). Poor HRQoL was strongly associated with moderate/severe visual impairment among people aged 40-64 years.
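The odds ratios above come from logistic regression models of each HRQoL outcome on visual impairment category with covariate adjustment. A minimal sketch of that kind of model follows, using statsmodels; the variable names, coding, and simulated data are placeholders, and the real analysis additionally adjusts for the full covariate list and the survey's complex design.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey extract; names and values are placeholders, not BRFSS coding.
rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({
    'fair_poor_health': rng.integers(0, 2, n),
    'vision': rng.choice(['none', 'a_little', 'moderate_severe'], n),
    'age': rng.integers(40, 65, n),
})

# Logistic regression of fair/poor health on visual impairment, adjusting for age,
# with 'none' as the reference category for vision.
model = smf.logit("fair_poor_health ~ C(vision, Treatment('none')) + age",
                  data=df).fit(disp=0)

# Odds ratios are the exponentiated coefficients.
print(np.exp(model.params).round(2))
```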
Perceptual advantage for category-relevant perceptual dimensions: the case of shape and motion.
Folstein, Jonathan R; Palmeri, Thomas J; Gauthier, Isabel
2014-01-01
Category learning facilitates perception along relevant stimulus dimensions, even when tested in a discrimination task that does not require categorization. While this general phenomenon has been demonstrated previously, perceptual facilitation along dimensions has been documented by measuring different specific phenomena in different studies using different kinds of objects. Across several object domains, there is support for acquired distinctiveness, the stretching of a perceptual dimension relevant to learned categories. Studies using faces and studies using simple separable visual dimensions have also found evidence of acquired equivalence, the shrinking of a perceptual dimension irrelevant to learned categories, and categorical perception, the local stretching across the category boundary. These latter two effects are rarely observed with complex non-face objects. Failures to find these effects with complex non-face objects may have been because the dimensions tested previously were perceptually integrated. Here we tested effects of category learning with non-face objects categorized along dimensions that have been found to be processed by different areas of the brain, shape and motion. While we replicated acquired distinctiveness, we found no evidence for acquired equivalence or categorical perception.
Fu, Qiufang; Liu, Yong-Jin; Dienes, Zoltan; Wu, Jianhui; Chen, Wenfeng; Fu, Xiaolan
2016-07-01
A fundamental question in vision research is whether visual recognition is determined by edge-based information (e.g., edge, line, and conjunction) or surface-based information (e.g., color, brightness, and texture). To investigate this question, we manipulated the stimulus onset asynchrony (SOA) between the scene and the mask in a backward masking task of natural scene categorization. The behavioral results showed that correct classification was higher for line-drawings than for color photographs when the SOA was 13 ms, but lower when the SOA was longer. The ERP results revealed that most latencies of early components were shorter for the line-drawings than for the color photographs, and the latencies gradually increased with the SOA for the color photographs but not for the line-drawings. The results provide new evidence that edge-based information is the primary determinant of natural scene categorization, receiving priority processing; by contrast, surface information takes longer to facilitate natural scene categorization. Copyright © 2016 Elsevier Inc. All rights reserved.
Brants, Marijke; Bulthé, Jessica; Daniels, Nicky; Wagemans, Johan; Op de Beeck, Hans P
2016-02-15
Visual object perception is an important function in primates which can be fine-tuned by experience, even in adults. Which factors determine the regions and the neurons that are modified by learning is still unclear. Recently, it was proposed that the exact cortical focus and distribution of learning effects might depend upon the pre-learning mapping of relevant functional properties and how this mapping determines the informativeness of neural units for the stimuli and the task to be learned. From this hypothesis we would expect that visual experience would strengthen the pre-learning distributed functional map of the relevant distinctive object properties. Here we present a first test of this prediction in twelve human subjects who were trained in object categorization and differentiation, preceded and followed by a functional magnetic resonance imaging session. Specifically, training increased the distributed multi-voxel pattern information for trained object distinctions in object-selective cortex, resulting in a generalization from pre-training multi-voxel activity patterns to after-training activity patterns. Simulations show that the increased selectivity combined with the inter-session generalization is consistent with a training-induced strengthening of a pre-existing selectivity map. No training-related neural changes were detected in other regions. In sum, training to categorize or individuate objects strengthened pre-existing representations in human object-selective cortex, providing a first indication that the neuroanatomical distribution of learning effects depends upon the pre-learning mapping of visual object properties. Copyright © 2015 Elsevier Inc. All rights reserved.
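The cross-session generalization analysis can be pictured as training a pattern classifier on pre-training multi-voxel activity and testing it on post-training activity for the same object distinctions. The sketch below uses simulated patterns and a linear SVM as stand-ins; it illustrates the logic only, not the authors' exact MVPA pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Simulated multi-voxel patterns: trials x voxels, for two trained object
# categories, in a pre-training and a post-training session. Values are
# placeholders standing in for activity estimates from object-selective cortex.
rng = np.random.default_rng(4)
n_trials, n_voxels = 40, 200
labels = np.repeat([0, 1], n_trials // 2)
signal = rng.normal(0, 1, (2, n_voxels))            # category-specific pattern
pre  = signal[labels] + rng.normal(0, 2.0, (n_trials, n_voxels))
post = signal[labels] + rng.normal(0, 1.5, (n_trials, n_voxels))  # sharper after training

# Cross-session generalization: a classifier trained on pre-training patterns
# should transfer to post-training patterns if training strengthened, rather
# than replaced, the pre-existing selectivity map.
clf = LinearSVC(C=1.0).fit(pre, labels)
print('pre -> post accuracy:', round(clf.score(post, labels), 2))
```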
Rapid discrimination of visual scene content in the human brain.
Anokhin, Andrey P; Golosheykin, Simon; Sirevaag, Erik; Kristjansson, Sean; Rohrbaugh, John W; Heath, Andrew C
2006-06-06
The rapid evaluation of complex visual environments is critical for an organism's adaptation and survival. Previous studies have shown that emotionally significant visual scenes, both pleasant and unpleasant, elicit a larger late positive wave in the event-related brain potential (ERP) than emotionally neutral pictures. The purpose of the present study was to examine whether neuroelectric responses elicited by complex pictures discriminate between specific, biologically relevant contents of the visual scene and to determine how early in the picture processing this discrimination occurs. Subjects (n = 264) viewed 55 color slides differing in both scene content and emotional significance. No categorical judgments or responses were required. Consistent with previous studies, we found that emotionally arousing pictures, regardless of their content, produce a larger late positive wave than neutral pictures. However, when pictures were further categorized by content, anterior ERP components in a time window between 200 and 600 ms following stimulus onset showed a high selectivity for pictures with erotic content compared to other pictures regardless of their emotional valence (pleasant, neutral, and unpleasant) or emotional arousal. The divergence of ERPs elicited by erotic and non-erotic contents started at 185 ms post-stimulus in the fronto-central midline region, with a later onset in parietal regions. This rapid, selective, and content-specific processing of erotic materials and its dissociation from other pictures (including emotionally positive pictures) suggests the existence of a specialized neural network for prioritized processing of a distinct category of biologically relevant stimuli with high adaptive and evolutionary significance.
Rapid discrimination of visual scene content in the human brain
Anokhin, Andrey P.; Golosheykin, Simon; Sirevaag, Erik; Kristjansson, Sean; Rohrbaugh, John W.; Heath, Andrew C.
2007-01-01
The rapid evaluation of complex visual environments is critical for an organism's adaptation and survival. Previous studies have shown that emotionally significant visual scenes, both pleasant and unpleasant, elicit a larger late positive wave in the event-related brain potential (ERP) than emotionally neutral pictures. The purpose of the present study was to examine whether neuroelectric responses elicited by complex pictures discriminate between specific, biologically relevant contents of the visual scene and to determine how early in the picture processing this discrimination occurs. Subjects (n=264) viewed 55 color slides differing in both scene content and emotional significance. No categorical judgments or responses were required. Consistent with previous studies, we found that emotionally arousing pictures, regardless of their content, produce a larger late positive wave than neutral pictures. However, when pictures were further categorized by content, anterior ERP components in a time window between 200−600 ms following stimulus onset showed a high selectivity for pictures with erotic content compared to other pictures regardless of their emotional valence (pleasant, neutral, and unpleasant) or emotional arousal. The divergence of ERPs elicited by erotic and non-erotic contents started at 185 ms post-stimulus in the fronto-central midline regions, with a later onset in parietal regions. This rapid, selective, and content-specific processing of erotic materials and its dissociation from other pictures (including emotionally positive pictures) suggests the existence of a specialized neural network for prioritized processing of a distinct category of biologically relevant stimuli with high adaptive and evolutionary significance. PMID:16712815
Lu, Aitao; Yang, Ling; Yu, Yanping; Zhang, Meichao; Shao, Yulan; Zhang, Honghong
2014-08-01
The present study used the event-related potential technique to investigate the nature of the linguistic effect on color perception. Four types of stimuli based on hue differences between a target color and a preceding color were used: zero hue step within-category color (0-WC); one hue step within-category color (1-WC); one hue step between-category color (1-BC); and two hue step between-category color (2-BC). The ERP results showed no significant effect of stimulus type in the 100-200 ms time window. However, in the 200-350 ms time window, ERP responses to the 1-WC target color overlapped with those to the 0-WC target color for right visual field (RVF) but not left visual field (LVF) presentation. For the 1-BC condition, ERP amplitudes were comparable in the two visual fields, both being significantly different from the 0-WC condition. The 2-BC condition showed the same pattern as the 1-BC condition. These results suggest that the categorical perception of color in the RVF is due to linguistic suppression of within-category color discrimination rather than between-category color enhancement, and that the effect is independent of early perceptual processes. © 2014 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
Video content parsing based on combined audio and visual information
NASA Astrophysics Data System (ADS)
Zhang, Tong; Kuo, C.-C. Jay
1999-08-01
While previous research on audiovisual data segmentation and indexing primarily focuses on the pictorial part, significant clues contained in the accompanying audio flow are often ignored. A fully functional system for video content parsing can be achieved more successfully through a proper combination of audio and visual information. By investigating the data structure of different video types, we present tools for both audio and visual content analysis and a scheme for video segmentation and annotation in this research. In the proposed system, video data are segmented into audio scenes and visual shots by detecting abrupt changes in audio and visual features, respectively. Then, each audio scene is categorized and indexed as one of the basic audio types, while each visual shot is represented by keyframes and associated image features. An index table is then generated automatically for each video clip based on the integration of outputs from audio and visual analysis. It is shown that the proposed system provides satisfactory video indexing results.
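As a rough illustration of the segmentation step described above, the following sketch flags abrupt changes by thresholding the distance between consecutive feature vectors; the feature construction, the Euclidean distance, and the threshold value are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def detect_boundaries(features, threshold=0.5):
    """Return indices where consecutive feature vectors differ sharply.

    features: (n_frames, n_dims) array of per-frame (or per-audio-window)
    descriptors, e.g. colour histograms or short-time energy vectors.
    The threshold is an illustrative value, not taken from the paper.
    """
    diffs = np.linalg.norm(np.diff(features, axis=0), axis=1)
    return np.where(diffs > threshold)[0] + 1  # boundary starts the next segment

# Toy example: three "shots" with distinct mean feature values.
rng = np.random.default_rng(0)
feats = np.vstack([
    rng.normal(0.0, 0.05, (30, 8)),
    rng.normal(1.0, 0.05, (40, 8)),
    rng.normal(2.0, 0.05, (30, 8)),
])
print(detect_boundaries(feats))  # expected: [30, 70]
```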
Will it Blend? Visualization and Accuracy Evaluation of High-Resolution Fuzzy Vegetation Maps
NASA Astrophysics Data System (ADS)
Zlinszky, A.; Kania, A.
2016-06-01
Instead of assigning every map pixel to a single class, fuzzy classification records not only the class assigned to each pixel but also the certainty of that class and the alternative possible classes, based on fuzzy set theory. The advantages of fuzzy classification for vegetation mapping are well recognized, but the accuracy and uncertainty of fuzzy maps cannot be directly quantified with indices developed for hard-boundary categorizations. The rich information in such a map is impossible to convey with a single map product or accuracy figure. Here we introduce a suite of evaluation indices and visualization products for fuzzy maps generated with ensemble classifiers. We also propose a way of evaluating classwise prediction certainty with "dominance profiles", which bin pixels according to the probability of the dominant class while also showing the probability of all the other classes. Together, these data products allow a quantitative understanding of the rich information in a fuzzy raster map, both for individual classes and in terms of variability in space, and they establish the connection between spatially explicit class certainty and traditional accuracy metrics. These map products are directly comparable to widely used hard-boundary evaluation procedures, support active-learning-based iterative classification and can be applied for operational use.
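One plausible reading of the "dominance profile" idea is sketched below: pixels are binned by the probability of their dominant class, and the mean probability of every class is reported per bin. The bin count and the Dirichlet toy data are illustrative assumptions only.

```python
import numpy as np

def dominance_profile(probs, n_bins=10):
    """Bin pixels by the probability of their dominant class.

    probs: (n_pixels, n_classes) class-membership probabilities per pixel
    (e.g. vote fractions from an ensemble classifier). Returns, for each bin,
    the pixel count and the mean probability of every class within that bin.
    """
    p_dom = probs.max(axis=1)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_idx = np.clip(np.digitize(p_dom, edges) - 1, 0, n_bins - 1)
    counts = np.bincount(bin_idx, minlength=n_bins)
    mean_probs = np.full((n_bins, probs.shape[1]), np.nan)
    for b in range(n_bins):
        if counts[b]:
            mean_probs[b] = probs[bin_idx == b].mean(axis=0)
    return edges, counts, mean_probs

# Toy fuzzy map: 1000 pixels, 4 classes, Dirichlet-distributed memberships.
rng = np.random.default_rng(1)
probs = rng.dirichlet(alpha=[2, 1, 1, 1], size=1000)
edges, counts, mean_probs = dominance_profile(probs)
print(counts)
```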
Montessori Theory into Practice: A Practical Newsletter for NAMTA Members, 1995.
ERIC Educational Resources Information Center
Montessori Theory into Practice Newsletter, 1995
1995-01-01
These two issues of Theory into Practice, a practical newsletter of the North American Montessori Teachers Association (NAMTA), contain feature articles, upcoming events, and a job bulletin detailing employment opportunities categorized by state and country. The March issue focuses on television's effects and contains an article on the 1st Annual…
Measuring Student Teachers' Practices and Beliefs about Teaching Mathematics Using the Rasch Model
ERIC Educational Resources Information Center
Kaspersen, Eivind; Pepin, Birgit; Sikko, Svein Arne
2017-01-01
Several attempts have been made to measure and categorize beliefs and practices of mathematics teachers [Swan, M. 2006. "Designing and Using Research Instruments to Describe the Beliefs and Practices of Mathematics Teachers." "Research in Education" 75 (1): 58-70]. One of the reasons for measuring both beliefs and practices is…
Pradhan, Balaram; Mohanty, Soubhagyalaxmi; Hankey, Alex
2018-01-01
Attentional processes tend to be less well developed in the visually impaired, who require special training to develop them fully. Yogic breathing, which alters the patterns of respiration, has been shown to enhance attention skills. Letter cancellation tests are well-established tools to measure attention and attention span. Here, a modified Braille version of the six-letter cancellation test (SLCT) was used for students with visual impairment (VI). This study aimed to assess the immediate effects of Bhramari Pranayama (BhPr) and breath awareness (BA) on students with VI. This was a self-as-control study held on 2 consecutive days, with 19 participants (8 males, 11 females) with a mean age of 15.89 ± 1.59 years, randomized into two groups. On the 1st day, Group 1 performed 10 min of breath awareness and Group 2 performed Bhramari; on the 2nd day, the practices were reversed. Assessments used an SLCT specially adapted for the visually impaired before and after each session. The Braille letter cancellation test was successfully taken by 19 students. Scores significantly improved after both techniques for each student following practices on both days (P < 0.001). BhPr may have more effect on attention performance than BA, as wrong scores significantly increased following BA (P < 0.05), but the increase in the score after Bhramari was not significant. Despite the small sample size, the improvement in attentional processes by both yoga breathing techniques was robust. Attentional skills were definitely enhanced. Long-term practice should be studied.
Pradhan, Balaram; Mohanty, Soubhagyalaxmi; Hankey, Alex
2018-01-01
Context: Attentional processes tend to be less well developed in the visually impaired, who require special training to develop them fully. Yogic breathing, which alters the patterns of respiration, has been shown to enhance attention skills. Letter cancellation tests are well-established tools to measure attention and attention span. Here, a modified Braille version of the six-letter cancellation test (SLCT) was used for students with visual impairment (VI). Aim: This study aimed to assess the immediate effects of Bhramari Pranayama (BhPr) and breath awareness (BA) on students with VI. Methods: This was a self-as-control study held on 2 consecutive days, with 19 participants (8 males, 11 females) with a mean age of 15.89 ± 1.59 years, randomized into two groups. On the 1st day, Group 1 performed 10 min of breath awareness and Group 2 performed Bhramari; on the 2nd day, the practices were reversed. Assessments used an SLCT specially adapted for the visually impaired before and after each session. Results: The Braille letter cancellation test was successfully taken by 19 students. Scores significantly improved after both techniques for each student following practices on both days (P < 0.001). BhPr may have more effect on attention performance than BA, as wrong scores significantly increased following BA (P < 0.05), but the increase in the score after Bhramari was not significant. Conclusions: Despite the small sample size, the improvement in attentional processes by both yoga breathing techniques was robust. Attentional skills were definitely enhanced. Long-term practice should be studied. PMID:29755219
2011-06-01
A more efficient ISS process would result in lower excess material and financial savings. The data set analyzed contained ISS purchased by both NAVAIR and NAVSEA as well as ... demand data. Additionally, analysis of current ISS processes at NAVAIR and NAVSEA was conducted. The research resulted in finding that NAVAIR and ... eventually categorized as excess material.
Establishing the situated features associated with perceived stress
Lebois, Lauren A.M.; Hertzog, Christopher; Slavich, George M.; Barrett, Lisa Feldman; Barsalou, Lawrence W.
2016-01-01
We propose that the domain general process of categorization contributes to the perception of stress. When a situation contains features associated with stressful experiences, it is categorized as stressful. From the perspective of situated cognition, the features used to categorize experiences as stressful are the features typically true of stressful situations. To test this hypothesis, we asked participants to evaluate the perceived stress of 572 imagined situations, and to also evaluate each situation for how much it possessed 19 features potentially associated with stressful situations and their processing (e.g., self-threat, familiarity, visual imagery, outcome certainty). Following variable reduction through factor analysis, a core set of 8 features associated with stressful situations—expectation violation, self-threat, coping efficacy, bodily experience, arousal, negative valence, positive valence, and perseveration—all loaded on a single Core Stress Features factor. In a multilevel model, this factor and an Imagery factor explained 88% of the variance in judgments of perceived stress, with significant random effects reflecting differences in how individual participants categorized stress. These results support the hypothesis that people categorize situations as stressful to the extent that typical features of stressful situations are present. To our knowledge, this is the first attempt to establish a comprehensive set of features that predicts perceived stress. PMID:27288834
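A minimal sketch of the variable-reduction step follows, applying a generic factor analysis to synthetic ratings (572 situations by 19 features generated from two latent factors); it does not reproduce the study's data, its feature set, or its multilevel model.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Synthetic ratings: 572 situations x 19 candidate features, driven by two
# latent factors (loosely "core stress features" and "imagery"); purely
# illustrative, not the study's data.
rng = np.random.default_rng(2)
latent = rng.normal(size=(572, 2))
loadings = rng.normal(size=(2, 19))
ratings = latent @ loadings + rng.normal(scale=0.5, size=(572, 19))

fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(ratings)         # per-situation factor scores
print(fa.components_.shape, scores.shape)  # (2, 19) loadings, (572, 2) scores
```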
Snack food as a modulator of human resting-state functional connectivity.
Mendez-Torrijos, Andrea; Kreitz, Silke; Ivan, Claudiu; Konerth, Laura; Rösch, Julie; Pischetsrieder, Monika; Moll, Gunther; Kratz, Oliver; Dörfler, Arnd; Horndasch, Stefanie; Hess, Andreas
2018-04-04
To elucidate the mechanisms of how snack foods may induce non-homeostatic food intake, we used resting state functional magnetic resonance imaging (fMRI), as resting state networks can individually adapt to experience after short exposures. In addition, we used graph theoretical analysis together with machine learning techniques (support vector machine) to identify biomarkers that can discriminate between high-caloric (potato chips) and low-caloric (zucchini) food stimulation. Seventeen healthy human subjects with body mass index (BMI) 19 to 27 underwent 2 different fMRI sessions where an initial resting state scan was acquired, followed by visual presentation of different images of potato chips and zucchini. There was then a 5-minute pause to ingest food (day 1 = potato chips, day 3 = zucchini), followed by a second resting state scan. fMRI data were further analyzed using graph theory analysis and support vector machine techniques. Potato chips vs. zucchini stimulation led to significant connectivity changes. The support vector machine categorized the 2 types of food stimuli with 100% accuracy. Visual, auditory, and somatosensory structures, as well as thalamus, insula, and basal ganglia were found to be important for food classification. After potato chips consumption, the BMI was associated with the path length and degree in nucleus accumbens, middle temporal gyrus, and thalamus. The results suggest that high vs. low caloric food stimulation in healthy individuals can induce significant changes in resting state networks. These changes can be detected using graph theory measures in conjunction with support vector machine. Additionally, we found that the BMI affects the response of the nucleus accumbens when high caloric food is consumed.
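The classification approach can be illustrated with the hedged sketch below: a linear support vector machine with leave-one-out cross-validation over per-scan feature vectors. The synthetic features here merely stand in for the graph-theoretic measures used in the study.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Illustrative stand-in for the pipeline: per-scan graph metrics (e.g. node
# degree, path length per region) as features, food condition as the label.
# Data are synthetic; 17 subjects x 2 conditions.
rng = np.random.default_rng(3)
n_subjects, n_regions = 17, 20
chips = rng.normal(loc=1.0, scale=1.0, size=(n_subjects, n_regions))
zucchini = rng.normal(loc=0.0, scale=1.0, size=(n_subjects, n_regions))
X = np.vstack([chips, zucchini])
y = np.array([1] * n_subjects + [0] * n_subjects)

clf = SVC(kernel="linear", C=1.0)
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```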
ERIC Educational Resources Information Center
Richler, Jennifer J.; Gauthier, Isabel; Palmeri, Thomas J.
2011-01-01
Are there consequences of calling objects by their names? Lupyan (2008) suggested that overtly labeling objects impairs subsequent recognition memory because labeling shifts stored memory representations of objects toward the category prototype (representational shift hypothesis). In Experiment 1, we show that processing objects at the basic…
Dance and Imagery--The Link between Movement and Imagination.
ERIC Educational Resources Information Center
Smith, Karen Lynn, Ed.; And Others
1990-01-01
This feature examines the diverse nature of imagery, how images work, and the use of imagery--in creative dance for children, to enhance alignment, and as a therapeutic device. Also explored are creative visualization and research tools for observing and categorizing the use of images by dance teachers. (IAH)
Sheltered Workshops and Transition: Old Bottles, New Wine?
ERIC Educational Resources Information Center
Coombe, Edmund
This paper provides a historical overview of sheltered workshops and presents information about service innovations and mission expansion. The first workshop in the United States was the Perkins Institute, opened in 1837 for individuals with visual handicaps. This workshop was typical of "categorical" workshops that were established during this…
The Use of Aftereffects in the Study of Relationships among Emotion Categories
ERIC Educational Resources Information Center
Rutherford, M. D.; Chattha, Harnimrat Monica; Krysko, Kristen M.
2008-01-01
The perception of visual aftereffects has been long recognized, and these aftereffects reveal a relationship between perceptual categories. Thus, emotional expression aftereffects can be used to map the categorical relationships among emotion percepts. One might expect a symmetric relationship among categories, but an evolutionary, functional…
Categorical Effects in Children's Colour Search: A Cross-Linguistic Comparison
ERIC Educational Resources Information Center
Daoutis, Christine A.; Franklin, Anna; Riddett, Amy; Clifford, Alexandra; Davies, Ian R. L.
2006-01-01
In adults, visual search for a colour target is facilitated if the target and distractors fall in different colour categories (e.g. Daoutis, Pilling, & Davies, in press). The present study explored category effects in children's colour search. The relationship between linguistic colour categories and perceptual categories was addressed by…
Non-Linear Modeling of Growth Prerequisites in a Finnish Polytechnic Institution of Higher Education
ERIC Educational Resources Information Center
Nokelainen, Petri; Ruohotie, Pekka
2009-01-01
Purpose: This study aims to examine the factors of growth-oriented atmosphere in a Finnish polytechnic institution of higher education with categorical exploratory factor analysis, multidimensional scaling and Bayesian unsupervised model-based visualization. Design/methodology/approach: This study was designed to examine employee perceptions of…
Syntactic Categorization in French-Learning Infants
ERIC Educational Resources Information Center
Shi, Rushen; Melancon, Andreane
2010-01-01
Recent work showed that infants recognize and store function words starting from the age of 6-8 months. Using a visual fixation procedure, the present study tested whether French-learning 14-month-olds have the knowledge of syntactic categories of determiners and pronouns, respectively, and whether they can use these function words for…
ERIC Educational Resources Information Center
Qiao, Xiaomei; Forster, Kenneth; Witzel, Naoko
2009-01-01
Bowers, Davis, and Hanley (Bowers, J. S., Davis, C. J., & Hanley, D. A. (2005). "Interfering neighbours: The impact of novel word learning on the identification of visually similar words." "Cognition," 97(3), B45-B54) reported that if participants were trained to type nonwords such as "banara", subsequent semantic categorization responses to…
Aggressive Bimodal Communication in Domestic Dogs, Canis familiaris
Déaux, Éloïse C.; Clarke, Jennifer A.; Charrier, Isabelle
2015-01-01
Evidence of animal multimodal signalling is widespread and compelling. Dogs’ aggressive vocalisations (growls and barks) have been extensively studied, but without any consideration of the simultaneously produced visual displays. In this study we aimed to categorize dogs’ bimodal aggressive signals according to the redundant/non-redundant classification framework. We presented dogs with unimodal (audio or visual) or bimodal (audio-visual) stimuli and measured their gazing and motor behaviours. Responses did not qualitatively differ between the bimodal and two unimodal contexts, indicating that acoustic and visual signals provide redundant information. We could not further classify the signal as ‘equivalent’ or ‘enhancing’ as we found evidence for both subcategories. We discuss our findings in relation to the complex signal framework, and propose several hypotheses for this signal’s function. PMID:26571266
Vision-related fitness to drive mobility scooters: A practical driving test.
Cordes, Christina; Heutink, Joost; Tucha, Oliver M; Brookhuis, Karel A; Brouwer, Wiebo H; Melis-Dankers, Bart J M
2017-03-06
To investigate practical fitness to drive mobility scooters, visually impaired participants were compared with healthy controls in a between-subjects design. Participants were 46 visually impaired individuals (13 with very low visual acuity, 10 with low visual acuity, 11 with peripheral field defects, 12 with multiple visual impairments) and 35 normal-sighted controls. Participants completed a practical mobility scooter test-drive, which was recorded on video. Two independent occupational therapists specialized in orientation and mobility evaluated the videos systematically. Approximately 90% of the visually impaired participants passed the driving test. On average, participants with visual impairments performed worse than normal-sighted controls, but were judged sufficiently safe. In particular, difficulties were observed in participants with peripheral visual field defects and those with a combination of low visual acuity and visual field defects. People with visual impairment are, in practice, fit to drive mobility scooters; thus visual impairment on its own should not be viewed as a determinant of safety to drive mobility scooters. However, special attention should be paid to individuals with visual field defects with or without a combined low visual acuity.
Armson, Heather; Elmslie, Tom; Roder, Stefanie; Wakefield, Jacqueline
2015-01-01
This study categorizes 4 practice change options, including commitment-to-change (CTC) statements, using Bloom's taxonomy to explore the relationship between a hierarchy of CTC statements and implementation of changes in practice. Our hypothesis was that deeper learning would be positively associated with implementation of planned practice changes. Thirty-five family physicians were recruited from existing practice-based small learning groups. They were asked to use their usual small-group process while exploring an educational module on peripheral neuropathy. Part of this process included the completion of a practice reflection tool (PRT) that incorporates CTC statements containing a broader set of practice change options: considering a change, confirming current practice, and not being convinced a change is needed ("enhanced" CTC). The statements were categorized using Bloom's taxonomy and then compared to reported practice implementation after 3 months. Nearly all participants made a CTC statement and reported successful practice implementation at 3 months. By using the "enhanced" CTC options, additional components that contribute to practice change were captured. Unanticipated changes accounted for one-third of all successful changes. Categorizing statements on the PRT using Bloom's taxonomy highlighted the progression from knowledge/comprehension to application/analysis to synthesis/evaluation. All PRT statements were classified in the upper 2 levels of the taxonomy, and these higher-level (deep learning) statements were related to higher levels of practice implementation. The "enhanced" CTC options captured changes that would not otherwise be identified and may be worthy of further exploration in other CME activities. Using Bloom's taxonomy to code the PRT statements proved useful in highlighting the progression through increasing levels of cognitive complexity, reflecting deep learning. © 2015 The Alliance for Continuing Education in the Health Professions, the Society for Academic Continuing Medical Education, and the Council on Continuing Medical Education, Association for Hospital Medical Education.
Ruys, Kirsten I; Dijksterhuis, Ap; Corneille, Olivier
2008-01-01
Numerous studies have shown that social categorization is a flexible process that partly depends on contextual variables. However, little is known about the role of affect in people's access to categorical dimensions. We investigated the hypothesis that social category activation is facilitated on evaluatively congruent dimensions. Two studies provide support for this evaluative-matching hypothesis, in which social categorization was found to be faster and more accurate for evaluatively congruent categories (i.e., unattractive foreigners, unattractive prostitutes, attractive fellow-citizens and attractive brides) than for evaluatively incongruent categories (i.e., attractive foreigners, attractive prostitutes, unattractive fellow-citizens and unattractive brides). We discuss the theoretical and practical implications of these findings.
Hallucinations Experienced by Visually Impaired: Charles Bonnet Syndrome.
Pang, Linda
2016-12-01
Charles Bonnet Syndrome is a condition in which visual hallucinations occur as a result of damage along the visual pathway. Patients with Charles Bonnet Syndrome maintain partial or full insight that the hallucinations are not real, show an absence of psychological conditions and an absence of hallucinations affecting other sensory modalities, and maintain intact intellectual functioning. Charles Bonnet Syndrome has been well documented in the neurologic, geriatric medicine, and psychiatric literature, but there is a lack of information in the optometric and ophthalmologic literature. Therefore, increased awareness of signs and symptoms associated with Charles Bonnet Syndrome is required among practicing clinicians. This review of the literature will also identify other etiologies of visual hallucinations, the pathophysiology of Charles Bonnet Syndrome, and effective management strategies.
Hallucinations Experienced by Visually Impaired: Charles Bonnet Syndrome
Pang, Linda
2016-01-01
Charles Bonnet Syndrome is a condition in which visual hallucinations occur as a result of damage along the visual pathway. Patients with Charles Bonnet Syndrome maintain partial or full insight that the hallucinations are not real, show an absence of psychological conditions and an absence of hallucinations affecting other sensory modalities, and maintain intact intellectual functioning. Charles Bonnet Syndrome has been well documented in the neurologic, geriatric medicine, and psychiatric literature, but there is a lack of information in the optometric and ophthalmologic literature. Therefore, increased awareness of signs and symptoms associated with Charles Bonnet Syndrome is required among practicing clinicians. This review of the literature will also identify other etiologies of visual hallucinations, the pathophysiology of Charles Bonnet Syndrome, and effective management strategies. PMID:27529611
Cohn, Neil
2014-01-01
How do people make sense of the sequential images in visual narratives like comics? A growing literature of recent research has suggested that this comprehension involves the interaction of multiple systems: The creation of meaning across sequential images relies on a “narrative grammar” that packages conceptual information into categorical roles organized in hierarchic constituents. These images are encapsulated into panels arranged in the layout of a physical page. Finally, how panels frame information can impact both the narrative structure and page layout. Altogether, these systems operate in parallel to construct the Gestalt whole of comprehension of this visual language found in comics. PMID:25071651
NASA Astrophysics Data System (ADS)
Derefeldt, Gunilla A. M.; Menu, Jean-Pierre; Swartling, Tiina
1995-04-01
This report surveys cognitive aspects of color in terms of behavioral, neuropsychological, and neurophysiological data. Color is usually defined as psychophysical color or as perceived color. Behavioral data on categorical color perception, absolute judgement of colors, color coding, visual search, and visual awareness refer to the more cognitive aspects of color. These are of major importance in visual synthesis and spatial organization, as already shown by the Gestalt psychologists. Neuropsychological and neurophysiological findings provide evidence for an interrelation between cognitive color and spatial organization. Color also enhances planning strategies, as has been shown by studies on color and eye movements. Memory colors and the color-language connections in the brain also belong among the cognitive aspects of color.
King, Andy J
2016-07-01
The present article reports an experiment investigating untested propositions of exemplification theory in the context of messages promoting early melanoma detection. The study tested visual exemplar presentation types, incorporating visual persuasion principles into the study of exemplification theory and strategic message design. Compared to a control condition, representative visual exemplification was more effective at increasing message effectiveness by eliciting a surprise response, which is consistent with predictions of exemplification theory. Furthermore, participant perception of congruency between the images and text interacted with the type of visual exemplification to explain variation in message effectiveness. Different messaging strategies influenced decision making as well, with the presentation of visual exemplars resulting in people judging the atypicality of moles more conservatively. Overall, results suggest that certain visual messaging strategies may produce unintended effects when presenting people with information about skin cancer. Implications for practice are discussed.
Perceptual learning effect on decision and confidence thresholds.
Solovey, Guillermo; Shalom, Diego; Pérez-Schuster, Verónica; Sigman, Mariano
2016-10-01
Practice can enhance perceptual sensitivity, a well-known phenomenon called perceptual learning. However, the effect of practice on subjective perception has received little attention. We approach this problem from a visual psychophysics and computational modeling perspective. In a sequence of visual search experiments, subjects significantly increased their ability to detect a "trained target". Before and after training, subjects performed two psychophysical protocols that parametrically vary the visibility of the "trained target": an attentional blink and a visual masking task. We found that confidence increased after learning only in the attentional blink task. Despite large differences in some observables and task settings, we identify common mechanisms for decision-making and confidence. Specifically, our behavioral results and computational model suggest that perceptual ability is independent of processing time, indicating that changes in early cortical representations are effective, and that learning changes decision criteria to convey choice and confidence. Copyright © 2016 Elsevier Inc. All rights reserved.
Kim, Scott Y H
2006-01-01
Most decision-making capacity (DMC) research has focused on measuring the decision-making abilities of patients, rather than on how such persons may be categorized as competent or incompetent. However, research ethics policies and practices either assume that we can differentiate or attempt to guide the differentiation of the competent from the incompetent. Thus there is a need to build on the recent advances in capacity research by conceptualizing and studying DMC as a categorical concept. This review discusses why there is a need for such research and addresses challenges and obstacles, both practical and theoretical. After a discussion of the potential obstacles and suggesting ways to overcome them, it discusses why clinicians with expertise in capacity assessments may be the best source of a provisional “gold standard” for criterion validation of categorical capacity status. The review provides discussions of selected key methodological issues in conducting research that treats DMC as a categorical concept, such as the issue of the optimal number of expert judges needed to generate a criterion standard and the kinds of information presented to the experts in obtaining their judgments. Future research needs are outlined. PMID:16177276
Deep Supervised, but Not Unsupervised, Models May Explain IT Cortical Representation
Khaligh-Razavi, Seyed-Mahdi; Kriegeskorte, Nikolaus
2014-01-01
Inferior temporal (IT) cortex in human and nonhuman primates serves visual object recognition. Computational object-vision models, although continually improving, do not yet reach human performance. It is unclear to what extent the internal representations of computational models can explain the IT representation. Here we investigate a wide range of computational model representations (37 in total), testing their categorization performance and their ability to account for the IT representational geometry. The models include well-known neuroscientific object-recognition models (e.g. HMAX, VisNet) along with several models from computer vision (e.g. SIFT, GIST, self-similarity features, and a deep convolutional neural network). We compared the representational dissimilarity matrices (RDMs) of the model representations with the RDMs obtained from human IT (measured with fMRI) and monkey IT (measured with cell recording) for the same set of stimuli (not used in training the models). Better performing models were more similar to IT in that they showed greater clustering of representational patterns by category. In addition, better performing models also more strongly resembled IT in terms of their within-category representational dissimilarities. Representational geometries were significantly correlated between IT and many of the models. However, the categorical clustering observed in IT was largely unexplained by the unsupervised models. The deep convolutional network, which was trained by supervision with over a million category-labeled images, reached the highest categorization performance and also best explained IT, although it did not fully explain the IT data. Combining the features of this model with appropriate weights and adding linear combinations that maximize the margin between animate and inanimate objects and between faces and other objects yielded a representation that fully explained our IT data. Overall, our results suggest that explaining IT requires computational features trained through supervised learning to emphasize the behaviorally important categorical divisions prominently reflected in IT. PMID:25375136
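The core representational similarity comparison used in studies like this one can be sketched as follows: each representation is summarized by a representational dissimilarity matrix (1 minus the Pearson correlation between stimulus patterns), and two RDMs are compared by rank correlation of their entries. The data, stimulus count, and dimensionalities below are synthetic assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Representational dissimilarity matrix entries (condensed form):
    1 - Pearson correlation between each pair of stimulus patterns."""
    return pdist(patterns, metric="correlation")

# Synthetic stand-in: responses of a "model" and of "IT" to 96 stimuli.
rng = np.random.default_rng(4)
shared = rng.normal(size=(96, 50))  # shared representational structure
model_patterns = shared @ rng.normal(size=(50, 200)) + rng.normal(size=(96, 200))
it_patterns = shared @ rng.normal(size=(50, 100)) + rng.normal(size=(96, 100))

rho, p = spearmanr(rdm(model_patterns), rdm(it_patterns))
print(f"model-IT RDM correlation: rho={rho:.2f}, p={p:.3g}")
```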
Task and spatial frequency modulations of object processing: an EEG study.
Craddock, Matt; Martinovic, Jasna; Müller, Matthias M
2013-01-01
Visual object processing may follow a coarse-to-fine sequence imposed by fast processing of low spatial frequencies (LSF) and slow processing of high spatial frequencies (HSF). Objects can be categorized at varying levels of specificity: the superordinate (e.g. animal), the basic (e.g. dog), or the subordinate (e.g. Border Collie). We tested whether superordinate and more specific categorization depend on different spatial frequency ranges, and whether any such dependencies might be revealed by or influence signals recorded using EEG. We used event-related potentials (ERPs) and time-frequency (TF) analysis to examine the time course of object processing while participants performed either a grammatical gender-classification task (which generally forces basic-level categorization) or a living/non-living judgement (superordinate categorization) on everyday, real-life objects. Objects were filtered to contain only HSF or LSF. We found a greater positivity and greater negativity for HSF than for LSF pictures in the P1 and N1 respectively, but no effects of task on either component. A later, fronto-central negativity (N350) was more negative in the gender-classification task than the superordinate categorization task, which may indicate that this component relates to semantic or syntactic processing. We found no significant effects of task or spatial frequency on evoked or total gamma band responses. Our results demonstrate early differences in processing of HSF and LSF content that were not modulated by categorization task, with later responses reflecting such higher-level cognitive factors.
Cedar Project---Original goals and progress to date
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cybenko, G.; Kuck, D.; Padua, D.
1990-11-28
This work encompasses a broad attack on high speed parallel processing. Hardware, software, applications development, and performance evaluation and visualization as well as research topics are proposed. Our goal is to develop practical parallel processing for the 1990's.
Commands, Competence, and "Carino": Maternal Socialization Practices in Mexican American Families
ERIC Educational Resources Information Center
Livas-Dlott, Alejandra; Fuller, Bruce; Stein, Gabriela L.; Bridges, Margaret; Mangual Figueroa, Ariana; Mireles, Laurie
2010-01-01
Early research on the socialization of Latino children has posited that mothers exercise authoritarian practices, compared with lateral reasoning (authoritative) strategies emphasized by Anglo mothers. This work aimed to categorize fixed types of parenting practices tied to the mother's personality rather than to culturally bounded contexts; it…
Mapping and Visualization of The Deepwater Horizon Oil Spill Using Satellite Imagery
NASA Astrophysics Data System (ADS)
Ferreira Pichardo, E.
2017-12-01
Satellites are man-made objects orbiting the Earth and are essential for Earth observation, i.e. the monitoring and gathering of data about the Earth's vital systems. Environmental satellites are used for atmospheric research, weather forecasting and warnings, as well as monitoring extreme weather events. These satellites are categorized into geosynchronous and low Earth (polar) orbiting satellites. Visualizing satellite data is critical to understanding the Earth's systems and changes to our environment. The objective of this research is to examine satellite-based remotely sensed data that needs to be processed and rendered in the form of maps or other forms of visualization in order to understand and interpret the satellites' observations and to monitor the status, changes and evolution of the mega-disaster Deepwater Horizon spill that occurred on April 20, 2010 in the Gulf of Mexico. In this project, we will use an array of tools and programs such as Python, CSPP and Linux. We will also use data from the National Oceanic and Atmospheric Administration (NOAA) polar-orbiting satellites Terra (Earth Observing System AM-1, EOS AM-1) and Aqua (EOS PM-1) to investigate the mega-disaster. Each of these satellites carries a variety of instruments, and we will use data obtained from the remote sensor Moderate-Resolution Imaging Spectroradiometer (MODIS). Ultimately, this study shows the importance of mapping and visualizing satellite data such as MODIS data to understand the extent of the environmental impacts of disasters such as the Deepwater Horizon oil spill.
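As an illustration of the visualization step, a true-colour composite could be assembled from three reflectance bands roughly as sketched below; the arrays are synthetic stand-ins, and the band choice (MODIS bands 1, 4, 3) and the simple contrast stretch are assumptions rather than the workflow actually used in this project.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-ins for calibrated reflectance bands that would normally be
# read from a MODIS granule with an HDF reader (reader choice not specified
# here; an assumption).
rng = np.random.default_rng(8)
red, green, blue = (rng.random((200, 200)) for _ in range(3))

rgb = np.dstack([red, green, blue])
rgb = np.clip(rgb / np.percentile(rgb, 99), 0, 1)  # simple contrast stretch

plt.imshow(rgb)
plt.title("True-colour composite (synthetic stand-in for MODIS bands 1/4/3)")
plt.axis("off")
plt.savefig("modis_composite.png", dpi=150)
```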
Age-related macular degeneration changes the processing of visual scenes in the brain.
Ramanoël, Stephen; Chokron, Sylvie; Hera, Ruxandra; Kauffmann, Louise; Chiquet, Christophe; Krainik, Alexandre; Peyrin, Carole
2018-01-01
In age-related macular degeneration (AMD), the processing of fine details in a visual scene, based on a high spatial frequency processing, is impaired, while the processing of global shapes, based on a low spatial frequency processing, is relatively well preserved. The present fMRI study aimed to investigate the residual abilities and functional brain changes of spatial frequency processing in visual scenes in AMD patients. AMD patients and normally sighted elderly participants performed a categorization task using large black and white photographs of scenes (indoors vs. outdoors) filtered in low and high spatial frequencies, and nonfiltered. The study also explored the effect of luminance contrast on the processing of high spatial frequencies. The contrast across scenes was either unmodified or equalized using a root-mean-square contrast normalization in order to increase contrast in high-pass filtered scenes. Performance was lower for high-pass filtered scenes than for low-pass and nonfiltered scenes, for both AMD patients and controls. The deficit for processing high spatial frequencies was more pronounced in AMD patients than in controls and was associated with lower activity for patients than controls not only in the occipital areas dedicated to central and peripheral visual fields but also in a distant cerebral region specialized for scene perception, the parahippocampal place area. Increasing the contrast improved the processing of high spatial frequency content and spurred activation of the occipital cortex for AMD patients. These findings may lead to new perspectives for rehabilitation procedures for AMD patients.
Xia, Jing; Zhang, Wei; Jiang, Yizhou; Li, You; Chen, Qi
2018-05-16
Practice and experiences gradually shape the central nervous system, from the synaptic level to large-scale neural networks. In a natural multisensory environment, even when inundated by streams of information from multiple sensory modalities, our brain does not give equal weight to different modalities. Rather, visual information more frequently receives preferential processing and eventually dominates consciousness and behavior, i.e., visual dominance. It remains unknown, however, what the supra-modal and modality-specific practice effects are during cross-modal selective attention, and whether the practice effect shows the same modality preferences as the visual dominance effect in the multisensory environment. To answer these two questions, we adopted a cross-modal selective attention paradigm in conjunction with a hybrid fMRI design. Behaviorally, visual performance significantly improved while auditory performance remained constant with practice, indicating that visual attention adapted behavior more flexibly with practice than auditory attention. At the neural level, the practice effect was associated with decreasing neural activity in the frontoparietal executive network and increasing activity in the default mode network, which occurred independently of the modality attended, i.e., supra-modal mechanisms. On the other hand, functional decoupling between the auditory and visual systems was observed with the progress of practice, which varied as a function of the modality attended. The auditory system was functionally decoupled from both the dorsal and ventral visual streams during auditory attention, while it was decoupled only from the ventral visual stream during visual attention. To efficiently suppress irrelevant visual information with practice, auditory attention needs to additionally decouple the auditory system from the dorsal visual stream. The modality-specific mechanisms, together with the behavioral effect, thus support the visual dominance model in terms of the practice effect during cross-modal selective attention. Copyright © 2018 Elsevier Ltd. All rights reserved.
Coronal Mass Ejection Data Clustering and Visualization of Decision Trees
NASA Astrophysics Data System (ADS)
Ma, Ruizhe; Angryk, Rafal A.; Riley, Pete; Filali Boubrahimi, Soukaina
2018-05-01
Coronal mass ejections (CMEs) can be categorized as either “magnetic clouds” (MCs) or non-MCs. Features such as a large magnetic field, low plasma-beta, and low proton temperature suggest that a CME event is also an MC event; however, so far there is neither a definitive method nor an automatic process to distinguish the two. Human labeling is time-consuming, and results can fluctuate owing to the imprecise definition of such events. In this study, we approach the problem of MC and non-MC distinction from a time series data analysis perspective and show how clustering can shed some light on this problem. Although many algorithms exist for traditional data clustering in the Euclidean space, they are not well suited for time series data. Problems such as inadequate distance measure, inaccurate cluster center description, and lack of intuitive cluster representations need to be addressed for effective time series clustering. Our data analysis in this work is twofold: clustering and visualization. For clustering we compared the results from the popular hierarchical agglomerative clustering technique to a distance density clustering heuristic we developed previously for time series data clustering. In both cases, dynamic time warping will be used for similarity measure. For classification as well as visualization, we use decision trees to aggregate single-dimensional clustering results to form a multidimensional time series decision tree, with averaged time series to present each decision. In this study, we achieved modest accuracy and, more importantly, an intuitive interpretation of how different parameters contribute to an MC event.
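A compact sketch of the pipeline described above is given below: a textbook dynamic-programming DTW distance feeds hierarchical agglomerative clustering. The toy series and the two-cluster cut are illustrative assumptions, not the CME data or the study's parameters.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def dtw(a, b):
    """Classic dynamic-programming DTW distance between two 1-D series."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Toy in-situ series: two groups with different shapes (a smooth rotation-like
# profile vs. a flat ramp), standing in for MC-like and non-MC-like events.
rng = np.random.default_rng(5)
t = np.linspace(0, 1, 50)
series = [np.sin(np.pi * t) + rng.normal(0, 0.1, 50) for _ in range(5)] + \
         [0.2 * t + rng.normal(0, 0.1, 50) for _ in range(5)]

n = len(series)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = dtw(series[i], series[j])

labels = fcluster(linkage(squareform(dist), method="average"), t=2, criterion="maxclust")
print(labels)  # expected: two clusters matching the two generative groups
```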
Arthur-Kelly, Michael; Sigafoos, Jeff; Green, Vanessa; Mathisen, Bernice; Arthur-Kelly, Racheal
2009-01-01
Visual supports are widely used and generally regarded as an effective resource for intervention with individuals who function on the autism spectrum. More cross-contextual research into their efficacy is required. In this article, we selectively review the research literature around visual supports based on an original conceptual model that highlights their contribution in the interpersonal social and communicative milieu of classrooms, homes and other daily living contexts. Attention is drawn to a range of practical and research issues and challenges in the use of visual supports as well as evidence of their effectiveness in enhancing participation, learning and social membership in this population. Areas for further research relating to the introduction and use of visual supports with the autism spectrum disorder population are identified.
Meaningful Peer Review in Radiology: A Review of Current Practices and Potential Future Directions.
Moriarity, Andrew K; Hawkins, C Matthew; Geis, J Raymond; Dreyer, Keith J; Kamer, Aaron P; Khandheria, Paras; Morey, Jose; Whitfill, James; Wiggins, Richard H; Itri, Jason N
2016-12-01
The current practice of peer review within radiology is well developed and widely implemented compared with other medical specialties. However, there are many factors that limit current peer review practices from reducing diagnostic errors and improving patient care. The development of "meaningful peer review" requires a transition away from compliance toward quality improvement, whereby the information and insights gained facilitate education and drive systematic improvements that reduce the frequency and impact of diagnostic error. The next generation of peer review requires significant improvements in IT functionality and integration, enabling features such as anonymization, adjudication by multiple specialists, categorization and analysis of errors, tracking, feedback, and easy export into teaching files and other media that require strong partnerships with vendors. In this article, the authors assess various peer review practices, with focused discussion on current limitations and future needs for meaningful peer review in radiology. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Multiple Transient Signals in Human Visual Cortex Associated with an Elementary Decision
Nolte, Guido
2017-01-01
The cerebral cortex continuously undergoes changes in its state, which are manifested in transient modulations of the cortical power spectrum. Cortical state changes also occur at full wakefulness and during rapid cognitive acts, such as perceptual decisions. Previous studies found a global modulation of beta-band (12–30 Hz) activity in human and monkey visual cortex during an elementary visual decision: reporting the appearance or disappearance of salient visual targets surrounded by a distractor. The previous studies disentangled neither the motor action associated with behavioral report nor other secondary processes, such as arousal, from perceptual decision processing per se. Here, we used magnetoencephalography in humans to pinpoint the factors underlying the beta-band modulation. We found that disappearances of a salient target were associated with beta-band suppression, and target reappearances with beta-band enhancement. This was true for both overt behavioral reports (immediate button presses) and silent counting of the perceptual events. This finding indicates that the beta-band modulation was unrelated to the execution of the motor act associated with a behavioral report of the perceptual decision. Further, changes in pupil-linked arousal, fixational eye movements, or gamma-band responses were not necessary for the beta-band modulation. Together, our results suggest that the beta-band modulation was a top-down signal associated with the process of converting graded perceptual signals into a categorical format underlying flexible behavior. This signal may have been fed back from brain regions involved in decision processing to visual cortex, thus enforcing a “decision-consistent” cortical state. SIGNIFICANCE STATEMENT Elementary visual decisions are associated with a rapid state change in visual cortex, indexed by a modulation of neural activity in the beta-frequency range. Such decisions are also followed by other events that might affect the state of visual cortex, including the motor command associated with the report of the decision, an increase in pupil-linked arousal, fixational eye movements, and fluctuations in bottom-up sensory processing. Here, we ruled out the necessity of these events for the beta-band modulation of visual cortex. We propose that the modulation reflects a decision-related state change, which is induced by the conversion of graded perceptual signals into a categorical format underlying behavior. The resulting decision signal may be fed back to visual cortex. PMID:28495972
Mohsenzadeh, Yalda; Qin, Sheng; Cichy, Radoslaw M; Pantazis, Dimitrios
2018-06-21
Human visual recognition activates a dense network of overlapping feedforward and recurrent neuronal processes, making it hard to disentangle processing in the feedforward from the feedback direction. Here, we used ultra-rapid serial visual presentation to suppress sustained activity that blurs the boundaries of processing steps, enabling us to resolve two distinct stages of processing with MEG multivariate pattern classification. The first processing stage was the rapid activation cascade of the bottom-up sweep, which terminated early as visual stimuli were presented at progressively faster rates. The second stage was the emergence of categorical information with peak latency that shifted later in time with progressively faster stimulus presentations, indexing time-consuming recurrent processing. Using MEG-fMRI fusion with representational similarity, we localized recurrent signals in early visual cortex. Together, our findings segregated an initial bottom-up sweep from subsequent feedback processing, and revealed the neural signature of increased recurrent processing demands for challenging viewing conditions. © 2018, Mohsenzadeh et al.
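Time-resolved multivariate pattern classification of the kind used here can be sketched as follows: for each time sample, a linear classifier is cross-validated on sensor patterns, and the peak of the resulting accuracy curve indexes the latency at which categorical information emerges. The simulated data and the injected effect window are assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Synthetic MEG-like data: trials x sensors x time points, with category
# information injected around samples 55-70 (values illustrative only).
rng = np.random.default_rng(6)
n_trials, n_sensors, n_times = 100, 30, 120
X = rng.normal(size=(n_trials, n_sensors, n_times))
y = np.repeat([0, 1], n_trials // 2)
X[y == 1, :, 55:70] += 0.6  # transient class difference

acc = np.array([
    cross_val_score(LinearSVC(dual=False), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
print("peak decoding latency (sample index):", int(acc.argmax()))
```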
A Geometrical Framework for Covariance Matrices of Continuous and Categorical Variables
ERIC Educational Resources Information Center
Vernizzi, Graziano; Nakai, Miki
2015-01-01
It is well known that a categorical random variable can be represented geometrically by a simplex. Accordingly, several measures of association between categorical variables have been proposed and discussed in the literature. Moreover, the standard definitions of covariance and correlation coefficient for continuous random variables have been…
Hepworth, Lauren R; Rowe, Fiona J
2018-02-01
The aim of this study was to ascertain which items stroke survivors and stroke care professionals think are important when assessing quality of life for stroke survivors with visual impairment, for inclusion in a new patient-reported outcome measure. A reactive Delphi process was used in a three-round electronic survey. The survey presented 62 items originally sourced from a systematic review of existing vision-related quality of life instruments and stroke survivor interviews, which were reduced and refined following a ranking exercise and a pilot with stroke survivors with visual impairment. Stakeholders (stroke survivors/clinicians) were invited to take part in the process. A consensus definition of ≥70% was decided a priori. Participants were asked to rank importance on a 9-point scale and to categorize the items by relevance to types of visual impairment following stroke or as not relevant. Analysis of consensus, stability, and agreement was conducted. In total, 113 participants registered for the Delphi survey, of whom 47 (41.6%) completed all three rounds. Response rates to the three rounds were 78/113 (69.0%), 61/76 (81.3%), and 49/64 (76.6%), respectively. The participants included orthoptists (45.4%), occupational therapists (44.3%), and stroke survivors (10.3%). Consensus was reached on 56.5% of items in the three-round process, all for inclusion. Consensus was reached for 83.8% of items in the categorization exercise. The majority (82.6%) of these categorizations were 'relevant to all visual impairment following stroke'; two items were deemed 'not relevant'. The lack of item reduction achieved by this Delphi process highlights the need for additional methods of item reduction in the development of a new PROM for visual impairment following stroke. These results will be considered alongside Rasch analysis to achieve further item reduction. However, the Delphi survey remains important as it provides clinical and patient insight into each item rather than relying purely on psychometric data.
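As a small illustration of the a priori consensus rule, the sketch below computes the proportion of respondents rating each item within a high-importance band and flags items meeting the 70% threshold; treating ratings of 7-9 as "important" is an assumption, since only the consensus cut-off is stated above.

```python
import numpy as np

def consensus_items(ratings, band=(7, 9), threshold=0.70):
    """Proportion of respondents rating each item within `band` (inclusive)
    on the 9-point scale, and whether that proportion meets the consensus
    definition. The 7-9 band is an assumption, not taken from the abstract.
    """
    in_band = (ratings >= band[0]) & (ratings <= band[1])
    prop = in_band.mean(axis=0)
    return prop, prop >= threshold

# Toy data: 47 respondents x 62 items, ratings 1-9.
rng = np.random.default_rng(7)
ratings = rng.integers(1, 10, size=(47, 62))
prop, reached = consensus_items(ratings)
print(f"{reached.sum()} of {len(reached)} items reached consensus")
```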
Kirchoff, Bruce K; Leggett, Roxanne; Her, Va; Moua, Chue; Morrison, Jessica; Poole, Chamika
2011-01-01
Advances in digital imaging have made possible the creation of completely visual keys. By a visual key we mean a key based primarily on images, one that contains a minimal amount of text. Characters in visual keys are visually, not verbally, defined. In this paper we create the first primarily visual key to a group of taxa, in this case the Fagaceae of the southeastern USA. We also modify our recently published set of best practices for image use in illustrated keys to make them applicable to visual keys. Photographs of the Fagaceae were obtained from internet and herbarium databases or were taken specifically for this project. The images were printed and then sorted into hierarchical groups. These hierarchical groups of images were used to create the 'couplets' in the key. A reciprocal process of key creation and testing was used to produce the final keys. Four keys were created, one for each of the plant parts: leaves, buds, fruits and bark. Species description pages consisting of multiple images were also created for each of the species in the key. Creation and testing of the key resulted in a modified list of best practices for image use in visual keys. The inclusion of images in paper and electronic keys has greatly increased their ease of use. However, virtually all of these keys are still based upon verbally defined, atomistic characters. The creation of primarily visual keys allows us to overcome the well-known limitations of linguistic-based characters and create keys that are much easier to use, especially for botanical novices.
Code of Federal Regulations, 2011 CFR
2011-10-01
... sponsorship includes approved residencies in family practice, internal medicine, pediatrics, or a categorical first-year program in family practice, internal medicine, or pediatrics; (2) a rotating internship in...
Mapping the Structure of Knowledge for Teaching Nominal Categorical Data Analysis
ERIC Educational Resources Information Center
Groth, Randall E.; Bergner, Jennifer A.
2013-01-01
This report describes a model for mapping cognitive structures related to content knowledge for teaching. The model consists of knowledge elements pertinent to teaching a content domain, the nature of the connections among them, and a means for representing the elements and connections visually. The model is illustrated through empirical data…
Code of Federal Regulations, 2010 CFR
2010-01-01
... Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY ORGANIZATION AND TERMINOLOGY... into the United States. (2) Every lot of product shall routinely be given visual inspection by a... Food Safety and Inspection Service. The listing will categorize the kind or kinds of product 2 which...
Verbal Labels Modulate Perceptual Object Processing in 1-Year-Old Children
ERIC Educational Resources Information Center
Gliga, Teodora; Volein, Agnes; Csibra, Gergely
2010-01-01
Whether verbal labels help infants visually process and categorize objects is a contentious issue. Using electroencephalography, we investigated whether possessing familiar or novel labels for objects directly enhances 1-year-old children's neural processes underlying the perception of those objects. We found enhanced gamma-band (20-60 Hz)…
Cognitive and Cognate-Based Treatments for Bilingual Aphasia: A Case Study
ERIC Educational Resources Information Center
Kohnert, Kathryn
2004-01-01
Two consecutive treatments were conducted to investigate skill learning and generalization within and across cognitive-linguistic domains in a 62-year-old Spanish-English bilingual man with severe non-fluent aphasia. Treatment 1 was a cognitive-based treatment that emphasized non-linguistic skills, such as visual scanning, categorization, and…
ERIC Educational Resources Information Center
Amsel, Ben D.
2011-01-01
Empirically derived semantic feature norms categorized into different types of knowledge (e.g., visual, functional, auditory) can be summed to create number-of-feature counts per knowledge type. Initial evidence suggests several such knowledge types may be recruited during language comprehension. The present study provides a more detailed…
ERIC Educational Resources Information Center
Scoresby, Jon; Shelton, Brett E.
2011-01-01
The mis-categorizing of cognitive states involved in learning within virtual environments has complicated instructional technology research. Further, most educational computer game research does not account for how learning activity is influenced by factors of game content and differences in viewing perspectives. This study is a qualitative…
ERIC Educational Resources Information Center
Veletsianos, George
2010-01-01
Humans draw on their stereotypic beliefs to make assumptions about others. Even though prior research has shown that individuals respond socially to media, there is little evidence with regards to learners stereotyping and categorizing pedagogical agents. This study investigated whether learners stereotype a pedagogical agent as being…
ERIC Educational Resources Information Center
Haack, Paul
1982-01-01
Research investigated how high school students conceptualize the basic Classical-Romantic values dichotomy as exemplified by various aesthetic eras, styles, and objects, and how students operate within such aesthetic-conceptual frameworks in terms of their preferences and identification-categorization abilities. (Author/AM)
Semantic, perceptual and number space: relations between category width and spatial processing.
Brugger, Peter; Loetscher, Tobias; Graves, Roger E; Knoch, Daria
2007-05-17
Coarse semantic encoding and broad categorization behavior are the hallmarks of the right cerebral hemisphere's contribution to language processing. We correlated 40 healthy subjects' breadth of categorization as assessed with Pettigrew's category width scale with lateral asymmetries in perceptual and representational space. Specifically, we hypothesized broader category width to be associated with larger leftward spatial biases. For the 20 men, but not the 20 women, this hypothesis was confirmed both in a lateralized tachistoscopic task with chimeric faces and a random digit generation task; the higher a male participant's score on category width, the more pronounced were his left-visual field bias in the judgement of chimeric faces and his small-number preference in digit generation ("small" is to the left of "large" in number space). Subjects' category width was unrelated to lateral displacements in a blindfolded tactile-motor rod centering task. These findings indicate that visual-spatial functions of the right hemisphere should not be considered independent of the same hemisphere's contribution to language. Linguistic and spatial cognition may be more tightly interwoven than is currently assumed.
Coding of visual object features and feature conjunctions in the human brain.
Martinovic, Jasna; Gruber, Thomas; Müller, Matthias M
2008-01-01
Object recognition is achieved through neural mechanisms reliant on the activity of distributed coordinated neural assemblies. In the initial steps of this process, an object's features are thought to be coded very rapidly in distinct neural assemblies. These features play different functional roles in the recognition process--while colour facilitates recognition, additional contours and edges delay it. Here, we selectively varied the amount and role of object features in an entry-level categorization paradigm and related them to the electrical activity of the human brain. We found that early synchronizations (approx. 100 ms) increased quantitatively when more image features had to be coded, without reflecting their qualitative contribution to the recognition process. Later activity (approx. 200-400 ms) was modulated by the representational role of object features. These findings demonstrate that although early synchronizations may be sufficient for relatively crude discrimination of objects in visual scenes, they cannot support entry-level categorization. This was subserved by later processes of object model selection, which utilized the representational value of object features such as colour or edges to select the appropriate model and achieve identification.
Defining service and education: the first step to developing the correct balance.
Reines, H David; Robinson, Linda; Nitzchke, Stephanie; Rizzo, Anne
2007-08-01
Service and education activities have not been well defined or studied. The purpose of this study is to describe how attendings and residents categorize common resident activities on a service-education continuum. A web-based survey was designed to categorize resident activities. A panel of residents and surgical educators reviewed the survey for content validity. Residents and attendings categorized 27 resident activities on a 5-point scale from 1 (pure service) to 5 (pure education). Data analysis was performed using SPSS ver.12. 125 residents and 71 attendings from eight residency programs participated. 66% of residents and 90% of attendings were male. On average, attendings had practiced 14.3 years. Residents' post-graduate year ranged from PGY-1 to PGY-6 (mean of 2.78). Attendings and residents agreed on the categorization of most activities. Residents felt more time should be devoted to pure education than did attendings. Forty percent of residents felt that more than half of their time was spent in pure service versus 10% of attendings. Twenty-five percent of residents and 23% of attendings were dissatisfied with the service-education balance. The Residency Review Committee mandates that education is the central purpose of the surgical residency without clearly defining the balance between education and service. Attendings and residents agree on the educational value of most activities and that the balance between education and service is acceptable. When compared with attendings, residents feel they need significantly more time in education. Adequate learning can be facilitated by the development of clear definitions of service and education and guidelines for the distribution of resident time.
Structuring clinical workflows for diabetes care: an overview of the OntoHealth approach.
Schweitzer, M; Lasierra, N; Oberbichler, S; Toma, I; Fensel, A; Hoerbst, A
2014-01-01
Electronic health records (EHRs) play an important role in the treatment of chronic diseases such as diabetes mellitus. Although the interoperability and selected functionality of EHRs are already addressed by a number of standards and best practices, such as IHE or HL7, the majority of these systems are still monolithic from a user-functionality perspective. The purpose of the OntoHealth project is to foster a functionally flexible, standards-based use of EHRs to support clinical routine task execution by means of workflow patterns and to shift the present EHR usage to a more comprehensive integration concerning complete clinical workflows. The goal of this paper is, first, to introduce the basic architecture of the proposed OntoHealth project and, second, to present selected functional needs and a functional categorization regarding workflow-based interactions with EHRs in the domain of diabetes. A systematic literature review regarding attributes of workflows in the domain of diabetes was conducted. Eligible references were gathered and analyzed using a qualitative content analysis. Subsequently, a functional workflow categorization was derived from diabetes-specific raw data together with existing general workflow patterns. This paper presents the design of the architecture as well as a categorization model which makes it possible to describe the components or building blocks within clinical workflows. The results of our study lead us to identify basic building blocks, named as actions, decisions, and data elements, which allow the composition of clinical workflows within five identified contexts. The categorization model allows for a description of the components or building blocks of clinical workflows from a functional view.
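To make the categorization model described above more tangible, here is a minimal sketch of workflow building blocks (actions, decisions, and data elements) composed into a workflow for one context. The class names, fields, and example values are illustrative assumptions, not the OntoHealth project's actual schema.

```python
# Hedged sketch: clinical workflow building blocks named in the abstract
# (actions, decisions, data elements). Illustrative only, not the project schema.
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class DataElement:
    name: str            # e.g. "blood glucose"

@dataclass
class Action:
    description: str     # e.g. "measure blood glucose"
    uses: List[DataElement] = field(default_factory=list)

@dataclass
class Decision:
    condition: str       # e.g. "glucose > 180 mg/dL"

@dataclass
class Workflow:
    context: str         # one of the five identified contexts (not named here)
    steps: List[Union[Action, Decision]] = field(default_factory=list)

glucose = DataElement("blood glucose")
wf = Workflow(context="monitoring",
              steps=[Action("measure blood glucose", uses=[glucose]),
                     Decision("glucose > 180 mg/dL")])
print(len(wf.steps))  # 2
```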
Structuring Clinical Workflows for Diabetes Care
Lasierra, N.; Oberbichler, S.; Toma, I.; Fensel, A.; Hoerbst, A.
2014-01-01
Summary Background Electronic health records (EHRs) play an important role in the treatment of chronic diseases such as diabetes mellitus. Although the interoperability and selected functionality of EHRs are already addressed by a number of standards and best practices, such as IHE or HL7, the majority of these systems are still monolithic from a user-functionality perspective. The purpose of the OntoHealth project is to foster a functionally flexible, standards-based use of EHRs to support clinical routine task execution by means of workflow patterns and to shift the present EHR usage to a more comprehensive integration concerning complete clinical workflows. Objectives The goal of this paper is, first, to introduce the basic architecture of the proposed OntoHealth project and, second, to present selected functional needs and a functional categorization regarding workflow-based interactions with EHRs in the domain of diabetes. Methods A systematic literature review regarding attributes of workflows in the domain of diabetes was conducted. Eligible references were gathered and analyzed using a qualitative content analysis. Subsequently, a functional workflow categorization was derived from diabetes-specific raw data together with existing general workflow patterns. Results This paper presents the design of the architecture as well as a categorization model which makes it possible to describe the components or building blocks within clinical workflows. The results of our study lead us to identify basic building blocks, named as actions, decisions, and data elements, which allow the composition of clinical workflows within five identified contexts. Conclusions The categorization model allows for a description of the components or building blocks of clinical workflows from a functional view. PMID:25024765
Molnár, Csaba; Pongrácz, Péter; Miklósi, Adám
2010-05-01
Prerecorded family dog (Canis familiaris) barks were played back to groups of congenitally sightless people, sightless people with prior visual experience, and sighted people (none of whom had ever owned a dog). We found that blind people without any previous canine visual experience can accurately categorize various dog barks recorded in different contexts, and their results are very close to those of sighted people in characterizing the emotional content of barks. These findings suggest that humans can recognize some of the most important motivational states reflecting, for example, fear or aggression in a dog's bark without any visual experience. It is very likely that this result can be generalized to other mammalian species--that is, no visual experience of another individual is needed for recognizing some of the most important motivational states of the caller.
DBMap: a TreeMap-based framework for data navigation and visualization of brain research registry
NASA Astrophysics Data System (ADS)
Zhang, Ming; Zhang, Hong; Tjandra, Donny; Wong, Stephen T. C.
2003-05-01
The purpose of this study is to investigate and apply a new, intuitive and space-conscious visualization framework to facilitate efficient data presentation and exploration of large-scale data warehouses. We have implemented the DBMap framework for the UCSF Brain Research Registry. Such a utility would help medical specialists and clinical researchers better explore and evaluate the many attributes organized in the brain research registry. The current UCSF Brain Research Registry consists of a federation of disease-oriented database modules, including Epilepsy, Brain Tumor, Intracerebral Hemorrhage, and CJD (Creutzfeldt-Jakob disease). These database modules organize large volumes of imaging and non-imaging data to support Web-based clinical research. While the data warehouse supports general information retrieval and analysis, it lacks an effective way to visualize and present the voluminous and complex data stored. This study investigates whether the TreeMap algorithm can be adapted to display and navigate a categorical biomedical data warehouse or registry. TreeMap is a space-constrained graphical representation of large hierarchical data sets, mapped to a matrix of rectangles whose size and color represent database fields of interest. It allows the display of a large amount of numerical and categorical information in the limited real estate of a computer screen with an intuitive user interface. The paper describes DBMap, the proposed new data visualization framework for large biomedical databases. Built upon XML, Java and JDBC technologies, the prototype system includes a set of software modules that reside in the application server tier and provide interfaces to the backend database tier and the front-end Web tier of the brain registry.
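As a rough illustration of the TreeMap idea described above (a hierarchy mapped to nested rectangles whose areas are proportional to a numeric field), a minimal slice-and-dice layout sketch follows. It is not the DBMap implementation; the registry values and labels are invented for the example.

```python
# Hedged sketch of a slice-and-dice treemap layout: each node's rectangle area
# is proportional to its value; children split the parent's rectangle along
# alternating axes. Not the DBMap code; values and labels are illustrative.

def slice_and_dice(node, x, y, w, h, depth=0, out=None):
    """node = (label, value, children); returns [(label, x, y, w, h), ...]."""
    if out is None:
        out = []
    label, value, children = node
    out.append((label, x, y, w, h))
    if children:
        total = sum(c[1] for c in children)
        offset = 0.0
        for child in children:
            frac = child[1] / total
            if depth % 2 == 0:   # split along x at even depths
                slice_and_dice(child, x + offset * w, y, frac * w, h, depth + 1, out)
            else:                # split along y at odd depths
                slice_and_dice(child, x, y + offset * h, w, frac * h, depth + 1, out)
            offset += frac
    return out

registry = ("registry", 100, [("Epilepsy", 40, []), ("Brain Tumor", 35, []),
                              ("ICH", 15, []), ("CJD", 10, [])])
for rect in slice_and_dice(registry, 0, 0, 1.0, 1.0):
    print(rect)
```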
Chun, Marvin M.; Kuhl, Brice A.
2013-01-01
Repeated exposure to a visual stimulus is associated with corresponding reductions in neural activity, particularly within visual cortical areas. It has been argued that this phenomenon of repetition suppression is related to increases in processing fluency or implicit memory. However, repetition of a visual stimulus can also be considered in terms of the similarity of the pattern of neural activity elicited at each exposure—a measure that has recently been linked to explicit memory. Despite the popularity of each of these measures, direct comparisons between the two have been limited, and the extent to which they differentially (or similarly) relate to behavioral measures of memory has not been clearly established. In the present study, we compared repetition suppression and pattern similarity as predictors of both implicit and explicit memory. Using functional magnetic resonance imaging, we scanned 20 participants while they viewed and categorized repeated presentations of scenes. Repetition priming (facilitated categorization across repetitions) was used as a measure of implicit memory, and subsequent scene recognition was used as a measure of explicit memory. We found that repetition priming was predicted by repetition suppression in prefrontal, parietal, and occipitotemporal regions; however, repetition priming was not predicted by pattern similarity. In contrast, subsequent explicit memory was predicted by pattern similarity (across repetitions) in some of the same occipitotemporal regions that exhibited a relationship between priming and repetition suppression; however, explicit memory was not related to repetition suppression. This striking double dissociation indicates that repetition suppression and pattern similarity differentially track implicit and explicit learning. PMID:24027275
Zheng, D Diane; Christ, Sharon L; Lam, Byron L; Arheart, Kristopher L; Galor, Anat; Lee, David J
2012-05-14
Mechanisms by which visual impairment (VI) increases mortality risk are poorly understood. We estimated the direct and indirect effects of self-rated VI on risk of mortality through mental well-being and preventive care practice mechanisms. Using complete data from 12,987 adult participants of the 2000 Medical Expenditure Panel Survey with mortality linkage through 2006, we undertook structural equation modeling using two latent variables representing mental well-being and poor preventive care to examine multiple effect pathways of self-rated VI on all-cause mortality. Generalized linear structural equation modeling was used to simultaneously estimate pathways including the latent variables and Cox regression model, with adjustment for controls and the complex sample survey design. VI increased the risk of mortality directly after adjusting for mental well-being and other covariates (hazard ratio [HR] = 1.25 [95% confidence interval: 1.01, 1.55]). Poor preventive care practices were unrelated to VI and to mortality. Mental well-being decreased mortality risk (HR = 0.68 [0.64, 0.74], P < 0.001). VI adversely affected mental well-being (β = -0.54 [-0.65, -0.43]; P < 0.001). VI also increased mortality risk indirectly through mental well-being (HR = 1.23 [1.16, 1.30]). The total effect of VI on mortality including its influence through mental well-being was HR 1.53 [1.24, 1.90]. Similar but slightly stronger patterns of association were found when examining cardiovascular disease-related mortality, but not cancer-related mortality. VI increases the risk of mortality directly and indirectly through its adverse impact on mental well-being. Prevention of disabling ocular conditions remains a public health priority along with more aggressive diagnosis and treatment of depression and other mental health conditions in those living with VI.
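If, as is standard in mediation decompositions of this kind (an assumption, since the abstract does not state it), the direct and mediated effects combine multiplicatively on the hazard-ratio scale, the reported figures are internally consistent:

\[
\mathrm{HR}_{\text{total}} \approx \mathrm{HR}_{\text{direct}} \times \mathrm{HR}_{\text{indirect}} = 1.25 \times 1.23 \approx 1.54,
\]

which agrees with the reported total effect of 1.53 up to rounding of the components.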
Bodily-visual practices and turn continuation
Ford, Cecilia E.; Thompson, Sandra A.; Drake, Veronika
2012-01-01
This paper considers points in turn construction where conversation researchers have shown that talk routinely continues beyond possible turn completion, but where we find bodily-visual behavior doing such turn extension work. The bodily-visual behaviors we examine share many features with verbal turn extensions, but we argue that embodied movements have distinct properties that make them well-suited for specific kinds of social action, including stance display and by-play in sequences framed as subsidiary to a simultaneous and related verbal exchange. Our study is in line with a research agenda taking seriously the point made by Goodwin (2000a, b, 2003), Hayashi (2003, 2005), Iwasaki (2009), and others that scholars seeking to account for practices in language and social interaction do themselves a disservice if they privilege the verbal dimension; rather, as suggested in Stivers/Sidnell (2005), each semiotic system/modality, while coordinated with others, has its own organization. With the current exploration of bodily-visual turn extensions, we hope to contribute to a growing understanding of how these different modes of organization are managed concurrently and in concert by interactants in carrying out their everyday social actions. PMID:23526861
Lightness Constancy in Surface Visualization
Szafir, Danielle Albers; Sarikaya, Alper; Gleicher, Michael
2016-01-01
Color is a common channel for displaying data in surface visualization, but is affected by the shadows and shading used to convey surface depth and shape. Understanding encoded data in the context of surface structure is critical for effective analysis in a variety of domains, such as in molecular biology. In the physical world, lightness constancy allows people to accurately perceive shadowed colors; however, its effectiveness in complex synthetic environments such as surface visualizations is not well understood. We report a series of crowdsourced and laboratory studies that confirm the existence of lightness constancy effects for molecular surface visualizations using ambient occlusion. We provide empirical evidence of how common visualization design decisions can impact viewers’ abilities to accurately identify encoded surface colors. These findings suggest that lightness constancy aids in understanding color encodings in surface visualization and reveal a correlation between visualization techniques that improve color interpretation in shadow and those that enhance perceptions of surface depth. These results collectively suggest that understanding constancy in practice can inform effective visualization design. PMID:26584495
A reappraisal of the uncanny valley: categorical perception or frequency-based sensitization?
Burleigh, Tyler J.; Schoenherr, Jordan R.
2015-01-01
The uncanny valley (UCV) hypothesis describes a non-linear relationship between perceived human-likeness and affective response. The “uncanny valley” refers to an intermediate level of human-likeness that is associated with strong negative affect. Recent studies have suggested that the uncanny valley might result from the categorical perception of human-like stimuli during identification. When presented with stimuli sharing human-like traits, participants attempt to segment the continuum into “human” and “non-human” categories. Due to the ambiguity of stimuli located at a category boundary, categorization difficulty gives rise to a strong, negative affective response. Importantly, researchers who have studied the UCV in terms of categorical perception have focused on categorization responses rather than affective ratings. In the present study, we examined whether the negative affect associated with the UCV might be explained in terms of an individual's degree of exposure to stimuli. In two experiments, we tested a frequency-based model against a categorical perception model using a category-learning paradigm. We manipulated the frequency of exemplars that were presented to participants from two categories during a training phase. We then examined categorization and affective response functions, as well as the relationship between categorization and affective responses. Supporting previous findings, categorization responses suggested that participants acquired novel category structures that reflected a category boundary. These category structures appeared to influence affective ratings of eeriness. Crucially, participants' ratings of eeriness were additionally affected by exemplar frequency. Taken together, these findings suggest that the UCV is determined by both categorical properties and the frequency of individual exemplars retained in memory. PMID:25653623
Using the Laboratory to Engage All Students in Science Practices
ERIC Educational Resources Information Center
Walker, J. P.; Sampson, V.; Southerland, S.; Enderle, P. J.
2016-01-01
This study examines the extent to which the type of instruction used during a general chemistry laboratory course affects students' ability to use core ideas to engage in science practices. We use Ford's (2008) description of the nature of scientific practices to categorize what students do in the laboratory as either empirical or…
VISUAL ACUITY IN PSEUDOXANTHOMA ELASTICUM.
Risseeuw, Sara; Ossewaarde-van Norel, Jeannette; Klaver, Caroline C W; Colijn, Johanna M; Imhof, Saskia M; van Leeuwen, Redmer
2018-04-12
To assess the age-specific proportion of visual impairment in patients with pseudoxanthoma elasticum (PXE) and to compare this with foveal abnormality and similar data from late age-related macular degeneration patients. Cross-sectional data of 195 patients with PXE were reviewed, including best-corrected visual acuity and imaging. The World Health Organisation criteria were used to categorize bilateral visual impairment. These results were compared with similar data of 131 patients with late age-related macular degeneration from the Rotterdam study. Overall, 50 PXE patients (26.0%) were visually impaired, including 21 (11%) with legal blindness. Visual functioning declined with increasing age. In patients older than 50 years, 37% were visually impaired and 15% were legally blind. Foveal choroidal neovascularization was found in 84% of eyes with a best-corrected visual acuity lower than 20/70 (0.30), and macular atrophy in the fovea in 16%. In late age-related macular degeneration patients, 40% were visually impaired and 13% were legally blind. Visual impairment started approximately 20 years later than in PXE patients. Visual impairment and blindness are frequent in PXE, particularly in patients older than 50 years. Although choroidal neovascularization is associated with the majority of vision loss, macular atrophy is also common. The proportion of visual impairment in PXE is comparable with late age-related macular degeneration but manifests earlier in life.
The Sound of Stigmatization: Sonic Habitus, Sonic Styles, and Boundary Work in an Urban Slum.
Schwarz, Ori
2015-07-01
Based on focus groups and interviews with student renters in an Israeli slum, the article explores the contributions of differences in sonic styles and sensibilities to boundary work, social categorization, and evaluation. Alongside visual cues such as broken windows, bad neighborhoods are characterized by sonic cues, such as shouts from windows. Students understand "being ghetto" as being loud in a particular way and use loudness as a central resource in their boundary work. Loudness is read as a performative index of class and ethnicity, and the performance of middle-class studentship entails being appalled by stigmatized sonic practices and participating in their exoticization. However, the sonic is not merely yet another resource of boundary work. Paying sociological attention to senses other than vision reveals complex interactions between structures anchored in the body, structures anchored in language, and actors' identification strategies, which may refine theorizations of the body and the senses in social theory.
The neural bases of spatial frequency processing during scene perception
Kauffmann, Louise; Ramanoël, Stephen; Peyrin, Carole
2014-01-01
Theories on visual perception agree that scenes are processed in terms of spatial frequencies. Low spatial frequencies (LSF) carry coarse information whereas high spatial frequencies (HSF) carry fine details of the scene. However, how and where spatial frequencies are processed within the brain remain unresolved questions. The present review addresses these issues and aims to identify the cerebral regions differentially involved in low and high spatial frequency processing, and to clarify their attributes during scene perception. Results from a number of behavioral and neuroimaging studies suggest that spatial frequency processing is lateralized in both hemispheres, with the right and left hemispheres predominantly involved in the categorization of LSF and HSF scenes, respectively. There is also evidence that spatial frequency processing is retinotopically mapped in the visual cortex. HSF scenes (as opposed to LSF) activate occipital areas in relation to foveal representations, while categorization of LSF scenes (as opposed to HSF) activates occipital areas in relation to more peripheral representations. Concomitantly, a number of studies have demonstrated that LSF information may reach high-order areas rapidly, allowing an initial coarse parsing of the visual scene, which could then be sent back through feedback into the occipito-temporal cortex to guide finer HSF-based analysis. Finally, the review addresses spatial frequency processing within scene-selective regions of the occipito-temporal cortex. PMID:24847226
Pigeons and humans use action and pose information to categorize complex human behaviors.
Qadri, Muhammad A J; Cook, Robert G
2017-02-01
The biological mechanisms used to categorize and recognize behaviors are poorly understood in both human and non-human animals. Using animated digital models, we have recently shown that pigeons can categorize different locomotive animal gaits and types of complex human behaviors. In the current experiments, pigeons (go/no-go task) and humans (choice task) both learned to conditionally categorize two categories of human behaviors that did not repeat and were comprised of the coordinated motions of multiple limbs. These "martial arts" and "Indian dance" action sequences were depicted by a digital human model. Depending upon whether the model was in motion or not, each species was required to engage in different and opposing responses to the two behavioral categories. Both species learned to conditionally and correctly act on this dynamic and static behavioral information, indicating that both species use a combination of static pose cues that are available from stimulus onset in addition to less rapidly available action information in order to successfully discriminate between the behaviors. Human participants additionally demonstrated a bias towards the dynamic information in the display when re-learning the task. Theories that rely on generalized, non-specific visual mechanisms involving channels for motion and static cues offer a parsimonious account of how humans and pigeons recognize and categorize behaviors within and across species. Copyright © 2016 Elsevier Ltd. All rights reserved.
Categorical encoding of color in the brain.
Bird, Chris M; Berens, Samuel C; Horner, Aidan J; Franklin, Anna
2014-03-25
The areas of the brain that encode color categorically have not yet been reliably identified. Here, we used functional MRI adaptation to identify neuronal populations that represent color categories irrespective of metric differences in color. Two colors were successively presented within a block of trials. The two colors were either from the same or different categories (e.g., "blue 1 and blue 2" or "blue 1 and green 1"), and the size of the hue difference was varied. Participants performed a target detection task unrelated to the difference in color. In the middle frontal gyrus of both hemispheres and to a lesser extent, the cerebellum, blood-oxygen level-dependent response was greater for colors from different categories relative to colors from the same category. Importantly, activation in these regions was not modulated by the size of the hue difference, suggesting that neurons in these regions represent color categorically, regardless of metric color difference. Representational similarity analyses, which investigated the similarity of the pattern of activity across local groups of voxels, identified other regions of the brain (including the visual cortex), which responded to metric but not categorical color differences. Therefore, categorical and metric hue differences appear to be coded in qualitatively different ways and in different brain regions. These findings have implications for the long-standing debate on the origin and nature of color categories, and also further our understanding of how color is processed by the brain.
Beyond Exemplars and Prototypes as Memory Representations of Natural Concepts: A Clustering Approach
ERIC Educational Resources Information Center
Verbeemen, Timothy; Vanpaemel, Wolf; Pattyn, Sven; Storms, Gert; Verguts, Tom
2007-01-01
Categorization in well-known natural concepts is studied using a special version of the Varying Abstraction Framework (Vanpaemel, W., & Storms, G. (2006). A varying abstraction framework for categorization. Manuscript submitted for publication; Vanpaemel, W., Storms, G., & Ons, B. (2005). A varying abstraction model for categorization. In B. Bara,…
ERIC Educational Resources Information Center
Heit, Evan; Hayes, Brett K.
2005-01-01
V. M. Sloutsky and A. V. Fisher reported 5 experiments documenting relations among categorization, induction, recognition, and similarity in children as well as adults and proposed a new model of induction, SINC (similarity, induction, categorization). Those authors concluded that induction depends on perceptual similarity rather than conceptual…
Categorical Data Analysis Using a Skewed Weibull Regression Model
NASA Astrophysics Data System (ADS)
Caron, Renault; Sinha, Debajyoti; Dey, Dipak; Polpo, Adriano
2018-03-01
In this paper, we present a Weibull link (skewed) model for categorical response data arising from binomial as well as multinomial models. We show that, for such types of categorical data, the most commonly used models (logit, probit and complementary log-log) can be obtained as limiting cases. We further compare the proposed model with some other asymmetrical models. The Bayesian as well as frequentist estimation procedures for binomial and multinomial data responses are presented in detail. The analysis of two data sets is performed to show the efficiency of the proposed model.
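To make the limiting cases mentioned above concrete, the sketch below evaluates the three standard binomial response functions (logit, probit, complementary log-log) for a few values of the linear predictor. The paper's skewed Weibull link itself is not reproduced here, since its exact parameterization is specific to that work; this is only a comparison of the familiar links it is said to generalize.

```python
# Hedged sketch: the three standard binomial link (response) functions the
# abstract cites as limiting cases of the skewed Weibull model.
import math

def inv_logit(eta):
    return 1.0 / (1.0 + math.exp(-eta))

def inv_probit(eta):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(eta / math.sqrt(2.0)))

def inv_cloglog(eta):
    # Complementary log-log; note its asymmetry around p = 0.5.
    return 1.0 - math.exp(-math.exp(eta))

for eta in (-2.0, 0.0, 2.0):
    print(eta, round(inv_logit(eta), 3), round(inv_probit(eta), 3),
          round(inv_cloglog(eta), 3))
```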
Map visualization of groundwater withdrawals at the sub-basin scale
NASA Astrophysics Data System (ADS)
Goode, Daniel J.
2016-06-01
A simple method is proposed to visualize the magnitude of groundwater withdrawals from wells relative to user-defined water-resource metrics. The map is solely an illustration of the withdrawal magnitudes, spatially centered on wells; it does not depict capture zones or source areas contributing recharge to wells. Common practice is to scale the size (area) of withdrawal well symbols proportional to pumping rate. Symbols are drawn large enough to be visible, but not so large that they overlap excessively. In contrast to such graphics-based symbol sizes, the proposed method uses a depth-rate index (length per time) to visualize the well withdrawal rates by volumetrically consistent areas, called "footprints". The area of each individual well's footprint is the withdrawal rate divided by the depth-rate index. For example, the groundwater recharge rate could be used as a depth-rate index to show how large withdrawals are relative to that recharge. To account for the interference of nearby wells, composite footprints are computed by iterative nearest-neighbor distribution of excess withdrawals on a computational and display grid having uniform square cells. The map shows circular footprints at individual isolated wells and merged footprint areas where wells' individual footprints overlap. Examples are presented for depth-rate indexes corresponding to recharge, to spatially variable stream baseflow (normalized by basin area), and to the average rate of water-table decline (scaled by specific yield). These depth-rate indexes are water-resource metrics, and the footprints visualize the magnitude of withdrawals relative to these metrics.
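The footprint arithmetic described above is simple enough to sketch: an isolated well's footprint area is its withdrawal rate divided by the chosen depth-rate index, and the equivalent circular radius follows from that area. The sketch covers only isolated wells; the iterative nearest-neighbor redistribution for overlapping footprints is not reproduced, and the units and example values are illustrative.

```python
# Hedged sketch: individual well "footprints" as described in the abstract.
# footprint_area = withdrawal_rate / depth_rate_index
# (e.g. m^3/d divided by m/d gives m^2).
import math

def footprint(withdrawal_rate, depth_rate_index):
    """Return (area, equivalent circular radius) of an isolated well's footprint."""
    area = withdrawal_rate / depth_rate_index
    radius = math.sqrt(area / math.pi)
    return area, radius

# Example: 500 m^3/d pumped against a recharge-based index of 0.001 m/d.
area, radius = footprint(500.0, 0.001)
print(f"area = {area:.0f} m^2, radius = {radius:.0f} m")  # ~500000 m^2, ~399 m
```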
Contini, Erika W; Wardle, Susan G; Carlson, Thomas A
2017-10-01
Visual object recognition is a complex, dynamic process. Multivariate pattern analysis methods, such as decoding, have begun to reveal how the brain processes complex visual information. Recently, temporal decoding methods for EEG and MEG have offered the potential to evaluate the temporal dynamics of object recognition. Here we review the contribution of M/EEG time-series decoding methods to understanding visual object recognition in the human brain. Consistent with the current understanding of the visual processing hierarchy, low-level visual features dominate decodable object representations early in the time-course, with more abstract representations related to object category emerging later. A key finding is that the time-course of object processing is highly dynamic and rapidly evolving, with limited temporal generalisation of decodable information. Several studies have examined the emergence of object category structure, and we consider to what degree category decoding can be explained by sensitivity to low-level visual features. Finally, we evaluate recent work attempting to link human behaviour to the neural time-course of object processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
Right-Brained Kids in Left-Brained Schools
ERIC Educational Resources Information Center
Hunter, Madeline
1976-01-01
Students who learn well through left hemisphere brain input (oral and written) have minimal practice in using the right hemisphere, while those who are more proficient in right hemisphere (visual) input processing are handicapped by having to use primarily their left brains. (MB)
The Cooperate Assistive Teamwork Environment for Software Description Languages.
Groenda, Henning; Seifermann, Stephan; Müller, Karin; Jaworek, Gerhard
2015-01-01
Versatile description languages such as the Unified Modeling Language (UML) are commonly used in software engineering across different application domains in theory and practice. They often use graphical notations and leverage visual memory for expressing complex relations. Those notations are hard to access for people with visual impairment and impede their smooth inclusion in an engineering team. Existing approaches provide textual notations but require manual synchronization between the notations. This paper presents requirements for an accessible and language-aware teamwork environment as well as our plan for the assistive implementation of Cooperate. An industrial software engineering team consisting of people with and without visual impairment will evaluate the implementation.
Putting the process of care into practice.
Houck, S; Baum, N
1997-01-01
"Putting the process of care into practice" provides an interactive, visual model of outpatient resources and processes. It illustrates an episode of care from a fee-for-service as well as managed care perspective. The Care Process Matrix can be used for planning and staffing, as well as retrospectively to assess appropriate resource use within a practice. It identifies effective strategies for reducing the cost per episode of care and optimizing quality while moving from managing costs to managing the care process. Because of an overbuilt health care system, including an oversupply of physicians, success in the future will require redesigning the process of care and a coherent customer service strategy. The growing complexities of practice will require physicians to focus on several key competencies while outsourcing other functions such as billing and contracting.
Safe prescribing practices in pregnancy and lactation.
Hansen, Wendy F; Peacock, Anne E; Yankowitz, Jerome
2002-01-01
Midwives and other health care providers face a dilemma when a pregnant woman develops a condition that usually is treated with a pharmacologic agent. Understanding of basic teratology associated with drugs as well as the FDA categorization of agents can assist professionals in recognizing which pharmaceuticals should be used or avoided. In addition to reviewing teratology, this article addresses the use of common drugs for the treatment of upper respiratory conditions, minor pain, gastrointestinal problems, psychiatric illnesses, and neurologic disorders. In each category, current evidence is presented pertaining to which agents should be recommended for pregnant women.
NASA Astrophysics Data System (ADS)
Hatfield, Fraser N.; Dehmeshki, Jamshid
1998-09-01
Neurosurgery is an extremely specialized area of medical practice, requiring many years of training. It has been suggested that virtual reality models of the complex structures within the brain may aid in the training of neurosurgeons as well as playing an important role in the preparation for surgery. This paper focuses on the application of a probabilistic neural network to the automatic segmentation of the ventricles from magnetic resonance images of the brain, and their three dimensional visualization.
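Since the abstract only names the classifier, a minimal Parzen-window sketch of a probabilistic neural network is given below to make the idea concrete: each class's density is estimated by summing Gaussian kernels centred on its training samples, and a voxel is assigned to the class with the largest estimated density. This is a generic PNN, not the authors' implementation; the feature values and smoothing parameter are assumptions.

```python
# Hedged sketch of a probabilistic neural network (Parzen-window classifier):
# not the paper's implementation; feature values and sigma are illustrative.
import math

def pnn_classify(x, training_data, sigma=1.0):
    """training_data: {class_label: [feature_vectors]}; returns the label with
    the largest kernel-density estimate at x."""
    scores = {}
    for label, samples in training_data.items():
        total = 0.0
        for s in samples:
            dist2 = sum((xi - si) ** 2 for xi, si in zip(x, s))
            total += math.exp(-dist2 / (2.0 * sigma ** 2))
        scores[label] = total / len(samples)
    return max(scores, key=scores.get)

# Toy 1-D intensity features for "ventricle" vs "background" voxels.
train = {"ventricle": [[0.1], [0.2], [0.15]], "background": [[0.8], [0.9], [0.85]]}
print(pnn_classify([0.18], train))  # ventricle
```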
2000-10-01
To investigate the association between control of intraocular pressure after surgical intervention for glaucoma and visual field deterioration. In the Advanced Glaucoma Intervention Study, eyes were randomly assigned to one of two sequences of glaucoma surgery, one beginning with argon laser trabeculoplasty and the other trabeculectomy. In the present article we examine the relationship between intraocular pressure and progression of visual field damage over 6 or more years of follow-up. In the first analysis, designated Predictive Analysis, we categorize 738 eyes into three groups based on intraocular pressure determinations over the first three 6-month follow-up visits. In the second analysis, designated Associative Analysis, we categorize 586 eyes into four groups based on the percent of 6-month visits over the first 6 follow-up years in which eyes presented with intraocular pressure less than 18 mm Hg. The outcome measure in both analyses is change from baseline in follow-up visual field defect score (range, 0 to 20 units). In the Predictive Analysis, eyes with early average intraocular pressure greater than 17.5 mm Hg had an estimated worsening during subsequent follow-up that was 1 unit of visual field defect score greater than eyes with average intraocular pressure less than 14 mm Hg (P =.002). This amount of worsening was greater at 7 years (1.89 units; P <.001) than at 2 years (0.64 units; P =.071). In the Associative Analysis, eyes with 100% of visits with intraocular pressure less than 18 mm Hg over 6 years had mean changes from baseline in visual field defect score close to zero during follow-up, whereas eyes with less than 50% of visits with intraocular pressure less than 18 mm Hg had an estimated worsening over follow-up of 0.63 units of visual field defect score (P =.083). This amount of worsening was greater at 7 years (1.93 units; P <.001) than at 2 years (0.25 units; P =.572). In both analyses low intraocular pressure is associated with reduced progression of visual field defect, supporting evidence from earlier studies of a protective role for low intraocular pressure in visual field deterioration.
Layher, Georg; Schrodt, Fabian; Butz, Martin V.; Neumann, Heiko
2014-01-01
The categorization of real world objects is often reflected in the similarity of their visual appearances. Such categories of objects do not necessarily form disjunct sets of objects, neither semantically nor visually. The relationship between categories can often be described in terms of a hierarchical structure. For instance, tigers and leopards build two separate mammalian categories, both of which are subcategories of the category Felidae. In the last decades, the unsupervised learning of categories of visual input stimuli has been addressed by numerous approaches in machine learning as well as in computational neuroscience. However, the question of what kind of mechanisms might be involved in the process of subcategory learning, or category refinement, remains a topic of active investigation. We propose a recurrent computational network architecture for the unsupervised learning of categorial and subcategorial visual input representations. During learning, the connection strengths of bottom-up weights from input to higher-level category representations are adapted according to the input activity distribution. In a similar manner, top-down weights learn to encode the characteristics of a specific stimulus category. Feedforward and feedback learning in combination realize an associative memory mechanism, enabling the selective top-down propagation of a category's feedback weight distribution. We suggest that the difference between the expected input encoded in the projective field of a category node and the current input pattern controls the amplification of feedforward-driven representations. Large enough differences trigger the recruitment of new representational resources and the establishment of additional (sub-) category representations. We demonstrate the temporal evolution of such learning and show how the proposed combination of an associative memory with a modulatory feedback integration successfully establishes category and subcategory representations. PMID:25538637
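The recruitment mechanism described above (compare the input with the pattern expected by the best-matching category, and allocate a new category node when the mismatch is large) can be sketched as a simple competitive learner. The similarity measure, match threshold, and learning rate below are illustrative assumptions, not the published architecture.

```python
# Hedged sketch of novelty-gated category recruitment, loosely in the spirit of
# the mechanism described above (and of ART-style learners). Parameters are
# illustrative assumptions, not the published model.
import math

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def learn(inputs, match_threshold=0.9, lr=0.2):
    categories = []  # each category is a weight vector ("expected input")
    for x in inputs:
        if categories:
            best = max(range(len(categories)), key=lambda i: cosine(x, categories[i]))
            if cosine(x, categories[best]) >= match_threshold:
                # small mismatch: refine the existing category toward the input
                categories[best] = [(1 - lr) * w + lr * xi
                                    for w, xi in zip(categories[best], x)]
                continue
        # large mismatch (or no categories yet): recruit a new category node
        categories.append(list(x))
    return categories

data = [[1, 0, 0], [0.9, 0.1, 0], [0, 1, 0], [0, 0.95, 0.05]]
print(len(learn(data)))  # 2 categories recruited
```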
Hügelschäfer, Sabine; Jaudas, Alexander; Achtziger, Anja
2016-10-15
Gender categorization is highly automatic. Studies measuring ERPs during the presentation of male and female faces in a categorization task showed that this categorization is extremely quick (around 130ms, indicated by the N170). We tested whether this automatic process can be controlled by goal intentions and implementation intentions. First, we replicated the N170 modulation on gender-incongruent faces as reported in previous research. This effect was only observed in a task in which faces had to be categorized according to gender, but not in a task that required responding to a visual feature added to the face stimuli (the color of a dot) while gender was irrelevant. Second, it turned out that the N170 modulation on gender-incongruent faces was altered if a goal intention was set that aimed at controlling a gender bias. We interpret this finding as an indicator of nonconscious goal pursuit. The N170 modulation was completely absent when this goal intention was furnished with an implementation intention. In contrast, intentions did not alter brain activity at a later time window (P300), which is associated with more complex and rather conscious processes. In line with previous research, the P300 was modulated by gender incongruency even if individuals were strongly involved in another task, demonstrating the automaticity of gender detection. We interpret our findings as evidence that automatic gender categorization that occurs at a very early processing stage can be effectively controlled by intentions. Copyright © 2016 Elsevier B.V. All rights reserved.
Miles, James D; Proctor, Robert W
2009-10-01
In the current study, we show that the non-intentional processing of visually presented words and symbols can be attenuated by sounds. Importantly, this attenuation is dependent on the similarity in categorical domain between the sounds and words or symbols. Participants performed a task in which left or right responses were made contingent on the color of a centrally presented target that was either a location word (LEFT or RIGHT) or a left or right arrow. Responses were faster when they were on the side congruent with the word or arrow. This bias was reduced for location words by a neutral spoken word and for arrows by a tone series, but not vice versa. We suggest that words and symbols are processed with minimal attentional requirements until they are categorized into specific knowledge domains, but then become sensitive to other information within the same domain regardless of the similarity between modalities.
Lim, Sung-joo; Holt, Lori L
2011-01-01
Although speech categories are defined by multiple acoustic dimensions, some are perceptually weighted more than others and there are residual effects of native-language weightings in non-native speech perception. Recent research on nonlinguistic sound category learning suggests that the distribution characteristics of experienced sounds influence perceptual cue weights: Increasing variability across a dimension leads listeners to rely upon it less in subsequent category learning (Holt & Lotto, 2006). The present experiment investigated the implications of this among native Japanese learning English /r/-/l/ categories. Training was accomplished using a videogame paradigm that emphasizes associations among sound categories, visual information, and players' responses to videogame characters rather than overt categorization or explicit feedback. Subjects who played the game for 2.5h across 5 days exhibited improvements in /r/-/l/ perception on par with 2-4 weeks of explicit categorization training in previous research and exhibited a shift toward more native-like perceptual cue weights. Copyright © 2011 Cognitive Science Society, Inc.
Mixing Metaphors in the Cerebral Hemispheres: What Happens When Careers Collide?
Chettih, Selmaan; Durgin, Frank H.; Grodner, Daniel J.
2014-01-01
Are processes of figurative comparison and figurative categorization different? An experiment combining alternative-sense and matched-sense metaphor priming with a divided visual field assessment technique sought to isolate processes of comparison and categorization in the two cerebral hemispheres. For target metaphors presented in the RVF/LH, only matched-sense primes were facilitative. Literal primes and alternative-sense primes had no effect on comprehension time compared to the unprimed baseline. The effects of matched-sense primes were additive with the rated conventionality of the targets. For target metaphors presented to the LVF/RH, matched-sense primes were again additively facilitative. However, alternative-sense primes, though facilitative overall, seemed to eliminate the pre-existing advantages of conventional target metaphor senses in the LVF/RH in favor of metaphoric senses similar to those of the primes. These findings are consistent with tightly controlled categorical coding in the LH and coarse, and flexible, context dependent coding in the RH. PMID:22103787
Lim, Sung-joo; Holt, Lori L.
2011-01-01
Although speech categories are defined by multiple acoustic dimensions, some are perceptually-weighted more than others and there are residual effects of native-language weightings in non-native speech perception. Recent research on nonlinguistic sound category learning suggests that the distribution characteristics of experienced sounds influence perceptual cue weights: increasing variability across a dimension leads listeners to rely upon it less in subsequent category learning (Holt & Lotto, 2006). The present experiment investigated the implications of this among native Japanese learning English /r/-/l/ categories. Training was accomplished using a videogame paradigm that emphasizes associations among sound categories, visual information and players’ responses to videogame characters rather than overt categorization or explicit feedback. Subjects who played the game for 2.5 hours across 5 days exhibited improvements in /r/-/l/ perception on par with 2–4 weeks of explicit categorization training in previous research and exhibited a shift toward more native-like perceptual cue weights. PMID:21827533
Interhemispheric interaction in the split-brain.
Lambert, A J
1991-01-01
An experiment is reported in which a split-brain patient (LB) was simultaneously presented with two words, one to the left and one to the right of fixation. He was instructed to categorize the right sided word (living vs non-living), and to ignore anything appearing to the left of fixation. LB's performance on this task closely resembled that of normal neurologically intact individuals. Manual response speed was slower when the unattended (left visual field) word belonged to the same category as the right visual field word. Implications of this finding for views of the split-brain syndrome are discussed.
Women process multisensory emotion expressions more efficiently than men.
Collignon, O; Girard, S; Gosselin, F; Saint-Amour, D; Lepore, F; Lassonde, M
2010-01-01
Despite claims in the popular press, experiments investigating whether female observers are more efficient than male observers at processing expressions of emotion have produced inconsistent findings. In the present study, participants were asked to categorize fear and disgust expressions displayed auditorily, visually, or audio-visually. Results revealed an advantage for women in all conditions of stimulus presentation. We also observed more nonlinear probabilistic summation in the bimodal conditions in female than in male observers, indicating greater neural integration of different sensory-emotional information. These findings indicate robust differences between genders in the multisensory perception of emotion expression.
Automatic motor activation in the executive control of action
McBride, Jennifer; Boy, Frédéric; Husain, Masud; Sumner, Petroc
2012-01-01
Although executive control and automatic behavior have often been considered separate and distinct processes, there is strong emerging and convergent evidence that they may in fact be intricately interlinked. In this review, we draw together evidence showing that visual stimuli cause automatic and unconscious motor activation, and how this in turn has implications for executive control. We discuss object affordances, alien limb syndrome, the visual grasp reflex, subliminal priming, and subliminal triggering of attentional orienting. Consideration of these findings suggests automatic motor activation might form an intrinsic part of all behavior, rather than being categorically different from voluntary actions. PMID:22536177
Bankson, B B; Hebart, M N; Groen, I I A; Baker, C I
2018-05-17
Visual object representations are commonly thought to emerge rapidly, yet it has remained unclear to what extent early brain responses reflect purely low-level visual features of these objects and how strongly those features contribute to later categorical or conceptual representations. Here, we aimed to estimate a lower temporal bound for the emergence of conceptual representations by defining two criteria that characterize such representations: 1) conceptual object representations should generalize across different exemplars of the same object, and 2) these representations should reflect high-level behavioral judgments. To test these criteria, we compared magnetoencephalography (MEG) recordings between two groups of participants (n = 16 per group) exposed to different exemplar images of the same object concepts. Further, we disentangled low-level from high-level MEG responses by estimating the unique and shared contribution of models of behavioral judgments, semantics, and different layers of deep neural networks of visual object processing. We find that 1) both generalization across exemplars as well as generalization of object-related signals across time increase after 150 ms, peaking around 230 ms; 2) representations specific to behavioral judgments emerged rapidly, peaking around 160 ms. Collectively, these results suggest a lower bound for the emergence of conceptual object representations around 150 ms following stimulus onset. Copyright © 2018 Elsevier Inc. All rights reserved.
Warfighter Visualizations Compilations
2013-05-01
list of the user’s favorite websites or other textual content, sub-categorized into types, such as blogs, social networking sites, comics , videos...available: The example in the prototype shows a random archived comic from the website. Other options include thumbnail strips of imagery or dynamic...varied, and range from serving as statistical benchmarks, for increasing social consciousness and interaction, for improving educational interactions
Effects of Variance and Input Distribution on the Training of L2 Learners' Tone Categorization
ERIC Educational Resources Information Center
Liu, Jiang
2013-01-01
Recent psycholinguistic findings showed that (a) a multi-modal phonetic training paradigm that encodes visual, interactive information is more effective in training L2 learners' perception of novel categories, (b) decreasing the acoustic variance of a phonetic dimension allows the learners to more effectively shift the perceptual weight towards…
ERIC Educational Resources Information Center
Gao, Zaifeng; Bentin, Shlomo
2011-01-01
Face perception studies investigated how spatial frequencies (SF) are extracted from retinal display while forming a perceptual representation, or their selective use during task-imposed categorization. Here we focused on the order of encoding low-spatial frequencies (LSF) and high-spatial frequencies (HSF) from perceptual representations into…
A Manual for Coding Descriptions, Interpretations, and Evaluations of Visual Art Forms.
ERIC Educational Resources Information Center
Acuff, Bette C.; Sieber-Suppes, Joan
This manual presents a system for categorizing stated esthetic responses to paintings. It is primarily a training manual for coders, but it may also be used for teaching reflective thinking skills and for evaluating programs of art education. The coding system contains 33 subdivisions of esthetic responses under three major categories: Cue…
Surgical Practical Skills Learning Curriculum: Implementation and Interns' Confidence Perceptions.
Acosta, Danilo; Castillo-Angeles, Manuel; Garces-Descovich, Alejandro; Watkins, Ammara A; Gupta, Alok; Critchlow, Jonathan F; Kent, Tara S
To provide an overview of the practical skills learning curriculum and assess its effects over time on surgical interns' perceptions of their technical skills, patient management, administrative tasks, and knowledge. An 84-hour practical skills curriculum composed of didactic, simulation, and practical sessions was implemented during the 2015 to 2016 academic year for general surgery interns. In total, 40% of the sessions were held during orientation, whereas the remaining sessions were held throughout the academic year. Interns' perceptions of their technical skills, administrative tasks, patient management, and knowledge were assessed by the practical skills curriculum residents' perception survey at various time points during their intern year (baseline, midpoint, and final). Interns were also asked to fill out an evaluation survey at the completion of each session to obtain feedback on the curriculum. General Surgery Residency program at a tertiary care academic institution. 20 General Surgery categorical and preliminary interns. Significant differences were found over time in interns' perceptions of their technical skills, patient management, administrative tasks, and knowledge (p < 0.001 for all). The results were also statistically significant when accounting for a prior boot camp course in medical school, intern status (categorical or preliminary), and gender (p < 0.05 for all). Differences in interns' perceptions occurred both from baseline to midpoint and from midpoint to final time point evaluations (p < 0.001 for all). Interns' baseline perceptions of their technical skills, patient management, administrative tasks, and knowledge did not differ by prior surgical boot camp status in medical school, intern status (categorical vs. preliminary), or gender (p > 0.05 for all). Implementation of a practical skills curriculum in surgical internships can improve interns' confidence in their technical skills, patient management skills, administrative tasks, and knowledge. Copyright © 2018. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Garcia-Belmonte, Germà
2017-06-01
Spatial visualization is a well-established topic of education research that has helped improve science and engineering students' skills in spatial relations. Connections have been established between visualization as a comprehension tool and instruction in several scientific fields. Learning about dynamic processes mainly relies upon static spatial representations or images. Visualization of time is inherently problematic because time can be conceptualized in terms of two opposite conceptual metaphors based on spatial relations, as inferred from conventional linguistic patterns. The situation is particularly demanding when time-varying signals are recorded using displaying electronic instruments and the image must be properly interpreted. This work deals with the interplay between linguistic metaphors, visual thinking, and scientific instrument mediation in the process of interpreting time-varying signals displayed by electronic instruments. The analysis draws on a simplified version of a communication system as an example of practical signal recording and image visualization in a physics and engineering laboratory experience. Instrumentation delivers meaningful signal representations because it is designed to incorporate a specific and culturally favored view of time. It is suggested that difficulties in interpreting time-varying signals are linked with the existing dual perception of conflicting time metaphors. The activation of a specific space-time conceptual mapping might allow for a proper signal interpretation. Instruments then play a central role as visualization mediators by yielding an image that matches specific perception abilities and practical purposes. Here I have identified two ways of understanding time, as used in the different trajectories in which students are situated. Interestingly, specific displaying instruments belonging to different cultural traditions incorporate contrasting time views. One of them sees time in terms of a dynamic metaphor consisting of a static observer looking at passing events. This is a general and widespread practice common in contemporary mass culture, which lies behind the process of making sense of moving images usually visualized by means of movie shots. In contrast, scientific culture favored another way of conceptualizing time (the static time metaphor) that historically fostered the construction of graphs and the incorporation of time-dependent functions, as represented on the Cartesian plane, into displaying instruments. Both types of culture, scientific and mass, are considered highly technological in the sense that complex instruments, apparatus, or machines participate in their visual practices.
Statistics of high-level scene context.
Greene, Michelle R
2013-01-01
Context is critical for recognizing environments and for searching for objects within them: contextual associations have been shown to modulate reaction time and object recognition accuracy, as well as influence the distribution of eye movements and patterns of brain activations. However, we have not yet systematically quantified the relationships between objects and their scene environments. Here I seek to fill this gap by providing descriptive statistics of object-scene relationships. A total of 48,167 objects were hand-labeled in 3499 scenes using the LabelMe tool (Russell et al., 2008). From these data, I computed a variety of descriptive statistics at three different levels of analysis: the ensemble statistics that describe the density and spatial distribution of unnamed "things" in the scene; the bag of words level where scenes are described by the list of objects contained within them; and the structural level where the spatial distribution and relationships between the objects are measured. The utility of each level of description for scene categorization was assessed through the use of linear classifiers, and the plausibility of each level for modeling human scene categorization is discussed. Of the three levels, ensemble statistics were found to be the most informative (per feature), and also best explained human patterns of categorization errors. Although a bag of words classifier had similar performance to human observers, it had a markedly different pattern of errors. However, certain objects are more useful than others, and ceiling classification performance could be achieved using only the 64 most informative objects. As object location tends not to vary as a function of category, structural information provided little additional information. Additionally, these data provide valuable information on natural scene redundancy that can be exploited for machine vision, and can help the visual cognition community to design experiments guided by statistics rather than intuition.
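For readers who want a concrete sense of the "bag of words" level of description, the sketch below represents each scene by counts of its labeled objects and trains a linear classifier on scene category. It is only a minimal illustration of the technique the abstract names; the object lists and category labels are hypothetical stand-ins, not the LabelMe annotations used in the study.

```python
# Minimal sketch of a bag-of-words scene classifier (illustrative data only).
from collections import Counter
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

scenes = [
    (["sink", "towel", "mirror"], "bathroom"),
    (["bed", "lamp", "pillow"], "bedroom"),
    (["sink", "stove", "cabinet"], "kitchen"),
    (["bed", "mirror", "lamp"], "bedroom"),
]

# Bag-of-words features: object-name -> count for each scene.
X_dicts = [Counter(objects) for objects, _ in scenes]
y = [category for _, category in scenes]

vectorizer = DictVectorizer(sparse=True)
X = vectorizer.fit_transform(X_dicts)

clf = LinearSVC()  # a linear classifier, as in the study's comparison
clf.fit(X, y)
print(clf.predict(vectorizer.transform([Counter(["stove", "sink"])])))
```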
Categorization of hyperspectral information (HSI) based on the distribution of spectra in hyperspace
NASA Astrophysics Data System (ADS)
Resmini, Ronald G.
2003-09-01
Hyperspectral information (HSI) data are commonly categorized by a description of the dominant physical geographic background captured in the image cube. In other words, HSI categorization is commonly based on a cursory, visual assessment of whether the data are of desert, forest, urban, littoral, jungle, alpine, etc., terrains. Additionally, often the design of HSI collection experiments is based on the acquisition of data of the various backgrounds or of objects of interest within the various terrain types. These data are for assessing and quantifying algorithm performance as well as for algorithm development activities. Here, results of an investigation into the validity of the backgrounds-driven mode of characterizing the diversity of hyperspectral data are presented. HSI data are described quantitatively, in the space where most algorithms operate: n-dimensional (n-D) hyperspace, where n is the number of bands in an HSI data cube. Nineteen metrics designed to probe hyperspace are applied to 14 HYDICE HSI data cubes that represent nine different backgrounds. Each of the 14 sets (one for each HYDICE cube) of 19 metric values was analyzed for clustering. With the present set of data and metrics, there is no clear, unambiguous break-out of metrics based on the nine different geographic backgrounds. The break-outs clump seemingly unrelated data types together; e.g., littoral and urban/residential. Most metrics are normally distributed and indicate no clustering; one metric is one outlier away from normal (i.e., two clusters); and five are comprised of two distributions (i.e., two clusters). Overall, there are three different break-outs that do not correspond to conventional background categories. Implications of these preliminary results are discussed as are recommendations for future work.
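As a rough illustration of metrics that probe how spectra distribute in n-dimensional band space, the sketch below computes two simple descriptors per cube and clusters cubes on those descriptors. The metrics and the synthetic cubes are assumptions for illustration only; they are not the nineteen metrics or the HYDICE data used in the study.

```python
# Sketch: per-cube descriptors of spectral distribution, then clustering across cubes.
import numpy as np
from sklearn.cluster import KMeans

def hyperspace_metrics(cube):
    """cube: array of shape (rows, cols, bands); returns a small metric vector."""
    spectra = cube.reshape(-1, cube.shape[-1]).astype(float)
    cov = np.cov(spectra, rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)
    # Metric 1: fraction of total variance carried by the top 3 principal directions.
    top3 = eigvals[-3:].sum() / eigvals.sum()
    # Metric 2: mean distance of spectra from the cube's mean spectrum.
    spread = np.linalg.norm(spectra - spectra.mean(axis=0), axis=1).mean()
    return np.array([top3, spread])

rng = np.random.default_rng(0)
cubes = [rng.normal(size=(32, 32, 20)) * s for s in (1.0, 1.1, 3.0, 3.2)]  # synthetic cubes
features = np.array([hyperspace_metrics(c) for c in cubes])

labels = KMeans(n_clusters=2, n_init=10).fit_predict(features)
print(labels)  # do metric-based clusters line up with background categories?
```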
Faculty Best Practices to Support Students in the "Virtual Doctoral Land"
ERIC Educational Resources Information Center
Deshpande, Anant
2017-01-01
Online students face numerous challenges in successfully completing doctoral programmes. The aim of this article is to explore the best practices that can be employed by faculty to support students in achieving this. It also seeks to categorize and identify the best practices emerging from literature into themes. An exploratory research method was…
NASA Astrophysics Data System (ADS)
Duong, Tuan A.; Duong, Nghi; Le, Duong
2017-01-01
In this paper, we present an integration technique using a bio-inspired, control-based visual and olfactory receptor system to search for elusive targets in practical environments where the targets cannot be reliably detected from either type of sensory data alone. The bio-inspired visual system is based on a model of the extended visual pathway, which consists of saccadic eye movements and the visual pathway (vertebrate retina, lateral geniculate nucleus and visual cortex), to enable powerful target detection from noisy, partial, and incomplete visual data. The olfactory receptor algorithm, namely spatial-invariant independent component analysis, which was developed from data of the Caltech olfactory receptor-electronic nose (enose), is adopted to enable odorant target detection in an unknown environment. The integration of the two systems is a vital approach and sets a cornerstone for effective and low-cost miniaturized UAVs or fly robots for future DOD and NASA missions, as well as for security systems in Internet of Things environments.
DISSECT: a new mnemonic-based approach to the categorization of aortic dissection.
Dake, M D; Thompson, M; van Sambeek, M; Vermassen, F; Morales, J P
2013-08-01
Classification systems for aortic dissection provide important guides to clinical decision-making, but the relevance of traditional categorization schemes is being questioned in an era when endovascular techniques are assuming a growing role in the management of this frequently complex and catastrophic entity. In recognition of the expanding range of interventional therapies now used as alternatives to conventional treatment approaches, the Working Group on Aortic Diseases of the DEFINE Project developed a categorization system that features the specific anatomic and clinical manifestations of the disease process that are most relevant to contemporary decision-making. The DISSECT classification system is a mnemonic-based approach to the evaluation of aortic dissection. It guides clinicians through an assessment of six critical characteristics that facilitate optimal communication of the most salient details that currently influence the selection of a therapeutic option, including those findings that are key when considering an endovascular procedure, but are not taken into account by the DeBakey or Stanford categorization schemes. The six features of aortic dissection include: duration of disease; intimal tear location; size of the dissected aorta; segmental extent of aortic involvement; clinical complications of the dissection, and thrombus within the aortic false lumen. In current clinical practice, endovascular therapy is increasingly considered as an alternative to medical management or open surgical repair in select cases of type B aortic dissection. Currently, endovascular aortic repair is not used for patients with type A aortic dissection, but catheter-based techniques directed at peripheral branch vessel ischemia that may complicate type A dissection are considered valuable adjunctive interventions, when indicated. The use of a new system for categorization of aortic dissection, DISSECT, addresses the shortcomings of well-known established schemes devised more than 40 years ago, before the introduction of endovascular techniques. It will serve as a guide to support a critical analysis of contemporary therapeutic options and inform management decisions based on specific features of the disease process. Copyright © 2013 European Society for Vascular Surgery. All rights reserved.
Crews, John E.; Chou, Chiu-Fang; Zack, Matthew M.; Zhang, Xinzhi; Bullard, Kai McKeever; Morse, Alan R.; Saaddine, Jinan B.
2016-01-01
Purpose To examine the association of health-related quality of life (HRQoL) with severity of visual impairment among people aged 40–64 years. Methods We used cross-sectional data from the 2006–2010 Behavioral Risk Factor Surveillance System to examine six measures of HRQoL: self-reported health, physically unhealthy days, mentally unhealthy days, activity limitation days, life satisfaction, and disability. Visual impairment was categorized as no, a little, or moderate/severe. We examined the association between visual impairment and HRQoL using logistic regression accounting for the survey’s complex design. Results Overall, 23.0% of the participants reported a little difficulty seeing, while 16.8% reported moderate/severe difficulty seeing. People aged 40–64 years with moderate/severe visual impairment had more frequent (≥14) physically unhealthy days, mentally unhealthy days, and activity limitation days in the last 30 days, as well as greater life dissatisfaction, greater disability, and poorer health compared to people reporting no or a little visual impairment. After controlling for covariates (age, sex, marital status, race/ethnicity, education, income, state, year, health insurance, heart disease, stroke, heart attack, body mass index, leisure-time activity, smoking, and medical care costs), and compared to people with no visual impairment, those with moderate/severe visual impairment were more likely to have fair/poor health (odds ratio, OR, 2.01, 95% confidence interval, CI, 1.82–2.23), life dissatisfaction (OR 2.06, 95% CI 1.80–2.35), disability (OR 1.95, 95% CI 1.80–2.13), and frequent physically unhealthy days (OR 1.69, 95% CI 1.52–1.88), mentally unhealthy days (OR 1.84, 95% CI 1.66–2.05), and activity limitation days (OR 1.94, 95% CI 1.71–2.20; all p < 0.0001). Conclusion Poor HRQoL was strongly associated with moderate/severe visual impairment among people aged 40–64 years. PMID:27159347
Multifocal visual evoked potentials for early glaucoma detection.
Weizer, Jennifer S; Musch, David C; Niziol, Leslie M; Khan, Naheed W
2012-07-01
To compare multifocal visual evoked potentials (mfVEP) with other detection methods in early open-angle glaucoma. Ten patients with suspected glaucoma and 5 with early open-angle glaucoma underwent mfVEP, standard automated perimetry (SAP), short-wave automated perimetry, frequency-doubling technology perimetry, and nerve fiber layer optical coherence tomography. Nineteen healthy control subjects underwent mfVEP and SAP for comparison. Comparisons between groups involving continuous variables were made using independent t tests; for categorical variables, Fisher's exact test was used. Monocular mfVEP cluster defects were associated with an increased SAP pattern standard deviation (P = .0195). Visual fields that showed interocular mfVEP cluster defects were more likely to also show superior quadrant nerve fiber layer thinning by OCT (P = .0152). Multifocal visual evoked potential cluster defects are associated with a functional and an anatomic measure that both relate to glaucomatous optic neuropathy. Copyright 2012, SLACK Incorporated.
Authoritarianism, cognitive rigidity, and the processing of ambiguous visual information.
Duncan, Lauren E; Peterson, Bill E
2014-01-01
Intolerance of ambiguity and cognitive rigidity are unifying aspects of authoritarianism as defined by Adorno, Frenkel-Brunswik, Levinson, and Sanford (1982/1950), who hypothesized that authoritarians view the world in absolute terms (e.g., good or evil). Past studies have documented the relationship between authoritarianism and intolerance of ambiguity and rigidity. Frenkel-Brunswik (1949) hypothesized that this desire for absolutism was rooted in perceptual processes. We present a study with three samples that directly tests the relationship between right wing authoritarianism (RWA) and the processing of ideologically neutral but ambiguous visual stimuli. As hypothesized, in all three samples we found that RWA was related to the slower processing of visual information that required participants to recategorize objects. In a fourth sample, RWA was unrelated to speed of processing visual information that did not require recategorization. Overall, results suggest a relationship between RWA and rigidity in categorization.
Short temporal asynchrony disrupts visual object recognition
Singer, Jedediah M.; Kreiman, Gabriel
2014-01-01
Humans can recognize objects and scenes in a small fraction of a second. The cascade of signals underlying rapid recognition might be disrupted by temporally jittering different parts of complex objects. Here we investigated the time course over which shape information can be integrated to allow for recognition of complex objects. We presented fragments of object images in an asynchronous fashion and behaviorally evaluated categorization performance. We observed that visual recognition was significantly disrupted by asynchronies of approximately 30 ms, suggesting that spatiotemporal integration begins to break down with even small deviations from simultaneity. However, moderate temporal asynchrony did not completely obliterate recognition; in fact, integration of visual shape information persisted even with an asynchrony of 100 ms. We describe the data with a concise model based on the dynamic reduction of uncertainty about what image was presented. These results emphasize the importance of timing in visual processing and provide strong constraints for the development of dynamical models of visual shape recognition. PMID:24819738
New Classification of Focal Cortical Dysplasia: Application to Practical Diagnosis
Bae, Yoon-Sung; Kang, Hoon-Chul; Kim, Heung Dong; Kim, Se Hoon
2012-01-01
Background and Purpose: Malformation of cortical development (MCD) is a well-known cause of drug-resistant epilepsy and focal cortical dysplasia (FCD) is the most common neuropathological finding in surgical specimens from drug-resistant epilepsy patients. Palmini’s classification proposed in 2004 is now widely used to categorize FCD. Recently, however, Blumcke et al. recommended a new system for classifying FCD in 2011. Methods: We applied the new classification system in practical diagnosis of a sample of 117 patients who underwent neurosurgical operations due to drug-resistant epilepsy at Severance Hospital in Seoul, Korea. Results: Among 117 cases, a total of 16 cases were shifted to other FCD subtypes under the new classification system. Five cases were reclassified to type IIIa and five cases were categorized as dual pathology. The other six cases were changed within the type I category. Conclusions: The most remarkable changes in the new classification system are the advent of dual pathology and FCD type III. Thus, it will be very important for pathologists and clinicians to discriminate between these new categories. More large-scale research needs to be conducted to elucidate the clinical influence of the alterations within the classification of type I disease. Although the new FCD classification system has several advantages compared to the former, the correlation with clinical characteristics is not yet clear. PMID:24649461
Promoting mental health competency in residency training.
Bauer, Nerissa S; Sullivan, Paula D; Hus, Anna M; Downs, Stephen M
2011-12-01
To evaluate the effect our developmental-behavioral pediatrics (DBP) curricular model had on residents' comfort with handling mental health issues. From August 2007 to January 2010, residents participating in the Indiana University DBP rotation completed a self-assessment questionnaire at baseline and at rotation end. Residents rated their comfort with the identification, treatment, and counseling of mental health problems using a 5-point scale. Ninety-four residents completed both self-assessments. At baseline, categorical pediatric residents possessed higher comfort levels toward identification (mean 2.8 vs. 2.3 for non-categorical pediatrics residents, p<0.05), treatment (2.6 vs. 2.2, p<0.05) and counseling of mental health issues (2.7 vs. 2.1, p<0.005). Residents who were parents were also more comfortable. At rotation end, all residents showed significant improvements in self-rated comfort (4.0 vs. 2.6 for identification, p≤0.05; 4.0 vs. 2.4 for treatment, p≤0.05; and 4.0 vs. 2.4 for counseling, p≤0.05). This remained true regardless of being a categorical pediatric resident, a parent, or primary care-oriented. Our curricular model promotes residents' comfort with handling common mental health issues in practice. Increasing residents' comfort may influence the frequency of active discussion of mental health issues during well-child visits and lead to earlier diagnosis and needed treatment. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Visualizing common operating picture of critical infrastructure
NASA Astrophysics Data System (ADS)
Rummukainen, Lauri; Oksama, Lauri; Timonen, Jussi; Vankka, Jouko
2014-05-01
This paper presents a solution for visualizing the common operating picture (COP) of the critical infrastructure (CI). The purpose is to improve the situational awareness (SA) of the strategic-level actor and the source system operator in order to support decision making. The information is obtained through the Situational Awareness of Critical Infrastructure and Networks (SACIN) framework. The system consists of an agent-based solution for gathering, storing, and analyzing the information, and a user interface (UI) is presented in this paper. The UI consists of multiple views visualizing information from the CI in different ways. Different CI actors are categorized in 11 separate sectors, and events are used to present meaningful incidents. Past and current states, together with geographical distribution and logical dependencies, are presented to the user. The current states are visualized as segmented circles to represent event categories. Geographical distribution of assets is displayed with a well-known map tool. Logical dependencies are presented in a simple directed graph, and users also have a timeline to review past events. The objective of the UI is to provide an easily understandable overview of the CI status. Therefore, testing methods, such as a walkthrough, an informal walkthrough, and the Situation Awareness Global Assessment Technique (SAGAT), were used in the evaluation of the UI. Results showed that users were able to obtain an understanding of the current state of CI, and the usability of the UI was rated as good. In particular, the designated display for the CI overview and the timeline were found to be efficient.
Comparison between visual field defect in pigmentary glaucoma and primary open-angle glaucoma.
Nilforushan, Naveed; Yadgari, Maryam; Jazayeri, Anisalsadat
2016-10-01
To compare visual field defect patterns between pigmentary glaucoma and primary open-angle glaucoma. Retrospective, comparative study. Patients with a diagnosis of primary open-angle glaucoma (POAG) and pigmentary glaucoma (PG) in mild to moderate stages were enrolled in this study. Each of the 52 point locations in the total and pattern deviation plots (excluding 2 points adjacent to the blind spot) of the 24-2 Humphrey visual field, as well as six predetermined sectors, was compared using SPSS software version 20. Comparisons between the 2 groups were performed with the Student t test for continuous variables and the Chi-square test for categorical variables. Thirty-eight eyes of 24 patients with a mean age of 66.26 ± 11 years (range 48-81 years) in the POAG group and 36 eyes of 22 patients with a mean age of 50.52 ± 11 years (range 36-69 years) in the PG group were studied (P = 0.00 for the age difference between groups). More deviation was detected in points 1, 3, 4, and 32 in total deviation (P = 0.03, P = 0.015, P = 0.018, P = 0.023) and in points 3, 4, and 32 in pattern deviation (P = 0.015, P = 0.049, P = 0.030) in the POAG group, which are the temporal parts of the field. It seems that the temporal area of the visual field in primary open-angle glaucoma is more susceptible to damage in comparison with pigmentary glaucoma.
Strasser, T; Peters, T; Jagle, H; Zrenner, E; Wilke, R
2010-01-01
Electrophysiology of vision - especially the electroretinogram (ERG) - is used as a non-invasive way for functional testing of the visual system. The ERG is a combined electrical response generated by neural and non-neuronal cells in the retina in response to light stimulation. This response can be recorded and used for diagnosis of numerous disorders. For both clinical practice and clinical trials it is important to process those signals in an accurate and fast way and to provide the results as structured, consistent reports. Therefore, we developed a freely available and open-source framework in Java (http://www.eye.uni-tuebingen.de/project/idsI4sigproc). The framework is focused on an easy integration with existing applications. By leveraging well-established software patterns like pipes-and-filters and fluent interfaces, as well as by designing the application programming interfaces (API) as an integrated domain specific language (DSL), the overall framework provides a smooth learning curve. Additionally, it already contains several processing methods and visualization features and can be extended easily by implementing the provided interfaces. In this way, not only can new processing methods be added but the framework can also be adapted to other areas of signal processing. This article describes in detail the structure and implementation of the framework and demonstrates its application through the software package used in clinical practice and clinical trials at the University Eye Hospital Tuebingen, one of the largest departments in the field of visual electrophysiology in Europe.
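The framework itself is written in Java and its API is not reproduced here; the sketch below only illustrates, in Python, the pipes-and-filters and fluent-interface style the abstract describes, with illustrative filter names and a synthetic signal.

```python
# Sketch of a fluent, pipes-and-filters style signal pipeline (not the framework's API).
import numpy as np

class SignalPipeline:
    def __init__(self, signal):
        self.signal = np.asarray(signal, dtype=float)

    def detrend(self):
        self.signal = self.signal - self.signal.mean()
        return self  # returning self enables method chaining

    def moving_average(self, window=5):
        kernel = np.ones(window) / window
        self.signal = np.convolve(self.signal, kernel, mode="same")
        return self

    def amplitude(self):
        return self.signal.max() - self.signal.min()

# Fluent, chained use, loosely mirroring a pipes-and-filters DSL:
erg = np.sin(np.linspace(0, 6.28, 200)) + 0.1 * np.random.randn(200)  # synthetic trace
amplitude = SignalPipeline(erg).detrend().moving_average(window=7).amplitude()
print(round(amplitude, 3))
```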
Categorical data processing for real estate objects valuation using statistical analysis
NASA Astrophysics Data System (ADS)
Parygin, D. S.; Malikov, V. P.; Golubev, A. V.; Sadovnikova, N. P.; Petrova, T. M.; Finogeev, A. G.
2018-05-01
Theoretical and practical approaches to the use of statistical methods for studying various properties of infrastructure objects are analyzed in the paper. Methods of forecasting the value of objects are considered. A method for coding categorical variables describing properties of real estate objects is proposed. The analysis of the results of modeling the price of real estate objects using regression analysis and an algorithm based on a comparative approach is carried out.
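A minimal sketch of the general workflow the abstract describes, one-hot coding of categorical property attributes followed by a regression on price, is given below. The column names, values, and the plain linear model are assumptions for illustration, not the authors' coding method or dataset.

```python
# Sketch: dummy-coding categorical property attributes before a price regression.
import pandas as pd
from sklearn.linear_model import LinearRegression

data = pd.DataFrame({
    "area_m2":  [45, 60, 38, 75, 52],
    "district": ["center", "north", "center", "south", "north"],
    "material": ["brick", "panel", "brick", "brick", "panel"],
    "price_k":  [3200, 3500, 2900, 4100, 3100],
})

# One-hot (dummy) coding turns each categorical attribute into 0/1 indicator columns.
X = pd.get_dummies(data[["area_m2", "district", "material"]], drop_first=True)
y = data["price_k"]

model = LinearRegression().fit(X, y)
print(dict(zip(X.columns, model.coef_.round(1))))
```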
The practice of optometry: National Board of Examiners in Optometry survey of optometric patients.
Soroka, Mort; Krumholz, David; Bennett, Amy
2006-09-01
A study commissioned by the National Board of Examiners in Optometry was designed to obtain information about patients seen in general practice. Providers completed an encounter form for patients seen during a 2-day sample. Data were obtained from 11,012 patients in rural, urban, and suburban environments from a diverse population of 480 optometrists representative of profession-wide practitioners in terms of geographic distribution and practice settings. Although practitioners were selected randomly, the response rate among those who were invited to participate was only 17.7%. Optometrists who specialized and did not classify themselves as general practitioners were excluded from the study. The study provides insights into the most common diagnostic and therapeutic procedures performed, medications prescribed, and referrals made in general practices. Seventy-one percent of all examinations were categorized as comprehensive, approximately 13% were because of disease, and 11% were for contact lens care. Almost 17% of all patients received a formal visual field test (Goldmann or automated). Refractive error was the most prevalent diagnosis, reflective of the ocular problems found in the general population, and systemic conditions were the second largest category. Although 12% of all patients were referred to an ophthalmologist for further care, other types of referrals were infrequent. Referrals to a primary care physician, laboratory, and imaging or for refractive surgery accounted for only 8% of all referrals. Ocular disease treatment was found to be an integral part of the optometrist's practice. Prescribing topical medications, both legend and "over-the-counter," was a primary treatment option. The most common medications prescribed were for glaucoma, with antibiotics, anti-inflammatory and anti-allergy drops making up the remainder, in descending order.
McGurk illusion recalibrates subsequent auditory perception
Lüttke, Claudia S.; Ekman, Matthias; van Gerven, Marcel A. J.; de Lange, Floris P.
2016-01-01
Visual information can alter auditory perception. This is clearly illustrated by the well-known McGurk illusion, where an auditory /aba/ and a visual /aga/ are merged to the percept of ‘ada’. It is less clear, however, whether such a change in perception may recalibrate subsequent perception. Here we asked whether the altered auditory perception due to the McGurk illusion affects subsequent auditory perception, i.e. whether this process of fusion may cause a recalibration of the auditory boundaries between phonemes. Participants categorized auditory and audiovisual speech stimuli as /aba/, /ada/ or /aga/ while activity patterns in their auditory cortices were recorded using fMRI. Interestingly, following a McGurk illusion, an auditory /aba/ was more often misperceived as ‘ada’. Furthermore, we observed a neural counterpart of this recalibration in the early auditory cortex. When the auditory input /aba/ was perceived as ‘ada’, activity patterns bore stronger resemblance to activity patterns elicited by /ada/ sounds than when they were correctly perceived as /aba/. Our results suggest that upon experiencing the McGurk illusion, the brain shifts the neural representation of an /aba/ sound towards /ada/, culminating in a recalibration in perception of subsequent auditory input. PMID:27611960
Classification and simulation of stereoscopic artifacts in mobile 3DTV content
NASA Astrophysics Data System (ADS)
Boev, Atanas; Hollosi, Danilo; Gotchev, Atanas; Egiazarian, Karen
2009-02-01
We identify, categorize and simulate artifacts which might occur during delivery of stereoscopic video to mobile devices. We consider the stages of 3D video delivery dataflow: content creation, conversion to the desired format (multiview or source-plus-depth), coding/decoding, transmission, and visualization on a 3D display. Human 3D vision works by assessing various depth cues - accommodation, binocular depth cues, pictorial cues and motion parallax. As a consequence, any artifact which modifies these cues impairs the quality of a 3D scene. The perceptibility of each artifact can be estimated through subjective tests. The material for such tests needs to contain various artifacts with different amounts of impairment. We present a system for simulation of these artifacts. The artifacts are organized in groups with similar origins, and each group is simulated by a block in a simulation channel. The channel introduces the following groups of artifacts: sensor limitations, geometric distortions caused by camera optics, spatial and temporal misalignments between video channels, spatial and temporal artifacts caused by coding, transmission losses, and visualization artifacts. For the case of source-plus-depth representation, artifacts caused by format conversion are added as well.
Fandom Biases Retrospective Judgments Not Perception.
Huff, Markus; Papenmeier, Frank; Maurer, Annika E; Meitz, Tino G K; Garsoffky, Bärbel; Schwan, Stephan
2017-02-24
Attitudes and motivations have been shown to affect the processing of visual input, indicating that observers may see a given situation each literally in a different way. Yet, in real-life, processing information in an unbiased manner is considered to be of high adaptive value. Attitudinal and motivational effects were found for attention, characterization, categorization, and memory. On the other hand, for dynamic real-life events, visual processing has been found to be highly synchronous among viewers. Thus, while in a seminal study fandom as a particularly strong case of attitudes did bias judgments of a sports event, it left the question open whether attitudes do bias prior processing stages. Here, we investigated influences of fandom during the live TV broadcasting of the 2013 UEFA-Champions-League Final regarding attention, event segmentation, immediate and delayed cued recall, as well as affect, memory confidence, and retrospective judgments. Even though we replicated biased retrospective judgments, we found that eye-movements, event segmentation, and cued recall were largely similar across both groups of fans. Our findings demonstrate that, while highly involving sports events are interpreted in a fan dependent way, at initial stages they are processed in an unbiased manner.
Fandom Biases Retrospective Judgments Not Perception
Huff, Markus; Papenmeier, Frank; Maurer, Annika E.; Meitz, Tino G. K.; Garsoffky, Bärbel; Schwan, Stephan
2017-01-01
Attitudes and motivations have been shown to affect the processing of visual input, indicating that observers may see a given situation each literally in a different way. Yet, in real-life, processing information in an unbiased manner is considered to be of high adaptive value. Attitudinal and motivational effects were found for attention, characterization, categorization, and memory. On the other hand, for dynamic real-life events, visual processing has been found to be highly synchronous among viewers. Thus, while in a seminal study fandom as a particularly strong case of attitudes did bias judgments of a sports event, it left the question open whether attitudes do bias prior processing stages. Here, we investigated influences of fandom during the live TV broadcasting of the 2013 UEFA-Champions-League Final regarding attention, event segmentation, immediate and delayed cued recall, as well as affect, memory confidence, and retrospective judgments. Even though we replicated biased retrospective judgments, we found that eye-movements, event segmentation, and cued recall were largely similar across both groups of fans. Our findings demonstrate that, while highly involving sports events are interpreted in a fan dependent way, at initial stages they are processed in an unbiased manner. PMID:28233877
Deep hierarchies in the primate visual cortex: what can we learn for computer vision?
Krüger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodríguez-Sánchez, Antonio J; Wiskott, Laurenz
2013-08-01
Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.
Teachers' in-flight thinking in inclusive classrooms.
Paterson, David
2007-01-01
This article explores the thinking of five junior high school teachers as they teach students with learning difficulties in inclusive classrooms. Insights into the ways these teachers think about students in these inclusive secondary school contexts were obtained through triangulating data from semistructured interviews, stimulated recall of in-flight thinking, and researcher field notes. Exploration of teachers' in-flight thinking (i.e., the thinking of teachers as they engaged in classroom teaching) revealed a knowledge of individual students that was not related to categorical notions of learning difficulties. This research has implications for the practice of teaching in inclusive settings as well as for teacher preparation. Specifically, it suggests that attention to student differences should be replaced by the development of teachers' knowledge about individual students as a rich source of practical knowledge and the basis for developing effective instructional techniques.
The role of visualization in learning from computer-based images
NASA Astrophysics Data System (ADS)
Piburn, Michael D.; Reynolds, Stephen J.; McAuliffe, Carla; Leedy, Debra E.; Birk, James P.; Johnson, Julia K.
2005-05-01
Among the sciences, the practice of geology is especially visual. To assess the role of spatial ability in learning geology, we designed an experiment using: (1) web-based versions of spatial visualization tests, (2) a geospatial test, and (3) multimedia instructional modules built around QuickTime Virtual Reality movies. Students in control and experimental sections were administered measures of spatial orientation and visualization, as well as a content-based geospatial examination. All subjects improved significantly in their scores on spatial visualization and the geospatial examination. There was no change in their scores on spatial orientation. A three-way analysis of variance, with the geospatial examination as the dependent variable, revealed significant main effects favoring the experimental group and a significant interaction between treatment and gender. These results demonstrate that spatial ability can be improved through instruction, that learning of geological content will improve as a result, and that differences in performance between the genders can be eliminated.
ERIC Educational Resources Information Center
Buldu, Mehmet; Shaban, Mohamed S.
2010-01-01
This study portrayed a picture of kindergarten through 3rd-grade teachers who teach visual arts, their perceptions of the value of visual arts, their visual arts teaching practices, visual arts experiences provided to young learners in school, and major factors and/or influences that affect their teaching of visual arts. The sample for this study…
Learning Opportunities in Rheumatology Practice: A Qualitative Study
ERIC Educational Resources Information Center
Neher, Margit Saskia; Ståhl, Christian; Nilsen, Per
2015-01-01
Purpose: This paper aims to explore what opportunities for learning practitioners in rheumatology perceive of in their daily practice, using a typology of workplace learning to categorize these opportunities. Design/methodology/approach: Thirty-six practitioners from different professions in rheumatology were interviewed. Data were analyzed using…
Map visualization of groundwater withdrawals at the sub-basin scale
Goode, Daniel J.
2016-01-01
A simple method is proposed to visualize the magnitude of groundwater withdrawals from wells relative to user-defined water-resource metrics. The map is solely an illustration of the withdrawal magnitudes, spatially centered on wells; it does not depict capture zones or source areas contributing recharge to wells. Common practice is to scale the size (area) of withdrawal well symbols proportional to pumping rate. Symbols are drawn large enough to be visible, but not so large that they overlap excessively. In contrast to such graphics-based symbol sizes, the proposed method uses a depth-rate index (length per time) to visualize the well withdrawal rates by volumetrically consistent areas, called “footprints”. The area of each individual well’s footprint is the withdrawal rate divided by the depth-rate index. For example, the groundwater recharge rate could be used as a depth-rate index to show how large withdrawals are relative to that recharge. To account for the interference of nearby wells, composite footprints are computed by iterative nearest-neighbor distribution of excess withdrawals on a computational and display grid having uniform square cells. The map shows circular footprints at individual isolated wells and merged footprint areas where wells’ individual footprints overlap. Examples are presented for depth-rate indexes corresponding to recharge, to spatially variable stream baseflow (normalized by basin area), and to the average rate of water-table decline (scaled by specific yield). These depth-rate indexes are water-resource metrics, and the footprints visualize the magnitude of withdrawals relative to these metrics.
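The footprint arithmetic described above is simple enough to show directly. The sketch below computes footprint areas and equivalent radii for isolated wells from a depth-rate index; the withdrawal rates and recharge value are hypothetical, and the iterative nearest-neighbor redistribution used for overlapping footprints is omitted.

```python
# Sketch: footprint area = withdrawal rate / depth-rate index (illustrative values).
import math

recharge_m_per_yr = 0.25  # depth-rate index: recharge, m/yr (assumed)

wells = {                 # withdrawal rates, m^3/yr (hypothetical)
    "well_A": 150_000.0,
    "well_B": 40_000.0,
}

for name, q in wells.items():
    area_m2 = q / recharge_m_per_yr          # footprint area (m^3/yr over m/yr gives m^2)
    radius_m = math.sqrt(area_m2 / math.pi)  # radius of an isolated circular footprint
    print(f"{name}: area = {area_m2:,.0f} m^2, radius = {radius_m:,.0f} m")
```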
Differences in children and adolescents' ability of reporting two CVS-related visual problems.
Hu, Liang; Yan, Zheng; Ye, Tiantian; Lu, Fan; Xu, Peng; Chen, Hao
2013-01-01
The present study examined whether children and adolescents can correctly report dry eyes and blurred distance vision, two visual problems associated with computer vision syndrome. Participants are 913 children and adolescents aged 6-17. They were asked to report their visual problems, including dry eyes and blurred distance vision, and received an eye examination, including tear film break-up time (TFBUT) and visual acuity (VA). Inconsistency was found between participants' reports of dry eyes and TFBUT results among all 913 participants as well as for all of four subgroups. In contrast, consistency was found between participants' reports of blurred distance vision and VA results among 873 participants who had never worn glasses as well as for the four subgroups. It was concluded that children and adolescents are unable to report dry eyes correctly; however, they are able to report blurred distance vision correctly. Three practical implications of the findings were discussed. Little is known about children's ability to report their visual problems, an issue critical to diagnosis and treatment of children's computer vision syndrome. This study compared children's self-reports and clinic examination results and found children can correctly report blurred distance vision but not dry eyes.
Light and the laboratory mouse.
Peirson, Stuart N; Brown, Laurence A; Pothecary, Carina A; Benson, Lindsay A; Fisk, Angus S
2018-04-15
Light exerts widespread effects on physiology and behaviour. As well as the widely-appreciated role of light in vision, light also plays a critical role in many non-visual responses, including regulating circadian rhythms, sleep, pupil constriction, heart rate, hormone release and learning and memory. In mammals, responses to light are all mediated via retinal photoreceptors, including the classical rods and cones involved in vision as well as the recently identified melanopsin-expressing photoreceptive retinal ganglion cells (pRGCs). Understanding the effects of light on the laboratory mouse therefore depends upon an appreciation of the physiology of these retinal photoreceptors, including their differing sensitivities to absolute light levels and wavelengths. The signals from these photoreceptors are often integrated, with different responses involving distinct retinal projections, making generalisations challenging. Furthermore, many commonly used laboratory mouse strains carry mutations that affect visual or non-visual physiology, ranging from inherited retinal degeneration to genetic differences in sleep and circadian rhythms. Here we provide an overview of the visual and non-visual systems before discussing practical considerations for the use of light for researchers and animal facility staff working with laboratory mice. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Feiker Hollenbeck, Amy R.
2013-01-01
This investigation extends the study of the reading comprehension practices used with students with learning disabilities (LD) via a case study, exploring the beliefs and practices in reading comprehension of "Wendy," a cross-categorical special educator nominated as effective in her work with sixth-grade students. Wendy's practices serve as a…
ERIC Educational Resources Information Center
McMullen, Mary Benson; Elicker, James; Goetze, Giselle; Huang, Hsin-Hui; Lee, Sun-Mi; Mathers, Carrie; Wen, Xiaoli; Yang, Heayoung
2006-01-01
A team of researchers used a collaborative assessment protocol to compare the self-reported teaching beliefs of a convenience sample of preschool teachers (N = 57) to their documentable practices (i.e., practices that could be observed, recorded, and categorized using a deductive strategy). Data were examined from survey instruments, detailed…
NASA Astrophysics Data System (ADS)
Chee, T.; Nguyen, L.; Smith, W. L., Jr.; Spangenberg, D.; Palikonda, R.; Bedka, K. M.; Minnis, P.; Thieman, M. M.; Nordeen, M.
2017-12-01
Providing public access to research products, including cloud macro- and microphysical properties and satellite imagery, is a key concern for the NASA Langley Research Center Cloud and Radiation Group. This work describes a web-based visualization tool and API that allows end users to easily create customized cloud product and satellite imagery, ground site data and satellite ground track information that is generated dynamically. The tool has two uses, one to visualize the dynamically created imagery and the other to provide access to the dynamically generated imagery directly at a later time. Internally, we leverage our practical experience with large, scalable applications to develop a system that has the largest potential for scalability as well as the ability to be deployed on the cloud to accommodate scalability issues. We build upon the NASA Langley Cloud and Radiation Group's experience with making real-time and historical satellite cloud product information, satellite imagery, ground site data and satellite track information accessible and easily searchable. This tool is the culmination of our prior experience with dynamic imagery generation and provides a way to build a "mash-up" of dynamically generated imagery and related kinds of information that are visualized together to add value to disparate but related information. In support of NASA strategic goals, our group aims to make as much scientific knowledge, observations and products as possible available to the citizen science, research and interested communities, as well as for automated systems to acquire the same information for data mining or other analytic purposes. This tool and the underlying APIs provide a valuable research tool to a wide audience, both as a standalone research tool and as an easily accessed data source that can be mined or used with existing tools.
Park, George D; Reed, Catherine L
2015-10-01
Despite attentional prioritization for grasping space near the hands, tool-use appears to transfer attentional bias to the tool's end/functional part. The contributions of haptic and visual inputs to attentional distribution along a tool were investigated as a function of tool-use in near (Experiment 1) and far (Experiment 2) space. Visual attention was assessed with a 50/50, go/no-go, target discrimination task, while a tool was held next to targets appearing near the tool-occupied hand or tool-end. Target response times (RTs) and sensitivity (d-prime) were measured at target locations, before and after functional tool practice for three conditions: (1) open-tool: tool-end visible (visual + haptic inputs), (2) hidden-tool: tool-end visually obscured (haptic input only), and (3) short-tool: stick missing tool's length/end (control condition: hand occupied but no visual/haptic input). In near space, both open- and hidden-tool groups showed a tool-end attentional bias (faster RTs toward tool-end) before practice; after practice, RTs near the hand improved. In far space, the open-tool group showed no bias before practice; after practice, target RTs near the tool-end improved. However, the hidden-tool group showed a consistent tool-end bias despite practice. The absence of effects in the short-tool group suggested that the hidden-tool group's results were specific to haptic inputs. In conclusion, (1) allocation of visual attention along a tool due to tool practice differs in near and far space, and (2) visual attention is drawn toward the tool's end even when visually obscured, suggesting haptic input provides sufficient information for directing attention along the tool.
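Since the abstract reports sensitivity as d-prime, the sketch below shows the standard signal-detection computation, z(hit rate) minus z(false-alarm rate), with a loglinear correction; the trial counts are hypothetical and this is not the authors' analysis code.

```python
# Sketch: d-prime from go/no-go trial counts (illustrative numbers).
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Loglinear correction avoids infinite z-scores when rates hit 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

print(round(d_prime(hits=45, misses=5, false_alarms=8, correct_rejections=42), 2))
```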
Categorical encoding of color in the brain
Bird, Chris M.; Berens, Samuel C.; Horner, Aidan J.; Franklin, Anna
2014-01-01
The areas of the brain that encode color categorically have not yet been reliably identified. Here, we used functional MRI adaptation to identify neuronal populations that represent color categories irrespective of metric differences in color. Two colors were successively presented within a block of trials. The two colors were either from the same or different categories (e.g., “blue 1 and blue 2” or “blue 1 and green 1”), and the size of the hue difference was varied. Participants performed a target detection task unrelated to the difference in color. In the middle frontal gyrus of both hemispheres and to a lesser extent, the cerebellum, blood-oxygen level-dependent response was greater for colors from different categories relative to colors from the same category. Importantly, activation in these regions was not modulated by the size of the hue difference, suggesting that neurons in these regions represent color categorically, regardless of metric color difference. Representational similarity analyses, which investigated the similarity of the pattern of activity across local groups of voxels, identified other regions of the brain (including the visual cortex), which responded to metric but not categorical color differences. Therefore, categorical and metric hue differences appear to be coded in qualitatively different ways and in different brain regions. These findings have implications for the long-standing debate on the origin and nature of color categories, and also further our understanding of how color is processed by the brain. PMID:24591602
Multi-sensory landscape assessment: the contribution of acoustic perception to landscape evaluation.
Gan, Yonghong; Luo, Tao; Breitung, Werner; Kang, Jian; Zhang, Tianhai
2014-12-01
In this paper, the contribution of visual and acoustic preference to multi-sensory landscape evaluation was quantitatively compared. The real landscapes were treated as dual-sensory ambiance and separated into visual landscape and soundscape. Both were evaluated by 63 respondents in laboratory conditions. The analysis of the relationship between respondents' visual and acoustic preference as well as their respective contribution to landscape preference showed that (1) some common attributes are universally identified in assessing visual, aural and audio-visual preference, such as naturalness or degree of human disturbance; (2) with acoustic and visual preferences as variables, a multi-variate linear regression model can satisfactorily predict landscape preference (R² = 0.740), while the coefficients of determination for a unitary linear regression model were 0.345 and 0.720 for visual and acoustic preference as predicting factors, respectively; (3) acoustic preference played a much more important role in landscape evaluation than visual preference in this study (the former is about 4.5 times the latter), which strongly suggests a rethinking of the role of soundscape in environment perception research and landscape planning practice.
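To make the reported model comparison concrete, the sketch below fits single-predictor and two-predictor linear regressions and compares their coefficients of determination, mirroring the structure (though not the data) of the analysis described; the simulated preference scores are assumptions.

```python
# Sketch: R^2 of unitary vs. multivariate preference models on simulated scores.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 63
visual = rng.normal(size=n)
acoustic = rng.normal(size=n)
landscape = 0.3 * visual + 1.2 * acoustic + rng.normal(scale=0.5, size=n)

def r2(X, y):
    return LinearRegression().fit(X, y).score(X, y)

print("visual only:  ", round(r2(visual.reshape(-1, 1), landscape), 3))
print("acoustic only:", round(r2(acoustic.reshape(-1, 1), landscape), 3))
print("both:         ", round(r2(np.column_stack([visual, acoustic]), landscape), 3))
```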
NASA Astrophysics Data System (ADS)
Mirel, Barbara; Kumar, Anuj; Nong, Paige; Su, Gang; Meng, Fan
2016-02-01
Life scientists increasingly use visual analytics to explore large data sets and generate hypotheses. Undergraduate biology majors should be learning these same methods. Yet visual analytics is one of the most underdeveloped areas of undergraduate biology education. This study sought to determine the feasibility of undergraduate biology majors conducting exploratory analysis using the same interactive data visualizations as practicing scientists. We examined 22 upper level undergraduates in a genomics course as they engaged in a case-based inquiry with an interactive heat map. We qualitatively and quantitatively analyzed students' visual analytic behaviors, reasoning and outcomes to identify student performance patterns, commonly shared efficiencies and task completion. We analyzed students' successes and difficulties in applying knowledge and skills relevant to the visual analytics case and related gaps in knowledge and skill to associated tool designs. Findings show that undergraduate engagement in visual analytics is feasible and could be further strengthened through tool usability improvements. We identify these improvements. We speculate, as well, on instructional considerations that our findings suggested may also enhance visual analytics in case-based modules.
Mirel, Barbara; Kumar, Anuj; Nong, Paige; Su, Gang; Meng, Fan
2016-01-01
Life scientists increasingly use visual analytics to explore large data sets and generate hypotheses. Undergraduate biology majors should be learning these same methods. Yet visual analytics is one of the most underdeveloped areas of undergraduate biology education. This study sought to determine the feasibility of undergraduate biology majors conducting exploratory analysis using the same interactive data visualizations as practicing scientists. We examined 22 upper level undergraduates in a genomics course as they engaged in a case-based inquiry with an interactive heat map. We qualitatively and quantitatively analyzed students’ visual analytic behaviors, reasoning and outcomes to identify student performance patterns, commonly shared efficiencies and task completion. We analyzed students’ successes and difficulties in applying knowledge and skills relevant to the visual analytics case and related gaps in knowledge and skill to associated tool designs. Findings show that undergraduate engagement in visual analytics is feasible and could be further strengthened through tool usability improvements. We identify these improvements. We speculate, as well, on instructional considerations that our findings suggested may also enhance visual analytics in case-based modules. PMID:26877625
Social Categorization on Perception Bias in the Practice of Microteaching
NASA Astrophysics Data System (ADS)
Hong, Jon-Chao; Hwang, Ming-Yueh; Lu, Chow-Chin; Tsai, Chi-Ruei
2017-02-01
Microteaching has gained considerable attention for its effectiveness in rapid and contextual training in professional development programs. However, the interpretive quality of the teaching demonstration and peer feedback may influence individuals' attribution and self-correction, leading to ineffective learning. In this study, a microteaching workshop in a professional development program for 78 elementary school science teachers was investigated. The results showed that the effectiveness of microteaching was negatively affected by participants' perception bias due to social categorization. Moreover, it was indicated that the participants' perception of the in-group and out-group, classified by the degree of the individuals' science knowledge, fostered social categorization. Participants tended to experience perception conflicts caused by their inability to see personal faults, and a typical perception bias of "seeing one's own strengths and seeing others' shortcomings" was more frequently recognized in the out-group. These results converge to highlight the importance of social categorization in perception bias relevant to microteaching.
Color categories are not universal: new evidence from traditional and western cultures
NASA Astrophysics Data System (ADS)
Roberson, Debi D.; Davidoff, Jules; Davies, Ian R. L.
2002-06-01
Evidence presented supports the linguistic relativity of color categories in three different paradigms. Firstly, a series of cross-cultural investigations, which had set out to replicate the seminal work of Rosch Heider with the Dani of New Guinea, failed to find evidence of a set of universal color categories. Instead, we found evidence of linguistic relativity in both populations tested. Neither participants from a Melanesian hunter-gatherer culture, nor those from an African pastoral tribe, whose languages both contain five color terms, showed a cognitive organization of color resembling that of English speakers. Further, Melanesian participants showed evidence of Categorical Perception, but only at their linguistic category boundaries. Secondly, in native English speakers verbal interference was found to selectively remove the defining features of Categorical Perception. Under verbal interference, the greater accuracy normally observed for cross-category judgements compared to within-category judgements disappeared. While both visual and verbal codes may be employed in the recognition memory of colors, participants only make use of verbal coding when demonstrating Categorical Perception. Thirdly, in a brain-damaged patient suffering from a naming disorder, the loss of labels radically impaired his ability to categorize colors. We conclude that language affects both the perception of and memory for colors.
The use of higher-order statistics in rapid object categorization in natural scenes.
Banno, Hayaki; Saiki, Jun
2015-02-04
We can rapidly and efficiently recognize many types of objects embedded in complex scenes. What information supports this object recognition is a fundamental question for understanding our visual processing. We investigated the eccentricity-dependent role of shape and statistical information for ultrarapid object categorization, using the higher-order statistics proposed by Portilla and Simoncelli (2000). Synthesized textures computed by their algorithms have the same higher-order statistics as the originals, while the global shapes were destroyed. We used the synthesized textures to manipulate the availability of shape information separately from the statistics. We hypothesized that shape makes a greater contribution to central vision than to peripheral vision and that statistics show the opposite pattern. Results did not show contributions clearly biased by eccentricity. Statistical information demonstrated a robust contribution not only in peripheral but also in central vision. For shape, the results supported the contribution in both central and peripheral vision. Further experiments revealed some interesting properties of the statistics. They are available for a limited time, attributable to the presence or absence of animals without shape, and predict how easily humans detect animals in original images. Our data suggest that when facing the time constraint of categorical processing, higher-order statistics underlie our significant performance for rapid categorization, irrespective of eccentricity. © 2015 ARVO.
ERIC Educational Resources Information Center
Bouaffre, Sarah; Faita-Ainseba, Frederique
2007-01-01
To investigate hemispheric differences in the timing of word priming, the modulation of event-related potentials by semantic word relationships was examined in each cerebral hemisphere. Primes and targets, either categorically (silk-wool) or associatively (needle-sewing) related, were presented to the left or right visual field in a go/no-go…
ERIC Educational Resources Information Center
Langguth, Berthold; Juttner, Martin; Landis, Theodor; Regard, Marianne; Rentschler, Ingo
2009-01-01
Hemispheric differences in the learning and generalization of pattern categories were explored in two experiments involving sixteen patients with unilateral posterior cerebral lesions in the left (LH) or right (RH) hemisphere. In each experiment participants were first trained to criterion in a supervised learning paradigm to categorize a set of…
The Cost of Switching Language in a Semantic Categorization Task.
ERIC Educational Resources Information Center
von Studnitz, Roswitha E.; Green, David W.
2002-01-01
Presents a study in which German-English bilinguals decided whether a visually presented word, either German or English, referred to an animate or to an inanimate entity. Bilinguals were slower to respond on a language switch trial than on language non-switch trials but only if they had to make the same response as on the prior trial. (Author/VWL)
An Examination of the Domain of Multivariable Functions Using the Pirie-Kieren Model
ERIC Educational Resources Information Center
Sengul, Sare; Yildiz, Sevda Goktepe
2016-01-01
The aim of this study is to employ the Pirie-Kieren model so as to examine the understandings relating to the domain of multivariable functions held by primary school mathematics preservice teachers. The data obtained was categorized according to Pirie-Kieren model and demonstrated visually in tables and bar charts. The study group consisted of…