Sample records for audiovisual semantic congruency

  1. Semantic congruency but not temporal synchrony enhances long-term memory performance for audio-visual scenes.

    PubMed

    Meyerhoff, Hauke S; Huff, Markus

    2016-04-01

    Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In a total of six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as semantic congruency and temporal synchrony between auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage increases further with congruent visual and auditory information. However, violations of audio-visual synchrony hardly have any influence on memory performance. Memory performance remained intact even with a sequential presentation of auditory and visual information, but finally declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against asynchronies between the two modalities.

  2. The effects of semantic congruency: a research of audiovisual P300-speller.

    PubMed

    Cao, Yong; An, Xingwei; Ke, Yufeng; Jiang, Jin; Yang, Hanjun; Chen, Yuqian; Jiao, Xuejun; Qi, Hongzhi; Ming, Dong

    2017-07-25

    Over the past few decades, there have been many studies of brain-computer interfaces (BCIs). Of particular interest are event-related potential (ERP)-based BCI spellers that support mental typewriting. Audiovisual BCI systems have recently attracted much attention from researchers, and most existing studies of audiovisual BCIs have used semantically incongruent stimulus paradigms. However, no study had reported whether system performance or participant comfort differs between a BCI based on a semantically congruent paradigm and one based on a semantically incongruent paradigm. The goal of this study was to investigate the effects of semantic congruency on system performance and participant comfort in an audiovisual BCI. Two audiovisual paradigms (semantically congruent and incongruent) were adopted, and 11 healthy subjects participated in the experiment. High-density electrical mapping of ERPs and behavioral data were obtained for the two stimulus paradigms. The behavioral data indicated no significant difference in offline classification accuracy between the congruent and incongruent paradigms. Nevertheless, eight of the 11 participants reported a preference for the semantically congruent experiment, two reported no difference between the two conditions, and only one preferred the semantically incongruent paradigm. In addition, higher ERP amplitudes were found in the incongruent paradigm. In summary, the semantically congruent paradigm offered better participant comfort while maintaining the same recognition rate as the incongruent paradigm. Furthermore, our study suggests that speller paradigm design must take both system performance and user experience into consideration rather than merely pursuing a larger ERP response.

  3. School-aged children can benefit from audiovisual semantic congruency during memory encoding.

    PubMed

    Heikkilä, Jenni; Tiippana, Kaisa

    2016-05-01

    Although we live in a multisensory world, children's memory has usually been studied one sensory modality at a time. In this study, we investigated how audiovisual encoding affects recognition memory. Children (n = 114) from three age groups (8, 10 and 12 years) memorized auditory or visual stimuli presented with a semantically congruent, incongruent or non-semantic stimulus in the other modality during encoding. Subsequent recognition memory performance was better for auditory or visual stimuli initially presented together with a semantically congruent stimulus in the other modality than for stimuli accompanied by a non-semantic stimulus in the other modality. This congruency effect was observed for pictures presented with sounds, for sounds presented with pictures, for spoken words presented with pictures and for written words presented with spoken words. The present results show that semantically congruent multisensory experiences during encoding can improve memory performance in school-aged children.

  4. Audiovisual semantic congruency during encoding enhances memory performance.

    PubMed

    Heikkilä, Jenni; Alho, Kimmo; Hyvönen, Heidi; Tiippana, Kaisa

    2015-01-01

    Studies of memory and learning have usually focused on a single sensory modality, although human perception is multisensory in nature. In the present study, we investigated the effects of audiovisual encoding on later unisensory recognition memory performance. Participants memorized auditory or visual stimuli (sounds, pictures, spoken words, or written words), each of which co-occurred with either a semantically congruent, a semantically incongruent, or a neutral (non-semantic noise) stimulus in the other modality during encoding. Subsequent memory performance was overall better when the stimulus to be memorized was initially accompanied by a semantically congruent stimulus in the other modality than when it was accompanied by a neutral stimulus. These results suggest that semantically congruent multisensory experiences enhance encoding of both nonverbal and verbal materials, resulting in an improvement in their later recognition memory.

  5. Age-Related Differences in Audiovisual Interactions of Semantically Different Stimuli

    ERIC Educational Resources Information Center

    Viggiano, Maria Pia; Giovannelli, Fabio; Giganti, Fiorenza; Rossi, Arianna; Metitieri, Tiziana; Rebai, Mohamed; Guerrini, Renzo; Cincotta, Massimo

    2017-01-01

    Converging results have shown that adults benefit from congruent multisensory stimulation in the identification of complex stimuli, whereas the developmental trajectory of the ability to integrate multisensory inputs in children is less well understood. In this study we explored the effects of audiovisual semantic congruency on identification of…

  6. Age-related differences in audiovisual interactions of semantically different stimuli.

    PubMed

    Viggiano, Maria Pia; Giovannelli, Fabio; Giganti, Fiorenza; Rossi, Arianna; Metitieri, Tiziana; Rebai, Mohamed; Guerrini, Renzo; Cincotta, Massimo

    2017-01-01

    Converging results have shown that adults benefit from congruent multisensory stimulation in the identification of complex stimuli, whereas the developmental trajectory of the ability to integrate multisensory inputs in children is less well understood. In this study we explored the effects of audiovisual semantic congruency on identification of visually presented stimuli belonging to different categories, using a cross-modal approach. Four groups of children ranging in age from 6 to 13 years and adults were administered an object identification task of visually presented pictures belonging to living and nonliving entities. Stimuli were presented in visual, congruent audiovisual, incongruent audiovisual, and noise conditions. Results showed that children under 12 years of age did not benefit from multisensory presentation in speeding up identification. In children, the incongruent audiovisual condition had an interfering effect, especially for the identification of living things. These data suggest that the facilitating effect of audiovisual interactions involving semantic factors undergoes developmental changes, and that the consolidation of adult-like processing of multisensory stimuli begins in late childhood.

  7. Semantic congruency and the (reversed) Colavita effect in children and adults.

    PubMed

    Wille, Claudia; Ebersbach, Mirjam

    2016-01-01

    When presented with auditory, visual, or bimodal audiovisual stimuli in a discrimination task, adults tend to ignore the auditory component in bimodal stimuli and respond to the visual component only (i.e., Colavita visual dominance effect). The same is true for older children, whereas young children are dominated by the auditory component of bimodal audiovisual stimuli. This suggests a change of sensory dominance during childhood. The aim of the current study was to investigate, in three experimental conditions, whether children and adults show sensory dominance when presented with complex semantic stimuli and whether this dominance can be modulated by stimulus characteristics such as semantic (in)congruency, frequency of bimodal trials, and color information. Semantic (in)congruency did not affect the magnitude of the auditory dominance effect in 6-year-olds or the visual dominance effect in adults, but it was a modulating factor of the visual dominance in 9-year-olds (Conditions 1 and 2). Furthermore, the absence of color information (Condition 3) did not affect auditory dominance in 6-year-olds and hardly affected visual dominance in adults, whereas the visual dominance in 9-year-olds disappeared. Our results suggest that (a) sensory dominance in children and adults is not restricted to simple lights and sounds, as used in previous research, but can be extended to semantically meaningful stimuli and that (b) sensory dominance is more robust in 6-year-olds and adults than in 9-year-olds, implying a transitional stage around this age.

  8. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention.

    PubMed

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei

    2016-01-13

    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distribution of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features.

  9. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention

    PubMed Central

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei

    2016-01-01

    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distribution of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features. PMID:26759193

  10. fMR-adaptation indicates selectivity to audiovisual content congruency in distributed clusters in human superior temporal cortex.

    PubMed

    van Atteveldt, Nienke M; Blau, Vera C; Blomert, Leo; Goebel, Rainer

    2010-02-02

    Efficient multisensory integration is of vital importance for adequate interaction with the environment. In addition to basic binding cues like temporal and spatial coherence, meaningful multisensory information is also bound together by content-based associations. Many functional Magnetic Resonance Imaging (fMRI) studies propose the (posterior) superior temporal cortex (STC) as the key structure for integrating meaningful multisensory information. However, a still unanswered question is how superior temporal cortex encodes content-based associations, especially in light of inconsistent results from studies comparing brain activation to semantically matching (congruent) versus nonmatching (incongruent) multisensory inputs. Here, we used fMR-adaptation (fMR-A) in order to circumvent potential problems with standard fMRI approaches, including spatial averaging and amplitude saturation confounds. We presented repetitions of audiovisual stimuli (letter-speech sound pairs) and manipulated the associative relation between the auditory and visual inputs (congruent/incongruent pairs). We predicted that if multisensory neuronal populations exist in STC and encode audiovisual content relatedness, adaptation should be affected by the manipulated audiovisual relation. The results revealed an occipital-temporal network that adapted independently of the audiovisual relation. Interestingly, several smaller clusters distributed over superior temporal cortex within that network, adapted stronger to congruent than to incongruent audiovisual repetitions, indicating sensitivity to content congruency. These results suggest that the revealed clusters contain multisensory neuronal populations that encode content relatedness by selectively responding to congruent audiovisual inputs, since unisensory neuronal populations are assumed to be insensitive to the audiovisual relation. These findings extend our previously revealed mechanism for the integration of letters and speech sounds and

  11. Perceptual congruency of audio-visual speech affects ventriloquism with bilateral visual stimuli.

    PubMed

    Kanaya, Shoko; Yokosawa, Kazuhiko

    2011-02-01

    Many studies on multisensory processes have focused on performance in simplified experimental situations, with a single stimulus in each sensory modality. However, these results cannot necessarily be applied to explain our perceptual behavior in natural scenes where various signals exist within one sensory modality. We investigated the role of audio-visual syllable congruency on participants' auditory localization bias or the ventriloquism effect using spoken utterances and two videos of a talking face. Salience of facial movements was also manipulated. Results indicated that more salient visual utterances attracted participants' auditory localization. Congruent pairing of audio-visual utterances elicited greater localization bias than incongruent pairing, while previous studies have reported little dependency on the reality of stimuli in ventriloquism. Moreover, audio-visual illusory congruency, owing to the McGurk effect, caused substantial visual interference on auditory localization. Multisensory performance appears more flexible and adaptive in this complex environment than in previous studies.

  12. The role of semantic self-perceptions in temporal distance perceptions toward autobiographical events: the semantic congruence model.

    PubMed

    Gebauer, Jochen E; Haddock, Geoffrey; Broemer, Philip; von Hecker, Ulrich

    2013-11-01

    Why do some autobiographical events feel as if they happened yesterday, whereas others feel like ancient history? Such temporal distance perceptions have surprisingly little to do with actual calendar time distance. Instead, psychologists have found that people typically perceive positive autobiographical events as overly recent, while perceiving negative events as overly distant. The origins of this temporal distance bias have been sought in self-enhancement strivings and mood congruence between autobiographical events and chronic mood. As such, past research exclusively focused on the evaluative features of autobiographical events, while neglecting semantic features. To close this gap, we introduce a semantic congruence model. Capitalizing on the Big Two self-perception dimensions, Study 1 showed that high semantic congruence between recalled autobiographical events and trait self-perceptions rendered the recalled events subjectively recent. Specifically, interpersonally warm (competent) individuals perceived autobiographical events reflecting warmth (competence) as relatively recent, but warm (competent) individuals did not perceive events reflecting competence (warmth) as relatively recent. Study 2 found that conscious perceptions of congruence mediate these effects. Studies 3 and 4 showed that neither mood congruence nor self-enhancement account for these results. Study 5 extended the results from the Big Two to the Big Five self-perception dimensions, while affirming the independence of the semantic congruence model from evaluative influences.

  13. Reproducibility and discriminability of brain patterns of semantic categories enhanced by congruent audiovisual stimuli.

    PubMed

    Li, Yuanqing; Wang, Guangyi; Long, Jinyi; Yu, Zhuliang; Huang, Biao; Li, Xiaojian; Yu, Tianyou; Liang, Changhong; Li, Zheng; Sun, Pei

    2011-01-01

    One of the central questions in cognitive neuroscience is the precise neural representation, or brain pattern, associated with a semantic category. In this study, we explored the influence of audiovisual stimuli on the brain patterns of concepts or semantic categories through a functional magnetic resonance imaging (fMRI) experiment. We used a pattern search method to extract brain patterns corresponding to two semantic categories: "old people" and "young people." These brain patterns were elicited by semantically congruent audiovisual, semantically incongruent audiovisual, unimodal visual, and unimodal auditory stimuli belonging to the two semantic categories. We calculated the reproducibility index, which measures the similarity of the patterns within the same category. We also decoded the semantic categories from these brain patterns. The decoding accuracy reflects the discriminability of the brain patterns between two categories. The results showed that both the reproducibility index of brain patterns and the decoding accuracy were significantly higher for semantically congruent audiovisual stimuli than for unimodal visual and unimodal auditory stimuli, while the semantically incongruent stimuli did not elicit brain patterns with a significantly higher reproducibility index or decoding accuracy. Thus, the semantically congruent audiovisual stimuli enhanced the within-class reproducibility and between-class discriminability of brain patterns, and facilitated neural representations of semantic categories or concepts. Furthermore, we analyzed brain activity in the superior temporal sulcus and middle temporal gyrus (STS/MTG). The strength of the fMRI signal and the reproducibility index were enhanced by the semantically congruent audiovisual stimuli. Our results support the use of the reproducibility index as a potential tool to supplement the fMRI signal amplitude for evaluating multimodal integration.
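    The two metrics in this record can be illustrated with a toy sketch. This is not the authors' actual fMRI pipeline; it is a hypothetical, minimal version that assumes a Pearson-correlation similarity measure, leave-one-out nearest-centroid decoding, and made-up four-voxel "patterns".

    ```python
    # Toy illustration of a "reproducibility index" (mean pairwise within-class
    # pattern correlation) and a decoding accuracy (leave-one-out nearest-centroid
    # classification). Assumes non-constant patterns of equal length.

    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        vx = sum((a - mx) ** 2 for a in x) ** 0.5
        vy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (vx * vy)

    def reproducibility(patterns):
        # mean correlation over all pairs of patterns from one category
        pairs = [(i, j) for i in range(len(patterns))
                 for j in range(i + 1, len(patterns))]
        return sum(pearson(patterns[i], patterns[j]) for i, j in pairs) / len(pairs)

    def decode_accuracy(class_a, class_b):
        # leave-one-out nearest-centroid classification between two categories
        correct, total = 0, 0
        for label, group, other in ((0, class_a, class_b), (1, class_b, class_a)):
            for k, p in enumerate(group):
                rest = [q for m, q in enumerate(group) if m != k]
                own = [sum(v) / len(rest) for v in zip(*rest)]   # own-class centroid
                oth = [sum(v) / len(other) for v in zip(*other)]  # other-class centroid
                pred = label if pearson(p, own) > pearson(p, oth) else 1 - label
                correct += pred == label
                total += 1
        return correct / total
    ```

    On synthetic data where one category's patterns all rise and the other's all fall, the reproducibility index is close to 1 and decoding is perfect; noisier or incongruent-style patterns would lower both quantities, which is the contrast the study measures.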

  14. Reproducibility and Discriminability of Brain Patterns of Semantic Categories Enhanced by Congruent Audiovisual Stimuli

    PubMed Central

    Long, Jinyi; Yu, Zhuliang; Huang, Biao; Li, Xiaojian; Yu, Tianyou; Liang, Changhong; Li, Zheng; Sun, Pei

    2011-01-01

    One of the central questions in cognitive neuroscience is the precise neural representation, or brain pattern, associated with a semantic category. In this study, we explored the influence of audiovisual stimuli on the brain patterns of concepts or semantic categories through a functional magnetic resonance imaging (fMRI) experiment. We used a pattern search method to extract brain patterns corresponding to two semantic categories: “old people” and “young people.” These brain patterns were elicited by semantically congruent audiovisual, semantically incongruent audiovisual, unimodal visual, and unimodal auditory stimuli belonging to the two semantic categories. We calculated the reproducibility index, which measures the similarity of the patterns within the same category. We also decoded the semantic categories from these brain patterns. The decoding accuracy reflects the discriminability of the brain patterns between two categories. The results showed that both the reproducibility index of brain patterns and the decoding accuracy were significantly higher for semantically congruent audiovisual stimuli than for unimodal visual and unimodal auditory stimuli, while the semantically incongruent stimuli did not elicit brain patterns with a significantly higher reproducibility index or decoding accuracy. Thus, the semantically congruent audiovisual stimuli enhanced the within-class reproducibility and between-class discriminability of brain patterns, and facilitated neural representations of semantic categories or concepts. Furthermore, we analyzed brain activity in the superior temporal sulcus and middle temporal gyrus (STS/MTG). The strength of the fMRI signal and the reproducibility index were enhanced by the semantically congruent audiovisual stimuli. Our results support the use of the reproducibility index as a potential tool to supplement the fMRI signal amplitude for evaluating multimodal integration. PMID:21750692

  15. Audio-Visual and Meaningful Semantic Context Enhancements in Older and Younger Adults.

    PubMed

    Smayda, Kirsten E; Van Engen, Kristin J; Maddox, W Todd; Chandrasekaran, Bharath

    2016-01-01

    Speech perception is critical to everyday life. Oftentimes noise can degrade a speech signal; however, because of the cues available to the listener, such as visual and semantic cues, noise rarely prevents conversations from continuing. The interaction of visual and semantic cues in aiding speech perception has been studied in young adults, but the extent to which these two cues interact for older adults has not been studied. To investigate the effect of visual and semantic cues on speech perception in older and younger adults, we recruited forty-five young adults (ages 18-35) and thirty-three older adults (ages 60-90) to participate in a speech perception task. Participants were presented with semantically meaningful and anomalous sentences in audio-only and audio-visual conditions. We hypothesized that young adults would outperform older adults across SNRs, modalities, and semantic contexts. In addition, we hypothesized that both young and older adults would receive a greater benefit from a semantically meaningful context in the audio-visual relative to audio-only modality. We predicted that young adults would receive greater visual benefit in semantically meaningful contexts relative to anomalous contexts. However, we predicted that older adults could receive a greater visual benefit in either semantically meaningful or anomalous contexts. Results suggested that in the most supportive context, that is, semantically meaningful sentences presented in the audiovisual modality, older adults performed similarly to young adults. In addition, both groups received the same amount of visual and meaningful benefit. Lastly, across groups, a semantically meaningful context provided more benefit in the audio-visual modality relative to the audio-only modality, and the presence of visual cues provided more benefit in semantically meaningful contexts relative to anomalous contexts. These results suggest that older adults can perceive speech as well as younger adults when both

  16. Audio-Visual and Meaningful Semantic Context Enhancements in Older and Younger Adults

    PubMed Central

    Smayda, Kirsten E.; Van Engen, Kristin J.; Maddox, W. Todd; Chandrasekaran, Bharath

    2016-01-01

    Speech perception is critical to everyday life. Oftentimes noise can degrade a speech signal; however, because of the cues available to the listener, such as visual and semantic cues, noise rarely prevents conversations from continuing. The interaction of visual and semantic cues in aiding speech perception has been studied in young adults, but the extent to which these two cues interact for older adults has not been studied. To investigate the effect of visual and semantic cues on speech perception in older and younger adults, we recruited forty-five young adults (ages 18–35) and thirty-three older adults (ages 60–90) to participate in a speech perception task. Participants were presented with semantically meaningful and anomalous sentences in audio-only and audio-visual conditions. We hypothesized that young adults would outperform older adults across SNRs, modalities, and semantic contexts. In addition, we hypothesized that both young and older adults would receive a greater benefit from a semantically meaningful context in the audio-visual relative to audio-only modality. We predicted that young adults would receive greater visual benefit in semantically meaningful contexts relative to anomalous contexts. However, we predicted that older adults could receive a greater visual benefit in either semantically meaningful or anomalous contexts. Results suggested that in the most supportive context, that is, semantically meaningful sentences presented in the audiovisual modality, older adults performed similarly to young adults. In addition, both groups received the same amount of visual and meaningful benefit. Lastly, across groups, a semantically meaningful context provided more benefit in the audio-visual modality relative to the audio-only modality, and the presence of visual cues provided more benefit in semantically meaningful contexts relative to anomalous contexts. These results suggest that older adults can perceive speech as well as younger adults when

  17. Alpha and Beta Oscillations Index Semantic Congruency between Speech and Gestures in Clear and Degraded Speech.

    PubMed

    Drijvers, Linda; Özyürek, Asli; Jensen, Ole

    2018-06-19

    Previous work revealed that visual semantic information conveyed by gestures can enhance degraded speech comprehension, but the mechanisms underlying these integration processes under adverse listening conditions remain poorly understood. We used MEG to investigate how oscillatory dynamics support speech-gesture integration when integration load is manipulated by auditory (e.g., speech degradation) and visual semantic (e.g., gesture congruency) factors. Participants were presented with videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching (mixing gesture + "mixing") or mismatching (drinking gesture + "walking") gesture. In clear speech, alpha/beta power was more suppressed in the left inferior frontal gyrus and motor and visual cortices when integration load increased in response to mismatching versus matching gestures. In degraded speech, beta power was less suppressed over posterior STS and medial temporal lobe for mismatching compared with matching gestures, showing that integration load was lowest when speech was degraded and mismatching gestures could not be integrated to disambiguate the degraded signal. Our results thus provide novel insights on how low-frequency oscillatory modulations in different parts of the cortex support the semantic audiovisual integration of gestures in clear and degraded speech: When speech is clear, the left inferior frontal gyrus and motor and visual cortices engage because higher-level semantic information increases semantic integration load. When speech is degraded, posterior STS/middle temporal gyrus and medial temporal lobe are less engaged because integration load is lowest when visual semantic information does not aid lexical retrieval and speech and gestures cannot be integrated.

  18. Multisensory integration in complete unawareness: evidence from audiovisual congruency priming.

    PubMed

    Faivre, Nathan; Mudrik, Liad; Schwartz, Naama; Koch, Christof

    2014-11-01

    Multisensory integration is thought to require conscious perception. Although previous studies have shown that an invisible stimulus could be integrated with an audible one, none have demonstrated integration of two subliminal stimuli of different modalities. Here, pairs of identical or different audiovisual target letters (the sound /b/ with the written letter "b" or "m," respectively) were preceded by pairs of masked identical or different audiovisual prime digits (the sound /6/ with the written digit "6" or "8," respectively). In three experiments, awareness of the audiovisual digit primes was manipulated, such that participants were either unaware of the visual digit, the auditory digit, or both. Priming of the semantic relations between the auditory and visual digits was found in all experiments. Moreover, a further experiment showed that unconscious multisensory integration was not obtained when participants did not undergo prior conscious training of the task. This suggests that following conscious learning, unconscious processing suffices for multisensory integration.

  19. Voice over: Audio-visual congruency and content recall in the gallery setting

    PubMed Central

    Fairhurst, Merle T.; Scott, Minnie; Deroy, Ophelia

    2017-01-01

    Experimental research has shown that pairs of stimuli which are congruent and assumed to ‘go together’ are recalled more effectively than an item presented in isolation. Will this multisensory memory benefit occur when stimuli are richer and longer, in an ecological setting? In the present study, we focused on an everyday situation of audio-visual learning and manipulated the relationship between audio guide tracks and viewed portraits in the galleries of the Tate Britain. By varying the gender and narrative style of the voice-over, we examined how the perceived congruency and assumed unity of the audio guide track with painted portraits affected subsequent recall. We show that tracks perceived as best matching the viewed portraits led to greater recall of both sensory and linguistic content. We provide the first evidence that manipulating crossmodal congruence and unity assumptions can effectively impact memory in a multisensory ecological setting, even in the absence of precise temporal alignment between sensory cues. PMID:28636667

  1. Delineating the Effect of Semantic Congruency on Episodic Memory: The Role of Integration and Relatedness

    PubMed Central

    Bein, Oded; Livneh, Neta; Reggev, Niv; Gilead, Michael; Goshen-Gottstein, Yonatan; Maril, Anat

    2015-01-01

    A fundamental challenge in the study of learning and memory is to understand the role of existing knowledge in the encoding and retrieval of new episodic information. The importance of prior knowledge in memory is demonstrated in the congruency effect—the robust finding wherein participants display better memory for items that are compatible, rather than incompatible, with their pre-existing semantic knowledge. Despite its robustness, the mechanism underlying this effect is not well understood. In four studies, we provide evidence that demonstrates the privileged explanatory power of the elaboration-integration account over alternative hypotheses. Furthermore, we question the implicit assumption that the congruency effect pertains to the truthfulness/sensibility of a subject-predicate proposition, and show that congruency is a function of semantic relatedness between item and context words. PMID:25695759

  3. Event Congruency Enhances Episodic Memory Encoding through Semantic Elaboration and Relational Binding

    PubMed Central

    Staresina, Bernhard P.; Gray, James C.

    2009-01-01

    Behavioral research consistently shows that congruous events, that is, events whose constituent elements match along some specific dimension, are better remembered than incongruous events. Although it has been speculated that this “congruency subsequent memory effect” (cSME) results from enhanced semantic elaboration, empirical evidence for this account is lacking. Here, we report a set of behavioral and neuroimaging experiments demonstrating that congruous events engage regions along the left inferior frontal gyrus (LIFG)—consistently related to semantic elaboration—to a significantly greater degree than incongruous events, providing evidence in favor of this hypothesis. Critically, we additionally report 3 novel findings in relation to event congruency: First, congruous events yield superior memory not only for a given study item but also for associated source details. Second, the cSME is evident not only for events that matched a semantic context but also for those that matched a subjective aesthetic schema. Finally, functional magnetic resonance imaging brain/behavior correlation analysis reveals a strong link between 1) across-subject variation in the magnitude of the cSME and 2) differential right hippocampal activation, suggesting that episodic memory for congruous events is effectively bolstered by the extent to which semantic associations are generated and relationally integrated via LIFG-hippocampal–encoding mechanisms. PMID:18820289

  4. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events.

    PubMed

    Stekelenburg, Jeroen J; Vroomen, Jean

    2012-01-01

    In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV - V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that this N1 suppression was greater for the spatially congruent stimuli. A very early audiovisual interaction was also found at 40-60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.
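The sub-additive criterion reported above (AV − V < A) can be sketched as a simple amplitude comparison. The function and the amplitude values below are hypothetical and only illustrate the additive-model test; they are not the authors' analysis pipeline.

```python
# Illustrative sketch of the additive-model test for audiovisual interaction:
# the ERP to audiovisual stimuli, corrected for the visual-only response
# (AV - V), is compared against the auditory-only ERP (A). For the
# negative-going N1, suppression means AV - V is *less negative* than A,
# i.e. (AV - V) - A > 0. All amplitudes below are made-up microvolt values.

def av_interaction(av, v, a):
    """Return (AV - V) - A per sample; positive values indicate
    suppression of the auditory N1 by the visual signal."""
    return [(av_i - v_i) - a_i for av_i, v_i, a_i in zip(av, v, a)]

# Hypothetical N1-window amplitudes (µV) at one electrode:
av = [-4.1, -4.3, -4.0]   # audiovisual condition
v  = [-0.9, -1.0, -0.8]   # visual-only condition
a  = [-5.2, -5.5, -5.1]   # auditory-only condition

diff = av_interaction(av, v, a)
suppressed = all(d > 0 for d in diff)  # True -> sub-additive (AV - V < A)
```

On this toy data `suppressed` is true, i.e. the corrected audiovisual response is smaller in magnitude than the auditory-only response, which is the pattern the abstract describes for spatially congruent stimuli.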

  5. Effect of hearing loss on semantic access by auditory and audiovisual speech in children.

    PubMed

    Jerger, Susan; Tye-Murray, Nancy; Damian, Markus F; Abdi, Hervé

    2013-01-01

    This research studied whether the mode of input (auditory versus audiovisual) influenced semantic access by speech in children with sensorineural hearing impairment (HI). Participants, 31 children with HI and 62 children with normal hearing (NH), were tested with the authors' new multimodal picture word task. Children were instructed to name pictures displayed on a monitor and ignore auditory or audiovisual speech distractors. The semantic content of the distractors was varied to be related versus unrelated to the pictures (e.g., picture distractor of dog-bear versus dog-cheese, respectively). In children with NH, picture-naming times were slower in the presence of semantically related distractors. This slowing, called semantic interference, is attributed to the meaning-related picture-distractor entries competing for selection and control of the response (the lexical selection by competition hypothesis). Recently, a modification of the lexical selection by competition hypothesis, called the competition threshold (CT) hypothesis, proposed that (1) the competition between the picture-distractor entries is determined by a threshold, and (2) distractors with experimentally reduced fidelity cannot reach the CT. Thus, semantically related distractors with reduced fidelity do not produce the normal interference effect, but instead no effect or semantic facilitation (faster picture naming times for semantically related versus unrelated distractors). Facilitation occurs because the activation level of the semantically related distractor with reduced fidelity (1) is not sufficient to exceed the CT and produce interference but (2) is sufficient to activate its concept, which then strengthens the activation of the picture and facilitates naming. This research investigated whether the proposals of the CT hypothesis generalize to the auditory domain, to the natural degradation of speech due to HI, and to participants who are children. Our multimodal picture word task allowed us

  6. The modulatory effect of semantic familiarity on the audiovisual integration of face-name pairs.

    PubMed

    Li, Yuanqing; Wang, Fangyi; Huang, Biao; Yang, Wanqun; Yu, Tianyou; Talsma, Durk

    2016-12-01

    To recognize individuals, the brain often integrates audiovisual information from familiar or unfamiliar faces, voices, and auditory names. To date, the effects of the semantic familiarity of stimuli on audiovisual integration remain unknown. In this functional magnetic resonance imaging (fMRI) study, we used familiar/unfamiliar facial images, auditory names, and audiovisual face-name pairs as stimuli to determine the influence of semantic familiarity on audiovisual integration. First, we performed a general linear model analysis using fMRI data and found that audiovisual integration occurred for familiar congruent and unfamiliar face-name pairs but not for familiar incongruent pairs. Second, we decoded the familiarity categories of the stimuli (familiar vs. unfamiliar) from the fMRI data and calculated the reproducibility indices of the brain patterns that corresponded to familiar and unfamiliar stimuli. The decoding accuracy rate was significantly higher for familiar congruent versus unfamiliar face-name pairs (83.2%) than for familiar versus unfamiliar faces (63.9%) and for familiar versus unfamiliar names (60.4%). This increase in decoding accuracy was not observed for familiar incongruent versus unfamiliar pairs. Furthermore, compared with the brain patterns associated with facial images or auditory names, the reproducibility index was significantly improved for the brain patterns of familiar congruent face-name pairs but not those of familiar incongruent or unfamiliar pairs. Our results indicate the modulatory effect that semantic familiarity has on audiovisual integration. Specifically, neural representations were enhanced for familiar congruent face-name pairs compared with visual-only faces and auditory-only names, whereas this enhancement effect was not observed for familiar incongruent or unfamiliar pairs. Hum Brain Mapp 37:4333-4348, 2016. © 2016 Wiley Periodicals, Inc.

  7. When Hearing the Bark Helps to Identify the Dog: Semantically-Congruent Sounds Modulate the Identification of Masked Pictures

    ERIC Educational Resources Information Center

    Chen, Yi-Chuan; Spence, Charles

    2010-01-01

    We report a series of experiments designed to assess the effect of audiovisual semantic congruency on the identification of visually-presented pictures. Participants made unspeeded identification responses concerning a series of briefly-presented, and then rapidly-masked, pictures. A naturalistic sound was sometimes presented together with the…

  8. Congruence Effect in Semantic Categorization with Masked Primes with Narrow and Broad Categories

    ERIC Educational Resources Information Center

    Quinn, Wendy Maree; Kinoshita, Sachiko

    2008-01-01

    In semantic categorization, masked primes that are category-congruent with the target (e.g., "Planets: mars-VENUS") facilitate responses relative to category-incongruent primes (e.g., "tree-VENUS"). The present study investigated why this category congruence effect is more consistently found with narrow categories (e.g., "Numbers larger/smaller…

  9. Prosodic expectations in silent reading: ERP evidence from rhyme scheme and semantic congruence in classic Chinese poems.

    PubMed

    Chen, Qingrong; Zhang, Jingjing; Xu, Xiaodong; Scheepers, Christoph; Yang, Yiming; Tanenhaus, Michael K

    2016-09-01

    In an ERP study, classic Chinese poems with a well-known rhyme scheme were used to generate an expectation of a rhyme in the absence of an expectation for a specific character. Critical characters were either consistent or inconsistent with the expected rhyme scheme and semantically congruent or incongruent with the content of the poem. These stimuli allowed us to examine whether a top-down rhyme scheme expectation would affect relatively early components of the ERP associated with character-to-sound mapping (P200) and lexically-mediated semantic processing (N400). The ERP data revealed that rhyme scheme congruence, but not semantic congruence modulated the P200: rhyme-incongruent characters elicited a P200 effect across the head demonstrating that top-down expectations influence early phonological coding of the character before lexical-semantic processing. Rhyme scheme incongruence also produced a right-lateralized N400-like effect. Moreover, compared to semantically congruous poems, semantically incongruous poems produced a larger N400 response only when the character was consistent with the expected rhyme scheme. The results suggest that top-down prosodic expectations can modulate early phonological processing in visual word recognition, indicating that prosodic expectations might play an important role in silent reading. They also suggest that semantic processing is influenced by general knowledge of text genre. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. The semantic origin of unconscious priming: Behavioral and event-related potential evidence during category congruency priming from strongly and weakly related masked words.

    PubMed

    Ortells, Juan J; Kiefer, Markus; Castillo, Alejandro; Megías, Montserrat; Morillas, Alejandro

    2016-01-01

    The mechanisms underlying masked congruency priming, semantic mechanisms such as semantic activation or non-semantic mechanisms, for example response activation, remain a matter of debate. In order to decide between these alternatives, reaction times (RTs) and event-related potentials (ERPs) were recorded in the present study, while participants performed a semantic categorization task on visible word targets that were preceded either 167 ms (Experiment 1) or 34 ms before (Experiment 2) by briefly presented (33 ms) novel (unpracticed) masked prime words. The primes and targets belonged to different categories (unrelated), or they were either strongly or weakly semantically related category co-exemplars. Behavioral (RT) and electrophysiological masked congruency priming effects were significantly greater for strongly related pairs than for weakly related pairs, indicating a semantic origin of effects. Priming in the latter condition was not statistically reliable. Furthermore, priming effects modulated the N400 event-related potential (ERP) component, an electrophysiological index of semantic processing, but not ERPs in the time range of the N200 component, associated with response conflict and visuo-motor response priming. The present results demonstrate that masked congruency priming from novel prime words also depends on semantic processing of the primes and is not exclusively driven by non-semantic mechanisms such as response activation. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Electrophysiological evidence for speech-specific audiovisual integration.

    PubMed

    Baart, Martijn; Stekelenburg, Jeroen J; Vroomen, Jean

    2014-01-01

    Lip-read speech is integrated with heard speech at various neural levels. Here, we investigated the extent to which lip-read induced modulations of the auditory N1 and P2 (measured with EEG) are indicative of speech-specific audiovisual integration, and we explored to what extent the ERPs were modulated by phonetic audiovisual congruency. In order to disentangle speech-specific (phonetic) integration from non-speech integration, we used Sine-Wave Speech (SWS) that was perceived as speech by half of the participants (they were in speech-mode), while the other half was in non-speech mode. Results showed that the N1 obtained with audiovisual stimuli peaked earlier than the N1 evoked by auditory-only stimuli. This lip-read induced speeding up of the N1 occurred for listeners in speech and non-speech mode. In contrast, if listeners were in speech-mode, lip-read speech also modulated the auditory P2, but not if listeners were in non-speech mode, thus revealing speech-specific audiovisual binding. Comparing ERPs for phonetically congruent audiovisual stimuli with ERPs for incongruent stimuli revealed an effect of phonetic stimulus congruency that started at ~200 ms after (in)congruence became apparent. Critically, akin to the P2 suppression, congruency effects were only observed if listeners were in speech mode, and not if they were in non-speech mode. Using identical stimuli, we thus confirm that audiovisual binding involves (partially) different neural mechanisms for sound processing in speech and non-speech mode. © 2013 Published by Elsevier Ltd.

  12. An ERP study on whether semantic integration exists in processing ecologically unrelated audio-visual information.

    PubMed

    Liu, Baolin; Meng, Xianyao; Wang, Zhongning; Wu, Guangning

    2011-11-14

    In the present study, we used event-related potentials (ERPs) to examine whether semantic integration occurs for ecologically unrelated audio-visual information. Videos with synchronous audio-visual information were used as stimuli, where the auditory stimuli were sine wave sounds with different sound levels, and the visual stimuli were simple geometric figures with different areas. In the experiment, participants were shown an initial display containing a single shape (drawn from a set of 6 shapes) with a fixed size (14 cm²) simultaneously with a 3500 Hz tone of a fixed intensity (80 dB). Following a short delay, another shape/tone pair was presented and the relationship between the size of the shape and the intensity of the tone varied across trials: in the V+A- condition, a large shape was paired with a soft tone; in the V+A+ condition, a large shape was paired with a loud tone, and so forth. The ERP results revealed that an N400 effect was elicited under the VA- condition (V+A- and V-A+) as compared to the VA+ condition (V+A+ and V-A-). It was shown that semantic integration would occur when simultaneous, ecologically unrelated auditory and visual stimuli enter the human brain. We considered that this semantic integration was based on semantic constraint of audio-visual information, which might come from the long-term learned association stored in the human brain and short-term experience of incoming information. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  13. Semantic integration of audio-visual information of polyphonic characters in a sentence context: an event-related potential study.

    PubMed

    Liu, Hong; Zhang, Gaoyan; Liu, Baolin

    2017-04-01

    In the Chinese language, a polyphone is a kind of special character that has more than one pronunciation, with each pronunciation corresponding to a different meaning. Here, we aimed to reveal the cognitive processing of audio-visual information integration of polyphones in a sentence context using the event-related potential (ERP) method. Sentences ending with polyphones were presented to subjects simultaneously in both an auditory and a visual modality. Four experimental conditions were set in which the visual presentations were the same, but the pronunciations of the polyphones were: the correct pronunciation; another pronunciation of the polyphone; a semantically appropriate pronunciation but not the pronunciation of the polyphone; or a semantically inappropriate pronunciation but also not the pronunciation of the polyphone. The behavioral results demonstrated significant differences in response accuracies when judging the semantic meanings of the audio-visual sentences, which reflected the different demands on cognitive resources. The ERP results showed that in the early stage, abnormal pronunciations were represented by the amplitude of the P200 component. Interestingly, because the phonological information mediated access to the lexical semantics, the amplitude and latency of the N400 component changed linearly across conditions, which may reflect the gradually increased semantic mismatch in the four conditions when integrating the auditory pronunciation with the visual information. Moreover, the amplitude of the late positive shift (LPS) showed a significant correlation with the behavioral response accuracies, demonstrating that the LPS component reveals the demand of cognitive resources for monitoring and resolving semantic conflicts when integrating the audio-visual information.

  14. Content congruency and its interplay with temporal synchrony modulate integration between rhythmic audiovisual streams.

    PubMed

    Su, Yi-Huang

    2014-01-01

    Both lower-level stimulus factors (e.g., temporal proximity) and higher-level cognitive factors (e.g., content congruency) are known to influence multisensory integration. The former can direct attention in a converging manner, and the latter can indicate whether information from the two modalities belongs together. The present research investigated whether and how these two factors interacted in the perception of rhythmic, audiovisual (AV) streams derived from a human movement scenario. Congruency here was based on sensorimotor correspondence pertaining to rhythm perception. Participants attended to bimodal stimuli consisting of a humanlike figure moving regularly to a sequence of auditory beat, and detected a possible auditory temporal deviant. The figure moved either downwards (congruently) or upwards (incongruently) to the downbeat, while in both situations the movement was either synchronous with the beat, or lagging behind it. Greater cross-modal binding was expected to hinder deviant detection. Results revealed poorer detection for congruent than for incongruent streams, suggesting stronger integration in the former. False alarms increased in asynchronous stimuli only for congruent streams, indicating greater tendency for deviant report due to visual capture of asynchronous auditory events. In addition, a greater increase in perceived synchrony was associated with a greater reduction in false alarms for congruent streams, while the pattern was reversed for incongruent ones. These results demonstrate that content congruency as a top-down factor not only promotes integration, but also modulates bottom-up effects of synchrony. Results are also discussed regarding how theories of integration and attentional entrainment may be combined in the context of rhythmic multisensory stimuli.

  15. Audiovisual semantic interactions between linguistic and nonlinguistic stimuli: The time-courses and categorical specificity.

    PubMed

    Chen, Yi-Chuan; Spence, Charles

    2018-04-30

    We examined the time-courses and categorical specificity of the crossmodal semantic congruency effects elicited by naturalistic sounds and spoken words on the processing of visual pictures (Experiment 1) and printed words (Experiment 2). Auditory cues were presented at 7 different stimulus onset asynchronies (SOAs) with respect to the visual targets, and participants made speeded categorization judgments (living vs. nonliving). Three common effects were observed across 2 experiments: Both naturalistic sounds and spoken words induced a slowly emerging congruency effect when leading by 250 ms or more in the congruent compared with the incongruent condition, and a rapidly emerging inhibitory effect when leading by 250 ms or less in the incongruent condition as opposed to the noise condition. Only spoken words that did not match the visual targets elicited an additional inhibitory effect when leading by 100 ms or when presented simultaneously. Compared with nonlinguistic stimuli, the crossmodal congruency effects associated with linguistic stimuli occurred over a wider range of SOAs and occurred at a more specific level of the category hierarchy (i.e., the basic level) than was required by the task. A comprehensive framework is proposed to provide a dynamic view regarding how meaning is extracted during the processing of visual or auditory linguistic and nonlinguistic stimuli, therefore contributing to our understanding of multisensory semantic processing in humans. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  16. Semantic congruence enhances memory of episodic associations: role of theta oscillations.

    PubMed

    Atienza, Mercedes; Crespo-Garcia, Maite; Cantero, Jose L

    2011-01-01

    Growing evidence suggests that theta oscillations play a crucial role in episodic encoding. The present study evaluates whether changes in electroencephalographic theta source dynamics mediate the positive influence of semantic congruence on incidental associative learning. Here we show that memory for episodic associations (face-location) is more accurate when studied under semantically congruent contexts. However, only participants showing RT priming effect in a conceptual priming test (priming group) also gave faster responses when recollecting source information of semantically congruent faces as compared with semantically incongruent faces. This improved episodic retrieval was positively correlated with increases in theta power during the study phase mainly in the bilateral parahippocampal gyrus, left superior temporal gyrus, and left lateral posterior parietal lobe. Reconstructed signals from the estimated sources showed higher theta power for congruent than incongruent faces and also for the priming than the nonpriming group. These results are in agreement with the attention to memory model. Besides directing top-down attention to goal-relevant semantic information during encoding, the dorsal parietal lobe may also be involved in redirecting attention to bottom-up-driven memories thanks to connections between the medial-temporal and the left ventral parietal lobe. The latter function can either facilitate or interfere with encoding of face-location associations depending on whether they are preceded by semantically congruent or incongruent contexts, respectively, because only in the former condition retrieved representations related to the cue and the face are both coherent with the person identity and are both associated with the same location.

  17. Between- and within-Ear Congruency and Laterality Effects in an Auditory Semantic/Emotional Prosody Conflict Task

    ERIC Educational Resources Information Center

    Techentin, Cheryl; Voyer, Daniel; Klein, Raymond M.

    2009-01-01

    The present study investigated the influence of within- and between-ear congruency on interference and laterality effects in an auditory semantic/prosodic conflict task. Participants were presented dichotically with words (e.g., mad, sad, glad) pronounced in either congruent or incongruent emotional tones (e.g., angry, happy, or sad) and…

  18. [Ventriloquism and audio-visual integration of voice and face].

    PubMed

    Yokosawa, Kazuhiko; Kanaya, Shoko

    2012-07-01

    Presenting synchronous auditory and visual stimuli in separate locations creates the illusion that the sound originates from the direction of the visual stimulus. Participants' auditory localization bias, called the ventriloquism effect, has revealed factors affecting the perceptual integration of audio-visual stimuli. However, many studies on audio-visual processes have focused on performance in simplified experimental situations, with a single stimulus in each sensory modality. These results cannot necessarily explain our perceptual behavior in natural scenes, where various signals exist within a single sensory modality. In the present study we report the contributions of a cognitive factor, that is, the audio-visual congruency of speech, although this factor has often been underestimated in previous ventriloquism research. Thus, we investigated the contribution of speech congruency on the ventriloquism effect using a spoken utterance and two videos of a talking face. The salience of facial movements was also manipulated. As a result, when bilateral visual stimuli are presented in synchrony with a single voice, cross-modal speech congruency was found to have a significant impact on the ventriloquism effect. This result also indicated that more salient visual utterances attracted participants' auditory localization. The congruent pairing of audio-visual utterances elicited greater localization bias than did incongruent pairing, whereas previous studies have reported little dependency on the reality of stimuli in ventriloquism. Moreover, audio-visual illusory congruency, owing to the McGurk effect, caused substantial visual interference to auditory localization. This suggests that a greater flexibility in responding to multi-sensory environments exists than has been previously considered.

  19. Semantic-based crossmodal processing during visual suppression.

    PubMed

    Cox, Dustin; Hong, Sang Wook

    2015-01-01

    To reveal the mechanisms underpinning the influence of auditory input on visual awareness, we examine (1) whether purely semantic-based multisensory integration facilitates the access to visual awareness for familiar visual events, and (2) whether crossmodal semantic priming is the mechanism responsible for the semantic auditory influence on visual awareness. Using continuous flash suppression, we rendered dynamic and familiar visual events (e.g., a video clip of an approaching train) inaccessible to visual awareness. We manipulated the semantic auditory context of the videos by concurrently pairing them with a semantically matching soundtrack (congruent audiovisual condition), a semantically non-matching soundtrack (incongruent audiovisual condition), or with no soundtrack (neutral video-only condition). We found that participants identified the suppressed visual events significantly faster (an earlier breakup of suppression) in the congruent audiovisual condition compared to the incongruent audiovisual condition and video-only condition. However, this facilitatory influence of semantic auditory input was only observed when audiovisual stimulation co-occurred. Our results suggest that the enhanced visual processing with a semantically congruent auditory input occurs due to audiovisual crossmodal processing rather than semantic priming, which may occur even when visual information is not available to visual awareness.

  20. Quality models for audiovisual streaming

    NASA Astrophysics Data System (ADS)

    Thang, Truong Cong; Kim, Young Suk; Kim, Cheon Seog; Ro, Yong Man

    2006-01-01

    Quality is an essential factor in multimedia communication, especially in compression and adaptation. Quality metrics can be divided into three categories: within-modality quality, cross-modality quality, and multi-modality quality. Most research has so far focused on within-modality quality. Moreover, quality is normally considered only from the perceptual perspective. In practice, content may be drastically adapted, even converted to another modality. In this case, we should consider quality from the semantic perspective as well. In this work, we investigate multi-modality quality from the semantic perspective. To model the semantic quality, we apply the concept of the "conceptual graph", which consists of semantic nodes and relations between the nodes. As a typical multi-modality example, we focus on audiovisual streaming services. Specifically, we evaluate the amount of information conveyed by audiovisual content where both the video and audio channels may be strongly degraded, or even where audio is converted to text. In the experiments, we also consider the perceptual quality model of audiovisual content, so as to see the difference from the semantic quality model.
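The conceptual-graph idea described above can be sketched as a small data structure. Everything in this sketch (the class, the relation triples, and the quality score defined as the fraction of nodes and relations preserved after adaptation) is an illustrative assumption, not the authors' actual model.

```python
# Illustrative sketch of a conceptual graph for semantic quality:
# semantic nodes plus labelled relation triples, with a toy quality score
# defined as the fraction of the original graph that survives adaptation
# (e.g., dropping the video channel or converting audio to text).

class ConceptualGraph:
    def __init__(self):
        self.nodes = set()
        self.relations = set()  # (source, label, target) triples

    def add_relation(self, src, label, dst):
        self.nodes.update((src, dst))
        self.relations.add((src, label, dst))

def semantic_quality(original, adapted):
    """Fraction of the original graph's nodes and relations preserved."""
    kept = (len(adapted.nodes & original.nodes)
            + len(adapted.relations & original.relations))
    total = len(original.nodes) + len(original.relations)
    return kept / total if total else 1.0

# Full audiovisual content: 3 nodes, 2 relations (5 semantic elements).
full = ConceptualGraph()
full.add_relation("reporter", "speaks", "headline")
full.add_relation("headline", "about", "election")

# Text-only adaptation keeps only part of the graph (2 nodes, 1 relation).
text_only = ConceptualGraph()
text_only.add_relation("headline", "about", "election")

q = semantic_quality(full, text_only)  # 3 of 5 elements preserved -> 0.6
```

Under this toy definition, stronger degradation (fewer surviving nodes and relations) yields a lower semantic quality score, independent of any perceptual distortion measure.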

  1. The time course of auditory-visual processing of speech and body actions: evidence for the simultaneous activation of an extended neural network for semantic processing.

    PubMed

    Meyer, Georg F; Harrison, Neil R; Wuerger, Sophie M

    2013-08-01

    An extensive network of cortical areas is involved in multisensory object and action recognition. This network draws on inferior frontal, posterior temporal, and parietal areas; activity is modulated by familiarity and by the semantic congruency of the auditory and visual component signals, even if semantic incongruences are created by combining visual and auditory signals representing very different signal categories, such as speech and whole-body actions. Here we present results from a high-density ERP study designed to examine the time course and source location of responses to semantically congruent and incongruent audiovisual speech and body actions, to explore whether the network involved in action recognition consists of a hierarchy of sequentially activated processing modules or a network of simultaneously active processing sites. We report two main results. (1) There are no significant early differences in the processing of congruent and incongruent audiovisual action sequences. The earliest difference between congruent and incongruent audiovisual stimuli occurs between 240 and 280 ms after stimulus onset in the left temporal region. Between 340 and 420 ms, semantic congruence modulates responses in central and right frontal areas. Late differences (after 460 ms) occur bilaterally in frontal areas. (2) Source localisation (dipole modelling and LORETA) reveals that an extended network encompassing inferior frontal, temporal, parasagittal, and superior parietal sites is simultaneously active between 180 and 420 ms to process auditory-visual action sequences. Early activation (before 120 ms) can be explained by activity in mainly sensory cortices. The simultaneous activation of an extended network between 180 and 420 ms is consistent with models that posit parallel processing of complex action sequences in frontal, temporal, and parietal areas rather than models that postulate hierarchical processing in a sequence of brain regions. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. The level of audiovisual print-speech integration deficits in dyslexia.

    PubMed

    Kronschnabel, Jens; Brem, Silvia; Maurer, Urs; Brandeis, Daniel

    2014-09-01

    The classical phonological deficit account of dyslexia is increasingly linked to impairments in grapho-phonological conversion, and to dysfunctions in superior temporal regions associated with audiovisual integration. The present study investigates mechanisms of audiovisual integration in typical and impaired readers at the critical developmental stage of adolescence. Congruent and incongruent audiovisual as well as unimodal (visual-only and auditory-only) material was presented. Audiovisual presentations were single letters and three-letter (consonant-vowel-consonant) stimuli accompanied by matching or mismatching speech sounds. Three-letter stimuli exhibited fast phonetic transitions as in real-life language processing and reading. Congruency effects, i.e., different brain responses to congruent and incongruent stimuli, were taken as an indicator of audiovisual integration at a phonetic level (grapho-phonological conversion). Comparisons of unimodal and audiovisual stimuli revealed basic, more sensory aspects of audiovisual integration. By means of these two criteria of audiovisual integration, the generalizability of audiovisual deficits in dyslexia was tested. Moreover, it was expected that the more naturalistic three-letter stimuli would be superior to single letters in revealing group differences. Electrophysiological and hemodynamic (EEG and fMRI) data were acquired simultaneously in a simple target detection task. Applying the same statistical models to event-related EEG potentials and fMRI responses allowed comparing the effects detected by the two techniques at a descriptive level. Group differences in congruency effects (congruent against incongruent) were observed in regions involved in grapho-phonological processing, including the left inferior frontal and angular gyri and the inferotemporal cortex. Importantly, such differences also emerged in superior temporal key regions. Three-letter stimuli revealed stronger group differences than single letters. No

  3. Congruence of happy and sad emotion in music and faces modifies cortical audiovisual activation.

    PubMed

    Jeong, Jeong-Won; Diwadkar, Vaibhav A; Chugani, Carla D; Sinsoongsud, Piti; Muzik, Otto; Behen, Michael E; Chugani, Harry T; Chugani, Diane C

    2011-02-14

    The powerful emotion-inducing properties of music are well known, yet music may convey differing emotional responses depending on environmental factors. We hypothesized that the neural mechanisms involved in listening to music may differ when it is presented together with visual stimuli that convey the same emotion as the music, compared to visual stimuli with incongruent emotional content. We designed this study to determine the effect of auditory stimuli (happy and sad instrumental music) and visual stimuli (happy and sad faces), congruent or incongruent in emotional content, on audiovisual processing using fMRI blood oxygenation level-dependent (BOLD) signal contrast. The study used a conventional block design. A block consisted of three emotional ON periods: music alone (happy or sad music), faces alone (happy or sad faces), and music combined with faces, in which the music excerpt was played while presenting either congruent or incongruent emotional faces. We found activity in the superior temporal gyrus (STG) and fusiform gyrus (FG) to be differentially modulated by music and faces depending on the congruence of emotional content. There was a greater BOLD response in STG when the emotion signaled by the music and faces was congruent. Furthermore, the magnitude of these changes differed for happy congruence and sad congruence, i.e., the activation of STG when happy music was presented with happy faces was greater than the activation seen when sad music was presented with sad faces. In contrast, incongruent stimuli diminished the BOLD response in STG and elicited greater signal change in bilateral FG. Behavioral testing supplemented these findings by showing that subject ratings of emotion in faces were influenced by emotion in music. When presented with happy music, happy faces were rated as more happy (p=0.051) and sad faces were rated as less sad (p=0.030).
When presented with sad music, happy faces were rated as less

  4. The role of emotion in dynamic audiovisual integration of faces and voices

    PubMed Central

    Kotz, Sonja A.; Tavano, Alessandro; Schröger, Erich

    2015-01-01

    We used human electroencephalogram to study early audiovisual integration of dynamic angry and neutral expressions. An auditory-only condition served as a baseline for the interpretation of integration effects. In the audiovisual conditions, the validity of visual information was manipulated using facial expressions that were either emotionally congruent or incongruent with the vocal expressions. First, we report an N1 suppression effect for angry compared with neutral vocalizations in the auditory-only condition. Second, we confirm early integration of congruent visual and auditory information as indexed by a suppression of the auditory N1 and P2 components in the audiovisual compared with the auditory-only condition. Third, audiovisual N1 suppression was modulated by audiovisual congruency in interaction with emotion: for neutral vocalizations, there was N1 suppression in both the congruent and the incongruent audiovisual conditions. For angry vocalizations, there was N1 suppression only in the congruent but not in the incongruent condition. Extending previous findings of dynamic audiovisual integration, the current results suggest that audiovisual N1 suppression is congruency- and emotion-specific and indicate that dynamic emotional expressions compared with non-emotional expressions are preferentially processed in early audiovisual integration. PMID:25147273

  5. Audiovisual Modulation in Mouse Primary Visual Cortex Depends on Cross-Modal Stimulus Configuration and Congruency.

    PubMed

    Meijer, Guido T; Montijn, Jorrit S; Pennartz, Cyriel M A; Lansink, Carien S

    2017-09-06

    The sensory neocortex is a highly connected associative network that integrates information from multiple senses, even at the level of the primary sensory areas. Although a growing body of empirical evidence supports this view, the neural mechanisms of cross-modal integration in primary sensory areas, such as the primary visual cortex (V1), are still largely unknown. Using two-photon calcium imaging in awake mice, we show that the encoding of audiovisual stimuli in V1 neuronal populations is highly dependent on the features of the stimulus constituents. When the visual and auditory stimulus features were modulated at the same rate (i.e., temporally congruent), neurons responded with either an enhancement or suppression compared with unisensory visual stimuli, and their prevalence was balanced. Temporally incongruent tones or white-noise bursts included in audiovisual stimulus pairs resulted in predominant response suppression across the neuronal population. Visual contrast did not influence multisensory processing when the audiovisual stimulus pairs were congruent; however, when white-noise bursts were used, neurons generally showed response suppression when the visual stimulus contrast was high, whereas this effect was absent when the visual contrast was low. Furthermore, a small fraction of V1 neurons, predominantly those located near the lateral border of V1, responded to sound alone. These results show that V1 is involved in the encoding of cross-modal interactions in a more versatile way than previously thought. SIGNIFICANCE STATEMENT The neural substrate of cross-modal integration is not limited to specialized cortical association areas but extends to primary sensory areas. Using two-photon imaging of large groups of neurons, we show that multisensory modulation of V1 populations is strongly determined by the individual and shared features of cross-modal stimulus constituents, such as contrast, frequency, congruency, and temporal structure. Congruent

  6. The role of emotion in dynamic audiovisual integration of faces and voices.

    PubMed

    Kokinous, Jenny; Kotz, Sonja A; Tavano, Alessandro; Schröger, Erich

    2015-05-01

    We used human electroencephalogram to study early audiovisual integration of dynamic angry and neutral expressions. An auditory-only condition served as a baseline for the interpretation of integration effects. In the audiovisual conditions, the validity of visual information was manipulated using facial expressions that were either emotionally congruent or incongruent with the vocal expressions. First, we report an N1 suppression effect for angry compared with neutral vocalizations in the auditory-only condition. Second, we confirm early integration of congruent visual and auditory information as indexed by a suppression of the auditory N1 and P2 components in the audiovisual compared with the auditory-only condition. Third, audiovisual N1 suppression was modulated by audiovisual congruency in interaction with emotion: for neutral vocalizations, there was N1 suppression in both the congruent and the incongruent audiovisual conditions. For angry vocalizations, there was N1 suppression only in the congruent but not in the incongruent condition. Extending previous findings of dynamic audiovisual integration, the current results suggest that audiovisual N1 suppression is congruency- and emotion-specific and indicate that dynamic emotional expressions compared with non-emotional expressions are preferentially processed in early audiovisual integration. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  7. Are Automatic Conceptual Cores the Gold Standard of Semantic Processing? The Context-Dependence of Spatial Meaning in Grounded Congruency Effects

    ERIC Educational Resources Information Center

    Lebois, Lauren A. M.; Wilson-Mendenhall, Christine D.; Barsalou, Lawrence W.

    2015-01-01

    According to grounded cognition, words whose semantics contain sensory-motor features activate sensory-motor simulations, which, in turn, interact with spatial responses to produce grounded congruency effects (e.g., processing the spatial feature of "up" for sky should be faster for up vs. down responses). Growing evidence shows these…

  8. The organization and reorganization of audiovisual speech perception in the first year of life.

    PubMed

    Danielson, D Kyle; Bruderer, Alison G; Kandhadai, Padmapriya; Vatikiotis-Bateson, Eric; Werker, Janet F

    2017-04-01

    The period between six and 12 months is a sensitive period for language learning during which infants undergo auditory perceptual attunement, and recent results indicate that this sensitive period may exist across sensory modalities. We tested infants at three stages of perceptual attunement (six, nine, and 11 months) to determine 1) whether they were sensitive to the congruence between heard and seen speech stimuli in an unfamiliar language, and 2) whether familiarization with congruent audiovisual speech could boost subsequent non-native auditory discrimination. Infants at six- and nine-, but not 11-months, detected audiovisual congruence of non-native syllables. Familiarization to incongruent, but not congruent, audiovisual speech changed auditory discrimination at test for six-month-olds but not nine- or 11-month-olds. These results advance the proposal that speech perception is audiovisual from early in ontogeny, and that the sensitive period for audiovisual speech perception may last somewhat longer than that for auditory perception alone.

  9. The organization and reorganization of audiovisual speech perception in the first year of life

    PubMed Central

    Danielson, D. Kyle; Bruderer, Alison G.; Kandhadai, Padmapriya; Vatikiotis-Bateson, Eric; Werker, Janet F.

    2017-01-01

    The period between six and 12 months is a sensitive period for language learning during which infants undergo auditory perceptual attunement, and recent results indicate that this sensitive period may exist across sensory modalities. We tested infants at three stages of perceptual attunement (six, nine, and 11 months) to determine 1) whether they were sensitive to the congruence between heard and seen speech stimuli in an unfamiliar language, and 2) whether familiarization with congruent audiovisual speech could boost subsequent non-native auditory discrimination. Infants at six- and nine-, but not 11-months, detected audiovisual congruence of non-native syllables. Familiarization to incongruent, but not congruent, audiovisual speech changed auditory discrimination at test for six-month-olds but not nine- or 11-month-olds. These results advance the proposal that speech perception is audiovisual from early in ontogeny, and that the sensitive period for audiovisual speech perception may last somewhat longer than that for auditory perception alone. PMID:28970650

  10. Semantic congruence reverses effects of sleep restriction on associative encoding.

    PubMed

    Alberca-Reina, Esther; Cantero, Jose L; Atienza, Mercedes

    2014-04-01

    Encoding and memory consolidation are influenced by factors such as sleep and the congruency of newly learned information with prior knowledge (i.e., schemas). However, only a few studies have examined the contribution of sleep to the enhancement of schema-dependent memory. Based on previous studies showing that total sleep deprivation specifically impairs hippocampal encoding, and that coherent schemas reduce the hippocampal consolidation period after learning, we predicted that sleep loss in the pre-training night would mainly affect schema-unrelated information, whereas sleep restriction in the post-training night would have similar effects on schema-related and unrelated information. Here, we tested this hypothesis by presenting participants with face-face associations that could be semantically related or unrelated under different sleep conditions: normal sleep before and after training, and acute sleep restriction either before or after training. Memory was tested one day after training, just after introducing an interference task, and two days later, without any interference. Significant results were evident in the second retesting session. In particular, sleep restriction before training enhanced memory for semantically congruent events to the detriment of memory for unrelated events, supporting the specific role of sleep in hippocampal memory encoding. Unexpectedly, sleep restriction after training enhanced memory for both related and unrelated events. Although this finding may suggest poorer encoding during the interference task, this hypothesis should be specifically tested in future experiments. Altogether, the present results support a framework in which encoding processes seem to be more vulnerable to sleep loss than consolidation processes. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Semantic congruence affects hippocampal response to repetition of visual associations.

    PubMed

    McAndrews, Mary Pat; Girard, Todd A; Wilkins, Leanne K; McCormick, Cornelia

    2016-09-01

    Recent research has shown complementary engagement of the hippocampus and medial prefrontal cortex (mPFC) in encoding and retrieving associations based on pre-existing or experimentally-induced schemas, such that the latter supports schema-congruent information whereas the former is more engaged for incongruent or novel associations. Here, we attempted to explore some of the boundary conditions in the relative involvement of those structures in short-term memory for visual associations. The current literature is based primarily on intentional evaluation of schema-target congruence and on study-test paradigms with relatively long delays between learning and retrieval. We used a continuous recognition paradigm to investigate hippocampal and mPFC activation to first and second presentations of scene-object pairs as a function of semantic congruence between the elements (e.g., beach-seashell versus schoolyard-lamp). All items were identical at first and second presentation, and the context scene, which was presented 500 ms prior to the appearance of the target object, was incidental to the task, which required a recognition response to the central target only. Very short lags (2-8 intervening stimuli) occurred between presentations. Encoding the targets with congruent contexts was associated with increased activation in visual cortical regions at initial presentation and faster response time at repetition, but we did not find enhanced activation in mPFC relative to incongruent stimuli at either presentation. We did observe enhanced activation in the right anterior hippocampus, as well as regions in visual and lateral temporal and frontal cortical regions, for the repetition of incongruent scene-object pairs. This pattern demonstrates rapid and incidental effects of schema processing in hippocampal, but not mPFC, engagement during continuous recognition. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Acquired prior knowledge modulates audiovisual integration.

    PubMed

    Van Wanrooij, Marc M; Bremen, Peter; John Van Opstal, A

    2010-05-01

    Orienting responses to audiovisual events in the environment can benefit markedly by the integration of visual and auditory spatial information. However, logically, audiovisual integration would only be considered successful for stimuli that are spatially and temporally aligned, as these would be emitted by a single object in space-time. As humans do not have prior knowledge about whether novel auditory and visual events do indeed emanate from the same object, such information needs to be extracted from a variety of sources. For example, expectation about alignment or misalignment could modulate the strength of multisensory integration. If evidence from previous trials would repeatedly favour aligned audiovisual inputs, the internal state might also assume alignment for the next trial, and hence react to a new audiovisual event as if it were aligned. To test for such a strategy, subjects oriented a head-fixed pointer as fast as possible to a visual flash that was consistently paired, though not always spatially aligned, with a co-occurring broadband sound. We varied the probability of audiovisual alignment between experiments. Reaction times were consistently lower in blocks containing only aligned audiovisual stimuli than in blocks also containing pseudorandomly presented spatially disparate stimuli. Results demonstrate dynamic updating of the subject's prior expectation of audiovisual congruency. We discuss a model of prior probability estimation to explain the results.
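
    The "model of prior probability estimation" is not specified in the abstract; a minimal sketch of the general idea, under the assumption of a Beta-Bernoulli update of the expected alignment probability across trials:

```python
# Illustrative sketch (not the authors' model): after each trial, update a
# Beta distribution over the probability that the next audiovisual event
# is spatially aligned. The trial sequence below is invented.

def update(alpha, beta, aligned):
    """Conjugate Beta-Bernoulli update for one observed trial."""
    return (alpha + 1, beta) if aligned else (alpha, beta + 1)

alpha, beta = 1.0, 1.0  # flat prior over the alignment probability
for trial_aligned in [True, True, True, False, True]:
    alpha, beta = update(alpha, beta, trial_aligned)

p_aligned = alpha / (alpha + beta)  # posterior-mean expectation
print(round(p_aligned, 2))  # 0.71
```

    In a block containing only aligned stimuli this expectation drifts toward 1, consistent with the reported faster reactions there; interleaving disparate stimuli pulls it back down, weakening the assumed integration of the two cues.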

  13. Semantic Congruence Accelerates the Onset of the Neural Signals of Successful Memory Encoding.

    PubMed

    Packard, Pau A; Rodríguez-Fornells, Antoni; Bunzeck, Nico; Nicolás, Berta; de Diego-Balaguer, Ruth; Fuentemilla, Lluís

    2017-01-11

    As the stream of experience unfolds, our memory system rapidly transforms current inputs into long-lasting meaningful memories. A putative neural mechanism that strongly influences how input elements are transformed into meaningful memory codes relies on the ability to integrate them with existing structures of knowledge or schemas. However, it is not yet clear whether schema-related integration neural mechanisms occur during online encoding. In the current investigation, we examined the encoding-dependent nature of this phenomenon in humans. We showed that actively integrating words with congruent semantic information provided by a category cue enhances memory for words and increases false recall. The memory effect of such active integration with congruent information was robust, even with an interference task occurring right after each encoding word list. In addition, via electroencephalography, we show in 2 separate studies that the onset of the neural signals of successful encoding appeared early (∼400 ms) during the encoding of congruent words. That the neural signals of successful encoding of congruent and incongruent information followed similarly ∼200 ms later suggests that this earlier neural response contributed to memory formation. We propose that the encoding of events that are congruent with readily available contextual semantics can trigger an accelerated onset of the neural mechanisms, supporting the integration of semantic information with the event input. This faster onset would result in a long-lasting and meaningful memory trace for the event but, at the same time, make it difficult to distinguish it from plausible but never encoded events (i.e., related false memories). Conceptual or schema congruence has a strong influence on long-term memory. However, the question of whether schema-related integration neural mechanisms occur during online encoding has yet to be clarified. We investigated the neural mechanisms reflecting how the active

  14. Scene Integration Without Awareness: No Conclusive Evidence for Processing Scene Congruency During Continuous Flash Suppression.

    PubMed

    Moors, Pieter; Boelens, David; van Overwalle, Jaana; Wagemans, Johan

    2016-07-01

    A recent study showed that scenes with an object-background relationship that is semantically incongruent break interocular suppression faster than scenes with a semantically congruent relationship. These results implied that semantic relations between the objects and the background of a scene could be extracted in the absence of visual awareness of the stimulus. In the current study, we assessed the replicability of this finding and tried to rule out an alternative explanation dependent on low-level differences between the stimuli. Furthermore, we used a Bayesian analysis to quantify the evidence in favor of the presence or absence of a scene-congruency effect. Across three experiments, we found no convincing evidence for a scene-congruency effect or a modulation of scene congruency by scene inversion. These findings question the generalizability of previous observations and cast doubt on whether genuine semantic processing of object-background relationships in scenes can manifest during interocular suppression. © The Author(s) 2016.
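
    The abstract does not give the details of the Bayesian analysis; one common way to quantify evidence for the absence of an effect is a BIC-based Bayes factor comparing a model with a congruency term against a null model. The sample size and residual sums of squares below are invented for illustration:

```python
# Hedged sketch: BIC approximation to the Bayes factor in favour of the
# null (no congruency effect). All data values are made up.
import math

def bic(n, rss, k):
    # Gaussian-likelihood BIC for a linear model with k parameters
    return n * math.log(rss / n) + k * math.log(n)

n = 40                # observations (assumed)
rss_null = 210.0      # residual sum of squares, intercept-only model
rss_congruency = 205.0  # adding a congruency term barely helps

# BF01 ~ exp((BIC_effect - BIC_null) / 2); values > 1 favour the null
bf01 = math.exp((bic(n, rss_congruency, 2) - bic(n, rss_null, 1)) / 2)
print(round(bf01, 2))
```

    With these invented numbers the Bayes factor comes out above 3 in favour of the null, the kind of quantified "evidence of absence" that a frequentist non-significant p-value cannot provide.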

  15. Affective Congruence between Sound and Meaning of Words Facilitates Semantic Decision.

    PubMed

    Aryani, Arash; Jacobs, Arthur M

    2018-05-31

    A similarity between the form and meaning of a word (i.e., iconicity) may help language users to more readily access its meaning through direct form-meaning mapping. Previous work has supported this view by providing empirical evidence for this facilitatory effect in sign language, as well as for onomatopoetic words (e.g., cuckoo) and ideophones (e.g., zigzag). Thus, it remains largely unknown whether the beneficial role of iconicity in making semantic decisions can be considered a general feature in spoken language applying also to "ordinary" words in the lexicon. By capitalizing on the affective domain, and in particular arousal, we organized words in two distinctive groups of iconic vs. non-iconic based on the congruence vs. incongruence of their lexical (meaning) and sublexical (sound) arousal. In a two-alternative forced choice task, we asked participants to evaluate the arousal of printed words that were lexically either high or low arousing. In line with our hypothesis, iconic words were evaluated more quickly and more accurately than their non-iconic counterparts. These results indicate a processing advantage for iconic words, suggesting that language users are sensitive to sound-meaning mappings even when words are presented visually and read silently.

  16. Neural Correlates of Audiovisual Integration of Semantic Category Information

    ERIC Educational Resources Information Center

    Hu, Zhonghua; Zhang, Ruiling; Zhang, Qinglin; Liu, Qiang; Li, Hong

    2012-01-01

    Previous studies have found a late frontal-central audiovisual interaction during the time period about 150-220 ms post-stimulus. However, it is unclear to which process is this audiovisual interaction related: to processing of acoustical features or to classification of stimuli? To investigate this question, event-related potentials were recorded…

  17. Distinct functional contributions of primary sensory and association areas to audiovisual integration in object categorization.

    PubMed

    Werner, Sebastian; Noppeney, Uta

    2010-02-17

    Multisensory interactions have been demonstrated in a distributed neural system encompassing primary sensory and higher-order association areas. However, their distinct functional roles in multisensory integration remain unclear. This functional magnetic resonance imaging study dissociated the functional contributions of three cortical levels to multisensory integration in object categorization. Subjects actively categorized or passively perceived noisy auditory and visual signals emanating from everyday actions with objects. The experiment included two 2 x 2 factorial designs that manipulated either (1) the presence/absence or (2) the informativeness of the sensory inputs. These experimental manipulations revealed three patterns of audiovisual interactions. (1) In primary auditory cortices (PACs), a concurrent visual input increased the stimulus salience by amplifying the auditory response regardless of task-context. Effective connectivity analyses demonstrated that this automatic response amplification is mediated via both direct and indirect [via superior temporal sulcus (STS)] connectivity to visual cortices. (2) In STS and intraparietal sulcus (IPS), audiovisual interactions sustained the integration of higher-order object features and predicted subjects' audiovisual benefits in object categorization. (3) In the left ventrolateral prefrontal cortex (vlPFC), explicit semantic categorization resulted in suppressive audiovisual interactions as an index for multisensory facilitation of semantic retrieval and response selection. In conclusion, multisensory integration emerges at multiple processing stages within the cortical hierarchy. The distinct profiles of audiovisual interactions dissociate audiovisual salience effects in PACs, formation of object representations in STS/IPS and audiovisual facilitation of semantic categorization in vlPFC. Furthermore, in STS/IPS, the profiles of audiovisual interactions were behaviorally relevant and predicted subjects

  18. Perceived Odor-Taste Congruence Influences Intensity and Pleasantness Differently.

    PubMed

    Amsellem, Sherlley; Ohla, Kathrin

    2016-10-01

    The role of congruence in cross-modal interactions has received little attention. In most experiments involving cross-modal pairs, congruence is conceived of as a binary process according to which cross-modal pairs are categorized as perceptually and/or semantically matching or mismatching. The present study investigated whether odor-taste congruence can be perceived gradually and whether congruence impacts other facets of subjective experience, that is, intensity, pleasantness, and familiarity. To address these questions, we presented food odorants (chicken, orange, and 3 mixtures of the 2) and tastants (savory-salty and sour-sweet) in pairs varying in congruence. Participants were to report the perceived congruence of the pairs along with intensity, pleasantness, and familiarity. We found that participants could perceive distinct congruence levels, thereby favoring a multilevel account of congruence perception. In addition, familiarity and pleasantness followed the same pattern as congruence, while intensity was highest for the most congruent and the most incongruent pairs, whereas intensities of the intermediary-congruent pairs were reduced. Principal component analysis revealed that pleasantness and familiarity form one dimension of the phenomenological experience of odor-taste pairs that is orthogonal to intensity. The results bear implications for understanding the behavioral underpinnings of the perseverance of habitual food choices. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  19. Contribution of prior semantic knowledge to new episodic learning in amnesia.

    PubMed

    Kan, Irene P; Alexander, Michael P; Verfaellie, Mieke

    2009-05-01

    We evaluated whether prior semantic knowledge would enhance episodic learning in amnesia. Subjects studied prices that are either congruent or incongruent with prior price knowledge for grocery and household items and then performed a forced-choice recognition test for the studied prices. Consistent with a previous report, healthy controls' performance was enhanced by price knowledge congruency; however, only a subset of amnesic patients experienced the same benefit. Whereas patients with relatively intact semantic systems, as measured by an anatomical measure (i.e., lesion involvement of anterior and lateral temporal lobes), experienced a significant congruency benefit, patients with compromised semantic systems did not experience a congruency benefit. Our findings suggest that when prior knowledge structures are intact, they can support acquisition of new episodic information by providing frameworks into which such information can be incorporated.

  20. Unconscious semantic activation depends on feature-specific attention allocation.

    PubMed

    Spruyt, Adriaan; De Houwer, Jan; Everaert, Tom; Hermans, Dirk

    2012-01-01

    We examined whether semantic activation by subliminally presented stimuli is dependent upon the extent to which participants assign attention to specific semantic stimulus features and stimulus dimensions. Participants pronounced visible target words that were preceded by briefly presented, masked prime words. Both affective and non-affective semantic congruence of the prime-target pairs were manipulated under conditions that either promoted selective attention for affective stimulus information or selective attention for non-affective semantic stimulus information. In line with our predictions, results showed that affective congruence had a clear impact on word pronunciation latencies only if participants were encouraged to assign attention to the affective stimulus dimension. In contrast, non-affective semantic relatedness of the prime-target pairs produced no priming at all. Our findings are consistent with the hypothesis that unconscious activation of (affective) semantic information is modulated by feature-specific attention allocation. Copyright © 2011 Elsevier B.V. All rights reserved.

  1. Interactions between auditory and visual semantic stimulus classes: evidence for common processing networks for speech and body actions.

    PubMed

    Meyer, Georg F; Greenlee, Mark; Wuerger, Sophie

    2011-09-01

    Incongruencies between auditory and visual signals negatively affect human performance and cause selective activation in neuroimaging studies; therefore, they are increasingly used to probe audiovisual integration mechanisms. An open question is whether the increased BOLD response reflects computational demands in integrating mismatching low-level signals or reflects simultaneous unimodal conceptual representations of the competing signals. To address this question, we explore the effect of semantic congruency within and across three signal categories (speech, body actions, and unfamiliar patterns) for signals with matched low-level statistics. In a localizer experiment, unimodal (auditory and visual) and bimodal stimuli were used to identify ROIs. All three semantic categories cause overlapping activation patterns. We find no evidence for areas that show greater BOLD response to bimodal stimuli than predicted by the sum of the two unimodal responses. Conjunction analysis of the unimodal responses in each category identifies a network including posterior temporal, inferior frontal, and premotor areas. Semantic congruency effects are measured in the main experiment. We find that incongruent combinations of two meaningful stimuli (speech and body actions) but not combinations of meaningful with meaningless stimuli lead to increased BOLD response in the posterior STS (pSTS) bilaterally, the left SMA, the inferior frontal gyrus, the inferior parietal lobule, and the anterior insula. These interactions are not seen in premotor areas. Our findings are consistent with the hypothesis that pSTS and frontal areas form a recognition network that combines sensory categorical representations (in pSTS) with action hypothesis generation in inferior frontal gyrus/premotor areas. We argue that the same neural networks process speech and body actions.

  2. Enemies and friends in the neighborhood: orthographic similarity effects in semantic categorization.

    PubMed

    Pecher, Diane; Zeelenberg, René; Wagenmakers, Eric-Jan

    2005-01-01

    Studies investigating orthographic similarity effects in semantic tasks have produced inconsistent results. The authors investigated orthographic similarity effects in animacy decision and, in contrast with previous studies, took semantic congruency into account. In Experiments 1 and 2, performance on a target (cat) was better if a previously studied neighbor (rat) was congruent (i.e., belonged to the same animate-inanimate category) than if it was incongruent (e.g., mat). In Experiments 3 and 4, performance was better for targets with more preexisting congruent neighbors than for targets with more preexisting incongruent neighbors. These results demonstrate that orthographic similarity effects in semantic categorization are conditional on semantic congruency. This strongly suggests that semantic information becomes available before orthographic processing has been completed.

  3. The contribution of perceptual factors and training on varying audiovisual integration capacity.

    PubMed

    Wilbiks, Jonathan M P; Dyson, Benjamin J

    2018-06-01

    The suggestion that the capacity of audiovisual integration has an upper limit of 1 was challenged in 4 experiments using perceptual factors and training to enhance the binding of auditory and visual information. Participants were required to note a number of specific visual dot locations that changed in polarity when a critical auditory stimulus was presented, under relatively fast (200-ms stimulus onset asynchrony [SOA]) and slow (700-ms SOA) rates of presentation. In Experiment 1, transient cross-modal congruency between the brightness of polarity change and pitch of the auditory tone was manipulated. In Experiment 2, sustained chunking was enabled on certain trials by connecting varying dot locations with vertices. In Experiment 3, training was employed to determine if capacity would increase through repeated experience with an intermediate presentation rate (450 ms). Estimates of audiovisual integration capacity (K) were larger than 1 during cross-modal congruency at slow presentation rates (Experiment 1), during perceptual chunking at slow and fast presentation rates (Experiment 2), and during an intermediate presentation rate posttraining (Experiment 3). Finally, Experiment 4 showed a linear increase in K using SOAs ranging from 100 to 600 ms, suggestive of quantitative rather than qualitative changes in the mechanisms of audiovisual integration as a function of presentation rate. The data compromise the suggestion that the capacity of audiovisual integration is limited to 1 and suggest that the ability to bind sounds to sights is contingent on individual and environmental factors. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
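Capacity estimates of this kind are commonly computed with a Cowan-style formula, K = set size × (hit rate − false-alarm rate). Whether this exact variant matches the authors' procedure is an assumption; the sketch simply shows how a K above 1 falls out of change-detection accuracy.

```python
def capacity_k(set_size, hit_rate, false_alarm_rate):
    """Cowan-style capacity estimate: K = N * (H - FA).

    An illustrative assumption about the estimator, not necessarily
    the exact variant used in the study above.
    """
    return set_size * (hit_rate - false_alarm_rate)

# Four dot locations with 80% hits and 20% false alarms yield a
# capacity estimate near 2.4, i.e. well above a limit of 1.
k = capacity_k(4, 0.80, 0.20)
```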

  4. Bimodal emotion congruency is critical to preverbal infants' abstract rule learning.

    PubMed

    Tsui, Angeline Sin Mei; Ma, Yuen Ki; Ho, Anna; Chow, Hiu Mei; Tseng, Chia-huei

    2016-05-01

    Extracting general rules from specific examples is important, as we often face the same challenge presented in various formats. Previous studies have found that bimodal presentation of grammar-like rules (e.g. ABA) enhanced 5-month-olds' capacity to acquire a rule that infants failed to learn when the rule was presented with visual presentation of the shapes alone (circle-triangle-circle) or auditory presentation of the syllables (la-ba-la) alone. However, the mechanisms and constraints for this bimodal learning facilitation are still unknown. In this study, we used audio-visual relational congruency within bimodal stimulation to disentangle possible facilitation sources. We exposed 8- to 10-month-old infants to an AAB sequence consisting of visual faces with affective expressions and/or auditory voices conveying emotions. Our results showed that infants were able to distinguish the learned AAB rule from other novel rules under bimodal stimulation when the affects in audio and visual stimuli were congruently paired (Experiments 1A and 2A). Infants failed to acquire the same rule when audio-visual stimuli were incongruently matched (Experiment 2B) and when only the visual (Experiment 1B) or the audio (Experiment 1C) stimuli were presented. Our results highlight that bimodal facilitation in infant rule learning depends not only on better statistical probability and redundant sensory information, but also on the relational congruency of audio-visual information. A video abstract of this article can be viewed at https://m.youtube.com/watch?v=KYTyjH1k9RQ. © 2015 John Wiley & Sons Ltd.

  5. Papers in Semantics. Working Papers in Linguistics No. 49.

    ERIC Educational Resources Information Center

    Yoon, Jae-Hak, Ed.; Kathol, Andreas, Ed.

    1996-01-01

    Papers on semantic theory and research include: "Presupposition, Congruence, and Adverbs of Quantification" (Mike Calcagno); "A Unified Account of '(Ta)myen'-Conditionals in Korean" (Chan Chung); "Spanish 'imperfecto' and 'preterito': Truth Conditions and Aktionsart Effects in a Situation Semantics" (Alicia Cipria,…

  6. Gender affects semantic competition: the effect of gender in a non-gender-marking language.

    PubMed

    Fukumura, Kumiko; Hyönä, Jukka; Scholfield, Merete

    2013-07-01

    English speakers tend to produce fewer pronouns when a referential competitor has the same gender as the referent than otherwise. Traditionally, this gender congruence effect has been explained in terms of ambiguity avoidance (e.g., Arnold, Eisenband, Brown-Schmidt, & Trueswell, 2000; Fukumura, Van Gompel, & Pickering, 2010). However, an alternative hypothesis is that the competitor's gender congruence affects semantic competition, making the referent less accessible relative to when the competitor has a different gender (Arnold & Griffin, 2007). Experiment 1 found that even in Finnish, which is a nongendered language, the competitor's gender congruence results in fewer pronouns, supporting the semantic competition account. In Experiment 2, Finnish native speakers took part in an English version of the same experiment. The effect of gender congruence was larger in Experiment 2 than in Experiment 1, suggesting that the presence of a same-gender competitor resulted in a larger reduction in pronoun use in English than in Finnish. In contrast, other nonlinguistic similarity had similar effects in both experiments. This indicates that the effect of gender congruence in English is not entirely driven by semantic competition: Speakers also avoid gender-ambiguous pronouns. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  7. Computationally Efficient Clustering of Audio-Visual Meeting Data

    NASA Astrophysics Data System (ADS)

    Hung, Hayley; Friedland, Gerald; Yeo, Chuohao

    This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors, comprising a limited number of cameras and microphones. We first demonstrate computationally efficient algorithms that can identify who spoke and when, a problem in speech processing known as speaker diarization. We also extract visual activity features efficiently from MPEG4 video by taking advantage of the processing that was already done for video compression. Then, we present a method of associating the audio-visual data together so that the content of each participant can be managed individually. The methods presented in this article can be used as a principal component that enables many higher-level semantic analysis tasks needed in search, retrieval, and navigation.
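As a hedged illustration of the association step described above, the sketch below assigns each diarized speech turn to the participant with the highest mean visual activity during that turn. The turn format, activity matrix, and `associate_audio_visual` helper are hypothetical; the chapter's actual algorithms operate on real diarization output and MPEG-4 compressed-domain features, which are not reproduced here.

```python
import numpy as np

def associate_audio_visual(speech_turns, visual_activity):
    """Associate diarized speech turns with participants.

    speech_turns: list of (start_frame, end_frame) tuples from a
        speaker-diarization step (hypothetical format).
    visual_activity: array of shape (n_participants, n_frames),
        e.g. per-participant motion energy per video frame.
    Returns one participant index per turn: the participant who was
    most visually active while that turn was being spoken.
    """
    labels = []
    for start, end in speech_turns:
        # mean visual activity per participant over the turn's frames
        means = visual_activity[:, start:end].mean(axis=1)
        labels.append(int(np.argmax(means)))
    return labels
```

A naive heuristic like this only works when speakers gesture or move while talking; the chapter's point is that such cues can be extracted cheaply from the compressed video stream.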

  8. Neural correlates of audiovisual speech processing in a second language.

    PubMed

    Barrós-Loscertales, Alfonso; Ventura-Campos, Noelia; Visser, Maya; Alsius, Agnès; Pallier, Christophe; Avila Rivera, César; Soto-Faraco, Salvador

    2013-09-01

    Neuroimaging studies of audiovisual speech processing have exclusively addressed listeners' native language (L1). Yet, several behavioural studies now show that AV processing plays an important role in non-native (L2) speech perception. The current fMRI study measured brain activity during auditory, visual, audiovisual congruent and audiovisual incongruent utterances in L1 and L2. BOLD responses to congruent AV speech in the pSTS were stronger than in either unimodal condition in both L1 and L2. Yet no differences in AV processing were expressed according to the language background in this area. Instead, the regions in the bilateral occipital lobe had a stronger congruency effect on the BOLD response (congruent higher than incongruent) in L2 as compared to L1. According to these results, language background differences are predominantly expressed in these unimodal regions, whereas the pSTS is similarly involved in AV integration regardless of language dominance. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. Indexing method of digital audiovisual medical resources with semantic Web integration.

    PubMed

    Cuggia, Marc; Mougin, Fleur; Le Beux, Pierre

    2003-01-01

    Digitalization of audio-visual resources combined with the performance of networks offers many possibilities which are the subject of intensive work in the scientific and industrial sectors. Indexing such resources is a major challenge. Recently, the Motion Pictures Expert Group (MPEG) has been developing MPEG-7, a standard for describing multimedia content. The goal of this standard is to develop a rich set of standardized tools to enable fast, efficient retrieval from digital archives or the filtering of audiovisual broadcasts on the Internet. How could this kind of technology be used in the medical context? In this paper, we propose a simpler indexing system, based on the Dublin Core standard and compliant with MPEG-7. We use MeSH and the UMLS to introduce conceptual navigation. We also present a video platform which enables encoding and gives access to audio-visual resources in streaming mode.

  10. Indexing method of digital audiovisual medical resources with semantic Web integration.

    PubMed

    Cuggia, Marc; Mougin, Fleur; Le Beux, Pierre

    2005-03-01

    Digitalization of audiovisual resources and network capability offer many possibilities which are the subject of intensive work in scientific and industrial sectors. Indexing such resources is a major challenge. Recently, the Motion Pictures Expert Group (MPEG) has developed MPEG-7, a standard for describing multimedia content. The goal of this standard is to develop a rich set of standardized tools to enable efficient retrieval from digital archives or the filtering of audiovisual broadcasts on the Internet. How could this kind of technology be used in the medical context? In this paper, we propose a simpler indexing system, based on the Dublin Core standard and compliant to MPEG-7. We use MeSH and the UMLS to introduce conceptual navigation. We also present a video-platform which enables encoding and gives access to audiovisual resources in streaming mode.
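A Dublin Core description of the kind the paper proposes might be sketched as below. The field choices, namespace handling, and `dublin_core_record` helper are illustrative assumptions, not the authors' actual schema; only the Dublin Core element namespace itself is standard.

```python
import xml.etree.ElementTree as ET

# Standard Dublin Core element set namespace
DC_NS = "http://purl.org/dc/elements/1.1/"

def dublin_core_record(title, creator, subject_mesh, identifier):
    """Build a minimal Dublin Core XML description for an audiovisual
    resource, with MeSH terms carried as dc:subject values.

    All field choices here are illustrative assumptions.
    """
    ET.register_namespace("dc", DC_NS)
    root = ET.Element("record")
    for tag, value in [("title", title), ("creator", creator),
                       ("identifier", identifier)]:
        ET.SubElement(root, f"{{{DC_NS}}}{tag}").text = value
    # one dc:subject element per controlled-vocabulary term
    for term in subject_mesh:
        ET.SubElement(root, f"{{{DC_NS}}}subject").text = term
    return ET.tostring(root, encoding="unicode")
```

Using a controlled vocabulary such as MeSH in `dc:subject` is what makes the conceptual navigation the paper describes possible: records sharing a concept can be linked through the thesaurus hierarchy rather than through free-text matching.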

  11. Dissociating Verbal and Nonverbal Audiovisual Object Processing

    ERIC Educational Resources Information Center

    Hocking, Julia; Price, Cathy J.

    2009-01-01

    This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same…

  12. Auditory conflict and congruence in frontotemporal dementia.

    PubMed

    Clark, Camilla N; Nicholas, Jennifer M; Agustus, Jennifer L; Hardy, Christopher J D; Russell, Lucy L; Brotherhood, Emilie V; Dick, Katrina M; Marshall, Charles R; Mummery, Catherine J; Rohrer, Jonathan D; Warren, Jason D

    2017-09-01

    Impaired analysis of signal conflict and congruence may contribute to diverse socio-emotional symptoms in frontotemporal dementias; however, the underlying mechanisms have not been defined. Here we addressed this issue in patients with behavioural variant frontotemporal dementia (bvFTD; n = 19) and semantic dementia (SD; n = 10) relative to healthy older individuals (n = 20). We created auditory scenes in which semantic and emotional congruity of constituent sounds were independently probed; associated tasks controlled for auditory perceptual similarity, scene parsing and semantic competence. Neuroanatomical correlates of auditory congruity processing were assessed using voxel-based morphometry. Relative to healthy controls, both the bvFTD and SD groups had impaired semantic and emotional congruity processing (after taking auditory control task performance into account) and reduced affective integration of sounds into scenes. Grey matter correlates of auditory semantic congruity processing were identified in distributed regions encompassing prefrontal, parieto-temporal and insular areas, and correlates of auditory emotional congruity in partly overlapping temporal, insular and striatal regions. Our findings suggest that decoding of auditory signal relatedness may probe a generic cognitive mechanism and neural architecture underpinning frontotemporal dementia syndromes. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  13. Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals

    PubMed Central

    Lidestam, Björn; Rönnberg, Jerker

    2016-01-01

    The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context. PMID:27317667

  14. Evaluative priming in a semantic flanker task: ERP evidence for a mutual facilitation explanation.

    PubMed

    Schmitz, Melanie; Wentura, Dirk; Brinkmann, Thorsten A

    2014-03-01

    In semantic flanker tasks, target categorization response times are affected by the semantic compatibility of the flanker and target. With positive and negative category exemplars, we investigated the influence of evaluative congruency (whether flanker and target share evaluative valence) on the flanker effect, using behavioral and electrophysiological measures. We hypothesized a moderation of the flanker effect by evaluative congruency on the basis of the assumption that evaluatively congruent concepts mutually facilitate each other's activation (see Schmitz & Wentura in Journal of Experimental Psychology: Learning, Memory, and Cognition 38:984-1000, 2012). Applying an onset delay of 50 ms for the flanker, we aimed to decrease the facilitative effect of an evaluatively congruent flanker on target encoding and, at the same time, increase the facilitative effect of an evaluatively congruent target on flanker encoding. As a consequence of increased flanker activation in the case of evaluative congruency, we expected a semantically incompatible flanker to interfere with the target categorization to a larger extent (as compared with an evaluatively incongruent pairing). Confirming our hypotheses, the flanker effect significantly depended on evaluative congruency, in both mean response times and N2 mean amplitudes. Thus, the present study provided behavioral and electrophysiological evidence for the mutual facilitation of evaluatively congruent concepts. Implications for the representation of evaluative connotations of semantic concepts are discussed.

  15. A matter of attention: Crossmodal congruence enhances and impairs performance in a novel trimodal matching paradigm.

    PubMed

    Misselhorn, Jonas; Daume, Jonathan; Engel, Andreas K; Friese, Uwe

    2016-07-29

    A novel crossmodal matching paradigm including vision, audition, and somatosensation was developed in order to investigate the interaction between attention and crossmodal congruence in multisensory integration. To that end, all three modalities were stimulated concurrently while a bimodal focus was defined blockwise. Congruence between stimulus intensity changes in the attended modalities had to be evaluated. We found that crossmodal congruence improved performance if both, the attended modalities and the task-irrelevant distractor were congruent. If the attended modalities were incongruent, the distractor impaired performance due to its congruence relation to one of the attended modalities. Between attentional conditions, magnitudes of crossmodal enhancement or impairment differed. Largest crossmodal effects were seen in visual-tactile matching, intermediate effects for audio-visual and smallest effects for audio-tactile matching. We conclude that differences in crossmodal matching likely reflect characteristics of multisensory neural network architecture. We discuss our results with respect to the timing of perceptual processing and state hypotheses for future physiological studies. Finally, etiological questions are addressed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Higher Language Ability is Related to Angular Gyrus Activation Increase During Semantic Processing, Independent of Sentence Incongruency.

    PubMed

    Van Ettinger-Veenstra, Helene; McAllister, Anita; Lundberg, Peter; Karlsson, Thomas; Engström, Maria

    2016-01-01

    This study investigates the relation between individual language ability and neural semantic processing. Our aim was to explore whether high-level language ability correlates with decreased activation in language-specific regions or rather with increased activation in supporting language regions during sentence processing. Moreover, we were interested in whether the observed neural activation patterns are modulated by semantic incongruency, similarly to previously observed changes upon syntactic congruency modulation. We investigated 27 healthy adults with a sentence reading task, which tapped language comprehension and inference and modulated sentence congruency, employing functional magnetic resonance imaging (fMRI). We assessed the relation between neural activation, congruency modulation, and test performance on a high-level language ability assessment with multiple regression analysis. Our results showed increased activation, related to high language ability, in the left-hemispheric angular gyrus extending to the temporal lobe. This effect was independent of semantic congruency, and no significant relation between language ability and incongruency modulation was observed. Furthermore, there was a significant increase of activation in the inferior frontal gyrus (IFG) bilaterally when the sentences were incongruent, indicating that processing incongruent sentences was more demanding than processing congruent sentences and required increased activation in language regions. The correlation of high-level language ability with increased rather than decreased activation in the left angular gyrus, a region specific to language processing, is opposed to what the neural efficiency hypothesis would predict. We conclude that there is no evidence for an interaction between semantic-congruency-related brain activation and high-level language performance, even though the semantically incongruent condition proved more demanding and evoked more neural activation.

  17. MPEG-7-based description infrastructure for an audiovisual content analysis and retrieval system

    NASA Astrophysics Data System (ADS)

    Bailer, Werner; Schallauer, Peter; Hausenblas, Michael; Thallinger, Georg

    2005-01-01

    We present a case study of establishing a description infrastructure for an audiovisual content-analysis and retrieval system. The description infrastructure consists of an internal metadata model and an access tool for using it. Based on an analysis of requirements, we selected MPEG-7, out of a set of candidates, as the basis of our metadata model. The openness and generality of MPEG-7 allow its use in a broad range of applications, but increase complexity and hinder interoperability. Profiling has been proposed as a solution, with the focus on selecting and constraining description tools. Semantic constraints are currently only described in textual form. Conformance in terms of semantics thus cannot be evaluated automatically, and mappings between different profiles can only be defined manually. As a solution, we propose an approach to formalize the semantic constraints of an MPEG-7 profile using a formal vocabulary expressed in OWL, which allows automated processing of semantic constraints. We have defined the Detailed Audiovisual Profile as the profile to be used in our metadata model, and we show how some of the semantic constraints of this profile can be formulated using ontologies. To work practically with the metadata model, we have implemented an MPEG-7 library and a client/server document access infrastructure.
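The idea of making profile constraints machine-checkable can be sketched with a toy conformance checker. Real profiles such as the Detailed Audiovisual Profile constrain MPEG-7 description tools via OWL ontologies, which this hypothetical `check_profile_constraints` helper does not attempt to reproduce; it only illustrates the select-and-constrain idea.

```python
def check_profile_constraints(description, required, allowed):
    """Check an MPEG-7-style description (tool name -> value) against
    a profile that selects and constrains description tools.

    A toy formalization sketch under assumed inputs; real semantic
    constraints are expressed in OWL, not as Python sets.
    Returns a list of human-readable violations (empty = conformant).
    """
    violations = []
    # a profile selects tools: everything it requires must be present
    for tool in sorted(required):
        if tool not in description:
            violations.append(f"missing required tool: {tool}")
    # and it constrains tools: nothing outside the profile may appear
    for tool in sorted(description):
        if tool not in allowed:
            violations.append(f"tool not allowed by profile: {tool}")
    return violations
```

The benefit the paper argues for is exactly this kind of automation: once constraints are formal rather than textual, conformance checking and profile-to-profile mapping no longer need to be done by hand.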

  18. Opposite ERP effects for conscious and unconscious semantic processing under continuous flash suppression.

    PubMed

    Yang, Yung-Hao; Zhou, Jifan; Li, Kuei-An; Hung, Tifan; Pegna, Alan J; Yeh, Su-Ling

    2017-09-01

    We examined whether semantic processing occurs without awareness using continuous flash suppression (CFS). In two priming tasks, participants were required to judge whether a target was a word or a non-word, and to report whether the masked prime was visible. Experiment 1 manipulated the lexical congruency between the prime-target pairs and Experiment 2 manipulated their semantic relatedness. Despite the absence of behavioral priming effects (Experiment 1), the ERP results revealed that an N4 component was sensitive to the prime-target lexical congruency (Experiment 1) and semantic relatedness (Experiment 2) when the prime was rendered invisible under CFS. However, these results were reversed with respect to those that emerged when the stimuli were perceived consciously. Our findings suggest that some form of lexical and semantic processing can occur during CFS-induced unawareness, but is associated with different electrophysiological outcomes. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Enemies and Friends in the Neighborhood: Orthographic Similarity Effects in Semantic Categorization

    ERIC Educational Resources Information Center

    Pecher, Diane; Zeelenberg, Rene; Wagenmakers, Eric-Jan

    2005-01-01

    Studies investigating orthographic similarity effects in semantic tasks have produced inconsistent results. The authors investigated orthographic similarity effects in animacy decision and in contrast with previous studies, they took semantic congruency into account. In Experiments 1 and 2, performance to a target (cat) was better if a previously…

  20. Audio-visual speech perception: a developmental ERP investigation

    PubMed Central

    Knowland, Victoria CP; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael SC

    2014-01-01

    Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development. PMID:24176002

  1. Semantic Facilitation in Category and Action Naming: Testing the Message-Congruency Account

    ERIC Educational Resources Information Center

    Kuipers, Jan-Rouke; La Heij, Wido

    2008-01-01

    Basic-level picture naming is hampered by the presence of a semantically related context word (compared to an unrelated word), whereas picture categorization is facilitated by a semantically related context word. This reversal of the semantic context effect has been explained by assuming that in categorization tasks, basic-level distractor words…

  2. Parameters of semantic multisensory integration depend on timing and modality order among people on the autism spectrum: evidence from event-related potentials.

    PubMed

    Russo, N; Mottron, L; Burack, J A; Jemel, B

    2012-07-01

    Individuals with autism spectrum disorders (ASD) report difficulty integrating simultaneously presented visual and auditory stimuli (Iarocci & McDonald, 2006), albeit showing enhanced perceptual processing of unisensory stimuli, as well as an enhanced role of perception in higher-order cognitive tasks (Enhanced Perceptual Functioning (EPF) model; Mottron, Dawson, Soulières, Hubert, & Burack, 2006). Individuals with an ASD also integrate auditory-visual inputs over longer periods of time than matched typically developing (TD) peers (Kwakye, Foss-Feig, Cascio, Stone & Wallace, 2011). To tease apart this dichotomy of extended multisensory processing and enhanced perceptual processing, we used behavioral and electrophysiological measurements of audio-visual integration among persons with ASD. Thirteen TD and 14 autistic participants matched on IQ completed a forced-choice multisensory semantic congruence task requiring speeded responses regarding the congruence or incongruence of animal sounds and pictures. Stimuli were presented simultaneously or sequentially at various stimulus onset asynchronies in both auditory-first and visual-first presentations. No group differences were noted in reaction time (RT) or accuracy. The latency at which congruent and incongruent waveforms diverged was the component of interest. In simultaneous presentations, congruent and incongruent waveforms diverged earlier (circa 150 ms) among persons with ASD than among TD individuals (around 350 ms). In sequential presentations, asymmetries in the timing of neuronal processing were noted in ASD which depended on stimulus order, but these were consistent with the nature of specific perceptual strengths in this group. These findings extend the Enhanced Perceptual Functioning model to the multisensory domain, and provide a more nuanced context for interpreting ERP findings of impaired semantic processing in ASD. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Higher Language Ability is Related to Angular Gyrus Activation Increase During Semantic Processing, Independent of Sentence Incongruency

    PubMed Central

    Van Ettinger-Veenstra, Helene; McAllister, Anita; Lundberg, Peter; Karlsson, Thomas; Engström, Maria

    2016-01-01

    This study investigates the relation between individual language ability and neural semantic processing abilities. Our aim was to explore whether high-level language ability would correlate with decreased activation in language-specific regions or rather with increased activation in supporting language regions during the processing of sentences. Moreover, we were interested in whether the observed neural activation patterns are modulated by semantic incongruency, similarly to previously observed changes upon syntactic congruency modulation. We investigated 27 healthy adults with a sentence reading task—which tapped language comprehension and inference, and modulated sentence congruency—employing functional magnetic resonance imaging (fMRI). We assessed the relation between neural activation, congruency modulation, and test performance on a high-level language ability assessment with multiple regression analysis. Our results showed increased activation in the left-hemispheric angular gyrus, extending to the temporal lobe, related to high language ability. This effect was independent of semantic congruency, and no significant relation between language ability and incongruency modulation was observed. Furthermore, there was a significant increase of activation in the inferior frontal gyrus (IFG) bilaterally when the sentences were incongruent, indicating that processing incongruent sentences was more demanding than processing congruent sentences and required increased activation in language regions. The correlation of high-level language ability with increased rather than decreased activation in the left angular gyrus, a region specific for language processing, runs counter to what the neural efficiency hypothesis would predict. We conclude that no evidence was found for an interaction between semantic-congruency-related brain activation and high-level language performance, even though the semantically incongruent condition was shown to be more demanding and to evoke more neural activation.

  4. Congruence Reconsidered.

    ERIC Educational Resources Information Center

    Tudor, Keith; Worrall, Mike

    1994-01-01

    Discusses Carl Rogers' definitions of congruence, and identifies four specific requirements for the concept and practice of therapeutic congruence. Examines the interface between congruence and the other necessary and sufficient conditions of change, drawing on examples from practice. (JPS)

  5. Semantic transcoding of video based on regions of interest

    NASA Astrophysics Data System (ADS)

    Lim, Jeongyeon; Kim, Munchurl; Kim, Jong-Nam; Kim, Kyeongsoo

    2003-06-01

    Traditional transcoding of multimedia has been performed from the perspectives of user terminal capabilities, such as display size and decoding processing power, and of network resources, such as available network bandwidth and quality of service (QoS). The adaptation (or transcoding) of multimedia content to such constraints has been achieved by frame dropping and resizing of audiovisual content, as well as by reducing SNR (Signal-to-Noise Ratio) values to save the resulting bitrate. In addition to such traditional transcoding, performed from the perspective of the user's environment, we incorporate a method of semantic transcoding of audiovisual content based on regions of interest (ROI) from the user's perspective. Users can designate the parts of images or video that interest them so that the corresponding video content can be adapted with a focus on the user's ROI. We incorporate the MPEG-21 DIA (Digital Item Adaptation) framework, in which the semantic information of the user's ROI is represented and delivered to the content provider side as XDI (context digital item). Our representation schema for the semantic information of the user's ROI has been adopted in the MPEG-21 DIA Adaptation Model. In this paper, we present the usage of semantic information about the user's ROI for transcoding and show our system implementation with experimental results.
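
    The ROI-driven adaptation described in this abstract can be illustrated with a minimal sketch. The function below is hypothetical (it is not part of the MPEG-21 DIA framework or the authors' system): given a user-designated ROI and a terminal's display size, it clamps the ROI to the frame and computes the uniform scale factor that fits the cropped region onto the display.

```python
# Hypothetical sketch of ROI-focused adaptation: crop to the user's region
# of interest, then scale the crop to the terminal's display. Illustrative
# only; names and logic are assumptions, not the paper's implementation.
from dataclasses import dataclass


@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int


def adapt_to_roi(frame: Rect, roi: Rect, display_w: int, display_h: int) -> tuple[Rect, float]:
    """Return the crop window (ROI clamped to the frame) and the uniform
    scale factor needed to fit that crop onto the display."""
    # Clamp the ROI to the frame boundaries.
    x = max(roi.x, frame.x)
    y = max(roi.y, frame.y)
    w = min(roi.x + roi.w, frame.x + frame.w) - x
    h = min(roi.y + roi.h, frame.y + frame.h) - y
    crop = Rect(x, y, w, h)
    # Uniform scaling so the crop fits the display in both dimensions.
    scale = min(display_w / crop.w, display_h / crop.h)
    return crop, scale
```

    A real transcoder would additionally adjust bitrate and frame rate to the network constraints mentioned above; this sketch covers only the spatial ROI step.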

  6. Neural initialization of audiovisual integration in prereaders at varying risk for developmental dyslexia.

    PubMed

    Karipidis, Iliana I.; Pleisch, Georgette; Röthlisberger, Martina; Hofstetter, Christoph; Dornbierer, Dario; Stämpfli, Philipp; Brem, Silvia

    2017-02-01

    Learning letter-speech sound correspondences is a major step in reading acquisition and is severely impaired in children with dyslexia. Up to now, it has remained largely unknown how quickly neural networks adopt specific functions during audiovisual integration of linguistic information when prereading children learn letter-speech sound correspondences. Here, we simulated the process of learning letter-speech sound correspondences in 20 prereading children (6.13-7.17 years) at varying risk for dyslexia by training artificial letter-speech sound correspondences within a single experimental session. Subsequently, we simultaneously acquired event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI) scans during implicit audiovisual presentation of trained and untrained pairs. Audiovisual integration of trained pairs correlated with individual learning rates in right superior temporal, left inferior temporal, and bilateral parietal areas, and with phonological awareness in left temporal areas. Correspondingly, a differential left-lateralized parietooccipitotemporal ERP at 400 ms for trained pairs correlated with learning achievement and familial risk. Finally, a late (650 ms) posterior negativity indicating audiovisual congruency of trained pairs was associated with increased fMRI activation in the left occipital cortex. Taken together, a short (<30 min) letter-speech sound training initializes audiovisual integration in neural systems that are responsible for processing linguistic information in proficient readers. To conclude, the ability to learn grapheme-phoneme correspondences, the familial history of reading disability, and the phonological awareness of prereading children account for the degree of audiovisual integration in a distributed brain network. Such findings on emerging linguistic audiovisual integration could allow for distinguishing between children with typical and atypical reading development. Hum Brain Mapp 38:1038-1055, 2017. © 2016.

  7. Semantic integration of differently asynchronous audio-visual information in videos of real-world events in cognitive processing: an ERP study.

    PubMed

    Liu, Baolin; Wu, Guangning; Wang, Zhongning; Ji, Xiang

    2011-07-01

    In the real world, some of the auditory and visual information received by the human brain is temporally asynchronous. How is such information integrated in cognitive processing in the brain? In this paper, we aimed to study the semantic integration of differently asynchronous audio-visual information in cognitive processing using the ERP (event-related potential) method. Subjects were presented with videos of real-world events in which the auditory and visual information were temporally asynchronous. When the critical action occurred before the sound, sounds incongruous with the preceding critical actions elicited an N400 effect compared to the congruous condition. This result demonstrates that the semantic contextual integration indexed by the N400 also applies to the cognitive processing of multisensory information. In addition, the N400 effect is earlier in latency than in other visually induced N400 studies, showing that cross-modal information is facilitated in time compared with visual information in isolation. When the sound preceded the critical action, a larger late positive wave was observed in the incongruous condition than in the congruous condition. The P600 might represent a reanalysis process in which the mismatch between the critical action and the preceding sound was evaluated, showing that environmental sound may affect the cognitive processing of a visual event. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  8. Interplay Between the Object and Its Symbol: The Size-Congruency Effect

    PubMed Central

    Shen, Manqiong; Xie, Jiushu; Liu, Wenjuan; Lin, Wenjie; Chen, Zhuoming; Marmolejo-Ramos, Fernando; Wang, Ruiming

    2016-01-01

    Grounded cognition suggests that conceptual processing shares cognitive resources with perceptual processing. Hence, conceptual processing should be affected by perceptual processing, and vice versa. The current study explored the relationship between conceptual and perceptual processing of size. Within a pair of words, we manipulated the font size of each word, which was either congruent or incongruent with the actual size of the referred object. In Experiment 1a, participants compared the sizes of objects referred to by word pairs. Higher accuracy was observed in the congruent condition (e.g., word pairs referring to larger objects in larger font sizes) than in the incongruent condition. This is known as the size-congruency effect. In Experiments 1b and 2, participants compared the font sizes of these word pairs. The size-congruency effect was not observed. In Experiments 3a and 3b, participants compared object and font sizes of word pairs depending on a task cue. Results showed that perceptual processing affected conceptual processing, and vice versa. This suggested that the association between conceptual and perceptual processes may be bidirectional but further modulated by semantic processing. Specifically, conceptual processing might only affect perceptual processing when semantic information is activated. PMID:27512529

  9. Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals: Effects of Adding Visual Cues to Auditory Speech Stimuli.

    PubMed

    Moradi, Shahram; Lidestam, Björn; Rönnberg, Jerker

    2016-06-17

    The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context. © The Author(s) 2016.

  10. An investigation of the time course of category congruence and priming distance effects in number classification tasks.

    PubMed

    Perry, Jason R; Lupker, Stephen J

    2012-09-01

    The issue investigated in the present research is the nature of the information that is responsible for producing masked priming effects (e.g., semantic information or stimulus-response [S-R] associations) when responding to number stimuli. This issue was addressed by assessing both the magnitude of the category congruence (priming) effect and the nature of the priming distance effect across trials using single-digit primes and targets. Participants made either magnitude (i.e., whether the number presented was larger or smaller than 5) or identification (i.e., press the left button if the number was either a 1, 2, 3, or 4 or the right button if the number was either a 6, 7, 8, or 9) judgments. The results indicated that, regardless of task instruction, there was a clear priming distance effect and a significantly increasing category congruence effect. These results indicated that both semantic activation and S-R associations play important roles in producing masked priming effects.

  11. The time-course of the cross-modal semantic modulation of visual picture processing by naturalistic sounds and spoken words.

    PubMed

    Chen, Yi-Chuan; Spence, Charles

    2013-01-01

    The time-course of cross-modal semantic interactions between pictures and either naturalistic sounds or spoken words was compared. Participants performed a speeded picture categorization task while hearing a task-irrelevant auditory stimulus presented at various stimulus onset asynchronies (SOAs) with respect to the visual picture. Both naturalistic sounds and spoken words gave rise to cross-modal semantic congruency effects (i.e., facilitation by semantically congruent sounds and inhibition by semantically incongruent sounds, as compared to a baseline noise condition) when the onset of the sound led that of the picture by 240 ms or more. Both naturalistic sounds and spoken words also gave rise to inhibition irrespective of their semantic congruency when presented within 106 ms of the onset of the picture. The peak of this cross-modal inhibitory effect occurred earlier for spoken words than for naturalistic sounds. These results therefore demonstrate that the semantic priming of visual picture categorization by auditory stimuli only occurs when the onset of the sound precedes that of the visual stimulus. The different time-courses observed for naturalistic sounds and spoken words likely reflect the different processing pathways to access the relevant semantic representations.

  12. Automatic summarization of soccer highlights using audio-visual descriptors.

    PubMed

    Raventós, A; Quijada, R; Torres, Luis; Tarrés, Francesc

    2015-01-01

    Automatic summarization of sports video content has been an object of great interest for many years. Although semantic description techniques have been proposed, many approaches still rely on low-level video descriptors that yield quite limited results due to the complexity of the problem and the low capability of the descriptors to represent semantic content. In this paper, a new approach for automatic highlight summarization of soccer videos using audio-visual descriptors is presented. The approach is based on the segmentation of the video sequence into shots that are further analyzed to determine their relevance and interest. Of special interest in the approach is the use of audio information, which provides additional robustness to the overall performance of the summarization system. For every video shot, a set of low- and mid-level audio-visual descriptors is computed and then combined to obtain different relevance measures based on empirical knowledge rules. The final summary is generated by selecting those shots with the highest interest according to the specifications of the user and the results of the relevance measures. A variety of results obtained with real soccer video sequences demonstrate the validity of the approach.
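
    The shot-selection step this abstract describes can be sketched as follows. The descriptor names (audio energy, motion activity) and the weights are illustrative assumptions, not values from the paper; they stand in for the "empirical knowledge rules" that combine the audio-visual descriptors into a relevance measure.

```python
# Hypothetical sketch of highlight selection: score each shot by a weighted
# combination of audio-visual descriptors (assumed normalised to [0, 1]),
# then keep the top-scoring shots in their original temporal order.


def relevance(shot: dict, w_audio: float = 0.6, w_motion: float = 0.4) -> float:
    """Combine descriptors into a single relevance score (weights are assumed)."""
    return w_audio * shot["audio_energy"] + w_motion * shot["motion"]


def summarize(shots: list[dict], max_shots: int) -> list[dict]:
    """Select the max_shots most relevant shots for the summary."""
    ranked = sorted(shots, key=relevance, reverse=True)
    # Restore temporal order so the summary plays back chronologically.
    return sorted(ranked[:max_shots], key=lambda s: s["start"])
```

    The paper's point about audio robustness maps onto the audio-energy term: crowd and commentator excitement often survives visual conditions (camera motion, occlusion) that degrade purely visual descriptors.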

  13. A Novel Audiovisual Brain-Computer Interface and Its Application in Awareness Detection.

    PubMed

    Wang, Fei; He, Yanbin; Pan, Jiahui; Xie, Qiuyou; Yu, Ronghao; Zhang, Rui; Li, Yuanqing

    2015-06-30

    Currently, detecting awareness in patients with disorders of consciousness (DOC) is a challenging task, which is commonly addressed through behavioral observation scales such as the JFK Coma Recovery Scale-Revised. Brain-computer interfaces (BCIs) provide an alternative approach to detect awareness in patients with DOC. However, these patients have a much lower capability of using BCIs compared to healthy individuals. This study proposed a novel BCI using temporally, spatially, and semantically congruent audiovisual stimuli involving numbers (i.e., visual and spoken numbers). Subjects were instructed to selectively attend to the target stimuli cued by instruction. Ten healthy subjects first participated in the experiment to evaluate the system. The results indicated that the audiovisual BCI system outperformed auditory-only and visual-only systems. Through event-related potential analysis, we observed audiovisual integration effects for target stimuli, which enhanced the discriminability between brain responses for target and nontarget stimuli and thus improved the performance of the audiovisual BCI. This system was then applied to detect the awareness of seven DOC patients, five of whom exhibited command following as well as number recognition. Thus, this audiovisual BCI system may be used as a supportive bedside tool for awareness detection in patients with DOC.

  15. Semantically Transparent and Opaque Compounds in German Noun-Phrase Production: Evidence for Morphemes in Speaking.

    PubMed

    Lorenz, Antje; Zwitserlood, Pienie

    2016-01-01

    This study examines the lexical representation and processing of noun-noun compounds and their grammatical gender during speech production in German, a language that codes for grammatical gender (masculine, feminine, and neuter). Using a picture-word interference paradigm, participants produced determiner-compound noun phrases in response to pictures, while ignoring written distractor words. Compound targets were either semantically transparent (e.g., birdhouse) or opaque (e.g., hotdog), and their constituent nouns either had the same or a different gender (internal gender match). Effects of gender-congruent but otherwise unrelated distractor nouns, and of two morphologically related distractors corresponding to the first or second constituent, were assessed relative to a completely unrelated, gender-incongruent distractor baseline. Both constituent distractors strongly facilitated compound naming, and these effects were independent of the targets' semantic transparency. This supports the retrieval of constituent morphemes for both semantically transparent and opaque compounds during speech production. Furthermore, gender congruency between compounds and distractors did not speed up naming in general, but interacted with the gender match of the compounds' constituent nouns and their semantic transparency. A significant gender-congruency effect was obtained only with semantically transparent compounds consisting of two constituent nouns of the same gender. In principle, this pattern is compatible with a multiple-lemma representation account for semantically transparent, but not for opaque, compounds. The data also fit with a more parsimonious, holistic representation for all compounds at the lemma level, when differences in co-activation patterns for semantically transparent and opaque compounds are considered.

  16. Individual differences in automatic semantic priming.

    PubMed

    Andrews, Sally; Lo, Steson; Xia, Violet

    2017-05-01

    This research investigated whether masked semantic priming in a semantic categorization task that required classification of words as animals or nonanimals was modulated by individual differences in lexical proficiency. A sample of 89 skilled readers, assessed on reading comprehension, vocabulary and spelling ability, classified target words preceded by brief (50 ms) masked primes that were either congruent or incongruent with the category of the target. Congruent primes were also selected to be either high (e.g., hawk EAGLE, pistol RIFLE) or low (e.g., mole EAGLE, boots RIFLE) in semantic feature overlap with the target. "Overall proficiency," indexed by high performance on both a "semantic composite" measure of reading comprehension and vocabulary and a "spelling composite," was associated with stronger congruence priming from both high and low feature overlap primes for animal exemplars, but only predicted priming from low overlap primes for nonexemplars. Classification of high frequency nonexemplars was also significantly modulated by an independent "spelling-meaning" factor, indexed by the discrepancy between the semantic and spelling composites, because relatively higher scores on the semantic than the spelling composite were associated with stronger semantic priming. These findings show that higher lexical proficiency is associated with stronger evidence of automatic semantic priming and suggest that individual differences in lexical quality modulate the division of labor between orthographic and semantic processing in early lexical retrieval. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. Neurofunctional Underpinnings of Audiovisual Emotion Processing in Teens with Autism Spectrum Disorders

    PubMed Central

    Doyle-Thomas, Krissy A.R.; Goldberg, Jeremy; Szatmari, Peter; Hall, Geoffrey B.C.

    2013-01-01

    Despite successful performance on some audiovisual emotion tasks, hypoactivity has been observed in frontal and temporal integration cortices in individuals with autism spectrum disorders (ASD). Little is understood about the neurofunctional network underlying this ability in individuals with ASD. Research suggests that there may be processing biases in individuals with ASD, based on their ability to obtain meaningful information from the face and/or the voice. This functional magnetic resonance imaging study examined brain activity in teens with ASD (n = 18) and typically developing controls (n = 16) during audiovisual and unimodal emotion processing. Teens with ASD had a significantly lower accuracy when matching an emotional face to an emotion label. However, no differences in accuracy were observed between groups when matching an emotional voice or face-voice pair to an emotion label. In both groups brain activity during audiovisual emotion matching differed significantly from activity during unimodal emotion matching. Between-group analyses of audiovisual processing revealed significantly greater activation in teens with ASD in a parietofrontal network believed to be implicated in attention, goal-directed behaviors, and semantic processing. In contrast, controls showed greater activity in frontal and temporal association cortices during this task. These results suggest that in the absence of engaging integrative emotional networks during audiovisual emotion matching, teens with ASD may have recruited the parietofrontal network as an alternate compensatory system. PMID:23750139

  18. Congruence of Meaning.

    ERIC Educational Resources Information Center

    Suppes, Patrick

    By looking at the history of geometry and the concept of congruence in geometry we can get a new perspective on how to think about the closeness in meaning of two sentences. As in the analysis of congruence in geometry, a definite and concrete set of proposals about congruence of meaning depends essentially on the kind of theoretical framework…

  19. Dissociating verbal and nonverbal audiovisual object processing.

    PubMed

    Hocking, Julia; Price, Cathy J

    2009-02-01

    This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same object or not. Verbal stimuli were simultaneously presented spoken and written object names, and nonverbal stimuli were photographs of objects simultaneously presented with naturally occurring object sounds. Stimulus differences were controlled by including two further conditions that paired photographs of objects with spoken words and object sounds with written words. Verbal matching, relative to all other conditions, increased activation in a region of the left superior temporal sulcus that has previously been associated with phonological processing. Nonverbal matching, relative to all other conditions, increased activation in a right fusiform region that has previously been associated with structural and conceptual object processing. Thus, we demonstrate how brain activation for audiovisual integration depends on the verbal content of the stimuli, even when stimulus and task processing differences are controlled.

  20. Modelling audiovisual integration of affect from videos and music.

    PubMed

    Gao, Chuanji; Wedell, Douglas H; Kim, Jongwan; Weber, Christine E; Shinkareva, Svetlana V

    2018-05-01

    Two experiments examined how affective values from visual and auditory modalities are integrated. Experiment 1 paired music and videos drawn from three levels of valence while holding arousal constant. Experiment 2 included a parallel combination of three levels of arousal while holding valence constant. In each experiment, participants rated their affective states after unimodal and multimodal presentations. Experiment 1 revealed a congruency effect in which stimulus combinations of the same extreme valence resulted in more extreme state ratings than component stimuli presented in isolation. An interaction between music and video valence reflected the greater influence of negative affect. Video valence was found to have a significantly greater effect on combined ratings than music valence. The pattern of data was explained by a five parameter differential weight averaging model that attributed greater weight to the visual modality and increased weight with decreasing values of valence. Experiment 2 revealed a congruency effect only for high arousal combinations and no interaction effects. This pattern was explained by a three parameter constant weight averaging model with greater weight for the auditory modality and a very low arousal value for the initial state. These results demonstrate key differences in audiovisual integration between valence and arousal.
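
    The averaging models this abstract fits can be sketched generically. The function below implements a constant-weight averaging rule of the kind described; the weights and initial-state value are made-up placeholders, since the paper's five- and three-parameter models were estimated from the rating data and their fitted values are not given here.

```python
# Hypothetical sketch of a weighted-averaging integration rule: the combined
# affective rating is a weighted average of an initial state and the two
# unimodal affective values. All parameter values here are illustrative
# assumptions, not the fitted parameters from the study.


def integrate(video_val: float, music_val: float,
              w_video: float = 2.0, w_music: float = 1.0,
              w0: float = 0.5, s0: float = 0.0) -> float:
    """Constant-weight averaging of video and music affective values.

    w_video > w_music mirrors the paper's finding that the visual modality
    carried greater weight for valence; s0 is the initial-state value.
    """
    total = w0 + w_video + w_music
    return (w0 * s0 + w_video * video_val + w_music * music_val) / total
```

    The differential-weight variant in Experiment 1 would additionally let w_video and w_music grow as valence becomes more negative, which is what produces the stronger pull of negative affect on the combined rating.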

  1. Effect of Perceptual Load on Semantic Access by Speech in Children

    ERIC Educational Resources Information Center

    Jerger, Susan; Damian, Markus F.; Mills, Candice; Bartlett, James; Tye-Murray, Nancy; Abdi, Herve

    2013-01-01

    Purpose: To examine whether semantic access by speech requires attention in children. Method: Children ("N" = 200) named pictures and ignored distractors on a cross-modal (distractors: auditory-no face) or multimodal (distractors: auditory-static face and audiovisual-dynamic face) picture word task. The cross-modal task had a low load,…

  2. Syntactic processing in the absence of awareness and semantics.

    PubMed

    Hung, Shao-Min; Hsieh, Po-Jang

    2015-10-01

    The classical view that multistep rule-based operations require consciousness has recently been challenged by findings that both multiword semantic processing and multistep arithmetic equations can be processed unconsciously. It remains unclear, however, whether pure rule-based cognitive processes can occur unconsciously in the absence of semantics. Here, after presenting 2 words consciously, we suppressed the third with continuous flash suppression. First, we showed that the third word in the subject-verb-verb format (syntactically incongruent) broke suppression significantly faster than the third word in the subject-verb-object format (syntactically congruent). Crucially, the same effect was observed even with sentences composed of pseudowords (pseudo subject-verb-adjective vs. pseudo subject-verb-object) without any semantic information. This is the first study to show that syntactic congruency can be processed unconsciously in the complete absence of semantics. Our findings illustrate how abstract rule-based processing (e.g., syntactic categories) can occur in the absence of visual awareness, even when deprived of semantics. ((c) 2015 APA, all rights reserved).

  3. Somatotopic Semantic Priming and Prediction in the Motor System

    PubMed Central

    Grisoni, Luigi; Dreyer, Felix R.; Pulvermüller, Friedemann

    2016-01-01

    The recognition of action-related sounds and words activates motor regions, reflecting the semantic grounding of these symbols in action information; in addition, motor cortex exerts causal influences on sound perception and language comprehension. However, proponents of classic symbolic theories still dispute the role of modality-preferential systems such as the motor cortex in the semantic processing of meaningful stimuli. To clarify whether the motor system carries semantic processes, we investigated neurophysiological indexes of semantic relationships between action-related sounds and words. Event-related potentials revealed that action-related words produced significantly larger stimulus-evoked (Mismatch Negativity-like) and predictive brain responses (Readiness Potentials) when presented in body-part-incongruent sound contexts (e.g., “kiss” in footstep sound context; “kick” in whistle context) than in body-part-congruent contexts, a pattern reminiscent of neurophysiological correlates of semantic priming. Cortical generators of the semantic relatedness effect were localized in areas traditionally associated with semantic memory, including left inferior frontal cortex and temporal pole, and, crucially, in motor areas, where body-part congruency of action sound–word relationships was indexed by a somatotopic pattern of activation. As our results show neurophysiological manifestations of action-semantic priming in the motor cortex, they prove semantic processing in the motor system and thus in a modality-preferential system of the human brain. PMID:26908635

  4. Audio-Visual, Visuo-Tactile and Audio-Tactile Correspondences in Preschoolers.

    PubMed

    Nava, Elena; Grassi, Massimo; Turati, Chiara

    2016-01-01

    Interest in crossmodal correspondences has recently seen a renaissance thanks to numerous studies in human adults. Yet, still very little is known about crossmodal correspondences in children, particularly in sensory pairings other than audition and vision. In the current study, we investigated whether 4-5-year-old children match auditory pitch to the spatial motion of visual objects (audio-visual condition). In addition, we investigated whether this correspondence extends to touch, i.e., whether children also match auditory pitch to the spatial motion of touch (audio-tactile condition) and the spatial motion of visual objects to touch (visuo-tactile condition). In two experiments, two different groups of children were asked to indicate which of two stimuli fitted best with a centrally located third stimulus (Experiment 1), or to report whether two presented stimuli fitted together well (Experiment 2). We found sensitivity to the congruency of all of the sensory pairings only in Experiment 2, suggesting that only under specific circumstances can these correspondences be observed. Our results suggest that pitch-height correspondences for audio-visual and audio-tactile combinations may still be weak in preschool children; we speculate that this could be due to linguistic and auditory cues that are still developing at age five.

  5. An ALE meta-analysis on the audiovisual integration of speech signals.

    PubMed

    Erickson, Laura C; Heeg, Elizabeth; Rauschecker, Josef P; Turkeltaub, Peter E

    2014-11-01

    The brain improves speech processing through the integration of audiovisual (AV) signals. Situations involving AV speech integration may be crudely dichotomized into those where auditory and visual inputs contain (1) equivalent, complementary signals (validating AV speech) or (2) inconsistent, different signals (conflicting AV speech). This simple framework may allow the systematic examination of broad commonalities and differences between AV neural processes engaged by various experimental paradigms frequently used to study AV speech integration. We conducted an activation likelihood estimation meta-analysis of 22 functional imaging studies comprising 33 experiments, 311 subjects, and 347 foci examining "conflicting" versus "validating" AV speech. Experimental paradigms included content congruency, timing synchrony, and perceptual measures, such as the McGurk effect or synchrony judgments, across AV speech stimulus types (sublexical to sentence). Colocalization of conflicting AV speech experiments revealed consistency across at least two contrast types (e.g., synchrony and congruency) in a network of dorsal stream regions in the frontal, parietal, and temporal lobes. There was consistency across all contrast types (synchrony, congruency, and percept) in the bilateral posterior superior/middle temporal cortex. Although fewer studies were available, validating AV speech experiments were localized to other regions, such as ventral stream visual areas in the occipital and inferior temporal cortex. These results suggest that while equivalent, complementary AV speech signals may evoke activity in regions related to the corroboration of sensory input, conflicting AV speech signals recruit widespread dorsal stream areas likely involved in the resolution of conflicting sensory signals. Copyright © 2014 Wiley Periodicals, Inc.
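
The activation likelihood estimation (ALE) statistic used in this meta-analysis can be sketched in one dimension: each reported focus is blurred into a Gaussian "modeled activation" map, and maps are combined as the probability that at least one focus reflects true activation. This is a hedged toy illustration only (real ALE operates on 3-D brain volumes with sample-size-dependent smoothing kernels; the names and parameters below are ours, not the paper's):

```python
import math

def gaussian_map(center, sigma, grid):
    # Normalized Gaussian "modeled activation" map for one focus.
    amp = 1.0 / (sigma * math.sqrt(2 * math.pi))
    return [amp * math.exp(-((x - center) ** 2) / (2 * sigma ** 2)) for x in grid]

def ale(foci, sigma, grid):
    # Combine per-focus maps as a probabilistic union: 1 - prod(1 - p_i).
    maps = [gaussian_map(c, sigma, grid) for c in foci]
    return [1.0 - math.prod(1.0 - m[i] for m in maps) for i in range(len(grid))]

grid = list(range(21))
scores = ale([8, 10, 12], 2.0, grid)
# The peak of convergence lies between the three nearby foci.
print(grid[max(range(len(scores)), key=scores.__getitem__)])  # → 10
```

The union rule is what makes ALE reward spatial convergence across experiments rather than the strength of any single focus.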

  6. MPEG-7 audio-visual indexing test-bed for video retrieval

    NASA Astrophysics Data System (ADS)

    Gagnon, Langis; Foucher, Samuel; Gouaillier, Valerie; Brun, Christelle; Brousseau, Julie; Boulianne, Gilles; Osterrath, Frederic; Chapdelaine, Claude; Dutrisac, Julie; St-Onge, Francis; Champagne, Benoit; Lu, Xiaojian

    2003-12-01

    This paper reports on the development status of a Multimedia Asset Management (MAM) test-bed for content-based indexing and retrieval of audio-visual documents within the MPEG-7 standard. The project, called "MPEG-7 Audio-Visual Document Indexing System" (MADIS), specifically targets the indexing and retrieval of video shots and key frames from documentary film archives, based on audio-visual content like face recognition, motion activity, speech recognition and semantic clustering. The MPEG-7/XML encoding of the film database is done off-line. The description decomposition is based on a temporal decomposition into visual segments (shots), key frames and audio/speech sub-segments. The visible outcome will be a web site that allows video retrieval using a proprietary XQuery-based search engine, accessible to members of the Canadian National Film Board (NFB) Cineroute site. For example, an end user will be able to request movie shots in the database that were produced in a specific year, that contain the face of a specific actor speaking a specific word, and in which there is no motion activity. Video streaming is performed over the high bandwidth CA*net network deployed by CANARIE, a public Canadian Internet development organization.

  7. Multimodal Feature Integration in the Angular Gyrus during Episodic and Semantic Retrieval

    PubMed Central

    Bonnici, Heidi M.; Richter, Franziska R.; Yazar, Yasemin

    2016-01-01

    Much evidence from distinct lines of investigation indicates the involvement of angular gyrus (AnG) in the retrieval of both episodic and semantic information, but the region's precise function and whether that function differs across episodic and semantic retrieval have yet to be determined. We used univariate and multivariate fMRI analysis methods to examine the role of AnG in multimodal feature integration during episodic and semantic retrieval. Human participants completed episodic and semantic memory tasks involving unimodal (auditory or visual) and multimodal (audio-visual) stimuli. Univariate analyses revealed the recruitment of functionally distinct AnG subregions during the retrieval of episodic and semantic information. Consistent with a role in multimodal feature integration during episodic retrieval, significantly greater AnG activity was observed during retrieval of integrated multimodal episodic memories compared with unimodal episodic memories. Multivariate classification analyses revealed that individual multimodal episodic memories could be differentiated in AnG, with classification accuracy tracking the vividness of participants' reported recollections, whereas distinct unimodal memories were represented in sensory association areas only. In contrast to episodic retrieval, AnG was engaged to a statistically equivalent degree during retrieval of unimodal and multimodal semantic memories, suggesting a distinct role for AnG during semantic retrieval. Modality-specific sensory association areas exhibited corresponding activity during both episodic and semantic retrieval, which mirrored the functional specialization of these regions during perception. The results offer new insights into the integrative processes subserved by AnG and its contribution to our subjective experience of remembering. SIGNIFICANCE STATEMENT Using univariate and multivariate fMRI analyses, we provide evidence that functionally distinct subregions of angular gyrus (AnG) contribute to the retrieval of…

  8. Multimodal Feature Integration in the Angular Gyrus during Episodic and Semantic Retrieval.

    PubMed

    Bonnici, Heidi M; Richter, Franziska R; Yazar, Yasemin; Simons, Jon S

    2016-05-18

    Much evidence from distinct lines of investigation indicates the involvement of angular gyrus (AnG) in the retrieval of both episodic and semantic information, but the region's precise function and whether that function differs across episodic and semantic retrieval have yet to be determined. We used univariate and multivariate fMRI analysis methods to examine the role of AnG in multimodal feature integration during episodic and semantic retrieval. Human participants completed episodic and semantic memory tasks involving unimodal (auditory or visual) and multimodal (audio-visual) stimuli. Univariate analyses revealed the recruitment of functionally distinct AnG subregions during the retrieval of episodic and semantic information. Consistent with a role in multimodal feature integration during episodic retrieval, significantly greater AnG activity was observed during retrieval of integrated multimodal episodic memories compared with unimodal episodic memories. Multivariate classification analyses revealed that individual multimodal episodic memories could be differentiated in AnG, with classification accuracy tracking the vividness of participants' reported recollections, whereas distinct unimodal memories were represented in sensory association areas only. In contrast to episodic retrieval, AnG was engaged to a statistically equivalent degree during retrieval of unimodal and multimodal semantic memories, suggesting a distinct role for AnG during semantic retrieval. Modality-specific sensory association areas exhibited corresponding activity during both episodic and semantic retrieval, which mirrored the functional specialization of these regions during perception. The results offer new insights into the integrative processes subserved by AnG and its contribution to our subjective experience of remembering. 
SIGNIFICANCE STATEMENT Using univariate and multivariate fMRI analyses, we provide evidence that functionally distinct subregions of angular gyrus (AnG) contribute to the retrieval of…

  9. The relation between body semantics and spatial body representations.

    PubMed

    van Elk, Michiel; Blanke, Olaf

    2011-11-01

    The present study addressed the relation between body semantics (i.e. semantic knowledge about the human body) and spatial body representations, by presenting participants with word pairs, one below the other, referring to body parts. The spatial position of the word pairs could be congruent (e.g. EYE / MOUTH) or incongruent (MOUTH / EYE) with respect to the spatial position of the words' referents. In addition, the spatial distance between the words' referents was varied, resulting in word pairs referring to body parts that are close (e.g. EYE / MOUTH) or far in space (e.g. EYE / FOOT). A spatial congruency effect was observed when subjects made an iconicity judgment (Experiments 2 and 3) but not when making a semantic relatedness judgment (Experiment 1). In addition, when making a semantic relatedness judgment (Experiment 1) reaction times increased with increased distance between the body parts but when making an iconicity judgment (Experiments 2 and 3) reaction times decreased with increased distance. These findings suggest that the processing of body-semantics results in the activation of a detailed visuo-spatial body representation that is modulated by the specific task requirements. We discuss these new data with respect to theories of embodied cognition and body semantics. Copyright © 2011 Elsevier B.V. All rights reserved.

  10. Similarity and Congruence.

    ERIC Educational Resources Information Center

    Herman, Daniel L.

    This instructional unit is an introduction to the common properties of similarity and congruence. Manipulation of objects leads to a recognition of these properties. The ASA, SAS, and SSS theorems are not mentioned. Limited use is made of the properties of size and shape preserved by similarity or congruence. A teacher's guide…

  11. Evaluation of differences in quality of experience features for test stimuli of good-only and bad-only overall audiovisual quality

    NASA Astrophysics Data System (ADS)

    Strohmeier, Dominik; Kunze, Kristina; Göbel, Klemens; Liebetrau, Judith

    2013-01-01

    Assessing audiovisual Quality of Experience (QoE) is a key element to ensure quality acceptance of today's multimedia products. The use of descriptive evaluation methods allows evaluating QoE preferences and the underlying QoE features jointly. From our previous evaluations on QoE for mobile 3D video we found that mainly one dimension, video quality, dominates the descriptive models. Large variations of the visual video quality in the tests may be the reason for these findings. A new study was conducted to investigate whether test sets of low QoE are described differently than those of high audiovisual QoE. Reanalysis of previous data sets seems to confirm this hypothesis. Our new study consists of a pre-test and a main test, using the Descriptive Sorted Napping method. Data sets of good-only and bad-only video quality were evaluated separately. The results show that the perception of bad QoE is mainly determined one-dimensionally by visual artifacts, whereas the perception of good quality shows multiple dimensions. Here, mainly semantic-related features of the content and affective descriptors are used by the naïve test participants. The results show that, with increasing QoE of audiovisual systems, content semantics and users' affective involvement will become important for assessing QoE differences.

  12. Speaker information affects false recognition of unstudied lexical-semantic associates.

    PubMed

    Luthra, Sahil; Fox, Neal P; Blumstein, Sheila E

    2018-05-01

    Recognition of and memory for a spoken word can be facilitated by a prior presentation of that word spoken by the same talker. However, it is less clear whether this speaker congruency advantage generalizes to facilitate recognition of unheard related words. The present investigation employed a false memory paradigm to examine whether information about a speaker's identity in items heard by listeners could influence the recognition of novel items (critical intruders) phonologically or semantically related to the studied items. In Experiment 1, false recognition of semantically associated critical intruders was sensitive to speaker information, though only when subjects attended to talker identity during encoding. Results from Experiment 2 also provide some evidence that talker information affects the false recognition of critical intruders. Taken together, the present findings indicate that indexical information is able to contact the lexical-semantic network to affect the processing of unheard words.

  13. Multiplex congruence network of natural numbers.

    PubMed

    Yan, Xiao-Yong; Wang, Wen-Xu; Chen, Guan-Rong; Shi, Ding-Hua

    2016-03-31

    Congruence theory has many applications in physical, social, biological and technological systems. Congruence arithmetic has been a fundamental tool for data security and computer algebra. However, much less attention has been devoted to the topological features of congruence relations among natural numbers. Here, we explore the congruence relations in the setting of a multiplex network and unveil some unique and outstanding properties of the multiplex congruence network. Analytical results show that every layer therein is a sparse and heterogeneous subnetwork with a scale-free topology. Counterintuitively, every layer has extremely strong controllability in spite of its scale-free structure, which is usually difficult to control. Another striking feature is that the controllability is robust against targeted attacks on critical nodes but vulnerable to random failures, which also differs from ordinary scale-free networks. The multi-chain structure with a small number of chain roots arising from each layer accounts for the strong controllability and the abnormal feature. The multiplex congruence network offers a graphical solution to the simultaneous congruences problem, which may have implications for cryptography based on simultaneous congruences. Our work also offers insight into the design of networks that integrate the advantages of both heterogeneous and homogeneous networks without inheriting their limitations.
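
The "simultaneous congruences problem" this abstract refers to is the classical Chinese Remainder Theorem setting. A minimal sketch of the standard incremental solution (our illustration, not the paper's graphical method; `pow(M, -1, m)` for the modular inverse requires Python 3.8+):

```python
from math import gcd

def crt(residues, moduli):
    """Return x with x ≡ r (mod m) for each (r, m) pair,
    assuming the moduli are pairwise coprime."""
    x, M = 0, 1
    for r, m in zip(residues, moduli):
        assert gcd(M, m) == 1, "moduli must be pairwise coprime"
        # Solve x + M*t ≡ r (mod m) for t via the modular inverse of M.
        t = ((r - x) * pow(M, -1, m)) % m
        x += M * t
        M *= m
    return x % M

# Sun Tzu's classic system: x ≡ 2 (mod 3), x ≡ 3 (mod 5), x ≡ 2 (mod 7).
print(crt([2, 3, 2], [3, 5, 7]))  # → 23
```

Each iteration extends a solution modulo M to a solution modulo M·m, which is the same chain-by-chain composition the multiplex network is said to encode graphically.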

  14. Multiplex congruence network of natural numbers

    NASA Astrophysics Data System (ADS)

    Yan, Xiao-Yong; Wang, Wen-Xu; Chen, Guan-Rong; Shi, Ding-Hua

    2016-03-01

    Congruence theory has many applications in physical, social, biological and technological systems. Congruence arithmetic has been a fundamental tool for data security and computer algebra. However, much less attention has been devoted to the topological features of congruence relations among natural numbers. Here, we explore the congruence relations in the setting of a multiplex network and unveil some unique and outstanding properties of the multiplex congruence network. Analytical results show that every layer therein is a sparse and heterogeneous subnetwork with a scale-free topology. Counterintuitively, every layer has extremely strong controllability in spite of its scale-free structure, which is usually difficult to control. Another striking feature is that the controllability is robust against targeted attacks on critical nodes but vulnerable to random failures, which also differs from ordinary scale-free networks. The multi-chain structure with a small number of chain roots arising from each layer accounts for the strong controllability and the abnormal feature. The multiplex congruence network offers a graphical solution to the simultaneous congruences problem, which may have implications for cryptography based on simultaneous congruences. Our work also offers insight into the design of networks that integrate the advantages of both heterogeneous and homogeneous networks without inheriting their limitations.

  15. The effects of stereo disparity on the behavioural and electrophysiological correlates of perception of audio-visual motion in depth.

    PubMed

    Harrison, Neil R; Witheridge, Sian; Makin, Alexis; Wuerger, Sophie M; Pegna, Alan J; Meyer, Georg F

    2015-11-01

    Motion is represented by low-level signals, such as size-expansion in vision or loudness changes in the auditory modality. The visual and auditory signals from the same object or event may be integrated and facilitate detection. We explored behavioural and electrophysiological correlates of congruent and incongruent audio-visual depth motion in conditions where auditory level changes, visual expansion, and visual disparity cues were manipulated. In Experiment 1 participants discriminated auditory motion direction whilst viewing looming or receding, 2D or 3D, visual stimuli. Responses were faster and more accurate for congruent than for incongruent audio-visual cues, and the congruency effect (i.e., difference between incongruent and congruent conditions) was larger for visual 3D cues compared to 2D cues. In Experiment 2, event-related potentials (ERPs) were collected during presentation of the 2D and 3D, looming and receding, audio-visual stimuli, while participants detected an infrequent deviant sound. Our main finding was that audio-visual congruity was affected by retinal disparity at an early processing stage (135-160ms) over occipito-parietal scalp. Topographic analyses suggested that similar brain networks were activated for the 2D and 3D congruity effects, but that cortical responses were stronger in the 3D condition. Differences between congruent and incongruent conditions were observed between 140-200ms, 220-280ms, and 350-500ms after stimulus onset. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Ways of making-sense: Local gamma synchronization reveals differences between semantic processing induced by music and language.

    PubMed

    Barraza, Paulo; Chavez, Mario; Rodríguez, Eugenio

    2016-01-01

    Similar to linguistic stimuli, music can also prime the meaning of a subsequent word. However, the brain dynamics underlying the semantic priming effect induced by music, and its relation to language, remain unknown. To elucidate these issues, we compared the brain oscillatory response to visual words that had been semantically primed either by a musical excerpt or by an auditory sentence. We found that semantic violation between music-word pairs triggers a classical ERP N400, and induces a sustained increase of long-distance theta phase synchrony, along with a transient increase of local gamma activity. Similar results were observed after linguistic semantic violation, except for gamma activity, which increased after semantic congruence between sentence-word pairs. Our findings indicate that local gamma activity is a neural marker that signals different ways of semantic processing between music and language, revealing the dynamic and self-organized nature of semantic processing. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Vague Congruences and Quotient Lattice Implication Algebras

    PubMed Central

    Qin, Xiaoyan; Xu, Yang

    2014-01-01

    The aim of this paper is to further develop the congruence theory on lattice implication algebras. Firstly, we introduce the notions of vague similarity relations based on vague relations and vague congruence relations. Secondly, the equivalent characterizations of vague congruence relations are investigated. Thirdly, the relation between the set of vague filters and the set of vague congruences is studied. Finally, we construct a new lattice implication algebra induced by a vague congruence, and the homomorphism theorem is given. PMID:25133207

  18. Prosody and Semantics Are Separate but Not Separable Channels in the Perception of Emotional Speech: Test for Rating of Emotions in Speech.

    PubMed

    Ben-David, Boaz M; Multani, Namita; Shakuf, Vered; Rudzicz, Frank; van Lieshout, Pascal H H M

    2016-02-01

    Our aim is to explore the complex interplay of prosody (tone of speech) and semantics (verbal content) in the perception of discrete emotions in speech. We implement a novel tool, the Test for Rating of Emotions in Speech. Eighty native English speakers were presented with spoken sentences made of different combinations of 5 discrete emotions (anger, fear, happiness, sadness, and neutral) presented in prosody and semantics. Listeners were asked to rate the sentence as a whole, integrating both speech channels, or to focus on one channel only (prosody or semantics). We observed supremacy of congruency, failure of selective attention, and prosodic dominance. Supremacy of congruency means that a sentence that presents the same emotion in both speech channels was rated highest; failure of selective attention means that listeners were unable to selectively attend to one channel when instructed; and prosodic dominance means that prosodic information plays a larger role than semantics in processing emotional speech. Emotional prosody and semantics are separate but not separable channels, and it is difficult to perceive one without the influence of the other. Our findings indicate that the Test for Rating of Emotions in Speech can reveal specific aspects in the processing of emotional speech and may in the future prove useful for understanding emotion-processing deficits in individuals with pathologies.

  19. The N400 reveals how personal semantics is processed: Insights into the nature and organization of self-knowledge

    PubMed Central

    Federmeier, Kara D.

    2017-01-01

    There is growing recognition that some important forms of long-term memory are difficult to classify into one of the well-studied memory subtypes. One example is personal semantics. Like the episodes that are stored as part of one’s autobiography, personal semantics is linked to an individual, yet, like general semantic memory, it is detached from a specific encoding context. Access to general semantics elicits an electrophysiological response known as the N400, which has been characterized across three decades of research; surprisingly, this response has not been fully examined in the context of personal semantics. In this study, we assessed responses to congruent and incongruent statements about people’s own, personal preferences. We found that access to personal preferences elicited N400 responses, with congruency effects that were similar in latency and distribution to those for general semantic statements elicited from the same participants. These results suggest that the processing of personal and general semantics share important functional and neurobiological features. PMID:26825011

  20. Semantic Size and Contextual Congruency Effects during Reading: Evidence from Eye Movements

    ERIC Educational Resources Information Center

    Wei, Wei; Cook, Anne E.

    2016-01-01

    Recent lexical decision studies have produced conflicting evidence about whether an object's semantic size influences word recognition. The present study examined this variable in online reading. Target words representing small and large objects were embedded in sentence contexts that were either neutral, congruent, or incongruent with respect to…

  1. Effect of perceptual load on semantic access by speech in children

    PubMed Central

    Jerger, Susan; Damian, Markus F.; Mills, Candice; Bartlett, James; Tye-Murray, Nancy; Abdi, Hervé

    2013-01-01

    Purpose To examine whether semantic access by speech requires attention in children. Method Children (N=200) named pictures and ignored distractors on a cross-modal (distractors: auditory-no face) or multi-modal (distractors: auditory-static face and audiovisual-dynamic face) picture word task. The cross-modal task had a low load, and the multi-modal task had a high load (i.e., respectively naming pictures displayed on a blank screen vs below the talker's face on his T-shirt). Semantic content of distractors was manipulated to be related vs unrelated to the picture (e.g., picture dog with distractors bear vs cheese). If the irrelevant semantic-content manipulation influences naming times on both tasks despite the variation in load, Lavie's (2005) perceptual load model proposes that semantic access is independent of capacity-limited attentional resources; if irrelevant content influences naming only on the cross-modal task (low load), the model proposes that semantic access is dependent on attentional resources exhausted by the higher-load task. Results Irrelevant semantic content affected performance on both tasks in 6- to 9-year-olds, but only on the cross-modal task in 4- to 5-year-olds. The addition of visual speech did not influence results on the multi-modal task. Conclusion Younger and older children differ in their dependence on attentional resources for semantic access by speech. PMID:22896045

  2. Effect of perceptual load on semantic access by speech in children.

    PubMed

    Jerger, Susan; Damian, Markus F; Mills, Candice; Bartlett, James; Tye-Murray, Nancy; Abdi, Hervé

    2013-04-01

    To examine whether semantic access by speech requires attention in children. Children (N = 200) named pictures and ignored distractors on a cross-modal (distractors: auditory-no face) or multimodal (distractors: auditory-static face and audiovisual-dynamic face) picture word task. The cross-modal task had a low load, and the multimodal task had a high load (i.e., respectively naming pictures displayed on a blank screen vs. below the talker's face on his T-shirt). Semantic content of distractors was manipulated to be related vs. unrelated to the picture (e.g., picture "dog" with distractors "bear" vs. "cheese"). If irrelevant semantic content manipulation influences naming times on both tasks despite variations in loads, Lavie's (2005) perceptual load model proposes that semantic access is independent of capacity-limited attentional resources; if, however, irrelevant content influences naming only on the cross-modal task (low load), the perceptual load model proposes that semantic access is dependent on attentional resources exhausted by the higher load task. Irrelevant semantic content affected performance for both tasks in 6- to 9-year-olds but only on the cross-modal task in 4- to 5-year-olds. The addition of visual speech did not influence results on the multimodal task. Younger and older children differ in dependence on attentional resources for semantic access by speech.

  3. Spatio-temporal distribution of brain activity associated with audio-visually congruent and incongruent speech and the McGurk Effect.

    PubMed

    Pratt, Hillel; Bleich, Naomi; Mittelman, Nomi

    2015-11-01

    Spatio-temporal distributions of cortical activity to audio-visual presentations of meaningless vowel-consonant-vowels and the effects of audio-visual congruence/incongruence, with emphasis on the McGurk effect, were studied. The McGurk effect occurs when a clearly audible syllable with one consonant, is presented simultaneously with a visual presentation of a face articulating a syllable with a different consonant and the resulting percept is a syllable with a consonant other than the auditorily presented one. Twenty subjects listened to pairs of audio-visually congruent or incongruent utterances and indicated whether pair members were the same or not. Source current densities of event-related potentials to the first utterance in the pair were estimated and effects of stimulus-response combinations, brain area, hemisphere, and clarity of visual articulation were assessed. Auditory cortex, superior parietal cortex, and middle temporal cortex were the most consistently involved areas across experimental conditions. Early (<200 msec) processing of the consonant was overall prominent in the left hemisphere, except right hemisphere prominence in superior parietal cortex and secondary visual cortex. Clarity of visual articulation impacted activity in secondary visual cortex and Wernicke's area. McGurk perception was associated with decreased activity in primary and secondary auditory cortices and Wernicke's area before 100 msec, increased activity around 100 msec which decreased again around 180 msec. Activity in Broca's area was unaffected by McGurk perception and was only increased to congruent audio-visual stimuli 30-70 msec following consonant onset. The results suggest left hemisphere prominence in the effects of stimulus and response conditions on eight brain areas involved in dynamically distributed parallel processing of audio-visual integration. Initially (30-70 msec) subcortical contributions to auditory cortex, superior parietal cortex, and middle temporal cortex…

  4. Procrustes Matching by Congruence Coefficients

    ERIC Educational Resources Information Center

    Korth, Bruce; Tucker, L. R.

    1976-01-01

    Matching by Procrustes methods involves the transformation of one matrix to match with another. A special least squares criterion, the congruence coefficient, has advantages as a criterion for some factor analytic interpretations. A Procrustes method maximizing the congruence coefficient is given. (Author/JKS)
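
Tucker's congruence coefficient, the least-squares matching criterion this record refers to, has a simple closed form: phi = Σxᵢyᵢ / √(Σxᵢ² · Σyᵢ²). A hedged sketch (our illustration, not the authors' code):

```python
from math import sqrt

def congruence_coefficient(x, y):
    # Like an uncentered correlation: vectors are NOT mean-centered,
    # so phi is invariant to rescaling but not to shifting.
    num = sum(a * b for a, b in zip(x, y))
    return num / sqrt(sum(a * a for a in x) * sum(b * b for b in y))

# Proportional loading patterns are perfectly congruent:
print(round(congruence_coefficient([0.4, 0.3, 0.2], [0.8, 0.6, 0.4]), 6))  # → 1.0
```

The absence of mean-centering is what makes phi suitable for comparing factor loadings, where the sign and overall level of the loadings carry meaning.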

  5. Soft Congruence Relations over Rings

    PubMed Central

    Xin, Xiaolong; Li, Wenting

    2014-01-01

    Molodtsov introduced the concept of soft sets, which can be seen as a new mathematical tool for dealing with uncertainty. In this paper, we initiate the study of soft congruence relations by using the soft set theory. The notions of soft quotient rings, generalized soft ideals, and generalized soft quotient rings are introduced, and several related properties are investigated. Also, we obtain a one-to-one correspondence between soft congruence relations and idealistic soft rings and a one-to-one correspondence between soft congruence relations and soft ideals. In particular, the first, second, and third soft isomorphism theorems are established, respectively. PMID:24949493
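
As a concrete anchor for the notion of a ring congruence used in this record: a congruence is an equivalence relation compatible with both ring operations, and it is exactly what makes a quotient ring well defined. A minimal exhaustive spot-check on a small sample (our illustration, using the familiar mod-n relation on the integers, which induces Z/nZ):

```python
n = 6

def related(a, b):
    # a ~ b  iff  a ≡ b (mod n)
    return (a - b) % n == 0

sample = range(-6, 7)
# Congruence property: whenever a ~ b and c ~ d, it must follow that
# (a + c) ~ (b + d) and (a * c) ~ (b * d).
ok = all(
    related(a + c, b + d) and related(a * c, b * d)
    for a in sample for b in sample
    for c in sample for d in sample
    if related(a, b) and related(c, d)
)
print(ok)  # → True
```

Compatibility with both operations is the key requirement; an equivalence relation lacking it would not yield well-defined addition and multiplication on the equivalence classes.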

  6. The N400 reveals how personal semantics is processed: Insights into the nature and organization of self-knowledge.

    PubMed

    Coronel, Jason C; Federmeier, Kara D

    2016-04-01

    There is growing recognition that some important forms of long-term memory are difficult to classify into one of the well-studied memory subtypes. One example is personal semantics. Like the episodes that are stored as part of one's autobiography, personal semantics is linked to an individual, yet, like general semantic memory, it is detached from a specific encoding context. Access to general semantics elicits an electrophysiological response known as the N400, which has been characterized across three decades of research; surprisingly, this response has not been fully examined in the context of personal semantics. In this study, we assessed responses to congruent and incongruent statements about people's own, personal preferences. We found that access to personal preferences elicited N400 responses, with congruency effects that were similar in latency and distribution to those for general semantic statements elicited from the same participants. These results suggest that the processing of personal and general semantics share important functional and neurobiological features. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Extraction of composite visual objects from audiovisual materials

    NASA Astrophysics Data System (ADS)

    Durand, Gwenael; Thienot, Cedric; Faudemay, Pascal

    1999-08-01

    An effective analysis of Visual Objects appearing in still images and video frames is required in order to offer fine grain access to multimedia and audiovisual contents. In previous papers, we showed how our method for segmenting still images into visual objects could improve content-based image retrieval and video analysis methods. Visual Objects are used in particular for extracting semantic knowledge about the contents. However, low-level segmentation methods for still images are not likely to extract a complex object as a whole but instead as a set of several sub-objects. For example, a person would be segmented into three visual objects: a face, hair, and a body. In this paper, we introduce the concept of Composite Visual Object. Such an object is hierarchically composed of sub-objects called Component Objects.

  8. Extracting semantics from audio-visual content: the final frontier in multimedia retrieval.

    PubMed

    Naphade, M R; Huang, T S

    2002-01-01

    Multimedia understanding is a fast emerging interdisciplinary research area. There is tremendous potential for effective use of multimedia content through intelligent analysis. Diverse application areas are increasingly relying on multimedia understanding systems. Advances in multimedia understanding are related directly to advances in signal processing, computer vision, pattern recognition, multimedia databases, and smart sensors. We review the state-of-the-art techniques in multimedia retrieval. In particular, we discuss how multimedia retrieval can be viewed as a pattern recognition problem. We discuss how reliance on powerful pattern recognition and machine learning techniques is increasing in the field of multimedia retrieval. We review the state-of-the-art multimedia understanding systems with particular emphasis on a system for semantic video indexing centered around multijects and multinets. We discuss how semantic retrieval is centered around concepts and context and the various mechanisms for modeling concepts and context.

  9. Semantically induced distortions of visual awareness in a patient with Balint's syndrome.

    PubMed

    Soto, David; Humphreys, Glyn W

    2009-02-01

    We present data indicating that visual awareness for a basic perceptual feature (colour) can be influenced by the relation between the feature and the semantic properties of the stimulus. We examined semantic interference from the meaning of a colour word ("RED") on simple colour (ink related) detection responses in a patient with simultagnosia due to bilateral parietal lesions. We found that colour detection was influenced by the congruency between the meaning of the word and the relevant ink colour, with impaired performance when the word and the colour mismatched (on incongruent trials). This result held even when remote associations between meaning and colour were used (i.e. the word "PEA" influenced detection of the ink colour red). The results are consistent with a late locus of conscious visual experience that is derived at post-semantic levels. The implications for the understanding of the role of parietal cortex in object binding and visual awareness are discussed.

  10. A sound and efficient measure of joint congruence.

    PubMed

    Conconi, Michele; Castelli, Vincenzo Parenti

    2014-09-01

    In the medical world, the term "congruence" is used to describe, by visual inspection, how well the articular surfaces mate with each other, evaluating the joint capability to distribute an applied load from a purely geometrical perspective. Congruence is commonly employed for assessing articular physiology and for the comparison between normal and pathological states. A measure of it would thus represent a valuable clinical tool. Several approaches for the quantification of joint congruence have been proposed in the biomechanical literature, differing on how the articular contact is modeled. This makes it difficult to compare different measures. In particular, in previous articles a congruence measure has been presented which proved to be efficient and suitable for the clinical practice, but it was still empirically defined. This article aims at providing a sound theoretical support to this congruence measure by means of the Winkler elastic foundation contact model which, with respect to others, has the advantage of holding also for highly conforming surfaces, as most human articulations are. First, the geometrical relation between the applied load and the resulting peak of pressure is analytically derived from the elastic foundation contact model, providing a theoretically sound approach to the definition of a congruence measure. Then, the capability of the congruence measure to capture the same geometrical relation is shown. Finally, the reliability of the congruence measure is discussed. © IMechE 2014.
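    The Winkler (elastic foundation) model the abstract invokes treats contact pressure as depending only on the local surface interpenetration, with no coupling between neighboring points. As a hedged sketch of the structure behind the load-to-peak-pressure relation (the symbols here are generic illustration, not the authors' notation):

    ```latex
    % Winkler elastic foundation: pressure proportional to local interpenetration
    p(x,y) \;=\; \frac{E}{h}\, w(x,y),
    \qquad
    F \;=\; \int_{A} p(x,y)\, \mathrm{d}A ,
    ```

    where $w$ is the local interpenetration, $E/h$ the foundation stiffness, and $A$ the contact area. For a fixed load $F$, more conforming (congruent) surfaces spread contact over a larger $A$ and so lower the peak pressure; it is this purely geometric link between load and peak pressure that a congruence measure aims to capture.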

  11. Congruence Couple Therapy for Pathological Gambling

    ERIC Educational Resources Information Center

    Lee, Bonnie K.

    2009-01-01

    Couple therapy models for pathological gambling are limited. Congruence Couple Therapy is an integrative, humanistic, systems model that addresses intrapsychic, interpersonal, intergenerational, and universal-spiritual disconnections of pathological gamblers and their spouses to shift towards congruence. Specifically, CCT's theoretical…

  12. 29 CFR 2.13 - Audiovisual coverage prohibited.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 1 2010-07-01 2010-07-01 true Audiovisual coverage prohibited. 2.13 Section 2.13 Labor Office of the Secretary of Labor GENERAL REGULATIONS Audiovisual Coverage of Administrative Hearings § 2.13 Audiovisual coverage prohibited. The Department shall not permit audiovisual coverage of the...

  13. How prior expectations shape multisensory perception.

    PubMed

    Gau, Remi; Noppeney, Uta

    2016-01-01

    The brain generates a representation of our environment by integrating signals from a common source, but segregating signals from different sources. This fMRI study investigated how the brain arbitrates between perceptual integration and segregation based on top-down congruency expectations and bottom-up stimulus-bound congruency cues. Participants were presented audiovisual movies of phonologically congruent, incongruent or McGurk syllables that can be integrated into an illusory percept (e.g. "ti" percept for visual «ki» with auditory /pi/). They reported the syllable they perceived. Critically, we manipulated participants' top-down congruency expectations by presenting McGurk stimuli embedded in blocks of congruent or incongruent syllables. Behaviorally, participants were more likely to fuse audiovisual signals into an illusory McGurk percept in congruent than incongruent contexts. At the neural level, the left inferior frontal sulcus (lIFS) showed increased activations for bottom-up incongruent relative to congruent inputs. Moreover, lIFS activations were increased for physically identical McGurk stimuli, when participants segregated the audiovisual signals and reported their auditory percept. Critically, this activation increase for perceptual segregation was amplified when participants expected audiovisually incongruent signals based on prior sensory experience. Collectively, our results demonstrate that the lIFS combines top-down prior (in)congruency expectations with bottom-up (in)congruency cues to arbitrate between multisensory integration and segregation. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Audio-visual interactions in environment assessment.

    PubMed

    Preis, Anna; Kociński, Jędrzej; Hafke-Dys, Honorata; Wrzosek, Małgorzata

    2015-08-01

    The aim of the study was to examine how visual and audio information influences audio-visual environment assessment. Original audio-visual recordings were made at seven different places in the city of Poznań. Participants of the psychophysical experiments were asked to rate, on a numerical standardized scale, the degree of comfort they would feel if they were in such an environment. The assessments of audio-visual comfort were carried out in a laboratory in four different conditions: (a) audio samples only, (b) original audio-visual samples, (c) video samples only, and (d) mixed audio-visual samples. The general results of this experiment showed a significant difference between the investigated conditions, but not for all the investigated samples. When conditions (a) and (b) were compared, adding visual information significantly improved comfort assessment in only three of the seven cases. On the other hand, the results show that the comfort assessment of audio-visual samples could be changed by manipulating the audio rather than the video part of the audio-visual sample. Finally, it seems that people differentiate audio-visual representations of a given place in the environment based on the composition of sound sources rather than on the sound level. Object identification is responsible for both landscape and soundscape grouping. Copyright © 2015. Published by Elsevier B.V.

  15. The cortical representation of the speech envelope is earlier for audiovisual speech than audio speech.

    PubMed

    Crosse, Michael J; Lalor, Edmund C

    2014-04-01

    Visual speech can greatly enhance a listener's comprehension of auditory speech when they are presented simultaneously. Efforts to determine the neural underpinnings of this phenomenon have been hampered by the limited temporal resolution of hemodynamic imaging and the fact that EEG and magnetoencephalographic data are usually analyzed in response to simple, discrete stimuli. Recent research has shown that neuronal activity in human auditory cortex tracks the envelope of natural speech. Here, we exploit this finding by estimating a linear forward-mapping between the speech envelope and EEG data and show that the latency at which the envelope of natural speech is represented in cortex is shortened by >10 ms when continuous audiovisual speech is presented compared with audio-only speech. In addition, we use a reverse-mapping approach to reconstruct an estimate of the speech stimulus from the EEG data and, by comparing the bimodal estimate with the sum of the unimodal estimates, find no evidence of any nonlinear additive effects in the audiovisual speech condition. These findings point to an underlying mechanism that could account for enhanced comprehension during audiovisual speech. Specifically, we hypothesize that low-level acoustic features that are temporally coherent with the preceding visual stream may be synthesized into a speech object at an earlier latency, which may provide an extended period of low-level processing before extraction of semantic information.
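    The "linear forward-mapping between the speech envelope and EEG data" described above is, in general form, a regression from time-lagged copies of the stimulus onto each EEG channel. The sketch below is an illustrative reduction with made-up variable names and a toy delayed signal in place of real EEG; it is not the authors' implementation, and the ridge regularizer is an assumption.

    ```python
    import numpy as np

    def lagged_design(stim, lags):
        """Design matrix whose columns are time-lagged copies of the stimulus (zero-padded)."""
        n = len(stim)
        X = np.zeros((n, len(lags)))
        for j, lag in enumerate(lags):
            if lag >= 0:
                X[lag:, j] = stim[:n - lag]
            else:
                X[:n + lag, j] = stim[-lag:]
        return X

    def fit_forward_model(stim, eeg, lags, ridge=1.0):
        """Ridge-regularized least squares from stimulus envelope to one EEG channel:
        w = (X'X + aI)^(-1) X'y, giving one weight per time lag."""
        X = lagged_design(stim, lags)
        XtX = X.T @ X + ridge * np.eye(X.shape[1])
        return np.linalg.solve(XtX, X.T @ eeg)

    # Toy check: a synthetic "EEG" channel that is the envelope delayed by
    # 3 samples should produce a weight peak at lag 3.
    rng = np.random.default_rng(0)
    env = rng.standard_normal(2000)
    eeg = np.roll(env, 3)
    eeg[:3] = 0.0
    w = fit_forward_model(env, eeg, lags=list(range(10)), ridge=0.01)
    peak_lag = int(np.argmax(np.abs(w)))  # → 3
    ```

    The latency effect reported in the abstract corresponds, in this picture, to where along the lag axis the fitted weights peak; the reverse (stimulus-reconstruction) mapping swaps the roles of stimulus and response in the same regression.
    
    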

  16. Organizational goal congruence and job attitudes revisited.

    DOT National Transportation Integrated Search

    1992-02-01

    Vancouver and Schmitt (1991) operationalized person-organization fit in terms of goal congruence and reported that goal congruence scores were positively related to favorable job attitudes. The purpose of the present study was to replicate and extend...

  17. Congruence and Welsh-English Code-Switching

    ERIC Educational Resources Information Center

    Deuchar, Margaret

    2005-01-01

    This paper aims to contribute to elucidating the notion of congruence in code-switching with particular reference to Welsh-English data. It has been suggested that a sufficient degree of congruence or equivalence between the constituents of one language and another is necessary in order for code-switching to take place. We shall distinguish…

  18. Comparison of Congruence Judgment and Auditory Localization Tasks for Assessing the Spatial Limits of Visual Capture

    PubMed Central

    Bosen, Adam K.; Fleming, Justin T.; Brown, Sarah E.; Allen, Paul D.; O'Neill, William E.; Paige, Gary D.

    2016-01-01

    Vision typically has better spatial accuracy and precision than audition, and as a result often captures auditory spatial perception when visual and auditory cues are presented together. One determinant of visual capture is the amount of spatial disparity between auditory and visual cues: when disparity is small visual capture is likely to occur, and when disparity is large visual capture is unlikely. Previous experiments have used two methods to probe how visual capture varies with spatial disparity. First, congruence judgment assesses perceived unity between cues by having subjects report whether or not auditory and visual targets came from the same location. Second, auditory localization assesses the graded influence of vision on auditory spatial perception by having subjects point to the remembered location of an auditory target presented with a visual target. Previous research has shown that when both tasks are performed concurrently they produce similar measures of visual capture, but this may not hold when tasks are performed independently. Here, subjects alternated between tasks independently across three sessions. A Bayesian inference model of visual capture was used to estimate perceptual parameters for each session, which were compared across tasks. Results demonstrated that the range of audio-visual disparities over which visual capture was likely to occur were narrower in auditory localization than in congruence judgment, which the model indicates was caused by subjects adjusting their prior expectation that targets originated from the same location in a task-dependent manner. PMID:27815630

  19. Audiovisuals.

    ERIC Educational Resources Information Center

    Aviation/Space, 1980

    1980-01-01

    Presents information on a variety of audiovisual materials from government and nongovernment sources. Topics include aerodynamics and conditions of flight, airports, navigation, careers, history, medical factors, weather, films for classroom use, and others. (Author/SA)

  20. False recollections and the congruence of suggested information.

    PubMed

    Pérez-Mata, Nieves; Diges, Margarita

    2007-10-01

    In two experiments, congruence of postevent information was manipulated in order to explore its role in the misinformation effect. Congruence of a detail was empirically defined as its compatibility (or match) with a concrete event. Based on this idea it was predicted that a congruent suggested detail would be more easily accepted than an incongruent one. In Experiments 1 and 2 two factors (congruence and truth value) were manipulated within-subjects, and a two-alternative forced-choice recognition test was used followed by phenomenological judgements. Furthermore, in the second experiment participants were asked to describe four critical items (two seen and two suggested details) to explore differences and similarities between real and unreal memories. Both experiments clearly showed that the congruence of false information caused a robust misinformation effect, so that congruent information was much more accepted than false incongruent information. Furthermore, congruence increased the descriptive and phenomenological similarities between perceived and suggested memories, thus contributing to the misleading effect.

  1. Audiovisual integration facilitates monkeys' short-term memory.

    PubMed

    Bigelow, James; Poremba, Amy

    2016-07-01

    Many human behaviors are known to benefit from audiovisual integration, including language and communication, recognizing individuals, social decision making, and memory. Exceptionally little is known about the contributions of audiovisual integration to behavior in other primates. The current experiment investigated whether short-term memory in nonhuman primates is facilitated by the audiovisual presentation format. Three macaque monkeys that had previously learned an auditory delayed matching-to-sample (DMS) task were trained to perform a similar visual task, after which they were tested with a concurrent audiovisual DMS task with equal proportions of auditory, visual, and audiovisual trials. Parallel to outcomes in human studies, accuracy was higher and response times were faster on audiovisual trials than either unisensory trial type. Unexpectedly, two subjects exhibited superior unimodal performance on auditory trials, a finding that contrasts with previous studies, but likely reflects their training history. Our results provide the first demonstration of a bimodal memory advantage in nonhuman primates, lending further validation to their use as a model for understanding audiovisual integration and memory processing in humans.

  2. Factors Associated with Congruence Between Preferred and Actual Place of Death

    PubMed Central

    Bell, Christina L.; Somogyi-Zalud, Emese; Masaki, Kamal H.

    2009-01-01

    Congruence between preferred and actual place of death may be an essential component in terminal care. Most patients prefer a home death, but many patients do not die in their preferred location. Specialized (physician, hospice and palliative) home care visits may increase home deaths, but factors associated with congruence have not been systematically reviewed. This study sought to review the extent of congruence reported in the literature, and examine factors that may influence congruence. In July 2009, a comprehensive literature search was performed using MEDLINE, Psych Info, CINAHL, and Web of Science. Reference lists, related articles, and the past five years of six palliative care journals were also searched. Overall congruence rates (percentage of met preferences for all locations of death) were calculated for each study using reported data to allow cross-study comparison. Eighteen articles described 30% to 91% congruence. Eight specialized home care studies reported 59% to 91% congruence. A physician-led home care program reported 91% congruence. Of the 10 studies without specialized home care for all patients, seven reported 56% to 71% congruence and most reported unique care programs. Of the remaining three studies without specialized home care for all patients, two reported 43% to 46% congruence among hospital inpatients, and one elicited patient preference “if everything were possible,” with 30% congruence. Physician support, hospice enrollment, and family support improved congruence in multiple studies. Research in this important area must consider potential sources of bias, the method of eliciting patient preference, and the absence of a single ideal place of death. PMID:20116205

  3. Audiovisual communication of object-names improves the spatial accuracy of recalled object-locations in topographic maps.

    PubMed

    Lammert-Siepmann, Nils; Bestgen, Anne-Kathrin; Edler, Dennis; Kuchinke, Lars; Dickmann, Frank

    2017-01-01

    Knowing the correct location of a specific object learned from a (topographic) map is fundamental for orientation and navigation tasks. Spatial reference systems, such as coordinates or cardinal directions, are helpful tools for any geometric localization of positions that aims to be as exact as possible. Considering modern visualization techniques of multimedia cartography, map elements transferred through the auditory channel can be added easily. Audiovisual approaches have been discussed in the cartographic community for many years. However, the effectiveness of audiovisual map elements for map use has hardly been explored so far. Within an interdisciplinary (cartography-cognitive psychology) research project, it is examined whether map users remember object-locations better if they do not just read the corresponding place names, but also listen to them as voice recordings. This approach is based on the idea that learning object-identities influences learning object-locations, which is crucial for map-reading tasks. The results of an empirical study show that the additional auditory communication of object names not only improves memory for the names (object-identities), but also for the spatial accuracy of their corresponding object-locations. The audiovisual communication of semantic attribute information of a spatial object seems to improve the binding of object-identity and object-location, which enhances the spatial accuracy of object-location memory.

  4. Audiovisual communication of object-names improves the spatial accuracy of recalled object-locations in topographic maps

    PubMed Central

    Bestgen, Anne-Kathrin; Edler, Dennis; Kuchinke, Lars; Dickmann, Frank

    2017-01-01

    Knowing the correct location of a specific object learned from a (topographic) map is fundamental for orientation and navigation tasks. Spatial reference systems, such as coordinates or cardinal directions, are helpful tools for any geometric localization of positions that aims to be as exact as possible. Considering modern visualization techniques of multimedia cartography, map elements transferred through the auditory channel can be added easily. Audiovisual approaches have been discussed in the cartographic community for many years. However, the effectiveness of audiovisual map elements for map use has hardly been explored so far. Within an interdisciplinary (cartography-cognitive psychology) research project, it is examined whether map users remember object-locations better if they do not just read the corresponding place names, but also listen to them as voice recordings. This approach is based on the idea that learning object-identities influences learning object-locations, which is crucial for map-reading tasks. The results of an empirical study show that the additional auditory communication of object names not only improves memory for the names (object-identities), but also for the spatial accuracy of their corresponding object-locations. The audiovisual communication of semantic attribute information of a spatial object seems to improve the binding of object-identity and object-location, which enhances the spatial accuracy of object-location memory. PMID:29059237

  5. Factors associated with congruence between preferred and actual place of death.

    PubMed

    Bell, Christina L; Somogyi-Zalud, Emese; Masaki, Kamal H

    2010-03-01

    Congruence between preferred and actual place of death may be an essential component in terminal care. Most patients prefer a home death, but many patients do not die in their preferred location. Specialized (physician, hospice, and palliative) home care visits may increase home deaths, but factors associated with congruence have not been systematically reviewed. This study sought to review the extent of congruence reported in the literature and examine factors that may influence congruence. In July 2009, a comprehensive literature search was performed using MEDLINE, PsychInfo, CINAHL, and Web of Science. Reference lists, related articles, and the past five years of six palliative care journals were also searched. Overall congruence rates (percentage of met preferences for all locations of death) were calculated for each study using reported data to allow cross-study comparison. Eighteen articles described 30%-91% congruence. Eight specialized home care studies reported 59%-91% congruence. A physician-led home care program reported 91% congruence. Of the 10 studies without specialized home care for all patients, seven reported 56%-71% congruence and most reported unique care programs. Of the remaining three studies without specialized home care for all patients, two reported 43%-46% congruence among hospital inpatients, and one elicited patient preference "if everything were possible," with 30% congruence. Physician support, hospice enrollment, and family support improved congruence in multiple studies. Research in this important area must consider potential sources of bias, the method of eliciting patient preference, and the absence of a single ideal place of death. (c) 2010 U.S. Cancer Pain Relief Committee. Published by Elsevier Inc. All rights reserved.

  6. Congruence Approximations for Entropy Endowed Hyperbolic Systems

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Building upon the standard symmetrization theory for hyperbolic systems of conservation laws, congruence properties of the symmetrized system are explored. These congruence properties suggest variants of several stabilized numerical discretization procedures for hyperbolic equations (upwind finite-volume, Galerkin least-squares, discontinuous Galerkin) that benefit computationally from congruence approximation. Specifically, it becomes straightforward to construct the spatial discretization and Jacobian linearization for these schemes (given a small amount of derivative information) for possible use in Newton's method, discrete optimization, homotopy algorithms, etc. Some examples will be given for the compressible Euler equations and the nonrelativistic MHD equations using linear and quadratic spatial approximation.
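    The "standard symmetrization theory" the abstract builds on is, in its usual (Godunov/Mock) form, a change to entropy variables; a hedged sketch of the structure whose congruence properties are being exploited (notation generic, not the paper's):

    ```latex
    % Conservation-law system and its entropy symmetrization:
    u_t + \sum_i f_i(u)_{x_i} = 0
    \quad\xrightarrow{\; v \,=\, \partial \eta / \partial u \;}\quad
    \widetilde{A}_0\, v_t + \sum_i \widetilde{A}_i\, v_{x_i} = 0,
    \qquad
    \widetilde{A}_0 = \frac{\partial u}{\partial v}\ \text{(SPD)},\quad
    \widetilde{A}_i = \frac{\partial f_i}{\partial v}\ \text{(symmetric)} .
    ```

    A congruence transform $C^{\mathsf{T}} \widetilde{A}\, C$ preserves symmetry, and preserves positive definiteness of $\widetilde{A}_0$; this is what makes congruence-based approximations of the symmetrized matrices attractive for building the stabilized discretizations (upwind finite-volume, Galerkin least-squares, discontinuous Galerkin) and their Jacobian linearizations mentioned above.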

  7. 29 CFR 2.12 - Audiovisual coverage permitted.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 1 2010-07-01 2010-07-01 true Audiovisual coverage permitted. 2.12 Section 2.12 Labor Office of the Secretary of Labor GENERAL REGULATIONS Audiovisual Coverage of Administrative Hearings § 2.12 Audiovisual coverage permitted. The following are the types of hearings where the Department...

  8. Are Tutor Behaviors in Problem-Based Learning Stable? A Generalizability Study of Social Congruence, Expertise and Cognitive Congruence

    ERIC Educational Resources Information Center

    Williams, Judith C.; Alwis, W. A. M.; Rotgans, Jerome I.

    2011-01-01

    The purpose of this study was to investigate the stability of three distinct tutor behaviors (1) use of subject-matter expertise, (2) social congruence and (3) cognitive congruence, in a problem-based learning (PBL) environment. The data comprised the input from 16,047 different students to a survey of 762 tutors administered in three consecutive…

  9. Future-saving audiovisual content for Data Science: Preservation of geoinformatics video heritage with the TIB|AV-Portal

    NASA Astrophysics Data System (ADS)

    Löwe, Peter; Plank, Margret; Ziedorn, Frauke

    2015-04-01

    of Science and Technology. The web-based portal allows for extended search capabilities based on enhanced metadata derived by automated video analysis. By combining state-of-the-art multimedia retrieval techniques such as speech-, text-, and image recognition with semantic analysis, content-based access to videos at the segment level is provided. Further, by using the open standard Media Fragment Identifier (MFID), a citable Digital Object Identifier is displayed for each video segment. In addition to the continuously growing footprint of contemporary content, the importance of vintage audiovisual information needs to be considered: This paper showcases the successful application of the TIB|AV-Portal in the preservation and provision of a newly discovered version of a GRASS GIS promotional video produced by the US Army Corps of Engineers Construction Engineering Research Laboratory (US-CERL) in 1987. The video provides insight into the constraints of the very early days of the GRASS GIS project, the oldest active Free and Open Source Software (FOSS) GIS project, with over thirty years of development. GRASS itself has turned into a collaborative scientific platform, a repository of scientific peer-reviewed code, and an algorithm/knowledge hub for future generations of scientists [1]. This is a reference case for future preservation activities regarding semantic-enhanced Web 2.0 content from geospatial software projects within Academia and beyond. References: [1] Chemin, Y., Petras, V., Petrasova, A., Landa, M., Gebbert, S., Zambelli, P., Neteler, M., Löwe, P.: GRASS GIS: a peer-reviewed scientific platform and future research repository, Geophysical Research Abstracts, Vol. 17, EGU2015-8314-1, 2015 (submitted)

  10. Axial linear patellar displacement: a new measurement of patellofemoral congruence.

    PubMed

    Urch, Scott E; Tritle, Benjamin A; Shelbourne, K Donald; Gray, Tinker

    2009-05-01

    The tools for measuring the congruence angle with digital radiography software can be difficult to use; therefore, the authors sought to develop a new, easy, and reliable method for measuring patellofemoral congruence. Hypothesis: the linear displacement measurement will correlate well with the congruence angle measurement. Cohort study (diagnosis); Level of evidence, 2. On Merchant view radiographs obtained digitally, the authors measured the congruence angle and a new linear displacement measurement on preoperative and postoperative radiographs of 31 patients who suffered unilateral patellar dislocations and 100 uninjured subjects. The linear displacement measurement was obtained by drawing a reference line across the medial and lateral trochlear facets. Perpendicular lines were drawn from the depth of the sulcus through the reference line and from the apex of the posterior tip of the patella through the reference line. The distance between the perpendicular lines was the linear displacement measurement. The measurements were obtained twice at different sittings. The observer was blinded as to the previous measurements to establish reliability. Measurements were compared to determine whether the linear displacement measurement correlated with the congruence angle. Intraobserver reliability was above r(2) = .90 for all measurements. In patients with patellar dislocations, the mean congruence angle preoperatively was 33.5 degrees, compared with 12.1 mm for linear displacement (r(2) = .92). The mean congruence angle postoperatively was 11.2 degrees, compared with 4.0 mm for linear displacement (r(2) = .89). For normal subjects, the mean congruence angle was -3 degrees and the mean linear displacement was 0.2 mm. The linear displacement measurement was found to correlate with congruence angle measurements and may be an easy and useful tool for clinicians to evaluate patellofemoral
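    The construction described (a reference line across the trochlear facets, with perpendiculars dropped from the sulcus and from the patellar apex) reduces geometrically to a difference of scalar projections along the reference line. The sketch below illustrates that reduction with hypothetical 2D landmark coordinates; the function name and the sample points are invented for illustration, not taken from the study.

    ```python
    import numpy as np

    def linear_displacement(facet_med, facet_lat, sulcus, patella_apex):
        """Signed distance, measured along the trochlear reference line, between
        the perpendicular through the sulcus depth and the perpendicular through
        the patellar apex (positive = apex displaced toward the lateral facet)."""
        ref_dir = np.asarray(facet_lat, float) - np.asarray(facet_med, float)
        ref_dir /= np.linalg.norm(ref_dir)
        # Each perpendicular meets the reference line at that landmark's
        # projection, so the gap between perpendiculars is a projection difference.
        s = np.dot(np.asarray(sulcus, float) - facet_med, ref_dir)
        p = np.dot(np.asarray(patella_apex, float) - facet_med, ref_dir)
        return p - s

    # Hypothetical landmarks (mm): facets 40 mm apart, apex 4 mm lateral
    # to the sulcus projection.
    d = linear_displacement((0, 0), (40, 0), (20, -5), (24, 8))  # → 4.0
    ```

    Note that the result is independent of how far the sulcus and apex sit off the reference line, which matches the intent of the measurement: only displacement along the facet line counts.
    
    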

  11. Body Build Satisfaction and the Congruency of Body Build Perceptions.

    ERIC Educational Resources Information Center

    Hankins, Norman E.; Bailey, Roger C.

    1979-01-01

    Females were administered the somatotype rating scale. Satisfied subjects showed greater congruency between their own and wished-for body build, and greater congruency between their own and friend/date body builds, but less congruency between their own body build and the female stereotype. (Author/BEF)

  12. Measuring Stratigraphic Congruence Across Trees, Higher Taxa, and Time.

    PubMed

    O'Connor, Anne; Wills, Matthew A

    2016-09-01

    The congruence between the order of cladistic branching and the first appearance dates of fossil lineages can be quantified using a variety of indices. Good matching is a prerequisite for the accurate time calibration of trees, while the distribution of congruence indices across large samples of cladograms has underpinned claims about temporal and taxonomic patterns of completeness in the fossil record. The most widely used stratigraphic congruence indices are the stratigraphic consistency index (SCI), the modified Manhattan stratigraphic measure (MSM*), and the gap excess ratio (GER) (plus its derivatives; the topological GER and the modified GER). Many factors are believed to variously bias these indices, with several empirical and simulation studies addressing some subset of the putative interactions. This study combines both approaches to quantify the effects (on all five indices) of eight variables reasoned to constrain the distribution of possible values (the number of taxa, tree balance, tree resolution, range of first occurrence (FO) dates, center of gravity of FO dates, the variability of FO dates, percentage of extant taxa, and percentage of taxa with no fossil record). Our empirical data set comprised 647 published animal and plant cladograms spanning the entire Phanerozoic, and for these data we also modeled the effects of mean age of FOs (as a proxy for clade age), the taxonomic rank of the clade, and the higher taxonomic group to which it belonged. The center of gravity of FO dates had not been investigated hitherto, and this was found to correlate most strongly with some measures of stratigraphic congruence in our empirical study (top-heavy clades had better congruence). The modified GER was the index least susceptible to bias. We found significant differences across higher taxa for all indices; arthropods had lower congruence and tetrapods higher congruence. Stratigraphic congruence, however measured, also varied throughout the Phanerozoic, reflecting…
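As a point of reference for the indices compared in this record, the gap excess ratio is conventionally defined as GER = 1 - (MIG - Gmin)/(Gmax - Gmin), where MIG is the tree's minimum implied gap and Gmin/Gmax bound the total gap achievable for the same first-occurrence dates. A minimal sketch under that standard definition (the function name and input values are illustrative, not taken from this study):

```python
def gap_excess_ratio(mig, g_min, g_max):
    """Gap excess ratio (GER): rescales the minimum implied gap (MIG) of a
    cladogram between the smallest (g_min) and largest (g_max) total gap any
    tree could imply for the same first-occurrence dates.
    1.0 = best possible stratigraphic congruence, 0.0 = worst possible."""
    if g_max == g_min:
        raise ValueError("g_max and g_min must differ")
    return 1.0 - (mig - g_min) / (g_max - g_min)

# A tree whose implied gap equals the theoretical minimum is maximally congruent.
print(gap_excess_ratio(mig=12.0, g_min=12.0, g_max=48.0))  # → 1.0
```

The topological and modified GER variants mentioned in the abstract rescale the same quantities differently; the sketch covers only the basic index.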

  13. Congruency sequence effect without feature integration and contingency learning.

    PubMed

    Kim, Sanga; Cho, Yang Seok

    2014-06-01

    The magnitude of congruency effects, such as the flanker-compatibility effects, has been found to vary as a function of the congruency of the previous trial. Some studies have suggested that this congruency sequence effect is attributable to stimulus and/or response priming, and/or contingency learning, whereas other studies have suggested that the control process triggered by conflict modulates the congruency effect. The present study examined whether sequential modulation can occur without stimulus and response repetitions and contingency learning. Participants were asked to perform two color flanker-compatibility tasks alternately in a trial-by-trial manner, with four fingers of one hand in Experiment 1 and with the index and middle fingers of two hands in Experiment 2, to avoid stimulus and response repetitions and contingency learning. A significant congruency sequence effect was obtained between the congruencies of the two tasks in Experiment 1 but not in Experiment 2. These results provide evidence for the idea that the sequential modulation is, at least in part, an outcome of the top-down control process triggered by conflict, which is specific to response mode. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Measuring Stratigraphic Congruence Across Trees, Higher Taxa, and Time

    PubMed Central

    O'Connor, Anne; Wills, Matthew A.

    2016-01-01

    The congruence between the order of cladistic branching and the first appearance dates of fossil lineages can be quantified using a variety of indices. Good matching is a prerequisite for the accurate time calibration of trees, while the distribution of congruence indices across large samples of cladograms has underpinned claims about temporal and taxonomic patterns of completeness in the fossil record. The most widely used stratigraphic congruence indices are the stratigraphic consistency index (SCI), the modified Manhattan stratigraphic measure (MSM*), and the gap excess ratio (GER) (plus its derivatives; the topological GER and the modified GER). Many factors are believed to variously bias these indices, with several empirical and simulation studies addressing some subset of the putative interactions. This study combines both approaches to quantify the effects (on all five indices) of eight variables reasoned to constrain the distribution of possible values (the number of taxa, tree balance, tree resolution, range of first occurrence (FO) dates, center of gravity of FO dates, the variability of FO dates, percentage of extant taxa, and percentage of taxa with no fossil record). Our empirical data set comprised 647 published animal and plant cladograms spanning the entire Phanerozoic, and for these data we also modeled the effects of mean age of FOs (as a proxy for clade age), the taxonomic rank of the clade, and the higher taxonomic group to which it belonged. The center of gravity of FO dates had not been investigated hitherto, and this was found to correlate most strongly with some measures of stratigraphic congruence in our empirical study (top-heavy clades had better congruence). The modified GER was the index least susceptible to bias. We found significant differences across higher taxa for all indices; arthropods had lower congruence and tetrapods higher congruence. Stratigraphic congruence—however measured—also varied throughout the Phanerozoic

  15. Is early osteoarthritis associated with differences in joint congruence?

    PubMed Central

    Conconi, Michele; Halilaj, Eni; Castelli, Vincenzo Parenti; Crisco, Joseph J.

    2014-01-01

    Previous studies suggest that osteoarthritis (OA) is related to abnormal or excessive articular contact stress. The peak pressure resulting from an applied load is determined by many factors, among which are the shape and the relative position and orientation of the articulating surfaces or, in more common nomenclature, joint congruence. It has been hypothesized that anatomical differences may be among the causes of OA. Individuals with less congruent joints would likely develop higher peak pressures and thus would be more exposed to the risk of OA onset. The aim of this work was to determine whether the congruence of the first carpometacarpal (CMC) joint differs with the early onset of OA or with sex, as the female population has a higher incidence of OA. Fifty-nine subjects without and 38 subjects with early OA were CT-scanned with their dominant or arthritic hand in a neutral configuration. The proposed measure of joint congruence is both shape and size dependent. The correlation of joint congruence with pathology and sex was analyzed both before and after normalization for joint size. We found a significant correlation between joint congruence and sex due to the sex-related differences in size. The observed correlation disappeared after normalization. Although joint congruence increased with size, it did not correlate significantly with the onset of early OA. Differences in joint congruence in this population may not be a primary cause of OA onset or predisposition, at least for the CMC joint. PMID:25468667

  16. The Role of RT Carry-Over for Congruence Sequence Effects in Masked Priming

    ERIC Educational Resources Information Center

    Huber-Huber, Christoph; Ansorge, Ulrich

    2017-01-01

    The present study disentangles 2 sources of the congruence sequence effect with masked primes: congruence and response time of the previous trial (reaction time [RT] carry-over). Using arrows as primes and targets and a metacontrast masking procedure we found congruence as well as congruence sequence effects. In addition, congruence sequence…

  17. An Investigation of the Sampling Distribution of the Congruence Coefficient.

    ERIC Educational Resources Information Center

    Broadbooks, Wendy J.; Elmore, Patricia B.

    This study developed and investigated an empirical sampling distribution of the congruence coefficient. The effects of sample size, number of variables, and population value of the congruence coefficient on the sampling distribution of the congruence coefficient were examined. Sample data were generated on the basis of the common factor model and…
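The congruence coefficient examined in this record is commonly Tucker's phi, the cosine similarity between two vectors of factor loadings. A minimal sketch assuming that standard definition (the loading vectors are illustrative, not drawn from the study's generated data):

```python
import math

def congruence_coefficient(x, y):
    """Tucker's congruence coefficient (phi): the cosine of the angle between
    two factor-loading vectors. 1.0 indicates proportional (identical-pattern)
    loadings; values near 0 indicate unrelated loading patterns."""
    num = sum(a * b for a, b in zip(x, y))
    den = math.sqrt(sum(a * a for a in x) * sum(b * b for b in y))
    return num / den

# Proportional loading patterns are maximally congruent.
print(round(congruence_coefficient([0.8, 0.6, 0.4], [0.4, 0.3, 0.2]), 6))  # → 1.0
```

Because phi is scale-invariant, an empirical sampling distribution like the one described above is typically built by resampling loadings and recomputing this ratio.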

  18. Building Intuitive Arguments for the Triangle Congruence Conditions

    ERIC Educational Resources Information Center

    Piatek-Jimenez, Katrina

    2008-01-01

    The triangle congruence conditions are a central focus to nearly any course in Euclidean geometry. The author presents a hands-on activity that uses straws and pipe cleaners to explore and justify the triangle congruence conditions. (Contains 4 figures.)

  19. Geodesic congruences in warped spacetimes

    NASA Astrophysics Data System (ADS)

    Ghosh, Suman; Dasgupta, Anirvan; Kar, Sayan

    2011-04-01

    In this article, we explore the kinematics of timelike geodesic congruences in warped five-dimensional bulk spacetimes, with and without thick or thin branes. Beginning with geodesic flows in the Randall-Sundrum anti-de Sitter geometry without and with branes, we find analytical expressions for the expansion scalar and comment on the effects of including thin branes on its evolution. Later, we move on to congruences in more general warped bulk geometries with a cosmological thick brane and a time-dependent extra dimensional scale. Using analytical expressions for the velocity field, we interpret the expansion, shear and rotation (ESR) along the flows, as functions of the extra dimensional coordinate. The evolution of a cross-sectional area orthogonal to the congruence, as seen from a local observer’s point of view, is also shown graphically. Finally, the Raychaudhuri and geodesic equations in backgrounds with a thick brane are solved numerically in order to figure out the role of initial conditions (prescribed on the ESR) and spacetime curvature on the evolution of the ESR.

  20. The association between patient-therapist MATRIX congruence and treatment outcome.

    PubMed

    Mendlovic, Shlomo; Saad, Amit; Roll, Uri; Ben Yehuda, Ariel; Tuval-Mashiah, Rivka; Atzil-Slonim, Dana

    2018-03-14

    The present study aimed to examine the association between the patient-therapist micro-level congruence/incongruence ratio and psychotherapeutic outcome. Nine good-outcome and nine poor-outcome psychodynamic treatments (segregated by comparing pre- and post-treatment BDI-II) were analyzed (N = 18) moment by moment using the MATRIX (total number of MATRIX codes analyzed = 11,125). MATRIX congruence was defined as similar adjacent MATRIX codes. The congruence/incongruence ratio tended to increase as the treatment progressed only in good-outcome treatments. Progression of the MATRIX codes' congruence/incongruence ratio is thus associated with a good psychotherapy outcome.

  1. Conflict Background Triggered Congruency Sequence Effects in Graphic Judgment Task

    PubMed Central

    Zhao, Liang; Wang, Yonghui

    2013-01-01

    Congruency sequence effects refer to the reduction of congruency effects following an incongruent trial relative to a congruent trial. The conflict monitoring account, one of the most influential explanations of this effect, assumes that the sequential modulations are evoked by response conflict. The present study aimed at exploring congruency sequence effects in the absence of response conflict. We found that congruency sequence effects occurred in a graphic judgment task, in which the conflict stimuli acted as irrelevant information. The findings reveal that processing task-irrelevant conflict stimulus features can also induce sequential modulations of interference. The results do not support the conflict monitoring interpretation and instead favor a feature integration account, in which congruency sequence effects are attributed to repetitions of stimulus and response features. PMID:23372766

  2. Predictors of symptom congruence among patients with acute myocardial infarction.

    PubMed

    Fox-Wasylyshyn, Susan

    2012-01-01

    The extent of congruence between one's symptom experience and preconceived ideas about the nature of myocardial infarction symptoms (ie, symptom congruence) can influence when acute myocardial infarction (AMI) patients seek medical care. Lengthy delays impede timely receipt of medical interventions and result in greater morbidity and mortality. However, little is known about the factors that contribute to symptom congruence. Hence, the purpose of this study was to examine how AMI patients' symptom experiences and their demographic and clinical characteristics contribute to symptom congruence. Secondary data analyses were performed on interview data collected from 135 AMI patients. Hierarchical multiple regression analyses were used to examine how specific symptom attributes and demographic and clinical characteristics contribute to symptom congruence. Chest pain/discomfort and other symptom variables (type and location) were included in step 1 of the analysis, whereas symptom severity and demographic and clinical factors were included in step 2. In a second analysis, quality descriptors of discomfort replaced chest pain/discomfort in step 1. Although chest pain/discomfort and the quality descriptors of heaviness and cutting were significant in step 1 of their respective analyses, all became nonsignificant when the variables in step 2 were added to the analyses. Severe discomfort (β = .29, P < .001), history of AMI (β = .21, P < .01), and male sex (β = .17, P < .05) were significant predictors of symptom congruence in the first analysis. Only severe discomfort (β = .23, P < .01) and history of AMI (β = .17, P < .05) were predictive of symptom congruence in the second analysis. Although the location and quality of discomfort were important components of symptom congruence, symptom severity outweighed their importance. Nonsevere symptoms were less likely to meet the expectations of AMI symptoms held by those experiencing this event. Those without a previous…
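The stepwise structure described in this record (step-1 symptom variables, step-2 severity and demographic/clinical factors) is a hierarchical regression, in which nested models are compared on explained variance. A minimal sketch of that comparison using synthetic, illustrative data (not the study's 135-patient data set; variable names are ours):

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit of y on X (intercept included)."""
    Xc = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    resid = y - Xc @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(0)
n = 135  # same sample size as the study; the data themselves are synthetic
chest = rng.integers(0, 2, n).astype(float)  # hypothetical step-1 predictor
severity = rng.normal(size=n)                # hypothetical step-2 predictor
y = 0.3 * chest + 0.6 * severity + rng.normal(size=n)

r2_step1 = r_squared(chest[:, None], y)                      # step 1 alone
r2_step2 = r_squared(np.column_stack([chest, severity]), y)  # step 1 + step 2
print(r2_step2 >= r2_step1)  # → True: a nested step can only add explained variance
```

The study's substantive claim corresponds to step-1 coefficients shrinking to nonsignificance once the step-2 block enters the model.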

  3. Is early osteoarthritis associated with differences in joint congruence?

    PubMed

    Conconi, Michele; Halilaj, Eni; Parenti Castelli, Vincenzo; Crisco, Joseph J

    2014-12-18

    Previous studies suggest that osteoarthritis (OA) is related to abnormal or excessive articular contact stress. The peak pressure resulting from an applied load is determined by many factors, among which are the shape and the relative position and orientation of the articulating surfaces or, in more common nomenclature, joint congruence. It has been hypothesized that anatomical differences may be among the causes of OA. Individuals with less congruent joints would likely develop higher peak pressures and thus would be more exposed to the risk of OA onset. The aim of this work was to determine whether the congruence of the first carpometacarpal (CMC) joint differs with the early onset of OA or with sex, as the female population has a higher incidence of OA. Fifty-nine subjects without and 38 subjects with early OA were CT-scanned with their dominant or arthritic hand in a neutral configuration. The proposed measure of joint congruence is both shape and size dependent. The correlation of joint congruence with pathology and sex was analyzed both before and after normalization for joint size. We found a significant correlation between joint congruence and sex due to the sex-related differences in size. The observed correlation disappeared after normalization. Although joint congruence increased with size, it did not correlate significantly with the onset of early OA. Differences in joint congruence in this population may not be a primary cause of OA onset or predisposition, at least for the CMC joint. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. A General Audiovisual Temporal Processing Deficit in Adult Readers With Dyslexia.

    PubMed

    Francisco, Ana A; Jesse, Alexandra; Groen, Margriet A; McQueen, James M

    2017-01-01

    Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of audiovisual speech and nonspeech stimuli, their time window of audiovisual integration for speech (using incongruent /aCa/ syllables), and their audiovisual perception of phonetic categories. Adult readers with dyslexia showed less sensitivity to audiovisual simultaneity than typical readers for both speech and nonspeech events. We found no differences between readers with dyslexia and typical readers in the temporal window of integration for audiovisual speech or in the audiovisual perception of phonetic categories. The results suggest an audiovisual temporal deficit in dyslexia that is not specific to speech-related events. But the differences found for audiovisual temporal sensitivity did not translate into a deficit in audiovisual speech perception. Hence, there seems to be a hiatus between simultaneity judgment and perception, suggesting a multisensory system that uses different mechanisms across tasks. Alternatively, it is possible that the audiovisual deficit in dyslexia is only observable when explicit judgments about audiovisual simultaneity are required.

  5. Govt. Pubs: U.S. Government Produced Audiovisual Materials.

    ERIC Educational Resources Information Center

    Korman, Richard

    1981-01-01

    Describes the availability of United States government-produced audiovisual materials and discusses two audiovisual clearinghouses--the National Audiovisual Center (NAC) and the National Library of Medicine (NLM). Finding aids made available by NAC, NLM, and other government agencies are mentioned. NAC and the U.S. Government Printing Office…

  6. Audiovisual perception in amblyopia: A review and synthesis.

    PubMed

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-05-17

    Amblyopia is a common developmental sensory disorder that has been extensively and systematically investigated as a unisensory visual impairment. However, its effects are increasingly recognized to extend beyond vision to the multisensory domain. Indeed, amblyopia is associated with altered cross-modal interactions in audiovisual temporal perception, audiovisual spatial perception, and audiovisual speech perception. Furthermore, although the visual impairment in amblyopia is typically unilateral, the multisensory abnormalities tend to persist even when viewing with both eyes. Knowledge of the extent and mechanisms of the audiovisual impairments in amblyopia, however, remains in its infancy. This work aims to review our current understanding of audiovisual processing and integration deficits in amblyopia, and considers the possible mechanisms underlying these abnormalities. Copyright © 2018. Published by Elsevier Ltd.

  7. The effect of synesthetic associations between the visual and auditory modalities on the Colavita effect.

    PubMed

    Stekelenburg, Jeroen J; Keetels, Mirjam

    2016-05-01

    The Colavita effect refers to the phenomenon that, when confronted with an audiovisual stimulus, observers more often report having perceived the visual than the auditory component. The Colavita effect depends on low-level stimulus factors such as spatial and temporal proximity between the unimodal signals. Here, we examined whether the Colavita effect is modulated by synesthetic congruency between visual size and auditory pitch. If the Colavita effect depends on synesthetic congruency, we expect a larger Colavita effect for synesthetically congruent size/pitch combinations (large visual stimulus/low-pitched tone; small visual stimulus/high-pitched tone) than for synesthetically incongruent ones (large visual stimulus/high-pitched tone; small visual stimulus/low-pitched tone). Participants had to identify the stimulus type (visual, auditory, or audiovisual). The study replicated the Colavita effect: participants more often reported the visual than the auditory component of the audiovisual stimuli. Synesthetic congruency had, however, no effect on the magnitude of the Colavita effect. EEG recordings to congruent and incongruent audiovisual pairings showed a late frontal congruency effect at 400-550 ms and an occipitoparietal effect at 690-800 ms, with neural sources in the anterior cingulate and premotor cortex for the 400- to 550-ms window, and in the premotor cortex, inferior parietal lobule, and posterior middle temporal gyrus for the 690- to 800-ms window. The electrophysiological data show that synesthetic congruency was probably detected in a processing stage subsequent to the Colavita effect. We conclude that, in a modality detection task, the Colavita effect can be modulated by low-level structural factors but not by higher-order associations between auditory and visual inputs.

  8. Balloons and bavoons versus spikes and shikes: ERPs reveal shared neural processes for shape-sound-meaning congruence in words, and shape-sound congruence in pseudowords.

    PubMed

    Sučević, Jelena; Savić, Andrej M; Popović, Mirjana B; Styles, Suzy J; Ković, Vanja

    2015-01-01

    There is something about the sound of a pseudoword like takete that goes better with a spiky than a curvy shape (Köhler, 1929/1947). Yet despite decades of research into sound symbolism, the role of this effect on real words in the lexicons of natural languages remains controversial. We report one behavioural and one ERP study investigating whether sound symbolism is active during normal language processing for real words in a speaker's native language, in the same way as for novel word forms. The results indicate that sound-symbolic congruence has a number of influences on natural language processing: written forms presented in a congruent visual context generate more errors during lexical access, as well as a chain of differences in the ERP. These effects have a very early onset (40-80 ms, 100-160 ms, 280-320 ms) and are later overshadowed by familiar types of semantic processing, indicating that sound symbolism represents an early sensory co-activation effect. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Congruency sequence effect in cross-task context: evidence for dimension-specific modulation.

    PubMed

    Lee, Jaeyong; Cho, Yang Seok

    2013-11-01

    The congruency sequence effect refers to a reduced congruency effect after incongruent trials relative to congruent trials. This modulation is thought to be, at least in part, due to the control mechanisms resolving conflict. The present study examined the nature of these control mechanisms by having participants perform two different tasks in an alternating way. When participants performed horizontal and vertical Simon tasks in Experiment 1A, and horizontal and vertical spatial Stroop tasks in Experiment 1B, no congruency sequence effect was obtained between the task congruencies. When the Simon task and the spatial Stroop task were performed with different response sets in Experiment 2, again no congruency sequence effect was obtained. However, in Experiment 3, in which participants performed the horizontal Simon and spatial Stroop tasks with an identical response set, a significant congruency sequence effect was obtained between the task congruencies. In Experiment 4, no congruency sequence effect was obtained when participants performed two tasks having different task-irrelevant dimensions with an identical response set. The findings suggest inhibitory processing between the task-irrelevant dimension and the response mode after conflict. © 2013 Elsevier B.V. All rights reserved.

  10. Effects of aging on audio-visual speech integration.

    PubMed

    Huyse, Aurélie; Leybaert, Jacqueline; Berthommier, Frédéric

    2014-10-01

    This study investigated the impact of aging on audio-visual speech integration. A syllable identification task was presented in auditory-only, visual-only, and audio-visual congruent and incongruent conditions. Visual cues were either degraded or unmodified. Stimuli were embedded in stationary noise alternating with modulated noise. Fifteen young adults and 15 older adults participated in this study. Results showed that older adults had preserved lipreading abilities when the visual input was clear but not when it was degraded. The impact of aging on audio-visual integration also depended on the quality of the visual cues. In the visual clear condition, the audio-visual gain was similar in both groups and analyses in the framework of the fuzzy-logical model of perception confirmed that older adults did not differ from younger adults in their audio-visual integration abilities. In the visual reduction condition, the audio-visual gain was reduced in the older group, but only when the noise was stationary, suggesting that older participants could compensate for the loss of lipreading abilities by using the auditory information available in the valleys of the noise. The fuzzy-logical model of perception confirmed the significant impact of aging on audio-visual integration by showing an increased weight of audition in the older group.

  11. Patient and caregiver congruence: the importance of dyads in heart failure care.

    PubMed

    Retrum, Jessica H; Nowels, Carolyn T; Bekelman, David B

    2013-01-01

    Informal (family) caregivers are integrally involved in chronic heart failure (HF) care. Few studies have examined HF patients and their informal caregivers as a unit in a relationship, or a dyad. Dyad congruence, or consistency in perspective, is relevant to numerous aspects of living with HF and of HF care. Incongruence or lack of communication could impair disease management and advance care planning. The purpose of this qualitative study was to examine congruence and incongruence between HF patients and their informal (family) caregivers. Secondary analyses examined the relationship of congruence to emotional distress and whether dyad relationship characteristics (eg, parent-child vs spouse) were associated with congruence. Thirty-four interviews with HF patients and their current informal caregivers (N = 17 dyads) were conducted. Each dyad member was asked similar questions about managing HF symptoms, psychosocial care, and planning for the future. Interviews were transcribed and analyzed using the general inductive approach. Congruence, incongruence, and lack of communication between patients and caregivers were identified in areas such as managing illness, perceived care needs, perspectives about the future of HF, and end-of-life issues. Seven dyads were generally congruent, 4 were incongruent, and 6 demonstrated a combination of congruence and incongruence. Much of the tension and distress among dyads related to conflicting views about how emotions should be dealt with or expressed. Dyad relationship (parent-child vs spouse) was not clearly associated with congruence, although the relationship did appear to be related to perceived caregiving roles. Several areas of HF clinical and research relevance, including self-care, advance care planning, and communication, were affected by congruence. Further research is needed to define how congruence is related to other relationship characteristics, such as relationship quality, and how congruence can best be…

  12. Therapeutic bond judgments: Congruence and incongruence.

    PubMed

    Atzil-Slonim, Dana; Bar-Kalifa, Eran; Rafaeli, Eshkol; Lutz, Wolfgang; Rubel, Julian; Schiefele, Ann-Kathrin; Peri, Tuvia

    2015-08-01

    The present study had 2 aims: (a) to implement West and Kenny's (2011) Truth-and-Bias model to simultaneously assess the temporal congruence and directional discrepancy between clients' and therapists' ratings of the bond facet of the therapeutic alliance, as they cofluctuate from session to session; and (b) to examine whether symptom severity and a personality disorder (PD) diagnosis moderate congruence and/or discrepancy. Participants included 213 clients treated by 49 therapists. At pretreatment, clients were assessed for a PD diagnosis and completed symptom measures. Symptom severity was also assessed at the beginning of each session, using client self-reports. Both clients and therapists rated the therapeutic bond at the end of each session. Therapists and clients exhibited substantial temporal congruence in their session-by-session bond ratings, but therapists' ratings tended to be lower than their clients' across sessions. Additionally, therapeutic dyads whose session-by-session ratings were more congruent also tended to have a larger directional discrepancy (clients' ratings being higher). Pretreatment symptom severity and PD diagnosis did not moderate either temporal congruence or discrepancy at the dyad level; however, during sessions when clients were more symptomatic, therapist and client ratings were both farther apart and tracked each other less closely. Our findings are consistent with a "better safe than sorry" pattern, which suggests that therapists are motivated to take a vigilant approach that may lead both to underestimation and to attunement to fluctuations in the therapeutic bond. (c) 2015 APA, all rights reserved.

  13. Lip movements affect infants' audiovisual speech perception.

    PubMed

    Yeung, H Henny; Werker, Janet F

    2013-05-01

    Speech is robustly audiovisual from early in infancy. Here we show that audiovisual speech perception in 4.5-month-old infants is influenced by sensorimotor information related to the lip movements they make while chewing or sucking. Experiment 1 consisted of a classic audiovisual matching procedure, in which two simultaneously displayed talking faces (visual [i] and [u]) were presented with a synchronous vowel sound (audio /i/ or /u/). Infants' looking patterns were selectively biased away from the audiovisual matching face when the infants were producing lip movements similar to those needed to produce the heard vowel. Infants' looking patterns returned to those of a baseline condition (no lip movements, looking longer at the audiovisual matching face) when they were producing lip movements that did not match the heard vowel. Experiment 2 confirmed that these sensorimotor effects interacted with the heard vowel, as looking patterns differed when infants produced these same lip movements while seeing and hearing a talking face producing an unrelated vowel (audio /a/). These findings suggest that the development of speech perception and speech production may be mutually informative.

  14. Reduced efficiency of audiovisual integration for nonnative speech.

    PubMed

    Yi, Han-Gyol; Phelps, Jasmine E B; Smiljanic, Rajka; Chandrasekaran, Bharath

    2013-11-01

    The role of visual cues in native listeners' perception of speech produced by nonnative speakers has not been extensively studied. Native perception of English sentences produced by native English and Korean speakers in audio-only and audiovisual conditions was examined. Korean speakers were rated as more accented in audiovisual than in the audio-only condition. Visual cues enhanced word intelligibility for native English speech but less so for Korean-accented speech. Reduced intelligibility of Korean-accented audiovisual speech was associated with implicit visual biases, suggesting that listener-related factors partially influence the efficiency of audiovisual integration for nonnative speech perception.

  15. AUDIO-VISUAL INSTRUCTION, AN ADMINISTRATIVE HANDBOOK.

    ERIC Educational Resources Information Center

    Missouri State Dept. of Education, Jefferson City.

    This handbook was designed for use by school administrators in developing a total audiovisual (AV) program. Attention is given to the importance of audiovisual media to effective instruction, administrative personnel requirements for an AV program, budgeting for AV instruction, proper utilization of AV materials, selection of AV equipment and…

  16. Audiovisual Media and Libraries. Selected Readings.

    ERIC Educational Resources Information Center

    Prostano, Emanuel T.

    The readings in this collection for students of library science provide an overview of what has been the neglected half of library science: the audiovisual media. The volume begins with a section dealing with some philosophical considerations and an overview of technological considerations. Following sections cover traditional audiovisual media…

  17. Environmental Congruence, Group Importance, and Well-Being among Paratroopers.

    ERIC Educational Resources Information Center

    Meir, Elchanan I.; Segal-Halevi, Anat

    2001-01-01

    Israeli paratroopers (n=267) completed measures of group importance, role satisfaction, vocational interests, and somatic complaints. Group importance correlated with satisfaction and somatic complaints; congruence with environment did not. Congruence interacted with group importance to enhance satisfaction. (Contains 29 references.) (SK)

  18. Spatio-temporal Dynamics of Audiovisual Speech Processing

    PubMed Central

    Bernstein, Lynne E.; Auer, Edward T.; Wagner, Michael; Ponton, Curtis W.

    2007-01-01

    The cortical processing of auditory-alone, visual-alone, and audiovisual speech information is temporally and spatially distributed, and functional magnetic resonance imaging (fMRI) cannot adequately resolve its temporal dynamics. In order to investigate a hypothesized spatio-temporal organization for audiovisual speech processing circuits, event-related potentials (ERPs) were recorded using electroencephalography (EEG). Stimuli were congruent audiovisual /bα/, incongruent auditory /bα/ synchronized with visual /gα/, auditory-only /bα/, and visual-only /bα/ and /gα/. Current density reconstructions (CDRs) of the ERP data were computed across the latency interval of 50-250 milliseconds. The CDRs demonstrated complex spatio-temporal activation patterns that differed across stimulus conditions. The hypothesized circuit investigated here comprised initial integration of audiovisual speech by the middle superior temporal sulcus (STS), followed by recruitment of the intraparietal sulcus (IPS), followed by activation of Broca's area (Miller and d'Esposito, 2005). The importance of spatio-temporally sensitive measures in evaluating processing pathways was demonstrated. Results showed, strikingly, early (< 100 msec) and simultaneous activations in areas of the supramarginal and angular gyrus (SMG/AG), the IPS, the inferior frontal gyrus, and the dorsolateral prefrontal cortex. Also, emergent left hemisphere SMG/AG activation, not predicted on the basis of the unisensory stimulus conditions, was observed at approximately 160 to 220 msec. The STS was neither the earliest nor the most prominent activation site, although it is frequently considered the sine qua non of audiovisual speech integration. As discussed here, the relatively late activity of the SMG/AG solely under audiovisual conditions is a possible candidate audiovisual speech integration response. PMID:17920933

  19. Epistemological Belief Congruency in Mathematics between Vocational Technology Students and Their Instructors

    ERIC Educational Resources Information Center

    Schommer-Aikins, Marlene; Unruh, Susan; Morphew, Jason

    2015-01-01

    Three questions were addressed in this study. Is there evidence of epistemological beliefs congruency between students and their instructor? Do students' epistemological beliefs, students' epistemological congruence, or both predict mathematical anxiety? Do students' epistemological beliefs, students' epistemological congruence, or both predict…

  20. Cross-modal integration of polyphonic characters in Chinese audio-visual sentences: a MVPA study based on functional connectivity.

    PubMed

    Zhang, Zhengyi; Zhang, Gaoyan; Zhang, Yuanyuan; Liu, Hong; Xu, Junhai; Liu, Baolin

    2017-12-01

    This study aimed to investigate the functional connectivity in the brain during the cross-modal integration of polyphonic characters in Chinese audio-visual sentences. The visual sentences were all semantically reasonable, and the audible pronunciations of the polyphonic characters in the corresponding sentence contexts varied in four conditions. To measure the functional connectivity, correlation, coherence and phase synchronization index (PSI) were used, and then multivariate pattern analysis was performed to detect the consensus functional connectivity patterns. These analyses were confined to the time windows of three event-related potential components, P200, N400 and late positive shift (LPS), to investigate the dynamic changes of the connectivity patterns at different cognitive stages. We found that when differentiating the polyphonic characters with abnormal pronunciations from those with the appropriate ones in audio-visual sentences, significant classification results were obtained based on the coherence in the time window of the P200 component, the correlation in the time window of the N400 component, and the coherence and PSI in the time window of the LPS component. Moreover, the spatial distributions in these time windows were also different, with the recruitment of frontal sites in the time window of the P200 component, the frontal-central-parietal regions in the time window of the N400 component and the central-parietal sites in the time window of the LPS component. These findings demonstrate that the functional interaction mechanisms differ at different stages of audio-visual integration of polyphonic characters.
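    The three connectivity measures named in this abstract (correlation, coherence, and PSI) can be illustrated for a pair of single-trial signals. This is a minimal sketch, not the authors' pipeline; the sampling rate, window length, and toy signals are assumptions.

```python
import numpy as np
from scipy.signal import hilbert, coherence

def connectivity_measures(x, y, fs=250.0):
    """Three pairwise functional-connectivity measures for two equal-length
    signals (e.g., band-passed EEG segments within one ERP time window)."""
    # 1) Pearson correlation of the raw time courses
    corr = np.corrcoef(x, y)[0, 1]
    # 2) Mean magnitude-squared coherence across frequencies
    _, cxy = coherence(x, y, fs=fs, nperseg=min(len(x), 64))
    coh = cxy.mean()
    # 3) Phase synchronization index from Hilbert instantaneous phases:
    #    length of the mean resultant vector of the phase differences
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    psi = np.abs(np.mean(np.exp(1j * dphi)))
    return corr, coh, psi

# Toy check: a 10 Hz signal and a noisy phase-shifted copy should show high PSI
rng = np.random.default_rng(0)
t = np.arange(0, 1, 1 / 250)
x = np.sin(2 * np.pi * 10 * t)
y = np.sin(2 * np.pi * 10 * t + 0.3) + 0.1 * rng.standard_normal(t.size)
corr, coh, psi = connectivity_measures(x, y)
```

    A constant phase offset leaves the PSI near 1 even when amplitudes differ, which is why it complements plain correlation in such analyses.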

  1. The role of RT carry-over for congruence sequence effects in masked priming.

    PubMed

    Huber-Huber, Christoph; Ansorge, Ulrich

    2017-05-01

    The present study disentangles 2 sources of the congruence sequence effect with masked primes: congruence and response time of the previous trial (reaction time [RT] carry-over). Using arrows as primes and targets and a metacontrast masking procedure, we found congruence effects as well as congruence sequence effects. In addition, congruence sequence effects decreased when RT carry-over was accounted for in a mixed model analysis, suggesting that RT carry-over contributes to congruence sequence effects in masked priming. Crucially, effects of previous trial congruence were not cancelled out completely, indicating that RT carry-over and previous trial congruence are 2 sources feeding into the congruence sequence effect. A secondary task requiring response speed judgments demonstrated general awareness of response speed (Experiment 1), but removing this secondary task (Experiment 2) showed that RT carry-over effects were also present in single-task conditions. During the (dual-task) prime-awareness test parts of both experiments, however, RT carry-over failed to modulate congruence effects, suggesting that some task sets of the participants can prevent the effect. The basic RT carry-over effects are consistent with the conflict adaptation account, with the adaptation to the statistics of the environment (ASE) model, and possibly with the temporal learning explanation. Additionally considering the task-dependence of RT carry-over, the results are most compatible with the conflict adaptation account.

  2. Audiovisual Simultaneity Judgment and Rapid Recalibration throughout the Lifespan.

    PubMed

    Noel, Jean-Paul; De Niear, Matthew; Van der Burg, Erik; Wallace, Mark T

    2016-01-01

    Multisensory interactions are well established to convey an array of perceptual and behavioral benefits. One of the key features of multisensory interactions is the temporal structure of the stimuli combined. In an effort to better characterize how temporal factors influence multisensory interactions across the lifespan, we examined audiovisual simultaneity judgment and the degree of rapid recalibration to paired audiovisual stimuli (Flash-Beep and Speech) in a sample of 220 participants ranging from 7 to 86 years of age. Results demonstrate a surprisingly protracted developmental time-course for both audiovisual simultaneity judgment and rapid recalibration, with neither reaching maturity until well into adolescence. Interestingly, correlational analyses revealed that audiovisual simultaneity judgments (i.e., the size of the audiovisual temporal window of simultaneity) and rapid recalibration significantly co-varied as a function of age. Together, our results represent the most complete description of age-related changes in audiovisual simultaneity judgments to date, as well as being the first to describe changes in the degree of rapid recalibration as a function of age. We propose that the developmental time-course of rapid recalibration scaffolds the maturation of more durable audiovisual temporal representations.

  3. The Audiovisual Portfolio.

    ERIC Educational Resources Information Center

    Williams, Eugene

    1979-01-01

    Describes the development of an audiovisual portfolio, consisting of a student teaching notebook, slide narrative presentation, audiotapes, and a videotape-- valuable for prospective teachers in job interviews. (CMV)

  4. Host and parasite morphology influence congruence between host and parasite phylogenies.

    PubMed

    Sweet, Andrew D; Bush, Sarah E; Gustafsson, Daniel R; Allen, Julie M; DiBlasi, Emily; Skeen, Heather R; Weckstein, Jason D; Johnson, Kevin P

    2018-03-23

    Comparisons of host and parasite phylogenies often show varying degrees of phylogenetic congruence. However, few studies have rigorously explored the factors driving this variation. Multiple factors such as host or parasite morphology may govern the degree of phylogenetic congruence. An ideal analysis for understanding the factors correlated with congruence would focus on a diverse host-parasite system for increased variation and statistical power. In this study, we focused on the Brueelia-complex, a diverse and widespread group of feather lice that primarily parasitise songbirds. We generated a molecular phylogeny of the lice and compared this tree with a phylogeny of their avian hosts. We also tested for the contribution of each host-parasite association to the overall congruence. The two trees overall were significantly congruent, but the contribution of individual associations to this congruence varied. To understand this variation, we developed a novel approach to test whether host, parasite or biogeographic factors were statistically associated with patterns of congruence. Both host plumage dimorphism and parasite ecomorphology were associated with patterns of congruence, whereas host body size, other plumage traits and biogeography were not. Our results lay the framework for future studies to further elucidate how these factors influence the process of host-parasite coevolution. Copyright © 2018 Australian Society for Parasitology. Published by Elsevier Ltd. All rights reserved.

  5. 7 CFR 3015.200 - Acknowledgement of support on publications and audiovisuals.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... audiovisuals. 3015.200 Section 3015.200 Agriculture Regulations of the Department of Agriculture (Continued... Miscellaneous § 3015.200 Acknowledgement of support on publications and audiovisuals. (a) Definitions. Appendix A defines “audiovisual,” “production of an audiovisual,” and “publication.” (b) Publications...

  6. Audiovisual quality evaluation of low-bitrate video

    NASA Astrophysics Data System (ADS)

    Winkler, Stefan; Faller, Christof

    2005-03-01

    Audiovisual quality assessment is a relatively unexplored topic. We designed subjective experiments for audio, video, and audiovisual quality using content and encoding parameters representative of video for mobile applications. Our focus was the MPEG-4 AVC (a.k.a. H.264) and AAC coding standards. Our goals in this study were two-fold: to understand the interactions between audio and video in terms of perceived audiovisual quality, and to use the subjective data to evaluate the prediction performance of our non-reference video and audio quality metrics.
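    One common way to model the audio-video interaction this abstract refers to is to combine unimodal scores multiplicatively. The sketch below fits one such form, MOS_av ≈ a + b·MOS_a·MOS_v, by least squares; the scores and the specific model form are assumptions for illustration, not the authors' metric.

```python
import numpy as np

# Hypothetical subjective mean opinion scores (MOS, 1-5 scale) for
# audio-only, video-only, and combined audiovisual presentations.
mos_a  = np.array([4.2, 3.1, 2.0, 4.5, 2.8, 3.7])
mos_v  = np.array([3.9, 2.5, 3.3, 4.1, 1.9, 3.0])
mos_av = np.array([4.0, 2.6, 2.4, 4.3, 2.0, 3.2])

# Least-squares fit of MOS_av ~ a + b * MOS_a * MOS_v
X = np.column_stack([np.ones_like(mos_a), mos_a * mos_v])
(a, b), *_ = np.linalg.lstsq(X, mos_av, rcond=None)

pred = X @ np.array([a, b])
rmse = np.sqrt(np.mean((pred - mos_av) ** 2))
```

    The multiplicative term captures the intuition that a low score in either modality drags down the combined percept more than an additive model would predict.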

  7. Workplace Congruence and Occupational Outcomes among Social Service Workers.

    PubMed

    Graham, John R; Shier, Micheal L; Nicholas, David

    2016-06-01

    Workplace expectations reflect an important consideration in employee experience. A higher prevalence of workplace congruence between worker and employer expectations has been associated with higher levels of productivity and overall workplace satisfaction across multiple occupational groups. Little research has investigated the relationship between workplace congruence and occupational health outcomes among social service workers. This study sought to better understand the extent to which occupational congruence contributes to occupational outcomes by surveying unionised social service workers ( n = 674) employed with the Government of Alberta, Canada. Multiple regression analysis shows that greater congruence between workplace and worker expectations around workloads, workplace values and the quality of the work environment significantly: (i) decreases symptoms related to distress and secondary traumatic stress; (ii) decreases intentions to leave; and (iii) increases overall life satisfaction. The findings provide some evidence of areas within the workplace of large government run social welfare programmes that can be better aligned to worker expectations to improve occupational outcomes among social service workers.

  8. Workplace Congruence and Occupational Outcomes among Social Service Workers

    PubMed Central

    Graham, John R.; Shier, Micheal L.; Nicholas, David

    2016-01-01

    Workplace expectations reflect an important consideration in employee experience. A higher prevalence of workplace congruence between worker and employer expectations has been associated with higher levels of productivity and overall workplace satisfaction across multiple occupational groups. Little research has investigated the relationship between workplace congruence and occupational health outcomes among social service workers. This study sought to better understand the extent to which occupational congruence contributes to occupational outcomes by surveying unionised social service workers (n = 674) employed with the Government of Alberta, Canada. Multiple regression analysis shows that greater congruence between workplace and worker expectations around workloads, workplace values and the quality of the work environment significantly: (i) decreases symptoms related to distress and secondary traumatic stress; (ii) decreases intentions to leave; and (iii) increases overall life satisfaction. The findings provide some evidence of areas within the workplace of large government run social welfare programmes that can be better aligned to worker expectations to improve occupational outcomes among social service workers. PMID:27559216

  9. Interactions between mood and the structure of semantic memory: event-related potentials evidence

    PubMed Central

    Pinheiro, Ana P.; del Re, Elisabetta; Nestor, Paul G; McCarley, Robert W.; Gonçalves, Óscar F.

    2013-01-01

    Recent evidence suggests that affect acts as a modulator of cognitive processes and in particular that induced mood has an effect on the way semantic memory is used on-line. We used event-related potentials (ERPs) to examine affective modulation of semantic information processing under three different moods: neutral, positive and negative. Fifteen subjects read 324 pairs of sentences after a mood induction procedure with 30 pictures of neutral, 30 pictures of positive and 30 pictures of negative valence: 108 sentences were read in each mood induction condition. Sentences ended with three word types: expected words, within-category violations, and between-category violations. N400 amplitude was measured to the three word types under each mood induction condition. Under neutral mood, a congruency effect (more negative N400 amplitude for unexpected relative to expected endings) and a category effect (more negative N400 amplitude for between- than for within-category violations) were observed. Also, results showed differences in N400 amplitude for both within- and between-category violations as a function of mood: while positive mood tended to facilitate the integration of unexpected but related items, negative mood made their integration as difficult as that of unexpected and unrelated items. These findings suggest the differential impact of mood on access to long-term semantic memory during sentence comprehension. PMID:22434931

  10. Interactions between mood and the structure of semantic memory: event-related potentials evidence.

    PubMed

    Pinheiro, Ana P; del Re, Elisabetta; Nestor, Paul G; McCarley, Robert W; Gonçalves, Óscar F; Niznikiewicz, Margaret

    2013-06-01

    Recent evidence suggests that affect acts as a modulator of cognitive processes and in particular that induced mood has an effect on the way semantic memory is used on-line. We used event-related potentials (ERPs) to examine affective modulation of semantic information processing under three different moods: neutral, positive and negative. Fifteen subjects read 324 pairs of sentences after a mood induction procedure with 30 pictures of neutral, 30 pictures of positive and 30 pictures of negative valence: 108 sentences were read in each mood induction condition. Sentences ended with three word types: expected words, within-category violations, and between-category violations. N400 amplitude was measured to the three word types under each mood induction condition. Under neutral mood, a congruency effect (more negative N400 amplitude for unexpected relative to expected endings) and a category effect (more negative N400 amplitude for between- than for within-category violations) were observed. Also, results showed differences in N400 amplitude for both within- and between-category violations as a function of mood: while positive mood tended to facilitate the integration of unexpected but related items, negative mood made their integration as difficult as that of unexpected and unrelated items. These findings suggest the differential impact of mood on access to long-term semantic memory during sentence comprehension.

  11. Influences of selective adaptation on perception of audiovisual speech

    PubMed Central

    Dias, James W.; Cook, Theresa C.; Rosenblum, Lawrence D.

    2016-01-01

    Research suggests that selective adaptation in speech is a low-level process dependent on sensory-specific information shared between the adaptor and test-stimuli. However, previous research has only examined how adaptors shift perception of unimodal test stimuli, either auditory or visual. In the current series of experiments, we investigated whether adaptation to cross-sensory phonetic information can influence perception of integrated audio-visual phonetic information. We examined how selective adaptation to audio and visual adaptors shift perception of speech along an audiovisual test continuum. This test-continuum consisted of nine audio-/ba/-visual-/va/ stimuli, ranging in visual clarity of the mouth. When the mouth was clearly visible, perceivers “heard” the audio-visual stimulus as an integrated “va” percept 93.7% of the time (e.g., McGurk & MacDonald, 1976). As visibility of the mouth became less clear across the nine-item continuum, the audio-visual “va” percept weakened, resulting in a continuum ranging in audio-visual percepts from /va/ to /ba/. Perception of the test-stimuli was tested before and after adaptation. Changes in audiovisual speech perception were observed following adaptation to visual-/va/ and audiovisual-/va/, but not following adaptation to auditory-/va/, auditory-/ba/, or visual-/ba/. Adaptation modulates perception of integrated audio-visual speech by modulating the processing of sensory-specific information. The results suggest that auditory and visual speech information are not completely integrated at the level of selective adaptation. PMID:27041781

  12. Effects of Worker Classification, Crystallization, and Job Autonomy on Congruence-Satisfaction Relationships.

    ERIC Educational Resources Information Center

    Obermesik, John W.; Beehr, Terry A.

    A majority of the congruence-satisfaction literature has used interest measures based on Holland's theory, although the measures' accuracy in predicting job satisfaction is questionable. Divergent findings among studies on occupational congruence-job satisfaction may be due to ineffective measures of congruence and job satisfaction and lack of…

  13. Audiovisual Review

    ERIC Educational Resources Information Center

    Physiology Teacher, 1976

    1976-01-01

    Lists and reviews recent audiovisual materials in areas of medical, dental, nursing and allied health, and veterinary medicine; undergraduate, and high school studies. Each is classified as to level, type of instruction, usefulness, and source of availability. Topics include respiration, renal physiology, muscle mechanics, anatomy, evolution,…

  14. The Semantic Distance Task: Quantifying Semantic Distance with Semantic Network Path Length

    ERIC Educational Resources Information Center

    Kenett, Yoed N.; Levi, Effi; Anaki, David; Faust, Miriam

    2017-01-01

    Semantic distance is a determining factor in cognitive processes, such as semantic priming, operating upon semantic memory. The main computational approach to compute semantic distance is through latent semantic analysis (LSA). However, objections have been raised against this approach, mainly in its failure at predicting semantic priming. We…
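    The path-length notion of semantic distance named in this title can be illustrated on a toy word network: the distance between two concepts is the number of edges on the shortest path connecting them, found here by breadth-first search. The network below is invented for illustration.

```python
from collections import deque

# Tiny undirected semantic network: hypothetical edges between associated words.
edges = [("cat", "dog"), ("dog", "bone"), ("cat", "milk"),
         ("milk", "cow"), ("cow", "grass"), ("bone", "skeleton")]
graph = {}
for u, v in edges:
    graph.setdefault(u, set()).add(v)
    graph.setdefault(v, set()).add(u)

def semantic_distance(graph, source, target):
    """Shortest path length (number of edges) between two nodes via BFS;
    returns None if the words are not connected."""
    if source == target:
        return 0
    seen, queue = {source}, deque([(source, 0)])
    while queue:
        node, d = queue.popleft()
        for nxt in graph.get(node, ()):
            if nxt == target:
                return d + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None
```

    On a real association network the same search runs over thousands of nodes, and shorter paths correspond to semantically closer word pairs.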

  15. The effect of congruence in patient and therapist alliance on patient's symptomatic levels.

    PubMed

    Zilcha-Mano, Sigal; Snyder, John; Silberschatz, George

    2017-05-01

    The ability of alliance to predict outcome has been widely demonstrated, but less is known about the effect of the level of congruence between patient and therapist alliance ratings on outcome. In the current study we examined whether the degree of congruence between patient and therapist alliance ratings can predict symptomatic levels 1 month later in treatment. The sample consisted of 127 patient-therapist dyads. Patients and therapists reported on their alliance levels, and patients reported their symptomatic levels 1 month later. Polynomial regression and response surface analysis were used to examine congruence. Findings suggest that when the congruence level of patient and therapist alliance ratings was not taken into account, only the therapist's alliance served as a significant predictor of symptomatic levels. But when the degree of congruence between patient and therapist alliance ratings was considered, the degree of congruence was a significant predictor of symptomatic levels 1 month later in treatment. Findings support the importance of the level of congruence between patient and therapist alliance ratings in predicting patient's symptomatic levels.
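    The polynomial regression with response surface analysis mentioned in this abstract fits a full second-order surface in the two raters' scores and then inspects its shape along the line of incongruence. A minimal sketch on simulated data follows; the scales, sample values, and data-generating model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 127
patient   = rng.uniform(1, 7, n)  # patient alliance ratings (hypothetical scale)
therapist = rng.uniform(1, 7, n)  # therapist alliance ratings

# Simulated outcome: symptoms rise with patient-therapist discrepancy
symptoms = 0.5 * (patient - therapist) ** 2 + rng.normal(0, 0.3, n)

# Full second-order polynomial used in response surface analysis:
# outcome = b0 + b1*X + b2*Y + b3*X^2 + b4*X*Y + b5*Y^2
X = np.column_stack([np.ones(n), patient, therapist,
                     patient ** 2, patient * therapist, therapist ** 2])
beta, *_ = np.linalg.lstsq(X, symptoms, rcond=None)

# Curvature along the line of incongruence (X = -Y): a3 = b3 - b4 + b5.
# Positive curvature means worse outcomes as ratings diverge in either direction.
a3 = beta[3] - beta[4] + beta[5]
```

    In response surface terms, a significant a3 is the signature of a congruence effect: the surface bends upward as patient and therapist ratings move apart.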

  16. Cross-taxon congruence and environmental conditions.

    PubMed

    Toranza, Carolina; Arim, Matías

    2010-07-16

    Diversity patterns of different taxa typically covary in space, a phenomenon called cross-taxon congruence. This pattern has been explained by the effect of one taxon's diversity on another taxon's diversity, shared biogeographic histories of different taxa, and/or common responses to environmental conditions. A meta-analysis of the association between environment and diversity patterns found that in 83 out of 85 studies, more than 60% of the spatial variability in species richness was related to variables representing energy, water or their interaction. The role of the environment in determining taxa diversity patterns leads us to hypothesize that this would explain the observed cross-taxon congruence. However, recent analyses reported the persistence of cross-taxon congruence when the environmental effect was statistically removed. Here we evaluate this hypothesis, analyzing the cross-taxon congruence between birds and mammals in the Brazilian Cerrado, and assess the environmental role in the spatial covariation of diversity patterns. We found a positive association between avian and mammal richness and a positive latitudinal trend for both groups in the Brazilian Cerrado. Regression analyses indicated an effect of latitude, PET, and mean temperature on both biological groups. In addition, we show that NDVI was only associated with avian diversity, while annual relative humidity was only correlated with mammal diversity. We determined the environmental effects on diversity in a path analysis that accounted for 73% and 76% of the spatial variation in avian and mammal richness. However, an association between avian and mammal diversity remains significant. Indeed, the importance of this link between bird and mammal diversity was also supported by a significant association between bird and mammal spatial autoregressive model residuals. Our study corroborates the main role of environmental conditions in diversity patterns, but suggests that other important mechanisms, which…
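    Statistically removing an environmental effect, as in the analyses this abstract describes, is often done by correlating the residuals of each richness variable after regressing it on the environmental predictor. A minimal sketch on simulated data follows; the data-generating model and coefficients are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
env = rng.normal(size=n)  # shared environmental driver (e.g., PET), standardized
birds   = 0.8 * env + rng.normal(0, 0.5, n)                # avian richness (simulated)
mammals = 0.8 * env + 0.4 * birds + rng.normal(0, 0.5, n)  # mammal richness, with a direct link

def residuals(y, x):
    """Residuals of y after ordinary least-squares regression on x."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Correlation of richness residuals once the environmental effect is removed:
# a nonzero value mirrors the congruence that persists beyond shared environment.
r_partial = np.corrcoef(residuals(birds, env), residuals(mammals, env))[0, 1]
```

    If the shared environment fully explained the covariation, r_partial would be near zero; the direct bird-to-mammal term in the simulation keeps it positive.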

  17. AUDIOVISUAL HANDBOOK.

    ERIC Educational Resources Information Center

    JOHNSON, HARRY A.

    UNDERGRADUATE AND GRADUATE ACADEMIC OFFERINGS IN THE DEPARTMENT OF AUDIOVISUAL EDUCATION ARE LISTED, AND THE INSERVICE FACULTY TRAINING PROGRAM AND THE EXTENSION AND CONSULTANT SERVICES ARE DESCRIBED. GENERAL SERVICES OFFERED BY THE CENTER ARE A COLLEGE FILM SHOWING SERVICE, A CHILDREN'S THEATRE, A PRODUCTION WORKSHOP, AN EMBOSOGRAF PROCESS,…

  18. Speech Cues Contribute to Audiovisual Spatial Integration

    PubMed Central

    Bishop, Christopher W.; Miller, Lee M.

    2011-01-01

    Speech is the most important form of human communication but ambient sounds and competing talkers often degrade its acoustics. Fortunately the brain can use visual information, especially its highly precise spatial information, to improve speech comprehension in noisy environments. Previous studies have demonstrated that audiovisual integration depends strongly on spatiotemporal factors. However, some integrative phenomena such as McGurk interference persist even with gross spatial disparities, suggesting that spatial alignment is not necessary for robust integration of audiovisual place-of-articulation cues. It is therefore unclear how speech-cues interact with audiovisual spatial integration mechanisms. Here, we combine two well established psychophysical phenomena, the McGurk effect and the ventriloquist's illusion, to explore this dependency. Our results demonstrate that conflicting spatial cues may not interfere with audiovisual integration of speech, but conflicting speech-cues can impede integration in space. This suggests a direct but asymmetrical influence between ventral ‘what’ and dorsal ‘where’ pathways. PMID:21909378

  19. Audiovisual Interval Size Estimation Is Associated with Early Musical Training.

    PubMed

    Abel, Mary Kathryn; Li, H Charles; Russo, Frank A; Schlaug, Gottfried; Loui, Psyche

    2016-01-01

    Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants' ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants' ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception.

  20. Audiovisual Interval Size Estimation Is Associated with Early Musical Training

    PubMed Central

    Abel, Mary Kathryn; Li, H. Charles; Russo, Frank A.; Schlaug, Gottfried; Loui, Psyche

    2016-01-01

    Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants’ ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants’ ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception. PMID:27760134

  1. Reduced audiovisual recalibration in the elderly.

    PubMed

    Chan, Yu Man; Pianta, Michael J; McKendrick, Allison M

    2014-01-01

    Perceived synchrony of visual and auditory signals can be altered by exposure to a stream of temporally offset stimulus pairs. Previous literature suggests that adapting to audiovisual temporal offsets is an important recalibration to correctly combine audiovisual stimuli into a single percept across a range of source distances. Healthy aging results in synchrony perception over a wider range of temporally offset visual and auditory signals, independent of age-related unisensory declines in vision and hearing sensitivities. However, the impact of aging on audiovisual recalibration is unknown. Audiovisual synchrony perception for sound-lead and sound-lag stimuli was measured for 15 younger (22-32 years old) and 15 older (64-74 years old) healthy adults using a method-of-constant-stimuli, after adapting to a stream of visual and auditory pairs. The adaptation pairs were either synchronous or asynchronous (sound-lag of 230 ms). The adaptation effect for each observer was computed as the shift in the mean of the individually fitted psychometric functions after adapting to asynchrony. Post-adaptation to synchrony, the younger and older observers had average window widths (±standard deviation) of 326 (±80) and 448 (±105) ms, respectively. There was no adaptation effect for sound-lead pairs. Both the younger and older observers, however, perceived more sound-lag pairs as synchronous. The magnitude of the adaptation effect in the older observers was not correlated with how often they saw the adapting sound-lag stimuli as asynchronous. Our finding demonstrates that audiovisual synchrony perception adapts less with advancing age.
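    The adaptation effect described above, the shift in the mean of an individually fitted psychometric function, can be sketched by fitting a Gaussian to the proportion of "synchronous" judgments across SOAs before and after adaptation. The response proportions below are hypothetical, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(soa, mu, sigma):
    """Proportion of 'synchronous' judgments as a function of audiovisual SOA."""
    return np.exp(-(soa - mu) ** 2 / (2 * sigma ** 2))

soa = np.array([-300, -200, -100, 0, 100, 200, 300], float)  # ms; sound-lag positive

# Hypothetical pre/post-adaptation proportions: after adapting to a 230 ms
# sound-lag, the curve's centre drifts toward the adapted lag.
pre  = np.array([0.05, 0.30, 0.80, 0.95, 0.75, 0.25, 0.05])
post = np.array([0.03, 0.15, 0.60, 0.90, 0.90, 0.45, 0.10])

(mu_pre, _), _  = curve_fit(gauss, soa, pre,  p0=[0, 150])
(mu_post, _), _ = curve_fit(gauss, soa, post, p0=[0, 150])
adaptation_effect = mu_post - mu_pre  # positive shift toward sound-lag, in ms
```

    Computed per observer, this shift is the quantity the study compares between younger and older groups; a smaller shift with age is the reduced recalibration reported here.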

  2. Reduced audiovisual recalibration in the elderly

    PubMed Central

    Chan, Yu Man; Pianta, Michael J.; McKendrick, Allison M.

    2014-01-01

    Perceived synchrony of visual and auditory signals can be altered by exposure to a stream of temporally offset stimulus pairs. Previous literature suggests that adapting to audiovisual temporal offsets is an important recalibration to correctly combine audiovisual stimuli into a single percept across a range of source distances. Healthy aging results in synchrony perception over a wider range of temporally offset visual and auditory signals, independent of age-related unisensory declines in vision and hearing sensitivities. However, the impact of aging on audiovisual recalibration is unknown. Audiovisual synchrony perception for sound-lead and sound-lag stimuli was measured for 15 younger (22–32 years old) and 15 older (64–74 years old) healthy adults using a method-of-constant-stimuli, after adapting to a stream of visual and auditory pairs. The adaptation pairs were either synchronous or asynchronous (sound-lag of 230 ms). The adaptation effect for each observer was computed as the shift in the mean of the individually fitted psychometric functions after adapting to asynchrony. Post-adaptation to synchrony, the younger and older observers had average window widths (±standard deviation) of 326 (±80) and 448 (±105) ms, respectively. There was no adaptation effect for sound-lead pairs. Both the younger and older observers, however, perceived more sound-lag pairs as synchronous. The magnitude of the adaptation effect in the older observers was not correlated with how often they saw the adapting sound-lag stimuli as asynchronous. Our finding demonstrates that audiovisual synchrony perception adapts less with advancing age. PMID:25221508

  3. Health center streamlines use of audiovisual aids.

    PubMed

    Brantz, M H

    1978-09-01

    Audiovisual aids and programs can be used to help provide effective and efficient in-hospital continuing education programs. The cost of audiovisual equipment can be minimized and use can be maximized by implementing standardization policies that identify and simplify the number and types of equipment to be purchased.

  4. In Focus: Alcohol and Alcoholism Audiovisual Guide.

    ERIC Educational Resources Information Center

    National Clearinghouse for Alcohol Information (DHHS), Rockville, MD.

    This guide reviews audiovisual materials currently available on alcohol abuse and alcoholism. An alphabetical index of audiovisual materials is followed by synopses of the indexed materials. Information about the intended audience, price, rental fee, and distributor is included. This guide also provides a list of publications related to media…

  5. Audiovisual Materials.

    ERIC Educational Resources Information Center

    American Council on Education, Washington, DC. HEATH/Closer Look Resource Center.

    The fact sheet presents a suggested evaluation framework for use in previewing audiovisual materials, a list of selected resources, and an annotated list of films which were shown at the AHSSPPE '83 Media Fair as part of the national conference of the Association on Handicapped Student Service Programs in Postsecondary Education. Evaluation…

  6. Multistage audiovisual integration of speech: dissociating identification and detection.

    PubMed

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias S

    2011-02-01

    Speech perception integrates auditory and visual information. This is evidenced by the McGurk illusion where seeing the talking face influences the auditory phonetic percept and by the audiovisual detection advantage where seeing the talking face influences the detectability of the acoustic speech signal. Here, we show that identification of phonetic content and detection can be dissociated as speech-specific and non-specific audiovisual integration effects. To this end, we employed synthetically modified stimuli, sine wave speech (SWS), which is an impoverished speech signal that only observers informed of its speech-like nature recognize as speech. While the McGurk illusion only occurred for informed observers, the audiovisual detection advantage occurred for naïve observers as well. This finding supports a multistage account of audiovisual integration of speech in which the many attributes of the audiovisual speech signal are integrated by separate integration processes.

  7. The semantic distance task: Quantifying semantic distance with semantic network path length.

    PubMed

    Kenett, Yoed N; Levi, Effi; Anaki, David; Faust, Miriam

    2017-09-01

    Semantic distance is a determining factor in cognitive processes, such as semantic priming, operating upon semantic memory. The main computational approach to computing semantic distance is latent semantic analysis (LSA). However, objections have been raised against this approach, mainly its failure to predict semantic priming. We propose a novel approach to computing semantic distance, based on network science methodology. Path length in a semantic network represents the number of steps needed to traverse from one word in the network to another. We examine whether path length can be used as a measure of semantic distance by investigating how path length affects performance in a semantic relatedness judgment task and in recall from memory. Our results show a differential effect on performance: Up to 4 steps separating word-pairs, participants exhibit an increase in reaction time (RT) and a decrease in the percentage of word-pairs judged as related. From 4 steps onward, participants exhibit a significant decrease in RT and the word-pairs are predominantly judged as unrelated. Furthermore, we show that as path length between word-pairs increases, success in free- and cued-recall decreases. Finally, we demonstrate how our measure outperforms computational methods for measuring semantic distance (LSA and positive pointwise mutual information) in predicting participants' RTs and subjective judgments of semantic strength. Thus, we provide a computational alternative for computing semantic distance. Furthermore, this approach addresses key issues in cognitive theory, namely the breadth of the spreading activation process and the effect of semantic distance on memory retrieval. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
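    The path-length measure described above can be sketched directly: the semantic distance between two words is the number of edges on the shortest path connecting them in the network. A minimal breadth-first-search implementation over a small hypothetical association network (the toy edges below are not the authors' network):

```python
# Sketch of semantic distance as shortest-path length in an undirected
# semantic network, computed by breadth-first search.
from collections import deque

def path_length(graph, start, goal):
    """Number of steps from `start` to `goal`, or None if unconnected."""
    if start == goal:
        return 0
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        word, dist = frontier.popleft()
        for neighbour in graph.get(word, ()):
            if neighbour == goal:
                return dist + 1
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append((neighbour, dist + 1))
    return None

semantic_net = {          # hypothetical free-association edges
    "cat":  ["dog", "fur"],
    "dog":  ["cat", "bone"],
    "fur":  ["cat", "coat"],
    "bone": ["dog"],
    "coat": ["fur", "winter"],
    "winter": ["coat"],
}
d_near = path_length(semantic_net, "cat", "dog")     # directly linked
d_far  = path_length(semantic_net, "cat", "winter")  # several steps apart
```

    On the study's account, word pairs like `d_near` would be judged related quickly, while pairs several steps apart would show longer RTs and lower relatedness judgments.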

  8. Emotional congruence with children and sexual offending against children: a meta-analytic review.

    PubMed

    McPhail, Ian V; Hermann, Chantal A; Nunes, Kevin L

    2013-08-01

    Emotional congruence with children is an exaggerated affective and cognitive affiliation with children that is posited to be involved in the initiation and maintenance of sexual offending against children. The current meta-analysis examined the relationship between emotional congruence with children and sexual offending against children, sexual recidivism, and change following sexual offender treatment. A systematic literature review of online academic databases, conference proceedings, governmental agency websites, and article, book chapter, and book reference lists was performed. Thirty studies on emotional congruence with children in sexual offenders against children (SOC) were included in a random-effects meta-analysis. Extrafamilial SOC, especially those with male victims, evidenced higher emotional congruence with children than most non-SOC comparison groups and intrafamilial SOC. In contrast, intrafamilial SOC evidenced less emotional congruence with children than many of the non-SOC comparison groups. Higher levels of emotional congruence with children were associated with moderately higher rates of sexual recidivism. The association between emotional congruence with children and sexual recidivism was significantly stronger in extrafamilial SOC samples (d = 0.58, 95% CI [0.31, 0.85]) compared with intrafamilial SOC samples (d = -0.15, 95% CI [-0.58, 0.27]). Similarly, emotional congruence with children showed a significant reduction from pre- to posttreatment for extrafamilial SOC (d = 0.41, 95% CI [0.33, 0.85]), but not for intrafamilial SOC (d = 0.06, 95% CI [-0.10, 0.22]). Emotional congruence with children is a characteristic of extrafamilial SOC, is moderately predictive of sexual recidivism, and is potentially amenable to change through treatment efforts. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  9. Catalog of Audiovisual Materials Related to Rehabilitation.

    ERIC Educational Resources Information Center

    Mann, Joe, Ed.; Henderson, Jim, Ed.

    An annotated listing of a variety of audiovisual formats on content related to the social-rehabilitation process is provided. The materials in the listing were selected from a collection of over 200 audiovisual catalogs. The major portion of the materials has not been screened. The materials are classified alphabetically by the following subject…

  10. Attention Modulation by Proportion Congruency: The Asymmetrical List Shifting Effect

    ERIC Educational Resources Information Center

    Abrahamse, Elger L.; Duthoo, Wout; Notebaert, Wim; Risko, Evan F.

    2013-01-01

    Proportion congruency effects represent hallmark phenomena in current theorizing about cognitive control. This is based on the notion that proportion congruency determines the relative levels of attention to relevant and irrelevant information in conflict tasks. However, little empirical evidence exists that uniquely supports such an attention…

  11. Methodological review: measured and reported congruence between preferred and actual place of death.

    PubMed

    Bell, C L; Somogyi-Zalud, E; Masaki, K H

    2009-09-01

    Congruence between preferred and actual place of death is an important palliative care outcome reported in the literature. We examined methods of measuring and reporting congruence to highlight variations impairing cross-study comparisons. Medline, PsycINFO, CINAHL, and Web of Science were systematically searched for clinical research studies examining patient preference and congruence as an outcome. Data were extracted into a matrix, including purpose, reported congruence, and method for eliciting preference. Studies were graded for quality. Using tables of preferred versus actual places of death, an overall congruence (total met preferences out of total preferences) and a kappa statistic of agreement were determined for each study. Twelve studies were identified. Percentage of congruence was reported using four different definitions. Ten studies provided a table or partial table of preferred versus actual deaths for each place. Three studies provided kappa statistics. No study achieved better than moderate agreement when analysed using kappa statistics. A study which elicited ideal preference reported the lowest agreement, while longitudinal studies reporting final preferred place of death yielded the highest agreement (moderate agreement). Two other studies of select populations also yielded moderate agreement. There is marked variation in methods of eliciting and reporting congruence, even among studies focused on congruence as an outcome. Cross-study comparison would be enhanced by the use of similar questions to elicit preference, tables of preferred versus actual places of death, and kappa statistics of agreement.
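    The two agreement statistics used in this review can be illustrated on a small hypothetical table of preferred versus actual place of death: overall congruence is the proportion of met preferences (the table's diagonal), and Cohen's kappa corrects that proportion for chance agreement:

```python
# Sketch of the review's two agreement measures, computed from a
# hypothetical preferred-vs-actual contingency table. Rows are
# preferences, columns are actual places, in the same order.
def congruence_and_kappa(table):
    n = sum(sum(row) for row in table)
    observed = sum(table[i][i] for i in range(len(table))) / n
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    # chance agreement expected from the marginal totals
    expected = sum(r * c for r, c in zip(row_tot, col_tot)) / (n * n)
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa

# places: home, hospital, hospice (hypothetical counts)
table = [
    [40, 15, 5],   # preferred home
    [5,  20, 5],   # preferred hospital
    [2,  3,  5],   # preferred hospice
]
overall, kappa = congruence_and_kappa(table)
```

    With these counts, overall congruence is 65% while kappa is only about 0.41 (moderate agreement), showing why the review recommends reporting kappa alongside raw congruence: the raw percentage is inflated by chance matches with the most common places of death.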

  12. Global meta-analysis reveals low consistency of biodiversity congruence relationships.

    PubMed

    Westgate, Martin J; Barton, Philip S; Lane, Peter W; Lindenmayer, David B

    2014-05-21

    Knowledge of the number and distribution of species is fundamental to biodiversity conservation efforts, but this information is lacking for the majority of species on earth. Consequently, subsets of taxa are often used as proxies for biodiversity; but this assumes that different taxa display congruent distribution patterns. Here we use a global meta-analysis to show that studies of cross-taxon congruence rarely give consistent results. Instead, species richness congruence is highest at extreme spatial scales and close to the equator, while congruence in species composition is highest at large extents and grain sizes. Studies display highest variance in cross-taxon congruence when conducted in areas with dissimilar areal extents (for species richness) or latitudes (for species composition). These results undermine the assumption that a subset of taxa can be representative of biodiversity. Therefore, researchers whose goal is to prioritize locations or actions for conservation should use data from a range of taxa.

  13. Clock synchronization by accelerated observers - Metric construction for arbitrary congruences of world lines

    NASA Technical Reports Server (NTRS)

    Henriksen, R. N.; Nelson, L. A.

    1985-01-01

    Clock synchronization in an arbitrarily accelerated observer congruence is considered. A general solution is obtained that maintains the isotropy and coordinate independence of the one-way speed of light. Attention is also given to various particular cases, including the rotating disk (ring) congruence. An explicit, congruence-based spacetime metric is constructed according to Einstein's clock synchronization procedure, and the equations for the geodesics of the spacetime are derived using the Hamilton-Jacobi method. The application of interferometric techniques (absolute phase radio interferometry, VLBI) to the detection of the 'global Sagnac effect' is also discussed.

  14. Absent Audiovisual Integration Elicited by Peripheral Stimuli in Parkinson's Disease.

    PubMed

    Ren, Yanna; Suzuki, Keisuke; Yang, Weiping; Ren, Yanling; Wu, Fengxia; Yang, Jiajia; Takahashi, Satoshi; Ejima, Yoshimichi; Wu, Jinglong; Hirata, Koichi

    2018-01-01

    The basal ganglia, which have been shown to be a significant multisensory hub, are disordered in Parkinson's disease (PD). This study investigated the audiovisual integration of peripheral stimuli in PD patients with/without sleep disturbances. Thirty-six age-matched normal controls (NC) and 30 PD patients were recruited for an auditory/visual discrimination experiment. The mean response times for each participant were analyzed using repeated measures ANOVA and the race model. The results showed that the response to all stimuli was significantly delayed for PD compared to NC (all p < 0.01). The response to audiovisual stimuli was significantly faster than that to unimodal stimuli in both NC and PD (p < 0.001). Additionally, audiovisual integration was absent in PD; however, it did occur in NC. Further analysis showed that there was no significant audiovisual integration in PD with/without cognitive impairment or in PD with/without sleep disturbances. Furthermore, audiovisual facilitation was not associated with Hoehn and Yahr stage, disease duration, or the presence of sleep disturbances (all p > 0.05). The current results showed that audiovisual multisensory integration for peripheral stimuli is absent in PD regardless of sleep disturbances, and further suggested that abnormal audiovisual integration might be a potential early manifestation of PD.
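    The race-model analysis mentioned above tests whether audiovisual response times are faster than any "race" between independent unimodal processes could produce: integration is inferred when the audiovisual response-time CDF exceeds Miller's bound, the sum of the two unimodal CDFs, at some latency. A minimal sketch with hypothetical response times:

```python
# Sketch of the race-model (Miller) inequality test. Response times
# below are hypothetical, not data from the study.
def ecdf(rts, t):
    """Empirical CDF of response times `rts` evaluated at time t (ms)."""
    return sum(rt <= t for rt in rts) / len(rts)

def violates_race_model(rt_av, rt_a, rt_v, times):
    """True if the audiovisual CDF exceeds Miller's bound
    CDF_A(t) + CDF_V(t) (capped at 1) at any probed latency."""
    return any(
        ecdf(rt_av, t) > min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
        for t in times
    )

rt_a  = [320, 350, 380, 410, 450]   # auditory-only RTs (ms)
rt_v  = [340, 370, 400, 430, 470]   # visual-only RTs (ms)
rt_av = [250, 270, 290, 310, 330]   # audiovisual RTs (ms)

violated = violates_race_model(rt_av, rt_a, rt_v, range(200, 500, 10))
```

    In the study's terms, a violation (as with these hypothetical NC-like data) indicates genuine multisensory integration; the PD patients' audiovisual speed-up never exceeded the race-model bound.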

  15. Absent Audiovisual Integration Elicited by Peripheral Stimuli in Parkinson's Disease

    PubMed Central

    Yang, Weiping; Ren, Yanling; Yang, Jiajia; Takahashi, Satoshi; Ejima, Yoshimichi

    2018-01-01

    The basal ganglia, which have been shown to be a significant multisensory hub, are disordered in Parkinson's disease (PD). This study investigated the audiovisual integration of peripheral stimuli in PD patients with/without sleep disturbances. Thirty-six age-matched normal controls (NC) and 30 PD patients were recruited for an auditory/visual discrimination experiment. The mean response times for each participant were analyzed using repeated measures ANOVA and the race model. The results showed that the response to all stimuli was significantly delayed for PD compared to NC (all p < 0.01). The response to audiovisual stimuli was significantly faster than that to unimodal stimuli in both NC and PD (p < 0.001). Additionally, audiovisual integration was absent in PD; however, it did occur in NC. Further analysis showed that there was no significant audiovisual integration in PD with/without cognitive impairment or in PD with/without sleep disturbances. Furthermore, audiovisual facilitation was not associated with Hoehn and Yahr stage, disease duration, or the presence of sleep disturbances (all p > 0.05). The current results showed that audiovisual multisensory integration for peripheral stimuli is absent in PD regardless of sleep disturbances, and further suggested that abnormal audiovisual integration might be a potential early manifestation of PD. PMID:29850014

  16. Audiovisual Temporal Recalibration for Speech in Synchrony Perception and Speech Identification

    NASA Astrophysics Data System (ADS)

    Asakawa, Kaori; Tanaka, Akihiro; Imai, Hisato

    We investigated whether audiovisual synchrony perception for speech can change after observation of an audiovisual temporal mismatch. Previous studies have revealed that audiovisual synchrony perception is recalibrated after exposure to a constant timing difference between auditory and visual signals in non-speech. In the present study, we examined whether this audiovisual temporal recalibration occurs at the perceptual level even for speech (monosyllables). In Experiment 1, participants performed an audiovisual simultaneity judgment task (i.e., a direct measurement of audiovisual synchrony perception) on speech signals after observing speech stimuli that had a constant audiovisual lag. The results showed that the “simultaneous” responses (i.e., the proportion of responses for which participants judged the auditory and visual stimuli to be synchronous) depended at least partly on the exposure lag. In Experiment 2, using stimuli identical to those of Experiment 1, we adopted the McGurk identification task (i.e., an indirect measurement of audiovisual synchrony perception) to exclude the possibility that this modulation of synchrony perception was solely attributable to response strategy. The characteristics of the McGurk effect reported by participants depended on the exposure lag. Thus, audiovisual synchrony perception for speech can be modulated following exposure to a constant lag in both direct and indirect measurements. Our results suggest that temporal recalibration occurs not only for non-speech signals but also for monosyllabic speech at the perceptual level.

  17. Congruency of scapula locking plates: implications for implant design.

    PubMed

    Park, Andrew Y; DiStefano, James G; Nguyen, Thuc-Quyen; Buckley, Jenni M; Montgomery, William H; Grimsrud, Chris D

    2012-04-01

    We conducted a study to evaluate the congruency of fit of current scapular plate designs. Three-dimensional image-processing and -analysis software, and computed tomography scans of 12 cadaveric scapulae were used to generate 3 measurements: mean distance from plate to bone, maximum distance, and percentage of plate surface within 2 mm of bone. These measurements were used to quantify congruency. The scapular spine plate had the most congruent fit in all 3 measured variables. The lateral border and glenoid plates performed statistically as well as the scapular spine plate in at least 1 of the measured variables. The medial border plate had the least optimal measurements in all 3 variables. With locking-plate technology used in a wide variety of anatomical locations, the locking scapula plate system can allow for a fixed-angle construct in this region. Our study results showed that the scapular spine, glenoid, and lateral border plates are adequate in terms of congruency. However, design improvements may be necessary for the medial border plate. In addition, we describe a novel method for quantifying hardware congruency, a method that can be applied to any anatomical location.
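    The three congruency measures reported in the study (mean plate-to-bone distance, maximum distance, and percentage of the plate surface within 2 mm of bone) can be sketched for a hypothetical list of sampled surface distances:

```python
# Sketch of the study's three plate-congruency measures, computed from
# hypothetical plate-to-bone distances (mm), one value per sampled
# point on the plate undersurface.
def congruency_metrics(distances_mm, threshold_mm=2.0):
    mean_d = sum(distances_mm) / len(distances_mm)
    max_d = max(distances_mm)
    pct_within = 100.0 * sum(d <= threshold_mm for d in distances_mm) / len(distances_mm)
    return mean_d, max_d, pct_within

distances = [0.4, 0.8, 1.1, 1.6, 2.3, 3.0, 0.9, 1.2]  # hypothetical samples
mean_d, max_d, pct = congruency_metrics(distances)
```

    A more congruent plate drives the mean and maximum down and the within-2-mm percentage up, which is how the scapular spine plate outperformed the medial border plate in the study.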

  18. Thumb carpometacarpal joint congruence during functional tasks and thumb range-of-motion activities

    PubMed Central

    Halilaj, Eni; Moore, Douglas C; Patel, Tarpit K; Laidlaw, David H; Ladd, Amy L; Weiss, Arnold-Peter C; Crisco, Joseph J

    2017-01-01

    Joint incongruity is often cited as a possible etiological factor for the high incidence of thumb carpometacarpal (CMC) joint osteoarthritis (OA) in older women. There is evidence suggesting that biomechanics plays a role in CMC OA progression, but little is known about how CMC joint congruence, specifically, differs among different cohorts. The purpose of this in vivo study was to determine if CMC joint congruence differs with sex, age, and early stage OA for different thumb positions. Using CT data from 155 subjects and a congruence metric that is based on both articular morphology and joint posture, we did not find any differences in CMC joint congruence with sex or age group, but found that patients in the early stages of OA exhibit lower congruence than healthy subjects of the same age group. PMID:25570956

  19. Thumb carpometacarpal joint congruence during functional tasks and thumb range-of-motion activities.

    PubMed

    Halilaj, Eni; Moore, Douglas C; Patel, Tarpit K; Laidlaw, David H; Ladd, Amy L; Weiss, Arnold-Peter C; Crisco, Joseph J

    2014-01-01

    Joint incongruity is often cited as a possible etiological factor for the high incidence of thumb carpometacarpal (CMC) joint osteoarthritis (OA) in older women. There is evidence suggesting that biomechanics plays a role in CMC OA progression, but little is known about how CMC joint congruence, specifically, differs among different cohorts. The purpose of this in vivo study was to determine if CMC joint congruence differs with sex, age, and early stage OA for different thumb positions. Using CT data from 155 subjects and a congruence metric that is based on both articular morphology and joint posture, we did not find any differences in CMC joint congruence with sex or age group, but found that patients in the early stages of OA exhibit lower congruence than healthy subjects of the same age group.

  20. The Practical Audio-Visual Handbook for Teachers.

    ERIC Educational Resources Information Center

    Scuorzo, Herbert E.

    The use of audio/visual media as an aid to instruction is a common practice in today's classroom. Most teachers, however, have little or no formal training in this field and rarely a knowledgeable coordinator to help them. "The Practical Audio-Visual Handbook for Teachers" discusses the types and mechanics of many of these media forms and proposes…

  1. Infant Perception of Audio-Visual Speech Synchrony

    ERIC Educational Resources Information Center

    Lewkowicz, David J.

    2010-01-01

    Three experiments investigated perception of audio-visual (A-V) speech synchrony in 4- to 10-month-old infants. Experiments 1 and 2 used a convergent-operations approach by habituating infants to an audiovisually synchronous syllable (Experiment 1) and then testing for detection of increasing degrees of A-V asynchrony (366, 500, and 666 ms) or by…

  2. An audiovisual BCI system for assisting clinical communication assessment in patients with disorders of consciousness: a case study.

    PubMed

    Fei Wang; Yanbin He; Jun Qu; Qiuyou Xie; Qing Lin; Xiaoxiao Ni; Yan Chen; Ronghao Yu; Chin-Teng Lin; Yuanqing Li

    2016-08-01

    The JFK Coma Recovery Scale-Revised (JFK CRS-R), a behavioral scale, is often used for clinical assessments of patients with disorders of consciousness (DOC), such as patients in a vegetative state. However, there has been a high rate of clinical misdiagnosis with the JFK CRS-R because patients with severe brain injuries cannot provide sufficient behavioral responses. It is particularly difficult to evaluate the communication function in DOC patients using the JFK CRS-R because a higher level of behavioral responses is needed for communication assessments than for many other assessments, such as an auditory startle assessment. Brain-computer interfaces (BCIs), which provide control and communication by detecting changes in brain signals, can be used to evaluate patients with DOC without the need for behavioral expressions. In this paper, we proposed an audiovisual BCI system to supplement the JFK CRS-R in assessing the communication ability of patients with DOC. In the graphic user interface of the BCI system, two word buttons ("Yes" and "No" in Chinese) were randomly displayed on the left and right sides and flashed in an alternating manner. When a word button flashed, its corresponding spoken word was broadcast from an ipsilateral headphone. The use of semantically congruent audiovisual stimuli improves the detection performance of the BCI system. Similar to the JFK CRS-R, several situation-orientation questions were presented one by one to patients with DOC. For each question, the patient was required to provide his/her answer by selectively focusing on an audiovisual stimulus (audiovisual "Yes" or "No"). As a case study, we applied our BCI system in a patient with DOC who was clinically diagnosed as being in a minimally conscious state (MCS). According to the JFK CRS-R assessment, this patient was unable to communicate consistently. However, he achieved a high accuracy of 86.5% in our BCI experiment. This result indicates his reliable communication ability.

  3. Rapid temporal recalibration is unique to audiovisual stimuli.

    PubMed

    Van der Burg, Erik; Orchard-Mills, Emily; Alais, David

    2015-01-01

    Following prolonged exposure to asynchronous multisensory signals, the brain adapts to reduce the perceived asynchrony. Here, in three separate experiments, participants performed a synchrony judgment task on audiovisual, audiotactile or visuotactile stimuli and we used inter-trial analyses to examine whether temporal recalibration occurs rapidly on the basis of a single asynchronous trial. Even though all combinations used the same subjects, task and design, temporal recalibration occurred for audiovisual stimuli (i.e., the point of subjective simultaneity depended on the preceding trial's modality order), but none occurred when the same auditory or visual event was combined with a tactile event. Contrary to findings from prolonged adaptation studies showing recalibration for all three combinations, we show that rapid, inter-trial recalibration is unique to audiovisual stimuli. We conclude that recalibration occurs at two different timescales for audiovisual stimuli (fast and slow), but only on a slow timescale for audiotactile and visuotactile stimuli.

  4. Fly Photoreceptors Encode Phase Congruency

    PubMed Central

    Friederich, Uwe; Billings, Stephen A.; Hardie, Roger C.; Juusola, Mikko; Coca, Daniel

    2016-01-01

    More than five decades ago it was postulated that sensory neurons detect and selectively enhance behaviourally relevant features of natural signals. Although we now know that sensory neurons are tuned to efficiently encode natural stimuli, until now it was not clear which statistical features of the stimuli they encode, and how. Here we reverse-engineer the neural code of Drosophila photoreceptors and show for the first time that photoreceptors exploit nonlinear dynamics to selectively enhance and encode phase-related features of temporal stimuli, such as local phase congruency, which are invariant to changes in illumination and contrast. We demonstrate that, to mitigate the inherent noise sensitivity of the local phase congruency measure, the nonlinear coding mechanisms of the fly photoreceptors are tuned to suppress random-phase signals, which explains why photoreceptor responses to naturalistic stimuli differ significantly from their responses to white-noise stimuli. PMID:27336733
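    Local phase congruency, the measure the study argues photoreceptors encode, can be sketched at a single point in a signal: it is the magnitude of the amplitude-weighted vector sum of the frequency components divided by the total amplitude, reaching 1 when all components are in phase (a salient, contrast-invariant feature) and falling toward 0 for random phases. The component values below are hypothetical:

```python
# Sketch of the local phase congruency measure at one point:
# PC = |sum_n A_n * exp(i * phi_n)| / sum_n A_n
import cmath

def phase_congruency(components):
    """components: list of (amplitude, phase_radians) pairs at one point."""
    vector_sum = sum(a * cmath.exp(1j * phi) for a, phi in components)
    energy = sum(a for a, _ in components)
    return abs(vector_sum) / energy

aligned = [(1.0, 0.5), (0.5, 0.5), (0.25, 0.5)]   # phases congruent: PC = 1
random_ = [(1.0, 0.1), (0.5, 2.8), (0.25, -1.9)]  # phases scattered: PC low

pc_aligned = phase_congruency(aligned)
pc_random = phase_congruency(random_)
```

    Because the measure is a ratio, scaling all amplitudes (e.g., by a contrast change) leaves it unchanged, which is the invariance property the abstract highlights.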

  5. Attributes of quality in audiovisual materials for health professionals.

    PubMed

    Suter, E; Waddell, W H

    1981-07-01

    Through a series of meetings incorporating the individual efforts of producers, evaluators, and users of audiovisual materials, an attempt has been made to define the quality of an instructional item. Attributes of quality in the content, instructional design, technical production, and packaging of audiovisual materials are addressed through questions about general criteria that permit expression of individual dictates of creativity and taste. These attributes of quality are intended for use by the producers and evaluators of audiovisual instruction.

  6. Partner choice, relationship satisfaction, and oral contraception: the congruency hypothesis.

    PubMed

    Roberts, S Craig; Little, Anthony C; Burriss, Robert P; Cobey, Kelly D; Klapilová, Kateřina; Havlíček, Jan; Jones, Benedict C; DeBruine, Lisa; Petrie, Marion

    2014-07-01

    Hormonal fluctuation across the menstrual cycle explains temporal variation in women's judgment of the attractiveness of members of the opposite sex. Use of hormonal contraceptives could therefore influence both initial partner choice and, if contraceptive use subsequently changes, intrapair dynamics. Associations between hormonal contraceptive use and relationship satisfaction may thus be best understood by considering whether current use is congruent with use when relationships formed, rather than by considering current use alone. In the study reported here, we tested this congruency hypothesis in a survey of 365 couples. Controlling for potential confounds (including relationship duration, age, parenthood, and income), we found that congruency in current and previous hormonal contraceptive use, but not current use alone, predicted women's sexual satisfaction with their partners. Congruency was not associated with women's nonsexual satisfaction or with the satisfaction of their male partners. Our results provide empirical support for the congruency hypothesis and suggest that women's sexual satisfaction is influenced by changes in partner preference associated with change in hormonal contraceptive use. © The Author(s) 2014.

  7. Audiovisual Integration in High Functioning Adults with Autism

    ERIC Educational Resources Information Center

    Keane, Brian P.; Rosenthal, Orna; Chun, Nicole H.; Shams, Ladan

    2010-01-01

    Autism involves various perceptual benefits and deficits, but it is unclear if the disorder also involves anomalous audiovisual integration. To address this issue, we compared the performance of high-functioning adults with autism and matched controls on experiments investigating the audiovisual integration of speech, spatiotemporal relations, and…

  8. How does information congruence influence diagnosis performance?

    PubMed

    Chen, Kejin; Li, Zhizhong

    2015-01-01

    Diagnosis performance is critical for the safety of high-consequence industrial systems. It depends highly on the information provided, perceived, interpreted and integrated by operators. This article examines the influence of information congruence (congruent information vs. conflicting information vs. missing information) and its interaction with time pressure (high vs. low) on diagnosis performance on a simulated platform. The experimental results reveal that the participants confronted with conflicting information spent significantly more time generating correct hypotheses and rated the results with lower probability values than when confronted with the other two levels of information congruence and were more prone to arrive at a wrong diagnosis result than when they were provided with congruent information. This finding stresses the importance of the proper processing of non-congruent information in safety-critical systems. Time pressure significantly influenced display switching frequency and completion time. This result indicates the decisive role of time pressure. Practitioner Summary: This article examines the influence of information congruence and its interaction with time pressure on human diagnosis performance on a simulated platform. For complex systems in the process control industry, the results stress the importance of the proper processing of non-congruent information in safety-critical systems.

  9. A Profile-Based Framework for Factorial Similarity and the Congruence Coefficient.

    PubMed

    Hartley, Anselma G; Furr, R Michael

    2017-01-01

    We present a novel profile-based framework for understanding factorial similarity in the context of exploratory factor analysis in general, and for understanding the congruence coefficient (a commonly used index of factor similarity) in particular. First, we introduce the profile-based framework, articulating factorial similarity in terms of 3 intuitive components: general saturation similarity, differential saturation similarity, and configural similarity. We then articulate the congruence coefficient in terms of these components, along with 2 additional profile-based components, and we explain how these components resolve ambiguities that can be, and are, found when using the congruence coefficient. Finally, we present secondary analyses revealing that profile-based components of factorial similarity are indeed linked to experts' actual evaluations of factorial similarity. Overall, the profile-based approach we present offers new insights into the ways researchers can examine factor similarity and holds the potential to enhance researchers' ability to understand the congruence coefficient.
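    The congruence coefficient discussed here is Tucker's index of factor similarity: the normalized inner product of two columns of factor loadings. A minimal sketch with hypothetical loadings:

```python
# Tucker's congruence coefficient for two factors, computed from their
# loading vectors. The loadings below are hypothetical.
from math import sqrt

def congruence_coefficient(x, y):
    num = sum(a * b for a, b in zip(x, y))
    return num / sqrt(sum(a * a for a in x) * sum(b * b for b in y))

f1 = [0.7, 0.6, 0.8, 0.1]   # loadings of factor 1 on four variables
f2 = [0.6, 0.7, 0.7, 0.2]   # loadings of factor 2 on the same variables
phi = congruence_coefficient(f1, f2)
```

    Because the coefficient is normalized, it is invariant to uniform rescaling of either loading vector, one source of the interpretive ambiguity that the article's profile-based components are designed to disentangle.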

  10. Use of Audiovisual Texts in University Education Process

    ERIC Educational Resources Information Center

    Aleksandrov, Evgeniy P.

    2014-01-01

    Audio-visual learning technologies offer great opportunities in the development of students' analytical and projective abilities. These technologies can be used in classroom activities and for homework. This article discusses the features of audiovisual media texts use in a series of social sciences and humanities in the University curriculum.

  11. Application and Operation of Audiovisual Equipment in Education.

    ERIC Educational Resources Information Center

    Pula, Fred John

    Interest in audiovisual aids in education has been increased by the shortage of classrooms and good teachers and by the modern predisposition toward learning by visual concepts. Effective utilization of audiovisual materials and equipment depends, most importantly, on adequate preparation of the teacher in operating equipment and in coordinating…

  12. Perceived synchrony for realistic and dynamic audiovisual events.

    PubMed

    Eg, Ragnhild; Behne, Dawn M

    2015-01-01

    In well-controlled laboratory experiments, researchers have found that humans can perceive delays between auditory and visual signals as short as 20 ms. Conversely, other experiments have shown that humans can tolerate audiovisual asynchrony that exceeds 200 ms. This seeming contradiction in human temporal sensitivity can be attributed to a number of factors such as experimental approaches and precedence of the asynchronous signals, along with the nature, duration, location, complexity and repetitiveness of the audiovisual stimuli, and even individual differences. In order to better understand how temporal integration of audiovisual events occurs in the real world, we need to close the gap between the experimental setting and the complex setting of everyday life. With this work, we aimed to contribute one brick to the bridge that will close this gap. We compared perceived synchrony for long-running and eventful audiovisual sequences to shorter sequences that contain a single audiovisual event, for three types of content: action, music, and speech. The resulting windows of temporal integration showed that participants were better at detecting asynchrony for the longer stimuli, possibly because the long-running sequences contain multiple corresponding events that offer audiovisual timing cues. Moreover, the points of subjective simultaneity differ between content types, suggesting that the nature of a visual scene could influence the temporal perception of events. An expected outcome from this type of experiment was the rich variation among participants' distributions and the derived points of subjective simultaneity. Hence, the designs of similar experiments call for more participants than traditional psychophysical studies. Heeding this caution, we conclude that existing theories on multisensory perception are ready to be tested on more natural and representative stimuli.
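
    The points of subjective simultaneity (PSS) and windows of temporal integration reported above are typically derived by fitting a bell-shaped curve to the proportion of "synchronous" judgments across audio-visual offsets; a minimal sketch using a Gaussian and a coarse grid search (all data values and parameter ranges are illustrative assumptions, not taken from the study):

```python
import numpy as np

def gaussian(soa, amp, pss, width):
    # Proportion of "synchronous" responses as a function of stimulus
    # onset asynchrony (SOA, ms; negative = audio leads)
    return amp * np.exp(-((soa - pss) ** 2) / (2 * width ** 2))

# Illustrative synchrony-judgment data for one observer
soas = np.array([-300, -200, -100, 0, 100, 200, 300], float)
p_sync = np.array([0.10, 0.35, 0.80, 0.95, 0.90, 0.55, 0.15])

# Least-squares fit over a coarse parameter grid
amp, pss, width = min(
    ((a, p, w)
     for a in np.linspace(0.8, 1.0, 5)
     for p in np.linspace(-100, 100, 41)
     for w in np.linspace(50, 250, 21)),
    key=lambda prm: np.sum((gaussian(soas, *prm) - p_sync) ** 2),
)
# pss estimates the point of subjective simultaneity;
# width indexes the breadth of the temporal integration window
```

    The rich between-participant variation the authors note is exactly why fits like this are computed per observer before comparing PSS across content types.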

  14. Development of compositional and contextual communicable congruence in robots by using dynamic neural network models.

    PubMed

    Park, Gibeom; Tani, Jun

    2015-12-01

    The current study presents neurorobotics experiments on the acquisition of skills for "communicable congruence" with humans via learning. A dynamic neural network model characterized by multiple timescale dynamics (a multiple-timescale recurrent neural network, MTRNN) was utilized as a neuromorphic model for controlling a humanoid robot. In the experimental task, the humanoid robot was trained to generate specific sequential movement patterns in response to various sequences of imperative gesture patterns demonstrated by human subjects, following predefined compositional semantic rules. The experimental results showed that (1) the adopted MTRNN can achieve generalization by learning in the lower feature-perception level using a limited set of tutoring patterns, (2) the MTRNN can learn to extract compositional semantic rules with generalization in its higher level characterized by slow timescale dynamics, and (3) the MTRNN can develop a further cognitive capability for controlling internal contextual processes as situated in ongoing task sequences, without being provided with cues explicitly indicating task segmentation points. Analysis of the dynamic properties developed in the MTRNN via learning indicated that the aforementioned cognitive mechanisms were achieved by self-organization of an adequate functional hierarchy, exploiting the constraint of the multiple timescale property and the topological connectivity imposed on the network configuration. These results could contribute to the development of socially intelligent robots endowed with cognitive communicative competency similar to that of humans. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Age-related audiovisual interactions in the superior colliculus of the rat.

    PubMed

    Costa, M; Piché, M; Lepore, F; Guillemot, J-P

    2016-04-21

    It is well established that multisensory integration is a functional characteristic of the superior colliculus that disambiguates external stimuli and therefore reduces reaction times toward simple audiovisual targets in space. However, in conditions where a complex audiovisual stimulus is used, such as optical flow in the presence of modulated audio signals, little is known about multisensory integration processing in the superior colliculus. Furthermore, since visual and auditory deficits constitute hallmark signs of aging, we sought to gain insight into whether audiovisual processes in the superior colliculus are altered with age. Extracellular single-unit recordings were conducted in the superior colliculus of anesthetized Sprague-Dawley adult (10-12 months) and aged (21-22 months) rats. Looming circular concentric sinusoidal (CCS) gratings were presented alone and in the presence of sinusoidally amplitude-modulated white noise. In both groups of rats, two different audiovisual response interactions were encountered in the spatial domain: superadditive and suppressive. In contrast, additive audiovisual interactions were found only in adult rats. Hence, superior colliculus audiovisual interactions were more numerous in adult rats (38%) than in aged rats (8%). These results suggest that intersensory interactions in the superior colliculus play an essential role in spatial processing of audiovisual moving objects during self-motion. Moreover, aging has a deleterious effect on complex audiovisual interactions. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
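
    The superadditive/additive/suppressive taxonomy above is conventionally defined by comparing the bimodal response against the unimodal responses; a minimal sketch (the function name and tolerance threshold are illustrative assumptions, not the authors' criterion):

```python
def classify_interaction(av, a, v, tol=0.05):
    """Classify an audiovisual response against the unimodal responses.

    av, a, v: mean response magnitudes (e.g. spike counts) for the
    audiovisual, auditory-only, and visual-only conditions.
    """
    unimodal_sum = a + v
    best_unimodal = max(a, v)
    if av > unimodal_sum * (1 + tol):
        return "superadditive"   # bimodal exceeds the sum of unimodal
    if av < best_unimodal * (1 - tol):
        return "suppressive"     # bimodal falls below the best unimodal
    return "additive"            # bimodal roughly matches the sum
```

    Under a scheme like this, the age comparison reported above amounts to counting how many recorded units fall into each category in the adult versus aged groups.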

  16. Knowledge Generated by Audiovisual Narrative Action Research Loops

    ERIC Educational Resources Information Center

    Bautista Garcia-Vera, Antonio

    2012-01-01

    We present data collected from the research project funded by the Ministry of Education and Science of Spain entitled "Audiovisual Narratives and Intercultural Relations in Education." One of the aims of the research was to determine the nature of thought processes occurring during audiovisual narratives. We studied the possibility of…

  17. Audiovisual Mass Media and Education. TTW 27/28.

    ERIC Educational Resources Information Center

    van Stapele, Peter, Ed.; Sutton, Clifford C., Ed.

    1989-01-01

    The 15 articles in this special issue focus on learning about the audiovisual mass media and education, especially television and film, in relation to various pedagogical and didactical questions. Individual articles are: (1) "Audiovisual Mass Media for Education in Pakistan: Problems and Prospects" (Ahmed Noor Kahn); (2) "The Role of the…

  18. The Audio-Visual Marketing Handbook for Independent Schools.

    ERIC Educational Resources Information Center

    Griffith, Tom

    This how-to booklet offers specific advice on producing video or slide/tape programs for marketing independent schools. Five chapters present guidelines for various stages in the process: (1) Audio-Visual Marketing in Context (aesthetics and economics of audiovisual marketing); (2) A Question of Identity (identifying the audience and deciding on…

  19. Audiovisual Association Learning in the Absence of Primary Visual Cortex.

    PubMed

    Seirafi, Mehrdad; De Weerd, Peter; Pegna, Alan J; de Gelder, Beatrice

    2015-01-01

    Learning audiovisual associations is mediated by primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways, especially the superior colliculus (SC), in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit audiovisual association learning task with two colors, red and purple (the latter known to minimally activate the extra-geniculate pathway). Interestingly, the patient learned the association between an auditory cue and a visual stimulus only when the unseen visual stimulus was red, but not when it was purple. The current study presents the first evidence of audiovisual association learning in humans with a lesioned striate cortex. Furthermore, in line with animal studies, it supports an important role for the SC in audiovisual associative learning.

  20. Long-term music training modulates the recalibration of audiovisual simultaneity.

    PubMed

    Jicol, Crescent; Proulx, Michael J; Pollick, Frank E; Petrini, Karin

    2018-07-01

    To overcome differences in physical transmission time and neural processing, the brain adaptively recalibrates the point of simultaneity between auditory and visual signals by adapting to audiovisual asynchronies. Here, we examine whether this prolonged recalibration process for passively sensed visual and auditory signals is affected by naturally occurring multisensory training known to enhance audiovisual perceptual accuracy. We asked a group of drummers, a group of non-drummer musicians, and a group of non-musicians to judge the audiovisual simultaneity of musical and non-musical audiovisual events, before and after adaptation with two fixed audiovisual asynchronies. We found that recalibration for the musicians and drummers was in the opposite direction (sound leading vision) to that of non-musicians (vision leading sound), and changed with both increased music training and increased perceptual accuracy (i.e. ability to detect asynchrony). Our findings demonstrate that long-term musical training reshapes the way humans adaptively recalibrate simultaneity between auditory and visual signals.

  1. Audio-visual integration through the parallel visual pathways.

    PubMed

    Kaposvári, Péter; Csete, Gergő; Bognár, Anna; Csibri, Péter; Tóth, Eszter; Szabó, Nikoletta; Vécsei, László; Sáry, Gyula; Tamás Kincses, Zsigmond

    2015-10-22

    Audio-visual integration has been shown to be present in a wide range of conditions, some of which are processed through the dorsal, and others through the ventral, visual pathway. Whereas neuroimaging studies have revealed integration-related activity in the brain, there has been no imaging study of the possible role of segregated visual streams in audio-visual integration. We set out to determine how the different visual pathways participate in this communication. We investigated how audio-visual integration can be supported through the dorsal and ventral visual pathways during the double flash illusion. Low-contrast and chromatic isoluminant stimuli were used to preferentially drive the dorsal and ventral pathways, respectively. In order to identify the anatomical substrates of the audio-visual interaction in the two conditions, the psychophysical results were correlated with white matter integrity as measured by diffusion tensor imaging. The psychophysical data revealed a robust double flash illusion in both conditions. A correlation between the psychophysical results and local fractional anisotropy was found in the occipito-parietal white matter in the low-contrast condition, while a similar correlation was found in the infero-temporal white matter in the chromatic isoluminant condition. Our results indicate that both of the parallel visual pathways may play a role in the audio-visual interaction. Copyright © 2015. Published by Elsevier B.V.

  2. A System for the Semantic Multimodal Analysis of News Audio-Visual Content

    NASA Astrophysics Data System (ADS)

    Mezaris, Vasileios; Gidaros, Spyros; Papadopoulos, Georgios Th.; Kasper, Walter; Steffen, Jörg; Ordelman, Roeland; Huijbregts, Marijn; de Jong, Franciska; Kompatsiaris, Ioannis; Strintzis, Michael G.

    2010-12-01

    News-related content is nowadays among the most popular types of content for users in everyday applications. Although the generation and distribution of news content have become commonplace, thanks to inexpensive media-capturing devices and media-sharing services targeting both professional and user-generated news content, the automatic analysis and annotation required to support intelligent search and delivery of this content remain an open issue. In this paper, a complete architecture for knowledge-assisted multimodal analysis of news-related multimedia content is presented, along with its constituent components. The proposed analysis architecture employs state-of-the-art methods for the analysis of each individual modality (visual, audio, text) separately and proposes a novel fusion technique, based on the particular characteristics of news-related content, for combining the individual modality analysis results. Experimental results on news broadcast video illustrate the usefulness of the proposed techniques in the automatic generation of semantic annotations.

  3. Biomedical semantics in the Semantic Web

    PubMed Central

    2011-01-01

    The Semantic Web offers an ideal platform for representing and linking biomedical information, which is a prerequisite for the development and application of analytical tools to address problems in data-intensive areas such as systems biology and translational medicine. As for any new paradigm, the adoption of the Semantic Web offers opportunities and poses questions and challenges to the life sciences scientific community: which technologies in the Semantic Web stack will be more beneficial for the life sciences? Is biomedical information too complex to benefit from simple interlinked representations? What are the implications of adopting a new paradigm for knowledge representation? What are the incentives for the adoption of the Semantic Web, and who are the facilitators? Is there going to be a Semantic Web revolution in the life sciences? We report here a few reflections on these questions, following discussions at the SWAT4LS (Semantic Web Applications and Tools for Life Sciences) workshop series, of which this Journal of Biomedical Semantics special issue presents selected papers from the 2009 edition, held in Amsterdam on November 20th. PMID:21388570

  4. Biomedical semantics in the Semantic Web.

    PubMed

    Splendiani, Andrea; Burger, Albert; Paschke, Adrian; Romano, Paolo; Marshall, M Scott

    2011-03-07

    The Semantic Web offers an ideal platform for representing and linking biomedical information, which is a prerequisite for the development and application of analytical tools to address problems in data-intensive areas such as systems biology and translational medicine. As for any new paradigm, the adoption of the Semantic Web offers opportunities and poses questions and challenges to the life sciences scientific community: which technologies in the Semantic Web stack will be more beneficial for the life sciences? Is biomedical information too complex to benefit from simple interlinked representations? What are the implications of adopting a new paradigm for knowledge representation? What are the incentives for the adoption of the Semantic Web, and who are the facilitators? Is there going to be a Semantic Web revolution in the life sciences? We report here a few reflections on these questions, following discussions at the SWAT4LS (Semantic Web Applications and Tools for Life Sciences) workshop series, of which this Journal of Biomedical Semantics special issue presents selected papers from the 2009 edition, held in Amsterdam on November 20th.

  5. Different Levels of Learning Interact to Shape the Congruency Sequence Effect

    ERIC Educational Resources Information Center

    Weissman, Daniel H.; Hawks, Zoë W.; Egner, Tobias

    2016-01-01

    The congruency effect in distracter interference tasks is often reduced after incongruent relative to congruent trials. Moreover, this "congruency sequence effect" (CSE) is influenced by learning related to concrete stimulus and response features as well as by learning related to abstract cognitive control processes. There is an ongoing…

  6. Elevated audiovisual temporal interaction in patients with migraine without aura

    PubMed Central

    2014-01-01

    Background Photophobia and phonophobia are the most prominent symptoms in patients with migraine without aura. Hypersensitivity to visual stimuli can lead to greater hypersensitivity to auditory stimuli, which suggests that the interaction between visual and auditory stimuli may play an important role in the pathogenesis of migraine. However, audiovisual temporal interactions in migraine have not been well studied. Therefore, our aim was to examine auditory and visual interactions in migraine. Methods In this study, visual, auditory, and audiovisual stimuli with different temporal intervals between the visual and auditory stimuli were randomly presented to the left or right hemispace. During this time, the participants were asked to respond promptly to target stimuli. We used cumulative distribution functions to analyze the response times as a measure of audiovisual integration. Results Our results showed that audiovisual integration was significantly elevated in the migraineurs compared with the normal controls (p < 0.05); however, audiovisual suppression was weaker in the migraineurs compared with the normal controls (p < 0.05). Conclusions Our findings further objectively support the notion that migraineurs without aura are hypersensitive to external visual and auditory stimuli. Our study offers a new quantitative and objective method to evaluate hypersensitivity to audio-visual stimuli in patients with migraine. PMID:24961903
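
    The CDF-based analysis of response times described above is commonly operationalized via the race-model inequality, which bounds the redundant-target CDF by the sum of the unimodal CDFs; a hedged sketch using empirical CDFs (function names are illustrative, and this is the standard race-model test rather than the authors' exact pipeline):

```python
import numpy as np

def race_model_violation(rt_av, rt_a, rt_v, t_grid):
    """Evaluate the race-model inequality on empirical RT distributions.

    Returns F_av(t) - min(F_a(t) + F_v(t), 1) on t_grid; positive
    values indicate audiovisual integration beyond what independent
    parallel channels predict.
    """
    def ecdf(rts, t):
        rts = np.sort(np.asarray(rts, float))
        return np.searchsorted(rts, t, side="right") / len(rts)

    f_av = ecdf(rt_av, t_grid)
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return f_av - bound
```

    Elevated integration in migraineurs would then show up as larger positive values of this curve over the fast portion of the RT distribution.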

  7. Attenuated audiovisual integration in middle-aged adults in a discrimination task.

    PubMed

    Yang, Weiping; Ren, Yanna

    2018-02-01

    Numerous studies have focused on differences in audiovisual integration between younger and older adults. However, the continuous trend in audiovisual integration across the lifespan is still unclear. In the present study, to clarify the characteristics of audiovisual integration in middle-aged adults, we instructed younger and middle-aged adults to perform an auditory/visual discrimination experiment. Randomized streams of unimodal auditory (A), unimodal visual (V) or audiovisual stimuli were presented to the left or right of the central fixation point, and subjects were instructed to respond to the target stimuli rapidly and accurately. Our results demonstrated that the responses of middle-aged adults to all unimodal and bimodal stimuli were significantly slower than those of younger adults (p < 0.05). Audiovisual integration was markedly delayed (onset time 360 ms) and weaker (peak 3.97%) in middle-aged adults than in younger adults (onset time 260 ms, peak 11.86%). These results suggest that audiovisual integration is attenuated in middle-aged adults and further confirm an age-related decline in information processing.

  8. The contribution of dynamic visual cues to audiovisual speech perception.

    PubMed

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues, two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this end, we measured word identification performance in noise using unimodal auditory stimuli and audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point-light displays obtained via motion capture of the original talker. Point-light displays could be isoluminant, to minimise the contribution of luminance-defined local motion information, or have added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to the auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly to the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Congruence between Culturally Competent Treatment and Cultural Needs of Older Latinos

    ERIC Educational Resources Information Center

    Costantino, Giuseppe; Malgady, Robert G.; Primavera, Louis H.

    2009-01-01

    This study investigated a new 2-factor construct, termed "cultural congruence", which is related to cultural competence in the delivery of mental health services to ethnic minority clients. Cultural congruence was defined as the distance between the cultural competence characteristics of the health care organization and the clients' perception of…

  10. Rapid, generalized adaptation to asynchronous audiovisual speech.

    PubMed

    Van der Burg, Erik; Goodbourn, Patrick T

    2015-04-07

    The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  12. The use of audiovisual aids in the relocation program.

    DOT National Transportation Integrated Search

    1979-01-01

    The report presents the findings of a study of an audiovisual slide presentation on the rights and benefits of persons relocated as a result of highway construction. The overall purpose of the study was to evaluate the audiovisual system used by the ...

  13. Catalogs of Audiovisual Materials: A Guide to Government Sources.

    ERIC Educational Resources Information Center

    Dale, Doris Cruger

    This annotated bibliography lists 53 federally published catalogs and bibliographies which identify films and other audiovisual materials produced or sponsored by government agencies; some also include commercially produced audiovisual and/or print materials. Publications are listed alphabetically by government agency or department, and…

  14. Music and Its Significance in Children's Favourite Audiovisuals

    ERIC Educational Resources Information Center

    Porta, Amparo; Herrera, Lucía

    2017-01-01

    Audiovisual media are part of children's daily life. They build and/or replace a part of the reality that precedes them. This paper focuses on one of the elements of the audiovisual binomial, the soundtrack, in order to analyse its meaning and sense from the children's point of view. The objectives are: to determine if the…

  15. A measure for assessing the effects of audiovisual speech integration.

    PubMed

    Altieri, Nicholas; Townsend, James T; Wenger, Michael J

    2014-06-01

    We propose a measure of audiovisual speech integration that takes into account both accuracy and response times. This measure should prove beneficial for researchers investigating multisensory speech recognition, since it applies to normal-hearing and aging populations alike. For example, age-related sensory decline influences both the rate at which one processes information and the ability to utilize cues from different sensory modalities. Our function assesses integration when both auditory and visual information are available by comparing performance on audiovisual trials with theoretical predictions for performance under the assumptions of parallel, independent, self-terminating processing of single-modality inputs. We provide example data from an audiovisual identification experiment and discuss applications for measuring audiovisual integration skills across the life span.
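
    Comparisons against a parallel independent race model are often expressed in this literature as a workload-capacity ratio of cumulative hazard functions; a simplified, RT-only sketch in that spirit (the published measure also incorporates accuracy, and all names and data here are illustrative assumptions):

```python
import numpy as np

def capacity_coefficient(rt_av, rt_a, rt_v, t):
    """Hazard-ratio capacity index C(t) = H_av(t) / (H_a(t) + H_v(t)),
    with cumulative hazard H(t) = -log(1 - F(t)) estimated from
    empirical CDFs. C(t) > 1 suggests an audiovisual benefit beyond
    parallel independent processing of the two modalities."""
    def cum_hazard(rts, t):
        rts = np.sort(np.asarray(rts, float))
        # n + 1 in the denominator keeps F < 1 so the log is finite
        f = np.searchsorted(rts, t, side="right") / (len(rts) + 1)
        return -np.log(1.0 - f)

    return cum_hazard(rt_av, t) / (cum_hazard(rt_a, t) + cum_hazard(rt_v, t))
```

    Applied across the lifespan, a decline of C(t) toward (or below) 1 would index the kind of reduced multisensory benefit the abstract describes for aging populations.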

  16. Attentional Factors in Conceptual Congruency

    ERIC Educational Resources Information Center

    Santiago, Julio; Ouellet, Marc; Roman, Antonio; Valenzuela, Javier

    2012-01-01

    Conceptual congruency effects are biases induced by an irrelevant conceptual dimension of a task (e.g., location in vertical space) on the processing of another, relevant dimension (e.g., judging words' emotional evaluation). Such effects are a central empirical pillar for recent views about how the mind/brain represents concepts. In the present…

  17. Moral Values Congruence and Miners' Policy Following Behavior: The Role of Supervisor Morality.

    PubMed

    Lu, Hui; Chen, Hong; Du, Wei; Long, Ruyin

    2017-06-01

    Constructing an ethical culture helps maximize policy following behavior (PFB) and avoid accidents among coal miners during an economic downturn. This paper examines the congruence between coal mine ethical culture values (ECVs) and miners' moral values (MVs), and its relationship with PFB. To shed light on this relationship, supervisor moral values (SMVs) are treated as a key moderator. We build on the initial structure of values to measure ECVs, MVs, and SMVs, and define "available congruence" to describe the relationship between the two sets of values. Drawing upon a survey of 267 miners in large Chinese state-owned coal mining enterprises, results revealed that ECVs-MVs congruence had a linear relationship with intrinsic PFB (IPFB) and a non-linear relationship with extrinsic PFB. The findings also demonstrate that SMVs moderated the relationship between ECVs-MVs congruence and extrinsic PFB. We then calculated the scope of available congruence in the tested enterprises. Finally, this study offers management proposals and suggestions for improving miners' moral standards and reducing coal mine accidents.

  18. Bilingualism affects audiovisual phoneme identification

    PubMed Central

    Burfin, Sabine; Pascalis, Olivier; Ruiz Tada, Elisa; Costa, Albert; Savariaux, Christophe; Kandel, Sonia

    2014-01-01

    We all go through a process of perceptual narrowing for phoneme identification. As we become experts in the languages we hear in our environment we lose the ability to identify phonemes that do not exist in our native phonological inventory. This research examined how linguistic experience—i.e., the exposure to a double phonological code during childhood—affects the visual processes involved in non-native phoneme identification in audiovisual speech perception. We conducted a phoneme identification experiment with bilingual and monolingual adult participants. It was an ABX task involving a Bengali dental-retroflex contrast that does not exist in any of the participants' languages. The phonemes were presented in audiovisual (AV) and audio-only (A) conditions. The results revealed that in the audio-only condition monolinguals and bilinguals had difficulties in discriminating the retroflex non-native phoneme. They were phonologically “deaf” and assimilated it to the dental phoneme that exists in their native languages. In the audiovisual presentation instead, both groups could overcome the phonological deafness for the retroflex non-native phoneme and identify both Bengali phonemes. However, monolinguals were more accurate and responded quicker than bilinguals. This suggests that bilinguals do not use the same processes as monolinguals to decode visual speech. PMID:25374551

  19. The modified gap excess ratio (GER*) and the stratigraphic congruence of dinosaur phylogenies.

    PubMed

    Wills, Matthew A; Barrett, Paul M; Heathcote, Julia F

    2008-12-01

    Palaeontologists routinely map their cladograms onto what is known of the fossil record. Where sister taxa first appear as fossils at different times, a ghost range is inferred to bridge the gap between these dates. Some measure of the total extent of ghost ranges across the tree underlies several indices of cladistic/stratigraphic congruence. We investigate this congruence for 19 independent, published cladograms of major dinosaur groups and report exceptional agreement between the phylogenetic and stratigraphic patterns, evidenced by sums of ghost ranges near the theoretical minima. This implies that both phylogenetic and stratigraphic data reflect faithfully the evolutionary history of dinosaurs, at least for the taxa included in this study. We formally propose modifications to an existing index of congruence (the gap excess ratio; GER), designed to remove a bias in the range of values possible with trees of different shapes. We also propose a more informative index of congruence--GER*--that takes account of the underlying distribution of sums of ghost ranges possible when permuting stratigraphic range data across the tree. Finally, we incorporate data on the range of possible first occurrence dates into our estimates of congruence, extending a procedure originally implemented with the modified Manhattan stratigraphic measure and GER to our new indices. Most dinosaur data sets maintain extremely high congruence despite such modifications.
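
The gap excess ratio named above can be made concrete with a small sketch. This is a minimal, hypothetical Python implementation of the standard GER (not the authors' code), plus a permutation-based statistic in the spirit of GER*; the nested-tuple tree encoding and function names are assumptions for illustration.

```python
from itertools import permutations

def mig(tree):
    """Return (oldest first appearance, summed ghost ranges) for a
    nested-tuple cladogram whose leaves are first-appearance ages in Ma
    (older = larger)."""
    if not isinstance(tree, tuple):          # leaf: a first-appearance date
        return tree, 0.0
    (la, gl), (ra, gr) = mig(tree[0]), mig(tree[1])
    node_age = max(la, ra)                   # divergence at least as old as oldest child
    return node_age, gl + gr + (node_age - la) + (node_age - ra)

def ger(shape, dates):
    """Gap excess ratio: compare the observed sum of ghost ranges with the
    best and worst sums obtainable by permuting the stratigraphic dates
    across the leaves. `shape` is a nested tuple of indices into `dates`."""
    def build(s, d):
        return d[s] if not isinstance(s, tuple) else (build(s[0], d), build(s[1], d))
    g_obs = mig(build(shape, dates))[1]
    perms = [mig(build(shape, list(p)))[1] for p in permutations(dates)]
    g_min, g_max = min(perms), max(perms)
    ger_value = (g_max - g_obs) / (g_max - g_min) if g_max > g_min else 1.0
    # A GER*-flavoured statistic: position of g_obs within the permutation
    # distribution (fraction of permutations with a larger gap sum).
    ger_star = sum(p > g_obs for p in perms) / len(perms)
    return g_obs, ger_value, ger_star
```

For the toy tree ((A, B), C) with first appearances of 100, 90, and 80 Ma, the observed sum of ghost ranges is 30, the permutation minimum is 20, and the GER is 0 (worst-case congruence for that tree shape).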

  20. A spatially collocated sound thrusts a flash into awareness

    PubMed Central

    Aller, Máté; Giani, Anette; Conrad, Verena; Watanabe, Masataka; Noppeney, Uta

    2015-01-01

    To interact effectively with the environment the brain integrates signals from multiple senses. It is currently unclear to what extent spatial information can be integrated across different senses in the absence of awareness. Combining dynamic continuous flash suppression (CFS) and spatial audiovisual stimulation, the current study investigated whether a sound facilitates a concurrent visual flash to elude flash suppression and enter perceptual awareness depending on audiovisual spatial congruency. Our results demonstrate that a concurrent sound boosts unaware visual signals into perceptual awareness. Critically, this process depended on the spatial congruency of the auditory and visual signals pointing towards low level mechanisms of audiovisual integration. Moreover, the concurrent sound biased the reported location of the flash as a function of flash visibility. The spatial bias of sounds on reported flash location was strongest for flashes that were judged invisible. Our results suggest that multisensory integration is a critical mechanism that enables signals to enter conscious perception. PMID:25774126

  1. Further Insight and Additional Inference Methods for Polynomial Regression Applied to the Analysis of Congruence

    ERIC Educational Resources Information Center

    Cohen, Ayala; Nahum-Shani, Inbal; Doveh, Etti

    2010-01-01

    In their seminal paper, Edwards and Parry (1993) presented polynomial regression as a better alternative to difference scores in the study of congruence. Although this method is increasingly applied in congruence research, its complexity relative to other methods for assessing congruence (e.g., difference score methods) was one of the…

  2. Segmentation of the Speaker's Face Region with Audiovisual Correlation

    NASA Astrophysics Data System (ADS)

    Liu, Yuyu; Sato, Yoichi

    The ability to find the speaker's face region in a video is useful for various applications. In this work, we develop a novel technique to find this region within different time windows, which is robust against changes of view, scale, and background. The main thrust of our technique is to integrate audiovisual correlation analysis into a video segmentation framework. We analyze the audiovisual correlation locally by computing quadratic mutual information between our audiovisual features. The computation of quadratic mutual information is based on the probability density functions estimated by kernel density estimation with adaptive kernel bandwidth. The results of this audiovisual correlation analysis are incorporated into graph cut-based video segmentation to obtain a globally optimal extraction of the speaker's face region. The setting of any heuristic threshold in this segmentation is avoided by learning the correlation distributions of speaker and background by expectation maximization. Experimental results demonstrate that our method can detect the speaker's face region accurately and robustly for different views, scales, and backgrounds.
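
The quadratic-mutual-information step lends itself to a compact sketch. Below is a minimal, hypothetical implementation of Principe-style ED-QMI with a fixed Gaussian kernel bandwidth; the paper estimates densities with adaptive bandwidths and operates on audiovisual features rather than the toy signals used here.

```python
import numpy as np

def ed_qmi(x, y, sigma=0.5):
    """Euclidean-distance quadratic mutual information (ED-QMI) between two
    1-D samples, estimated with Gaussian Parzen windows. Fixed bandwidth
    here; the paper uses adaptive kernel bandwidths."""
    def gram(v):
        d = v[:, None] - v[None, :]
        s2 = 2.0 * sigma ** 2        # variance of the kernel-kernel convolution
        return np.exp(-d ** 2 / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2)
    gx, gy = gram(np.asarray(x, float)), gram(np.asarray(y, float))
    v_joint = (gx * gy).mean()                            # joint-density term
    v_marginal = gx.mean() * gy.mean()                    # product-of-marginals term
    v_cross = (gx.mean(axis=1) * gy.mean(axis=1)).mean()  # cross term
    return float(v_joint + v_marginal - 2.0 * v_cross)

rng = np.random.default_rng(0)
audio = rng.normal(size=300)                    # stand-in "audio" feature
visual_dep = audio + 0.1 * rng.normal(size=300) # correlated "visual" feature
visual_ind = rng.normal(size=300)               # unrelated background feature
```

Dependent feature pairs yield a markedly larger ED-QMI than independent ones, which is the contrast the segmentation exploits to separate the speaker from the background.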

  3. Does value congruence between nurses and supervisors affect job satisfaction and turnover?

    PubMed

    Hunt, Deborah

    2014-07-01

    The purpose of this study was to examine how congruency between nurses and nurse managers on leadership support and the value of patient outcomes relates to nurses' job satisfaction and turnover intent. Turnover most often has a negative effect on an organization. Leadership support and patient outcomes have been identified as important factors, but congruency has not been studied in great detail. This quantitative non-experimental study included 92 registered nurses and 21 nurse managers in five non-magnet hospitals in the United States. Value congruence on leadership support was correlated with job satisfaction on the Satisfaction in Nursing Scale (SINS): Workload Barriers r = 0.327, Administrative Support r = 0.544, and Collegiality r = 0.920 (P < 0.05). Value congruence on leadership support (Leadership Practices Inventory, LPI) was negatively correlated with turnover intent (r = 0.317, P < 0.05). When all variables were combined, the correlation between the Value of Patient Outcomes (VOPOS) and the Anticipated Turnover Scale (ATS) was not significant (r = 0.099, P > 0.05). Value congruence on leadership support is related to job satisfaction and may be a factor in turnover intent. Nurse administrators can use these results to develop policies that address turnover, especially in the area of leadership support. © 2013 John Wiley & Sons Ltd.

  4. Read My Lips: Brain Dynamics Associated with Audiovisual Integration and Deviance Detection.

    PubMed

    Tse, Chun-Yu; Gratton, Gabriele; Garnsey, Susan M; Novak, Michael A; Fabiani, Monica

    2015-09-01

    Information from different modalities is initially processed in different brain areas, yet real-world perception often requires the integration of multisensory signals into a single percept. An example is the McGurk effect, in which people viewing a speaker whose lip movements do not match the utterance perceive the spoken sounds incorrectly, hearing them as more similar to those signaled by the visual rather than the auditory input. This indicates that audiovisual integration is important for generating the phoneme percept. Here we asked when and where the audiovisual integration process occurs, providing spatial and temporal boundaries for the processes generating phoneme perception. Specifically, we wanted to separate audiovisual integration from other processes, such as simple deviance detection. Building on previous work employing ERPs, we used an oddball paradigm in which task-irrelevant audiovisually deviant stimuli were embedded in strings of non-deviant stimuli. We also recorded the event-related optical signal, an imaging method combining spatial and temporal resolution, to investigate the time course and neuroanatomical substrate of audiovisual integration. We found that audiovisual deviants elicit a short duration response in the middle/superior temporal gyrus, whereas audiovisual integration elicits a more extended response involving also inferior frontal and occipital regions. Interactions between audiovisual integration and deviance detection processes were observed in the posterior/superior temporal gyrus. These data suggest that dynamic interactions between inferior frontal cortex and sensory regions play a significant role in multimodal integration.

  5. Audiovisual Script Writing.

    ERIC Educational Resources Information Center

    Parker, Norton S.

    In audiovisual writing the writer must first learn to think in terms of moving visual presentation. The writer must research his script, organize it, and adapt it to a limited running time. By use of a pleasant-sounding narrator and well-written narration, the visual and narrative can be successfully integrated. There are two types of script…

  6. Effect of attentional load on audiovisual speech perception: evidence from ERPs

    PubMed Central

    Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E.; Soto-Faraco, Salvador; Tiippana, Kaisa

    2014-01-01

    Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech. PMID:25076922

  8. Development of Sensitivity to Audiovisual Temporal Asynchrony during Midchildhood

    ERIC Educational Resources Information Center

    Kaganovich, Natalya

    2016-01-01

    Temporal proximity is one of the key factors determining whether events in different modalities are integrated into a unified percept. Sensitivity to audiovisual temporal asynchrony has been studied in adults in great detail. However, how such sensitivity matures during childhood is poorly understood. We examined perception of audiovisual temporal…

  9. Audio-guided audiovisual data segmentation, indexing, and retrieval

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Kuo, C.-C. Jay

    1998-12-01

    While current approaches for video segmentation and indexing are mostly focused on visual information, audio signals may actually play a primary role in video content parsing. In this paper, we present an approach for automatic segmentation, indexing, and retrieval of audiovisual data, based on audio content analysis. The accompanying audio signal of audiovisual data is first segmented and classified into basic types, i.e., speech, music, environmental sound, and silence. This coarse-level segmentation and indexing step is based upon morphological and statistical analysis of several short-term features of the audio signals. Then, environmental sounds are classified into finer classes, such as applause, explosions, bird sounds, etc. This fine-level classification and indexing step is based upon time- frequency analysis of audio signals and the use of the hidden Markov model as the classifier. On top of this archiving scheme, an audiovisual data retrieval system is proposed. Experimental results show that the proposed approach has an accuracy rate higher than 90 percent for the coarse-level classification, and higher than 85 percent for the fine-level classification. Examples of audiovisual data segmentation and retrieval are also provided.
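
The coarse speech/music/silence step can be illustrated with two classic short-term features. This sketch is not the paper's method: the energy and zero-crossing-rate thresholds and the two-way heuristic below are assumptions for illustration only.

```python
import numpy as np

def frame_features(frame):
    """Short-term energy and zero-crossing rate of one audio frame."""
    energy = float(np.mean(frame ** 2))
    signs = np.sign(frame)
    signs[signs == 0] = 1.0                     # treat exact zeros as positive
    zcr = float(np.mean(np.abs(np.diff(signs))) / 2.0)  # crossings per sample
    return energy, zcr

def coarse_label(frame, energy_floor=1e-6, zcr_split=0.1):
    """Toy coarse-level labeling: silence by energy, then a crude
    speech-vs-music split on zero-crossing rate (illustrative thresholds)."""
    energy, zcr = frame_features(frame)
    if energy < energy_floor:
        return "silence"
    return "speech" if zcr > zcr_split else "music"

sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 100 * t)                  # 100 Hz tone: low ZCR
noise = np.random.default_rng(0).normal(size=sr)    # noise-like: high ZCR
```

A real system (as in the paper) would add morphological smoothing across frames, finer classes for environmental sounds, and an HMM classifier on time-frequency features.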

  10. "Audio-visuel Integre" et Communication(s) ("Integrated Audiovisual" and Communication)

    ERIC Educational Resources Information Center

    Moirand, Sophie

    1974-01-01

    This article examines the usefulness of the audiovisual method in teaching communication competence, and calls for research in audiovisual methods as well as in communication theory for improvement in these areas. (Text is in French.) (AM)

  11. The Influence of Selective and Divided Attention on Audiovisual Integration in Children.

    PubMed

    Yang, Weiping; Ren, Yanna; Yang, Dan Ou; Yuan, Xue; Wu, Jinglong

    2016-01-24

    This article aims to investigate whether there is a difference in audiovisual integration in school-aged children (aged 6 to 13 years; mean age = 9.9 years) between the selective attention condition and the divided attention condition. We designed a visual and/or auditory detection task that included three blocks (divided attention, visual-selective attention, and auditory-selective attention). The results showed that the response to bimodal audiovisual stimuli was faster than to unimodal auditory or visual stimuli under both divided attention and auditory-selective attention conditions. However, in the visual-selective attention condition, no significant difference was found between the unimodal visual and bimodal audiovisual stimuli in response speed. Moreover, audiovisual behavioral facilitation effects were compared between divided attention and selective attention (auditory or visual attention). In doing so, we found that audiovisual behavioral facilitation differed significantly between divided attention and selective attention: audiovisual integration was stronger in the divided attention condition than in the selective attention condition in children. Our findings objectively support the notion that attention can modulate audiovisual integration in school-aged children. Our study might offer a new perspective for identifying children with conditions that are associated with sustained attention deficit, such as attention-deficit hyperactivity disorder. © The Author(s) 2016.

  12. Musical expertise is related to altered functional connectivity during audiovisual integration

    PubMed Central

    Paraskevopoulos, Evangelos; Kraneburg, Anja; Herholz, Sibylle Cornelia; Bamidis, Panagiotis D.; Pantev, Christo

    2015-01-01

    The present study investigated the cortical large-scale functional network underpinning audiovisual integration via magnetoencephalographic recordings. The reorganization of this network related to long-term musical training was investigated by comparing musicians to nonmusicians. Connectivity was calculated on the basis of the estimated mutual information of the sources’ activity, and the corresponding networks were statistically compared. Nonmusicians’ results indicated that the cortical network associated with audiovisual integration supports visuospatial processing and attentional shifting, whereas a sparser network, related to spatial awareness, supports the identification of audiovisual incongruences. In contrast, musicians’ results showed enhanced connectivity in regions related to the identification of auditory pattern violations. Hence, nonmusicians rely on the processing of visual cues for the integration of audiovisual information, whereas musicians rely mostly on the corresponding auditory information. The large-scale cortical network underpinning multisensory integration is reorganized due to expertise in a cognitive domain that largely involves audiovisual integration, indicating long-term training-related neuroplasticity. PMID:26371305

  13. Semantics, Pragmatics, and the Nature of Semantic Theories

    ERIC Educational Resources Information Center

    Spewak, David Charles, Jr.

    2013-01-01

    The primary concern of this dissertation is determining the distinction between semantics and pragmatics and how context sensitivity should be accommodated within a semantic theory. I approach the question over how to distinguish semantics from pragmatics from a new angle by investigating what the objects of a semantic theory are, namely…

  14. Leader-follower value congruence in social responsibility and ethical satisfaction: a polynomial regression analysis.

    PubMed

    Kang, Seung-Wan; Byun, Gukdo; Park, Hun-Joon

    2014-12-01

    This paper presents empirical research into the relationship between leader-follower value congruence in social responsibility and the level of ethical satisfaction for employees in the workplace. 163 dyads were analyzed, each consisting of a team leader and an employee working at a large manufacturing company in South Korea. Following current methodological recommendations for congruence research, polynomial regression and response surface modeling methodologies were used to determine the effects of value congruence. Results indicate that leader-follower value congruence in social responsibility was positively related to the ethical satisfaction of employees. Furthermore, employees' ethical satisfaction was stronger when aligned with a leader with high social responsibility. The theoretical and practical implications are discussed.
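
The polynomial regression approach referenced here (Edwards and Parry, 1993) can be sketched with synthetic data: fit Z = b0 + b1·X + b2·Y + b3·X² + b4·XY + b5·Y² and read off the response-surface slopes and curvatures along the congruence (X = Y) and incongruence (X = −Y) lines. Variable names and coefficients below are illustrative, not the study's.

```python
import numpy as np

rng = np.random.default_rng(1)
leader = rng.normal(size=200)      # X: leader's social-responsibility value
follower = rng.normal(size=200)    # Y: follower's social-responsibility value
b_true = np.array([1.0, 0.5, -0.3, -0.2, 0.4, -0.2])   # illustrative coefficients
design = np.column_stack([np.ones(200), leader, follower,
                          leader ** 2, leader * follower, follower ** 2])
satisfaction = design @ b_true     # noise-free outcome, for clarity

# Ordinary least-squares fit of the quadratic polynomial regression.
b, *_ = np.linalg.lstsq(design, satisfaction, rcond=None)
b0, b1, b2, b3, b4, b5 = b

# Response-surface quantities (Edwards & Parry, 1993):
congruence_slope = b1 + b2         # slope along the X = Y line
congruence_curve = b3 + b4 + b5    # curvature along X = Y
incongruence_slope = b1 - b2       # slope along the X = -Y line
incongruence_curve = b3 - b4 + b5  # curvature along X = -Y
```

With noise-free data the fit recovers the coefficients exactly; here the congruence line has slope 0.2 with zero curvature, while the incongruence line curves downward (curvature −0.8), the signature of a congruence effect.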

  15. 36 CFR 1237.18 - What are the environmental standards for audiovisual records storage?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... standards for audiovisual records storage? 1237.18 Section 1237.18 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.18 What are the environmental standards for audiovisual records storage? (a...

  16. Audiovisual Media and the Disabled. AV in Action 1.

    ERIC Educational Resources Information Center

    Nederlands Bibliotheek en Lektuur Centrum, The Hague (Netherlands).

    Designed to provide information on public library services to the handicapped, this pamphlet contains case studies from three different countries on various aspects of the provision of audiovisual services to the disabled. The contents include: (1) "The Value of Audiovisual Materials in a Children's Hospital in Sweden" (Lis Byberg); (2)…

  17. Audio-visual biofeedback for respiratory-gated radiotherapy: Impact of audio instruction and audio-visual biofeedback on respiratory-gated radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    George, Rohini; Department of Biomedical Engineering, Virginia Commonwealth University, Richmond, VA; Chung, Theodore D.

    2006-07-01

    Purpose: Respiratory gating is a commercially available technology for reducing the deleterious effects of motion during imaging and treatment. The efficacy of gating is dependent on the reproducibility within and between respiratory cycles during imaging and treatment. The aim of this study was to determine whether audio-visual biofeedback can improve respiratory reproducibility by decreasing residual motion and therefore increasing the accuracy of gated radiotherapy. Methods and Materials: A total of 331 respiratory traces were collected from 24 lung cancer patients. The protocol consisted of five breathing training sessions spaced about a week apart. Within each session the patients initially breathed without any instruction (free breathing), with audio instructions and with audio-visual biofeedback. Residual motion was quantified by the standard deviation of the respiratory signal within the gating window. Results: Audio-visual biofeedback significantly reduced residual motion compared with free breathing and audio instruction. Displacement-based gating has lower residual motion than phase-based gating. Little reduction in residual motion was found for duty cycles less than 30%; for duty cycles above 50% there was a sharp increase in residual motion. Conclusions: The efficiency and reproducibility of gating can be improved by: incorporating audio-visual biofeedback, using a 30-50% duty cycle, gating during exhalation, and using displacement-based gating.
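
Residual motion as defined above (the standard deviation of the respiratory signal inside the gating window) can be sketched for displacement-based gating on an idealized breathing trace; the signal and gating rule below are assumptions for illustration, not the study's data.

```python
import numpy as np

def residual_motion(trace, duty_cycle):
    """Standard deviation of the displacement signal inside a
    displacement-based gate that admits the lowest `duty_cycle` fraction
    of samples (i.e., gating around end-exhalation)."""
    threshold = np.quantile(trace, duty_cycle)
    gated = trace[trace <= threshold]
    return float(gated.std())

t = np.linspace(0.0, 60.0, 6000)            # one minute sampled at 100 Hz
breathing = np.sin(2 * np.pi * 0.25 * t)    # idealized 15 breaths/min trace
```

On this idealized trace, residual motion grows monotonically with duty cycle; real traces add cycle-to-cycle variability, which biofeedback aims to suppress.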

  18. Evolution of geodesic congruences in a gravitationally collapsing scalar field background

    NASA Astrophysics Data System (ADS)

    Shaikh, Rajibul; Kar, Sayan; DasGupta, Anirvan

    2014-12-01

    The evolution of timelike geodesic congruences in a spherically symmetric, nonstatic, inhomogeneous spacetime representing gravitational collapse of a massless scalar field is studied. We delineate how initial values of the expansion, rotation, and shear of a congruence, as well as the spacetime curvature, influence the global behavior and focusing properties of a family of trajectories. Under specific conditions, the expansion scalar is shown to exhibit a finite jump (from negative to positive value) before focusing eventually occurs. This nonmonotonic behavior of the expansion, observed in our numerical work, is successfully explained through an analysis of the equation for the expansion. Finally, we bring out the role of the metric parameters (related to nonstaticity and spatial inhomogeneity) in shaping the overall behavior of geodesic congruences.

  19. Neural Correlates of Temporal Complexity and Synchrony during Audiovisual Correspondence Detection.

    PubMed

    Baumann, Oliver; Vromen, Joyce M G; Cheung, Allen; McFadyen, Jessica; Ren, Yudan; Guo, Christine C

    2018-01-01

    We often perceive real-life objects as multisensory cues through space and time. A key challenge for audiovisual integration is to match neural signals that not only originate from different sensory modalities but also that typically reach the observer at slightly different times. In humans, complex, unpredictable audiovisual streams lead to higher levels of perceptual coherence than predictable, rhythmic streams. In addition, perceptual coherence for complex signals seems less affected by increased asynchrony between visual and auditory modalities than for simple signals. Here, we used functional magnetic resonance imaging to determine the human neural correlates of audiovisual signals with different levels of temporal complexity and synchrony. Our study demonstrated that greater perceptual asynchrony and lower signal complexity impaired performance in an audiovisual coherence-matching task. Differences in asynchrony and complexity were also underpinned by a partially different set of brain regions. In particular, our results suggest that, while regions in the dorsolateral prefrontal cortex (DLPFC) were modulated by differences in memory load due to stimulus asynchrony, areas traditionally thought to be involved in speech production and recognition, such as the inferior frontal and superior temporal cortex, were modulated by the temporal complexity of the audiovisual signals. Our results, therefore, indicate specific processing roles for different subregions of the fronto-temporal cortex during audiovisual coherence detection.

  20. Determinants of patient-family caregiver congruence on preferred place of death in Taiwan.

    PubMed

    Tang, Siew Tzuh; Chen, Cheryl Chia-Hui; Tang, Woung-Ru; Liu, Tsang-Wu

    2010-08-01

    Patient-family caregiver congruence on preferred place of death not only increases the likelihood of dying at home but also contributes significantly to terminally ill cancer patients' quality of life. To examine the determinants of patient-family caregiver congruence on the preferred place of death in Taiwan. Patient-family caregiver dyads (n=1,108) were surveyed on preferences and needs for end-of-life (EOL) care. Determinants of congruence on preferences were identified by multivariate logistic regression. Patient-caregiver dyads achieved 78.1% agreement on the preferred place of death. The kappa coefficient of congruence was 0.55 (95% confidence interval [CI]=0.50, 0.60). The extent of patient-family caregiver congruence on preferred place of death increased with the patient's higher functional dependence (adjusted odds ratio [AOR] and 95% CI=1.04 [1.02, 1.05]), higher patient-rated importance for dying at preferred place of death (AOR [95% CI]=1.60 [1.43, 1.79]), and having a spousal caregiver (AOR [95% CI]=1.62 [1.14, 2.31]). Other determinants of patient-family caregiver congruence included patient age (AOR [95% CI]=1.01 [1.00, 1.03]), patient-family concordance on preferred EOL care options (AOR=1.68-1.73), patient knowledge of prognosis (AOR [95% CI]=0.68 [0.48, 0.97]), and impact of caregiving on the family caregiver's life (AOR [95% CI]=0.98 [0.96, 0.99]). Increasing patient-family congruence on preferred place of death not only requires knowledge of the patient's prognosis and advance planning by both parties but also depends on family caregivers endorsing patient preferences for EOL care options and ensuring that supporting patients dying at home does not create an intolerable burden for family caregivers. Copyright (c) 2010 U.S. Cancer Pain Relief Committee. Published by Elsevier Inc. All rights reserved.
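
The reported kappa coefficient of congruence can be reproduced mechanically from an agreement table. This is a generic Cohen's kappa sketch with hypothetical counts, not the study's data.

```python
def cohens_kappa(table):
    """Cohen's kappa for a square agreement table:
    table[i][j] = number of dyads where the patient chose option i
    and the family caregiver chose option j."""
    n = float(sum(sum(row) for row in table))
    k = len(table)
    p_observed = sum(table[i][i] for i in range(k)) / n
    row = [sum(table[i][j] for j in range(k)) / n for i in range(k)]
    col = [sum(table[i][j] for i in range(k)) / n for j in range(k)]
    p_expected = sum(r * c for r, c in zip(row, col))   # chance agreement
    return (p_observed - p_expected) / (1.0 - p_expected)

# Hypothetical home-vs-institution preference counts (not the study's data):
table = [[40, 10],
         [10, 40]]
```

With these counts, observed agreement is 0.80 against 0.50 expected by chance, giving kappa = 0.6, close to the 0.55 reported above.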

  1. Does dynamic information about the speaker's face contribute to semantic speech processing? ERP evidence.

    PubMed

    Hernández-Gutiérrez, David; Abdel Rahman, Rasha; Martín-Loeches, Manuel; Muñoz, Francisco; Schacht, Annekathrin; Sommer, Werner

    2018-07-01

    Face-to-face interactions characterize communication in social contexts. These situations are typically multimodal, requiring the integration of linguistic auditory input with facial information from the speaker. In particular, eye gaze and visual speech provide the listener with social and linguistic information, respectively. Despite the importance of this context for an ecological study of language, research on audiovisual integration has mainly focused on the phonological level, leaving aside effects on semantic comprehension. Here we used event-related potentials (ERPs) to investigate the influence of facial dynamic information on semantic processing of connected speech. Participants were presented with either a video or a still picture of the speaker, concomitant to auditory sentences. Along three experiments, we manipulated the presence or absence of the speaker's dynamic facial features (mouth and eyes) and compared the amplitudes of the semantic N400 elicited by unexpected words. Contrary to our predictions, the N400 was not modulated by dynamic facial information; therefore, semantic processing seems to be unaffected by the speaker's gaze and visual speech. Nevertheless, during the processing of expected words, dynamic faces elicited a long-lasting late posterior positivity compared to the static condition. This effect was significantly reduced when the mouth of the speaker was covered. Our findings may indicate increased attentional processing in richer communicative contexts. The present findings also demonstrate that in natural communicative face-to-face encounters, perceiving the face of a speaker in motion provides supplementary information that is taken into account by the listener, especially when auditory comprehension is non-demanding. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Fast parallel DNA-based algorithms for molecular computation: quadratic congruence and factoring integers.

    PubMed

    Chang, Weng-Long

    2012-03-01

    Assume that n is a positive integer. If there is an integer M such that M² ≡ C (mod n), i.e., the congruence has a solution, then C is said to be a quadratic congruence (mod n). If the congruence does not have a solution, then C is said to be a quadratic noncongruence (mod n). The task of solving this problem is central to many important applications, the most obvious being cryptography. In this article, we describe a DNA-based algorithm for solving quadratic congruences and factoring integers. In addition to this novel contribution, we also show the utility of our encoding scheme and of the algorithm's submodules. We demonstrate how a variety of arithmetic, shift, and comparison operations, namely bitwise and full addition, subtraction, left shift, and comparison, can be performed using strands of DNA.
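
The definition can be made concrete with a classical brute-force check; the paper's contribution is the DNA-based algorithm, which is not reproduced here.

```python
def is_quadratic_congruence(c, n):
    """True if M*M ≡ c (mod n) has an integer solution, found by brute
    force over the residues 0..n-1 (classical check, for illustration)."""
    c %= n
    return any((m * m) % n == c for m in range(n))

def quadratic_residues(n):
    """All values c for which M*M ≡ c (mod n) is solvable."""
    return sorted({(m * m) % n for m in range(n)})
```

For example, the quadratic residues modulo 7 are {0, 1, 2, 4}, so 2 is a quadratic congruence (mod 7) while 3 is a quadratic noncongruence. The difficulty of related problems for large composite n underlies the cryptographic applications the abstract mentions.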

  3. Action-outcome learning and prediction shape the window of simultaneity of audiovisual outcomes.

    PubMed

    Desantis, Andrea; Haggard, Patrick

    2016-08-01

    To form a coherent representation of the objects around us, the brain must group the different sensory features composing these objects. Here, we investigated whether actions contribute in this grouping process. In particular, we assessed whether action-outcome learning and prediction contribute to audiovisual temporal binding. Participants were presented with two audiovisual pairs: one pair was triggered by a left action, and the other by a right action. In a later test phase, the audio and visual components of these pairs were presented at different onset times. Participants judged whether they were simultaneous or not. To assess the role of action-outcome prediction on audiovisual simultaneity, each action triggered either the same audiovisual pair as in the learning phase ('predicted' pair), or the pair that had previously been associated with the other action ('unpredicted' pair). We found that the time window within which auditory and visual events appeared simultaneous increased for predicted compared to unpredicted pairs. However, no change in audiovisual simultaneity was observed when audiovisual pairs followed visual cues, rather than voluntary actions. This suggests that only action-outcome learning promotes temporal grouping of audio and visual effects. In a second experiment we observed that changes in audiovisual simultaneity do not only depend on our ability to predict what outcomes our actions generate, but also on learning the delay between the action and the multisensory outcome. When participants learned that the delay between action and audiovisual pair was variable, the window of audiovisual simultaneity for predicted pairs increased, relative to a fixed action-outcome pair delay. This suggests that participants learn action-based predictions of audiovisual outcome, and adapt their temporal perception of outcome events based on such predictions. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  4. The Motive--Strategy Congruence Model Revisited.

    ERIC Educational Resources Information Center

    Watkins, David; Hattie, John

    1992-01-01

    Research with 1,266 Australian secondary school students supports 2 propositions critical to the motive-strategy congruence model of J. B. Biggs (1985). Students tend to use learning strategies congruent with motivation for learning, and congruent motive-strategy combinations are associated with higher average school grades. (SLD)

  5. An Investigation of Person-Environment Congruence

    ERIC Educational Resources Information Center

    McMurray, Marissa Johnstun

    2013-01-01

    This study tested a hypothesis derived from Holland's (1997) theory of personality and environment that congruence between person and environment would influence satisfaction with doctoral training environments and career certainty. Doctoral students' (N = 292) vocational interests were measured using questions from the Interest Item Pool, and…

  6. Disentangling Genuine Semantic Stroop Effects in Reading from Contingency Effects: On the Need for Two Neutral Baselines.

    PubMed

    Lorentz, Eric; McKibben, Tessa; Ekstrand, Chelsea; Gould, Layla; Anton, Kathryn; Borowsky, Ron

    2016-01-01

    The automaticity of reading is often explored through the Stroop effect, whereby color-naming is affected by color words. Color associates (e.g., "sky") also produce a Stroop effect, suggesting that automatic reading occurs through to the level of semantics, even when reading sub-lexically (e.g., the pseudohomophone "skigh"). However, several previous experiments have confounded congruency with contingency learning, whereby faster responding occurs for more frequent stimuli. Contingency effects reflect a higher frequency-pairing of the word with a font color in the congruent condition than in the incongruent condition due to the limited set of congruent pairings. To determine the extent to which the Stroop effect can be attributed to contingency learning of font colors paired with lexical (word-level) and sub-lexical (phonetically decoded) letter strings, as well as assess facilitation and interference relative to contingency effects, we developed two neutral baselines: each one matched on pair-frequency for congruent and incongruent color words. In Experiments 1 and 3, color words (e.g., "blue") and their pseudohomophones (e.g., "bloo") produced significant facilitation and interference relative to neutral baselines, regardless of whether the onset (i.e., first phoneme) was matched to the color words. Color associates (e.g., "ocean") and their pseudohomophones (e.g., "oshin"), however, showed no significant facilitation or interference relative to onset matched neutral baselines (Experiment 2). When onsets were unmatched, color associate words produced consistent facilitation on RT (e.g., "ocean" vs. "dozen"), but pseudohomophones (e.g., "oshin" vs. "duhzen") failed to produce facilitation or interference. Our findings suggest that the Stroop effects for color and associated stimuli are sensitive to the type of neutral baseline used, as well as stimulus type (word vs. pseudohomophone). In general, contingency learning plays a large role when repeating congruent

  7. Boosting pitch encoding with audiovisual interactions in congenital amusia.

    PubMed

    Albouy, Philippe; Lévêque, Yohana; Hyde, Krista L; Bouchet, Patrick; Tillmann, Barbara; Caclin, Anne

    2015-01-01

The combination of information across senses can enhance perception, as revealed for example by decreased reaction times or improved stimulus detection. Interestingly, these facilitatory effects have been shown to be maximal when responses to unisensory modalities are weak. The present study investigated whether audiovisual facilitation can be observed in congenital amusia, a music-specific disorder primarily ascribed to impairments of pitch processing. Amusic individuals and their matched controls performed two tasks. In Task 1, they were required to detect auditory, visual, or audiovisual stimuli as rapidly as possible. In Task 2, they were required to detect as accurately and as rapidly as possible a pitch change within an otherwise monotonic 5-tone sequence that was presented either only auditorily (A condition), or simultaneously with a temporally congruent, but otherwise uninformative visual stimulus (AV condition). Results of Task 1 showed that amusics exhibit typical auditory and visual detection, and typical audiovisual integration capacities: both amusics and controls exhibited shorter response times for audiovisual stimuli than for either auditory or visual stimuli alone. Results of Task 2 revealed that both groups benefited from simultaneous uninformative visual stimuli to detect pitch changes: accuracy was higher and response times shorter in the AV condition than in the A condition. The audiovisual improvements of response times were observed for different pitch interval sizes depending on the group. These results suggest that both typical listeners and amusic individuals can benefit from multisensory integration to improve their pitch processing abilities, and that this benefit varies as a function of task difficulty. These findings constitute a first step toward exploiting multisensory paradigms to reduce pitch-related deficits in congenital amusia, notably by suggesting that audiovisual paradigms are effective in an appropriate…

  8. Trigger Videos on the Web: Impact of Audiovisual Design

    ERIC Educational Resources Information Center

    Verleur, Ria; Heuvelman, Ard; Verhagen, Plon W.

    2011-01-01

    Audiovisual design might impact emotional responses, as studies from the 1970s and 1980s on movie and television content show. Given today's abundant presence of web-based videos, this study investigates whether audiovisual design will impact web-video content in a similar way. The study is motivated by the potential influence of video-evoked…

  9. Utilizing New Audiovisual Resources

    ERIC Educational Resources Information Center

    Miller, Glen

    1975-01-01

    The University of Arizona's Agriculture Department has found that video cassette systems and 8 mm films are excellent audiovisual aids to classroom instruction at the high school level in small gasoline engines. Each system is capable of improving the instructional process for motor skill development. (MW)

  10. Enhancing Congruence between Implicit Motives and Explicit Goal Commitments: Results of a Randomized Controlled Trial.

    PubMed

    Roch, Ramona M; Rösch, Andreas G; Schultheiss, Oliver C

    2017-01-01

Objective: Theory and research suggest that the pursuit of personal goals that do not fit a person's affect-based implicit motives results in impaired emotional well-being, including increased symptoms of depression. The aim of this study was to evaluate an intervention designed to enhance motive-goal congruence and study its impact on well-being. Method: Seventy-four German students (mean age = 22.91, SD = 3.68; 64.9% female) without current psychopathology, randomly allocated to three groups: motivational feedback (FB; n = 25; participants learned about the fit between their implicit motives and explicit goals), FB + congruence-enhancement training (CET; n = 22; participants also engaged in exercises to increase the fit between their implicit motives and goals), and a no-intervention control group (n = 27), were administered measures of implicit motives, personal goal commitments, happiness, depressive symptoms, and life satisfaction 3 weeks before (T1) and 6 weeks after (T2) treatment. Results: On two types of congruence measures derived from motive and goal assessments, treated participants showed increases in agentic (power and achievement) congruence, with improvements being most consistent in the FB+CET group. Treated participants also showed a trend-level depressive symptom reduction, but no changes on other well-being measures. Although increases in overall and agentic motivational congruence were associated with increases in affective well-being, treatment-based reduction of depressive symptoms was not mediated by treatment-based agentic congruence changes. Conclusion: These findings document that motivational congruence can be effectively enhanced, that changes in motivational congruence are associated with changes in affective well-being, and they suggest that individuals' implicit motives should be considered when personal goals are discussed in the therapeutic process.

  11. Enhancing Congruence between Implicit Motives and Explicit Goal Commitments: Results of a Randomized Controlled Trial

    PubMed Central

    Roch, Ramona M.; Rösch, Andreas G.; Schultheiss, Oliver C.

    2017-01-01

    Objective: Theory and research suggest that the pursuit of personal goals that do not fit a person's affect-based implicit motives results in impaired emotional well-being, including increased symptoms of depression. The aim of this study was to evaluate an intervention designed to enhance motive-goal congruence and study its impact on well-being. Method: Seventy-four German students (mean age = 22.91, SD = 3.68; 64.9% female) without current psychopathology, randomly allocated to three groups: motivational feedback (FB; n = 25; participants learned about the fit between their implicit motives and explicit goals), FB + congruence-enhancement training (CET; n = 22; participants also engaged in exercises to increase the fit between their implicit motives and goals), and a no-intervention control group (n = 27), were administered measures of implicit motives, personal goal commitments, happiness, depressive symptoms, and life satisfaction 3 weeks before (T1) and 6 weeks after (T2) treatment. Results: On two types of congruence measures derived from motive and goal assessments, treated participants showed increases in agentic (power and achievement) congruence, with improvements being most consistent in the FB+CET group. Treated participants also showed a trend-level depressive symptom reduction, but no changes on other well-being measures. Although increases in overall and agentic motivational congruence were associated with increases in affective well-being, treatment-based reduction of depressive symptoms was not mediated by treatment-based agentic congruence changes. Conclusion: These findings document that motivational congruence can be effectively enhanced, that changes in motivational congruence are associated with changes in affective well-being, and they suggest that individuals' implicit motives should be considered when personal goals are discussed in the therapeutic process. PMID:28955267

  12. Congruence between Students' and Teachers' Goals: Implications for Social and Academic Motivation

    ERIC Educational Resources Information Center

    Spera, Christopher; Wentzel, Kathryn R.

    2003-01-01

This study examined student-teacher goal congruence and its relation to social and academic motivation. Based on a sample of 97 ninth-graders, high levels of goal congruence for each of the four goals measured (prosocial, responsibility, learning, performance) were positively related to student interest in class and perceived social support from…

  13. Neural correlates of audiovisual integration in music reading.

    PubMed

    Nichols, Emily S; Grahn, Jessica A

    2016-10-01

Integration of auditory and visual information is important to both language and music. In the linguistic domain, audiovisual integration alters event-related potentials (ERPs) at early stages of processing (the mismatch negativity (MMN)) as well as later stages (P300; Andres et al., 2011). However, the role of experience in audiovisual integration is unclear, as reading experience is generally confounded with developmental stage. Here we tested whether audiovisual integration of music appears similar to reading, and how musical experience altered integration. We compared brain responses in musicians and non-musicians on an auditory pitch-interval oddball task that evoked the MMN and P300, while manipulating whether visual pitch-interval information was congruent or incongruent with the auditory information. We predicted that the MMN and P300 would be largest when both auditory and visual stimuli deviated, because audiovisual integration would increase the neural response when the deviants were congruent. The results indicated that scalp topography differed between musicians and non-musicians for both the MMN and P300 response to deviants. Interestingly, musicians' musical training modulated integration of congruent deviants at both early and late stages of processing. We propose that early in the processing stream, visual information may guide interpretation of auditory information, leading to a larger MMN when auditory and visual information mismatch. At later attentional stages, integration of the auditory and visual stimuli leads to a larger P300 amplitude. Thus, experience with musical visual notation shapes the way the brain integrates abstract sound-symbol pairings, suggesting that musicians can indeed inform us about the role of experience in audiovisual integration. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  14. Neuromagnetic brain activities associated with perceptual categorization and sound-content incongruency: a comparison between monosyllabic words and pitch names

    PubMed Central

    Tsai, Chen-Gia; Chen, Chien-Chung; Wen, Ya-Chien; Chou, Tai-Li

    2015-01-01

In human cultures, the perceptual categorization of musical pitches relies on pitch-naming systems. A sung pitch name concurrently holds the information of fundamental frequency and pitch name. These two aspects may be either congruent or incongruent with regard to pitch categorization. The present study aimed to compare the neuromagnetic responses to musical and verbal stimuli in congruency judgments: for example, the pitch C4 sung with the pitch name do in a C-major context (the pitch-semantic task), or a word whose meaning matches the speaker’s identity (the voice-semantic task). Both the behavioral data and neuromagnetic data showed that congruency detection of the speaker’s identity and word meaning was slower than that of the pitch and pitch name. Congruency effects of musical stimuli revealed that pitch categorization and semantic processing of pitch information were associated with P2m and N400m, respectively. For verbal stimuli, P2m and N400m did not show any congruency effect. In both the pitch-semantic task and the voice-semantic task, we found that incongruent stimuli evoked stronger slow waves with a latency of 500–600 ms than congruent stimuli. These findings shed new light on the neural mechanisms underlying pitch-naming processes. PMID:26347638

  15. Bibliographic control of audiovisuals: analysis of a cataloging project using OCLC.

    PubMed

    Curtis, J A; Davison, F M

    1985-04-01

    The staff of the Quillen-Dishner College of Medicine Library cataloged 702 audiovisual titles between July 1, 1982, and June 30, 1983, using the OCLC database. This paper discusses the library's audiovisual collection and describes the method and scope of a study conducted during this project, the cataloging standards and conventions adopted, the assignment and use of NLM classification, the provision of summaries for programs, and the amount of staff time expended in cataloging typical items. An analysis of the use of OCLC for this project resulted in the following findings: the rate of successful searches for audiovisual copy was 82.4%; the error rate for records used was 41.9%; modifications were required in every record used; the Library of Congress and seven member institutions provided 62.8% of the records used. It was concluded that the effort to establish bibliographic control of audiovisuals is not widespread and that expanded and improved audiovisual cataloging by the Library of Congress and the National Library of Medicine would substantially contribute to that goal.

  16. Bibliographic control of audiovisuals: analysis of a cataloging project using OCLC.

    PubMed Central

    Curtis, J A; Davison, F M

    1985-01-01

    The staff of the Quillen-Dishner College of Medicine Library cataloged 702 audiovisual titles between July 1, 1982, and June 30, 1983, using the OCLC database. This paper discusses the library's audiovisual collection and describes the method and scope of a study conducted during this project, the cataloging standards and conventions adopted, the assignment and use of NLM classification, the provision of summaries for programs, and the amount of staff time expended in cataloging typical items. An analysis of the use of OCLC for this project resulted in the following findings: the rate of successful searches for audiovisual copy was 82.4%; the error rate for records used was 41.9%; modifications were required in every record used; the Library of Congress and seven member institutions provided 62.8% of the records used. It was concluded that the effort to establish bibliographic control of audiovisuals is not widespread and that expanded and improved audiovisual cataloging by the Library of Congress and the National Library of Medicine would substantially contribute to that goal. PMID:2581645

  17. Atypical rapid audio-visual temporal recalibration in autism spectrum disorders.

    PubMed

    Noel, Jean-Paul; De Niear, Matthew A; Stevenson, Ryan; Alais, David; Wallace, Mark T

    2017-01-01

Changes in sensory and multisensory function are increasingly recognized as a common phenotypic characteristic of Autism Spectrum Disorders (ASD). Furthermore, much recent evidence suggests that sensory disturbances likely play an important role in contributing to social communication weaknesses, one of the core diagnostic features of ASD. An established sensory disturbance observed in ASD is reduced audiovisual temporal acuity. In the current study, we substantially extend these explorations of multisensory temporal function within the framework that an inability to rapidly recalibrate to changes in audiovisual temporal relations may play an important and under-recognized role in ASD. In the paradigm, we present ASD and typically developing (TD) children and adolescents with asynchronous audiovisual stimuli of varying levels of complexity and ask them to perform a simultaneity judgment (SJ). In the critical analysis, we test audiovisual temporal processing on trial t as a function of trial t - 1. The results demonstrate that individuals with ASD fail to rapidly recalibrate to audiovisual asynchronies in an equivalent manner to their TD counterparts for simple and non-linguistic stimuli (i.e., flashes and beeps, hand-held tools), but exhibit comparable rapid recalibration for speech stimuli. These results are discussed in terms of prior work showing a speech-specific deficit in audiovisual temporal function in ASD, and in light of current theories of autism focusing on sensory noise and stability of perceptual representations. Autism Res 2017, 10: 121-129. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.

  18. Selected Mental Health Audiovisuals.

    ERIC Educational Resources Information Center

    National Inst. of Mental Health (DHEW), Rockville, MD.

Presented are approximately 2,300 abstracts on audio-visual materials--films, filmstrips, audiotapes, and videotapes--related to mental health. Each citation includes material title; name, address, and phone number of film distributor; rental and purchase prices; technical information; and a description of the contents. Abstracts are listed in…

  19. Perception of Intersensory Synchrony in Audiovisual Speech: Not that Special

    ERIC Educational Resources Information Center

    Vroomen, Jean; Stekelenburg, Jeroen J.

    2011-01-01

    Perception of intersensory temporal order is particularly difficult for (continuous) audiovisual speech, as perceivers may find it difficult to notice substantial timing differences between speech sounds and lip movements. Here we tested whether this occurs because audiovisual speech is strongly paired ("unity assumption"). Participants made…

  20. Principles of Managing Audiovisual Materials and Equipment. Second Revised Edition.

    ERIC Educational Resources Information Center

    California Univ., Los Angeles. Biomedical Library.

    This manual offers information on a wide variety of health-related audiovisual materials (AVs) in many formats: video, motion picture, slide, filmstrip, audiocassette, transparencies, microfilm, and computer assisted instruction. Intended for individuals who are just learning about audiovisual materials and equipment management, the manual covers…

  1. Neural Correlates of Temporal Complexity and Synchrony during Audiovisual Correspondence Detection

    PubMed Central

    Ren, Yudan

    2018-01-01

We often perceive real-life objects as multisensory cues through space and time. A key challenge for audiovisual integration is to match neural signals that not only originate from different sensory modalities but also that typically reach the observer at slightly different times. In humans, complex, unpredictable audiovisual streams lead to higher levels of perceptual coherence than predictable, rhythmic streams. In addition, perceptual coherence for complex signals seems less affected by increased asynchrony between visual and auditory modalities than for simple signals. Here, we used functional magnetic resonance imaging to determine the human neural correlates of audiovisual signals with different levels of temporal complexity and synchrony. Our study demonstrated that greater perceptual asynchrony and lower signal complexity impaired performance in an audiovisual coherence-matching task. Differences in asynchrony and complexity were also underpinned by a partially different set of brain regions. In particular, our results suggest that, while regions in the dorsolateral prefrontal cortex (DLPFC) were modulated by differences in memory load due to stimulus asynchrony, areas traditionally thought to be involved in speech production and recognition, such as the inferior frontal and superior temporal cortex, were modulated by the temporal complexity of the audiovisual signals. Our results, therefore, indicate specific processing roles for different subregions of the fronto-temporal cortex during audiovisual coherence detection. PMID:29354682

  2. The production of audiovisual teaching tools in minimally invasive surgery.

    PubMed

    Tolerton, Sarah K; Hugh, Thomas J; Cosman, Peter H

    2012-01-01

Audiovisual learning resources have become valuable adjuncts to formal teaching in surgical training. This report discusses the process and challenges of preparing an audiovisual teaching tool for laparoscopic cholecystectomy. The relative value in surgical education and training, for both the creator and the viewer, is addressed. This audiovisual teaching resource was prepared as part of the Master of Surgery program at the University of Sydney, Australia. The different methods of video production used to create operative teaching tools are discussed. Collating and editing material for an audiovisual teaching resource can be a time-consuming and technically challenging process. However, quality learning resources can now be produced even with limited prior video editing experience. With minimal cost and suitable guidance to ensure clinically relevant content, most surgeons should be able to produce short, high-quality education videos of both open and minimally invasive surgery. Despite the challenges faced during production of audiovisual teaching tools, these resources are now relatively easy to produce using readily available software. These resources are particularly attractive to surgical trainees when real time operative footage is used. They serve as valuable adjuncts to formal teaching, particularly in the setting of minimally invasive surgery. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  3. Spatial Frequency Requirements and Gaze Strategy in Visual-Only and Audiovisual Speech Perception

    ERIC Educational Resources Information Center

Wilson, Amanda H.; Alsius, Agnès; Paré, Martin; Munhall, Kevin G.

    2016-01-01

    Purpose: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method: We presented vowel-consonant-vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent…

  4. Beta-Band Functional Connectivity Influences Audiovisual Integration in Older Age: An EEG Study

    PubMed Central

    Wang, Luyao; Wang, Wenhui; Yan, Tianyi; Song, Jiayong; Yang, Weiping; Wang, Bin; Go, Ritsu; Huang, Qiang; Wu, Jinglong

    2017-01-01

Audiovisual integration occurs frequently and has been shown to exhibit age-related differences via behavior experiments or time-frequency analyses. In the present study, we examined whether functional connectivity influences audiovisual integration during normal aging. Visual, auditory, and audiovisual stimuli were randomly presented peripherally; during this time, participants were asked to respond immediately to the target stimulus. Electroencephalography recordings captured visual, auditory, and audiovisual processing in 12 old (60–78 years) and 12 young (22–28 years) male adults. For non-target stimuli, we focused on alpha (8–13 Hz), beta (13–30 Hz), and gamma (30–50 Hz) bands. We applied the Phase Lag Index to study the dynamics of functional connectivity. Then, the network topology parameters, which included the clustering coefficient, path length, small-worldness, global efficiency, local efficiency and degree, were calculated for each condition. For the target stimulus, a race model was used to analyze the response time. Then, a Pearson correlation was used to test the relationship between each network topology parameter and response time. The results showed that old adults activated stronger connections during audiovisual processing in the beta band. The relationship between network topology parameters and the performance of audiovisual integration was detected only in old adults. Thus, we concluded that old adults who have a higher load during audiovisual integration need more cognitive resources. Furthermore, increased beta band functional connectivity influences the performance of audiovisual integration during normal aging. PMID:28824411

  5. Beta-Band Functional Connectivity Influences Audiovisual Integration in Older Age: An EEG Study.

    PubMed

    Wang, Luyao; Wang, Wenhui; Yan, Tianyi; Song, Jiayong; Yang, Weiping; Wang, Bin; Go, Ritsu; Huang, Qiang; Wu, Jinglong

    2017-01-01

Audiovisual integration occurs frequently and has been shown to exhibit age-related differences via behavior experiments or time-frequency analyses. In the present study, we examined whether functional connectivity influences audiovisual integration during normal aging. Visual, auditory, and audiovisual stimuli were randomly presented peripherally; during this time, participants were asked to respond immediately to the target stimulus. Electroencephalography recordings captured visual, auditory, and audiovisual processing in 12 old (60-78 years) and 12 young (22-28 years) male adults. For non-target stimuli, we focused on alpha (8-13 Hz), beta (13-30 Hz), and gamma (30-50 Hz) bands. We applied the Phase Lag Index to study the dynamics of functional connectivity. Then, the network topology parameters, which included the clustering coefficient, path length, small-worldness, global efficiency, local efficiency and degree, were calculated for each condition. For the target stimulus, a race model was used to analyze the response time. Then, a Pearson correlation was used to test the relationship between each network topology parameter and response time. The results showed that old adults activated stronger connections during audiovisual processing in the beta band. The relationship between network topology parameters and the performance of audiovisual integration was detected only in old adults. Thus, we concluded that old adults who have a higher load during audiovisual integration need more cognitive resources. Furthermore, increased beta band functional connectivity influences the performance of audiovisual integration during normal aging.
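The Phase Lag Index (PLI) named in the record above is a standard connectivity measure: for each channel pair it takes the absolute mean sign of the instantaneous phase difference, so it is insensitive to zero-lag (volume-conduction) coupling. A minimal sketch of that computation, with illustrative signals and function names of my own (not the study's code):

```python
import numpy as np
from scipy.signal import hilbert

def phase_lag_index(data):
    """data: (n_channels, n_samples) band-limited signals.
    Returns a symmetric (n_channels, n_channels) PLI matrix with values in [0, 1]."""
    # Instantaneous phase of each channel via the analytic (Hilbert) signal.
    phases = np.angle(hilbert(data, axis=1))
    n = data.shape[0]
    pli = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dphi = phases[i] - phases[j]
            # Consistent non-zero phase lead/lag -> PLI near 1;
            # symmetric or zero-lag phase differences -> PLI near 0.
            pli[i, j] = pli[j, i] = abs(np.mean(np.sign(np.sin(dphi))))
    return pli

# Two 10 Hz sinusoids with a constant 90-degree lag should yield a PLI near 1.
t = np.linspace(0, 2, 1000, endpoint=False)
sig = np.vstack([np.sin(2 * np.pi * 10 * t),
                 np.sin(2 * np.pi * 10 * t - np.pi / 2)])
print(phase_lag_index(sig)[0, 1])  # close to 1.0
```

In a pipeline like the one described, the resulting PLI matrix would be thresholded into a graph from which the clustering coefficient, path length, and efficiency measures are then derived.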

  6. From "Piracy" to Payment: Audio-Visual Copyright and Teaching Practice.

    ERIC Educational Resources Information Center

    Anderson, Peter

    1993-01-01

    The changing circumstances in Australia governing the use of broadcast television and radio material in education are examined, from the uncertainty of the early 1980s to current management of copyrighted audiovisual material under the statutory licensing agreement between universities and an audiovisual copyright agency. (MSE)

  7. Skill dependent audiovisual integration in the fusiform induces repetition suppression.

    PubMed

    McNorgan, Chris; Booth, James R

    2015-02-01

Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience-driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Skill Dependent Audiovisual Integration in the Fusiform Induces Repetition Suppression

    PubMed Central

    McNorgan, Chris; Booth, James R.

    2015-01-01

Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience-driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. PMID:25585276

  9. 36 CFR 1237.20 - What are special considerations in the maintenance of audiovisual records?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.20 What are special considerations in the maintenance of audiovisual... practices. (b) Protect audiovisual records, including those recorded on digital media or magnetic sound or...

  10. 36 CFR 1237.20 - What are special considerations in the maintenance of audiovisual records?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.20 What are special considerations in the maintenance of audiovisual... practices. (b) Protect audiovisual records, including those recorded on digital media or magnetic sound or...

  11. 36 CFR 1237.10 - How must agencies manage their audiovisual, cartographic, and related records?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... their audiovisual, cartographic, and related records? 1237.10 Section 1237.10 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.10 How must agencies manage their audiovisual, cartographic, and related...

  12. A teaching bank of audiovisual materials for family practice.

    PubMed

    Geyman, J P; Brown, T C

    1975-10-01

    Although increasing emphasis has been placed in recent years on the production and use of audiovisual materials in medical education, little work has yet been done on the identification and application of these materials in family practice teaching programs. This paper describes the content, uses, limitations, and initial experience of a Teaching Bank developed to support family practice teaching in varied settings. Video cassette and tape-slide units are most useful; audio cassettes alone are less likely to be selected. The evaluation of content, quality, and effectiveness of audiovisual media poses a particular problem. Although audiovisual materials can enhance learning based on different individual learning needs and styles, they cannot stand alone and usually must be supplemented by other teaching methods.

  13. Library Educators' Awareness and Evaluation of National Audiovisual Center Materials.

    ERIC Educational Resources Information Center

    Palmer, Joseph W.

    1980-01-01

    Describes a survey of 18 library schools conducted to determine if faculty are familiar with audiovisual materials available from the National Audiovisual Center, and how these materials are rated in quality. Results indicate that there is a need for more descriptive and evaluative information to reach library educators. (BK)

  14. Audiovisual Processing in Children with and without Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Mongillo, Elizabeth A.; Irwin, Julia R.; Whalen, D. H.; Klaiman, Cheryl; Carter, Alice S.; Schultz, Robert T.

    2008-01-01

    Fifteen children with autism spectrum disorders (ASD) and twenty-one children without ASD completed six perceptual tasks designed to characterize the nature of the audiovisual processing difficulties experienced by children with ASD. Children with ASD scored significantly lower than children without ASD on audiovisual tasks involving human faces…

  15. [Audio-visual communication in the history of psychiatry].

    PubMed

    Farina, B; Remoli, V; Russo, F

    1993-12-01

The authors analyse the evolution of visual communication in the history of psychiatry. From 18th-century oil paintings through the first daguerreotype prints to cinematography and modern audiovisual systems, they observe an increasing diffusion of new communication techniques in psychiatry, and they describe the use of the different techniques in psychiatric practice. The article ends with a brief review of the current applications of audiovisual media in therapy, training, teaching, and research.

  16. 77 FR 22803 - Certain Audiovisual Components and Products Containing the Same; Institution of Investigation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-17

    ... INTERNATIONAL TRADE COMMISSION [Inv. No. 337-TA-837] Certain Audiovisual Components and Products... importation of certain audiovisual components and products containing the same by reason of infringement of... importation, or the sale within the United States after importation of certain audiovisual components and...

  17. 36 CFR 1237.20 - What are special considerations in the maintenance of audiovisual records?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... considerations in the maintenance of audiovisual records? 1237.20 Section 1237.20 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.20 What are special considerations in the maintenance of audiovisual...

  18. BDVC (Bimodal Database of Violent Content): A database of violent audio and video

    NASA Astrophysics Data System (ADS)

    Rivera Martínez, Jose Luis; Mijes Cruz, Mario Humberto; Rodríguez Vázqu, Manuel Antonio; Rodríguez Espejo, Luis; Montoya Obeso, Abraham; García Vázquez, Mireya Saraí; Ramírez Acosta, Alejandro Álvaro

    2017-09-01

Unimodal databases currently dominate multimedia content description, organization, and retrieval applications, each covering a single type of content such as text, voice, or images; bimodal databases, by contrast, semantically associate two different types of content, such as audio-video or image-text. Generating a bimodal audio-video database entails creating a connection between the multimedia content through the semantic relation that links the actions conveyed by both types of information. This paper describes in detail the characteristics and methodology used to create a bimodal database of violent content; the semantic relationship is established by the proposed concepts that describe the audiovisual information. Using bimodal databases in audiovisual content processing applications increases semantic performance if, and only if, those applications process both types of content. The database contains 580 annotated audiovisual segments, totaling 28 minutes, divided into 41 classes. Bimodal databases are a tool for building applications for the semantic web.

  19. Audio-visual affective expression recognition

    NASA Astrophysics Data System (ADS)

    Huang, Thomas S.; Zeng, Zhihong

    2007-11-01

Automatic affective expression recognition has attracted increasing attention from researchers across disciplines; it promises a new paradigm for human-computer interaction (affect-sensitive interfaces, socially intelligent environments) and can advance research in affect-related fields including psychology, psychiatry, and education. Multimodal information integration is the process that enables humans to assess affective states robustly and flexibly. To capture the richness and subtlety of human emotional behavior, a computer should likewise be able to integrate information from multiple sensors. We describe in this paper our efforts toward machine understanding of audio-visual affective behavior, based on both deliberate and spontaneous displays. Some promising methods are presented to integrate information from both audio and visual modalities. Our experiments show the advantage of audio-visual fusion in affective expression recognition over audio-only or visual-only approaches.

  20. Cortical Integration of Audio-Visual Information

    PubMed Central

    Vander Wyk, Brent C.; Ramsay, Gordon J.; Hudac, Caitlin M.; Jones, Warren; Lin, David; Klin, Ami; Lee, Su Mei; Pelphrey, Kevin A.

    2013-01-01

    We investigated the neural basis of audio-visual processing in speech and non-speech stimuli. Physically identical auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses) were used in this fMRI experiment. Relative to unimodal stimuli, each of the multimodal conjunctions showed increased activation in largely non-overlapping areas. The conjunction of Ellipse and Speech, which most resembles naturalistic audiovisual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. The conjunction of Circle and Tone, an arbitrary audio-visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. The conjunction of Circle and Speech showed activation in lateral occipital cortex, and the conjunction of Ellipse and Tone did not show increased activation relative to unimodal stimuli. Further analysis revealed that middle temporal regions, although identified as multimodal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multimodal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which speech or non-speech percepts are evoked. PMID:20709442

  1. Coherence and congruence: two aspects of personality integration.

    PubMed

    Sheldon, K M; Kasser, T

    1995-03-01

Coherence- and congruence-based measures of personality integration were related to a variety of healthy personality characteristics. Functional coherence was defined as occurring when participants' "personal strivings" (R.A. Emmons, 1986) help bring about each other or help bring about higher level goals. Organismic congruence was defined as occurring when participants strive for self-determined reasons or when strivings help bring about intrinsic rather than extrinsic higher level goals. Study 1 found the integration measures were related to each other and to inventory measures of health and well-being. Study 2 showed that these goal integration measures were also related to role system integration and were prospective predictors of daily mood, vitality, and engagement in meaningful as opposed to distracting activities.

  2. Congruence between gender stereotypes and activity preference in self-identified tomboys and non-tomboys.

    PubMed

    Martin, Carol Lynn; Dinella, Lisa M

    2012-06-01

The major goal was to examine a central tenet of cognitive approaches to gender development, namely, that congruence exists between personal gender stereotypes and behaviors. Item-by-item comparisons of girls' stereotypes about activities and their preferences for activities were conducted, for both girls who claimed to be tomboys and those who did not. Congruence was expected for all girls, but because of their gender non-normative interests, tomboys may exhibit less congruence. A secondary goal was to examine factors that might influence congruence, specifically, whether tomboys develop more inclusive stereotypes and develop greater understanding of stereotype variability. Participants included 112 girls (7-12 years old, M age=9). Girls were interviewed about their activity preferences, beliefs about girls' and boys' activity preferences, understanding variability of stereotypes, and identification as tomboys. Tomboys (30% of the sample) and non-tomboys did not differ in their liking of or in the number of liked feminine activities. However, tomboys showed more interest in masculine activities than non-tomboys. Tomboys and non-tomboys did not differ in stereotype inclusiveness, although tomboys showed a trend toward more inclusive stereotypes. Both groups showed high levels of congruence between stereotypes and preferences. Congruence was stronger for non-tomboys (14 times more likely to exhibit responses congruent with stereotypes vs. incongruent ones) compared to tomboys, who were four times more likely to exhibit responses congruent with stereotypes versus incongruent ones. Implications of these findings for cognitive approaches to gender development are discussed.

  3. Cortical oscillations modulated by congruent and incongruent audiovisual stimuli.

    PubMed

    Herdman, A T; Fujioka, T; Chau, W; Ross, B; Pantev, C; Picton, T W

    2004-11-30

Congruent or incongruent grapheme-phoneme stimuli are easily perceived as one or two linguistic objects. The main objective of this study was to investigate the changes in cortical oscillations that reflect the processing of congruent and incongruent audiovisual stimuli. Graphemes were Japanese Hiragana characters for four different vowels (/a/, /o/, /u/, and /i/). They were presented simultaneously with their corresponding phonemes (congruent) or non-corresponding phonemes (incongruent) to native-speaking Japanese participants. Participants' reaction times to the congruent audiovisual stimuli were significantly faster by 57 ms as compared to reaction times to incongruent stimuli. We recorded the brain responses for each condition using a whole-head magnetoencephalograph (MEG). A novel approach to analysing MEG data, called synthetic aperture magnetometry (SAM), was used to identify event-related changes in cortical oscillations involved in audiovisual processing. The SAM contrast between congruent and incongruent responses revealed greater event-related desynchronization (8-16 Hz) bilaterally in the occipital lobes and greater event-related synchronization (4-8 Hz) in the left transverse temporal gyrus. Results from this study further support the concept of interactions between the auditory and visual sensory cortices in multi-sensory processing of audiovisual objects.

  4. Dissociable Effects of Aging and Mild Cognitive Impairment on Bottom-Up Audiovisual Integration.

    PubMed

    Festa, Elena K; Katz, Andrew P; Ott, Brian R; Tremont, Geoffrey; Heindel, William C

    2017-01-01

    Effective audiovisual sensory integration involves dynamic changes in functional connectivity between superior temporal sulcus and primary sensory areas. This study examined whether disrupted connectivity in early Alzheimer's disease (AD) produces impaired audiovisual integration under conditions requiring greater corticocortical interactions. Audiovisual speech integration was examined in healthy young adult controls (YC), healthy elderly controls (EC), and patients with amnestic mild cognitive impairment (MCI) using McGurk-type stimuli (providing either congruent or incongruent audiovisual speech information) under conditions differing in the strength of bottom-up support and the degree of top-down lexical asymmetry. All groups accurately identified auditory speech under congruent audiovisual conditions, and displayed high levels of visual bias under strong bottom-up incongruent conditions. Under weak bottom-up incongruent conditions, however, EC and amnestic MCI groups displayed opposite patterns of performance, with enhanced visual bias in the EC group and reduced visual bias in the MCI group relative to the YC group. Moreover, there was no overlap between the EC and MCI groups in individual visual bias scores reflecting the change in audiovisual integration from the strong to the weak stimulus conditions. Top-down lexicality influences on visual biasing were observed only in the MCI patients under weaker bottom-up conditions. Results support a deficit in bottom-up audiovisual integration in early AD attributable to disruptions in corticocortical connectivity. Given that this deficit is not simply an exacerbation of changes associated with healthy aging, tests of audiovisual speech integration may serve as sensitive and specific markers of the earliest cognitive change associated with AD.

  5. Superadditive responses in superior temporal sulcus predict audiovisual benefits in object categorization.

    PubMed

    Werner, Sebastian; Noppeney, Uta

    2010-08-01

    Merging information from multiple senses provides a more reliable percept of our environment. Yet, little is known about where and how various sensory features are combined within the cortical hierarchy. Combining functional magnetic resonance imaging and psychophysics, we investigated the neural mechanisms underlying integration of audiovisual object features. Subjects categorized or passively perceived audiovisual object stimuli with the informativeness (i.e., degradation) of the auditory and visual modalities being manipulated factorially. Controlling for low-level integration processes, we show higher level audiovisual integration selectively in the superior temporal sulci (STS) bilaterally. The multisensory interactions were primarily subadditive and even suppressive for intact stimuli but turned into additive effects for degraded stimuli. Consistent with the inverse effectiveness principle, auditory and visual informativeness determine the profile of audiovisual integration in STS similarly to the influence of physical stimulus intensity in the superior colliculus. Importantly, when holding stimulus degradation constant, subjects' audiovisual behavioral benefit predicts their multisensory integration profile in STS: only subjects that benefit from multisensory integration exhibit superadditive interactions, while those that do not benefit show suppressive interactions. In conclusion, superadditive and subadditive integration profiles in STS are functionally relevant and related to behavioral indices of multisensory integration with superadditive interactions mediating successful audiovisual object categorization.
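    The superadditive/subadditive distinction described in this abstract reduces to comparing the audiovisual response against the unimodal responses. A minimal sketch, with purely illustrative response estimates (e.g., beta weights) and the classification rule stated only for illustration:

```python
def integration_profile(av, a, v, tol=1e-9):
    """Classify a region's multisensory interaction by comparing the
    audiovisual response (av) to the unimodal responses (a, v).
    superadditive: AV > A + V
    subadditive:   max(A, V) <= AV < A + V
    suppressive:   AV < max(A, V)
    All values here are hypothetical response estimates."""
    if av > a + v + tol:
        return "superadditive"
    if av >= max(a, v) - tol:
        return "subadditive"
    return "suppressive"

# Illustrative calls: AV exceeding the unimodal sum is superadditive,
# AV below the stronger unimodal response is suppressive.
print(integration_profile(1.5, 0.6, 0.5))  # superadditive
print(integration_profile(0.4, 0.6, 0.5))  # suppressive
```

    On this sketch's rule, the abstract's finding reads as: subjects who benefit behaviorally show responses in the first regime, while non-benefiting subjects fall into the last.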

  6. Audiovisual Matching in Speech and Nonspeech Sounds: A Neurodynamical Model

    ERIC Educational Resources Information Center

    Loh, Marco; Schmid, Gabriele; Deco, Gustavo; Ziegler, Wolfram

    2010-01-01

    Audiovisual speech perception provides an opportunity to investigate the mechanisms underlying multimodal processing. By using nonspeech stimuli, it is possible to investigate the degree to which audiovisual processing is specific to the speech domain. It has been shown in a match-to-sample design that matching across modalities is more difficult…

  7. 37 CFR 202.22 - Acquisition and deposit of unpublished audio and audiovisual transmission programs.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... unpublished audio and audiovisual transmission programs. 202.22 Section 202.22 Patents, Trademarks, and... REGISTRATION OF CLAIMS TO COPYRIGHT § 202.22 Acquisition and deposit of unpublished audio and audiovisual... and copies of unpublished audio and audiovisual transmission programs by the Library of Congress under...

  8. 37 CFR 202.22 - Acquisition and deposit of unpublished audio and audiovisual transmission programs.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... unpublished audio and audiovisual transmission programs. 202.22 Section 202.22 Patents, Trademarks, and... REGISTRATION OF CLAIMS TO COPYRIGHT § 202.22 Acquisition and deposit of unpublished audio and audiovisual... and copies of unpublished audio and audiovisual transmission programs by the Library of Congress under...

  9. 37 CFR 202.22 - Acquisition and deposit of unpublished audio and audiovisual transmission programs.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... unpublished audio and audiovisual transmission programs. 202.22 Section 202.22 Patents, Trademarks, and... REGISTRATION OF CLAIMS TO COPYRIGHT § 202.22 Acquisition and deposit of unpublished audio and audiovisual... and copies of unpublished audio and audiovisual transmission programs by the Library of Congress under...

  10. 37 CFR 202.22 - Acquisition and deposit of unpublished audio and audiovisual transmission programs.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... unpublished audio and audiovisual transmission programs. 202.22 Section 202.22 Patents, Trademarks, and... REGISTRATION OF CLAIMS TO COPYRIGHT § 202.22 Acquisition and deposit of unpublished audio and audiovisual... and copies of unpublished audio and audiovisual transmission programs by the Library of Congress under...

  11. Audio-Visual Speaker Diarization Based on Spatiotemporal Bayesian Fusion.

    PubMed

    Gebru, Israel D; Ba, Sileye; Li, Xiaofei; Horaud, Radu

    2018-05-01

Speaker diarization consists of assigning speech signals to people engaged in a dialogue. An audio-visual spatiotemporal diarization model is proposed. The model is well suited for challenging scenarios that consist of several participants engaged in multi-party interaction while they move around and turn their heads towards the other participants rather than facing the cameras and the microphones. Multiple-person visual tracking is combined with multiple speech-source localization in order to tackle the speech-to-person association problem. The latter is solved within a novel audio-visual fusion method on the following grounds: binaural spectral features are first extracted from a microphone pair, then a supervised audio-visual alignment technique maps these features onto an image, and finally a semi-supervised clustering method assigns binaural spectral features to visible persons. The main advantage of this method over previous work is that it processes, in a principled way, speech signals uttered simultaneously by multiple persons. The diarization itself is cast into a latent-variable temporal graphical model that infers speaker identities and speech turns, based on the output of an audio-visual association process, executed at each time slice, and on the dynamics of the diarization variable itself. The proposed formulation yields an efficient exact inference procedure. A novel dataset is introduced that contains audio-visual training data as well as a number of scenarios involving several participants engaged in formal and informal dialogue. The proposed method is thoroughly tested and benchmarked with respect to several state-of-the-art diarization algorithms.

  12. Context-specific effects of musical expertise on audiovisual integration

    PubMed Central

    Bishop, Laura; Goebl, Werner

    2014-01-01

    Ensemble musicians exchange auditory and visual signals that can facilitate interpersonal synchronization. Musical expertise improves how precisely auditory and visual signals are perceptually integrated and increases sensitivity to asynchrony between them. Whether expertise improves sensitivity to audiovisual asynchrony in all instrumental contexts or only in those using sound-producing gestures that are within an observer's own motor repertoire is unclear. This study tested the hypothesis that musicians are more sensitive to audiovisual asynchrony in performances featuring their own instrument than in performances featuring other instruments. Short clips were extracted from audio-video recordings of clarinet, piano, and violin performances and presented to highly-skilled clarinetists, pianists, and violinists. Clips either maintained the audiovisual synchrony present in the original recording or were modified so that the video led or lagged behind the audio. Participants indicated whether the audio and video channels in each clip were synchronized. The range of asynchronies most often endorsed as synchronized was assessed as a measure of participants' sensitivities to audiovisual asynchrony. A positive relationship was observed between musical training and sensitivity, with data pooled across stimuli. While participants across expertise groups detected asynchronies most readily in piano stimuli and least readily in violin stimuli, pianists showed significantly better performance for piano stimuli than for either clarinet or violin. These findings suggest that, to an extent, the effects of expertise on audiovisual integration can be instrument-specific; however, the nature of the sound-producing gestures that are observed has a substantial effect on how readily asynchrony is detected as well. PMID:25324819

  13. Effects of interest-major congruence, motivation, and academic performance on timely degree attainment.

    PubMed

    Allen, Jeff; Robbins, Steve

    2010-01-01

Using longitudinal student data from 15 four-year (n = 3,072) and 13 two-year (n = 788) postsecondary institutions, the authors tested the effects of interest-major congruence, motivation, and 1st-year academic performance on timely degree completion. Findings suggest that interest-major congruence has a direct effect on timely degree completion in both institutional settings and that motivation has indirect effects (via 1st-year academic performance). The total effects of both interest-major congruence and motivation on timely degree completion underscore the importance of both constructs in understanding student adjustment and postsecondary success. Implications for theory and counseling practice are discussed.

  14. Semantator: semantic annotator for converting biomedical text to linked data.

    PubMed

    Tao, Cui; Song, Dezhao; Sharma, Deepak; Chute, Christopher G

    2013-10-01

More than 80% of biomedical data is embedded in plain text. The unstructured nature of these text-based documents makes it challenging to easily browse and query the data of interest in them. One approach to facilitating browsing and querying of biomedical text is to convert the plain text to a linked web of data, i.e., converting data originally in free text to structured formats with defined meta-level semantics. In this paper, we introduce Semantator (Semantic Annotator), a semantic-web-based environment for annotating data of interest in biomedical documents, browsing and querying the annotated data, and interactively refining annotation results if needed. Through Semantator, information of interest can be annotated either manually or semi-automatically using plug-in information extraction tools. The annotated results are stored in RDF and can be queried using the SPARQL query language. In addition, semantic reasoners can be applied directly to the annotated data for consistency checking and knowledge inference. Semantator has been released online and has been used by the biomedical ontology community, which provided positive feedback. Our evaluation results indicated that (1) Semantator performs the annotation functionalities as designed; (2) Semantator can be adopted in real applications in clinical and translational research; and (3) the annotated results produced with Semantator can easily be used in semantic-web-based reasoning tools for further inference. Copyright © 2013 Elsevier Inc. All rights reserved.
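    The annotate-then-query workflow described above can be illustrated schematically. A real deployment would store annotations in RDF and query them with SPARQL, as the abstract states; the sketch below uses plain Python triples and a pattern matcher as a stand-in, and every name and term in it is hypothetical rather than taken from Semantator's actual schema:

```python
# Annotations as subject-predicate-object triples (illustrative names only;
# Semantator's real vocabulary differs).
triples = {
    ("ann:1", "rdf:type", "ex:Annotation"),
    ("ann:1", "ex:coversText", "myocardial infarction"),
    ("ann:1", "ex:hasConcept", "concept:MI"),
}

def match(pattern):
    """Return triples matching a pattern; None acts as a wildcard,
    loosely mirroring a SPARQL basic graph pattern."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Analogue of: SELECT ?text WHERE { ?a ex:coversText ?text }
texts = [o for (_, _, o) in match((None, "ex:coversText", None))]
print(texts)  # ['myocardial infarction']
```

    The same triple/pattern shape is what makes the RDF results directly consumable by reasoners, as the abstract notes: consistency checking and inference operate over exactly this kind of graph.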

  15. Young children's recall and reconstruction of audio and audiovisual narratives.

    PubMed

    Gibbons, J; Anderson, D R; Smith, R; Field, D E; Fischer, C

    1986-08-01

    It has been claimed that the visual component of audiovisual media dominates young children's cognitive processing. This experiment examines the effects of input modality while controlling the complexity of the visual and auditory content and while varying the comprehension task (recall vs. reconstruction). 4- and 7-year-olds were presented brief stories through either audio or audiovisual media. The audio version consisted of narrated character actions and character utterances. The narrated actions were matched to the utterances on the basis of length and propositional complexity. The audiovisual version depicted the actions visually by means of stop animation instead of by auditory narrative statements. The character utterances were the same in both versions. Audiovisual input produced superior performance on explicit information in the 4-year-olds and produced more inferences at both ages. Because performance on utterances was superior in the audiovisual condition as compared to the audio condition, there was no evidence that visual input inhibits processing of auditory information. Actions were more likely to be produced by the younger children than utterances, regardless of input medium, indicating that prior findings of visual dominance may have been due to the salience of narrative action. Reconstruction, as compared to recall, produced superior depiction of actions at both ages as well as more constrained relevant inferences and narrative conventions.

  16. Neural dynamics of audiovisual speech integration under variable listening conditions: an individual participant analysis

    PubMed Central

    Altieri, Nicholas; Wenger, Michael J.

    2013-01-01

    Speech perception engages both auditory and visual modalities. Limitations of traditional accuracy-only approaches in the investigation of audiovisual speech perception have motivated the use of new methodologies. In an audiovisual speech identification task, we utilized capacity (Townsend and Nozawa, 1995), a dynamic measure of efficiency, to quantify audiovisual integration. Capacity was used to compare RT distributions from audiovisual trials to RT distributions from auditory-only and visual-only trials across three listening conditions: clear auditory signal, S/N ratio of −12 dB, and S/N ratio of −18 dB. The purpose was to obtain EEG recordings in conjunction with capacity to investigate how a late ERP co-varies with integration efficiency. Results showed efficient audiovisual integration for low auditory S/N ratios, but inefficient audiovisual integration when the auditory signal was clear. The ERP analyses showed evidence for greater audiovisual amplitude compared to the unisensory signals for lower auditory S/N ratios (higher capacity/efficiency) compared to the high S/N ratio (low capacity/inefficient integration). The data are consistent with an interactive framework of integration, where auditory recognition is influenced by speech-reading as a function of signal clarity. PMID:24058358

  17. Neural dynamics of audiovisual speech integration under variable listening conditions: an individual participant analysis.

    PubMed

    Altieri, Nicholas; Wenger, Michael J

    2013-01-01

    Speech perception engages both auditory and visual modalities. Limitations of traditional accuracy-only approaches in the investigation of audiovisual speech perception have motivated the use of new methodologies. In an audiovisual speech identification task, we utilized capacity (Townsend and Nozawa, 1995), a dynamic measure of efficiency, to quantify audiovisual integration. Capacity was used to compare RT distributions from audiovisual trials to RT distributions from auditory-only and visual-only trials across three listening conditions: clear auditory signal, S/N ratio of -12 dB, and S/N ratio of -18 dB. The purpose was to obtain EEG recordings in conjunction with capacity to investigate how a late ERP co-varies with integration efficiency. Results showed efficient audiovisual integration for low auditory S/N ratios, but inefficient audiovisual integration when the auditory signal was clear. The ERP analyses showed evidence for greater audiovisual amplitude compared to the unisensory signals for lower auditory S/N ratios (higher capacity/efficiency) compared to the high S/N ratio (low capacity/inefficient integration). The data are consistent with an interactive framework of integration, where auditory recognition is influenced by speech-reading as a function of signal clarity.
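    The capacity measure cited in the two records above (Townsend and Nozawa, 1995) compares the cumulative hazard of audiovisual RTs against the summed hazards of the unimodal conditions, C(t) = H_AV(t) / (H_A(t) + H_V(t)), with C(t) > 1 indicating efficient integration. A minimal sketch, assuming hypothetical RT samples and a simple empirical survivor-function estimate:

```python
import numpy as np

def cumulative_hazard(rts, t):
    """Estimate the cumulative hazard H(t) = -log S(t) from a sample of
    reaction times, where S(t) is the empirical survivor function."""
    rts = np.asarray(rts, dtype=float)
    surv = (rts > t[:, None]).mean(axis=1)  # S(t) at each time point
    surv = np.clip(surv, 1e-12, 1.0)        # avoid log(0)
    return -np.log(surv)

def capacity_coefficient(rt_av, rt_a, rt_v, t):
    """Townsend & Nozawa (1995): C(t) = H_AV(t) / (H_A(t) + H_V(t)).
    C(t) > 1: efficient (super-capacity) integration;
    C(t) < 1: inefficient (limited-capacity) processing."""
    h_av = cumulative_hazard(rt_av, t)
    denom = cumulative_hazard(rt_a, t) + cumulative_hazard(rt_v, t)
    return np.where(denom > 0, h_av / np.maximum(denom, 1e-12), np.nan)
```

    For example, if audiovisual RTs are systematically faster than both unimodal conditions, H_AV grows faster than the summed unimodal hazards and C(t) exceeds 1, mirroring the efficient integration the study reports at low auditory S/N ratios.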

  18. Are we on the same page? The performance effects of congruence between supervisor and group trust.

    PubMed

    Carter, Min Z; Mossholder, Kevin W

    2015-09-01

Taking a multiple-stakeholder perspective, we examined the effects of supervisor-work group trust congruence on groups' task and contextual performance using a polynomial regression and response surface analytical framework. We expected motivation experienced by work groups to mediate the positive influence of trust congruence on performance. Although hypothesized congruence effects on performance were more strongly supported for affective rather than for cognitive trust, we found significant indirect effects on performance (via work group motivation) for both types of trust. We discuss the performance effects of trust congruence and incongruence between supervisors and work groups, as well as implications for practice and future research. (c) 2015 APA, all rights reserved.
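    The polynomial regression and response surface framework mentioned above typically fits a quadratic surface z = b0 + b1·x + b2·y + b3·x² + b4·xy + b5·y² and inspects its curvature along the congruence (x = y) and incongruence (x = -y) lines. A minimal sketch on synthetic data; all variable names and the generated data are illustrative, not taken from the study:

```python
import numpy as np

def fit_response_surface(x, y, z):
    """Fit the quadratic surface used in congruence research:
        z = b0 + b1*x + b2*y + b3*x^2 + b4*x*y + b5*y^2
    x, y: e.g. supervisor trust and group trust; z: performance."""
    X = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coef, *_ = np.linalg.lstsq(X, z, rcond=None)
    return coef

# Synthetic data where performance peaks along the congruence line x == y.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = rng.uniform(-1, 1, 200)
z = 5 - (x - y) ** 2 + rng.normal(0, 0.05, 200)

b = fit_response_surface(x, y, z)
# Curvature along the incongruence line (x = -y) is b3 - b4 + b5;
# a negative value means performance drops as trust levels diverge.
incongruence_curvature = b[3] - b[4] + b[5]
```

    In this synthetic example the fitted incongruence curvature is strongly negative, which is the response-surface signature of a congruence effect like the one the abstract reports for affective trust.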

  19. Functional brain and age-related changes associated with congruency in task switching

    PubMed Central

    Eich, Teal S.; Parker, David; Liu, Dan; Oh, Hwamee; Razlighi, Qolamreza; Gazes, Yunglin; Habeck, Christian; Stern, Yaakov

    2016-01-01

    Alternating between completing two simple tasks, as opposed to completing only one task, has been shown to produce costs to performance and changes to neural patterns of activity, effects which are augmented in old age. Cognitive conflict may arise from factors other than switching tasks, however. Sensorimotor congruency (whether stimulus-response mappings are the same or different for the two tasks) has been shown to behaviorally moderate switch costs in older, but not younger adults. In the current study, we used fMRI to investigate the neurobiological mechanisms of response-conflict congruency effects within a task switching paradigm in older (N=75) and younger (N=62) adults. Behaviorally, incongruency moderated age-related differences in switch costs. Neurally, switch costs were associated with greater activation in the dorsal attention network for older relative to younger adults. We also found that older adults recruited an additional set of brain areas in the ventral attention network to a greater extent than did younger adults to resolve congruency-related response-conflict. These results suggest both a network and an age-based dissociation between congruency and switch costs in task switching. PMID:27520472

  20. Automated x-ray/light field congruence using the LINAC EPID panel.

    PubMed

    Polak, Wojciech; O'Doherty, Jim; Jones, Matt

    2013-03-01

    X-ray/light field alignment is a test described in many guidelines for the routine quality control of clinical linear accelerators (LINAC). Currently, the gold standard method for measuring alignment is through utilization of radiographic film. However, many modern LINACs are equipped with an electronic portal imaging device (EPID) that may be used to perform this test, thus reducing overall cost and processing and analysis time, removing operator dependency, and eliminating the need to maintain the departmental film processor. This work describes a novel method of utilizing the EPID together with a custom in-house designed jig and automatic image processing software allowing measurement of the light field size, x-ray field size, and congruence between them. The authors present results of testing the method for aS1000 and aS500 Varian EPID detectors for six LINACs at a range of energies (6, 10, and 15 MV) in comparison with the results obtained from the use of radiographic film. Reproducibility of the software in fully automatic operation under a range of operating conditions for a single image showed a congruence of 0.01 cm with a coefficient of variation of 0. Slight variation in congruence repeatability was noted through semiautomatic processing by four independent operators due to manual marking of positions on the jig. Testing of the methodology using the automatic method shows a high precision of 0.02 mm compared to a maximum of 0.06 mm determined by film processing. Intraindividual examination of operator measurements of congruence was shown to vary as much as 0.75 mm. Similar congruence measurements of 0.02 mm were also determined for a lower resolution EPID (aS500 model), after rescaling of the image to the aS1000 image size. The designed methodology was proven to be time efficient, cost effective, and at least as accurate as using the gold standard radiographic film. Additionally, congruence testing can be easily performed for all four cardinal…

  1. Effects of Interest-Major Congruence, Motivation, and Academic Performance on Timely Degree Attainment

    ERIC Educational Resources Information Center

    Allen, Jeff; Robbins, Steve

    2010-01-01

    Using longitudinal student data from 15 four-year (n = 3,072) and 13 two-year (n = 788) postsecondary institutions, the authors tested the effects of interest-major congruence, motivation, and 1st-year academic performance on timely degree completion. Findings suggest that interest-major congruence has a direct effect on timely degree completion…

  2. Electrophysiological Correlates of Semantic Dissimilarity Reflect the Comprehension of Natural, Narrative Speech.

    PubMed

    Broderick, Michael P; Anderson, Andrew J; Di Liberto, Giovanni M; Crosse, Michael J; Lalor, Edmund C

    2018-03-05

    People routinely hear and understand speech at rates of 120-200 words per minute [1, 2]. Thus, speech comprehension must involve rapid, online neural mechanisms that process words' meanings in an approximately time-locked fashion. However, electrophysiological evidence for such time-locked processing has been lacking for continuous speech. Although valuable insights into semantic processing have been provided by the "N400 component" of the event-related potential [3-6], this literature has been dominated by paradigms using incongruous words within specially constructed sentences, with less emphasis on natural, narrative speech comprehension. Building on the discovery that cortical activity "tracks" the dynamics of running speech [7-9] and psycholinguistic work demonstrating [10-12] and modeling [13-15] how context impacts on word processing, we describe a new approach for deriving an electrophysiological correlate of natural speech comprehension. We used a computational model [16] to quantify the meaning carried by words based on how semantically dissimilar they were to their preceding context and then regressed this measure against electroencephalographic (EEG) data recorded from subjects as they listened to narrative speech. This produced a prominent negativity at a time lag of 200-600 ms on centro-parietal EEG channels, characteristics common to the N400. Applying this approach to EEG datasets involving time-reversed speech, cocktail party attention, and audiovisual speech-in-noise demonstrated that this response was very sensitive to whether or not subjects understood the speech they heard. These findings demonstrate that, when successfully comprehending natural speech, the human brain responds to the contextual semantic content of each word in a relatively time-locked fashion. Copyright © 2018 Elsevier Ltd. All rights reserved.
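
    The dissimilarity regressor described above can be sketched as one minus the cosine similarity between a word's embedding and the average of its preceding context. A toy illustration — the 3-d vectors are stand-ins for the hundreds-of-dimensions embeddings a real model would supply:

```python
import numpy as np

def semantic_dissimilarity(word_vec, context_vecs):
    """1 - cosine similarity between a word's embedding and the average
    embedding of its preceding context; higher = more surprising word."""
    ctx = np.mean(context_vecs, axis=0)
    cos = np.dot(word_vec, ctx) / (np.linalg.norm(word_vec) * np.linalg.norm(ctx))
    return 1.0 - cos

# Toy 3-d "embeddings" (purely illustrative).
context    = [np.array([1.0, 0.0, 0.0]), np.array([0.9, 0.1, 0.0])]
expected   = np.array([1.0, 0.05, 0.0])  # semantically close to the context
surprising = np.array([0.0, 0.0, 1.0])   # orthogonal to the context
d_expected = semantic_dissimilarity(expected, context)
d_surprise = semantic_dissimilarity(surprising, context)
```

The resulting per-word values would then be regressed against the EEG (e.g., in a temporal response function framework) to recover the N400-like negativity described above.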

  3. The process of developing audiovisual patient information: challenges and opportunities.

    PubMed

    Hutchison, Catherine; McCreaddie, May

    2007-11-01

    The aim of this project was to produce audiovisual patient information, which was user friendly and fit for purpose. The purpose of the audiovisual patient information is to inform patients about randomized controlled trials, as a supplement to their trial-specific written information sheet. Audiovisual patient information is known to be an effective way of informing patients about treatment. User involvement is also recognized as being important in the development of service provision. The aim of this paper is (i) to describe and discuss the process of developing the audiovisual patient information and (ii) to highlight the challenges and opportunities, thereby identifying implications for practice. A future study will test the effectiveness of the audiovisual patient information in the cancer clinical trial setting. An advisory group was set up to oversee the project and provide guidance in relation to information content, level and delivery. An expert panel of two patients provided additional guidance and a dedicated operational team dealt with the logistics of the project including: ethics; finance; scriptwriting; filming; editing and intellectual property rights. Challenges included the limitations of filming in a busy clinical environment, restricted technical and financial resources, ethical needs and issues around copyright. There were, however, substantial opportunities that included utilizing creative skills, meaningfully involving patients, teamworking and mutual appreciation of clinical, multidisciplinary and technical expertise. Developing audiovisual patient information is an important area for nurses to be involved with. However, this must be performed within the context of the multiprofessional team. Teamworking, including patient involvement, is crucial as a wide variety of expertise is required. Many aspects of the process are transferable and will provide information and guidance for nurses, regardless of specialty, considering developing this…

  4. Audiovisual speech facilitates voice learning.

    PubMed

    Sheffert, Sonya M; Olson, Elizabeth

    2004-02-01

    In this research, we investigated the effects of voice and face information on the perceptual learning of talkers and on long-term memory for spoken words. In the first phase, listeners were trained over several days to identify voices from words presented auditorily or audiovisually. The training data showed that visual information about speakers enhanced voice learning, revealing cross-modal connections in talker processing akin to those observed in speech processing. In the second phase, the listeners completed an auditory or audiovisual word recognition memory test in which equal numbers of words were spoken by familiar and unfamiliar talkers. The data showed that words presented by familiar talkers were more likely to be retrieved from episodic memory, regardless of modality. Together, these findings provide new information about the representational code underlying familiar talker recognition and the role of stimulus familiarity in episodic word recognition.

  5. Robot Command Interface Using an Audio-Visual Speech Recognition System

    NASA Astrophysics Data System (ADS)

    Ceballos, Alexánder; Gómez, Juan; Prieto, Flavio; Redarce, Tanneguy

    In recent years audio-visual speech recognition has emerged as an active field of research thanks to advances in pattern recognition, signal processing and machine vision. Its ultimate goal is to allow human-computer communication using voice, taking into account the visual information contained in the audio-visual speech signal. This document presents an automatic command recognition system using audio-visual information. The system is intended to control the da Vinci laparoscopic robot. The audio signal is processed using the Mel Frequency Cepstral Coefficients parametrization method. In addition, features based on the points that define the mouth's outer contour according to the MPEG-4 standard are used to extract the visual speech information.
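
    The MFCC parametrization mentioned above maps a short frame of audio through a power spectrum, a triangular mel filterbank, a log, and a DCT-II. A minimal single-frame sketch — the sample rate, filter counts, and omission of pre-emphasis/liftering are simplifying assumptions, not the paper's configuration:

```python
import numpy as np

def mfcc_frame(frame, sr=16000, n_mels=26, n_ceps=13):
    """Minimal MFCC sketch for one windowed frame; real front ends add
    framing, pre-emphasis, liftering, and delta features."""
    spec = np.abs(np.fft.rfft(frame)) ** 2          # power spectrum
    n_fft = len(frame)

    # Triangular mel filterbank between 0 Hz and the Nyquist frequency.
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, len(spec)))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        for k in range(l, c):
            fbank[i, k] = (k - l) / max(c - l, 1)   # rising slope
        for k in range(c, r):
            fbank[i, k] = (r - k) / max(r - c, 1)   # falling slope

    log_mel = np.log(fbank @ spec + 1e-10)
    # DCT-II decorrelates the log filterbank energies into cepstral coeffs.
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_mels))
    return dct @ log_mel

# A 440 Hz tone in a 512-sample Hamming-windowed frame, as a smoke test.
frame = np.hamming(512) * np.sin(2 * np.pi * 440 * np.arange(512) / 16000)
coeffs = mfcc_frame(frame)
```

In a full recognizer, these per-frame coefficients (plus deltas) would feed the acoustic model alongside the MPEG-4 mouth-contour features.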

  6. The spatial reliability of task-irrelevant sounds modulates bimodal audiovisual integration: An event-related potential study.

    PubMed

    Li, Qi; Yu, Hongtao; Wu, Yan; Gao, Ning

    2016-08-26

    The integration of multiple sensory inputs is essential for perception of the external world. The spatial factor is a fundamental property of multisensory audiovisual integration. Previous studies of the spatial constraints on bimodal audiovisual integration have mainly focused on the spatial congruity of audiovisual information. However, the effect of spatial reliability within audiovisual information on bimodal audiovisual integration remains unclear. In this study, we used event-related potentials (ERPs) to examine the effect of spatial reliability of task-irrelevant sounds on audiovisual integration. Three relevant ERP components emerged: the first at 140-200 ms over a wide central area, the second at 280-320 ms over the fronto-central area, and a third at 380-440 ms over the parieto-occipital area. Our results demonstrate that ERP amplitudes elicited by audiovisual stimuli with reliable spatial relationships are larger than those elicited by stimuli with inconsistent spatial relationships. In addition, we hypothesized that spatial reliability within an audiovisual stimulus enhances feedback projections to the primary visual cortex from multisensory integration regions. Overall, our findings suggest that the spatial linking of visual and auditory information depends on spatial reliability within an audiovisual stimulus and occurs at a relatively late stage of processing. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  7. Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap.

    PubMed

    Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin'ya

    2013-01-01

    It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this is necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audio visual speech; see Experiment 1) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; see Experiment 2) we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair alone can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap.

  8. Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap

    PubMed Central

    Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin’Ya

    2013-01-01

    It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this is necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audio visual speech; see Experiment 1) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; see Experiment 2) we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair alone can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap. PMID:23658549

  9. Personal Audiovisual Aptitude Influences the Interaction Between Landscape and Soundscape Appraisal.

    PubMed

    Sun, Kang; Echevarria Sanchez, Gemma M; De Coensel, Bert; Van Renterghem, Timothy; Talsma, Durk; Botteldooren, Dick

    2018-01-01

    It has been established that there is an interaction between audition and vision in the appraisal of our living environment, and that this appraisal is influenced by personal factors. Here, we test the hypothesis that audiovisual aptitude influences appraisal of our sonic and visual environment. To measure audiovisual aptitude, an auditory deviant detection experiment was conducted in an ecologically valid and complex context. This experiment allows us to distinguish between accurate and less accurate listeners. Additionally, it allows us to distinguish between participants that are easily visually distracted and those who are not. To do so, two previously conducted laboratory experiments were re-analyzed. The first experiment focuses on self-reported noise annoyance in a living room context, whereas the second experiment focuses on the perceived pleasantness of using outdoor public spaces. In the first experiment, the influence of visibility of vegetation on self-reported noise annoyance was modified by audiovisual aptitude. In the second one, it was found that the overall appraisal of walking across a bridge is influenced by audiovisual aptitude, in particular when a visually intrusive noise barrier is used to reduce highway traffic noise levels. We conclude that audiovisual aptitude may affect the appraisal of the living environment.

  10. Personal Audiovisual Aptitude Influences the Interaction Between Landscape and Soundscape Appraisal

    PubMed Central

    Sun, Kang; Echevarria Sanchez, Gemma M.; De Coensel, Bert; Van Renterghem, Timothy; Talsma, Durk; Botteldooren, Dick

    2018-01-01

    It has been established that there is an interaction between audition and vision in the appraisal of our living environment, and that this appraisal is influenced by personal factors. Here, we test the hypothesis that audiovisual aptitude influences appraisal of our sonic and visual environment. To measure audiovisual aptitude, an auditory deviant detection experiment was conducted in an ecologically valid and complex context. This experiment allows us to distinguish between accurate and less accurate listeners. Additionally, it allows us to distinguish between participants that are easily visually distracted and those who are not. To do so, two previously conducted laboratory experiments were re-analyzed. The first experiment focuses on self-reported noise annoyance in a living room context, whereas the second experiment focuses on the perceived pleasantness of using outdoor public spaces. In the first experiment, the influence of visibility of vegetation on self-reported noise annoyance was modified by audiovisual aptitude. In the second one, it was found that the overall appraisal of walking across a bridge is influenced by audiovisual aptitude, in particular when a visually intrusive noise barrier is used to reduce highway traffic noise levels. We conclude that audiovisual aptitude may affect the appraisal of the living environment. PMID:29910750

  11. Lexical-semantic processing in the semantic priming paradigm in aphasic patients.

    PubMed

    Salles, Jerusa Fumagalli de; Holderbaum, Candice Steffen; Parente, Maria Alice Mattos Pimenta; Mansur, Letícia Lessa; Ansaldo, Ana Inès

    2012-09-01

    There is evidence that the explicit lexical-semantic processing deficits which characterize aphasia may be observed in the absence of implicit semantic impairment. The aim of this article was to critically review the international literature on lexical-semantic processing in aphasia, as tested through the semantic priming paradigm. Specifically, this review focused on aphasia and lexical-semantic processing, the methodological strengths and weaknesses of the semantic paradigms used, and recent evidence from neuroimaging studies on lexical-semantic processing. Furthermore, evidence on dissociations between implicit and explicit lexical-semantic processing reported in the literature will be discussed and interpreted by referring to functional neuroimaging evidence from healthy populations. There is evidence that semantic priming effects can be found both in fluent and in non-fluent aphasias, and that these effects are related to an extensive network which includes the temporal lobe, the pre-frontal cortex, the left frontal gyrus, the left temporal gyrus and the cingulate cortex.

  12. Temporal Processing of Audiovisual Stimuli Is Enhanced in Musicians: Evidence from Magnetoencephalography (MEG)

    PubMed Central

    Lu, Yao; Paraskevopoulos, Evangelos; Herholz, Sibylle C.; Kuchenbuch, Anja; Pantev, Christo

    2014-01-01

    Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we have combined psychophysical with neurophysiological measurements investigating the processing of non-musical, synchronous or various levels of asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum, when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters basic audiovisual temporal processing already at an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of the estimates regarding the timing of audiovisual events. PMID:24595014

  13. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG).

    PubMed

    Lu, Yao; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Kuchenbuch, Anja; Pantev, Christo

    2014-01-01

    Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we have combined psychophysical with neurophysiological measurements investigating the processing of non-musical, synchronous or various levels of asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum, when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters basic audiovisual temporal processing already at an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of the estimates regarding the timing of audiovisual events.

  14. Exogenous spatial attention decreases audiovisual integration.

    PubMed

    Van der Stoep, N; Van der Stigchel, S; Nijboer, T C W

    2015-02-01

    Multisensory integration (MSI) and spatial attention are both mechanisms through which the processing of sensory information can be facilitated. Studies on the interaction between spatial attention and MSI have mainly focused on the interaction between endogenous spatial attention and MSI. Most of these studies have shown that endogenously attending a multisensory target enhances MSI. It is currently unclear, however, whether and how exogenous spatial attention and MSI interact. In the current study, we investigated the interaction between these two important bottom-up processes in two experiments. In Experiment 1 the target location was task-relevant, and in Experiment 2 the target location was task-irrelevant. Valid or invalid exogenous auditory cues were presented before the onset of unimodal auditory, unimodal visual, and audiovisual targets. We observed reliable cueing effects and multisensory response enhancement in both experiments. To examine whether audiovisual integration was influenced by exogenous spatial attention, the amount of race model violation was compared between exogenously attended and unattended targets. In both Experiment 1 and Experiment 2, a decrease in MSI was observed when audiovisual targets were exogenously attended, compared to when they were not. The interaction between exogenous attention and MSI was less pronounced in Experiment 2. Therefore, our results indicate that exogenous attention diminishes MSI when spatial orienting is relevant. The results are discussed in terms of models of multisensory integration and attention.
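
    The race model violation mentioned above is conventionally assessed with Miller's inequality, F_AV(t) ≤ F_A(t) + F_V(t): bimodal responses faster than this bound cannot be explained by an independent race between modalities. A sketch with simulated RTs — the data and grid resolution are illustrative assumptions, not the study's:

```python
import numpy as np

def ecdf(rts, t):
    """Empirical CDF of a sample of RTs evaluated on a grid of times t."""
    return (np.asarray(rts)[:, None] <= t).mean(axis=0)

def race_model_violation(av, a, v, n_points=200):
    """Maximum of F_AV(t) - min(F_A(t) + F_V(t), 1) over a time grid.
    Positive values exceed Miller's bound -> evidence for integration."""
    grid = np.linspace(min(map(min, (av, a, v))),
                       max(map(max, (av, a, v))), n_points)
    bound = np.minimum(ecdf(a, grid) + ecdf(v, grid), 1.0)
    return np.max(ecdf(av, grid) - bound)

rng = np.random.default_rng(2)
a  = rng.normal(520, 50, 400)    # unimodal auditory RTs (ms)
v  = rng.normal(540, 50, 400)    # unimodal visual RTs (ms)
av = rng.normal(430, 40, 400)    # strongly facilitated bimodal RTs
violation = race_model_violation(av, a, v)
```

In the study's design, this quantity would be compared between exogenously attended and unattended audiovisual targets to index the change in MSI.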

  15. The World of Audiovisual Education: Its Impact on Libraries and Librarians.

    ERIC Educational Resources Information Center

    Ely, Donald P.

    As the field of educational technology developed, the field of library science became increasingly concerned about audiovisual media. School libraries have made significant developments in integrating audiovisual media into traditional programs, and are becoming learning resource centers with a variety of media; academic and public libraries are…

  16. Distribution, congruence, and hotspots of higher plants in China

    PubMed Central

    Zhao, Lina; Li, Jinya; Liu, Huiyuan; Qin, Haining

    2016-01-01

    Identifying biodiversity hotspots has become a central issue in setting up priority protection areas, especially as financial resources for biological diversity conservation are limited. Taking China’s Higher Plants Red List (CHPRL), including Bryophytes, Ferns, Gymnosperms, Angiosperms, as the data source, we analyzed the geographic patterns of species richness, endemism, and endangerment via data processing at a fine grid-scale with an average edge length of 30 km based on three aspects of richness information: species richness, endemic species richness, and threatened species richness. We sought to test the accuracy of hotspots used in identifying conservation priorities with regard to higher plants. Next, we tested the congruence of the three aspects and made a comparison of the similarities and differences between the hotspots described in this paper and those in previous studies. We found that over 90% of threatened species in China are concentrated. While a high spatial congruence is observed among the three measures, there is a low congruence between two different sets of hotspots. Our results suggest that biodiversity information should be considered when identifying biological hotspots. Other factors, such as scales, should be included as well to develop biodiversity conservation plans in accordance with the region’s specific conditions. PMID:26750244

  17. Distribution, congruence, and hotspots of higher plants in China.

    PubMed

    Zhao, Lina; Li, Jinya; Liu, Huiyuan; Qin, Haining

    2016-01-11

    Identifying biodiversity hotspots has become a central issue in setting up priority protection areas, especially as financial resources for biological diversity conservation are limited. Taking China's Higher Plants Red List (CHPRL), including Bryophytes, Ferns, Gymnosperms, Angiosperms, as the data source, we analyzed the geographic patterns of species richness, endemism, and endangerment via data processing at a fine grid-scale with an average edge length of 30 km based on three aspects of richness information: species richness, endemic species richness, and threatened species richness. We sought to test the accuracy of hotspots used in identifying conservation priorities with regard to higher plants. Next, we tested the congruence of the three aspects and made a comparison of the similarities and differences between the hotspots described in this paper and those in previous studies. We found that over 90% of threatened species in China are concentrated. While a high spatial congruence is observed among the three measures, there is a low congruence between two different sets of hotspots. Our results suggest that biodiversity information should be considered when identifying biological hotspots. Other factors, such as scales, should be included as well to develop biodiversity conservation plans in accordance with the region's specific conditions.
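
    Congruence between hotspot sets of the kind analyzed above is often quantified as the overlap between the top fraction of grid cells under each richness measure. A hypothetical sketch using Jaccard overlap on simulated grid values — the top-10% rule and the data are assumptions for illustration, not the paper's exact criterion:

```python
import numpy as np

def hotspot_cells(richness, top_frac=0.1):
    """Boolean mask of grid cells in the top fraction of a richness measure."""
    cutoff = np.quantile(richness, 1.0 - top_frac)
    return richness >= cutoff

def congruence(mask_a, mask_b):
    """Jaccard overlap between two hotspot sets (1 = identical hotspots)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

rng = np.random.default_rng(3)
# Simulated per-cell richness for a 1000-cell grid; endemic richness is
# made correlated with species richness, as the paper reports.
species = rng.gamma(2.0, 10.0, 1000)
endemic = species * rng.uniform(0.8, 1.2, 1000)
jac = congruence(hotspot_cells(species), hotspot_cells(endemic))
```

The same overlap statistic can be applied between two independently derived hotspot sets (e.g., this paper's versus earlier studies'), where the abstract reports much lower congruence.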

  18. Culture, salience, and psychiatric diagnosis: exploring the concept of cultural congruence & its practical application

    PubMed Central

    2013-01-01

    Introduction: Cultural congruence is the idea that to the extent a belief or experience is culturally shared it is not to feature in a diagnostic judgement, irrespective of its resemblance to psychiatric pathology. This rests on the argument that since deviation from norms is central to diagnosis, and since what counts as deviation is relative to context, assessing the degree of fit between mental states and cultural norms is crucial. Various problems beset the cultural congruence construct including impoverished definitions of culture as religious, national or ethnic group and of congruence as validation by that group. This article attempts to address these shortcomings to arrive at a cogent construct. Results: The article distinguishes symbolic from phenomenological conceptions of culture, the latter expanded upon through two sources: Husserl’s phenomenological analysis of background intentionality and neuropsychological literature on salience. It is argued that culture is not limited to symbolic presuppositions and shapes subjects’ experiential dispositions. This conception is deployed to re-examine the meaning of (in)congruence. The main argument is that a significant, since foundational, deviation from culture is not from a value or belief but from culturally-instilled experiential dispositions, in what is salient to an individual in a particular context. Conclusion: Applying the concept of cultural congruence must not be limited to assessing violations of the symbolic order and must consider alignment with or deviations from culturally-instilled experiential dispositions. By virtue of being foundational to a shared experience of the world, such dispositions are more accurate indicators of potential vulnerability. Notwithstanding problems of access and expertise, clinical practice should aim to accommodate this richer meaning of cultural congruence. PMID:23870676

  19. Culture, salience, and psychiatric diagnosis: exploring the concept of cultural congruence & its practical application.

    PubMed

    Rashed, Mohammed Abouelleil

    2013-07-16

    Cultural congruence is the idea that to the extent a belief or experience is culturally shared it is not to feature in a diagnostic judgement, irrespective of its resemblance to psychiatric pathology. This rests on the argument that since deviation from norms is central to diagnosis, and since what counts as deviation is relative to context, assessing the degree of fit between mental states and cultural norms is crucial. Various problems beset the cultural congruence construct including impoverished definitions of culture as religious, national or ethnic group and of congruence as validation by that group. This article attempts to address these shortcomings to arrive at a cogent construct. The article distinguishes symbolic from phenomenological conceptions of culture, the latter expanded upon through two sources: Husserl's phenomenological analysis of background intentionality and neuropsychological literature on salience. It is argued that culture is not limited to symbolic presuppositions and shapes subjects' experiential dispositions. This conception is deployed to re-examine the meaning of (in)congruence. The main argument is that a significant, since foundational, deviation from culture is not from a value or belief but from culturally-instilled experiential dispositions, in what is salient to an individual in a particular context. Applying the concept of cultural congruence must not be limited to assessing violations of the symbolic order and must consider alignment with or deviations from culturally-instilled experiential dispositions. By virtue of being foundational to a shared experience of the world, such dispositions are more accurate indicators of potential vulnerability. Notwithstanding problems of access and expertise, clinical practice should aim to accommodate this richer meaning of cultural congruence.

  20. Congruence or discrepancy? Comparing patients' health valuations and physicians' treatment goals for rehabilitation for patients with chronic conditions.

    PubMed

    Nagl, Michaela; Farin, Erik

    2012-03-01

    The aim of this study was to test the congruence of patients' health valuations and physicians' treatment goals for the rehabilitation of chronically ill patients. In addition, patient characteristics associated with greater or less congruence were to be determined. In a questionnaire study, patients' health valuations and physicians' goals were assessed in three chronic conditions [breast cancer (BC), chronic ischemic heart disease (CIHD), and chronic back pain (CBP)] using a ranking method. Sociodemographic variables and health-related quality of life were assessed as patient-related factors that influence congruence. Congruence was determined at the group (Spearman's ρ) and individual levels (percentage of congruence). Patient-related influencing factors were calculated after a simple imputation using multiple logistic regression analysis. At the group level, there were often only low correlations. The mean percentage of congruence was 34.7% (BC), 48.5% (CIHD), and 31.9% (CBP). Patients with BC or CIHD who have a higher level of education showed greater congruence. Our results indicate some high discrepancy rates between physicians' treatment goals and patients' health valuations. It is possible that patients have preferences that do not correspond well with realistic rehabilitation goals or that physicians do not take patients' individual health valuations sufficiently into consideration when setting goals.
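The two levels of congruence used in this record (group-level Spearman's ρ between rankings, individual-level percentage of matching ranks) can be sketched as follows; the rankings below are invented for illustration, not taken from the study:

```python
from scipy.stats import spearmanr

# Hypothetical rankings of five rehabilitation goals (1 = most important)
patient_ranks   = [1, 2, 3, 4, 5]
physician_ranks = [2, 1, 3, 5, 4]

# Group-level congruence: rank correlation between the two orderings
rho, _ = spearmanr(patient_ranks, physician_ranks)

# Individual-level congruence: share of goals assigned the same rank
matches = sum(p == d for p, d in zip(patient_ranks, physician_ranks))
percent_congruence = 100 * matches / len(patient_ranks)

print(round(rho, 2), percent_congruence)
```

Note that the two measures can diverge: orderings can correlate strongly overall while agreeing on few exact rank positions, which is one reason studies report both.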

  1. Does hearing aid use affect audiovisual integration in mild hearing impairment?

    PubMed

    Gieseler, Anja; Tahden, Maike A S; Thiel, Christiane M; Colonius, Hans

    2018-04-01

    There is converging evidence for altered audiovisual integration abilities in hearing-impaired individuals and those with profound hearing loss who are provided with cochlear implants, compared to normal-hearing adults. Still, little is known about the effects of hearing aid use on audiovisual integration in mild hearing loss, although this is one of the most prevalent conditions in the elderly and yet often remains untreated in its early stages. This study investigated differences in the strength of audiovisual integration between elderly hearing aid users and those with the same degree of mild hearing loss who were not using hearing aids, the non-users, by measuring their susceptibility to the sound-induced flash illusion. We also explored the corresponding window of integration by varying the stimulus onset asynchronies. To examine general group differences that are not attributable to specific hearing aid settings but rather reflect overall changes associated with habitual hearing aid use, the group of hearing aid users was tested unaided while individually controlling for audibility. We found greater audiovisual integration together with a wider window of integration in hearing aid users compared to their age-matched untreated peers. Signal detection analyses indicate that a change in perceptual sensitivity as well as in bias may underlie the observed effects. Our results and comparisons with other studies in normal-hearing older adults suggest that both mild hearing impairment and hearing aid use seem to affect audiovisual integration, possibly in the sense that hearing aid use may reverse the effects of hearing loss on audiovisual integration. We suggest that these findings may be particularly important for auditory rehabilitation and call for a longitudinal study.
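The signal detection analysis mentioned above separates perceptual sensitivity (d′) from response bias (criterion c). A minimal sketch under the equal-variance Gaussian model, with invented illusion-response counts (the abstract does not report the actual counts or correction used):

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Equal-variance Gaussian SDT: sensitivity d' and criterion c.
    A log-linear correction (+0.5) guards against rates of exactly 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_h - z_f              # distance between signal and noise distributions
    criterion = -0.5 * (z_h + z_f)   # negative values = liberal responding
    return d_prime, criterion

# Hypothetical counts for one listener reporting the flash illusion
d_prime, c = sdt_measures(hits=40, misses=10, false_alarms=15, correct_rejections=35)
print(round(d_prime, 2), round(c, 2))
```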

  2. Varieties of semantic 'access' deficit in Wernicke's aphasia and semantic aphasia.

    PubMed

    Thompson, Hannah E; Robson, Holly; Lambon Ralph, Matthew A; Jefferies, Elizabeth

    2015-12-01

    Comprehension deficits are common in stroke aphasia, including in cases with (i) semantic aphasia, characterized by poor executive control of semantic processing across verbal and non-verbal modalities; and (ii) Wernicke's aphasia, associated with poor auditory-verbal comprehension and repetition, plus fluent speech with jargon. However, the varieties of these comprehension problems, and their underlying causes, are not well understood. Both patient groups exhibit some type of semantic 'access' deficit, as opposed to the 'storage' deficits observed in semantic dementia. Nevertheless, existing descriptions suggest that these patients might have different varieties of 'access' impairment-related to difficulty resolving competition (in semantic aphasia) versus initial activation of concepts from sensory inputs (in Wernicke's aphasia). We used a case series design to compare patients with Wernicke's aphasia and those with semantic aphasia on Warrington's paradigmatic assessment of semantic 'access' deficits. In these verbal and non-verbal matching tasks, a small set of semantically-related items are repeatedly presented over several cycles so that the target on one trial becomes a distractor on another (building up interference and eliciting semantic 'blocking' effects). Patients with Wernicke's aphasia and semantic aphasia were distinguished according to lesion location in the temporal cortex, but in each group, some individuals had additional prefrontal damage. Both of these aspects of lesion variability-one that mapped onto classical 'syndromes' and one that did not-predicted aspects of the semantic 'access' deficit. Both semantic aphasia and Wernicke's aphasia cases showed multimodal semantic impairment, although as expected, the Wernicke's aphasia group showed greater deficits on auditory-verbal than picture judgements. Distribution of damage in the temporal lobe was crucial for predicting the initially 'beneficial' effects of stimulus repetition: cases with

  3. Multiple concurrent temporal recalibrations driven by audiovisual stimuli with apparent physical differences.

    PubMed

    Yuan, Xiangyong; Bi, Cuihua; Huang, Xiting

    2015-05-01

    Out-of-synchrony experiences can easily recalibrate one's subjective simultaneity point in the direction of the experienced asynchrony. Although temporal adjustment of multiple audiovisual stimuli has been recently demonstrated to be spatially specific, perceptual grouping processes that organize separate audiovisual stimuli into distinctive "objects" may play a more important role in forming the basis for subsequent multiple temporal recalibrations. We investigated whether apparent physical differences between audiovisual pairs that make them distinct from each other can independently drive multiple concurrent temporal recalibrations regardless of spatial overlap. Experiment 1 verified that reducing the physical difference between two audiovisual pairs diminishes the multiple temporal recalibrations by exposing observers to two utterances with opposing temporal relationships spoken by one single speaker rather than two distinct speakers at the same location. Experiment 2 found that increasing the physical difference between two stimuli pairs can promote multiple temporal recalibrations by complicating their non-temporal dimensions (e.g., disks composed of two rather than one attribute and tones generated by multiplying two frequencies); however, these recalibration aftereffects were subtle. Experiment 3 further revealed that making the two audiovisual pairs differ in temporal structures (one transient and one gradual) was sufficient to drive concurrent temporal recalibration. These results confirm that the more audiovisual pairs physically differ, especially in temporal profile, the more likely multiple temporal perception adjustments will be content-constrained regardless of spatial overlap. These results indicate that multiple temporal recalibrations are based secondarily on the outcome of perceptual grouping processes.

  4. An Audio-Visual Approach to Training

    ERIC Educational Resources Information Center

    Hearnshaw, Trevor

    1977-01-01

    Describes the development of an audiovisual training course in duck husbandry which consists of synchronized tapes and slides. The production of the materials, equipment needs, operations, cost, and advantages of the program are discussed. (BM)

  5. Audiovisual Rehabilitation in Hemianopia: A Model-Based Theoretical Investigation

    PubMed Central

    Magosso, Elisa; Cuppini, Cristiano; Bertini, Caterina

    2017-01-01

    Hemianopic patients exhibit visual detection improvement in the blind field when audiovisual stimuli are given in spatiotemporal coincidence. Beyond this “online” multisensory improvement, there is evidence of long-lasting, “offline” effects induced by audiovisual training: patients show improved visual detection and orientation after they were trained to detect and saccade toward visual targets given in spatiotemporal proximity with auditory stimuli. These effects are ascribed to the Superior Colliculus (SC), which is spared in these patients and plays a pivotal role in audiovisual integration and oculomotor behavior. Recently, we developed a neural network model of audiovisual cortico-collicular loops, including interconnected areas representing the retina, striate and extrastriate visual cortices, auditory cortex, and SC. The network simulated unilateral V1 lesion with possible spared tissue and reproduced “online” effects. Here, we extend the previous network to shed light on circuits, plastic mechanisms, and synaptic reorganization that can mediate the training effects and functionally implement visual rehabilitation. The network is enriched by the oculomotor SC-brainstem route and Hebbian mechanisms of synaptic plasticity, and is used to test different training paradigms (audiovisual/visual stimulation in eye-movements/fixed-eyes condition) on simulated patients. Results predict different training effects and associate them with synaptic changes in specific circuits. Thanks to the SC multisensory enhancement, the audiovisual training is able to effectively strengthen the retina-SC route, which in turn can foster reinforcement of the SC-brainstem route (this occurs only in the eye-movements condition) and reinforcement of the SC-extrastriate route (this occurs in the presence of surviving V1 tissue, regardless of eye condition). The retina-SC-brainstem circuit may mediate compensatory effects: the model assumes that reinforcement of this circuit can

  6. Audiovisual Temporal Processing and Synchrony Perception in the Rat.

    PubMed

    Schormans, Ashley L; Scott, Kaela E; Vo, Albert M Q; Tyker, Anna; Typlt, Marei; Stolzberg, Daniel; Allman, Brian L

    2016-01-01

    Extensive research on humans has improved our understanding of how the brain integrates information from our different senses, and has begun to uncover the brain regions and large-scale neural activity that contributes to an observer's ability to perceive the relative timing of auditory and visual stimuli. In the present study, we developed the first behavioral tasks to assess the perception of audiovisual temporal synchrony in rats. Modeled after the parameters used in human studies, separate groups of rats were trained to perform: (1) a simultaneity judgment task in which they reported whether audiovisual stimuli at various stimulus onset asynchronies (SOAs) were presented simultaneously or not; and (2) a temporal order judgment task in which they reported whether they perceived the auditory or visual stimulus to have been presented first. Furthermore, using in vivo electrophysiological recordings in the lateral extrastriate visual (V2L) cortex of anesthetized rats, we performed the first investigation of how neurons in the rat multisensory cortex integrate audiovisual stimuli presented at different SOAs. As predicted, rats ( n = 7) trained to perform the simultaneity judgment task could accurately (~80%) identify synchronous vs. asynchronous (200 ms SOA) trials. Moreover, the rats judged trials at 10 ms SOA to be synchronous, whereas the majority (~70%) of trials at 100 ms SOA were perceived to be asynchronous. During the temporal order judgment task, rats ( n = 7) perceived the synchronous audiovisual stimuli to be "visual first" for ~52% of the trials, and calculation of the smallest timing interval between the auditory and visual stimuli that could be detected in each rat (i.e., the just noticeable difference (JND)) ranged from 77 ms to 122 ms. Neurons in the rat V2L cortex were sensitive to the timing of audiovisual stimuli, such that spiking activity was greatest during trials when the visual stimulus preceded the auditory by 20-40 ms. Ultimately, given
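A just noticeable difference of the kind reported above is conventionally read off a psychometric function fitted to the temporal order judgments. A sketch on synthetic data, assuming a cumulative-Gaussian fit and the 75% convention (all numbers below are illustrative, not the study's):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Stimulus onset asynchronies in ms (negative = auditory first)
soas = np.array([-200, -100, -50, 0, 50, 100, 200], dtype=float)
# Synthetic proportion of "visual first" responses at each SOA
p_visual_first = np.array([0.05, 0.15, 0.30, 0.50, 0.70, 0.85, 0.95])

def psychometric(x, pss, sigma):
    """Cumulative Gaussian: pss = point of subjective simultaneity."""
    return norm.cdf(x, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(psychometric, soas, p_visual_first, p0=[0.0, 100.0])

# JND: SOA change needed to move from 50% to 75% "visual first" responses
jnd = sigma * norm.ppf(0.75)
print(round(pss, 1), round(jnd, 1))
```

Fitting in probability space, as here, is the simplest option; maximum-likelihood fits on raw trial counts are the more rigorous standard in psychophysics.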

  7. Audiovisual Rehabilitation in Hemianopia: A Model-Based Theoretical Investigation.

    PubMed

    Magosso, Elisa; Cuppini, Cristiano; Bertini, Caterina

    2017-01-01

    Hemianopic patients exhibit visual detection improvement in the blind field when audiovisual stimuli are given in spatiotemporal coincidence. Beyond this "online" multisensory improvement, there is evidence of long-lasting, "offline" effects induced by audiovisual training: patients show improved visual detection and orientation after they were trained to detect and saccade toward visual targets given in spatiotemporal proximity with auditory stimuli. These effects are ascribed to the Superior Colliculus (SC), which is spared in these patients and plays a pivotal role in audiovisual integration and oculomotor behavior. Recently, we developed a neural network model of audiovisual cortico-collicular loops, including interconnected areas representing the retina, striate and extrastriate visual cortices, auditory cortex, and SC. The network simulated unilateral V1 lesion with possible spared tissue and reproduced "online" effects. Here, we extend the previous network to shed light on circuits, plastic mechanisms, and synaptic reorganization that can mediate the training effects and functionally implement visual rehabilitation. The network is enriched by the oculomotor SC-brainstem route and Hebbian mechanisms of synaptic plasticity, and is used to test different training paradigms (audiovisual/visual stimulation in eye-movements/fixed-eyes condition) on simulated patients. Results predict different training effects and associate them with synaptic changes in specific circuits. Thanks to the SC multisensory enhancement, the audiovisual training is able to effectively strengthen the retina-SC route, which in turn can foster reinforcement of the SC-brainstem route (this occurs only in the eye-movements condition) and reinforcement of the SC-extrastriate route (this occurs in the presence of surviving V1 tissue, regardless of eye condition). The retina-SC-brainstem circuit may mediate compensatory effects: the model assumes that reinforcement of this circuit can translate visual

  8. Audiovisual Temporal Processing and Synchrony Perception in the Rat

    PubMed Central

    Schormans, Ashley L.; Scott, Kaela E.; Vo, Albert M. Q.; Tyker, Anna; Typlt, Marei; Stolzberg, Daniel; Allman, Brian L.

    2017-01-01

    Extensive research on humans has improved our understanding of how the brain integrates information from our different senses, and has begun to uncover the brain regions and large-scale neural activity that contributes to an observer’s ability to perceive the relative timing of auditory and visual stimuli. In the present study, we developed the first behavioral tasks to assess the perception of audiovisual temporal synchrony in rats. Modeled after the parameters used in human studies, separate groups of rats were trained to perform: (1) a simultaneity judgment task in which they reported whether audiovisual stimuli at various stimulus onset asynchronies (SOAs) were presented simultaneously or not; and (2) a temporal order judgment task in which they reported whether they perceived the auditory or visual stimulus to have been presented first. Furthermore, using in vivo electrophysiological recordings in the lateral extrastriate visual (V2L) cortex of anesthetized rats, we performed the first investigation of how neurons in the rat multisensory cortex integrate audiovisual stimuli presented at different SOAs. As predicted, rats (n = 7) trained to perform the simultaneity judgment task could accurately (~80%) identify synchronous vs. asynchronous (200 ms SOA) trials. Moreover, the rats judged trials at 10 ms SOA to be synchronous, whereas the majority (~70%) of trials at 100 ms SOA were perceived to be asynchronous. During the temporal order judgment task, rats (n = 7) perceived the synchronous audiovisual stimuli to be “visual first” for ~52% of the trials, and calculation of the smallest timing interval between the auditory and visual stimuli that could be detected in each rat (i.e., the just noticeable difference (JND)) ranged from 77 ms to 122 ms. Neurons in the rat V2L cortex were sensitive to the timing of audiovisual stimuli, such that spiking activity was greatest during trials when the visual stimulus preceded the auditory by 20–40 ms. Ultimately

  9. Keeping time in the brain: Autism spectrum disorder and audiovisual temporal processing.

    PubMed

    Stevenson, Ryan A; Segers, Magali; Ferber, Susanne; Barense, Morgan D; Camarata, Stephen; Wallace, Mark T

    2016-07-01

    A growing area of interest and relevance in the study of autism spectrum disorder (ASD) focuses on the relationship between multisensory temporal function and the behavioral, perceptual, and cognitive impairments observed in ASD. Atypical sensory processing is becoming increasingly recognized as a core component of autism, with evidence of atypical processing across a number of sensory modalities. These deviations from typical processing underscore the value of interpreting ASD within a multisensory framework. Furthermore, converging evidence illustrates that these differences in audiovisual processing may be specifically related to temporal processing. This review seeks to bridge the connection between temporal processing and audiovisual perception, and to elaborate on emerging data showing differences in audiovisual temporal function in autism. We also discuss the consequence of such changes, the specific impact on the processing of different classes of audiovisual stimuli (e.g. speech vs. nonspeech, etc.), and the presumptive brain processes and networks underlying audiovisual temporal integration. Finally, possible downstream behavioral implications, and possible remediation strategies are outlined. Autism Res 2016, 9: 720-738. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.

  10. A pilot study of audiovisual family meetings in the intensive care unit.

    PubMed

    de Havenon, Adam; Petersen, Casey; Tanana, Michael; Wold, Jana; Hoesch, Robert

    2015-10-01

    We hypothesized that virtual family meetings in the intensive care unit with conference calling or Skype videoconferencing would result in increased family member satisfaction and more efficient decision making. This is a prospective, nonblinded, nonrandomized pilot study. A 6-question survey was completed by family members after family meetings, some of which used conference calling or Skype by choice. Overall, 29 (33%) of the completed surveys came from audiovisual family meetings vs 59 (67%) from control meetings. The survey data were analyzed using hierarchical linear modeling, which did not find any significant group differences between satisfaction with the audiovisual meetings vs controls. There was no association between the audiovisual intervention and withdrawal of care (P = .682) or overall hospital length of stay (z = 0.885, P = .376). Although we do not report benefit from an audiovisual intervention, these results are preliminary and heavily influenced by notable limitations to the study. Given that the intervention was feasible in this pilot study, audiovisual and social media intervention strategies warrant additional investigation given their unique ability to facilitate communication among family members in the intensive care unit. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Emotional Congruence With Children Is Associated With Sexual Deviancy in Sexual Offenders Against Children.

    PubMed

    Hermann, Chantal A; McPhail, Ian V; Helmus, L Maaike; Hanson, R Karl

    2017-09-01

    Emotional congruence with children is a psychologically meaningful risk factor for sexual offending against children. The present study examines the correlates of emotional congruence with children in a sample of 424 adult male sexual offenders who started a period of community supervision in Canada, Alaska, and Iowa between 2001 and 2005. Consistent with previous work, we found sexual offenders against children high in emotional congruence with children were more likely to be sexually deviant, have poor sexual self-regulation, experience social loneliness, and have more distorted cognitions about sex with children. Overall, our findings are most consistent with a sexual deviancy model, with some support for a blockage model.

  12. A Joint Investigation of Semantic Facilitation and Semantic Interference in Continuous Naming

    ERIC Educational Resources Information Center

    Scaltritti, Michele; Peressotti, Francesca; Navarrete, Eduardo

    2017-01-01

    When speakers name multiple semantically related items, opposing effects can be found. Semantic facilitation is found when naming 2 semantically related items in a row. In contrast, semantic interference is found when speakers name semantically related items separated by 1 or more intervening unrelated items. This latter form of interference is…

  13. Semantic Interference and Facilitation: Understanding the Integration of Spatial Distance and Conceptual Similarity During Sentence Reading.

    PubMed

    Guerra, Ernesto; Knoeferle, Pia

    2018-01-01

    Existing evidence has shown a processing advantage (or facilitation) when representations derived from a non-linguistic context (spatial proximity depicted by gambling cards moving together) match the semantic content of an ensuing sentence. A match, inspired by conceptual metaphors such as 'similarity is closeness' would, for instance, involve cards moving closer together and the sentence relates similarity between abstract concepts such as war and battle. However, other studies have reported a disadvantage (or interference) for congruence between the semantic content of a sentence and representations of spatial distance derived from this sort of non-linguistic context. In the present article, we investigate the cognitive mechanisms underlying the interaction between the representations of spatial distance and sentence processing. In two eye-tracking experiments, we tested the predictions of a mechanism that considers the competition, activation, and decay of visually and linguistically derived representations as key aspects in determining the qualitative pattern and time course of that interaction. Critical trials presented two playing cards, each showing a written abstract noun; the cards turned around, obscuring the nouns, and moved either farther apart or closer together. Participants then read a sentence expressing either semantic similarity or difference between these two nouns. When instructed to attend to the nouns on the cards (Experiment 1), participants' total reading times revealed interference between spatial distance (e.g., closeness) and semantic relations (similarity) as soon as the sentence explicitly conveyed similarity. But when instructed to attend to the cards (Experiment 2), cards approaching (vs. moving apart) elicited first interference (when similarity was implicit) and then facilitation (when similarity was made explicit) during sentence reading. We discuss these findings in the context of a competition mechanism of interference and

  14. Exploring Student Perceptions of Audiovisual Feedback via Screencasting in Online Courses

    ERIC Educational Resources Information Center

    Mathieson, Kathleen

    2012-01-01

    Using Moore's (1993) theory of transactional distance as a framework, this action research study explored students' perceptions of audiovisual feedback provided via screencasting as a supplement to text-only feedback. A crossover design was employed to ensure that all students experienced both text-only and text-plus-audiovisual feedback and to…

  15. Disentangling Genuine Semantic Stroop Effects in Reading from Contingency Effects: On the Need for Two Neutral Baselines

    PubMed Central

    Lorentz, Eric; McKibben, Tessa; Ekstrand, Chelsea; Gould, Layla; Anton, Kathryn; Borowsky, Ron

    2016-01-01

    The automaticity of reading is often explored through the Stroop effect, whereby color-naming is affected by color words. Color associates (e.g., “sky”) also produce a Stroop effect, suggesting that automatic reading occurs through to the level of semantics, even when reading sub-lexically (e.g., the pseudohomophone “skigh”). However, several previous experiments have confounded congruency with contingency learning, whereby faster responding occurs for more frequent stimuli. Contingency effects reflect a higher frequency-pairing of the word with a font color in the congruent condition than in the incongruent condition due to the limited set of congruent pairings. To determine the extent to which the Stroop effect can be attributed to contingency learning of font colors paired with lexical (word-level) and sub-lexical (phonetically decoded) letter strings, as well as assess facilitation and interference relative to contingency effects, we developed two neutral baselines: each one matched on pair-frequency for congruent and incongruent color words. In Experiments 1 and 3, color words (e.g., “blue”) and their pseudohomophones (e.g., “bloo”) produced significant facilitation and interference relative to neutral baselines, regardless of whether the onset (i.e., first phoneme) was matched to the color words. Color associates (e.g., “ocean”) and their pseudohomophones (e.g., “oshin”), however, showed no significant facilitation or interference relative to onset matched neutral baselines (Experiment 2). When onsets were unmatched, color associate words produced consistent facilitation on RT (e.g., “ocean” vs. “dozen”), but pseudohomophones (e.g., “oshin” vs. “duhzen”) failed to produce facilitation or interference. Our findings suggest that the Stroop effects for color and associated stimuli are sensitive to the type of neutral baseline used, as well as stimulus type (word vs. pseudohomophone). In general, contingency learning plays

  16. Generative Semantics.

    ERIC Educational Resources Information Center

    King, Margaret

    The first section of this paper deals with the attempts within the framework of transformational grammar to make semantics a systematic part of linguistic description, and outlines the characteristics of the generative semantics position. The second section takes a critical look at generative semantics in its later manifestations, and makes a case…

  17. Preserved Discrimination Performance and Neural Processing during Crossmodal Attention in Aging

    PubMed Central

    Mishra, Jyoti; Gazzaley, Adam

    2013-01-01

    In a recent study in younger adults (19-29 year olds) we showed evidence that distributed audiovisual attention resulted in improved discrimination performance for audiovisual stimuli compared to focused visual attention. Here, we extend our findings to healthy older adults (60-90 year olds), showing that performance benefits of distributed audiovisual attention in this population match those of younger adults. Specifically, improved performance was revealed in faster response times for semantically congruent audiovisual stimuli during distributed relative to focused visual attention, without any differences in accuracy. For semantically incongruent stimuli, discrimination accuracy was significantly improved during distributed relative to focused attention. Furthermore, event-related neural processing showed intact crossmodal integration in higher performing older adults similar to younger adults. Thus, there was insufficient evidence to support an age-related deficit in crossmodal attention. PMID:24278464

  18. Real-time Astrometry Using Phase Congruency

    NASA Astrophysics Data System (ADS)

    Lambert, A.; Polo, M.; Tang, Y.

    Phase congruency is a computer vision technique that performs well for determining the tracks of optical objects (Flewelling, AMOS 2014). We report on a real-time implementation using an FPGA and CMOS image sensor, with on-sky data. The lightweight instrument can provide tracking update signals to the telescope mount, as well as detect abnormal objects in the scene.
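Phase congruency marks locations where a signal's Fourier components come into phase, which is what makes it robust to illumination changes when detecting edges and tracks. A toy one-dimensional sketch using log-Gabor filters (all constants are illustrative; practical detectors such as Kovesi's add noise compensation, frequency-spread weighting, and 2D orientations):

```python
import numpy as np

def phase_congruency_1d(signal, n_scales=4, min_wavelength=4.0, mult=2.0, sigma_ratio=0.55):
    """Toy 1D phase congruency: |sum of complex filter responses| / sum of amplitudes.
    Filters keep only positive frequencies, so each response is analytic-style
    (real part = even-symmetric output, imaginary part = odd-symmetric output)."""
    n = len(signal)
    spectrum = np.fft.fft(signal)
    freqs = np.fft.fftfreq(n)
    positive = freqs > 0
    sum_response = np.zeros(n, dtype=complex)
    sum_amplitude = np.zeros(n)
    wavelength = min_wavelength
    for _ in range(n_scales):
        f0 = 1.0 / wavelength
        log_gabor = np.zeros(n)
        log_gabor[positive] = np.exp(
            -np.log(freqs[positive] / f0) ** 2 / (2 * np.log(sigma_ratio) ** 2))
        response = np.fft.ifft(spectrum * log_gabor)
        sum_response += response
        sum_amplitude += np.abs(response)
        wavelength *= mult
    # Ratio is 1 where all scales agree in phase, lower where phases disperse
    return np.abs(sum_response) / (sum_amplitude + 1e-9)

# A rectangular pulse: congruency should be near 1 at the edges (indices ~32, ~96)
x = np.zeros(128)
x[32:96] = 1.0
pc = phase_congruency_1d(x)
```

Because the measure is a normalized ratio, it responds equally to faint and bright features, which is the property that makes it attractive for streak detection in astrometry.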

  19. 36 CFR 1237.14 - What are the additional scheduling requirements for audiovisual, cartographic, and related records?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... scheduling requirements for audiovisual, cartographic, and related records? 1237.14 Section 1237.14 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL... audiovisual, cartographic, and related records? The disposition instructions should also provide that...

  20. Psychophysiological effects of audiovisual stimuli during cycle exercise.

    PubMed

    Barreto-Silva, Vinícius; Bigliassi, Marcelo; Chierotti, Priscila; Altimari, Leandro R

    2018-05-01

    Immersive environments induced by audiovisual stimuli are hypothesised to facilitate the control of movements and ameliorate fatigue-related symptoms during exercise. The objective of the present study was to investigate the effects of pleasant and unpleasant audiovisual stimuli on perceptual and psychophysiological responses during moderate-intensity exercises performed on an electromagnetically braked cycle ergometer. Twenty young adults were administered three experimental conditions in a randomised and counterbalanced order: unpleasant stimulus (US; e.g. images depicting laboured breathing); pleasant stimulus (PS; e.g. images depicting pleasant emotions); and neutral stimulus (NS; e.g. neutral facial expressions). The exercise lasted 10 min (2 min of warm-up + 6 min of exercise + 2 min of warm-down). During all conditions, the rate of perceived exertion and heart rate variability were monitored to further our understanding of the moderating influence of audiovisual stimuli on perceptual and psychophysiological responses, respectively. The results of the present study indicate that PS ameliorated fatigue-related symptoms and reduced the physiological stress imposed by the exercise bout. Conversely, US increased the global activity of the autonomic nervous system and increased exertional responses to a greater degree when compared to PS. Accordingly, audiovisual stimuli appear to induce a psychophysiological response in which individuals visualise themselves within the story presented in the video. In such instances, individuals appear to copy the behaviour observed in the videos as if the situation was real. This mirroring mechanism has the potential to up-/down-regulate the cardiac work as if in fact the exercise intensities were different in each condition.

  1. On Response Bias in the Face Congruency Effect for Internal and External Features

    PubMed Central

    Meinhardt, Günter; Meinhardt-Injac, Bozana; Persike, Malte

    2017-01-01

    Some years ago Cheung et al. (2008) proposed the complete design (CD) for measuring the failure of selective attention in composite objects. Since the CD is a fully balanced design, analysis of response bias may reveal potential effects of the experimental manipulation, the stimulus material, and/or attributes of the observers. Here we used the CD to test whether external features modulate perception of internal features with the context congruency paradigm (Nachson et al., 1995; Meinhardt-Injac et al., 2010) in a larger sample of N = 303 subjects. We found a large congruency effect (Cohen's d = 1.78), which was attenuated by face inversion (d = 1.32). The congruency relation also strongly modulated response bias. In incongruent trials the proportion of “different” responses was much larger than in congruent trials (d = 0.79), which was again attenuated by face inversion (d = 0.43). Because in incongruent trials the wholes formed by integrating external and internal features are always different, while in congruent trials same and different wholes occur with the same frequency, a congruency related bias effect is expected from holistic integration. Our results suggest two behavioral markers of holistic processing in the context congruency paradigm: a performance advantage in congruent compared to incongruent trials, and a tendency toward more “different” responses in incongruent, compared to congruent trials. Since the results for both markers differed only quantitatively in upright and inverted presentation, our findings indicate no change of the face processing mode by picture plane rotation. A potential transfer to the composite face paradigm is discussed. PMID:29089880
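The effect sizes quoted above are Cohen's d. For a within-subject congruency effect, d is commonly computed from per-subject difference scores; a sketch on invented response times (the study's raw data are not in the abstract):

```python
import numpy as np

def cohens_d_paired(congruent, incongruent):
    """Cohen's d for paired samples: mean of the differences
    divided by the standard deviation of the differences."""
    diff = np.asarray(incongruent, dtype=float) - np.asarray(congruent, dtype=float)
    return diff.mean() / diff.std(ddof=1)

# Hypothetical per-subject mean response times (ms) in the two conditions
congruent_rt   = [620, 580, 650, 600, 700, 640]
incongruent_rt = [660, 650, 640, 690, 730, 700]

d = cohens_d_paired(congruent_rt, incongruent_rt)  # positive: incongruent is slower
print(round(d, 2))
```

Note that the difference-score variant of d is not directly comparable to the pooled-SD variant used for between-group designs; papers should state which one they report.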

  2. Criminal Justice Audiovisual Materials Directory.

    ERIC Educational Resources Information Center

    Law Enforcement Assistance Administration (Dept. of Justice), Washington, DC.

    This source directory of audiovisual materials for the education, training, and orientation of those in the criminal justice field is divided into five parts covering the courts, police techniques and training, prevention, prisons and rehabilitation/correction, and public education. Each entry includes a brief description of the product, the time…

  3. Criminal Justice Audiovisual Materials Directory.

    ERIC Educational Resources Information Center

    Law Enforcement Assistance Administration (Dept. of Justice), Washington, DC.

    This is the third edition of a source directory of audiovisual materials for the education, training, and orientation of those in the criminal justice field. It is divided into five parts covering the courts, police techniques and training, prevention, prisons and rehabilitation/correction, and public education. Each entry includes a brief…

  4. Planning and Producing Audiovisual Materials.

    ERIC Educational Resources Information Center

    Kemp, Jerrold E.

    The first few chapters of this book are devoted to an examination of the changing character of audiovisual materials; instructional design and the selection of media to serve specific objectives; and principles of perception, communication, and learning. Relevant research findings in the field are reviewed. The basic techniques of planning…

  5. Medial Unicondylar Knee Arthroplasty Improves Patellofemoral Congruence: a Possible Mechanistic Explanation for Poor Association Between Patellofemoral Degeneration and Clinical Outcome.

    PubMed

    Thein, Ran; Zuiderbaan, Hendrik A; Khamaisy, Saker; Nawabi, Danyal H; Poultsides, Lazaros A; Pearle, Andrew D

    2015-11-01

    The purpose was to determine the effect of medial fixed-bearing unicondylar knee arthroplasty (UKA) on postoperative patellofemoral joint (PFJ) congruence and to analyze the effect of preoperative PFJ degeneration on clinical outcome. We retrospectively reviewed 110 patients (113 knees) who underwent medial UKA. Radiographs were evaluated to ascertain PFJ degenerative changes and congruence. Clinical outcomes were assessed preoperatively and postoperatively. The postoperative absolute patellar congruence angle (10.05 ± 10.28) was significantly improved compared with the preoperative value (14.23 ± 11.22) (P = 0.0038). No correlation was found between preoperative PFJ congruence or degeneration severity and WOMAC scores at two-year follow-up. Preoperative PFJ congruence and degenerative changes do not affect UKA clinical outcomes. This finding may be explained by the postoperative improvement in PFJ congruence. Copyright © 2015 Elsevier Inc. All rights reserved.
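
    The pre/post comparison of the absolute patellar congruence angle is a within-subject contrast, for which a paired t statistic is the standard tool. A minimal sketch follows; the angle values are invented for illustration and are not the study's raw data.

```python
import math
import statistics

def paired_t(pre, post):
    """Paired t statistic and degrees of freedom for pre/post measurements."""
    diffs = [a - b for a, b in zip(pre, post)]
    n = len(diffs)
    t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
    return t, n - 1

# Invented congruence angles in degrees (not the study's raw data):
pre_op = [14.0, 20.0, 9.0, 18.0, 12.0, 16.0]
post_op = [10.0, 14.0, 8.0, 12.0, 9.0, 11.0]
t_stat, dof = paired_t(pre_op, post_op)  # positive t: angles decreased after surgery
```

    The resulting t statistic is compared against the t distribution with n - 1 degrees of freedom to obtain a P value like the reported P = 0.0038.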

  6. Trust estimation of the semantic web using semantic web clustering

    NASA Astrophysics Data System (ADS)

    Shirgahi, Hossein; Mohsenzadeh, Mehran; Haj Seyyed Javadi, Hamid

    2017-05-01

    Development of the semantic web and social networks is undeniable in today's Internet world. The widespread nature of the semantic web makes trust in this field very challenging to assess. In recent years, extensive research has been done to estimate the trust of the semantic web. Since trust of the semantic web is a multidimensional problem, in this paper we used social network authority, page-link authority, and semantic authority parameters to assess the trust. Owing to the large space of the semantic network, we restricted the problem scope to clusters of semantic subnetworks, obtained the trust of each cluster's elements locally, and calculated the trust of outside resources according to their local trusts and the trust of clusters in each other. According to the experimental results, the proposed method achieves an F-score of more than 79%, which is about 11.9% higher on average than the Eigen, Tidal, and centralised trust methods. The mean error of the proposed method is 12.936, on average 9.75% lower than that of the Eigen and Tidal trust methods.
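
    The F-score reported above combines precision and recall. Here is a minimal sketch of how such a score is computed from prediction counts; the counts below are hypothetical, merely chosen to land near the reported 79% region.

```python
def f1_from_counts(tp, fp, fn):
    """F1 score from true-positive, false-positive and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for trust predictions in one cluster:
score = f1_from_counts(tp=80, fp=20, fn=22)
```

    Because F1 is the harmonic mean of precision and recall, it penalises methods that trade one heavily against the other.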

  7. Impact of value congruence on work-family conflicts: the mediating role of work-related support.

    PubMed

    Pan, Su-Ying; Yeh, Ying-Jung Yvonne

    2012-01-01

    Based on past research regarding the relationship between person-environment fit and work-family conflict (WFC), we examined the mediating effects of perceived organization/supervisor support on the relationship between person-organization/supervisor value congruence and WFC. A structural equation model was used to test three hypotheses using data collected from 637 workers in Taiwan. Person-organization value congruence regarding role boundaries was found to be positively correlated with employee perception of organizational support, resulting in reduced WFC. Person-supervisor value congruence regarding role boundaries also increased employee perception of organizational support, mediated by perceived supervisor support. Research and managerial implications are discussed.
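
    The mediation logic described above can be illustrated with a simplified product-of-coefficients sketch. This deliberately simplifies the structural equation model used in the study: path b is estimated here without controlling for X, and all scores are invented for illustration.

```python
import statistics

def slope(x, y):
    """Ordinary least-squares slope of y regressed on x."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Invented scores: value congruence (X), perceived support (M), WFC (Y).
X = [1, 2, 3, 4, 5, 6, 7, 8]
M = [2, 3, 3, 5, 6, 6, 8, 9]   # support rises with congruence (path a)
Y = [9, 8, 8, 6, 5, 5, 3, 2]   # conflict falls as support rises (path b)

a = slope(X, M)      # path a: congruence -> support
b = slope(M, Y)      # path b: support -> WFC (expected negative)
indirect = a * b     # product-of-coefficients estimate of mediation
```

    A negative indirect effect mirrors the finding that value congruence reduces WFC through perceived support.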

  8. Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study.

    PubMed

    Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong

    2015-01-01

    A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190-210 ms, for 1 kHz stimuli from 170-200 ms, for 2.5 kHz stimuli from 140-200 ms, and for 5 kHz stimuli from 100-200 ms. These findings suggest that a higher frequency sound signal paired with visual stimuli might be processed or integrated earlier, despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration in late-latency (300-340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirmed that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies.
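
    The abstract does not spell out how audiovisual integration was identified in the ERPs; a common approach in such studies is the additive model, which compares the audiovisual response against the sum of the unisensory responses within a latency window. The following sketch illustrates that idea under this assumption, with synthetic waveforms.

```python
def additive_difference(av, audio, visual):
    """AV - (A + V) at each sample; sustained nonzero values suggest integration."""
    return [x - (a + v) for x, a, v in zip(av, audio, visual)]

def window_mean(samples, fs, t0_ms, t1_ms):
    """Mean of a waveform inside a latency window, given the sampling rate in Hz."""
    i0 = int(t0_ms * fs / 1000)
    i1 = int(t1_ms * fs / 1000)
    return sum(samples[i0:i1]) / (i1 - i0)

# Synthetic waveforms (300 ms at 1 kHz): the AV response exceeds A + V
# by 1 unit between 100 and 200 ms, mimicking an integration effect.
fs = 1000
audio = [0.5] * 300
visual = [0.3] * 300
av = [a + v + (1.0 if 100 <= t < 200 else 0.0)
      for t, (a, v) in enumerate(zip(audio, visual))]

diff = additive_difference(av, audio, visual)
effect = window_mean(diff, fs, 100, 200)    # e.g. the 100-200 ms window
baseline = window_mean(diff, fs, 0, 100)    # outside the effect window
```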

  9. Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study

    PubMed Central

    Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong

    2015-01-01

    A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190–210 ms, for 1 kHz stimuli from 170–200 ms, for 2.5 kHz stimuli from 140–200 ms, and for 5 kHz stimuli from 100–200 ms. These findings suggest that a higher frequency sound signal paired with visual stimuli might be processed or integrated earlier, despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration in late-latency (300–340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirmed that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies. PMID:26384256

  10. Audio-visual temporal perception in children with restored hearing.

    PubMed

    Gori, Monica; Chilosi, Anna; Forli, Francesca; Burr, David

    2017-05-01

    It is not clear how audio-visual temporal perception develops in children with restored hearing. In this study we measured temporal discrimination thresholds with an audio-visual temporal bisection task in 9 deaf children with restored audition, and 22 typically hearing children. In typically hearing children, audition was more precise than vision, with no gain in multisensory conditions (as previously reported in Gori et al. (2012b)). However, deaf children with restored audition showed similar auditory and visual thresholds and some evidence of gain in audio-visual temporal multisensory conditions. Interestingly, we found a strong correlation between auditory weighting of multisensory signals and quality of language: patients who gave more weight to audition had better language skills. Similarly, auditory thresholds for the temporal bisection task were also a good predictor of language skills. This result supports the idea that temporal auditory processing is associated with language development. Copyright © 2017. Published by Elsevier Ltd.

  11. Visuotactile motion congruence enhances gamma-band activity in visual and somatosensory cortices.

    PubMed

    Krebber, Martin; Harwood, James; Spitzer, Bernhard; Keil, Julian; Senkowski, Daniel

    2015-08-15

    When touching and viewing a moving surface our visual and somatosensory systems receive congruent spatiotemporal input. Behavioral studies have shown that motion congruence facilitates interplay between visual and tactile stimuli, but the neural mechanisms underlying this interplay are not well understood. Neural oscillations play a role in motion processing and multisensory integration. They may also be crucial for visuotactile motion processing. In this electroencephalography study, we applied linear beamforming to examine the impact of visuotactile motion congruence on beta and gamma band activity (GBA) in visual and somatosensory cortices. Visual and tactile inputs comprised gratings that moved either in the same or different directions. Participants performed a target detection task that was unrelated to motion congruence. While there were no effects in the beta band (13-21 Hz), the power of GBA (50-80 Hz) in visual and somatosensory cortices was larger for congruent compared with incongruent motion stimuli. This suggests enhanced bottom-up multisensory processing when visual and tactile gratings moved in the same direction. Supporting its behavioral relevance, GBA was correlated with shorter reaction times in the target detection task. We conclude that motion congruence plays an important role for the integrative processing of visuotactile stimuli in sensory cortices, as reflected by oscillatory responses in the gamma band. Copyright © 2015 Elsevier Inc. All rights reserved.
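
    The beta (13-21 Hz) and gamma (50-80 Hz) contrasts above are band-power comparisons. As an illustration of the underlying quantity, here is a crude DFT-based band-power estimate on a synthetic one-channel trace; real analyses use beamforming and proper spectral estimation, so this is only a sketch.

```python
import cmath
import math

def band_power(signal, fs, f_lo, f_hi):
    """Average squared DFT magnitude of the bins falling inside [f_lo, f_hi] Hz."""
    n = len(signal)
    powers = []
    for k in range(n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            coef = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                       for t in range(n))
            powers.append(abs(coef) ** 2 / n)
    return sum(powers) / len(powers)

# Synthetic 1 s trace: a 60 Hz gamma component plus a weak 15 Hz beta ripple.
fs = 200
sig = [math.sin(2 * math.pi * 60 * t / fs) + 0.2 * math.sin(2 * math.pi * 15 * t / fs)
       for t in range(fs)]

gamma = band_power(sig, fs, 50, 80)  # the study's gamma window
beta = band_power(sig, fs, 13, 21)   # the study's beta window
```

    Comparing such band powers between congruent and incongruent conditions is the essence of the reported GBA effect.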

  12. 16 CFR 307.8 - Requirements for disclosure in audiovisual and audio advertising.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Requirements for disclosure in audiovisual and audio advertising. 307.8 Section 307.8 Commercial Practices FEDERAL TRADE COMMISSION REGULATIONS... ACT OF 1986 Advertising Disclosures § 307.8 Requirements for disclosure in audiovisual and audio...

  13. Guidelines for Audiovisual and Multimedia Materials in Libraries and Other Institutions. Audiovisual and Multimedia Section

    ERIC Educational Resources Information Center

    International Federation of Library Associations and Institutions (NJ1), 2004

    2004-01-01

    This set of guidelines, for audiovisual and multimedia materials in libraries of all kinds and other appropriate institutions, is the product of many years of consultation and collaborative effort. As early as 1972, The UNESCO (United Nations Educational, Scientific and Cultural Organization) Public Library Manifesto had stressed the need for…

  14. Audio-visual speech processing in age-related hearing loss: Stronger integration and increased frontal lobe recruitment.

    PubMed

    Rosemann, Stephanie; Thiel, Christiane M

    2018-07-15

    Hearing loss is associated with difficulties in understanding speech, especially under adverse listening conditions. In these situations, seeing the speaker improves speech intelligibility in hearing-impaired participants. On the neuronal level, previous research has shown cross-modal plastic reorganization in the auditory cortex following hearing loss leading to altered processing of auditory, visual and audio-visual information. However, how reduced auditory input affects audio-visual speech perception in hearing-impaired subjects is largely unknown. We here investigated the impact of mild to moderate age-related hearing loss on processing audio-visual speech using functional magnetic resonance imaging. Normal-hearing and hearing-impaired participants performed two audio-visual speech integration tasks: a sentence detection task inside the scanner and the McGurk illusion outside the scanner. Both tasks consisted of congruent and incongruent audio-visual conditions, as well as auditory-only and visual-only conditions. We found a significantly stronger McGurk illusion in the hearing-impaired participants, which indicates stronger audio-visual integration. Neurally, hearing loss was associated with an increased recruitment of frontal brain areas when processing incongruent audio-visual, auditory and also visual speech stimuli, which may reflect the increased effort to perform the task. Hearing loss modulated both the audio-visual integration strength measured with the McGurk illusion and brain activation in frontal areas in the sentence task, showing stronger integration and higher brain activation with increasing hearing loss. Incongruent compared to congruent audio-visual speech revealed an opposite brain activation pattern in left ventral postcentral gyrus in both groups, with higher activation in hearing-impaired participants in the incongruent condition. Our results indicate that already mild to moderate hearing loss impacts audio-visual speech processing

  15. Space-valence priming with subliminal and supraliminal words.

    PubMed

    Ansorge, Ulrich; Khalid, Shah; König, Peter

    2013-01-01

    To date it is unclear whether (1) awareness-independent non-evaluative semantic processes influence affective semantics and whether (2) awareness-independent affective semantics influence non-evaluative semantic processing. In the current study, we investigated these questions with the help of subliminal (masked) primes and visible targets in a space-valence across-category congruence effect. In line with (1), we found that subliminal space prime words influenced valence classification of supraliminal target words (Experiment 1): classifications were faster with a congruent prime (e.g., the prime "up" before the target "happy") than with an incongruent prime (e.g., the prime "up" before the target "sad"). In contrast to (2), no influence of subliminal valence primes on the classification of supraliminal space targets into up- and down-words was found (Experiment 2). Control conditions showed that standard masked response priming effects were found with both subliminal prime types, and that an across-category congruence effect was also found with supraliminal valence primes and spatial target words. The final Experiment 3 confirmed that the across-category congruence effect indeed reflected priming of target categorization of a relevant meaning category. Together, the data jointly confirmed prediction (1) that awareness-independent non-evaluative semantic priming influences valence judgments.

  16. Space-Valence Priming with Subliminal and Supraliminal Words

    PubMed Central

    Ansorge, Ulrich; Khalid, Shah; König, Peter

    2013-01-01

    To date it is unclear whether (1) awareness-independent non-evaluative semantic processes influence affective semantics and whether (2) awareness-independent affective semantics influence non-evaluative semantic processing. In the current study, we investigated these questions with the help of subliminal (masked) primes and visible targets in a space-valence across-category congruence effect. In line with (1), we found that subliminal space prime words influenced valence classification of supraliminal target words (Experiment 1): classifications were faster with a congruent prime (e.g., the prime “up” before the target “happy”) than with an incongruent prime (e.g., the prime “up” before the target “sad”). In contrast to (2), no influence of subliminal valence primes on the classification of supraliminal space targets into up- and down-words was found (Experiment 2). Control conditions showed that standard masked response priming effects were found with both subliminal prime types, and that an across-category congruence effect was also found with supraliminal valence primes and spatial target words. The final Experiment 3 confirmed that the across-category congruence effect indeed reflected priming of target categorization of a relevant meaning category. Together, the data jointly confirmed prediction (1) that awareness-independent non-evaluative semantic priming influences valence judgments. PMID:23439863

  17. How Children and Adults Produce and Perceive Uncertainty in Audiovisual Speech

    ERIC Educational Resources Information Center

    Krahmer, Emiel; Swerts, Marc

    2005-01-01

    We describe two experiments on signaling and detecting uncertainty in audiovisual speech by adults and children. In the first study, utterances from adult speakers and child speakers (aged 7-8) were elicited and annotated with a set of six audiovisual features. It was found that when adult speakers were uncertain they were more likely to produce…

  18. A General Audiovisual Temporal Processing Deficit in Adult Readers with Dyslexia

    ERIC Educational Resources Information Center

    Francisco, Ana A.; Jesse, Alexandra; Groen, Margriet A.; McQueen, James M.

    2017-01-01

    Purpose: Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. Method: We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of…

  19. Judging emotional congruency: Explicit attention to situational context modulates processing of facial expressions of emotion.

    PubMed

    Diéguez-Risco, Teresa; Aguado, Luis; Albert, Jacobo; Hinojosa, José Antonio

    2015-12-01

    The influence of explicit evaluative processes on the contextual integration of facial expressions of emotion was studied in a procedure that required the participants to judge the congruency of happy and angry faces with preceding sentences describing emotion-inducing situations. Judgments were faster on congruent trials in the case of happy faces and on incongruent trials in the case of angry faces. At the electrophysiological level, a congruency effect was observed in the face-sensitive N170 component that showed larger amplitudes on incongruent trials. An interactive effect of congruency and emotion appeared on the LPP (late positive potential), with larger amplitudes in response to happy faces that followed anger-inducing situations. These results show that the deliberate intention to judge the contextual congruency of facial expressions influences not only processes involved in affective evaluation such as those indexed by the LPP but also earlier processing stages that are involved in face perception. Copyright © 2015. Published by Elsevier B.V.

  20. Among nonagenarians, congruence between self-rated and proxy-rated health was low but both predicted mortality.

    PubMed

    Vuorisalmi, Merja; Sarkeala, Tytti; Hervonen, Antti; Jylhä, Marja

    2012-05-01

    The congruence between self-rated global health (SRH) and proxy-rated global health (PRH), the factors associated with congruence between SRH and PRH, and their associations with mortality are examined using data from the Vitality 90+ study. The data consist of 213 pairs of subjects (aged 90 years and older) and proxies. The relationship between SRH and PRH was analyzed by chi-square test and Cohen's kappa. Logistic regression analysis was used to identify the factors associated with congruence between the health ratings. The association of SRH and PRH with mortality was studied using Cox proportional hazard models. The subjects rated their health more negatively than the proxies. The kappa value indicated only slight congruence between SRH and PRH, and they also predicted mortality differently. Good self-reported functional ability was associated with congruence between SRH and PRH. The results imply that the evaluation processes of SRH and PRH differ, and the measures are not directly interchangeable. Both measures are useful health indicators in very old age but SRH cannot be replaced by PRH in analyses. Copyright © 2012 Elsevier Inc. All rights reserved.
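
    Cohen's kappa, used above to quantify self-proxy agreement, corrects raw agreement for the agreement expected by chance. A minimal sketch with hypothetical three-level health ratings; the invented data below, in which proxies rate more positively, yield a kappa in the "slight" range of the common Landis-Koch labels.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    chance = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - chance) / (1 - chance)

# Hypothetical three-level ratings (0 = poor, 1 = fair, 2 = good):
self_rated = [0, 0, 1, 1, 2, 0, 1, 2, 0, 1]
proxy_rated = [1, 0, 2, 1, 2, 1, 2, 2, 1, 2]  # proxies rate more positively
kappa = cohens_kappa(self_rated, proxy_rated)
```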

  1. Pain Anxiety and Its Association With Pain Congruence Trajectories During the Cold Pressor Task.

    PubMed

    Clark, Shannon M; Cano, Annmarie; Goubert, Liesbet; Vlaeyen, Johan W S; Wurm, Lee H; Corley, Angelia M

    2017-04-01

    Incongruence of pain severity ratings among people experiencing pain and their observers has been linked to psychological distress. Previous studies have measured pain rating congruence through static self-report, involving a single rating of pain; however, this method does not capture changes in ratings over time. The present study examined the extent to which partners were congruent on multiple ratings of a participant's pain severity during the cold pressor task. Furthermore, two components of pain anxiety, pain catastrophizing and perceived threat, were examined as predictors of pain congruence. Undergraduate couples in a romantic relationship (N = 127 dyads) participated in this study. Both partners completed measures of pain catastrophizing and perceived threat before randomization to their cold pressor participant or observer roles. Participants and observers rated the participant's pain in writing several times over the course of the task. On average, observers rated participants' pain as less severe than participants rated their own pain. In addition, congruence between partners increased over time as observers' ratings became more similar to participants' ratings. Finally, pain catastrophizing and perceived threat independently and jointly influenced the degree to which partners similarly rated the participant's pain. This article presents a novel application of the cold pressor task to show that pain rating congruence among romantic partners changes over time. These findings indicate that pain congruence is not static and is subject to pain anxiety in both partners. Copyright © 2016 American Pain Society. Published by Elsevier Inc. All rights reserved.

  2. "Pre-Semantic" Cognition Revisited: Critical Differences between Semantic Aphasia and Semantic Dementia

    ERIC Educational Resources Information Center

    Jefferies, Elizabeth; Rogers, Timothy T.; Hopper, Samantha; Lambon Ralph, Matthew A.

    2010-01-01

    Patients with semantic dementia show a specific pattern of impairment on both verbal and non-verbal "pre-semantic" tasks, e.g., reading aloud, past tense generation, spelling to dictation, lexical decision, object decision, colour decision and delayed picture copying. All seven tasks are characterised by poorer performance for items that are…

  3. Different levels of learning interact to shape the congruency sequence effect.

    PubMed

    Weissman, Daniel H; Hawks, Zoë W; Egner, Tobias

    2016-04-01

    The congruency effect in distracter interference tasks is often reduced after incongruent relative to congruent trials. Moreover, this congruency sequence effect (CSE) is influenced by learning related to concrete stimulus and response features as well as by learning related to abstract cognitive control processes. There is an ongoing debate, however, over whether interactions between these learning processes are best explained by an episodic retrieval account, an adaptation by binding account, or a cognitive efficiency account of the CSE. To make this distinction, we orthogonally manipulated the expression of these learning processes in a novel factorial design involving the prime-probe arrow task. In Experiment 1, these processes interacted in an over-additive fashion to influence CSE magnitude. In Experiment 2, we replicated this interaction while showing it was not driven by conditional differences in the size of the congruency effect. In Experiment 3, we ruled out an alternative account of this interaction as reflecting conditional differences in learning related to concrete stimulus and response features. These findings support an episodic retrieval account of the CSE, in which repeating a stimulus feature from the previous trial facilitates the retrieval and use of previous-trial control parameters, thereby boosting control in the current trial. In contrast, they do not fit with (a) an adaptation by binding account, in which CSE magnitude is directly related to the size of the congruency effect, or (b) a cognitive efficiency account, in which costly control processes are recruited only when behavioral adjustments cannot be mediated by low-level associative mechanisms. (c) 2016 APA, all rights reserved.

  4. Semantic Desktop

    NASA Astrophysics Data System (ADS)

    Sauermann, Leo; Kiesel, Malte; Schumacher, Kinga; Bernardi, Ansgar

    This contribution shows what the workplace of the future could look like and where the Semantic Web opens up new possibilities. To this end, approaches from the areas of the Semantic Web, knowledge representation, desktop applications, and visualisation are presented that allow a user's existing data to be reinterpreted and reused. The combination of the Semantic Web with desktop computers offers particular advantages: a paradigm known as the Semantic Desktop. The possibilities for application integration described here are not limited to the desktop, however, but can equally be used in web applications.

  5. Audio-visual sensory deprivation degrades visuo-tactile peri-personal space.

    PubMed

    Noel, Jean-Paul; Park, Hyeong-Dong; Pasqualini, Isabella; Lissek, Herve; Wallace, Mark; Blanke, Olaf; Serino, Andrea

    2018-05-01

    Self-perception is scaffolded upon the integration of multisensory cues on the body, the space surrounding the body (i.e., the peri-personal space; PPS), and from within the body. We asked whether reducing information available from external space would change: PPS, interoceptive accuracy, and self-experience. Twenty participants were exposed to 15 min of audio-visual deprivation and performed: (i) a visuo-tactile interaction task measuring their PPS; (ii) a heartbeat perception task measuring interoceptive accuracy; and (iii) a series of questionnaires related to self-perception and mental illness. These tasks were carried out in two conditions: while exposed to a standard sensory environment and under a condition of audio-visual deprivation. Results suggest that while PPS becomes ill defined after audio-visual deprivation, interoceptive accuracy is unaltered at a group-level, with some participants improving and some worsening in interoceptive accuracy. Interestingly, correlational individual differences analyses revealed that changes in PPS after audio-visual deprivation were related to interoceptive accuracy and self-reports of "unusual experiences" on an individual subject basis. Taken together, the findings argue for a relationship between the malleability of PPS, interoceptive accuracy, and an inclination toward aberrant ideation often associated with mental illness. Copyright © 2018. Published by Elsevier Inc.

  6. Catching Audiovisual Interactions With a First-Person Fisherman Video Game.

    PubMed

    Sun, Yile; Hickey, Timothy J; Shinn-Cunningham, Barbara; Sekuler, Robert

    2017-07-01

    The human brain is excellent at integrating information from different sources across multiple sensory modalities. To examine one particularly important form of multisensory interaction, we manipulated the temporal correlation between visual and auditory stimuli in a first-person fisherman video game. Subjects saw rapidly swimming fish whose size oscillated, either at 6 or 8 Hz. Subjects categorized each fish according to its rate of size oscillation, while trying to ignore a concurrent broadband sound seemingly emitted by the fish. In three experiments, categorization was faster and more accurate when the rate at which a fish oscillated in size matched the rate at which the accompanying, task-irrelevant sound was amplitude modulated. Control conditions showed that the difference between responses to matched and mismatched audiovisual signals reflected a performance gain in the matched condition, rather than a cost from the mismatched condition. The performance advantage with matched audiovisual signals was remarkably robust over changes in task demands between experiments. Performance with matched or unmatched audiovisual signals improved over successive trials at about the same rate, emblematic of perceptual learning in which visual oscillation rate becomes more discriminable with experience. Finally, analysis at the level of individual subjects' performance pointed to differences in the rates at which subjects can extract information from audiovisual stimuli.

  7. Evaluating and interpreting cross-taxon congruence: Potential pitfalls and solutions

    NASA Astrophysics Data System (ADS)

    Gioria, Margherita; Bacaro, Giovanni; Feehan, John

    2011-05-01

    Characterizing the relationship between different taxonomic groups is critical to identify potential surrogates for biodiversity. Previous studies have shown that cross-taxa relationships are generally weak and/or inconsistent. The difficulties in finding predictive patterns have often been attributed to the spatial and temporal scales of these studies and on the differences in the measure used to evaluate such relationships (species richness versus composition). However, the choice of the analytical approach used to evaluate cross-taxon congruence inevitably represents a major source of variation. Here, we described the use of a range of methods that can be used to comprehensively assess cross-taxa relationships. To do so, we used data for two taxonomic groups, wetland plants and water beetles, collected from 54 farmland ponds in Ireland. Specifically, we used the Pearson correlation and rarefaction curves to analyse patterns in species richness, while Mantel tests, Procrustes analysis, and co-correspondence analysis were used to evaluate congruence in species composition. We compared the results of these analyses and we described some of the potential pitfalls associated with the use of each of these statistical approaches. Cross-taxon congruence was moderate to strong, depending on the choice of the analytical approach, on the nature of the response variable, and on local and environmental conditions. Our findings indicate that multiple approaches and measures of community structure are required for a comprehensive assessment of cross-taxa relationships. In particular, we showed that selection of surrogate taxa in conservation planning should not be based on a single statistic expressing the degree of correlation in species richness or composition. Potential solutions to the analytical issues associated with the assessment of cross-taxon congruence are provided and the implications of our findings in the selection of surrogates for biodiversity are discussed.
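
    Of the compositional methods listed, the Mantel test is the most self-contained to sketch: it correlates the unrolled upper triangles of two distance matrices and assesses significance by permuting the rows and columns of one matrix. The pond dissimilarities below are toy values, not the study's data.

```python
import random

def mantel(dist_a, dist_b, n_perm=999, seed=7):
    """Permutation Mantel test: correlation between two distance matrices."""
    n = len(dist_a)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]

    def corr(mat_x, mat_y, order):
        xs = [mat_x[i][j] for i, j in pairs]
        ys = [mat_y[order[i]][order[j]] for i, j in pairs]
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = (sum((x - mx) ** 2 for x in xs)
               * sum((y - my) ** 2 for y in ys)) ** 0.5
        return num / den

    identity = list(range(n))
    r_obs = corr(dist_a, dist_b, identity)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        perm = identity[:]
        rng.shuffle(perm)  # permute rows/columns of the second matrix
        if corr(dist_a, dist_b, perm) >= r_obs:
            hits += 1
    p = (hits + 1) / (n_perm + 1)
    return r_obs, p

# Toy site-by-site dissimilarities for plants (A) and beetles (B), 5 ponds:
A = [[0, 2, 4, 6, 8], [2, 0, 2, 4, 6], [4, 2, 0, 2, 4],
     [6, 4, 2, 0, 2], [8, 6, 4, 2, 0]]
B = [[0, 1, 3, 5, 9], [1, 0, 2, 4, 7], [3, 2, 0, 2, 5],
     [5, 4, 2, 0, 3], [9, 7, 5, 3, 0]]
r, p = mantel(A, B)
```

    Because both toy matrices reflect the same site gradient, the observed correlation is high and few permutations match it, giving a small p value.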

  8. Reducing Stereotype Threat With Embodied Triggers: A Case of Sensorimotor-Mental Congruence.

    PubMed

    Chalabaev, Aïna; Radel, Rémi; Masicampo, E J; Dru, Vincent

    2016-08-01

    In four experiments, we tested whether embodied triggers may reduce stereotype threat. We predicted that left-side sensorimotor inductions would increase cognitive performance under stereotype threat, because such inductions are linked to avoidance motivation among right-handers. This sensorimotor-mental congruence hypothesis rests on regulatory fit research showing that stereotype threat may be reduced by avoidance-oriented interventions, and motor congruence research showing positive effects when two parameters of a motor action activate the same motivational system (avoidance or approach). Results indicated that under stereotype threat, cognitive performance was higher when participants contracted their left hand (Study 1) or when the stimuli were presented on the left side of the visual field (Studies 2-4), as compared with right-hand contraction or right-side visual stimulation. These results were observed on math (Studies 1, 2, and 4) and Stroop (Study 3) performance. An indirect effect of congruence on math performance through subjective fluency was also observed. © 2016 by the Society for Personality and Social Psychology, Inc.

  9. Enhancing audiovisual experience with haptic feedback: a survey on HAV.

    PubMed

    Danieau, F; Lecuyer, A; Guillotel, P; Fleureau, J; Mollet, N; Christie, M

    2013-01-01

    Haptic technology has been widely employed in applications ranging from teleoperation and medical simulation to art and design, including entertainment, flight simulation, and virtual reality. Today there is a growing interest among researchers in integrating haptic feedback into audiovisual systems. A new medium emerges from this effort: haptic-audiovisual (HAV) content. This paper presents the techniques, formalisms, and key results pertinent to this medium. We first review the three main stages of the HAV workflow: the production, distribution, and rendering of haptic effects. We then highlight the pressing necessity for evaluation techniques in this context and discuss the key challenges in the field. By building on existing technologies and tackling the specific challenges of the enhancement of audiovisual experience with haptics, we believe the field presents exciting research perspectives whose financial and societal stakes are significant.

  10. Judgments of auditory-visual affective congruence in adolescents with and without autism: a pilot study of a new task using fMRI.

    PubMed

    Loveland, Katherine A; Steinberg, Joel L; Pearson, Deborah A; Mansour, Rosleen; Reddoch, Stacy

    2008-10-01

    One of the most widely reported developmental deficits associated with autism is difficulty perceiving and expressing emotion appropriately. We examined brain activation associated with performance on a new task, the Emotional Congruence Task, which requires judging the affective congruence of facial expression and voice, as compared with judging their sex congruence. Participants in this pilot study were adolescents with normal IQ, with (n = 5) or without (n = 4) autism. In the emotional congruence condition, as compared to the sex congruence condition, controls had significantly more activation than the Autism group in the orbitofrontal cortex, the superior temporal, parahippocampal, and posterior cingulate gyri, and occipital regions. Unlike controls, the Autism group did not have significantly greater prefrontal activation during the emotional congruence condition, but did during the sex congruence condition. Results indicate that the Emotional Congruence Task can be used successfully to assess brain activation and behavior associated with the integration of auditory and visual information for emotion. While the group sizes were small, the results suggest that brain activity while performing the Emotional Congruence Task differed between adolescents with and without autism in fronto-limbic areas and in the superior temporal region. These findings must be confirmed using larger samples of participants.

  11. Congruences between modular forms: raising the level and dropping Euler factors.

    PubMed

    Diamond, F

    1997-10-14

    We discuss the relationship among certain generalizations of results of Hida, Ribet, and Wiles on congruences between modular forms. Hida's result accounts for congruences in terms of the value of an L-function, and Ribet's result is related to the behavior of the period that appears there. Wiles' theory leads to a class number formula relating the value of the L-function to the size of a Galois cohomology group. The behavior of the period is used to deduce that a formula at "nonminimal level" is obtained from one at "minimal level" by dropping Euler factors from the L-function.

  12. Temporal Ventriloquism Reveals Intact Audiovisual Temporal Integration in Amblyopia.

    PubMed

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-02-01

    We have shown previously that amblyopia involves impaired detection of asynchrony between auditory and visual events. To distinguish whether this impairment represents a defect in temporal integration or nonintegrative multisensory processing (e.g., cross-modal matching), we used the temporal ventriloquism effect, in which visual temporal order judgment (TOJ) is normally enhanced by a lagging auditory click. Participants with amblyopia (n = 9) and normally sighted controls (n = 9) performed a visual TOJ task. Pairs of clicks accompanied the two lights such that the first click preceded the first light, or the second click lagged the second light, by 100, 200, or 450 ms. Baseline audiovisual synchrony and visual-only conditions were also tested. Within both groups, just noticeable differences for the visual TOJ task were significantly reduced compared with baseline in the 100- and 200-ms click lag conditions. Within the amblyopia group, poorer stereo acuity and poorer visual acuity in the amblyopic eye were significantly associated with greater enhancement in visual TOJ performance in the 200-ms click lag condition. Audiovisual temporal integration is intact in amblyopia, as indicated by perceptual enhancement in the temporal ventriloquism effect. Furthermore, poorer stereo acuity and poorer visual acuity in the amblyopic eye are associated with a widened temporal binding window for the effect. These findings suggest that previously reported abnormalities in audiovisual multisensory processing may result from impaired cross-modal matching rather than a diminished capacity for temporal audiovisual integration.

  13. Audiovisual Integration Delayed by Stimulus Onset Asynchrony Between Auditory and Visual Stimuli in Older Adults.

    PubMed

    Ren, Yanna; Yang, Weiping; Nakahashi, Kohei; Takahashi, Satoshi; Wu, Jinglong

    2017-02-01

    Although neuronal studies have shown that audiovisual integration is regulated by temporal factors, there is still little knowledge about the impact of temporal factors on audiovisual integration in older adults. To clarify how stimulus onset asynchrony (SOA) between auditory and visual stimuli modulates age-related audiovisual integration, 20 younger adults (21-24 years) and 20 older adults (61-80 years) were instructed to perform an auditory or visual stimulus discrimination experiment. The results showed that in younger adults, audiovisual integration was altered from an enhancement (AV, A ± 50 V) to a depression (A ± 150 V). In older adults, the pattern of alteration was similar to that of younger adults with the expansion of SOA; however, older adults showed significantly delayed onset of the time window of integration and delayed peak latency in all conditions, which further demonstrated that audiovisual integration was delayed more severely with the expansion of SOA, especially in the peak latency for V-preceded-A conditions. Our study suggests that audiovisual facilitative integration occurs only within a certain SOA range (e.g., -50 to 50 ms) in both younger and older adults. Moreover, our results confirmed that responses in older adults were slowed and provided empirical evidence that integration ability is much more sensitive to the temporal alignment of audiovisual stimuli in older adults.

  14. The performance of the Congruence Among Distance Matrices (CADM) test in phylogenetic analysis

    PubMed Central

    2011-01-01

    Background: CADM is a statistical test used to estimate the level of Congruence Among Distance Matrices. It has been shown in previous studies to have a correct rate of type I error and good power when applied to dissimilarity matrices and to ultrametric distance matrices. Contrary to most other tests of incongruence used in phylogenetic analysis, the null hypothesis of the CADM test assumes complete incongruence of the phylogenetic trees instead of congruence. In this study, we performed computer simulations to assess the type I error rate and power of the test. It was applied to additive distance matrices representing phylogenies and to genetic distance matrices obtained from nucleotide sequences of different lengths that were simulated on randomly generated trees of varying sizes, and under different evolutionary conditions. Results: Our results showed that the test has an accurate type I error rate and good power. As expected, power increased with the number of objects (i.e., taxa), the number of partially or completely congruent matrices and the level of congruence among distance matrices. Conclusions: Based on our results, we suggest that CADM is an excellent candidate to test for congruence and, when present, to estimate its level in phylogenomic studies where numerous genes are analysed simultaneously. PMID:21388552
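    The logic of the test can be sketched numerically: Kendall's W is computed over the ranked upper triangles of the matrices, and the null hypothesis of complete incongruence is simulated by permuting the objects of each matrix independently. This is a simplified illustration of the idea (the published test uses tie-corrected ranks and a refined permutation scheme), with toy matrices in place of real phylogenetic distances:

```python
import numpy as np

def kendall_w(ranks):
    """Kendall's coefficient of concordance for m rank vectors over the
    same n items (no tie correction, for simplicity)."""
    m, n = ranks.shape
    col_sums = ranks.sum(axis=0)
    s = ((col_sums - col_sums.mean()) ** 2).sum()
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

def cadm(dists, n_perm=199, seed=0):
    """CADM-style permutation test: H0 is complete incongruence,
    simulated by permuting the objects of each matrix independently."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(dists[0], k=1)
    rank = lambda d: d[iu].argsort().argsort().astype(float) + 1.0
    w_obs = kendall_w(np.array([rank(d) for d in dists]))
    hits = 0
    for _ in range(n_perm):
        rows = []
        for d in dists:
            p = rng.permutation(d.shape[0])
            rows.append(rank(d[p][:, p]))
        if kendall_w(np.array(rows)) >= w_obs:
            hits += 1
    return w_obs, (hits + 1) / (n_perm + 1)

# three perfectly congruent toy "distance" matrices on 8 taxa
rng = np.random.default_rng(7)
base = rng.random((8, 8))
base = (base + base.T) / 2
np.fill_diagonal(base, 0.0)
mats = [base + 0.01 * k for k in range(3)]
for m in mats:
    np.fill_diagonal(m, 0.0)
w, p = cadm(mats)
```

    With congruent matrices W is close to 1 and the incongruence null is rejected; fully shuffled matrices would give W near 1/m, the chance level for m matrices.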

  15. [Intermodal timing cues for audio-visual speech recognition].

    PubMed

    Hashimoto, Masahiro; Kumashiro, Masaharu

    2004-06-01

    The purpose of this study was to investigate the limitations of lip-reading advantages for Japanese young adults by desynchronizing visual and auditory information in speech. In the experiment, audio-visual speech stimuli were presented under six test conditions: audio-alone, and audio-visually with either 0, 60, 120, 240 or 480 ms of audio delay. The stimuli were video recordings of the face of a female Japanese speaker saying long and short Japanese sentences. The intelligibility of the audio-visual stimuli was measured as a function of audio delay in sixteen untrained young subjects. Speech intelligibility under the audio-delay conditions of less than 120 ms was significantly better than that under the audio-alone condition. Notably, the delay of 120 ms corresponded to the mean mora duration measured for the audio stimuli. The results imply that audio delays of up to 120 ms do not disrupt the lip-reading advantage, because visual and auditory information in speech appear to be integrated on a syllabic time scale. Potential applications of this research include noisy workplaces in which a worker must extract relevant speech from competing noise.

  16. Teaching with audiovisual recordings of consultations

    PubMed Central

    Davis, R. H.; Jenkins, M.; Smail, S. A.; Stott, N. C. H.; Verby, J.; Wallace, B. B.

    1980-01-01

    The experience gained from two years' teaching with audiovisual recordings of consultations of both undergraduates and postgraduates is presented. Some basic teaching rules are suggested and further applications of the technique are discussed. PMID:6157811

  17. Semantic framework for mapping object-oriented model to semantic web languages

    PubMed Central

    Ježek, Petr; Mouček, Roman

    2015-01-01

    The article discusses two main approaches to building semantic structures for electrophysiological metadata: the use of conventional data structures, repositories, and programming languages on the one hand, and the use of formal representations of ontologies known from knowledge representation, such as description logics or semantic web languages, on the other. Although knowledge engineering offers languages with richer semantic expressiveness and technologically advanced approaches, conventional data structures and repositories remain popular among developers, administrators and users because of their simplicity, overall intelligibility, and lower demands on technical equipment. The choice of conventional data resources and repositories, however, raises the question of how and where to add semantics that cannot be naturally expressed within them. As one possible solution, this semantics can be added into the structures of the programming language that accesses and processes the underlying data. To support this idea we introduced a software prototype that enables its users to add semantically richer expressions to Java object-oriented code. This approach does not burden users with additional demands on the programming environment, since reflective Java annotations were used as an entry point for these expressions. Moreover, the additional semantics need not be written by the programmer directly in the code; it can be collected from non-programmers using a graphical user interface. A mapping that allows the transformation of the semantically enriched Java code into the Semantic Web language OWL was proposed and implemented in a library named the Semantic Framework. This approach was validated by the integration of the Semantic Framework in the EEG/ERP Portal and by the subsequent registration of the EEG/ERP Portal in the Neuroscience Information Framework. PMID:25762923

  19. Color selectivity of the spatial congruency effect: evidence from the focused attention paradigm.

    PubMed

    Makovac, Elena; Gerbino, Walter

    2014-01-01

    The multisensory response enhancement (MRE), occurring when the response to a visual target integrated with a spatially congruent sound is stronger than the response to the visual target alone, is believed to be mediated by the superior colliculus (SC) (Stein & Meredith, 1993). Here, we used a focused attention paradigm to show that the spatial congruency effect occurs with red (SC-effective) but not blue (SC-ineffective) visual stimuli, when presented with spatially congruent sounds. To isolate the chromatic component of SC-ineffective targets and to demonstrate the selectivity of the spatial congruency effect we used the random luminance modulation technique (Experiment 1) and the tritanopic technique (Experiment 2). Our results indicate that the spatial congruency effect does not require the distribution of attention over different sensory modalities and provide correlational evidence that the SC mediates the effect.

  20. Crossmodal integration enhances neural representation of task-relevant features in audiovisual face perception.

    PubMed

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Liu, Yongjian; Liang, Changhong; Sun, Pei

    2015-02-01

    Previous studies have shown that audiovisual integration improves identification performance and enhances neural activity in heteromodal brain areas, for example, the posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG). Furthermore, it has also been demonstrated that attention plays an important role in crossmodal integration. In this study, we considered crossmodal integration in audiovisual facial perception and explored its effect on the neural representation of features. The audiovisual stimuli in the experiment consisted of facial movie clips that could be classified into 2 gender categories (male vs. female) or 2 emotion categories (crying vs. laughing). The visual/auditory-only stimuli were created from these movie clips by removing the auditory/visual contents. The subjects needed to make a judgment about the gender/emotion category for each movie clip in the audiovisual, visual-only, or auditory-only stimulus condition as functional magnetic resonance imaging (fMRI) signals were recorded. The neural representation of the gender/emotion feature was assessed using the decoding accuracy and the brain pattern-related reproducibility indices, obtained by a multivariate pattern analysis method from the fMRI data. In comparison to the visual-only and auditory-only stimulus conditions, we found that audiovisual integration enhanced the neural representation of task-relevant features and that feature-selective attention might play a role of modulation in the audiovisual integration. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  1. Detecting Functional Connectivity During Audiovisual Integration with MEG: A Comparison of Connectivity Metrics.

    PubMed

    Ard, Tyler; Carver, Frederick W; Holroyd, Tom; Horwitz, Barry; Coppola, Richard

    2015-08-01

    In typical magnetoencephalography and/or electroencephalography functional connectivity analysis, researchers select one of several methods that measure a relationship between regions to determine connectivity, such as coherence, power correlations, and others. However, it is largely unknown if some are more suited than others for various types of investigations. In this study, the authors investigate seven connectivity metrics to evaluate which, if any, are sensitive to audiovisual integration by contrasting connectivity when tracking an audiovisual object versus connectivity when tracking a visual object uncorrelated with the auditory stimulus. The authors are able to assess the metrics' performances at detecting audiovisual integration by investigating connectivity between auditory and visual areas. Critically, the authors perform their investigation on a whole-cortex all-to-all mapping, avoiding confounds introduced in seed selection. The authors find that amplitude-based connectivity measures in the beta band detect strong connections between visual and auditory areas during audiovisual integration, specifically between V4/V5 and auditory cortices in the right hemisphere. Conversely, phase-based connectivity measures in the beta band as well as phase and power measures in alpha, gamma, and theta do not show connectivity between audiovisual areas. The authors postulate that while beta power correlations detect audiovisual integration in the current experimental context, it may not always be the best measure to detect connectivity. Instead, it is likely that the brain utilizes a variety of mechanisms in neuronal communication that may produce differential types of temporal relationships.
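    The contrast between amplitude- and phase-based metrics can be illustrated on synthetic signals: two narrow-band oscillations that share a slow amplitude envelope but drift in relative phase show high envelope correlation and near-zero phase locking. A toy sketch (not the authors' MEG pipeline; the Hilbert transform is implemented directly in NumPy):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the frequency-domain Hilbert transform."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(spec * h)

t = np.arange(4000) / 1000.0                         # 4 s at 1 kHz
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 0.5 * t)   # shared slow modulation
x1 = envelope * np.sin(2 * np.pi * 20.0 * t)         # beta-band carrier
x2 = envelope * np.sin(2 * np.pi * 23.0 * t)         # same envelope, drifting phase

a1, a2 = analytic_signal(x1), analytic_signal(x2)
# amplitude-based metric: correlation of the two envelopes
amp_corr = np.corrcoef(np.abs(a1), np.abs(a2))[0, 1]
# phase-based metric: phase-locking value of the phase difference
plv = np.abs(np.mean(np.exp(1j * (np.angle(a1) - np.angle(a2)))))
```

    Here the amplitude correlation is near 1 while the phase-locking value is near 0, a toy analogue of the dissociation the authors report between power- and phase-based measures.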

  2. Audiovisual quality estimation of mobile phone video cameras with interpretation-based quality approach

    NASA Astrophysics Data System (ADS)

    Radun, Jenni E.; Virtanen, Toni; Olives, Jean-Luc; Vaahteranoksa, Mikko; Vuori, Tero; Nyman, Göte

    2007-01-01

    We present an effective method for comparing subjective audiovisual quality and the features related to the quality changes of different video cameras. Both quantitative estimation of overall quality and qualitative description of critical quality features are achieved by the method. The aim was to combine two image quality evaluation methods, the quantitative Absolute Category Rating (ACR) method with hidden reference removal and the qualitative Interpretation-Based Quality (IBQ) method, in order to see how they complement each other in audiovisual quality estimation tasks. Twenty-six observers estimated the audiovisual quality of six different cameras, mainly mobile phone video cameras. In order to achieve an efficient subjective estimation of audiovisual quality, only two contents with different quality requirements were recorded with each camera. The results show that the subjectively important quality features were more related to the overall estimations of the cameras' visual video quality than to features related to sound. The data demonstrated two significant quality dimensions related to visual quality: darkness and sharpness. We conclude that the qualitative methodology can complement quantitative quality estimations also with audiovisual material. The IBQ approach is especially valuable when the induced quality changes are multidimensional.

  3. Audiovisual integration increases the intentional step synchronization of side-by-side walkers.

    PubMed

    Noy, Dominic; Mouta, Sandra; Lamas, Joao; Basso, Daniel; Silva, Carlos; Santos, Jorge A

    2017-12-01

    When people walk side-by-side, they often synchronize their steps. To achieve this, individuals might cross-modally match audiovisual signals from the movements of the partner with kinesthetic, cutaneous, visual and auditory signals from their own movements. Because signals from different sensory systems are processed with noise and asynchronously, the challenge for the CNS is to derive the best estimate based on this conflicting information. This is currently thought to be done by a mechanism operating as a Maximum Likelihood Estimator (MLE). The present work investigated whether audiovisual signals from the partner are integrated according to MLE in order to synchronize steps during walking. Three experiments were conducted in which the sensory cues from a walking partner were virtually simulated. In Experiment 1, seven participants were instructed to synchronize with human-sized Point Light Walkers and/or footstep sounds. Results revealed the highest synchronization performance with auditory and audiovisual cues, quantified by the time to achieve synchronization and by synchronization variability. However, this auditory dominance effect might have been due to artifacts of the setup. Therefore, in Experiment 2 human-sized virtual mannequins were implemented. Also, audiovisual stimuli were rendered in real-time and thus were synchronous and co-localized. All four participants synchronized best with audiovisual cues. For three of the four participants, the results point toward optimal integration, consistent with the MLE model. Experiment 3 yielded performance decrements for all three participants when the cues were incongruent. Overall, these findings suggest that individuals might optimally integrate audiovisual cues to synchronize steps during side-by-side walking. Copyright © 2017 Elsevier B.V. All rights reserved.
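    For independent Gaussian cues the MLE combination has a closed form: each cue is weighted by its inverse variance, and the combined variance is smaller than that of the most reliable single cue. A minimal sketch with made-up numbers (the estimates and variances below are purely illustrative, not values from the study):

```python
import numpy as np

def mle_combine(estimates, variances):
    """Optimal (maximum-likelihood) combination of independent Gaussian
    cues: weights are inverse variances; the combined variance is never
    larger than that of the most reliable cue."""
    v = np.asarray(variances, dtype=float)
    w = (1.0 / v) / (1.0 / v).sum()
    combined = float(np.dot(w, estimates))
    combined_var = 1.0 / (1.0 / v).sum()
    return combined, combined_var

# hypothetical step-timing estimates (ms): a reliable auditory cue and a
# noisier visual cue
est, var = mle_combine([480.0, 520.0], [100.0, 400.0])
# est is pulled toward the reliable cue (~488); var drops to ~80
```

    The reduction of the combined variance below either single-cue variance is the signature of optimal integration that the authors test for.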

  4. Caregivers' Retirement Congruency: A Case for Caregiver Support

    ERIC Educational Resources Information Center

    Humble, Aine M.; Keefe, Janice M.; Auton, Greg M.

    2012-01-01

    Using the concept of "retirement congruency" (RC), which takes into account greater variation in retirement decisions (low, moderate, or high RC) than a dichotomous conceptualization (forced versus chosen), multinomial logistic regression was conducted on a sample of caregivers from the 2002 Canadian General Social Survey who were…

  5. Congruence analysis of point clouds from unstable stereo image sequences

    NASA Astrophysics Data System (ADS)

    Jepping, C.; Bethmann, F.; Luhmann, T.

    2014-06-01

    This paper deals with the correction of exterior orientation parameters of stereo image sequences over deformed free-form surfaces without control points. Such an imaging situation can occur, for example, during photogrammetric car crash test recordings, where onboard high-speed stereo cameras are used to measure 3D surfaces. As a result of such measurements, 3D point clouds of deformed surfaces are generated for a complete stereo sequence. The first objective of this research focusses on the development and investigation of methods for the detection of corresponding spatial and temporal tie points within the stereo image sequences (by stereo image matching and 3D point tracking) that are robust enough for reliable handling of occlusions and other disturbances that may occur. The second objective is the analysis of object deformations in order to detect stable areas (congruence analysis). For this purpose a RANSAC-based method for congruence analysis has been developed. This process is based on the sequential transformation of randomly selected point groups from one epoch to another using a 3D similarity transformation. The paper gives a detailed description of the congruence analysis. The approach has been tested successfully on synthetic and real image data.
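    The congruence analysis described above can be sketched as follows: repeatedly fit a 3D similarity transformation to a randomly selected minimal point group, count the points it maps consistently between epochs, and keep the largest consensus set; points outside it are candidates for deformation. A rough NumPy illustration using Umeyama's closed-form similarity fit (the tolerance and the toy deformation are assumptions, not values from the paper):

```python
import numpy as np

def fit_similarity(src, dst):
    """Closed-form least-squares 3D similarity transform (scale s,
    rotation R, translation t) mapping src onto dst (Umeyama's method)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    sc, dc = src - mu_s, dst - mu_d
    cov = dc.T @ sc / len(src)
    u, sing, vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(u @ vt))   # guard against reflections
    flip = np.array([1.0, 1.0, d])
    rot = (u * flip) @ vt
    scale = (sing * flip).sum() * len(src) / (sc ** 2).sum()
    t = mu_d - scale * rot @ mu_s
    return scale, rot, t

def stable_points(src, dst, n_iter=300, tol=1e-3, seed=0):
    """RANSAC congruence analysis: fit similarity transforms to random
    minimal point groups and keep the largest consensus set; points
    outside it are treated as deformed."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=4, replace=False)
        s, r, t = fit_similarity(src[idx], dst[idx])
        resid = np.linalg.norm(src @ (s * r).T + t - dst, axis=1)
        inliers = resid < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best

# toy epochs: 20 surface points, a known similarity transform, and a
# local deformation applied to the first five points
rng = np.random.default_rng(3)
pts = rng.random((20, 3))
th = np.pi / 6
rot = np.array([[np.cos(th), -np.sin(th), 0.0],
                [np.sin(th),  np.cos(th), 0.0],
                [0.0, 0.0, 1.0]])
moved = 1.2 * pts @ rot.T + np.array([0.5, -0.2, 0.1])
moved[:5] += 0.3                     # deform five points
mask = stable_points(pts, moved)     # True for the stable points
```

    In this toy case the five deformed points fall outside the consensus set, while the fifteen rigidly transformed points are flagged as stable.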

  6. Contextual Congruency Effect in Natural Scene Categorization: Different Strategies in Humans and Monkeys (Macaca mulatta)

    PubMed Central

    Collet, Anne-Claire; Fize, Denis; VanRullen, Rufin

    2015-01-01

    Rapid visual categorization is a crucial ability for the survival of many animal species, including monkeys and humans. In real conditions, objects (either animate or inanimate) are never isolated but embedded in a complex background made of multiple elements. It has been shown in humans and monkeys that the contextual background can either enhance or impair object categorization, depending on context/object congruency (for example, an animal in a natural vs. man-made environment). Moreover, a scene is not only a collection of objects; it also has global physical features (i.e., the phase and amplitude of its Fourier spatial frequencies) which help define its gist. In our experiment, we aimed to explore and compare the contribution of the amplitude spectrum of scenes to the context-object congruency effect in monkeys and humans. We designed a rapid visual categorization task, Animal versus Non-Animal, using as contexts both real scene photographs and noisy backgrounds built from the amplitude spectrum of real scenes but with randomized phase spectrum. We showed that although the contextual congruency effect was comparable in both species when the context was a real scene, it differed when the foreground object was surrounded by a noisy background: in monkeys we found a similar congruency effect in both conditions, but in humans the congruency effect was absent (or even reversed) when the context was a noisy background. PMID:26207915

  7. Medial unicompartmental knee arthroplasty improves congruence and restores joint space width of the lateral compartment.

    PubMed

    Khamaisy, Saker; Zuiderbaan, Hendrik A; van der List, Jelle P; Nam, Denis; Pearle, Andrew D

    2016-06-01

    Osteoarthritic progression of the lateral compartment remains a leading indication for medial unicompartmental knee arthroplasty (UKA) revision. Therefore, the purpose of this study was to evaluate the alterations of lateral compartment congruence and joint space width (JSW) following medial UKA. Retrospectively, lateral compartment congruence and JSW were evaluated in 174 knees (74 females, 85 males, mean age 65.5 years; SD ± 10.1) preoperatively and six weeks postoperatively, and compared to 41 healthy knees (26 men, 15 women, mean age 33.7 years; SD ± 6.4). Congruence was calculated using validated software that evaluates the geometric relationship between the articulating surfaces and computes a congruence index (CI). JSW was measured at three sites (inner, middle, outer) by subdividing the lateral compartment into four quarters. The CI of the control group was 0.98 (SD ± 0.01). The preoperative CI was 0.88 (SD ± 0.01), which improved significantly to 0.93 (SD ± 0.03) postoperatively (p < 0.001). In 82% of knees, CI improved after surgery, while in 18% it decreased. The significant preoperative JSW differences at the inner (p < 0.001) and outer (p < 0.001) sites were absent postoperatively. Our data suggest that a well-conducted medial UKA not only resurfaces the medial compartment but also improves congruence and restores the JSW of the lateral compartment. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Audiovisual Media for Computer Education.

    ERIC Educational Resources Information Center

    Van Der Aa, H. J., Ed.

    The result of an international survey, this catalog lists over 450 films dealing with computing methods and automation and is intended for those who wish to use audiovisual displays as a means of instruction of computer education. The catalog gives the film's title, running time, and producer and tells whether the film is color or black-and-white,…

  9. Audiovisual Speech Recalibration in Children

    ERIC Educational Resources Information Center

    van Linden, Sabine; Vroomen, Jean

    2008-01-01

    In order to examine whether children adjust their phonetic speech categories, children of two age groups, five-year-olds and eight-year-olds, were exposed to a video of a face saying /aba/ or /ada/ accompanied by an auditory ambiguous speech sound halfway between /b/ and /d/. The effect of exposure to these audiovisual stimuli was measured on…

  10. Shifts in Audiovisual Processing in Healthy Aging.

    PubMed

    Baum, Sarah H; Stevenson, Ryan

    2017-09-01

    The integration of information across sensory modalities into unified percepts is a fundamental sensory process upon which a multitude of cognitive processes are based. We review the body of literature exploring aging-related changes in audiovisual integration published over the last five years. Specifically, we review the impact of changes in temporal processing, the influence of the effectiveness of sensory inputs, the role of working memory, and the newer studies of intra-individual variability during these processes. Work in the last five years on bottom-up influences on sensory perception has garnered significant attention. Temporal processing, a driving factor of multisensory integration, has now been shown to decouple from multisensory integration in aging, despite both declining with age. The impact of stimulus effectiveness also changes with age: older adults show maximal benefit from multisensory gain at high signal-to-noise ratios. Following sensory decline, high working memory capacity has now been shown to be something of a protective factor against age-related declines in audiovisual speech perception, particularly in noise. Finally, newer research is emerging that focuses on the general intra-individual variability observed with aging. Overall, the studies of the past five years have replicated and expanded on previous work that highlights the role of bottom-up sensory changes with aging and their influence on audiovisual integration, as well as the top-down influence of working memory.

  11. Value congruence, control, sense of community and demands as determinants of burnout syndrome among hospitality workers.

    PubMed

    Asensio-Martínez, Ángela; Leiter, Michael P; Gascón, Santiago; Gumuchian, Stephanie; Masluk, Bárbara; Herrera-Mercadal, Paola; Albesa, Agustín; García-Campayo, Javier

    2017-09-07

Employees working in the hospitality industry are constantly exposed to occupational stressors that may lead them to experience burnout syndrome. Research addressing the interactive effects of control, community and value congruence in alleviating the impact of workplace demands on burnout is relatively limited. The present study examined relationships among control, community and value congruence, workplace demands and the three components of burnout. A sample of 418 employees working in a variety of hospitality establishments, including restaurants and hotels in Spain, was recruited. Moderation analyses and linear regressions assessed the predictive power of control, community and value congruence as moderating variables. Results indicate that control, community and value congruence successfully buffered the relationships between workplace demands and the burnout dimensions. The present findings offer suggestions for future research on potential moderating variables, as well as implications for reducing burnout among hospitality employees.

  12. Audiovisual training is better than auditory-only training for auditory-only speech-in-noise identification.

    PubMed

    Lidestam, Björn; Moradi, Shahram; Pettersson, Rasmus; Ricklefs, Theodor

    2014-08-01

    The effects of audiovisual versus auditory training for speech-in-noise identification were examined in 60 young participants. The training conditions were audiovisual training, auditory-only training, and no training (n = 20 each). In the training groups, gated consonants and words were presented at 0 dB signal-to-noise ratio; stimuli were either audiovisual or auditory-only. The no-training group watched a movie clip without performing a speech identification task. Speech-in-noise identification was measured before and after the training (or control activity). Results showed that only audiovisual training improved speech-in-noise identification, demonstrating superiority over auditory-only training.

  13. Neural correlates of the implicit association test: evidence for semantic and emotional processing.

    PubMed

    Williams, John K; Themanson, Jason R

    2011-09-01

    The implicit association test (IAT) has been widely used in social cognitive research over the past decade. Controversies have arisen over what cognitive processes are being tapped into using this task. While most models use behavioral (RT) results to support their claims, little research has examined neurocognitive correlates of these behavioral measures. The present study measured event-related brain potentials (ERPs) of participants while completing a gay-straight IAT in order to further understand the processes involved in a typical group bias IAT. Results indicated significantly smaller N400 amplitudes and significantly larger LPP amplitudes for compatible trials than for incompatible trials, suggesting that both the semantic and emotional congruence of stimuli paired together in an IAT trial contribute to the typical RT differences found, while no differences were present for earlier ERP components including the N2. These findings are discussed with respect to early and late processing in group bias IATs.

  14. Personal semantics: at the crossroads of semantic and episodic memory.

    PubMed

    Renoult, Louis; Davidson, Patrick S R; Palombo, Daniela J; Moscovitch, Morris; Levine, Brian

    2012-11-01

    Declarative memory is usually described as consisting of two systems: semantic and episodic memory. Between these two poles, however, may lie a third entity: personal semantics (PS). PS concerns knowledge of one's past. Although typically assumed to be an aspect of semantic memory, it is essentially absent from existing models of knowledge. Furthermore, like episodic memory (EM), PS is idiosyncratically personal (i.e., not culturally-shared). We show that, depending on how it is operationalized, the neural correlates of PS can look more similar to semantic memory, more similar to EM, or dissimilar to both. We consider three different perspectives to better integrate PS into existing models of declarative memory and suggest experimental strategies for disentangling PS from semantic and episodic memory. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. The Golden Mean and an Intriguing Congruence Problem.

    ERIC Educational Resources Information Center

    Pagni, David L.; Gannon, Gerald E.

    1981-01-01

    Presented is a method for finding two triangles that have five pairs of congruent parts, yet fail to be congruent. The solution is thought to involve some creative insights that should challenge both the teacher and students to recall and analyze all the congruence axioms and theorems. (MP)
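One classic construction fitting the abstract's premise (used here as an illustrative assumption, not necessarily the authors' own solution) takes triangles whose sides form a geometric progression: 8-12-18 and 12-18-27 are similar with ratio 3:2, so all three pairs of angles are congruent, and the triangles also share the side lengths 12 and 18, giving five pairs of congruent parts without congruence. The golden mean enters as the upper bound on the admissible common ratio. A quick Python check:

```python
import math

def is_valid_triangle(a, b, c):
    """Triangle inequality for side lengths."""
    a, b, c = sorted((a, b, c))
    return a + b > c

# Two triangles with sides in geometric progression (common ratio 3/2).
t1 = (8, 12, 18)
t2 = (12, 18, 27)

# Both are genuine triangles.
assert is_valid_triangle(*t1) and is_valid_triangle(*t2)

# Similar (every side scales by 3/2), hence three pairs of congruent angles...
assert all(y / x == 1.5 for x, y in zip(t1, t2))

# ...plus two shared side lengths (12 and 18): five congruent parts in all...
assert set(t1) & set(t2) == {12, 18}

# ...yet the triangles are not congruent, since their side sets differ.
assert set(t1) != set(t2)

# For sides a, a*r, a*r^2, the triangle inequality a + a*r > a*r^2
# requires r < (1 + sqrt(5)) / 2, i.e. the common ratio is bounded
# by the golden mean.
phi = (1 + math.sqrt(5)) / 2
assert 1.5 < phi
```

Any ratio r with 1 ≤ r < φ yields such a pair, which is why the golden mean governs the whole family of counterexamples.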

  16. Congruence between Disabled Elders and Their Primary Caregivers

    ERIC Educational Resources Information Center

    Horowitz, Amy; Goodman, Caryn R.; Reinhardt, Joann P.

    2004-01-01

    Purpose: This study examines the extent and independent correlates of congruence between disabled elders and their caregivers on several aspects of the caregiving experience. Design and Methods: Participants were 117 visually impaired elders and their caregivers. Correlational analyses, kappa statistics, and paired t tests were used to examine the…

  17. Values Congruence: Its Effect on Perceptions of Montana Elementary School Principal Leadership Practices and Student Achievement

    ERIC Educational Resources Information Center

    Zorn, Daniel Roy

    2010-01-01

    The purpose of this quantitative study was to examine the relationship between principal and teacher values congruence and perceived principal leadership practices. Additionally, this study considered the relationship between values congruence, leadership practices, and student achievement. The perceptions teachers hold regarding their principal's…

  18. Audiovisual Speech Perception and Eye Gaze Behavior of Adults with Asperger Syndrome

    ERIC Educational Resources Information Center

    Saalasti, Satu; Katsyri, Jari; Tiippana, Kaisa; Laine-Hernandez, Mari; von Wendt, Lennart; Sams, Mikko

    2012-01-01

    Audiovisual speech perception was studied in adults with Asperger syndrome (AS), by utilizing the McGurk effect, in which conflicting visual articulation alters the perception of heard speech. The AS group perceived the audiovisual stimuli differently from age, sex and IQ matched controls. When a voice saying /p/ was presented with a face…

  19. Phylogenetic congruence and ecological coherence in terrestrial Thaumarchaeota.

    PubMed

    Oton, Eduard Vico; Quince, Christopher; Nicol, Graeme W; Prosser, James I; Gubry-Rangin, Cécile

    2016-01-01

Thaumarchaeota form a ubiquitously distributed archaeal phylum, comprising both the ammonia-oxidising archaea (AOA) and other archaeal groups in which ammonia oxidation has not been demonstrated (including Group 1.1c and Group 1.3). The ecology of AOA in terrestrial environments has been extensively studied using either a functional gene encoding ammonia monooxygenase subunit A (amoA) or 16S ribosomal RNA (rRNA) genes, which show phylogenetic coherence with respect to soil pH. To test phylogenetic congruence between these two markers and to determine ecological coherence in all Thaumarchaeota, we performed high-throughput sequencing of 16S rRNA and amoA genes in 46 UK soils with 29 available contextual soil characteristics. Adaptation to pH and organic matter content reflected strong ecological coherence at various levels of taxonomic resolution for Thaumarchaeota (AOA and non-AOA), whereas nitrogen, total mineralisable nitrogen and zinc concentration were also important factors associated with AOA thaumarchaeotal community distribution. Other significant associations with environmental factors were also detected for amoA and 16S rRNA genes, reflecting different diversity characteristics between these two markers. Nonetheless, there was significant statistical congruence between the markers at fine phylogenetic resolution, supporting the hypothesis of low horizontal gene transfer between Thaumarchaeota. Group 1.1c Thaumarchaeota were also widely distributed, with two clusters predominating, particularly in environments with higher moisture content and organic matter, whereas a similar ecological pattern was observed for Group 1.3 Thaumarchaeota. The ecological and phylogenetic congruence identified is fundamental to a better understanding of the life strategies, evolutionary history and ecosystem function of the Thaumarchaeota.

  20. Phylogenetic congruence and ecological coherence in terrestrial Thaumarchaeota

    PubMed Central

    Oton, Eduard Vico; Quince, Christopher; Nicol, Graeme W; Prosser, James I; Gubry-Rangin, Cécile

    2016-01-01

Thaumarchaeota form a ubiquitously distributed archaeal phylum, comprising both the ammonia-oxidising archaea (AOA) and other archaeal groups in which ammonia oxidation has not been demonstrated (including Group 1.1c and Group 1.3). The ecology of AOA in terrestrial environments has been extensively studied using either a functional gene encoding ammonia monooxygenase subunit A (amoA) or 16S ribosomal RNA (rRNA) genes, which show phylogenetic coherence with respect to soil pH. To test phylogenetic congruence between these two markers and to determine ecological coherence in all Thaumarchaeota, we performed high-throughput sequencing of 16S rRNA and amoA genes in 46 UK soils with 29 available contextual soil characteristics. Adaptation to pH and organic matter content reflected strong ecological coherence at various levels of taxonomic resolution for Thaumarchaeota (AOA and non-AOA), whereas nitrogen, total mineralisable nitrogen and zinc concentration were also important factors associated with AOA thaumarchaeotal community distribution. Other significant associations with environmental factors were also detected for amoA and 16S rRNA genes, reflecting different diversity characteristics between these two markers. Nonetheless, there was significant statistical congruence between the markers at fine phylogenetic resolution, supporting the hypothesis of low horizontal gene transfer between Thaumarchaeota. Group 1.1c Thaumarchaeota were also widely distributed, with two clusters predominating, particularly in environments with higher moisture content and organic matter, whereas a similar ecological pattern was observed for Group 1.3 Thaumarchaeota. The ecological and phylogenetic congruence identified is fundamental to a better understanding of the life strategies, evolutionary history and ecosystem function of the Thaumarchaeota. PMID:26140533

  1. SSWAP: A Simple Semantic Web Architecture and Protocol for semantic web services.

    PubMed

    Gessler, Damian D G; Schiltz, Gary S; May, Greg D; Avraham, Shulamit; Town, Christopher D; Grant, David; Nelson, Rex T

    2009-09-23

SSWAP (Simple Semantic Web Architecture and Protocol; pronounced "swap") is an architecture, protocol, and platform for using reasoning to semantically integrate heterogeneous, disparate data and services on the web. SSWAP was developed as a hybrid semantic web services technology to overcome limitations found in both pure web service technologies and pure semantic web technologies. There are currently over 2400 resources published in SSWAP. Approximately two dozen are custom-written services for QTL (Quantitative Trait Loci) and mapping data for legumes and grasses (grains); the remaining resources are wrappers for Nucleic Acids Research Database and Web Server entries. As an architecture, SSWAP establishes how clients (users of data, services, and ontologies), providers (suppliers of data, services, and ontologies), and discovery servers (semantic search engines) interact to allow for the description, querying, discovery, invocation, and response of semantic web services. As a protocol, SSWAP provides the vocabulary and semantics that allow clients, providers, and discovery servers to engage in semantic web services. The protocol is based on the W3C-sanctioned first-order description logic language OWL DL. As an open source platform, a discovery server running at http://sswap.info (as in to "swap info") uses the description logic reasoner Pellet to integrate semantic resources. The platform hosts an interactive guide to the protocol at http://sswap.info/protocol.jsp, developer tools at http://sswap.info/developer.jsp, and a portal to third-party ontologies at http://sswapmeet.sswap.info (a "swap meet"). SSWAP addresses the three basic requirements of a semantic web services architecture (i.e., a common syntax, shared semantics, and semantic discovery) while addressing three technology limitations common in distributed service systems: i) the fatal mutability of traditional interfaces, ii) the rigidity and fragility of static subsumption hierarchies, and iii) the confounding of content, structure, and presentation.
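The core discovery idea, matching a client's request to provider descriptions by class subsumption rather than exact string match, can be sketched in a few lines. This is a toy illustration under stated assumptions: the class names, provider names, and hand-rolled hierarchy below are hypothetical, whereas SSWAP itself expresses descriptions in OWL DL and delegates subsumption to the Pellet reasoner.

```python
# Subclass -> superclass edges of a tiny, hypothetical service ontology.
SUBCLASS_OF = {
    "QTLMappingService": "MappingService",
    "MappingService": "Service",
    "SequenceService": "Service",
}

def subsumed_by(cls, ancestor):
    """True if `cls` equals `ancestor` or is a (transitive) subclass of it."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = SUBCLASS_OF.get(cls)
    return False

# Providers publish semantically typed resource descriptions.
providers = {
    "legume-qtl": "QTLMappingService",
    "nar-blast": "SequenceService",
}

def discover(requested_class):
    """Return providers whose advertised class is subsumed by the request."""
    return [name for name, cls in providers.items()
            if subsumed_by(cls, requested_class)]

print(discover("MappingService"))  # prints ['legume-qtl']
```

A client asking generically for a `MappingService` still finds the more specific QTL provider, which is the practical payoff of reasoning-based discovery over keyword lookup.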

  2. SSWAP: A Simple Semantic Web Architecture and Protocol for semantic web services

    PubMed Central

    Gessler, Damian DG; Schiltz, Gary S; May, Greg D; Avraham, Shulamit; Town, Christopher D; Grant, David; Nelson, Rex T

    2009-01-01

Background SSWAP (Simple Semantic Web Architecture and Protocol; pronounced "swap") is an architecture, protocol, and platform for using reasoning to semantically integrate heterogeneous, disparate data and services on the web. SSWAP was developed as a hybrid semantic web services technology to overcome limitations found in both pure web service technologies and pure semantic web technologies. Results There are currently over 2400 resources published in SSWAP. Approximately two dozen are custom-written services for QTL (Quantitative Trait Loci) and mapping data for legumes and grasses (grains); the remaining resources are wrappers for Nucleic Acids Research Database and Web Server entries. As an architecture, SSWAP establishes how clients (users of data, services, and ontologies), providers (suppliers of data, services, and ontologies), and discovery servers (semantic search engines) interact to allow for the description, querying, discovery, invocation, and response of semantic web services. As a protocol, SSWAP provides the vocabulary and semantics that allow clients, providers, and discovery servers to engage in semantic web services. The protocol is based on the W3C-sanctioned first-order description logic language OWL DL. As an open source platform, a discovery server running at http://sswap.info (as in to "swap info") uses the description logic reasoner Pellet to integrate semantic resources. The platform hosts an interactive guide to the protocol at http://sswap.info/protocol.jsp, developer tools at http://sswap.info/developer.jsp, and a portal to third-party ontologies at http://sswapmeet.sswap.info (a "swap meet"). Conclusion SSWAP addresses the three basic requirements of a semantic web services architecture (i.e., a common syntax, shared semantics, and semantic discovery) while addressing three technology limitations common in distributed service systems: i) the fatal mutability of traditional interfaces, ii) the rigidity and fragility of static subsumption hierarchies, and iii) the confounding of content, structure, and presentation. SSWAP is novel by establishing

  3. Audiovisual integration for speech during mid-childhood: electrophysiological evidence.

    PubMed

    Kaganovich, Natalya; Schumaker, Jennifer

    2014-12-01

    Previous studies have demonstrated that the presence of visual speech cues reduces the amplitude and latency of the N1 and P2 event-related potential (ERP) components elicited by speech stimuli. However, the developmental trajectory of this effect is not yet fully mapped. We examined ERP responses to auditory, visual, and audiovisual speech in two groups of school-age children (7-8-year-olds and 10-11-year-olds) and in adults. Audiovisual speech led to the attenuation of the N1 and P2 components in all groups of participants, suggesting that the neural mechanisms underlying these effects are functional by early school years. Additionally, while the reduction in N1 was largest over the right scalp, the P2 attenuation was largest over the left and midline scalp. The difference in the hemispheric distribution of the N1 and P2 attenuation supports the idea that these components index at least somewhat disparate neural processes within the context of audiovisual speech perception. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Temporal Structure and Complexity Affect Audio-Visual Correspondence Detection

    PubMed Central

    Denison, Rachel N.; Driver, Jon; Ruff, Christian C.

    2013-01-01

    Synchrony between events in different senses has long been considered the critical temporal cue for multisensory integration. Here, using rapid streams of auditory and visual events, we demonstrate how humans can use temporal structure (rather than mere temporal coincidence) to detect multisensory relatedness. We find psychophysically that participants can detect matching auditory and visual streams via shared temporal structure for crossmodal lags of up to 200 ms. Performance on this task reproduced features of past findings based on explicit timing judgments but did not show any special advantage for perfectly synchronous streams. Importantly, the complexity of temporal patterns influences sensitivity to correspondence. Stochastic, irregular streams – with richer temporal pattern information – led to higher audio-visual matching sensitivity than predictable, rhythmic streams. Our results reveal that temporal structure and its complexity are key determinants for human detection of audio-visual correspondence. The distinctive emphasis of our new paradigms on temporal patterning could be useful for studying special populations with suspected abnormalities in audio-visual temporal perception and multisensory integration. PMID:23346067

  5. Audiovisual integration for speech during mid-childhood: Electrophysiological evidence

    PubMed Central

    Kaganovich, Natalya; Schumaker, Jennifer

    2014-01-01

    Previous studies have demonstrated that the presence of visual speech cues reduces the amplitude and latency of the N1 and P2 event-related potential (ERP) components elicited by speech stimuli. However, the developmental trajectory of this effect is not yet fully mapped. We examined ERP responses to auditory, visual, and audiovisual speech in two groups of school-age children (7–8-year-olds and 10–11-year-olds) and in adults. Audiovisual speech led to the attenuation of the N1 and P2 components in all groups of participants, suggesting that the neural mechanisms underlying these effects are functional by early school years. Additionally, while the reduction in N1 was largest over the right scalp, the P2 attenuation was largest over the left and midline scalp. The difference in the hemispheric distribution of the N1 and P2 attenuation supports the idea that these components index at least somewhat disparate neural processes within the context of audiovisual speech perception. PMID:25463815

  6. Experienced quality factors: qualitative evaluation approach to audiovisual quality

    NASA Astrophysics Data System (ADS)

    Jumisko-Pyykkö, Satu; Häkkinen, Jukka; Nyman, Göte

    2007-02-01

Subjective evaluation is used to identify impairment factors of multimedia quality. The final quality is often formulated via quantitative experiments, but this approach has its constraints, as subjects' quality interpretations, experiences and quality evaluation criteria are disregarded. To identify these quality evaluation factors, this study examined qualitatively the criteria participants used to evaluate audiovisual video quality. A semi-structured interview was conducted with 60 participants after a subjective audiovisual quality evaluation experiment. The assessment compared several relatively low audio-video bitrate ratios with five different television contents on a mobile device. In the analysis, methodological triangulation (grounded theory, Bayesian networks and correspondence analysis) was applied to approach the qualitative quality. The results showed that the most important evaluation criteria were factors of visual quality, content, factors of audio quality, usefulness (followability) and audiovisual interaction. Several relations between the quality factors, as well as similarities between the contents, were identified. As a methodological recommendation, content- and usage-related factors need to be examined further to improve quality evaluation experiments.

  7. Effect of Audiovisual Treatment Information on Relieving Anxiety in Patients Undergoing Impacted Mandibular Third Molar Removal.

    PubMed

    Choi, Sung-Hwan; Won, Ji-Hoon; Cha, Jung-Yul; Hwang, Chung-Ju

    2015-11-01

The authors hypothesized that an audiovisual slide presentation providing treatment information about the removal of an impacted mandibular third molar could improve patient knowledge of postoperative complications and decrease anxiety in young adults before and after surgery. A group that received an audiovisual description was compared with a group that received the conventional written description of the procedure. This randomized clinical trial included young adult patients who required surgical removal of an impacted mandibular third molar and fulfilled the predetermined criteria. The predictor variable was the presentation of an audiovisual slideshow. The audiovisual informed group provided informed consent after viewing an audiovisual slideshow; the control group provided informed consent after reading a written description of the procedure. The outcome variables were the State-Trait Anxiety Inventory, the Dental Anxiety Scale, a self-reported anxiety questionnaire completed immediately before and 1 week after surgery, and a postoperative questionnaire about the level of understanding of potential postoperative complications. The data were analyzed with χ² tests, independent t tests, Mann-Whitney U tests, and Spearman rank correlation coefficients. Fifty-one patients fulfilled the inclusion criteria. The audiovisual informed group comprised 20 men and 5 women; the written informed group comprised 21 men and 5 women. The audiovisual informed group remembered significantly more information than the control group about a potential allergic reaction to local anesthesia or medication and potential trismus (P < .05). The audiovisual informed group had lower self-reported anxiety scores than the control group 1 week after surgery (P < .05). These results suggested that informing patients of the treatment with an audiovisual slide presentation could improve patient knowledge about postoperative complications and aid in alleviating

  8. Correlates of emotional congruence with children in sexual offenders against children: a test of theoretical models in an incarcerated sample.

    PubMed

    McPhail, Ian V; Hermann, Chantal A; Fernandez, Yolanda M

    2014-02-01

Emotional congruence with children is a psychological construct theoretically involved in the etiology and maintenance of sexual offending against children. Research conducted to date has not examined the relationship between emotional congruence with children and other psychologically meaningful risk factors for sexual offending against children. The current study derived potential correlates of emotional congruence with children from the published literature and proposed three models of emotional congruence with children that contain relatively unique sets of correlates: the blockage, sexual deviance, and psychological immaturity models. Using Area under the Curve analysis, we assessed the relationship between emotional congruence with children and offense characteristics, victim demographics, and psychologically meaningful risk factors in a sample of incarcerated sexual offenders against children (n=221). The sexual deviance model received the most support: emotional congruence with children was significantly associated with deviant sexual interests, sexual self-regulation problems, and cognition that condones and supports child molestation. The blockage model received partial support, and the immaturity model received the least support. Based on the results, we propose a set of further predictions regarding the relationships between emotional congruence with children and other psychologically meaningful risk factors to be examined in future research. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. SemanticOrganizer: A Customizable Semantic Repository for Distributed NASA Project Teams

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Berrios, Daniel C.; Carvalho, Robert E.; Hall, David R.; Rich, Stephen J.; Sturken, Ian B.; Swanson, Keith J.; Wolfe, Shawn R.

    2004-01-01

    SemanticOrganizer is a collaborative knowledge management system designed to support distributed NASA projects, including diverse teams of scientists, engineers, and accident investigators. The system provides a customizable, semantically structured information repository that stores work products relevant to multiple projects of differing types. SemanticOrganizer is one of the earliest and largest semantic web applications deployed at NASA to date, and has been used in diverse contexts ranging from the investigation of Space Shuttle Columbia's accident to the search for life on other planets. Although the underlying repository employs a single unified ontology, access control and ontology customization mechanisms make the repository contents appear different for each project team. This paper describes SemanticOrganizer, its customization facilities, and a sampling of its applications. The paper also summarizes some key lessons learned from building and fielding a successful semantic web application across a wide-ranging set of domains with diverse users.

  10. Aztec arithmetic revisited: land-area algorithms and Acolhua congruence arithmetic.

    PubMed

    Williams, Barbara J; Jorge y Jorge, María del Carmen

    2008-04-04

    Acolhua-Aztec land records depicting areas and side dimensions of agricultural fields provide insight into Aztec arithmetic. Hypothesizing that recorded areas resulted from indigenous calculation, in a study of sample quadrilateral fields we found that 60% of the area values could be reproduced exactly by computation. In remaining cases, discrepancies between computed and recorded areas were consistently small, suggesting use of an unknown indigenous arithmetic. In revisiting the research, we discovered evidence for the use of congruence principles, based on proportions between the standard linear Acolhua measure and their units of shorter length. This procedure substitutes for computation with fractions and is labeled "Acolhua congruence arithmetic." The findings also clarify variance between Acolhua and Tenochca linear units, long an issue in understanding Aztec metrology.
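One candidate indigenous algorithm for quadrilateral fields is the "surveyors' rule": the area is taken as the product of the averages of opposite sides. The sketch below uses that rule purely as an illustrative assumption about how recorded areas might have been reproduced by computation; the side lengths are invented examples, not values from the codices.

```python
def surveyors_rule_area(sides):
    """Approximate a quadrilateral's area as the product of the averages
    of opposite sides, with (a, b, c, d) listed in order around the field."""
    a, b, c, d = sides
    return ((a + c) / 2) * ((b + d) / 2)

# For a rectangle the rule is exact: a 20 x 30 field has area 600.
assert surveyors_rule_area((20, 30, 20, 30)) == 600.0

# For an irregular field the rule gives a systematic approximation
# rather than the true geometric area, so a recorded value can match
# one algorithm exactly while differing slightly from others; the
# abstract attributes the remaining small discrepancies to congruence
# arithmetic with sub-unit linear measures.
print(surveyors_rule_area((20, 30, 22, 28)))  # prints 609.0
```

Testing each recorded area against a handful of such rules is essentially how the 60% exact-match figure in the abstract could be established.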

  11. Semantic Typicality Effects in Acquired Dyslexia: Evidence for Semantic Impairment in Deep Dyslexia.

    PubMed

    Riley, Ellyn A; Thompson, Cynthia K

    2010-06-01

BACKGROUND: Acquired deep dyslexia is characterized by impairment in grapheme-phoneme conversion and production of semantic errors in oral reading. Several theories have attempted to explain the production of semantic errors in deep dyslexia, some proposing that they arise from impairments in both grapheme-phoneme and lexical-semantic processing, and others proposing that such errors stem from a deficit in phonological production. Whereas both views have gained some acceptance, the limited evidence available does not clearly eliminate the possibility that semantic errors arise from a lexical-semantic input processing deficit. AIMS: To investigate semantic processing in deep dyslexia, this study examined the typicality effect in deep dyslexic individuals, phonological dyslexic individuals, and controls using an online category verification paradigm. This task requires explicit semantic access without speech production, focusing observation on semantic processing from written or spoken input. METHODS & PROCEDURES: To examine the locus of semantic impairment, the task was administered in visual and auditory modalities with reaction time as the primary dependent measure. Nine controls, six phonological dyslexic participants, and five deep dyslexic participants completed the study. OUTCOMES & RESULTS: Controls and phonological dyslexic participants demonstrated a typicality effect in both modalities, while deep dyslexic participants did not demonstrate a typicality effect in either modality. CONCLUSIONS: These findings suggest that deep dyslexia is associated with a semantic processing deficit. Although this does not rule out the possibility of concomitant deficits in other modules of lexical-semantic processing, this finding suggests a direction for treatment of deep dyslexia focused on semantic processing.

  12. Varieties of semantic ‘access’ deficit in Wernicke’s aphasia and semantic aphasia

    PubMed Central

    Robson, Holly; Lambon Ralph, Matthew A.; Jefferies, Elizabeth

    2015-01-01

    Comprehension deficits are common in stroke aphasia, including in cases with (i) semantic aphasia, characterized by poor executive control of semantic processing across verbal and non-verbal modalities; and (ii) Wernicke’s aphasia, associated with poor auditory–verbal comprehension and repetition, plus fluent speech with jargon. However, the varieties of these comprehension problems, and their underlying causes, are not well understood. Both patient groups exhibit some type of semantic ‘access’ deficit, as opposed to the ‘storage’ deficits observed in semantic dementia. Nevertheless, existing descriptions suggest that these patients might have different varieties of ‘access’ impairment—related to difficulty resolving competition (in semantic aphasia) versus initial activation of concepts from sensory inputs (in Wernicke’s aphasia). We used a case series design to compare patients with Wernicke’s aphasia and those with semantic aphasia on Warrington’s paradigmatic assessment of semantic ‘access’ deficits. In these verbal and non-verbal matching tasks, a small set of semantically-related items are repeatedly presented over several cycles so that the target on one trial becomes a distractor on another (building up interference and eliciting semantic ‘blocking’ effects). Patients with Wernicke’s aphasia and semantic aphasia were distinguished according to lesion location in the temporal cortex, but in each group, some individuals had additional prefrontal damage. Both of these aspects of lesion variability—one that mapped onto classical ‘syndromes’ and one that did not—predicted aspects of the semantic ‘access’ deficit. Both semantic aphasia and Wernicke’s aphasia cases showed multimodal semantic impairment, although as expected, the Wernicke’s aphasia group showed greater deficits on auditory-verbal than picture judgements. Distribution of damage in the temporal lobe was crucial for predicting the initially

  13. Electrophysiological evidence for differences between fusion and combination illusions in audiovisual speech perception.

    PubMed

    Baart, Martijn; Lindborg, Alma; Andersen, Tobias S

    2017-11-01

    Incongruent audiovisual speech stimuli can lead to perceptual illusions such as fusions or combinations. Here, we investigated the underlying audiovisual integration process by measuring ERPs. We observed that visual speech-induced suppression of P2 amplitude (which is generally taken as a measure of audiovisual integration) for fusions was similar to suppression obtained with fully congruent stimuli, whereas P2 suppression for combinations was larger. We argue that these effects arise because the phonetic incongruency is solved differently for both types of stimuli. © 2017 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  14. Evaluative priming of naming and semantic categorization responses revisited: a mutual facilitation explanation.

    PubMed

    Schmitz, Melanie; Wentura, Dirk

    2012-07-01

    The evaluative priming effect (i.e., faster target responses following evaluatively congruent compared with evaluatively incongruent primes) in nonevaluative priming tasks (such as naming or semantic categorization tasks) is considered important for the question of how evaluative connotations are represented in memory. However, the empirical evidence is rather ambiguous: Positive effects as well as null results and negatively signed effects have been found. We tested the assumption that different processes are responsible for these results. In particular, we argue that positive effects are due to target-encoding facilitation (caused by a congruent prime), while negative effects are due to prime-activation maintenance (caused by a congruent target) and subsequent response conflict. In 4 experiments, we used a negative prime-target stimulus-onset asynchrony (SOA) to minimize target-encoding facilitation and maximize prime maintenance. In a naming task (Experiment 1), we found a negatively signed evaluative priming effect if prime and target competed for naming responses. In a semantic categorization task (i.e., person vs. animal; Experiments 2 and 3), response conflicts between prime and target were significantly larger in case of evaluative congruence compared with incongruence. These results corroborate the theory that a prime has more potential to interfere with the target response if its activation is maintained by an evaluatively congruent target. Experiment 4a/b indicated valence specificity of the effect. Implications for the memory representation of valence are discussed. 2012 APA, all rights reserved

  15. Evaluation of audiovisual teaching material in family practice: a report of review activities, 1977--1978.

    PubMed

    Geyman, J P

    1979-05-01

    Audiovisual teaching materials have found increasing use in medical education in recent years, and a large number of excellent materials have been produced. The plethora of existing audiovisual teaching programs has made it difficult for educators and potential users to be aware of what is available and to select programs relevant to specific learning needs. The Audiovisual Review Committee has functioned over the last five years as a subcommittee of the Education Committee of the Society of Teachers of Family Medicine. This paper describes the experience of this group over the last two years and presents a complete listing of audiovisual teaching materials which have been reviewed and appraised during that period.

  16. Why It Is Too Early to Lose Control in Accounts of Item-Specific Proportion Congruency Effects

    ERIC Educational Resources Information Center

    Bugg, Julie M.; Jacoby, Larry L.; Chanani, Swati

    2011-01-01

    The item-specific proportion congruency (ISPC) effect is the finding of attenuated interference for mostly incongruent as compared to mostly congruent items. A debate in the Stroop literature concerns the mechanisms underlying this effect. Noting a confound between proportion congruency and contingency, Schmidt and Besner (2008) suggested that…

  17. Audiovisual Asynchrony Detection in Human Speech

    ERIC Educational Resources Information Center

    Maier, Joost X.; Di Luca, Massimiliano; Noppeney, Uta

    2011-01-01

    Combining information from the visual and auditory senses can greatly enhance intelligibility of natural speech. Integration of audiovisual speech signals is robust even when temporal offsets are present between the component signals. In the present study, we characterized the temporal integration window for speech and nonspeech stimuli with…

  18. Do men and their wives see it the same way? Congruence within couples during the first year of prostate cancer.

    PubMed

    Ezer, Hélène; Chachamovich, Juliana L Rigol; Chachamovich, Eduardo

    2011-02-01

    The purpose of this study was to determine the psychosocial adjustment congruence within couples through the first year of prostate cancer experience, and to explore the personal variables that could predict congruence within couples. Eighty-one couples were interviewed at the time of diagnosis; 69 participated at 3 months and 61 at 12 months. Paired t-tests were used to examine dyadic congruence on seven domains of psychosocial adjustment. Repeated Measures ANOVAs were used to examine the congruence over time. Multiple regressions were used to determine whether mood disturbance, urinary and sexual bother, sense of coherence, and social support were predictors of congruence within couples on each of the adjustment domains. At time 1, couples had incongruent perceptions in 3 of 7 domains: health care, psychological, and social adjustment. Three months later, health care, psychological, and sexual domains showed incongruence within couples. One year after the diagnosis, there were incongruent perceptions only in sexual and psychological domains. There was little variation of the congruence within couples over time. Husbands and wives' mood disturbance, urinary and sexual bother, sense of coherence, and social support accounted for 25-63% of variance in couple congruence in the adjustment domains in the study periods. The findings suggested that there is couple congruence. Domains in which incongruence was observed are important targets for clinical interventions. Greater attention needs to be directed to assisting couples to recognize the differences between their perceptions, especially the ones related to the sexual symptoms and psychological distress. Copyright © 2010 John Wiley & Sons, Ltd.

  19. Getting connected: Both associative and semantic links structure semantic memory for newly learned persons.

    PubMed

    Wiese, Holger; Schweinberger, Stefan R

    2015-01-01

    The present study examined whether semantic memory for newly learned people is structured by visual co-occurrence, shared semantics, or both. Participants were trained with pairs of simultaneously presented (i.e., co-occurring) preexperimentally unfamiliar faces, which either did or did not share additionally provided semantic information (occupation, place of living, etc.). Semantic information could also be shared between faces that did not co-occur. A subsequent priming experiment revealed faster responses for both co-occurrence/no shared semantics and no co-occurrence/shared semantics conditions, than for an unrelated condition. Strikingly, priming was strongest in the co-occurrence/shared semantics condition, suggesting additive effects of these factors. Additional analysis of event-related brain potentials yielded priming in the N400 component only for combined effects of visual co-occurrence and shared semantics, with more positive amplitudes in this than in the unrelated condition. Overall, these findings suggest that both semantic relatedness and visual co-occurrence are important when novel information is integrated into person-related semantic memory.

  20. Semantically Interoperable XML Data

    PubMed Central

    Vergara-Niedermayr, Cristobal; Wang, Fusheng; Pan, Tony; Kurc, Tahsin; Saltz, Joel

    2013-01-01

    XML is ubiquitously used as an information exchange platform for web-based applications in healthcare, life sciences, and many other domains. Proliferating XML data are now managed through the latest native XML database technologies. XML data sources conforming to common XML schemas can be shared and integrated with syntactic interoperability. Semantic interoperability can be achieved through semantic annotations of data models using common data elements linked to concepts from ontologies. In this paper, we present a framework and software system to support the development of semantically interoperable, XML-based data sources that can be shared through a Grid infrastructure. We also present our work on supporting semantically validated XML data through semantic annotations for XML Schema, semantic validation, and semantic authoring of XML data. We demonstrate the use of the system for a biomedical database of medical image annotations and markups. PMID:25298789
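    The semantic validation step described above (checking that annotations resolve to ontology concepts, on top of ordinary syntactic XML validation) can be sketched in a few lines. This is a minimal illustration only: the element and attribute names and the concept IDs below are hypothetical, not the paper's actual schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical record: an image annotation whose conceptRef attribute
# points at an ontology concept ID (names are illustrative).
DOC = """
<imageAnnotation conceptRef="RADLEX:RID1301">
  <label>lung nodule</label>
</imageAnnotation>
"""

# Stand-in for an ontology index of known concept IDs.
KNOWN_CONCEPTS = {"RADLEX:RID1301", "RADLEX:RID39"}

def semantically_valid(xml_text, known):
    """Syntactic check (the text must be well-formed XML) plus a minimal
    semantic check: every conceptRef must resolve to a known concept."""
    root = ET.fromstring(xml_text)  # raises ParseError if not well-formed
    refs = [e.get("conceptRef") for e in root.iter() if e.get("conceptRef")]
    return all(r in known for r in refs)
```

A document that is syntactically valid XML can still fail this check if an annotation references a concept absent from the ontology index.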

  1. Improving Students' Attitudes toward Science Using Instructional Congruence

    ERIC Educational Resources Information Center

    Zain, Ahmad Nurulazam Md; Samsudin, Mohd Ali; Rohandi, Robertus; Jusoh, Azman

    2010-01-01

    The objective of this study was to improve students' attitudes toward science using instructional congruence. The study was conducted in Malaysia, in three low-performing secondary schools in the state of Penang. Data collected with an Attitudes in Science instrument were analysed using Rasch modeling. Qualitative data based on the reflections of…

  2. Olfactory Context-Dependent Memory and the Effects of Affective Congruency.

    PubMed

    Hackländer, Ryan P M; Bermeitinger, Christina

    2017-10-31

    Odors have been claimed to be particularly effective mnemonic cues, possibly because of the strong links between olfaction and emotion processing. Indeed, past research has shown that odors can bias processing towards affectively congruent material. In order to determine whether this processing bias translates to memory, we conducted 2 olfactory-enhanced-context memory experiments where we manipulated affective congruency between the olfactory context and to-be-remembered material. Given the presumed importance of valence to olfactory perception, we hypothesized that memory would be best for affectively congruent material in the olfactory enhanced context groups. Across the 2 experiments, groups which encoded and retrieved material in the presence of an odorant exhibited better memory performance than groups that did not have the added olfactory context during encoding and retrieval. While context-enhanced memory was exhibited in the presence of both pleasant and unpleasant odors, there was no indication that memory was dependent on affective congruency between the olfactory context and the to-be-remembered material. While the results provide further support for the notion that odors can act as powerful contextual mnemonic cues, they call into question the notion that affective congruency between context and focal material is important for later memory performance. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. Representations for Semantic Learning Webs: Semantic Web Technology in Learning Support

    ERIC Educational Resources Information Center

    Dzbor, M.; Stutt, A.; Motta, E.; Collins, T.

    2007-01-01

    Recent work on applying semantic technologies to learning has concentrated on providing novel means of accessing and making use of learning objects. However, this is unnecessarily limiting: semantic technologies will make it possible to develop a range of educational Semantic Web services, such as interpretation, structure-visualization, support…

  4. Person-Environment Congruence: A Response to the Moos Perspective.

    ERIC Educational Resources Information Center

    Walsh, W. Bruce

    1987-01-01

    Describes Moos' conceptual framework which delineates how perceptions of collective human environments (social climates) influence behavior, with congruence achieved when individuals adapt their preferences in selecting environments, as underemphasizing the role of individual variables and the actual environment. Advocates continual analysis of…

  5. Emotional congruence between clients and therapists and its effect on treatment outcome.

    PubMed

    Atzil-Slonim, Dana; Bar-Kalifa, Eran; Fisher, Hadar; Peri, Tuvia; Lutz, Wolfgang; Rubel, Julian; Rafaeli, Eshkol

    2018-01-01

    The present study aimed to (a) explore 2 indices of emotional congruence (temporal similarity and directional discrepancy) between clients' and therapists' ratings of their emotions as they cofluctuate session-by-session; and (b) examine whether client/therapist emotional congruence predicts clients' symptom relief and improved functioning. The sample comprised 109 clients treated by 62 therapists in a university setting. Clients and therapists self-reported their negative (NE) and positive emotions (PE) after each session. Symptom severity and functioning level were assessed at the beginning of each session using the clients' self-reports. To assess emotional congruence, an adaptation of West and Kenny's (2011) Truth and Bias model was applied. To examine the consequences of emotional congruence, polynomial regression and response surface analyses were conducted (Edwards & Parry, 1993). Clients and therapists were temporally similar in both PE and NE. Therapists experienced less intense PE on average, but did not experience more or less intense NE than their clients. Those therapists who experienced more intense NE than their clients were more temporally similar in their emotions to their clients. Therapist/client incongruence in both PE and NE predicted poorer next-session symptomatology; incongruence in PE was also associated with lower client next-session functioning. Session-level symptoms were better when therapists experienced more intense emotions (both PE and NE) than their clients. The findings highlight the importance of recognizing the dynamic nature of emotions in client-therapist interactions and the contribution of session-by-session emotional dynamics to outcomes. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
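    The polynomial regression and response surface approach (Edwards & Parry, 1993) mentioned above regresses an outcome on both ratings plus their quadratic terms, then reads congruence effects off the surface along the line of incongruence. The sketch below is a generic illustration of that technique under the assumption of simple numeric ratings; it is not the study's actual analysis code, and all names are hypothetical.

```python
import numpy as np

def response_surface_fit(client, therapist, outcome):
    """Fit outcome = b0 + b1*C + b2*T + b3*C^2 + b4*C*T + b5*T^2 and
    return the coefficients plus the slope and curvature along the
    line of incongruence (C = -T), the standard congruence diagnostics."""
    C = np.asarray(client, float)
    T = np.asarray(therapist, float)
    X = np.column_stack([np.ones_like(C), C, T, C**2, C * T, T**2])
    beta, *_ = np.linalg.lstsq(X, np.asarray(outcome, float), rcond=None)
    incong_slope = beta[1] - beta[2]          # b1 - b2
    incong_curv = beta[3] - beta[4] + beta[5]  # b3 - b4 + b5
    return beta, incong_slope, incong_curv
```

A negative curvature along the incongruence line indicates that the outcome worsens as client and therapist ratings diverge in either direction, which is the pattern the incongruence findings describe.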

  6. Longevity and Depreciation of Audiovisual Equipment.

    ERIC Educational Resources Information Center

    Post, Richard

    1987-01-01

    Describes results of survey of media service directors at public universities in Ohio to determine the expected longevity of audiovisual equipment. Use of the Delphi technique for estimates is explained, results are compared with an earlier survey done in 1977, and use of spreadsheet software to calculate depreciation is discussed. (LRW)
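    The spreadsheet depreciation calculation mentioned above reduces to simple arithmetic; the most common schedule for equipment of this kind is straight-line depreciation. A minimal sketch (function name and figures are illustrative, not taken from the survey):

```python
def straight_line_depreciation(cost, salvage, useful_life_years):
    """Annual straight-line depreciation charge for a piece of
    audiovisual equipment: spread (cost - salvage value) evenly
    over the expected useful life."""
    return (cost - salvage) / useful_life_years
```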

  7. Semantic Networks and Social Networks

    ERIC Educational Resources Information Center

    Downes, Stephen

    2005-01-01

    Purpose: To illustrate the need for social network metadata within semantic metadata. Design/methodology/approach: Surveys properties of social networks and the semantic web, suggests that social network analysis applies to semantic content, argues that semantic content is more searchable if social network metadata is merged with semantic web…

  8. Explaining semantic short-term memory deficits: Evidence for the critical role of semantic control

    PubMed Central

    Hoffman, Paul; Jefferies, Elizabeth; Lambon Ralph, Matthew A.

    2011-01-01

    Patients with apparently selective short-term memory (STM) deficits for semantic information have played an important role in developing multi-store theories of STM and challenge the idea that verbal STM is supported by maintaining activation in the language system. We propose that semantic STM deficits are not as selective as previously thought and can occur as a result of mild disruption to semantic control processes, i.e., mechanisms that bias semantic processing towards task-relevant aspects of knowledge and away from irrelevant information. We tested three semantic STM patients with tasks that tapped four aspects of semantic control: (i) resolving ambiguity between word meanings, (ii) sensitivity to cues, (iii) ignoring irrelevant information and (iv) detecting weak semantic associations. All were impaired in conditions requiring more semantic control, irrespective of the STM demands of the task, suggesting a mild, but task-general, deficit in regulating semantic knowledge. This mild deficit has a disproportionate effect on STM tasks because they have high intrinsic control demands: in STM tasks, control is required to keep information active when it is no longer available in the environment and to manage competition between items held in memory simultaneously. By re-interpreting the core deficit in semantic STM patients in this way, we are able to explain their apparently selective impairment without the need for a specialised STM store. Instead, we argue that semantic STM patients occupy the mildest end of spectrum of semantic control disorders. PMID:21195105

  9. Semantic memory in object use.

    PubMed

    Silveri, Maria Caterina; Ciccarelli, Nicoletta

    2009-10-01

    We studied five patients with semantic memory disorders, four with semantic dementia and one with herpes simplex virus encephalitis, to investigate the involvement of semantic conceptual knowledge in object use. Comparisons between patients who had semantic deficits of different severity, as well as the follow-up, showed that the ability to use objects was largely preserved when the deficit was mild but progressively decayed as the deficit became more severe. Naming was generally more impaired than object use. Production tasks (pantomime execution and actual object use) and comprehension tasks (pantomime recognition and action recognition) as well as functional knowledge about objects were impaired when the semantic deficit was severe. Semantic and unrelated errors were produced during object use, but actions were always fluent and patients performed normally on a novel tools task in which the semantic demand was minimal. Patients with severe semantic deficits scored borderline on ideational apraxia tasks. Our data indicate that functional semantic knowledge is crucial for using objects in a conventional way and suggest that non-semantic factors, mainly non-declarative components of memory, might compensate to some extent for semantic disorders and guarantee some residual ability to use very common objects independently of semantic knowledge.

  10. Disentangling effects of abiotic factors and biotic interactions on cross-taxon congruence in species turnover patterns of plants, moths and beetles.

    PubMed

    Duan, Meichun; Liu, Yunhui; Yu, Zhenrong; Baudry, Jacques; Li, Liangtao; Wang, Changliu; Axmacher, Jan C

    2016-04-01

    High cross-taxon congruence in species diversity patterns is essential for the use of surrogate taxa in biodiversity conservation, but presence and strength of congruence in species turnover patterns, and the relative contributions of abiotic environmental factors and biotic interactions towards this congruence, remain poorly understood. In our study, we used variation partitioning in multiple regressions to quantify cross-taxon congruence in community dissimilarities of vascular plants, geometrid and arctiine moths, and carabid beetles, subsequently investigating their respective underpinning by abiotic factors and biotic interactions. Significant cross-taxon congruence observed across all taxon pairs was linked to their similar responses towards elevation change. Changes in the vegetation composition were closely linked to carabid turnover, with vegetation structure and associated microclimatic conditions proposed causes of this link. In contrast, moth assemblages appeared to be dominated by generalist species whose turnover was weakly associated with vegetation changes. Overall, abiotic factors exerted a stronger influence on cross-taxon congruence across our study sites than biotic interactions. The weak congruence in turnover observed particularly between plants and moths highlights the importance of multi-taxon approaches based on groupings of taxa with similar turnovers, rather than the use of single surrogate taxa or environmental proxies, in biodiversity assessments.
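    The variation partitioning described above splits the variance explained in one taxon's dissimilarities into a fraction unique to abiotic predictors, a fraction unique to biotic predictors, and a shared fraction, by comparing R² values from regressions on each predictor set alone and on both together. The following is a minimal sketch of that general technique, not the authors' analysis code; all variable names are hypothetical.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

def variation_partitioning(X_abiotic, X_biotic, y):
    """Partition explained variance in y into unique-abiotic,
    unique-biotic, and shared fractions."""
    r2_a = r_squared(X_abiotic, y)                                 # abiotic alone
    r2_b = r_squared(X_biotic, y)                                  # biotic alone
    r2_ab = r_squared(np.column_stack([X_abiotic, X_biotic]), y)   # both together
    unique_abiotic = r2_ab - r2_b
    unique_biotic = r2_ab - r2_a
    shared = r2_a + r2_b - r2_ab
    return unique_abiotic, unique_biotic, shared
```

By construction the three fractions sum to the R² of the full model, so "abiotic factors exerted a stronger influence" corresponds to the unique abiotic fraction dominating the unique biotic one.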

  11. Disentangling effects of abiotic factors and biotic interactions on cross-taxon congruence in species turnover patterns of plants, moths and beetles

    PubMed Central

    Duan, Meichun; Liu, Yunhui; Yu, Zhenrong; Baudry, Jacques; Li, Liangtao; Wang, Changliu; Axmacher, Jan C.

    2016-01-01

    High cross-taxon congruence in species diversity patterns is essential for the use of surrogate taxa in biodiversity conservation, but presence and strength of congruence in species turnover patterns, and the relative contributions of abiotic environmental factors and biotic interactions towards this congruence, remain poorly understood. In our study, we used variation partitioning in multiple regressions to quantify cross-taxon congruence in community dissimilarities of vascular plants, geometrid and arctiine moths, and carabid beetles, subsequently investigating their respective underpinning by abiotic factors and biotic interactions. Significant cross-taxon congruence observed across all taxon pairs was linked to their similar responses towards elevation change. Changes in the vegetation composition were closely linked to carabid turnover, with vegetation structure and associated microclimatic conditions proposed causes of this link. In contrast, moth assemblages appeared to be dominated by generalist species whose turnover was weakly associated with vegetation changes. Overall, abiotic factors exerted a stronger influence on cross-taxon congruence across our study sites than biotic interactions. The weak congruence in turnover observed particularly between plants and moths highlights the importance of multi-taxon approaches based on groupings of taxa with similar turnovers, rather than the use of single surrogate taxa or environmental proxies, in biodiversity assessments. PMID:27032533

  12. Selective Attention Modulates the Direction of Audio-Visual Temporal Recalibration

    PubMed Central

    Ikumi, Nara; Soto-Faraco, Salvador

    2014-01-01

    Temporal recalibration of cross-modal synchrony has been proposed as a mechanism to compensate for timing differences between sensory modalities. However, far from the rich complexity of everyday life sensory environments, most studies to date have examined recalibration on isolated cross-modal pairings. Here, we hypothesize that selective attention might provide an effective filter to help resolve which stimuli are selected when multiple events compete for recalibration. We addressed this question by testing audio-visual recalibration following an adaptation phase where two opposing audio-visual asynchronies were present. The direction of voluntary visual attention, and therefore attention to one of the two possible asynchronies (flash leading or flash lagging), was manipulated using colour as a selection criterion. We found a shift in the point of subjective audio-visual simultaneity as a function of whether the observer had focused attention to audio-then-flash or to flash-then-audio groupings during the adaptation phase. A baseline adaptation condition revealed that this effect of endogenous attention was only effective toward the lagging flash. This hints at the role of exogenous capture and/or additional endogenous effects producing an asymmetry toward the leading flash. We conclude that selective attention helps promote selected audio-visual pairings to be combined and subsequently adjusted in time, but stimulus organization exerts a strong impact on recalibration. We tentatively hypothesize that the resolution of recalibration in complex scenarios involves the orchestration of top-down selection mechanisms and stimulus-driven processes. PMID:25004132

  13. Selective attention modulates the direction of audio-visual temporal recalibration.

    PubMed

    Ikumi, Nara; Soto-Faraco, Salvador

    2014-01-01

    Temporal recalibration of cross-modal synchrony has been proposed as a mechanism to compensate for timing differences between sensory modalities. However, far from the rich complexity of everyday life sensory environments, most studies to date have examined recalibration on isolated cross-modal pairings. Here, we hypothesize that selective attention might provide an effective filter to help resolve which stimuli are selected when multiple events compete for recalibration. We addressed this question by testing audio-visual recalibration following an adaptation phase where two opposing audio-visual asynchronies were present. The direction of voluntary visual attention, and therefore attention to one of the two possible asynchronies (flash leading or flash lagging), was manipulated using colour as a selection criterion. We found a shift in the point of subjective audio-visual simultaneity as a function of whether the observer had focused attention to audio-then-flash or to flash-then-audio groupings during the adaptation phase. A baseline adaptation condition revealed that this effect of endogenous attention was only effective toward the lagging flash. This hints at the role of exogenous capture and/or additional endogenous effects producing an asymmetry toward the leading flash. We conclude that selective attention helps promote selected audio-visual pairings to be combined and subsequently adjusted in time, but stimulus organization exerts a strong impact on recalibration. We tentatively hypothesize that the resolution of recalibration in complex scenarios involves the orchestration of top-down selection mechanisms and stimulus-driven processes.

  14. An Analysis of Audiovisual Machines for Individual Program Presentation. Research Memorandum Number Two.

    ERIC Educational Resources Information Center

    Finn, James D.; Weintraub, Royd

    The Medical Information Project's (MIP) aim of selecting the right type of audiovisual equipment for communicating new medical information to general practitioners of medicine was hampered by numerous difficulties. There is a lack of uniformity and standardization in audiovisual equipment that amounts to chaos. There is no evaluative literature on…

  15. Behavioral Science Design for Audio-Visual Software Development

    ERIC Educational Resources Information Center

    Foster, Dennis L.

    1974-01-01

    A discussion of the basic structure of the behavioral audio-visual production, which consists of objectives analysis, approach determination, technical production, fulfillment evaluation, program refinement, implementation, and follow-up. (Author)

  16. Congruences for central factorial numbers modulo powers of prime.

    PubMed

    Wang, Haiqing; Liu, Guodong

    2016-01-01

    Central factorial numbers are more closely related to the Stirling numbers than the other well-known special numbers, and they play a major role in a variety of branches of mathematics. In the present paper we prove some interesting congruences for central factorial numbers.

  17. The structure of semantic person memory: evidence from semantic priming in person recognition.

    PubMed

    Wiese, Holger

    2011-11-01

    This paper reviews research on the structure of semantic person memory as examined with semantic priming. In this experimental paradigm, a familiarity decision on a target face or written name is usually faster when it is preceded by a related as compared to an unrelated prime. This effect has been shown to be relatively short lived and susceptible to interfering items. Moreover, semantic priming can cross stimulus domains, such that a written name can prime a target face and vice versa. However, it remains controversial whether representations of people are stored in associative networks based on co-occurrence, or in more abstract semantic categories. In line with prominent cognitive models of face recognition, which explain semantic priming by shared semantic information between prime and target, recent research demonstrated that priming could be obtained from purely categorically related, non-associated prime/target pairs. Although strategic processes, such as expectancy and retrospective matching likely contribute, there is also evidence for a non-strategic contribution to priming, presumably related to spreading activation. Finally, a semantic priming effect has been demonstrated in the N400 event-related potential (ERP) component, which may reflect facilitated access to semantic information. It is concluded that categorical relatedness is one organizing principle of semantic person memory. ©2011 The British Psychological Society.

  18. Comparison between audio-only and audiovisual biofeedback for regulating patients' respiration during four-dimensional radiotherapy.

    PubMed

    Yu, Jesang; Choi, Ji Hoon; Ma, Sun Young; Jeung, Tae Sig; Lim, Sangwook

    2015-09-01

    To compare audio-only biofeedback to conventional audiovisual biofeedback for regulating patients' respiration during four-dimensional radiotherapy, limiting damage to healthy surrounding tissues caused by organ movement. Six healthy volunteers were assisted by audiovisual or audio-only biofeedback systems to regulate their respirations. Volunteers breathed through a mask developed for this study by following computer-generated guiding curves displayed on a screen, combined with instructional sounds. They then performed breathing following instructional sounds only. The guiding signals and the volunteers' respiratory signals were logged at 20 samples per second. The standard deviations between the guiding and respiratory curves for the audiovisual and audio-only biofeedback systems were 21.55% and 23.19%, respectively; the average correlation coefficients were 0.9778 and 0.9756, respectively. According to a paired t-test, the regularity of the six volunteers' respirations did not differ significantly between audiovisual and audio-only biofeedback. Audio-only biofeedback has many advantages, as patients do not require a mask and can quickly adapt to this method in the clinic.
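    The agreement measures reported above (standard deviation between guiding and respiratory curves, correlation coefficient, and a paired t-test across volunteers) can be sketched as follows. This is an illustrative reconstruction: the paper's exact formulas are not given here, and in particular the percentage normalization of the SD (relative to the guiding curve's range) is an assumption.

```python
import numpy as np

def respiration_agreement(guide, resp):
    """Agreement between a guiding curve and a measured respiratory curve:
    SD of the difference, expressed as a percentage of the guiding curve's
    range (an assumed normalization), plus the Pearson correlation."""
    guide = np.asarray(guide, float)
    resp = np.asarray(resp, float)
    diff = resp - guide
    sd_pct = 100.0 * diff.std() / (guide.max() - guide.min())
    r = np.corrcoef(guide, resp)[0, 1]
    return sd_pct, r

def paired_t(x, y):
    """Paired t statistic for per-volunteer agreement scores under the
    two biofeedback conditions."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
```

With six volunteers, the resulting t statistic would be compared against a t distribution with 5 degrees of freedom to test whether the two biofeedback methods differ.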

  19. Audiovisual video eyeglass distraction during dental treatment in children.

    PubMed

    Ram, Diana; Shapira, Joseph; Holan, Gideon; Magora, Florella; Cohen, Sarale; Davidovich, Esti

    2010-09-01

    To investigate the effect of audiovisual distraction (AVD) with video eyeglasses on the behavior of children undergoing dental restorative treatment and the satisfaction with this treatment as reported by children, parents, dental students, and experienced pediatric dentists. During restorative dental treatment, 61 children wore wireless audiovisual eyeglasses with earphones, and 59 received dental treatment under nitrous oxide sedation. A Frankl behavior rating score was assigned to each child. After each treatment, a Houpt behavior rating score was recorded by an independent observer. A visual analogue scale (VAS) score was obtained from children who wore AVD eyeglasses, their parents, and the clinician. General behavior during the AVD sessions, as rated by the Houpt scales, was excellent (rating 6) for 70% of the children, very good (rating 5) for 19%, good (rating 4) for 6%, and fair, poor, or aborted for only 5%. VAS scores showed 85% of the children, including those with poor Frankl ratings, to be satisfied with the AVD eyeglasses. Satisfaction of parents and clinicians was also high. Audiovisual eyeglasses offer an effective distraction tool for the alleviation of the unpleasantness and distress that arises during dental restorative procedures.

  1. Effects of working memory span on processing of lexical associations and congruence in spoken discourse.

    PubMed

    Boudewyn, Megan A; Long, Debra L; Swaab, Tamara Y

    2013-01-01

    The goal of this study was to determine whether variability in working memory (WM) capacity and cognitive control affects the processing of global discourse congruence and local associations among words when participants listened to short discourse passages. The final, critical word of each passage was either associated or unassociated with a preceding prime word (e.g., "He was not prepared for the fame and fortune/praise"). These critical words were also either congruent or incongruent with respect to the preceding discourse context [e.g., a context in which a prestigious prize was won (congruent) or in which the protagonist had been arrested (incongruent)]. We used multiple regression to assess the unique contribution of suppression ability (our measure of cognitive control) and WM capacity on the amplitude of individual N400 effects of congruence and association. Our measure of suppression ability did not predict the size of the N400 effects of association or congruence. However, as expected, the results showed that high WM capacity individuals were less sensitive to the presence of lexical associations (showed smaller N400 association effects). Furthermore, differences in WM capacity were related to differences in the topographic distribution of the N400 effects of discourse congruence. The topographic differences in the global congruence effects indicate differences in the underlying neural generators of the N400 effects, as a function of WM. This suggests additional, or at a minimum, distinct, processing on the part of higher capacity individuals when tasked with integrating incoming words into the developing discourse representation.
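
    The regression logic in this record can be sketched concretely. The following minimal Python example (my own illustration, not the authors' code; all participant numbers are invented) fits an ordinary least squares model with two predictors, mirroring how WM capacity and suppression ability would each contribute uniquely to predicting an individual's N400 effect amplitude.

```python
# Illustrative sketch (not the authors' analysis): OLS with two predictors,
# asking whether WM span and suppression ability uniquely predict the
# amplitude of an individual's N400 effect. All numbers are invented.

def ols(X, y):
    """Fit y = b0 + b1*x1 + ... by solving the normal equations (X'X)b = X'y."""
    n, p = len(X), len(X[0])
    xtx = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)]
           for a in range(p)]
    xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    # Gaussian elimination with partial pivoting
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, p):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, p):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):
        beta[r] = (xty[r] - sum(xtx[r][c] * beta[c]
                                for c in range(r + 1, p))) / xtx[r][r]
    return beta

# Hypothetical per-participant rows: [intercept, WM span, suppression score]
X = [[1.0, wm, sup] for wm, sup in [(2.0, 0.5), (3.0, 0.1), (3.5, 0.7),
                                    (4.0, 0.2), (4.5, 0.9), (5.0, 0.4)]]
# Hypothetical N400 association-effect amplitudes: shrinking as WM span grows
y = [5.1, 4.2, 3.9, 3.1, 2.8, 2.2]
b0, b_wm, b_sup = ols(X, y)
print(b_wm < 0)  # higher WM span predicts a smaller association effect
```

    A negative coefficient on the WM-span column corresponds to the reported pattern: higher-capacity listeners show smaller N400 association effects.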

  2. Audiovisual speech integration in the superior temporal region is dysfunctional in dyslexia.

    PubMed

    Ye, Zheng; Rüsseler, Jascha; Gerth, Ivonne; Münte, Thomas F

    2017-07-25

    Dyslexia is an impairment of reading and spelling that affects both children and adults even after many years of schooling. Dyslexic readers have deficits in the integration of auditory and visual inputs but the neural mechanisms of the deficits are still unclear. This fMRI study examined the neural processing of auditorily presented German numbers 0-9 and videos of lip movements of a German native speaker voicing numbers 0-9 in unimodal (auditory or visual) and bimodal (always congruent) conditions in dyslexic readers and their matched fluent readers. We confirmed results of previous studies that the superior temporal gyrus/sulcus plays a critical role in audiovisual speech integration: fluent readers showed greater superior temporal activations for combined audiovisual stimuli than auditory-/visual-only stimuli. Importantly, such an enhancement effect was absent in dyslexic readers. Moreover, the auditory network (bilateral superior temporal regions plus medial PFC) was dynamically modulated during audiovisual integration in fluent, but not in dyslexic readers. These results suggest that superior temporal dysfunction may underlie poor audiovisual speech integration in readers with dyslexia. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  3. SSWAP: A Simple Semantic Web Architecture and Protocol for Semantic Web Services

    USDA-ARS?s Scientific Manuscript database

    SSWAP (Simple Semantic Web Architecture and Protocol) is an architecture, protocol, and platform for using reasoning to semantically integrate heterogeneous disparate data and services on the web. SSWAP is the driving technology behind the Virtual Plant Information Network, an NSF-funded semantic w...

  4. 78 FR 63243 - Certain Audiovisual Components and Products Containing the Same; Commission Determination To...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-23

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-837] Certain Audiovisual Components and Products Containing the Same; Commission Determination To Review a Final Initial Determination Finding a... section 337 as to certain audiovisual components and products containing the same with respect to claims 1...

  5. MMPI--2 Code-Type Congruence of Injured Workers

    ERIC Educational Resources Information Center

    Livingston, Ronald B.; Jennings, Earl; Colotla, Victor A.; Reynolds, Cecil R.; Shercliffe, Regan J.

    2006-01-01

    In this study, the authors examined the stability of Minnesota Multiphasic Personality Inventory--2 (J. N. Butcher, W. G. Dahlstrom, J. R. Graham, A. Tellegen, & B. Kaemmer, 1989) code types in a sample of 94 injured workers with a mean test-retest interval of 21.3 months (SD = 14.1). Congruence rates for undefined code types were 34% for…

  6. [Schizophrenia and semantic priming effects].

    PubMed

    Lecardeur, L; Giffard, B; Eustache, F; Dollfus, S

    2006-01-01

    This article is a review of studies using the semantic priming paradigm to assess the functioning of semantic memory in schizophrenic patients. Semantic priming describes the phenomenon of increasing the speed with which a string of letters (the target) is recognized as a word (lexical decision task) by presenting to the subject a semantically related word (the prime) prior to the appearance of the target word. This semantic priming is linked to both automatic and controlled processes depending on experimental conditions (stimulus onset asynchrony (SOA), percentage of related words and explicit memory instructions). The automatic process, observed with short SOAs, a low percentage of related words and instructions asking only to process the target, can be linked to the "automatic spreading activation" through the semantic network. Controlled processes involve "semantic matching" (the number of related and unrelated pairs influences the subject's decision) and "expectancy" (the prime leads the subject to generate an expectancy set of potential targets of the prime). These processes can be observed whatever the SOA for the former and with long SOAs for the latter, but both only with a high percentage of related words and explicit memory instructions. Studies evaluating semantic priming effects in schizophrenia show conflicting results: schizophrenic patients can present hyperpriming (the semantic priming effect is larger in patients than in controls), hypopriming (the effect is smaller in patients than in controls) or semantic priming effects equal to those of control subjects. These results could be associated with a global impairment of controlled processes in schizophrenia, essentially a dysfunction of the semantic matching process. On the other hand, the efficiency of the automatic spreading activation process remains controversial. These discrepancies could be linked to the different experimental conditions used (duration of SOA, proportion of related pairs and instructions), which
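
    The priming-effect comparisons described in this review come down to simple reaction-time arithmetic. A hedged sketch (my illustration, with invented reaction times): the priming effect is the mean RT saving for related-prime targets, and hyper- vs. hypopriming compares that effect across groups.

```python
# Illustrative only: invented lexical-decision reaction times (ms).
from statistics import mean

def priming_effect(rt_related, rt_unrelated):
    """Semantic priming effect in ms: unrelated-prime RT minus related-prime RT."""
    return mean(rt_unrelated) - mean(rt_related)

controls = priming_effect([520, 540, 530], [580, 600, 590])  # 60 ms effect
patients = priming_effect([500, 520, 510], [600, 620, 610])  # 100 ms effect
# A larger effect in patients than controls would be labelled hyperpriming
print("hyperpriming" if patients > controls else "hypopriming or equal")
```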

  7. Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality.

    PubMed

    Cecere, Roberto; Gross, Joachim; Thut, Gregor

    2016-06-01

    The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that audiovisual temporal binding mechanisms for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on the leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration.

  8. Inactivation of Primate Prefrontal Cortex Impairs Auditory and Audiovisual Working Memory.

    PubMed

    Plakke, Bethany; Hwang, Jaewon; Romanski, Lizabeth M

    2015-07-01

    The prefrontal cortex is associated with cognitive functions that include planning, reasoning, decision-making, working memory, and communication. Neurophysiology and neuropsychology studies have established that dorsolateral prefrontal cortex is essential in spatial working memory while the ventral frontal lobe processes language and communication signals. Single-unit recordings in nonhuman primates have shown that ventral prefrontal (VLPFC) neurons integrate face and vocal information and are active during audiovisual working memory. However, whether VLPFC is essential in remembering face and voice information is unknown. We therefore trained nonhuman primates in an audiovisual working memory paradigm using naturalistic face-vocalization movies as memoranda. We inactivated VLPFC with reversible cortical cooling and examined performance when faces, vocalizations, or both faces and vocalizations had to be remembered. We found that VLPFC inactivation impaired subjects' performance in audiovisual and auditory-alone versions of the task. In contrast, VLPFC inactivation did not disrupt visual working memory. Our studies demonstrate the importance of VLPFC in auditory and audiovisual working memory for social stimuli but suggest a different role for VLPFC in unimodal visual processing. The ventral frontal lobe, or inferior frontal gyrus, plays an important role in audiovisual communication in the human brain. Studies with nonhuman primates have found that neurons within ventral prefrontal cortex (VLPFC) encode both faces and vocalizations and that VLPFC is active when animals need to remember these social stimuli. In the present study, we temporarily inactivated VLPFC by cooling the cortex while nonhuman primates performed a working memory task. This impaired the ability of subjects to remember a face and vocalization pair or just the vocalization alone. Our work highlights the importance of the primate VLPFC in the processing of faces and vocalizations in a manner that

  9. Audiovisual distraction for pain relief in paediatric inpatients: A crossover study.

    PubMed

    Oliveira, N C A C; Santos, J L F; Linhares, M B M

    2017-01-01

    Pain is a stressful experience that can have a negative impact on child development. The aim of this crossover study was to examine the efficacy of audiovisual distraction for acute pain relief in paediatric inpatients. The sample comprised 40 inpatients (6-11 years) who underwent painful puncture procedures. The participants were randomized into two groups, and all children received the intervention and served as their own controls. Stress and pain-catastrophizing assessments were initially performed using the Child Stress Scale and Pain Catastrophizing Scale for Children, with the aim of controlling these variables. The pain assessment was performed using a Visual Analog Scale and the Faces Pain Scale-Revised after the painful procedures. Group 1 received audiovisual distraction before and during the puncture procedure, which was performed again without intervention on another day. The procedure was reversed in Group 2. Audiovisual distraction used animated short films. A 2 × 2 × 2 analysis of variance for the crossover design was performed, with a 5% level of statistical significance. The two groups had similar baseline measures of stress and pain catastrophizing. A significant difference was found between periods with and without distraction in both groups, in which scores on both pain scales were lower during distraction compared with no intervention. The sequence of exposure to the distraction intervention in both groups and first versus second painful procedure during which the distraction was performed also significantly influenced the efficacy of the distraction intervention. Audiovisual distraction effectively reduced the intensity of pain perception in paediatric inpatients. The crossover study design provides a better understanding of the effects of distraction for acute pain management. Audiovisual distraction was a powerful and effective non-pharmacological intervention for pain relief in paediatric inpatients. The effects were
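
    The heart of a crossover design is that each child is their own control, so the basic contrast is the within-subject difference between the two periods. The full analysis above also models sequence and period effects; the minimal paired contrast below (my sketch, with invented pain scores) shows only the core within-subject comparison.

```python
# Hedged sketch: paired within-subject contrast for a crossover design.
# VAS pain scores (0-10) below are invented for illustration.
from statistics import mean, stdev
from math import sqrt

def paired_t(with_distraction, without_distraction):
    """Paired t statistic on within-subject differences (without - with)."""
    diffs = [b - a for a, b in zip(with_distraction, without_distraction)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Hypothetical scores for 8 children measured in both periods
pain_with = [2, 3, 1, 4, 2, 3, 2, 1]
pain_without = [5, 6, 3, 7, 4, 6, 5, 3]
t = paired_t(pain_with, pain_without)
print(t > 2)  # a large positive t: lower pain with distraction
```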

  10. Establishment of neurovascular congruency in the mouse whisker system by an independent patterning mechanism.

    PubMed

    Oh, Won-Jong; Gu, Chenghua

    2013-10-16

    Nerves and vessels often run parallel to one another, a phenomenon that reflects their functional interdependency. Previous studies have suggested that neurovascular congruency in planar tissues such as skin is established through a "one-patterns-the-other" model, in which either the nervous system or the vascular system precedes developmentally and then instructs the other system to form using its established architecture as a template. Here, we find that, in tissues with complex three-dimensional structures such as the mouse whisker system, neurovascular congruency does not follow the previous model but rather is established via a mechanism in which nerves and vessels are patterned independently. Given the diversity of neurovascular structures in different tissues, guidance signals emanating from a central organizer in the specific target tissue may act as an important mechanism to establish neurovascular congruency patterns that facilitate unique target tissue function. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. Verbal and non-verbal semantic impairment: From fluent primary progressive aphasia to semantic dementia

    PubMed Central

    Senaha, Mirna Lie Hosogi; Caramelli, Paulo; Porto, Claudia Sellitto; Nitrini, Ricardo

    2007-01-01

    Selective disturbances of semantic memory have attracted the interest of many investigators and the question of the existence of single or multiple semantic systems remains a very controversial theme in the literature. Objectives To discuss the question of multiple semantic systems based on a longitudinal study of a patient who progressed from fluent primary progressive aphasia to semantic dementia. Methods A 66 year-old woman with selective impairment of semantic memory was examined on two occasions, undergoing neuropsychological and language evaluations, the results of which were compared to those of three paired control individuals. Results In the first evaluation, physical examination was normal and the score on the Mini-Mental State Examination was 26. Language evaluation revealed fluent speech, anomia, disturbance in word comprehension, preservation of the syntactic and phonological aspects of the language, as well as surface dyslexia and dysgraphia. Autobiographical and episodic memories were relatively preserved. In semantic memory tests, the following dissociation was found: disturbance of verbal semantic memory with preservation of non-verbal semantic memory. Magnetic resonance of the brain revealed marked atrophy of the left anterior temporal lobe. After 14 months, the difficulties in verbal semantic memory had become more severe and the semantic disturbance, limited initially to the linguistic sphere, had worsened to involve non-verbal domains. Conclusions Given the dissociation found in the first examination, we believe there is sufficient clinical evidence to refute the existence of a unitary semantic system. PMID:29213389

  12. Vicarious audiovisual learning in perfusion education.

    PubMed

    Rath, Thomas E; Holt, David W

    2010-12-01

    Perfusion technology is a mechanical and visual science traditionally taught with didactic instruction combined with clinical experience. It is difficult to provide perfusion students the opportunity to experience difficult clinical situations, set up complex perfusion equipment, or observe corrective measures taken during catastrophic events because of patient safety concerns. Although high fidelity simulators offer exciting opportunities for future perfusion training, we explore the use of a less costly low fidelity form of simulation instruction, vicarious audiovisual learning. Two low fidelity modes of instruction were compared: description with text and a vicarious, first-person audiovisual production depicting the same content. Students (n = 37) sampled from five North American perfusion schools were prospectively randomized to one of two online learning modules, text or video. These modules described the setup and operation of the MAQUET ROTAFLOW stand-alone centrifugal console and pump. Using a 10-question multiple-choice test, students were assessed immediately after viewing the module (test #1) and then again 2 weeks later (test #2) to determine cognition and recall of the module content. In addition, students completed a questionnaire assessing the learning preferences of today's perfusion student. Mean test scores from test #1 for video learners (n = 18) were significantly higher (88.89%) than for text learners (n = 19) (74.74%), (p < .05). The same was true for test #2, where video learners (n = 10) had an average score of 77% while text learners (n = 9) scored 60% (p < .05). Survey results indicated video learners were more satisfied with their learning module than text learners. Vicarious audiovisual learning modules may be an efficacious, low-cost means of delivering perfusion training on subjects such as equipment setup and operation. Video learning appears to improve cognition and retention of learned content and may play an important role in how we

  13. Commensurate distances and similar motifs in genetic congruence and protein interaction networks in yeast

    PubMed Central

    Ye, Ping; Peyser, Brian D; Spencer, Forrest A; Bader, Joel S

    2005-01-01

    Background In a genetic interaction, the phenotype of a double mutant differs from the combined phenotypes of the underlying single mutants. When the single mutants have no growth defect, but the double mutant is lethal or exhibits slow growth, the interaction is termed synthetic lethality or synthetic fitness. These genetic interactions reveal gene redundancy and compensating pathways. Recently available large-scale data sets of genetic interactions and protein interactions in Saccharomyces cerevisiae provide a unique opportunity to elucidate the topological structure of biological pathways and how genes function in these pathways. Results We have defined congruent genes as pairs of genes with similar sets of genetic interaction partners and constructed a genetic congruence network by linking congruent genes. By comparing path lengths in three types of networks (genetic interaction, genetic congruence, and protein interaction), we discovered that high genetic congruence not only exhibits correlation with direct protein interaction linkage but also exhibits commensurate distance with the protein interaction network. However, consistent distances were not observed between genetic and protein interaction networks. We also demonstrated that congruence and protein networks are enriched with motifs that indicate network transitivity, while the genetic network has both transitive (triangle) and intransitive (square) types of motifs. These results suggest that robustness of yeast cells to gene deletions is due in part to two complementary pathways (square motif) or three complementary pathways, any two of which are required for viability (triangle motif). Conclusion Genetic congruence is superior to genetic interaction in prediction of protein interactions and function associations. Genetically interacting pairs usually belong to parallel compensatory pathways, which can generate transitive motifs (any two of three pathways needed) or intransitive motifs (either of two
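
    The core construction in this record is easy to make concrete: two genes are "congruent" when their sets of genetic interaction partners overlap strongly. A minimal sketch (my illustration, not the authors' pipeline) scores overlap with Jaccard similarity and links pairs above a threshold; the toy partner sets are invented.

```python
# Hedged sketch of a genetic congruence network: link gene pairs whose
# genetic-interaction partner sets are similar. Data below are invented.
from itertools import combinations

def congruence_network(partners, threshold=0.5):
    """Link gene pairs whose partner sets have Jaccard similarity >= threshold."""
    edges = []
    for g1, g2 in combinations(sorted(partners), 2):
        a, b = partners[g1], partners[g2]
        jaccard = len(a & b) / len(a | b)
        if jaccard >= threshold:
            edges.append((g1, g2, round(jaccard, 2)))
    return edges

# Hypothetical genetic-interaction partners for five yeast genes
partners = {
    "YAL001": {"A", "B", "C", "D"},
    "YBL002": {"A", "B", "C"},       # similar partners -> congruent with YAL001
    "YCL003": {"X", "Y"},
    "YDL004": {"X", "Y", "Z"},       # similar partners -> congruent with YCL003
    "YEL005": {"A", "Z"},
}
print(congruence_network(partners))
```

    Genes linked this way need not interact genetically with each other; as the abstract notes, it is this congruence relation, not the raw interaction, that best predicts direct protein interaction.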

  14. Audio-Visual Speech Perception Is Special

    ERIC Educational Resources Information Center

    Tuomainen, J.; Andersen, T.S.; Tiippana, K.; Sams, M.

    2005-01-01

    In face-to-face conversation speech is perceived by ear and eye. We studied the prerequisites of audio-visual speech perception by using perceptually ambiguous sine wave replicas of natural speech as auditory stimuli. When the subjects were not aware that the auditory stimuli were speech, they showed only negligible integration of auditory and…

  15. Automated social skills training with audiovisual information.

    PubMed

    Tanaka, Hiroki; Sakti, Sakriani; Neubig, Graham; Negoro, Hideki; Iwasaka, Hidemi; Nakamura, Satoshi

    2016-08-01

    People with social communication difficulties tend to have superior skills using computers, and as a result computer-based social skills training systems are flourishing. Social skills training, performed by human trainers, is a well-established method to obtain appropriate skills in social interaction. Previous works have attempted to automate one or several parts of social skills training through human-computer interaction. However, while previous work on simulating social skills training considered only acoustic and linguistic features, human social skills trainers take into account visual features (e.g. facial expression, posture). In this paper, we create and evaluate a social skills training system that closes this gap by considering audiovisual features: smiling ratio, head yaw, and head pitch. An experimental evaluation measures the difference in effectiveness of social skills training when using audio features alone versus audiovisual features. Results showed that the visual features were effective to improve users' social skills.

  16. High Performance Semantic Factoring of Giga-Scale Semantic Graph Databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joslyn, Cliff A.; Adolf, Robert D.; Al-Saffar, Sinan

    2010-10-04

    As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to bring high performance computational resources to bear on their analysis, interpretation, and visualization, especially with respect to their innate semantic structure. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multithreaded architecture of the Cray XMT platform, conventional clusters, and large data stores. In this paper we describe that architecture and present the results of deploying it for the analysis of the Billion Triple dataset with respect to its semantic factors.
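
    The data model being processed at scale here is the subject-predicate-object triple. As a hedged illustration (nothing like the Cray XMT pipeline described above), a toy in-memory store shows how such a database answers pattern queries over triples:

```python
# Minimal illustrative triple store: subject-predicate-object tuples with
# wildcard pattern matching, the basic query operation over a semantic graph.

class TripleStore:
    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def match(self, s=None, p=None, o=None):
        """Return triples matching the pattern; None acts as a wildcard."""
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

store = TripleStore()
store.add("ex:Alice", "ex:knows", "ex:Bob")
store.add("ex:Bob", "ex:knows", "ex:Carol")
store.add("ex:Alice", "rdf:type", "ex:Person")
print(store.match(p="ex:knows"))  # both "knows" triples
```

    Production systems replace the linear scan with indexes over each position (and combinations of positions), which is where large multithreaded architectures pay off.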

  17. Interest Congruency as a Moderator of the Relationships between Job Tenure and Job Satisfaction and Mental Health

    ERIC Educational Resources Information Center

    Klein, Kenneth; Wiener, Yoash

    1977-01-01

    In a sample of 54 middle managers, significant moderator effects were found for the mental health indices of self-esteem, life-satisfaction, and overall mental health and for satisfaction with supervision. These indices correlated positively with job tenure for high congruency individuals. For low congruency individuals, the obtained correlations…

  18. Comparison of audio and audiovisual measures of adult stuttering: Implications for clinical trials.

    PubMed

    O'Brian, Sue; Jones, Mark; Onslow, Mark; Packman, Ann; Menzies, Ross; Lowe, Robyn

    2015-04-15

    This study investigated whether measures of percentage syllables stuttered (%SS) and stuttering severity ratings with a 9-point scale differ when made from audiovisual compared with audio-only recordings. Four experienced speech-language pathologists measured %SS and assigned stuttering severity ratings to 10-minute audiovisual and audio-only recordings of 36 adults. There was a mean 18% increase in %SS scores when samples were presented in audiovisual compared with audio-only mode. This result was consistent across both higher and lower %SS scores and was found to be directly attributable to counts of stuttered syllables rather than the total number of syllables. There was no significant difference between stuttering severity ratings made from the two modes. In clinical trials research, when using %SS as the primary outcome measure, audiovisual samples would be preferred as long as clear, good quality, front-on images can be easily captured. Alternatively, stuttering severity ratings may be a more valid measure to use as they correlate well with %SS and values are not influenced by the presentation mode.
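
    The primary outcome measure here is plain arithmetic, which makes the reported pattern easy to see: if audiovisual presentation raises the count of stuttered syllables while total syllable counts are unchanged, %SS rises by the same proportion. A small sketch with invented counts:

```python
# Illustrative only: %SS with invented syllable counts, showing how an 18%
# rise in stuttered-syllable counts alone inflates %SS by exactly 18%.

def percent_syllables_stuttered(stuttered, total):
    """%SS = stuttered syllables / total syllables * 100."""
    return 100.0 * stuttered / total

audio_only = percent_syllables_stuttered(50, 1000)   # hypothetical audio-only count
audiovisual = percent_syllables_stuttered(59, 1000)  # 18% more stuttered syllables seen
print(audio_only, audiovisual)
```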

  19. From Data to Semantic Information

    NASA Astrophysics Data System (ADS)

    Floridi, Luciano

    2003-06-01

    There is no consensus yet on the definition of semantic information. This paper contributes to the current debate by criticising and revising the Standard Definition of semantic Information (SDI) as meaningful data, in favour of the Dretske-Grice approach: meaningful and well-formed data constitute semantic information only if they also qualify as contingently truthful. After a brief introduction, SDI is criticised for providing necessary but insufficient conditions for the definition of semantic information. SDI is incorrect because truth-values do not supervene on semantic information, and misinformation (that is, false semantic information) is not a type of semantic information, but pseudo-information, that is not semantic information at all. This is shown by arguing that none of the reasons for interpreting misinformation as a type of semantic information is convincing, whilst there are compelling reasons to treat it as pseudo-information. As a consequence, SDI is revised to include a necessary truth-condition. The last section summarises the main results of the paper and indicates the important implications of the revised definition for the analysis of the deflationary theories of truth, the standard definition of knowledge and the classic, quantitative theory of semantic information.

  20. Children with a history of SLI show reduced sensitivity to audiovisual temporal asynchrony: an ERP study.

    PubMed

    Kaganovich, Natalya; Schumaker, Jennifer; Leonard, Laurence B; Gustafson, Dana; Macias, Danielle

    2014-08-01

    The authors examined whether school-age children with a history of specific language impairment (H-SLI), their peers with typical development (TD), and adults differ in sensitivity to audiovisual temporal asynchrony and whether such difference stems from the sensory encoding of audiovisual information. Fifteen H-SLI children, 15 TD children, and 15 adults judged whether a flashed explosion-shaped figure and a 2-kHz pure tone occurred simultaneously. The stimuli were presented at 0-, 100-, 200-, 300-, 400-, and 500-ms temporal offsets. This task was combined with EEG recordings. H-SLI children were profoundly less sensitive to temporal separations between auditory and visual modalities compared with their TD peers. Those H-SLI children who performed better at simultaneity judgment also had higher language aptitude. TD children were less accurate than adults, revealing a remarkably prolonged developmental course of the audiovisual temporal discrimination. Analysis of early event-related potential components suggested that poor sensory encoding was not a key factor in H-SLI children's reduced sensitivity to audiovisual asynchrony. Audiovisual temporal discrimination is impaired in H-SLI children and is still immature during mid-childhood in TD children. The present findings highlight the need for further evaluation of the role of atypical audiovisual processing in the development of SLI.
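
    With simultaneity judgments collected at increasing audiovisual offsets, one rough summary of temporal sensitivity is the offset at which the proportion of "simultaneous" responses falls below 50%. The sketch below (my illustration, not the authors' analysis; response proportions invented) finds that point by linear interpolation between tested offsets, so that a shallower decline yields a wider apparent window.

```python
# Hedged sketch: estimate where P("simultaneous") crosses a criterion,
# by linear interpolation between tested offsets. Proportions are invented.

def window_edge(offsets_ms, p_simultaneous, criterion=0.5):
    """Interpolate the offset where P('simultaneous') drops below the criterion."""
    points = list(zip(offsets_ms, p_simultaneous))
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if y0 >= criterion > y1:
            return x0 + (y0 - criterion) * (x1 - x0) / (y0 - y1)
    return None  # never crosses within the tested range

offsets = [0, 100, 200, 300, 400, 500]
adults = [0.95, 0.90, 0.60, 0.30, 0.10, 0.05]    # steeper decline: narrower window
children = [0.95, 0.92, 0.80, 0.60, 0.40, 0.20]  # shallower decline: wider window
print(window_edge(offsets, adults) < window_edge(offsets, children))
```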