Science.gov

Sample records for audiovisual semantic congruency

  1. School-aged children can benefit from audiovisual semantic congruency during memory encoding.

    PubMed

    Heikkilä, Jenni; Tiippana, Kaisa

    2016-05-01

    Although we live in a multisensory world, children's memory has usually been studied with a focus on only one sensory modality at a time. In this study, we investigated how audiovisual encoding affects recognition memory. Children (n = 114) from three age groups (8, 10 and 12 years) memorized auditory or visual stimuli presented with a semantically congruent, incongruent or non-semantic stimulus in the other modality during encoding. Subsequent recognition memory performance was better for auditory or visual stimuli initially presented together with a semantically congruent stimulus in the other modality than for stimuli accompanied by a non-semantic stimulus in the other modality. This congruency effect was observed for pictures presented with sounds, for sounds presented with pictures, for spoken words presented with pictures and for written words presented with spoken words. The present results show that semantically congruent multisensory experiences during encoding can improve memory performance in school-aged children. PMID:26048162

  2. Neural correlates of audiovisual integration of semantic category information.

    PubMed

    Hu, Zhonghua; Zhang, Ruiling; Zhang, Qinglin; Liu, Qiang; Li, Hong

    2012-04-01

    Previous studies have found a late frontal-central audiovisual interaction during the time period of about 150-220 ms post-stimulus. However, it is unclear which process this audiovisual interaction is related to: the processing of acoustical features, or the classification of stimuli? To investigate this question, event-related potentials were recorded during a word-categorization task with stimuli presented in the auditory-visual modality. In the experiment, the congruency of the visual and auditory stimuli was manipulated. Results showed that within the window of about 180-210 ms post-stimulus, more positive values were elicited by category-congruent audiovisual stimuli than by category-incongruent audiovisual stimuli. This indicates that the late frontal-central audiovisual interaction is related to audiovisual integration of semantic category information.

  3. Preference for Audiovisual Speech Congruency in Superior Temporal Cortex.

    PubMed

    Lüttke, Claudia S; Ekman, Matthias; van Gerven, Marcel A J; de Lange, Floris P

    2016-01-01

    Auditory speech perception can be altered by concurrent visual information. The superior temporal cortex is an important combining site for this integration process. This area was previously found to be sensitive to audiovisual congruency. However, the direction of this congruency effect (i.e., stronger or weaker activity for congruent compared to incongruent stimulation) has been more equivocal. Here, we used fMRI to examine the neural responses of human participants during the McGurk illusion--in which auditory /aba/ and visual /aga/ inputs are fused to perceived /ada/--in a large homogeneous sample of participants who consistently experienced this illusion. This enabled us to compare the neuronal responses during congruent audiovisual stimulation with incongruent audiovisual stimulation leading to the McGurk illusion, while avoiding the possible confounding factor of sensory surprise that can occur when McGurk stimuli are only occasionally perceived. We found larger activity for congruent audiovisual stimuli than for incongruent (McGurk) stimuli in bilateral superior temporal cortex, extending into the primary auditory cortex. This finding suggests that the superior temporal cortex responds preferentially when auditory and visual inputs support the same representation. PMID:26351991

  5. Multisensory integration in complete unawareness: evidence from audiovisual congruency priming.

    PubMed

    Faivre, Nathan; Mudrik, Liad; Schwartz, Naama; Koch, Christof

    2014-11-01

    Multisensory integration is thought to require conscious perception. Although previous studies have shown that an invisible stimulus could be integrated with an audible one, none have demonstrated integration of two subliminal stimuli of different modalities. Here, pairs of identical or different audiovisual target letters (the sound /b/ with the written letter "b" or "m," respectively) were preceded by pairs of masked identical or different audiovisual prime digits (the sound /6/ with the written digit "6" or "8," respectively). In three experiments, awareness of the audiovisual digit primes was manipulated, such that participants were either unaware of the visual digit, the auditory digit, or both. Priming of the semantic relations between the auditory and visual digits was found in all experiments. Moreover, a further experiment showed that unconscious multisensory integration was not obtained when participants did not undergo prior conscious training of the task. This suggests that following conscious learning, unconscious processing suffices for multisensory integration. PMID:25269620

  6. Infants' preference for native audiovisual speech dissociated from congruency preference.

    PubMed

    Shaw, Kathleen; Baart, Martijn; Depowski, Nicole; Bortfeld, Heather

    2015-01-01

    Although infant speech perception is often studied in isolated modalities, infants' experience with speech is largely multimodal (i.e., the speech sounds they hear are accompanied by articulating faces). Across two experiments, we tested infants' sensitivity to the relationship between the auditory and visual components of audiovisual speech in their native (English) and non-native (Spanish) language. In Experiment 1, infants' looking times were measured during a preferential looking task in which they saw two simultaneous visual speech streams articulating a story, one in English and the other in Spanish, while they heard either the English or the Spanish version of the story. In Experiment 2, looking times from another group of infants were measured as they watched single displays of congruent and incongruent combinations of English and Spanish audio and visual speech streams. Findings demonstrated an age-related increase in looking towards the native relative to non-native visual speech stream when accompanied by the corresponding (native) auditory speech. This increase in native language preference did not appear to be driven by a difference in preference for native vs. non-native audiovisual congruence, as we observed no difference in looking times at the audiovisual streams in Experiment 2.

  7. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention.

    PubMed

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei

    2016-01-13

    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamic facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distributed set of areas, including heteromodal areas and brain areas encoding the attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features.

  8. ERPs and Contextual Semantic Discrimination: Degrees of Congruence in Wakefulness and Sleep

    ERIC Educational Resources Information Center

    Ibanez, Agustin; Lopez, Vladimir; Cornejo, Carlos

    2006-01-01

    This study explores whether the brain can discriminate degrees of semantic congruency during wakefulness and sleep. Experiment 1 was conducted during wakefulness to test degrees of congruency by means of N400 amplitude. In Experiment 2, the same paradigm was applied to a different group of participants during natural night sleep. Stimuli were 108…

  9. Semantic congruence affects hippocampal response to repetition of visual associations.

    PubMed

    McAndrews, Mary Pat; Girard, Todd A; Wilkins, Leanne K; McCormick, Cornelia

    2016-09-01

    Recent research has shown complementary engagement of the hippocampus and medial prefrontal cortex (mPFC) in encoding and retrieving associations based on pre-existing or experimentally-induced schemas, such that the latter supports schema-congruent information whereas the former is more engaged for incongruent or novel associations. Here, we attempted to explore some of the boundary conditions in the relative involvement of those structures in short-term memory for visual associations. The current literature is based primarily on intentional evaluation of schema-target congruence and on study-test paradigms with relatively long delays between learning and retrieval. We used a continuous recognition paradigm to investigate hippocampal and mPFC activation to first and second presentations of scene-object pairs as a function of semantic congruence between the elements (e.g., beach-seashell versus schoolyard-lamp). All items were identical at first and second presentation, and the context scene, which was presented 500 ms prior to the appearance of the target object, was incidental to the task, which required a recognition response to the central target only. Very short lags (2-8 intervening stimuli) occurred between presentations. Encoding the targets with congruent contexts was associated with increased activation in visual cortical regions at initial presentation and faster response time at repetition, but we did not find enhanced activation in mPFC relative to incongruent stimuli at either presentation. We did observe enhanced activation in the right anterior hippocampus, as well as regions in visual and lateral temporal and frontal cortical regions, for the repetition of incongruent scene-object pairs. This pattern demonstrates rapid and incidental effects of schema processing in hippocampal, but not mPFC, engagement during continuous recognition. PMID:27449709

  10. The role of semantic self-perceptions in temporal distance perceptions toward autobiographical events: the semantic congruence model.

    PubMed

    Gebauer, Jochen E; Haddock, Geoffrey; Broemer, Philip; von Hecker, Ulrich

    2013-11-01

    Why do some autobiographical events feel as if they happened yesterday, whereas others feel like ancient history? Such temporal distance perceptions have surprisingly little to do with actual calendar time distance. Instead, psychologists have found that people typically perceive positive autobiographical events as overly recent, while perceiving negative events as overly distant. The origins of this temporal distance bias have been sought in self-enhancement strivings and mood congruence between autobiographical events and chronic mood. As such, past research exclusively focused on the evaluative features of autobiographical events, while neglecting semantic features. To close this gap, we introduce a semantic congruence model. Capitalizing on the Big Two self-perception dimensions, Study 1 showed that high semantic congruence between recalled autobiographical events and trait self-perceptions renders the recalled events subjectively recent. Specifically, interpersonally warm (competent) individuals perceived autobiographical events reflecting warmth (competence) as relatively recent, but warm (competent) individuals did not perceive events reflecting competence (warmth) as relatively recent. Study 2 found that conscious perceptions of congruence mediate these effects. Studies 3 and 4 showed that neither mood congruence nor self-enhancement account for these results. Study 5 extended the results from the Big Two to the Big Five self-perception dimensions, while affirming the independence of the semantic congruence model from evaluative influences.

  11. Content congruency and its interplay with temporal synchrony modulate integration between rhythmic audiovisual streams

    PubMed Central

    Su, Yi-Huang

    2014-01-01

    Both lower-level stimulus factors (e.g., temporal proximity) and higher-level cognitive factors (e.g., content congruency) are known to influence multisensory integration. The former can direct attention in a converging manner, and the latter can indicate whether information from the two modalities belongs together. The present research investigated whether and how these two factors interacted in the perception of rhythmic, audiovisual (AV) streams derived from a human movement scenario. Congruency here was based on sensorimotor correspondence pertaining to rhythm perception. Participants attended to bimodal stimuli consisting of a humanlike figure moving regularly to a sequence of auditory beats, and detected a possible auditory temporal deviant. The figure moved either downwards (congruently) or upwards (incongruently) to the downbeat, while in both situations the movement was either synchronous with the beat, or lagging behind it. Greater cross-modal binding was expected to hinder deviant detection. Results revealed poorer detection for congruent than for incongruent streams, suggesting stronger integration in the former. False alarms increased in asynchronous stimuli only for congruent streams, indicating a greater tendency for deviant report due to visual capture of asynchronous auditory events. In addition, a greater increase in perceived synchrony was associated with a greater reduction in false alarms for congruent streams, while the pattern was reversed for incongruent ones. These results demonstrate that content congruency as a top-down factor not only promotes integration, but also modulates bottom-up effects of synchrony. Results are also discussed regarding how theories of integration and attentional entrainment may be combined in the context of rhythmic multisensory stimuli. PMID:25538576

  12. Congruency effects in the letter search task: semantic activation in the absence of priming.

    PubMed

    Hutchison, Keith A; Bosco, Frank A

    2007-04-01

    Semantic priming is typically eliminated when participants perform a letter search on the prime, suggesting that semantic activation is conditional upon one's attentional goals. However, in such studies, semantic activation (or the lack thereof) is not measured during the letter search task itself but, instead, is inferred on the basis of the responses given to a later target. In the present study, direct online evidence for semantic activation was tested using words whose meaning should bias either a positive or a negative response (e.g., present vs. absent). In Experiment 1, a semantic congruency effect was obtained, with faster responses when the word meaning matched the required response. Experiment 2 replicated the congruency effect while, simultaneously, showing the elimination of semantic priming. It is concluded that letter search does not affect the initiation of semantic activation. Possible accounts for the elimination of priming following letter search include activation-based suppression and transfer-inappropriate processing.

  13. Reproducibility and discriminability of brain patterns of semantic categories enhanced by congruent audiovisual stimuli.

    PubMed

    Li, Yuanqing; Wang, Guangyi; Long, Jinyi; Yu, Zhuliang; Huang, Biao; Li, Xiaojian; Yu, Tianyou; Liang, Changhong; Li, Zheng; Sun, Pei

    2011-01-01

    One of the central questions in cognitive neuroscience is the precise neural representation, or brain pattern, associated with a semantic category. In this study, we explored the influence of audiovisual stimuli on the brain patterns of concepts or semantic categories through a functional magnetic resonance imaging (fMRI) experiment. We used a pattern search method to extract brain patterns corresponding to two semantic categories: "old people" and "young people." These brain patterns were elicited by semantically congruent audiovisual, semantically incongruent audiovisual, unimodal visual, and unimodal auditory stimuli belonging to the two semantic categories. We calculated the reproducibility index, which measures the similarity of the patterns within the same category. We also decoded the semantic categories from these brain patterns. The decoding accuracy reflects the discriminability of the brain patterns between two categories. The results showed that both the reproducibility index of brain patterns and the decoding accuracy were significantly higher for semantically congruent audiovisual stimuli than for unimodal visual and unimodal auditory stimuli, while the semantically incongruent stimuli did not elicit brain patterns with a significantly higher reproducibility index or decoding accuracy. Thus, the semantically congruent audiovisual stimuli enhanced the within-class reproducibility and the between-class discriminability of brain patterns, facilitating neural representations of semantic categories or concepts. Furthermore, we analyzed the brain activity in superior temporal sulcus and middle temporal gyrus (STS/MTG). The strength of the fMRI signal and the reproducibility index were enhanced by the semantically congruent audiovisual stimuli. Our results support the use of the reproducibility index as a potential tool to supplement the fMRI signal amplitude for evaluating multimodal integration.

  14. Delineating the Effect of Semantic Congruency on Episodic Memory: The Role of Integration and Relatedness

    PubMed Central

    Bein, Oded; Livneh, Neta; Reggev, Niv; Gilead, Michael; Goshen-Gottstein, Yonatan; Maril, Anat

    2015-01-01

    A fundamental challenge in the study of learning and memory is to understand the role of existing knowledge in the encoding and retrieval of new episodic information. The importance of prior knowledge in memory is demonstrated in the congruency effect—the robust finding wherein participants display better memory for items that are compatible, rather than incompatible, with their pre-existing semantic knowledge. Despite its robustness, the mechanism underlying this effect is not well understood. In four studies, we provide evidence that demonstrates the privileged explanatory power of the elaboration-integration account over alternative hypotheses. Furthermore, we question the implicit assumption that the congruency effect pertains to the truthfulness/sensibility of a subject-predicate proposition, and show that congruency is a function of semantic relatedness between item and context words. PMID:25695759

  15. Congruence Effect in Semantic Categorization with Masked Primes with Narrow and Broad Categories

    ERIC Educational Resources Information Center

    Quinn, Wendy Maree; Kinoshita, Sachiko

    2008-01-01

    In semantic categorization, masked primes that are category-congruent with the target (e.g., "Planets: mars-VENUS") facilitate responses relative to category-incongruent primes (e.g., "tree-VENUS"). The present study investigated why this category congruence effect is more consistently found with narrow categories (e.g., "Numbers larger/smaller…

  16. Audio-Visual and Meaningful Semantic Context Enhancements in Older and Younger Adults.

    PubMed

    Smayda, Kirsten E; Van Engen, Kristin J; Maddox, W Todd; Chandrasekaran, Bharath

    2016-01-01

    Speech perception is critical to everyday life. Oftentimes noise can degrade a speech signal; however, because of the cues available to the listener, such as visual and semantic cues, noise rarely prevents conversations from continuing. The interaction of visual and semantic cues in aiding speech perception has been studied in young adults, but the extent to which these two cues interact for older adults has not been studied. To investigate the effect of visual and semantic cues on speech perception in older and younger adults, we recruited forty-five young adults (ages 18-35) and thirty-three older adults (ages 60-90) to participate in a speech perception task. Participants were presented with semantically meaningful and anomalous sentences in audio-only and audio-visual conditions. We hypothesized that young adults would outperform older adults across SNRs, modalities, and semantic contexts. In addition, we hypothesized that both young and older adults would receive a greater benefit from a semantically meaningful context in the audio-visual relative to audio-only modality. We predicted that young adults would receive greater visual benefit in semantically meaningful contexts relative to anomalous contexts. However, we predicted that older adults could receive a greater visual benefit in either semantically meaningful or anomalous contexts. Results suggested that in the most supportive context, that is, semantically meaningful sentences presented in the audiovisual modality, older adults performed similarly to young adults. In addition, both groups received the same amount of visual and meaningful benefit. Lastly, across groups, a semantically meaningful context provided more benefit in the audio-visual modality relative to the audio-only modality, and the presence of visual cues provided more benefit in semantically meaningful contexts relative to anomalous contexts. These results suggest that older adults can perceive speech as well as younger adults when both visual cues and a supportive semantic context are available.

  19. Neural Correlates of Audiovisual Integration of Semantic Category Information

    ERIC Educational Resources Information Center

    Hu, Zhonghua; Zhang, Ruiling; Zhang, Qinglin; Liu, Qiang; Li, Hong

    2012-01-01

    Previous studies have found a late frontal-central audiovisual interaction during the time period of about 150-220 ms post-stimulus. However, it is unclear which process this audiovisual interaction is related to: the processing of acoustical features, or the classification of stimuli? To investigate this question, event-related potentials were recorded…

  20. Audio-visual congruency alters power and coherence of oscillatory activity within and between cortical areas.

    PubMed

    Lange, Joachim; Christian, Nadine; Schnitzler, Alfons

    2013-10-01

    Dynamic communication between functionally specialized, but spatially distributed areas of the brain is essential for effective brain functioning. A candidate mechanism for effective neuronal communication is oscillatory neuronal synchronization. Here, we used magnetoencephalography (MEG) to study the role of oscillatory neuronal synchronization in audio-visual speech perception. Subjects viewed congruent audio-visual stimuli of a speaker articulating the vowels /a/ or /o/. In addition, we presented modified, incongruent versions in which visual and auditory signals mismatched. We identified a left hemispheric network for processing congruent audio-visual speech as well as network interaction between areas: low-frequency (4-12 Hz) power was suppressed for congruent stimuli at auditory onset around auditory cortex, while power in the high-gamma band (120-140 Hz) was enhanced in Broca's area around auditory offset. In addition, beta-power (20-30 Hz) was suppressed in supramarginal gyrus for incongruent stimuli. Interestingly, coherence analysis revealed a functional coupling between auditory cortex and Broca's area for congruent stimuli, demonstrated by an increase of coherence. In contrast, coherence decreased for incongruent stimuli, suggesting a decoupling of auditory cortex and Broca's area. In addition, the increase of coherence was positively correlated with the increase of high-gamma power. The results demonstrate that oscillatory power in several frequency bands correlates with the processing of matching audio-visual speech on a large spatio-temporal scale. The findings provide evidence that coupling of neuronal groups can be mediated by coherence in the theta/alpha band and that low-frequency coherence and high-frequency power modulations are correlated in audio-visual speech perception.
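    The coherence measure this record relies on can be sketched with synthetic signals. This is an illustrative toy, not the authors' MEG pipeline; the sampling rate, band limits, and signal model are assumptions:

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(1)
fs = 256.0                       # sampling rate in Hz (assumed)
t = np.arange(0, 20, 1 / fs)     # 20 s of simulated signal

# Two "areas" sharing an 8 Hz (theta/alpha) oscillation plus independent
# noise, and a third, uncoupled control signal.
shared = np.sin(2 * np.pi * 8 * t)
area_a = shared + 0.5 * rng.normal(size=t.size)
area_b = shared + 0.5 * rng.normal(size=t.size)
control = rng.normal(size=t.size)

f, coh_ab = coherence(area_a, area_b, fs=fs, nperseg=512)
_, coh_ac = coherence(area_a, control, fs=fs, nperseg=512)

band = (f >= 4) & (f <= 12)      # the study's low-frequency band
print(coh_ab[band].max())        # high: shared oscillation couples the areas
print(coh_ac[band].max())        # low: no shared component
```

    Coherence is bounded between 0 and 1 per frequency, which is what makes it a convenient index of band-limited coupling between two areas, as opposed to raw cross-spectral power.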

  1. Between- and within-Ear Congruency and Laterality Effects in an Auditory Semantic/Emotional Prosody Conflict Task

    ERIC Educational Resources Information Center

    Techentin, Cheryl; Voyer, Daniel; Klein, Raymond M.

    2009-01-01

    The present study investigated the influence of within- and between-ear congruency on interference and laterality effects in an auditory semantic/prosodic conflict task. Participants were presented dichotically with words (e.g., mad, sad, glad) pronounced in either congruent or incongruent emotional tones (e.g., angry, happy, or sad) and…

  2. Audiovisuals.

    ERIC Educational Resources Information Center

    Aviation/Space, 1980

    1980-01-01

    Presents information on a variety of audiovisual materials from government and nongovernment sources. Topics include aerodynamics and conditions of flight, airports, navigation, careers, history, medical factors, weather, films for classroom use, and others. (Author/SA)

  3. Prosodic expectations in silent reading: ERP evidence from rhyme scheme and semantic congruence in classic Chinese poems.

    PubMed

    Chen, Qingrong; Zhang, Jingjing; Xu, Xiaodong; Scheepers, Christoph; Yang, Yiming; Tanenhaus, Michael K

    2016-09-01

    In an ERP study, classic Chinese poems with a well-known rhyme scheme were used to generate an expectation of a rhyme in the absence of an expectation for a specific character. Critical characters were either consistent or inconsistent with the expected rhyme scheme and semantically congruent or incongruent with the content of the poem. These stimuli allowed us to examine whether a top-down rhyme scheme expectation would affect relatively early components of the ERP associated with character-to-sound mapping (P200) and lexically-mediated semantic processing (N400). The ERP data revealed that rhyme scheme congruence, but not semantic congruence, modulated the P200: rhyme-incongruent characters elicited a P200 effect across the head, demonstrating that top-down expectations influence early phonological coding of the character before lexical-semantic processing. Rhyme scheme incongruence also produced a right-lateralized N400-like effect. Moreover, compared to semantically congruous poems, semantically incongruous poems produced a larger N400 response only when the character was consistent with the expected rhyme scheme. The results suggest that top-down prosodic expectations can modulate early phonological processing in visual word recognition, indicating that prosodic expectations might play an important role in silent reading. They also suggest that semantic processing is influenced by general knowledge of text genre. PMID:27228392

  4. Effects of congruency on localization of audiovisual three-dimensional speech sounds Part IIa

    NASA Astrophysics Data System (ADS)

    Riederer, Klaus A. J.

    2003-10-01

    Part two of the current study [J. Acoust. Soc. Am., this issue] investigated localization of virtual audiovisual speech under exactly the same conditions. Perceived directions were indicated by pressing keypad buttons. Inside-the-head localization occurred almost exclusively for the median-plane stimuli, was independent of stimulus type (7.62% congruent, 9.38% incongruent, and 6.54% auditory-only), and was excluded from further analyses. The mean rate of correct answers was 46.81%. Factorial within-subjects ANOVA showed no significant effect of acoustic stimulus (/ipi/, /iti/) or stimulus type, but a strong dependence on direction (p=0.000015) and its interactions with acoustic stimulus (p=0.015374) and stimulus type (p=0.00812). Reaction times were highly dependent on direction (p=0.000002). Of the 384 frontal location answers (azimuths 0°, +/-40°), 25.52% congruent, 28.39% incongruent, and 28.65% auditory-only were perceived as backward confused; for 0° azimuth only, the corresponding values were 28.13%, 28.13%, and 35.94%. Back-front confusions were 13.80%, 9.64%, and 8.85% (azimuths 180°, +/-130°), and 18.75%, 14.06%, and 14.06% (azimuth 180°). Seeing the (congruently) talking face biased localization more to the front, especially for the median-backward sounds. Evidently, vision overcomes weaker monaural localization cues, as in the ventriloquism effect [Driver, Nature (London) 381, 66-68 (1996)]. [Work supported by Graduate School of Electronics, Telecommunication and Automation.]

  5. Semantic Facilitation in Category and Action Naming: Testing the Message-Congruency Account

    ERIC Educational Resources Information Center

    Kuipers, Jan-Rouke; La Heij, Wido

    2008-01-01

    Basic-level picture naming is hampered by the presence of a semantically related context word (compared to an unrelated word), whereas picture categorization is facilitated by a semantically related context word. This reversal of the semantic context effect has been explained by assuming that in categorization tasks, basic-level distractor words…

  6. Audiovisual integration facilitates unconscious visual scene processing.

    PubMed

    Tan, Jye-Sheng; Yeh, Su-Ling

    2015-10-01

    Meanings of masked complex scenes can be extracted without awareness; however, it remains unknown whether audiovisual integration occurs with an invisible complex visual scene. The authors examine whether a scenery soundtrack can facilitate unconscious processing of a subliminal visual scene. The continuous flash suppression paradigm was used to render a complex scene picture invisible, and the picture was paired with a semantically congruent or incongruent scenery soundtrack. Participants were asked to respond as quickly as possible if they detected any part of the scene. Release-from-suppression time was used as an index of unconscious processing of the complex scene, which was shorter in the audiovisual congruent condition than in the incongruent condition (Experiment 1). The possibility that participants adopted different detection criteria for the 2 conditions was excluded (Experiment 2). The audiovisual congruency effect did not occur for objects-only (Experiment 3) and background-only (Experiment 4) pictures, and it did not result from consciously mediated conceptual priming (Experiment 5). The congruency effect was replicated when catch trials without scene pictures were added to exclude participants with high false-alarm rates (Experiment 6). This is the first study demonstrating unconscious audiovisual integration with subliminal scene pictures, and it suggests expansions of scene-perception theories to include unconscious audiovisual integration.

  7. A Further Look at Semantic Context Effects in Language Production: The Role of Response Congruency

    ERIC Educational Resources Information Center

    Kuipers, Jan-Rouke; La Heij, Wido; Costa, Albert

    2006-01-01

    Most current models of speech production predict interference from related context words in picture-naming tasks. However, Glaser and Dungelhoff (1984) reported semantic facilitation when the task was changed from basic-level naming to category-level naming. The authors explore two proposals to account for this change in polarity of the semantic…

  8. Semantic Size and Contextual Congruency Effects during Reading: Evidence from Eye Movements

    ERIC Educational Resources Information Center

    Wei, Wei; Cook, Anne E.

    2016-01-01

    Recent lexical decision studies have produced conflicting evidence about whether an object's semantic size influences word recognition. The present study examined this variable in online reading. Target words representing small and large objects were embedded in sentence contexts that were either neutral, congruent, or incongruent with respect to…

  9. Audiovisual speech integration in autism spectrum disorder: ERP evidence for atypicalities in lexical-semantic processing

    PubMed Central

    Megnin, Odette; Flitton, Atlanta; Jones, Catherine; de Haan, Michelle; Baldeweg, Torsten; Charman, Tony

    2013-01-01

    Lay Abstract: Language and communicative impairments are among the primary characteristics of autism spectrum disorders (ASD). Previous studies have examined auditory language processing in ASD. However, during face-to-face conversation, auditory and visual speech inputs provide complementary information, and little is known about audiovisual (AV) speech processing in ASD. It is possible to elucidate the neural correlates of AV integration by examining the effects of seeing the lip movements accompanying the speech (visual speech) on electrophysiological event-related potentials (ERP) to spoken words. Moreover, electrophysiological techniques have a high temporal resolution and thus enable us to track the time-course of spoken word processing in ASD and typical development (TD). The present study examined the ERP correlates of AV effects in three time windows that are indicative of hierarchical stages of word processing. We studied a group of TD adolescent boys (n=14) and a group of high-functioning boys with ASD (n=14). Significant group differences were found in AV integration of spoken words in the 200–300ms time window when spoken words start to be processed for meaning. These results suggest that the neural facilitation by visual speech of spoken word processing is reduced in individuals with ASD.

    Scientific Abstract: In typically developing (TD) individuals, behavioural and event-related potential (ERP) studies suggest that audiovisual (AV) integration enables faster and more efficient processing of speech. However, little is known about AV speech processing in individuals with autism spectrum disorder (ASD). The present study examined ERP responses to spoken words to elucidate the effects of visual speech (the lip movements accompanying a spoken word) on the range of auditory speech processing stages from sound onset detection to semantic integration. The study also included an AV condition which paired spoken words with a dynamic scrambled face in order to

  10. Indexing method of digital audiovisual medical resources with semantic Web integration.

    PubMed

    Cuggia, Marc; Mougin, Fleur; Le Beux, Pierre

    2005-03-01

    Digitalization of audiovisual resources and network capability offer many possibilities which are the subject of intensive work in scientific and industrial sectors. Indexing such resources is a major challenge. Recently, the Motion Pictures Expert Group (MPEG) has developed MPEG-7, a standard for describing multimedia content. The goal of this standard is to develop a rich set of standardized tools to enable efficient retrieval from digital archives or the filtering of audiovisual broadcasts on the Internet. How could this kind of technology be used in the medical context? In this paper, we propose a simpler indexing system, based on the Dublin Core standard and compliant to MPEG-7. We use MeSH and the UMLS to introduce conceptual navigation. We also present a video-platform which enables encoding and gives access to audiovisual resources in streaming mode. PMID:15694622
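
    The indexing idea in the two records above can be made concrete with a small sketch (hypothetical record and field values, not the authors' actual platform): a Dublin Core-style description of an audiovisual medical resource, with MeSH headings stored under dc:subject to support conceptual navigation.

```python
# Minimal sketch of Dublin Core-style indexing with MeSH headings.
# The record content below is invented for illustration.

records = [
    {
        "dc:title": "Laparoscopic cholecystectomy, step by step",
        "dc:type": "MovingImage",
        "dc:format": "video/mpeg",
        "dc:date": "2004-11-02",
        "dc:subject": ["Cholecystectomy, Laparoscopic", "Video Recording"],  # MeSH headings
    },
]

def find_by_mesh(recs, heading):
    """Return titles of resources indexed under a given MeSH heading."""
    return [r["dc:title"] for r in recs if heading in r["dc:subject"]]

print(find_by_mesh(records, "Cholecystectomy, Laparoscopic"))
# → ['Laparoscopic cholecystectomy, step by step']
```

    A real system would add UMLS concept identifiers alongside the MeSH strings so that navigation can follow semantic relations rather than exact string matches.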

  11. Indexing method of digital audiovisual medical resources with semantic Web integration.

    PubMed

    Cuggia, Marc; Mougin, Fleur; Le Beux, Pierre

    2003-01-01

    Digitalization of audio-visual resources, combined with the performance of networks, offers many possibilities which are the subject of intensive work in the scientific and industrial sectors. Indexing such resources is a major challenge. Recently, the Motion Pictures Expert Group (MPEG) has been developing MPEG-7, a standard for describing multimedia content. The goal of this standard is to develop a rich set of standardized tools to enable fast, efficient retrieval from digital archives or filtering of audiovisual broadcasts on the Internet. How could this kind of technology be used in the medical context? In this paper, we propose a simpler indexing system, based on the Dublin Core standard and compliant with MPEG-7. We use MeSH and the UMLS to introduce conceptual navigation. We also present a video platform which enables encoding of, and access to, audio-visual resources in streaming mode. PMID:14664072

  12. When Hearing the Bark Helps to Identify the Dog: Semantically-Congruent Sounds Modulate the Identification of Masked Pictures

    ERIC Educational Resources Information Center

    Chen, Yi-Chuan; Spence, Charles

    2010-01-01

    We report a series of experiments designed to assess the effect of audiovisual semantic congruency on the identification of visually-presented pictures. Participants made unspeeded identification responses concerning a series of briefly-presented, and then rapidly-masked, pictures. A naturalistic sound was sometimes presented together with the…

  13. Congruence Reconsidered.

    ERIC Educational Resources Information Center

    Tudor, Keith; Worrall, Mike

    1994-01-01

    Discusses Carl Rogers' definitions of congruence, and identifies four specific requirements for the concept and practice of therapeutic congruence. Examines the interface between congruence and the other necessary and sufficient conditions of change, drawing on examples from practice. (JPS)

  14. Extracting semantics from audio-visual content: the final frontier in multimedia retrieval.

    PubMed

    Naphade, M R; Huang, T S

    2002-01-01

    Multimedia understanding is a fast emerging interdisciplinary research area. There is tremendous potential for effective use of multimedia content through intelligent analysis. Diverse application areas are increasingly relying on multimedia understanding systems. Advances in multimedia understanding are related directly to advances in signal processing, computer vision, pattern recognition, multimedia databases, and smart sensors. We review the state-of-the-art techniques in multimedia retrieval. In particular, we discuss how multimedia retrieval can be viewed as a pattern recognition problem. We discuss how reliance on powerful pattern recognition and machine learning techniques is increasing in the field of multimedia retrieval. We review the state-of-the-art multimedia understanding systems with particular emphasis on a system for semantic video indexing centered around multijects and multinets. We discuss how semantic retrieval is centered around concepts and context and the various mechanisms for modeling concepts and context. PMID:18244476

  15. N1 enhancement in synesthesia during visual and audio-visual perception in semantic cross-modal conflict situations: an ERP study.

    PubMed

    Sinke, Christopher; Neufeld, Janina; Wiswede, Daniel; Emrich, Hinderk M; Bleich, Stefan; Münte, Thomas F; Szycik, Gregor R

    2014-01-01

    Synesthesia entails a special kind of sensory perception, where stimulation in one sensory modality leads to an internally generated perceptual experience of another, not stimulated sensory modality. This phenomenon can be viewed as an abnormal multisensory integration process as here the synesthetic percept is aberrantly fused with the stimulated modality. Indeed, recent synesthesia research has focused on multimodal processing even outside of the specific synesthesia-inducing context and has revealed changed multimodal integration, thus suggesting perceptual alterations at a global level. Here, we focused on audio-visual processing in synesthesia using a semantic classification task with visually or audio-visually presented animate and inanimate objects, combined in an audio-visually congruent or incongruent manner. Fourteen subjects with auditory-visual and/or grapheme-color synesthesia and 14 control subjects participated in the experiment. During presentation of the stimuli, event-related potentials were recorded from 32 electrodes. The analysis of reaction times and error rates revealed no group differences, with best performance for audio-visually congruent stimulation, indicating the well-known multimodal facilitation effect. We found enhanced amplitude of the N1 component over occipital electrode sites for synesthetes compared to controls. The differences occurred irrespective of the experimental condition and therefore suggest a global influence on early sensory processing in synesthetes.

  16. Congruence of Meaning.

    ERIC Educational Resources Information Center

    Suppes, Patrick

    By looking at the history of geometry and the concept of congruence in geometry we can get a new perspective on how to think about the closeness in meaning of two sentences. As in the analysis of congruence in geometry, a definite and concrete set of proposals about congruence of meaning depends essentially on the kind of theoretical framework…

  17. Perceived Odor-Taste Congruence Influences Intensity and Pleasantness Differently.

    PubMed

    Amsellem, Sherlley; Ohla, Kathrin

    2016-10-01

    The role of congruence in cross-modal interactions has received little attention. In most experiments involving cross-modal pairs, congruence is conceived of as a binary process according to which cross-modal pairs are categorized as perceptually and/or semantically matching or mismatching. The present study investigated whether odor-taste congruence can be perceived gradually and whether congruence impacts other facets of subjective experience, that is, intensity, pleasantness, and familiarity. To address these questions, we presented food odorants (chicken, orange, and 3 mixtures of the 2) and tastants (savory-salty and sour-sweet) in pairs varying in congruence. Participants were asked to report the perceived congruence of the pairs along with intensity, pleasantness, and familiarity. We found that participants could perceive distinct congruence levels, thereby favoring a multilevel account of congruence perception. In addition, familiarity and pleasantness followed the same pattern as congruence, while intensity was highest for the most congruent and the most incongruent pairs and reduced for the intermediary-congruent pairs. Principal component analysis revealed that pleasantness and familiarity form one dimension of the phenomenological experience of odor-taste pairs that is orthogonal to intensity. The results bear implications for understanding the behavioral underpinnings of the perseverance of habitual food choices. PMID:27384192

  18. AUDIOVISUAL HANDBOOK.

    ERIC Educational Resources Information Center

    JOHNSON, HARRY A.

    UNDERGRADUATE AND GRADUATE ACADEMIC OFFERINGS IN THE DEPARTMENT OF AUDIOVISUAL EDUCATION ARE LISTED, AND THE INSERVICE FACULTY TRAINING PROGRAM AND THE EXTENSION AND CONSULTANT SERVICES ARE DESCRIBED. GENERAL SERVICES OFFERED BY THE CENTER ARE A COLLEGE FILM SHOWING SERVICE, A CHILDREN'S THEATRE, A PRODUCTION WORKSHOP, AN EMBOSOGRAF PROCESS,…

  19. Audiovisual Review

    ERIC Educational Resources Information Center

    Physiology Teacher, 1976

    1976-01-01

    Lists and reviews recent audiovisual materials in areas of medical, dental, nursing and allied health, and veterinary medicine; undergraduate, and high school studies. Each is classified as to level, type of instruction, usefulness, and source of availability. Topics include respiration, renal physiology, muscle mechanics, anatomy, evolution,…

  20. Electrophysiological evidence for speech-specific audiovisual integration.

    PubMed

    Baart, Martijn; Stekelenburg, Jeroen J; Vroomen, Jean

    2014-01-01

    Lip-read speech is integrated with heard speech at various neural levels. Here, we investigated the extent to which lip-read induced modulations of the auditory N1 and P2 (measured with EEG) are indicative of speech-specific audiovisual integration, and we explored to what extent the ERPs were modulated by phonetic audiovisual congruency. In order to disentangle speech-specific (phonetic) integration from non-speech integration, we used Sine-Wave Speech (SWS) that was perceived as speech by half of the participants (they were in speech-mode), while the other half was in non-speech mode. Results showed that the N1 obtained with audiovisual stimuli peaked earlier than the N1 evoked by auditory-only stimuli. This lip-read induced speeding up of the N1 occurred for listeners in speech and non-speech mode. In contrast, if listeners were in speech-mode, lip-read speech also modulated the auditory P2, but not if listeners were in non-speech mode, thus revealing speech-specific audiovisual binding. Comparing ERPs for phonetically congruent audiovisual stimuli with ERPs for incongruent stimuli revealed an effect of phonetic stimulus congruency that started at ~200 ms after (in)congruence became apparent. Critically, akin to the P2 suppression, congruency effects were only observed if listeners were in speech mode, and not if they were in non-speech mode. Using identical stimuli, we thus confirm that audiovisual binding involves (partially) different neural mechanisms for sound processing in speech and non-speech mode.

  1. Dissociating Verbal and Nonverbal Audiovisual Object Processing

    ERIC Educational Resources Information Center

    Hocking, Julia; Price, Cathy J.

    2009-01-01

    This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same…

  2. Audiovisual Interaction

    NASA Astrophysics Data System (ADS)

    Möttönen, Riikka; Sams, Mikko

    Information about the objects and events in the external world is received via multiple sense organs, especially via eyes and ears. For example, a singing bird can be heard and seen. Typically, audiovisual objects are detected, localized and identified more rapidly and accurately than objects which are perceived via only one sensory system (see, e.g. Welch and Warren, 1986; Stein and Meredith, 1993; de Gelder and Bertelson, 2003; Calvert et al., 2004). The ability of the central nervous system to utilize sensory inputs mediated by different sense organs is called multisensory processing.

  3. The role of emotion in dynamic audiovisual integration of faces and voices.

    PubMed

    Kokinous, Jenny; Kotz, Sonja A; Tavano, Alessandro; Schröger, Erich

    2015-05-01

    We used human electroencephalogram to study early audiovisual integration of dynamic angry and neutral expressions. An auditory-only condition served as a baseline for the interpretation of integration effects. In the audiovisual conditions, the validity of visual information was manipulated using facial expressions that were either emotionally congruent or incongruent with the vocal expressions. First, we report an N1 suppression effect for angry compared with neutral vocalizations in the auditory-only condition. Second, we confirm early integration of congruent visual and auditory information as indexed by a suppression of the auditory N1 and P2 components in the audiovisual compared with the auditory-only condition. Third, audiovisual N1 suppression was modulated by audiovisual congruency in interaction with emotion: for neutral vocalizations, there was N1 suppression in both the congruent and the incongruent audiovisual conditions. For angry vocalizations, there was N1 suppression only in the congruent but not in the incongruent condition. Extending previous findings of dynamic audiovisual integration, the current results suggest that audiovisual N1 suppression is congruency- and emotion-specific and indicate that dynamic emotional expressions compared with non-emotional expressions are preferentially processed in early audiovisual integration.

  4. The level of audiovisual print-speech integration deficits in dyslexia.

    PubMed

    Kronschnabel, Jens; Brem, Silvia; Maurer, Urs; Brandeis, Daniel

    2014-09-01

    The classical phonological deficit account of dyslexia is increasingly linked to impairments in grapho-phonological conversion, and to dysfunctions in superior temporal regions associated with audiovisual integration. The present study investigates mechanisms of audiovisual integration in typical and impaired readers at the critical developmental stage of adolescence. Congruent and incongruent audiovisual as well as unimodal (visual only and auditory only) material was presented. Audiovisual presentations were single letters and three-letter (consonant-vowel-consonant) stimuli accompanied by matching or mismatching speech sounds. Three-letter stimuli exhibited fast phonetic transitions as in real-life language processing and reading. Congruency effects, i.e. different brain responses to congruent and incongruent stimuli were taken as an indicator of audiovisual integration at a phonetic level (grapho-phonological conversion). Comparisons of unimodal and audiovisual stimuli revealed basic, more sensory aspects of audiovisual integration. By means of these two criteria of audiovisual integration, the generalizability of audiovisual deficits in dyslexia was tested. Moreover, it was expected that the more naturalistic three-letter stimuli are superior to single letters in revealing group differences. Electrophysiological and hemodynamic (EEG and fMRI) data were acquired simultaneously in a simple target detection task. Applying the same statistical models to event-related EEG potentials and fMRI responses allowed comparing the effects detected by the two techniques at a descriptive level. Group differences in congruency effects (congruent against incongruent) were observed in regions involved in grapho-phonological processing, including the left inferior frontal and angular gyri and the inferotemporal cortex. Importantly, such differences also emerged in superior temporal key regions. Three-letter stimuli revealed stronger group differences than single letters. No

  5. Neural correlates of audiovisual speech processing in a second language.

    PubMed

    Barrós-Loscertales, Alfonso; Ventura-Campos, Noelia; Visser, Maya; Alsius, Agnès; Pallier, Christophe; Avila Rivera, César; Soto-Faraco, Salvador

    2013-09-01

    Neuroimaging studies of audiovisual speech processing have exclusively addressed listeners' native language (L1). Yet, several behavioural studies now show that AV processing plays an important role in non-native (L2) speech perception. The current fMRI study measured brain activity during auditory, visual, audiovisual congruent and audiovisual incongruent utterances in L1 and L2. BOLD responses to congruent AV speech in the pSTS were stronger than in either unimodal condition in both L1 and L2. Yet no differences in AV processing were expressed according to the language background in this area. Instead, the regions in the bilateral occipital lobe had a stronger congruency effect on the BOLD response (congruent higher than incongruent) in L2 as compared to L1. According to these results, language background differences are predominantly expressed in these unimodal regions, whereas the pSTS is similarly involved in AV integration regardless of language dominance.

  6. Soft Congruence Relations over Rings

    PubMed Central

    Xin, Xiaolong; Li, Wenting

    2014-01-01

    Molodtsov introduced the concept of soft sets, which can be seen as a new mathematical tool for dealing with uncertainty. In this paper, we initiate the study of soft congruence relations by using soft set theory. The notions of soft quotient rings, generalized soft ideals, and generalized soft quotient rings are introduced, and several related properties are investigated. Also, we obtain a one-to-one correspondence between soft congruence relations and idealistic soft rings and a one-to-one correspondence between soft congruence relations and soft ideals. In particular, the first, second, and third soft isomorphism theorems are established, respectively. PMID:24949493
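
    The classical construction that this soft-set framework generalizes can be illustrated briefly (a sketch of textbook ring theory, not the paper's soft congruences): a congruence on the ring of integers is determined by an ideal nZ, and the ring operations are well defined on congruence classes.

```python
# Classical ring congruence: a ~ b iff a - b lies in the ideal nZ.
# Addition and multiplication of representatives respect the congruence,
# which is what makes the quotient ring Z/nZ well defined.

def congruent(a, b, n):
    """True iff a - b is in the ideal nZ."""
    return (a - b) % n == 0

n = 6
a1, a2 = 7, 13    # congruent mod 6
b1, b2 = 5, 11    # congruent mod 6
assert congruent(a1, a2, n) and congruent(b1, b2, n)
assert congruent(a1 + b1, a2 + b2, n)   # addition respects ~
assert congruent(a1 * b1, a2 * b2, n)   # multiplication respects ~
print("congruence respected by + and *")
```

    The paper's soft isomorphism theorems play the role that the classical isomorphism theorems play for congruences like this one.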

  7. Vague Congruences and Quotient Lattice Implication Algebras

    PubMed Central

    Qin, Xiaoyan; Xu, Yang

    2014-01-01

    The aim of this paper is to further develop the congruence theory on lattice implication algebras. Firstly, we introduce the notions of vague similarity relations based on vague relations and vague congruence relations. Secondly, the equivalent characterizations of vague congruence relations are investigated. Thirdly, the relation between the set of vague filters and the set of vague congruences is studied. Finally, we construct a new lattice implication algebra induced by a vague congruence, and the homomorphism theorem is given. PMID:25133207

  8. Interpersonal Congruency, Attitude Similarity, and Interpersonal Attraction

    ERIC Educational Resources Information Center

    Touhey, John C.

    1975-01-01

    As no experimental study has examined the effects of congruency on attraction, the present investigation orthogonally varied attitude similarity and interpersonal congruency in order to compare the two independent variables as determinants of interpersonal attraction. (Author/RK)

  9. Congruence Couple Therapy for Pathological Gambling

    ERIC Educational Resources Information Center

    Lee, Bonnie K.

    2009-01-01

    Couple therapy models for pathological gambling are limited. Congruence Couple Therapy is an integrative, humanistic, systems model that addresses intrapsychic, interpersonal, intergenerational, and universal-spiritual disconnections of pathological gamblers and their spouses to shift towards congruence. Specifically, CCT's theoretical…

  10. Papers in Semantics. Working Papers in Linguistics No. 49.

    ERIC Educational Resources Information Center

    Yoon, Jae-Hak, Ed.; Kathol, Andreas, Ed.

    1996-01-01

    Papers on semantic theory and research include: "Presupposition, Congruence, and Adverbs of Quantification" (Mike Calcagno); "A Unified Account of '(Ta)myen'-Conditionals in Korean" (Chan Chung); "Spanish 'imperfecto' and 'preterito': Truth Conditions and Aktionsart Effects in a Situation Semantics" (Alicia Cipria, Craige Roberts); "Remarks on…

  11. Computationally Efficient Clustering of Audio-Visual Meeting Data

    NASA Astrophysics Data System (ADS)

    Hung, Hayley; Friedland, Gerald; Yeo, Chuohao

    This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors, comprising a limited number of cameras and microphones. We first demonstrate computationally efficient algorithms that can identify who spoke and when, a problem in speech processing known as speaker diarization. We also extract visual activity features efficiently from MPEG4 video by taking advantage of the processing that was already done for video compression. Then, we present a method of associating the audio and visual data so that the content of each participant can be managed individually. The methods presented in this chapter can be used as a principal component that enables many higher-level semantic analysis tasks needed in search, retrieval, and navigation.
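
    The association step described above can be sketched in a few lines (toy data and a plain correlation heuristic, not the chapter's actual algorithm): pair each diarized speaker with the video stream whose per-second activity best correlates with that speaker's speaking turns.

```python
# Toy audio-visual association: match speakers to cameras by correlating
# binary speech activity with visual motion energy. All data are invented.

def corr(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def associate(speaking, motion):
    """speaking: speaker -> binary speech-activity vector (1 = speaking);
       motion:   camera  -> visual motion-energy vector (same length).
       Returns speaker -> best-matching camera."""
    return {spk: max(motion, key=lambda cam: corr(s, motion[cam]))
            for spk, s in speaking.items()}

speaking = {"A": [1, 1, 0, 0, 1, 0], "B": [0, 0, 1, 1, 0, 1]}
motion = {"cam1": [0.9, 0.8, 0.1, 0.2, 0.7, 0.1],
          "cam2": [0.1, 0.2, 0.8, 0.9, 0.2, 0.9]}
print(associate(speaking, motion))  # → {'A': 'cam1', 'B': 'cam2'}
```

    In the chapter's setting, the speech-activity vectors would come from speaker diarization and the motion vectors from compressed-domain MPEG4 features; the correlation here only illustrates the matching principle.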

  12. Attentional Factors in Conceptual Congruency

    ERIC Educational Resources Information Center

    Santiago, Julio; Ouellet, Marc; Roman, Antonio; Valenzuela, Javier

    2012-01-01

    Conceptual congruency effects are biases induced by an irrelevant conceptual dimension of a task (e.g., location in vertical space) on the processing of another, relevant dimension (e.g., judging words' emotional evaluation). Such effects are a central empirical pillar for recent views about how the mind/brain represents concepts. In the present…

  13. AUDIOVISUAL EQUIPMENT STANDARDS.

    ERIC Educational Resources Information Center

    PATTERSON, PIERCE E.; AND OTHERS

    RECOMMENDED STANDARDS FOR AUDIOVISUAL EQUIPMENT WERE PRESENTED SEPARATELY FOR GRADES KINDERGARTEN THROUGH SIX, AND FOR JUNIOR AND SENIOR HIGH SCHOOLS. THE ELEMENTARY SCHOOL EQUIPMENT CONSIDERED WAS THE FOLLOWING--CLASSROOM LIGHT CONTROL, MOTION PICTURE PROJECTOR WITH MOBILE STAND AND SPARE REELS, COMBINATION 2 INCH X 2 INCH SLIDE AND FILMSTRIP…

  14. AUDIOVISUAL SERVICES CATALOG.

    ERIC Educational Resources Information Center

    Stockton Unified School District, CA.

    A CATALOG HAS BEEN PREPARED TO HELP TEACHERS SELECT AUDIOVISUAL MATERIALS WHICH MIGHT BE HELPFUL IN ELEMENTARY CLASSROOMS. INCLUDED ARE FILMSTRIPS, SLIDES, RECORDS, STUDY PRINTS, FILMS, TAPE RECORDINGS, AND SCIENCE EQUIPMENT. TEACHERS ARE REMINDED THAT THEY ARE NOT LIMITED TO USE OF THE SUGGESTED MATERIALS. APPROPRIATE GRADE LEVELS HAVE BEEN…

  15. Audiovisual Techniques Handbook.

    ERIC Educational Resources Information Center

    Hess, Darrel

    This handbook focuses on the use of 35mm slides for audiovisual presentations, particularly as an alternative to the more expensive and harder to produce medium of video. Its point of reference is creating slide shows about experiences in the Peace Corps; however, recommendations offered about both basic production procedures and enhancements are…

  16. Audiovisual Materials in Mathematics.

    ERIC Educational Resources Information Center

    Raab, Joseph A.

    This pamphlet lists five thousand current, readily available audiovisual materials in mathematics. These are grouped under eighteen subject areas: Advanced Calculus, Algebra, Arithmetic, Business, Calculus, Charts, Computers, Geometry, Limits, Logarithms, Logic, Number Theory, Probability, Solid Geometry, Slide Rule, Statistics, Topology, and…

  17. Utilizing New Audiovisual Resources

    ERIC Educational Resources Information Center

    Miller, Glen

    1975-01-01

    The University of Arizona's Agriculture Department has found that video cassette systems and 8 mm films are excellent audiovisual aids to classroom instruction at the high school level in small gasoline engines. Each system is capable of improving the instructional process for motor skill development. (MW)

  18. Selected Mental Health Audiovisuals.

    ERIC Educational Resources Information Center

    National Inst. of Mental Health (DHEW), Rockville, MD.

    Presented are approximately 2,300 abstracts on audio-visual materials--films, filmstrips, audiotapes, and videotapes--related to mental health. Each citation includes material title; name, address, and phone number of film distributor; rental and purchase prices; technical information; and a description of the contents. Abstracts are listed in…

  19. Orthographic dependency in the neural correlates of reading: evidence from audiovisual integration in English readers.

    PubMed

    Holloway, Ian D; van Atteveldt, Nienke; Blomert, Leo; Ansari, Daniel

    2015-06-01

    Reading skills are indispensable in modern technological societies. In transparent alphabetic orthographies, such as Dutch, reading skills build on associations between letters and speech sounds (LS pairs). Previously, we showed that the superior temporal cortex (STC) of Dutch readers is sensitive to the congruency of LS pairs. Here, we used functional magnetic resonance imaging to investigate whether a similar congruency sensitivity exists in the STC of readers of the more opaque English orthography, where the relation among LS pairs is less reliable. Eighteen subjects passively perceived congruent and incongruent audiovisual pairs of different levels of transparency in English: letters and speech sounds (LS; irregular), letters and letter names (LN; fairly transparent), and numerals and number names (NN; transparent). In STC, we found congruency effects for NN and LN, but no effects in the predicted direction (congruent > incongruent) for LS pairs. These findings contrast with previous results obtained from Dutch readers. These data indicate that, through education, the STC becomes tuned to the congruency of transparent audiovisual pairs, but suggest different neural processing of irregular mappings. The orthographic dependency of LS integration underscores cross-linguistic differences in the neural basis of reading and potentially has important implications for dyslexia interventions across languages.

  20. Fly Photoreceptors Encode Phase Congruency

    PubMed Central

    Friederich, Uwe; Billings, Stephen A.; Hardie, Roger C.; Juusola, Mikko; Coca, Daniel

    2016-01-01

    More than five decades ago it was postulated that sensory neurons detect and selectively enhance behaviourally relevant features of natural signals. Although we now know that sensory neurons are tuned to efficiently encode natural stimuli, until now it was not clear what statistical features of the stimuli they encode and how. Here we reverse-engineer the neural code of Drosophila photoreceptors and show for the first time that photoreceptors exploit nonlinear dynamics to selectively enhance and encode phase-related features of temporal stimuli, such as local phase congruency, which are invariant to changes in illumination and contrast. We demonstrate that to mitigate the inherent sensitivity to noise of the local phase congruency measure, the nonlinear coding mechanisms of the fly photoreceptors are tuned to suppress random phase signals, which explains why photoreceptor responses to naturalistic stimuli are significantly different from their responses to white noise stimuli. PMID:27336733

  2. Dissociating verbal and nonverbal audiovisual object processing.

    PubMed

    Hocking, Julia; Price, Cathy J

    2009-02-01

    This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same object or not. Verbal stimuli were simultaneously presented spoken and written object names, and nonverbal stimuli were photographs of objects simultaneously presented with naturally occurring object sounds. Stimulus differences were controlled by including two further conditions that paired photographs of objects with spoken words and object sounds with written words. Verbal matching, relative to all other conditions, increased activation in a region of the left superior temporal sulcus that has previously been associated with phonological processing. Nonverbal matching, relative to all other conditions, increased activation in a right fusiform region that has previously been associated with structural and conceptual object processing. Thus, we demonstrate how brain activation for audiovisual integration depends on the verbal content of the stimuli, even when stimulus and task processing differences are controlled.

  3. Massive Gravitons on Bohmian Congruences

    NASA Astrophysics Data System (ADS)

    Fathi, Mohsen; Mohseni, Morteza

    2016-08-01

    Taking a quantum-corrected form of the Raychaudhuri equation in a geometric background described by a Lorentz-violating massive theory of gravity, we investigate a time-like congruence of massive gravitons affected by a Bohmian quantum potential. We find definite conditions under which these gravitons are confined to diverging Bohmian trajectories. The respective behaviour of those quantum potentials is also derived and discussed. Additionally, through a relativistic quantum treatment of a typical wave function, we demonstrate schematic conditions on the frequency associated with the gravitons that satisfy the necessity of divergence.

  4. Multiplex congruence network of natural numbers

    PubMed Central

    Yan, Xiao-Yong; Wang, Wen-Xu; Chen, Guan-Rong; Shi, Ding-Hua

    2016-01-01

    Congruence theory has many applications in physical, social, biological and technological systems. Congruence arithmetic has been a fundamental tool for data security and computer algebra. However, much less attention has been devoted to the topological features of congruence relations among natural numbers. Here, we explore the congruence relations in the setting of a multiplex network and unveil some unique and outstanding properties of the multiplex congruence network. Analytical results show that every layer therein is a sparse and heterogeneous subnetwork with a scale-free topology. Counterintuitively, every layer has extremely strong controllability in spite of its scale-free structure, which is usually difficult to control. Another notable feature is that the controllability is robust against targeted attacks on critical nodes but vulnerable to random failures, which also differs from ordinary scale-free networks. The multi-chain structure with a small number of chain roots arising from each layer accounts for the strong controllability and this abnormal feature. The multiplex congruence network offers a graphical solution to the simultaneous congruences problem, which may have implications for cryptography based on simultaneous congruences. Our work also gains insight into the design of networks integrating advantages of both heterogeneous and homogeneous networks without inheriting their limitations. PMID:27029650
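    The abstract does not spell out the network construction, but its "multi-chain" description is consistent with a simple reading: in the layer for modulus m, numbers that are congruent mod m form chains (n → n+m) whose roots are 1..m. A minimal Python sketch under that assumption, including the "graphical" solution of simultaneous congruences as an intersection of chains (the layer rule and function names here are illustrative, not taken from the paper):

    ```python
    # Hypothetical layer rule: in the layer for modulus m, each n links to n + m,
    # so numbers congruent mod m form chains rooted at 1..m.
    def layer_edges(m, N):
        """Edges of the modulus-m layer over the nodes 1..N."""
        return [(n, n + m) for n in range(1, N - m + 1)]

    def chain_members(m, r, N):
        """All numbers <= N lying on the residue-r chain of the modulus-m layer."""
        return {n for n in range(1, N + 1) if n % m == r % m}

    # A "graphical" solution of the simultaneous congruences
    # x ≡ 2 (mod 3) and x ≡ 3 (mod 5): intersect the two chains.
    solutions = sorted(chain_members(3, 2, 60) & chain_members(5, 3, 60))
    print(solutions)  # -> [8, 23, 38, 53], i.e. x ≡ 8 (mod 15), as CRT predicts
    ```

    The chain roots 1..m are exactly the handful of driver nodes one would expect a multi-chain layer to need for full control, which matches the controllability claim in the abstract.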

  7. Minimum Conditions for Congruence of Quadrilaterals.

    ERIC Educational Resources Information Center

    Vance, Irvin E.

    1982-01-01

    A complete characterization of minimum conditions for congruence of quadrilaterals is presented. Convex quadrilaterals are treated first, then concave quadrilaterals are considered. A study of such minimum conditions is seen to provide some interesting and important activities for students. Only background in triangle congruence is necessary. (MP)

  9. Taxonomic congruence in Eskimoid populations.

    PubMed

    Zegura, S L

    1975-09-01

    The study compares distance relationships in Eskimoid populations based on metric and attribute data with linguistic relationships based on structural and lexicostatistical data. Taxonomic congruence and the non-specificity hypothesis are investigated by matrix correlations and by a clustering procedure. The matrix correlation approaches employed are the Pearson product-moment correlation coefficient and the Spearman rank-order correlation coefficient. An unweighted pair-group clustering procedure provides a visual comparison of biological and linguistic relationships. Data consist of 74 craniometric measurements and 28 cranial observations taken on 12 Eskimoid populations. Mahalanobis' D2 and Balakrishnan and Sanghvi's B2 were used to compute the metric and attribute distances, respectively. The results indicate that a strict adherence to the non-specificity hypothesis is untenable. Also, there is better concordance between the sexes for metric distances than for attribute distances, and the metric data are more concordant with linguistic relationships than are the attribute data.
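    The matrix-correlation step described here is easy to illustrate. A small sketch (toy 4-population distance matrices, not Zegura's data) computing the Pearson product-moment correlation between the off-diagonal entries of two distance matrices; the Spearman variant would simply rank the same vectors first:

    ```python
    import math
    from itertools import combinations

    def upper(mat):
        """Flatten the strict upper triangle of a symmetric distance matrix."""
        n = len(mat)
        return [mat[i][j] for i, j in combinations(range(n), 2)]

    def pearson(x, y):
        """Pearson product-moment correlation of two equal-length vectors."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    # Hypothetical biological (metric) and linguistic distance matrices.
    metric = [[0, 1, 4, 5], [1, 0, 3, 6], [4, 3, 0, 2], [5, 6, 2, 0]]
    linguistic = [[0, 2, 5, 6], [2, 0, 4, 7], [5, 4, 0, 1], [6, 7, 1, 0]]
    print(round(pearson(upper(metric), upper(linguistic)), 3))  # -> 0.946
    ```

    A high matrix correlation like this is what the study interprets as taxonomic congruence between the biological and linguistic classifications.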

  10. Bimodal emotion congruency is critical to preverbal infants' abstract rule learning.

    PubMed

    Tsui, Angeline Sin Mei; Ma, Yuen Ki; Ho, Anna; Chow, Hiu Mei; Tseng, Chia-huei

    2016-05-01

    Extracting general rules from specific examples is important, as we must face the same challenge displayed in various formats. Previous studies have found that bimodal presentation of grammar-like rules (e.g. ABA) enhanced 5-month-olds' capacity to acquire a rule that infants failed to learn when the rule was presented with visual presentation of the shapes alone (circle-triangle-circle) or auditory presentation of the syllables (la-ba-la) alone. However, the mechanisms and constraints for this bimodal learning facilitation are still unknown. In this study, we manipulated the congruency of the audio-visual relation in bimodal stimulation to disentangle possible sources of facilitation. We exposed 8- to 10-month-old infants to an AAB sequence consisting of visual faces with affective expressions and/or auditory voices conveying emotions. Our results showed that infants were able to distinguish the learned AAB rule from other novel rules under bimodal stimulation when the affects in audio and visual stimuli were congruently paired (Experiments 1A and 2A). Infants failed to acquire the same rule when audio-visual stimuli were incongruently matched (Experiment 2B) and when only the visual (Experiment 1B) or the audio (Experiment 1C) stimuli were presented. Our results highlight that bimodal facilitation in infant rule learning is dependent not only on better statistical probability and redundant sensory information, but also on the relational congruency of audio-visual information. A video abstract of this article can be viewed at https://m.youtube.com/watch?v=KYTyjH1k9RQ. PMID:26280911

  11. Audio-visual speech perception: a developmental ERP investigation.

    PubMed

    Knowland, Victoria C P; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael S C

    2014-01-01

    Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development. PMID:24176002

  12. Congruence Approximations for Entropy Endowed Hyperbolic Systems

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Building upon the standard symmetrization theory for hyperbolic systems of conservation laws, congruence properties of the symmetrized system are explored. These congruence properties suggest variants of several stabilized numerical discretization procedures for hyperbolic equations (upwind finite-volume, Galerkin least-squares, discontinuous Galerkin) that benefit computationally from congruence approximation. Specifically, it becomes straightforward to construct the spatial discretization and Jacobian linearization for these schemes (given a small amount of derivative information) for possible use in Newton's method, discrete optimization, homotopy algorithms, etc. Some examples will be given for the compressible Euler equations and the nonrelativistic MHD equations using linear and quadratic spatial approximation.
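    The symmetrization the abstract builds on can be sketched from the standard theory (textbook material, not reproduced from the report): for a hyperbolic system $u_t + \partial_{x_i} f_i(u) = 0$ endowed with a convex entropy $U(u)$, the change to entropy variables $v = \partial U/\partial u$ yields the symmetric form

    $$
    \tilde{A}_0\, v_t + \tilde{A}_i\, \partial_{x_i} v = 0,
    \qquad
    \tilde{A}_0 = \frac{\partial u}{\partial v} \ \text{(symmetric positive definite)},
    \qquad
    \tilde{A}_i = \frac{\partial f_i}{\partial u}\,\tilde{A}_0 \ \text{(symmetric)},
    $$

    and any congruence transform $B^{\mathsf{T}} \tilde{A}_i B$ with nonsingular $B$ preserves the symmetry (and, for $\tilde{A}_0$, the positive definiteness) that the stabilized discretizations exploit.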

  13. Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals: Effects of Adding Visual Cues to Auditory Speech Stimuli.

    PubMed

    Moradi, Shahram; Lidestam, Björn; Rönnberg, Jerker

    2016-01-01

    The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context. PMID:27317667

  15. Congruence and Career Change in Employed Adults

    PubMed

    Oleski; Subich

    1996-12-01

    Holland's theory of congruence was applied to adults changing careers. Forty-two nontraditional students attending college to attain a new occupation were surveyed. The group's average experience in the work force was 14.5 years, and their average age was 34.4 years. Using the C index (Brown & Gore, 1994) and Kwak and Pulvino's (1982) K-P index to measure congruence, data supported the hypothesis that employed adults in the process of changing their careers move in a direction of greater congruence. Further, job satisfaction was correlated significantly with congruence as operationalized by the C index (r = .33, p < .03) and the K-P index (r = .32, p < .04). PMID:8980082

  16. Congruences for the Andrews spt function.

    PubMed

    Ono, Ken

    2011-01-11

    Ramanujan-type congruences for the Andrews spt(n) partition function have been found for prime moduli 5 ≤ ℓ ≤ 37 in the work of Andrews [Andrews GE, (2008) J Reine Angew Math 624:133-142] and Garvan [Garvan F, (2010) Int J Number Theory 6:1-29]. We exhibit unexpectedly simple congruences for all ℓ ≥ 5. Confirming a conjecture of Garvan, we show that if ℓ ≥ 5 is prime and (-δ/ℓ) = 1, then spt[(ℓ²(ℓn+δ)+1)/24] ≡ 0 (mod ℓ). This congruence gives (ℓ - 1)/2 arithmetic progressions modulo ℓ³ which support a mod ℓ congruence. This result follows from the surprising fact that the reduction of a certain mock theta function modulo ℓ, for every ℓ ≥ 5, is an eigenform of the Hecke operator T(ℓ²).
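    The simplest of these congruences can be checked by brute force. spt(n) counts the total number of smallest parts over all partitions of n, and Andrews' original results include spt(5n+4) ≡ 0 (mod 5). A small Python sketch (naive enumeration, fine for small n):

    ```python
    def partitions(n, max_part=None):
        """Yield the partitions of n as non-increasing tuples."""
        if max_part is None:
            max_part = n
        if n == 0:
            yield ()
            return
        for k in range(min(n, max_part), 0, -1):
            for rest in partitions(n - k, k):
                yield (k,) + rest

    def spt(n):
        """Total number of smallest parts over all partitions of n."""
        total = 0
        for p in partitions(n):
            smallest = p[-1]
            total += sum(1 for part in p if part == smallest)
        return total

    print(spt(4))                                # -> 10 (the partitions of 4 contribute 1+1+2+2+4)
    print([spt(5*k + 4) % 5 for k in range(4)])  # -> [0, 0, 0, 0], Andrews' mod-5 congruence
    ```

    Ono's result above packages infinitely many such progressions at once, one family for each prime ℓ ≥ 5.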

  17. Event congruency and episodic encoding: a developmental fMRI study.

    PubMed

    Maril, Anat; Avital, Rinat; Reggev, Niv; Zuckerman, Maya; Sadeh, Talya; Ben Sira, Liat; Livneh, Neta

    2011-09-01

    A known contributor to adults' superior memory performance compared to children is their differential reliance on an existing knowledge base. Compared to those of adults, children's semantic networks are less accessible and less established, a difference that is also thought to contribute to children's relative resistance to semantically related false alarms. Using the "congruency effect" - the memory advantage of congruity, we manipulated the encoded stimuli in the present experiment such that the use of the knowledge base at encoding was more - or less - accessible in both children and adults. While being scanned, 15 children (ages 8-11) and 18 young adults saw printed noun/color combinations and were asked to indicate whether each combination existed in nature. A subsequent recognition test was administered outside of the scanner. Behaviorally, although overall memory was higher in the adult group compared to the children, both age groups showed the congruency effect to the same extent. A comparison of the neural substrates supporting the congruency effect between adults and children revealed that whereas adults recruited regions primarily associated with semantic-conceptual processing (e.g., the left PFC and parietal and occipito-temporal cortices), children recruited regions earlier in the processing stream (e.g., the right occipital cortex). This evidence supports the hypothesis that early in development, episodic encoding depends more on perceptual systems, whereas top-down frontal control and parietal structures become more prominent in the encoding process with age. This developmental switch contributes to adults' superior memory performance but may render adults more vulnerable to committing semantically based errors. PMID:21777596

  18. Hearing (rivaling) lips and seeing voices: how audiovisual interactions modulate perceptual stabilization in binocular rivalry

    PubMed Central

    Vidal, Manuel; Barrès, Victor

    2014-01-01

    In binocular rivalry (BR), sensory input remains the same yet subjective experience fluctuates irremediably between two mutually exclusive representations. We investigated the perceptual stabilization effect of an additional sound on the BR dynamics using speech stimuli known to involve robust audiovisual (AV) interactions at several cortical levels. Subjects sensitive to the McGurk effect were presented looping videos of rivaling faces uttering /aba/ and /aga/, respectively, while synchronously hearing the voice /aba/. They reported continuously the dominant percept, either observing passively or trying actively to promote one of the faces. The few studies that investigated the influence of information from an external modality on perceptual competition reported results that seem at first sight inconsistent. Since these differences could stem from how well the modalities matched, we addressed this by comparing two levels of AV congruence: real (/aba/ viseme) vs. illusory (/aga/ viseme producing the /ada/ McGurk fusion). First, adding the voice /aba/ stabilized both real and illusory congruent lips percept. Second, real congruence of the added voice improved volitional control whereas illusory congruence did not, suggesting a graded contribution to the top-down sensitivity control of selective attention. In conclusion, a congruent sound enhanced considerably attentional control over the perceptual outcome selection; however, differences between passive stabilization and active control according to AV congruency suggest these are governed by two distinct mechanisms. Based on existing theoretical models of BR, selective attention and AV interaction in speech perception, we provide a general interpretation of our findings. PMID:25237302

  19. Generative Semantics.

    ERIC Educational Resources Information Center

    King, Margaret

    The first section of this paper deals with the attempts within the framework of transformational grammar to make semantics a systematic part of linguistic description, and outlines the characteristics of the generative semantics position. The second section takes a critical look at generative semantics in its later manifestations, and makes a case…

  20. Top-down attention regulates the neural expression of audiovisual integration.

    PubMed

    Morís Fernández, Luis; Visser, Maya; Ventura-Campos, Noelia; Ávila, César; Soto-Faraco, Salvador

    2015-10-01

    The interplay between attention and multisensory integration has proven to be a difficult question to tackle. There are almost as many studies showing that multisensory integration occurs independently from the focus of attention as studies implying that attention has a profound effect on integration. Addressing the neural expression of multisensory integration for attended vs. unattended stimuli can help disentangle this apparent contradiction. In the present study, we examine if selective attention to sound pitch influences the expression of audiovisual integration in both behavior and neural activity. Participants were asked to attend to one of two auditory speech streams while watching a pair of talking lips that could be congruent or incongruent with the attended speech stream. We measured behavioral and neural responses (fMRI) to multisensory stimuli under attended and unattended conditions while physical stimulation was kept constant. Our results indicate that participants recognized words more accurately from an auditory stream that was both attended and audiovisually (AV) congruent, thus reflecting a benefit due to AV integration. On the other hand, no enhancement was found for AV congruency when it was unattended. Furthermore, the fMRI results indicated that activity in the superior temporal sulcus (an area known to be related to multisensory integration) was contingent on attention as well as on audiovisual congruency. This attentional modulation extended beyond heteromodal areas to affect processing in areas classically recognized as unisensory, such as the superior temporal gyrus or the extrastriate cortex, and to non-sensory areas such as the motor cortex. Interestingly, attention to audiovisual incongruence triggered responses in brain areas related to conflict processing (i.e., the anterior cingulate cortex and the anterior insula). Based on these results, we hypothesize that AV speech integration can take place automatically only when both

  1. Audio-Visual Aids: Historians in Blunderland.

    ERIC Educational Resources Information Center

    Decarie, Graeme

    1988-01-01

    A history professor relates his experiences producing and using audio-visual material and warns teachers not to rely on audio-visual aids for classroom presentations. Includes examples of popular audio-visual aids on Canada that communicate unintended, inaccurate, or unclear ideas. Urges teachers to exercise caution in the selection and use of…

  2. [Audio-visual aids and tropical medicine].

    PubMed

    Morand, J J

    1989-01-01

    The author presents a list of audio-visual productions on tropical medicine, along with their main characteristics. Noting that audio-visual educational productions are often dissociated from their promotion, he invites future creators to forward their work to the Audio-Visual Health Committee.

  3. Towards Postmodernist Television: INA's Audiovisual Magazine Programmes.

    ERIC Educational Resources Information Center

    Boyd-Bowman, Susan

    Over the last 10 years, French television's Institute of Audiovisual Communication (INA) has shifted from modernist to post-modernist practice in broadcasting in a series of innovative audiovisual magazine programs about communication, and in a series of longer "compilation" documentaries. The first of INA's audiovisual magazines, "Hieroglyphes,"…

  4. Data congruence, paedomorphosis and salamanders

    PubMed Central

    Struck, Torsten H

    2007-01-01

    Background The retention of ancestral juvenile characters by adult stages of descendants is called paedomorphosis. This process can mislead phylogenetic analyses based on morphological data, even in combination with molecular data, because it is difficult to assess whether a character is primarily absent or secondarily lost. Detecting incongruence between morphological and molecular data is therefore necessary to investigate the reliability of simultaneous analyses. Different methods have been proposed to detect data congruence or incongruence. Five of them (PABA, PBS, NDI, LILD, DRI) are used herein to assess incongruence between morphological and molecular data in a case study addressing salamander phylogeny, which comprises several supposedly paedomorphic taxa. To this end, previously published data sets were compiled. Furthermore, two strategies for ameliorating the effects of paedomorphosis on phylogenetic studies were tested with statistical rigor. Additionally, the efficiency of the different methods to assess incongruence was analyzed using this empirical data set. Finally, a test statistic is presented for all these methods except DRI. Results The addition of morphological data to molecular data results in both different positions for three of the four paedomorphic taxa and strong incongruence, but treating the morphological data with strategies that ameliorate the negative impact of paedomorphosis revokes these changes and minimizes the conflict. Of these strategies, simply excluding paedomorphic character traits seems most beneficial. Of the three molecular partitions analyzed herein, the RAG1 partition seems the most suitable for resolving deep salamander phylogeny; the rRNA and mtDNA partitions are too conserved and too variable, respectively. Of the different methods to detect incongruence, the NDI and PABA approaches are more conservative in indicating incongruence than LILD and PBS. Conclusion

  5. A Novel Audiovisual Brain-Computer Interface and Its Application in Awareness Detection.

    PubMed

    Wang, Fei; He, Yanbin; Pan, Jiahui; Xie, Qiuyou; Yu, Ronghao; Zhang, Rui; Li, Yuanqing

    2015-06-30

    Currently, detecting awareness in patients with disorders of consciousness (DOC) is a challenging task, which is commonly addressed through behavioral observation scales such as the JFK Coma Recovery Scale-Revised. Brain-computer interfaces (BCIs) provide an alternative approach to detect awareness in patients with DOC. However, these patients have a much lower capability of using BCIs compared to healthy individuals. This study proposed a novel BCI using temporally, spatially, and semantically congruent audiovisual stimuli involving numbers (i.e., visual and spoken numbers). Subjects were instructed to selectively attend to the cued target stimuli. Ten healthy subjects first participated in the experiment to evaluate the system. The results indicated that the audiovisual BCI system outperformed auditory-only and visual-only systems. Through event-related potential analysis, we observed audiovisual integration effects for target stimuli, which enhanced the discriminability between brain responses for target and nontarget stimuli and thus improved the performance of the audiovisual BCI. This system was then applied to detect the awareness of seven DOC patients, five of whom exhibited command following as well as number recognition. Thus, this audiovisual BCI system may be used as a supportive bedside tool for awareness detection in patients with DOC.
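Audiovisual integration effects in ERPs are often quantified with an additive criterion, comparing the bimodal response against the sum of the unimodal responses. The following is a hedged illustration of that criterion on toy Gaussian waveforms, not the study's actual data or pipeline:

```python
import numpy as np

# Additive test for audiovisual integration (illustrative only):
# a nonzero difference wave AV - (A + V) is taken as evidence that
# the bimodal response is not just the sum of the unimodal ones.

t = np.linspace(0, 0.4, 401)                          # time in seconds
a = np.exp(-((t - 0.10) / 0.02) ** 2)                 # toy auditory ERP
v = 0.8 * np.exp(-((t - 0.15) / 0.03) ** 2)           # toy visual ERP
av = a + v + 0.5 * np.exp(-((t - 0.20) / 0.02) ** 2)  # super-additive component

diff = av - (a + v)                                   # difference wave
peak_ms = 1000 * t[np.abs(diff).argmax()]
print(round(peak_ms))                                 # latency of the toy integration effect
```

Here the super-additive component is planted at 200 ms, so the difference wave peaks there; with real data the difference wave would be tested statistically against zero.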

  6. Neurofunctional underpinnings of audiovisual emotion processing in teens with autism spectrum disorders.

    PubMed

    Doyle-Thomas, Krissy A R; Goldberg, Jeremy; Szatmari, Peter; Hall, Geoffrey B C

    2013-01-01

    Despite successful performance on some audiovisual emotion tasks, hypoactivity has been observed in frontal and temporal integration cortices in individuals with autism spectrum disorders (ASD). Little is understood about the neurofunctional network underlying this ability in individuals with ASD. Research suggests that there may be processing biases in individuals with ASD, based on their ability to obtain meaningful information from the face and/or the voice. This functional magnetic resonance imaging study examined brain activity in teens with ASD (n = 18) and typically developing controls (n = 16) during audiovisual and unimodal emotion processing. Teens with ASD had a significantly lower accuracy when matching an emotional face to an emotion label. However, no differences in accuracy were observed between groups when matching an emotional voice or face-voice pair to an emotion label. In both groups brain activity during audiovisual emotion matching differed significantly from activity during unimodal emotion matching. Between-group analyses of audiovisual processing revealed significantly greater activation in teens with ASD in a parietofrontal network believed to be implicated in attention, goal-directed behaviors, and semantic processing. In contrast, controls showed greater activity in frontal and temporal association cortices during this task. These results suggest that in the absence of engaging integrative emotional networks during audiovisual emotion matching, teens with ASD may have recruited the parietofrontal network as an alternate compensatory system.

  9. Building Intuitive Arguments for the Triangle Congruence Conditions

    ERIC Educational Resources Information Center

    Piatek-Jimenez, Katrina

    2008-01-01

    The triangle congruence conditions are a central focus to nearly any course in Euclidean geometry. The author presents a hands-on activity that uses straws and pipe cleaners to explore and justify the triangle congruence conditions. (Contains 4 figures.)
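The side-side-side (SSS) condition that such an activity builds toward can also be checked numerically. A minimal sketch, assuming triangles are given as vertex coordinates:

```python
import math

# SSS congruence check (illustrative sketch): two triangles are
# congruent when their three side lengths match, so we compare the
# sorted side lengths within a numerical tolerance.

def side_lengths(tri):
    a, b, c = tri
    return sorted([math.dist(a, b), math.dist(b, c), math.dist(c, a)])

def congruent_sss(t1, t2, tol=1e-9):
    return all(math.isclose(x, y, abs_tol=tol)
               for x, y in zip(side_lengths(t1), side_lengths(t2)))

t1 = [(0, 0), (4, 0), (0, 3)]
t2 = [(1, 1), (1, 4), (5, 1)]   # same 3-4-5 triangle, moved and reflected
print(congruent_sss(t1, t2))    # True
```

Sorting the side lengths makes the check independent of vertex labeling, which mirrors the classroom point that only the set of side lengths matters for SSS.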

  10. Contribution of prior semantic knowledge to new episodic learning in amnesia.

    PubMed

    Kan, Irene P; Alexander, Michael P; Verfaellie, Mieke

    2009-05-01

    We evaluated whether prior semantic knowledge would enhance episodic learning in amnesia. Subjects studied prices that were either congruent or incongruent with prior price knowledge for grocery and household items and then performed a forced-choice recognition test for the studied prices. Consistent with a previous report, healthy controls' performance was enhanced by price knowledge congruency; however, only a subset of amnesic patients experienced the same benefit. Whereas patients with relatively intact semantic systems, as measured by an anatomical measure (i.e., lesion involvement of anterior and lateral temporal lobes), experienced a significant congruency benefit, patients with compromised semantic systems did not. Our findings suggest that when prior knowledge structures are intact, they can support acquisition of new episodic information by providing frameworks into which such information can be incorporated.

  11. Planning and Producing Audiovisual Materials.

    ERIC Educational Resources Information Center

    Kemp, Jerrold E.

    The first few chapters of this book are devoted to an examination of the changing character of audiovisual materials; instructional design and the selection of media to serve specific objectives; and principles of perception, communication, and learning. Relevant research findings in the field are reviewed. The basic techniques of planning…

  12. Current Developments in Audiovisual Cataloging.

    ERIC Educational Resources Information Center

    Graham, Paul

    1985-01-01

    This paper highlights significant advances in audiovisual cataloging theory and practice: development of "Anglo-American Cataloging Rules" (second edition); revision of the MARC Films Format; and project to provide cataloging-in-publication for microcomputer software. Evolution of rules and practices as an outgrowth of needs of the community is…

  13. Audiovisual Resources for Instructional Development.

    ERIC Educational Resources Information Center

    Wilds, Thomas, Comp.; And Others

    Provided is a compilation of recently annotated audiovisual materials which present techniques, models, or other specific information that can aid in providing comprehensive services to the handicapped. Entries which include a brief description, name of distributor, technical information, and cost are presented alphabetically by title in eight…

  14. A Computer Generated Audiovisuals Catalog.

    ERIC Educational Resources Information Center

    Bogen, Betty

    Eccles Medical Sciences Library at the University of Utah has developed a computer-generated catalog for its audiovisual health and medical materials. The catalog contains four sections: (1) the main listing of type of media, with descriptions, call numbers, and Medical Subject Headings (MeSH) used for each item; (2) a listing by title, with call…

  15. Audiovisual Speech Recalibration in Children

    ERIC Educational Resources Information Center

    van Linden, Sabine; Vroomen, Jean

    2008-01-01

    In order to examine whether children adjust their phonetic speech categories, children of two age groups, five-year-olds and eight-year-olds, were exposed to a video of a face saying /aba/ or /ada/ accompanied by an auditory ambiguous speech sound halfway between /b/ and /d/. The effect of exposure to these audiovisual stimuli was measured on…

  16. Preventive Maintenance Handbook. Audiovisual Equipment.

    ERIC Educational Resources Information Center

    Educational Products Information Exchange Inst., Stony Brook, NY.

    The preventive maintenance system for audiovisual equipment presented in this handbook is designed by specialists so that it can be used by nonspecialists at school sites. The report offers specific advice on safety factors and also lists major problems that should not be handled by nonspecialists. Other aspects of a preventive maintenance system…

  17. Audiovisual Materials for Environmental Education.

    ERIC Educational Resources Information Center

    Minnesota State Dept. of Education, St. Paul. Div. of Instruction.

    This list of audiovisual materials for environmental education was prepared by the State of Minnesota, Department of Education, Division of Instruction, to accompany the pilot curriculum in environmental education. The majority of the materials listed are available from the University of Minnesota, or from state or federal agencies. The…

  18. Body Build Satisfaction and the Congruency of Body Build Perceptions.

    ERIC Educational Resources Information Center

    Hankins, Norman E.; Bailey, Roger C.

    1979-01-01

    Females were administered the somatotype rating scale. Satisfied subjects showed greater congruency between their own and wished-for body build, and greater congruency between their own and friend/date body builds, but less congruency between their own body build and the female stereotype. (Author/BEF)

  19. An Investigation of Person-Environment Congruence

    ERIC Educational Resources Information Center

    McMurray, Marissa Johnstun

    2013-01-01

    This study tested a hypothesis derived from Holland's (1997) theory of personality and environment that congruence between person and environment would influence satisfaction with doctoral training environments and career certainty. Doctoral students' (N = 292) vocational interests were measured using questions from the Interest Item Pool, and…

  20. Some Extensions of the Iachan Congruence Index.

    ERIC Educational Resources Information Center

    Iachan, Ronaldo

    1990-01-01

    Extends Iachan's congruence index (1984) to cover situations when ties occur in three-letter codes (as used in Holland's vocational theory) and when only two top letters are recorded and/or used. Suggests methodologies for breaking ties and for measuring agreement between two-letter codes and includes computer algorithm for computing the index.…
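For readers unfamiliar with the index being extended, a minimal sketch of the basic Iachan agreement score for two three-letter Holland codes may help. The weight matrix below is the commonly cited one (maximum score 28); treat the exact weights as an assumption to verify against Iachan (1984) before use:

```python
# Illustrative sketch of the Iachan congruence index for three-letter
# Holland codes (e.g., "RIA" vs. "SEC").  Each shared letter earns a
# weight indexed by its position in each code; higher totals mean
# greater person-environment congruence.

WEIGHTS = [
    [22, 10, 4],   # row: position of the letter in the first code
    [10, 5, 2],    # column: position of the same letter in the second code
    [4, 2, 1],
]

def iachan_index(code_a: str, code_b: str) -> int:
    """Sum weights over letters of code_a that also occur in code_b."""
    score = 0
    for i, letter in enumerate(code_a):
        j = code_b.find(letter)
        if j != -1:
            score += WEIGHTS[i][j]
    return score

print(iachan_index("RIA", "RIA"))  # identical codes -> 28 (22 + 5 + 1)
print(iachan_index("RIA", "SEC"))  # no shared letters -> 0
```

The extensions discussed in the record (ties, two-letter codes) would modify how `j` is resolved and which cells of the weight matrix apply.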

  1. A Congruence Approach to Syntax and Codeswitching.

    ERIC Educational Resources Information Center

    Sebba, Mark

    1998-01-01

    Argues that an adequate theory of codeswitching syntax is one that holds that the actual nature of the switching is relative not only to the language pairs, but also to other situational factors. Suggests that congruence of categories is constructed by bilinguals in a given situation with four alternative outcomes for the given candidate switch:…

  2. Bilingualism affects audiovisual phoneme identification.

    PubMed

    Burfin, Sabine; Pascalis, Olivier; Ruiz Tada, Elisa; Costa, Albert; Savariaux, Christophe; Kandel, Sonia

    2014-01-01

    We all go through a process of perceptual narrowing for phoneme identification. As we become experts in the languages we hear in our environment, we lose the ability to identify phonemes that do not exist in our native phonological inventory. This research examined how linguistic experience (i.e., exposure to a double phonological code during childhood) affects the visual processes involved in non-native phoneme identification in audiovisual speech perception. We conducted a phoneme identification experiment with bilingual and monolingual adult participants. It was an ABX task involving a Bengali dental-retroflex contrast that does not exist in any of the participants' languages. The phonemes were presented in audiovisual (AV) and audio-only (A) conditions. The results revealed that in the audio-only condition both monolinguals and bilinguals had difficulty discriminating the retroflex non-native phoneme. They were phonologically "deaf" and assimilated it to the dental phoneme that exists in their native languages. In the audiovisual presentation, by contrast, both groups could overcome this phonological deafness and identify both Bengali phonemes. However, monolinguals were more accurate and responded more quickly than bilinguals. This suggests that bilinguals do not use the same processes as monolinguals to decode visual speech.

  3. Interplay Between the Object and Its Symbol: The Size-Congruency Effect

    PubMed Central

    Shen, Manqiong; Xie, Jiushu; Liu, Wenjuan; Lin, Wenjie; Chen, Zhuoming; Marmolejo-Ramos, Fernando; Wang, Ruiming

    2016-01-01

    Grounded cognition suggests that conceptual processing shares cognitive resources with perceptual processing. Hence, conceptual processing should be affected by perceptual processing, and vice versa. The current study explored the relationship between conceptual and perceptual processing of size. Within a pair of words, we manipulated the font size of each word, which was either congruent or incongruent with the actual size of the referred object. In Experiment 1a, participants compared the sizes of objects referred to by word pairs. Higher accuracy was observed in the congruent condition (e.g., word pairs referring to larger objects in larger font sizes) than in the incongruent condition. This is known as the size-congruency effect. In Experiments 1b and 2, participants compared the font sizes of these word pairs. The size-congruency effect was not observed. In Experiments 3a and 3b, participants compared object and font sizes of word pairs depending on a task cue. Results showed that perceptual processing affected conceptual processing, and vice versa. This suggests that the association between conceptual and perceptual processes may be bidirectional but further modulated by semantic processing. Specifically, conceptual processing might only affect perceptual processing when semantic information is activated. PMID:27512529

  4. Semantic Desktop

    NASA Astrophysics Data System (ADS)

    Sauermann, Leo; Kiesel, Malte; Schumacher, Kinga; Bernardi, Ansgar

    This contribution shows what the workplace of the future might look like and where the Semantic Web opens up new possibilities. To this end, approaches from the fields of the Semantic Web, knowledge representation, desktop applications, and visualization are presented that allow a user's existing data to be reinterpreted and reused. The combination of the Semantic Web with desktop computers brings particular advantages, a paradigm known as the Semantic Desktop. The possibilities for application integration described here are not limited to the desktop, however, but can equally be applied in web applications.

  5. The effects of stereo disparity on the behavioural and electrophysiological correlates of perception of audio-visual motion in depth.

    PubMed

    Harrison, Neil R; Witheridge, Sian; Makin, Alexis; Wuerger, Sophie M; Pegna, Alan J; Meyer, Georg F

    2015-11-01

    Motion is represented by low-level signals, such as size-expansion in vision or loudness changes in the auditory modality. The visual and auditory signals from the same object or event may be integrated and facilitate detection. We explored behavioural and electrophysiological correlates of congruent and incongruent audio-visual depth motion in conditions where auditory level changes, visual expansion, and visual disparity cues were manipulated. In Experiment 1 participants discriminated auditory motion direction whilst viewing looming or receding, 2D or 3D, visual stimuli. Responses were faster and more accurate for congruent than for incongruent audio-visual cues, and the congruency effect (i.e., the difference between incongruent and congruent conditions) was larger for visual 3D cues than for 2D cues. In Experiment 2, event-related potentials (ERPs) were collected during presentation of the 2D and 3D, looming and receding, audio-visual stimuli, while participants detected an infrequent deviant sound. Our main finding was that audio-visual congruity was affected by retinal disparity at an early processing stage (135-160 ms) over occipito-parietal scalp. Topographic analyses suggested that similar brain networks were activated for the 2D and 3D congruity effects, but that cortical responses were stronger in the 3D condition. Differences between congruent and incongruent conditions were observed at 140-200 ms, 220-280 ms, and 350-500 ms after stimulus onset.

  6. The activation of semantic memory: effects of prime exposure, prime-target relationship, and task demands.

    PubMed

    Bueno, Steve; Frenck-Mestre, Cheryl

    2008-06-01

    Priming facilitation was examined under conditions of brief incremental prime exposures (28, 43, 71, and 199 msec) under masked conditions for two types of lexical relationships (associative-semantic pairs, such as "wolf-fox," and semantic-feature pairs, such as "whale-dolphin") and in two tasks (primed lexical decision and semantic categorization). The results of eight experiments revealed, first, that priming elicits faster response times for semantic-feature pairs. The associative-semantic pairs produced priming only at the longer prime exposures. Second, priming was observed earlier for semantic categorization than for the lexical decision task, in which priming was observed only at the longer stimulus onset asynchronies. Finally, our results allowed us to discredit the congruency hypothesis, according to which priming is due to a common categorical response for the prime and target words. The implications of these results for current theories of semantic priming are discussed.

  7. Extraction of composite visual objects from audiovisual materials

    NASA Astrophysics Data System (ADS)

    Durand, Gwenael; Thienot, Cedric; Faudemay, Pascal

    1999-08-01

    An effective analysis of Visual Objects appearing in still images and video frames is required in order to offer fine grain access to multimedia and audiovisual contents. In previous papers, we showed how our method for segmenting still images into visual objects could improve content-based image retrieval and video analysis methods. Visual Objects are used in particular for extracting semantic knowledge about the contents. However, low-level segmentation methods for still images are not likely to extract a complex object as a whole but instead as a set of several sub-objects. For example, a person would be segmented into three visual objects: a face, hair, and a body. In this paper, we introduce the concept of Composite Visual Object. Such an object is hierarchically composed of sub-objects called Component Objects.

  8. Somatotopic Semantic Priming and Prediction in the Motor System

    PubMed Central

    Grisoni, Luigi; Dreyer, Felix R.; Pulvermüller, Friedemann

    2016-01-01

    The recognition of action-related sounds and words activates motor regions, reflecting the semantic grounding of these symbols in action information; in addition, motor cortex exerts causal influences on sound perception and language comprehension. However, proponents of classic symbolic theories still dispute the role of modality-preferential systems such as the motor cortex in the semantic processing of meaningful stimuli. To clarify whether the motor system carries semantic processes, we investigated neurophysiological indexes of semantic relationships between action-related sounds and words. Event-related potentials revealed that action-related words produced significantly larger stimulus-evoked (Mismatch Negativity-like) and predictive brain responses (Readiness Potentials) when presented in body-part-incongruent sound contexts (e.g., “kiss” in footstep sound context; “kick” in whistle context) than in body-part-congruent contexts, a pattern reminiscent of neurophysiological correlates of semantic priming. Cortical generators of the semantic relatedness effect were localized in areas traditionally associated with semantic memory, including left inferior frontal cortex and temporal pole, and, crucially, in motor areas, where body-part congruency of action sound–word relationships was indexed by a somatotopic pattern of activation. As our results show neurophysiological manifestations of action-semantic priming in the motor cortex, they prove semantic processing in the motor system and thus in a modality-preferential system of the human brain. PMID:26908635

  9. Gender affects semantic competition: the effect of gender in a non-gender-marking language.

    PubMed

    Fukumura, Kumiko; Hyönä, Jukka; Scholfield, Merete

    2013-07-01

    English speakers tend to produce fewer pronouns when a referential competitor has the same gender as the referent than otherwise. Traditionally, this gender congruence effect has been explained in terms of ambiguity avoidance (e.g., Arnold, Eisenband, Brown-Schmidt, & Trueswell, 2000; Fukumura, Van Gompel, & Pickering, 2010). However, an alternative hypothesis is that the competitor's gender congruence affects semantic competition, making the referent less accessible relative to when the competitor has a different gender (Arnold & Griffin, 2007). Experiment 1 found that even in Finnish, which is a nongendered language, the competitor's gender congruence results in fewer pronouns, supporting the semantic competition account. In Experiment 2, Finnish native speakers took part in an English version of the same experiment. The effect of gender congruence was larger in Experiment 2 than in Experiment 1, suggesting that the presence of a same-gender competitor resulted in a larger reduction in pronoun use in English than in Finnish. In contrast, other nonlinguistic similarity had similar effects in both experiments. This indicates that the effect of gender congruence in English is not entirely driven by semantic competition: Speakers also avoid gender-ambiguous pronouns. PMID:23356244

  10. Catalog of Audiovisual Materials Related to Rehabilitation.

    ERIC Educational Resources Information Center

    Mann, Joe, Ed.; Henderson, Jim, Ed.

    An annotated listing of a variety of audiovisual formats on content related to the social-rehabilitation process is provided. The materials in the listing were selected from a collection of over 200 audiovisual catalogs. The major portion of the materials has not been screened. The materials are classified alphabetically by the following subject…

  11. Audio-visual interactions in environment assessment.

    PubMed

    Preis, Anna; Kociński, Jędrzej; Hafke-Dys, Honorata; Wrzosek, Małgorzata

    2015-08-01

    The aim of the study was to examine how visual and audio information influences audio-visual environment assessment. Original audio-visual recordings were made at seven different places in the city of Poznań. Participants in the psychophysical experiments were asked to rate, on a standardized numerical scale, the degree of comfort they would feel if they were in such an environment. The assessments of audio-visual comfort were carried out in a laboratory under four different conditions: (a) audio samples only, (b) original audio-visual samples, (c) video samples only, and (d) mixed audio-visual samples. The general results of this experiment showed a significant difference between the investigated conditions, but not for all the investigated samples. When conditions (a) and (b) were compared, there was a significant improvement in comfort assessment when visual information was added (in only three out of seven cases). On the other hand, the results show that the comfort assessment of audio-visual samples could be changed by manipulating the audio rather than the video part of the audio-visual sample. Finally, it seems that people differentiate audio-visual representations of a given place in the environment based on the composition of sound sources rather than on the sound level. Object identification is responsible for both landscape and soundscape grouping.

  12. Audio-Visual Aids in Universities

    ERIC Educational Resources Information Center

    Douglas, Jackie

    1970-01-01

    A report on the proceedings and ideas expressed at a one day seminar on "Audio-Visual Equipment--Its Uses and Applications for Teaching and Research in Universities." The seminar was organized by England's National Committee for Audio-Visual Aids in Education in conjunction with the British Universities Film Council. (LS)

  13. Solar Energy Audio-Visual Materials.

    ERIC Educational Resources Information Center

    Department of Housing and Urban Development, Washington, DC. Office of Policy Development and Research.

    This directory presents an annotated bibliography of non-print information resources dealing with solar energy. The document is divided by type of audio-visual medium, including: (1) Films, (2) Slides and Filmstrips, and (3) Videotapes. A fourth section provides addresses and telephone numbers of audiovisual aids sources, and lists the page…

  14. Cross-taxon congruence and environmental conditions

    PubMed Central

    2010-01-01

    Background Diversity patterns of different taxa typically covary in space, a phenomenon called cross-taxon congruence. This pattern has been explained by the effect of one taxon's diversity on another taxon's diversity, shared biogeographic histories of different taxa, and/or common responses to environmental conditions. A meta-analysis of the association between environment and diversity patterns found that in 83 out of 85 studies, more than 60% of the spatial variability in species richness was related to variables representing energy, water, or their interaction. The role of the environment in determining taxon diversity patterns leads us to hypothesize that it would explain the observed cross-taxon congruence. However, recent analyses reported the persistence of cross-taxon congruence when the environmental effect was statistically removed. Here we evaluate this hypothesis by analyzing the cross-taxon congruence between birds and mammals in the Brazilian Cerrado and assessing the environmental role in the spatial covariation of diversity patterns. Results We found a positive association between avian and mammal richness and a positive latitudinal trend for both groups in the Brazilian Cerrado. Regression analyses indicated an effect of latitude, PET, and mean temperature on both biological groups. In addition, we show that NDVI was associated only with avian diversity, while annual relative humidity was correlated only with mammal diversity. We determined the environmental effects on diversity in a path analysis that accounted for 73% and 76% of the spatial variation in avian and mammal richness, respectively. However, an association between avian and mammal diversity remains significant. Indeed, the importance of this link between bird and mammal diversity was also supported by a significant association between the residuals of the bird and mammal spatial autoregressive models.
Conclusion Our study corroborates the main role of environmental conditions on diversity patterns, but suggests that other
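The residual-based test described in this abstract can be illustrated with a small, self-contained sketch: regress each richness variable on an environmental predictor, then correlate the residuals. All numbers below are hypothetical, and ordinary least squares stands in for the paper's path and spatial autoregressive analyses.

```python
# Sketch of the residual-congruence logic: regress bird and mammal richness
# on an environmental predictor, then check whether the residuals still covary.
# Data are hypothetical; simple OLS stands in for the paper's actual models.

def ols_residuals(x, y):
    """Residuals of a simple least-squares regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    return [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]

def pearson_r(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

# Hypothetical site-level data: mean temperature plus bird/mammal richness.
temp    = [18.0, 20.5, 22.0, 24.5, 26.0, 27.5, 29.0, 31.0]
birds   = [40, 48, 50, 58, 62, 61, 70, 74]
mammals = [22, 26, 27, 33, 34, 36, 39, 42]

raw_r = pearson_r(birds, mammals)
resid_r = pearson_r(ols_residuals(temp, birds), ols_residuals(temp, mammals))
print(f"raw cross-taxon r = {raw_r:.2f}, residual r = {resid_r:.2f}")
```

If the residual correlation remains substantial after the environmental predictor is partialled out, the congruence is not fully explained by the environment, which is the pattern the abstract reports.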

  15. Audio-visual gender recognition

    NASA Astrophysics Data System (ADS)

    Liu, Ming; Xu, Xun; Huang, Thomas S.

    2007-11-01

Combining different modalities for a pattern recognition task is a very promising field. Humans routinely fuse information from different modalities to recognize objects, perform inference, and so on. Audio-visual gender recognition is one of the most common tasks in human social communication: humans can identify gender from facial appearance, from speech, and also from body gait. Indeed, human gender recognition is a multi-modal data acquisition and processing procedure. However, computational multi-modal gender recognition has not been extensively investigated in the literature. In this paper, speech and facial images are fused to perform multi-modal gender recognition, exploring the improvement gained by combining different modalities.
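The kind of modality combination this paper explores is often implemented as score-level ("late") fusion, in which each modality's classifier produces a posterior and the decisions are combined. The sketch below is a hedged illustration with hypothetical scores and weights, not the paper's actual method:

```python
# Minimal late-fusion sketch: each modality (speech, face) yields a posterior
# probability of one class ("male"), and the fused decision uses a weighted sum.
# Scores, weights, and the threshold are hypothetical.

def fuse_scores(p_speech, p_face, w_speech=0.5, w_face=0.5):
    """Weighted-sum (late) fusion of two modality posteriors for one class."""
    assert abs(w_speech + w_face - 1.0) < 1e-9, "weights must sum to 1"
    return w_speech * p_speech + w_face * p_face

def classify(p_speech, p_face, threshold=0.5):
    """Return the fused gender decision for one sample."""
    return "male" if fuse_scores(p_speech, p_face) >= threshold else "female"

# Speech alone is ambiguous (0.55) but the face classifier is confident (0.90):
print(classify(0.55, 0.90))  # fused posterior 0.725 -> "male"
```

The point of fusion is visible in the example: a modality that is ambiguous on its own can be disambiguated by a confident second modality, which is the improvement the paper sets out to measure.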

  16. 29 CFR 2.13 - Audiovisual coverage prohibited.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 1 2010-07-01 2010-07-01 true Audiovisual coverage prohibited. 2.13 Section 2.13 Labor Office of the Secretary of Labor GENERAL REGULATIONS Audiovisual Coverage of Administrative Hearings § 2.13 Audiovisual coverage prohibited. The Department shall not permit audiovisual coverage of...

  17. An audiovisual emotion recognition system

    NASA Astrophysics Data System (ADS)

    Han, Yi; Wang, Guoyin; Yang, Yong; He, Kun

    2007-12-01

Human emotions can be expressed through many bio-signals; speech and facial expression are two of them. Both are regarded as emotional information that plays an important role in human-computer interaction. Building on our previous studies of emotion recognition, an audiovisual emotion recognition system is developed and presented in this paper. The system is designed for real-time use and is supported by several integrated modules: speech enhancement for eliminating noise, rapid face detection for locating the face in the background image, example-based shape learning for facial feature alignment, and an optical-flow-based tracking algorithm for facial feature tracking. Since irrelevant features and high dimensionality can hurt classifier performance, rough-set-based feature selection is used for dimension reduction: 13 of 37 speech features and 10 of 33 facial features are selected to represent emotional information, and 52 audiovisual features are selected once speech and video are synchronized and fused. Experimental results demonstrate that the system performs well in real time and achieves a high recognition rate. Our results also suggest that multi-module fused recognition will become the trend in emotion recognition.

  18. Response requirements modulate tactile spatial congruency effects.

    PubMed

    Gallace, Alberto; Soto-Faraco, Salvador; Dalton, Polly; Kreukniet, Bas; Spence, Charles

    2008-11-01

    Several recent studies have provided support for the view that tactile stimuli/events are remapped into an abstract spatial frame of reference beyond the initial somatotopic representation present in the primary somatosensory cortex. Here, we demonstrate for the first time that the extent to which this remapping of tactile stimuli takes place is dependent upon the particular demands imposed by the task that participants have to perform. Participants in the present study responded to either the elevation (up vs. down) or to the anatomical location (finger vs. thumb) of vibrotactile targets presented to one hand, while trying to ignore distractors presented simultaneously to the other hand. The magnitude and direction of the target-distractor congruency effect was measured as participants adopted one of two different postures with each hand (palm-up or palm-down). When the participants used footpedal responses (toe vs. heel; Experiment 1), congruency effects were determined by the relative elevation of the stimuli in external coordinates (same vs. different elevation), regardless of whether the relevant response feature was defined externally or anatomically. Even when participants responded verbally (Experiment 2), the influence of the relative elevation of the stimuli in external space, albeit attenuated, was still observed. However, when the task involved responding with the stimulated finger (four-alternative forced choice; Experiment 3), congruency effects were virtually eliminated. These findings support the view that tactile events can be remapped according to an abstract frame of reference resulting from multisensory integration, but that the frame of reference that is used while performing a particular task may depend to a large extent on the nature of the task demands. PMID:18709500

  19. Effect of Perceptual Load on Semantic Access by Speech in Children

    ERIC Educational Resources Information Center

    Jerger, Susan; Damian, Markus F.; Mills, Candice; Bartlett, James; Tye-Murray, Nancy; Abdi, Herve

    2013-01-01

Purpose: To examine whether semantic access by speech requires attention in children. Method: Children (N = 200) named pictures and ignored distractors on a cross-modal (distractors: auditory-no face) or multimodal (distractors: auditory-static face and audiovisual-dynamic face) picture-word task. The cross-modal task had a low load,…

  20. Generative Semantics

    ERIC Educational Resources Information Center

    Bagha, Karim Nazari

    2011-01-01

    Generative semantics is (or perhaps was) a research program within linguistics, initiated by the work of George Lakoff, John R. Ross, Paul Postal and later McCawley. The approach developed out of transformational generative grammar in the mid 1960s, but stood largely in opposition to work by Noam Chomsky and his students. The nature and genesis of…

  1. Audio-visual interaction and perceptual assessment of water features used over road traffic noise.

    PubMed

    Galbrun, Laurent; Calarco, Francesca M A

    2014-11-01

    This paper examines the audio-visual interaction and perception of water features used over road traffic noise, including their semantic aural properties, as well as their categorization and evocation properties. The research focused on a wide range of small to medium sized water features that can be used in gardens and parks to promote peacefulness and relaxation. Paired comparisons highlighted the inter-dependence between uni-modal (audio-only or visual-only) and bi-modal (audio-visual) perception, indicating that equal attention should be given to the design of both stimuli. In general, natural looking features tended to increase preference scores (compared to audio-only paired comparison scores), while manmade looking features decreased them. Semantic descriptors showed significant correlations with preferences and were found to be more reliable design criteria than physical parameters. A principal component analysis identified three components within the nine semantic attributes tested: "emotional assessment," "sound quality," and "envelopment and temporal variation." The first two showed significant correlations with audio-only preferences, "emotional assessment" being the most important predictor of preferences, and its attributes naturalness, relaxation, and freshness also being significantly correlated with preferences. Categorization results indicated that natural stream sounds are easily identifiable (unlike waterfalls and fountains), while evocation results showed no unique relationship with preferences. PMID:25373962

  2. Audio-visual interaction and perceptual assessment of water features used over road traffic noise.

    PubMed

    Galbrun, Laurent; Calarco, Francesca M A

    2014-11-01

    This paper examines the audio-visual interaction and perception of water features used over road traffic noise, including their semantic aural properties, as well as their categorization and evocation properties. The research focused on a wide range of small to medium sized water features that can be used in gardens and parks to promote peacefulness and relaxation. Paired comparisons highlighted the inter-dependence between uni-modal (audio-only or visual-only) and bi-modal (audio-visual) perception, indicating that equal attention should be given to the design of both stimuli. In general, natural looking features tended to increase preference scores (compared to audio-only paired comparison scores), while manmade looking features decreased them. Semantic descriptors showed significant correlations with preferences and were found to be more reliable design criteria than physical parameters. A principal component analysis identified three components within the nine semantic attributes tested: "emotional assessment," "sound quality," and "envelopment and temporal variation." The first two showed significant correlations with audio-only preferences, "emotional assessment" being the most important predictor of preferences, and its attributes naturalness, relaxation, and freshness also being significantly correlated with preferences. Categorization results indicated that natural stream sounds are easily identifiable (unlike waterfalls and fountains), while evocation results showed no unique relationship with preferences.

  3. Assessing Outcomes through Congruence of Course Objectives and Reflective Work

    ERIC Educational Resources Information Center

    Lockyer, Jocelyn M.; Fidler, Herta; Hogan, David B.; Pereles, Laurie; Wright, Bruce; Lebeuf, Christine; Gerritsen, Cory

    2005-01-01

    Introduction: Course outcomes have been assessed by examining the congruence between statements of commitment to change (CTCs) and course objectives. Other forms of postcourse reflective exercises (for example, impact and unmet-needs statements) have not been examined for congruence with course objectives or their utility in assessing course…

  4. Attention Modulation by Proportion Congruency: The Asymmetrical List Shifting Effect

    ERIC Educational Resources Information Center

    Abrahamse, Elger L.; Duthoo, Wout; Notebaert, Wim; Risko, Evan F.

    2013-01-01

    Proportion congruency effects represent hallmark phenomena in current theorizing about cognitive control. This is based on the notion that proportion congruency determines the relative levels of attention to relevant and irrelevant information in conflict tasks. However, little empirical evidence exists that uniquely supports such an attention…

  5. An Audio-Visual Approach to Training

    ERIC Educational Resources Information Center

    Hearnshaw, Trevor

    1977-01-01

    Describes the development of an audiovisual training course in duck husbandry which consists of synchronized tapes and slides. The production of the materials, equipment needs, operations, cost, and advantages of the program are discussed. (BM)

  6. Ways of making-sense: Local gamma synchronization reveals differences between semantic processing induced by music and language.

    PubMed

    Barraza, Paulo; Chavez, Mario; Rodríguez, Eugenio

    2016-01-01

Similar to linguistic stimuli, music can prime the meaning of a subsequent word. However, the brain dynamics underlying the semantic priming effect induced by music, and its relation to language, are so far unknown. To elucidate these issues, we compared the brain oscillatory responses to visual words that had been semantically primed either by a musical excerpt or by an auditory sentence. We found that semantic violation between music-word pairs triggers a classical N400 ERP and induces a sustained increase of long-distance theta phase synchrony, along with a transient increase of local gamma activity. Similar results were observed after linguistic semantic violation, except for gamma activity, which instead increased after semantic congruence between sentence-word pairs. Our findings indicate that local gamma activity is a neural marker signaling different modes of semantic processing for music and language, revealing the dynamic and self-organized nature of semantic processing.

  7. Ways of making-sense: Local gamma synchronization reveals differences between semantic processing induced by music and language.

    PubMed

    Barraza, Paulo; Chavez, Mario; Rodríguez, Eugenio

    2016-01-01

Similar to linguistic stimuli, music can prime the meaning of a subsequent word. However, the brain dynamics underlying the semantic priming effect induced by music, and its relation to language, are so far unknown. To elucidate these issues, we compared the brain oscillatory responses to visual words that had been semantically primed either by a musical excerpt or by an auditory sentence. We found that semantic violation between music-word pairs triggers a classical N400 ERP and induces a sustained increase of long-distance theta phase synchrony, along with a transient increase of local gamma activity. Similar results were observed after linguistic semantic violation, except for gamma activity, which instead increased after semantic congruence between sentence-word pairs. Our findings indicate that local gamma activity is a neural marker signaling different modes of semantic processing for music and language, revealing the dynamic and self-organized nature of semantic processing. PMID:26734990

  8. Target categorization with primes that vary in both congruency and sense modality.

    PubMed

    Weatherford, Kathryn; Mills, Michael; Porter, Anne M; Goolkasian, Paula

    2015-01-01

In two experiments we examined conceptual priming within and across sense modalities by varying the modality (pictures and environmental sounds) and the category congruency of prime-target pairs. Both experiments used a repetition priming paradigm, but Experiment 1 studied priming effects with a task that required a superordinate categorization response (man-made or natural), while Experiment 2 used a lower-level category response (musical instrument or animal), one more closely associated with the basic level of the semantic network. Results from Experiment 1 showed a strong effect of target modality and two distinct patterns of conceptual priming effects with picture and environmental sound targets. However, no priming advantage was found when congruent and incongruent primes were compared. Results from Experiment 2 revealed congruency effects that were specific to environmental sound targets when preceded by picture primes. The findings provide support for the intermodal event file and multisensory framework, and suggest that auditory and visual features about a single item in a conceptual category may be more tightly connected than two different items from the same category.

  9. Target categorization with primes that vary in both congruency and sense modality

    PubMed Central

    Weatherford, Kathryn; Mills, Michael; Porter, Anne M.; Goolkasian, Paula

    2015-01-01

In two experiments we examined conceptual priming within and across sense modalities by varying the modality (pictures and environmental sounds) and the category congruency of prime-target pairs. Both experiments used a repetition priming paradigm, but Experiment 1 studied priming effects with a task that required a superordinate categorization response (man-made or natural), while Experiment 2 used a lower-level category response (musical instrument or animal), one more closely associated with the basic level of the semantic network. Results from Experiment 1 showed a strong effect of target modality and two distinct patterns of conceptual priming effects with picture and environmental sound targets. However, no priming advantage was found when congruent and incongruent primes were compared. Results from Experiment 2 revealed congruency effects that were specific to environmental sound targets when preceded by picture primes. The findings provide support for the intermodal event file and multisensory framework, and suggest that auditory and visual features about a single item in a conceptual category may be more tightly connected than two different items from the same category. PMID:25667578

  10. The Current Status of Federal Audiovisual Policy and How These Policies Affect the National Audiovisual Center.

    ERIC Educational Resources Information Center

    Flood, R. Kevin

    The National Audiovisual Center was established in 1968 to provide a single organizational unit that serves as a central information point on completed audiovisual materials and a central sales point for the distribution of media that were produced by or for federal agencies. This speech describes the services the center can provide users of…

  11. A Programme for Semantics; Semantics and Its Critics; Semantics Shamantics.

    ERIC Educational Resources Information Center

    Goldstein, Laurence; Harris, Roy

    1990-01-01

    In a statement-response-reply format, a proposition concerning the study of semantics is made and debated in three papers by two authors. In the first paper, it is proposed that semantics is not the study of the concept of meaning, but rather a neurolinguistic issue, despite the fact that semantics is linked to context. It is argued that semantic…

  12. Audio-Visual, Visuo-Tactile and Audio-Tactile Correspondences in Preschoolers.

    PubMed

    Nava, Elena; Grassi, Massimo; Turati, Chiara

    2016-01-01

    Interest in crossmodal correspondences has recently seen a renaissance thanks to numerous studies in human adults. Yet, still very little is known about crossmodal correspondences in children, particularly in sensory pairings other than audition and vision. In the current study, we investigated whether 4-5-year-old children match auditory pitch to the spatial motion of visual objects (audio-visual condition). In addition, we investigated whether this correspondence extends to touch, i.e., whether children also match auditory pitch to the spatial motion of touch (audio-tactile condition) and the spatial motion of visual objects to touch (visuo-tactile condition). In two experiments, two different groups of children were asked to indicate which of two stimuli fitted best with a centrally located third stimulus (Experiment 1), or to report whether two presented stimuli fitted together well (Experiment 2). We found sensitivity to the congruency of all of the sensory pairings only in Experiment 2, suggesting that only under specific circumstances can these correspondences be observed. Our results suggest that pitch-height correspondences for audio-visual and audio-tactile combinations may still be weak in preschool children, and speculate that this could be due to immature linguistic and auditory cues that are still developing at age five. PMID:27311292

  13. The perception of geometrical structure from congruence

    NASA Technical Reports Server (NTRS)

    Lappin, Joseph S.; Wason, Thomas D.

    1989-01-01

The principal function of vision is to measure the environment. As demonstrated by the coordination of motor actions with the positions and trajectories of moving objects in cluttered environments, and by rapid recognition of solid objects in varying contexts from changing perspectives, vision provides real-time information about the geometrical structure and location of environmental objects and events. The geometric information provided by 2-D spatial displays is examined. It is proposed that the geometry of this information is best understood not within the traditional framework of perspective trigonometry, but in terms of the structure of qualitative relations defined by congruences among intrinsic geometric relations in images of surfaces. The basic concepts of this geometrical theory are outlined.

  14. Colavita dominance effect revisited: the effect of semantic congruity.

    PubMed

    Stubblefield, Alexandra; Jacobs, Lauryn A; Kim, Yongju; Goolkasian, Paula

    2013-11-01

To investigate the effect of semantic congruity on audiovisual target responses, participants detected a semantic concept that was embedded in a series of rapidly presented stimuli. The target concept appeared as a picture, an environmental sound, or both; and in bimodal trials, the audiovisual events were either consistent or inconsistent in their representation of a semantic concept. The results showed faster detection latencies to bimodal than to unimodal targets and a higher rate of missed targets when visual distractors were presented together with auditory targets, in comparison to auditory targets presented alone. The findings of Experiment 2 showed a cross-modal asymmetry, such that visual distractors were found to interfere with the accuracy of auditory target detection, but auditory distractors had no effect on either the speed or the accuracy of visual target detection. The biased-competition theory of attention (Desimone & Duncan, Annual Review of Neuroscience, 18, 1995; Duncan, Humphreys, & Ward, Current Opinion in Neurobiology, 7, 255-261, 1997) was used to explain the findings because, when the saliency of the visual stimuli was reduced by the addition of a noise filter in Experiment 4, visual interference on auditory target detection was diminished. Additionally, the results showed faster and more accurate target detection when semantic concepts were represented in a visual rather than an auditory format.

  15. Retinotopic effects during spatial audio-visual integration.

    PubMed

    Meienbrock, A; Naumer, M J; Doehrmann, O; Singer, W; Muckli, L

    2007-02-01

The successful integration of visual and auditory stimuli requires information about whether visual and auditory signals originate from corresponding places in the external world. Here we report crossmodal effects of spatially congruent and incongruent audio-visual (AV) stimulation. Visual and auditory stimuli were presented from one of four horizontal locations in external space. Seven healthy human subjects had to assess the spatial fit of a visual stimulus (i.e. a gray-scaled picture of a cartoon dog) and a simultaneously presented auditory stimulus (i.e. a barking sound). Functional magnetic resonance imaging (fMRI) revealed two distinct networks of cortical regions that processed preferentially either spatially congruent or spatially incongruent AV stimuli. Whereas earlier visual areas responded preferentially to incongruent AV stimulation, higher visual areas of the temporal and parietal cortex (left inferior temporal gyrus [ITG], right posterior superior temporal gyrus/sulcus [pSTG/STS], left intra-parietal sulcus [IPS]) and frontal regions (left pre-central gyrus [PreCG], left dorsolateral pre-frontal cortex [DLPFC]) responded preferentially to congruent AV stimulation. A position-resolved analysis revealed three robust cortical representations for each of the four visual stimulus locations in retinotopic visual regions corresponding to the representation of the horizontal meridian in area V1 and at the dorsal and ventral borders between areas V2 and V3. While these regions of interest (ROIs) did not show any significant effect of spatial congruency, we found subregions within ROIs in the right hemisphere that showed an incongruency effect (i.e. an increased fMRI signal during spatially incongruent compared to congruent AV stimulation). We interpret this finding as a correlate of spatially distributed recurrent feedback during mismatch processing: whenever a spatial mismatch is detected in multisensory regions (such as the IPS), processing resources are re

  16. MPEG-7 audio-visual indexing test-bed for video retrieval

    NASA Astrophysics Data System (ADS)

    Gagnon, Langis; Foucher, Samuel; Gouaillier, Valerie; Brun, Christelle; Brousseau, Julie; Boulianne, Gilles; Osterrath, Frederic; Chapdelaine, Claude; Dutrisac, Julie; St-Onge, Francis; Champagne, Benoit; Lu, Xiaojian

    2003-12-01

This paper reports on the development status of a Multimedia Asset Management (MAM) test-bed for content-based indexing and retrieval of audio-visual documents within the MPEG-7 standard. The project, called "MPEG-7 Audio-Visual Document Indexing System" (MADIS), specifically targets the indexing and retrieval of video shots and key frames from documentary film archives, based on audio-visual content like face recognition, motion activity, speech recognition and semantic clustering. The MPEG-7/XML encoding of the film database is done off-line. The description decomposition is based on a temporal decomposition into visual segments (shots), key frames and audio/speech sub-segments. The visible outcome will be a web site that allows video retrieval using a proprietary XQuery-based search engine, accessible to members at the Canadian National Film Board (NFB) Cineroute site. For example, end-users will be able to retrieve movie shots in the database that were produced in a specific year, that contain the face of a specific actor speaking a specific word, and in which there is no motion activity. Video streaming is performed over the high-bandwidth CA*net network deployed by CANARIE, a public Canadian Internet development organization.
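The shot-level indexing and querying idea can be sketched with Python's standard ElementTree. The element and attribute names below are simplified stand-ins rather than real MPEG-7 descriptors, and ElementTree's XPath subset stands in for the XQuery engine MADIS uses:

```python
# Sketch of XML-based shot indexing: shot metadata is stored as XML and
# retrieved with path expressions. Tag/attribute names are hypothetical
# simplifications, not the actual MPEG-7 descriptor schema.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<VideoDocument year="1962">
  <Shot id="s1" motionActivity="low">
    <Face actor="ActorA"/>
    <SpokenWord>hello</SpokenWord>
  </Shot>
  <Shot id="s2" motionActivity="high">
    <Face actor="ActorB"/>
    <SpokenWord>goodbye</SpokenWord>
  </Shot>
</VideoDocument>
""")

# Query: low-motion shots in which ActorA's face appears and "hello" is spoken.
matches = [
    shot.get("id")
    for shot in doc.findall(".//Shot[@motionActivity='low']")
    if shot.find("Face[@actor='ActorA']") is not None
    and shot.findtext("SpokenWord") == "hello"
]
print(matches)
```

The same pattern scales to the combined queries the abstract describes (production year, actor identity, spoken word, motion activity), since each is just an additional predicate over the shot description.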

  17. Positive Emotion Facilitates Audiovisual Binding

    PubMed Central

    Kitamura, Miho S.; Watanabe, Katsumi; Kitagawa, Norimichi

    2016-01-01

It has been shown that positive emotions can facilitate integrative and associative information processing in cognitive functions. The present study examined whether emotions in observers can also enhance perceptual integrative processes. We tested a total of 125 participants to reveal the effects of observers' emotional states and traits on multisensory binding between auditory and visual signals. Participants in Experiment 1 observed two identical visual disks moving toward each other, coinciding, and moving away, presented with a brief sound. We found that for participants with lower depressive tendency, induced happy moods increased the width of the temporal binding window of the sound-induced bounce percept in the stream/bounce display, while no effect was found for the participants with higher depressive tendency. In contrast, no effect of mood was observed for a simple audiovisual simultaneity discrimination task in Experiment 2. These results provide the first empirical evidence of a dependency of multisensory binding upon emotional states and traits, revealing that positive emotions can facilitate the multisensory binding processes at a perceptual level. PMID:26834585

  18. An investigation of the time course of category congruence and priming distance effects in number classification tasks.

    PubMed

    Perry, Jason R; Lupker, Stephen J

    2012-09-01

    The issue investigated in the present research is the nature of the information that is responsible for producing masked priming effects (e.g., semantic information or stimulus-response [S-R] associations) when responding to number stimuli. This issue was addressed by assessing both the magnitude of the category congruence (priming) effect and the nature of the priming distance effect across trials using single-digit primes and targets. Participants made either magnitude (i.e., whether the number presented was larger or smaller than 5) or identification (i.e., press the left button if the number was either a 1, 2, 3, or 4 or the right button if the number was either a 6, 7, 8, or 9) judgments. The results indicated that, regardless of task instruction, there was a clear priming distance effect and a significantly increasing category congruence effect. These results indicated that both semantic activation and S-R associations play important roles in producing masked priming effects.

  19. Lip movements affect infants' audiovisual speech perception.

    PubMed

    Yeung, H Henny; Werker, Janet F

    2013-05-01

    Speech is robustly audiovisual from early in infancy. Here we show that audiovisual speech perception in 4.5-month-old infants is influenced by sensorimotor information related to the lip movements they make while chewing or sucking. Experiment 1 consisted of a classic audiovisual matching procedure, in which two simultaneously displayed talking faces (visual [i] and [u]) were presented with a synchronous vowel sound (audio /i/ or /u/). Infants' looking patterns were selectively biased away from the audiovisual matching face when the infants were producing lip movements similar to those needed to produce the heard vowel. Infants' looking patterns returned to those of a baseline condition (no lip movements, looking longer at the audiovisual matching face) when they were producing lip movements that did not match the heard vowel. Experiment 2 confirmed that these sensorimotor effects interacted with the heard vowel, as looking patterns differed when infants produced these same lip movements while seeing and hearing a talking face producing an unrelated vowel (audio /a/). These findings suggest that the development of speech perception and speech production may be mutually informative.

  20. The Action-Sentence Compatibility Effect in ASL: the role of semantics vs. perception*

    PubMed Central

    SECORA, KRISTEN; EMMOREY, KAREN

    2015-01-01

    Embodied theories of cognition propose that humans use sensorimotor systems in processing language. The Action-Sentence Compatibility Effect (ACE) refers to the finding that motor responses are facilitated after comprehending sentences that imply movement in the same direction. In sign languages there is a potential conflict between sensorimotor systems and linguistic semantics: movement away from the signer is perceived as motion toward the comprehender. We examined whether perceptual processing of sign movement or verb semantics modulate the ACE. Deaf ASL signers performed a semantic judgment task while viewing signed sentences expressing toward or away motion. We found a significant congruency effect relative to the verb’s semantics rather than to the perceived motion. This result indicates that (a) the motor system is involved in the comprehension of a visual–manual language, and (b) motor simulations for sign language are modulated by verb semantics rather than by the perceived visual motion of the hands. PMID:26052352

  1. 29 CFR 2.12 - Audiovisual coverage permitted.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 1 2010-07-01 2010-07-01 true Audiovisual coverage permitted. 2.12 Section 2.12 Labor Office of the Secretary of Labor GENERAL REGULATIONS Audiovisual Coverage of Administrative Hearings § 2.12 Audiovisual coverage permitted. The following are the types of hearings where the...

  2. Govt. Pubs: U.S. Government Produced Audiovisual Materials.

    ERIC Educational Resources Information Center

    Korman, Richard

    1981-01-01

    Describes the availability of United States government-produced audiovisual materials and discusses two audiovisual clearinghouses--the National Audiovisual Center (NAC) and the National Library of Medicine (NLM). Finding aids made available by NAC, NLM, and other government agencies are mentioned. NAC and the U.S. Government Printing Office…

  3. Focusing of geodesic congruences in an accelerated expanding Universe

    SciTech Connect

    Albareti, F.D.; Cembranos, J.A.R.; Cruz-Dombriz, A. de la E-mail: cembra@fis.ucm.es

    2012-12-01

    We study the accelerated expansion of the Universe through its consequences on a congruence of geodesics. We make use of the Raychaudhuri equation which describes the evolution of the expansion rate for a congruence of timelike or null geodesics. In particular, we focus on the space-time geometry contribution to this equation. By straightforward calculation from the metric of a Robertson-Walker cosmological model, it follows that in an accelerated expanding Universe the space-time contribution to the Raychaudhuri equation is positive for the fundamental congruence, favoring a non-focusing of the congruence of geodesics. However, the accelerated expansion of the present Universe does not imply a tendency of the fundamental congruence to diverge. It is shown that this is in fact the case for certain congruences of timelike geodesics without vorticity. Therefore, the focusing of geodesics remains feasible in an accelerated expanding Universe. Furthermore, a negative contribution to the Raychaudhuri equation from space-time geometry which is usually interpreted as the manifestation of the attractive character of gravity is restored in an accelerated expanding Robertson-Walker space-time at high speeds.
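For reference, the Raychaudhuri equation for a timelike geodesic congruence discussed in this abstract can be written in its standard form (sign conventions vary between texts):

```latex
\frac{d\theta}{d\tau} = -\frac{1}{3}\theta^{2}
  - \sigma_{ab}\sigma^{ab}
  + \omega_{ab}\omega^{ab}
  - R_{ab}u^{a}u^{b}
```

Here \(\theta\) is the expansion of the congruence, \(\sigma_{ab}\) the shear, \(\omega_{ab}\) the vorticity, \(R_{ab}\) the Ricci tensor, and \(u^{a}\) the tangent vector field. The \(-R_{ab}u^{a}u^{b}\) term is the space-time geometry contribution the abstract refers to: it is this term that becomes positive for the fundamental congruence in an accelerated expanding Robertson-Walker model, disfavoring focusing.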

  4. The semantic basis of taste-shape associations.

    PubMed

    Velasco, Carlos; Woods, Andy T; Marks, Lawrence E; Cheok, Adrian David; Spence, Charles

    2016-01-01

    Previous research shows that people systematically match tastes with shapes. Here, we assess the extent to which matched taste and shape stimuli share a common semantic space and whether semantically congruent versus incongruent taste/shape associations can influence the speed with which people respond to both shapes and taste words. In Experiment 1, semantic differentiation was used to assess the semantic space of both taste words and shapes. The results suggest a common semantic space containing two principal components (seemingly, intensity and hedonics) and two principal clusters, one including round shapes and the taste word "sweet," and the other including angular shapes and the taste words "salty," "sour," and "bitter." The former cluster appears more positively-valenced whilst less potent than the latter. In Experiment 2, two speeded classification tasks assessed whether congruent versus incongruent mappings of stimuli and responses (e.g., sweet with round versus sweet with angular) would influence the speed of participants' responding, to both shapes and taste words. The results revealed an overall effect of congruence with congruent trials yielding faster responses than their incongruent counterparts. These results are consistent with previous evidence suggesting a close relation (or crossmodal correspondence) between tastes and shape curvature that may derive from common semantic coding, perhaps along the intensity and hedonic dimensions. PMID:26966646

  5. The relation between body semantics and spatial body representations.

    PubMed

    van Elk, Michiel; Blanke, Olaf

    2011-11-01

    The present study addressed the relation between body semantics (i.e. semantic knowledge about the human body) and spatial body representations, by presenting participants with word pairs, one below the other, referring to body parts. The spatial position of the word pairs could be congruent (e.g. EYE / MOUTH) or incongruent (MOUTH / EYE) with respect to the spatial position of the words' referents. In addition, the spatial distance between the words' referents was varied, resulting in word pairs referring to body parts that are close (e.g. EYE / MOUTH) or far in space (e.g. EYE / FOOT). A spatial congruency effect was observed when subjects made an iconicity judgment (Experiments 2 and 3) but not when making a semantic relatedness judgment (Experiment 1). In addition, when making a semantic relatedness judgment (Experiment 1) reaction times increased with increased distance between the body parts but when making an iconicity judgment (Experiments 2 and 3) reaction times decreased with increased distance. These findings suggest that the processing of body-semantics results in the activation of a detailed visuo-spatial body representation that is modulated by the specific task requirements. We discuss these new data with respect to theories of embodied cognition and body semantics.

  6. The semantic basis of taste-shape associations

    PubMed Central

    Woods, Andy T.; Marks, Lawrence E.; Cheok, Adrian David; Spence, Charles

    2016-01-01

    Previous research shows that people systematically match tastes with shapes. Here, we assess the extent to which matched taste and shape stimuli share a common semantic space and whether semantically congruent versus incongruent taste/shape associations can influence the speed with which people respond to both shapes and taste words. In Experiment 1, semantic differentiation was used to assess the semantic space of both taste words and shapes. The results suggest a common semantic space containing two principal components (seemingly, intensity and hedonics) and two principal clusters, one including round shapes and the taste word “sweet,” and the other including angular shapes and the taste words “salty,” “sour,” and “bitter.” The former cluster appears more positively-valenced whilst less potent than the latter. In Experiment 2, two speeded classification tasks assessed whether congruent versus incongruent mappings of stimuli and responses (e.g., sweet with round versus sweet with angular) would influence the speed of participants’ responding, to both shapes and taste words. The results revealed an overall effect of congruence with congruent trials yielding faster responses than their incongruent counterparts. These results are consistent with previous evidence suggesting a close relation (or crossmodal correspondence) between tastes and shape curvature that may derive from common semantic coding, perhaps along the intensity and hedonic dimensions. PMID:26966646

  7. Higher Language Ability is Related to Angular Gyrus Activation Increase During Semantic Processing, Independent of Sentence Incongruency.

    PubMed

    Van Ettinger-Veenstra, Helene; McAllister, Anita; Lundberg, Peter; Karlsson, Thomas; Engström, Maria

    2016-01-01

    This study investigates the relation between individual language ability and neural semantic processing abilities. Our aim was to explore whether high-level language ability would correlate to decreased activation in language-specific regions or rather increased activation in supporting language regions during processing of sentences. Moreover, we were interested if observed neural activation patterns are modulated by semantic incongruency similarly to previously observed changes upon syntactic congruency modulation. We investigated 27 healthy adults with a sentence reading task-which tapped language comprehension and inference, and modulated sentence congruency-employing functional magnetic resonance imaging (fMRI). We assessed the relation between neural activation, congruency modulation, and test performance on a high-level language ability assessment with multiple regression analysis. Our results showed increased activation in the left-hemispheric angular gyrus extending to the temporal lobe related to high language ability. This effect was independent of semantic congruency, and no significant relation between language ability and incongruency modulation was observed. Furthermore, there was a significant increase of activation in the inferior frontal gyrus (IFG) bilaterally when the sentences were incongruent, indicating that processing incongruent sentences was more demanding than processing congruent sentences and required increased activation in language regions. The correlation of high-level language ability with increased rather than decreased activation in the left angular gyrus, a region specific for language processing, is opposed to what the neural efficiency hypothesis would predict. We can conclude that no evidence is found for an interaction between semantic congruency related brain activation and high-level language performance, even though the semantic incongruent condition shows to be more demanding and evoking more neural activation. PMID

  8. Development of compositional and contextual communicable congruence in robots by using dynamic neural network models.

    PubMed

    Park, Gibeom; Tani, Jun

    2015-12-01

    The current study presents neurorobotics experiments on the acquisition, via learning, of skills for "communicable congruence" with humans. A dynamic neural network model characterized by multiple-timescale dynamics (MTRNN) was utilized as a neuromorphic model for controlling a humanoid robot. In the experimental task, the robot was trained to respond to various sequences of imperative gesture patterns demonstrated by human subjects by generating specific sequential movement patterns that followed predefined compositional semantic rules. The experimental results showed that (1) the adopted MTRNN can achieve generalization by learning at the lower, feature-perception level from a limited set of tutoring patterns, (2) the MTRNN can learn to extract compositional semantic rules, with generalization, at its higher level characterized by slow timescale dynamics, and (3) the MTRNN can develop a further cognitive capability for controlling internal contextual processes as situated in ongoing task sequences, without being given cues that explicitly indicate task segmentation points. Analysis of the dynamics developed in the MTRNN through learning indicated that these cognitive mechanisms were achieved by self-organization of an adequate functional hierarchy, exploiting the constraint of the multiple-timescale property and the topological connectivity imposed on the network configuration. These results could contribute to the development of socially intelligent robots endowed with cognitive communicative competency similar to that of humans.

  9. Measuring Stratigraphic Congruence Across Trees, Higher Taxa, and Time.

    PubMed

    O'Connor, Anne; Wills, Matthew A

    2016-09-01

    The congruence between the order of cladistic branching and the first appearance dates of fossil lineages can be quantified using a variety of indices. Good matching is a prerequisite for the accurate time calibration of trees, while the distribution of congruence indices across large samples of cladograms has underpinned claims about temporal and taxonomic patterns of completeness in the fossil record. The most widely used stratigraphic congruence indices are the stratigraphic consistency index (SCI), the modified Manhattan stratigraphic measure (MSM*), and the gap excess ratio (GER) (plus its derivatives: the topological GER and the modified GER). Many factors are believed to variously bias these indices, with several empirical and simulation studies addressing some subset of the putative interactions. This study combines both approaches to quantify the effects (on all five indices) of eight variables reasoned to constrain the distribution of possible values (the number of taxa, tree balance, tree resolution, range of first occurrence (FO) dates, center of gravity of FO dates, the variability of FO dates, percentage of extant taxa, and percentage of taxa with no fossil record). Our empirical data set comprised 647 published animal and plant cladograms spanning the entire Phanerozoic, and for these data we also modeled the effects of mean age of FOs (as a proxy for clade age), the taxonomic rank of the clade, and the higher taxonomic group to which it belonged. The center of gravity of FO dates had not been investigated hitherto, and this was found to correlate most strongly with some measures of stratigraphic congruence in our empirical study (top-heavy clades had better congruence). The modified GER was the index least susceptible to bias. We found significant differences across higher taxa for all indices; arthropods had lower congruence and tetrapods higher congruence. Stratigraphic congruence, however measured, also varied throughout the Phanerozoic, reflecting…
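    For readers unfamiliar with the indices named in this abstract, the gap excess ratio can be computed from three gap totals: the minimum implied gap (ghost range) of the tree under study, and the smallest and largest gap totals achievable for the same first-occurrence dates over all possible topologies. The sketch below follows one common formulation of the GER (Wills, 1999); the function name and example numbers are illustrative, and the paper's modified and topological variants differ from this basic form.

    ```python
    def gap_excess_ratio(mig, g_min, g_max):
        """Gap excess ratio: 1.0 when the tree implies the fewest
        possible ghost ranges (mig == g_min), 0.0 when it implies
        the most (mig == g_max)."""
        if g_max == g_min:
            raise ValueError("all topologies imply the same gap total")
        return 1.0 - (mig - g_min) / (g_max - g_min)

    # A tree whose implied gap total (5 Myr) sits between the best (2 Myr)
    # and worst (10 Myr) achievable for these first-occurrence dates:
    print(gap_excess_ratio(5, 2, 10))  # 0.625
    ```

    Values near 1 thus indicate good stratigraphic congruence for a given set of first-occurrence dates, values near 0 poor congruence.
    
    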

  10. Emotional speech processing at the intersection of prosody and semantics.

    PubMed

    Schwartz, Rachel; Pell, Marc D

    2012-01-01

    The ability to accurately perceive emotions is crucial for effective social interaction. Many questions remain regarding how different sources of emotional cues in speech (e.g., prosody, semantic information) are processed during emotional communication. Using a cross-modal emotional priming paradigm (Facial affect decision task), we compared the relative contributions of processing utterances with single-channel (prosody-only) versus multi-channel (prosody and semantic) cues on the perception of happy, sad, and angry emotional expressions. Our data show that emotional speech cues produce robust congruency effects on decisions about an emotionally related face target, although no processing advantage occurred when prime stimuli contained multi-channel as opposed to single-channel speech cues. Our data suggest that utterances with prosodic cues alone and utterances with combined prosody and semantic cues both activate knowledge that leads to emotional congruency (priming) effects, but that the convergence of these two information sources does not always heighten access to this knowledge during emotional speech processing. PMID:23118868

  11. Co-speech gestures influence neural activity in brain regions associated with processing semantic information.

    PubMed

    Dick, Anthony Steven; Goldin-Meadow, Susan; Hasson, Uri; Skipper, Jeremy I; Small, Steven L

    2009-11-01

    Everyday communication is accompanied by visual information from several sources, including co-speech gestures, which provide semantic information listeners use to help disambiguate the speaker's message. Using fMRI, we examined how gestures influence neural activity in brain regions associated with processing semantic information. The BOLD response was recorded while participants listened to stories under three audiovisual conditions and one auditory-only (speech alone) condition. In the first audiovisual condition, the storyteller produced gestures that naturally accompany speech. In the second, the storyteller made semantically unrelated hand movements. In the third, the storyteller kept her hands still. In addition to inferior parietal and posterior superior and middle temporal regions, bilateral posterior superior temporal sulcus and left anterior inferior frontal gyrus responded more strongly to speech when it was further accompanied by gesture, regardless of the semantic relation to speech. However, the right inferior frontal gyrus was sensitive to the semantic import of the hand movements, demonstrating more activity when hand movements were semantically unrelated to the accompanying speech. These findings show that perceiving hand movements during speech modulates the distributed pattern of neural activation involved in both biological motion perception and discourse comprehension, suggesting listeners attempt to find meaning, not only in the words speakers produce, but also in the hand movements that accompany speech. PMID:19384890

  12. Planning and Producing Audiovisual Materials. Third Edition.

    ERIC Educational Resources Information Center

    Kemp, Jerrold E.

    A revised edition of this handbook provides illustrated, step-by-step explanations of how to plan and produce audiovisual materials. Included are sections on the fundamental skills--photography, graphics and recording sound--followed by individual sections on photographic print series, slide series, filmstrips, tape recordings, overhead…

  13. Audiovisual Prosody and Feeling of Knowing

    ERIC Educational Resources Information Center

    Swerts, M.; Krahmer, E.

    2005-01-01

    This paper describes two experiments on the role of audiovisual prosody for signalling and detecting meta-cognitive information in question answering. The first study consists of an experiment, in which participants are asked factual questions in a conversational setting, while they are being filmed. Statistical analyses bring to light that the…

  14. Rapid, generalized adaptation to asynchronous audiovisual speech.

    PubMed

    Van der Burg, Erik; Goodbourn, Patrick T

    2015-04-01

    The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity.

  15. Audiovisual Asynchrony Detection in Human Speech

    ERIC Educational Resources Information Center

    Maier, Joost X.; Di Luca, Massimiliano; Noppeney, Uta

    2011-01-01

    Combining information from the visual and auditory senses can greatly enhance intelligibility of natural speech. Integration of audiovisual speech signals is robust even when temporal offsets are present between the component signals. In the present study, we characterized the temporal integration window for speech and nonspeech stimuli with…

  16. Active Methodology in the Audiovisual Communication Degree

    ERIC Educational Resources Information Center

    Gimenez-Lopez, J. L.; Royo, T. Magal; Laborda, Jesus Garcia; Dunai, Larisa

    2010-01-01

    The paper describes the adaptation methods of the active methodologies of the new European higher education area in the new Audiovisual Communication degree under the perspective of subjects related to the area of the interactive communication in Europe. The proposed active methodologies have been experimentally implemented into the new academic…

  17. Health Science Audiovisuals in Online Databases.

    ERIC Educational Resources Information Center

    Van Camp, Ann

    1980-01-01

    Provides descriptions of 14 databases that contain citations to audiovisual instructional materials: AGRICOLA, AVLINE, AVMARC, BIOETHICSLINE, CATLINE, CHILD ABUSE AND NEGLECT, DRUG INFO, ERIC, EXCEPTIONAL CHILD EDUCATION RESOURCES (ECER), LIBCON, NICEM, NICSEM/NIMIS, NIMH, and OCLC. Information for each includes subject content, update frequency,…

  18. Audio-Visual Materials for Chinese Studies.

    ERIC Educational Resources Information Center

    Ching, Eugene, Comp.; Ching, Nora C., Comp.

    This publication is designed for teachers of Chinese language and culture who are interested in using audiovisual materials to supplement classroom instruction. The listings objectively present materials which are available; the compilers have not attempted to evaluate them. Content includes historical studies, techniques of brush painting, myths,…

  19. Rapid, generalized adaptation to asynchronous audiovisual speech

    PubMed Central

    Van der Burg, Erik; Goodbourn, Patrick T.

    2015-01-01

    The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity. PMID:25716790

  20. A Selection of Audiovisual Materials on Disabilities.

    ERIC Educational Resources Information Center

    Mayo, Kathleen; Rider, Sheila

    Disabled persons, family members, organizations, and libraries are often looking for materials to help inform, educate, or challenge them regarding the issues surrounding disabilities. This directory of audiovisual materials available from the State Library of Florida includes materials that present ideas and personal experiences covering a range…

  1. Longevity and Depreciation of Audiovisual Equipment.

    ERIC Educational Resources Information Center

    Post, Richard

    1987-01-01

    Describes results of survey of media service directors at public universities in Ohio to determine the expected longevity of audiovisual equipment. Use of the Delphi technique for estimates is explained, results are compared with an earlier survey done in 1977, and use of spreadsheet software to calculate depreciation is discussed. (LRW)

  2. Audiovisual Instruction in Pediatric Pharmacy Practice.

    ERIC Educational Resources Information Center

    Mutchie, Kelly D.; And Others

    1981-01-01

    A pharmacy practice program added to the core baccalaureate curriculum at the University of Utah College of Pharmacy which includes a practice in pediatrics is described. An audiovisual program in pediatric diseases and drug therapy was developed. This program allows the presentation of more material without reducing clerkship time. (Author/MLW)

  3. Audiovisual Facilities in Schools in Japan Today.

    ERIC Educational Resources Information Center

    Ministry of Education, Tokyo (Japan).

    This paper summarizes the findings of a national survey conducted for the Ministry of Education, Science, and Culture in 1986 to determine the kinds of audiovisual equipment available in Japanese schools, together with the rate of diffusion for the various types of equipment, the amount of teacher participation in training for their use, and the…

  4. The Status of Audiovisual Materials in Networking.

    ERIC Educational Resources Information Center

    Coty, Patricia Ann

    1983-01-01

    The role of networks in correcting inadequate bibliographic control for audiovisual materials is discussed, citing efforts of Project Media Base, National Information Center for Educational Media, Consortium of University Film Centers, National Library of Medicine, National Agricultural Library, National Film Board of Canada, and bibliographic…

  5. Reduced audiovisual recalibration in the elderly.

    PubMed

    Chan, Yu Man; Pianta, Michael J; McKendrick, Allison M

    2014-01-01

    Perceived synchrony of visual and auditory signals can be altered by exposure to a stream of temporally offset stimulus pairs. Previous literature suggests that adapting to audiovisual temporal offsets is an important recalibration to correctly combine audiovisual stimuli into a single percept across a range of source distances. Healthy aging results in synchrony perception over a wider range of temporally offset visual and auditory signals, independent of age-related unisensory declines in vision and hearing sensitivities. However, the impact of aging on audiovisual recalibration is unknown. Audiovisual synchrony perception for sound-lead and sound-lag stimuli was measured for 15 younger (22-32 years old) and 15 older (64-74 years old) healthy adults using the method of constant stimuli, after adapting to a stream of visual and auditory pairs. The adaptation pairs were either synchronous or asynchronous (sound-lag of 230 ms). The adaptation effect for each observer was computed as the shift in the mean of the individually fitted psychometric functions after adapting to asynchrony. Post-adaptation to synchrony, the younger and older observers had average window widths (±standard deviation) of 326 (±80) and 448 (±105) ms, respectively. There was no adaptation effect for sound-lead pairs. Both the younger and older observers, however, perceived more sound-lag pairs as synchronous. The magnitude of the adaptation effect in the older observers was not correlated with how often they saw the adapting sound-lag stimuli as asynchronous. Our finding demonstrates that audiovisual synchrony perception adapts less with advancing age.
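    The adaptation effect described in this abstract, the shift in the mean of a fitted psychometric function, can be illustrated with a toy computation. This is a hypothetical sketch, not the authors' analysis code: it assumes a Gaussian-shaped synchrony window over audiovisual offsets, recovers the window centre by grid search, and reports the shift between a baseline and an adapted synthetic observer. All names and numbers are illustrative.

    ```python
    import math

    def p_synchronous(soa, mu, sigma):
        """Gaussian-shaped synchrony window: probability of a
        'synchronous' response at audiovisual offset soa (ms)."""
        return math.exp(-((soa - mu) ** 2) / (2 * sigma ** 2))

    def fit_mu(soas, responses, sigma=150.0):
        """Grid-search the window centre mu minimising squared error."""
        best_mu, best_err = None, float("inf")
        for mu in range(-300, 301, 5):
            err = sum((p_synchronous(x, mu, sigma) - r) ** 2
                      for x, r in zip(soas, responses))
            if err < best_err:
                best_mu, best_err = mu, err
        return best_mu

    # Synthetic observer: baseline window centred at 0 ms; after adapting
    # to a sound-lag stream, the window centre shifts to +40 ms.
    soas = list(range(-300, 301, 50))
    baseline = [p_synchronous(x, 0, 150) for x in soas]
    adapted = [p_synchronous(x, 40, 150) for x in soas]

    shift = fit_mu(soas, adapted) - fit_mu(soas, baseline)
    print(shift)  # recovered adaptation effect: 40 (ms)
    ```

    Real studies fit a parametric psychometric function (e.g. by maximum likelihood) to binary responses rather than grid-searching noiseless curves, but the quantity of interest, the post-adaptation shift in the fitted mean, is the same.
    
    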

  6. Reduced audiovisual recalibration in the elderly

    PubMed Central

    Chan, Yu Man; Pianta, Michael J.; McKendrick, Allison M.

    2014-01-01

    Perceived synchrony of visual and auditory signals can be altered by exposure to a stream of temporally offset stimulus pairs. Previous literature suggests that adapting to audiovisual temporal offsets is an important recalibration to correctly combine audiovisual stimuli into a single percept across a range of source distances. Healthy aging results in synchrony perception over a wider range of temporally offset visual and auditory signals, independent of age-related unisensory declines in vision and hearing sensitivities. However, the impact of aging on audiovisual recalibration is unknown. Audiovisual synchrony perception for sound-lead and sound-lag stimuli was measured for 15 younger (22–32 years old) and 15 older (64–74 years old) healthy adults using the method of constant stimuli, after adapting to a stream of visual and auditory pairs. The adaptation pairs were either synchronous or asynchronous (sound-lag of 230 ms). The adaptation effect for each observer was computed as the shift in the mean of the individually fitted psychometric functions after adapting to asynchrony. Post-adaptation to synchrony, the younger and older observers had average window widths (±standard deviation) of 326 (±80) and 448 (±105) ms, respectively. There was no adaptation effect for sound-lead pairs. Both the younger and older observers, however, perceived more sound-lag pairs as synchronous. The magnitude of the adaptation effect in the older observers was not correlated with how often they saw the adapting sound-lag stimuli as asynchronous. Our finding demonstrates that audiovisual synchrony perception adapts less with advancing age. PMID:25221508

  7. Measuring Stratigraphic Congruence Across Trees, Higher Taxa, and Time

    PubMed Central

    O'Connor, Anne; Wills, Matthew A.

    2016-01-01

    The congruence between the order of cladistic branching and the first appearance dates of fossil lineages can be quantified using a variety of indices. Good matching is a prerequisite for the accurate time calibration of trees, while the distribution of congruence indices across large samples of cladograms has underpinned claims about temporal and taxonomic patterns of completeness in the fossil record. The most widely used stratigraphic congruence indices are the stratigraphic consistency index (SCI), the modified Manhattan stratigraphic measure (MSM*), and the gap excess ratio (GER) (plus its derivatives; the topological GER and the modified GER). Many factors are believed to variously bias these indices, with several empirical and simulation studies addressing some subset of the putative interactions. This study combines both approaches to quantify the effects (on all five indices) of eight variables reasoned to constrain the distribution of possible values (the number of taxa, tree balance, tree resolution, range of first occurrence (FO) dates, center of gravity of FO dates, the variability of FO dates, percentage of extant taxa, and percentage of taxa with no fossil record). Our empirical data set comprised 647 published animal and plant cladograms spanning the entire Phanerozoic, and for these data we also modeled the effects of mean age of FOs (as a proxy for clade age), the taxonomic rank of the clade, and the higher taxonomic group to which it belonged. The center of gravity of FO dates had not been investigated hitherto, and this was found to correlate most strongly with some measures of stratigraphic congruence in our empirical study (top-heavy clades had better congruence). The modified GER was the index least susceptible to bias. We found significant differences across higher taxa for all indices; arthropods had lower congruence and tetrapods higher congruence. Stratigraphic congruence—however measured—also varied throughout the Phanerozoic

  8. False recollections and the congruence of suggested information.

    PubMed

    Pérez-Mata, Nieves; Diges, Margarita

    2007-10-01

    In two experiments, congruence of postevent information was manipulated in order to explore its role in the misinformation effect. Congruence of a detail was empirically defined as its compatibility (or match) with a concrete event. Based on this idea it was predicted that a congruent suggested detail would be more easily accepted than an incongruent one. In Experiments 1 and 2, two factors (congruence and truth value) were manipulated within subjects, and a two-alternative forced-choice recognition test was used, followed by phenomenological judgements. Furthermore, in the second experiment participants were asked to describe four critical items (two seen and two suggested details) to explore differences and similarities between real and unreal memories. Both experiments clearly showed that the congruence of false information caused a robust misinformation effect, such that congruent false information was accepted far more often than incongruent false information. Furthermore, congruence increased the descriptive and phenomenological similarities between perceived and suggested memories, thus contributing to the misleading effect. PMID:17891682

  9. Balloons and bavoons versus spikes and shikes: ERPs reveal shared neural processes for shape-sound-meaning congruence in words, and shape-sound congruence in pseudowords.

    PubMed

    Sučević, Jelena; Savić, Andrej M; Popović, Mirjana B; Styles, Suzy J; Ković, Vanja

    2015-01-01

    There is something about the sound of a pseudoword like takete that goes better with a spiky, than a curvy shape (Köhler, 1929/1947). Yet despite decades of research into sound symbolism, the role of this effect on real words in the lexicons of natural languages remains controversial. We report one behavioural and one ERP study investigating whether sound symbolism is active during normal language processing for real words in a speaker's native language, in the same way as for novel word forms. The results indicate that sound-symbolic congruence has a number of influences on natural language processing: Written forms presented in a congruent visual context generate more errors during lexical access, as well as a chain of differences in the ERP. These effects have a very early onset (40-80 ms, 100-160 ms, 280-320 ms) and are later overshadowed by familiar types of semantic processing, indicating that sound symbolism represents an early sensory-co-activation effect.

  10. Syntactic processing in the absence of awareness and semantics.

    PubMed

    Hung, Shao-Min; Hsieh, Po-Jang

    2015-10-01

    The classical view that multistep rule-based operations require consciousness has recently been challenged by findings that both multiword semantic processing and multistep arithmetic equations can be processed unconsciously. It remains unclear, however, whether pure rule-based cognitive processes can occur unconsciously in the absence of semantics. Here, after presenting 2 words consciously, we suppressed the third with continuous flash suppression. First, we showed that the third word in the subject-verb-verb format (syntactically incongruent) broke suppression significantly faster than the third word in the subject-verb-object format (syntactically congruent). Crucially, the same effect was observed even with sentences composed of pseudowords (pseudo subject-verb-adjective vs. pseudo subject-verb-object) without any semantic information. This is the first study to show that syntactic congruency can be processed unconsciously in the complete absence of semantics. Our findings illustrate how abstract rule-based processing (e.g., syntactic categories) can occur in the absence of visual awareness, even when deprived of semantics.

  11. Congruency of gaze metrics in action, imagery and action observation.

    PubMed

    Causer, Joe; McCormick, Sheree A; Holmes, Paul S

    2013-01-01

    The aim of this paper is to provide a review of eye movements during action execution, action observation, and movement imagery. Furthermore, the paper highlights aspects of congruency in gaze metrics between these states. The implications of the imagery, observation, and action gaze congruency are discussed in terms of motor learning and rehabilitation. Future research directions are outlined in order to further the understanding of shared gaze metrics between overt and covert states. Suggestions are made for how researchers and practitioners can structure action observation and movement imagery interventions to maximize (re)learning. PMID:24068996

  12. Organizational effectiveness. Primary care and the congruence model.

    PubMed

    Eiser, A R; Eiser, B J

    1996-10-01

    The congruence model is a framework used to analyze organizational strengths and weaknesses and pinpoint specific areas for improving effectiveness. This article provides an overview of organizations as open systems, with examples from the primary care arena. It explains and applies the congruence model in the context of primary care issues and functions, including methods by which the model can be used to diagnose organizational problems and generate solutions. It also examines the changes needed in primary care due to the managed care environment, as well as areas of potential problems and sensitivities that require organizational change to meet the market and regulatory demands now placed on primary care organizations (PCOs).

  13. Semantics via Machine Translation

    ERIC Educational Resources Information Center

    Culhane, P. T.

    1977-01-01

    Recent experiments in machine translation have given the semantic elements of collocation in Russian more objective criteria. Soviet linguists in search of semantic relationships have attempted to devise a semantic synthesis for construction of a basic language for machine translation. One such effort is summarized. (CHK)

  14. SEMANTICS AND CRITICAL READING.

    ERIC Educational Resources Information Center

    FLANIGAN, MICHAEL C.

    Proficiency in critical reading can be accelerated by making students aware of various semantic devices that help clarify meanings and purposes. Excerpts from the article "Teen-Age Corruption" from the ninth-grade semantics unit written by the Project English Demonstration Center at Euclid, Ohio, are used to illustrate how semantics relate to…

  15. Audio-visual affective expression recognition

    NASA Astrophysics Data System (ADS)

    Huang, Thomas S.; Zeng, Zhihong

    2007-11-01

    Automatic affective expression recognition has attracted increasing attention from researchers across disciplines; it stands to contribute to a new paradigm for human-computer interaction (affect-sensitive interfaces, socially intelligent environments) and to advance research in affect-related fields including psychology, psychiatry, and education. Multimodal information integration is a process that enables humans to assess affective states robustly and flexibly. In order to understand the richness and subtlety of human emotional behavior, the computer should be able to integrate information from multiple sensors. We introduce in this paper our efforts toward machine understanding of audio-visual affective behavior, based on both deliberate and spontaneous displays. Some promising methods are presented to integrate information from both audio and visual modalities. Our experiments show the advantage of audio-visual fusion in affective expression recognition over audio-only or visual-only approaches.

  16. Effects of Worker Classification, Crystallization, and Job Autonomy on Congruence-Satisfaction Relationships.

    ERIC Educational Resources Information Center

    Obermesik, John W.; Beehr, Terry A.

    A majority of the congruence-satisfaction literature has used interest measures based on Holland's theory, although the measures' accuracy in predicting job satisfaction is questionable. Divergent findings among studies on occupational congruence-job satisfaction may be due to ineffective measures of congruence and job satisfaction and lack of…

  17. Further Insight and Additional Inference Methods for Polynomial Regression Applied to the Analysis of Congruence

    ERIC Educational Resources Information Center

    Cohen, Ayala; Nahum-Shani, Inbal; Doveh, Etti

    2010-01-01

    In their seminal paper, Edwards and Parry (1993) presented the polynomial regression as a better alternative to applying difference score in the study of congruence. Although this method is increasingly applied in congruence research, its complexity relative to other methods for assessing congruence (e.g., difference score methods) was one of the…
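
    For context, the polynomial regression that Edwards and Parry (1993) proposed regresses the outcome Z on both congruence components and their quadratic terms, Z = b0 + b1·X + b2·Y + b3·X² + b4·XY + b5·Y², rather than on the difference score X − Y. The sketch below is our own self-contained illustration with simulated congruence data (all names and values invented), not an implementation from the article:

```python
# Sketch of the Edwards & Parry (1993) polynomial regression approach:
# regress Z on X, Y, X^2, X*Y, and Y^2 instead of on (X - Y).
# Data and coefficients below are invented for illustration.
import itertools

def design_row(x, y):
    # Quadratic polynomial regression terms: intercept, X, Y, X^2, XY, Y^2.
    return [1.0, x, y, x * x, x * y, y * y]

def solve(A, b):
    """Solve A beta = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    beta = [0.0] * n
    for r in range(n - 1, -1, -1):
        beta[r] = (M[r][n] - sum(M[r][c] * beta[c] for c in range(r + 1, n))) / M[r][r]
    return beta

def fit(xs, ys, zs):
    """Ordinary least squares via the normal equations X'X beta = X'z."""
    X = [design_row(x, y) for x, y in zip(xs, ys)]
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(6)] for i in range(6)]
    Xtz = [sum(row[i] * z for row, z in zip(X, zs)) for i in range(6)]
    return solve(XtX, Xtz)

# Simulate congruence data: the outcome peaks where person (x) matches
# environment (y), i.e. z = 5 - (x - y)^2 = 5 - x^2 + 2xy - y^2.
xs, ys = zip(*itertools.product([-2, -1, 0, 1, 2], repeat=2))
zs = [5 - (x - y) ** 2 for x, y in zip(xs, ys)]

b = fit(xs, ys, zs)
print([round(v, 6) for v in b])  # coefficients close to [5, 0, 0, -1, 2, -1]
```

    Unlike the difference score, the fitted surface makes the shape of the congruence effect (here, a ridge along X = Y) directly testable from the coefficients.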

  18. Epistemological Belief Congruency in Mathematics between Vocational Technology Students and Their Instructors

    ERIC Educational Resources Information Center

    Schommer-Aikins, Marlene; Unruh, Susan; Morphew, Jason

    2015-01-01

    Three questions were addressed in this study. Is there evidence of epistemological belief congruency between students and their instructor? Do students' epistemological beliefs, students' epistemological congruence, or both predict mathematical anxiety? Do students' epistemological beliefs, students' epistemological congruence, or both predict…

  19. Attributes of Quality in Audiovisual Materials for Health Professionals.

    ERIC Educational Resources Information Center

    Suter, Emanuel; Waddell, Wendy H.

    1981-01-01

    Defines attributes of quality in content, instructional design, technical production, and packaging of audiovisual materials used in the education of health professionals. Seven references are listed. (FM)

  20. Diminished sensitivity of audiovisual temporal order in autism spectrum disorder.

    PubMed

    de Boer-Schellekens, Liselotte; Eussen, Mart; Vroomen, Jean

    2013-01-01

    We examined sensitivity of audiovisual temporal order in adolescents with autism spectrum disorder (ASD) using an audiovisual temporal order judgment (TOJ) task. In order to assess domain-specific impairments, the stimuli varied in social complexity from simple flash/beeps to videos of a handclap or a speaking face. Compared to typically-developing controls, individuals with ASD were generally less sensitive in judgments of audiovisual temporal order (larger just noticeable differences, JNDs), but there was no specific impairment with social stimuli. This suggests that people with ASD suffer from a more general impairment in audiovisual temporal processing.
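
    The just noticeable difference (JND) reported above is conventionally read off the psychometric function fitted to temporal order judgments: half the distance between the SOAs at which 25% and 75% of responses are "visual first". The sketch below illustrates that convention with invented numbers and simple linear interpolation; it is not the study's analysis:

```python
# Sketch: estimating a JND from temporal order judgment (TOJ) data.
# SOAs and response proportions are made-up illustrative values.

def interpolate_soa(soas, props, target):
    """Linearly interpolate the SOA at which the proportion of
    'visual first' responses crosses `target`."""
    for (x0, y0), (x1, y1) in zip(zip(soas, props), zip(soas[1:], props[1:])):
        if y0 <= target <= y1:
            return x0 + (target - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("target proportion not spanned by the data")

def jnd_from_toj(soas, props):
    """JND: half the distance between the 25% and 75% points."""
    return (interpolate_soa(soas, props, 0.75)
            - interpolate_soa(soas, props, 0.25)) / 2

# Negative SOA: auditory leads; positive: visual leads (hypothetical data).
soas  = [-200, -100, -50, 0, 50, 100, 200]
props = [0.05, 0.15, 0.30, 0.50, 0.70, 0.85, 0.95]  # p('visual first')

print(round(jnd_from_toj(soas, props), 1))
```

    A shallower psychometric function yields more widely separated 25%/75% points and hence a larger JND, which is the sense in which the ASD group above was "less sensitive".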

  1. Unconscious Congruency Priming from Unpracticed Words Is Modulated by Prime-Target Semantic Relatedness

    ERIC Educational Resources Information Center

    Ortells, Juan J.; Mari-Beffa, Paloma; Plaza-Ayllon, Vanesa

    2013-01-01

    Participants performed a 2-choice categorization task on visible word targets that were preceded by novel (unpracticed) prime words. The prime words were presented for 33 ms and followed either immediately (Experiments 1-3) or after a variable delay (Experiments 1 and 4) by a pattern mask. Both subjective and objective measures of prime visibility…

  2. Exogenous spatial attention decreases audiovisual integration.

    PubMed

    Van der Stoep, N; Van der Stigchel, S; Nijboer, T C W

    2015-02-01

    Multisensory integration (MSI) and spatial attention are both mechanisms through which the processing of sensory information can be facilitated. Studies on the interaction between spatial attention and MSI have mainly focused on the interaction between endogenous spatial attention and MSI. Most of these studies have shown that endogenously attending a multisensory target enhances MSI. It is currently unclear, however, whether and how exogenous spatial attention and MSI interact. In the current study, we investigated the interaction between these two important bottom-up processes in two experiments. In Experiment 1 the target location was task-relevant, and in Experiment 2 the target location was task-irrelevant. Valid or invalid exogenous auditory cues were presented before the onset of unimodal auditory, unimodal visual, and audiovisual targets. We observed reliable cueing effects and multisensory response enhancement in both experiments. To examine whether audiovisual integration was influenced by exogenous spatial attention, the amount of race model violation was compared between exogenously attended and unattended targets. In both Experiment 1 and Experiment 2, a decrease in MSI was observed when audiovisual targets were exogenously attended, compared to when they were not. The interaction between exogenous attention and MSI was less pronounced in Experiment 2. Therefore, our results indicate that exogenous attention diminishes MSI when spatial orienting is relevant. The results are discussed in terms of models of multisensory integration and attention. PMID:25341648
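
    The "race model violation" measure used in this abstract derives from Miller's (1982) race model inequality, which bounds how fast responses to audiovisual targets could be if unimodal processes merely raced each other: P(RT ≤ t | AV) ≤ P(RT ≤ t | A) + P(RT ≤ t | V). A minimal sketch with made-up reaction times and an empirical CDF comparison (not the study's actual analysis pipeline):

```python
# Sketch of Miller's (1982) race model inequality test, the basis of the
# "race model violation" measure. All RT values are illustrative.

def ecdf(rts, t):
    """Empirical cumulative probability P(RT <= t)."""
    return sum(rt <= t for rt in rts) / len(rts)

def race_violation(av, a, v, t):
    """Positive values indicate a violation of the race model bound
    P(RT<=t | AV) <= P(RT<=t | A) + P(RT<=t | V)."""
    return ecdf(av, t) - min(1.0, ecdf(a, t) + ecdf(v, t))

# Hypothetical reaction times (ms) for audiovisual, auditory, visual targets.
av = [230, 240, 250, 260, 270]
a  = [280, 300, 320, 340, 360]
v  = [290, 310, 330, 350, 370]

# At t = 260 ms, 4/5 AV responses have occurred but no unimodal responses,
# so the bound is violated by 0.8 at this quantile.
print(race_violation(av, a, v, 260))
```

    In studies like the one above, the violation is computed across a range of t values, and the amount of violation is compared between attention conditions as the index of multisensory integration.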

  3. Semantic networks of English.

    PubMed

    Miller, G A; Fellbaum, C

    1991-12-01

    Principles of lexical semantics developed in the course of building an on-line lexical database are discussed. The approach is relational rather than componential. The fundamental semantic relation is synonymy, which is required in order to define the lexicalized concepts that words can be used to express. Other semantic relations between these concepts are then described. No single set of semantic relations or organizational structure is adequate for the entire lexicon: nouns, adjectives, and verbs each have their own semantic relations and their own organization determined by the role they must play in the construction of linguistic messages.
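
    The relational design described in this abstract can be illustrated with a toy structure: synonym sets ("synsets") define the lexicalized concepts, and relations such as hypernymy link the synsets rather than the words. This mirrors WordNet's organization in spirit only; the entries below are invented examples, not WordNet data:

```python
# Toy relational lexicon in the style described above: synonymy groups
# words into synsets; hypernymy ("is a kind of") links noun synsets.
# Entries are invented for illustration.

synsets = {
    "canine.n": {"dog", "hound"},
    "feline.n": {"cat"},
    "animal.n": {"animal", "creature"},
}

hypernym = {  # relation between synsets, not between word forms
    "canine.n": "animal.n",
    "feline.n": "animal.n",
}

def synset_of(word):
    return next(s for s, words in synsets.items() if word in words)

def synonymous(w1, w2):
    """Synonymy: two words expressing the same lexicalized concept."""
    return synset_of(w1) == synset_of(w2)

def hypernym_chain(word):
    """Walk the hypernymy relation up from a word's synset."""
    s, chain = synset_of(word), []
    while s in hypernym:
        s = hypernym[s]
        chain.append(s)
    return chain

print(synonymous("dog", "hound"))  # → True
print(hypernym_chain("cat"))       # → ['animal.n']
```

    Keeping relations between synsets rather than word forms is what lets one organization serve nouns while verbs and adjectives get relation types of their own, as the abstract notes.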

  4. Higher Language Ability is Related to Angular Gyrus Activation Increase During Semantic Processing, Independent of Sentence Incongruency

    PubMed Central

    Van Ettinger-Veenstra, Helene; McAllister, Anita; Lundberg, Peter; Karlsson, Thomas; Engström, Maria

    2016-01-01

    This study investigates the relation between individual language ability and neural semantic processing. Our aim was to explore whether high-level language ability correlates with decreased activation in language-specific regions or rather with increased activation in supporting language regions during sentence processing. Moreover, we were interested in whether the observed neural activation patterns are modulated by semantic incongruency in the same way as previously observed changes upon syntactic congruency modulation. We investigated 27 healthy adults using functional magnetic resonance imaging (fMRI) with a sentence reading task that tapped language comprehension and inference and modulated sentence congruency. We assessed the relation between neural activation, congruency modulation, and test performance on a high-level language ability assessment with multiple regression analysis. Our results showed increased activation related to high language ability in the left-hemispheric angular gyrus, extending into the temporal lobe. This effect was independent of semantic congruency, and no significant relation between language ability and incongruency modulation was observed. Furthermore, there was a significant increase of activation in the inferior frontal gyrus (IFG) bilaterally when sentences were incongruent, indicating that processing incongruent sentences was more demanding and required increased activation in language regions. The correlation of high-level language ability with increased rather than decreased activation in the left angular gyrus, a region specific for language processing, is the opposite of what the neural efficiency hypothesis would predict. We conclude that there is no evidence for an interaction between semantic-congruency-related brain activation and high-level language performance, even though the semantically incongruent condition proved more demanding and evoked more neural activation. PMID

  5. Partner choice, relationship satisfaction, and oral contraception: the congruency hypothesis.

    PubMed

    Roberts, S Craig; Little, Anthony C; Burriss, Robert P; Cobey, Kelly D; Klapilová, Kateřina; Havlíček, Jan; Jones, Benedict C; DeBruine, Lisa; Petrie, Marion

    2014-07-01

    Hormonal fluctuation across the menstrual cycle explains temporal variation in women's judgment of the attractiveness of members of the opposite sex. Use of hormonal contraceptives could therefore influence both initial partner choice and, if contraceptive use subsequently changes, intrapair dynamics. Associations between hormonal contraceptive use and relationship satisfaction may thus be best understood by considering whether current use is congruent with use when relationships formed, rather than by considering current use alone. In the study reported here, we tested this congruency hypothesis in a survey of 365 couples. Controlling for potential confounds (including relationship duration, age, parenthood, and income), we found that congruency in current and previous hormonal contraceptive use, but not current use alone, predicted women's sexual satisfaction with their partners. Congruency was not associated with women's nonsexual satisfaction or with the satisfaction of their male partners. Our results provide empirical support for the congruency hypothesis and suggest that women's sexual satisfaction is influenced by changes in partner preference associated with change in hormonal contraceptive use.

  6. Instructor/Student Congruence and the Ratings on Course Evaluations.

    ERIC Educational Resources Information Center

    Purohit, Anal A.; Magoon, A. J.

    The purpose of this study was to determine what relationships exist between course and instructor evaluations and student/instructor preferences regarding classroom instruction. The specific null hypothesis explored was: the congruencies on ratings of the personal preferences of students and the personal preferences of instructors will not be…

  7. Unmasking the dichoptic mask by sound: spatial congruency matters.

    PubMed

    Yang, Yung-Hao; Yeh, Su-Ling

    2014-04-01

    People tend to look toward where a sound occurs; however, the role of spatial congruency between sound and sight in sound's facilitation of visual detection remains controversial. We propose that the role of spatial congruency depends on the reliability of the information provided by the facilitator: if it is relatively unreliable, adding spatially congruent information can help unify the different sensory inputs and compensate for this unreliability. To test this, we examined the influence of sound location on visual detection in a non-temporal task, presumably unfavorable for sound since audition excels at temporal resolution, and predicted that spatial congruency should matter in this situation. We used the continuous flash suppression paradigm, which renders the visual stimuli invisible, to keep the relationship between sound and sight opaque. The sound was on the same depth plane as the visual stimulus (the congruent condition) or on a different plane (the incongruent condition). The target was presented to one eye with its luminance contrast gradually increasing and was continuously masked by flashed Mondrian masks presented to the other eye until it was released from suppression. We found that sound facilitated visual detection (measured by release-from-suppression time) in the spatially congruent condition but not in the spatially incongruent condition. Together with previous findings in the literature, this suggests that both task type and modality determine the reliability of the information for multisensory integration and thus whether spatial congruency is critical. PMID:24449005

  8. On the Homology of Congruence Subgroups and K3(Z)

    PubMed Central

    Lee, Ronnie; Szczarba, R. H.

    1975-01-01

    Let Γ(n;p) be the congruence subgroup of SL(n;Z) of level p. We study the homology and cohomology of Γ(n;p) as modules over SL(n;Fp) and apply our results to obtain an upper bound for the order of K3(Z). PMID:16592224
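
    For readers unfamiliar with the notation, Γ(n;p) is standardly defined as the kernel of reduction modulo p (a textbook definition, not taken from the paper itself):

```latex
\Gamma(n;p) \;=\; \ker\bigl(\mathrm{SL}(n;\mathbb{Z}) \longrightarrow \mathrm{SL}(n;\mathbb{F}_p)\bigr)
            \;=\; \{\, A \in \mathrm{SL}(n;\mathbb{Z}) \;:\; A \equiv I \ (\mathrm{mod}\ p) \,\}.
```

    Because Γ(n;p) is normal in SL(n;Z) with quotient SL(n;F_p), the quotient group acts on its homology and cohomology; this is the module structure over SL(n;F_p) that the abstract refers to.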

  9. Workplace Congruence and Occupational Outcomes among Social Service Workers

    PubMed Central

    Graham, John R.; Shier, Micheal L.; Nicholas, David

    2016-01-01

    Workplace expectations reflect an important consideration in employee experience. A higher prevalence of workplace congruence between worker and employer expectations has been associated with higher levels of productivity and overall workplace satisfaction across multiple occupational groups. Little research has investigated the relationship between workplace congruence and occupational health outcomes among social service workers. This study sought to better understand the extent to which occupational congruence contributes to occupational outcomes by surveying unionised social service workers (n = 674) employed with the Government of Alberta, Canada. Multiple regression analysis shows that greater congruence between workplace and worker expectations around workloads, workplace values and the quality of the work environment significantly: (i) decreases symptoms related to distress and secondary traumatic stress; (ii) decreases intentions to leave; and (iii) increases overall life satisfaction. The findings provide some evidence of areas within the workplace of large government run social welfare programmes that can be better aligned to worker expectations to improve occupational outcomes among social service workers. PMID:27559216

  10. MMPI--2 Code-Type Congruence of Injured Workers

    ERIC Educational Resources Information Center

    Livingston, Ronald B.; Jennings, Earl; Colotla, Victor A.; Reynolds, Cecil R.; Shercliffe, Regan J.

    2006-01-01

    In this study, the authors examined the stability of Minnesota Multiphasic Personality Inventory--2 (J. N. Butcher, W. G. Dahlstrom, J. R. Graham, A. Tellegen, & B. Kaemmer, 1989) code types in a sample of 94 injured workers with a mean test-retest interval of 21.3 months (SD = 14.1). Congruence rates for undefined code types were 34% for…

  11. Toward a Theory of Psychological Type Congruence for Advertisers.

    ERIC Educational Resources Information Center

    McBride, Michael H.; And Others

    Focusing on the impact of advertisers' persuasive selling messages on consumers, this paper discusses topics relating to the theory of psychological type congruence. Based on an examination of persuasion theory and relevant psychological concepts, including recent cognitive stability and personality and needs theory and the older concept of…

  12. Effects of Congruence between Counselor Interpretations and Client Beliefs.

    ERIC Educational Resources Information Center

    Claiborn, Charles D.; And Others

    1981-01-01

    In a study of undergraduates with procrastination problems, clients in the congruence conditions showed greater expectation and tendency toward change than those in the discrepancy conditions. A stronger effect, however, was due to the interpretations alone, which substantially changed clients' beliefs about the cause and controllability of their…

  13. Person-Environment Congruence in Residences for the Elderly.

    ERIC Educational Resources Information Center

    Kiyak, Havva Asuman

    As the population of older Americans continues to increase, more and more elderly persons will seek diverse living arrangements. Residential facilities must be designed to meet their needs. Person-environment congruence may be an important determinant of residential satisfaction and relocation stress for the elderly. Residents (N=107) of eight…

  14. Solutions to some congruence equations via suborbital graphs.

    PubMed

    Güler, Bahadır Özgür; Kör, Tuncay; Şanlı, Zeynep

    2016-01-01

    We describe the connection between the sizes of circuits in the suborbital graph for the normalizer of [Formula: see text] in PSL(2,[Formula: see text]) and the congruence equations arising from the related group action. We give a number-theoretic result stating that all prime divisors of [Formula: see text] for any integer u must be congruent to [Formula: see text]. PMID:27563522

  15. Congruence between Disabled Elders and Their Primary Caregivers

    ERIC Educational Resources Information Center

    Horowitz, Amy; Goodman, Caryn R.; Reinhardt, Joann P.

    2004-01-01

    Purpose: This study examines the extent and independent correlates of congruence between disabled elders and their caregivers on several aspects of the caregiving experience. Design and Methods: Participants were 117 visually impaired elders and their caregivers. Correlational analyses, kappa statistics, and paired t tests were used to examine the…

  16. Improving Students' Attitudes toward Science Using Instructional Congruence

    ERIC Educational Resources Information Center

    Zain, Ahmad Nurulazam Md; Samsudin, Mohd Ali; Rohandi, Robertus; Jusoh, Azman

    2010-01-01

    The objective of this study was to improve students' attitudes toward science using instructional congruence. The study was conducted in Malaysia, in three low-performing secondary schools in the state of Penang. Data collected with an Attitudes in Science instrument were analysed using Rasch modeling. Qualitative data based on the reflections of…

  17. Biomedical semantics in the Semantic Web

    PubMed Central

    2011-01-01

    The Semantic Web offers an ideal platform for representing and linking biomedical information, which is a prerequisite for the development and application of analytical tools to address problems in data-intensive areas such as systems biology and translational medicine. As for any new paradigm, the adoption of the Semantic Web offers opportunities and poses questions and challenges to the life sciences scientific community: which technologies in the Semantic Web stack will be more beneficial for the life sciences? Is biomedical information too complex to benefit from simple interlinked representations? What are the implications of adopting a new paradigm for knowledge representation? What are the incentives for the adoption of the Semantic Web, and who are the facilitators? Is there going to be a Semantic Web revolution in the life sciences? We report here a few reflections on these questions, following discussions at the SWAT4LS (Semantic Web Applications and Tools for Life Sciences) workshop series, of which this Journal of Biomedical Semantics special issue presents selected papers from the 2009 edition, held in Amsterdam on November 20th. PMID:21388570

  18. Biomedical semantics in the Semantic Web.

    PubMed

    Splendiani, Andrea; Burger, Albert; Paschke, Adrian; Romano, Paolo; Marshall, M Scott

    2011-03-07

    The Semantic Web offers an ideal platform for representing and linking biomedical information, which is a prerequisite for the development and application of analytical tools to address problems in data-intensive areas such as systems biology and translational medicine. As for any new paradigm, the adoption of the Semantic Web offers opportunities and poses questions and challenges to the life sciences scientific community: which technologies in the Semantic Web stack will be more beneficial for the life sciences? Is biomedical information too complex to benefit from simple interlinked representations? What are the implications of adopting a new paradigm for knowledge representation? What are the incentives for the adoption of the Semantic Web, and who are the facilitators? Is there going to be a Semantic Web revolution in the life sciences? We report here a few reflections on these questions, following discussions at the SWAT4LS (Semantic Web Applications and Tools for Life Sciences) workshop series, of which this Journal of Biomedical Semantics special issue presents selected papers from the 2009 edition, held in Amsterdam on November 20th.

  19. Application and Operation of Audiovisual Equipment in Education.

    ERIC Educational Resources Information Center

    Pula, Fred John

    Interest in audiovisual aids in education has been increased by the shortage of classrooms and good teachers and by the modern predisposition toward learning by visual concepts. Effective utilization of audiovisual materials and equipment depends most importantly, on adequate preparation of the teacher in operating equipment and in coordinating…

  20. Infant Perception of Audio-Visual Speech Synchrony

    ERIC Educational Resources Information Center

    Lewkowicz, David J.

    2010-01-01

    Three experiments investigated perception of audio-visual (A-V) speech synchrony in 4- to 10-month-old infants. Experiments 1 and 2 used a convergent-operations approach by habituating infants to an audiovisually synchronous syllable (Experiment 1) and then testing for detection of increasing degrees of A-V asynchrony (366, 500, and 666 ms) or by…

  1. Audiovisual Integration in High Functioning Adults with Autism

    ERIC Educational Resources Information Center

    Keane, Brian P.; Rosenthal, Orna; Chun, Nicole H.; Shams, Ladan

    2010-01-01

    Autism involves various perceptual benefits and deficits, but it is unclear if the disorder also involves anomalous audiovisual integration. To address this issue, we compared the performance of high-functioning adults with autism and matched controls on experiments investigating the audiovisual integration of speech, spatiotemporal relations, and…

  2. Perception of Intersensory Synchrony in Audiovisual Speech: Not that Special

    ERIC Educational Resources Information Center

    Vroomen, Jean; Stekelenburg, Jeroen J.

    2011-01-01

    Perception of intersensory temporal order is particularly difficult for (continuous) audiovisual speech, as perceivers may find it difficult to notice substantial timing differences between speech sounds and lip movements. Here we tested whether this occurs because audiovisual speech is strongly paired ("unity assumption"). Participants made…

  3. Visual anticipatory information modulates multisensory interactions of artificial audiovisual stimuli.

    PubMed

    Vroomen, Jean; Stekelenburg, Jeroen J

    2010-07-01

    The neural activity of speech sound processing (the N1 component of the auditory ERP) can be suppressed if a speech sound is accompanied by concordant lip movements. Here we demonstrate that this audiovisual interaction is neither speech specific nor linked to humanlike actions but can be observed with artificial stimuli if their timing is made predictable. In Experiment 1, a pure tone synchronized with a deformation of a rectangle induced a smaller auditory N1 than auditory-only presentations if the temporal occurrence of this audiovisual event was made predictable by two moving disks that touched the rectangle. Local autoregressive average source estimation indicated that this audiovisual interaction may be related to integrative processing in auditory areas. When the moving disks did not precede the audiovisual stimulus--making the onset unpredictable--there was no N1 reduction. In Experiment 2, the predictability of the leading visual signal was manipulated by introducing a temporal asynchrony between the audiovisual event and the collision of moving disks. Audiovisual events occurred either at the moment, before (too "early"), or after (too "late") the disks collided on the rectangle. When asynchronies varied from trial to trial--rendering the moving disks unreliable temporal predictors of the audiovisual event--the N1 reduction was abolished. These results demonstrate that the N1 suppression is induced by visual information that both precedes and reliably predicts audiovisual onset, without a necessary link to human action-related neural mechanisms.

  4. Audiovisual Processing in Children with and without Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Mongillo, Elizabeth A.; Irwin, Julia R.; Whalen, D. H.; Klaiman, Cheryl; Carter, Alice S.; Schultz, Robert T.

    2008-01-01

    Fifteen children with autism spectrum disorders (ASD) and twenty-one children without ASD completed six perceptual tasks designed to characterize the nature of the audiovisual processing difficulties experienced by children with ASD. Children with ASD scored significantly lower than children without ASD on audiovisual tasks involving human faces…

  5. Knowledge Generated by Audiovisual Narrative Action Research Loops

    ERIC Educational Resources Information Center

    Bautista Garcia-Vera, Antonio

    2012-01-01

    We present data collected from the research project funded by the Ministry of Education and Science of Spain entitled "Audiovisual Narratives and Intercultural Relations in Education." One of the aims of the research was to determine the nature of thought processes occurring during audiovisual narratives. We studied the possibility of getting to…

  6. Trigger Videos on the Web: Impact of Audiovisual Design

    ERIC Educational Resources Information Center

    Verleur, Ria; Heuvelman, Ard; Verhagen, Plon W.

    2011-01-01

    Audiovisual design might impact emotional responses, as studies from the 1970s and 1980s on movie and television content show. Given today's abundant presence of web-based videos, this study investigates whether audiovisual design will impact web-video content in a similar way. The study is motivated by the potential influence of video-evoked…

  7. Audiovisual Matching in Speech and Nonspeech Sounds: A Neurodynamical Model

    ERIC Educational Resources Information Center

    Loh, Marco; Schmid, Gabriele; Deco, Gustavo; Ziegler, Wolfram

    2010-01-01

    Audiovisual speech perception provides an opportunity to investigate the mechanisms underlying multimodal processing. By using nonspeech stimuli, it is possible to investigate the degree to which audiovisual processing is specific to the speech domain. It has been shown in a match-to-sample design that matching across modalities is more difficult…

  8. Development of Sensitivity to Audiovisual Temporal Asynchrony during Midchildhood

    ERIC Educational Resources Information Center

    Kaganovich, Natalya

    2016-01-01

    Temporal proximity is one of the key factors determining whether events in different modalities are integrated into a unified percept. Sensitivity to audiovisual temporal asynchrony has been studied in adults in great detail. However, how such sensitivity matures during childhood is poorly understood. We examined perception of audiovisual temporal…

  9. Use of Audiovisual Texts in University Education Process

    ERIC Educational Resources Information Center

    Aleksandrov, Evgeniy P.

    2014-01-01

    Audio-visual learning technologies offer great opportunities for the development of students' analytical and projective abilities. These technologies can be used in classroom activities and for homework. This article discusses the use of audiovisual media texts in a series of social sciences and humanities courses in the university curriculum.

  10. Directory of Head Start Audiovisual Professional Training Materials.

    ERIC Educational Resources Information Center

    Wilds, Thomas, Comp.

    The directory contains over 265 annotated listings of audiovisual professional training materials related to the education and care of preschool handicapped children. Noted in the introduction are sources of the contents, such as lists of audiovisual materials disseminated by a hearing/speech center, and instructions for use of the directory.…

  11. Principles of Managing Audiovisual Materials and Equipment. Second Revised Edition.

    ERIC Educational Resources Information Center

    California Univ., Los Angeles. Biomedical Library.

    This manual offers information on a wide variety of health-related audiovisual materials (AVs) in many formats: video, motion picture, slide, filmstrip, audiocassette, transparencies, microfilm, and computer assisted instruction. Intended for individuals who are just learning about audiovisual materials and equipment management, the manual covers…

  12. Semantic Networks and Social Networks

    ERIC Educational Resources Information Center

    Downes, Stephen

    2005-01-01

    Purpose: To illustrate the need for social network metadata within semantic metadata. Design/methodology/approach: Surveys properties of social networks and the semantic web, suggests that social network analysis applies to semantic content, argues that semantic content is more searchable if social network metadata is merged with semantic web…

  13. Evaluation of differences in quality of experience features for test stimuli of good-only and bad-only overall audiovisual quality

    NASA Astrophysics Data System (ADS)

    Strohmeier, Dominik; Kunze, Kristina; Göbel, Klemens; Liebetrau, Judith

    2013-01-01

Assessing audiovisual Quality of Experience (QoE) is a key element to ensure quality acceptance of today's multimedia products. The use of descriptive evaluation methods allows evaluating QoE preferences and the underlying QoE features jointly. From our previous evaluations on QoE for mobile 3D video we found that mainly one dimension, video quality, dominates the descriptive models. Large variations of the visual video quality in the tests may be the reason for these findings. A new study was conducted to investigate whether test sets of low QoE are described differently than those of high audiovisual QoE. Reanalysis of previous data sets seems to confirm this hypothesis. Our new study consists of a pre-test and a main test, using the Descriptive Sorted Napping method. Data sets of good-only and bad-only video quality were evaluated separately. The results show that the perception of bad QoE is mainly determined one-dimensionally by visual artifacts, whereas the perception of good quality shows multiple dimensions. Here, mainly semantic-related features of the content and affective descriptors are used by the naïve test participants. The results show that, with increasing QoE of audiovisual systems, content semantics and users' affective involvement will become important for assessing QoE differences.

  14. Vicarious Audiovisual Learning in Perfusion Education

    PubMed Central

    Rath, Thomas E.; Holt, David W.

    2010-01-01

Abstract: Perfusion technology is a mechanical and visual science traditionally taught with didactic instruction combined with clinical experience. It is difficult to provide perfusion students the opportunity to experience difficult clinical situations, set up complex perfusion equipment, or observe corrective measures taken during catastrophic events because of patient safety concerns. Although high fidelity simulators offer exciting opportunities for future perfusion training, we explore the use of a less costly, low fidelity form of simulation instruction: vicarious audiovisual learning. Two low fidelity modes of instruction were compared: a description with text, and a vicarious, first-person audiovisual production depicting the same content. Students (n = 37) sampled from five North American perfusion schools were prospectively randomized to one of two online learning modules, text or video. These modules described the setup and operation of the MAQUET ROTAFLOW standalone centrifugal console and pump. Using a 10-question multiple-choice test, students were assessed immediately after viewing the module (test #1) and then again 2 weeks later (test #2) to determine cognition and recall of the module content. In addition, students completed a questionnaire assessing the learning preferences of today's perfusion student. Mean test scores from test #1 for video learners (n = 18) were significantly higher (88.89%) than for text learners (n = 19) (74.74%), (p < .05). The same was true for test #2, where video learners (n = 10) had an average score of 77% while text learners (n = 9) scored 60% (p < .05). Survey results indicated video learners were more satisfied with their learning module than text learners. Vicarious audiovisual learning modules may be an efficacious, low cost means of delivering perfusion training on subjects such as equipment setup and operation. Video learning appears to improve cognition and retention of learned content and may play an important…

  15. The Natural Statistics of Audiovisual Speech

    PubMed Central

    Chandrasekaran, Chandramouli; Trubanova, Andrea; Stillittano, Sébastien; Caplier, Alice; Ghazanfar, Asif A.

    2009-01-01

    Humans, like other animals, are exposed to a continuous stream of signals, which are dynamic, multimodal, extended, and time varying in nature. This complex input space must be transduced and sampled by our sensory systems and transmitted to the brain where it can guide the selection of appropriate actions. To simplify this process, it's been suggested that the brain exploits statistical regularities in the stimulus space. Tests of this idea have largely been confined to unimodal signals and natural scenes. One important class of multisensory signals for which a quantitative input space characterization is unavailable is human speech. We do not understand what signals our brain has to actively piece together from an audiovisual speech stream to arrive at a percept versus what is already embedded in the signal structure of the stream itself. In essence, we do not have a clear understanding of the natural statistics of audiovisual speech. In the present study, we identified the following major statistical features of audiovisual speech. First, we observed robust correlations and close temporal correspondence between the area of the mouth opening and the acoustic envelope. Second, we found the strongest correlation between the area of the mouth opening and vocal tract resonances. Third, we observed that both area of the mouth opening and the voice envelope are temporally modulated in the 2–7 Hz frequency range. Finally, we show that the timing of mouth movements relative to the onset of the voice is consistently between 100 and 300 ms. We interpret these data in the context of recent neural theories of speech which suggest that speech communication is a reciprocally coupled, multisensory event, whereby the outputs of the signaler are matched to the neural processes of the receiver. PMID:19609344
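The two signal-level measures this abstract reports (mouth-area/envelope correlation and modulation in the 2–7 Hz range) can be approximated with a few lines of standard signal processing. The sketch below runs on synthetic, hypothetical signals standing in for real mouth-area and acoustic-envelope tracks; `pearson_r` and `dominant_modulation_hz` are illustrative helpers, not code from the study:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two equal-length signals."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.mean(x * y))

def dominant_modulation_hz(signal, fs):
    """Frequency (Hz) of the largest non-DC spectral peak."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return float(freqs[np.argmax(spectrum)])

# Hypothetical data: mouth area and acoustic envelope sharing a 4 Hz rhythm,
# with the envelope lagging slightly and carrying independent noise.
fs = 100.0                      # samples per second
t = np.arange(0, 10, 1 / fs)    # 10 s of "speech"
rng = np.random.default_rng(0)
mouth_area = 1 + 0.5 * np.sin(2 * np.pi * 4 * t) + 0.1 * rng.standard_normal(t.size)
envelope = 1 + 0.5 * np.sin(2 * np.pi * 4 * (t - 0.02)) + 0.1 * rng.standard_normal(t.size)

r = pearson_r(mouth_area, envelope)
f = dominant_modulation_hz(mouth_area, fs)
print(f"correlation: {r:.2f}, dominant modulation: {f:.1f} Hz")
```

On real recordings the lag between mouth opening and voice onset would itself be estimated (e.g., from the peak of a cross-correlation function) rather than assumed.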

  16. Audiovisual Media in Japan Today. The Ministry of Education's 1986 Survey on Audiovisual Media. AVE in Japan No. 26.

    ERIC Educational Resources Information Center

    Japan Audio-Visual Education Association, Tokyo.

    Based on the Ministry of Education, Science and Culture's 1986 survey of "Audiovisual Facilities in Schools and Social Education Institutions," this summary of the current status of the diffusion and utilization of audiovisual materials and equipment in Japan pays particular attention to public and private schools. Social education institutions…

  17. 7 CFR 3015.200 - Acknowledgement of support on publications and audiovisuals.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... audiovisuals. 3015.200 Section 3015.200 Agriculture Regulations of the Department of Agriculture (Continued... Miscellaneous § 3015.200 Acknowledgement of support on publications and audiovisuals. (a) Definitions. Appendix A defines “audiovisual,” “production of an audiovisual,” and “publication.” (b)...

  18. Determinantal Quintics and Mirror Symmetry of Reye Congruences

    NASA Astrophysics Data System (ADS)

    Hosono, Shinobu; Takagi, Hiromichi

    2014-08-01

    We study a certain family of determinantal quintic hypersurfaces in whose singularities are similar to the well-studied Barth-Nieto quintic. Smooth Calabi-Yau threefolds with Hodge numbers ( h 1,1, h 2,1) = (52, 2) are obtained by taking crepant resolutions of the singularities. It turns out that these smooth Calabi-Yau threefolds are in a two dimensional mirror family to the complete intersection Calabi-Yau threefolds in which have appeared in our previous study of Reye congruences in dimension three. We compactify the two dimensional family over and reproduce the mirror family to the Reye congruences. We also determine the monodromy of the family over completely. Our calculation shows an example of the orbifold mirror construction with a trivial orbifold group.

  19. Personalizing politics: a congruency model of political preference.

    PubMed

    Caprara, Gian Vittorio; Zimbardo, Philip G

    2004-10-01

    Modern politics become personalized as individual characteristics of voters and candidates assume greater importance in political discourse. Although personalities of candidates capture center stage and become the focus of voters' preferences, individual characteristics of voters, such as their traits and values, become decisive for political choice. The authors' findings reveal that people vote for candidates whose personality traits are in accordance with the ideology of their preferred political party. They also select politicians whose traits match their own traits. Moreover, voters' traits match their own values. The authors outline a congruency model of political preference that highlights the interacting congruencies among voters' self-reported traits and values, voters' perceptions of leaders' personalities, politicians' self-reported traits, and programs of favored political coalitions.

  20. Distribution, congruence, and hotspots of higher plants in China

    NASA Astrophysics Data System (ADS)

    Zhao, Lina; Li, Jinya; Liu, Huiyuan; Qin, Haining

    2016-01-01

    Identifying biodiversity hotspots has become a central issue in setting up priority protection areas, especially as financial resources for biological diversity conservation are limited. Taking China’s Higher Plants Red List (CHPRL), including Bryophytes, Ferns, Gymnosperms, Angiosperms, as the data source, we analyzed the geographic patterns of species richness, endemism, and endangerment via data processing at a fine grid-scale with an average edge length of 30 km based on three aspects of richness information: species richness, endemic species richness, and threatened species richness. We sought to test the accuracy of hotspots used in identifying conservation priorities with regard to higher plants. Next, we tested the congruence of the three aspects and made a comparison of the similarities and differences between the hotspots described in this paper and those in previous studies. We found that over 90% of threatened species in China are concentrated. While a high spatial congruence is observed among the three measures, there is a low congruence between two different sets of hotspots. Our results suggest that biodiversity information should be considered when identifying biological hotspots. Other factors, such as scales, should be included as well to develop biodiversity conservation plans in accordance with the region’s specific conditions.

  1. Congruence analysis of point clouds from unstable stereo image sequences

    NASA Astrophysics Data System (ADS)

    Jepping, C.; Bethmann, F.; Luhmann, T.

    2014-06-01

    This paper deals with the correction of exterior orientation parameters of stereo image sequences over deformed free-form surfaces without control points. Such imaging situation can occur, for example, during photogrammetric car crash test recordings where onboard high-speed stereo cameras are used to measure 3D surfaces. As a result of such measurements 3D point clouds of deformed surfaces are generated for a complete stereo sequence. The first objective of this research focusses on the development and investigation of methods for the detection of corresponding spatial and temporal tie points within the stereo image sequences (by stereo image matching and 3D point tracking) that are robust enough for a reliable handling of occlusions and other disturbances that may occur. The second objective of this research is the analysis of object deformations in order to detect stable areas (congruence analysis). For this purpose a RANSAC-based method for congruence analysis has been developed. This process is based on the sequential transformation of randomly selected point groups from one epoch to another by using a 3D similarity transformation. The paper gives a detailed description of the congruence analysis. The approach has been tested successfully on synthetic and real image data.
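The RANSAC-based congruence analysis described above (repeatedly fitting a 3D similarity transformation to randomly selected point groups and keeping the largest consistent set) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation; the Umeyama-style `similarity_transform` estimator, the sample size, and the tolerance are assumptions:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares 3D similarity (scale, rotation, translation), Umeyama-style."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:   # guard against reflections
        D[2, 2] = -1
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / A.var(axis=0).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

def congruent_inliers(p0, p1, trials=200, tol=0.05, rng=None):
    """RANSAC-style congruence analysis: find the points whose epoch-to-epoch
    motion is explained by a single similarity transform (the stable area)."""
    if rng is None:
        rng = np.random.default_rng(1)
    best = np.zeros(len(p0), dtype=bool)
    for _ in range(trials):
        idx = rng.choice(len(p0), size=4, replace=False)
        s, R, t = similarity_transform(p0[idx], p1[idx])
        residual = np.linalg.norm(p1 - (s * (R @ p0.T).T + t), axis=1)
        inliers = residual < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best

# Hypothetical data: 40 stable points moved by one similarity transform,
# plus 10 "deformed" points that moved independently.
rng = np.random.default_rng(0)
p0 = rng.uniform(-1, 1, size=(50, 3))
angle = np.deg2rad(10)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
p1 = 1.02 * (R_true @ p0.T).T + np.array([0.1, -0.2, 0.05])
p1[40:] += rng.uniform(0.2, 0.5, size=(10, 3))   # surface deformation
stable = congruent_inliers(p0, p1)
print(f"{stable.sum()} of {len(p0)} points classified as stable")
```

The stable subset found this way can then be used to re-estimate the exterior orientation from one epoch to the next without control points.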

  4. Action Congruency Influences Crowding When Discriminating Biological Motion Direction.

    PubMed

    Ikeda, Hanako; Watanabe, Katsumi

    2016-09-01

    Identification and discrimination of peripheral stimuli are often difficult when a few stimuli adjacent to the target are present (crowding). Our previous study showed that crowding occurs for walking direction discrimination of a biological motion stimulus. In the present study, we attempted to examine whether action congruency between the target and flankers would influence the crowding effect on biological motion stimuli. Each biological motion stimulus comprised one action (e.g., walking, throwing wastepaper, etc.) and was rotated in one of five directions around the vertical axis. In Experiment 1, observers discriminated between the directions of the target stimulus actions, which were surrounded by two flankers in the peripheral visual field. The crowding effect was stronger when the flankers performed the same action as the target and the directions differed. The congruency of action type enhanced the crowding effect in the direction-discrimination task. In Experiment 2, observers discriminated between action types of target stimuli. The crowding effect for the action-discrimination task was not modulated by the congruency of action direction. Thus, identical actions induced a larger crowding effect for action-direction discrimination, but congruent directions did not influence crowding for action-type discrimination. These results suggest that the processes involved in direction discrimination of biological motion are partially distinct from action discrimination processes.

  5. Categorization of Natural Dynamic Audiovisual Scenes

    PubMed Central

    Rummukainen, Olli; Radun, Jenni; Virtanen, Toni; Pulkki, Ville

    2014-01-01

    This work analyzed the perceptual attributes of natural dynamic audiovisual scenes. We presented thirty participants with 19 natural scenes in a similarity categorization task, followed by a semi-structured interview. The scenes were reproduced with an immersive audiovisual display. Natural scene perception has been studied mainly with unimodal settings, which have identified motion as one of the most salient attributes related to visual scenes, and sound intensity along with pitch trajectories related to auditory scenes. However, controlled laboratory experiments with natural multimodal stimuli are still scarce. Our results show that humans pay attention to similar perceptual attributes in natural scenes, and a two-dimensional perceptual map of the stimulus scenes and perceptual attributes was obtained in this work. The exploratory results show the amount of movement, perceived noisiness, and eventfulness of the scene to be the most important perceptual attributes in naturalistically reproduced real-world urban environments. We found the scene gist properties openness and expansion to remain as important factors in scenes with no salient auditory or visual events. We propose that the study of scene perception should move forward to understand better the processes behind multimodal scene processing in real-world environments. We publish our stimulus scenes as spherical video recordings and sound field recordings in a publicly available database. PMID:24788808

  6. Audiovisual Simultaneity Judgment and Rapid Recalibration throughout the Lifespan.

    PubMed

    Noel, Jean-Paul; De Niear, Matthew; Van der Burg, Erik; Wallace, Mark T

    2016-01-01

    Multisensory interactions are well established to convey an array of perceptual and behavioral benefits. One of the key features of multisensory interactions is the temporal structure of the stimuli combined. In an effort to better characterize how temporal factors influence multisensory interactions across the lifespan, we examined audiovisual simultaneity judgment and the degree of rapid recalibration to paired audiovisual stimuli (Flash-Beep and Speech) in a sample of 220 participants ranging from 7 to 86 years of age. Results demonstrate a surprisingly protracted developmental time-course for both audiovisual simultaneity judgment and rapid recalibration, with neither reaching maturity until well into adolescence. Interestingly, correlational analyses revealed that audiovisual simultaneity judgments (i.e., the size of the audiovisual temporal window of simultaneity) and rapid recalibration significantly co-varied as a function of age. Together, our results represent the most complete description of age-related changes in audiovisual simultaneity judgments to date, as well as being the first to describe changes in the degree of rapid recalibration as a function of age. We propose that the developmental time-course of rapid recalibration scaffolds the maturation of more durable audiovisual temporal representations.

  7. Audiovisual integration facilitates monkeys' short-term memory.

    PubMed

    Bigelow, James; Poremba, Amy

    2016-07-01

    Many human behaviors are known to benefit from audiovisual integration, including language and communication, recognizing individuals, social decision making, and memory. Exceptionally little is known about the contributions of audiovisual integration to behavior in other primates. The current experiment investigated whether short-term memory in nonhuman primates is facilitated by the audiovisual presentation format. Three macaque monkeys that had previously learned an auditory delayed matching-to-sample (DMS) task were trained to perform a similar visual task, after which they were tested with a concurrent audiovisual DMS task with equal proportions of auditory, visual, and audiovisual trials. Parallel to outcomes in human studies, accuracy was higher and response times were faster on audiovisual trials than either unisensory trial type. Unexpectedly, two subjects exhibited superior unimodal performance on auditory trials, a finding that contrasts with previous studies, but likely reflects their training history. Our results provide the first demonstration of a bimodal memory advantage in nonhuman primates, lending further validation to their use as a model for understanding audiovisual integration and memory processing in humans.

  8. Effects of aging on audio-visual speech integration.

    PubMed

    Huyse, Aurélie; Leybaert, Jacqueline; Berthommier, Frédéric

    2014-10-01

    This study investigated the impact of aging on audio-visual speech integration. A syllable identification task was presented in auditory-only, visual-only, and audio-visual congruent and incongruent conditions. Visual cues were either degraded or unmodified. Stimuli were embedded in stationary noise alternating with modulated noise. Fifteen young adults and 15 older adults participated in this study. Results showed that older adults had preserved lipreading abilities when the visual input was clear but not when it was degraded. The impact of aging on audio-visual integration also depended on the quality of the visual cues. In the visual clear condition, the audio-visual gain was similar in both groups and analyses in the framework of the fuzzy-logical model of perception confirmed that older adults did not differ from younger adults in their audio-visual integration abilities. In the visual reduction condition, the audio-visual gain was reduced in the older group, but only when the noise was stationary, suggesting that older participants could compensate for the loss of lipreading abilities by using the auditory information available in the valleys of the noise. The fuzzy-logical model of perception confirmed the significant impact of aging on audio-visual integration by showing an increased weight of audition in the older group. PMID:25324091

  12. Minding the PS, queues, and PXQs: Uniformity of semantic processing across multiple stimulus types

    PubMed Central

    Laszlo, Sarah; Federmeier, Kara D.

    2009-01-01

An assumption in the reading literature is that access to semantics is gated by stimulus properties such as orthographic regularity or familiarity. In the electrophysiological domain, this assumption has led to a debate about the features necessary to initiate semantic processing as indexed by the N400 event-related potential (ERP) component. To examine this, we recorded ERPs to sentences with endings that were familiar and legal (words), familiar and illegal (acronyms), or unfamiliar and illegal (consonant or vowel strings). N400 congruency effects (reduced negativity to expected relative to unexpected endings) were observed for words and acronyms; these were identical in size, timing, and scalp distribution. Notably, clear N400 potentials were also elicited by unfamiliar, illegal strings, suggesting that, at least in a verbal context, semantic access may be attempted for any letter string, regardless of familiarity or regularity. PMID:18221447

  13. The Semantic Learning Organization

    ERIC Educational Resources Information Center

    Sicilia, Miguel-Angel; Lytras, Miltiadis D.

    2005-01-01

    Purpose: The aim of this paper is introducing the concept of a "semantic learning organization" (SLO) as an extension of the concept of "learning organization" in the technological domain. Design/methodology/approach: The paper takes existing definitions and conceptualizations of both learning organizations and Semantic Web technology to develop…

  14. Communication: General Semantics Perspectives.

    ERIC Educational Resources Information Center

    Thayer, Lee, Ed.

    This book contains the edited papers from the eleventh International Conference on General Semantics, titled "A Search for Relevance." The conference questioned, as a central theme, the relevance of general semantics in a world of wars and human misery. Reacting to a fundamental Korzybski-ian principle that man's view of reality is distorted by…

  15. Audiovisual Interval Size Estimation Is Associated with Early Musical Training

    PubMed Central

    Abel, Mary Kathryn; Li, H. Charles; Russo, Frank A.; Schlaug, Gottfried; Loui, Psyche

    2016-01-01

    Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants’ ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants’ ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception. PMID:27760134

  16. Enhancing medical database semantics.

    PubMed Central

    Leão, B. de F.; Pavan, A.

    1995-01-01

Medical databases deal with dynamic, heterogeneous and fuzzy data. Modeling such a complex domain demands powerful semantic data modeling methodologies. This paper describes GSM-Explorer, a CASE tool that allows for the creation of relational databases using semantic data modeling techniques. GSM-Explorer fully incorporates the Generic Semantic Data Model (GSM), enabling knowledge engineers to model the application domain with the abstraction mechanisms of generalization/specialization, association and aggregation. The tool generates a structure that implements persistent database objects through the automatic generation of customized ANSI SQL scripts that sustain the semantics defined at the higher level. This paper emphasizes the system architecture and the mapping of the semantic model into relational tables. The present status of the project and its further developments are discussed in the Conclusions. PMID:8563288

  17. Order Theoretical Semantic Recommendation

    SciTech Connect

    Joslyn, Cliff A.; Hogan, Emilie A.; Paulson, Patrick R.; Peterson, Elena S.; Stephan, Eric G.; Thomas, Dennis G.

    2013-07-23

Mathematical concepts of order and ordering relations play multiple roles in semantic technologies. Discrete totally ordered data characterize both input streams and top-k rank-ordered recommendations and query output, while temporal attributes establish numerical total orders, either over time points or in the more complex case of start/end temporal intervals. Also of note are partially ordered data, including both lattices and non-lattices, which actually dominate the semantic structure of ontological systems. Scalar semantic similarities over partially ordered semantic data are traditionally used to return rank-ordered recommendations, but these require complementation with true metrics available over partially ordered sets. In this paper we report on our work in the foundations of partial order measurement in ontologies, with application to top-k semantic recommendation in workflows.
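The idea of ranking concepts in a partially ordered ontology can be sketched with a standard depth-based similarity over a toy is-a hierarchy. This is an illustrative example only, not the paper's measure; the concept names and the Wu-Palmer-style scoring are assumptions.

```python
# Toy is-a hierarchy: child -> parent (root maps to None). All names are
# hypothetical, chosen only to illustrate top-k ranking over a partial order.
PARENT = {
    "dog": "mammal", "cat": "mammal", "mammal": "animal",
    "sparrow": "bird", "bird": "animal", "animal": None,
}

def ancestors(node):
    """Chain of concepts from node up to the root, inclusive."""
    chain = []
    while node is not None:
        chain.append(node)
        node = PARENT[node]
    return chain

def depth(node):
    """Distance from the root; the root itself has depth 0."""
    return len(ancestors(node)) - 1

def wu_palmer(a, b):
    """Wu-Palmer-style similarity: 2*depth(lca) / (depth(a) + depth(b))."""
    common = set(ancestors(b))
    lca = next(n for n in ancestors(a) if n in common)  # deepest shared ancestor
    total = depth(a) + depth(b)
    return 2 * depth(lca) / total if total else 1.0

def top_k(query, candidates, k=2):
    """Rank candidate concepts by similarity to the query; return the top k."""
    return sorted(candidates, key=lambda c: wu_palmer(query, c), reverse=True)[:k]
```

For example, `top_k("dog", ["cat", "sparrow", "animal"])` ranks `"cat"` first, since the two share the deeper ancestor `"mammal"` rather than only the root.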

  18. Future-saving audiovisual content for Data Science: Preservation of geoinformatics video heritage with the TIB|AV-Portal

    NASA Astrophysics Data System (ADS)

    Löwe, Peter; Plank, Margret; Ziedorn, Frauke

    2015-04-01

of Science and Technology. The web-based portal allows for extended search capabilities based on enhanced metadata derived by automated video analysis. By combining state-of-the-art multimedia retrieval techniques such as speech, text, and image recognition with semantic analysis, content-based access to videos at the segment level is provided. Further, by using the open standard Media Fragment Identifier (MFID), a citable Digital Object Identifier is displayed for each video segment. In addition to the continuously growing footprint of contemporary content, the importance of vintage audiovisual information needs to be considered: this paper showcases the successful application of the TIB|AV-Portal in the preservation and provision of a newly discovered version of a GRASS GIS promotional video produced by the US Army Construction Engineering Research Laboratory (US-CERL) in 1987. The video provides insight into the constraints of the very early days of the GRASS GIS project, the oldest active Free and Open Source Software (FOSS) GIS project, which has been active for over thirty years. GRASS itself has turned into a collaborative scientific platform, a repository of scientific peer-reviewed code, and an algorithm/knowledge hub for future generations of scientists [1]. This is a reference case for future preservation activities regarding semantic-enhanced Web 2.0 content from geospatial software projects within academia and beyond. References: [1] Chemin, Y., Petras, V., Petrasova, A., Landa, M., Gebbert, S., Zambelli, P., Neteler, M., Löwe, P.: GRASS GIS: a peer-reviewed scientific platform and future research repository, Geophysical Research Abstracts, Vol. 17, EGU2015-8314-1, 2015 (submitted)

  19. Semantics, Pragmatics, and the Nature of Semantic Theories

    ERIC Educational Resources Information Center

    Spewak, David Charles, Jr.

    2013-01-01

    The primary concern of this dissertation is determining the distinction between semantics and pragmatics and how context sensitivity should be accommodated within a semantic theory. I approach the question over how to distinguish semantics from pragmatics from a new angle by investigating what the objects of a semantic theory are, namely…

  20. Automatic audiovisual integration in speech perception.

    PubMed

    Gentilucci, Maurizio; Cattaneo, Luigi

    2005-11-01

    Two experiments aimed to determine whether features of both the visual and acoustical inputs are always merged into the perceived representation of speech and whether this audiovisual integration is based on either cross-modal binding functions or on imitation. In a McGurk paradigm, observers were required to repeat aloud a string of phonemes uttered by an actor (acoustical presentation of phonemic string) whose mouth, in contrast, mimicked pronunciation of a different string (visual presentation). In a control experiment participants read the same printed strings of letters. This condition aimed to analyze the pattern of voice and the lip kinematics controlling for imitation. In the control experiment and in the congruent audiovisual presentation, i.e. when the articulation mouth gestures were congruent with the emission of the string of phones, the voice spectrum and the lip kinematics varied according to the pronounced strings of phonemes. In the McGurk paradigm the participants were unaware of the incongruence between visual and acoustical stimuli. The acoustical analysis of the participants' spoken responses showed three distinct patterns: the fusion of the two stimuli (the McGurk effect), repetition of the acoustically presented string of phonemes, and, less frequently, of the string of phonemes corresponding to the mouth gestures mimicked by the actor. However, the analysis of the latter two responses showed that the formant 2 of the participants' voice spectra always differed from the value recorded in the congruent audiovisual presentation. It approached the value of the formant 2 of the string of phonemes presented in the other modality, which was apparently ignored. The lip kinematics of the participants repeating the string of phonemes acoustically presented were influenced by the observation of the lip movements mimicked by the actor, but only when pronouncing a labial consonant. 
The data are discussed in favor of the hypothesis that features of both

  1. Clock synchronization by accelerated observers - Metric construction for arbitrary congruences of world lines

    NASA Technical Reports Server (NTRS)

    Henriksen, R. N.; Nelson, L. A.

    1985-01-01

Clock synchronization in an arbitrarily accelerated observer congruence is considered. A general solution is obtained that maintains the isotropy and coordinate independence of the one-way speed of light. Attention is also given to various particular cases, including the rotating disk (ring) congruence. An explicit, congruence-based spacetime metric is constructed according to Einstein's clock synchronization procedure, and the equation for the geodesics of the spacetime is derived using the Hamilton-Jacobi method. The application of interferometric techniques (absolute phase radio interferometry, VLBI) to the detection of the 'global Sagnac effect' is also discussed.

  2. A Defense of Semantic Minimalism

    ERIC Educational Resources Information Center

    Kim, Su

    2012-01-01

    Semantic Minimalism is a position about the semantic content of declarative sentences, i.e., the content that is determined entirely by syntax. It is defined by the following two points: "Point 1": The semantic content is a complete/truth-conditional proposition. "Point 2": The semantic content is useful to a theory of…

  3. A Semantic Graph Query Language

    SciTech Connect

    Kaplan, I L

    2006-10-16

    Semantic graphs can be used to organize large amounts of information from a number of sources into one unified structure. A semantic query language provides a foundation for extracting information from the semantic graph. The graph query language described here provides a simple, powerful method for querying semantic graphs.
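The core idea of the abstract, querying a semantic graph with a simple pattern language, can be sketched as a miniature triple store. This is a hypothetical illustration of the concept, not the query language the report describes; the triples and the wildcard convention are assumptions.

```python
# A semantic graph as (subject, predicate, object) triples. Names are
# invented for illustration only.
TRIPLES = {
    ("alice", "works_at", "LabX"),
    ("bob", "works_at", "LabX"),
    ("alice", "knows", "bob"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [
        (s, p, o)
        for (s, p, o) in TRIPLES
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]
```

A call such as `query(predicate="works_at")` plays the role of a one-pattern graph query, returning every edge with that label; real semantic query languages compose many such patterns with shared variables.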

  4. Semantic Theory: A Linguistic Perspective.

    ERIC Educational Resources Information Center

    Nilsen, Don L. F.; Nilsen, Alleen Pace

    This book attempts to bring linguists and language teachers up to date on the latest developments in semantics. A survey of the role of semantics in linguistics and other academic areas is followed by a historical perspective of semantics in American linguistics. Various semantic models are discussed. Anomaly, ambiguity, and discourse are…

  5. Integration of Sentence-Level Semantic Information in Parafovea: Evidence from the RSVP-Flanker Paradigm

    PubMed Central

    Zhang, Wenjia; Li, Nan; Wang, Xiaoyue; Wang, Suiping

    2015-01-01

    During text reading, the parafoveal word was usually presented between 2° and 5° from the point of fixation. Whether semantic information of parafoveal words can be processed during sentence reading is a critical and long-standing issue. Recently, studies using the RSVP-flanker paradigm have shown that the incongruent parafoveal word, presented as right flanker, elicited a more negative N400 compared with the congruent parafoveal word. This suggests that the semantic information of parafoveal words can be extracted and integrated during sentence reading, because the N400 effect is a classical index of semantic integration. However, as most previous studies did not control the word-pair congruency of the parafoveal and the foveal words that were presented in the critical triad, it is still unclear whether such integration happened at the sentence level or just at the word-pair level. The present study addressed this question by manipulating verbs in Chinese sentences to yield either a semantically congruent or semantically incongruent context for the critical noun. In particular, the interval between the critical nouns and verbs was controlled to be 4 or 5 characters. Thus, to detect the incongruence of the parafoveal noun, participants had to integrate it with the global sentential context. The results revealed that the N400 time-locked to the critical triads was more negative in incongruent than in congruent sentences, suggesting that parafoveal semantic information can be integrated at the sentence level during Chinese reading. PMID:26418230

  6. Integration of Sentence-Level Semantic Information in Parafovea: Evidence from the RSVP-Flanker Paradigm.

    PubMed

    Zhang, Wenjia; Li, Nan; Wang, Xiaoyue; Wang, Suiping

    2015-01-01

    During text reading, the parafoveal word was usually presented between 2° and 5° from the point of fixation. Whether semantic information of parafoveal words can be processed during sentence reading is a critical and long-standing issue. Recently, studies using the RSVP-flanker paradigm have shown that the incongruent parafoveal word, presented as right flanker, elicited a more negative N400 compared with the congruent parafoveal word. This suggests that the semantic information of parafoveal words can be extracted and integrated during sentence reading, because the N400 effect is a classical index of semantic integration. However, as most previous studies did not control the word-pair congruency of the parafoveal and the foveal words that were presented in the critical triad, it is still unclear whether such integration happened at the sentence level or just at the word-pair level. The present study addressed this question by manipulating verbs in Chinese sentences to yield either a semantically congruent or semantically incongruent context for the critical noun. In particular, the interval between the critical nouns and verbs was controlled to be 4 or 5 characters. Thus, to detect the incongruence of the parafoveal noun, participants had to integrate it with the global sentential context. The results revealed that the N400 time-locked to the critical triads was more negative in incongruent than in congruent sentences, suggesting that parafoveal semantic information can be integrated at the sentence level during Chinese reading.

  7. Audiovisual signal compression: the 64/P codecs

    NASA Astrophysics Data System (ADS)

    Jayant, Nikil S.

    1996-02-01

Video codecs operating at integral multiples of 64 kbps are well-known in visual communications technology as p * 64 systems (p equals 1 to 24). Originally developed as a class of ITU standards, these codecs have served as core technology for videoconferencing, and they have also influenced the MPEG standards for addressable video. Video compression in the above systems is provided by motion compensation followed by discrete cosine transform -- quantization of the residual signal. Notwithstanding the promise of higher bit rates in emerging generations of networks and storage devices, there is a continuing need for facile audiovisual communications over voice band and wireless modems. Consequently, video compression at bit rates lower than 64 kbps is a widely-sought capability. In particular, video codecs operating at rates in the neighborhood of 64, 32, 16, and 8 kbps seem to have great practical value, being matched respectively to the transmission capacities of basic rate ISDN (64 kbps), and voiceband modems that represent high (32 kbps), medium (16 kbps) and low-end (8 kbps) grades in current modem technology. The purpose of this talk is to describe the state of video technology at these transmission rates, without getting too literal about the specific speeds mentioned above. In other words, we expect codecs designed for non-submultiples of 64 kbps, such as 56 kbps or 19.2 kbps, as well as for submultiples of 64 kbps, depending on varying constraints on modem rate and the transmission rate needed for the voice-coding part of the audiovisual communications link. The MPEG-4 video standards process is a natural platform on which to examine current capabilities in sub-ISDN rate video coding, and we shall draw appropriately from this process in describing video codec performance.
Inherent in this summary is a reinforcement of motion compensation and DCT as viable building blocks of video compression systems, although there is a need for improving signal quality

  8. An audiovisual database of English speech sounds

    NASA Astrophysics Data System (ADS)

    Frisch, Stefan A.; Nikjeh, Dee Adams

    2003-10-01

A preliminary audiovisual database of English speech sounds has been developed for teaching purposes. This database contains all Standard English speech sounds produced in isolated words in word-initial, word-medial, and word-final position, unless not allowed by English phonotactics. There is one example of each word spoken by a male and a female talker. The database consists of an audio recording, video of the face from a 45° angle off center, and ultrasound video of the tongue in the mid-sagittal plane. The files contained in the database are suitable for examination with the Wavesurfer freeware program in audio or video modes [Sjolander and Beskow, KTH Stockholm]. This database is intended as a multimedia reference for students in phonetics or speech science. A demonstration and plans for further development will be presented.

  9. Behavioral Science Design for Audio-Visual Software Development

    ERIC Educational Resources Information Center

    Foster, Dennis L.

    1974-01-01

    A discussion of the basic structure of the behavioral audio-visual production which consists of objectives analysis, approach determination, technical production, fulfillment evaluation, program refinement, implementation, and follow-up. (Author)

  10. A measure for assessing the effects of audiovisual speech integration.

    PubMed

    Altieri, Nicholas; Townsend, James T; Wenger, Michael J

    2014-06-01

    We propose a measure of audiovisual speech integration that takes into account accuracy and response times. This measure should prove beneficial for researchers investigating multisensory speech recognition, since it relates to normal-hearing and aging populations. As an example, age-related sensory decline influences both the rate at which one processes information and the ability to utilize cues from different sensory modalities. Our function assesses integration when both auditory and visual information are available, by comparing performance on these audiovisual trials with theoretical predictions for performance under the assumptions of parallel, independent self-terminating processing of single-modality inputs. We provide example data from an audiovisual identification experiment and discuss applications for measuring audiovisual integration skills across the life span.

  11. Audiovisual Materials and Programming for Children: A Long Tradition.

    ERIC Educational Resources Information Center

    Doll, Carol A.

    1992-01-01

Explores the use of audiovisual materials in children's programming at the Seattle Public Library prior to 1920. Kinds of materials discussed include pictures, reflectoscopes, films, sound recordings, lantern slides, and stereographs. (17 references) (MES)

  12. Proper Use of Audio-Visual Aids: Essential for Educators.

    ERIC Educational Resources Information Center

    Dejardin, Conrad

    1989-01-01

    Criticizes educators as the worst users of audio-visual aids and among the worst public speakers. Offers guidelines for the proper use of an overhead projector and the development of transparencies. (DMM)

  13. Quantifying temporal ventriloquism in audiovisual synchrony perception.

    PubMed

    Kuling, Irene A; Kohlrausch, Armin; Juola, James F

    2013-10-01

    The integration of visual and auditory inputs in the human brain works properly only if the components are perceived in close temporal proximity. In the present study, we quantified cross-modal interactions in the human brain for audiovisual stimuli with temporal asynchronies, using a paradigm from rhythm perception. In this method, participants had to align the temporal position of a target in a rhythmic sequence of four markers. In the first experiment, target and markers consisted of a visual flash or an auditory noise burst, and all four combinations of target and marker modalities were tested. In the same-modality conditions, no temporal biases and a high precision of the adjusted temporal position of the target were observed. In the different-modality conditions, we found a systematic temporal bias of 25-30 ms. In the second part of the first and in a second experiment, we tested conditions in which audiovisual markers with different stimulus onset asynchronies (SOAs) between the two components and a visual target were used to quantify temporal ventriloquism. The adjusted target positions varied by up to about 50 ms and depended in a systematic way on the SOA and its proximity to the point of subjective synchrony. These data allowed testing different quantitative models. The most satisfying model, based on work by Maij, Brenner, and Smeets (Journal of Neurophysiology 102, 490-495, 2009), linked temporal ventriloquism and the percept of synchrony and was capable of adequately describing the results from the present study, as well as those of some earlier experiments. PMID:23868564

  14. Prediction and constraint in audiovisual speech perception

    PubMed Central

    Peelle, Jonathan E.; Sommers, Mitchell S.

    2015-01-01

During face-to-face conversational speech, listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners by increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to auditory information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening.
Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported

  15. Multimodal Feature Integration in the Angular Gyrus during Episodic and Semantic Retrieval

    PubMed Central

    Bonnici, Heidi M.; Richter, Franziska R.; Yazar, Yasemin

    2016-01-01

    Much evidence from distinct lines of investigation indicates the involvement of angular gyrus (AnG) in the retrieval of both episodic and semantic information, but the region's precise function and whether that function differs across episodic and semantic retrieval have yet to be determined. We used univariate and multivariate fMRI analysis methods to examine the role of AnG in multimodal feature integration during episodic and semantic retrieval. Human participants completed episodic and semantic memory tasks involving unimodal (auditory or visual) and multimodal (audio-visual) stimuli. Univariate analyses revealed the recruitment of functionally distinct AnG subregions during the retrieval of episodic and semantic information. Consistent with a role in multimodal feature integration during episodic retrieval, significantly greater AnG activity was observed during retrieval of integrated multimodal episodic memories compared with unimodal episodic memories. Multivariate classification analyses revealed that individual multimodal episodic memories could be differentiated in AnG, with classification accuracy tracking the vividness of participants' reported recollections, whereas distinct unimodal memories were represented in sensory association areas only. In contrast to episodic retrieval, AnG was engaged to a statistically equivalent degree during retrieval of unimodal and multimodal semantic memories, suggesting a distinct role for AnG during semantic retrieval. Modality-specific sensory association areas exhibited corresponding activity during both episodic and semantic retrieval, which mirrored the functional specialization of these regions during perception. The results offer new insights into the integrative processes subserved by AnG and its contribution to our subjective experience of remembering. SIGNIFICANCE STATEMENT Using univariate and multivariate fMRI analyses, we provide evidence that functionally distinct subregions of angular gyrus (An

  16. [Cultural heritage and audiovisual creation in the Arab world].

    PubMed

    Aziza, M

    1979-01-01

Audiovisual creation in Arab countries faces problems arising from the use of imported techniques to reconstitute or transform their own reality. Arab audiovisual producers see this technique as an easy and efficient way to reproduce reality or to construct a conventional, artificial universe. Sometimes audiovisuals have an absolute power of suggestion; sometimes these techniques are met with total incredulity. From a diffusion point of view, audiovisuals in the Arab world have a very specific status. The effects of television, studied by Western researchers in their own cultural environment, are not reproduced in the same fashion in the Arab cultural world. In the Arab world, the word still very often competes successfully with the picture, even after the appearance and adoption of mass media. Finally, one must mention a very interesting situation resulting from a linguistic phenomenon specific to the Arab world: the existence of two communication languages, one noble but little used, the other dialectal but popular. In all Arab countries the News, the most political program, is broadcast in the classical language, despite the danger of meaning distortion among the least educated public. The reason is probably that the classical Arabic language enjoys a sacred status. Arab audiovisuals face several obstacles to their full and autonomous realization. The contribution of Arab audiovisual producers is relatively modest compared to some other areas of cultural creation. Arab film-making increasingly seeks the cooperation of contemporary writers. Contemporary literature is a considerable source for the renewal of Arab audiovisual expression. A relationship between film and popular cultural heritage could very usefully be established in both directions. Audiovisuals should treat popular cultural manifestations as a global social fact on several significant levels. PMID:12261391

  17. Effect of perceptual load on semantic access by speech in children

    PubMed Central

    Jerger, Susan; Damian, Markus F.; Mills, Candice; Bartlett, James; Tye-Murray, Nancy; Abdi, Hervè

    2013-01-01

Purpose To examine whether semantic access by speech requires attention in children. Method Children (N=200) named pictures and ignored distractors on a cross-modal (distractors: auditory, no face) or multi-modal (distractors: auditory with static face, and audiovisual with dynamic face) picture-word task. The cross-modal task had a low perceptual load and the multi-modal task a high load [i.e., naming pictures displayed 1) on a blank screen vs 2) below the talker's face on his T-shirt, respectively]. The semantic content of the distractors was manipulated to be related vs unrelated to the picture (e.g., picture dog with distractors bear vs cheese). Lavie's (2005) perceptual load model proposes that semantic access is independent of capacity-limited attentional resources if the irrelevant semantic-content manipulation influences naming times on both tasks despite the variation in load, but dependent on attentional resources (exhausted by the higher-load task) if irrelevant content influences naming only on the cross-modal (low-load) task. Results Irrelevant semantic content affected performance on both tasks in 6- to 9-year-olds, but only on the cross-modal task in 4- to 5-year-olds. The addition of visual speech did not influence results on the multi-modal task. Conclusion Younger and older children differ in their dependence on attentional resources for semantic access by speech. PMID:22896045

  18. The congruence of personal life values and work attitudes.

    PubMed

    Hyde, Rachel E; Weathington, Bart L

    2006-05-01

    The authors examined the congruence between an individual's personal-life value placement and attitudes at work. Specifically, they examined how people place value on work, family, religion, and themselves (the personal life values), respectively, and how that choice influences affect, commitment, conscientiousness, and honesty in the workplace (attitudes at work). The authors also examined and tested exploratory hypotheses by using both simple correlations and multiple linear regression analyses. Results suggested varying relationships between value placement and work attitudes. The authors discussed implications and directions for future research. PMID:17663357

  19. Trusting Crowdsourced Geospatial Semantics

    NASA Astrophysics Data System (ADS)

    Goodhue, P.; McNair, H.; Reitsma, F.

    2015-08-01

    The degree of trust one can place in information is one of the foremost limitations of crowdsourced geospatial information. As with the development of web technologies, the increased prevalence of semantics associated with geospatial information has increased accessibility and functionality. Semantics also provides an opportunity to extend indicators of trust for crowdsourced geospatial information that have largely focused on spatio-temporal and social aspects of that information. Comparing a feature's intrinsic and extrinsic properties to associated ontologies provides a means of semantically assessing the trustworthiness of crowdsourced geospatial information. The application of this approach to unconstrained semantic submissions then allows for a detailed assessment of the trust of these features whilst maintaining the descriptive thoroughness this mode of information submission affords. The resulting trust rating then becomes an attribute of the feature, providing not only an indication as to the trustworthiness of a specific feature but is able to be aggregated across multiple features to illustrate the overall trustworthiness of a dataset.
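The aggregation step the abstract describes, rolling per-feature trust ratings up to a dataset-level figure, can be sketched in a few lines. The scoring scheme below (fraction of a feature's properties consistent with its ontology, weighted by property count) is an assumed stand-in, not the authors' method.

```python
# Hypothetical trust scoring for crowdsourced features: each feature is a
# pair (matched, total) counting how many of its properties agree with the
# associated ontology.

def feature_trust(matched, total):
    """Fraction of a feature's properties consistent with the ontology."""
    return matched / total if total else 0.0

def dataset_trust(features):
    """Dataset-level trust: mean of feature trust, weighted by property count."""
    total_props = sum(t for _, t in features)
    if not total_props:
        return 0.0
    return sum(feature_trust(m, t) * t for m, t in features) / total_props
```

Weighting by property count reflects the abstract's point that richer (more thoroughly described) submissions contribute more evidence to the overall rating than sparse ones.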

  20. Algebraic Semantics for Narrative

    ERIC Educational Resources Information Center

    Kahn, E.

    1974-01-01

    This paper uses discussion of Edmund Spenser's "The Faerie Queene" to present a theoretical framework for explaining the semantics of narrative discourse. The algebraic theory of finite automata is used. (CK)

  1. Grounding grammatical categories: attention bias in hand space influences grammatical congruency judgment of Chinese nominal classifiers

    PubMed Central

    Lobben, Marit; D’Ascenzo, Stefania

    2015-01-01

    Embodied cognitive theories predict that linguistic conceptual representations are grounded and continually represented in real world, sensorimotor experiences. However, there is an on-going debate on whether this also holds for abstract concepts. Grammar is the archetype of abstract knowledge, and therefore constitutes a test case against embodied theories of language representation. Former studies have largely focussed on lexical-level embodied representations. In the present study we take the grounding-by-modality idea a step further by using reaction time (RT) data from the linguistic processing of nominal classifiers in Chinese. We take advantage of an independent body of research, which shows that attention in hand space is biased. Specifically, objects near the hand consistently yield shorter RTs as a function of readiness for action on graspable objects within reaching space, and the same biased attention inhibits attentional disengagement. We predicted that this attention bias would equally apply to the graspable object classifier but not to the big object classifier. Chinese speakers (N = 22) judged grammatical congruency of classifier-noun combinations in two conditions: graspable object classifier and big object classifier. We found that RTs for the graspable object classifier were significantly faster in congruent combinations, and significantly slower in incongruent combinations, than the big object classifier. There was no main effect on grammatical violations, but rather an interaction effect of classifier type. Thus, we demonstrate here grammatical category-specific effects pertaining to the semantic content and by extension the visual and tactile modality of acquisition underlying the acquisition of these categories. We conclude that abstract grammatical categories are subjected to the same mechanisms as general cognitive and neurophysiological processes and may therefore be grounded. PMID:26379611

  2. "Pre-semantic" cognition revisited: critical differences between semantic aphasia and semantic dementia.

    PubMed

    Jefferies, Elizabeth; Rogers, Timothy T; Hopper, Samantha; Ralph, Matthew A Lambon

    2010-01-01

    Patients with semantic dementia show a specific pattern of impairment on both verbal and non-verbal "pre-semantic" tasks, e.g., reading aloud, past tense generation, spelling to dictation, lexical decision, object decision, colour decision and delayed picture copying. All seven tasks are characterised by poorer performance for items that are atypical of the domain and "regularization errors" (irregular/atypical items are produced as if they were domain-typical). The emergence of this pattern across diverse tasks in the same patients indicates that semantic memory plays a key role in all of these types of "pre-semantic" processing. However, this claim remains controversial because semantically impaired patients sometimes fail to show an influence of regularity. This study demonstrates that (a) the location of brain damage and (b) the underlying nature of the semantic deficit affect the likelihood of observing the expected relationship between poor comprehension and regularity effects. We compared the effect of multimodal semantic impairment in the context of semantic dementia and stroke aphasia on the seven "pre-semantic" tasks listed above. In all of these tasks, the semantic aphasia patients were less sensitive to typicality than the semantic dementia patients, even though the two groups obtained comparable scores on semantic tests. The semantic aphasia group also made fewer regularization errors and many more unrelated and perseverative responses. We propose that these group differences reflect the different locus for the semantic impairment in the two conditions: patients with semantic dementia have degraded semantic representations, whereas semantic aphasia patients show deregulated semantic cognition with concomitant executive deficits. These findings suggest a reinterpretation of single-case studies of comprehension-impaired aphasic patients who fail to show the expected effect of regularity on "pre-semantic" tasks. 
Consequently, such cases do not demonstrate…

  3. Quantifying the Level of Congruence between Two Career Measures: An Exploratory Study

    ERIC Educational Resources Information Center

    Miller, Mark J.

    2008-01-01

    This study investigated the level of congruency between scores on the Self-Directed Search--Form R (J. L. Holland, 1994) and scores from an online instrument, both of which measure Holland types (J. L. Holland, 1985, 1997). A reasonably high level of congruence was found. Implications for career counselors are briefly delineated.

  4. Values Congruence: Its Effect on Perceptions of Montana Elementary School Principal Leadership Practices and Student Achievement

    ERIC Educational Resources Information Center

    Zorn, Daniel Roy

    2010-01-01

    The purpose of this quantitative study was to examine the relationship between principal and teacher values congruence and perceived principal leadership practices. Additionally, this study considered the relationship between values congruence, leadership practices, and student achievement. The perceptions teachers hold regarding their principal's…

  5. Different Levels of Learning Interact to Shape the Congruency Sequence Effect

    ERIC Educational Resources Information Center

    Weissman, Daniel H.; Hawks, Zoë W.; Egner, Tobias

    2016-01-01

    The congruency effect in distracter interference tasks is often reduced after incongruent relative to congruent trials. Moreover, this "congruency sequence effect" (CSE) is influenced by learning related to concrete stimulus and response features as well as by learning related to abstract cognitive control processes. There is an ongoing…

  6. Congruence of Real and Ideal Job Characteristics: A Focus on Sex, Parenthood Status, and Extrinsic Characteristics.

    ERIC Educational Resources Information Center

    Weinberg, Sharon L.; Tittle, Carol Kehr

    1987-01-01

    Intrinsic and extrinsic job characteristics were studied in relation to perceived real-ideal job characteristic congruence for a sample of male and female full-time lawyers (N=60). Results indicated that sex differences exist in perceived real-ideal congruence even when variables known to covary with sex in the work setting are controlled.…

  7. Phylogeny congruence analysis and isozyme classification: the pyruvate kinase system.

    PubMed

    Guderley, H; Fournier, P; Auclair, J C

    1989-09-22

    As the isozymes of pyruvate kinase (PK) are best known in rats, the characteristics of the rat isozymes are generally used to classify the PK isozymes in other species. Given the discrepancies generated by this classification by analogy, we evaluated a classification using a phylogeny congruence analysis of the compositional relatedness of vertebrate PK's. While our phylogenetic analysis confirmed the well established separation of the L and R isozymes from the K and M isozymes, its power became most evident in the identification of non-orthologous (or variant) forms of PK. Our analysis emphasized the uniqueness of chicken liver PK which cannot be classified either as a K or an L isozyme, confirmed that tumors express a variety of forms of PK, and indicated that lungs systematically express PK's which are not orthologous with PK's from other tissues. The determination of orthology by the phylogeny congruence analysis assumes that the structural data from different sources are subject to similar methodological error. However, we cannot reject the possibility that an apparent lack of orthology be due to artifacts during purification and analysis. PMID:2615396

  8. Waves and null congruences in a draining bathtub

    NASA Astrophysics Data System (ADS)

    Dempsey, David; Dolan, Sam R.

    2016-04-01

    We study wave propagation in a draining bathtub: a black hole analogue in fluid mechanics whose perturbations are governed by a Klein-Gordon equation on an effective Lorentzian geometry. Like the Kerr spacetime, the draining bathtub geometry possesses an (effective) horizon, an ergosphere and null circular orbits. We propose here that a ‘pulse’ disturbance may be used to map out the light-cone of the effective geometry. First, we apply the eikonal approximation to elucidate the link between wavefronts, null geodesic congruences and the Raychaudhuri equation. Next, we solve the wave equation numerically in the time domain using the method of lines. Starting with Gaussian initial data, we demonstrate that a pulse will propagate along a null congruence and thus trace out the light-cone of the effective geometry. Our new results reveal features, such as wavefront intersections, frame-dragging, winding and interference effects, that are closely associated with the presence of null circular orbits and the ergosphere.

  9. The contribution of dynamic visual cues to audiovisual speech perception.

    PubMed

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this aim, we measured word identification performance in noise using unimodal auditory stimuli and audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point light displays achieved via motion capture of the original talker. Point light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech.

  10. Memory and learning with rapid audiovisual sequences

    PubMed Central

    Keller, Arielle S.; Sekuler, Robert

    2015-01-01

    We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed. PMID:26575193

  11. Our nation's wetlands (video). Audio-Visual

    SciTech Connect

    Not Available

    1990-01-01

    The Department of the Interior is custodian of approximately 500 million acres of federally owned land and has an important role to play in the management of wetlands. To contribute to the President's goal of no net loss of America's remaining wetlands, the Department of the Interior has initiated a 3-point program consisting of wetlands protection, restoration, and research: Wetlands Protection--Reduce wetlands losses on federally owned lands and encourage state and private landholders to practice wetlands conservation; Wetlands Restoration--Increase wetlands gains through the restoration and creation of wetlands on both public and private lands; Wetlands Research--Provide a foundation of scientific knowledge to guide future actions and decisions about wetlands. The audiovisual is a slide/tape-to-video transfer illustrating the various ways Interior bureaus are working to preserve our Nation's wetlands. The tape features an introduction by Secretary Manuel Lujan on the importance of wetlands and recognizing the benefit of such programs as the North American Waterfowl Management Program.

  12. Memory and learning with rapid audiovisual sequences.

    PubMed

    Keller, Arielle S; Sekuler, Robert

    2015-01-01

    We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed. PMID:26575193

  13. The Semantic SPASE

    NASA Astrophysics Data System (ADS)

    Hughes, S.; Crichton, D.; Thieman, J.; Ramirez, P.; King, T.; Weiss, M.

    2005-12-01

    The Semantic SPASE (Space Physics Archive Search and Extract) prototype demonstrates the use of semantic web technologies to capture, document, and manage the SPASE data model, support facet- and text-based search, and provide flexible and intuitive user interfaces. The SPASE data model, under development since late 2003 by a consortium of space physics domain experts, is intended to serve as the basis for interoperability between independent data systems. To develop the Semantic SPASE prototype, the data model was first analyzed to determine the inherit object classes and their attributes. These were entered into Stanford Medical Informatics' Protege ontology tool and annotated using definitions from the SPASE documentation. Further analysis of the data model resulted in the addition of class relationships. Finally attributes and relationships that support broad-scope interoperability were added from research associated with the Object-Oriented Data Technology task. To validate the ontology and produce a knowledge base, example data products were ingested. The capture of the data model as an ontology results in a more formal specification of the model. The Protege software is also a powerful management tool and supports plug-ins that produce several graphical notations as output. The stated purpose of the semantic web is to support machine understanding of web-based information. Protege provides an export capability to RDF/XML and RDFS/XML for this purpose. Several research efforts use RDF/XML knowledge bases to provide semantic search. MIT's Simile/Longwell project provides both facet- and text-based search using a suite of metadata browsers and the text-based search engine Lucene. Using the Protege generated RDF knowledge-base a semantic search application was easily built and deployed to run as a web application. Configuration files specify the object attributes and values to be designated as facets (i.e. search) constraints. 
Semantic web technologies provide…
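The facet- and text-based search that the record above describes can be illustrated with a minimal sketch. The product records, attribute names, and values below are hypothetical stand-ins, not the actual SPASE data model, and the knowledge base is reduced to simple attribute-value pairs rather than a full RDF graph:

```python
# Minimal sketch of facet- and text-based search over an RDF-like store.
# Records, attribute names, and values are hypothetical, not from SPASE.

# Each data product is a dictionary of attribute-value pairs.
products = {
    "obs-001": {"MeasurementType": "MagneticField", "Region": "Magnetosphere",
                "Description": "Fluxgate magnetometer time series"},
    "obs-002": {"MeasurementType": "ThermalPlasma", "Region": "Ionosphere",
                "Description": "Retarding potential analyzer densities"},
    "obs-003": {"MeasurementType": "MagneticField", "Region": "Ionosphere",
                "Description": "Search-coil magnetometer spectra"},
}

def facet_search(facets, text=""):
    """Return ids of products matching every facet constraint and the free-text query."""
    hits = []
    for pid, attrs in products.items():
        if all(attrs.get(k) == v for k, v in facets.items()) and \
           text.lower() in attrs.get("Description", "").lower():
            hits.append(pid)
    return sorted(hits)

print(facet_search({"MeasurementType": "MagneticField"}))           # -> ['obs-001', 'obs-003']
print(facet_search({"Region": "Ionosphere"}, text="magnetometer"))  # -> ['obs-003']
```

In a real deployment the facet constraints would come from a configuration file and the text match from an engine such as Lucene, as the record notes; the dictionary scan here only illustrates the combined facet-plus-text filtering idea.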

  14. Semantic home video categorization

    NASA Astrophysics Data System (ADS)

    Min, Hyun-Seok; Lee, Young Bok; De Neve, Wesley; Ro, Yong Man

    2009-02-01

    Nowadays, a strong need exists for the efficient organization of an increasing amount of home video content. To create an efficient system for the management of home video content, it is required to categorize home video content in a semantic way. So far, a significant amount of research has already been dedicated to semantic video categorization. However, conventional categorization approaches often rely on unnecessary concepts and complicated algorithms that are not suited in the context of home video categorization. To overcome the aforementioned problem, this paper proposes a novel home video categorization method that adopts semantic home photo categorization. To use home photo categorization in the context of home video, we segment video content into shots and extract key frames that represent each shot. To extract the semantics from key frames, we divide each key frame into ten local regions and extract low-level features. Based on the low-level features extracted for each local region, we can predict the semantics of a particular key frame. To verify the usefulness of the proposed home video categorization method, experiments were performed with 70 home video sequences, labeled by concepts that are part of the MPEG-7 VCE2 dataset. For the home video sequences used, the proposed system produced a recall of 77% and an accuracy of 78%.
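The per-key-frame step described above (ten local regions, low-level features per region) can be sketched as follows. The 2x5 region layout, the mean-intensity feature, and the synthetic frame are illustrative assumptions, not the authors' actual descriptors:

```python
# Toy sketch: divide a key frame into ten local regions (2 rows x 5 columns)
# and extract one simple low-level feature (mean intensity) per region.
# The layout and the feature are assumptions for illustration only.

def region_features(frame):
    """frame: 2-D list of grayscale values. Returns 10 regional mean intensities."""
    rows, cols = len(frame), len(frame[0])
    feats = []
    for r in range(2):                      # 2 rows of regions
        for c in range(5):                  # 5 columns of regions
            r0, r1 = r * rows // 2, (r + 1) * rows // 2
            c0, c1 = c * cols // 5, (c + 1) * cols // 5
            block = [frame[i][j] for i in range(r0, r1) for j in range(c0, c1)]
            feats.append(sum(block) / len(block))
    return feats

# Synthetic 4x10 "frame": left half dark (0), right half bright (255).
frame = [[0] * 5 + [255] * 5 for _ in range(4)]
print(region_features(frame))  # -> [0.0, 0.0, 127.5, 255.0, 255.0, 0.0, 0.0, 127.5, 255.0, 255.0]
```

The resulting 10-element feature vector is what a classifier would consume to predict the key frame's semantic concept.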

  15. Audiovisual Association Learning in the Absence of Primary Visual Cortex.

    PubMed

    Seirafi, Mehrdad; De Weerd, Peter; Pegna, Alan J; de Gelder, Beatrice

    2015-01-01

    Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways, especially the superior colliculus (SC), in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit audiovisual association learning task with two colors, red and purple (the latter color known to minimally activate the extra-geniculate pathway). Interestingly, the patient learned the association between an auditory cue and a visual stimulus only when the unseen visual stimulus was red, but not when it was purple. The current study presents the first evidence showing the possibility of audiovisual association learning in humans with a lesioned striate cortex. Furthermore, in line with animal studies, it supports an important role for the SC in audiovisual associative learning.

  16. The neural substrates of musical memory revealed by fMRI and two semantic tasks.

    PubMed

    Groussard, M; Rauchs, G; Landeau, B; Viader, F; Desgranges, B; Eustache, F; Platel, H

    2010-12-01

    Recognizing a musical excerpt without necessarily retrieving its title typically reflects the existence of a memory system dedicated to the retrieval of musical knowledge. The functional distinction between musical and verbal semantic memory has seldom been investigated. In this fMRI study, we directly compared the musical and verbal memory of 20 nonmusicians, using a congruence task involving automatic semantic retrieval and a familiarity task requiring more thorough semantic retrieval. In the former, participants had to access their semantic store to retrieve musical or verbal representations of melodies or expressions they heard, in order to decide whether these were then given the right ending or not. In the latter, they had to judge the level of familiarity of musical excerpts and expressions. Both tasks revealed activation of the left inferior frontal and posterior middle temporal cortices, suggesting that executive and selection processes are common to both verbal and musical retrievals. Distinct patterns of activation were observed within the left temporal cortex, with musical material mainly activating the superior temporal gyrus and verbal material the middle and inferior gyri. This cortical organization of musical and verbal semantic representations could explain clinical dissociations featuring selective disturbances for musical or verbal material. PMID:20627131

  17. Semantic Parameters of Split Intransitivity.

    ERIC Educational Resources Information Center

    Van Valin, Jr., Robert D.

    1990-01-01

    This paper argues that split-intransitive phenomena are better explained in semantic terms. A semantic analysis is carried out in Role and Reference Grammar, which assumes the theory of verb classification proposed in Dowty 1979. (49 references) (JL)

  18. Spatial Frequency Requirements and Gaze Strategy in Visual-Only and Audiovisual Speech Perception

    ERIC Educational Resources Information Center

    Wilson, Amanda H.; Alsius, Agnès; Paré, Martin; Munhall, Kevin G.

    2016-01-01

    Purpose: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method: We presented vowel-consonant-vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent…

  19. 7 CFR 3015.200 - Acknowledgement of support on publications and audiovisuals.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... A defines “audiovisual,” “production of an audiovisual,” and “publication.” (b) Publications... published with grant support and, if feasible, on any publication reporting the results of, or describing, a... under subgrants. (2) Audiovisuals produced as research instruments or for documenting experimentation...

  20. 77 FR 22803 - Certain Audiovisual Components and Products Containing the Same; Institution of Investigation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-17

    ... COMMISSION Certain Audiovisual Components and Products Containing the Same; Institution of Investigation... importation, and the sale within the United States after importation of certain audiovisual components and... certain audiovisual components and products containing the same that infringe one or more of claims 1,...

  1. 36 CFR 1237.10 - How must agencies manage their audiovisual, cartographic, and related records?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... their audiovisual, cartographic, and related records? 1237.10 Section 1237.10 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.10 How must agencies manage their audiovisual, cartographic, and...

  2. 77 FR 16561 - Certain Audiovisual Components and Products Containing the Same; Notice of Receipt of Complaint...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-21

    ... COMMISSION Certain Audiovisual Components and Products Containing the Same; Notice of Receipt of Complaint... complaint entitled Certain Audiovisual Components and Products Containing the Same, DN 2884; the Commission... within the United States after importation of certain audiovisual components and products containing...

  3. 77 FR 16560 - Certain Audiovisual Components and Products Containing the Same; Notice of Receipt of Complaint...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-21

    ... COMMISSION Certain Audiovisual Components and Products Containing the Same; Notice of Receipt of Complaint... complaint entitled Certain Audiovisual Components and Products Containing the Same, DN 2884; the Commission... within the United States after importation of certain audiovisual components and products containing...

  4. The semantic priming project.

    PubMed

    Hutchison, Keith A; Balota, David A; Neely, James H; Cortese, Michael J; Cohen-Shikora, Emily R; Tse, Chi-Shing; Yap, Melvin J; Bengson, Jesse J; Niemeyer, Dale; Buchanan, Erin

    2013-12-01

    Speeded naming and lexical decision data for 1,661 target words following related and unrelated primes were collected from 768 subjects across four different universities. These behavioral measures have been integrated with demographic information for each subject and descriptive characteristics for every item. Subjects also completed portions of the Woodcock-Johnson reading battery, three attentional control tasks, and a circadian rhythm measure. These data are available at a user-friendly Internet-based repository (http://spp.montana.edu). This Web site includes a search engine designed to generate lists of prime-target pairs with specific characteristics (e.g., length, frequency, associative strength, latent semantic similarity, priming effect in standardized and raw reaction times). We illustrate the types of questions that can be addressed via the Semantic Priming Project. These data represent the largest behavioral database on semantic priming and are available to researchers to aid in selecting stimuli, testing theories, and reducing potential confounds in their studies.

  5. Temporal Representation in Semantic Graphs

    SciTech Connect

    Levandoski, J J; Abdulla, G M

    2007-08-07

    A wide range of knowledge discovery and analysis applications, ranging from business to biological, make use of semantic graphs when modeling relationships and concepts. Most of the semantic graphs used in these applications are assumed to be static pieces of information, meaning temporal evolution of concepts and relationships are not taken into account. Guided by the need for more advanced semantic graph queries involving temporal concepts, this paper surveys the existing work involving temporal representations in semantic graphs.
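The temporal-evolution idea surveyed above can be made concrete with a minimal sketch: each edge of a semantic graph carries a validity interval, and a query is evaluated against the snapshot of the graph valid at a given time. The entities, relations, and years below are invented for illustration:

```python
# Minimal sketch of a temporal semantic graph: each edge
# (subject, relation, object) carries a [start, end) validity interval.
# Entities, relations, and years are invented for illustration.

edges = [
    ("alice", "works_at",   "acme",        2001, 2005),
    ("alice", "works_at",   "globex",      2005, 2010),
    ("acme",  "located_in", "springfield", 1990, 2010),
]

def snapshot(graph, year):
    """Return the static semantic graph valid at the given year."""
    return [(s, r, o) for (s, r, o, start, end) in graph if start <= year < end]

def query(graph, year, subject, relation):
    """Objects related to `subject` via `relation`, as of the given year."""
    return [o for (s, r, o) in snapshot(graph, year) if s == subject and r == relation]

print(query(edges, 2003, "alice", "works_at"))  # -> ['acme']
print(query(edges, 2007, "alice", "works_at"))  # -> ['globex']
```

A static semantic graph is the special case in which every interval spans the whole timeline; the snapshot step is what the temporal queries discussed in the survey add on top.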

  6. Cognitive tasks during expectation affect the congruency ERP effects to facial expressions

    PubMed Central

    Lin, Huiyan; Schulz, Claudia; Straube, Thomas

    2015-01-01

    Expectancy congruency has been shown to modulate event-related potentials (ERPs) to emotional stimuli, such as facial expressions. However, it is unknown whether the congruency ERP effects to facial expressions can be modulated by cognitive manipulations during stimulus expectation. To this end, electroencephalography (EEG) was recorded while participants viewed (neutral and fearful) facial expressions. Each trial started with a cue, predicting a facial expression, followed by an expectancy interval without any cues and subsequently the face. In half of the trials, participants had to solve a cognitive task in which different letters were presented for target letter detection during the expectancy interval. Furthermore, facial expressions were congruent with the cues in 75% of all trials. ERP results revealed that for fearful faces, the cognitive task during expectation altered the congruency effect in N170 amplitude; congruent compared to incongruent fearful faces evoked larger N170 in the non-task condition but the congruency effect was not evident in the task condition. Regardless of facial expression, the congruency effect was generally altered by the cognitive task during expectation in P3 amplitude; the amplitudes were larger for incongruent compared to congruent faces in the non-task condition but the congruency effect was not shown in the task condition. The findings indicate that cognitive tasks during expectation reduce the processing of expectation and subsequently, alter congruency ERP effects to facial expressions. PMID:26578938

  7. Cognitive tasks during expectation affect the congruency ERP effects to facial expressions.

    PubMed

    Lin, Huiyan; Schulz, Claudia; Straube, Thomas

    2015-01-01

    Expectancy congruency has been shown to modulate event-related potentials (ERPs) to emotional stimuli, such as facial expressions. However, it is unknown whether the congruency ERP effects to facial expressions can be modulated by cognitive manipulations during stimulus expectation. To this end, electroencephalography (EEG) was recorded while participants viewed (neutral and fearful) facial expressions. Each trial started with a cue, predicting a facial expression, followed by an expectancy interval without any cues and subsequently the face. In half of the trials, participants had to solve a cognitive task in which different letters were presented for target letter detection during the expectancy interval. Furthermore, facial expressions were congruent with the cues in 75% of all trials. ERP results revealed that for fearful faces, the cognitive task during expectation altered the congruency effect in N170 amplitude; congruent compared to incongruent fearful faces evoked larger N170 in the non-task condition but the congruency effect was not evident in the task condition. Regardless of facial expression, the congruency effect was generally altered by the cognitive task during expectation in P3 amplitude; the amplitudes were larger for incongruent compared to congruent faces in the non-task condition but the congruency effect was not shown in the task condition. The findings indicate that cognitive tasks during expectation reduce the processing of expectation and subsequently, alter congruency ERP effects to facial expressions.

  8. Causal premise semantics.

    PubMed

    Kaufmann, Stefan

    2013-08-01

    The rise of causality and the attendant graph-theoretic modeling tools in the study of counterfactual reasoning has had resounding effects in many areas of cognitive science, but it has thus far not permeated the mainstream in linguistic theory to a comparable degree. In this study I show that a version of the predominant framework for the formal semantic analysis of conditionals, Kratzer-style premise semantics, allows for a straightforward implementation of the crucial ideas and insights of Pearl-style causal networks. I spell out the details of such an implementation, focusing especially on the notions of intervention on a network and backtracking interpretations of counterfactuals.

  9. Semantic Webs and Study Skills.

    ERIC Educational Resources Information Center

    Hoover, John J.; Rabideau, Debra K.

    1995-01-01

    Principles for ensuring effective use of semantic webbing in meeting study skill needs of students with learning problems are noted. Important study skills are listed, along with suggested semantic web topics for which subordinate ideas may be developed. Two semantic webs are presented, illustrating the study skills of multiple choice test-taking…

  10. Semantic Search of Web Services

    ERIC Educational Resources Information Center

    Hao, Ke

    2013-01-01

    This dissertation addresses semantic search of Web services using natural language processing. We first survey various existing approaches, focusing on the fact that the expensive costs of current semantic annotation frameworks result in limited use of semantic search for large scale applications. We then propose a vector space model based service…
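The vector space model mentioned above can be sketched minimally: service descriptions become term-frequency vectors, and a query is ranked against them by cosine similarity. The two services and their descriptions are hypothetical examples, not from the dissertation:

```python
import math

# Minimal vector space model sketch for service search: term-frequency
# vectors plus cosine similarity. Service descriptions are hypothetical.

def tf_vector(text):
    """Bag-of-words term-frequency vector for a text."""
    vec = {}
    for term in text.lower().split():
        vec[term] = vec.get(term, 0) + 1
    return vec

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

services = {
    "WeatherService": "returns current weather forecast for a city",
    "StockService":   "returns current stock price quote for a ticker",
}

def rank(query):
    """Service names ordered by descending similarity to the query."""
    q = tf_vector(query)
    return sorted(services, key=lambda s: cosine(q, tf_vector(services[s])), reverse=True)

print(rank("weather forecast"))  # WeatherService ranks first
```

A full semantic-search pipeline would add natural language processing (stemming, synonym expansion, weighting such as TF-IDF) on top of this raw term-matching baseline.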

  11. Perceived synchrony for realistic and dynamic audiovisual events.

    PubMed

    Eg, Ragnhild; Behne, Dawn M

    2015-01-01

    In well-controlled laboratory experiments, researchers have found that humans can perceive delays between auditory and visual signals as short as 20 ms. Conversely, other experiments have shown that humans can tolerate audiovisual asynchrony that exceeds 200 ms. This seeming contradiction in human temporal sensitivity can be attributed to a number of factors such as experimental approaches and precedence of the asynchronous signals, along with the nature, duration, location, complexity and repetitiveness of the audiovisual stimuli, and even individual differences. In order to better understand how temporal integration of audiovisual events occurs in the real world, we need to close the gap between the experimental setting and the complex setting of everyday life. With this work, we aimed to contribute one brick to the bridge that will close this gap. We compared perceived synchrony for long-running and eventful audiovisual sequences to shorter sequences that contain a single audiovisual event, for three types of content: action, music, and speech. The resulting windows of temporal integration showed that participants were better at detecting asynchrony for the longer stimuli, possibly because the long-running sequences contain multiple corresponding events that offer audiovisual timing cues. Moreover, the points of subjective simultaneity differ between content types, suggesting that the nature of a visual scene could influence the temporal perception of events. An expected outcome from this type of experiment was the rich variation among participants' distributions and the derived points of subjective simultaneity. Hence, the designs of similar experiments call for more participants than traditional psychophysical studies. Heeding this caution, we conclude that existing theories on multisensory perception are ready to be tested on more natural and representative stimuli.

  12. Boosting pitch encoding with audiovisual interactions in congenital amusia.

    PubMed

    Albouy, Philippe; Lévêque, Yohana; Hyde, Krista L; Bouchet, Patrick; Tillmann, Barbara; Caclin, Anne

    2015-01-01

    The combination of information across senses can enhance perception, as revealed for example by decreased reaction times or improved stimulus detection. Interestingly, these facilitatory effects have been shown to be maximal when responses to unisensory modalities are weak. The present study investigated whether audiovisual facilitation can be observed in congenital amusia, a music-specific disorder primarily ascribed to impairments of pitch processing. Amusic individuals and their matched controls performed two tasks. In Task 1, they were required to detect auditory, visual, or audiovisual stimuli as rapidly as possible. In Task 2, they were required to detect as accurately and as rapidly as possible a pitch change within an otherwise monotonic 5-tone sequence that was presented either only auditorily (A condition), or simultaneously with a temporally congruent, but otherwise uninformative visual stimulus (AV condition). Results of Task 1 showed that amusics exhibit typical auditory and visual detection, and typical audiovisual integration capacities: both amusics and controls exhibited shorter response times for audiovisual stimuli than for either auditory stimuli or visual stimuli. Results of Task 2 revealed that both groups benefited from simultaneous uninformative visual stimuli to detect pitch changes: accuracy was higher and response times shorter in the AV condition than in the A condition. The audiovisual improvements of response times were observed for different pitch interval sizes depending on the group. These results suggest that both typical listeners and amusic individuals can benefit from multisensory integration to improve their pitch processing abilities and that this benefit varies as a function of task difficulty. These findings constitute a first step towards exploiting multisensory paradigms to reduce pitch-related deficits in congenital amusia, notably by suggesting that audiovisual paradigms are effective in an appropriate …

  15. Semantator: semantic annotator for converting biomedical text to linked data.

    PubMed

    Tao, Cui; Song, Dezhao; Sharma, Deepak; Chute, Christopher G

    2013-10-01

    More than 80% of biomedical data is embedded in plain text. The unstructured nature of these text-based documents makes it challenging to easily browse and query the data of interest in them. One approach to facilitate browsing and querying biomedical text is to convert the plain text to a linked web of data, i.e., converting data originally in free text to structured formats with defined meta-level semantics. In this paper, we introduce Semantator (Semantic Annotator), a semantic-web-based environment for annotating data of interest in biomedical documents, browsing and querying the annotated data, and interactively refining annotation results if needed. Through Semantator, information of interest can be annotated either manually or semi-automatically using plug-in information extraction tools. The annotated results are stored in RDF and can be queried using the SPARQL query language. In addition, semantic reasoners can be directly applied to the annotated data for consistency checking and knowledge inference. Semantator has been released online and has been used by the biomedical ontology community, which provided positive feedback. Our evaluation results indicated that (1) Semantator can perform the annotation functionalities as designed; (2) Semantator can be adopted in real applications in clinical and translational research; and (3) the annotated results using Semantator can be easily used in Semantic-web-based reasoning tools for further inference.
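    The storage and query model described above can be illustrated generically. The sketch below is not Semantator's actual API; it only mimics the underlying idea of annotations held as subject-predicate-object triples (as in RDF) and retrieved with a SPARQL-like pattern query. All annotation names are invented.

    ```python
    # Minimal triple-pattern matcher illustrating RDF-style storage
    # and SPARQL-style querying of text annotations (hypothetical names).

    def match(triples, pattern):
        """Return variable bindings for an (s, p, o) pattern.
        None is a wildcard; strings starting with '?' are variables."""
        results = []
        for s, p, o in triples:
            binding = {}
            ok = True
            for slot, val in zip(pattern, (s, p, o)):
                if slot is None:
                    continue
                if isinstance(slot, str) and slot.startswith("?"):
                    binding[slot] = val
                elif slot != val:
                    ok = False
                    break
            if ok:
                results.append(binding)
        return results

    # Annotations a tool of this kind might emit for a clinical note:
    triples = [
        (":note1_span12", "rdf:type", ":Medication"),
        (":note1_span12", ":hasText", "aspirin"),
        (":note1_span40", "rdf:type", ":Diagnosis"),
        (":note1_span40", ":hasText", "hypertension"),
    ]

    # Analogue of: SELECT ?s WHERE { ?s rdf:type :Medication }
    print(match(triples, ("?s", "rdf:type", ":Medication")))
    # [{'?s': ':note1_span12'}]
    ```

    A real deployment would use an RDF store and a SPARQL engine rather than this in-memory list, but the query shape is the same.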

  16. Prefrontal Neuronal Responses during Audiovisual Mnemonic Processing

    PubMed Central

    Hwang, Jaewon

    2015-01-01

    During communication we combine auditory and visual information. Neurophysiological research in nonhuman primates has shown that single neurons in ventrolateral prefrontal cortex (VLPFC) exhibit multisensory responses to faces and vocalizations presented simultaneously. However, whether VLPFC is also involved in maintaining those communication stimuli in working memory or combining stored information across different modalities is unknown, although its human homolog, the inferior frontal gyrus, is known to be important in integrating verbal information from auditory and visual working memory. To address this question, we recorded from VLPFC while rhesus macaques (Macaca mulatta) performed an audiovisual working memory task. Unlike traditional match-to-sample/nonmatch-to-sample paradigms, which use unimodal memoranda, our nonmatch-to-sample task used dynamic movies consisting of both facial gestures and the accompanying vocalizations. For the nonmatch conditions, a change in the auditory component (vocalization), the visual component (face), or both components had to be detected. Our results show that VLPFC neurons are activated by stimulus and task factors: while some neurons simply responded to a particular face or a vocalization regardless of the task period, others exhibited activity patterns typically related to working memory such as sustained delay activity and match enhancement/suppression. In addition, we found neurons that detected the component change during the nonmatch period. Interestingly, some of these neurons were sensitive to the change of both components and therefore combined information from auditory and visual working memory. These results suggest that VLPFC is not only involved in the perceptual processing of faces and vocalizations but also in their mnemonic processing. PMID:25609614

  17. Semantator: annotating clinical narratives with semantic web ontologies.

    PubMed

    Song, Dezhao; Chute, Christopher G; Tao, Cui

    2012-01-01

    To facilitate clinical research, clinical data needs to be stored in a machine-processable and understandable way. Manually annotating clinical data is time-consuming. Automatic approaches (e.g., Natural Language Processing systems) have been adopted to convert such data into structured formats; however, the quality of such automatically extracted data may not always be satisfactory. In this paper, we propose Semantator, a semi-automatic tool for document annotation with Semantic Web ontologies. With a loaded free-text document and an ontology, Semantator supports the creation/deletion of ontology instances for any document fragment, linking/disconnecting instances with the properties in the ontology, and also enables automatic annotation by connecting to the NCBO annotator and cTAKES. By representing annotations in Semantic Web standards, Semantator supports reasoning based upon the underlying semantics of the owl:disjointWith and owl:equivalentClass predicates. We present discussions based on user experiences of using Semantator.

  18. Environmental Attitudes Semantic Differential.

    ERIC Educational Resources Information Center

    Mehne, Paul R.; Goulard, Cary J.

    This booklet is an evaluation instrument which utilizes semantic differential data to assess environmental attitudes. Twelve concepts are included: regulated access to beaches, urban planning, dune vegetation, wetlands, future cities, reclaiming wetlands for building development, city parks, commercial development of beaches, existing cities,…

  19. Assertiveness through Semantics.

    ERIC Educational Resources Information Center

    Zuercher, Nancy T.

    1983-01-01

    Suggests that connotations of assertiveness do not convey all of its meanings, particularly the components of positive feelings, communication, and cooperation. The application of semantics can help restore the balance. Presents a model for differentiating assertive behavior and clarifying the definition. (JAC)

  20. Latent Semantic Analysis.

    ERIC Educational Resources Information Center

    Dumais, Susan T.

    2004-01-01

    Presents a literature review that covers the following topics related to Latent Semantic Analysis (LSA): (1) LSA overview; (2) applications of LSA, including information retrieval (IR), information filtering, cross-language retrieval, and other IR-related LSA applications; (3) modeling human memory, including the relationship of LSA to other…
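    The core computation behind LSA can be sketched in a few lines: build a term-document count matrix, take a truncated SVD, and compare terms in the resulting latent space. The toy corpus below is invented for illustration and is not drawn from the review.

    ```python
    # Toy LSA: truncated SVD of a term-document matrix, then
    # cosine similarity between terms in the latent space.
    import numpy as np

    terms = ["cat", "dog", "pet", "car", "engine"]
    # Columns are three tiny "documents": {cat pet}, {dog pet}, {car engine}
    A = np.array([
        [1, 0, 0],   # cat
        [0, 1, 0],   # dog
        [1, 1, 0],   # pet
        [0, 0, 1],   # car
        [0, 0, 1],   # engine
    ], dtype=float)

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = 2                          # keep the top-k latent dimensions
    term_vecs = U[:, :k] * s[:k]   # term coordinates in latent space

    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    idx = dict(zip(terms, range(len(terms))))
    # "cat" and "dog" never co-occur, but share a context word ("pet"),
    # so LSA places them close together; "cat" and "car" stay unrelated.
    print(cos(term_vecs[idx["cat"]], term_vecs[idx["dog"]]))  # ~1.0
    print(cos(term_vecs[idx["cat"]], term_vecs[idx["car"]]))  # ~0.0
    ```

    This capture of indirect (second-order) co-occurrence is what lets LSA model the memory and retrieval phenomena the review surveys.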

  1. Are Terminologies Semantically Uninteresting?

    ERIC Educational Resources Information Center

    Jacobson, Sven

    Some semanticists have argued that technical vocabulary or terminology is extralinguistic and therefore semantically uninteresting. However, no boundary exists in linguistic reality between terminology and ordinary vocabulary. Rather, terminologies and ordinary language exist on a continuum, and terminology is therefore a legitimate field for…

  2. Semantic Space Analyst

    2004-04-15

    The Semantic Space Analyst (SSA) is software for analyzing a text corpus, discovering relationships among terms, and allowing the user to explore that information in different ways. It includes features for displaying and laying out terms and relationships visually, for generating such maps from manual queries, and for discovering differences between corpora. Data can also be exported to Microsoft Excel.

  3. Semantic physical science

    PubMed Central

    2012-01-01

    The articles in this special issue arise from a workshop and symposium held in January 2012 ('Semantic Physical Science'). We invited people who shared our vision for the potential of the web to support chemical and related subjects. Other than the initial invitations, we have not exercised any control over the content of the contributed articles. PMID:22856527

  4. Universal Semantics in Translation

    ERIC Educational Resources Information Center

    Wang, Zhenying

    2009-01-01

    What and how we translate are questions often argued about. No matter what kind of answers one may give, priority in translation should be granted to meaning, especially those meanings that exist in all concerned languages. In this paper the author defines them as universal sememes, and the study of them as universal semantics, of which…

  5. Leader-follower value congruence in social responsibility and ethical satisfaction: a polynomial regression analysis.

    PubMed

    Kang, Seung-Wan; Byun, Gukdo; Park, Hun-Joon

    2014-12-01

    This paper presents empirical research into the relationship between leader-follower value congruence in social responsibility and the level of ethical satisfaction for employees in the workplace. 163 dyads were analyzed, each consisting of a team leader and an employee working at a large manufacturing company in South Korea. Following current methodological recommendations for congruence research, polynomial regression and response surface modeling methodologies were used to determine the effects of value congruence. Results indicate that leader-follower value congruence in social responsibility was positively related to the ethical satisfaction of employees. Furthermore, employees' ethical satisfaction was stronger when aligned with a leader with high social responsibility. The theoretical and practical implications are discussed.
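    The congruence methodology named above, polynomial regression with response surface analysis, can be sketched on synthetic data. The scores and the satisfaction function below are invented; this is a generic illustration of the Edwards-style quadratic model, not the study's data or code.

    ```python
    # Polynomial regression for congruence analysis:
    # z ~ b0 + b1*x + b2*y + b3*x^2 + b4*x*y + b5*y^2
    import numpy as np

    # Hypothetical leader (x) and follower (y) social-responsibility
    # scores on a 1-5 scale, crossed on a grid.
    x, y = np.meshgrid(np.arange(1.0, 6.0), np.arange(1.0, 6.0))
    x, y = x.ravel(), y.ravel()

    # Simulated ethical satisfaction that depends only on congruence:
    # it peaks when x == y and falls off with squared discrepancy.
    z = 5.0 - (x - y) ** 2

    X = np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])
    b, *_ = np.linalg.lstsq(X, z, rcond=None)

    # Response surface diagnostics: along the incongruence line
    # (y = -x) the surface curvature is b3 - b4 + b5; a negative
    # value means satisfaction drops as misfit grows.
    print(np.round(b, 3))                # recovers 5 - (x - y)^2
    print(round(b[3] - b[4] + b[5], 3))  # negative: congruence helps
    ```

    On real dyadic data the fit is not exact, and the surface-test statistics (slopes and curvatures along the congruence and incongruence lines) are evaluated with standard errors rather than read off directly.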

  6. Judging emotional congruency: Explicit attention to situational context modulates processing of facial expressions of emotion.

    PubMed

    Diéguez-Risco, Teresa; Aguado, Luis; Albert, Jacobo; Hinojosa, José Antonio

    2015-12-01

    The influence of explicit evaluative processes on the contextual integration of facial expressions of emotion was studied in a procedure that required the participants to judge the congruency of happy and angry faces with preceding sentences describing emotion-inducing situations. Judgments were faster on congruent trials in the case of happy faces and on incongruent trials in the case of angry faces. At the electrophysiological level, a congruency effect was observed in the face-sensitive N170 component that showed larger amplitudes on incongruent trials. An interactive effect of congruency and emotion appeared on the LPP (late positive potential), with larger amplitudes in response to happy faces that followed anger-inducing situations. These results show that the deliberate intention to judge the contextual congruency of facial expressions influences not only processes involved in affective evaluation such as those indexed by the LPP but also earlier processing stages that are involved in face perception.

  7. Unconscious context-specific proportion congruency effect in a stroop-like task.

    PubMed

    Panadero, A; Castellanos, M C; Tudela, P

    2015-01-01

    Cognitive control is a central topic of interest in psychology and cognitive neuroscience and has traditionally been associated with consciousness. However, recent research suggests that cognitive control may be unconscious in character. The main purpose of our study was to further explore this area of research focusing on the possibly unconscious nature of the conflict adaptation effect, specifically the context-specific proportion congruency effect (CSPCE), by using a masked Stroop-like task where the proportion of congruency was associated to various masks. We used electrophysiological measures to analyze the neural correlates of the CSPCE. Results showed evidence of an unconscious CSPCE in reaction times (RTs) and the N2 and P3 components. In addition, the P2 component evoked by both target and masks indicated that the proportion of congruency was processed earlier than the congruency between the color word and the ink color of the target. Taken together, our results provided evidence pointing to an unconscious CSPCE. PMID:25460239

  9. CLIMCONG: A framework-tool for assessing CLIMate CONGruency

    NASA Astrophysics Data System (ADS)

    Buras, Allan; Kölling, Christian; Menzel, Annette

    2016-04-01

    It is widely accepted that the anticipated elevational and latitudinal shifting of climate forces living organisms (including humans) to track these changes in space over a certain time. Due to the complexity of climate change, prediction of consequent migrations is a difficult procedure afflicted with many uncertainties. To simplify climate complexity and ease respective attempts, various approaches have aimed at classifying global climates. For instance, the frequently used Köppen-Geiger climate classification (Köppen, 1900) has been applied to predict the shift of climate zones throughout the 21st century (Rubel and Kottek, 2010). Another, more objective but also more complex, classification approach has recently been presented by Metzger et al. (2013). Though comprehensive, classifications have certain drawbacks: I) they often focus on few variables, II) they impose discrete borders at the margins of classes, and III) they rely on a subjective selection of an arbitrary number of classes. Ecological theory suggests that when only temperature and precipitation are considered (as in Köppen, 1900), particular climate features, e.g. radiation and plant water availability, may not be represented with sufficient precision. Furthermore, sharp boundaries among homogeneous classes do not reflect natural gradients. To overcome the aforementioned drawbacks, we here present CLIMCONG, a framework-tool for assessing climate congruency that quantitatively describes climate similarity through continua in space and time. CLIMCONG allows users to individually select variables for the calculation of climate congruency. By this, particular foci can be specified, depending on the actual research questions posed towards climate change. For instance, while ecologists focus on a multitude of parameters driving net ecosystem productivity, water managers may only be interested in variables related to drought extremes and water availability. Based on the chosen parameters, CLIMCONG determines congruency of …
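    The abstract does not spell out CLIMCONG's metric, but the general idea of a continuous congruency measure over user-selected climate variables can be sketched as follows. The sites, values, and similarity formula are all invented for illustration and are not CLIMCONG's actual algorithm.

    ```python
    # Continuous climate "congruency" between sites over user-chosen
    # variables, instead of assignment to discrete climate classes.
    import numpy as np

    # Rows: sites; columns: user-selected climate variables
    # (e.g. mean temperature [deg C], annual precipitation [mm], radiation).
    sites = ["site_A", "site_B", "site_C"]
    clim = np.array([
        [ 8.0,  700.0, 110.0],
        [ 8.5,  680.0, 112.0],
        [15.0, 1400.0, 160.0],
    ])

    # Standardize each variable so mm and deg C contribute comparably.
    z = (clim - clim.mean(axis=0)) / clim.std(axis=0)

    def congruency(i, j):
        """Continuous similarity in (0, 1]; 1 means identical climates."""
        return 1.0 / (1.0 + np.linalg.norm(z[i] - z[j]))

    print(congruency(0, 1))  # close to 1: A and B are climatically similar
    print(congruency(0, 2))  # much smaller: C differs on every variable
    ```

    Swapping the variable set changes the congruency surface, which is the point of user-selectable foci (e.g. drought-related variables only for water managers).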

  10. Audiovisual Aids and Techniques in Managerial and Supervisory Training.

    ERIC Educational Resources Information Center

    Rigg, Robinson P.

    An attempt is made to show the importance of modern audiovisual (AV) aids and techniques to management training. The first two chapters give the background to the present situation facing the training specialist. Chapter III considers the AV aids themselves in four main groups: graphic materials, display equipment which involves projection, and…

  11. Media Literacy and Audiovisual Languages: A Case Study from Belgium

    ERIC Educational Resources Information Center

    Van Bauwel, Sofie

    2008-01-01

    This article examines the use of media in the construction of a "new" language for children. We studied how children acquire and use media literacy skills through their engagement in an educational art project. This media literacy project is rooted in the realm of audiovisual media, within which children's sound and visual worlds are the focus of…

  12. Audiovisual Cues and Perceptual Learning of Spectrally Distorted Speech

    ERIC Educational Resources Information Center

    Pilling, Michael; Thomas, Sharon

    2011-01-01

    Two experiments investigate the effectiveness of audiovisual (AV) speech cues (cues derived from both seeing and hearing a talker speak) in facilitating perceptual learning of spectrally distorted speech. Speech was distorted through an eight channel noise-vocoder which shifted the spectral envelope of the speech signal to simulate the properties…

  13. School Building Design and Audio-Visual Resources.

    ERIC Educational Resources Information Center

    National Committee for Audio-Visual Aids in Education, London (England).

    The design of new schools should facilitate the use of audiovisual resources by ensuring that the materials used in the construction of the buildings provide adequate sound insulation and acoustical and viewing conditions in all learning spaces. The facilities to be considered are: electrical services; electronic services; light control and…

  14. Neural Development of Networks for Audiovisual Speech Comprehension

    ERIC Educational Resources Information Center

    Dick, Anthony Steven; Solodkin, Ana; Small, Steven L.

    2010-01-01

    Everyday conversation is both an auditory and a visual phenomenon. While visual speech information enhances comprehension for the listener, evidence suggests that the ability to benefit from this information improves with development. A number of brain regions have been implicated in audiovisual speech comprehension, but the extent to which the…

  15. The Audiovisual Temporal Binding Window Narrows in Early Childhood

    ERIC Educational Resources Information Center

    Lewkowicz, David J.; Flom, Ross

    2014-01-01

    Binding is key in multisensory perception. This study investigated the audio-visual (A-V) temporal binding window in 4-, 5-, and 6-year-old children (total N = 120). Children watched a person uttering a syllable whose auditory and visual components were either temporally synchronized or desynchronized by 366, 500, or 666 ms. They were asked…

  16. Segmentation of the Speaker's Face Region with Audiovisual Correlation

    NASA Astrophysics Data System (ADS)

    Liu, Yuyu; Sato, Yoichi

    The ability to find the speaker's face region in a video is useful for various applications. In this work, we develop a novel technique to find this region within different time windows, which is robust against changes of view, scale, and background. The main thrust of our technique is to integrate audiovisual correlation analysis into a video segmentation framework. We analyze the audiovisual correlation locally by computing quadratic mutual information between our audiovisual features. The computation of quadratic mutual information is based on the probability density functions estimated by kernel density estimation with adaptive kernel bandwidth. The results of this audiovisual correlation analysis are incorporated into graph cut-based video segmentation to obtain a globally optimal extraction of the speaker's face region. The setting of any heuristic threshold in this segmentation is avoided by learning the correlation distributions of speaker and background through expectation maximization. Experimental results demonstrate that our method can detect the speaker's face region accurately and robustly for different views, scales, and backgrounds.
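    The correlation measure named in this abstract, quadratic mutual information with kernel density (Parzen) estimates, can be sketched in its Euclidean-distance form. This is a generic textbook estimator applied to toy 1-D feature tracks, not the authors' implementation; in particular, a fixed kernel bandwidth stands in for their adaptive one, and normalization constants are omitted (they scale all three terms equally).

    ```python
    # Quadratic mutual information (Euclidean-distance form) between
    # two 1-D feature sequences, via Gaussian Parzen kernels.
    import numpy as np

    def qmi_ed(x, y, sigma=1.0):
        """I_ED = V_J - 2*V_C + V_M >= 0; larger means more dependence."""
        x = np.asarray(x, float)
        y = np.asarray(y, float)
        s2 = 2.0 * sigma ** 2  # convolved kernels: variance doubles
        gx = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * s2))
        gy = np.exp(-(y[:, None] - y[None, :]) ** 2 / (2 * s2))
        v_j = np.mean(gx * gy)                             # joint term
        v_m = np.mean(gx) * np.mean(gy)                    # product term
        v_c = np.mean(gx.mean(axis=1) * gy.mean(axis=1))   # cross term
        return v_j - 2.0 * v_c + v_m

    t = np.linspace(0.0, 5.0, 30)
    audio = np.sin(t)             # toy "audio" feature track
    video_sync = np.sin(t)        # visual feature that follows the audio
    video_rand = np.cos(7.0 * t)  # visual feature unrelated to the audio

    # Correlated audiovisual tracks yield a larger QMI than unrelated ones,
    # which is the local cue the segmentation framework exploits.
    print(qmi_ed(audio, video_sync) > qmi_ed(audio, video_rand))  # True
    ```

    In the paper this statistic is computed per region and time window and fed into the graph-cut energy; the sketch above only shows the dependence measure itself.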

  17. Audiovisual Vowel Monitoring and the Word Superiority Effect in Children

    ERIC Educational Resources Information Center

    Fort, Mathilde; Spinelli, Elsa; Savariaux, Christophe; Kandel, Sonia

    2012-01-01

    The goal of this study was to explore whether viewing the speaker's articulatory gestures contributes to lexical access in children (ages 5-10) and in adults. We conducted a vowel monitoring task with words and pseudo-words in audio-only (AO) and audiovisual (AV) contexts with white noise masking the acoustic signal. The results indicated that…

  18. Selected Audio-Visual Materials for Consumer Education. [New Version.

    ERIC Educational Resources Information Center

    Johnston, William L.

    Ninety-two films, filmstrips, multi-media kits, slides, and audio cassettes, produced between 1964 and 1974, are listed in this selective annotated bibliography on consumer education. The major portion of the bibliography is devoted to films and filmstrips. The main topics of the audio-visual materials include purchasing, advertising, money…

  19. The Role of Audiovisual Mass Media News in Language Learning

    ERIC Educational Resources Information Center

    Bahrani, Taher; Sim, Tam Shu

    2011-01-01

    The present paper focuses on the role of audio/visual mass media news in language learning. In this regard, the two important issues regarding the selection and preparation of TV news for language learning are the content of the news and the linguistic difficulty. Content is described as whether the news is specialized or universal. Universal…

  20. Audiovisual Integration in Noise by Children and Adults

    ERIC Educational Resources Information Center

    Barutchu, Ayla; Danaher, Jaclyn; Crewther, Sheila G.; Innes-Brown, Hamish; Shivdasani, Mohit N.; Paolini, Antonio G.

    2010-01-01

    The aim of this study was to investigate the development of multisensory facilitation in primary school-age children under conditions of auditory noise. Motor reaction times and accuracy were recorded from 8-year-olds, 10-year-olds, and adults during auditory, visual, and audiovisual detection tasks. Auditory signal-to-noise ratios (SNRs) of 30-,…

  1. The Audio-Visual Equipment Directory. Seventeenth Edition.

    ERIC Educational Resources Information Center

    Herickes, Sally, Ed.

    The following types of audiovisual equipment are catalogued: 8 mm. and 16 mm. motion picture projectors, filmstrip and sound filmstrip projectors, slide projectors, random access projection equipment, opaque, overhead, and micro-projectors, record players, special purpose projection equipment, audio tape recorders and players, audio tape…

  2. Audio-Visual Equipment Depreciation. RDU-75-07.

    ERIC Educational Resources Information Center

    Drake, Miriam A.; Baker, Martha

    A study was conducted at Purdue University to gather operational and budgetary planning data for the Libraries and Audiovisual Center. The objectives were: (1) to complete a current inventory of equipment including year of purchase, costs, and salvage value; (2) to determine useful life data for general classes of equipment; and (3) to determine…

  3. Skill Dependent Audiovisual Integration in the Fusiform Induces Repetition Suppression

    PubMed Central

    McNorgan, Chris; Booth, James R.

    2015-01-01

    Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. PMID:25585276

  4. Selected Bibliography and Audiovisual Materials for Environmental Education.

    ERIC Educational Resources Information Center

    Minnesota State Dept. of Education, St. Paul. Div. of Instruction.

    This guide to resource materials on environmental education is in two sections: 1) Selected Bibliography of Printed Materials, compiled in April, 1970; and, 2) Audio-Visual materials, Films and Filmstrips, compiled in February, 1971. 99 book annotations are given with an indicator of elementary, junior or senior high school levels. Other book…

  5. Sur Quatre Methodes Audio-Visuelles (On Four Audiovisual Methods)

    ERIC Educational Resources Information Center

    Porquier, Remy; Vives, Robert

    1974-01-01

    This is a critical examination of four audiovisual methods for the teaching of French as a Foreign Language. The methods share as a common basis the interrelationship of image, dialogue, and situation, and all give grammar priority over vocabulary. (Text is in French.) (AM)

  6. Audiovisual Fundamentals; Basic Equipment Operation and Simple Materials Production.

    ERIC Educational Resources Information Center

    Bullard, John R.; Mether, Calvin E.

    A guide illustrated with simple sketches explains the functions and step-by-step uses of audiovisual (AV) equipment. Principles of projection, audio, AV equipment, lettering, limited-quantity and quantity duplication, and materials preservation are outlined. Apparatus discussed include overhead, opaque, slide-filmstrip, and multiple-loading slide…

  7. An Audio-Visual Lecture Course in Russian Culture

    ERIC Educational Resources Information Center

    Leighton, Lauren G.

    1977-01-01

    An audio-visual course in Russian culture is given at Northern Illinois University. A collection of 4,000-5,000 color slides is the basis for the course, with lectures focused on literature, philosophy, religion, politics, art and crafts. Acquisition, classification, storage and presentation of slides, and organization of lectures are discussed. (CHK)

  8. Searching AVLINE for Curriculum-Related Audiovisual Instructional Materials.

    ERIC Educational Resources Information Center

    Bridgman, Charles F.; Suter, Emanuel

    1979-01-01

    Ways in which the National Library of Medicine's online data file of audiovisual instructional materials (AVLINE) can be searched are described. The search approaches were developed with the assistance of data analysts at NLM trained in reference services. AVLINE design, search strategies, and acquisition of the materials are reported. (LBH)

  9. Guide to Audiovisual Terminology. Product Information Supplement, Number 6.

    ERIC Educational Resources Information Center

    Trzebiatowski, Gregory, Ed.

    1968-01-01

    The terms appearing in this glossary have been specifically selected for use by educators from a larger text, which was prepared by the Commission on Definition and Terminology of the Department of Audiovisual Instruction of the National Education Association. Specialized areas covered in the glossary include audio reproduction, audiovisual…

  10. Can conceptual congruency effects between number, time, and space be accounted for by polarity correspondence?

    PubMed

    Santiago, Julio; Lakens, Daniël

    2015-03-01

    Conceptual congruency effects have been interpreted as evidence for the idea that the representations of abstract conceptual dimensions (e.g., power, affective valence, time, number, importance) rest on more concrete dimensions (e.g., space, brightness, weight). However, an alternative theoretical explanation based on the notion of polarity correspondence has recently received empirical support in the domains of valence and morality, which are related to vertical space (e.g., good things are up). In the present study we provide empirical arguments against the applicability of the polarity correspondence account to congruency effects in two conceptual domains related to lateral space: number and time. Following earlier research, we varied the polarity of the response dimension (left-right) by manipulating keyboard eccentricity. In a first experiment we successfully replicated the congruency effect between vertical and lateral space and its interaction with response eccentricity. We then examined whether this modulation of a concrete-concrete congruency effect can be extended to two types of concrete-abstract effects, those between left-right space and number (in both parity and magnitude judgment tasks), and temporal reference. In all three tasks response eccentricity failed to modulate the congruency effects. We conclude that polarity correspondence does not provide an adequate explanation of conceptual congruency effects in the domains of number and time.

  11. The effect of visual apparent motion on audiovisual simultaneity.

    PubMed

    Kwon, Jinhwan; Ogawa, Ken-ichiro; Miyake, Yoshihiro

    2014-01-01

    Visual motion information from dynamic environments is important in multisensory temporal perception. However, it is unclear how visual motion information influences the integration of multisensory temporal perceptions. We investigated whether visual apparent motion affects audiovisual temporal perception. Visual apparent motion is a phenomenon in which two flashes presented in sequence in different positions are perceived as continuous motion. Across three experiments, participants performed temporal order judgment (TOJ) tasks. Experiment 1 was a TOJ task conducted in order to assess audiovisual simultaneity during perception of apparent motion. The results showed that the point of subjective simultaneity (PSS) was shifted toward a sound-lead stimulus, and the just noticeable difference (JND) was reduced compared with a normal TOJ task with a single flash. This indicates that visual apparent motion affects audiovisual simultaneity and improves temporal discrimination in audiovisual processing. Experiment 2 was a TOJ task conducted in order to remove the influence of the amount of flash stimulation from Experiment 1. The PSS and JND during perception of apparent motion were almost identical to those in Experiment 1, but differed from those for successive perception when long temporal intervals were included between two flashes without motion. This showed that the result obtained under the apparent motion condition was unaffected by the amount of flash stimulation. Because apparent motion was produced by a constant interval between two flashes, the results may be accounted for by specific prediction. In Experiment 3, we eliminated the influence of prediction by randomizing the intervals between the two flashes. However, the PSS and JND did not differ from those in Experiment 1. It became clear that the results obtained for the perception of visual apparent motion were not attributable to prediction. 
Our findings suggest that visual apparent motion changes temporal…
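The PSS and JND reported above are standard temporal-order-judgment measures: a psychometric function is fit to the proportion of "visual first" responses across stimulus onset asynchronies (SOAs), the 50% point gives the PSS, and the 25%-75% spread gives the JND. The following is only an illustrative sketch of that computation, not the study's analysis; the logistic model, grid-search fit, and data values are all assumptions:

```python
import math

def logistic(soa, pss, slope):
    """Probability of a "visual first" response at a given SOA (ms)."""
    return 1.0 / (1.0 + math.exp(-(soa - pss) / slope))

def fit_toj(soas, p_visual_first):
    """Crude grid-search fit of a logistic psychometric function.
    Returns (PSS, JND): the SOA of 50% "visual first" responses, and
    half the 25%-75% distance, which for a logistic is slope * ln 3."""
    best = None
    for pss in range(-100, 101):        # candidate PSS values, ms
        for step in range(1, 201):      # candidate slopes, 0.5..100 ms
            slope = step * 0.5
            err = sum((logistic(s, pss, slope) - p) ** 2
                      for s, p in zip(soas, p_visual_first))
            if best is None or err < best[0]:
                best = (err, pss, slope)
    _, pss, slope = best
    return float(pss), slope * math.log(3)

# Hypothetical data: proportion of "visual first" responses per SOA
# (negative SOA = sound leads).
soas = [-100, -50, -25, 0, 25, 50, 100]
props = [0.05, 0.15, 0.30, 0.55, 0.80, 0.90, 0.98]
pss, jnd = fit_toj(soas, props)
```

A PSS shifted toward sound-lead values and a reduced JND are exactly the two quantities the experiments above compare across conditions.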

  12. Context-specific effects of musical expertise on audiovisual integration.

    PubMed

    Bishop, Laura; Goebl, Werner

    2014-01-01

    Ensemble musicians exchange auditory and visual signals that can facilitate interpersonal synchronization. Musical expertise improves how precisely auditory and visual signals are perceptually integrated and increases sensitivity to asynchrony between them. Whether expertise improves sensitivity to audiovisual asynchrony in all instrumental contexts or only in those using sound-producing gestures that are within an observer's own motor repertoire is unclear. This study tested the hypothesis that musicians are more sensitive to audiovisual asynchrony in performances featuring their own instrument than in performances featuring other instruments. Short clips were extracted from audio-video recordings of clarinet, piano, and violin performances and presented to highly-skilled clarinetists, pianists, and violinists. Clips either maintained the audiovisual synchrony present in the original recording or were modified so that the video led or lagged behind the audio. Participants indicated whether the audio and video channels in each clip were synchronized. The range of asynchronies most often endorsed as synchronized was assessed as a measure of participants' sensitivities to audiovisual asynchrony. A positive relationship was observed between musical training and sensitivity, with data pooled across stimuli. While participants across expertise groups detected asynchronies most readily in piano stimuli and least readily in violin stimuli, pianists showed significantly better performance for piano stimuli than for either clarinet or violin. These findings suggest that, to an extent, the effects of expertise on audiovisual integration can be instrument-specific; however, the nature of the sound-producing gestures that are observed has a substantial effect on how readily asynchrony is detected as well. PMID:25324819

  15. Semantic interpretation of nominalizations

    SciTech Connect

    Hull, R.D.; Gomez, F.

    1996-12-31

    A computational approach to the semantic interpretation of nominalizations is described. Interpretation of nominalizations involves three tasks: deciding whether the nominalization is being used in a verbal or non-verbal sense; disambiguating the nominalized verb when a verbal sense is used; and determining the fillers of the thematic roles of the verbal concept or predicate of the nominalization. A verbal sense can be recognized by the presence of modifiers that represent the arguments of the verbal concept. It is these same modifiers which provide the semantic clues to disambiguate the nominalized verb. In the absence of explicit modifiers, heuristics are used to discriminate between verbal and non-verbal senses. A correspondence between verbs and their nominalizations is exploited so that only a small amount of additional knowledge is needed to handle the nominal form. These methods are tested in the domain of encyclopedic texts and the results are shown.

  16. Living With Semantic Dementia

    PubMed Central

    Sage, Karen; Wilkinson, Ray; Keady, John

    2014-01-01

    Semantic dementia is a variant of frontotemporal dementia and is a recently recognized diagnostic condition. There has been some research quantitatively examining care partner stress and burden in frontotemporal dementia. There are, however, few studies exploring the subjective experiences of family members caring for those with frontotemporal dementia. Increased knowledge of such experiences would allow service providers to tailor intervention, support, and information better. We used a case study design, with thematic narrative analysis applied to interview data, to describe the experiences of a wife and son caring for a husband/father with semantic dementia. Using this approach, we identified four themes: (a) living with routines, (b) policing and protecting, (c) making connections, and (d) being adaptive and flexible. Each of these themes was shared and extended, with the importance of routines in everyday life highlighted. The implications for policy, practice, and research are discussed. PMID:24532121

  17. Practical Semantic Astronomy

    NASA Astrophysics Data System (ADS)

    Graham, Matthew; Gray, N.; Burke, D.

    2010-01-01

    Many activities in the era of data-intensive astronomy are predicated upon some transference of domain knowledge and expertise from human to machine. The semantic infrastructure required to support this is no longer a pipe dream of computer science but a set of practical engineering challenges, more concerned with deployment and performance details than AI abstractions. The application of such ideas promises to help in such areas as contextual data access, exploiting distributed annotation and heterogeneous sources, and intelligent data dissemination and discovery. In this talk, we will review the status and use of semantic technologies in astronomy, particularly to address current problems in astroinformatics, with such projects as SKUA and AstroCollation.

  18. Live Social Semantics

    NASA Astrophysics Data System (ADS)

    Alani, Harith; Szomszor, Martin; Cattuto, Ciro; van den Broeck, Wouter; Correndo, Gianluca; Barrat, Alain

    Social interactions are one of the key factors in the success of conferences and similar community gatherings. This paper describes a novel application that integrates data from the semantic web, online social networks, and a real-world contact sensing platform. This application was successfully deployed at ESWC09, and actively used by 139 people. Personal profiles of the participants were automatically generated using several Web 2.0 systems and semantic academic data sources, and integrated in real-time with face-to-face contact networks derived from wearable sensors. Integration of all these heterogeneous data layers made it possible to offer various services to conference attendees to enhance their social experience, such as visualisation of contact data and a site to explore and connect with other participants. This paper describes the architecture of the application, the services we provided, and the results we achieved in this deployment.

  19. Audio/Visual Aids: A Study of the Effect of Audio/Visual Aids on the Comprehension Recall of Students.

    ERIC Educational Resources Information Center

    Bavaro, Sandra

    A study investigated whether the use of audio/visual aids had an effect upon comprehension recall. Thirty fourth-grade students from an urban public school were randomly divided into two equal samples of 15. One group was given a story to read (print only), while the other group viewed a filmstrip of the same story, thereby utilizing audio/visual…

  20. Congruency of body-related information induces somatosensory reorganization.

    PubMed

    Cardini, Flavia; Longo, Matthew R

    2016-04-01

    Chronic pain and impaired tactile sensitivity are frequently associated with "blurred" representations in the somatosensory cortex. The factors that produce such somatosensory blurring, however, remain poorly understood. We manipulated visuo-tactile congruence to investigate its role in promoting somatosensory reorganization. To this aim we used the mirror box illusion that produced in participants the subjective feeling of looking directly at their left hand, though they were seeing the reflection of their right hand. Simultaneous touches were applied to the middle or ring finger of each hand. In one session, the same fingers were touched (for example both middle fingers), producing a congruent percept; in the other session different fingers were touched, producing an incongruent percept. In the somatosensory system, suppressive interactions between adjacent stimuli are an index of intracortical inhibitory function. After each congruent and incongruent session, we recorded somatosensory evoked potential (SEPs) elicited by electrocutaneous stimulation of the left ring and middle fingers, either individually or simultaneously. A somatosensory suppression index (SSI) was calculated as the difference in amplitude between the sum of potentials evoked by the two individually stimulated fingers and the potentials evoked by simultaneous stimulation of both fingers. This SSI can be taken as an index of the strength of inhibitory interactions and consequently can provide a measure of how distinct the representations of the two fingers are. Results showed stronger SSI in the P100 component after congruent than incongruent stimulation, suggesting the key role of congruent sensory information about the body in inducing somatosensory reorganization. PMID:26902158
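The SSI described in this abstract is a simple arithmetic contrast over SEP amplitudes. A minimal sketch of the computation (the amplitude values below are hypothetical, not data from the study):

```python
def suppression_index(amp_finger1, amp_finger2, amp_both):
    """Somatosensory suppression index: sum of the two single-finger
    SEP amplitudes minus the amplitude evoked by stimulating both
    fingers simultaneously. Larger values indicate stronger
    suppressive (inhibitory) interactions, i.e. more distinct
    finger representations."""
    return (amp_finger1 + amp_finger2) - amp_both

# Hypothetical P100 amplitudes in microvolts:
ssi_congruent = suppression_index(4.2, 3.8, 6.5)    # stronger suppression
ssi_incongruent = suppression_index(4.2, 3.8, 7.6)  # weaker suppression
```

On this reading, the reported result is that the congruent-session SSI exceeds the incongruent-session SSI in the P100 window.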

  1. Infants' sensitivity to the congruence of others' emotions and actions.

    PubMed

    Hepach, Robert; Westermann, Gert

    2013-05-01

    As humans, we are attuned to the moods and emotions of others. This understanding of emotions enables us to interpret other people's actions on the basis of their emotional displays. However, the development of this capacity is not well understood. Here we show a developmental pattern in 10- and 14-month-old infants' sensitivity to others' emotions and actions. Infants were shown video clips in which happy or angry actors performed a positive action (patting a toy tiger) or a negative action (thumping the toy tiger). Only 14-month-olds, but not 10-month-olds, showed selectively greater sympathetic activity (i.e., pupil dilation) both when an angry actor performed the positive action and when a happy actor performed the negative action, in contrast to the actors performing the actions congruent with their displayed emotions. These results suggest that at the beginning of the second year of life, infants become sensitive to the congruence of other people's emotions and actions, indicating an emerging abstract concept of emotions during infancy. The results are discussed in light of previous research on emotion understanding during infancy. PMID:23454359

  2. Complex Semantic Networks

    NASA Astrophysics Data System (ADS)

    Teixeira, G. M.; Aguiar, M. S. F.; Carvalho, C. F.; Dantas, D. R.; Cunha, M. V.; Morais, J. H. M.; Pereira, H. B. B.; Miranda, J. G. V.

    Verbal language is a dynamic mental process. Ideas emerge by means of the selection of words from subjective and individual characteristics throughout the oral discourse. The goal of this work is to characterize the complex network of word associations that emerges from an oral discourse on a given topic. To this end, the concepts of associative incidence and fidelity were elaborated; together they represent the probability of occurrence of pairs of words in the same sentence across the whole oral discourse. Semantic networks of word associations were constructed, in which words are represented as nodes and an edge is created when the incidence-fidelity index between a pair of words exceeds a numerical limit (0.001). Twelve oral discourses were studied. The networks generated from these discourses show behavior typical of complex networks; their indices were calculated and their topologies characterized. The indices of the networks obtained at each incidence-fidelity limit exhibit a critical value at which the semantic network has maximum conceptual information and minimum residual associations. Semantic networks generated at this incidence-fidelity limit depict a pattern of hierarchical classes that represent the different contexts used in the oral discourse.
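As a rough illustration of the construction described above, such a network can be built from sentence-level co-occurrence probabilities. The index below is a simplified stand-in for the paper's incidence-fidelity measure (whose exact formula is not given here), and the threshold and example sentences are invented:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_network(sentences, threshold=0.001):
    """Build a word-association network: nodes are words, and an edge
    joins two words when their probability of co-occurring in the
    same sentence (a stand-in for the incidence-fidelity index)
    exceeds `threshold`."""
    pair_counts = Counter()
    n_sentences = len(sentences)
    for sentence in sentences:
        words = sorted(set(sentence.lower().split()))
        pair_counts.update(combinations(words, 2))
    edges = {pair: count / n_sentences
             for pair, count in pair_counts.items()
             if count / n_sentences > threshold}
    nodes = {word for pair in edges for word in pair}
    return nodes, edges

# Invented toy discourse of three sentences:
sentences = [
    "the speaker returns to the main topic",
    "the main topic frames every new idea",
    "every idea links back to the topic",
]
nodes, edges = cooccurrence_network(sentences, threshold=0.5)
```

Raising the threshold toward the critical value the abstract describes prunes residual associations while keeping the conceptually central words connected.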

  3. Early Stages of Sensory Processing, but Not Semantic Integration, Are Altered in Dyslexic Adults

    PubMed Central

    Silva, Patrícia B.; Ueki, Karen; Oliveira, Darlene G.; Boggio, Paulo S.; Macedo, Elizeu C.

    2016-01-01

    The aim of this study was to verify which stages of language processing are impaired in individuals with dyslexia. For this, a visual-auditory crossmodal task with semantic judgment was used. The P100 potentials were chosen, related to visual processing and initial integration, and N400 potentials related to semantic processing. Based on visual-auditory crossmodal studies, it is understood that dyslexic individuals present impairments in the integration of these two types of tasks and impairments in processing spoken and musical auditory information. The present study sought to investigate and compare the performance of 32 adult participants (14 individuals with dyslexia), in semantic processing tasks in two situations with auditory stimuli: sentences and music, with integrated visual stimuli (pictures). From the analysis of the accuracy, both the sentence and the music blocks showed significant effects on the congruency variable, with both groups having higher scores for the incongruent items than for the congruent ones. Furthermore, there was also a group effect when the priming was music, with the dyslexic group showing an inferior performance to the control group, demonstrating greater impairments in processing when the priming was music. Regarding the reaction time variable, a group effect in music and sentence priming was found, with the dyslexic group being slower than the control group. The N400 and P100 components were analyzed. In items with judgment and music priming, a group effect was observed for the amplitude of the P100, with higher means produced by individuals with dyslexia, corroborating the literature that individuals with dyslexia have difficulties in early information processing. A congruency effect was observed in the items with music priming, with greater P100 amplitudes found in incongruous situations. 
Analyses of the N400 component showed the congruency effect for amplitude in both types of priming, with the mean amplitude for incongruent…

  5. Personal semantics: at the crossroads of semantic and episodic memory.

    PubMed

    Renoult, Louis; Davidson, Patrick S R; Palombo, Daniela J; Moscovitch, Morris; Levine, Brian

    2012-11-01

    Declarative memory is usually described as consisting of two systems: semantic and episodic memory. Between these two poles, however, may lie a third entity: personal semantics (PS). PS concerns knowledge of one's past. Although typically assumed to be an aspect of semantic memory, it is essentially absent from existing models of knowledge. Furthermore, like episodic memory (EM), PS is idiosyncratically personal (i.e., not culturally-shared). We show that, depending on how it is operationalized, the neural correlates of PS can look more similar to semantic memory, more similar to EM, or dissimilar to both. We consider three different perspectives to better integrate PS into existing models of declarative memory and suggest experimental strategies for disentangling PS from semantic and episodic memory.

  6. Audio-visual interactions in product sound design

    NASA Astrophysics Data System (ADS)

    Özcan, Elif; van Egmond, René

    2010-02-01

    Consistent product experience requires congruity between product properties such as visual appearance and sound. Therefore, for designing appropriate product sounds by manipulating their spectral-temporal structure, product sounds should preferably not be considered in isolation but as an integral part of the main product concept. Because visual aspects of a product are considered to dominate the communication of the desired product concept, sound is usually expected to fit the visual character of a product. We argue that this can be accomplished successfully only on the basis of a thorough understanding of the impact of audio-visual interactions on product sounds. Two experimental studies are reviewed to show audio-visual interactions on both perceptual and cognitive levels influencing the way people encode, recall, and attribute meaning to product sounds. Implications for sound design are discussed, challenging the natural tendency of product designers to analyze the "sound problem" in isolation from other product properties.

  7. Audio-visual communication and its use in palliative care.

    PubMed

    Coyle, Nessa; Khojainova, Natalia; Francavilla, John M; Gonzales, Gilbert R

    2002-02-01

    The technology of telemedicine has been used for over 20 years in different areas of medicine, providing medical care for geographically isolated patients and uniting geographically isolated clinicians. Today audio-visual technology may be useful in palliative care for patients who lack access to medical services due to their medical condition rather than geographic isolation. We report results of a three-month trial of using audio-visual communications as a complementary tool in the care of a complex palliative care patient. Benefits of this system to the patient included 1) a daily limited physical examination, 2) screening for a need for a clinical visit or admission, 3) lip reading by the deaf patient, and 4) satisfaction by the patient and the caregivers with this form of communication as a complement to telephone communication. A brief overview of the historical perspective on telemedicine and a listing of applied telemedicine programs are provided.

  8. Contextual Congruency Effect in Natural Scene Categorization: Different Strategies in Humans and Monkeys (Macaca mulatta)

    PubMed Central

    Collet, Anne-Claire; Fize, Denis; VanRullen, Rufin

    2015-01-01

    Rapid visual categorization is a crucial ability for survival in many animal species, including monkeys and humans. In real conditions, objects (either animate or inanimate) are never isolated but embedded in a complex background made of multiple elements. It has been shown in humans and monkeys that the contextual background can either enhance or impair object categorization, depending on context/object congruency (for example, an animal in a natural vs. man-made environment). Moreover, a scene is not only a collection of objects; it also has global physical features (i.e., the phase and amplitude of its Fourier spatial frequencies) which help define its gist. In our experiment, we aimed to explore and compare the contribution of the amplitude spectrum of scenes to the context-object congruency effect in monkeys and humans. We designed a rapid visual categorization task, Animal versus Non-Animal, using as contexts both photographs of real scenes and noisy backgrounds built from the amplitude spectrum of real scenes but with randomized phase spectrum. We showed that even if the contextual congruency effect was comparable in both species when the context was a real scene, it differed when the foreground object was surrounded by a noisy background: in monkeys we found a similar congruency effect in both conditions, but in humans the congruency effect was absent (or even reversed) when the context was a noisy background. PMID:26207915

  9. Effects of working memory span on processing of lexical associations and congruence in spoken discourse.

    PubMed

    Boudewyn, Megan A; Long, Debra L; Swaab, Tamara Y

    2013-01-01

    The goal of this study was to determine whether variability in working memory (WM) capacity and cognitive control affects the processing of global discourse congruence and local associations among words when participants listened to short discourse passages. The final, critical word of each passage was either associated or unassociated with a preceding prime word (e.g., "He was not prepared for the fame and fortune/praise"). These critical words were also either congruent or incongruent with respect to the preceding discourse context [e.g., a context in which a prestigious prize was won (congruent) or in which the protagonist had been arrested (incongruent)]. We used multiple regression to assess the unique contribution of suppression ability (our measure of cognitive control) and WM capacity on the amplitude of individual N400 effects of congruence and association. Our measure of suppression ability did not predict the size of the N400 effects of association or congruence. However, as expected, the results showed that high WM capacity individuals were less sensitive to the presence of lexical associations (showed smaller N400 association effects). Furthermore, differences in WM capacity were related to differences in the topographic distribution of the N400 effects of discourse congruence. The topographic differences in the global congruence effects indicate differences in the underlying neural generators of the N400 effects, as a function of WM. This suggests additional, or at a minimum, distinct, processing on the part of higher capacity individuals when tasked with integrating incoming words into the developing discourse representation.

  10. Role of audiovisual synchrony in driving head orienting responses.

    PubMed

    Ho, Cristy; Gray, Rob; Spence, Charles

    2013-06-01

    Many studies now suggest that optimal multisensory integration sometimes occurs under conditions where auditory and visual stimuli are presented asynchronously (i.e. at asynchronies of 100 ms or more). Such observations lead to the suggestion that participants' speeded orienting responses might be enhanced following the presentation of asynchronous (as compared to synchronous) peripheral audiovisual spatial cues. Here, we report a series of three experiments designed to investigate this issue. Upon establishing the effectiveness of bimodal cuing over the best of its unimodal components (Experiment 1), participants had to make speeded head-turning or steering (wheel-turning) responses toward the cued direction (Experiment 2), or an incompatible response away from the cue (Experiment 3), in response to random peripheral audiovisual stimuli presented at stimulus onset asynchronies ranging from -100 to 100 ms. Race model inequality analysis of the results (Experiment 1) revealed different mechanisms underlying the observed multisensory facilitation of participants' head-turning versus steering responses. In Experiments 2 and 3, the synchronous presentation of the component auditory and visual cues gave rise to the largest facilitation of participants' response latencies. Intriguingly, when the participants had to subjectively judge the simultaneity of the audiovisual stimuli, the point of subjective simultaneity occurred when the auditory stimulus lagged behind the visual stimulus by 22 ms. Taken together, these results appear to suggest that the maximally beneficial behavioural (head and manual) orienting responses resulting from peripherally presented audiovisual stimuli occur when the component signals are presented in synchrony. These findings suggest that while the brain uses precise temporal synchrony in order to control its orienting responses, the system that the human brain uses to consciously judge synchrony appears to be less fine tuned.
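
    The race model inequality analysis the abstract mentions tests whether bimodal response times are faster than an independent race between the unimodal responses would predict (Miller's bound). A minimal sketch using empirical CDFs; the function names and reaction-time values below are illustrative, not data from the study:

```python
import numpy as np

def ecdf(rts, t):
    """Empirical probability of a response by time t."""
    return np.mean(np.asarray(rts) <= t)

def race_violation(av_rts, a_rts, v_rts, t):
    """Positive value = the bimodal CDF exceeds the race-model bound
    P(RT<=t | AV) <= P(RT<=t | A) + P(RT<=t | V), i.e. facilitation
    beyond what independent unimodal races can explain."""
    bound = min(1.0, ecdf(a_rts, t) + ecdf(v_rts, t))
    return ecdf(av_rts, t) - bound

# Hypothetical reaction times (ms); real analyses use many trials per condition.
a  = [320, 350, 400, 450, 500]
v  = [330, 360, 410, 460, 510]
av = [260, 280, 300, 330, 360]

print(race_violation(av, a, v, 300))  # 0.6 here: the bound is violated at t = 300 ms
```

    In practice the violation is evaluated across a range of quantiles of the RT distributions, not at a single time point.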

  11. Neural development of networks for audiovisual speech comprehension

    PubMed Central

    Dick, Anthony Steven; Solodkin, Ana; Small, Steven L.

    2009-01-01

    Everyday conversation is both an auditory and a visual phenomenon. While visual speech information enhances comprehension for the listener, evidence suggests that the ability to benefit from this information improves with development. A number of brain regions have been implicated in audiovisual speech comprehension, but the extent to which the neurobiological substrate in the child compares to the adult is unknown. In particular, developmental differences in the network for audiovisual speech comprehension could manifest through the incorporation of additional brain regions, or through different patterns of effective connectivity. In the present study we used functional magnetic resonance imaging and structural equation modeling (SEM) to characterize the developmental changes in network interactions for audiovisual speech comprehension. The brain response was recorded while children 8- to 11-years-old and adults passively listened to stories under audiovisual (AV) and auditory-only (A) conditions. Results showed that in children and adults, AV comprehension activated the same fronto-temporo-parietal network of regions known for their contribution to speech production and perception. However, the SEM network analysis revealed age-related differences in the functional interactions among these regions. In particular, the influence of the posterior inferior frontal gyrus/ventral premotor cortex on supramarginal gyrus differed across age groups during AV, but not A speech. This functional pathway might be important for relating motor and sensory information used by the listener to identify speech sounds. Further, its development might reflect changes in the mechanisms that relate visual speech information to articulatory speech representations through experience producing and perceiving speech. PMID:19781755

  12. Semantic Roles and Grammatical Relations.

    ERIC Educational Resources Information Center

    Van Valin, Robert D., Jr.

    The nature of semantic roles and grammatical relations are explored from the perspective of Role and Reference Grammar (RRG). It is proposed that unraveling the relational aspects of grammar involves the recognition that semantic roles fall into two types, thematic relations and macroroles, and that grammatical relations are not universal and are…

  13. Indexing by Latent Semantic Analysis.

    ERIC Educational Resources Information Center

    Deerwester, Scott; And Others

    1990-01-01

    Describes a new method for automatic indexing and retrieval called latent semantic indexing (LSI). Problems with matching query words with document words in term-based information retrieval systems are discussed, semantic structure is examined, singular value decomposition (SVD) is explained, and the mathematics underlying the SVD model is…
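
    The singular value decomposition (SVD) step underlying LSI can be sketched as follows; the term-document matrix, its counts, and the term labels are invented for illustration:

```python
import numpy as np

# Toy term-document matrix (rows = terms, cols = documents); counts are hypothetical.
X = np.array([
    [2, 0, 1, 0],   # "semantic"
    [1, 1, 0, 0],   # "indexing"
    [0, 2, 1, 1],   # "retrieval"
    [0, 0, 1, 2],   # "memory"
], dtype=float)

# Truncated SVD: keep k latent dimensions so that related terms and documents
# end up close in the latent space even without exact word overlap.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # best rank-k approximation of X

# Documents represented in the k-dimensional latent semantic space.
doc_vectors = np.diag(s[:k]) @ Vt[:k, :]
print(doc_vectors.shape)  # (2, 4)
```

    Queries are then matched against `doc_vectors` in the latent space rather than against raw term counts.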

  14. Semantic Tools in Information Retrieval.

    ERIC Educational Resources Information Center

    Rubinoff, Morris; Stone, Don C.

    This report discusses the problem of the meanings of words used in information retrieval systems, and shows how semantic tools can aid in the communication which takes place between indexers and searchers via index terms. After treating the differing use of semantic tools in different types of systems, two tools (classification tables and…

  15. Semantic Focus and Sentence Comprehension.

    ERIC Educational Resources Information Center

    Cutler, Anne; Fodor, Jerry A.

    1979-01-01

    Reaction time to detect a phoneme target in a sentence was faster when the target-containing word formed part of the semantic focus of the sentence. Sentence understanding was facilitated by rapid identification of focused information. Active search for accented words can be interpreted as a search for semantic focus. (Author/RD)

  16. Semantic Feature Distinctiveness and Frequency

    ERIC Educational Resources Information Center

    Lamb, Katherine M.

    2012-01-01

    Lexical access is the process in which basic components of meaning in language, the lexical entries (words) are activated. This activation is based on the organization and representational structure of the lexical entries. Semantic features of words, which are the prominent semantic characteristics of a word concept, provide important information…

  17. The semantic planetary data system

    NASA Technical Reports Server (NTRS)

    Hughes, J. Steven; Crichton, Daniel; Kelly, Sean; Mattmann, Chris

    2005-01-01

    This paper will provide a brief overview of the PDS data model and the PDS catalog. It will then describe the implementation of the Semantic PDS including the development of the formal ontology, the generation of RDFS/XML and RDF/XML data sets, and the building of the semantic search application.

  18. Semantic Analysis in Machine Translation.

    ERIC Educational Resources Information Center

    Skorokhodko, E. F.

    1970-01-01

    In many cases machine translation does not produce satisfactory results within the framework of purely formal (morphological and syntactic) analysis, particularly in the case of syntactic and lexical homonymy. An algorithm for syntactic-semantic analysis is proposed, and its principles of operation are described. The syntactico-semantic structure is…

  19. Semantic Processing of Mathematical Gestures

    ERIC Educational Resources Information Center

    Lim, Vanessa K.; Wilson, Anna J.; Hamm, Jeff P.; Phillips, Nicola; Iwabuchi, Sarina J.; Corballis, Michael C.; Arzarello, Ferdinando; Thomas, Michael O. J.

    2009-01-01

    Objective: To examine whether or not university mathematics students semantically process gestures depicting mathematical functions (mathematical gestures) similarly to the way they process action gestures and sentences. Semantic processing was indexed by the N400 effect. Results: The N400 effect elicited by words primed with mathematical gestures…

  20. Audiovisual Temporal Recalibration for Speech in Synchrony Perception and Speech Identification

    NASA Astrophysics Data System (ADS)

    Asakawa, Kaori; Tanaka, Akihiro; Imai, Hisato

    We investigated whether audiovisual synchrony perception for speech could change after observation of the audiovisual temporal mismatch. Previous studies have revealed that audiovisual synchrony perception is re-calibrated after exposure to a constant timing difference between auditory and visual signals in non-speech. In the present study, we examined whether this audiovisual temporal recalibration occurs at the perceptual level even for speech (monosyllables). In Experiment 1, participants performed an audiovisual simultaneity judgment task (i.e., a direct measurement of the audiovisual synchrony perception) in terms of the speech signal after observation of the speech stimuli which had a constant audiovisual lag. The results showed that the “simultaneous” responses (i.e., proportion of responses for which participants judged the auditory and visual stimuli to be synchronous) at least partly depended on exposure lag. In Experiment 2, we adopted the McGurk identification task (i.e., an indirect measurement of the audiovisual synchrony perception) to exclude the possibility that this modulation of synchrony perception was solely attributable to the response strategy using stimuli identical to those of Experiment 1. The characteristics of the McGurk effect reported by participants depended on exposure lag. Thus, it was shown that audiovisual synchrony perception for speech could be modulated following exposure to constant lag both in direct and indirect measurement. Our results suggest that temporal recalibration occurs not only in non-speech signals but also in monosyllabic speech at the perceptual level.

  1. Audiovisual integration of speech in a patient with Broca's Aphasia

    PubMed Central

    Andersen, Tobias S.; Starrfelt, Randi

    2015-01-01

    Lesions to Broca's area cause aphasia characterized by a severe impairment of the ability to speak, with comparatively intact speech perception. However, some studies have found effects on speech perception under adverse listening conditions, indicating that Broca's area is also involved in speech perception. While these studies have focused on auditory speech perception other studies have shown that Broca's area is activated by visual speech perception. Furthermore, one preliminary report found that a patient with Broca's aphasia did not experience the McGurk illusion suggesting that an intact Broca's area is necessary for audiovisual integration of speech. Here we describe a patient with Broca's aphasia who experienced the McGurk illusion. This indicates that an intact Broca's area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical, which could be due to Broca's area having a more subtle role in audiovisual integration of speech. The McGurk illusions of a control subject with Wernicke's aphasia were, however, also atypical. This indicates that the atypical McGurk illusions were due to deficits in speech processing that are not specific to Broca's aphasia. PMID:25972819

  2. Multi-pose lipreading and audio-visual speech recognition

    NASA Astrophysics Data System (ADS)

    Estellers, Virginia; Thiran, Jean-Philippe

    2012-12-01

    In this article, we study the adaptation of visual and audio-visual speech recognition systems to non-ideal visual conditions. We focus on overcoming the effects of a changing pose of the speaker, a problem encountered in natural situations where the speaker moves freely and does not keep a frontal pose with relation to the camera. To handle these situations, we introduce a pose normalization block in a standard system and generate virtual frontal views from non-frontal images. The proposed method is inspired by pose-invariant face recognition and relies on linear regression to find an approximate mapping between images from different poses. We integrate the proposed pose normalization block at different stages of the speech recognition system and quantify the loss of performance related to pose changes and pose normalization techniques. In audio-visual experiments we also analyze the integration of the audio and visual streams. We show that an audio-visual system should account for non-frontal poses and normalization techniques in terms of the weight assigned to the visual stream in the classifier.
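
    The linear-regression mapping between poses that the abstract describes can be sketched as an ordinary least-squares fit from non-frontal to frontal image vectors. The dimensions and data below are synthetic stand-ins, not the authors' visual features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: flattened mouth-region images of dimension d in a
# non-frontal pose (X) paired with the corresponding frontal views (Y).
d, n = 64, 200
X = rng.normal(size=(n, d))
W_true = rng.normal(size=(d, d)) / np.sqrt(d)   # ground-truth map, for the demo only
Y = X @ W_true + 0.01 * rng.normal(size=(n, d))

# Least-squares estimate of the pose-to-frontal mapping: Y ~ X W.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# "Virtual frontal view" generated for a new non-frontal image.
x_new = rng.normal(size=(1, d))
y_frontal = x_new @ W
print(y_frontal.shape)  # (1, 64)
```

    In the actual system one such mapping would be trained per pose cluster, and the normalized frontal view fed to the visual front-end of the recognizer.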

  3. Audiovisual integration of speech in a patient with Broca's Aphasia.

    PubMed

    Andersen, Tobias S; Starrfelt, Randi

    2015-01-01

    Lesions to Broca's area cause aphasia characterized by a severe impairment of the ability to speak, with comparatively intact speech perception. However, some studies have found effects on speech perception under adverse listening conditions, indicating that Broca's area is also involved in speech perception. While these studies have focused on auditory speech perception other studies have shown that Broca's area is activated by visual speech perception. Furthermore, one preliminary report found that a patient with Broca's aphasia did not experience the McGurk illusion suggesting that an intact Broca's area is necessary for audiovisual integration of speech. Here we describe a patient with Broca's aphasia who experienced the McGurk illusion. This indicates that an intact Broca's area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical, which could be due to Broca's area having a more subtle role in audiovisual integration of speech. The McGurk illusions of a control subject with Wernicke's aphasia were, however, also atypical. This indicates that the atypical McGurk illusions were due to deficits in speech processing that are not specific to Broca's aphasia. PMID:25972819

  4. No rapid audiovisual recalibration in adults on the autism spectrum.

    PubMed

    Turi, Marco; Karaminis, Themelis; Pellicano, Elizabeth; Burr, David

    2016-01-01

    Autism spectrum disorders (ASD) are characterized by difficulties in social cognition, but are also associated with atypicalities in sensory and perceptual processing. Several groups have reported that autistic individuals show reduced integration of socially relevant audiovisual signals, which may contribute to the higher-order social and cognitive difficulties observed in autism. Here we use a newly devised technique to study instantaneous adaptation to audiovisual asynchrony in autism. Autistic and typical participants were presented with sequences of brief visual and auditory stimuli, varying in asynchrony over a wide range, from 512 ms auditory-lead to 512 ms auditory-lag, and judged whether they seemed to be synchronous. Typical adults showed strong adaptation effects, with trials preceded by an auditory-lead needing more auditory-lead to seem simultaneous, and vice versa. However, autistic observers showed little or no adaptation, although their simultaneity curves were as narrow as the typical adults. This result supports recent Bayesian models that predict reduced adaptation effects in autism. As rapid audiovisual recalibration may be fundamental for the optimisation of speech comprehension, recalibration problems could render language processing more difficult in autistic individuals, hindering social communication. PMID:26899367

  5. No rapid audiovisual recalibration in adults on the autism spectrum

    PubMed Central

    Turi, Marco; Karaminis, Themelis; Pellicano, Elizabeth; Burr, David

    2016-01-01

    Autism spectrum disorders (ASD) are characterized by difficulties in social cognition, but are also associated with atypicalities in sensory and perceptual processing. Several groups have reported that autistic individuals show reduced integration of socially relevant audiovisual signals, which may contribute to the higher-order social and cognitive difficulties observed in autism. Here we use a newly devised technique to study instantaneous adaptation to audiovisual asynchrony in autism. Autistic and typical participants were presented with sequences of brief visual and auditory stimuli, varying in asynchrony over a wide range, from 512 ms auditory-lead to 512 ms auditory-lag, and judged whether they seemed to be synchronous. Typical adults showed strong adaptation effects, with trials preceded by an auditory-lead needing more auditory-lead to seem simultaneous, and vice versa. However, autistic observers showed little or no adaptation, although their simultaneity curves were as narrow as the typical adults. This result supports recent Bayesian models that predict reduced adaptation effects in autism. As rapid audiovisual recalibration may be fundamental for the optimisation of speech comprehension, recalibration problems could render language processing more difficult in autistic individuals, hindering social communication. PMID:26899367

  6. Audiovisual Delay as a Novel Cue to Visual Distance

    PubMed Central

    Jaekl, Philip; Seidlitz, Jakob; Harris, Laurence R.; Tadin, Duje

    2015-01-01

    For audiovisual sensory events, sound arrives with a delay relative to light that increases with event distance. It is unknown, however, whether humans can use these ubiquitous sound delays as an information source for distance computation. Here, we tested the hypothesis that audiovisual delays can both bias and improve human perceptual distance discrimination, such that visual stimuli paired with auditory delays are perceived as more distant and are thereby an ordinal distance cue. In two experiments, participants judged the relative distance of two repetitively displayed three-dimensional dot clusters, both presented with sounds of varying delays. In the first experiment, dot clusters presented with a sound delay were judged to be more distant than dot clusters paired with equivalent sound leads. In the second experiment, we confirmed that the presence of a sound delay was sufficient to cause stimuli to appear as more distant. Additionally, we found that ecologically congruent pairing of more distant events with a sound delay resulted in an increase in the precision of distance judgments. A control experiment determined that the sound delay duration influencing these distance judgments was not detectable, thereby eliminating decision-level influence. In sum, we present evidence that audiovisual delays can be an ordinal cue to visual distance. PMID:26509795

  7. Audiovisual temporal fusion in 6-month-old infants.

    PubMed

    Kopp, Franziska

    2014-07-01

    The aim of this study was to investigate neural dynamics of audiovisual temporal fusion processes in 6-month-old infants using event-related brain potentials (ERPs). In a habituation-test paradigm, infants did not show any behavioral signs of discrimination of an audiovisual asynchrony of 200 ms, indicating perceptual fusion. In a subsequent EEG experiment, audiovisual synchronous stimuli and stimuli with a visual delay of 200 ms were presented in random order. In contrast to the behavioral data, brain activity differed significantly between the two conditions. Critically, N1 and P2 latency delays were not observed between synchronous and fused items, contrary to previously observed N1 and P2 latency delays between synchrony and perceived asynchrony. Hence, temporal interaction processes in the infant brain between the two sensory modalities varied as a function of perceptual fusion versus asynchrony perception. The visual recognition components Pb and Nc were modulated prior to sound onset, emphasizing the importance of anticipatory visual events for the prediction of auditory signals. Results suggest mechanisms by which young infants predictively adjust their ongoing neural activity to the temporal synchrony relations to be expected between vision and audition.

  8. The development of the perception of audiovisual simultaneity.

    PubMed

    Chen, Yi-Chuan; Shore, David I; Lewis, Terri L; Maurer, Daphne

    2016-06-01

    We measured the typical developmental trajectory of the window of audiovisual simultaneity by testing four age groups of children (5, 7, 9, and 11 years) and adults. We presented a visual flash and an auditory noise burst at various stimulus onset asynchronies (SOAs) and asked participants to report whether the two stimuli were presented at the same time. Compared with adults, children aged 5 and 7 years made more simultaneous responses when the SOAs were beyond ± 200 ms but made fewer simultaneous responses at the 0 ms SOA. The point of subjective simultaneity was located at the visual-leading side, as in adults, by 5 years of age, the youngest age tested. However, the window of audiovisual simultaneity became narrower and response errors decreased with age, reaching adult levels by 9 years of age. Experiment 2 ruled out the possibility that the adult-like performance of 9-year-old children was caused by the testing of a wide range of SOAs. Together, the results demonstrate that the adult-like precision of perceiving audiovisual simultaneity is developed by 9 years of age, the youngest age that has been reported to date.

  10. Audiovisual integration for speech during mid-childhood: Electrophysiological evidence

    PubMed Central

    Kaganovich, Natalya; Schumaker, Jennifer

    2014-01-01

    Previous studies have demonstrated that the presence of visual speech cues reduces the amplitude and latency of the N1 and P2 event-related potential (ERP) components elicited by speech stimuli. However, the developmental trajectory of this effect is not yet fully mapped. We examined ERP responses to auditory, visual, and audiovisual speech in two groups of school-age children (7–8-year-olds and 10–11-year-olds) and in adults. Audiovisual speech led to the attenuation of the N1 and P2 components in all groups of participants, suggesting that the neural mechanisms underlying these effects are functional by early school years. Additionally, while the reduction in N1 was largest over the right scalp, the P2 attenuation was largest over the left and midline scalp. The difference in the hemispheric distribution of the N1 and P2 attenuation supports the idea that these components index at least somewhat disparate neural processes within the context of audiovisual speech perception. PMID:25463815

  11. Audiovisual integration for speech during mid-childhood: electrophysiological evidence.

    PubMed

    Kaganovich, Natalya; Schumaker, Jennifer

    2014-12-01

    Previous studies have demonstrated that the presence of visual speech cues reduces the amplitude and latency of the N1 and P2 event-related potential (ERP) components elicited by speech stimuli. However, the developmental trajectory of this effect is not yet fully mapped. We examined ERP responses to auditory, visual, and audiovisual speech in two groups of school-age children (7-8-year-olds and 10-11-year-olds) and in adults. Audiovisual speech led to the attenuation of the N1 and P2 components in all groups of participants, suggesting that the neural mechanisms underlying these effects are functional by early school years. Additionally, while the reduction in N1 was largest over the right scalp, the P2 attenuation was largest over the left and midline scalp. The difference in the hemispheric distribution of the N1 and P2 attenuation supports the idea that these components index at least somewhat disparate neural processes within the context of audiovisual speech perception.

  12. Hierarchical abstract semantic model for image classification

    NASA Astrophysics Data System (ADS)

    Ye, Zhipeng; Liu, Peng; Zhao, Wei; Tang, Xianglong

    2015-09-01

    Semantic gap limits the performance of bag-of-visual-words. To deal with this problem, a hierarchical abstract semantics method that builds abstract semantic layers, generates semantic visual vocabularies, measures semantic gap, and constructs classifiers using the Adaboost strategy is proposed. First, abstract semantic layers are proposed to narrow the semantic gap between visual features and their interpretation. Then semantic visual words are extracted as features to train semantic classifiers. One popular form of measurement is used to quantify the semantic gap. The Adaboost training strategy is used to combine weak classifiers into strong ones to further improve performance. For a testing image, the category is estimated layer-by-layer. Corresponding abstract hierarchical structures for popular datasets, including Caltech-101 and MSRC, are proposed for evaluation. The experimental results show that the proposed method is capable of narrowing semantic gaps effectively and performs better than other categorization methods.
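
    The AdaBoost strategy the abstract mentions, combining weak classifiers into a strong one, can be illustrated with a generic sketch using one-feature threshold stumps. This is textbook AdaBoost on toy data, not the paper's semantic-feature implementation:

```python
import numpy as np

def adaboost_train(X, y, n_rounds=10):
    """Minimal AdaBoost with one-feature threshold stumps (labels in {-1, +1})."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                      # per-example weights
    stumps = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):                        # exhaustive stump search
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] > thr, 1, -1)
                    err = np.sum(w[pred != y])
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = max(err, 1e-10)                     # avoid division by zero
        alpha = 0.5 * np.log((1 - err) / err)     # weak-classifier weight
        pred = sign * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)            # up-weight misclassified examples
        w /= w.sum()
        stumps.append((alpha, j, thr, sign))
    return stumps

def adaboost_predict(stumps, X):
    score = sum(a * s * np.where(X[:, j] > t, 1, -1) for a, j, t, s in stumps)
    return np.sign(score)

# Toy one-dimensional, linearly separable problem.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, -1, 1, 1])
model = adaboost_train(X, y, n_rounds=3)
print(adaboost_predict(model, X))  # matches y
```

    In the hierarchical method above, the weak learners would instead be trained on semantic visual-word features at each abstraction layer.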

  13. Latent semantic analysis.

    PubMed

    Evangelopoulos, Nicholas E

    2013-11-01

    This article reviews latent semantic analysis (LSA), a theory of meaning as well as a method for extracting that meaning from passages of text, based on statistical computations over a collection of documents. LSA as a theory of meaning defines a latent semantic space where documents and individual words are represented as vectors. LSA as a computational technique uses linear algebra to extract dimensions that represent that space. This representation enables the computation of similarity among terms and documents, categorization of terms and documents, and summarization of large collections of documents using automated procedures that mimic the way humans perform similar cognitive tasks. We present some technical details, various illustrative examples, and discuss a number of applications from linguistics, psychology, cognitive science, education, information science, and analysis of textual data in general. WIREs Cogn Sci 2013, 4:683-692. doi: 10.1002/wcs.1254 PMID:26304272
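
    The similarity computations LSA enables can be sketched by folding a query into the latent space and comparing it to the documents there. The matrix, counts, and query below are toy values for illustration:

```python
import numpy as np

# Toy term-document count matrix (rows = terms, cols = documents); values hypothetical.
X = np.array([[3, 0, 1],
              [2, 1, 0],
              [0, 2, 2],
              [0, 3, 1]], dtype=float)

# Project documents into a k-dimensional latent semantic space.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
docs = Vt[:k, :].T                              # one row per document

# Fold a new query (raw term counts) into the same space: q_k = S_k^-1 U_k^T q
q = np.array([1.0, 1.0, 0.0, 0.0])
q_k = np.diag(1.0 / s[:k]) @ U[:, :k].T @ q

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sims = [cosine(q_k, d) for d in docs]           # query-document similarities
```

    The same cosine comparison between rows of `U` supports term-term similarity and clustering.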

  15. "Pre-Semantic" Cognition Revisited: Critical Differences between Semantic Aphasia and Semantic Dementia

    ERIC Educational Resources Information Center

    Jefferies, Elizabeth; Rogers, Timothy T.; Hopper, Samantha; Lambon Ralph, Matthew A.

    2010-01-01

    Patients with semantic dementia show a specific pattern of impairment on both verbal and non-verbal "pre-semantic" tasks, e.g., reading aloud, past tense generation, spelling to dictation, lexical decision, object decision, colour decision and delayed picture copying. All seven tasks are characterised by poorer performance for items that are…

  16. The enigma of social support and occupational stress: source congruence and gender role effects.

    PubMed

    Beehr, Terry A; Farmer, Suzanne J; Glazer, Sharon; Gudanowski, David M; Nair, Vandana Nadig

    2003-07-01

    Research on the potential ameliorating effects of social support on occupational stress produces weak, inconsistent, and even contradictory results. This study of 117 employees, mostly from a southern U.S. hospital supply company, examined potential moderators that were theorized might reduce the confusion: source congruence (congruence between sources of the stressor and of social support) and gender role. Congruence between the sources of stressors and of social support appeared to make little difference in determining the moderating or buffering effect of social support on the relationship between stressors and strain. Gender role, however, may moderate the relationship between social support and individual strains such that more feminine people react more strongly and positively to social support than more masculine people do.

  17. Reducing Stereotype Threat With Embodied Triggers: A Case of Sensorimotor-Mental Congruence.

    PubMed

    Chalabaev, Aïna; Radel, Rémi; Masicampo, E J; Dru, Vincent

    2016-08-01

    In four experiments, we tested whether embodied triggers may reduce stereotype threat. We predicted that left-side sensorimotor inductions would increase cognitive performance under stereotype threat, because such inductions are linked to avoidance motivation among right-handers. This sensorimotor-mental congruence hypothesis rests on regulatory fit research showing that stereotype threat may be reduced by avoidance-oriented interventions, and motor congruence research showing positive effects when two parameters of a motor action activate the same motivational system (avoidance or approach). Results indicated that under stereotype threat, cognitive performance was higher when participants contracted their left hand (Study 1) or when the stimuli were presented on the left side of the visual field (Studies 2-4), as compared with right-hand contraction or right-side visual stimulation. These results were observed on math (Studies 1, 2, and 4) and Stroop (Study 3) performance. An indirect effect of congruence on math performance through subjective fluency was also observed.

  18. Culture, salience, and psychiatric diagnosis: exploring the concept of cultural congruence & its practical application

    PubMed Central

    2013-01-01

    Introduction Cultural congruence is the idea that to the extent a belief or experience is culturally shared it is not to feature in a diagnostic judgement, irrespective of its resemblance to psychiatric pathology. This rests on the argument that since deviation from norms is central to diagnosis, and since what counts as deviation is relative to context, assessing the degree of fit between mental states and cultural norms is crucial. Various problems beset the cultural congruence construct including impoverished definitions of culture as religious, national or ethnic group and of congruence as validation by that group. This article attempts to address these shortcomings to arrive at a cogent construct. Results The article distinguishes symbolic from phenomenological conceptions of culture, the latter expanded upon through two sources: Husserl’s phenomenological analysis of background intentionality and neuropsychological literature on salience. It is argued that culture is not limited to symbolic presuppositions and shapes subjects’ experiential dispositions. This conception is deployed to re-examine the meaning of (in)congruence. The main argument is that a significant, since foundational, deviation from culture is not from a value or belief but from culturally-instilled experiential dispositions, in what is salient to an individual in a particular context. Conclusion Applying the concept of cultural congruence must not be limited to assessing violations of the symbolic order and must consider alignment with or deviations from culturally-instilled experiential dispositions. By virtue of being foundational to a shared experience of the world, such dispositions are more accurate indicators of potential vulnerability. Notwithstanding problems of access and expertise, clinical practice should aim to accommodate this richer meaning of cultural congruence. PMID:23870676

  19. Moderation of the Relation between Person-Environment Congruence and Academic Success: Environmental Constraint, Personal Flexibility and Method

    ERIC Educational Resources Information Center

    Tracey, Terence J. G.; Allen, Jeff; Robbins, Steven B.

    2012-01-01

    The relation of interest-major congruence to indicators of college success was examined as it was moderated by environmental constraint, individual flexibility, and congruence definition in an initial sample of 88,813 undergraduates (38,787 men and 50,026 women) from 42 different colleges and universities in 16 states. College achievement (GPA…

  20. Congruence or Discrepancy? Comparing Patients' Health Valuations and Physicians' Treatment Goals for Rehabilitation for Patients with Chronic Conditions

    ERIC Educational Resources Information Center

    Nagl, Michaela; Farin, Erik

    2012-01-01

    The aim of this study was to test the congruence of patients' health valuations and physicians' treatment goals for the rehabilitation of chronically ill patients. In addition, patient characteristics associated with greater or less congruence were to be determined. In a questionnaire study, patients' health valuations and physicians' goals were…

  1. Measuring Transgender Individuals' Comfort with Gender Identity and Appearance: Development and Validation of the Transgender Congruence Scale

    ERIC Educational Resources Information Center

    Kozee, Holly B.; Tylka, Tracy L.; Bauerband, L. Andrew

    2012-01-01

    Our study used the construct of congruence to conceptualize the degree to which transgender individuals feel genuine, authentic, and comfortable with their gender identity and external appearance. In Study 1, the Transgender Congruence scale (TCS) was developed, and data from 162 transgender individuals were used to estimate the reliability and…

  2. Why It Is Too Early to Lose Control in Accounts of Item-Specific Proportion Congruency Effects

    ERIC Educational Resources Information Center

    Bugg, Julie M.; Jacoby, Larry L.; Chanani, Swati

    2011-01-01

    The item-specific proportion congruency (ISPC) effect is the finding of attenuated interference for mostly incongruent as compared to mostly congruent items. A debate in the Stroop literature concerns the mechanisms underlying this effect. Noting a confound between proportion congruency and contingency, Schmidt and Besner (2008) suggested that…

  3. N=2 gauge theories: Congruence subgroups, coset graphs, and modular surfaces

    NASA Astrophysics Data System (ADS)

    He, Yang-Hui; McKay, John

    2013-01-01

    We establish a correspondence between generalized quiver gauge theories in four dimensions and congruence subgroups of the modular group, hinging upon the trivalent graphs, which arise in both. The gauge theories and the graphs are enumerated and their numbers are compared. The correspondence is particularly striking for genus zero torsion-free congruence subgroups as exemplified by those which arise in Moonshine. We analyze in detail the case of index 24, where modular elliptic K3 surfaces emerge: here, the elliptic j-invariants can be recast as dessins d'enfant, which dictate the Seiberg-Witten curves.

  4. The Semantic Distance Model of Relevance Assessment.

    ERIC Educational Resources Information Center

    Brooks, Terrence A.

    1998-01-01

    Presents the Semantic Distance Model (SDM) of Relevance Assessment, a cognitive model of the relationship between semantic distance and relevance assessment. Discusses premises of the model such as the subjective nature of information and the metaphor of semantic distance. Empirical results illustrate the effects of semantic distance and semantic…

  5. Mapping the Structure of Semantic Memory

    ERIC Educational Resources Information Center

    Morais, Ana Sofia; Olsson, Henrik; Schooler, Lael J.

    2013-01-01

    Aggregating snippets from the semantic memories of many individuals may not yield a good map of an individual's semantic memory. The authors analyze the structure of semantic networks that they sampled from individuals through a new snowball sampling paradigm during approximately 6 weeks of 1-hr daily sessions. The semantic networks of individuals…

  6. Exploiting Recurring Structure in a Semantic Network

    NASA Technical Reports Server (NTRS)

    Wolfe, Shawn R.; Keller, Richard M.

    2004-01-01

    With the growing popularity of the Semantic Web, an increasing amount of information is becoming available in machine interpretable, semantically structured networks. Within these semantic networks are recurring structures that could be mined by existing or novel knowledge discovery methods. The mining of these semantic structures represents an interesting area that focuses on mining both for and from the Semantic Web, with surprising applicability to problems confronting the developers of Semantic Web applications. In this paper, we present representative examples of recurring structures and show how these structures could be used to increase the utility of a semantic repository deployed at NASA.

  7. Semantic perception for ground robotics

    NASA Astrophysics Data System (ADS)

    Hebert, M.; Bagnell, J. A.; Bajracharya, M.; Daniilidis, K.; Matthies, L. H.; Mianzo, L.; Navarro-Serment, L.; Shi, J.; Wellfare, M.

    2012-06-01

    Semantic perception involves naming objects and features in the scene, understanding the relations between them, and understanding the behaviors of agents, e.g., people, and their intent from sensor data. Semantic perception is a central component of future UGVs to provide representations which 1) can be used for higher-level reasoning and tactical behaviors, beyond the immediate needs of autonomous mobility, and 2) provide an intuitive description of the robot's environment in terms of semantic elements that can be shared effectively with a human operator. In this paper, we summarize the main approaches that we are investigating in the RCTA as initial steps toward the development of perception systems for UGVs.

  8. Workspaces in the Semantic Web

    NASA Technical Reports Server (NTRS)

    Wolfe, Shawn R.; Keller, RIchard M.

    2005-01-01

    Due to the recency and relatively limited adoption of Semantic Web technologies, practical issues related to technology scaling have received less attention than foundational issues. Nonetheless, these issues must be addressed if the Semantic Web is to realize its full potential. In particular, we concentrate on the lack of scoping methods that reduce the size of semantic information spaces so they are more efficient to work with and more relevant to an agent's needs. We provide some intuition to motivate the need for such reduced information spaces, called workspaces, give a formal definition, and suggest possible methods of deriving them.

  9. High Performance Descriptive Semantic Analysis of Semantic Graph Databases

    SciTech Connect

    Joslyn, Cliff A.; Adolf, Robert D.; al-Saffar, Sinan; Feo, John T.; Haglin, David J.; Mackey, Greg E.; Mizell, David W.

    2011-06-02

    As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to understand their inherent semantic structure, whether codified in explicit ontologies or not. Our group is researching novel methods for what we call descriptive semantic analysis of RDF triplestores, to serve purposes of analysis, interpretation, visualization, and optimization. But data size and computational complexity make it increasingly necessary to bring high performance computational resources to bear on this task. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multi-threaded architecture of the Cray XMT platform, conventional servers, and large data stores. In this paper we describe that architecture and our methods, and present the results of our analyses of basic properties, connected components, namespace interaction, and typed paths for the Billion Triple Challenge 2010 dataset.
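
    One of the analyses the record mentions, connected components, can be illustrated on a toy triple store. The sketch below is purely illustrative (plain Python over a handful of hypothetical triples, not the Cray XMT pipeline or the Billion Triple Challenge data): it treats subjects and objects as nodes and each triple as an undirected edge, then finds the weakly connected components.

```python
from collections import defaultdict

# Hypothetical RDF-style triples (subject, predicate, object);
# prefixes like "ex:" and "foaf:" are illustrative only.
triples = [
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:bob", "foaf:knows", "ex:carol"),
    ("ex:dave", "rdf:type", "ex:Person"),
]

def connected_components(triples):
    """Treat subjects/objects as nodes and triples as undirected edges;
    return the weakly connected components of the resulting graph."""
    adj = defaultdict(set)
    for s, _p, o in triples:
        adj[s].add(o)
        adj[o].add(s)
    seen, components = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:              # iterative depth-first traversal
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        components.append(comp)
    return components

comps = connected_components(triples)
print(len(comps))  # 2: {alice, bob, carol} and {dave, Person}
```

    On real billion-triple data this single-threaded traversal is exactly what does not scale, which is the motivation the abstract gives for the multi-threaded Cray XMT architecture.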

  10. Les Moyens Audio-Visuels et la Strategie Pedagogique (Audiovisual Methods and Pedagogical Strategy). Melanges Pedagogiques, 1971.

    ERIC Educational Resources Information Center

    Holec, H.

    This article discusses the relationship between audiovisual methods and pedagogical strategies, or between technology and instruction, in second language teaching. Currently, the relationship between audiovisual methods and language instruction is one in which the audiovisual component is subservient, and plays a supplementary rather than a…

  11. Training Methodology, Part IV: Audiovisual Theory, Aids and Equipment. An Annotated Bibliography. Public Health Service Publication No. 1862, Part IV.

    ERIC Educational Resources Information Center

    Health Services and Mental Health Administration (DHEW), Bethesda, MD.

    A total of 332 annotated references pertaining to media aspects of training are organized under the following headings: (1) Audiovisual Theory and Research, (2) Audiovisual Methods (General), (3) Audiovisual Equipment (General), (4) Computers in Instruction, (5) Television Instruction, (6) Videotape Recordings, (7) Television Facilities, (8) Radio…

  12. 36 CFR 1256.100 - What is the copying policy for USIA audiovisual records that either have copyright protection or...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... for USIA audiovisual records that either have copyright protection or contain copyrighted material... Distribution of United States Information Agency Audiovisual Materials in the National Archives of the United States § 1256.100 What is the copying policy for USIA audiovisual records that either have...

  13. A Citation Comparison of Sourcebooks for Audiovisuals to AVLINE Records: Access and the Chief Source of Information.

    ERIC Educational Resources Information Center

    Weimer, Katherine Hart

    1994-01-01

    Discusses cataloging audiovisual materials and the concept of chief source of information and describes a study that compared citations from fully cataloged audiovisual records with their corresponding citations from bibliographic sourcebooks, based on records in AVLINE (National Library of Medicine's Audiovisual On-Line Catalog). Examples of…

  14. Audiovisual Equipment in Educational Facilities Today. AVE in Japan No. 29.

    ERIC Educational Resources Information Center

    Japan Audiovisual Information Center for International Service, Tokyo.

    This report summarizes a 1989 update of a 1986 survey on the diffusion and utilization of audiovisual media and equipment in Japan. A comparison of the two reveals the advancements in types of audiovisual equipment available to schools and social education facilities in Japan which have developed in only 3 years. An outline of the equipment…

  15. 36 CFR 1237.20 - What are special considerations in the maintenance of audiovisual records?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... audiovisual productions (e.g., short and long versions or foreign-language versions) are prepared, keep an..., including captions and published and unpublished catalogs, inventories, indexes, and production files and similar documentation created in the course of audiovisual production. Establish and communicate...

  16. An Analysis of Audiovisual Machines for Individual Program Presentation. Research Memorandum Number Two.

    ERIC Educational Resources Information Center

    Finn, James D.; Weintraub, Royd

    The Medical Information Project (MIP) aimed to select the right type of audiovisual equipment for communicating new medical information to general practitioners of medicine, but was hampered by numerous difficulties. There is a lack of uniformity and standardization in audiovisual equipment that amounts to chaos. There is no evaluative literature on…

  17. Audiovisual News, Cartoons, and Films as Sources of Authentic Language Input and Language Proficiency Enhancement

    ERIC Educational Resources Information Center

    Bahrani, Taher; Sim, Tam Shu

    2012-01-01

    In today's audiovisually driven world, various audiovisual programs can be incorporated as authentic sources of potential language input for second language acquisition. In line with this view, the present research aimed at discovering the effectiveness of exposure to news, cartoons, and films as three different types of authentic audiovisual…

  18. Age-related audiovisual interactions in the superior colliculus of the rat.

    PubMed

    Costa, M; Piché, M; Lepore, F; Guillemot, J-P

    2016-04-21

    It is well established that multisensory integration is a functional characteristic of the superior colliculus that disambiguates external stimuli and therefore reduces reaction times toward simple audiovisual targets in space. However, in a condition where a complex audiovisual stimulus is used, such as optical flow in the presence of modulated audio signals, little is known about the processing of multisensory integration in the superior colliculus. Furthermore, since visual and auditory deficits constitute hallmark signs of aging, we sought to gain some insight into whether audiovisual processes in the superior colliculus are altered with age. Extracellular single-unit recordings were conducted in the superior colliculus of anesthetized Sprague-Dawley adult (10-12 months) and aged (21-22 months) rats. Looming circular concentric sinusoidal (CCS) gratings were presented alone and in the presence of sinusoidally amplitude-modulated white noise. In both groups of rats, two different audiovisual response interactions were encountered in the spatial domain: superadditive and suppressive. In contrast, additive audiovisual interactions were found only in adult rats. Hence, superior colliculus audiovisual interactions were more numerous in adult rats (38%) than in aged rats (8%). These results suggest that intersensory interactions in the superior colliculus play an essential role in spatial processing of audiovisual moving objects during self-motion. Moreover, aging has a deleterious effect on complex audiovisual interactions.
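
    The superadditive/additive/suppressive labels used in this record are conventionally assigned by comparing the multisensory response to the sum of the unisensory responses. The sketch below is a generic illustration of that classification scheme, with made-up firing rates and thresholds; the actual criteria used in the study are not given in the abstract.

```python
def classify_av_interaction(av_rate, a_rate, v_rate, tol=0.1):
    """Classify an audiovisual response (spikes/s) against the additive
    prediction (sum of the unisensory rates), within a tolerance fraction.
    Thresholds and the suppression criterion are illustrative assumptions."""
    predicted = a_rate + v_rate
    if av_rate > predicted * (1 + tol):
        return "superadditive"          # AV exceeds the linear sum
    if av_rate < max(a_rate, v_rate):
        return "suppressive"            # AV below the best unisensory response
    if abs(av_rate - predicted) <= predicted * tol:
        return "additive"               # AV close to the linear sum
    return "subadditive"

print(classify_av_interaction(30.0, 10.0, 12.0))  # superadditive (30 > 24.2)
print(classify_av_interaction(5.0, 10.0, 12.0))   # suppressive (5 < 12)
```

    Under this scheme, the finding that additive interactions appeared only in adult rats would correspond to AV responses landing near the linear sum only in the younger group.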

  19. 16 CFR 307.8 - Requirements for disclosure in audiovisual and audio advertising.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Requirements for disclosure in audiovisual and audio advertising. 307.8 Section 307.8 Commercial Practices FEDERAL TRADE COMMISSION REGULATIONS... ACT OF 1986 Advertising Disclosures § 307.8 Requirements for disclosure in audiovisual and...

  20. Twice upon a time: multiple concurrent temporal recalibrations of audiovisual speech.

    PubMed

    Roseboom, Warrick; Arnold, Derek H

    2011-07-01

    Audiovisual timing perception can recalibrate following prolonged exposure to asynchronous auditory and visual inputs. It has been suggested that this might contribute to achieving perceptual synchrony for auditory and visual signals despite differences in physical and neural signal times for sight and sound. However, given that people can be concurrently exposed to multiple audiovisual stimuli with variable neural signal times, a mechanism that recalibrates all audiovisual timing percepts to a single timing relationship could be dysfunctional. In the experiments reported here, we showed that audiovisual temporal recalibration can be specific for particular audiovisual pairings. Participants were shown alternating movies of male and female actors containing positive and negative temporal asynchronies between the auditory and visual streams. We found that audiovisual synchrony estimates for each actor were shifted toward the preceding audiovisual timing relationship for that actor and that such temporal recalibrations occurred in positive and negative directions concurrently. Our results show that humans can form multiple concurrent estimates of appropriate timing for audiovisual synchrony.

  1. Audiovisual Speech Perception and Eye Gaze Behavior of Adults with Asperger Syndrome

    ERIC Educational Resources Information Center

    Saalasti, Satu; Katsyri, Jari; Tiippana, Kaisa; Laine-Hernandez, Mari; von Wendt, Lennart; Sams, Mikko

    2012-01-01

    Audiovisual speech perception was studied in adults with Asperger syndrome (AS), by utilizing the McGurk effect, in which conflicting visual articulation alters the perception of heard speech. The AS group perceived the audiovisual stimuli differently from age, sex and IQ matched controls. When a voice saying /p/ was presented with a face…

  2. Read My Lips: Brain Dynamics Associated with Audiovisual Integration and Deviance Detection.

    PubMed

    Tse, Chun-Yu; Gratton, Gabriele; Garnsey, Susan M; Novak, Michael A; Fabiani, Monica

    2015-09-01

    Information from different modalities is initially processed in different brain areas, yet real-world perception often requires the integration of multisensory signals into a single percept. An example is the McGurk effect, in which people viewing a speaker whose lip movements do not match the utterance perceive the spoken sounds incorrectly, hearing them as more similar to those signaled by the visual rather than the auditory input. This indicates that audiovisual integration is important for generating the phoneme percept. Here we asked when and where the audiovisual integration process occurs, providing spatial and temporal boundaries for the processes generating phoneme perception. Specifically, we wanted to separate audiovisual integration from other processes, such as simple deviance detection. Building on previous work employing ERPs, we used an oddball paradigm in which task-irrelevant audiovisually deviant stimuli were embedded in strings of non-deviant stimuli. We also recorded the event-related optical signal, an imaging method combining spatial and temporal resolution, to investigate the time course and neuroanatomical substrate of audiovisual integration. We found that audiovisual deviants elicit a short duration response in the middle/superior temporal gyrus, whereas audiovisual integration elicits a more extended response involving also inferior frontal and occipital regions. Interactions between audiovisual integration and deviance detection processes were observed in the posterior/superior temporal gyrus. These data suggest that dynamic interactions between inferior frontal cortex and sensory regions play a significant role in multimodal integration.

  3. Exploring Student Perceptions of Audiovisual Feedback via Screencasting in Online Courses

    ERIC Educational Resources Information Center

    Mathieson, Kathleen

    2012-01-01

    Using Moore's (1993) theory of transactional distance as a framework, this action research study explored students' perceptions of audiovisual feedback provided via screencasting as a supplement to text-only feedback. A crossover design was employed to ensure that all students experienced both text-only and text-plus-audiovisual feedback and to…

  4. A Management Review and Analysis of Purdue University Libraries and Audio-Visual Center.

    ERIC Educational Resources Information Center

    Baaske, Jan; And Others

    A management review and analysis was conducted by the staff of the libraries and audio-visual center of Purdue University. Not only were the study team and the eight task forces drawn from all levels of the libraries and audio-visual center staff, but a systematic effort was sustained through inquiries, draft reports and open meetings to involve…

  5. Audio-Visual Techniques for Industry. Development and Transfer of Technology Series No. 6.

    ERIC Educational Resources Information Center

    Halas, John; Martin-Harris, Roy

    Intended for use by persons in developing countries responsible for initiating or expanding the use of audiovisual facilities and techniques in industry, this manual is designed for those who have limited background in audiovisuals but need detailed information about how certain techniques may be employed in an economical, efficient way. Part one,…

  6. 36 CFR 1237.18 - What are the environmental standards for audiovisual records storage?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... RECORDS MANAGEMENT § 1237.18 What are the environmental standards for audiovisual records storage? (a... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false What are the environmental standards for audiovisual records storage? 1237.18 Section 1237.18 Parks, Forests, and Public...

  7. First clinical implementation of audiovisual biofeedback in liver cancer stereotactic body radiation therapy

    PubMed Central

    Tse, Regina; Martin, Darren; McLean, Lisa; Cho, Gwi; Hill, Robin; Pickard, Sheila; Aston, Paul; Huang, Chen‐Yu; Makhija, Kuldeep; O'Brien, Ricky; Keall, Paul

    2015-01-01

    Summary This case report details a clinical trial's first recruited liver cancer patient who underwent a course of stereotactic body radiation therapy treatment utilising audiovisual biofeedback breathing guidance. Breathing motion results for both abdominal wall motion and tumour motion are included. Patient 1 demonstrated improved breathing motion regularity with audiovisual biofeedback. A training effect was also observed. PMID:26247520

  8. Evaluating an Experimental Audio-Visual Module Programmed to Teach a Basic Anatomical and Physiological System.

    ERIC Educational Resources Information Center

    Federico, Pat-Anthony

    The learning efficiency and effectiveness of teaching an anatomical and physiological system to Air Force enlisted trainees utilizing an experimental audiovisual programed module was compared to that of a commercial linear programed text. It was demonstrated that the audiovisual programed approach to training was more efficient than and equally as…

  9. A Team Approach to Developing an Audiovisual Single-Concept Instructional Unit.

    ERIC Educational Resources Information Center

    Brooke, Martha L.; And Others

    1974-01-01

    In 1973, the National Medical Audiovisual Center undertook the production of several audiovisual teaching units, each addressing a single-concept, using a team approach. The production team on the unit "Left Ventricle Catheterization" were a physiologist acting as content specialist, an artist and film producer as production specialist, and an…

  10. Audiovisual Materials in Archives--A General Picture of Their Role and Function.

    ERIC Educational Resources Information Center

    Booms, Hans

    Delivered on behalf of the International Council of Archives (ICA), this paper briefly discusses the challenge inherent in the processing and preservation of audiovisual materials, the types of media included in the term audiovisual, the concerns of professional archivists, the development and services of archival institutions, the utilization of…

  11. Planning Schools for Use of Audio-Visual Materials. No. 1--Classrooms, 3rd Edition.

    ERIC Educational Resources Information Center

    National Education Association, Washington, DC.

    Intended to inform school board administrators and teachers of the current (1958) thinking on audio-visual instruction for use in planning new buildings, purchasing equipment, and planning instruction. Attention is given to the problem of overcoming obstacles to the incorporation of audio-visual materials into the curriculum. Discussion includes--(1)…

  12. Children with a History of SLI Show Reduced Sensitivity to Audiovisual Temporal Asynchrony: An ERP Study

    ERIC Educational Resources Information Center

    Kaganovich, Natalya; Schumaker, Jennifer; Leonard, Laurence B.; Gustafson, Dana; Macias, Danielle

    2014-01-01

    Purpose: The authors examined whether school-age children with a history of specific language impairment (H-SLI), their peers with typical development (TD), and adults differ in sensitivity to audiovisual temporal asynchrony and whether such difference stems from the sensory encoding of audiovisual information. Method: Fifteen H-SLI children, 15…

  13. Interactions between mood and the structure of semantic memory: event-related potentials evidence.

    PubMed

    Pinheiro, Ana P; del Re, Elisabetta; Nestor, Paul G; McCarley, Robert W; Gonçalves, Óscar F; Niznikiewicz, Margaret

    2013-06-01

    Recent evidence suggests that affect acts as a modulator of cognitive processes and, in particular, that induced mood has an effect on the way semantic memory is used on-line. We used event-related potentials (ERPs) to examine affective modulation of semantic information processing under three different moods: neutral, positive and negative. Fifteen subjects read 324 pairs of sentences after a mood induction procedure with 30 pictures of neutral, 30 pictures of positive and 30 pictures of negative valence: 108 sentences were read in each mood induction condition. Sentences ended with three word types: expected words, within-category violations, and between-category violations. N400 amplitude was measured to the three word types under each mood induction condition. Under neutral mood, a congruency effect (more negative N400 amplitude for unexpected relative to expected endings) and a category effect (more negative N400 amplitude for between- than for within-category violations) were observed. Also, results showed differences in N400 amplitude for both within- and between-category violations as a function of mood: while positive mood tended to facilitate the integration of unexpected but related items, negative mood made their integration as difficult as that of unexpected and unrelated items. These findings suggest a differential impact of mood on access to long-term semantic memory during sentence comprehension.

  14. Hands typing what hands do: Action-semantic integration dynamics throughout written verb production.

    PubMed

    García, Adolfo M; Ibáñez, Agustín

    2016-04-01

    Processing action verbs, in general, and manual action verbs, in particular, involves activations in gross and hand-specific motor networks, respectively. While this is well established for receptive language processes, no study has explored action-semantic integration during written production. Moreover, little is known about how such crosstalk unfolds from motor planning to execution. Here we address both issues through our novel "action semantics in typing" paradigm, which allows keystroke operations to be timed during word typing. Specifically, we created a primed-verb-copying task involving manual action verbs, non-manual action verbs, and non-action verbs. Motor planning processes were indexed by first-letter lag (the lapse between target onset and first keystroke), whereas execution dynamics were assessed considering whole-word lag (the lapse between first and last keystroke). Each phase was differently delayed by action verbs. When these were processed for over one second, interference was strong and magnified by effector compatibility during programming, but weak and effector-blind during execution. Instead, when they were processed for less than 900 ms, interference was reduced by effector compatibility during programming and it faded during execution. Finally, typing was facilitated by prime-target congruency, irrespective of the verbs' motor content. Thus, action-verb semantics seems to extend beyond its embodied foundations, involving conceptual dynamics not tapped by classical reaction-time measures. These findings are compatible with non-radical models of language embodiment and with predictions of event coding theory. PMID:26803393
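
    The two latencies defined in this abstract are simple differences of timestamps. As a minimal sketch (hypothetical timestamps, not data from the study), they might be computed like this:

```python
def keystroke_lags(target_onset, keystroke_times):
    """Compute the two typing latencies described in the paradigm:
    first-letter lag = lapse between target onset and first keystroke;
    whole-word lag   = lapse between first and last keystroke."""
    if not keystroke_times:
        raise ValueError("no keystrokes recorded")
    first, last = keystroke_times[0], keystroke_times[-1]
    return {"first_letter_lag": first - target_onset,
            "whole_word_lag": last - first}

# Hypothetical trial: target shown at t = 0 ms, five keystrokes recorded
lags = keystroke_lags(0.0, [620.0, 790.0, 930.0, 1110.0, 1290.0])
print(lags)  # {'first_letter_lag': 620.0, 'whole_word_lag': 670.0}
```

    Splitting the trial this way is what lets the paradigm separate motor planning (first-letter lag) from execution dynamics (whole-word lag).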

  15. Event-related potentials with the Stroop colour-word task: timing of semantic conflict.

    PubMed

    Zurrón, Montserrat; Pouso, María; Lindín, Mónica; Galdo, Santiago; Díaz, Fernando

    2009-06-01

    Event-Related Potentials (ERPs) elicited by congruent and incongruent colour-word stimuli of a Stroop paradigm, in a task in which participants were required to judge the congruence/incongruence of the two dimensions of the stimuli, were recorded in order to study the timing of the semantic conflict. The reaction time to colour-word incongruent stimuli was significantly longer than the reaction time to congruent stimuli (the Stroop effect). A temporal Principal Components Analysis was applied to the data to identify the ERP components. Three positive components were identified in the 300-600 ms interval in response to the congruent and incongruent stimuli: First P3, P3b and PSW. The factor scores corresponding to the First P3 and P3b components were significantly smaller for the incongruent stimuli than for the congruent stimuli. No differences between stimuli were observed in the factor scores corresponding to the PSW or in the ERP latencies. We conclude that the temporal locus of the semantic conflict, which intervenes in generating the Stroop effect, may occur within the time interval in which the First P3 and P3b components are identified, i.e. at approximately 300-450 ms post-stimulus. We suggest that the semantic conflict delays the start of the response selection process, which explains the longer reaction time to incongruent stimuli.
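
    The behavioral Stroop effect reported in this record is just the mean reaction-time difference between incongruent and congruent trials. A minimal sketch, with made-up reaction times rather than the study's data:

```python
from statistics import mean

# Hypothetical per-trial reaction times (ms) for the two stimulus types
rt_congruent = [512, 498, 530, 505, 521]
rt_incongruent = [588, 602, 575, 610, 590]

# Stroop effect: slower responding to incongruent colour-word stimuli
stroop_effect = mean(rt_incongruent) - mean(rt_congruent)
print(round(stroop_effect, 1))  # 79.8 ms for these illustrative values
```

    The ERP analysis in the abstract then asks where in the 300-600 ms window (First P3, P3b, PSW) this behavioral cost originates.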

  16. Musical expertise is related to altered functional connectivity during audiovisual integration.

    PubMed

    Paraskevopoulos, Evangelos; Kraneburg, Anja; Herholz, Sibylle Cornelia; Bamidis, Panagiotis D; Pantev, Christo

    2015-10-01

    The present study investigated the cortical large-scale functional network underpinning audiovisual integration via magnetoencephalographic recordings. The reorganization of this network related to long-term musical training was investigated by comparing musicians to nonmusicians. Connectivity was calculated on the basis of the estimated mutual information of the sources' activity, and the corresponding networks were statistically compared. Nonmusicians' results indicated that the cortical network associated with audiovisual integration supports visuospatial processing and attentional shifting, whereas a sparser network, related to spatial awareness supports the identification of audiovisual incongruences. In contrast, musicians' results showed enhanced connectivity in regions related to the identification of auditory pattern violations. Hence, nonmusicians rely on the processing of visual clues for the integration of audiovisual information, whereas musicians rely mostly on the corresponding auditory information. The large-scale cortical network underpinning multisensory integration is reorganized due to expertise in a cognitive domain that largely involves audiovisual integration, indicating long-term training-related neuroplasticity.

  17. Musical expertise is related to altered functional connectivity during audiovisual integration

    PubMed Central

    Paraskevopoulos, Evangelos; Kraneburg, Anja; Herholz, Sibylle Cornelia; Bamidis, Panagiotis D.; Pantev, Christo

    2015-01-01

The present study investigated the cortical large-scale functional network underpinning audiovisual integration via magnetoencephalographic recordings. The reorganization of this network related to long-term musical training was investigated by comparing musicians to nonmusicians. Connectivity was calculated on the basis of the estimated mutual information of the sources’ activity, and the corresponding networks were statistically compared. Nonmusicians’ results indicated that the cortical network associated with audiovisual integration supports visuospatial processing and attentional shifting, whereas a sparser network, related to spatial awareness, supports the identification of audiovisual incongruences. In contrast, musicians’ results showed enhanced connectivity in regions related to the identification of auditory pattern violations. Hence, nonmusicians rely on the processing of visual cues for the integration of audiovisual information, whereas musicians rely mostly on the corresponding auditory information. The large-scale cortical network underpinning multisensory integration is reorganized due to expertise in a cognitive domain that largely involves audiovisual integration, indicating long-term training-related neuroplasticity. PMID:26371305

  18. Seeing and hearing rotated faces: influences of facial orientation on visual and audiovisual speech recognition.

    PubMed

    Jordan, T R; Bevan, K

    1997-04-01

It is well known that facial orientation affects the processing of static facial information, but similar effects on the processing of visual speech have yet to be explored fully. Three experiments are reported in which the effects of facial orientation on visual speech processing were examined using a talking face presented at 8 orientations through 360 degrees. Auditory and visual forms of the syllables /ba/, /bi/, /ga/, /gi/, /ma/, /mi/, /ta/, and /ti/ were used to produce the following speech stimulus types: auditory, visual, congruent audiovisual, and incongruent audiovisual. Facial orientation did not affect identification of visual speech per se or the near-perfect accuracy of auditory speech report with congruent audiovisual speech stimuli. However, facial orientation did affect the accuracy of auditory speech report with incongruent audiovisual speech stimuli. Moreover, the nature of this effect depended on the type of incongruent visual speech used. Implications for the processing of visual and audiovisual speech are discussed. PMID:9104001

  20. Problem Solving with General Semantics.

    ERIC Educational Resources Information Center

    Hewson, David

    1996-01-01

    Discusses how to use general semantics formulations to improve problem solving at home or at work--methods come from the areas of artificial intelligence/computer science, engineering, operations research, and psychology. (PA)

  1. Distributed semantic networks and CLIPS

    NASA Technical Reports Server (NTRS)

    Snyder, James; Rodriguez, Tony

    1991-01-01

    Semantic networks of frames are commonly used as a method of reasoning in many problems. In most of these applications the semantic network exists as a single entity in a single process environment. Advances in workstation hardware provide support for more sophisticated applications involving multiple processes, interacting in a distributed environment. In these applications the semantic network may well be distributed over several concurrently executing tasks. This paper describes the design and implementation of a frame based, distributed semantic network in which frames are accessed both through C Language Integrated Production System (CLIPS) expert systems and procedural C++ language programs. The application area is a knowledge based, cooperative decision making model utilizing both rule based and procedural experts.
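
    A frame in such a network bundles named slots with values and inherits unfilled slots through is-a links. The paper's frames live in CLIPS and C++ across distributed processes; the sketch below (all names invented) shows only the core single-process inheritance lookup, as a minimal illustration:

```python
class Frame:
    """A node in a semantic network: local slots plus an optional parent (is-a link)."""
    def __init__(self, name, parent=None, **slots):
        self.name = name
        self.parent = parent
        self.slots = slots

    def get(self, slot):
        """Look up a slot locally, then follow is-a links upward."""
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        raise KeyError(f"{self.name} has no slot {slot!r}")

vehicle = Frame("vehicle", wheels=4, powered=True)
truck = Frame("truck", parent=vehicle, payload_kg=5000)
tanker = Frame("tanker", parent=truck, cargo="liquid")

print(tanker.get("cargo"))   # local slot -> "liquid"
print(tanker.get("wheels"))  # inherited from vehicle through truck -> 4
```

    In the distributed setting described above, the same lookup would cross process boundaries, with frames resolved through CLIPS rules or C++ accessors rather than a single in-memory object graph.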

  2. Semantic priming from crowded words.

    PubMed

    Yeh, Su-Ling; He, Sheng; Cavanagh, Patrick

    2012-06-01

    Vision in a cluttered scene is extremely inefficient. This damaging effect of clutter, known as crowding, affects many aspects of visual processing (e.g., reading speed). We examined observers' processing of crowded targets in a lexical decision task, using single-character Chinese words that are compact but carry semantic meaning. Despite being unrecognizable and indistinguishable from matched nonwords, crowded prime words still generated robust semantic-priming effects on lexical decisions for test words presented in isolation. Indeed, the semantic-priming effect of crowded primes was similar to that of uncrowded primes. These findings show that the meanings of words survive crowding even when the identities of the words do not, suggesting that crowding does not prevent semantic activation, a process that may have evolved in the context of a cluttered visual environment.

  3. NASA and The Semantic Web

    NASA Technical Reports Server (NTRS)

    Ashish, Naveen

    2005-01-01

    We provide an overview of several ongoing NASA endeavors based on concepts, systems, and technology from the Semantic Web arena. Indeed NASA has been one of the early adopters of Semantic Web Technology and we describe ongoing and completed R&D efforts for several applications ranging from collaborative systems to airspace information management to enterprise search to scientific information gathering and discovery systems at NASA.

  4. Visual Mislocalization of Moving Objects in an Audiovisual Event

    PubMed Central

    Kawachi, Yousuke

    2016-01-01

    The present study investigated the influence of an auditory tone on the localization of visual objects in the stream/bounce display (SBD). In this display, two identical visual objects move toward each other, overlap, and then return to their original positions. These objects can be perceived as either streaming through or bouncing off each other. In this study, the closest distance between object centers on opposing trajectories and tone presentation timing (none, 0 ms, ± 90 ms, and ± 390 ms relative to the instant for the closest distance) were manipulated. Observers were asked to judge whether the two objects overlapped with each other and whether the objects appeared to stream through, bounce off each other, or reverse their direction of motion. A tone presented at or around the instant of the objects’ closest distance biased judgments toward “non-overlapping,” and observers overestimated the physical distance between objects. A similar bias toward direction change judgments (bounce and reverse, not stream judgments) was also observed, which was always stronger than the non-overlapping bias. Thus, these two types of judgments were not always identical. Moreover, another experiment showed that it was unlikely that this observed mislocalization could be explained by other previously known mislocalization phenomena (i.e., representational momentum, the Fröhlich effect, and a turn-point shift). These findings indicate a new example of crossmodal mislocalization, which can be obtained without temporal offsets between audiovisual stimuli. The mislocalization effect is also specific to a more complex stimulus configuration of objects on opposing trajectories, with a tone that is presented simultaneously. The present study promotes an understanding of relatively complex audiovisual interactions beyond simple one-to-one audiovisual stimuli used in previous studies. PMID:27111759

  5. Utilization of audio-visual aids by family welfare workers.

    PubMed

    Naik, V R; Jain, P K; Sharma, B B

    1977-01-01

    Communication efforts have been an important component of the Indian Family Planning Welfare Program since its inception. However, its chief interests in its early years were clinical, until the adoption of the extension approach in 1963. Educational materials were developed, especially in the period 1965-8, to fit mass, group meeting and home visit approaches. Audiovisual aids were developed for use by extension workers, who had previously relied entirely on verbal approaches. This paper examines their use. A questionnaire was designed for workers in motivational programs at 3 levels: Village Level (Family Planning Health Assistant, Auxilliary Nurse-Midwife, Dias), Block Level (Public Health Nurse, Lady Health Visitor, Block Extension Educator), and District (District Extension Educator, District Mass Education and Information Officer). 3 Districts were selected from each State on the basis of overall family planning performance during 1970-2 (good, average, or poor). Units of other agencies were also included on the same basis. Findings: 1) Workers in all 3 categories preferred individual contacts over group meetings or mass approach. 2) 56-64% said they used audiovisual aids "sometimes" (when available). 25% said they used them "many times" and only 15.9% said "rarely." 3) More than 1/2 of workers in each category said they were not properly oriented toward the use of audiovisual aids. Nonavailability of the aids in the market was also cited. About 1/3 of village level and 1/2 of other workers said that the materials were heavy and liable to be damaged. Complexity, inaccuracy and confusion in use were not widely cited (less than 30%).

  6. Development of sensitivity to audiovisual temporal asynchrony during midchildhood.

    PubMed

    Kaganovich, Natalya

    2016-02-01

Temporal proximity is one of the key factors determining whether events in different modalities are integrated into a unified percept. Sensitivity to audiovisual temporal asynchrony has been studied in adults in great detail. However, how such sensitivity matures during childhood is poorly understood. We examined perception of audiovisual temporal asynchrony in 7- to 8-year-olds, 10- to 11-year-olds, and adults by using a simultaneity judgment task (SJT). Additionally, we evaluated whether nonverbal intelligence, verbal ability, attention skills, or age influenced children's performance. On each trial, participants saw an explosion-shaped figure and heard a 2-kHz pure tone. These occurred at the following stimulus onset asynchronies (SOAs): 0, 100, 200, 300, 400, and 500 ms. In half of all trials, the visual stimulus appeared first (VA condition), and in the other half, the auditory stimulus appeared first (AV condition). Both groups of children were significantly more likely than adults to perceive asynchronous events as synchronous at all SOAs exceeding 100 ms, in both VA and AV conditions. Furthermore, only adults exhibited a significant shortening of reaction time (RT) at long SOAs compared to medium SOAs. Sensitivities to the VA and AV temporal asynchronies showed different developmental trajectories, with 10- to 11-year-olds outperforming 7- to 8-year-olds at the 300- to 500-ms SOAs, but only in the AV condition. Lastly, age was the only predictor of children's performance on the SJT. These results provide an important baseline against which children with developmental disorders associated with impaired audiovisual temporal function, such as autism, specific language impairment, and dyslexia, may be compared. PMID:26569563

  7. Children's Appreciation of Humor: A Test of the Cognitive-Congruency Principle.

    ERIC Educational Resources Information Center

    McGhee, Paul E.

    According to the cognitive-congruency principle, humor appreciation peaks when the cognitive demands of the stimulus are congruent with the cognitive level of the child. This study tested the principle with jokes based on concepts associated with concrete operational thinking, conservation of mass and weight. This method provides a satisfactory…

  8. Relationships Between Attitude Congruency and Attraction to Candidates in Teacher Selection.

    ERIC Educational Resources Information Center

    Merritt, Daniel L.

    A study explored the relationships between 1) congruence of attitudes between principals and teacher candidates with different job qualifications, and 2) principals' attraction to teacher candidates. Concern was to test relationships expressed in Newcomb's ABX system of interpersonal attraction. In a simulated selection situation, elementary…

  9. Cultural Congruence in the Social Organization of a Reading Lesson with Hawaiian Students.

    ERIC Educational Resources Information Center

    Au, Kathryn Hu-pei

    Microethnographic analysis of a videotaped reading lesson given by a Hawaiian teacher to four Hawaiian second grade students was conducted to determine whether elements of cultural congruence could be identified in the patterns of teacher-pupil interaction. Participation structures in the reading lesson were found to resemble those in talk story,…

  10. Exploring the Congruence between the Lesotho Junior Secondary Geography Curriculum and Environmental Education

    ERIC Educational Resources Information Center

    Raselimo, Mohaeka; Irwin, Pat; Wilmot, Di

    2013-01-01

In this article, we analyse the Lesotho junior secondary geography curriculum document with the purpose of exploring the congruence between geography and environmental education. The study is based on a curriculum reform process introduced by the Lesotho Environmental Education Support Project (LEESP) in 2001. We draw theoretical insights from…

  11. Congruency as a Nonspecific Perceptual Property Contributing to Newborns' Face Preference

    ERIC Educational Resources Information Center

    Cassia, Viola Macchi; Valenza, Eloisa; Simion, Francesca; Leo, Irene

    2008-01-01

    Past research has shown that top-heaviness is a perceptual property that plays a crucial role in triggering newborns' preference toward faces. The present study examined the contribution of a second configural property, "congruency," to newborns' face preference. Experiments 1 and 2 demonstrated that when embedded in nonfacelike stimuli,…

  12. Sex Role Identity and Career Indecision as Predictors of Holland's Congruence.

    ERIC Educational Resources Information Center

    Eells, Gregory T.; Romans, John S. C.

    A study examined the extent to which sex role identity and career indecision could be used as predictors of individuals' congruence with their environment. Holland's Self-Directed Search, the Bem Sex Role Inventory, and the Career Decision Scale were administered to 84 male and 42 female undergraduates who had declared Animal Science majors at a…

  13. Clinical information systems end user satisfaction: the expectations and needs congruencies effects.

    PubMed

    Karimi, Faezeh; Poo, Danny C C; Tan, Yung Ming

    2015-02-01

    Prior research on information systems (IS) shows that users' attitudes and continuance intentions are associated with their satisfaction with information systems. As such, the increasing amount of investments in clinical information systems (CIS) signifies the importance of understanding CIS end users' (i.e., clinicians) satisfaction. In this study, we develop a conceptual framework to identify the cognitive determinants of clinicians' satisfaction formation. The disconfirmation paradigm serves as the core of the framework. The expectations and needs congruency models are the two models of this paradigm, and perceived performance is the basis of the comparisons in the models. The needs and expectations associated with the models are also specified. The survey methodology is adopted in this study to empirically validate the proposed research model. The survey is conducted at a public hospital and results in 112 and 203 valid responses (56% and 98% response rates) from doctors and nurses respectively. The partial least squares (PLS) method is used to analyze the data. The results of the study show that perceived CIS performance is the most influential factor on clinicians' (i.e., doctors and nurses) satisfaction. Doctors' expectations congruency is the next significant determinant of their satisfaction. Contrary to most previous findings, nurses' expectations and expectations congruency do not show a significant effect on their satisfaction. However, the needs congruency is found to significantly affect nurses' satisfaction. PMID:25542853

  14. Isolating a Mediated Route for Response Congruency Effects in Task Switching

    ERIC Educational Resources Information Center

    Schneider, Darryl W.

    2015-01-01

    Response congruency effects in task switching reflect worse performance for incongruent targets associated with different responses across tasks than for congruent targets associated with the same response. In the present study, the author investigated whether the effects can be produced solely by a mediated route for response selection, whereby…

  15. Examining Congruence within School-Family Partnerships: Definition, Importance, and Current Measurement Approaches

    ERIC Educational Resources Information Center

    Glueck, Courtney L.; Reschly, Amy L.

    2014-01-01

    The purpose of this article is to explore the construct of congruence, particularly with regard to school-family collaboration and partnerships. An in-depth review of the empirical and theoretical literature supporting a shift in focus from encouraging family involvement to creating effective school-family partnerships is presented, followed by an…

  16. Development and Field Test of the Multiple Intelligences Learning Instruction Congruency Impact Scale

    ERIC Educational Resources Information Center

    Peifer, Nancy

    2012-01-01

    The purpose of this study was to contribute to the academic discussion regarding the validity of Multiple Intelligences (MI) theory through focusing on the validity of an important construct embedded in the theory, that of congruence between instructional style and preferred MI style for optimal learning. Currently there is insufficient empirical…

  17. Interpersonal Congruence, Transactive Memory, and Feedback Processes: An Integrative Model of Group Learning

    ERIC Educational Resources Information Center

    London, Manuel; Polzer, Jeffrey T.; Omoregie, Heather

    2005-01-01

    This article presents a multilevel model of group learning that focuses on antecedents and consequences of interpersonal congruence, transactive memory, and feedback processes. The model holds that members' self-verification motives and situational conditions (e.g., member diversity and task demands) give rise to identity negotiation behaviors…

  18. Interaction between Phonemic Abilities and Syllable Congruency Effect in Young Readers

    ERIC Educational Resources Information Center

    Chetail, Fabienne; Mathey, Stephanie

    2013-01-01

    This study investigated whether and to what extent phonemic abilities of young readers (Grade 5) influence syllabic effects in reading. More precisely, the syllable congruency effect was tested in the lexical decision task combined with masked priming in eleven-year-old children. Target words were preceded by a pseudo-word prime sharing the first…

  19. The Effect of Instructional Congruence on Students' Interest towards Learning Science

    ERIC Educational Resources Information Center

    Zain, Ahmad Nurulazam Md

    2010-01-01

The research examined the effect of a teaching strategy emphasizing instructional congruence on students' interest towards learning science. This study was conducted in three "low performing" secondary schools in Penang, Malaysia. There were 214 students involved in this study. A questionnaire was utilized to collect data on…

  20. Does Discourse Congruence Influence Spoken Language Comprehension before Lexical Association? Evidence from Event-Related Potentials

    ERIC Educational Resources Information Center

    Boudewyn, Megan A.; Gordon, Peter C.; Long, Debra; Polse, Lara; Swaab, Tamara Y.

    2012-01-01

    The goal of this study was to examine how lexical association and discourse congruence affect the time course of processing incoming words in spoken discourse. In an event-related potential (ERP) norming study, we presented prime-target pairs in the absence of a sentence context to obtain a baseline measure of lexical priming. We observed a…

  1. Congruence of Self-Reported Medications with Pharmacy Prescription Records in Low-Income Older Adults

    ERIC Educational Resources Information Center

    Caskie, Grace I. L.; Willis, Sherry L.

    2004-01-01

    Purpose: This study examined the congruence of self-reported medications with computerized pharmacy records. Design and Methods: Pharmacy records and self-reported medications were obtained for 294 members of a state pharmaceutical assistance program who also participated in ACTIVE, a clinical trial on cognitive training in nondemented elderly…

  2. Person-Environment Congruence and Personality Domains in the Prediction of Job Performance and Work Quality

    ERIC Educational Resources Information Center

    Kieffer, Kevin M.; Schinka, John A.; Curtiss, Glenn

    2004-01-01

    This study examined the contributions of the 5-Factor Model (FFM; P. T. Costa & R. R. McCrae, 1992) and RIASEC (J. L. Holland, 1994) constructs of consistency, differentiation, and person-environment congruence in predicting job performance ratings in a large sample (N = 514) of employees. Hierarchical regression analyses conducted separately by…

  3. The Beck Depression Inventory and Research Diagnostic Criteria: Congruence in an Older Population.

    ERIC Educational Resources Information Center

    Gallagher, Dolores; And Others

    1983-01-01

    Examined the congruence between conventional cutoff scores on the Beck Depression Inventory (BDI) and selected diagnostic classifications of the Research Diagnostic Criteria in a sample of 102 elders seeking psychological treatment. Findings supported the utility of the BDI as a screening instrument for identification of clinically depressed…

  4. The Adolescent-Parent Career Congruence Scale: Development and Initial Validation

    ERIC Educational Resources Information Center

    Sawitri, Dian R.; Creed, Peter A.; Zimmer-Gembeck, Melanie J.

    2013-01-01

    Although there is a growing interest in the discrepancy between parents and their adolescent children in relation to career expectations, there is no existing, psychometrically sound scale that directly measures adolescent-parent career congruence or incongruence. This study reports the development and initial validation of the Adolescent-Parent…

  5. Congruences for a Class of Alternating Lacunary Sums of Binomial Coefficients

    NASA Astrophysics Data System (ADS)

    Dilcher, Karl

    2007-10-01

    An 1876 theorem of Hermite, later extended by Bachmann, gives congruences modulo primes for lacunary sums over the rows of Pascal's triangle. This paper gives an analogous result for alternating sums over a certain class of rows. The proof makes use of properties of certain linear recurrences.
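
    The abstract does not reproduce Hermite's statement, but the flavor of such congruences is easy to check numerically. The identity verified below is not the paper's theorem; it is a standard consequence of summing powers of the nonzero residues mod a prime p: the lacunary sum S_r(n) = Σ C(n,k) over k ≡ r (mod p−1) satisfies S_r(n) ≡ −Σ_{a=1}^{p−1} a^(p−1−r) (1+a)^n (mod p).

```python
from math import comb

def lacunary_sum(n, r, p):
    """Sum of C(n, k) over k ≡ r (mod p-1), with 0 <= r < p-1 and 0 <= k <= n."""
    return sum(comb(n, k) for k in range(r, n + 1, p - 1))

def character_sum(n, r, p):
    """Sum over nonzero residues a of a^(p-1-r) * (1+a)^n, reduced mod p."""
    return sum(pow(a, p - 1 - r, p) * pow(1 + a, n, p) for a in range(1, p)) % p

# Verify the congruence S_r(n) + character_sum ≡ 0 (mod p) over many cases.
for p in (3, 5, 7):
    for n in range(1, 30):
        for r in range(p - 1):
            assert (lacunary_sum(n, r, p) + character_sum(n, r, p)) % p == 0
print("congruence verified")
```

    The proof is two lines: Σ_{a=1}^{p−1} a^m ≡ −1 (mod p) when (p−1) | m and ≡ 0 otherwise, so expanding (1+a)^n by the binomial theorem picks out exactly the lacunary terms, each with weight −1.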

  6. Topic Congruence and Topic Interest: How Do They Affect Second Language Reading Comprehension?

    ERIC Educational Resources Information Center

    Lee, Sang-Ki

    2009-01-01

    Because human memory is largely reconstructive, people tend to reorganize and reevaluate an event in a way that is coherent to the truth values held in their belief system. This study investigated the role of topic congruence (defined as whether the reading content corresponds with readers' prior beliefs towards a contentious topic) in second…

  7. The Effects of Role Congruence and Role Conflict on Work, Marital, and Life Satisfaction

    ERIC Educational Resources Information Center

    Perrone, Kristin M.; Webb, L. Kay; Blalock, Rachel H.

    2005-01-01

    The impact of role congruence and role conflict on work, marital, and life satisfaction was studied using Super's life-span, life-space theory. A conceptual model of relationships between these variables was proposed, and gender differences were examined. Participants were 35 male and 60 female college graduates who completed surveys by mail.…

  8. Cultural Congruence, Strength, and Type: Relationships to Effectiveness. ASHE 1985 Annual Meeting Paper.

    ERIC Educational Resources Information Center

    Cameron, Kim S.

    The relationship between the congruence, strength, and type of organizational culture and organizational effectiveness was studied, based on questionnaire responses by 3,406 administrators, faculty department heads, and trustees from 334 colleges and universities. Respondents rated the extent to which certain characteristics were present at their…

  10. Examining the Relationships among Coaching Staff Diversity, Perceptions of Diversity, Value Congruence, and Life Satisfaction

    ERIC Educational Resources Information Center

    Cunningham, George B.

    2009-01-01

    The purpose of this study was to examine relationships among coaching staff diversity, perceptions of diversity, value congruence, and life satisfaction. Data were collected from 71 coaching staffs (N = 196 coaches). Observed path analysis was used to examine the study predictions. Results indicate that actual staff diversity was positively…

  11. CONGR: A FORTRAN IV Program to Compute Coefficients of Congruence for Factor Analysis

    ERIC Educational Resources Information Center

    Myers, Donald E.

    1976-01-01

    A Fortran IV program which computes either of the coefficients of congruence (psi or phi) used in comparison of factors in factor analysis is presented. Output consists of a non-symmetric matrix of factor coefficients. Listings of the program, results and test data are available. (Author/JKS)
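
    The Fortran source is not shown, so as an assumption about the formula involved: the coefficient of congruence in its standard form (Tucker's phi) is the uncentered cosine between two factor loading vectors. A minimal Python rendering with invented loadings:

```python
import math

def congruence_coefficient(x, y):
    """Tucker's congruence coefficient: uncentered cosine of two loading vectors."""
    num = sum(a * b for a, b in zip(x, y))
    den = math.sqrt(sum(a * a for a in x) * sum(b * b for b in y))
    return num / den

factor_a = [0.8, 0.7, 0.1, 0.0]
factor_b = [0.75, 0.72, 0.15, 0.05]  # similar loading pattern
factor_c = [0.0, 0.1, 0.9, 0.8]      # different pattern

print(round(congruence_coefficient(factor_a, factor_b), 3))
print(congruence_coefficient(factor_a, factor_b) > congruence_coefficient(factor_a, factor_c))
```

    Values near 1 indicate factors with essentially the same loading pattern across two analyses; unlike a correlation, phi is not invariant to adding a constant to the loadings, which is why it is preferred for factor matching.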

  12. Life Course Stage in Young Adulthood and Intergenerational Congruence in Family Attitudes

    ERIC Educational Resources Information Center

    Bucx, Freek; Raaijmakers, Quinten; van Wel, Frits

    2010-01-01

    We investigated how intergenerational congruence in family-related attitudes depends on life course stage in young adulthood. Recent data from the Netherlands Kinship Panel Study were used; the present sample included 2,041 dyads of young adults and their parents. Findings are discussed in terms of the elasticity in intergenerational attitude…

  13. The Relationship of Congruence of Spouses' Personal Constructs and Reported Marital Success.

    ERIC Educational Resources Information Center

    Weigel, Richard G.; And Others

Based on Kelly's theory of Personal Constructs, it was hypothesized that there is a positive relationship between the degree of congruence of spouses' Personal Constructs (PCs) and their reported marital success. Twenty-four couples, married from six months to 31 years, volunteered as Ss. To assess PCs, each S was administered a 40-dimension scale…

  14. Person-Environment Interaction in an Evolving Profession: Examining the Congruence and Job Satisfaction of Counselors

    ERIC Educational Resources Information Center

    Beverly, William D., Jr.

    2010-01-01

    John Holland's theory considers congruence between the vocational interests of the individual and characteristics of the work environment to be the primary predictor of job satisfaction and stability. The managed care model has markedly changed the demands of the work environment of mental health counselors. Changes in the way services are…

  15. Action-outcome learning and prediction shape the window of simultaneity of audiovisual outcomes.

    PubMed

    Desantis, Andrea; Haggard, Patrick

    2016-08-01

To form a coherent representation of the objects around us, the brain must group the different sensory features composing these objects. Here, we investigated whether actions contribute to this grouping process. In particular, we assessed whether action-outcome learning and prediction contribute to audiovisual temporal binding. Participants were presented with two audiovisual pairs: one pair was triggered by a left action, and the other by a right action. In a later test phase, the audio and visual components of these pairs were presented at different onset times. Participants judged whether they were simultaneous or not. To assess the role of action-outcome prediction on audiovisual simultaneity, each action triggered either the same audiovisual pair as in the learning phase ('predicted' pair), or the pair that had previously been associated with the other action ('unpredicted' pair). We found that the time window within which auditory and visual events appeared simultaneous increased for predicted compared to unpredicted pairs. However, no change in audiovisual simultaneity was observed when audiovisual pairs followed visual cues, rather than voluntary actions. This suggests that only action-outcome learning promotes temporal grouping of audio and visual effects. In a second experiment we observed that changes in audiovisual simultaneity depend not only on our ability to predict what outcomes our actions generate, but also on learning the delay between the action and the multisensory outcome. When participants learned that the delay between action and audiovisual pair was variable, the window of audiovisual simultaneity for predicted pairs increased, relative to a fixed action-outcome pair delay. This suggests that participants learn action-based predictions of audiovisual outcome, and adapt their temporal perception of outcome events based on such predictions. PMID:27131076

  17. Semantic preview benefit during reading.

    PubMed

    Hohenstein, Sven; Kliegl, Reinhold

    2014-01-01

    Word features in parafoveal vision influence eye movements during reading. The question of whether readers extract semantic information from parafoveal words was studied in 3 experiments by using a gaze-contingent display change technique. Subjects read German sentences containing 1 of several preview words that were replaced by a target word during the saccade to the preview (boundary paradigm). In the 1st experiment the preview word was semantically related or unrelated to the target. Fixation durations on the target were shorter for semantically related than unrelated previews, consistent with a semantic preview benefit. In the 2nd experiment, half the sentences were presented following the rules of German spelling (i.e., previews and targets were printed with an initial capital letter), and the other half were presented completely in lowercase. A semantic preview benefit was obtained under both conditions. In the 3rd experiment, we introduced 2 further preview conditions, an identical word and a pronounceable nonword, while also manipulating the text contrast. Whereas the contrast had negligible effects, fixation durations on the target were reliably different for all 4 types of preview. Semantic preview benefits were greater for pretarget fixations closer to the boundary (large preview space) and, although not as consistently, for long pretarget fixation durations (long preview time). The results constrain theoretical proposals about eye movement control in reading. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  18. Sources of Confusion in Infant Audiovisual Speech Perception Research.

    PubMed

    Shaw, Kathleen E; Bortfeld, Heather

    2015-01-01

    Speech is a multimodal stimulus, with information provided in both the auditory and visual modalities. The resulting audiovisual signal provides relatively stable, tightly correlated cues that support speech perception and processing in a range of contexts. Despite the clear relationship between spoken language and the moving mouth that produces it, there remains considerable disagreement over how sensitive early language learners-infants-are to whether and how sight and sound co-occur. Here we examine sources of this disagreement, with a focus on how comparisons of data obtained using different paradigms and different stimuli may serve to exacerbate misunderstanding. PMID:26696919

  20. Effects of spatial congruency on saccade and visual discrimination performance in a dual-task paradigm.

    PubMed

    Moehler, Tobias; Fiehler, Katja

    2014-12-01

    The present study investigated the coupling of selection-for-perception and selection-for-action during saccadic eye movement planning in three dual-task experiments. We focused on the effects of spatial congruency of saccade target (ST) location and discrimination target (DT) location and the time between ST-cue and Go-signal (SOA) on saccadic eye movement performance. In two experiments, participants performed a visual discrimination task at a cued location while programming a saccadic eye movement to a cued location. In the third experiment, the discrimination task was not cued and appeared at a random location. Spatial congruency of ST-location and DT-location resulted in enhanced perceptual performance irrespective of SOA. Perceptual performance in spatially incongruent trials was above chance, but only when the DT-location was cued. Saccade accuracy and precision were also affected by spatial congruency showing superior performance when the ST- and DT-location coincided. Saccade latency was only affected by spatial congruency when the DT-cue was predictive of the ST-location. Moreover, saccades consistently curved away from the incongruent DT-locations. Importantly, the effects of spatial congruency on saccade parameters only occurred when the DT-location was cued; therefore, results from experiments 1 and 2 are due to the endogenous allocation of attention to the DT-location and not caused by the salience of the probe. The SOA affected saccade latency showing decreasing latencies with increasing SOA. In conclusion, our results demonstrate that visuospatial attention can be voluntarily distributed upon spatially distinct perceptual and motor goals in dual-task situations, resulting in a decline of visual discrimination and saccade performance.

  1. Semantic photo synthesis

    NASA Astrophysics Data System (ADS)

    Johnson, Matthew; Brostow, G. J.; Shotton, J.; Kwatra, V.; Cipolla, R.

    2007-02-01

    Composite images are synthesized from existing photographs by artists who make concept art, e.g. storyboards for movies or architectural planning. Current techniques allow an artist to fabricate such an image by digitally splicing parts of stock photographs. While these images serve mainly to "quickly" convey how a scene should look, their production is laborious. We propose a technique that allows a person to design a new photograph with substantially less effort. This paper presents a method that generates a composite image when a user types in nouns, such as "boat" and "sand." The artist can optionally design an intended image by specifying other constraints. Our algorithm formulates the constraints as queries to search an automatically annotated image database. The desired photograph, not a collage, is then synthesized using graph-cut optimization, optionally allowing for further user interaction to edit or choose among alternative generated photos. Our results demonstrate our contributions of (1) a method of creating specific images with minimal human effort, and (2) a combined algorithm for automatically building an image library with semantic annotations from any photo collection.
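
    The composite is synthesized with graph-cut optimization. On a full image this requires a max-flow solver; for a one-dimensional strip of pixels, however, the same energy (a per-pixel data cost plus a smoothness penalty for each switch of source photograph) can be minimized exactly with Viterbi-style dynamic programming. A sketch under that simplification, with hypothetical costs rather than the paper's actual terms:

```python
def composite_labels(cost_a, cost_b, smoothness=1.0):
    """Choose, for each pixel of a 1-D strip, whether it comes from
    photo A or photo B, minimising data cost plus a smoothness penalty
    for every switch of source photo (exact dynamic programming)."""
    data = list(zip(cost_a, cost_b))
    best = list(data[0])      # best energy so far, ending in label A / B
    back = []                 # backpointers, one pair per later pixel
    for p in range(1, len(data)):
        step, new = [], []
        for lab in (0, 1):
            cands = [best[prev] + (smoothness if prev != lab else 0.0)
                     for prev in (0, 1)]
            prev = 0 if cands[0] <= cands[1] else 1
            new.append(cands[prev] + data[p][lab])
            step.append(prev)
        best = new
        back.append(step)
    lab = 0 if best[0] <= best[1] else 1
    labels = [lab]
    for step in reversed(back):   # trace the optimal seam back
        lab = step[lab]
        labels.append(lab)
    labels.reverse()
    return ['AB'[l] for l in labels]
```

    With left pixels cheap under photo A and right pixels cheap under photo B, the optimal seam falls once, in the middle of the strip.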

  2. Lexical and context effects in children's audiovisual speech recognition

    NASA Astrophysics Data System (ADS)

    Holt, Rachael; Kirk, Karen; Pisoni, David; Burckhartzmeyer, Lisa; Lin, Anna

    2005-09-01

    The Audiovisual Lexical Neighborhood Sentence Test (AVLNST), a new recorded speech recognition test for children with sensory aids, was administered in multiple presentation modalities to children with normal hearing and vision. Each sentence consists of three key words whose lexical difficulty is controlled according to the Neighborhood Activation Model (NAM) of spoken word recognition. According to NAM, the recognition of spoken words is influenced by two lexical factors: the frequency of occurrence of individual words in a language, and how phonemically similar the target word is to other words in the listener's lexicon. These predictions are based on auditory similarity only, and thus do not take into account how visual information can influence the perception of speech. Data from the AVLNST, together with those from recorded audiovisual versions of isolated word recognition measures, the Lexical Neighborhood Test and the Multisyllabic Lexical Neighborhood Test, were used to examine the influence of visual information on speech perception in children. Further, the influence of top-down processing on speech recognition was examined by evaluating performance on the recognition of words in isolation versus words in sentences. [Work supported by the American Speech-Language-Hearing Foundation, the American Hearing Research Foundation, and the NIDCD, T32 DC00012 to Indiana University.]
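
    The NAM similarity rule behind the test's lexical control is concrete enough to sketch: two words are neighbors if they differ by a single phoneme substitution, deletion, or addition. A minimal implementation, using letters as an illustrative stand-in for phoneme transcriptions:

```python
def is_neighbor(w1, w2):
    """True if the words differ by exactly one segment: a single
    substitution, deletion, or addition (edit distance one)."""
    if w1 == w2:
        return False
    if len(w1) == len(w2):
        return sum(a != b for a, b in zip(w1, w2)) == 1
    if abs(len(w1) - len(w2)) != 1:
        return False
    shorter, longer = sorted((w1, w2), key=len)
    return any(longer[:i] + longer[i + 1:] == shorter
               for i in range(len(longer)))

def neighborhood_density(word, lexicon):
    """Number of lexical neighbors of `word` in `lexicon` under the
    NAM one-phoneme rule."""
    return sum(is_neighbor(word, w) for w in lexicon)
```

    Words with many neighbors and low frequency are "hard" under NAM; words with few neighbors and high frequency are "easy".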

  3. Talker variability in audio-visual speech perception.

    PubMed

    Heald, Shannon L M; Nusbaum, Howard C

    2014-01-01

    A change in talker is a change in the context for the phonetic interpretation of acoustic patterns of speech. Different talkers have different mappings between acoustic patterns and phonetic categories, and listeners need to adapt to these differences. Despite this complexity, listeners are adept at comprehending speech in multiple-talker contexts, albeit at a slight but measurable performance cost (e.g., slower recognition). So far, this talker variability cost has been demonstrated only in audio-only speech. Other research in single-talker contexts has shown, however, that when listeners are able to see a talker's face, speech recognition is improved under adverse listening conditions (e.g., noise or distortion) that can increase uncertainty in the mapping between acoustic patterns and phonetic categories. Does seeing a talker's face reduce the cost of word recognition in multiple-talker contexts? We used a speeded word-monitoring task in which listeners made quick judgments about target word recognition in single- and multiple-talker contexts. Results show faster recognition performance in single-talker conditions than in multiple-talker conditions for both audio-only and audio-visual speech. However, recognition time in a multiple-talker context was slower in the audio-visual condition than in the audio-only condition. These results suggest that seeing a talker's face during speech perception may slow recognition by increasing the importance of talker identification, signaling to the listener that a change in talker has occurred. PMID:25076919

  4. Depth Cues and Perceived Audiovisual Synchrony of Biological Motion

    PubMed Central

    Silva, Carlos César; Mendonça, Catarina; Mouta, Sandra; Silva, Rosa; Campos, José Creissac; Santos, Jorge

    2013-01-01

    Background Due to their different propagation times, visual and auditory signals from external events arrive at the human sensory receptors with a disparate delay. This delay consistently varies with distance, but, despite such variability, most events are perceived as synchronic. There is, however, contradictory data and claims regarding the existence of compensatory mechanisms for distance in simultaneity judgments. Principal Findings In this paper we have used familiar audiovisual events – a visual walker and footstep sounds – and manipulated the number of depth cues. In a simultaneity judgment task we presented a large range of stimulus onset asynchronies corresponding to distances of up to 35 meters. We found an effect of distance over the simultaneity estimates, with greater distances requiring larger stimulus onset asynchronies, and vision always leading. This effect was stronger when both visual and auditory cues were present but was interestingly not found when depth cues were impoverished. Significance These findings reveal that there should be an internal mechanism to compensate for audiovisual delays, which critically depends on the depth information available. PMID:24244617
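
    The delay the study manipulates is physical: sound travels at roughly 343 m/s in air, while light from the same event arrives effectively instantaneously, so the natural audio lag grows linearly with distance. A small helper makes the magnitude concrete (at the 35 m maximum used here, the lag is about 102 ms):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def audio_lag_ms(distance_m, speed_of_sound=SPEED_OF_SOUND):
    """Arrival delay of sound relative to light from the same event,
    in milliseconds; light's travel time is negligible at these
    distances."""
    return 1000.0 * distance_m / speed_of_sound
```

    Perceiving such events as simultaneous despite this distance-dependent lag is what motivates a compensation mechanism.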

  5. Information-Driven Active Audio-Visual Source Localization.

    PubMed

    Schult, Niclas; Reineking, Thomas; Kluss, Thorsten; Zetzsche, Christoph

    2015-01-01

    We present a system for sensorimotor audio-visual source localization on a mobile robot. We utilize a particle filter for the combination of audio-visual information and for the temporal integration of consecutive measurements. Although the system only measures the current direction of the source, the position of the source can be estimated because the robot is able to move and can therefore obtain measurements from different directions. These actions by the robot successively reduce uncertainty about the source's position. An information gain mechanism is used for selecting the most informative actions in order to minimize the number of actions required to achieve accurate and precise position estimates in azimuth and distance. We show that this mechanism is an efficient solution to the action selection problem for source localization, and that it is able to produce precise position estimates despite simplified unisensory preprocessing. Because of the robot's mobility, this approach is suitable for use in complex and cluttered environments. We present qualitative and quantitative results of the system's performance and discuss possible areas of application. PMID:26327619
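
    The core estimation loop described here, weighting particles (candidate source positions) by agreement with a direction measurement and resampling, can be sketched as follows; the function name, Gaussian bearing likelihood, and noise level are illustrative assumptions, not the authors' implementation:

```python
import math, random

def update_particles(particles, robot_xy, measured_bearing, sigma=0.1):
    """One measurement update of a particle filter for source
    localisation: each particle is a candidate (x, y) source position,
    weighted by how well its bearing from the robot matches the
    measured direction (Gaussian likelihood on the wrapped angular
    error), then resampled with replacement."""
    rx, ry = robot_xy
    weights = []
    for px, py in particles:
        bearing = math.atan2(py - ry, px - rx)
        diff = bearing - measured_bearing
        err = math.atan2(math.sin(diff), math.cos(diff))  # wrap to (-pi, pi]
        weights.append(math.exp(-0.5 * (err / sigma) ** 2))
    return random.choices(particles, weights=weights, k=len(particles))
```

    Because the robot moves between updates, bearings taken from different positions intersect, which is how repeated updates recover distance as well as azimuth.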

  6. Head Tracking of Auditory, Visual, and Audio-Visual Targets

    PubMed Central

    Leung, Johahn; Wei, Vincent; Burgess, Martin; Carlile, Simon

    2016-01-01

    The ability to actively follow a moving auditory target with our heads remains unexplored even though it is a common behavioral response. Previous studies of auditory motion perception have focused on conditions where the subjects are passive. The current study examined head-tracking behavior to a moving auditory target along a horizontal 100° arc in the frontal hemisphere, with velocities ranging from 20 to 110°/s. By integrating high-fidelity virtual auditory space with a high-speed visual presentation, we compared tracking responses to auditory targets against visual-only and audio-visual “bisensory” stimuli. Three metrics were measured: onset, RMS, and gain error. The results showed that tracking accuracy (RMS error) varied linearly with target velocity, with a significantly higher rate in audition. Also, when the target moved faster than 80°/s, onset and RMS errors were significantly worse in audition than in the other modalities, while responses in the visual and bisensory conditions were statistically identical for all metrics measured. Lastly, audio-visual facilitation was not observed when tracking bisensory targets. PMID:26778952
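
    Two of the tracking metrics lend themselves to a small sketch: RMS error compares the position traces sample by sample, and gain can be taken as the least-squares slope relating response velocity to target velocity. A hypothetical implementation (the authors' exact definitions, e.g. of onset error, may differ):

```python
import math

def tracking_metrics(target, response, dt):
    """RMS positional error and gain for one tracking trial sampled at
    interval dt. Gain is the slope through the origin of response
    velocity regressed on target velocity (1.0 = perfect following)."""
    rms = math.sqrt(sum((r - t) ** 2 for t, r in zip(target, response))
                    / len(target))
    tv = [(target[i + 1] - target[i]) / dt for i in range(len(target) - 1)]
    rv = [(response[i + 1] - response[i]) / dt for i in range(len(response) - 1)]
    gain = sum(a * b for a, b in zip(tv, rv)) / sum(a * a for a in tv)
    return rms, gain
```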

  8. Neural dynamics of audiovisual speech integration under variable listening conditions: an individual participant analysis.

    PubMed

    Altieri, Nicholas; Wenger, Michael J

    2013-01-01

    Speech perception engages both auditory and visual modalities. Limitations of traditional accuracy-only approaches in the investigation of audiovisual speech perception have motivated the use of new methodologies. In an audiovisual speech identification task, we utilized capacity (Townsend and Nozawa, 1995), a dynamic measure of efficiency, to quantify audiovisual integration. Capacity was used to compare RT distributions from audiovisual trials to RT distributions from auditory-only and visual-only trials across three listening conditions: clear auditory signal, S/N ratio of -12 dB, and S/N ratio of -18 dB. The purpose was to obtain EEG recordings in conjunction with capacity to investigate how a late ERP co-varies with integration efficiency. Results showed efficient audiovisual integration for low auditory S/N ratios, but inefficient audiovisual integration when the auditory signal was clear. The ERP analyses showed evidence for greater audiovisual amplitude compared to the unisensory signals for lower auditory S/N ratios (higher capacity/efficiency) compared to the high S/N ratio (low capacity/inefficient integration). The data are consistent with an interactive framework of integration, where auditory recognition is influenced by speech-reading as a function of signal clarity.
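
    The capacity measure (Townsend and Nozawa, 1995) compares the integrated hazard of audiovisual response times with the sum of the unisensory hazards, C(t) = H_AV(t) / (H_A(t) + H_V(t)), with C(t) > 1 indicating efficient integration. A sketch using the raw empirical survivor function (synthetic RTs; published analyses use more careful estimators):

```python
import math

def integrated_hazard(rts, t):
    """Cumulative hazard H(t) = -log S(t), with S(t) estimated as the
    proportion of response times exceeding t. t must lie below the
    slowest RT, otherwise S(t) = 0 and the log diverges."""
    s = sum(rt > t for rt in rts) / len(rts)
    return -math.log(s)

def capacity(av_rts, a_rts, v_rts, t):
    """Capacity coefficient C(t) = H_AV(t) / (H_A(t) + H_V(t)).
    C(t) > 1: super capacity (efficient integration);
    C(t) < 1: limited capacity (inefficient integration)."""
    return integrated_hazard(av_rts, t) / (
        integrated_hazard(a_rts, t) + integrated_hazard(v_rts, t))
```

    In the study's terms, noisier auditory signals pushed C(t) above 1, while a clear auditory signal left it below 1.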

  9. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG).

    PubMed

    Lu, Yao; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Kuchenbuch, Anja; Pantev, Christo

    2014-01-01

    Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are found not only within a single modality, but also with regard to multisensory integration. In this study we combined psychophysical with neurophysiological measurements to investigate the processing of non-musical, synchronous or asynchronous (at various levels) audiovisual events. We hypothesized that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters basic audiovisual temporal processing already at an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of estimates regarding the timing of audiovisual events. PMID:24595014

  10. Effect of attentional load on audiovisual speech perception: evidence from ERPs.

    PubMed

    Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E; Soto-Faraco, Salvador; Tiippana, Kaisa

    2014-01-01

    Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  11. Brain responses to audiovisual speech mismatch in infants are associated with individual differences in looking behaviour.

    PubMed

    Kushnerenko, Elena; Tomalski, Przemyslaw; Ballieux, Haiko; Ribeiro, Helena; Potton, Anita; Axelsson, Emma L; Murphy, Elizabeth; Moore, Derek G

    2013-11-01

    Research on audiovisual speech integration has reported high levels of individual variability, especially among young infants. In the present study we tested the hypothesis that this variability results from individual differences in the maturation of audiovisual speech processing during infancy. A developmental shift in selective attention to audiovisual speech has been demonstrated between 6 and 9 months, with an increase in the time spent looking at articulating mouths as compared to eyes (Lewkowicz & Hansen-Tift (2012) Proc. Natl Acad. Sci. USA, 109, 1431-1436; Tomalski et al. (2012) Eur. J. Dev. Psychol., 1-14). In the present study we tested whether these changes in behavioural maturational level are associated with differences in brain responses to audiovisual speech across this age range. We measured high-density event-related potentials (ERPs) in response to videos of audiovisually matching and mismatched syllables /ba/ and /ga/, and subsequently examined visual scanning of the same stimuli with eye-tracking. There were no clear age-specific changes in ERPs, but the amplitude of the audiovisual mismatch response (AVMMR) to the combination of visual /ba/ and auditory /ga/ was strongly negatively associated with looking time to the mouth in the same condition. These results have significant implications for our understanding of individual differences in neural signatures of audiovisual speech processing in infants, suggesting that they are not strictly related to chronological age but are instead associated with the maturation of looking behaviour, and develop at individual rates in the second half of the first year of life.

  12. The role of the posterior superior temporal sulcus in audiovisual processing.

    PubMed

    Hocking, Julia; Price, Cathy J

    2008-10-01

    In this study we investigate previous claims that a region in the left posterior superior temporal sulcus (pSTS) is more activated by audiovisual than unimodal processing. First, we compare audiovisual to visual-visual and auditory-auditory conceptual matching using auditory or visual object names that are paired with pictures of objects or their environmental sounds. Second, we compare congruent and incongruent audiovisual trials when presentation is simultaneous or sequential. Third, we compare audiovisual stimuli that are either verbal (auditory and visual words) or nonverbal (pictures of objects and their associated sounds). The results demonstrate that, when task, attention, and stimuli are controlled, pSTS activation for audiovisual conceptual matching is 1) identical to that observed for intramodal conceptual matching, 2) greater for incongruent than congruent trials when auditory and visual stimuli are simultaneously presented, and 3) identical for verbal and nonverbal stimuli. These results are not consistent with previous claims that pSTS activation reflects the active formation of an integrated audiovisual representation. After a discussion of the stimulus and task factors that modulate activation, we conclude that, when stimulus input, task, and attention are controlled, pSTS is part of a distributed set of regions involved in conceptual matching, irrespective of whether the stimuli are audiovisual, auditory-auditory or visual-visual.

  14. Semantic graphs and associative memories.

    PubMed

    Pomi, Andrés; Mizraji, Eduardo

    2004-12-01

    Graphs have been increasingly utilized in the characterization of complex networks from diverse origins, including different kinds of semantic networks. Human memories are associative and are known to support complex semantic nets; these nets are represented by graphs. However, it is not known how the brain can sustain these semantic graphs. The view of cognitive brain activities provided by modern functional imaging techniques assigns renewed value to classical distributed associative memory models. Here we show that these neural network models, also known as correlation matrix memories, naturally support a graph representation of the stored semantic structure. We demonstrate that the adjacency matrix of this graph of associations is just the memory coded with the standard basis of the concept vector space, and that the spectrum of the graph is a code invariant of the memory. As long as the assumptions of the model remain valid, this result provides a practical method to predict and modify the evolution of the cognitive dynamics. Also, it could provide us with a way to comprehend how individual brains that map the external reality, almost surely with different particular vector representations, are nevertheless able to communicate and share a common knowledge of the world. We finish by presenting adaptive association graphs, an extension of the model that makes use of the tensor product and provides a solution to the known problem of branching in semantic nets.
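
    The central identity here, that the memory coded with the standard basis is the adjacency matrix of the semantic graph, can be checked in a few lines; the three-concept cycle below is an illustrative example, not the paper's data:

```python
import numpy as np

def memory_from_edges(n, edges):
    """Correlation-matrix (distributed associative) memory storing each
    association src -> dst as an outer product of standard basis
    vectors. With this coding the memory matrix is exactly the
    adjacency matrix of the association graph."""
    basis = np.eye(n)
    M = np.zeros((n, n))
    for src, dst in edges:
        M += np.outer(basis[dst], basis[src])
    return M

edges = [(0, 1), (1, 2), (2, 0)]        # a 3-concept cycle
M = memory_from_edges(3, edges)
recalled = M @ np.eye(3)[0]             # cueing concept 0 retrieves concept 1
spectrum = np.linalg.eigvals(M)         # a coding-invariant signature of the graph
```

    For the cycle, the spectrum consists of the cube roots of unity, all of magnitude one, as expected for a cyclic permutation matrix.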

  16. Automated x-ray/light field congruence using the LINAC EPID panel

    SciTech Connect

    Polak, Wojciech; O'Doherty, Jim; Jones, Matt

    2013-03-15

    Purpose: X-ray/light field alignment is a test described in many guidelines for the routine quality control of clinical linear accelerators (LINACs). Currently, the gold-standard method for measuring alignment is the use of radiographic film. However, many modern LINACs are equipped with an electronic portal imaging device (EPID) that may be used to perform this test, thus reducing overall cost and processing and analysis time, removing operator dependency, and eliminating the need to maintain the departmental film processor. Methods: This work describes a novel method of utilizing the EPID together with a custom in-house designed jig and automatic image processing software, allowing measurement of the light field size, the x-ray field size, and the congruence between them. The authors present results of testing the method for aS1000 and aS500 Varian EPID detectors on six LINACs at a range of energies (6, 10, and 15 MV), in comparison with results obtained from radiographic film. Results: Reproducibility of the software in fully automatic operation under a range of operating conditions for a single image showed a congruence of 0.01 cm with a coefficient of variation of 0. Slight variation in congruence repeatability was noted with semiautomatic processing by four independent operators, due to manual marking of positions on the jig. Testing of the methodology using the automatic method shows a high precision of 0.02 mm, compared to a maximum of 0.06 mm determined by film processing. Intraindividual examination of operator measurements of congruence was shown to vary by as much as 0.75 mm. Similar congruence measurements of 0.02 mm were also determined for a lower-resolution EPID (aS500 model) after rescaling of the image to the aS1000 image size. Conclusions: The designed methodology was proven to be time efficient, cost effective, and at least as accurate as the gold-standard radiographic film. Additionally, congruence testing can be
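
    The x-ray field edge that such software must locate is conventionally taken at 50% of the maximum EPID signal along a profile. A hypothetical sketch of that single step, with sub-pixel interpolation (the paper's jig-based analysis is more involved):

```python
def field_edges_mm(profile, pixel_mm, threshold=0.5):
    """Locate the two field edges of a 1-D EPID profile at a fraction
    of the maximum signal (linear sub-pixel interpolation) and return
    the field width in mm, the quantity compared against the light
    field for congruence."""
    level = threshold * max(profile)
    rising = falling = None
    for i in range(1, len(profile)):
        lo, hi = profile[i - 1], profile[i]
        if rising is None and lo < level <= hi:       # entering the field
            rising = i - 1 + (level - lo) / (hi - lo)
        if rising is not None and hi < level <= lo:   # leaving the field
            falling = i - 1 + (lo - level) / (lo - hi)
    return (falling - rising) * pixel_mm
```

    Repeating the measurement on the light-field image and differencing the two widths gives a congruence estimate in the same units.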

  17. A Semantic Web Blackboard System

    NASA Astrophysics Data System (ADS)

    McKenzie, Craig; Preece, Alun; Gray, Peter

    In this paper, we propose a Blackboard Architecture as a means for coordinating hybrid reasoning over the Semantic Web. We describe the components of traditional blackboard systems (Knowledge Sources, Blackboard, Controller) and then explain how we have enhanced these by incorporating some of the principles of the Semantic Web to produce our Semantic Web Blackboard. Much of the framework is already in place to facilitate our research: the communication protocol (HTTP); the data representation medium (RDF); a rich expressive description language (OWL); and a method of writing rules (SWRL). We further enhance this by adding our own constraint-based formalism (CIF/SWRL) into the mix. We provide an example walk-through of our test-bed system, the AKTive Workgroup Builder and Blackboard (AWB+B), illustrating the interaction and cooperation of the Knowledge Sources and providing some context as to how the solution is achieved. We conclude with the strengths and weaknesses of the architecture.
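    The control cycle of the classic blackboard pattern described above can be sketched minimally as follows (a generic illustration under assumed names, not the AWB+B code): knowledge sources contribute new RDF-like triples to a shared store, and the controller loops until no source can add anything.

```python
class KnowledgeSource:
    """A knowledge source wraps a rule that maps the current blackboard
    (a set of triples) to the set of triples it would like to assert."""

    def __init__(self, name, rule):
        self.name = name
        self.rule = rule

    def contribute(self, blackboard):
        # return only triples not already on the blackboard
        return self.rule(blackboard) - blackboard

def run_controller(blackboard, sources):
    """Fire each knowledge source in turn until quiescence."""
    changed = True
    while changed:
        changed = False
        for ks in sources:
            new = ks.contribute(blackboard)
            if new:
                blackboard |= new
                changed = True
    return blackboard
```

    For example, a source encoding the symmetry of a "worksWith" property would, given ("alice", "worksWith", "bob"), add the reversed triple on the next cycle.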

  18. Neurocybernetic basis of semantic processes.

    PubMed

    Restian, A

    1984-11-01

    Although semantics cannot be reduced to neurophysiology, it must nevertheless have a certain neurophysiologic basis, and this paper deals with that basis, which is in fact neurocybernetic. The paper first approaches the relations between information and signification and their role in the workings of the nervous system. It then analyses the semantic function, discovering neurocybernetic mechanisms that apply not only to conventional signs but also to objects and phenomena that can in turn play the sign's part. Finally, the semantic levels of the nervous system are described, from the most elementary level of units, such as letters, up to the level of the highest ideas and concepts the brain works with.

  19. Action semantics modulate action prediction.

    PubMed

    Springer, Anne; Prinz, Wolfgang

    2010-11-01

    Previous studies have demonstrated that action prediction involves an internal action simulation that runs time-locked to the real action. The present study replicates and extends these findings by indicating a real-time simulation process (Graf et al., 2007), which can be differentiated from a similarity-based evaluation of internal action representations. Moreover, results showed that action semantics modulate action prediction accuracy. The semantic effect was specified by the processing of action verbs and concrete nouns (Experiment 1) and, more specifically, by the dynamics described by action verbs (Experiment 2) and the speed described by the verbs (e.g., "to catch" vs. "to grasp" vs. "to stretch"; Experiment 3). These results suggest a link between action simulation and action semantics, two hitherto unrelated domains, a view that coincides with recent proposals of a close link between motor processes and the understanding of action language.

  20. The semantics of biological forms.

    PubMed

    Albertazzi, Liliana; Canal, Luisa; Dadam, James; Micciolo, Rocco

    2014-01-01

    This study analyses how certain qualitative perceptual appearances of biological forms are correlated with expressions of natural language. Making use of the Osgood semantic differential, we presented subjects with 32 drawings of biological forms and a list of 10 pairs of connotative adjectives to be correlated with them purely by subjective judgment. A principal components analysis made it possible to group the semantics of forms according to two distinct axes of variability: harmony and dynamicity. Specifically, the nonspiculed, nonholed, and flat forms were perceived as harmonic and static; the rounded ones were harmonic and dynamic. The elongated forms were somewhat disharmonious and somewhat static. The results suggest the existence in the general population of a correspondence between perceptual and semantic processes, and of a nonsymbolic relation between visual forms and their adjectival expressions in natural language.
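    The dimensionality-reduction step in a semantic-differential analysis of this kind can be sketched generically (a PCA-via-SVD illustration under assumed variable names, not the authors' analysis pipeline):

```python
import numpy as np

def principal_components(ratings):
    """PCA via SVD on a (stimuli x adjective-scale) ratings matrix.
    Returns the component loadings (rows of Vt) and the fraction of
    total variance explained by each component."""
    X = ratings - ratings.mean(axis=0)          # center each rating scale
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    return Vt, explained
```

    In a study like the one above, the first two loading vectors would then be inspected and labeled, e.g. as "harmony" and "dynamicity" axes.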

  1. Ontology Matching with Semantic Verification

    PubMed Central

    Jean-Mary, Yves R.; Shironoshita, E. Patrick; Kabuka, Mansur R.

    2009-01-01

    ASMOV (Automated Semantic Matching of Ontologies with Verification) is a novel algorithm that uses lexical and structural characteristics of two ontologies to iteratively calculate a similarity measure between them, derives an alignment, and then verifies it to ensure that it does not contain semantic inconsistencies. In this paper, we describe the ASMOV algorithm, and then present experimental results that measure its accuracy using the OAEI 2008 tests, and that evaluate its use with two different thesauri: WordNet, and the Unified Medical Language System (UMLS). These results show the increased accuracy obtained by combining lexical, structural and extensional matchers with semantic verification, and demonstrate the advantage of using a domain-specific thesaurus for the alignment of specialized ontologies. PMID:20186256
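    A much-simplified sketch of the iterative idea follows: a lexical score is blended with a structural score taken from the previous iteration's similarities of neighboring concepts. The real ASMOV additionally uses thesauri (WordNet, UMLS) and semantic verification, which are omitted here; all names below are assumptions.

```python
from difflib import SequenceMatcher

def lexical(a, b):
    """Crude lexical similarity between two concept labels."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match(onto1, onto2, iters=10, w_lex=0.6):
    """Iteratively compute pairwise similarities between two ontologies,
    each given as dict: concept -> set of neighboring concepts, then
    derive an alignment by best match per concept in onto1."""
    sim = {(a, b): lexical(a, b) for a in onto1 for b in onto2}
    for _ in range(iters):
        new = {}
        for a in onto1:
            for b in onto2:
                if onto1[a] and onto2[b]:
                    # structural score: best neighbor-pair similarity
                    struct = max(sim[(x, y)] for x in onto1[a] for y in onto2[b])
                else:
                    struct = 0.0
                new[(a, b)] = w_lex * lexical(a, b) + (1 - w_lex) * struct
        sim = new
    return {a: max(onto2, key=lambda b: sim[(a, b)]) for a in onto1}
```

    A verification pass, as in ASMOV, would then reject alignments that introduce semantic inconsistencies.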

  3. Event Congruency and Episodic Encoding: A Developmental fMRI Study

    ERIC Educational Resources Information Center

    Maril, Anat; Avital, Rinat; Reggev, Niv; Zuckerman, Maya; Sadeh, Talya; Sira, Liat Ben; Livneh, Neta

    2011-01-01

    A known contributor to adults' superior memory performance compared to children is their differential reliance on an existing knowledge base. Compared to those of adults, children's semantic networks are less accessible and less established, a difference that is also thought to contribute to children's relative resistance to semantically related…

  4. Abstraction and natural language semantics.

    PubMed Central

    Kayser, Daniel

    2003-01-01

    According to the traditional view, a word prototypically denotes a class of objects sharing similar features, i.e. it results from an abstraction based on the detection of common properties in perceived entities. I explore here another idea: words result from abstraction of common premises in the rules governing our actions. I first argue that taking 'inference', instead of 'reference', as the basic issue in semantics does matter. I then discuss two phenomena that are, in my opinion, particularly difficult to analyse within the scope of traditional semantic theories: systematic polysemy and plurals. I conclude with a discussion of my approach and a summary of its main features. PMID:12903662

  5. Bootstrapping to a Semantic Grid

    SciTech Connect

    Schwidder, Jens; Talbott, Tara; Myers, James D.

    2005-02-28

    The Scientific Annotation Middleware (SAM) is a set of components and services that enable researchers, applications, problem solving environments (PSE) and software agents to create metadata and annotations about data objects and document the semantic relationships between them. Developed starting in 2001, SAM allows applications to encode metadata within files or to manage metadata at the level of individual relationships as desired. SAM then provides mechanisms to expose metadata and relationships encoded either way as WebDAV properties. In this paper, we report on work to further map this metadata into RDF and discuss the role of middleware such as SAM in bridging between traditional and semantic grid applications.

  6. Semantic processing in information retrieval.

    PubMed Central

    Rindflesch, T. C.; Aronson, A. R.

    1993-01-01

    Intuition suggests that one way to enhance the information retrieval process would be the use of phrases to characterize the contents of text. A number of researchers, however, have noted that phrases alone do not improve retrieval effectiveness. In this paper we briefly review the use of phrases in information retrieval and then suggest extensions to this paradigm using semantic information. We claim that semantic processing, which can be viewed as expressing relations between the concepts represented by phrases, will in fact enhance retrieval effectiveness. The availability of the UMLS domain model, which we exploit extensively, significantly contributes to the feasibility of this processing. PMID:8130547

  7. Order effects in dynamic semantics.

    PubMed

    Graben, Peter Beim

    2014-01-01

    In their target article, Wang and Busemeyer (2013) discuss question order effects in terms of incompatible projectors on a Hilbert space. In a similar vein, Blutner recently presented an orthoalgebraic query language essentially relying on dynamic update semantics. Here, I shall comment on some interesting analogies between the different variants of dynamic semantics and generalized quantum theory to illustrate other kinds of order effects in human cognition, such as belief revision, the resolution of anaphors, and default reasoning that result from the crucial non-commutativity of mental operations upon the belief state of a cognitive agent.

  8. Metasemantics: On the Limits of Semantic Theory

    ERIC Educational Resources Information Center

    Parent, T.

    2009-01-01

    METASEMANTICS is a wake-up call for semantic theory: It reveals that some semantic questions have no adequate answer. (This is meant to be the "epistemic" point that certain semantic questions cannot be "settled"--not a metaphysical point about whether there is a fact-of-the-matter.) METASEMANTICS thus checks our default "optimism" that any…

  9. Chinese Character Decoding: A Semantic Bias?

    ERIC Educational Resources Information Center

    Williams, Clay; Bever, Thomas

    2010-01-01

    The effects of semantic and phonetic radicals on Chinese character decoding were examined. Our results suggest that semantic and phonetic radicals are each available for access when a corresponding task emphasizes one or the other kind of radical. But in a more neutral lexical recognition task, the semantic radical is more informative. Semantic…

  10. Semantic Weight and Verb Retrieval in Aphasia

    ERIC Educational Resources Information Center

    Barde, Laura H. F.; Schwartz, Myrna F.; Boronat, Consuelo B.

    2006-01-01

    Individuals with agrammatic aphasia may have difficulty with verb production in comparison to nouns. Additionally, they may have greater difficulty producing verbs that have fewer semantic components (i.e., are semantically "light") compared to verbs that have greater semantic weight. A connectionist verb-production model proposed by Gordon and…

  11. Semantic Relatedness for Evaluation of Course Equivalencies

    ERIC Educational Resources Information Center

    Yang, Beibei

    2012-01-01

    Semantic relatedness, or its inverse, semantic distance, measures the degree of closeness between two pieces of text determined by their meaning. Related work typically measures semantics based on a sparse knowledge base such as WordNet or Cyc that requires intensive manual efforts to build and maintain. Other work is based on a corpus such as the…

  12. Vertical metaphor with motion and judgment: a valenced congruency effect with fluency.

    PubMed

    Freddi, Sébastien; Cretenet, Joël; Dru, Vincent

    2014-09-01

    Following metaphorical theories of affect, several research studies have shown that the spatial cues along a vertical dimension are useful in qualifying emotional experience (HAPPINESS is UP, SADNESS is DOWN). Three experiments were conducted to examine the role of vertical motion in affective judgment. They showed that positive stimuli moving UPWARD were evaluated more positively than those moving DOWNWARD, whereas negative stimuli moving DOWNWARD were evaluated as less negative than those moving UPWARD. They showed a valenced congruency effect, but an alternative hypothesis in terms of MORE is UP and LESS is DOWN was also examined. Finally, fluency mechanisms were investigated to confirm that relationships between affect and verticality were in accordance with a valenced congruency effect.

  13. The absence of a gender congruency effect in romance languages: a matter of stimulus onset asynchrony?

    PubMed

    Miozzo, Michele; Costa, Albert; Caramazza, Alfonso

    2002-03-01

    Using the picture-word interference paradigm, H. Schriefers and E. Teruel (2000) found that in German the grammatical gender of the distractor word affects the production of phrases composed of article+picture name: Latencies were longer for picture-word pairs of different genders. However, the effect was found only at positive stimulus onset asynchronies (SOAs; i.e., when pictures were presented 75 or 150 ms earlier than word distractors). This gender congruency effect is not obtained in Romance languages. The present article examines whether in these languages, as in German, the effect appears at positive SOAs. No effect was observed in Italian and Spanish at positive SOAs. An account is proposed to explain why the gender congruency effect is obtained in Germanic (Dutch and German) but not in Romance languages.

  14. Attrition from a male batterer treatment program: client-treatment congruence and lifestyle instability.

    PubMed

    Cadsky, O; Hanson, R K; Crawford, M; Lalonde, C

    1996-01-01

    Although patient compliance is a problem for almost all forms of therapy, treatment programs for male batterers face special concerns. Male batterers are often perceived as coming to therapy only because of the external pressures of courts or intimate partners. In the present study, we examined the rates at which male batterers failed to attend treatment following an initial assessment interview. Of the 526 men recommended for treatment, only 218 (41%) attended a single treatment session, and only 132 (25%) completed the brief (10-week) treatment program. The variables associated with attrition fell into two general categories: (a) those associated with lifestyle instability (e.g., moves, unemployment, youthfulness), and (b) those variables indicating a congruence between the clients' self-identified problems and the targets of treatment (e.g., self-admitted problems with spousal assault). Suggestions are provided as to how programs could reduce their attrition rates by attending to the issues of client-treatment congruence and lifestyle instability.

  15. Some infinite families of congruences modulo 3 for 7-core partitions

    NASA Astrophysics Data System (ADS)

    Das, Kuwali

    2016-06-01

    A partition λ is said to be a t-core if and only if it has no hook numbers that are multiples of t. In this paper, we find several new and interesting congruences for 7-core partitions modulo 3 by making use of Ramanujan's theta function identities. We obtain several infinite families of congruences modulo 3 for 7-core partitions. For example, if p ≥ 5 is a prime with (-7/p) = -1 and r ∈ {3, 4, 6}, then for all non-negative integers n and k, a_7(147·p^{2k}·n + 7·p^{2k}·(3r + 1) - 2) ≡ a_7(21·p^{2k}·n + p^{2k}·(3r + 1) - 2) (mod 3).
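    The counts a_t(n) of t-core partitions have the generating function ∑ a_t(n)q^n = ∏_{n≥1} (1 - q^{tn})^t / (1 - q^n), which makes congruences of this kind easy to spot-check numerically. The sketch below is an illustration, not the paper's method; the function name is an assumption.

```python
def t_core_counts(t, N):
    """Coefficients a_t(0..N) of prod_{n>=1} (1 - q^{t*n})^t / (1 - q^n),
    the generating function for t-core partitions."""
    c = [0] * (N + 1)
    c[0] = 1
    # numerator: multiply by (1 - q^{t*n}) t times, for each n
    for n in range(1, N // t + 1):
        step = t * n
        for _ in range(t):
            for i in range(N, step - 1, -1):
                c[i] -= c[i - step]
    # denominator: multiply by 1/(1 - q^n) for each n (partition gf)
    for n in range(1, N + 1):
        for i in range(n, N + 1):
            c[i] += c[i - n]
    return c
```

    For instance, with p = 5 (a prime with (-7/5) = -1), k = 0, and r = 3, the family above asserts a_7(147n + 68) ≡ a_7(21n + 8) (mod 3) for all n ≥ 0.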

  16. Evaluating and interpreting cross-taxon congruence: Potential pitfalls and solutions

    NASA Astrophysics Data System (ADS)

    Gioria, Margherita; Bacaro, Giovanni; Feehan, John

    2011-05-01

    Characterizing the relationship between different taxonomic groups is critical to identify potential surrogates for biodiversity. Previous studies have shown that cross-taxa relationships are generally weak and/or inconsistent. The difficulties in finding predictive patterns have often been attributed to the spatial and temporal scales of these studies and on the differences in the measure used to evaluate such relationships (species richness versus composition). However, the choice of the analytical approach used to evaluate cross-taxon congruence inevitably represents a major source of variation. Here, we described the use of a range of methods that can be used to comprehensively assess cross-taxa relationships. To do so, we used data for two taxonomic groups, wetland plants and water beetles, collected from 54 farmland ponds in Ireland. Specifically, we used the Pearson correlation and rarefaction curves to analyse patterns in species richness, while Mantel tests, Procrustes analysis, and co-correspondence analysis were used to evaluate congruence in species composition. We compared the results of these analyses and we described some of the potential pitfalls associated with the use of each of these statistical approaches. Cross-taxon congruence was moderate to strong, depending on the choice of the analytical approach, on the nature of the response variable, and on local and environmental conditions. Our findings indicate that multiple approaches and measures of community structure are required for a comprehensive assessment of cross-taxa relationships. In particular, we showed that selection of surrogate taxa in conservation planning should not be based on a single statistic expressing the degree of correlation in species richness or composition. Potential solutions to the analytical issues associated with the assessment of cross-taxon congruence are provided and the implications of our findings in the selection of surrogates for biodiversity are discussed.
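    Of the compositional approaches listed above, the Mantel test is the simplest to sketch: correlate the upper triangles of two distance matrices and obtain a p-value by permuting one matrix's rows and columns. This is a generic permutation implementation under assumed names, not the authors' code.

```python
import numpy as np

def mantel(d1, d2, n_perm=999, seed=0):
    """One-tailed Mantel test: Pearson correlation between the upper
    triangles of two symmetric distance matrices, with a permutation
    p-value obtained by relabeling the sites of d1."""
    rng = np.random.default_rng(seed)
    n = d1.shape[0]
    iu = np.triu_indices(n, k=1)
    x, y = d1[iu], d2[iu]
    r_obs = np.corrcoef(x, y)[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        r = np.corrcoef(d1[p][:, p][iu], y)[0, 1]
        if r >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)
```

    Procrustes analysis and co-correspondence analysis address the same question through ordination rather than raw distances, which is one reason the approaches can disagree.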

  17. Audiovisual speech perception and eye gaze behavior of adults with asperger syndrome.

    PubMed

    Saalasti, Satu; Kätsyri, Jari; Tiippana, Kaisa; Laine-Hernandez, Mari; von Wendt, Lennart; Sams, Mikko

    2012-08-01

    Audiovisual speech perception was studied in adults with Asperger syndrome (AS), by utilizing the McGurk effect, in which conflicting visual articulation alters the perception of heard speech. The AS group perceived the audiovisual stimuli differently from age, sex and IQ matched controls. When a voice saying /p/ was presented with a face articulating /k/, the controls predominantly heard /k/. Instead, the AS group heard /k/ and /t/ with almost equal frequency, but with large differences between individuals. There were no differences in gaze direction or unisensory perception between the AS and control participants that could have contributed to the audiovisual differences. We suggest an explanation in terms of weak support from the motor system for audiovisual speech perception in AS.

  18. The development of sensorimotor influences in the audiovisual speech domain: some critical questions.

    PubMed

    Guellaï, Bahia; Streri, Arlette; Yeung, H Henny

    2014-01-01

    Speech researchers have long been interested in how auditory and visual speech signals are integrated, and the recent work has revived interest in the role of speech production with respect to this process. Here, we discuss these issues from a developmental perspective. Because speech perception abilities typically outstrip speech production abilities in infancy and childhood, it is unclear how speech-like movements could influence audiovisual speech perception in development. While work on this question is still in its preliminary stages, there is nevertheless increasing evidence that sensorimotor processes (defined here as any motor or proprioceptive process related to orofacial movements) affect developmental audiovisual speech processing. We suggest three areas on which to focus in future research: (i) the relation between audiovisual speech perception and sensorimotor processes at birth, (ii) the pathways through which sensorimotor processes interact with audiovisual speech processing in infancy, and (iii) developmental change in sensorimotor pathways as speech production emerges in childhood.

  19. Effects of audio-visual stimulation on the incidence of restraint ulcers on the Wistar rat

    NASA Technical Reports Server (NTRS)

    Martin, M. S.; Martin, F.; Lambert, R.

    1979-01-01

    The role of sensory stimulation in restrained rats was investigated. Both mixed audio-visual and pure sound stimuli, ineffective in themselves, were found to cause a significant increase in the incidence of restraint ulcers in the Wistar rat.

  20. The Evolution of Audio-Visual Education in the USA since 1945.

    ERIC Educational Resources Information Center

    Hitchens, Howard

    1979-01-01

    Explores the development of audiovisual instruction in the United States as an outgrowth of the industrial revolution and the development of more sophisticated communications technology in the mid twentieth century. (RAO)