Sample records for "intelligence drive auditory"

  1. Acoustic landmarks drive delta-theta oscillations to enable speech comprehension by facilitating perceptual parsing

    PubMed Central

    Doelling, Keith; Arnal, Luc; Ghitza, Oded; Poeppel, David

    2013-01-01

    A growing body of research suggests that intrinsic slow (< 10 Hz) neuronal oscillations in auditory cortex track incoming speech and other spectro-temporally complex auditory signals. Within this framework, several recent studies have identified critical-band temporal envelopes as the specific acoustic feature reflected by the phase of these oscillations. However, how this alignment between speech acoustics and neural oscillations might underpin intelligibility is unclear. Here we test the hypothesis that the ‘sharpness’ of temporal fluctuations in the critical-band envelope acts as a temporal cue to speech syllabic rate, driving delta-theta rhythms to track the stimulus and facilitate intelligibility. Using magnetoencephalographic recordings, we show that removing temporal fluctuations that occur at the syllabic rate reduces envelope-tracking activity, and that artificially reinstating these fluctuations restores it. These changes in tracking correlate with intelligibility of the stimulus. Together, the results suggest that sharp fluctuations in the stimulus, as reflected in the cochlear output, drive oscillatory activity to track and entrain to the stimulus at its syllabic rate. We interpret these findings as evidence that sharp acoustic events cause cortical rhythms to re-align and parse the stimulus into syllable-sized chunks appropriate for subsequent decoding, enhancing perception and intelligibility. PMID:23791839
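
    As a toy illustration of the envelope-sharpness cue described above, the sketch below quantifies sharpness as the steepest rise of a rectified, moving-average-smoothed envelope (a crude stand-in for the authors' critical-band analysis; all parameter values are assumptions). Abrupt, syllable-like bursts score higher than gradual fluctuations at the same 4 Hz rate.

```python
import numpy as np

def envelope_sharpness(signal, fs, smooth_ms=10.0):
    """Crude temporal-envelope 'sharpness': peak rate of change of the
    rectified, smoothed envelope (units per second). Illustrative only."""
    env = np.abs(signal)                      # rectification stands in for a Hilbert envelope
    win = max(1, int(fs * smooth_ms / 1000.0))
    env = np.convolve(env, np.ones(win) / win, mode="same")  # moving-average smoothing
    return np.max(np.diff(env)) * fs          # steepest rise of the envelope

fs = 1000
t = np.arange(fs) / fs
sharp = np.where((t % 0.25) < 0.05, 1.0, 0.0)    # abrupt, syllable-like bursts (~4 Hz)
ramped = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))   # same 4 Hz rate, gradual fluctuations
assert envelope_sharpness(sharp, fs) > envelope_sharpness(ramped, fs)
```

    Removing the abrupt transitions (the `ramped` case) lowers the sharpness score even though the modulation rate is unchanged, which is the manipulation the abstract describes.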

  2. Frontal top-down signals increase coupling of auditory low-frequency oscillations to continuous speech in human listeners.

    PubMed

    Park, Hyojin; Ince, Robin A A; Schyns, Philippe G; Thut, Gregor; Gross, Joachim

    2015-06-15

    Humans show a remarkable ability to understand continuous speech even under adverse listening conditions. This ability critically relies on dynamically updated predictions of incoming sensory information, but exactly how top-down predictions improve speech processing is still unclear. Brain oscillations are a likely mechanism for these top-down predictions [1, 2]. Quasi-rhythmic components in speech are known to entrain low-frequency oscillations in auditory areas [3, 4], and this entrainment increases with intelligibility [5]. We hypothesize that top-down signals from frontal brain areas causally modulate the phase of brain oscillations in auditory cortex. We use magnetoencephalography (MEG) to monitor brain oscillations in 22 participants during continuous speech perception. We characterize prominent spectral components of speech-brain coupling in auditory cortex and use causal connectivity analysis (transfer entropy) to identify the top-down signals driving this coupling more strongly during intelligible speech than during unintelligible speech. We report three main findings. First, frontal and motor cortices significantly modulate the phase of speech-coupled low-frequency oscillations in auditory cortex, and this effect depends on intelligibility of speech. Second, top-down signals are significantly stronger for left auditory cortex than for right auditory cortex. Third, speech-auditory cortex coupling is enhanced as a function of stronger top-down signals. Together, our results suggest that low-frequency brain oscillations play a role in implementing predictive top-down control during continuous speech perception and that top-down control is largely directed at left auditory cortex. This suggests a close relationship between (left-lateralized) speech production areas and the implementation of top-down control in continuous speech perception. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
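
    The causal connectivity measure named above, transfer entropy, can be sketched with a minimal plug-in estimator (history length 1, median binarization). This is a generic textbook form, not the study's phase-based estimator; variable names and the synthetic driving relationship are illustrative.

```python
import numpy as np

def transfer_entropy(x, y, bins=2):
    """Minimal plug-in estimator of TE(x -> y) in bits, history length 1."""
    # Discretize each signal by its quantiles (median split for bins=2).
    xd = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
    yd = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))
    yt1, yt, xt = yd[1:], yd[:-1], xd[:-1]

    def H(*cols):
        # Joint Shannon entropy of discrete columns.
        _, counts = np.unique(np.stack(cols, axis=1), axis=0, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    # TE(x -> y) = H(y_t+1, y_t) - H(y_t) - H(y_t+1, y_t, x_t) + H(y_t, x_t)
    return H(yt1, yt) - H(yt) - H(yt1, yt, xt) + H(yt, xt)

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = np.roll(x, 1) + 0.1 * rng.normal(size=5000)   # y is driven by x's past
assert transfer_entropy(x, y) > transfer_entropy(y, x)
```

    Because `y` copies `x`'s past with small noise, information flows x → y but not y → x, and the estimator's asymmetry reflects that direction, which is how the study distinguishes top-down from bottom-up influence.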

  3. Frontal Top-Down Signals Increase Coupling of Auditory Low-Frequency Oscillations to Continuous Speech in Human Listeners

    PubMed Central

    Park, Hyojin; Ince, Robin A.A.; Schyns, Philippe G.; Thut, Gregor; Gross, Joachim

    2015-01-01

    Summary: Humans show a remarkable ability to understand continuous speech even under adverse listening conditions. This ability critically relies on dynamically updated predictions of incoming sensory information, but exactly how top-down predictions improve speech processing is still unclear. Brain oscillations are a likely mechanism for these top-down predictions [1, 2]. Quasi-rhythmic components in speech are known to entrain low-frequency oscillations in auditory areas [3, 4], and this entrainment increases with intelligibility [5]. We hypothesize that top-down signals from frontal brain areas causally modulate the phase of brain oscillations in auditory cortex. We use magnetoencephalography (MEG) to monitor brain oscillations in 22 participants during continuous speech perception. We characterize prominent spectral components of speech-brain coupling in auditory cortex and use causal connectivity analysis (transfer entropy) to identify the top-down signals driving this coupling more strongly during intelligible speech than during unintelligible speech. We report three main findings. First, frontal and motor cortices significantly modulate the phase of speech-coupled low-frequency oscillations in auditory cortex, and this effect depends on intelligibility of speech. Second, top-down signals are significantly stronger for left auditory cortex than for right auditory cortex. Third, speech-auditory cortex coupling is enhanced as a function of stronger top-down signals. Together, our results suggest that low-frequency brain oscillations play a role in implementing predictive top-down control during continuous speech perception and that top-down control is largely directed at left auditory cortex. This suggests a close relationship between (left-lateralized) speech production areas and the implementation of top-down control in continuous speech perception. PMID:26028433

  4. Autism-specific covariation in perceptual performances: "g" or "p" factor?

    PubMed

    Meilleur, Andrée-Anne S; Berthiaume, Claude; Bertone, Armando; Mottron, Laurent

    2014-01-01

    Autistic perception is characterized by atypical and sometimes exceptional performance in several low- (e.g., discrimination) and mid-level (e.g., pattern matching) tasks in both visual and auditory domains. A factor that specifically affects perceptive abilities in autistic individuals should manifest as an autism-specific association between perceptual tasks. The first purpose of this study was to explore how perceptual performances are associated within or across processing levels and/or modalities. The second purpose was to determine if general intelligence, the major factor that accounts for covariation in task performances in non-autistic individuals, equally controls perceptual abilities in autistic individuals. We asked 46 autistic individuals and 46 typically developing controls to perform four tasks measuring low- or mid-level visual or auditory processing. Intelligence was measured with the Wechsler's Intelligence Scale (FSIQ) and Raven Progressive Matrices (RPM). We fitted linear regression models to compare task performances between groups and patterns of covariation between tasks. The addition of either Wechsler's FSIQ or RPM in the regression models controlled for the effects of intelligence. In typically developing individuals, most perceptual tasks were associated with intelligence measured either by RPM or Wechsler FSIQ. The residual covariation between unimodal tasks, i.e. covariation not explained by intelligence, could be explained by a modality-specific factor. In the autistic group, residual covariation revealed the presence of a plurimodal factor specific to autism. Autistic individuals show exceptional performance in some perceptual tasks. Here, we demonstrate the existence of specific, plurimodal covariation that does not depend on general intelligence (or "g" factor). Instead, this residual covariation is accounted for by a common perceptual process (or "p" factor), which may drive perceptual abilities differently in autistic and non-autistic individuals.
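
    The key analytic idea above, residual covariation (correlation between tasks after intelligence is regressed out), can be sketched as follows. The simulated scores and the shared "p" factor are hypothetical; the study fitted fuller regression models.

```python
import numpy as np

def residual_correlation(task_a, task_b, iq):
    """Correlate what remains of two task scores after regressing each on IQ."""
    def residualize(score, covariate):
        X = np.column_stack([np.ones_like(covariate), covariate])
        beta, *_ = np.linalg.lstsq(X, score, rcond=None)
        return score - X @ beta          # residuals: score with IQ's contribution removed
    ra, rb = residualize(task_a, iq), residualize(task_b, iq)
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(1)
iq = rng.normal(100, 15, 200)
p_factor = rng.normal(size=200)                       # shared perceptual factor, independent of IQ
task_a = 0.5 * iq + p_factor + rng.normal(size=200)
task_b = 0.5 * iq + p_factor + rng.normal(size=200)
r = residual_correlation(task_a, task_b, iq)
assert r > 0.2   # the shared 'p' factor survives controlling for IQ
```

    If the tasks covaried only through intelligence, `r` would be near zero; a reliably positive residual correlation is the signature of the "p" factor the abstract describes.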

  5. Autism-Specific Covariation in Perceptual Performances: “g” or “p” Factor?

    PubMed Central

    Meilleur, Andrée-Anne S.; Berthiaume, Claude; Bertone, Armando; Mottron, Laurent

    2014-01-01

    Background: Autistic perception is characterized by atypical and sometimes exceptional performance in several low- (e.g., discrimination) and mid-level (e.g., pattern matching) tasks in both visual and auditory domains. A factor that specifically affects perceptive abilities in autistic individuals should manifest as an autism-specific association between perceptual tasks. The first purpose of this study was to explore how perceptual performances are associated within or across processing levels and/or modalities. The second purpose was to determine if general intelligence, the major factor that accounts for covariation in task performances in non-autistic individuals, equally controls perceptual abilities in autistic individuals. Methods: We asked 46 autistic individuals and 46 typically developing controls to perform four tasks measuring low- or mid-level visual or auditory processing. Intelligence was measured with the Wechsler's Intelligence Scale (FSIQ) and Raven Progressive Matrices (RPM). We fitted linear regression models to compare task performances between groups and patterns of covariation between tasks. The addition of either Wechsler's FSIQ or RPM in the regression models controlled for the effects of intelligence. Results: In typically developing individuals, most perceptual tasks were associated with intelligence measured either by RPM or Wechsler FSIQ. The residual covariation between unimodal tasks, i.e. covariation not explained by intelligence, could be explained by a modality-specific factor. In the autistic group, residual covariation revealed the presence of a plurimodal factor specific to autism. Conclusions: Autistic individuals show exceptional performance in some perceptual tasks. Here, we demonstrate the existence of specific, plurimodal covariation that does not depend on general intelligence (or “g” factor). Instead, this residual covariation is accounted for by a common perceptual process (or “p” factor), which may drive perceptual abilities differently in autistic and non-autistic individuals. PMID:25117450

  6. Sensory Intelligence for Extraction of an Abstract Auditory Rule: A Cross-Linguistic Study.

    PubMed

    Guo, Xiao-Tao; Wang, Xiao-Dong; Liang, Xiu-Yuan; Wang, Ming; Chen, Lin

    2018-02-21

    In a complex linguistic environment, while speech sounds can greatly vary, some shared features are often invariant. These invariant features constitute so-called abstract auditory rules. Our previous study has shown that with auditory sensory intelligence, the human brain can automatically extract the abstract auditory rules in the speech sound stream, presumably serving as the neural basis for speech comprehension. However, whether the sensory intelligence for extraction of abstract auditory rules in speech is inherent or experience-dependent remains unclear. To address this issue, we constructed a complex speech sound stream using auditory materials in Mandarin Chinese, in which syllables had a flat lexical tone but differed in other acoustic features to form an abstract auditory rule. This rule was occasionally and randomly violated by the syllables with the rising, dipping or falling tone. We found that both Chinese and foreign speakers detected the violations of the abstract auditory rule in the speech sound stream at a pre-attentive stage, as revealed by the whole-head recordings of mismatch negativity (MMN) in a passive paradigm. However, MMNs peaked earlier in Chinese speakers than in foreign speakers. Furthermore, Chinese speakers showed different MMN peak latencies for the three deviant types, which paralleled recognition points. These findings indicate that the sensory intelligence for extraction of abstract auditory rules in speech sounds is innate but shaped by language experience. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
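
    The mismatch negativity (MMN) analysis referenced above reduces to a deviant-minus-standard difference wave whose most negative point gives the peak latency. Below is a sketch with synthetic ERPs; amplitudes, latencies, and the sampling grid are illustrative values only, not the study's data.

```python
import numpy as np

def mmn_peak_latency(standard_erp, deviant_erp, times_ms):
    """MMN sketch: compute the deviant-minus-standard difference wave and
    return the latency (ms) of its most negative point."""
    diff = deviant_erp - standard_erp
    return times_ms[np.argmin(diff)]

times = np.arange(0, 400, 4.0)                 # 0-400 ms at 250 Hz
standard = np.zeros_like(times)                # flat response to the rule-conforming tone
deviant = -2.0 * np.exp(-((times - 160.0) ** 2) / (2 * 25.0 ** 2))  # negativity near 160 ms
assert mmn_peak_latency(standard, deviant, times) == 160.0
```

    Comparing these latencies between groups (here, Chinese versus foreign speakers) is the operation behind the abstract's finding that MMNs peaked earlier in native listeners.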

  7. Emotional Intelligence among Auditory, Reading, and Kinesthetic Learning Styles of Elementary School Students in Ambon-Indonesia

    ERIC Educational Resources Information Center

    Leasa, Marleny; Corebima, Aloysius D.; Ibrohim; Suwono, Hadi

    2017-01-01

    Students have unique ways in managing the information in their learning process. VARK learning styles associated with memory are considered to have an effect on emotional intelligence. This quasi-experimental research was conducted to compare the emotional intelligence among the students having auditory, reading, and kinesthetic learning styles in…

  8. Call sign intelligibility improvement using a spatial auditory display

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.

    1993-01-01

    A spatial auditory display was used to convolve speech stimuli, consisting of 130 different call signs used in the communications protocol of NASA's John F. Kennedy Space Center, to different virtual auditory positions. An adaptive staircase method was used to determine intelligibility levels of the signal against diotic speech babble, with spatial positions at 30 deg azimuth increments. Non-individualized, minimum-phase approximations of head-related transfer functions were used. The results showed a maximal intelligibility improvement of about 6 dB when the signal was spatialized to 60 deg or 90 deg azimuth positions.
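
    The adaptive staircase method mentioned above can be sketched as a simple 1-up/1-down procedure converging on the ~50%-correct level. This is a generic sketch with a hypothetical logistic listener; the step size, reversal count, and psychometric slope are assumptions, not the study's parameters.

```python
import random

def staircase_threshold(respond, start_db=0.0, step_db=2.0, n_reversals=8):
    """1-up/1-down adaptive staircase: harder after a correct response,
    easier after an error; estimate = mean of the later reversal levels."""
    level, direction, reversals = start_db, None, []
    while len(reversals) < n_reversals:
        correct = respond(level)
        new_direction = -1 if correct else +1
        if direction is not None and new_direction != direction:
            reversals.append(level)                # track where the run changes direction
        direction = new_direction
        level += direction * step_db
    return sum(reversals[2:]) / len(reversals[2:]) # discard the first reversals

random.seed(0)
true_threshold = -6.0
def listener(level):
    # Hypothetical logistic observer: 50% correct exactly at the true threshold.
    return random.random() < 1 / (1 + 10 ** ((true_threshold - level) / 4))

est = staircase_threshold(listener)
```

    Because each correct response lowers the signal level and each error raises it, the track oscillates around the level yielding 50% correct, which serves as the intelligibility threshold.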

  9. Identification of a pathway for intelligible speech in the left temporal lobe

    PubMed Central

    Scott, Sophie K.; Blank, C. Catrin; Rosen, Stuart; Wise, Richard J. S.

    2017-01-01

    Summary: It has been proposed that the identification of sounds, including species-specific vocalizations, by primates depends on anterior projections from the primary auditory cortex, an auditory pathway analogous to the ventral route proposed for the visual identification of objects. We have identified a similar route in the human for understanding intelligible speech. Using PET imaging to identify separable neural subsystems within the human auditory cortex, we used a variety of speech and speech-like stimuli with equivalent acoustic complexity but varying intelligibility. We have demonstrated that the left superior temporal sulcus responds to the presence of phonetic information, but its anterior part only responds if the stimulus is also intelligible. This novel observation demonstrates a left anterior temporal pathway for speech comprehension. PMID:11099443

  10. Auditory-Phonetic Projection and Lexical Structure in the Recognition of Sine-Wave Words

    ERIC Educational Resources Information Center

    Remez, Robert E.; Dubowski, Kathryn R.; Broder, Robin S.; Davids, Morgana L.; Grossman, Yael S.; Moskalenko, Marina; Pardo, Jennifer S.; Hasbun, Sara Maria

    2011-01-01

    Speech remains intelligible despite the elimination of canonical acoustic correlates of phonemes from the spectrum. A portion of this perceptual flexibility can be attributed to modulation sensitivity in the auditory-to-phonetic projection, although signal-independent properties of lexical neighborhoods also affect intelligibility in utterances…

  11. Effects of speech intelligibility level on concurrent visual task performance.

    PubMed

    Payne, D G; Peters, L J; Birkmire, D P; Bonto, M A; Anastasi, J S; Wenger, M J

    1994-09-01

    Four experiments were performed to determine if changes in the level of speech intelligibility in an auditory task have an impact on performance in concurrent visual tasks. The auditory task used in each experiment was a memory search task in which subjects memorized a set of words and then decided whether auditorily presented probe items were members of the memorized set. The visual tasks used were an unstable tracking task, a spatial decision-making task, a mathematical reasoning task, and a probability monitoring task. Results showed that performance on the unstable tracking and probability monitoring tasks was unaffected by the level of speech intelligibility on the auditory task, whereas accuracy in the spatial decision-making and mathematical processing tasks was significantly worse at low speech intelligibility levels. The findings are interpreted within the framework of multiple resource theory.

  12. Call sign intelligibility improvement using a spatial auditory display

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.

    1994-01-01

    A spatial auditory display was designed to separate the multiple communication channels usually heard over one ear, assigning each to a different virtual auditory position. The single 19-inch rack-mount device utilizes digital filtering algorithms to separate up to four communication channels. The filters use four different binaural transfer functions, synthesized from actual outer ear measurements, to impose localization cues on the incoming sound. Hardware design features include 'fail-safe' operation in the case of power loss, and microphone/headset interfaces to the mobile launch communication system in use at KSC. An experiment designed to verify the intelligibility advantage of the display used 130 different call signs taken from the communications protocol used at NASA KSC. A 6 to 7 dB intelligibility advantage was found when multiple channels were spatially displayed, compared to monaural listening. The findings suggest that the use of a spatial auditory display could enhance both occupational and operational safety and efficiency of NASA operations.
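
    Imposing localization cues by binaural filtering, as described above, amounts to convolving a mono signal with a left and a right head-related impulse response (HRIR). The sketch below uses toy HRIRs encoding only an interaural time and level difference, not measured outer-ear transfer functions.

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Place a mono signal at a virtual position by convolving it with a
    pair of head-related impulse responses; returns a 2-channel array."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

itd_samples = 5                                    # ~0.6 ms interaural delay at 8 kHz
hrir_l = np.zeros(16); hrir_l[0] = 1.0             # near ear: earlier, louder
hrir_r = np.zeros(16); hrir_r[itd_samples] = 0.6   # far ear: delayed, attenuated
click = np.zeros(100); click[10] = 1.0             # impulsive test signal
stereo = spatialize(click, hrir_l, hrir_r)
assert np.argmax(stereo[0]) + itd_samples == np.argmax(stereo[1])
```

    Real HRIR pairs additionally encode spectral (pinna) cues; swapping in a different measured pair per channel is what lets one device place four communication channels at four distinct virtual positions.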

  13. Prolonged Walking with a Wearable System Providing Intelligent Auditory Input in People with Parkinson's Disease.

    PubMed

    Ginis, Pieter; Heremans, Elke; Ferrari, Alberto; Dockx, Kim; Canning, Colleen G; Nieuwboer, Alice

    2017-01-01

    Rhythmic auditory cueing is a well-accepted tool for gait rehabilitation in Parkinson's disease (PD), which can now be applied in a performance-adapted fashion due to technological advance. This study investigated immediate differences in gait during a prolonged, 30 min walk with performance-adapted (intelligent) auditory cueing and verbal feedback provided by a wearable sensor-based system as alternatives for traditional cueing. Additionally, potential effects on self-perceived fatigue were assessed. Twenty-eight people with PD and 13 age-matched healthy elderly (HE) performed four 30 min walks with a wearable cue and feedback system. In randomized order, participants received: (1) continuous auditory cueing; (2) intelligent cueing (10 metronome beats triggered by a deviating walking rhythm); (3) intelligent feedback (verbal instructions triggered by a deviating walking rhythm); and (4) no external input. Fatigue was self-scored at rest and after walking during each session. The results showed that while HE were able to maintain cadence for 30 min during all conditions, cadence in PD significantly declined without input. With continuous cueing and intelligent feedback, people with PD were able to maintain cadence (p = 0.04), although they were more physically fatigued than HE. Furthermore, cadence deviated significantly more in people with PD than in HE without input and particularly with intelligent feedback (both: p = 0.04). In PD, continuous and intelligent cueing induced significantly fewer deviations of cadence (p = 0.006). Altogether, this suggests that intelligent cueing is a suitable alternative for the continuous mode during prolonged walking in PD, as it induced similar effects on gait without generating levels of fatigue beyond that of HE.
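
    The performance-adapted ("intelligent") cueing rule described above can be sketched as a simple trigger: emit a burst of metronome beats whenever cadence drifts from the target. The tolerance threshold and beat count below are hypothetical; the wearable system's actual decision rules are not specified in the abstract.

```python
def intelligent_cue(cadences, target, tolerance=0.05, beats=10):
    """Trigger a burst of metronome beats whenever walking cadence deviates
    more than `tolerance` (as a fraction) from the target cadence.
    Returns (step index, number of beats) for each triggered burst."""
    events = []
    for step, cadence in enumerate(cadences):
        if abs(cadence - target) / target > tolerance:
            events.append((step, beats))
    return events

cadences = [110, 109, 111, 100, 99, 110]    # steps/min; cadence drifts low mid-walk
assert intelligent_cue(cadences, target=110) == [(3, 10), (4, 10)]
```

    Unlike continuous cueing, this rule stays silent while cadence is stable, which is why the abstract frames it as delivering similar gait benefits with less constant auditory input.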

  14. Hidden Hearing Loss and Computational Models of the Auditory Pathway: Predicting Speech Intelligibility Decline

    DTIC Science & Technology

    2016-11-28

    of low spontaneous rate auditory nerve fibers (ANFs) and reduction of auditory brainstem response wave-I amplitudes. The goal of this research is...auditory nerve (AN) responses to speech stimuli under a variety of difficult listening conditions. The resulting cochlear neurogram, a spectrogram

  15. Speech Auditory Alerts Promote Memory for Alerted Events in a Video-Simulated Self-Driving Car Ride.

    PubMed

    Nees, Michael A; Helbein, Benji; Porter, Anna

    2016-05-01

    Auditory displays could be essential to helping drivers maintain situation awareness in autonomous vehicles, but to date, few or no studies have examined the effectiveness of different types of auditory displays for this application scenario. Recent advances in the development of autonomous vehicles (i.e., self-driving cars) have suggested that widespread automation of driving may be tenable in the near future. Drivers may be required to monitor the status of automation programs and vehicle conditions as they engage in secondary leisure or work tasks (entertainment, communication, etc.) in autonomous vehicles. An experiment compared memory for alerted events (a component of Level 1 situation awareness) using speech alerts, auditory icons, and a visual control condition during a video-simulated self-driving car ride with a visual secondary task. The alerts gave information about the vehicle's operating status and the driving scenario. Speech alerts resulted in better memory for alerted events. Both auditory display types resulted in less perceived effort devoted toward the study tasks but also greater perceived annoyance with the alerts. Speech auditory displays promoted Level 1 situation awareness during a simulation of a ride in a self-driving vehicle under routine conditions, but annoyance remains a concern with auditory displays. Speech auditory displays showed promise as a means of increasing Level 1 situation awareness of routine scenarios during an autonomous vehicle ride with an unrelated secondary task. © 2016, Human Factors and Ergonomics Society.

  16. Effect of a concurrent auditory task on visual search performance in a driving-related image-flicker task.

    PubMed

    Richard, Christian M; Wright, Richard D; Ee, Cheryl; Prime, Steven L; Shimizu, Yujiro; Vavrik, John

    2002-01-01

    The effect of a concurrent auditory task on visual search was investigated using an image-flicker technique. Participants were undergraduate university students with normal or corrected-to-normal vision who searched for changes in images of driving scenes that involved either driving-related (e.g., traffic light) or driving-unrelated (e.g., mailbox) scene elements. The results indicated that response times were significantly slower if the search was accompanied by a concurrent auditory task. In addition, slower overall responses to scenes involving driving-unrelated changes suggest that the underlying process affected by the concurrent auditory task is strategic in nature. These results were interpreted in terms of their implications for using a cellular telephone while driving. Actual or potential applications of this research include the development of safer in-vehicle communication devices.

  17. An Auditory-Masking-Threshold-Based Noise Suppression Algorithm GMMSE-AMT[ERB] for Listeners with Sensorineural Hearing Loss

    NASA Astrophysics Data System (ADS)

    Natarajan, Ajay; Hansen, John H. L.; Arehart, Kathryn Hoberg; Rossi-Katz, Jessica

    2005-12-01

    This study describes a new noise suppression scheme for hearing aid applications based on the auditory masking threshold (AMT) in conjunction with a modified generalized minimum mean square error estimator (GMMSE) for individual subjects with hearing loss. The representation of cochlear frequency resolution is achieved in terms of auditory filter equivalent rectangular bandwidths (ERBs). Estimation of AMT and spreading functions for masking are implemented in two ways: with normal auditory thresholds and normal auditory filter bandwidths (GMMSE-AMT[ERB]-NH) and with elevated thresholds and broader auditory filters characteristic of cochlear hearing loss (GMMSE-AMT[ERB]-HI). Evaluation is performed using speech corpora with objective quality measures (segmental SNR, Itakura-Saito), along with formal listener evaluations of speech quality rating and intelligibility. While no measurable changes in intelligibility occurred, evaluations showed quality improvement with both algorithm implementations. However, the customized formulation based on individual hearing losses was similar in performance to the formulation based on the normal auditory system.
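
    Segmental SNR, one of the objective quality measures cited above, averages per-frame SNR between a clean reference and a processed signal, with conventional per-frame clamping. This is a generic sketch of the measure; the frame size and clamp limits are common defaults, not necessarily those used in the study.

```python
import numpy as np

def segmental_snr(clean, processed, frame=256, limits_db=(-10.0, 35.0)):
    """Frame-averaged SNR (dB) between a clean reference and a processed
    signal, with each frame's SNR clamped to a conventional range."""
    n = (len(clean) // frame) * frame
    c = clean[:n].reshape(-1, frame)
    e = (clean[:n] - processed[:n]).reshape(-1, frame)   # per-frame error signal
    snr = 10 * np.log10((c ** 2).sum(axis=1) / ((e ** 2).sum(axis=1) + 1e-12) + 1e-12)
    return float(np.mean(np.clip(snr, *limits_db)))

rng = np.random.default_rng(2)
clean = rng.normal(size=4096)
noisy = clean + 0.1 * rng.normal(size=4096)   # roughly 20 dB SNR per frame
seg_snr = segmental_snr(clean, noisy)
```

    Because the average is taken over short frames, the measure penalizes localized distortions (e.g., musical noise from a suppression algorithm) that a single global SNR would wash out.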

  18. Speech comprehension aided by multiple modalities: behavioural and neural interactions

    PubMed Central

    McGettigan, Carolyn; Faulkner, Andrew; Altarelli, Irene; Obleser, Jonas; Baverstock, Harriet; Scott, Sophie K.

    2014-01-01

    Speech comprehension is a complex human skill, the performance of which requires the perceiver to combine information from several sources – e.g. voice, face, gesture, linguistic context – to achieve an intelligible and interpretable percept. We describe a functional imaging investigation of how auditory, visual and linguistic information interact to facilitate comprehension. Our specific aims were to investigate the neural responses to these different information sources, alone and in interaction, and further to use behavioural speech comprehension scores to address sites of intelligibility-related activation in multifactorial speech comprehension. In fMRI, participants passively watched videos of spoken sentences, in which we varied Auditory Clarity (with noise-vocoding), Visual Clarity (with Gaussian blurring) and Linguistic Predictability. Main effects of enhanced signal with increased auditory and visual clarity were observed in overlapping regions of posterior STS. Two-way interactions of the factors (auditory × visual, auditory × predictability) in the neural data were observed outside temporal cortex, where positive signal change in response to clearer facial information and greater semantic predictability was greatest at intermediate levels of auditory clarity. Overall changes in stimulus intelligibility by condition (as determined using an independent behavioural experiment) were reflected in the neural data by increased activation predominantly in bilateral dorsolateral temporal cortex, as well as inferior frontal cortex and left fusiform gyrus. Specific investigation of intelligibility changes at intermediate auditory clarity revealed a set of regions, including posterior STS and fusiform gyrus, showing enhanced responses to both visual and linguistic information. 
Finally, an individual differences analysis showed that greater comprehension performance in the scanning participants (measured in a post-scan behavioural test) was associated with increased activation in left inferior frontal gyrus and left posterior STS. The current multimodal speech comprehension paradigm demonstrates recruitment of a wide comprehension network in the brain, in which posterior STS and fusiform gyrus form sites for convergence of auditory, visual and linguistic information, while left-dominant sites in temporal and frontal cortex support successful comprehension. PMID:22266262

  19. Speech comprehension aided by multiple modalities: behavioural and neural interactions.

    PubMed

    McGettigan, Carolyn; Faulkner, Andrew; Altarelli, Irene; Obleser, Jonas; Baverstock, Harriet; Scott, Sophie K

    2012-04-01

    Speech comprehension is a complex human skill, the performance of which requires the perceiver to combine information from several sources - e.g. voice, face, gesture, linguistic context - to achieve an intelligible and interpretable percept. We describe a functional imaging investigation of how auditory, visual and linguistic information interact to facilitate comprehension. Our specific aims were to investigate the neural responses to these different information sources, alone and in interaction, and further to use behavioural speech comprehension scores to address sites of intelligibility-related activation in multifactorial speech comprehension. In fMRI, participants passively watched videos of spoken sentences, in which we varied Auditory Clarity (with noise-vocoding), Visual Clarity (with Gaussian blurring) and Linguistic Predictability. Main effects of enhanced signal with increased auditory and visual clarity were observed in overlapping regions of posterior STS. Two-way interactions of the factors (auditory × visual, auditory × predictability) in the neural data were observed outside temporal cortex, where positive signal change in response to clearer facial information and greater semantic predictability was greatest at intermediate levels of auditory clarity. Overall changes in stimulus intelligibility by condition (as determined using an independent behavioural experiment) were reflected in the neural data by increased activation predominantly in bilateral dorsolateral temporal cortex, as well as inferior frontal cortex and left fusiform gyrus. Specific investigation of intelligibility changes at intermediate auditory clarity revealed a set of regions, including posterior STS and fusiform gyrus, showing enhanced responses to both visual and linguistic information. 
Finally, an individual differences analysis showed that greater comprehension performance in the scanning participants (measured in a post-scan behavioural test) was associated with increased activation in left inferior frontal gyrus and left posterior STS. The current multimodal speech comprehension paradigm demonstrates recruitment of a wide comprehension network in the brain, in which posterior STS and fusiform gyrus form sites for convergence of auditory, visual and linguistic information, while left-dominant sites in temporal and frontal cortex support successful comprehension. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Predictors of the On-Road Driving Assessment After Traumatic Brain Injury: Comparing Cognitive Tests, Injury Factors, and Demographics.

    PubMed

    McKay, Adam; Liew, Carine; Schönberger, Michael; Ross, Pamela; Ponsford, Jennie

    Objectives: (1) To examine the relations between performance on cognitive tests and the on-road driving assessment in a sample of persons with traumatic brain injury (TBI); (2) to compare cognitive predictors of the on-road assessment with demographic and injury-related predictors. Participants: Ninety-nine people with mild-severe TBI who completed an on-road driving assessment in an Australian rehabilitation setting. Design: Retrospective case series. Measures: Wechsler Test of Adult Reading or National Adult Reading Test-Revised; 4 subtests from the Wechsler Adult Intelligence Scale-III; Rey Auditory Verbal Learning Test; Rey Complex Figure Test; Trail Making Test; demographic factors (age, sex, years licensed); and injury-related factors (duration of posttraumatic amnesia; time postinjury). Results: Participants who failed the driving assessment did worse on measures of attention, visual memory, and executive processing; however, cognitive tests were weak correlates (r values <0.3) and poor predictors of the driving assessment. Posttraumatic amnesia duration, mediated by time postinjury, was the strongest predictor of the driving assessment; that is, participants with more severe TBIs had later driving assessments and were more likely to fail. Conclusions: Cognitive tests are not reliable predictors of the on-road driving assessment outcome. Traumatic brain injury severity may be a better predictor of on-road driving; however, further research is needed to identify the best predictors of driving behavior after TBI.

  1. Reference-Free Assessment of Speech Intelligibility Using Bispectrum of an Auditory Neurogram.

    PubMed

    Hossain, Mohammad E; Jassim, Wissam A; Zilany, Muhammad S A

    2016-01-01

    Sensorineural hearing loss occurs due to damage to the inner and outer hair cells of the peripheral auditory system. Hearing loss can cause decreases in audibility, dynamic range, and frequency and temporal resolution of the auditory system, all of which are known to affect speech intelligibility. In this study, a new reference-free speech intelligibility metric is proposed using 2-D neurograms constructed from the output of a computational model of the auditory periphery. The responses of auditory-nerve fibers with a wide range of characteristic frequencies were simulated to construct the neurograms. The features of the neurograms were extracted using a third-order statistic referred to as the bispectrum. The phase coupling of the neurogram bispectrum provides unique insight into the presence (for listeners with normal hearing) or deficit (for listeners with hearing loss) of supra-threshold nonlinearities beyond audibility. The speech intelligibility scores predicted by the proposed method were compared to behavioral scores for listeners with normal hearing and hearing loss, both in quiet and under noisy background conditions. The results were also compared to the performance of some existing methods. The predicted results showed a good fit with a small error, suggesting that the subjective scores can be estimated reliably using the proposed neural-response-based metric. The proposed metric also had a wide dynamic range, and the predicted scores were well separated as a function of hearing loss. The proposed metric successfully captures the effects of hearing loss and supra-threshold nonlinearities on speech intelligibility. This metric could be applied to evaluate the performance of various speech-processing algorithms designed for hearing aids and cochlear implants.
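
    As a rough illustration of the third-order statistic the metric is built on, a direct bispectrum estimate for a single 1-D segment can be sketched in NumPy. This is a minimal sketch, not the paper's implementation: the function name is invented, a raw signal stands in for a neurogram row, and a practical estimator would average over many segments (and normalize, yielding the bicoherence).

```python
import numpy as np

def bispectrum(x, nfft=128):
    """Direct bispectrum estimate of one signal segment.

    B(f1, f2) = X(f1) * X(f2) * conj(X(f1 + f2)).
    Its magnitude is large only where the components at f1, f2,
    and f1 + f2 are phase-coupled, which is the property the
    intelligibility metric reads out of the neurogram.
    """
    X = np.fft.fft(x, nfft)
    half = nfft // 2
    B = np.empty((half, half), dtype=complex)
    for f1 in range(half):
        for f2 in range(half):
            B[f1, f2] = X[f1] * X[f2] * np.conj(X[(f1 + f2) % nfft])
    return B

# Demo: quadratic phase coupling. The component at bin 20 has phase
# phase(8) + phase(12), so |B[8, 12]| stands out from the rest.
n = np.arange(128)
x = (np.cos(2 * np.pi * 8 * n / 128 + 0.3)
     + np.cos(2 * np.pi * 12 * n / 128 + 1.1)
     + np.cos(2 * np.pi * 20 * n / 128 + 0.3 + 1.1))
B = bispectrum(x)
```

    Because the demo components at bins 8, 12, and 20 satisfy phase(20) = phase(8) + phase(12), the bispectrum magnitude peaks at (8, 12); an uncoupled bin pair such as (9, 13) stays near zero.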

  3. Comparing the information conveyed by envelope modulation for speech intelligibility, speech quality, and music quality.

    PubMed

    Kates, James M; Arehart, Kathryn H

    2015-10-01

    This paper uses mutual information to quantify the relationship between envelope modulation fidelity and perceptual responses. Data from several previous experiments that measured speech intelligibility, speech quality, and music quality are evaluated for normal-hearing and hearing-impaired listeners. A model of the auditory periphery is used to generate envelope signals, and envelope modulation fidelity is calculated using the normalized cross-covariance of the degraded signal envelope with that of a reference signal. Two procedures are used to describe the envelope modulation: (1) modulation within each auditory frequency band and (2) spectro-temporal processing that analyzes the modulation of spectral ripple components fit to successive short-time spectra. The results indicate that low modulation rates provide the highest information for intelligibility, while high modulation rates provide the highest information for speech and music quality. The low-to-mid auditory frequencies are most important for intelligibility, while mid frequencies are most important for speech quality and high frequencies are most important for music quality. Differences between the spectral ripple components used for the spectro-temporal analysis were not significant in five of the six experimental conditions evaluated. The results indicate that different modulation-rate and auditory-frequency weights may be appropriate for indices designed to predict different types of perceptual relationships.
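
    The envelope-fidelity computation described, a normalized cross-covariance between degraded and reference envelopes, can be sketched in a few lines. The rectify-and-smooth envelope below is a crude stand-in for the auditory-periphery model used in the paper, and both function names are illustrative.

```python
import numpy as np

def envelope(x, win=32):
    """Crude amplitude envelope: full-wave rectify, then smooth with
    a moving average (a stand-in for the auditory model's envelope)."""
    return np.convolve(np.abs(x), np.ones(win) / win, mode="same")

def envelope_fidelity(degraded, reference, win=32):
    """Zero-lag normalized cross-covariance between the envelope of a
    degraded signal and that of the clean reference (1.0 = perfect)."""
    e_d = envelope(degraded, win)
    e_r = envelope(reference, win)
    e_d = e_d - e_d.mean()
    e_r = e_r - e_r.mean()
    return float(e_d @ e_r / (np.linalg.norm(e_d) * np.linalg.norm(e_r)))
```

    For identical signals the fidelity is 1.0 by construction, and added noise lowers it; intelligibility and quality indices built on this idea differ mainly in which modulation rates and auditory bands they weight, per the paper's findings.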

  5. Plasticity in the Human Speech Motor System Drives Changes in Speech Perception

    PubMed Central

    Lametti, Daniel R.; Rochet-Capellan, Amélie; Neufeld, Emily; Shiller, Douglas M.

    2014-01-01

    Recent studies of human speech motor learning suggest that learning is accompanied by changes in auditory perception. But what drives the perceptual change? Is it a consequence of changes in the motor system? Or is it a result of sensory inflow during learning? Here, subjects participated in a speech motor-learning task involving adaptation to altered auditory feedback and they were subsequently tested for perceptual change. In two separate experiments, involving two different auditory perceptual continua, we show that changes in the speech motor system that accompany learning drive changes in auditory speech perception. Specifically, we obtained changes in speech perception when adaptation to altered auditory feedback led to speech production that fell into the phonetic range of the speech perceptual tests. However, a similar change in perception was not observed when the auditory feedback that subjects received during learning fell into the phonetic range of the perceptual tests. This indicates that the central motor outflow associated with vocal sensorimotor adaptation drives changes to the perceptual classification of speech sounds. PMID:25080594

  6. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

    PubMed

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers' voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker's face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

  7. Auditory and Non-Auditory Contributions for Unaided Speech Recognition in Noise as a Function of Hearing Aid Use

    PubMed Central

    Gieseler, Anja; Tahden, Maike A. S.; Thiel, Christiane M.; Wagener, Kirsten C.; Meis, Markus; Colonius, Hans

    2017-01-01

    Differences in understanding speech in noise among hearing-impaired individuals cannot be explained entirely by hearing thresholds alone, suggesting the contribution of other factors beyond standard auditory ones as derived from the audiogram. This paper reports two analyses addressing individual differences in unaided speech-in-noise performance among n = 438 elderly hearing-impaired listeners (mean = 71.1 ± 5.8 years). The main analysis was designed to identify clinically relevant auditory and non-auditory measures for speech-in-noise prediction using auditory (audiogram, categorical loudness scaling) and cognitive tests (verbal-intelligence test, screening test of dementia), as well as questionnaires assessing various self-reported measures (health status, socio-economic status, and subjective hearing problems). Using stepwise linear regression analysis, 62% of the variance in unaided speech-in-noise performance was explained, with Pure-tone average (PTA), Age, and Verbal intelligence emerging as the three most important predictors. In the complementary analysis, those individuals with the same hearing loss profile were separated into hearing aid users (HAU) and non-users (NU), and were then compared regarding potential differences in the test measures and in explaining unaided speech-in-noise recognition. The groupwise comparisons revealed significant differences in auditory measures and self-reported subjective hearing problems, while no differences in the cognitive domain were found. Furthermore, groupwise regression analyses revealed that Verbal intelligence had a predictive value in both groups, whereas Age and PTA emerged as significant predictors only in the NU group. PMID:28270784
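
    The stepwise regression reported can be approximated by greedy forward selection: repeatedly add the predictor that most increases R-squared of an ordinary least-squares fit. A minimal NumPy sketch follows, not the authors' statistical software; the predictor names in the demo are hypothetical.

```python
import numpy as np

def forward_stepwise(X, y, names, n_keep=3):
    """Greedy forward selection: at each step, add the column of X
    that most increases the R-squared of an OLS fit with intercept."""
    selected, remaining = [], list(range(X.shape[1]))

    def r2(cols):
        A = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        return 1.0 - resid.var() / y.var()

    while remaining and len(selected) < n_keep:
        best = max(remaining, key=lambda c: r2(selected + [c]))
        selected.append(best)
        remaining.remove(best)
    return [names[c] for c in selected], r2(selected)
```

    On synthetic data where a PTA-like variable carries most of the outcome variance, it is selected first, mirroring the ordering of PTA, Age, and Verbal intelligence in the study.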

  10. Genetic pleiotropy explains associations between musical auditory discrimination and intelligence.

    PubMed

    Mosing, Miriam A; Pedersen, Nancy L; Madison, Guy; Ullén, Fredrik

    2014-01-01

    Musical aptitude is commonly measured using tasks that involve discrimination of different types of musical auditory stimuli. Performance on these discrimination tasks correlates positively across tasks and with intelligence. However, no study to date has explored these associations using a genetically informative sample to estimate underlying genetic and environmental influences. In the present study, a large sample of Swedish twins (N = 10,500) was used to investigate the genetic architecture of the associations between intelligence and performance on three musical auditory discrimination tasks (rhythm, melody and pitch). Phenotypic correlations between the tasks ranged between 0.23 and 0.42 (Pearson r values). Genetic modelling showed that the covariation between the variables could be explained by shared genetic influences. Neither shared nor non-shared environment had a significant effect on the associations. Good fit was obtained with a two-factor model in which one underlying shared genetic factor explained all the covariation between the musical discrimination tasks and IQ, and a second genetic factor explained variance exclusively shared among the discrimination tasks. The results suggest that positive correlations among musical aptitudes result from both genes with broad effects on cognition, and genes with potentially more specific influences on auditory functions.
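
    The twin logic behind such genetic modelling can be illustrated with Falconer's back-of-envelope estimate, h2 ~ 2 * (r_MZ - r_DZ): monozygotic twins share essentially all segregating genes and dizygotic twins about half, so a larger MZ than DZ pair correlation implies genetic influence. This is a far simpler relative of the structural-equation models actually fitted in the study, shown here on simulated data.

```python
import numpy as np

def falconer_h2(mz, dz):
    """Falconer's heritability estimate from twin-pair correlations:
    h2 ~ 2 * (r_MZ - r_DZ). Each input is an (n_pairs, 2) array of
    trait scores for twin 1 and twin 2 of each pair."""
    r_mz = np.corrcoef(mz[:, 0], mz[:, 1])[0, 1]
    r_dz = np.corrcoef(dz[:, 0], dz[:, 1])[0, 1]
    return 2.0 * (r_mz - r_dz)

# Simulate pairs with true heritability 0.6: MZ twins share the
# genetic factor fully, DZ twins share half of its variance.
rng = np.random.default_rng(2)
h2, n = 0.6, 20000

a = rng.standard_normal(n)                       # shared genetic factor
mz = (np.sqrt(h2) * np.column_stack([a, a])
      + np.sqrt(1 - h2) * rng.standard_normal((n, 2)))

a_shared = rng.standard_normal(n)                # DZ: half-shared genes
a1 = np.sqrt(0.5) * a_shared + np.sqrt(0.5) * rng.standard_normal(n)
a2 = np.sqrt(0.5) * a_shared + np.sqrt(0.5) * rng.standard_normal(n)
dz = (np.sqrt(h2) * np.column_stack([a1, a2])
      + np.sqrt(1 - h2) * rng.standard_normal((n, 2)))

est = falconer_h2(mz, dz)
```

    With 20,000 simulated pairs the estimate recovers the true heritability of 0.6 to within sampling error; the study's bivariate models extend this logic to the covariance between traits.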

  12. The Chronometry of Mental Ability: An Event-Related Potential Analysis of an Auditory Oddball Discrimination Task

    ERIC Educational Resources Information Center

    Beauchamp, Chris M.; Stelmack, Robert M.

    2006-01-01

    The relation between intelligence and speed of auditory discrimination was investigated during an auditory oddball task with backward masking. In target discrimination conditions that varied in the interval between the target and the masking stimuli and in the tonal frequency of the target and masking stimuli, higher ability participants (HA)…

  13. Predictive Values of Selected Auditory Perceptual Factors in Relation to Measured First Grade Reading Achievement.

    ERIC Educational Resources Information Center

    McNinch, George

    A study was conducted to determine the relationship between auditory perceptual skills and first-grade reading success when readiness and intelligence measures were used in conjunction with auditory skills assessments. Sex differences were also considered. Six boys and six girls were randomly selected from each of 10 first-grade classrooms.…

  14. Speech Intelligibility Predicted from Neural Entrainment of the Speech Envelope.

    PubMed

    Vanthornhout, Jonas; Decruy, Lien; Wouters, Jan; Simon, Jonathan Z; Francart, Tom

    2018-04-01

    Speech intelligibility is currently measured by scoring how well a person can identify a speech signal. The results of such behavioral measures reflect neural processing of the speech signal, but are also influenced by language processing, motivation, and memory. Electrophysiological measures of hearing often give insight into the neural processing of sound; however, most methods use non-speech stimuli, making it hard to relate the results to behavioral measures of speech intelligibility. The use of natural running speech as a stimulus in electrophysiological measures of hearing is a paradigm shift that makes it possible to bridge the gap between behavioral and electrophysiological measures. Here, by decoding the speech envelope from the electroencephalogram, and correlating it with the stimulus envelope, we demonstrate an electrophysiological measure of neural processing of running speech. We show that behaviorally measured speech intelligibility is strongly correlated with our electrophysiological measure. Our results pave the way towards an objective and automatic way of assessing neural processing of speech presented through auditory prostheses, reducing confounds such as attention and cognitive capabilities. We anticipate that our electrophysiological measure will allow better differential diagnosis of the auditory system, and will allow the development of closed-loop auditory prostheses that automatically adapt to individual users.
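
    The backward-model approach described (reconstruct the speech envelope from the EEG, then correlate it with the actual envelope) can be sketched with a lagged ridge-regression decoder. This is a generic sketch on synthetic data, not the authors' pipeline; the channel count, lag window, and regularizer below are placeholder values.

```python
import numpy as np

def lagged(eeg, n_lags=16):
    """Design matrix of time-lagged copies of each EEG channel,
    letting the decoder integrate over a short time window."""
    n, n_ch = eeg.shape
    cols = [np.roll(eeg[:, c], lag) for c in range(n_ch)
            for lag in range(n_lags)]
    X = np.column_stack(cols)
    X[:n_lags] = 0.0  # zero out samples that wrapped around
    return X

def train_decoder(eeg, env, n_lags=16, ridge=1.0):
    """Ridge-regularized least squares: w = (X'X + aI)^-1 X'y."""
    X = lagged(eeg, n_lags)
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]),
                           X.T @ env)

def tracking_corr(eeg, env, w, n_lags=16):
    """Correlation between the decoded and actual envelope --
    the neural-tracking measure related to intelligibility."""
    pred = lagged(eeg, n_lags) @ w
    return float(np.corrcoef(pred, env)[0, 1])
```

    Trained on one half of a recording and evaluated on the other, the decoder's reconstruction correlation serves as the objective tracking measure; in the study, this correlation is what tracks behavioral intelligibility.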

  15. Application of auditory signals to the operation of an agricultural vehicle: results of pilot testing.

    PubMed

    Karimi, D; Mondor, T A; Mann, D D

    2008-01-01

    The operation of agricultural vehicles is a multitask activity that requires proper distribution of attentional resources. Human factors theories suggest that proper utilization of the operator's sensory capacities under such conditions can improve the operator's performance and reduce the operator's workload. Using a tractor driving simulator, this study investigated whether auditory cues can be used to improve performance of the operator of an agricultural vehicle. Steering of a vehicle was simulated in visual mode (where driving error was shown to the subject using a lightbar) and in auditory mode (where a pair of speakers was used to convey the driving error direction and/or magnitude). A secondary task was also introduced in order to simulate the monitoring of an attached machine. This task included monitoring of two identical displays, which were placed behind the simulator, and responding to them, when needed, using a joystick. This task was also implemented in auditory mode (in which a beep signaled the subject to push the proper button when a response was needed) and in visual mode (in which there was no beep and visual monitoring of the displays was necessary). Two levels of difficulty of the monitoring task were used. Deviation of the simulated vehicle from a desired straight line was used as the measure of performance in the steering task, and reaction time to the displays was used as the measure of performance in the monitoring task. Results of the experiments showed that steering performance was significantly better when steering was a visual task (driving errors were 40% to 60% of the driving errors in auditory mode), although subjective evaluations showed that auditory steering could be easier, depending on the implementation. Performance in the monitoring task was significantly better for auditory implementation (reaction time was approximately 6 times shorter), and this result was strongly supported by subjective ratings. The majority of the subjects preferred the combination of visual mode for the steering task and auditory mode for the monitoring task.

  16. Disentangling syntax and intelligibility in auditory language comprehension.

    PubMed

    Friederici, Angela D; Kotz, Sonja A; Scott, Sophie K; Obleser, Jonas

    2010-03-01

    Studies of the neural basis of spoken language comprehension typically focus on aspects of auditory processing by varying signal intelligibility, or on higher-level aspects of language processing such as syntax. Most studies in either of these threads of language research report brain activation including peaks in the superior temporal gyrus (STG) and/or the superior temporal sulcus (STS), but it is not clear why these areas are recruited in functionally different studies. The current fMRI study aims to disentangle the functional neuroanatomy of intelligibility and syntax in an orthogonal design. The data substantiate functional dissociations between STS and STG in the left and right hemispheres: first, manipulations of speech intelligibility yield bilateral mid-anterior STS peak activation, whereas syntactic phrase structure violations elicit strongly left-lateralized mid STG and posterior STS activation. Second, ROI analyses indicate all interactions of speech intelligibility and syntactic correctness to be located in the left frontal and temporal cortex, while the observed right-hemispheric activations reflect less specific responses to intelligibility and syntax. Our data demonstrate that the mid-to-anterior STS activation is associated with increasing speech intelligibility, while the mid-to-posterior STG/STS is more sensitive to syntactic information within the speech. 2009 Wiley-Liss, Inc.

  17. On the relationship between auditory cognition and speech intelligibility in cochlear implant users: An ERP study.

    PubMed

    Finke, Mareike; Büchner, Andreas; Ruigendijk, Esther; Meyer, Martin; Sandmann, Pascale

    2016-07-01

    There is a high degree of variability in speech intelligibility outcomes across cochlear-implant (CI) users. To better understand how auditory cognition affects speech intelligibility with the CI, we performed an electroencephalography study in which we examined the relationship between central auditory processing, cognitive abilities, and speech intelligibility. Postlingually deafened CI users (N=13) and matched normal-hearing (NH) listeners (N=13) performed an oddball task with words presented in different background conditions (quiet, stationary noise, modulated noise). Participants had to categorize words as living (targets) or non-living entities (standards). We also assessed participants' working memory (WM) capacity and verbal abilities. For the oddball task, we found lower hit rates and prolonged response times in CI users when compared with NH listeners. Noise-related prolongation of the N1 amplitude was found for all participants. Further, we observed group-specific modulation effects of event-related potentials (ERPs) as a function of background noise. While NH listeners showed stronger noise-related modulation of the N1 latency, CI users revealed enhanced modulation effects of the N2/N4 latency. In general, higher-order processing (N2/N4, P3) was prolonged in CI users in all background conditions when compared with NH listeners. Longer N2/N4 latency in CI users suggests that these individuals have difficulty mapping acoustic-phonetic features to lexical representations. These difficulties appear to be greater in speech-in-noise conditions than in a quiet background. Correlation analyses showed that shorter ERP latencies were related to enhanced speech intelligibility (N1, N2/N4), better lexical fluency (N1), and lower ratings of listening effort (N2/N4) in CI users. In sum, our findings suggest that CI users and NH listeners differ with regard to both the sensory and the higher-order processing of speech in quiet as well as in noisy background conditions. Our results also revealed that verbal abilities are related to speech processing and speech intelligibility in CI users, confirming the view that auditory cognition plays an important role for CI outcome. We conclude that differences in auditory-cognitive processing contribute to the variability in speech performance outcomes observed in CI users. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Auditory Pitch Perception in Autism Spectrum Disorder Is Associated With Nonverbal Abilities.

    PubMed

    Chowdhury, Rakhee; Sharda, Megha; Foster, Nicholas E V; Germain, Esther; Tryfon, Ana; Doyle-Thomas, Krissy; Anagnostou, Evdokia; Hyde, Krista L

    2017-11-01

    Atypical sensory perception and heterogeneous cognitive profiles are common features of autism spectrum disorder (ASD). However, previous findings on auditory sensory processing in ASD are mixed. Accordingly, auditory perception and its relation to cognitive abilities in ASD remain poorly understood. Here, children with ASD, and age- and intelligence quotient (IQ)-matched typically developing children, were tested on a low- and a higher level pitch processing task. Verbal and nonverbal cognitive abilities were measured using the Wechsler's Abbreviated Scale of Intelligence. There were no group differences in performance on either auditory task or IQ measure. However, there was significant variability in performance on the auditory tasks in both groups that was predicted by nonverbal, not verbal skills. These results suggest that auditory perception is related to nonverbal reasoning rather than verbal abilities in ASD and typically developing children. In addition, these findings provide evidence for preserved pitch processing in school-age children with ASD with average IQ, supporting the idea that there may be a subgroup of individuals with ASD that do not present perceptual or cognitive difficulties. Future directions involve examining whether similar perceptual-cognitive relationships might be observed in a broader sample of individuals with ASD, such as those with language impairment or lower IQ.

  19. The Getting of Wisdom: Fluid Intelligence Does Not Drive Knowledge Acquisition

    ERIC Educational Resources Information Center

    Christensen, Helen; Batterham, Philip J.; Mackinnon, Andrew J.

    2013-01-01

    The investment hypothesis proposes that fluid intelligence drives the accumulation of crystallized intelligence, such that crystallized intelligence increases more substantially in individuals with high rather than low fluid intelligence. However, most investigations have been conducted on adolescent cohorts or in two-wave data sets. There are few…

  20. Efficacy of Individual Computer-Based Auditory Training for People with Hearing Loss: A Systematic Review of the Evidence

    PubMed Central

    Henshaw, Helen; Ferguson, Melanie A.

    2013-01-01

    Background Auditory training involves active listening to auditory stimuli and aims to improve performance in auditory tasks. As such, auditory training is a potential intervention for the management of people with hearing loss. Objective This systematic review (PROSPERO 2011: CRD42011001406) evaluated the published evidence-base for the efficacy of individual computer-based auditory training to improve speech intelligibility, cognition and communication abilities in adults with hearing loss, with or without hearing aids or cochlear implants. Methods A systematic search of eight databases and key journals identified 229 articles published since 1996, 13 of which met the inclusion criteria. Data were independently extracted and reviewed by the two authors. Study quality was assessed using ten pre-defined scientific and intervention-specific measures. Results Auditory training resulted in improved performance for trained tasks in 9/10 articles that reported on-task outcomes. Although significant generalisation of learning was shown to untrained measures of speech intelligibility (11/13 articles), cognition (1/1 articles) and self-reported hearing abilities (1/2 articles), improvements were small and not robust. Where reported, compliance with computer-based auditory training was high, and retention of learning was shown at post-training follow-ups. Published evidence was of very-low to moderate study quality. Conclusions Our findings demonstrate that published evidence for the efficacy of individual computer-based auditory training for adults with hearing loss is not robust and therefore cannot be reliably used to guide intervention at this time. We identify a need for high-quality evidence to further examine the efficacy of computer-based auditory training for people with hearing loss. PMID:23675431

  1. Effect of Three Classroom Listening Conditions on Speech Intelligibility

    ERIC Educational Resources Information Center

    Ross, Mark; Giolas, Thomas G.

    1971-01-01

    Speech discrimination scores for 13 deaf children were obtained in a classroom under three conditions: the usual listening condition (hearing aid or not), a binaural listening situation using an auditory trainer/FM receiver with the wireless microphone transmitter turned off, and a binaural condition with inputs from the auditory trainer/FM receiver and wireless microphone/FM…

  2. Speech Perception in Individuals with Auditory Neuropathy

    ERIC Educational Resources Information Center

    Zeng, Fan-Gang; Liu, Sheng

    2006-01-01

    Purpose: Speech perception in participants with auditory neuropathy (AN) was systematically studied to answer the following 2 questions: Does noise present a particular problem for people with AN? Can clear speech and cochlear implants alleviate this problem? Method: The researchers evaluated the advantage in intelligibility of clear speech over…

  3. The relationship of speech intelligibility with hearing sensitivity, cognition, and perceived hearing difficulties varies for different speech perception tests

    PubMed Central

    Heinrich, Antje; Henshaw, Helen; Ferguson, Melanie A.

    2015-01-01

    Listeners vary in their ability to understand speech in noisy environments. Hearing sensitivity, as measured by pure-tone audiometry, can only partly explain these results, and cognition has emerged as another key concept. Although cognition relates to speech perception, the exact nature of the relationship remains to be fully understood. This study investigates how different aspects of cognition, particularly working memory and attention, relate to speech intelligibility for various tests. Perceptual accuracy of speech perception represents just one aspect of functioning in a listening environment. Activity and participation limits imposed by hearing loss, in addition to the demands of a listening environment, are also important and may be better captured by self-report questionnaires. Understanding how speech perception relates to self-reported aspects of listening forms the second focus of the study. Forty-four listeners aged between 50 and 74 years with mild sensorineural hearing loss were tested on speech perception tests differing in complexity from low (phoneme discrimination in quiet), to medium (digit triplet perception in speech-shaped noise) to high (sentence perception in modulated noise); cognitive tests of attention, memory, and non-verbal intelligence quotient; and self-report questionnaires of general health-related and hearing-specific quality of life. Hearing sensitivity and cognition related to intelligibility differently depending on the speech test: neither was important for phoneme discrimination, hearing sensitivity alone was important for digit triplet perception, and hearing and cognition together played a role in sentence perception. Self-reported aspects of auditory functioning were correlated with speech intelligibility to different degrees, with digit triplets in noise showing the richest pattern. The results suggest that intelligibility tests can vary in their auditory and cognitive demands and their sensitivity to the challenges that auditory environments pose on functioning. PMID:26136699

  4. Predicting Academic Success: General Intelligence, "Big Five" Personality Traits, and Work Drive

    ERIC Educational Resources Information Center

    Ridgell, Susan D.; Lounsbury, John W.

    2004-01-01

    General intelligence, Big Five personality traits, and the construct Work Drive were studied in relation to two measures of collegiate academic performance: a single course grade received by undergraduate students in an introductory psychology course, and self-reported GPA. General intelligence and Work Drive were found to be significantly…

  5. Verbal short-term memory span in children: long-term modality dependent effects of intrauterine growth restriction.

    PubMed

    Geva, R; Eshel, R; Leitner, Y; Fattal-Valevski, A; Harel, S

    2008-12-01

    Recent reports showed that children born with intrauterine growth restriction (IUGR) are at greater risk of experiencing verbal short-term memory span (STM) deficits that may impede their learning capacities at school. It is still unknown whether these deficits are modality dependent. This long-term, prospective study examined modality-dependent verbal STM functions in children who were diagnosed at birth with IUGR (n = 138) and a control group (n = 64). Their STM skills were evaluated individually at 9 years of age with four conditions of the Visual-Aural Digit Span Test (VADS; Koppitz, 1981): auditory-oral, auditory-written, visuospatial-oral and visuospatial-written. Cognitive competence was evaluated with the short form of the Wechsler Intelligence Scale for Children-Revised (WISC-R95; Wechsler, 1998). We found IUGR-related specific auditory-oral STM deficits (p < .036) in conjunction with two double dissociations: an auditory-visuospatial (p < .014) and an input-output processing distinction (p < .014). Cognitive competence had a significant effect on all four conditions; however, the effect of IUGR on the auditory-oral condition was not overridden by the effect of intelligence quotient (IQ). Intrauterine growth restriction affects global competence and inter-modality processing, as well as distinct auditory input processing related to verbal STM functions. The findings support a long-term relationship between prenatal aberrant head growth and auditory verbal STM deficits by the end of the first decade of life. Empirical, clinical and educational implications are presented.

  6. Cross-modal reorganization in cochlear implant users: Auditory cortex contributes to visual face processing.

    PubMed

    Stropahl, Maren; Plotz, Karsten; Schönfeld, Rüdiger; Lenarz, Thomas; Sandmann, Pascale; Yovel, Galit; De Vos, Maarten; Debener, Stefan

    2015-11-01

    There is converging evidence that the auditory cortex takes over visual functions during a period of auditory deprivation. A residual pattern of cross-modal takeover may prevent the auditory cortex from adapting to restored sensory input as delivered by a cochlear implant (CI) and may limit speech intelligibility with a CI. The aim of the present study was to investigate whether visual face processing in CI users activates auditory cortex and whether this has adaptive or maladaptive consequences. High-density electroencephalogram data were recorded from CI users (n=21) and age-matched normal-hearing (NH) controls (n=21) performing a face versus house discrimination task. Lip-reading and face-recognition abilities were measured, as well as speech intelligibility. Evaluation of event-related potential (ERP) topographies revealed significant group differences over occipito-temporal scalp regions. Distributed source analysis identified significantly higher activation in the right auditory cortex for CI users compared to NH controls, confirming visual takeover. Lip-reading skills were significantly enhanced in the CI group and appeared to be particularly better after a longer duration of deafness, while face recognition was not significantly different between groups. However, auditory cortex activation in CI users was positively related to face-recognition abilities. Our results confirm a cross-modal reorganization for ecologically valid visual stimuli in CI users. Furthermore, they suggest that residual takeover, which can persist even after adaptation to a CI, is not necessarily maladaptive.

  7. Short-Term Memory and Auditory Processing Disorders: Concurrent Validity and Clinical Diagnostic Markers

    ERIC Educational Resources Information Center

    Maerlender, Arthur

    2010-01-01

    Auditory processing disorders (APDs) are of interest to educators and clinicians, as they impact school functioning. Little work has been completed to demonstrate how children with APDs perform on clinical tests. In a series of studies, standard clinical (psychometric) tests from the Wechsler Intelligence Scale for Children, Fourth Edition…

  8. The Development of Spoken Language in Deaf Children: Explaining the Unexplained Variance.

    ERIC Educational Resources Information Center

    Musselman, Carol; Kircaali-Iftar, Gonul

    1996-01-01

    This study compared 20 young deaf children with either exceptionally good or exceptionally poor spoken language for their hearing loss, age, and intelligence. Factors associated with high performance included earlier use of binaural ear-level aids, better educated mothers, auditory/verbal or auditory/oral instruction, reliance on spoken language…

  9. Cognitive mechanisms associated with auditory sensory gating

    PubMed Central

    Jones, L.A.; Hills, P.J.; Dick, K.M.; Jones, S.P.; Bright, P.

    2016-01-01

    Sensory gating is a neurophysiological measure of inhibition that is characterised by a reduction in the P50 event-related potential to a repeated identical stimulus. The objective of this work was to determine the cognitive mechanisms that relate to the neurological phenomenon of auditory sensory gating. Sixty participants underwent a battery of 10 cognitive tasks, including qualitatively different measures of attentional inhibition, working memory, and fluid intelligence. Participants additionally completed a paired-stimulus paradigm as a measure of auditory sensory gating. A correlational analysis revealed that several tasks correlated significantly with sensory gating. However, once fluid intelligence and working memory were accounted for, only a measure of latent inhibition and accuracy scores on the continuous performance task showed significant sensitivity to sensory gating. We conclude that sensory gating reflects the identification of goal-irrelevant information at the encoding (input) stage and the subsequent ability to selectively attend to goal-relevant information based on that previous identification. PMID:26716891
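
Gating in the paired-stimulus (S1-S2) paradigm used in this record is conventionally quantified as the ratio of the P50 amplitude evoked by the second, repeated stimulus to that evoked by the first, with smaller ratios indicating stronger gating. A minimal sketch of that computation; the microvolt values below are illustrative assumptions, not data from the study:

```python
def gating_ratio(p50_s1_uv: float, p50_s2_uv: float) -> float:
    """S2/S1 amplitude ratio for a paired-stimulus (S1-S2) paradigm.
    Lower values indicate stronger sensory gating, i.e. more
    suppression of the response to the repeated stimulus."""
    if p50_s1_uv <= 0:
        raise ValueError("S1 amplitude must be positive")
    return p50_s2_uv / p50_s1_uv

# Illustrative values only: a 4 uV response to S1 and a 1 uV
# response to S2 give a ratio of 0.25 (75% suppression).
ratio = gating_ratio(4.0, 1.0)
suppression_pct = (1 - ratio) * 100
```

A ratio near 1 would indicate little suppression of the repeated stimulus, the pattern typically described as impaired gating.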

  10. Development of a test for recording both visual and auditory reaction times, potentially useful for future studies in patients on opioids therapy

    PubMed Central

    Miceli, Luca; Bednarova, Rym; Rizzardo, Alessandro; Samogin, Valentina; Della Rocca, Giorgio

    2015-01-01

    Objective: Italian Road Law limits driving while undergoing treatment with certain kinds of medication. Here, we report the results of a test, run as a smartphone application (app), assessing auditory and visual reflexes in a sample of 300 drivers. The scope of the test is to provide both the police force and medication-taking drivers with a tool that can evaluate an individual's capacity to drive safely. Methods: The test is run as an app for Apple iOS and Android mobile operating systems and allows four different reaction times to be assessed: simple visual and auditory reaction times and complex visual and auditory reaction times. Reference deciles were created for the test results obtained from a sample of 300 Italian subjects. Results lying within the first three deciles were considered incompatible with safe driving capabilities. Results: Performance is both age-related (r>0.5) and sex-related (female reaction times were significantly slower than those recorded for male subjects, P<0.05). Only 21% of the subjects were able to perform all four tests correctly. Conclusion: We developed and fine-tuned a test called Safedrive that measures visual and auditory reaction times through a smartphone mobile device; the scope of the test is two-fold: to provide a clinical tool for the assessment of the driving capacity of individuals taking pain-relief medication, and to promote a sense of social responsibility in drivers who are on medication and provide these individuals with a means of testing their own capacity to drive safely. PMID:25709406
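
The decile-based criterion described in this record (results falling in the first three reference deciles treated as incompatible with safe driving) can be sketched as follows. This is an illustrative reconstruction, not the Safedrive app's actual scoring code; the reference sample and the higher-is-better score convention are assumptions:

```python
import statistics

def decile_cutoffs(reference_scores):
    """Nine cut points splitting the reference sample into ten deciles."""
    return statistics.quantiles(reference_scores, n=10)

def compatible_with_driving(score, reference_scores):
    """Flag a score as incompatible with safe driving if it falls within
    the first three deciles of the reference distribution (assuming a
    higher score means better performance)."""
    cutoffs = decile_cutoffs(reference_scores)
    # cutoffs[2] is the upper bound of the third decile
    return score > cutoffs[2]

# Illustrative reference sample of composite performance scores.
reference = list(range(1, 101))
ok = compatible_with_driving(50, reference)
```

With real reaction-time data one would instead flag the slowest three deciles; the cut-point logic is the same.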

  12. Effects of in-vehicle warning information displays with or without spatial compatibility on driving behaviors and response performance.

    PubMed

    Liu, Yung-Ching; Jhuang, Jing-Wun

    2012-07-01

    A driving simulator study was conducted to evaluate the effects of five in-vehicle warning information displays on drivers' emergent response and decision performance. The displays comprised a visual display, auditory displays with and without spatial compatibility, and hybrid (combined visual and auditory) displays with and without spatial compatibility. Thirty volunteer drivers were recruited to perform various tasks involving driving, stimulus-response (S-R), divided attention and stress rating. Results show that among the single-modality displays, drivers benefited more from the visual display of warning information than from the auditory display with or without spatial compatibility. However, the auditory display with spatial compatibility significantly improved drivers' performance in reacting to the divided-attention task and in making accurate S-R task decisions. Drivers' best performance was obtained with the hybrid display with spatial compatibility. Hybrid displays enabled drivers to respond fastest and achieve the best accuracy in both the S-R and divided-attention tasks.

  13. Patterns of language and auditory dysfunction in 6-year-old children with epilepsy.

    PubMed

    Selassie, Gunilla Rejnö-Habte; Olsson, Ingrid; Jennische, Margareta

    2009-01-01

    In a previous study we reported difficulty with expressive language and visuoperceptual ability in preschool children with epilepsy and otherwise normal development. The present study analysed speech and language dysfunction for each individual in relation to epilepsy variables, ear preference, and intelligence in these children and described their auditory function. Twenty 6-year-old children with epilepsy (14 females, 6 males; mean age 6:5 y, range 6 y-6 y 11 mo) and 30 reference children without epilepsy (18 females, 12 males; mean age 6:5 y, range 6 y-6 y 11 mo) were assessed for language and auditory ability. Low scores for the children with epilepsy were analysed with respect to speech-language domains, type of epilepsy, site of epileptiform activity, intelligence, and language laterality. Auditory attention, perception, discrimination, and ear preference were measured with a dichotic listening test, and group comparisons were performed. Children with left-sided partial epilepsy had extensive language dysfunction. Most children with partial epilepsy had phonological dysfunction. Language dysfunction was also found in children with generalized and unclassified epilepsies. The children with epilepsy performed significantly worse than the reference children in auditory attention, perception of vowels and discrimination of consonants for the right ear and had more left ear advantage for vowels, indicating undeveloped language laterality.

  14. In-flight speech intelligibility evaluation of a service member with sensorineural hearing loss: case report.

    PubMed

    Casto, Kristen L; Cho, Timothy H

    2012-09-01

    This case report describes the in-flight speech intelligibility evaluation of an aircraft crewmember with pure tone audiometric thresholds that exceed the U.S. Army's flight standards. Results of in-flight speech intelligibility testing highlight the inability to predict functional auditory abilities from pure tone audiometry and underscore the importance of conducting validated functional hearing evaluations to determine aviation fitness-for-duty.

  15. Comparison on driving fatigue related hemodynamics activated by auditory and visual stimulus

    NASA Astrophysics Data System (ADS)

    Deng, Zishan; Gao, Yuan; Li, Ting

    2018-02-01

    As one of the main causes of traffic accidents, driving fatigue deserves researchers' attention, and its detection and monitoring during long-term driving require new techniques. Since functional near-infrared spectroscopy (fNIRS) can detect cerebral hemodynamic responses, it is a promising candidate for fatigue-level detection. Here, we performed three different kinds of experiments on a driver and recorded his cerebral hemodynamic responses during long hours of driving using our fNIRS-based device. Each experiment lasted 7 hours, and one of three specific tests, probing the driver's responses to sounds, traffic lights and direction signs respectively, was administered every hour. The results showed that in the first few hours, visual stimuli induced fatigue more readily than auditory stimuli, and visual stimuli from traffic-light scenes induced fatigue more readily than those from direction signs. We also found that fatigue-related hemodynamic responses increased fastest for auditory stimuli, then for traffic-light scenes, and slowest for direction-sign scenes. Our study compared auditory, visual-color and visual-character stimuli in their sensitivity for inducing driving fatigue, which is meaningful for driving-safety management.

  16. EEG alpha spindles and prolonged brake reaction times during auditory distraction in an on-road driving study.

    PubMed

    Sonnleitner, Andreas; Treder, Matthias Sebastian; Simon, Michael; Willmann, Sven; Ewald, Arne; Buchner, Axel; Schrauf, Michael

    2014-01-01

    Driver distraction is responsible for a substantial number of traffic accidents. This paper describes the impact of an auditory secondary task on drivers' mental states during a primary driving task. N=20 participants performed the test procedure in a car-following task with repeated forced braking on a non-public test track. Performance measures (provoked reaction time to brake lights) and brain activity (EEG alpha spindles) were analyzed to characterize distracted drivers. Further, a classification approach was used to investigate whether alpha spindles can predict drivers' mental states. Results show that reaction times and alpha spindle rate increased with time-on-task. Moreover, brake reaction times and alpha spindle rate were significantly higher while driving with the auditory secondary task as opposed to driving only. In single-trial classification, a combination of spindle parameters yielded a median classification error of about 8% in discriminating distracted from alert driving. Reduced driving performance (i.e., prolonged brake reaction times) during increased cognitive load is assumed to be indicated by EEG alpha spindles, enabling the quantification of driver distraction in experiments on public roads without verbally assessing the drivers' mental states.

  17. Effects and Interactions of Auditory and Visual Cues in Oral Communication.

    ERIC Educational Resources Information Center

    Keys, John W.; And Others

    Visual and auditory cues were tested, separately and jointly, to determine the degree of their contribution to improving overall speech skills of the aurally handicapped. Eight sound intensity levels (from 6 to 15 decibels) were used in presenting phonetically balanced word lists and multiple-choice intelligibility lists to a sample of 24…

  18. A decrease in brain activation associated with driving when listening to someone speak.

    PubMed

    Just, Marcel Adam; Keller, Timothy A; Cynkar, Jacquelyn

    2008-04-18

    Behavioral studies have shown that engaging in a secondary task, such as talking on a cellular telephone, disrupts driving performance. This study used functional magnetic resonance imaging (fMRI) to investigate the impact of concurrent auditory language comprehension on the brain activity associated with a simulated driving task. Participants steered a vehicle along a curving virtual road, either undisturbed or while listening to spoken sentences that they judged as true or false. The dual-task condition produced a significant deterioration in driving accuracy caused by the processing of the auditory sentences. At the same time, the parietal lobe activation associated with spatial processing in the undisturbed driving task decreased by 37% when participants concurrently listened to sentences. The findings show that language comprehension performed concurrently with driving draws mental resources away from the driving and produces deterioration in driving performance, even when it does not require holding or dialing a phone.

  20. Comprehensive evaluation of a child with an auditory brainstem implant.

    PubMed

    Eisenberg, Laurie S; Johnson, Karen C; Martinez, Amy S; DesJardin, Jean L; Stika, Carren J; Dzubak, Danielle; Mahalak, Mandy Lutz; Rector, Emily P

    2008-02-01

    We had an opportunity to evaluate an American child whose family traveled to Italy to receive an auditory brainstem implant (ABI). The goal of this evaluation was to gain insight into possible benefits derived from the ABI and to begin developing assessment protocols for pediatric clinical trials. Design: Case study. Setting: Tertiary referral center. Patient: Pediatric ABI Patient 1 was born with auditory nerve agenesis; ABI surgery was performed in December 2005 in Verona, Italy. The child was assessed at the House Ear Institute, Los Angeles, in July 2006 at the age of 3 years 11 months; follow-up assessment has continued at the HEAR Center in Birmingham, Alabama. Intervention: Auditory brainstem implant. Main outcome measures: Performance was assessed in the domains of audition, speech and language, intelligence and behavior, quality of life, and parental factors. Results: Patient 1 demonstrated detection of sound, speech-pattern perception with visual cues, and inconsistent auditory-only vowel discrimination. Language age with signs was approximately 2 years, and vocalizations were increasing. Of normal intelligence, he exhibited attention deficits with difficulty completing structured tasks. Twelve months later, this child was able to identify speech patterns consistently, and closed-set word identification was emerging. These results were within the range of performance for a small sample of similarly aged pediatric cochlear implant users. Conclusion: Pediatric ABI assessment with a group of well-selected children is needed to examine risk versus benefit in this population and to analyze whether open-set speech recognition is achievable.

  1. Multichannel spatial auditory display for speech communications

    NASA Technical Reports Server (NTRS)

    Begault, D. R.; Erbe, T.; Wenzel, E. M. (Principal Investigator)

    1994-01-01

    A spatial auditory display for multiple speech communications was developed at NASA/Ames Research Center. Input is spatialized by the use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA against diotic speech babble. Spatial positions at 30 degrees azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6-7 dB when the signal was spatialized to 60 or 90 degrees azimuth positions.
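
The adaptive staircase method referenced in this record adjusts the signal level trial by trial to home in on an intelligibility threshold against the babble masker. A minimal one-up/one-down sketch; the step size, starting level, reversal count, and simulated listener below are illustrative assumptions, not the study's actual parameters:

```python
def staircase(respond_correct, start_db=0.0, step_db=2.0, max_reversals=8):
    """One-up/one-down adaptive staircase: the level is decreased after a
    correct response and increased after an error, converging toward the
    50%-correct point of the psychometric function.  The threshold is
    estimated as the mean level at the reversal points."""
    level = start_db
    last_direction = None
    reversals = []
    while len(reversals) < max_reversals:
        correct = respond_correct(level)
        direction = -1 if correct else +1   # harder after correct, easier after error
        if last_direction is not None and direction != last_direction:
            reversals.append(level)          # track direction changed: a reversal
        last_direction = direction
        level += direction * step_db
    return sum(reversals) / len(reversals)

# Illustrative deterministic listener: always correct above -6 dB.
threshold = staircase(lambda level_db: level_db > -6.0)
```

Variants that target other points on the psychometric function (e.g. two-down/one-up for 70.7% correct) follow the same reversal-tracking structure.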

  6. Comparisons of Auditory Performance and Speech Intelligibility after Cochlear Implant Reimplantation in Mandarin-Speaking Users

    PubMed Central

    Hwang, Chung-Feng; Ko, Hui-Chen; Tsou, Yung-Ting; Chan, Kai-Chieh; Fang, Hsuan-Yeh; Wu, Che-Ming

    2016-01-01

    Objectives. We evaluated the causes, hearing, and speech performance before and after cochlear implant reimplantation in Mandarin-speaking users. Methods. In total, 589 patients who underwent cochlear implantation in our medical center between 1999 and 2014 were reviewed retrospectively. Data related to demographics, etiologies, implant-related information, complications, and hearing and speech performance were collected. Results. In total, 22 (3.74%) cases were found to have major complications. Infection (n = 12) and hard failure of the device (n = 8) were the most common major complications. Among them, 13 were reimplanted in our hospital. The mean scores of the Categorical Auditory Performance (CAP) and the Speech Intelligibility Rating (SIR) obtained before and after reimplantation were 5.5 versus 5.8 and 3.7 versus 4.3, respectively. The SIR score after reimplantation was significantly better than before the operation. Conclusions. Cochlear implantation is a safe procedure with low rates of postsurgical revisions and device failures. The Mandarin-speaking patients in this study who received reimplantation had restored auditory performance and speech intelligibility after surgery. Device soft failure was rare in our series; nevertheless, attention should be paid to Mandarin-speaking CI users who require revision of their implants because of undesirable symptoms or decreasing performance of uncertain cause. PMID:27413753

  7. Audiomotor Perceptual Training Enhances Speech Intelligibility in Background Noise.

    PubMed

    Whitton, Jonathon P; Hancock, Kenneth E; Shannon, Jeffrey M; Polley, Daniel B

    2017-11-06

    Sensory and motor skills can be improved with training, but learning is often restricted to practice stimuli. As an exception, training on closed-loop (CL) sensorimotor interfaces, such as action video games and musical instruments, can impart a broad spectrum of perceptual benefits. Here we ask whether computerized CL auditory training can enhance speech understanding in levels of background noise that approximate a crowded restaurant. Elderly hearing-impaired subjects trained for 8 weeks on a CL game that, like a musical instrument, challenged them to monitor subtle deviations between predicted and actual auditory feedback as they moved their fingertip through a virtual soundscape. We performed our study as a randomized, double-blind, placebo-controlled trial by training other subjects in an auditory working-memory (WM) task. Subjects in both groups improved at their respective auditory tasks and reported comparable expectations for improved speech processing, thereby controlling for placebo effects. Whereas speech intelligibility was unchanged after WM training, subjects in the CL training group could correctly identify 25% more words in spoken sentences or digit sequences presented in high levels of background noise. Numerically, CL audiomotor training provided more than three times the benefit of our subjects' hearing aids for speech processing in noisy listening conditions. Gains in speech intelligibility could be predicted from gameplay accuracy and baseline inhibitory control. However, benefits did not persist in the absence of continuing practice. These studies employ stringent clinical standards to demonstrate that perceptual learning on a computerized audio game can transfer to "real-world" communication challenges. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Auditory “bubbles”: Efficient classification of the spectrotemporal modulations essential for speech intelligibility

    PubMed Central

    Venezia, Jonathan H.; Hickok, Gregory; Richards, Virginia M.

    2016-01-01

    Speech intelligibility depends on the integrity of spectrotemporal patterns in the signal. The current study is concerned with the speech modulation power spectrum (MPS), which is a two-dimensional representation of energy at different combinations of temporal and spectral (i.e., spectrotemporal) modulation rates. A psychophysical procedure was developed to identify the regions of the MPS that contribute to successful reception of auditory sentences. The procedure, based on the two-dimensional image classification technique known as “bubbles” (Gosselin and Schyns (2001). Vision Res. 41, 2261–2271), involves filtering (i.e., degrading) the speech signal by removing parts of the MPS at random, and relating filter patterns to observer performance (keywords identified) over a number of trials. The result is a classification image (CImg) or “perceptual map” that emphasizes regions of the MPS essential for speech intelligibility. This procedure was tested using normal-rate and 2×-time-compressed sentences. The results indicated: (a) CImgs could be reliably estimated in individual listeners in relatively few trials, (b) CImgs tracked changes in spectrotemporal modulation energy induced by time compression, though not completely, indicating that “perceptual maps” deviated from physical stimulus energy, and (c) the bubbles method captured variance in intelligibility not reflected in a common modulation-based intelligibility metric (spectrotemporal modulation index or STMI). PMID:27586738
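    The "bubbles" logic of the procedure can be illustrated with a toy simulation. The grid size, bubble density, and the idealized observer below are hypothetical stand-ins for the real MPS filtering and keyword scoring:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy modulation power spectrum grid: 8 spectral x 8 temporal modulation bins
    shape = (8, 8)
    critical = (2, 3)      # hypothetical MPS region essential for intelligibility

    n_trials = 2000
    masks = rng.random((n_trials,) + shape) < 0.5   # keep ~half the MPS per trial
    # Idealized observer: keywords identified only if the critical bin survives
    correct = masks[:, critical[0], critical[1]]

    # Classification image: mean mask on correct minus incorrect trials
    cimg = masks[correct].mean(axis=0) - masks[~correct].mean(axis=0)
    ```

    Because the simulated observer succeeds exactly when the critical bin survives filtering, the classification image peaks sharply at that bin; with human data the peak instead spreads over whichever MPS regions support keyword identification.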

  9. Speech intelligibility in cerebral palsy children attending an art therapy program.

    PubMed

    Wilk, Magdalena; Pachalska, Maria; Lipowska, Małgorzata; Herman-Sucharska, Izabela; Makarowski, Ryszard; Mirski, Andrzej; Jastrzebowska, Grazyna

    2010-05-01

    Dysarthria is a common sequela of cerebral palsy (CP), directly affecting both the intelligibility of speech and the child's psycho-social adjustment. Speech therapy focused exclusively on the articulatory organs does not always help CP children speak more intelligibly. The program of art therapy described here has proven helpful for these children. From among all the CP children enrolled in our art therapy program from 2005 to 2009, we selected a group of 14 boys and girls (average age 15.3 years) with severe dysarthria at baseline but no other language or cognitive disturbances. Our retrospective study was based on results from the Auditory Dysarthria Scale and neuropsychological tests of fluency, administered routinely over the 4 months of art therapy. All 14 children in the study group showed some degree of improvement after art therapy in all tested parameters. On the Auditory Dysarthria Scale, highly significant improvements were noted in overall intelligibility (p<0.0001), with significant improvement (p<0.001) in volume, tempo, and control of pauses. The least improvement was noted in the most purely motor parameters. All 14 children also exhibited significant improvement in fluency. Art therapy improves the intelligibility of speech in children with cerebral palsy, even when language functions are not themselves the object of therapeutic intervention.

  10. Effects of age and auditory and visual dual tasks on closed-road driving performance.

    PubMed

    Chaparro, Alex; Wood, Joanne M; Carberry, Trent

    2005-08-01

    This study investigated how the driving performance of young and old participants is affected by visual and auditory secondary tasks on a closed driving course. Twenty-eight participants comprising two age groups (younger, mean age = 27.3 years; older, mean age = 69.2 years) drove around a 5.1-km closed-road circuit under both single and dual task conditions. Measures of driving performance included detection and identification of road signs, detection and avoidance of large low-contrast road hazards, gap judgment, lane keeping, and time to complete the course. The dual task required participants to verbally report the sums of pairs of single-digit numbers presented through either a computer speaker (auditorily) or a dashboard-mounted monitor (visually) while driving. Participants also completed a vision and cognitive screening battery, including LogMAR visual acuity, Pelli-Robson letter contrast sensitivity, the Trails test, and the Digit Symbol Substitution (DSS) test. Drivers reported significantly fewer signs, hit more road hazards, misjudged more gaps, and increased their time to complete the course under the dual task (visual and auditory) conditions compared with the single task condition. The older participants also reported significantly fewer road signs and drove significantly more slowly than the younger participants, and this was exacerbated in the visual dual task condition. The regression analysis revealed that cognitive aging (measured by the DSS and Trails test) rather than chronologic age was the better predictor of the declines seen in driving performance under dual task conditions. An overall z score was calculated that took into account both driving and secondary task (summing) performance under the two dual task conditions. Performance was significantly worse for the auditory dual task than for the visual dual task, and the older participants performed significantly worse than the younger participants. These findings demonstrate that multitasking had a significant detrimental impact on driving performance and that cognitive aging was the best predictor of the declines observed. These results have implications for the use of mobile phones and in-vehicle navigation devices while driving, especially for older adults.
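    The overall z score described above can be computed along these lines. The equal weighting of the two standardized measures is an assumption (the abstract does not give the exact formula), and both inputs are assumed scored so that higher is better:

    ```python
    import numpy as np

    def composite_z(driving_scores, secondary_scores):
        """Combine driving and secondary-task performance into one z score.

        Each measure is standardized against the whole sample and the two
        z scores are averaged; the study's exact weighting is an assumption.
        """
        z_drive = (driving_scores - driving_scores.mean()) / driving_scores.std()
        z_task = (secondary_scores - secondary_scores.mean()) / secondary_scores.std()
        return (z_drive + z_task) / 2.0
    ```

    By construction the composite has zero mean over the sample, so group differences (e.g. older versus younger drivers) read directly as signed offsets.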

  11. A physiologically-inspired model reproducing the speech intelligibility benefit in cochlear implant listeners with residual acoustic hearing.

    PubMed

    Zamaninezhad, Ladan; Hohmann, Volker; Büchner, Andreas; Schädler, Marc René; Jürgens, Tim

    2017-02-01

    This study introduces a speech intelligibility model for cochlear implant users with ipsilateral preserved acoustic hearing that aims at simulating the observed speech-in-noise intelligibility benefit of simultaneous electric and acoustic stimulation (EA-benefit). The model simulates auditory nerve spiking in response to electric and/or acoustic stimulation; the temporally and spatially integrated spiking patterns serve as the final internal representation of noisy speech. Speech reception thresholds (SRTs) in stationary noise were predicted for a sentence test using an automatic speech recognition framework. The model was employed to systematically investigate the effect of three physiologically relevant factors on simulated SRTs: (1) the spatial spread of the electric field, which co-varies with the number of electrically stimulated auditory nerve fibers, (2) the "internal" noise, simulating deprivation of the auditory system, and (3) the upper frequency limit of acoustic hearing. The results show that simulated SRTs increase monotonically with increasing spatial spread for fixed internal noise, and also increase with internal noise strength for a fixed spatial spread. The predicted EA-benefit does not follow such a systematic trend and depends on the specific combination of model parameters. Beyond 300 Hz, the upper frequency limit of preserved acoustic hearing has little further influence on the speech intelligibility of EA listeners in stationary noise. The model-predicted EA-benefits are within the range of EA-benefits shown by 18 of 21 actual cochlear implant listeners with preserved acoustic hearing. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Design of an intelligent car

    NASA Astrophysics Data System (ADS)

    Na, Yongyi

    2017-03-01

    This paper describes the design of a simple intelligent car that uses an AT89S52 single-chip microcomputer as its detection and control core. A TL-Q5MC metal sensor detects iron markers along the track and feeds the signal back to the microcontroller, which drives the car through the course at a predetermined speed according to the scheduled operating mode; by selecting a different operating mode, the microcontroller can also steer the car along an S-shaped iron track. An A44E Hall-effect element measures wheel speed, and a 1602 LCD shows the elapsed driving time while the car is moving and, once it stops, cycles through the driving time, distance, average speed, and speed over time. The design is simple in structure and easy to implement, yet highly intelligent and user-friendly, and reflects intelligent behavior to a certain extent.
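    The speed, distance, and average-speed values shown on the 1602 LCD follow from simple pulse arithmetic on the Hall-sensor output. The wheel circumference and one-pulse-per-revolution geometry below are illustrative assumptions; on the AT89S52 the same arithmetic would run over timer-captured pulse counts:

    ```python
    # Assumed geometry: one magnet per wheel, so one Hall pulse per revolution.
    WHEEL_CIRCUMFERENCE_M = 0.20     # hypothetical wheel circumference (m)

    def speed_mps(pulses, interval_s, pulses_per_rev=1):
        """Instantaneous speed from Hall pulses counted over one interval."""
        revolutions = pulses / pulses_per_rev
        return revolutions * WHEEL_CIRCUMFERENCE_M / interval_s

    def trip_stats(pulse_counts, interval_s, pulses_per_rev=1):
        """Total distance, elapsed time, and average speed for the LCD display."""
        total_revs = sum(pulse_counts) / pulses_per_rev
        distance = total_revs * WHEEL_CIRCUMFERENCE_M
        elapsed = interval_s * len(pulse_counts)
        return distance, elapsed, distance / elapsed
    ```

    For example, 5 pulses in a 1 s window with a 0.20 m wheel gives 1.0 m/s, and summing the per-interval counts yields the distance and average speed cycled on the display after the car stops.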

  13. Noise and communication: a three-year update.

    PubMed

    Brammer, Anthony J; Laroche, Chantal

    2012-01-01

    Noise is omnipresent and impacts us all in many aspects of daily living. Noise can interfere with communication not only in industrial workplaces, but also in other work settings (e.g. open-plan offices, construction, and mining) and within buildings (e.g. residences, arenas, and schools). The interference of noise with communication can have significant social consequences, especially for persons with hearing loss, and may compromise safety (e.g. failure to perceive auditory warning signals), influence worker productivity and learning in children, affect health (e.g. vocal pathology, noise-induced hearing loss), compromise speech privacy, and impact social participation by the elderly. For workers, attempts have been made to: 1) better define the auditory performance needed to function effectively and measure these abilities directly when assessing Auditory Fitness for Duty, 2) design hearing protection devices that can improve speech understanding while offering adequate protection against loud noises, and 3) improve speech privacy in open-plan offices. As the elderly are particularly vulnerable to the effects of noise, an understanding of the interplay between auditory, cognitive, and social factors and its effect on speech communication and social participation is also critical. Classroom acoustics and speech intelligibility in children have also gained renewed interest because of the importance of effective speech comprehension in noise for learning. Finally, substantial progress has been made in developing models aimed at better predicting speech intelligibility. Despite progress in various fields, the design of alarm signals continues to lag behind advancements in knowledge. This summary of the last three years' research highlights some of the most recent issues for the workplace, for older adults, and for children, as well as the effectiveness of warning sounds and models for predicting speech intelligibility. Suggestions for future work are also discussed.

  14. Distraction and task engagement: How interesting and boring information impact driving performance and subjective and physiological responses.

    PubMed

    Horrey, William J; Lesch, Mary F; Garabet, Angela; Simmons, Lucinda; Maikala, Rammohan

    2017-01-01

    As more devices and services are integrated into vehicles, drivers face new opportunities to perform additional tasks while driving. While many studies have explored the detrimental effects of varying task demands on driving performance, little attention has been devoted to tasks that vary in terms of personal interest or investment, a quality we liken to the concept of task engagement. The purpose of this study was to explore the impact of task engagement on driving performance, subjective appraisals of performance and workload, and various physiological measurements. In this study, 31 participants (M = 37 yrs) completed three conditions in a driving simulator: listening to boring auditory material; listening to interesting material; and driving with no auditory material. Drivers were simultaneously monitored using near-infrared spectroscopy, heart monitoring, and eye tracking systems. Drivers exhibited less variability in lane keeping and headway maintenance in both auditory conditions; however, response times to critical braking events were longer in the interesting audio condition. Drivers also perceived the interesting material to be less demanding and less complex, although the material was objectively matched for difficulty. Drivers showed a reduced concentration of cerebral oxygenated hemoglobin when listening to interesting material, compared to the baseline and boring conditions, yet they exhibited superior recognition for this material. The practical implications, from a safety standpoint, are discussed. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  15. Intelligent Gate Drive for Fast Switching and Crosstalk Suppression of SiC Devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Zheyu; Dix, Jeffery; Wang, Fei Fred

    This study presents an intelligent gate drive for silicon carbide (SiC) devices to fully utilize their potential of high switching-speed capability in a phase-leg configuration. Based on the SiC device's intrinsic properties, a gate assist circuit consisting of two auxiliary transistors with two diodes is introduced to actively control gate voltages and gate loop impedances of both devices in a phase-leg configuration during different switching transients. Compared to conventional gate drives, the proposed circuit has the capability of accelerating the switching speed of the phase-leg power devices and suppressing the crosstalk to below device limits. Based on Wolfspeed 1200-V SiC MOSFETs, the test results demonstrate the effectiveness of this intelligent gate drive under varying operating conditions. More importantly, the proposed intelligent gate assist circuitry is embedded into a gate drive integrated circuit, offering a simple, compact, and reliable solution for end-users to maximize benefits of SiC devices in actual power electronics applications.

  16. Intelligent Gate Drive for Fast Switching and Crosstalk Suppression of SiC Devices

    DOE PAGES

    Zhang, Zheyu; Dix, Jeffery; Wang, Fei Fred; ...

    2017-01-19

    This study presents an intelligent gate drive for silicon carbide (SiC) devices to fully utilize their potential of high switching-speed capability in a phase-leg configuration. Based on the SiC device's intrinsic properties, a gate assist circuit consisting of two auxiliary transistors with two diodes is introduced to actively control gate voltages and gate loop impedances of both devices in a phase-leg configuration during different switching transients. Compared to conventional gate drives, the proposed circuit has the capability of accelerating the switching speed of the phase-leg power devices and suppressing the crosstalk to below device limits. Based on Wolfspeed 1200-V SiC MOSFETs, the test results demonstrate the effectiveness of this intelligent gate drive under varying operating conditions. More importantly, the proposed intelligent gate assist circuitry is embedded into a gate drive integrated circuit, offering a simple, compact, and reliable solution for end-users to maximize benefits of SiC devices in actual power electronics applications.

  17. [Short-term sentence memory in children with auditory processing disorders].

    PubMed

    Kiese-Himmel, C

    2010-05-01

    To compare sentence repetition performance across groups of children with auditory processing disorders (APD) and to examine the relationship between age or nonverbal intelligence and sentence recall. Nonverbal intelligence was measured with the COLOURED MATRICES; in addition, the children completed a standardized SENTENCE REPETITION (SR) test, which requires the child to repeat spoken sentences (a subtest of the HEIDELBERGER SPRACHENTWICKLUNGSTEST). There were three clinical groups (n=49 with monosymptomatic APD; n=29 with APD plus developmental language impairment; n=14 with APD plus developmental dyslexia) and two control groups (n=13 typically developing peers without any clinical developmental disorder; n=10 children with slightly reduced nonverbal intelligence). The analysis showed a significant group effect (p=0.0007). The best performance was achieved by the normal controls (T-score 52.9; SD 6.4; min 42; max 59), followed by children with monosymptomatic APD (43.2; SD 9.2), children with comorbid APD and developmental dyslexia (43.1; SD 10.3), and children with comorbid APD and developmental language impairment (39.4; SD 9.4). The clinical control group showed the lowest performance on average (38.6; SD 9.6). Accordingly, language-impaired children and children with slightly reduced intelligence were poorly able to use their grammatical knowledge for SR. A statistically significant improvement in SR with increasing age was verified, with the exception of the small group with lowered intelligence, which comprised the oldest children. Nonverbal intelligence correlated positively with SR only in children with below-average intelligence (0.62; p=0.054). The absence of APD and SLI, as well as normal intelligence, facilitated the use of phonological information for SR.

  18. Impaired Driving

    MedlinePlus

    ... 497 people died in alcohol-impaired driving crashes, accounting for 28% of all traffic-related deaths in ... visual and auditory information processing *Blood Alcohol Concentration Measurement The number of drinks listed represents the approximate ...

  19. Effects of practice on interference from an auditory task while driving : a simulation study

    DOT National Transportation Integrated Search

    2004-12-01

    Experimental research on the effects of cellular phone conversations on driving indicates that the phone task interferes with many driving-related functions, especially with older drivers. Limitations of past research have been that (1) the dual task...

  20. On the balance of envelope and temporal fine structure in the encoding of speech in the early auditory system.

    PubMed

    Shamma, Shihab; Lorenzi, Christian

    2013-05-01

    There is much debate on how the spectrotemporal modulations of speech (or its spectrogram) are encoded in the responses of the auditory nerve, and whether speech intelligibility is best conveyed via the "envelope" (E) or "temporal fine-structure" (TFS) of the neural responses. Wide use of vocoders to resolve this question has commonly assumed that manipulating the amplitude-modulation and frequency-modulation components of the vocoded signal alters the relative importance of E or TFS encoding on the nerve, thus facilitating assessment of their relative importance to intelligibility. Here we argue that this assumption is incorrect, and that the vocoder approach is ineffective in differentially altering the neural E and TFS. In fact, we demonstrate using a simplified model of early auditory processing that both neural E and TFS encode the speech spectrogram with constant and comparable relative effectiveness regardless of the vocoder manipulations. However, we also show that neural TFS cues are less vulnerable than their E counterparts under severe noisy conditions, and hence should play a more prominent role in cochlear stimulation strategies.
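    The E/TFS decomposition at issue is conventionally obtained from the analytic signal. A minimal sketch follows (the textbook Hilbert decomposition, not the authors' model of early auditory processing, which operates on cochlear filterbank outputs):

    ```python
    import numpy as np

    def envelope_tfs(x):
        """Split a real signal into Hilbert envelope (E) and temporal fine structure (TFS).

        The analytic signal is built via the FFT (one-sided spectrum doubling);
        env is the slowly varying amplitude, tfs the rapid carrier fluctuations.
        """
        n = len(x)
        X = np.fft.fft(x)
        h = np.zeros(n)
        h[0] = 1.0
        if n % 2 == 0:
            h[n // 2] = 1.0
            h[1:n // 2] = 2.0
        else:
            h[1:(n + 1) // 2] = 2.0
        analytic = np.fft.ifft(X * h)
        env = np.abs(analytic)                 # E: slowly varying amplitude
        tfs = np.cos(np.angle(analytic))       # TFS: rapid carrier fluctuations
        return env, tfs
    ```

    For real inputs the product env * tfs reconstructs the original signal exactly, which is the defining property of this decomposition.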

  1. Home-based Early Intervention on Auditory and Speech Development in Mandarin-speaking Deaf Infants and Toddlers at Chronological Ages 7-24 Months.

    PubMed

    Yang, Ying; Liu, Yue-Hui; Fu, Ming-Fu; Li, Chun-Lin; Wang, Li-Yan; Wang, Qi; Sun, Xi-Bin

    2015-08-20

    Data on early auditory and speech development during home-based early intervention in infants and toddlers with hearing loss younger than 2 years are still sparse in China. This study aimed to observe the development of auditory and speech skills in deaf infants and toddlers who were fitted with hearing aids and/or received cochlear implantation between the chronological ages of 7 and 24 months, and to analyze the effect of chronological age and recovery time on auditory and speech development over the course of home-based early intervention. This longitudinal study included 55 hearing-impaired children with severe-to-profound binaural deafness, divided into Group A (7-12 months), Group B (13-18 months), and Group C (19-24 months) by chronological age. The Categories of Auditory Performance (CAP) and Speech Intelligibility Rating (SIR) scales were used to evaluate auditory and speech development at baseline and at 3, 6, 9, 12, 18, and 24 months of habilitation. Descriptive statistics were used to describe demographic features, and data were analyzed by repeated-measures analysis of variance. With 24 months of hearing intervention, 78% of the patients were able to understand common phrases and conversation without lip-reading, and 96% were intelligible to a listener. Children in all three groups showed rapid growth in each period of habilitation. CAP and SIR scores developed rapidly within 24 months of device fitting in Group A, which showed much better auditory and speech abilities than Group B (P < 0.05) and Group C (P < 0.05). Group B achieved better results than Group C, although the difference was not significant (P > 0.05). The data suggest that early hearing intervention and home-based habilitation benefit auditory and speech development. Chronological age and recovery time may be major factors in aural-verbal outcomes for hearing-impaired children. The development of auditory and speech skills in hearing-impaired children may be especially rapid in the first year of habilitation after fitting of the auxiliary device.

  2. Envelope and intensity based prediction of psychoacoustic masking and speech intelligibility.

    PubMed

    Biberger, Thomas; Ewert, Stephan D

    2016-08-01

    Human auditory perception and speech intelligibility have been successfully described based on the two concepts of spectral masking and amplitude modulation (AM) masking. The power-spectrum model (PSM) [Patterson and Moore (1986). Frequency Selectivity in Hearing, pp. 123-177] accounts for effects of spectral masking and critical bandwidth, while the envelope power-spectrum model (EPSM) [Ewert and Dau (2000). J. Acoust. Soc. Am. 108, 1181-1196] has been successfully applied to AM masking and discrimination. Both models extract the long-term (envelope) power to calculate signal-to-noise ratios (SNR). Recently, the EPSM has been applied to speech intelligibility (SI) considering the short-term envelope SNR on various time scales (multi-resolution speech-based envelope power-spectrum model; mr-sEPSM) to account for SI in fluctuating noise [Jørgensen, Ewert, and Dau (2013). J. Acoust. Soc. Am. 134, 436-446]. Here, a generalized auditory model is suggested combining the classical PSM and the mr-sEPSM to jointly account for psychoacoustics and speech intelligibility. The model was extended to consider the local AM depth in conditions with slowly varying signal levels, and the relative role of long-term and short-term SNR was assessed. The suggested generalized power-spectrum model is shown to account for a large variety of psychoacoustic data and to predict speech intelligibility in various types of background noise.
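    The envelope-SNR idea behind the (mr-)sEPSM can be sketched as follows. The rectify-and-smooth envelope extractor and the single broadband channel are crude stand-ins for the model's auditory and modulation filterbanks and its multi-resolution segmentation:

    ```python
    import numpy as np

    def norm_env_power(x, fs, win_s=0.01):
        """Normalized AC power of the temporal envelope (EPSM-style).

        Envelope: rectification plus moving average; its AC power is
        normalized by the squared mean envelope, as in the envelope
        power-spectrum model.
        """
        win = max(1, int(win_s * fs))
        env = np.convolve(np.abs(x), np.ones(win) / win, mode="same")
        ac = env - env.mean()
        return np.mean(ac ** 2) / env.mean() ** 2

    def snr_env(speech_plus_noise, noise, fs):
        """Envelope SNR: excess envelope power of S+N over the noise alone."""
        p_sn = norm_env_power(speech_plus_noise, fs)
        p_n = norm_env_power(noise, fs)
        return max(p_sn - p_n, 1e-6) / p_n
    ```

    A strongly modulated signal against a steady masker yields a large envelope SNR, while a flat-envelope signal yields almost none, which is the contrast the model maps to intelligibility.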

  3. Looming auditory collision warnings for driving.

    PubMed

    Gray, Rob

    2011-02-01

    A driving simulator was used to compare the effectiveness of increasing intensity (looming) auditory warning signals with other types of auditory warnings. Auditory warnings have been shown to speed driver reaction time in rear-end collision situations; however, it is not clear which type of signal is the most effective. Although verbal and symbolic (e.g., a car horn) warnings have faster response times than abstract warnings, they often lead to more response errors. Participants (N=20) experienced four nonlooming auditory warnings (constant intensity, pulsed, ramped, and car horn), three looming auditory warnings ("veridical," "early," and "late"), and a no-warning condition. In 80% of the trials, warnings were activated when a critical response was required, and in 20% of the trials, the warnings were false alarms. For the early (late) looming warnings, the rate of change of intensity signaled a time to collision (TTC) that was shorter (longer) than the actual TTC. Veridical looming and car horn warnings had significantly faster brake reaction times (BRT) compared with the other nonlooming warnings (by 80 to 160 ms). However, the number of braking responses in false alarm conditions was significantly greater for the car horn. BRT increased significantly and systematically as the TTC signaled by the looming warning was changed from early to veridical to late. Looming auditory warnings produce the best combination of response speed and accuracy. The results indicate that looming auditory warnings can be used to effectively warn a driver about an impending collision.
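    A looming warning of the kind tested can be generated by letting intensity track the reciprocal of the remaining time to collision. All constants below are illustrative, with `bias` standing in for the early/veridical/late manipulation of the signaled TTC:

    ```python
    import numpy as np

    def looming_intensity_db(t, ttc, bias=1.0, base_db=60.0):
        """Intensity ramp (dB) for a looming auditory warning.

        A source approaching at constant speed grows in amplitude roughly as
        1/(TTC - t), i.e. about +6 dB per halving of the remaining time.
        bias > 1 simulates an 'early' warning (signals a shorter TTC),
        bias < 1 a 'late' one; bias = 1 is veridical.
        """
        remaining = np.maximum(ttc - bias * t, 1e-3)   # clip near collision
        return base_db + 20.0 * np.log10(ttc / remaining)
    ```

    The ramp starts at the base level and rises monotonically; an early-biased ramp reaches any given intensity sooner, which is how the rate of intensity change conveys a (possibly non-veridical) TTC to the driver.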

  4. Age Differences in Visual-Auditory Self-Motion Perception during a Simulated Driving Task

    PubMed Central

    Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L.

    2016-01-01

    Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion. PMID:27199829

  5. Evaluation of model-based versus non-parametric monaural noise-reduction approaches for hearing aids.

    PubMed

    Harlander, Niklas; Rosenkranz, Tobias; Hohmann, Volker

    2012-08-01

    Single-channel noise reduction has been well investigated and seems to have reached its limits in terms of speech intelligibility improvement; however, the quality of such schemes can still be advanced. This study tests to what extent novel model-based processing schemes might improve performance, in particular under non-stationary noise conditions. Two prototype model-based algorithms, one speech-model-based and one auditory-model-based, were compared to a state-of-the-art non-parametric minimum-statistics algorithm. A speech intelligibility test, preference rating, and listening effort scaling were performed. Additionally, three objective quality measures for the signal, background, and overall distortions were applied. For a better comparison of all algorithms, particular attention was given to the use of a similar Wiener-based gain rule. The perceptual investigation was performed with fourteen hearing-impaired subjects. The results revealed that the non-parametric algorithm and the auditory-model-based algorithm did not affect speech intelligibility, whereas the speech-model-based algorithm slightly decreased intelligibility. In terms of subjective quality, both model-based algorithms performed better than the unprocessed condition and the reference, in particular for highly non-stationary noise environments. The data support the hypothesis that model-based algorithms are promising for improving performance in non-stationary noise conditions.
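    The shared Wiener-based gain rule the compared algorithms were built around can be sketched per STFT bin as follows. The spectral-subtraction SNR estimate here is a deliberately simple stand-in for the model-based and minimum-statistics noise/SNR estimators the study actually compares:

    ```python
    import numpy as np

    def wiener_gain(snr_prior):
        """Wiener gain rule: G = xi / (1 + xi), with xi the a-priori SNR."""
        return snr_prior / (1.0 + snr_prior)

    def denoise_frame(noisy_spectrum, noise_psd, floor=0.05):
        """Apply a Wiener-type gain to one STFT frame (minimal sketch).

        The a-priori SNR is estimated by simple spectral subtraction; a
        gain floor limits attenuation, as is common in hearing-aid schemes.
        """
        psd = np.abs(noisy_spectrum) ** 2
        snr_prior = np.maximum(psd / np.maximum(noise_psd, 1e-12) - 1.0, 0.0)
        gain = np.maximum(wiener_gain(snr_prior), floor)
        return gain * noisy_spectrum
    ```

    Bins dominated by speech pass nearly unchanged (G near 1), while noise-only bins are attenuated to the floor; the algorithms compared in the study differ mainly in how they estimate the noise PSD feeding this rule.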

  6. Cognitive Processes in Intelligence Analysis: A Descriptive Model and Review of the Literature

    DTIC Science & Technology

    1979-12-01

    (Abstract excerpt garbled in OCR; the recoverable fragments describe how sensory inputs (vision, hearing/audition, touch) are buffered and made available to awareness, attention, and the rest of the cognitive structure.)

  7. Autonomous vehicles: from paradigms to technology

    NASA Astrophysics Data System (ADS)

    Ionita, Silviu

    2017-10-01

    Mobility is a basic necessity of contemporary society and a key factor in global economic development. The basic requirements for the transport of people and goods are safety and duration of travel, but a number of additional criteria are also very important: energy saving, pollution, and passenger comfort. Due to advances in hardware and software, automation has penetrated transport systems massively, both in infrastructure and in vehicles, but the human is still the key element in driving. However, the classic hands-on, 'human-in-the-loop' concept of driving is now being challenged by self-driving startups working toward so-called 'Level 4 autonomy', defined as a self-driving system that does not require human intervention in most scenarios. In this paper, a conceptual synthesis of the autonomous vehicle issue is made in connection with the artificial intelligence paradigm. It presents a classification of the tasks that take place during driving and models them from the perspectives of traditional control engineering and artificial intelligence. The issue of autonomous vehicle management is addressed on three levels: navigation, movement in traffic, and effective maneuvering with vehicle dynamics control. Each level is then described in terms of specific tasks, such as route selection, planning and reconfiguration, recognition of traffic signs and reaction to signaling and traffic events, and control of speed, distance, and direction. The approach leads to a better understanding of where the technology is heading when talking about autonomous cars, smart/intelligent cars, or intelligent transport systems. Keywords: self-driving vehicle, artificial intelligence, deep learning, intelligent transport systems.

  8. A Review of Intelligent Driving Style Analysis Systems and Related Artificial Intelligence Algorithms

    PubMed Central

    Meiring, Gys Albertus Marthinus; Myburgh, Hermanus Carel

    2015-01-01

    In this paper the various driving style analysis solutions are investigated. An in-depth investigation is performed to identify the relevant machine learning and artificial intelligence algorithms utilised in current driver behaviour and driving style analysis systems. This review therefore serves as a trove of information, informing both the specialist and the student about the current state of the art in driving style analysis systems, the applications of these systems, and the underlying artificial intelligence algorithms applied in these applications. The aim of the investigation is to evaluate the possibilities for unique driver identification utilizing the approaches identified in other driver behaviour studies. It was found that Fuzzy Logic inference systems, Hidden Markov Models and Support Vector Machines show promise for unique driver identification, provided model complexity can be reduced. PMID:26690164
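    Among the classifiers the review singles out, Support Vector Machines are the most directly illustrated. The sketch below is not from the paper: the feature set (mean speed, throttle variance, hard-brake rate) and the two synthetic "drivers" are hypothetical, chosen only to show how per-trip feature vectors could feed an SVM for driver identification.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical per-trip features: [mean speed (km/h), throttle variance, hard-brake rate].
# Driver A drives faster and brakes harder than driver B (illustrative data only).
driver_a = rng.normal([110.0, 0.30, 0.08], 0.02, size=(20, 3))
driver_b = rng.normal([90.0, 0.15, 0.02], 0.02, size=(20, 3))

X = np.vstack([driver_a, driver_b])
y = np.array([0] * 20 + [1] * 20)  # 0 = driver A, 1 = driver B

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

# A new trip resembling driver A's style is attributed to driver A.
print(clf.predict([[109.5, 0.29, 0.07]]))  # → [0]
```

    In practice the features would come from CAN-bus or smartphone telemetry, and the model-complexity concern the review raises would apply to the kernel and feature-set choices.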

  10. Multisensory Public Access Catalogs on CD-ROM.

    ERIC Educational Resources Information Center

    Harrison, Nancy; Murphy, Brower

    1987-01-01

    BiblioFile Intelligent Catalog is a CD-ROM-based public access catalog system which incorporates graphics and sound to provide a multisensory interface and artificial intelligence techniques to increase search precision. The system can be updated frequently and inexpensively by linking hard disk drives to CD-ROM optical drives. (MES)

  11. Driving and the Disabled Teenager: What Every Parent Should Know.

    ERIC Educational Resources Information Center

    Exceptional Parent, 1986

    1986-01-01

    Assessment of teenagers with disabilities to determine their ability to learn to drive focuses on vision and perception, auditory perception, reaction time, judgment, need for adaptive aids, and use of a car or specially equipped van. (CL)

  12. Neurocognitive screening of lead-exposed Andean adolescents and young adults.

    PubMed

    Counter, S Allen; Buchanan, Leo H; Ortega, Fernando

    2009-01-01

    This study was designed to assess the utility of two psychometric tests with putative minimal cultural bias for use in field screening of lead (Pb)-exposed Ecuadorian Andean workers. Specifically, the study evaluated the effectiveness in Pb-exposed adolescents and young adults of a nonverbal reasoning test standardized for younger children, and compared the findings with performance on a test of auditory memory. The Raven Coloured Progressive Matrices (RCPM) was used as a test of nonverbal intelligence, and the Digit Span subtest of the Wechsler IV intelligence scale was used to assess auditory memory/attention. The participants were 35 chronically Pb-exposed Pb-glazing workers, aged 12-21 yr. Blood lead (PbB) levels for the study group ranged from 3 to 86 microg/dl, with 65.7% of the group at and above 10 microg/dl. Zinc protoporphyrin heme ratios (ZPP/heme) ranged from 38 to 380 micromol/mol, with 57.1% of the participants showing abnormal ZPP/heme (>69 micromol/mol). ZPP/heme was significantly correlated with PbB levels, suggesting chronic Pb exposure. Performance on the RCPM was less than average on the U.S., British, and Puerto Rican norms, but average on the Peruvian norms. Significant inverse associations between PbB/ZPP concentrations and RCPM standard scores using the U.S., Puerto Rican, and Peruvian norms were observed, indicating decreasing RCPM test performance with increasing PbB and ZPP levels. RCPM scores were significantly correlated with performance on the Digit Span test for auditory memory. Mean Digit Span scale score was less than average, suggesting auditory memory/attention deficits. In conclusion, both the RCPM and Digit Span tests were found to be effective instruments for field screening of visual-spatial reasoning and auditory memory abilities, respectively, in Pb-exposed Andean adolescents and young adults.

  13. Absence of both auditory evoked potentials and auditory percepts dependent on timing cues.

    PubMed

    Starr, A; McPherson, D; Patterson, J; Don, M; Luxford, W; Shannon, R; Sininger, Y; Tonakawa, L; Waring, M

    1991-06-01

    An 11-yr-old girl had an absence of sensory components of auditory evoked potentials (brainstem, middle and long-latency) to click and tone burst stimuli that she could clearly hear. Psychoacoustic tests revealed a marked impairment of those auditory perceptions dependent on temporal cues, that is, lateralization of binaural clicks, change of binaural masked threshold with changes in signal phase, binaural beats, detection of paired monaural clicks, monaural detection of a silent gap in a sound, and monaural threshold elevation for short duration tones. In contrast, auditory functions reflecting intensity or frequency discriminations (difference limens) were only minimally impaired. Pure tone audiometry showed a moderate (50 dB) bilateral hearing loss with a disproportionate severe loss of word intelligibility. Those auditory evoked potentials that were preserved included (1) cochlear microphonics reflecting hair cell activity; (2) cortical sustained potentials reflecting processing of slowly changing signals; and (3) long-latency cognitive components (P300, processing negativity) reflecting endogenous auditory cognitive processes. Both the evoked potential and perceptual deficits are attributed to changes in temporal encoding of acoustic signals perhaps occurring at the synapse between hair cell and eighth nerve dendrites. The results from this patient are discussed in relation to previously published cases with absent auditory evoked potentials and preserved hearing.

  14. Motivation and intelligence drive auditory perceptual learning.

    PubMed

    Amitay, Sygal; Halliday, Lorna; Taylor, Jenny; Sohoglu, Ediz; Moore, David R

    2010-03-23

    Although feedback on performance is generally thought to promote perceptual learning, the role and necessity of feedback remain unclear. We investigated the effect of providing varying amounts of positive feedback while listeners attempted to discriminate between three identical tones on learning frequency discrimination. Using this novel procedure, the feedback was meaningless and random in relation to the listeners' responses, but the amount of feedback provided (or lack thereof) affected learning. We found that a group of listeners who received positive feedback on 10% of the trials improved their performance on the task (learned), while other groups provided either with excess (90%) or with no feedback did not learn. Superimposed on these group data, however, individual listeners showed other systematic changes of performance. In particular, those with lower non-verbal IQ who trained in the no feedback condition performed more poorly after training. This pattern of results cannot be accounted for by learning models that ascribe an external teacher role to feedback. We suggest, instead, that feedback is used to monitor performance on the task in relation to its perceived difficulty, and that listeners who learn without the benefit of feedback are adept at self-monitoring of performance, a trait that also supports better performance on non-verbal IQ tests. These results show that 'perceptual' learning is strongly influenced by top-down processes of motivation and intelligence.
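    The key manipulation above is a feedback schedule fixed in advance and uncorrelated with the listener's responses. A minimal sketch of such a schedule (illustrative, not the authors' code):

```python
import random

def feedback_schedule(n_trials: int, rate: float, seed: int = 0) -> list:
    """Return a random trial schedule in which a fixed proportion `rate` of
    trials carry positive feedback, independent of the listener's responses."""
    rng = random.Random(seed)
    n_feedback = round(n_trials * rate)
    trials = [True] * n_feedback + [False] * (n_trials - n_feedback)
    rng.shuffle(trials)
    return trials

# The 10% condition (the only group that learned) over a 100-trial block:
schedule = feedback_schedule(100, 0.10)
print(sum(schedule))  # → 10
```

    Because the schedule is generated before any responses are collected, the feedback carries no information about accuracy, which is what lets the study separate feedback's motivational role from its "external teacher" role.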

  15. A decrease in brain activation associated with driving when listening to someone speak

    DOT National Transportation Integrated Search

    2008-02-01

    Behavioral studies have shown that engaging in a secondary task, such as talking on a cellular telephone, disrupts driving performance. This study used functional magnetic resonance imaging (fMRI) to investigate the impact of concurrent auditory ...

  16. Impact of socioeconomic factors on paediatric cochlear implant outcomes.

    PubMed

    Sharma, Shalabh; Bhatia, Khyati; Singh, Satinder; Lahiri, Asish Kumar; Aggarwal, Asha

    2017-11-01

    The study was aimed at evaluating the impact of certain socioeconomic factors such as family income, level of parents' education, distance between the child's home and the auditory verbal therapy clinic, and age of the child at implantation on postoperative cochlear implant outcomes. Children suffering from congenital bilateral profound sensorineural hearing loss, with a chronologic age of 4 years or younger at the time of implantation, were included in the study, provided they were able to complete a prescribed 1-year follow-up. These children underwent cochlear implantation surgery, and their postoperative outcomes were measured and documented using categories of auditory perception (CAP), meaningful auditory integration (MAIS), and speech intelligibility rating (SIR) scores. Children were divided into three groups based on the level of parental education, family income, and distance of their home from the auditory verbal therapy clinic. A total of 180 children were studied. The age at implantation had a significant impact on the postoperative outcomes, with an inverse correlation: the younger the child's age at the time of implantation, the better the postoperative outcomes. However, there were no significant differences among the CAP, MAIS, and SIR scores across the three subgroups. Children from families with an annual income of less than $7,500, between $7,500 and $15,000, and more than $15,000 performed equally well, except for significantly higher SIR scores in children with family incomes above $15,000. Children of parents who had attended high school or possessed a bachelor's or master's degree had similar scores, with no significant difference. Likewise, distance from the auditory verbal therapy clinic failed to have any significant impact on a child's performance. These results have been variable, similar to those of previously published studies. A few of the earlier studies concurred with our results, but most had suggested that children in families of higher socioeconomic status have better speech and language acquisition. Cochlear implantation significantly improves auditory perception and speech intelligibility of children suffering from profound sensorineural hearing loss. The younger the age at implantation, the better the results; hence, early implantation should be promoted and encouraged. Our study suggests that children who followed the designated program of postoperative mapping and auditory verbal therapy for a minimum period of 1 year did equally well in terms of hearing perception and speech intelligibility, irrespective of the socioeconomic status of the family. Further studies are essential to assess the impact of these factors on long-term speech acquisition and language development. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Validation of auditory detection response task method for assessing the attentional effects of cognitive load.

    PubMed

    Stojmenova, Kristina; Sodnik, Jaka

    2018-07-04

    There are 3 standardized versions of the Detection Response Task (DRT): 2 using visual stimuli (remote DRT and head-mounted DRT) and one using tactile stimuli. In this article, we present a study that proposes and validates a type of auditory signal to be used as a DRT stimulus and evaluates the proposed auditory version of this method by comparing it with the standardized visual and tactile versions. This was a within-subject design study performed in a driving simulator with 24 participants. Each participant performed 8 2-min-long driving sessions in which they had to perform 3 different tasks: driving, responding to DRT stimuli, and performing a cognitive task (n-back task). Presence of additional cognitive load and type of DRT stimuli were defined as independent variables. DRT response times and hit rates, n-back task performance, and pupil size were observed as dependent variables. Significant changes in pupil size for trials with a cognitive task compared to trials without showed that cognitive load was induced properly. Each DRT version showed a significant increase in response times and a decrease in hit rates for trials with a secondary cognitive task compared to trials without. For the differences in response times and hit rates, the auditory and tactile versions produced results that were similar to each other and significantly better than those of the visual version. There were no significant differences in performance rate between trials without DRT stimuli and trials with them, or among trials with different DRT stimulus modalities. The results from this study show that the auditory DRT version, using the signal implementation suggested in this article, is sensitive to the effects of cognitive load on driver attention and is significantly better than the remote visual and tactile versions for auditory-vocal cognitive (n-back) secondary tasks.
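    The two DRT dependent measures above, hit rate and response time, reduce to a simple computation over per-stimulus latencies. A hedged sketch: the response window bounds here follow the usual DRT convention of roughly 0.1–2.5 s, but treat the exact numbers as placeholders rather than the study's parameters.

```python
def drt_metrics(response_times_s, min_rt=0.1, max_rt=2.5):
    """Hit rate and mean response time over a list of DRT stimuli.

    response_times_s: one entry per presented stimulus; a float is the
    response latency in seconds, None means no response (a miss).
    Responses outside [min_rt, max_rt] are not counted as hits.
    """
    hits = [rt for rt in response_times_s
            if rt is not None and min_rt <= rt <= max_rt]
    hit_rate = len(hits) / len(response_times_s)
    mean_rt = sum(hits) / len(hits) if hits else float("nan")
    return hit_rate, mean_rt

# Five stimuli: three valid responses, one miss, one too-late response.
hit_rate, mean_rt = drt_metrics([0.45, 0.62, None, 0.51, 3.1])
print(hit_rate, round(mean_rt, 3))  # → 0.6 0.527
```

    Under cognitive load the study reports exactly the pattern this pair of measures is designed to capture: mean RT rises and hit rate falls.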

  18. Driver compliance to take-over requests with different auditory outputs in conditional automation.

    PubMed

    Forster, Yannick; Naujoks, Frederik; Neukum, Alexandra; Huestegge, Lynn

    2017-12-01

    Conditionally automated driving (CAD) systems are expected to improve traffic safety. Whenever the CAD system exceeds its limit of operation, designers of the system need to ensure a safe and sufficiently timely transition from automated to manual mode. An existing visual Human-Machine Interface (HMI) was supplemented by different auditory outputs. The present work compares the effects of different auditory outputs, in the form of (1) a generic warning tone and (2) additional semantic speech output, on driver behavior for the announcement of an upcoming take-over request (TOR). We expected the information carried by speech output to lead to faster reactions and better subjective evaluations by the drivers compared to generic auditory output. To test this assumption, N=17 drivers completed two simulator drives, once with a generic warning tone ('Generic') and once with additional speech output ('Speech+generic'), while they were working on a non-driving related task (NDRT; i.e., reading a magazine). Each drive incorporated one transition from automated to manual mode when yellow secondary lanes emerged. Different reaction time measures relevant to the take-over process were assessed. Furthermore, drivers evaluated the complete HMI regarding usefulness, ease of use and perceived visual workload just after experiencing the take-over, and gave comparative ratings on usability and acceptance at the end of the experiment. Results revealed that reaction times reflecting information processing time (i.e., hands on the steering wheel, termination of the NDRT) were shorter for 'Speech+generic' than for 'Generic', while the reaction time reflecting allocation of attention (i.e., first glance ahead) did not show this difference. Subjective ratings were in favor of the system with additional speech output. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Auditory sensory memory and language abilities in former late talkers: a mismatch negativity study.

    PubMed

    Grossheinrich, Nicola; Kademann, Stefanie; Bruder, Jennifer; Bartling, Juergen; Von Suchodoletz, Waldemar

    2010-09-01

    The present study investigated whether (a) a reduced duration of auditory sensory memory is found in late talking children and (b) whether deficits of sensory memory are linked to persistent difficulties in language acquisition. Former late talkers and children without delayed language development were examined at the age of 4 years and 7 months using mismatch negativity (MMN) with interstimulus intervals (ISIs) of 500 ms and 2000 ms. Additionally, short-term memory, language skills, and nonverbal intelligence were assessed. MMN mean amplitude was reduced for the ISI of 2000 ms in former late talking children both with and without persistent language deficits. In summary, our findings suggest that late talkers are characterized by a reduced duration of auditory sensory memory. However, deficits in auditory sensory memory are not sufficient for persistent language difficulties and may be compensated for by some children.

  20. The simulation of emergent dispatch of cars for intelligent driving autos

    NASA Astrophysics Data System (ADS)

    Zheng, Ziao

    2018-03-01

    It is widely acknowledged that broad acceptance by car users is important for the development of intelligent cars. While most intelligent cars have a system for monitoring whether the car itself is in good condition to drive, studies are also needed on how emergency rescue of intelligent vehicles should be organized. In this study, the author focuses mainly on deriving a separate system that lets car-care teams arrive as soon as they receive the signal sent out by an intelligent driving auto. The simulation measures the time for the rescue team to arrive, the cost spent on reaching the site where the car problem occurs, and how long the queue is when the rescue vehicle is waiting to cross a road. This can be of great use when one car in a team of intelligent cars suddenly has a problem that stops it from moving, and it can be helpful in other situations as well. In this way, the interconnection of cars can be a safety net for drivers encountering difficulties at any time.

  1. The comparing analysis of simulation of emergent dispatch of cars for intelligent driving autos in crossroads

    NASA Astrophysics Data System (ADS)

    Zheng, Ziao

    2018-03-01

    It is widely acknowledged that broad acceptance by car users is important for the development of intelligent cars. While most intelligent cars have a system for monitoring whether the car itself is in good condition to drive, studies are also needed on how emergency rescue of intelligent vehicles should be organized. In this study, the author focuses mainly on deriving a separate system that lets car-care teams arrive as soon as they receive the signal sent out by an intelligent driving auto. The simulation measures the time for the rescue team to arrive, the cost spent on reaching the site where the car problem occurs, and how long the queue is when the rescue vehicle is waiting to cross a road. This can be of great use when one car in a team of intelligent cars suddenly has a problem that stops it from moving, and it can be helpful in other situations as well. In this way, the interconnection of cars can be a safety net for drivers encountering difficulties at any time.

  2. Why is auditory frequency weighting so important in regulation of underwater noise?

    PubMed

    Tougaard, Jakob; Dähne, Michael

    2017-10-01

    A key question related to regulating noise from pile driving, air guns, and sonars is how to take into account the hearing abilities of different animals by means of auditory frequency weighting. Recordings of pile driving sounds, both in the presence and absence of a bubble curtain, were evaluated against recent thresholds for temporary threshold shift (TTS) for harbor porpoises by means of four different weighting functions. The assessed effectivity, expressed as time until TTS, depended strongly on choice of weighting function: 2 orders of magnitude larger for an audiogram-weighted TTS criterion relative to an unweighted criterion, highlighting the importance of selecting the right frequency weighting.
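    Auditory frequency weighting, as discussed above, adjusts each frequency band's level by a species-specific curve before the band energies are summed; the shape of that curve is what produces the two-orders-of-magnitude spread the authors report. A generic sketch of the energetic summation (the band levels and weighting values below are hypothetical placeholders, not a porpoise weighting function):

```python
import math

def weighted_level(band_levels_db, weights_db):
    """Energetically sum per-band levels after applying per-band weighting (dB)."""
    total = sum(10 ** ((lvl + w) / 10)
                for lvl, w in zip(band_levels_db, weights_db))
    return 10 * math.log10(total)

bands = [120.0, 125.0, 118.0]    # hypothetical third-octave band levels, dB
flat = [0.0, 0.0, 0.0]           # unweighted: 0 dB in every band
hf_weight = [-40.0, -20.0, 0.0]  # hypothetical weighting discounting low bands

print(round(weighted_level(bands, flat), 1))       # → 126.8
print(round(weighted_level(bands, hf_weight), 1))  # → 118.2
```

    With a steep weighting, low-frequency pile-driving energy barely contributes to the weighted level, which is why the assessed time-to-TTS changes so dramatically with the choice of curve.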

  3. A Psychophysical Evaluation of Spectral Enhancement

    ERIC Educational Resources Information Center

    DiGiovanni, Jeffrey J.; Nelson, Peggy B.; Schlauch, Robert S.

    2005-01-01

    Listeners with sensorineural hearing loss have well-documented elevated hearing thresholds; reduced auditory dynamic ranges; and reduced spectral (or frequency) resolution that may reduce speech intelligibility, especially in the presence of competing sounds. Amplification and amplitude compression partially compensate for elevated thresholds and…

  4. Experimental Evaluation of Performance Feedback Using the Dismounted Infantry Virtual After Action Review System. Long Range Navy and Marine Corps Science and Technology Program

    DTIC Science & Technology

    2007-11-14

    Artificial intelligence and education, Volume 1: Learning environments and tutoring systems. Hillsdale, NJ: Erlbaum. Wickens, C.D. (1984). Processing... and how to use it to best optimize the learning process. Some researchers (see Loftin & Savely, 1991) have proposed adding intelligent systems to the... is experienced as the cognitive centers in an individual's brain process visual, tactile, kinesthetic, olfactory, proprioceptive, and auditory

  5. Event-related potentials and secondary task performance during simulated driving.

    PubMed

    Wester, A E; Böcker, K B E; Volkerts, E R; Verster, J C; Kenemans, J L

    2008-01-01

    Inattention and distraction account for a substantial number of traffic accidents. Therefore, we examined the impact of secondary task performance (an auditory oddball task) on a primary driving task (lane keeping). Twenty healthy participants performed two 20-min tests in the Divided Attention Steering Simulator (DASS). The visual secondary task of the DASS was replaced by an auditory oddball task to allow recording of brain activity. The driving task and the secondary (distracting) oddball task were presented in isolation and simultaneously, to assess their mutual interference. In addition to performance measures (lane keeping in the primary driving task and reaction speed in the secondary oddball task), brain activity, i.e. event-related potentials (ERPs), was recorded. Performance parameters on the driving test and the secondary oddball task did not differ between performance in isolation and simultaneous performance. However, when both tasks were performed simultaneously, reaction time variability increased in the secondary oddball task. Analysis of brain activity indicated that ERP amplitude (P3a amplitude) related to the secondary task, was significantly reduced when the task was performed simultaneously with the driving test. This study shows that when performing a simple secondary task during driving, performance of the driving task and this secondary task are both unaffected. However, analysis of brain activity shows reduced cortical processing of irrelevant, potentially distracting stimuli from the secondary task during driving.

  6. Task-dependent modulation of regions in the left temporal cortex during auditory sentence comprehension.

    PubMed

    Zhang, Linjun; Yue, Qiuhai; Zhang, Yang; Shu, Hua; Li, Ping

    2015-01-01

    Numerous studies have revealed the essential role of the left lateral temporal cortex in auditory sentence comprehension along with evidence of the functional specialization of the anterior and posterior temporal sub-areas. However, it is unclear whether task demands (e.g., active vs. passive listening) modulate the functional specificity of these sub-areas. In the present functional magnetic resonance imaging (fMRI) study, we addressed this issue by applying both independent component analysis (ICA) and general linear model (GLM) methods. Consistent with previous studies, intelligible sentences elicited greater activity in the left lateral temporal cortex relative to unintelligible sentences. Moreover, responses to intelligibility in the sub-regions were differentially modulated by task demands. While the overall activation patterns of the anterior and posterior superior temporal sulcus and middle temporal gyrus (STS/MTG) were equivalent during both passive and active tasks, a middle portion of the STS/MTG was found to be selectively activated only during the active task under a refined analysis of sub-regional contributions. Our results not only confirm the critical role of the left lateral temporal cortex in auditory sentence comprehension but further demonstrate that task demands modulate functional specialization of the anterior-middle-posterior temporal sub-areas. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  7. Hearing, Auditory Processing, and Language Skills of Male Youth Offenders and Remandees in Youth Justice Residences in New Zealand.

    PubMed

    Lount, Sarah A; Purdy, Suzanne C; Hand, Linda

    2017-01-01

    International evidence suggests youth offenders have greater difficulties with oral language than their nonoffending peers. This study examined the hearing, auditory processing, and language skills of male youth offenders and remandees (YORs) in New Zealand. Thirty-three male YORs, aged 14-17 years, were recruited from 2 youth justice residences, plus 39 similarly aged male students from local schools for comparison. Testing comprised tympanometry, self-reported hearing, pure-tone audiometry, 4 auditory processing tests, 2 standardized language tests, and a nonverbal intelligence test. Twenty-one (64%) of the YORs were identified as language impaired (LI), compared with 4 (10%) of the controls. Performance on all language measures was significantly worse in the YOR group, as were their hearing thresholds. Nine (27%) of the YOR group versus 7 (18%) of the control group fulfilled criteria for auditory processing disorder. Only 1 YOR versus 5 controls had an auditory processing disorder without LI. Language was an area of significant difficulty for YORs. Difficulties with auditory processing were more likely to be accompanied by LI in this group, compared with the controls. Provision of speech-language therapy services and awareness of auditory and language difficulties should be addressed in youth justice systems.

  8. The influence of drinking, texting, and eating on simulated driving performance.

    PubMed

    Irwin, Christopher; Monement, Sophie; Desbrow, Ben

    2015-01-01

    Driving is a complex task and distractions such as using a mobile phone for the purpose of text messaging are known to have a significant impact on driving. Eating and drinking are common forms of distraction that have received less attention in relation to their impact on driving. The aim of this study was to further explore and compare the effects of a variety of distraction tasks (i.e., text messaging, eating, drinking) on simulated driving. Twenty-eight healthy individuals (13 female) participated in a crossover design study involving 3 experimental trials (separated by ≥24 h). In each trial, participants completed a baseline driving task (no distraction) before completing a second driving task involving one of 3 different distraction tasks (drinking 400 mL water, drinking 400 mL water and eating a 6-inch Subway sandwich, drinking 400 mL water and composing 3 text messages). Primary outcome measures of driving consisted of standard deviation of lateral position (SDLP) and reaction time to auditory and visual critical events. Subjective ratings of difficulty in performing the driving tasks were also collected at the end of the study to determine perceptions of distraction difficulty on driving. Driving tasks involving texting and eating were associated with significant impairment in driving performance measures for SDLP compared to baseline driving (46.0 ± 0.08 vs. 41.3 ± 0.06 cm and 44.8 ± 0.10 vs. 41.6 ± 0.07 cm, respectively), number of lane departures compared to baseline driving (10.9 ± 7.8 vs. 7.6 ± 7.1 and 9.4 ± 7.5 vs. 7.1 ± 7.0, respectively), and auditory reaction time compared to baseline driving (922 ± 95 vs. 889 ± 104 ms and 933 ± 101 vs. 901 ± 103 ms, respectively). No difference in SDLP (42.7 ± 0.08 vs. 42.5 ± 0.07 cm), number of lane departures (7.6 ± 7.7 vs. 7.0 ± 6.8), or auditory reaction time (891 ± 98 and 885 ± 89 ms) was observed in the drive involving the drink-only condition compared to the corresponding baseline drive. 
No difference in reaction time to visual stimuli was observed between baseline and experimental drives for any of the trial conditions. Participants' subjective ratings indicated that they perceived the texting-while-driving condition to be the most difficult, despite similar magnitudes of impairment observed with the eating-while-driving condition. Distracting behaviors such as eating and texting while driving appear to negatively impact driving measures of lane position control and reaction time. These findings may have direct implications for motorists who engage in these types of distracting behaviors behind the wheel and for the safety of other road users.
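    The primary weaving measure above, SDLP, is simply the standard deviation of the recorded lateral position samples over a drive (studies sometimes high-pass filter the trace first to remove road curvature; the plain version is sketched here with made-up sample values):

```python
import statistics

def sdlp_cm(lateral_positions_cm):
    """Standard deviation of lateral position (SDLP), the weaving measure
    used in simulated-driving studies. Plain (unfiltered) version."""
    return statistics.pstdev(lateral_positions_cm)

# Hypothetical lane-position samples (cm from lane centre) for two drives;
# a distracted drive swings wider around the centre line.
baseline = [0, 5, -5, 10, -10, 5, -5, 0]
texting = [0, 15, -15, 25, -25, 15, -15, 0]

print(round(sdlp_cm(baseline), 1), round(sdlp_cm(texting), 1))  # → 6.1 16.4
```

    A higher SDLP means more lateral weaving, which is the direction of the texting and eating effects reported above.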

  9. ERGONOMICS ABSTRACTS 48347-48982.

    ERIC Educational Resources Information Center

    Ministry of Technology, London (England). Warren Spring Lab.

    IN THIS COLLECTION OF ERGONOMICS ABSTRACTS AND ANNOTATIONS THE FOLLOWING AREAS OF CONCERN ARE REPRESENTED--GENERAL REFERENCES, METHODS, FACILITIES, AND EQUIPMENT RELATING TO ERGONOMICS, SYSTEMS OF MAN AND MACHINES, VISUAL, AUDITORY, AND OTHER SENSORY INPUTS AND PROCESSES (INCLUDING SPEECH AND INTELLIGIBILITY), INPUT CHANNELS, BODY MEASUREMENTS,…

  10. Hearing for Success in the Classroom.

    ERIC Educational Resources Information Center

    Ireland, JoAnn C.; And Others

    1988-01-01

    Hearing-impaired children in mainstreamed classes require assistive listening devices beyond hearing aids, lipreading, and preferential seating. Frequency modulation auditory training devices can improve speech intelligibility and provide an adequate signal-to-noise ratio, and should be incorporated into regular classes containing hearing-impaired…

  11. Sex differences in the development of neuroanatomical functional connectivity underlying intelligence found using Bayesian connectivity analysis.

    PubMed

    Schmithorst, Vincent J; Holland, Scott K

    2007-03-01

    A Bayesian method for functional connectivity analysis was adapted to investigate between-group differences. This method was applied in a large cohort of almost 300 children to investigate differences in boys and girls in the relationship between intelligence and functional connectivity for the task of narrative comprehension. For boys, a greater association was shown between intelligence and the functional connectivity linking Broca's area to auditory processing areas, including Wernicke's areas and the right posterior superior temporal gyrus. For girls, a greater association was shown between intelligence and the functional connectivity linking the left posterior superior temporal gyrus to Wernicke's areas bilaterally. A developmental effect was also seen, with girls displaying a positive correlation with age in the association between intelligence and the functional connectivity linking the right posterior superior temporal gyrus to Wernicke's areas bilaterally. Our results demonstrate a sexual dimorphism in the relationship of functional connectivity to intelligence in children and an increasing reliance on inter-hemispheric connectivity in girls with age.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dietrich, K.N.; Succop, P.A.; Berger, O.G.

    This analysis examined the relationship between lead exposure as registered in whole blood (PbB) and the central auditory processing abilities and cognitive developmental status of the Cincinnati cohort (N = 259) at age 5 years. Although the effects were small, higher prenatal, neonatal, and postnatal PbB levels were associated with poorer central auditory processing abilities on the Filtered Word Subtest of the SCAN (a screening test for auditory processing disorders). Higher postnatal PbB levels were associated with poorer performance on all cognitive developmental subscales of the Kaufman Assessment Battery for Children (K-ABC). However, following adjustment for measures of the home environment and maternal intelligence, few statistically or near-statistically significant associations remained. Our findings are discussed in the context of the related issues of confounding and the detection of weak associations in high-risk populations.

  13. Individual differences in adult foreign language learning: the mediating effect of metalinguistic awareness.

    PubMed

    Brooks, Patricia J; Kempe, Vera

    2013-02-01

    In this study, we sought to identify cognitive predictors of individual differences in adult foreign-language learning and to test whether metalinguistic awareness mediated the observed relationships. Using a miniature language-learning paradigm, adults (N = 77) learned Russian vocabulary and grammar (gender agreement and case marking) over six 1-h sessions, completing tasks that encouraged attention to phrases without explicitly teaching grammatical rules. The participants' ability to describe the Russian gender and case-marking patterns mediated the effects of nonverbal intelligence and auditory sequence learning on grammar learning and generalization. Hence, even under implicit-learning conditions, individual differences stemmed from explicit metalinguistic awareness of the underlying grammar, which, in turn, was linked to nonverbal intelligence and auditory sequence learning. Prior knowledge of languages with grammatical gender (predominantly Spanish) predicted learning of gender agreement. Transfer of knowledge of gender from other languages to Russian was not mediated by awareness, which suggests that transfer operates through an implicit process akin to structural priming.

  14. Phase-Locked Responses to Speech in Human Auditory Cortex are Enhanced During Comprehension

    PubMed Central

    Peelle, Jonathan E.; Gross, Joachim; Davis, Matthew H.

    2013-01-01

    A growing body of evidence shows that ongoing oscillations in auditory cortex modulate their phase to match the rhythm of temporally regular acoustic stimuli, increasing sensitivity to relevant environmental cues and improving detection accuracy. In the current study, we test the hypothesis that nonsensory information provided by linguistic content enhances phase-locked responses to intelligible speech in the human brain. Sixteen adults listened to meaningful sentences while we recorded neural activity using magnetoencephalography. Stimuli were processed using a noise-vocoding technique to vary intelligibility while keeping the temporal acoustic envelope consistent. We show that the acoustic envelopes of sentences contain most power between 4 and 7 Hz and that it is in this frequency band that phase locking between neural activity and envelopes is strongest. Bilateral oscillatory neural activity phase-locked to unintelligible speech, but this cerebro-acoustic phase locking was enhanced when speech was intelligible. This enhanced phase locking was left lateralized and localized to left temporal cortex. Together, our results demonstrate that entrainment to connected speech does not only depend on acoustic characteristics, but is also affected by listeners’ ability to extract linguistic information. This suggests a biological framework for speech comprehension in which acoustic and linguistic cues reciprocally aid in stimulus prediction. PMID:22610394

  15. Phase-locked responses to speech in human auditory cortex are enhanced during comprehension.

    PubMed

    Peelle, Jonathan E; Gross, Joachim; Davis, Matthew H

    2013-06-01

    A growing body of evidence shows that ongoing oscillations in auditory cortex modulate their phase to match the rhythm of temporally regular acoustic stimuli, increasing sensitivity to relevant environmental cues and improving detection accuracy. In the current study, we test the hypothesis that nonsensory information provided by linguistic content enhances phase-locked responses to intelligible speech in the human brain. Sixteen adults listened to meaningful sentences while we recorded neural activity using magnetoencephalography. Stimuli were processed using a noise-vocoding technique to vary intelligibility while keeping the temporal acoustic envelope consistent. We show that the acoustic envelopes of sentences contain most power between 4 and 7 Hz and that it is in this frequency band that phase locking between neural activity and envelopes is strongest. Bilateral oscillatory neural activity phase-locked to unintelligible speech, but this cerebro-acoustic phase locking was enhanced when speech was intelligible. This enhanced phase locking was left lateralized and localized to left temporal cortex. Together, our results demonstrate that entrainment to connected speech does not only depend on acoustic characteristics, but is also affected by listeners' ability to extract linguistic information. This suggests a biological framework for speech comprehension in which acoustic and linguistic cues reciprocally aid in stimulus prediction.
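    The noise-vocoding manipulation used in this study, which preserves each band's temporal envelope while discarding spectral fine structure, can be sketched as follows. The band count, log-spaced band edges, filter design, and Hilbert-envelope extraction below are illustrative assumptions, not the exact parameters of the study:

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def noise_vocode(x, fs, n_bands=4, lo=100.0, hi=4000.0):
        """Illustrative noise vocoder: keep each band's temporal envelope,
        replace its fine structure with band-limited noise."""
        edges = np.geomspace(lo, hi, n_bands + 1)  # log-spaced band edges
        rng = np.random.default_rng(0)
        out = np.zeros_like(x)
        for f1, f2 in zip(edges[:-1], edges[1:]):
            b, a = butter(3, [f1, f2], btype="band", fs=fs)
            band = filtfilt(b, a, x)               # analysis band
            env = np.abs(hilbert(band))            # temporal envelope
            carrier = filtfilt(b, a, rng.standard_normal(len(x)))
            out += env * carrier                   # envelope-modulated noise
        return out
    ```

    Fewer bands yield less intelligible output while the broadband envelope stays roughly constant, which is what lets studies like this one vary intelligibility independently of envelope timing.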

  16. The dispersion-focalization theory of sound systems

    NASA Astrophysics Data System (ADS)

    Schwartz, Jean-Luc; Abry, Christian; Boë, Louis-Jean; Vallée, Nathalie; Ménard, Lucie

    2005-04-01

    The Dispersion-Focalization Theory states that sound systems in human languages are shaped by two major perceptual constraints: dispersion driving auditory contrast towards maximal or sufficient values [B. Lindblom, J. Phonetics 18, 135-152 (1990)] and focalization driving auditory spectra towards patterns with close neighboring formants. Dispersion is computed from the sum of the inverse squared inter-spectra distances in the (F1, F2, F3, F4) space, using a non-linear process based on the 3.5 Bark critical distance to estimate F2'. Focalization is based on the idea that close neighboring formants produce vowel spectra with marked peaks, which are easier to process and memorize in the auditory system. Evidence for increased stability of focal vowels in short-term memory was provided in a discrimination experiment on adult French subjects [J. L. Schwartz and P. Escudier, Speech Comm. 8, 235-259 (1989)]. A reanalysis of infant discrimination data shows that focalization could well be responsible for recurrent discrimination asymmetries [J. L. Schwartz et al., Speech Comm. (in press)]. Recent data on children's vowel production indicate that focalization seems to be part of the perceptual templates driving speech development. The Dispersion-Focalization Theory produces valid predictions for both vowel and consonant systems, in relation to available databases of human language inventories.
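    The dispersion term described above can be sketched numerically: a system's "energy" is the sum of inverse squared pairwise distances between its vowel spectra, so lower energy means a more dispersed, more contrastive inventory. The sketch below simplifies the theory's actual computation (it works directly in Hz and omits the Bark transform and F2' estimation), and the formant values are rough illustrative figures, not data from the study:

    ```python
    from itertools import combinations

    def dispersion_energy(vowels):
        """Sum of inverse squared pairwise distances between vowel spectra.
        Lower energy = more dispersed (perceptually contrastive) system."""
        energy = 0.0
        for a, b in combinations(vowels, 2):
            d2 = sum((x - y) ** 2 for x, y in zip(a, b))
            energy += 1.0 / d2
        return energy

    # Toy comparison: a spread-out /i a u/-like system vs. a crowded one
    spread = [(300, 2300), (700, 1200), (300, 800)]    # rough (F1, F2) in Hz
    crowded = [(400, 1500), (500, 1400), (450, 1600)]
    ```

    Under this measure, `dispersion_energy(spread)` comes out lower than `dispersion_energy(crowded)`, matching the intuition that peripheral point vowels maximize auditory contrast.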

  17. The relative impact of generic head-related transfer functions on auditory speech thresholds: implications for the design of three-dimensional audio displays.

    PubMed

    Arrabito, G R; McFadden, S M; Crabtree, R B

    2001-07-01

    Auditory speech thresholds were measured in this study. Subjects were required to discriminate a female voice recording of three-digit numbers in the presence of diotic speech babble. The voice stimulus was spatialized at 11 static azimuth positions on the horizontal plane using three different head-related transfer functions (HRTFs) measured on individuals who did not participate in this study. The diotic presentation of the voice stimulus served as the control condition. The results showed that two of the HRTFs performed similarly and had significantly lower auditory speech thresholds than the third HRTF. All three HRTFs yielded significantly lower auditory speech thresholds compared with the diotic presentation of the voice stimulus, with the largest difference at 60 degrees azimuth. The practical implications of these results suggest that lower headphone levels of the communication system in military aircraft can be achieved without sacrificing intelligibility, thereby lessening the risk of hearing loss.
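    In the time domain, spatializing a mono stimulus with an HRTF reduces to convolving it with the left- and right-ear head-related impulse responses (HRIRs). The HRIRs below are placeholders encoding only a pure two-sample interaural delay; real HRIRs are measured on a listener or mannequin and also encode spectral cues:

    ```python
    import numpy as np

    def spatialize(mono, hrir_left, hrir_right):
        """Render a mono signal at the azimuth encoded by an HRIR pair by
        convolving it with the left- and right-ear impulse responses."""
        return np.stack([np.convolve(mono, hrir_left),
                         np.convolve(mono, hrir_right)])

    # Placeholder HRIRs: the right ear receives the same signal 2 samples later
    left_ear = np.array([1.0, 0.0, 0.0])
    right_ear = np.array([0.0, 0.0, 1.0])
    stereo = spatialize(np.array([1.0, 2.0, 3.0]), left_ear, right_ear)
    ```

    The resulting interaural time and level differences are what give a spatialized talker its release from masking relative to a diotic presentation.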

  18. Auditory Attentional Control and Selection during Cocktail Party Listening

    PubMed Central

    Hill, Kevin T.

    2010-01-01

    In realistic auditory environments, people rely on both attentional control and attentional selection to extract intelligible signals from a cluttered background. We used functional magnetic resonance imaging to examine auditory attention to natural speech under such high processing-load conditions. Participants attended to a single talker in a group of 3, identified by the target talker's pitch or spatial location. A catch-trial design allowed us to distinguish activity due to top-down control of attention versus attentional selection of bottom-up information in both the spatial and spectral (pitch) feature domains. For attentional control, we found a left-dominant fronto-parietal network with a bias toward spatial processing in dorsal precentral sulcus and superior parietal lobule, and a bias toward pitch in inferior frontal gyrus. During selection of the talker, attention modulated activity in left intraparietal sulcus when using talker location and in bilateral but right-dominant superior temporal sulcus when using talker pitch. We argue that these networks represent the sources and targets of selective attention in rich auditory environments. PMID:19574393

  19. Audiovisual Asynchrony Detection in Human Speech

    ERIC Educational Resources Information Center

    Maier, Joost X.; Di Luca, Massimiliano; Noppeney, Uta

    2011-01-01

    Combining information from the visual and auditory senses can greatly enhance intelligibility of natural speech. Integration of audiovisual speech signals is robust even when temporal offsets are present between the component signals. In the present study, we characterized the temporal integration window for speech and nonspeech stimuli with…

  20. Effects of Instantaneous Multiband Dynamic Compression on Speech Intelligibility

    NASA Astrophysics Data System (ADS)

    Herzke, Tobias; Hohmann, Volker

    2005-12-01

    The recruitment phenomenon, that is, the reduced dynamic range between threshold and uncomfortable level, is attributed to the loss of instantaneous dynamic compression on the basilar membrane. Despite this, hearing aids commonly use slow-acting dynamic compression for its compensation, because this was found to be the most successful strategy in terms of speech quality and intelligibility rehabilitation. Former attempts to use fast-acting compression gave ambiguous results, raising the question as to whether auditory-based recruitment compensation by instantaneous compression is in principle applicable in hearing aids. This study thus investigates instantaneous multiband dynamic compression based on an auditory filterbank. Instantaneous envelope compression is performed in each frequency band of a gammatone filterbank, which provides a combination of time and frequency resolution comparable to the normal healthy cochlea. The gain characteristics used for dynamic compression are deduced from categorical loudness scaling. In speech intelligibility tests, the instantaneous dynamic compression scheme was compared against a linear amplification scheme, which used the same filterbank for frequency analysis, but employed constant gain factors that restored the sound level for medium perceived loudness in each frequency band. In subjective comparisons, five of nine subjects preferred the linear amplification scheme and would not accept the instantaneous dynamic compression in hearing aids. Four of nine subjects did not perceive any quality differences. A sentence intelligibility test in noise (Oldenburg sentence test) showed little to no negative effects of the instantaneous dynamic compression, compared to linear amplification. A word intelligibility test in quiet (one-syllable rhyme test) showed that the subjects benefit from the larger amplification at low levels provided by instantaneous dynamic compression. Further analysis showed that the increase in intelligibility resulting from a gain provided by instantaneous compression is as high as from a gain provided by linear amplification. No negative effects of the distortions introduced by the instantaneous compression scheme in terms of speech recognition are observed.
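    The core manipulation, instantaneous (attack- and release-free) envelope compression within each band, can be sketched for a single band. The power-law exponent and the Hilbert-envelope estimate below are illustrative assumptions; the study itself derived gains from categorical loudness scaling and used a gammatone filterbank for the band split:

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def instantaneous_compress(band, exponent=0.5, eps=1e-9):
        """Sample-by-sample envelope compression of one filterbank band.
        A power-law gain with exponent < 1 amplifies low levels more than
        high ones, mimicking lost cochlear compression."""
        env = np.abs(hilbert(band)) + eps
        gain = env ** (exponent - 1.0)   # applied instantaneously, no smoothing
        return band * gain
    ```

    With `exponent=0.5`, a 40 dB input level difference between two bands shrinks to roughly 20 dB at the output, which is the kind of recruitment compensation the study evaluates.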

  1. Intelligent single switch wheelchair navigation.

    PubMed

    Ka, Hyun W; Simpson, Richard; Chung, Younghyun

    2012-11-01

    We have developed an intelligent single switch scanning interface and wheelchair navigation assistance system, called intelligent single switch wheelchair navigation (ISSWN), to improve driving safety, comfort and efficiency for individuals who rely on single switch scanning as a control method. ISSWN combines a standard powered wheelchair with a laser rangefinder, a single switch scanning interface and a computer. It provides the user with context sensitive and task specific scanning options that reduce driving effort based on an interpretation of sensor data together with user input. Trials performed by 9 able-bodied participants showed that the system significantly improved driving safety and efficiency in a navigation task by significantly reducing the number of switch presses to 43.5% of traditional single switch wheelchair navigation (p < 0.001). All participants made a significant improvement (39.1%; p < 0.001) in completion time after only two trials.

  2. On the cyclic nature of perception in vision versus audition

    PubMed Central

    VanRullen, Rufin; Zoefel, Benedikt; Ilhan, Barkin

    2014-01-01

    Does our perceptual awareness consist of a continuous stream, or a discrete sequence of perceptual cycles, possibly associated with the rhythmic structure of brain activity? This has been a long-standing question in neuroscience. We review recent psychophysical and electrophysiological studies indicating that part of our visual awareness proceeds in approximately 7–13 Hz cycles rather than continuously. On the other hand, experimental attempts at applying similar tools to demonstrate the discreteness of auditory awareness have been largely unsuccessful. We argue and demonstrate experimentally that visual and auditory perception are not equally affected by temporal subsampling of their respective input streams: video sequences remain intelligible at sampling rates of two to three frames per second, whereas audio inputs lose their fine temporal structure, and thus all significance, below 20–30 samples per second. This does not mean, however, that our auditory perception must proceed continuously. Instead, we propose that audition could still involve perceptual cycles, but the periodic sampling should happen only after the stage of auditory feature extraction. In addition, although visual perceptual cycles can follow one another at a spontaneous pace largely independent of the visual input, auditory cycles may need to sample the input stream more flexibly, by adapting to the temporal structure of the auditory inputs. PMID:24639585
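    The subsampling manipulation contrasted above (video stays intelligible at two to three frames per second while audio loses its fine temporal structure below 20-30 samples per second) amounts to sample-and-hold decimation of the input stream. A minimal sketch, with parameter choices that are purely illustrative:

    ```python
    import numpy as np

    def sample_and_hold(x, fs, rate_hz):
        """Subsample a stream at rate_hz and hold each value until the next
        sample: a crude model of discrete perceptual snapshots."""
        step = max(1, int(round(fs / rate_hz)))
        idx = (np.arange(len(x)) // step) * step
        return x[idx]
    ```

    Holding an audio waveform at 20 Hz destroys everything above 10 Hz, i.e. all of speech, whereas the same operation on a video's frame sequence leaves its slow scene structure intact, which is the asymmetry the authors exploit.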

  3. Auditory displays as occasion setters.

    PubMed

    Mckeown, Denis; Isherwood, Sarah; Conway, Gareth

    2010-02-01

    The aim of this study was to evaluate whether representational sounds that capture the richness of experience of a collision enhance performance in braking to avoid a collision relative to other forms of warnings in a driving simulator. There is increasing interest in auditory warnings that are informative about their referents. But as well as providing information about some intended object, warnings may be designed to set the occasion for a rich body of information about the outcomes of behavior in a particular context. These richly informative warnings may offer performance advantages, as they may be rapidly processed by users. An auditory occasion setter for a collision (a recording of screeching brakes indicating imminent collision) was compared with two other auditory warnings (an abstract and an "environmental" sound), a speech message, a visual display, and no warning in a fixed-base driving simulator as interfaces to a collision avoidance system. The main measure was braking response times at each of two headways (1.5 s and 3 s) to a lead vehicle. The occasion setter demonstrated statistically significantly faster braking responses at each headway in 8 out of 10 comparisons (with braking responses equally fast to the abstract warning at 1.5 s and the environmental warning at 3 s). Auditory displays that set the occasion for an outcome in a particular setting and for particular behaviors may offer small but critical performance enhancements in time-critical applications. The occasion setter could be applied in settings where speed of response by users is of the essence.

  4. Auditory cortex activation to natural speech and simulated cochlear implant speech measured with functional near-infrared spectroscopy.

    PubMed

    Pollonini, Luca; Olds, Cristen; Abaya, Homer; Bortfeld, Heather; Beauchamp, Michael S; Oghalai, John S

    2014-03-01

    The primary goal of most cochlear implant procedures is to improve a patient's ability to discriminate speech. To accomplish this, cochlear implants are programmed so as to maximize speech understanding. However, programming a cochlear implant can be an iterative, labor-intensive process that takes place over months. In this study, we sought to determine whether functional near-infrared spectroscopy (fNIRS), a non-invasive neuroimaging method which is safe to use repeatedly and for extended periods of time, can provide an objective measure of whether a subject is hearing normal speech or distorted speech. We used a 140-channel fNIRS system to measure activation within the auditory cortex in 19 normal-hearing subjects while they listened to speech with different levels of intelligibility. Custom software was developed to analyze the data and compute topographic maps from the measured changes in oxyhemoglobin and deoxyhemoglobin concentration. Normal speech reliably evoked the strongest responses within the auditory cortex. Distorted speech produced less region-specific cortical activation. Environmental sounds were used as a control, and they produced the least cortical activation. These data collected using fNIRS are consistent with the fMRI literature and thus demonstrate the feasibility of using this technique to objectively detect differences in cortical responses to speech of different intelligibility. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. Issues in Humanoid Audition and Sound Source Localization by Active Audition

    NASA Astrophysics Data System (ADS)

    Nakadai, Kazuhiro; Okuno, Hiroshi G.; Kitano, Hiroaki

    In this paper, we present an active audition system which is implemented on the humanoid robot "SIG the humanoid". The audition system for highly intelligent humanoids localizes sound sources and recognizes auditory events in the auditory scene. Active audition reported in this paper enables SIG to track sources by integrating audition, vision, and motor movements. Given multiple sound sources in the auditory scene, SIG actively moves its head to improve localization by aligning its microphones orthogonal to the sound source and by capturing possible sound sources by vision. However, such active head movement inevitably creates motor noise. The system adaptively cancels motor noise using motor control signals and the cover acoustics. The experimental results demonstrate that active audition, by integrating audition, vision, and motor control, attains sound source tracking in a variety of conditions.
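    A classic building block of microphone-pair localization like that described above is estimating the interaural (inter-microphone) time difference by cross-correlation; the azimuth then follows from the array geometry. This sketch shows only that step, not the SIG system's actual fusion of audition, vision, and motor cues:

    ```python
    import numpy as np

    def itd_samples(left, right):
        """Estimate the inter-microphone time difference, in samples, via
        cross-correlation. A positive result means the left signal lags
        (arrived later than) the right one."""
        corr = np.correlate(left, right, mode="full")
        return int(np.argmax(corr)) - (len(right) - 1)
    ```

    In practice the integer-sample lag is interpolated and mapped to an angle, and vision is used to disambiguate front/back confusions, as in the paper's audiovisual integration.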

  6. Leveraging Intelligent Vehicle Technologies to Maximize Fuel Economy (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonder, J.

    2011-11-01

    Advancements in vehicle electronics, along with communication and sensing technologies, have led to a growing number of intelligent vehicle applications. Example systems include those for advanced driver information, route planning and prediction, driver assistance, and crash avoidance. The National Renewable Energy Laboratory is exploring ways to leverage intelligent vehicle systems to achieve fuel savings. This presentation discusses several potential applications, such as providing intelligent feedback to drivers on specific ways to improve their driving efficiency, and using information about upcoming driving to optimize electrified vehicle control strategies for maximum energy efficiency and battery life. The talk also covers the potential of Advanced Driver Assistance Systems (ADAS) and related technologies to deliver significant fuel savings in addition to providing safety and convenience benefits.

  7. The musician effect: does it persist under degraded pitch conditions of cochlear implant simulations?

    PubMed Central

    Fuller, Christina D.; Galvin, John J.; Maat, Bert; Free, Rolien H.; Başkent, Deniz

    2014-01-01

    Cochlear implants (CIs) are auditory prostheses that restore hearing via electrical stimulation of the auditory nerve. Compared to normal acoustic hearing, sounds transmitted through the CI are spectro-temporally degraded, causing difficulties in challenging listening tasks such as speech intelligibility in noise and perception of music. In normal hearing (NH), musicians have been shown to perform better than non-musicians in auditory processing and perception, especially for challenging listening tasks. This “musician effect” was attributed to better processing of pitch cues, as well as better overall auditory cognitive functioning in musicians. Does the musician effect persist when pitch cues are degraded, as they would be in signals transmitted through a CI? To answer this question, NH musicians and non-musicians were tested while listening to unprocessed signals or to signals processed by an acoustic CI simulation. The tasks increasingly depended on pitch perception: (1) speech intelligibility (words and sentences) in quiet or in noise, (2) vocal emotion identification, and (3) melodic contour identification (MCI). For speech perception, there was no musician effect with the unprocessed stimuli, and a small musician effect only for word identification in one noise condition, in the CI simulation. For emotion identification, there was a small musician effect for both. For MCI, there was a large musician effect for both. Overall, the effect was stronger as the importance of pitch in the listening task increased. This suggests that the musician effect may be more rooted in pitch perception, rather than in a global advantage in cognitive processing (in which case musicians would have performed better in all tasks). The results further suggest that musical training before (and possibly after) implantation might offer some advantage in pitch processing that could partially benefit speech perception, and more strongly benefit emotion and music perception. PMID:25071428

  8. Shared and distinct factors driving attention and temporal processing across modalities

    PubMed Central

    Berry, Anne S.; Li, Xu; Lin, Ziyong; Lustig, Cindy

    2013-01-01

    In addition to the classic finding that “sounds are judged longer than lights,” the timing of auditory stimuli is often more precise and accurate than is the timing of visual stimuli. In cognitive models of temporal processing, these modality differences are explained by positing that auditory stimuli more automatically capture and hold attention, more efficiently closing an attentional switch that allows the accumulation of pulses marking the passage of time (Block & Zakay, 1997; Meck, 1991; Penney, 2003). However, attention is a multifaceted construct, and there has been little attempt to determine which aspects of attention may be related to modality effects. We used visual and auditory versions of the Continuous Temporal Expectancy Task (CTET; O'Connell et al., 2009) a timing task previously linked to behavioral and electrophysiological measures of mind-wandering and attention lapses, and tested participants with or without the presence of a video distractor. Performance in the auditory condition was generally superior to that in the visual condition, replicating standard results in the timing literature. The auditory modality was also less affected by declines in sustained attention indexed by declines in performance over time. In contrast, distraction had an equivalent impact on performance in the two modalities. Analysis of individual differences in performance revealed further differences between the two modalities: Poor performance in the auditory condition was primarily related to boredom whereas poor performance in the visual condition was primarily related to distractibility. These results suggest that: 1) challenges to different aspects of attention reveal both modality-specific and nonspecific effects on temporal processing, and 2) different factors drive individual differences when testing across modalities. PMID:23978664

  9. Command History, 1970. Volume 1. Sanitized

    DTIC Science & Technology

    1970-01-01

    Increased spending would drive up local prices, and the hoarding of rice would drive up food prices in urban areas. The introduction of counterfeit money ... essential assets of food, money, manpower, concealment, and intelligence which the enemy needed to continue the war. The majority of the population and the ... Communist line. (C) The VCI had two missions. The first was to provide military units with the money, food, recruits, intelligence, refuge, and guides without

  10. The Visual Representation and Acquisition of Driving Knowledge for Autonomous Vehicle

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxia; Jiang, Qing; Li, Ping; Song, LiangTu; Wang, Rujing; Yu, Biao; Mei, Tao

    2017-09-01

    In this paper, the driving knowledge base of an autonomous vehicle is designed. Based on the driving knowledge modeling system, the driving knowledge of the autonomous vehicle is visually acquired, managed, stored, and maintained, which is of vital significance for creating a development platform for the intelligent decision-making system of an automatic-driving expert system for autonomous vehicles.

  11. Influence of auditory fatigue on masked speech intelligibility

    NASA Technical Reports Server (NTRS)

    Parker, D. E.; Martens, W. L.; Johnston, P. A.

    1980-01-01

    Intelligibility of PB word lists embedded in simultaneous masking noise was evaluated before and after fatiguing-noise exposure; intelligibility was determined by the number of words correctly repeated during a shadowing task. Both the speech signal and the masking noise were filtered to a 2825-3185-Hz band. Masking-noise levels were varied from 0- to 90-dB SL. Fatigue was produced by a 1500-3000-Hz octave band of noise at 115 dB (re 20 µPa) presented continuously for 5 min. The results of three experiments indicated that speech intelligibility was reduced when the speech was presented against a background of silence, but that the fatiguing-noise exposure had no effect on intelligibility when the speech was made more intense and embedded in masking noise of 40-90-dB SL. These observations are interpreted by considering the recruitment produced by fatigue and masking noise.

  12. H1 antihistamines and driving.

    PubMed

    Popescu, Florin Dan

    2008-01-01

    Driving performances depend on cognitive, psychomotor and perception functions. The CNS adverse effects of some H1 antihistamines can alter the patient's ability to drive. Data from studies using standardized objective cognitive and psychomotor tests (Choice Reaction Time, Critical Flicker Fusion, Digital Symbol Substitution Test), functional brain imaging (Positron Emission Tomography, functional Magnetic Resonance Imaging), neurophysiological studies (Multiple Sleep Latency Test, auditory and visual evoked potentials), experimental simulated driving (driving simulators) and real driving studies (the Highway Driving Test, with the evaluation of the Standard Deviation of Lateral Position, and the Car Following Test, with the measurement of the Brake Reaction Time) must be discussed in order to classify a H1 antihistamine as a true non-sedating one.

  13. H1 antihistamines and driving

    PubMed Central

    Florin-Dan, Popescu

    2008-01-01

    Driving performances depend on cognitive, psychomotor and perception functions. The CNS adverse effects of some H1 antihistamines can alter the patient ability to drive. Data from studies using standardized objective cognitive and psychomotor tests (Choice Reaction Time, Critical Flicker Fusion, Digital Symbol Substitution Test), functional brain imaging (Positron Emission Tomography, functional Magnetic Resonance Imaging), neurophysiological studies (Multiple Sleep Latency Test, auditory and visual evoked potentials), experimental simulated driving (driving simulators) and real driving studies (the Highway Driving Test, with the evaluation of the Standard Deviation Lateral Position, and the Car Following Test, with the measurement of the Brake Reaction Time) must be discussed in order to classify a H1 antihistamine as a true non-sedating one. PMID:20108503
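    The Standard Deviation of Lateral Position (SDLP) cited above as the key Highway Driving Test outcome is simply the standard deviation of the vehicle's sampled lane position over a drive; larger values mean more weaving. A minimal sketch, with input units (cm) assumed rather than taken from any particular protocol:

    ```python
    import statistics

    def sdlp(lateral_positions_cm):
        """Standard Deviation of Lateral Position: sample standard deviation
        of a vehicle's lane position (here assumed in cm) over a drive."""
        return statistics.stdev(lateral_positions_cm)
    ```

    A sedating antihistamine typically raises SDLP by a few centimeters relative to placebo, which is why the measure is sensitive enough to classify drugs as sedating or non-sedating.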

  14. A Circuit for Motor Cortical Modulation of Auditory Cortical Activity

    PubMed Central

    Nelson, Anders; Schneider, David M.; Takatoh, Jun; Sakurai, Katsuyasu; Wang, Fan

    2013-01-01

    Normal hearing depends on the ability to distinguish self-generated sounds from other sounds, and this ability is thought to involve neural circuits that convey copies of motor command signals to various levels of the auditory system. Although such interactions at the cortical level are believed to facilitate auditory comprehension during movements and drive auditory hallucinations in pathological states, the synaptic organization and function of circuitry linking the motor and auditory cortices remain unclear. Here we describe experiments in the mouse that characterize circuitry well suited to transmit motor-related signals to the auditory cortex. Using retrograde viral tracing, we established that neurons in superficial and deep layers of the medial agranular motor cortex (M2) project directly to the auditory cortex and that the axons of some of these deep-layer cells also target brainstem motor regions. Using in vitro whole-cell physiology, optogenetics, and pharmacology, we determined that M2 axons make excitatory synapses in the auditory cortex but exert a primarily suppressive effect on auditory cortical neuron activity mediated in part by feedforward inhibition involving parvalbumin-positive interneurons. Using in vivo intracellular physiology, optogenetics, and sound playback, we also found that directly activating M2 axon terminals in the auditory cortex suppresses spontaneous and stimulus-evoked synaptic activity in auditory cortical neurons and that this effect depends on the relative timing of motor cortical activity and auditory stimulation. These experiments delineate the structural and functional properties of a corticocortical circuit that could enable movement-related suppression of auditory cortical activity. PMID:24005287

  15. Working Memory and Fluid Intelligence in Young Children

    ERIC Educational Resources Information Center

    Engel de Abreu, Pascale M. J.; Conway, Andrew R. A.; Gathercole, Susan E.

    2010-01-01

    The present study investigates how working memory and fluid intelligence are related in young children and how these links develop over time. The major aim is to determine which aspect of the working memory system--short-term storage or cognitive control--drives the relationship with fluid intelligence. A sample of 119 children was followed from…

  16. An Intelligent Use for Belief

    ERIC Educational Resources Information Center

    Aborn, Matt

    2006-01-01

    Over the last three decades there has been a major shift in how practicing educators think about intelligence. One great driving force of this change can be attributed to "Frames of Mind: Theory of Multiple Intelligences," written by Howard Gardner in 1983. Gardner's book is conceived around the premise that every human being maintains seven (now…

  17. Distracted driving in elderly and middle-aged drivers.

    PubMed

    Thompson, Kelsey R; Johnson, Amy M; Emerson, Jamie L; Dawson, Jeffrey D; Boer, Erwin R; Rizzo, Matthew

    2012-03-01

    Automobile driving is a safety-critical real-world example of multitasking. A variety of roadway and in-vehicle distracter tasks create information processing loads that compete for the neural resources needed to drive safely. Drivers with mind and brain aging may be particularly susceptible to distraction due to waning cognitive resources and control over attention. This study examined distracted driving performance in an instrumented vehicle (IV) in 86 elderly (mean=72.5 years, SD=5.0 years) and 51 middle-aged drivers (mean=53.7 years, SD=9.3 years) under a concurrent auditory-verbal processing load created by the Paced Auditory Serial Addition Task (PASAT). Compared to baseline (no-task) driving performance, distraction was associated with reduced steering control in both groups, with middle-aged drivers showing a greater increase in steering variability. The elderly drove slower and showed decreased speed variability during distraction compared to middle-aged drivers. They also tended to "freeze up", spending significantly more time holding the gas pedal steady, another tactic that may mitigate time-pressured integration and control of information, thereby freeing mental resources to maintain situation awareness. While 39% of elderly and 43% of middle-aged drivers committed significantly more driving safety errors during distraction, 28% and 18%, respectively, actually improved, compatible with allocation of attention resources to safety-critical tasks under a cognitive load. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. Distracted Driving in Elderly and Middle-Aged Drivers

    PubMed Central

    Thompson, Kelsey R.; Johnson, Amy M.; Emerson, Jamie L.; Dawson, Jeffrey D.; Boer, Erwin R.

    2011-01-01

    Automobile driving is a safety-critical real-world example of multitasking. A variety of roadway and in-vehicle distracter tasks create information processing loads that compete for the neural resources needed to drive safely. Drivers with mind and brain aging may be particularly susceptible to distraction due to waning cognitive resources and control over attention. This study examined distracted driving performance in an instrumented vehicle (IV) in 86 elderly (mean = 72.5 years, SD = 5.0 years) and 51 middle-aged drivers (mean = 53.7 years, SD = 9.3 years) under a concurrent auditory-verbal processing load created by the Paced Auditory Serial Addition Task (PASAT). Compared to baseline (no-task) driving performance, distraction was associated with reduced steering control in both groups, with middle-aged drivers showing a greater increase in steering variability. The elderly drove slower and showed decreased speed variability during distraction compared to middle-aged drivers. They also tended to “freeze up”, spending significantly more time holding the gas pedal steady, another tactic that may mitigate time-pressured integration and control of information, thereby freeing mental resources to maintain situation awareness. While 39% of elderly and 43% of middle-aged drivers committed significantly more driving safety errors during distraction, 28% and 18%, respectively, actually improved, compatible with allocation of attention resources to safety-critical tasks under a cognitive load. PMID:22269561

  19. Useful field of view predicts driving in the presence of distracters.

    PubMed

    Wood, Joanne M; Chaparro, Alex; Lacherez, Philippe; Hickson, Louise

    2012-04-01

    The Useful Field of View (UFOV) test has been shown to be highly effective in predicting crash risk among older adults. An important question we examined in this study is whether this association is due to the ability of the UFOV to predict difficulties in attention-demanding driving situations that involve either visual or auditory distracters. Participants included 92 community-living adults (mean age 73.6 ± 5.4 years; range 65-88 years) who completed all three subtests of the UFOV involving assessment of visual processing speed (subtest 1), divided attention (subtest 2), and selective attention (subtest 3); driving safety risk was also classified using the UFOV scoring system. Driving performance was assessed separately on a closed-road circuit while driving under three conditions: no distracters, visual distracters, and auditory distracters. Driving outcome measures included road sign recognition, hazard detection, gap perception, time to complete the course, and performance on the distracter tasks. Those rated as safe on the UFOV (safety rating categories 1 and 2), as well as those responding faster than the recommended cut-off on the selective attention subtest (350 msec), performed significantly better in terms of overall driving performance and also experienced less interference from distracters. Of the three UFOV subtests, the selective attention subtest best predicted overall driving performance in the presence of distracters. Older adults who were rated as higher risk on the UFOV, particularly on the selective attention subtest, demonstrated the poorest driving performance in the presence of distracters. This finding suggests that the selective attention subtest of the UFOV may be especially effective in predicting driving difficulties in situations of divided attention, which are commonly associated with crashes.

  20. WISC-R Scatter and Patterns in Three Types of Learning Disabled Children.

    ERIC Educational Resources Information Center

    Tabachnick, Barbara G.; Turbey, Carolyn B.

    Wechsler Intelligence Scale for Children-Revised (WISC-R) subtest scatter and Bannatyne recategorization scores were investigated with three types of learning disabilities in children 6 to 16 years old: visual-motor and visual-perceptual disability (N=66); auditory-perceptual and receptive language deficit (N=18); and memory deficit (N=12). Three…

  1. Visemic Processing in Audiovisual Discrimination of Natural Speech: A Simultaneous fMRI-EEG Study

    ERIC Educational Resources Information Center

    Dubois, Cyril; Otzenberger, Helene; Gounot, Daniel; Sock, Rudolph; Metz-Lutz, Marie-Noelle

    2012-01-01

    In a noisy environment, visual perception of articulatory movements improves natural speech intelligibility. Parallel to phonemic processing based on auditory signal, visemic processing constitutes a counterpart based on "visemes", the distinctive visual units of speech. Aiming at investigating the neural substrates of visemic processing in a…

  2. Detecting and Quantifying Mind Wandering during Simulated Driving.

    PubMed

    Baldwin, Carryl L; Roberts, Daniel M; Barragan, Daniela; Lee, John D; Lerner, Neil; Higgins, James S

    2017-01-01

    Mind wandering is a pervasive threat to transportation safety, potentially accounting for a substantial number of crashes and fatalities. In the current study, mind wandering was induced through completion of the same task for 5 days, consisting of a 20-min monotonous freeway-driving scenario, a cognitive depletion task, and a repetition of the 20-min driving scenario driven in the reverse direction. Participants were periodically probed with auditory tones to self-report whether they were mind wandering or focused on the driving task. Self-reported mind wandering frequency was high, and did not statistically change over days of participation. For measures of driving performance, participant-labeled periods of mind wandering were associated with reduced speed and reduced lane variability, in comparison to periods of on-task performance. For measures of electrophysiology, periods of mind wandering were associated with increased power in the alpha band of the electroencephalogram (EEG), as well as a reduction in the magnitude of the P3a component of the event-related potential (ERP) in response to the auditory probe. Results support that mind wandering has an impact on driving performance and that the associated change in the driver's attentional state is detectable in underlying brain physiology. Further, results suggest that detecting the internal cognitive state of humans is possible in a continuous task such as automobile driving. Identifying periods of likely mind wandering could serve as a useful research tool for assessment of driver attention, and could potentially lead to future in-vehicle safety countermeasures.

  3. Detecting and Quantifying Mind Wandering during Simulated Driving

    PubMed Central

    Baldwin, Carryl L.; Roberts, Daniel M.; Barragan, Daniela; Lee, John D.; Lerner, Neil; Higgins, James S.

    2017-01-01

    Mind wandering is a pervasive threat to transportation safety, potentially accounting for a substantial number of crashes and fatalities. In the current study, mind wandering was induced through completion of the same task for 5 days, consisting of a 20-min monotonous freeway-driving scenario, a cognitive depletion task, and a repetition of the 20-min driving scenario driven in the reverse direction. Participants were periodically probed with auditory tones to self-report whether they were mind wandering or focused on the driving task. Self-reported mind wandering frequency was high, and did not statistically change over days of participation. For measures of driving performance, participant-labeled periods of mind wandering were associated with reduced speed and reduced lane variability, in comparison to periods of on-task performance. For measures of electrophysiology, periods of mind wandering were associated with increased power in the alpha band of the electroencephalogram (EEG), as well as a reduction in the magnitude of the P3a component of the event-related potential (ERP) in response to the auditory probe. Results support that mind wandering has an impact on driving performance and that the associated change in the driver’s attentional state is detectable in underlying brain physiology. Further, results suggest that detecting the internal cognitive state of humans is possible in a continuous task such as automobile driving. Identifying periods of likely mind wandering could serve as a useful research tool for assessment of driver attention, and could potentially lead to future in-vehicle safety countermeasures. PMID:28848414

  4. Study on environmental test technology of LiDAR used for vehicle

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Yang, Jianfeng; Ou, Yong

    2018-03-01

    With the development of intelligent driving, vehicle-mounted LiDAR plays an important role; to some extent, LiDAR is a key enabler of intelligent driving. Environmental adaptability is a critical quality factor that can determine the success or failure of a LiDAR system. This article discusses the environments encountered by vehicle-mounted LiDAR and their effects on the sensor, including an analysis of the environments a vehicle may experience and the design of corresponding environmental tests.

  5. Comparing the effect of auditory-only and auditory-visual modes in two groups of Persian children using cochlear implants: a randomized clinical trial.

    PubMed

    Oryadi Zanjani, Mohammad Majid; Hasanzadeh, Saeid; Rahgozar, Mehdi; Shemshadi, Hashem; Purdy, Suzanne C; Mahmudi Bakhtiari, Behrooz; Vahab, Maryam

    2013-09-01

    Since the introduction of cochlear implantation, researchers have considered children's communication and educational success before and after implantation. Therefore, the present study aimed to compare auditory, speech, and language development scores following one-sided cochlear implantation between two groups of prelingual deaf children educated through either auditory-only (unisensory) or auditory-visual (bisensory) modes. A randomized controlled trial with a single-factor experimental design was used. The study was conducted in the Instruction and Rehabilitation Private Centre of Hearing Impaired Children and their Family, called Soroosh in Shiraz, Iran. We assessed 30 Persian deaf children for eligibility and 22 children qualified to enter the study. They were aged between 27 and 66 months old and had been implanted between the ages of 15 and 63 months. The sample of 22 children was randomly assigned to two groups: auditory-only mode and auditory-visual mode; 11 participants in each group were analyzed. In both groups, the development of auditory perception, receptive language, expressive language, speech, and speech intelligibility was assessed pre- and post-intervention by means of instruments which were validated and standardized in the Persian population. No significant differences were found between the two groups. The children with cochlear implants who had been instructed using either the auditory-only or auditory-visual modes acquired auditory, receptive language, expressive language, and speech skills at the same rate. Overall, spoken language significantly developed in both the unisensory group and the bisensory group. Thus, both the auditory-only mode and the auditory-visual mode were effective. Therefore, it is not essential to limit access to the visual modality and to rely solely on the auditory modality when instructing hearing, language, and speech in children with cochlear implants who are exposed to spoken language both at home and at school when communicating with their parents and educators prior to and after implantation. The trial has been registered at IRCT.ir, number IRCT201109267637N1. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  6. Rapid tuning shifts in human auditory cortex enhance speech intelligibility

    PubMed Central

    Holdgraf, Christopher R.; de Heer, Wendy; Pasley, Brian; Rieger, Jochem; Crone, Nathan; Lin, Jack J.; Knight, Robert T.; Theunissen, Frédéric E.

    2016-01-01

    Experience shapes our perception of the world on a moment-to-moment basis. This robust perceptual effect of experience parallels a change in the neural representation of stimulus features, though the nature of this representation and its plasticity are not well-understood. Spectrotemporal receptive field (STRF) mapping describes the neural response to acoustic features, and has been used to study contextual effects on auditory receptive fields in animal models. We performed a STRF plasticity analysis on electrophysiological data from recordings obtained directly from the human auditory cortex. Here, we report rapid, automatic plasticity of the spectrotemporal response of recorded neural ensembles, driven by previous experience with acoustic and linguistic information, and with a neurophysiological effect in the sub-second range. This plasticity reflects increased sensitivity to spectrotemporal features, enhancing the extraction of more speech-like features from a degraded stimulus and providing the physiological basis for the observed ‘perceptual enhancement' in understanding speech. PMID:27996965
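
    The STRF mapping described in this record can be understood, at a high level, as a regularized linear regression from a time-lagged stimulus spectrogram to the recorded neural response. The sketch below illustrates that general technique on synthetic data; it is not the authors' pipeline, and the dimensions, ridge penalty, and `lagged_design` helper are assumptions chosen for clarity.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "spectrogram": time points x frequency-band power values.
    n_t, n_f, n_lag = 2000, 8, 5
    spec = rng.standard_normal((n_t, n_f))

    # Ground-truth spectrotemporal filter used to simulate a neural response.
    true_strf = rng.standard_normal((n_lag, n_f))

    def lagged_design(spec, n_lag):
        """Stack n_lag time-delayed copies of the spectrogram into a design matrix."""
        n_t, n_f = spec.shape
        X = np.zeros((n_t, n_lag * n_f))
        for lag in range(n_lag):
            X[lag:, lag * n_f:(lag + 1) * n_f] = spec[:n_t - lag]
        return X

    X = lagged_design(spec, n_lag)
    y = X @ true_strf.ravel() + 0.1 * rng.standard_normal(n_t)  # noisy simulated response

    # Ridge regression: closed-form regularized least squares recovers the filter.
    lam = 1.0
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    strf_est = w.reshape(n_lag, n_f)
    ```

    On synthetic data like this, the estimated filter closely matches the ground truth; real recordings would need careful cross-validation of the regularization strength.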

  7. Effect of age at cochlear implantation on auditory and speech development of children with auditory neuropathy spectrum disorder.

    PubMed

    Liu, Yuying; Dong, Ruijuan; Li, Yuling; Xu, Tianqiu; Li, Yongxin; Chen, Xueqing; Gong, Shusheng

    2014-12-01

    To evaluate the auditory and speech abilities in children with auditory neuropathy spectrum disorder (ANSD) after cochlear implantation (CI) and determine the role of age at implantation. Ten children participated in this retrospective case series study. All children had evidence of ANSD. All subjects had no cochlear nerve deficiency on magnetic resonance imaging and had used the cochlear implants for a period of 12-84 months. We divided our children into two groups: children who underwent implantation before 24 months of age and children who underwent implantation after 24 months of age. Their auditory and speech abilities were evaluated using the following: behavioral audiometry, the Categories of Auditory Performance (CAP), the Meaningful Auditory Integration Scale (MAIS), the Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS), the Standard-Chinese version of the Monosyllabic Lexical Neighborhood Test (LNT), the Multisyllabic Lexical Neighborhood Test (MLNT), the Speech Intelligibility Rating (SIR) and the Meaningful Use of Speech Scale (MUSS). All children showed progress in their auditory and language abilities. The 4-frequency average hearing level (HL) (500Hz, 1000Hz, 2000Hz and 4000Hz) of aided hearing thresholds ranged from 17.5 to 57.5dB HL. All children developed time-related auditory perception and speech skills. Scores of children with ANSD who received cochlear implants before 24 months tended to be better than those of children who received cochlear implants after 24 months. Seven children completed the Mandarin Lexical Neighborhood Test. Approximately half of the children showed improved open-set speech recognition. Cochlear implantation is helpful for children with ANSD and may be a good optional treatment for many ANSD children. In addition, children with ANSD fitted with cochlear implants before 24 months tended to acquire auditory and speech skills better than children fitted with cochlear implants after 24 months. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  8. Music and language: relations and disconnections.

    PubMed

    Kraus, Nina; Slater, Jessica

    2015-01-01

    Music and language provide an important context in which to understand the human auditory system. While they perform distinct and complementary communicative functions, music and language are both rooted in the human desire to connect with others. Since sensory function is ultimately shaped by what is biologically important to the organism, the human urge to communicate has been a powerful driving force in both the evolution of auditory function and the ways in which it can be changed by experience within an individual lifetime. This chapter emphasizes the highly interactive nature of the auditory system as well as the depth of its integration with other sensory and cognitive systems. From the origins of music and language to the effects of auditory expertise on the neural encoding of sound, we consider key themes in auditory processing, learning, and plasticity. We emphasize the unique role of the auditory system as the temporal processing "expert" in the brain, and explore relationships between communication and cognition. We demonstrate how experience with music and language can have a significant impact on underlying neural function, and that auditory expertise strengthens some of the very same aspects of sound encoding that are deficient in impaired populations. © 2015 Elsevier B.V. All rights reserved.

  9. Auditory salience using natural soundscapes.

    PubMed

    Huang, Nicholas; Elhilali, Mounya

    2017-03-01

    Salience describes the phenomenon by which an object stands out from a scene. While its underlying processes are extensively studied in vision, mechanisms of auditory salience remain largely unknown. Previous studies have used well-controlled auditory scenes to shed light on some of the acoustic attributes that drive the salience of sound events. Unfortunately, the use of constrained stimuli in addition to a lack of well-established benchmarks of salience judgments hampers the development of comprehensive theories of sensory-driven auditory attention. The present study explores auditory salience in a set of dynamic natural scenes. A behavioral measure of salience is collected by having human volunteers listen to two concurrent scenes and indicate continuously which one attracts their attention. By using natural scenes, the study takes a data-driven rather than experimenter-driven approach to exploring the parameters of auditory salience. The findings indicate that the space of auditory salience is multidimensional (spanning loudness, pitch, spectral shape, as well as other acoustic attributes), nonlinear and highly context-dependent. Importantly, the results indicate that contextual information about the entire scene over both short and long scales needs to be considered in order to properly account for perceptual judgments of salience.

  10. iDriving (Intelligent Driving)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malikopoulos, Andreas

    2012-09-17

    iDriving identifies the driving style factors that have a major impact on fuel economy. An optimization framework is used with the aim of optimizing a driving style with respect to these driving factors. A set of polynomial metamodels is constructed to reflect the responses produced in fuel economy by changing the driving factors. The optimization framework is used to develop a real-time feedback system, including visual instructions, to enable drivers to alter their driving styles in response to actual driving conditions to improve fuel efficiency.
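
    The polynomial-metamodel idea in this record can be illustrated with a small sketch: fit a quadratic surrogate of fuel economy over driving-style factors, then search the surrogate for the most economical style. The two factors, the simulated data, and the grid search below are invented for illustration and are not taken from the iDriving project.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical driving-style factors: mean acceleration (m/s^2), cruise speed (m/s).
    accel = rng.uniform(0.5, 3.0, 200)
    speed = rng.uniform(15.0, 35.0, 200)

    # Simulated fuel-economy observations (mpg); the true optimum is gentle
    # acceleration at a moderate cruise speed, plus measurement noise.
    fuel = (40.0 - 3.0 * (accel - 1.0) ** 2 - 0.05 * (speed - 25.0) ** 2
            + 0.5 * rng.standard_normal(200))

    # Quadratic metamodel: fuel ~ c0 + c1*a + c2*v + c3*a^2 + c4*v^2 + c5*a*v.
    A = np.column_stack([np.ones_like(accel), accel, speed,
                         accel ** 2, speed ** 2, accel * speed])
    coef, *_ = np.linalg.lstsq(A, fuel, rcond=None)

    def predicted_mpg(a, v):
        return coef @ np.array([1.0, a, v, a * a, v * v, a * v])

    # Optimize the metamodel over a grid of feasible driving styles.
    grid_a = np.linspace(0.5, 3.0, 60)
    grid_v = np.linspace(15.0, 35.0, 60)
    best_mpg, best_a, best_v = max(
        (predicted_mpg(a, v), a, v) for a in grid_a for v in grid_v)
    ```

    A real-time feedback system would compare the driver's current factor values against the optimized ones and issue instructions that nudge the style toward the metamodel's optimum.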

  11. Perceptual elements in brain mechanisms of acoustic communication in humans and nonhuman primates.

    PubMed

    Reser, David H; Rosa, Marcello

    2014-12-01

    Ackermann et al. outline a model for elaboration of subcortical motor outputs as a driving force for the development of the apparently unique behaviour of language in humans. They emphasize circuits in the striatum and midbrain, and acknowledge, but do not explore, the importance of the auditory perceptual pathway for evolution of verbal communication. We suggest that understanding the evolution of language will also require understanding of vocalization perception, especially in the auditory cortex.

  12. The impact of distraction mitigation strategies on driving performance.

    PubMed

    Donmez, Birsen; Boyle, Linda Ng; Lee, John D

    2006-01-01

    An experiment was conducted to assess the effects of distraction mitigation strategies on drivers' performance and productivity while engaged in an in-vehicle information system task. Previous studies show that in-vehicle tasks undermine driver safety and there is a need to mitigate driver distraction. An advising strategy that alerts drivers to potential dangers and a locking strategy that prevents the driver from continuing the distracting task were presented to 16 middle-aged and 12 older drivers in a driving simulator in two modes (auditory, visual) and two road conditions (curves, braking events). Distraction was a problem for both age groups. Visual distractions were more detrimental than auditory ones for curve negotiation, as depicted by more erratic steering, F (6, 155) = 26.76, p < .05. Drivers did brake more abruptly under auditory distractions, but this effect was mitigated by both the advising, t (155) = 8.37, p < .05, and locking strategies, t (155) = 8.49, p < .05. The locking strategy also resulted in longer minimum time to collision for middle-aged drivers engaged in visual distractions, F (6, 138) = 2.43, p < .05. Adaptive interfaces can reduce abrupt braking on curve entries resulting from auditory distractions and can also improve the braking response for distracted drivers. These strategies can be incorporated into existing in-vehicle systems, thus mitigating the effects of distraction and improving driver performance.

  13. Comparison of Social Interaction between Cochlear-Implanted Children with Normal Intelligence Undergoing Auditory Verbal Therapy and Normal-Hearing Children: A Pilot Study.

    PubMed

    Monshizadeh, Leila; Vameghi, Roshanak; Sajedi, Firoozeh; Yadegari, Fariba; Hashemi, Seyed Basir; Kirchem, Petra; Kasbi, Fatemeh

    2018-04-01

    A cochlear implant is a device that helps hearing-impaired children by transmitting sound signals to the brain and helping them improve their speech, language, and social interaction. Although various studies have investigated the different aspects of speech perception and language acquisition in cochlear-implanted children, little is known about their social skills, particularly Persian-speaking cochlear-implanted children. Considering the growing number of cochlear implants being performed in Iran and the increasing importance of developing near-normal social skills as one of the ultimate goals of cochlear implantation, this study was performed to compare the social interaction between Iranian cochlear-implanted children who have undergone rehabilitation (auditory verbal therapy) after surgery and normal-hearing children. This descriptive-analytical study compared the social interaction level of 30 children with normal hearing and 30 with cochlear implants, selected by convenience sampling. The Raven test was administered to both groups to ensure normal intelligence quotient. The social interaction status of both groups was evaluated using the Vineland Adaptive Behavior Scale, and statistical analysis was performed using Statistical Package for Social Sciences (SPSS) version 21. After controlling age as a covariate variable, no significant difference was observed between the social interaction scores of both the groups (p > 0.05). In addition, social interaction had no correlation with sex in either group. Cochlear implantation followed by auditory verbal rehabilitation helps children with sensorineural hearing loss to have normal social interactions, regardless of their sex.

  14. Selective entrainment of brain oscillations drives auditory perceptual organization.

    PubMed

    Costa-Faidella, Jordi; Sussman, Elyse S; Escera, Carles

    2017-10-01

    Perceptual sound organization supports our ability to make sense of the complex acoustic environment, to understand speech and to enjoy music. However, the neuronal mechanisms underlying the subjective experience of attending to a single, unambiguous auditory pattern while still hearing all sounds in a scene are poorly understood. We hereby investigated the manner in which competing sound organizations are simultaneously represented by specific brain activity patterns and the way attention and task demands prime the internal model generating the current percept. Using a selective attention task on ambiguous auditory stimulation coupled with EEG recordings, we found that the phase of low-frequency oscillatory activity dynamically tracks multiple sound organizations concurrently. However, whereas the representation of ignored sound patterns is circumscribed to auditory regions, large-scale oscillatory entrainment in auditory, sensory-motor and executive-control network areas reflects the active perceptual organization, thereby giving rise to the subjective experience of a unitary percept. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Memory performance on the Auditory Inference Span Test is independent of background noise type for young adults with normal hearing at high speech intelligibility

    PubMed Central

    Rönnberg, Niklas; Rudner, Mary; Lunner, Thomas; Stenfelt, Stefan

    2014-01-01

    Listening in noise is often perceived to be effortful. This is partly because cognitive resources are engaged in separating the target signal from background noise, leaving fewer resources for storage and processing of the content of the message in working memory. The Auditory Inference Span Test (AIST) is designed to assess listening effort by measuring the ability to maintain and process heard information. The aim of this study was to use AIST to investigate the effect of background noise types and signal-to-noise ratio (SNR) on listening effort, as a function of working memory capacity (WMC) and updating ability (UA). The AIST was administered in three types of background noise: steady-state speech-shaped noise, amplitude modulated speech-shaped noise, and unintelligible speech. Three SNRs targeting 90% speech intelligibility or better were used in each of the three noise types, giving nine different conditions. The reading span test assessed WMC, while UA was assessed with the letter memory test. Twenty young adults with normal hearing participated in the study. Results showed that AIST performance was not influenced by noise type at the same intelligibility level, but became worse with worse SNR when background noise was speech-like. Performance on AIST also decreased with increasing memory load level. Correlations between AIST performance and the cognitive measurements suggested that WMC is of more importance for listening when SNRs are worse, while UA is of more importance for listening in easier SNRs. The results indicated that in young adults with normal hearing, the effort involved in listening in noise at high intelligibility levels is independent of the noise type. However, when noise is speech-like and intelligibility decreases, listening effort increases, probably due to extra demands on cognitive resources added by the informational masking created by the speech fragments and vocal sounds in the background noise. PMID:25566159
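
    The nine listening conditions above were created by presenting speech in background noise at fixed signal-to-noise ratios. The general recipe for mixing a signal and a noise at a target SNR can be sketched as follows; this is a generic illustration, not the authors' stimulus-generation code, and the pure tone standing in for speech is an assumption.

    ```python
    import numpy as np

    def mix_at_snr(signal, noise, snr_db):
        """Scale `noise` so the mixture has the requested signal-to-noise ratio in dB."""
        p_signal = np.mean(signal ** 2)
        p_noise = np.mean(noise ** 2)
        # Required noise power is P_signal / 10^(SNR/10); scale amplitude by its sqrt.
        scale = np.sqrt(p_signal / (p_noise * 10 ** (snr_db / 10)))
        return signal + scale * noise

    rng = np.random.default_rng(2)
    speech = np.sin(2 * np.pi * 220 * np.linspace(0, 1, 16000))  # stand-in for speech
    noise = rng.standard_normal(16000)  # stand-in for speech-shaped noise

    mixture = mix_at_snr(speech, noise, snr_db=6.0)
    ```

    Repeating the call with different `snr_db` values and noise recordings (steady-state, amplitude-modulated, or unintelligible speech) yields a condition grid like the one used in the study.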

  16. Memory performance on the Auditory Inference Span Test is independent of background noise type for young adults with normal hearing at high speech intelligibility.

    PubMed

    Rönnberg, Niklas; Rudner, Mary; Lunner, Thomas; Stenfelt, Stefan

    2014-01-01

    Listening in noise is often perceived to be effortful. This is partly because cognitive resources are engaged in separating the target signal from background noise, leaving fewer resources for storage and processing of the content of the message in working memory. The Auditory Inference Span Test (AIST) is designed to assess listening effort by measuring the ability to maintain and process heard information. The aim of this study was to use AIST to investigate the effect of background noise types and signal-to-noise ratio (SNR) on listening effort, as a function of working memory capacity (WMC) and updating ability (UA). The AIST was administered in three types of background noise: steady-state speech-shaped noise, amplitude modulated speech-shaped noise, and unintelligible speech. Three SNRs targeting 90% speech intelligibility or better were used in each of the three noise types, giving nine different conditions. The reading span test assessed WMC, while UA was assessed with the letter memory test. Twenty young adults with normal hearing participated in the study. Results showed that AIST performance was not influenced by noise type at the same intelligibility level, but became worse with worse SNR when background noise was speech-like. Performance on AIST also decreased with increasing memory load level. Correlations between AIST performance and the cognitive measurements suggested that WMC is of more importance for listening when SNRs are worse, while UA is of more importance for listening in easier SNRs. The results indicated that in young adults with normal hearing, the effort involved in listening in noise at high intelligibility levels is independent of the noise type. However, when noise is speech-like and intelligibility decreases, listening effort increases, probably due to extra demands on cognitive resources added by the informational masking created by the speech fragments and vocal sounds in the background noise.

  17. The effect of learning style on academic student success

    NASA Astrophysics Data System (ADS)

    Stackhouse, Omega N.

The problem addressed in this study was that little was known about the impact on student academic achievement, when students are grouped by learning style, of a multiple intelligence based science curriculum. The larger problem was that many students were frequently unengaged and, consequently, low achieving in their science courses. This quantitative study used an ex post facto research design to better understand the impact of student learning style on the academic success of students in a classroom based on multiple intelligence theory. Gardner's work on multiple intelligence served as the conceptual framework for this study. The research question asked whether academic instruction that employs multiple intelligence theory relates to students' academic achievement differently according to their learning style group (auditory, visual, or kinesthetic). Existing data from 85 students were placed into 1 of 3 learning style groups (auditory, visual, or kinesthetic) using scores from an existing student inventory instrument. The independent variable was students' learning style as measured by the existing inventory data, and the dependent variable was students' existing scores from the Physical Science End of Course Test. All students were taught with the same strategies in similar classroom environments. The Physical Science End of Course Test was developed with stringent measures to protect validity by its developer, McGraw-Hill. Cronbach's alpha was computed to determine the internal reliability coefficient of the student inventory. The impact for social change is that adding to the body of knowledge regarding student learning style and science curriculum provides valuable information for teachers, administrators, and school policy makers, allowing teachers to better engage their students and prepare them for their place in society.

  18. Temporal auditory processing at 17 months of age is associated with preliterate language comprehension and later word reading fluency: an ERP study.

    PubMed

    van Zuijen, Titia L; Plakas, Anna; Maassen, Ben A M; Been, Pieter; Maurits, Natasha M; Krikhaar, Evelien; van Driel, Joram; van der Leij, Aryan

    2012-10-18

Dyslexia is heritable and associated with auditory processing deficits. We investigated whether temporal auditory processing is compromised in young children at risk for dyslexia and whether it is associated with later language and reading skills. We recorded EEG from 17-month-old children with or without familial risk for dyslexia to investigate whether their auditory system was able to detect a temporal change in a tone pattern. The children were followed longitudinally and completed intelligence and language development tests at ages 4 and 4.5 years. Literacy-related skills were measured at the beginning of second grade, and word and pseudo-word reading fluency were measured at the end of second grade. The EEG responses showed that control children could detect the temporal change, as indicated by a mismatch response (MMR). The MMR was not observed in the at-risk children. Furthermore, the fronto-central MMR amplitude correlated with preliterate language comprehension and with later word reading fluency, but not with phonological awareness. We conclude that temporal auditory processing differentiates young children at risk for dyslexia from controls and is a precursor of preliterate language comprehension and reading fluency. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  19. Intelligence development of pre-lingual deaf children with unilateral cochlear implantation.

    PubMed

    Chen, Mo; Wang, Zhaoyan; Zhang, Zhiwen; Li, Xun; Wu, Weijing; Xie, Dinghua; Xiao, Zi-An

    2016-11-01

The present study aims to test whether deaf children with unilateral cochlear implantation (CI) have higher intelligence quotients (IQ). We also try to identify predictive factors of intelligence development in deaf children with CI. In total, 186 children were enrolled in this study and divided into 3 groups: CI group (N = 66), hearing loss (HL) group (N = 54) and normal hearing (NH) group (N = 66). All children took the Hiskey-Nebraska Test of Learning Aptitude to assess IQ. After that, we used a deafness gene chip, the Categories of Auditory Performance (CAP) and the Speech Intelligibility Rating (SIR) to evaluate genotype, auditory performance and speech performance, respectively. At baseline, the average IQs of the HL, CI and NH groups were 98.3 ± 9.23, 100.03 ± 12.13 and 109.89 ± 10.56, respectively; the NH group scored significantly higher than the HL and CI groups (p < 0.05). After 12 months, the average IQs of the HL, CI and NH groups were 99.54 ± 9.38, 111.85 ± 15.38, and 112.08 ± 8.51, respectively. No significant difference between the IQ of the CI and NH groups was found (p > 0.05). The growth of SIR was positively correlated with the growth of IQ (r = 0.247, p = 0.046), while no significant correlations were found between IQ growth and other possible factors, i.e. gender, age at CI, use of hearing aid, genotype, implant device type, inner ear malformation and CAP growth (p > 0.05). Our study suggests that CI potentially improves intelligence development in deaf children. Speech performance growth is significantly correlated with IQ growth in CI children. Deaf children who receive CI before 6 years of age can achieve satisfying, undifferentiated short-term (12 months) development of intelligence. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  20. Mayo's Older Americans Normative Studies (MOANS): Factor Structure of a Core Battery.

    ERIC Educational Resources Information Center

    Smith, Glenn E.; And Others

    1992-01-01

    Using the Mayo Older Americans Normative Studies (MOANS) group (526 55-to 97-year-old adults), factor models were examined for the Wechsler Adult Intelligence Scale-Revised (WAIS-R); the Wechsler Memory Scale (WMS); and a core battery of the WAIS-R, the WMS, and the Rey Auditory-Verbal Learning Test. (SLD)

  1. The Role of Sensorimotor Impairments in Dyslexia: A Multiple Case Study of Dyslexic Children

    ERIC Educational Resources Information Center

    White, Sarah; Milne, Elizabeth; Rosen, Stuart; Hansen, Peter; Swettenham, John; Frith, Uta; Ramus, Franck

    2006-01-01

    This study attempts to investigate the role of sensorimotor impairments in the reading disability that characterizes dyslexia. Twenty-three children with dyslexia were compared to 22 control children, matched for age and non-verbal intelligence, on tasks assessing literacy as well as phonological, visual, auditory and motor abilities. The dyslexic…

  2. Prenatal Nicotine Exposure Disrupts Infant Neural Markers of Orienting.

    PubMed

    King, Erin; Campbell, Alana; Belger, Aysenil; Grewen, Karen

    2018-06-07

    Prenatal nicotine exposure (PNE) from maternal cigarette smoking is linked to developmental deficits, including impaired auditory processing, language, generalized intelligence, attention, and sleep. Fetal brain undergoes massive growth, organization, and connectivity during gestation, making it particularly vulnerable to neurotoxic insult. Nicotine binds to nicotinic acetylcholine receptors, which are extensively involved in growth, connectivity, and function of developing neural circuitry and neurotransmitter systems. Thus, PNE may have long-term impact on neurobehavioral development. The purpose of this study was to compare the auditory K-complex, an event-related potential reflective of auditory gating, sleep preservation and memory consolidation during sleep, in infants with and without PNE and to relate these neural correlates to neurobehavioral development. We compared brain responses to an auditory paired-click paradigm in 3- to 5-month-old infants during Stage 2 sleep, when the K-complex is best observed. We measured component amplitude and delta activity during the K-complex. Infants with PNE demonstrated significantly smaller amplitude of the N550 component and reduced delta-band power within elicited K-complexes compared to nonexposed infants and also were less likely to orient with a head turn to a novel auditory stimulus (bell ring) when awake. PNE may impair auditory sensory gating, which may contribute to disrupted sleep and to reduced auditory discrimination and learning, attention re-orienting, and/or arousal during wakefulness reported in other studies. Links between PNE and reduced K-complex amplitude and delta power may represent altered cholinergic and GABAergic synaptic programming and possibly reflect early neural bases for PNE-linked disruptions in sleep quality and auditory processing. These may pose significant disadvantage for language acquisition, attention, and social interaction necessary for academic and social success.

  3. Surgical factors in pediatric cochlear implantation and their early effects on electrode activation and functional outcomes.

    PubMed

    Francis, Howard W; Buchman, Craig A; Visaya, Jiovani M; Wang, Nae-Yuh; Zwolan, Teresa A; Fink, Nancy E; Niparko, John K

    2008-06-01

    To assess the impact of surgical factors on electrode status and early communication outcomes in young children in the first 2 years of cochlear implantation. Prospective multicenter cohort study. Six tertiary referral centers. Children 5 years or younger before implantation with normal nonverbal intelligence. Cochlear implant operations in 209 ears of 188 children. Percent active channels, auditory behavior as measured by the Infant Toddler Meaningful Auditory Integration Scale/Meaningful Auditory Integration Scale and Reynell receptive language scores. Stable insertion of the full electrode array was accomplished in 96.2% of ears. At least 75% of electrode channels were active in 88% of ears. Electrode deactivation had a significant negative effect on Infant Toddler Meaningful Auditory Integration Scale/Meaningful Auditory Integration Scale scores at 24 months but no effect on receptive language scores. Significantly fewer active electrodes were associated with a history of meningitis. Surgical complications requiring additional hospitalization and/or revision surgery occurred in 6.7% of patients but had no measurable effect on the development of auditory behavior within the first 2 years. Negative, although insignificant, associations were observed between the need for perioperative revision of the device and 1) the percent of active electrodes and 2) the receptive language level at 2-year follow-up. Activation of the entire electrode array is associated with better early auditory outcomes. Decrements in the number of active electrodes and lower gains of receptive language after manipulation of the newly implanted device were not statistically significant but may be clinically relevant, underscoring the importance of surgical technique and the effective placement of the electrode array.

  4. Prediction and constraint in audiovisual speech perception

    PubMed Central

    Peelle, Jonathan E.; Sommers, Mitchell S.

    2015-01-01

During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing precision of prediction. Electrophysiological studies demonstrate oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to auditory information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms. PMID:25890390

  5. What You Don't Notice Can Harm You: Age-Related Differences in Detecting Concurrent Visual, Auditory, and Tactile Cues.

    PubMed

    Pitts, Brandon J; Sarter, Nadine

    2018-06-01

Objective: This research sought to determine whether people can perceive and process three nonredundant (and unrelated) signals in vision, hearing, and touch at the same time and how aging and concurrent task demands affect this ability. Background: Multimodal displays have been shown to improve multitasking and attention management; however, their potential limitations are not well understood. The majority of studies on multimodal information presentation have focused on the processing of only two concurrent and, most often, redundant cues by younger participants. Method: Two experiments were conducted in which younger and older adults detected and responded to a series of singles, pairs, and triplets of visual, auditory, and tactile cues in the absence (Experiment 1) and presence (Experiment 2) of an ongoing simulated driving task. Detection rates, response times, and driving task performance were measured. Results: Compared to younger participants, older adults showed longer response times and higher error rates in response to cues/cue combinations. Older participants often missed the tactile cue when three cues were combined. They sometimes falsely reported the presence of a visual cue when presented with a pair of auditory and tactile signals. Driving performance suffered most in the presence of cue triplets. Conclusion: People are more likely to miss information if more than two concurrent nonredundant signals are presented to different sensory channels. Application: The findings from this work help inform the design of multimodal displays and ensure their usefulness across different age groups and in various application domains.

  6. A Neural Code That Is Isometric to Vocal Output and Correlates with Its Sensory Consequences

    PubMed Central

    Vyssotski, Alexei L.; Stepien, Anna E.; Keller, Georg B.; Hahnloser, Richard H. R.

    2016-01-01

    What cortical inputs are provided to motor control areas while they drive complex learned behaviors? We study this question in the nucleus interface of the nidopallium (NIf), which is required for normal birdsong production and provides the main source of auditory input to HVC, the driver of adult song. In juvenile and adult zebra finches, we find that spikes in NIf projection neurons precede vocalizations by several tens of milliseconds and are insensitive to distortions of auditory feedback. We identify a local isometry between NIf output and vocalizations: quasi-identical notes produced in different syllables are preceded by highly similar NIf spike patterns. NIf multiunit firing during song precedes responses in auditory cortical neurons by about 50 ms, revealing delayed congruence between NIf spiking and a neural representation of auditory feedback. Our findings suggest that NIf codes for imminent acoustic events within vocal performance. PMID:27723764

  7. Evaluation of the intelligent cruise control system : volume 1 : study results

    DOT National Transportation Integrated Search

    1999-10-01

    The Intelligent Cruise Control (ICC) system evaluation was based on an ICC Field Operational Test (FOT) performed in Michigan. The FOT involved 108 volunteers recruited to drive ten ICC-equipped Chrysler Concordes. Testing was initiated in July 1996 ...

  8. Early Sign Language Exposure and Cochlear Implantation Benefits.

    PubMed

    Geers, Ann E; Mitchell, Christine M; Warner-Czyz, Andrea; Wang, Nae-Yuh; Eisenberg, Laurie S

    2017-07-01

    Most children with hearing loss who receive cochlear implants (CI) learn spoken language, and parents must choose early on whether to use sign language to accompany speech at home. We address whether parents' use of sign language before and after CI positively influences auditory-only speech recognition, speech intelligibility, spoken language, and reading outcomes. Three groups of children with CIs from a nationwide database who differed in the duration of early sign language exposure provided in their homes were compared in their progress through elementary grades. The groups did not differ in demographic, auditory, or linguistic characteristics before implantation. Children without early sign language exposure achieved better speech recognition skills over the first 3 years postimplant and exhibited a statistically significant advantage in spoken language and reading near the end of elementary grades over children exposed to sign language. Over 70% of children without sign language exposure achieved age-appropriate spoken language compared with only 39% of those exposed for 3 or more years. Early speech perception predicted speech intelligibility in middle elementary grades. Children without sign language exposure produced speech that was more intelligible (mean = 70%) than those exposed to sign language (mean = 51%). This study provides the most compelling support yet available in CI literature for the benefits of spoken language input for promoting verbal development in children implanted by 3 years of age. Contrary to earlier published assertions, there was no advantage to parents' use of sign language either before or after CI. Copyright © 2017 by the American Academy of Pediatrics.

  9. Intelligence, Attention, and Behavioral Outcomes in Internationally Adopted Girls with a History of Institutionalization.

    PubMed

    Petranovich, Christine L; Walz, Nicolay Chertkoff; Staat, Mary Allen; Chiu, Chung-Yiu Peter; Wade, Shari L

    2015-01-01

    The aim of this study was to investigate the association of neurocognitive functioning with internalizing and externalizing problems and school and social competence in children adopted internationally. Participants included girls between the ages of 6-12 years who were internationally adopted from China (n = 32) or Eastern Europe (n = 25) and a control group of never-adopted girls (n = 25). Children completed the Vocabulary and Matrix Reasoning subtests from the Wechsler Abbreviated Scale of Intelligence and the Score! and Sky Search subtests from the Test of Everyday Attention for Children. Parents completed the Child Behavior Checklist and the Home and Community Social Behavior Scales. Compared to the controls, the Eastern European group evidenced significantly more problems with externalizing behaviors and school and social competence and poorer performance on measures of verbal intelligence, perceptual reasoning, and auditory attention. More internalizing problems were reported in the Chinese group compared to the controls. Using generalized linear regression, interaction terms were examined to determine whether the associations of neurocognitive functioning with behavior varied across groups. Eastern European group status was associated with more externalizing problems and poorer school and social competence, irrespective of neurocognitive test performance. In the Chinese group, poorer auditory attention was associated with more problems with social competence. Neurocognitive functioning may be related to behavior in children adopted internationally. Knowledge about neurocognitive functioning may further our understanding of the impact of early institutionalization on post-adoption behavior.

  10. Disabled readers: their intellectual and perceptual capacities at differing ages.

    PubMed

    Miller, J W; McKenna, M C

    1981-04-01

To investigate the multiple relationships between selected measures of intelligence and perception and reading achievement, a group of young poor readers (MCA = 8.4 yr.) and a group of older poor readers (MCA = 11.2 yr.) were given the Gates-MacGinitie Achievement Test, Peabody Picture Vocabulary Test, Slosson Intelligence Test, Spatial Orientation Memory Test, and Auditory Discrimination Test. The combination of the four predictor variables accounted for a significant amount of the variance in reading vocabulary and comprehension for both younger and older poor readers. Greater variance was accounted for in the reading achievement of younger students than of older students. Perceptual abilities were more strongly related to achievement for younger students, while intelligence was more strongly related for older students. Questions are raised about the validity of using expectancy formulae with younger disabled readers and the "learning disabilities" approach with older disabled readers.

  11. What drives the perceptual change resulting from speech motor adaptation? Evaluation of hypotheses in a Bayesian modeling framework

    PubMed Central

    Perrier, Pascal; Schwartz, Jean-Luc; Diard, Julien

    2018-01-01

    Shifts in perceptual boundaries resulting from speech motor learning induced by perturbations of the auditory feedback were taken as evidence for the involvement of motor functions in auditory speech perception. Beyond this general statement, the precise mechanisms underlying this involvement are not yet fully understood. In this paper we propose a quantitative evaluation of some hypotheses concerning the motor and auditory updates that could result from motor learning, in the context of various assumptions about the roles of the auditory and somatosensory pathways in speech perception. This analysis was made possible thanks to the use of a Bayesian model that implements these hypotheses by expressing the relationships between speech production and speech perception in a joint probability distribution. The evaluation focuses on how the hypotheses can (1) predict the location of perceptual boundary shifts once the perturbation has been removed, (2) account for the magnitude of the compensation in presence of the perturbation, and (3) describe the correlation between these two behavioral characteristics. Experimental findings about changes in speech perception following adaptation to auditory feedback perturbations serve as reference. Simulations suggest that they are compatible with a framework in which motor adaptation updates both the auditory-motor internal model and the auditory characterization of the perturbed phoneme, and where perception involves both auditory and somatosensory pathways. PMID:29357357
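The joint-distribution idea can be caricatured as a discrete Bayesian fusion of conditionally independent auditory and somatosensory evidence. All categories and numbers below are invented for illustration; this is not the authors' model, only a sketch of the kind of computation such a framework expresses:

```python
import numpy as np

# Two hypothetical phoneme categories with a flat prior: P(/eh/), P(/ae/).
prior = np.array([0.5, 0.5])

def posterior(p_audio, p_somato):
    """P(phoneme | cues) by Bayes' rule, assuming independent pathways."""
    joint = prior * p_audio * p_somato  # unnormalized posterior
    return joint / joint.sum()

# A perturbed auditory cue weakly favours /ae/, but somatosensory
# evidence strongly favours /eh/; fusion tilts the percept back to /eh/.
post = posterior(np.array([0.4, 0.6]), np.array([0.8, 0.2]))
```

Under this toy fusion, updating either the auditory characterization of the phoneme or the motor-to-somatosensory mapping shifts the category boundary, which is the kind of dissociation the paper's simulations evaluate.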

  12. Auditory Cortical Plasticity Drives Training-Induced Cognitive Changes in Schizophrenia

    PubMed Central

    Dale, Corby L.; Brown, Ethan G.; Fisher, Melissa; Herman, Alexander B.; Dowling, Anne F.; Hinkley, Leighton B.; Subramaniam, Karuna; Nagarajan, Srikantan S.; Vinogradov, Sophia

    2016-01-01

    Schizophrenia is characterized by dysfunction in basic auditory processing, as well as higher-order operations of verbal learning and executive functions. We investigated whether targeted cognitive training of auditory processing improves neural responses to speech stimuli, and how these changes relate to higher-order cognitive functions. Patients with schizophrenia performed an auditory syllable identification task during magnetoencephalography before and after 50 hours of either targeted cognitive training or a computer games control. Healthy comparison subjects were assessed at baseline and after a 10 week no-contact interval. Prior to training, patients (N = 34) showed reduced M100 response in primary auditory cortex relative to healthy participants (N = 13). At reassessment, only the targeted cognitive training patient group (N = 18) exhibited increased M100 responses. Additionally, this group showed increased induced high gamma band activity within left dorsolateral prefrontal cortex immediately after stimulus presentation, and later in bilateral temporal cortices. Training-related changes in neural activity correlated with changes in executive function scores but not verbal learning and memory. These data suggest that computerized cognitive training that targets auditory and verbal learning operations enhances both sensory responses in auditory cortex as well as engagement of prefrontal regions, as indexed during an auditory processing task with low demands on working memory. This neural circuit enhancement is in turn associated with better executive function but not verbal memory. PMID:26152668

  13. Drivers' misjudgement of vigilance state during prolonged monotonous daytime driving.

    PubMed

    Schmidt, Eike A; Schrauf, Michael; Simon, Michael; Fritzsche, Martin; Buchner, Axel; Kincses, Wilhelm E

    2009-09-01

    To investigate the effects of monotonous daytime driving on vigilance state and particularly the ability to judge this state, a real road driving study was conducted. To objectively assess vigilance state, performance (auditory reaction time) and physiological measures (EEG: alpha spindle rate, P3 amplitude; ECG: heart rate) were recorded continuously. Drivers judged sleepiness, attention to the driving task and monotony retrospectively every 20 min. Results showed that prolonged daytime driving under monotonous conditions leads to a continuous reduction in vigilance. Towards the end of the drive, drivers reported a subjectively improved vigilance state, which was contrary to the continued decrease in vigilance as indicated by all performance and physiological measures. These findings indicate a lack of self-assessment abilities after approximately 3h of continuous monotonous daytime driving.

  14. Online contributions of auditory feedback to neural activity in avian song control circuitry

    PubMed Central

    Sakata, Jon T.; Brainard, Michael S.

    2008-01-01

    Birdsong, like human speech, relies critically on auditory feedback to provide information about the quality of vocalizations. Although the importance of auditory feedback to vocal learning is well established, whether and how feedback signals influence vocal premotor circuitry has remained obscure. Previous studies in singing birds have not detected changes to vocal premotor activity following perturbations of auditory feedback, leading to the hypothesis that contributions of feedback to vocal plasticity might rely on ‘offline’ processing. Here, we recorded single and multi-unit activity in the premotor nucleus HVC of singing Bengalese finches in response to feedback perturbations that are known to drive plastic changes in song. We found that transient feedback perturbation caused reliable decreases in HVC activity at short latencies (20-80 ms). Similar changes to HVC activity occurred in awake, non-singing finches when the bird’s own song was played back with auditory perturbations that simulated those experienced by singing birds. These data indicate that neurons in avian vocal premotor circuitry are rapidly influenced by perturbations of auditory feedback and support the possibility that feedback information in HVC contributes online to the production and plasticity of vocalizations. PMID:18971480

  15. Autonomous and Connected Vehicles: A Law Enforcement Primer

    DTIC Science & Technology

    2015-12-01

CYBERSECURITY FOR AUTOMOBILES Intelligent Transportation Systems (ITS) that are emerging around the globe achieve that classification based on the convergence...Car Works," October 18, 2011, IEEE Spectrum, http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/how-google-self-driving-car-works...whereby artificial intelligence acts on behalf of a human, but carries the same life or death consequences.435 States should encourage and engage in

  16. Autonomous Driver Based on an Intelligent System of Decision-Making.

    PubMed

    Czubenko, Michał; Kowalczuk, Zdzisław; Ordys, Andrew

The paper presents and discusses a system (xDriver) which uses an Intelligent System of Decision-making (ISD) for the task of car driving. The principal subject is the implementation, simulation and testing of the ISD system described earlier in our publications (Kowalczuk and Czubenko in artificial intelligence and soft computing lecture notes in computer science, lecture notes in artificial intelligence, Springer, Berlin, 2010, 2010, In Int J Appl Math Comput Sci 21(4):621-635, 2011, In Pomiary Autom Robot 2(17):60-5, 2013) for the task of autonomous driving. The design of the whole ISD system is a result of a thorough modelling of human psychology based on an extensive literature study. Concepts somewhat similar to the ISD system can be found in the literature (Muhlestein in Cognit Comput 5(1):99-105, 2012; Wiggins in Cognit Comput 4(3):306-319, 2012), but there are no reports of a system which would model human psychology for the purpose of autonomously driving a car. The paper describes the assumptions for simulation, the set of needs and reactions (characterizing the ISD system), the road model and the vehicle model, and presents some results of simulation. It shows that the xDriver system may behave on the road as a very inexperienced driver.

  17. 77 FR 37004 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-20

    ...-Intelligence Agency (NGA), ATTN: Security Specialist, Mission Support, MSRS P-12, 7500 GEOINT Drive..., Alternate OSD Federal Register Liaison Officer, Department of Defense. NGA-005 System name: National... maintained at National Geospatial-Intelligence Agency (NGA) Headquarters in Washington, DC metro area...

  18. The Performance of Preschoolers with Speech/Language Disorders on the McCarthy Scales of Children's Abilities.

    ERIC Educational Resources Information Center

    Morgan, Robert L.; And Others

    1992-01-01

    Administered McCarthy Scales of Children's Abilities to preschool children of normal intelligence with (n=25) and without (n=25) speech/language disorders. Speech/language disorders group had significantly lower scores on all scales except Motor; showed difficulty in short-term auditory memory skills but not in visual memory skills; and had…

  19. A generalized time-frequency subtraction method for robust speech enhancement based on wavelet filter banks modeling of human auditory system.

    PubMed

    Shao, Yu; Chang, Chip-Hong

    2007-08-01

    We present a new speech enhancement scheme for a single-microphone system to meet the demand for quality noise reduction algorithms capable of operating at a very low signal-to-noise ratio. A psychoacoustic model is incorporated into the generalized perceptual wavelet denoising method to reduce the residual noise and improve the intelligibility of speech. The proposed method is a generalized time-frequency subtraction algorithm, which advantageously exploits the wavelet multirate signal representation to preserve the critical transient information. Simultaneous masking and temporal masking of the human auditory system are modeled by the perceptual wavelet packet transform via the frequency and temporal localization of speech components. The wavelet coefficients are used to calculate the Bark spreading energy and temporal spreading energy, from which a time-frequency masking threshold is deduced to adaptively adjust the subtraction parameters of the proposed method. An unvoiced speech enhancement algorithm is also integrated into the system to improve the intelligibility of speech. Through rigorous objective and subjective evaluations, it is shown that the proposed speech enhancement system is capable of reducing noise with little speech degradation in adverse noise environments and the overall performance is superior to several competitive methods.
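The method generalizes classical spectral subtraction to a perceptual wavelet packet domain. The core subtraction step can be sketched in its simplest STFT-magnitude form; this is an illustrative relative of the paper's algorithm with assumed parameter values, not the authors' implementation:

```python
import numpy as np

def spectral_subtraction(frames, noise_mag, alpha=2.0, floor=0.02):
    """Basic magnitude spectral subtraction on complex STFT frames.

    frames    : complex array, shape (n_frames, n_bins)
    noise_mag : estimated noise magnitude per bin (e.g. from speech pauses)
    alpha     : over-subtraction factor
    floor     : spectral floor, masks residual 'musical noise' artifacts
    """
    mag = np.abs(frames)
    phase = np.angle(frames)
    clean = mag - alpha * noise_mag           # subtract the noise estimate
    clean = np.maximum(clean, floor * mag)    # clamp negative results
    return clean * np.exp(1j * phase)         # re-attach the noisy phase
```

In the paper, the fixed over-subtraction factor is replaced by time-frequency masking thresholds derived from Bark and temporal spreading energies, so the amount subtracted adapts to what the human auditory system would mask anyway.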

  20. [Intermodal timing cues for audio-visual speech recognition].

    PubMed

    Hashimoto, Masahiro; Kumashiro, Masaharu

    2004-06-01

The purpose of this study was to investigate the limitations of lip-reading advantages for Japanese young adults by desynchronizing visual and auditory information in speech. In the experiment, audio-visual speech stimuli were presented under six test conditions: audio-alone, and audio-visually with either 0, 60, 120, 240 or 480 ms of audio delay. The stimuli were video recordings of the face of a female Japanese speaker producing long and short Japanese sentences. The intelligibility of the audio-visual stimuli was measured as a function of audio delay in sixteen untrained young subjects. Speech intelligibility under audio-delay conditions of less than 120 ms was significantly better than under the audio-alone condition. Moreover, the delay of 120 ms corresponded to the mean mora duration measured for the audio stimuli. The results implied that audio delays of up to 120 ms would not disrupt the lip-reading advantage, because visual and auditory information in speech appear to be integrated on a syllabic time scale. Potential applications of this research include noisy workplaces in which a worker must extract relevant speech from competing noise.

  1. Discrepant visual speech facilitates covert selective listening in "cocktail party" conditions.

    PubMed

    Williams, Jason A

    2012-06-01

    The presence of congruent visual speech information facilitates the identification of auditory speech, while the addition of incongruent visual speech information often impairs accuracy. This latter arrangement occurs naturally when one is being directly addressed in conversation but listens to a different speaker. Under these conditions, performance may diminish since: (a) one is bereft of the facilitative effects of the corresponding lip motion and (b) one becomes subject to visual distortion by incongruent visual speech; by contrast, speech intelligibility may be improved due to (c) bimodal localization of the central unattended stimulus. Participants were exposed to centrally presented visual and auditory speech while attending to a peripheral speech stream. In some trials, the lip movements of the central visual stimulus matched the unattended speech stream; in others, the lip movements matched the attended peripheral speech. Accuracy for the peripheral stimulus was nearly one standard deviation greater with incongruent visual information, compared to the congruent condition which provided bimodal pattern recognition cues. Likely, the bimodal localization of the central stimulus further differentiated the stimuli and thus facilitated intelligibility. Results are discussed with regard to similar findings in an investigation of the ventriloquist effect, and the relative strength of localization and speech cues in covert listening.

  2. Predicting speech intelligibility in noise for hearing-critical jobs

    NASA Astrophysics Data System (ADS)

    Soli, Sigfrid D.; Laroche, Chantal; Giguere, Christian

    2003-10-01

Many jobs require auditory abilities such as speech communication, sound localization, and sound detection. An employee for whom these abilities are impaired may constitute a safety risk for himself or herself, for fellow workers, and possibly for the general public. A number of methods have been used to predict these abilities from diagnostic measures of hearing (e.g., the pure-tone audiogram); however, these methods have not proved to be sufficiently accurate for predicting performance in the noise environments where hearing-critical jobs are performed. We have taken an alternative and potentially more accurate approach. A direct measure of speech intelligibility in noise, the Hearing in Noise Test (HINT), is instead used to screen individuals. The screening criteria are validated by establishing the empirical relationship between the HINT score and the auditory abilities of the individual, as measured in laboratory recreations of real-world workplace noise environments. The psychometric properties of the HINT enable screening of individuals with an acceptable amount of error. In this presentation, we will describe the predictive model and report the results of field measurements and laboratory studies used to provide empirical validation of the model. [Work supported by Fisheries and Oceans Canada.]

  3. Driving the brain towards creativity and intelligence: A network control theory analysis.

    PubMed

    Kenett, Yoed N; Medaglia, John D; Beaty, Roger E; Chen, Qunlin; Betzel, Richard F; Thompson-Schill, Sharon L; Qiu, Jiang

    2018-01-04

High-level cognitive constructs, such as creativity and intelligence, entail complex and multiple processes, including cognitive control processes. Recent neurocognitive research on these constructs highlights the importance of dynamic interaction across neural network systems and the role of cognitive control processes in guiding such a dynamic interaction. How can we quantitatively examine the extent and ways in which cognitive control contributes to creativity and intelligence? To address this question, we apply a computational network control theory (NCT) approach to structural brain imaging data acquired via diffusion tensor imaging in a large sample of participants, to examine how NCT relates to individual differences in distinct measures of creative ability and intelligence. Recent application of this theory at the neural level is built on a model of brain dynamics, which mathematically models patterns of inter-region activity propagated along the structure of an underlying network. The strength of this approach is its ability to characterize the potential role of each brain region in regulating whole-brain network function based on its anatomical fingerprint and a simplified model of node dynamics. We find that intelligence is related to the ability to "drive" the brain system into easy-to-reach neural states by the right inferior parietal lobe and to lower integration abilities in the left retrosplenial cortex. We also find that creativity is related to the ability to "drive" the brain system into difficult-to-reach states by the right dorsolateral prefrontal cortex (inferior frontal junction) and to higher integration abilities in sensorimotor areas. Furthermore, we find that different facets of creativity (fluency, flexibility, and originality) relate to generally similar but not identical network controllability processes. We relate our findings to general theories on intelligence and creativity. Copyright © 2018 Elsevier Ltd. All rights reserved.
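The controllability notions above (how easily a single region can "drive" the system into easy- or difficult-to-reach states) are commonly quantified from a controllability Gramian under a simplified discrete-time linear model of brain dynamics. A minimal sketch, with illustrative normalization and series truncation, not necessarily the study's exact computation:

```python
import numpy as np

def average_controllability(A, horizon=50):
    """Per-node average controllability under x(t+1) = A_norm x(t) + b_k u(t).

    A is a (structural) adjacency matrix; normalization and horizon are
    illustrative choices, not the paper's exact parameters.
    """
    # Normalize for stability: divide by 1 + largest singular value
    A = A / (1 + np.linalg.svd(A, compute_uv=False)[0])
    n = A.shape[0]
    ac = np.zeros(n)
    for k in range(n):
        B = np.zeros((n, 1))
        B[k] = 1.0                          # control input at node k only
        # Truncated discrete-time controllability Gramian: sum_t A^t B B' (A')^t
        W = np.zeros((n, n))
        Apow = np.eye(n)
        for _ in range(horizon):            # A_norm is stable, so this converges
            W += Apow @ B @ B.T @ Apow.T
            Apow = Apow @ A
        ac[k] = np.trace(W)                 # larger = easier-to-reach states
    return ac
```

Regions with high trace values correspond to drivers of easy-to-reach states; related metrics (e.g., modal controllability) instead emphasize difficult-to-reach states.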

  4. Computer-Mediated Assessment of Intelligibility in Aphasia and Apraxia of Speech

    PubMed Central

    Haley, Katarina L.; Roth, Heidi; Grindstaff, Enetta; Jacks, Adam

    2011-01-01

Background: Previous work indicates that single word intelligibility tests developed for dysarthria are sensitive to segmental production errors in aphasic individuals with and without apraxia of speech (AOS). However, potential listener learning effects and difficulties adapting elicitation procedures to coexisting language impairments limit their applicability to left hemisphere stroke survivors. Aims: The main purpose of this study was to examine basic psychometric properties for a new monosyllabic intelligibility test developed for individuals with aphasia and/or AOS. A related purpose was to examine clinical feasibility and the potential to standardize a computer-mediated administration approach. Methods & Procedures: A 600-item monosyllabic single word intelligibility test was constructed by assembling sets of phonetically similar words. Custom software was used to select 50 target words from this test in a pseudo-random fashion and to elicit and record production of these words by 23 speakers with aphasia and 20 neurologically healthy participants. To evaluate test-retest reliability, two identical sets of 50-word lists were elicited by requesting repetition after a live speaker model. To examine the effect of a different word set and auditory model, an additional set of 50 different words was elicited with a pre-recorded model. The recorded words were presented to normal-hearing listeners for identification via orthographic and multiple-choice response formats. To examine construct validity, production accuracy for each speaker was estimated via phonetic transcription and rating of overall articulation. Outcomes & Results: Recording and listening tasks were completed in less than six minutes for all speakers and listeners. Aphasic speakers were significantly less intelligible than neurologically healthy speakers and displayed a wide range of intelligibility scores. Test-retest and inter-listener reliability estimates were strong. No significant difference was found in scores based on recordings from a live model versus a pre-recorded model, but some individual speakers favored the live model. Intelligibility test scores correlated highly with segmental accuracy derived from broad phonetic transcription of the same speech sample and a motor speech evaluation. Scores correlated moderately with rated articulation difficulty. Conclusions: We describe a computerized, single-word intelligibility test that yields clinically feasible, reliable, and valid measures of segmental speech production in adults with aphasia. This tool can be used in clinical research to facilitate appropriate participant selection and to establish matching across comparison groups. For a majority of speakers, elicitation procedures can be standardized by using a pre-recorded auditory model for repetition. This assessment tool has potential utility for both clinical assessment and outcomes research. PMID:22215933
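The core intelligibility measure in studies like this one is the percentage of target words a listener identifies correctly. A minimal orthographic-match scorer can be sketched as follows (the study's software also supports a multiple-choice format, which this sketch omits; the function name is illustrative):

```python
def intelligibility_score(targets, responses):
    """Percent of target words identified correctly by a listener.

    Compares listener transcriptions to target words case-insensitively,
    ignoring surrounding whitespace. A minimal sketch, not the study's
    actual scoring software.
    """
    if not targets:
        raise ValueError("need at least one target word")
    correct = sum(t.strip().lower() == r.strip().lower()
                  for t, r in zip(targets, responses))
    return 100.0 * correct / len(targets)
```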

  5. Context-Based Filtering for Assisted Brain-Actuated Wheelchair Driving

    PubMed Central

    Vanacker, Gerolf; Millán, José del R.; Lew, Eileen; Ferrez, Pierre W.; Moles, Ferran Galán; Philips, Johan; Van Brussel, Hendrik; Nuttin, Marnix

    2007-01-01

Controlling a robotic device by using human brain signals is an interesting and challenging task. The device may be complicated to control, and the nonstationary nature of the brain signals makes for a rather unstable input. With the use of intelligent processing algorithms adapted to the task at hand, however, performance can be improved. This paper introduces a shared control system that helps the subject in driving an intelligent wheelchair with a noninvasive brain interface. The subject's steering intentions are estimated from electroencephalogram (EEG) signals and passed through to the shared control system before being sent to the wheelchair motors. Experimental results show a possibility for significant improvement in the overall driving performance when using the shared control system compared to driving without it. These results have been obtained with 2 healthy subjects during their first day of training with the brain-actuated wheelchair. PMID:18354739

  6. Binding and unbinding the auditory and visual streams in the McGurk effect.

    PubMed

    Nahorna, Olha; Berthommier, Frédéric; Schwartz, Jean-Luc

    2012-08-01

    Subjects presented with coherent auditory and visual streams generally fuse them into a single percept. This results in enhanced intelligibility in noise, or in visual modification of the auditory percept in the McGurk effect. It is classically considered that processing is done independently in the auditory and visual systems before interaction occurs at a certain representational stage, resulting in an integrated percept. However, some behavioral and neurophysiological data suggest the existence of a two-stage process. A first stage would involve binding together the appropriate pieces of audio and video information before fusion per se in a second stage. Then it should be possible to design experiments leading to unbinding. It is shown here that if a given McGurk stimulus is preceded by an incoherent audiovisual context, the amount of McGurk effect is largely reduced. Various kinds of incoherent contexts (acoustic syllables dubbed on video sentences or phonetic or temporal modifications of the acoustic content of a regular sequence of audiovisual syllables) can significantly reduce the McGurk effect even when they are short (less than 4 s). The data are interpreted in the framework of a two-stage "binding and fusion" model for audiovisual speech perception.

  7. It's about time: Presentation in honor of Ira Hirsh

    NASA Astrophysics Data System (ADS)

    Grant, Ken

    2002-05-01

Over his long and illustrious career, Ira Hirsh has returned time and time again to his interest in the temporal aspects of pattern perception. Although Hirsh has studied and published articles and books pertaining to many aspects of the auditory system, such as sound conduction in the ear, cochlear mechanics, masking, auditory localization, psychoacoustic behavior in animals, speech perception, medical and audiological applications, coupling between psychophysics and physiology, and ecological acoustics, it is his work on auditory timing of simple and complex rhythmic patterns, the backbone of speech and music, that is at the heart of his more recent research. Here, we will focus on several aspects of temporal processing of simple and complex signals, both within and across sensory systems. Data will be reviewed on temporal order judgments of simple tones, and on simultaneity judgments and intelligibility of unimodal and bimodal complex stimuli where stimulus components are presented either synchronously or asynchronously. Differences in the symmetry and shape of ``temporal windows'' derived from these data sets will be highlighted.

  8. Cochlear implantation in Waardenburg syndrome: The Indian scenario.

    PubMed

    Deka, Ramesh Chandra; Sikka, Kapil; Chaturvedy, Gaurav; Singh, Chirom Amit; Venkat Karthikeyan, C; Kumar, Rakesh; Agarwal, Shivani

    2010-10-01

Children with Waardenburg syndrome (WS) exhibiting normal inner ear anatomy, like those included in our cohort, derive significant benefit from cochlear implantation, and results are comparable to those reported for the general population of implanted children. The patient population of WS accounts for approximately 2% of congenitally deaf children. The purpose of this retrospective case review was to describe the outcomes for those children with WS who have undergone cochlear implantation. On retrospective chart review, there were four cases with WS who underwent cochlear implantation. These cases were assessed for age at implantation, clinical and radiological features, operative and perioperative course, and performance outcomes. Auditory perception and speech production ability were evaluated using the Categories of Auditory Performance (CAP), Meaningful Auditory Integration Scale (MAIS), and Speech Intelligibility Rating (SIR) during the follow-up period. In this group of children with WS, with a minimum follow-up of 12 months, the CAP score ranged from 3 to 5, MAIS from 25 to 30, and SIR was 3. These scores are comparable with those of other cochlear implantees.

  9. Extraordinary intelligence and the care of infants

    PubMed Central

    Piantadosi, Steven T.; Kidd, Celeste

    2016-01-01

    We present evidence that pressures for early childcare may have been one of the driving factors of human evolution. We show through an evolutionary model that runaway selection for high intelligence may occur when (i) altricial neonates require intelligent parents, (ii) intelligent parents must have large brains, and (iii) large brains necessitate having even more altricial offspring. We test a prediction of this account by showing across primate genera that the helplessness of infants is a particularly strong predictor of the adults’ intelligence. We discuss related implications, including this account’s ability to explain why human-level intelligence evolved specifically in mammals. This theory complements prior hypotheses that link human intelligence to social reasoning and reproductive pressures and explains how human intelligence may have become so distinctive compared with our closest evolutionary relatives. PMID:27217560

  10. Frequency of target crashes for IntelliDrive safety systems

    DOT National Transportation Integrated Search

    2010-10-01

    This report estimates the frequency of different crash types that would potentially be addressed by various categories of Intelligent Transportation Systems as part of the IntelliDriveSM safety systems program. Crash types include light-vehicle crash...

  11. Drive-By-Wire Technology

    DTIC Science & Technology

    2001-05-29

Symposium: Intelligent Systems for the Objective Fleet. Topics covered: transmission controls; steering (both on-transmission and under-carriage); braking (service and parking); transmission select; throttle; other electromechanical opportunities; turret drives (elevation, traverse); automatic propellant handling systems.

  12. Are the Soviets Talking about Tactical Intelligence in Their Open-Source Publications?

    DTIC Science & Technology

    1981-06-01

tactical intelligence. The articles surveyed demonstrate that the Soviets have chosen ground reconnaissance as the basis for their tactical...overruns the enemy, drives six to seven km. deeper and forces the enemy to deploy his reserves. This information is combined with the personal ...collecting (management) is performed. The articles surveyed deal with the regimental intelligence officer. His functions are listed; the importance of

  13. The interaction of acoustic and linguistic grouping cues in auditory object formation

    NASA Astrophysics Data System (ADS)

    Shapley, Kathy; Carrell, Thomas

    2005-09-01

One of the earliest explanations for good speech intelligibility in poor listening situations was context [Miller et al., J. Exp. Psychol. 41 (1951)]. Context presumably allows listeners to group and predict speech appropriately and is known as a top-down listening strategy. Amplitude comodulation is another mechanism that has been shown to improve sentence intelligibility. Amplitude comodulation provides acoustic grouping information without changing the linguistic content of the desired signal [Carrell and Opie, Percept. Psychophys. 52 (1992); Hu and Wang, Proceedings of ICASSP-02 (2002)] and is considered a bottom-up process. The present experiment investigated how amplitude comodulation and semantic information combine to improve speech intelligibility. Sentences with high- and low-predictability word sequences [Boothroyd and Nittrouer, J. Acoust. Soc. Am. 84 (1988)] were constructed in two different formats: time-varying sinusoidal sentences (TVS) and reduced-channel sentences (RC). These stimuli were chosen because they minimally represent the traditionally defined speech cues and therefore emphasize the importance of high-level context effects and low-level acoustic grouping cues. Results indicated that semantic information did not influence intelligibility levels of TVS and RC sentences. In addition, amplitude comodulation aided listeners' intelligibility scores in the TVS condition but hindered them in the RC condition.

  14. Adult Plasticity in the Subcortical Auditory Pathway of the Maternal Mouse

    PubMed Central

    Miranda, Jason A.; Shepard, Kathryn N.; McClintock, Shannon K.; Liu, Robert C.

    2014-01-01

Subcortical auditory nuclei were traditionally viewed as non-plastic in adulthood so that acoustic information could be stably conveyed to higher auditory areas. Studies in a variety of species, including humans, now suggest that prolonged acoustic training can drive long-lasting brainstem plasticity. The neurobiological mechanisms for such changes are not well understood in natural behavioral contexts due to a relative dearth of in vivo animal models in which to study this. Here, we demonstrate in a mouse model that a natural life experience with increased demands on the auditory system – motherhood – is associated with improved temporal processing in the subcortical auditory pathway. We measured the auditory brainstem response to test whether mothers and pup-naïve virgin mice differed in temporal responses to both broadband and tone stimuli, including ultrasonic frequencies found in mouse pup vocalizations. Mothers had shorter latencies for early ABR peaks, indicating plasticity in the auditory nerve and the cochlear nucleus. Shorter interpeak latency between waves IV and V also suggests plasticity in the inferior colliculus. Hormone manipulations revealed that these changes cannot be explained solely by estrogen levels experienced during pregnancy and parturition in mothers. In contrast, we found that pup-care experience, independent of pregnancy and parturition, contributes to shortening auditory brainstem response latencies. These results suggest that acoustic experience in the maternal context imparts plasticity on early auditory processing that lasts beyond pup weaning. In addition to establishing an animal model for exploring adult auditory brainstem plasticity in a neuroethological context, our results have broader implications for models of perceptual, behavioral and neural changes that arise during maternity, where subcortical sensorineural plasticity has not previously been considered. PMID:24992362

  15. Adult plasticity in the subcortical auditory pathway of the maternal mouse.

    PubMed

    Miranda, Jason A; Shepard, Kathryn N; McClintock, Shannon K; Liu, Robert C

    2014-01-01

Subcortical auditory nuclei were traditionally viewed as non-plastic in adulthood so that acoustic information could be stably conveyed to higher auditory areas. Studies in a variety of species, including humans, now suggest that prolonged acoustic training can drive long-lasting brainstem plasticity. The neurobiological mechanisms for such changes are not well understood in natural behavioral contexts due to a relative dearth of in vivo animal models in which to study this. Here, we demonstrate in a mouse model that a natural life experience with increased demands on the auditory system - motherhood - is associated with improved temporal processing in the subcortical auditory pathway. We measured the auditory brainstem response to test whether mothers and pup-naïve virgin mice differed in temporal responses to both broadband and tone stimuli, including ultrasonic frequencies found in mouse pup vocalizations. Mothers had shorter latencies for early ABR peaks, indicating plasticity in the auditory nerve and the cochlear nucleus. Shorter interpeak latency between waves IV and V also suggests plasticity in the inferior colliculus. Hormone manipulations revealed that these changes cannot be explained solely by estrogen levels experienced during pregnancy and parturition in mothers. In contrast, we found that pup-care experience, independent of pregnancy and parturition, contributes to shortening auditory brainstem response latencies. These results suggest that acoustic experience in the maternal context imparts plasticity on early auditory processing that lasts beyond pup weaning. In addition to establishing an animal model for exploring adult auditory brainstem plasticity in a neuroethological context, our results have broader implications for models of perceptual, behavioral and neural changes that arise during maternity, where subcortical sensorineural plasticity has not previously been considered.
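The wave and interpeak latencies reported above can be approximated from an averaged ABR trace by simple peak picking. This is illustrative only: clinical and laboratory ABR scoring typically uses expert-guided latency windows per wave, which this sketch omits.

```python
import numpy as np
from scipy.signal import find_peaks

def interpeak_latencies(abr, fs, n_waves=5):
    """Pick the n_waves most prominent positive peaks in an averaged ABR
    trace and return their latencies (ms) plus successive interpeak
    intervals. A naive sketch, not a clinical scoring procedure.
    """
    peaks, props = find_peaks(abr, height=0)
    # keep the n_waves tallest peaks, restored to temporal order
    top = np.sort(peaks[np.argsort(props["peak_heights"])[-n_waves:]])
    lat = top / fs * 1e3            # sample index -> milliseconds
    return lat, np.diff(lat)
```

A shortened wave IV-V interval, as in the mothers here, would show up as a smaller value in the returned interpeak array.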

  16. Effectiveness and acceptance of the intelligent speeding prediction system (ISPS).

    PubMed

    Zhao, Guozhen; Wu, Changxu

    2013-03-01

The intelligent speeding prediction system (ISPS) is an in-vehicle speed assistance system developed to provide quantitative predictions of speeding. Although the ISPS's prediction of speeding has been validated, whether the ISPS can regulate a driver's speed behavior, and whether drivers accept the ISPS, need further investigation. Additionally, compared to the existing intelligent speed adaptation (ISA) system, whether the ISPS performs better in terms of reducing excessive speeds and improving driving safety needs more direct evidence. An experiment was conducted to assess and compare the effectiveness and acceptance of the ISPS and the ISA. We conducted a driving simulator study with 40 participants. System type served as a between-subjects variable with four levels: no speed assistance system, a pre-warning system based on the ISPS, a post-warning ISA system, and a combined pre-warning and ISA system. Speeding criterion served as a within-subjects variable with two levels: a lower (posted speed limit plus 1 mph) and a higher (posted speed limit plus 5 mph) speed threshold. Several aspects of the participants' driving speed, speeding measures, lead vehicle response, and subjective measures were collected. Both the pre-warning and combined systems led to greater minimum time-to-collision. The combined system resulted in slower driving speed, fewer speeding exceedances, shorter speeding duration, and smaller speeding magnitude. The results indicate that both pre-warning and combined systems have the potential to improve driving safety and performance. Copyright © 2012 Elsevier Ltd. All rights reserved.
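The minimum time-to-collision safety measure used above can be computed from car-following logs as the smallest gap-over-closing-speed ratio while the follower is closing on the lead vehicle. Variable names are illustrative, not the study's logging format.

```python
def min_time_to_collision(gaps, closing_speeds):
    """Minimum time-to-collision (s) over a car-following episode.

    TTC at each sample = bumper-to-bumper gap (m) / closing speed (m/s),
    defined only while the following vehicle is closing (speed
    difference > 0). Returns infinity if the follower never closes.
    """
    ttcs = [g / v for g, v in zip(gaps, closing_speeds) if v > 0]
    return min(ttcs) if ttcs else float("inf")
```

Larger minimum TTC values, as observed for the pre-warning and combined systems, indicate a larger safety margin.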

  17. Wearable real-time and adaptive feedback device to face the stuttering: a knowledge-based telehealthcare proposal.

    PubMed

    Prado, Manuel; Roa, Laura M

    2007-01-01

Although the first written references to permanent developmental stuttering occurred more than 2500 years ago, the mechanisms underlying this disorder are still unknown. This paper briefly reviews stuttering causal hypotheses and treatments, and presents the requirements that a new stuttering therapeutic device should satisfy. As a result of the analysis, an adaptive altered auditory feedback device based on a multimodal intelligent monitor, within the framework of a knowledge-based telehealthcare system, is presented. The subsequent discussion, based partly on the successful outcomes of a similar intelligent monitor, suggests that this novel device is feasible and could help to fill the gap between research and clinic.

  18. Acoustical Awareness for Intelligent Robotic Action

    DTIC Science & Technology

    2007-12-01

    sound is desired or needed for some other purposes, but is interfering with the intended application, it is called noise. The Soundscape refers...to that which can be heard. Although often used interchangeably with the term Auditory Scene, the soundscape is a narrower definition, referring...difficult is the underlying complexity of the acoustical domain. The soundscape is always changing with time, more so than even the visual domain tends

  19. Depending on Data: Business Intelligence Systems Drive Reform

    ERIC Educational Resources Information Center

    Halligan, Tom

    2010-01-01

    As more community colleges focus on using data to improve educational outcomes, many administrators are considering business intelligence applications that promise a path toward more informed decisions. Getting there, leaders say, requires more than installing some out-of-the-box solution; it requires changing the culture and finding skilled…

  20. United Kingdom national paediatric bilateral project: Results of professional rating scales and parent questionnaires.

    PubMed

    Cullington, H E; Bele, D; Brinton, J C; Cooper, S; Daft, M; Harding, J; Hatton, N; Humphries, J; Lutman, M E; Maddocks, J; Maggs, J; Millward, K; O'Donoghue, G; Patel, S; Rajput, K; Salmon, V; Sear, T; Speers, A; Wheeler, A; Wilson, K

    2017-01-01

This fourteen-centre project used professional rating scales and parent questionnaires to assess longitudinal outcomes in a large non-selected population of children receiving simultaneous and sequential bilateral cochlear implants. This was an observational non-randomized service evaluation. Data were collected at four time points: before bilateral cochlear implants or before the sequential implant, and one year, two years, and three years after. The measures reported are Categories of Auditory Performance II (CAPII), Speech Intelligibility Rating (SIR), Bilateral Listening Skills Profile (BLSP), and Parent Outcome Profile (POP). One thousand and one children aged from 8 months to almost 18 years were involved, although there were many missing data. In children receiving simultaneous implants after one, two, and three years respectively, median CAP scores were 4, 5, and 6; median SIR were 1, 2, and 3. Three years after receiving simultaneous bilateral cochlear implants, 61% of children were reported to understand conversation without lip-reading and 66% had intelligible speech if the listener concentrated hard. Auditory performance and speech intelligibility were significantly better in female children than males. Parents of children using sequential implants were generally positive about their child's well-being and behaviour since receiving the second device; those who were less positive about well-being changes also generally reported their children less willing to wear the second device. Data from 78% of paediatric cochlear implant centres in the United Kingdom provide a real-world picture of outcomes of children with bilateral implants in the UK. This large reference data set can be used to identify children in the lower quartile for targeted intervention.

  1. The effect of the inner-hair-cell mediated transduction on the shape of neural tuning curves

    NASA Astrophysics Data System (ADS)

    Altoè, Alessandro; Pulkki, Ville; Verhulst, Sarah

    2018-05-01

The inner hair cells of the mammalian cochlea transform the vibrations of their stereocilia into releases of neurotransmitter at the ribbon synapses, thereby controlling the activity of the afferent auditory fibers. The mechanical-to-neural transduction is a highly nonlinear process and it introduces differences between the frequency tuning of the stereocilia and that of the afferent fibers. Using a computational model of the inner hair cell that is based on in vitro data, we estimated that smaller vibrations of the stereocilia are necessary to drive the afferent fibers above threshold at low (≤0.5 kHz) than at high (≥4 kHz) driving frequencies. In the base of the cochlea, the transduction process affects the low-frequency tails of neural tuning curves. In particular, it introduces differences between the tuning of the stereocilia and that of the auditory fibers resembling those between basilar-membrane velocity tuning curves and auditory-fiber tuning curves at the base of the chinchilla cochlea. For units with a characteristic frequency between 1 and 4 kHz, the transduction process yields neural tuning curves that are shallower than the stereocilia tuning curves, increasingly so as the characteristic frequency decreases. This study proposes that transduction contributes to the progressive broadening of neural tuning curves from the base to the apex.

  2. Irregular Speech Rate Dissociates Auditory Cortical Entrainment, Evoked Responses, and Frontal Alpha

    PubMed Central

    Kayser, Stephanie J.; Ince, Robin A.A.; Gross, Joachim

    2015-01-01

The entrainment of slow rhythmic auditory cortical activity to the temporal regularities in speech is considered to be a central mechanism underlying auditory perception. Previous work has shown that entrainment is reduced when the quality of the acoustic input is degraded, but has also linked rhythmic activity at similar time scales to the encoding of temporal expectations. To understand these bottom-up and top-down contributions to rhythmic entrainment, we manipulated the temporal predictive structure of speech by parametrically altering the distribution of pauses between syllables or words, thereby rendering the local speech rate irregular while preserving intelligibility and the envelope fluctuations of the acoustic signal. Recording EEG activity in human participants, we found that this manipulation did not alter neural processes reflecting the encoding of individual sound transients, such as evoked potentials. However, the manipulation significantly reduced the fidelity of auditory delta (but not theta) band entrainment to the speech envelope. It also reduced left frontal alpha power and this alpha reduction was predictive of the reduced delta entrainment across participants. Our results show that rhythmic auditory entrainment in delta and theta bands reflect functionally distinct processes. Furthermore, they reveal that delta entrainment is under top-down control and likely reflects prefrontal processes that are sensitive to acoustical regularities rather than the bottom-up encoding of acoustic features. SIGNIFICANCE STATEMENT The entrainment of rhythmic auditory cortical activity to the speech envelope is considered to be critical for hearing. Previous work has proposed divergent views in which entrainment reflects either early evoked responses related to sound encoding or high-level processes related to expectation or cognitive selection. Using a manipulation of speech rate, we dissociated auditory entrainment at different time scales. Specifically, our results suggest that delta entrainment is controlled by frontal alpha mechanisms and thus support the notion that rhythmic auditory cortical entrainment is shaped by top-down mechanisms. PMID:26538641

  3. Driver memory for in-vehicle visual and auditory messages

    DOT National Transportation Integrated Search

    1999-12-01

    Three experiments were conducted in a driving simulator to evaluate effects of in-vehicle message modality and message format on comprehension and memory for younger and older drivers. Visual icons and text messages were effective in terms of high co...

  4. Extensive Tonotopic Mapping across Auditory Cortex Is Recapitulated by Spectrally Directed Attention and Systematically Related to Cortical Myeloarchitecture

    PubMed Central

    2017-01-01

    Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation—acoustic frequency—might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. 
Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions. PMID:29109238

  5. Extensive Tonotopic Mapping across Auditory Cortex Is Recapitulated by Spectrally Directed Attention and Systematically Related to Cortical Myeloarchitecture.

    PubMed

    Dick, Frederic K; Lehet, Matt I; Callaghan, Martina F; Keller, Tim A; Sereno, Martin I; Holt, Lori L

    2017-12-13

    Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation (acoustic frequency) might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe.
Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions. Copyright © 2017 Dick et al.

  6. Effects of alcohol on attention orienting and dual-task performance during simulated driving: an event-related potential study.

    PubMed

    Wester, Anne E; Verster, Joris C; Volkerts, Edmund R; Böcker, Koen B E; Kenemans, J Leon

    2010-09-01

    Driving is a complex task and is susceptible to inattention and distraction. Moreover, alcohol has a detrimental effect on driving performance, possibly due to alcohol-induced attention deficits. The aim of the present study was to assess the effects of alcohol on simulated driving performance and attention orienting and allocation, as assessed by event-related potentials (ERPs). Thirty-two participants completed two test runs in the Divided Attention Steering Simulator (DASS) with blood alcohol concentrations (BACs) of 0.00%, 0.02%, 0.05%, 0.08% and 0.10%. Sixteen participants performed the second DASS test run with a passive auditory oddball to assess alcohol effects on involuntary attention shifting. Sixteen other participants performed the second DASS test run with an active auditory oddball to assess alcohol effects on dual-task performance and active attention allocation. Dose-dependent impairments were found for reaction times, the number of misses and steering error, even more so in dual-task conditions, especially in the active oddball group. ERP amplitudes to novel irrelevant events were also attenuated in a dose-dependent manner. The P3b amplitude to deviant target stimuli decreased with blood alcohol concentration only in the dual-task condition. It is concluded that alcohol increases distractibility and interference from secondary task stimuli, as well as reduces attentional capacity and dual-task integrality.

  7. Mid-sized omnidirectional robot with hydraulic drive and steering

    NASA Astrophysics Data System (ADS)

    Wood, Carl G.; Perry, Trent; Cook, Douglas; Maxfield, Russell; Davidson, Morgan E.

    2003-09-01

    Through funding from the US Army Tank-Automotive and Armaments Command's (TACOM) Intelligent Mobility Program, Utah State University's (USU) Center for Self-Organizing and Intelligent Systems (CSOIS) has developed the T-series of omni-directional robots based on the USU omni-directional vehicle (ODV) technology. The ODV provides independent computer control of steering and drive in a single wheel assembly. By putting multiple omni-directional (OD) wheels on a chassis, a vehicle is capable of uncoupled translational and rotational motion. Previous robots in the series, the T1, T2, T3, ODIS, ODIS-T, and ODIS-S have all used OD wheels based on electric motors. The T4 weighs approximately 1400 lbs and features a four-drive-wheel configuration. Each wheel assembly consists of a hydraulic drive motor and a hydraulic steering motor. A gasoline engine is used to power both the hydraulic and electrical systems. The paper presents an overview of the mechanical design of the vehicle as well as potential uses of this technology in fielded systems.

  8. Dyslexia risk gene relates to representation of sound in the auditory brainstem.

    PubMed

    Neef, Nicole E; Müller, Bent; Liebig, Johanna; Schaadt, Gesa; Grigutsch, Maren; Gunter, Thomas C; Wilcke, Arndt; Kirsten, Holger; Skeide, Michael A; Kraft, Indra; Kraus, Nina; Emmrich, Frank; Brauer, Jens; Boltze, Johannes; Friederici, Angela D

    2017-04-01

    Dyslexia is a reading disorder with strong associations with KIAA0319 and DCDC2. Both genes play a functional role in spike time precision of neurons. Strikingly, poor readers show an imprecise encoding of fast transients of speech in the auditory brainstem. Whether dyslexia risk genes are related to the quality of sound encoding in the auditory brainstem remains to be investigated. Here, we quantified the response consistency of speech-evoked brainstem responses to the acoustically presented syllable [da] in 159 genotyped, literate and preliterate children. When controlling for age, sex, familial risk and intelligence, partial correlation analyses associated a higher dyslexia risk loading of KIAA0319 with noisier responses. In contrast, a higher risk loading of DCDC2 was associated with a trend towards more stable responses. These results suggest that unstable representation of sound, and thus, reduced neural discrimination ability of stop consonants, occurred in genotypes carrying a higher amount of KIAA0319 risk alleles. Current data provide the first evidence that the dyslexia-associated gene KIAA0319 can alter brainstem responses and impair phoneme processing in the auditory brainstem. This brain-gene relationship provides insight into the complex relationships between phenotype and genotype, thereby improving the understanding of the dyslexia-inherent complex multifactorial condition. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  9. Reduced Attention Allocation during Short Periods of Partially Automated Driving: An Event-Related Potentials Study

    PubMed Central

    Solís-Marcos, Ignacio; Galvao-Carmona, Alejandro; Kircher, Katja

    2017-01-01

    Research on partially automated driving has revealed relevant problems with driving performance, particularly when drivers’ intervention is required (e.g., take-over when automation fails). Mental fatigue has commonly been proposed to explain these effects after prolonged automated drives. However, performance problems have also been reported after just a few minutes of automated driving, indicating that other factors may also be involved. We hypothesize that, besides mental fatigue, an underload effect of partial automation may also affect driver attention. In this study, this potential effect was investigated during short periods of partially automated and manual driving and at different speeds. Subjective measures of mental demand and vigilance and performance on a secondary task (an auditory oddball task) were used to assess driver attention. Additionally, modulations of some specific attention-related event-related potentials (ERPs, N1 and P3 components) were investigated. The mental fatigue effects associated with the time on task were also evaluated by using the same measurements. Twenty participants drove in a fixed-base simulator while performing an auditory oddball task that elicited the ERPs. Six conditions were presented (5–6 min each) combining three speed levels (low, comfortable and high) and two automation levels (manual and partially automated). The results showed that, when driving partially automated, scores in subjective mental demand and P3 amplitudes were lower than in the manual conditions. Similarly, P3 amplitude and self-reported vigilance levels decreased with the time on task. Based on previous studies, these findings might reflect a reduction in drivers’ attention resource allocation, presumably due to the underload effects of partial automation and to the mental fatigue associated with the time on task.
Particularly, such underload effects on attention could explain the performance decrements after short periods of automated driving reported in other studies. However, further studies are needed to investigate this relationship in partial automation and in other automation levels. PMID:29163112

  10. The Information Barber Pole: Integrating White Information and Red Intelligence in Emerging Conflicts

    DTIC Science & Technology

    2013-12-01

    pollinate information. The effect of this data should be a comprehension that is both geospatial and temporal in nature and can “depict the evolution...information or intelligence that drive them, and typically don’t cross-pollinate, nor are they given incentive to do so without the expressed desire

  11. Using on-line altered auditory feedback treating Parkinsonian speech

    NASA Astrophysics Data System (ADS)

    Wang, Emily; Verhagen, Leo; de Vries, Meinou H.

    2005-09-01

    Patients with advanced Parkinson's disease tend to have dysarthric speech that is hesitant, accelerated, and repetitive, and that is often resistant to behavioral speech therapy. In this pilot study, the speech disturbances were treated using on-line altered feedback (AF) provided by SpeechEasy (SE), an in-the-ear device registered with the FDA for use in humans to treat chronic stuttering. Eight PD patients participated in the study. All had moderate to severe speech disturbances. In addition, two patients had moderate recurring stuttering at the onset of PD after long remission since adolescence, two had bilateral STN DBS, and two had bilateral pallidal DBS. An effective combination of delayed auditory feedback and frequency-altered feedback was selected for each subject and provided via SE worn in one ear. All subjects produced speech samples (structured monologue and reading) under three conditions: baseline, with SE without feedback, and with SE with feedback. The speech samples were randomly presented and rated for speech intelligibility (goodness) using UPDRS-III item 18 and for speaking rate. The results indicated that SpeechEasy is well tolerated and that AF can improve speech intelligibility in spontaneous speech. Further investigational use of this device for treating speech disorders in PD is warranted [Work partially supported by Janus Dev. Group, Inc.].

  12. Listenmee and Listenmee smartphone application: synchronizing walking to rhythmic auditory cues to improve gait in Parkinson's disease.

    PubMed

    Lopez, William Omar Contreras; Higuera, Carlos Andres Escalante; Fonoff, Erich Talamoni; Souza, Carolina de Oliveira; Albicker, Ulrich; Martinez, Jairo Alberto Espinoza

    2014-10-01

    Evidence supports the use of rhythmic external auditory signals to improve gait in PD patients (Arias & Cudeiro, 2008; Kenyon & Thaut, 2000; McIntosh, Rice & Thaut, 1994; McIntosh et al., 1997; Morris, Iansek, & Matyas, 1994; Thaut, McIntosh, & Rice, 1997; Suteerawattananon, Morris, Etnyre, Jankovic, & Protas, 2004; Willems, Nieuwboer, Chavert, & Desloovere, 2006). However, few prototypes are available for daily use, and to our knowledge, none utilize a smartphone application allowing individualized sounds and cadence. Therefore, we analyzed the effects on gait of Listenmee®, an intelligent glasses system with a portable auditory device, and present its smartphone application, the Listenmee app®, offering over 100 different sounds and an adjustable metronome to individualize the cueing rate, as well as its smartwatch with an accelerometer to detect the magnitude and direction of the proper acceleration and to track calorie count, sleep patterns, step count and daily distances. The present study included patients with idiopathic PD presenting gait disturbances, including freezing. Auditory rhythmic cues were delivered through Listenmee®. Performance was analyzed in a motion and gait analysis laboratory. The results revealed significant improvements in gait performance on three major dependent variables: walking speed by 38.1%, cadence by 28.1% and stride length by 44.5%. Our findings suggest that auditory cueing through Listenmee® may significantly enhance gait performance. Further studies are needed to elucidate the potential role and maximize the benefits of these portable devices. Copyright © 2014 Elsevier B.V. All rights reserved.
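    The core of the cueing described above, a metronome whose rate is individualized per patient, reduces to converting a target cadence into an inter-cue interval. A minimal sketch; the function names and cadence values are hypothetical, not taken from the Listenmee app:

```python
# Minimal sketch of individualized rhythmic cue timing, as a metronome
# like the one described might schedule it. All names and numbers are
# illustrative assumptions, not taken from the actual device.

def cue_interval_s(cadence_steps_per_min: float) -> float:
    """Seconds between auditory cues for a target cadence (steps/min)."""
    return 60.0 / cadence_steps_per_min

def cue_schedule_s(cadence_steps_per_min: float, n_cues: int) -> list:
    """Onset times (in seconds) of the first n_cues metronome beats."""
    dt = cue_interval_s(cadence_steps_per_min)
    return [i * dt for i in range(n_cues)]

# A target cadence of 100 steps/min yields one cue every 0.6 s.
interval = cue_interval_s(100.0)
schedule = cue_schedule_s(100.0, 4)
```

    In practice a clinician would raise or lower the cadence parameter per patient; that per-patient adjustment is exactly the individualization the app's adjustable metronome provides.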

  13. Medical intelligence, security and global health: the foundations of a new health agenda.

    PubMed

    Bowsher, G; Milner, C; Sullivan, R

    2016-07-01

    Medical intelligence, security and global health are distinct fields that often overlap, especially as the drive towards a global health security agenda gathers pace. Here, we outline some of the ways in which this has happened in the recent past, during the Ebola epidemic in West Africa and in the killing of Osama bin Laden by US intelligence services. We evaluate medical intelligence and the role it can play in global health security; we also attempt to define a framework that illustrates how medical intelligence can be incorporated into foreign policy action in order to delineate the boundaries and scope of this growing field. © The Royal Society of Medicine.

  14. Connectivity-enhanced route selection and adaptive control for the Chevrolet Volt

    DOE PAGES

    Gonder, Jeffrey; Wood, Eric; Rajagopalan, Sai

    2016-01-01

    The National Renewable Energy Laboratory and General Motors evaluated connectivity-enabled efficiency enhancements for the Chevrolet Volt. A high-level model was developed to predict vehicle fuel and electricity consumption based on driving characteristics and vehicle state inputs. These techniques were leveraged to optimize energy efficiency via green routing and intelligent control mode scheduling, which were evaluated using prospective driving routes between tens of thousands of real-world origin/destination pairs. The overall energy savings potential of green routing and intelligent mode scheduling was estimated at 5% and 3%, respectively. Furthermore, these represent substantial opportunities considering that they only require software adjustments to implement.
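    The green-routing idea above, predicting energy use per candidate route from driving characteristics and choosing the cheapest, can be sketched as follows. The toy cost model and every coefficient in it are hypothetical placeholders, not the NREL/General Motors model:

```python
# Hypothetical green-routing sketch: score each candidate route with a
# toy energy model and pick the minimum. Coefficients are illustrative
# assumptions, not values from the NREL/General Motors study.

def predicted_energy_kwh(distance_km: float, mean_speed_kph: float,
                         stops_per_km: float) -> float:
    rolling = 0.15 * distance_km                     # rolling/drivetrain losses
    aero = 2e-5 * mean_speed_kph ** 2 * distance_km  # aerodynamic losses
    stop_go = 0.05 * stops_per_km * distance_km      # stop-and-go losses
    return rolling + aero + stop_go

def green_route(routes: dict) -> str:
    """routes: name -> (distance_km, mean_speed_kph, stops_per_km)."""
    return min(routes, key=lambda name: predicted_energy_kwh(*routes[name]))

routes = {
    "highway": (20.0, 100.0, 0.1),  # longer but steady
    "city": (15.0, 40.0, 2.0),      # shorter but stop-and-go
}
best = green_route(routes)
```

    The point of the sketch is the selection structure, not the numbers: a connected vehicle can rank prospective routes by predicted energy before departure, which is why the savings come from software alone.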

  15. Automated feedback to foster safe driving in young drivers: phase 2 : traffic tech.

    DOT National Transportation Integrated Search

    2015-12-01

    Intelligent Speed Adaptation (ISA) provides a promising approach to reduce speeding. A core principle of ISA is real-time feedback that lets drivers know when they are driving over the speed limit. The overall goal of the study was to provide insight...

  16. Microscopic Car Modeling for Intelligent Traffic and Scenario Generation in the UCF Driving Simulator : Year 2

    DOT National Transportation Integrated Search

    2000-01-01

    A multi-year project was initiated to introduce autonomous vehicles in the University of Central Florida (UCF) Driving Simulator for real-time interaction with the simulator vehicle. This report describes the progress during the second year. In the f...

  17. A Review of Auditory Prediction and Its Potential Role in Tinnitus Perception.

    PubMed

    Durai, Mithila; O'Keeffe, Mary G; Searchfield, Grant D

    2018-06-01

    The precise mechanisms underlying tinnitus perception and distress are still not fully understood. A recent proposition is that auditory prediction errors and related memory representations may play a role in driving tinnitus perception. It is of interest to further explore this. To obtain a comprehensive narrative synthesis of current research in relation to auditory prediction and its potential role in tinnitus perception and severity. A narrative review methodological framework was followed. The key words Prediction Auditory, Memory Prediction Auditory, Tinnitus AND Memory, Tinnitus AND Prediction in Article Title, Abstract, and Keywords were extensively searched on four databases: PubMed, Scopus, SpringerLink, and PsychINFO. All study types were selected from 2000-2016 (end of 2016) and had the following exclusion criteria applied: minimum age of participants <18, nonhuman participants, and article not available in English. Reference lists of articles were reviewed to identify any further relevant studies. Articles were shortlisted based on title relevance. After reading the abstracts and with consensus made between coauthors, a total of 114 studies were selected for charting data. The hierarchical predictive coding model based on the Bayesian brain hypothesis, attentional modulation and top-down feedback serves as the fundamental framework in current literature for how auditory prediction may occur. Predictions are integral to speech and music processing, as well as in sequential processing and identification of auditory objects during auditory streaming. Although deviant responses are observable from middle latency time ranges, the mismatch negativity (MMN) waveform is the most commonly studied electrophysiological index of auditory irregularity detection. However, limitations may apply when interpreting findings because of the debatable origin of the MMN and its restricted ability to model real-life, more complex auditory phenomena.
Cortical oscillatory band activity may act as neurophysiological substrates for auditory prediction. Tinnitus has been modeled as an auditory object which may demonstrate incomplete processing during auditory scene analysis resulting in tinnitus salience and therefore difficulty in habituation. Within the electrophysiological domain, there is currently mixed evidence regarding oscillatory band changes in tinnitus. There are theoretical proposals for a relationship between prediction error and tinnitus but few published empirical studies. American Academy of Audiology.

  18. Working memory, short-term memory and reading proficiency in school-age children with cochlear implants.

    PubMed

    Bharadwaj, Sneha V; Maricle, Denise; Green, Laura; Allman, Tamby

    2015-10-01

    The objective of the study was to examine short-term memory and working memory through both visual and auditory tasks in school-age children with cochlear implants. The relationships between performance on these cognitive skills and reading and language outcomes were examined in these children. Ten children between the ages of 7 and 11 years with early-onset bilateral severe-profound hearing loss participated in the study. Auditory and visual short-term memory, auditory and visual working memory subtests and verbal knowledge measures were assessed using the Woodcock Johnson III Tests of Cognitive Abilities, the Wechsler Intelligence Scale for Children-IV Integrated and the Kaufman Assessment Battery for Children II. Reading outcomes were assessed using the Woodcock Reading Mastery Test III. Performance on visual short-term memory and visual working memory measures in children with cochlear implants was within the average range when compared to the normative mean. However, auditory short-term memory and auditory working memory measures were below average when compared to the normative mean. Performance was also below average on all verbal knowledge measures. Regarding reading outcomes, children with cochlear implants scored below average for listening and passage comprehension tasks, and these measures were positively correlated with visual short-term memory, visual working memory and auditory short-term memory. Performance on auditory working memory subtests was not related to reading or language outcomes. The children with cochlear implants in this study demonstrated better performance in visual (spatial) working memory and short-term memory skills than in auditory working memory and auditory short-term memory skills. Significant positive relationships were found between visual working memory and reading outcomes. The results of the study provide support for the idea that working memory capacity is modality specific in children with hearing loss.
Based on these findings, reading instruction that capitalizes on the strengths in visual short-term memory and working memory is suggested for young children with early-onset hearing loss. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  19. The Relationship Between Speech Production and Speech Perception Deficits in Parkinson's Disease.

    PubMed

    De Keyser, Kim; Santens, Patrick; Bockstael, Annelies; Botteldooren, Dick; Talsma, Durk; De Vos, Stefanie; Van Cauwenberghe, Mieke; Verheugen, Femke; Corthals, Paul; De Letter, Miet

    2016-10-01

    This study investigated the possible relationship between hypokinetic speech production and speech intensity perception in patients with Parkinson's disease (PD). Participants included 14 patients with idiopathic PD and 14 matched healthy controls (HCs) with normal hearing and cognition. First, speech production was objectified through a standardized speech intelligibility assessment, acoustic analysis, and speech intensity measurements. Second, an overall estimation task and an intensity estimation task were addressed to evaluate overall speech perception and speech intensity perception, respectively. Finally, correlation analysis was performed between the speech characteristics of the overall estimation task and the corresponding acoustic analysis. The interaction between speech production and speech intensity perception was investigated by an intensity imitation task. Acoustic analysis and speech intensity measurements demonstrated significant differences in speech production between patients with PD and the HCs. A different pattern in the auditory perception of speech and speech intensity was found in the PD group. Auditory perceptual deficits may influence speech production in patients with PD. The present results suggest a disturbed auditory perception related to an automatic monitoring deficit in PD.

  20. 11th Annual Intelligent Ground Vehicle Competition: team approaches to intelligent driving and machine vision

    NASA Astrophysics Data System (ADS)

    Theisen, Bernard L.; Lane, Gerald R.

    2003-10-01

    The Intelligent Ground Vehicle Competition (IGVC) is one of three unmanned-systems student competitions founded by the Association for Unmanned Vehicle Systems International (AUVSI) in the 1990s. The IGVC is a multidisciplinary exercise in product realization that challenges college engineering student teams to integrate advanced control theory, machine vision, vehicular electronics, and mobile platform fundamentals to design and build an unmanned system. Both the U.S. and international teams focus on developing a suite of dual-use technologies to equip ground vehicles of the future with intelligent driving capabilities. Over the past 11 years, the competition has challenged both undergraduates and graduates, including Ph.D. students with real world applications in intelligent transportation systems, the military, and manufacturing automation. To date, teams from over 40 universities and colleges have participated. In this paper, we describe some of the applications of the technologies required by this competition, and discuss the educational benefits. The primary goal of the IGVC is to advance engineering education in intelligent vehicles and related technologies. The employment and professional networking opportunities created for students and industrial sponsors through a series of technical events over the three-day competition are highlighted. Finally, an assessment of the competition based on participant feedback is presented.

  1. The Hospital of the Future. Megatrends, Driving Forces, Barriers to Implementation, Overarching Perspectives, Major Trends into the Future, Implications for TATRC And Specific Recommendations for Action

    DTIC Science & Technology

    2008-10-01

    Healthcare Systems Will Be Those That Work With Data/Info In New Ways • Artificial Intelligence Will Come to the Fore o Effectively Acquire...Education • Artificial Intelligence Will Assist in o History and Physical Examination o Imaging Selection via algorithms o Test Selection via algorithms...medical language into a simulation model based upon artificial intelligence, and • the content verification and validation of the cognitive

  2. Design of vehicle intelligent anti-collision warning system

    NASA Astrophysics Data System (ADS)

    Xu, Yangyang; Wang, Ying

    2018-05-01

    This paper presents the design of a low-cost, high-accuracy, miniaturized vehicle intelligent anti-collision warning system with digital display and acousto-optic alarm features, based on the MCU AT89C51. The vehicle intelligent anti-collision warning system includes a forward anti-collision warning system, an auto parking system and a reversing anti-collision radar system. It is developed mainly on the basis of ultrasonic distance measurement; its performance is reliable, so driving safety is greatly improved and parking security and efficiency are enhanced enormously.
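    The ultrasonic ranging such a system rests on is time-of-flight: the pulse travels to the obstacle and back, so distance is half the echo delay times the speed of sound. A minimal sketch of that principle, with warning thresholds that are illustrative assumptions rather than values from the paper:

```python
# Time-of-flight ranging as used in ultrasonic parking/anti-collision
# sensors: the sound travels out and back, so distance = v * t / 2.
# Thresholds are illustrative assumptions, not from the AT89C51 design.

SPEED_OF_SOUND_M_PER_S = 343.0  # dry air at about 20 degrees C

def echo_distance_m(round_trip_s: float) -> float:
    """Distance to the obstacle from the echo's round-trip time."""
    return SPEED_OF_SOUND_M_PER_S * round_trip_s / 2.0

def warning_level(distance_m: float) -> str:
    """Map a measured distance to a (hypothetical) warning tier."""
    if distance_m < 0.5:
        return "alarm"    # trigger the acousto-optic alarm
    if distance_m < 1.5:
        return "caution"  # show the distance on the digital display
    return "clear"

# A 5 ms round trip corresponds to roughly 0.86 m.
d = echo_distance_m(0.005)
level = warning_level(d)
```

    On a microcontroller the round-trip time would come from a hardware timer between pulse emission and echo detection; the division by two is the step that is easy to forget.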

  3. The Pathways for Intelligible Speech: Multivariate and Univariate Perspectives

    PubMed Central

    Evans, S.; Kyong, J.S.; Rosen, S.; Golestani, N.; Warren, J.E.; McGettigan, C.; Mourão-Miranda, J.; Wise, R.J.S.; Scott, S.K.

    2014-01-01

    An anterior pathway, concerned with extracting meaning from sound, has been identified in nonhuman primates. An analogous pathway has been suggested in humans, but controversy exists concerning the degree of lateralization and the precise location where responses to intelligible speech emerge. We have demonstrated that the left anterior superior temporal sulcus (STS) responds preferentially to intelligible speech (Scott SK, Blank CC, Rosen S, Wise RJS. 2000. Identification of a pathway for intelligible speech in the left temporal lobe. Brain. 123:2400–2406.). A functional magnetic resonance imaging study in Cerebral Cortex used equivalent stimuli and univariate and multivariate analyses to argue for the greater importance of the bilateral posterior STS when compared with the left anterior STS in responding to intelligible speech (Okada K, Rong F, Venezia J, Matchin W, Hsieh IH, Saberi K, Serences JT, Hickok G. 2010. Hierarchical organization of human auditory cortex: evidence from acoustic invariance in the response to intelligible speech. Cereb Cortex. 20:2486–2495.). Here, we also replicate our original study, demonstrating that the left anterior STS exhibits the strongest univariate response and, in decoding using the bilateral temporal cortex, contains the most informative voxels showing an increased response to intelligible speech. In contrast, in classifications using local “searchlights” and a whole brain analysis, we find greater classification accuracy in posterior rather than anterior temporal regions. Thus, we show that the precise nature of the multivariate analysis used will emphasize different response profiles associated with complex sound to speech processing. PMID:23585519

  4. Cochlear implantation in children with auditory neuropathy spectrum disorder: A multicenter study on auditory performance and speech production outcomes.

    PubMed

    Daneshi, Ahmad; Mirsalehi, Marjan; Hashemi, Seyed Basir; Ajalloueyan, Mohammad; Rajati, Mohsen; Ghasemi, Mohammad Mahdi; Emamdjomeh, Hesamaldin; Asghari, Alimohamad; Mohammadi, Shabahang; Mohseni, Mohammad; Mohebbi, Saleh; Farhadi, Mohammad

    2018-05-01

    To evaluate the auditory performance and speech production outcomes in children with auditory neuropathy spectrum disorder (ANSD). The effect of age at the time of implantation on surgical outcomes was also evaluated. Cochlear implantation was performed in 136 children with bilateral severe-to-profound hearing loss due to ANSD at four tertiary academic centers. The patients were divided into two groups based on age at the time of implantation: Group I, children ≤24 months, and Group II, subjects >24 months. The categories of auditory performance (CAP) and speech intelligibility rating (SIR) scores were evaluated after the first and second years of implantation, and the differences between the CAP and SIR scores of the two groups were assessed. The median CAP scores improved significantly after cochlear implantation in all patients (p < 0.001). The improvement in CAP scores during the first year was greater in Group II than in Group I (p = 0.007), but the overall improvement in CAP scores was significantly higher in patients implanted at ≤24 months (p < 0.001). There was no significant difference between the two groups in SIR scores at the first-year and second-year follow-ups, but evaluation of SIR improvement revealed significantly higher values for Group I during the second-year follow-up (p = 0.003). The auditory performance and speech production skills of children with ANSD improved significantly after cochlear implantation, and this improvement was affected by age at the time of implantation. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Prediction and constraint in audiovisual speech perception.

    PubMed

    Peelle, Jonathan E; Sommers, Mitchell S

    2015-07-01

    During face-to-face conversational speech, listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners by increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of the posterior superior temporal sulcus in integrative processing. We interpret these findings in a framework of temporally focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to acoustic information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Effects of Impulsive Pile-Driving Exposure on Fishes.

    PubMed

    Casper, Brandon M; Carlson, Thomas J; Halvorsen, Michele B; Popper, Arthur N

    2016-01-01

    Six species of fishes were tested under aquatic far-field, plane-wave acoustic conditions to answer several key questions regarding the effects of exposure to impulsive pile driving. The issues addressed included which sound levels lead to the onset of barotrauma injuries, how these levels differ between fishes with different types of swim bladders, the recovery from barotrauma injuries, and the potential effects exposure might have on the auditory system. The results demonstrate that the current interim criteria for pile-driving sound exposures are 20 dB or more below the actual sound levels that result in the onset of physiological effects on fishes.

  7. Synchronization to auditory and visual rhythms in hearing and deaf individuals

    PubMed Central

    Iversen, John R.; Patel, Aniruddh D.; Nicodemus, Brenda; Emmorey, Karen

    2014-01-01

    A striking asymmetry in human sensorimotor processing is that humans synchronize movements to rhythmic sound with far greater precision than to temporally equivalent visual stimuli (e.g., to an auditory vs. a flashing visual metronome). Traditionally, this finding is thought to reflect a fundamental difference in auditory vs. visual processing, i.e., superior temporal processing by the auditory system and/or privileged coupling between the auditory and motor systems. It is unclear whether this asymmetry is an inevitable consequence of brain organization or whether it can be modified (or even eliminated) by stimulus characteristics or by experience. With respect to stimulus characteristics, we found that a moving, colliding visual stimulus (a silent image of a bouncing ball with a distinct collision point on the floor) was able to drive synchronization nearly as accurately as sound in hearing participants. To study the role of experience, we compared synchronization to flashing metronomes in hearing and profoundly deaf individuals. Deaf individuals performed better than hearing individuals when synchronizing with visual flashes, suggesting that cross-modal plasticity enhances the ability to synchronize with temporally discrete visual stimuli. Furthermore, when deaf (but not hearing) individuals synchronized with the bouncing ball, their tapping patterns suggest that visual timing may access higher-order beat perception mechanisms for deaf individuals. These results indicate that the auditory advantage in rhythmic synchronization is more experience- and stimulus-dependent than has been previously reported. PMID:25460395

  8. Understanding the neurophysiological basis of auditory abilities for social communication: a perspective on the value of ethological paradigms.

    PubMed

    Bennur, Sharath; Tsunada, Joji; Cohen, Yale E; Liu, Robert C

    2013-11-01

    Acoustic communication between animals requires them to detect, discriminate, and categorize conspecific or heterospecific vocalizations in their natural environment. Laboratory studies of the auditory-processing abilities that facilitate these tasks have typically employed a broad range of acoustic stimuli, ranging from natural sounds like vocalizations to "artificial" sounds like pure tones and noise bursts. However, even when using vocalizations, laboratory studies often test abilities like categorization in relatively artificial contexts. Consequently, it is not clear whether neural and behavioral correlates of these tasks (1) reflect extensive operant training, which drives plastic changes in auditory pathways, or (2) reflect the innate capacity of the animal and its auditory system. Here, we review a number of recent studies which suggest that adopting more ethological paradigms utilizing natural communication contexts is scientifically important for elucidating how the auditory system normally processes and learns communication sounds. Additionally, since learning the meaning of communication sounds generally involves social interactions that engage neuromodulatory systems differently than laboratory-based conditioning paradigms, we argue that scientists need to pursue more ethological approaches to more fully inform our understanding of how the auditory system is engaged during acoustic communication. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.

  9. The function of BDNF in the adult auditory system.

    PubMed

    Singer, Wibke; Panford-Walsh, Rama; Knipper, Marlies

    2014-01-01

    The inner ear of vertebrates is specialized to perceive sound, gravity and movements. Each of the specialized sensory organs within the cochlea (sound) and vestibular system (gravity, head movements) transmits information to specific areas of the brain. During development, brain-derived neurotrophic factor (BDNF) orchestrates the survival and outgrowth of afferent fibers connecting the vestibular organ and those regions in the cochlea that map information for low frequency sound to central auditory nuclei and higher auditory centers. The role of BDNF in the mature inner ear is less understood, mainly because constitutive BDNF mutant mice die postnatally. Only in the last few years has the improved technology of conditional, cell-specific deletion of BDNF in vivo allowed the study of the function of BDNF in the mature, developed organ. This review provides an overview of the current knowledge of the expression pattern and function of BDNF in the peripheral and central auditory system from just prior to the first auditory experience onwards. A special focus is placed on the differential mechanisms by which BDNF drives refinement of auditory circuitries during the onset of sensory experience and in the adult brain. This article is part of the Special Issue entitled 'BDNF Regulation of Synaptic Structure, Function, and Plasticity'. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Audio-visual speech intelligibility benefits with bilateral cochlear implants when talker location varies.

    PubMed

    van Hoesel, Richard J M

    2015-04-01

    One of the key benefits of using cochlear implants (CIs) in both ears rather than just one is improved localization. It is likely that in complex listening scenes, improved localization allows bilateral CI users to orient toward talkers to improve signal-to-noise ratios and gain access to visual cues, but to date, that conjecture has not been tested. To obtain an objective measure of that benefit, seven bilateral CI users were assessed for both auditory-only and audio-visual speech intelligibility in noise using a novel dynamic spatial audio-visual test paradigm. For each trial conducted in spatially distributed noise, first, an auditory-only cueing phrase that was spoken by one of four talkers was selected and presented from one of four locations. Shortly afterward, a target sentence was presented that was either audio-visual or, in another test configuration, audio-only and was spoken by the same talker and from the same location as the cueing phrase. During the target presentation, visual distractors were added at other spatial locations. Results showed that in terms of speech reception thresholds (SRTs), the average improvement for bilateral listening over the better performing ear alone was 9 dB for the audio-visual mode, and 3 dB for audition alone. Comparison of bilateral performance for the audio-visual and audition-alone conditions showed that inclusion of visual cues led to an average SRT improvement of 5 dB. For unilateral device use, no such benefit arose, presumably due to the greatly reduced ability to localize the target talker to acquire visual information. The bilateral CI speech intelligibility advantage over the better ear in the present study is much larger than that previously reported for static talker locations, and indicates greater everyday speech benefits and a better cost-benefit ratio than estimated to date.

  11. Frequency locking in auditory hair cells: Distinguishing between additive and parametric forcing

    NASA Astrophysics Data System (ADS)

    Edri, Yuval; Bozovic, Dolores; Yochelis, Arik

    2016-10-01

    The auditory system displays remarkable sensitivity and frequency discrimination, attributes shown to rely on an amplification process that involves a mechanical as well as a biochemical response. Models that display proximity to an oscillatory onset (also known as Hopf bifurcation) exhibit a resonant response to distinct frequencies of incoming sound, and can explain many features of the amplification phenomenology. To understand the dynamics of this resonance, frequency locking is examined in a system near the Hopf bifurcation and subject to two types of driving forces: additive and parametric. Derivation of a universal amplitude equation that contains both forcing terms enables a study of their relative impact on the hair cell response. In the parametric case, although the resonant solutions are 1 : 1 frequency locked, they show the coexistence of solutions obeying a phase shift of π, a feature typical of the 2 : 1 resonance. Different characteristics are predicted for the transition from unlocked to locked solutions, leading to smooth or abrupt dynamics in response to different types of forcing. The theoretical framework provides a more realistic model of the auditory system, which incorporates a direct modulation of the internal control parameter by an applied drive. The results presented here can be generalized to many other media, including Faraday waves, chemical reactions, and elastically driven cardiomyocytes, which are known to exhibit resonant behavior.
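The distinction between the two forcing types can be made concrete with a schematic amplitude equation. The notation below is a generic forced-Hopf normal form written in the frame rotating with the drive, chosen for illustration; it is not the authors' exact formulation.

```latex
% A(t): complex oscillation amplitude near the Hopf onset;
% mu: distance from onset, nu: detuning from the drive frequency.
\begin{equation*}
  \frac{dA}{dt} = (\mu + i\nu)\,A \;-\; |A|^{2}A
                  \;+\; \underbrace{\gamma_{a}}_{\text{additive}}
                  \;+\; \underbrace{\gamma_{p}\,\bar{A}}_{\text{parametric}}
\end{equation*}
```

With \(\gamma_{a}=0\) the equation is invariant under \(A \to -A\), so parametrically locked states come in pairs separated by a phase shift of \(\pi\), matching the 2 : 1-like behavior described above; the additive term breaks that symmetry and selects a single locked phase.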

  12. Virtual Acoustics, Aeronautics and Communications

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    An optimal approach to auditory display design for commercial aircraft would utilize both spatialized ("3-D") audio techniques and active noise cancellation for safer operations. Results from several aircraft simulator studies conducted at NASA Ames Research Center are reviewed, including Traffic alert and Collision Avoidance System (TCAS) warnings, spoken orientation "beacons" for gate identification and collision avoidance on the ground, and hardware for improved speech intelligibility. The implications of hearing loss amongst pilots are also considered.

  13. Virtual acoustics, aeronautics, and communications

    NASA Technical Reports Server (NTRS)

    Begault, D. R.; Wenzel, E. M. (Principal Investigator)

    1998-01-01

    An optimal approach to auditory display design for commercial aircraft would utilize both spatialized (3-D) audio techniques and active noise cancellation for safer operations. Results from several aircraft simulator studies conducted at NASA Ames Research Center are reviewed, including Traffic alert and Collision Avoidance System (TCAS) warnings, spoken orientation "beacons" for gate identification and collision avoidance on the ground, and hardware for improved speech intelligibility. The implications of hearing loss among pilots are also considered.

  14. Estimating the relative weights of visual and auditory tau versus heuristic-based cues for time-to-contact judgments in realistic, familiar scenes by older and younger adults.

    PubMed

    Keshavarz, Behrang; Campos, Jennifer L; DeLucia, Patricia R; Oberfeld, Daniel

    2017-04-01

    Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object's instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we measured TTC estimates using a traffic scene with an approaching vehicle in order to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18-39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio-visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were primarily based on an auditory heuristic cue (final sound pressure level), rather than on auditory τ. In the audio-visual condition, the visual cues dominated overall, with the highest weight being assigned to visual τ by younger adults, and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
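The τ cue described above reduces to a single ratio; the function name and example values below are hypothetical, for illustration only.

```python
def visual_tau(optical_size_deg, expansion_rate_deg_s):
    """First-order time-to-contact estimate from optical expansion.

    tau = theta / (d theta / dt): the instantaneous optical size of an
    approaching object divided by its instantaneous rate of expansion.
    For a constant approach speed this equals the true time to contact.
    """
    return optical_size_deg / expansion_rate_deg_s

# A vehicle image subtending 2.0 deg and expanding at 0.5 deg/s
# yields a first-order TTC estimate of 4 s.
print(visual_tau(2.0, 0.5))  # -> 4.0
```

An auditory analogue would divide instantaneous sound intensity by its rate of change, whereas the heuristic cues discussed above (final optical size, final sound pressure level) bypass this ratio entirely.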

  15. Biologically inspired computation and learning in Sensorimotor Systems

    NASA Astrophysics Data System (ADS)

    Lee, Daniel D.; Seung, H. S.

    2001-11-01

    Networking systems presently lack the ability to intelligently process the rich multimedia content of the data traffic they carry. Endowing artificial systems with the ability to adapt to changing conditions requires algorithms that can rapidly learn from examples. We demonstrate the application of such learning algorithms on an inexpensive quadruped robot constructed to perform simple sensorimotor tasks. The robot learns to track a particular object by discovering the salient visual and auditory cues unique to that object. The system uses a convolutional neural network that automatically combines color, luminance, motion, and auditory information. The weights of the networks are adjusted using feedback from a teacher to reflect the reliability of the various input channels in the surrounding environment. Additionally, the robot is able to compensate for its own motion by adapting the parameters of a vestibular ocular reflex system.

  16. Network and external perturbation induce burst synchronisation in cat cerebral cortex

    NASA Astrophysics Data System (ADS)

    Lameu, Ewandson L.; Borges, Fernando S.; Borges, Rafael R.; Batista, Antonio M.; Baptista, Murilo S.; Viana, Ricardo L.

    2016-05-01

    The brains of mammals are divided into different cortical areas that are anatomically connected, forming larger networks which perform cognitive tasks. The cat cerebral cortex is composed of 65 areas organised into the visual, auditory, somatosensory-motor and frontolimbic cognitive regions. We have built a network of networks, in which networks are connected among themselves according to the connections observed in the cat cortical areas, aiming to study how inputs drive the synchronous behaviour in this cat brain-like network. We show that without external perturbations it is possible to observe a high level of bursting synchronisation between neurons within almost all areas, except for the auditory area. Bursting synchronisation appears between neurons in the auditory region when an external perturbation is applied in another cognitive area. This is clear evidence that burst synchronisation and collective behaviour in the brain might be a process mediated by other brain areas under stimulation.

  17. Effect of dual task activity on reaction time in males and females.

    PubMed

    Kaur, Manjinder; Nagpal, Sangeeta; Singh, Harpreet; Suhalka, M L

    2014-01-01

    The present study was designed to compare auditory and visual reaction times on an Audiovisual Reaction Time Machine with the concomitant use of mobile phones in 52 women and 30 men in the age group of 18-40 years. Males showed significantly (p < 0.05) shorter reaction times, both auditory and visual, than females during both single-task and multitask performance. However, the percentage increase from the respective baseline auditory reaction times during multitasking was greater in men than in women, in both the hand-held (24.38% and 18.70%, respectively) and hands-free (36.40% and 18.40%, respectively) modes of cell phone use. Visual reaction times increased non-significantly during multitasking in both groups. Nevertheless, multitasking per se had a detrimental effect on the reaction times in both groups studied; hence, it should best be avoided in crucial and high-attention-demanding tasks like driving.

  18. Advanced Traveler Information Systems and Commercial Vehicle Operations Components of the Intelligent Transportation Systems: Head-up Displays and Driver Attention for Navigation Information

    DOT National Transportation Integrated Search

    1998-03-01

    Since the initial development of prototype automotive head-up displays (HUDs), there has been a concern that the presence of the HUD image may interfere with the driving task and negatively impact driving performance. The overall goal of this experim...

  19. Thinking positively: The genetics of high intelligence

    PubMed Central

    Shakeshaft, Nicholas G.; Trzaskowski, Maciej; McMillan, Andrew; Krapohl, Eva; Simpson, Michael A.; Reichenberg, Avi; Cederlöf, Martin; Larsson, Henrik; Lichtenstein, Paul; Plomin, Robert

    2015-01-01

    High intelligence (general cognitive ability) is fundamental to the human capital that drives societies in the information age. Understanding the origins of this intellectual capital is important for government policy, for neuroscience, and for genetics. For genetics, a key question is whether the genetic causes of high intelligence are qualitatively or quantitatively different from the normal distribution of intelligence. We report results from a sibling and twin study of high intelligence and its links with the normal distribution. We identified 360,000 sibling pairs and 9000 twin pairs from 3 million 18-year-old males with cognitive assessments administered as part of conscription to military service in Sweden between 1968 and 2010. We found that high intelligence is familial, heritable, and caused by the same genetic and environmental factors responsible for the normal distribution of intelligence. High intelligence is a good candidate for “positive genetics” — going beyond the negative effects of DNA sequence variation on disease and disorders to consider the positive end of the distribution of genetic effects. PMID:25593376

  20. Dysfunctional information processing during an auditory event-related potential task in individuals with Internet gaming disorder

    PubMed Central

    Park, M; Choi, J-S; Park, S M; Lee, J-Y; Jung, H Y; Sohn, B K; Kim, S N; Kim, D J; Kwon, J S

    2016-01-01

    Internet gaming disorder (IGD), which leads to serious impairments in cognitive, psychological and social functions, has gradually been increasing. However, very few studies conducted to date have addressed the event-related potential (ERP) patterns in IGD. Identifying the neurobiological characteristics of IGD is important to elucidate the pathophysiology of this condition. P300 is a useful ERP component for investigating electrophysiological features of the brain. The aims of the present study were to investigate differences between patients with IGD and healthy controls (HCs) with regard to the P300 component of the ERP during an auditory oddball task, and to examine the relationship of this component to the severity of IGD symptoms, in order to identify the relevant neurophysiological features of IGD. Twenty-six patients diagnosed with IGD and 23 age-, sex-, education- and intelligence quotient-matched HCs participated in this study. During an auditory oddball task, participants had to respond to the rare, deviant tones presented in a sequence of frequent, standard tones. The IGD group exhibited a significant reduction in response to deviant tones compared with the HC group in the P300 amplitudes at the midline centro-parietal electrode regions. We also found a negative correlation between the severity of IGD and P300 amplitudes. The reduced amplitude of the P300 component in an auditory oddball task may reflect dysfunction in auditory information processing and cognitive capabilities in IGD. These findings suggest that reduced P300 amplitudes may be a candidate neurobiological marker for IGD. PMID:26812042

  2. Examining neural plasticity and cognitive benefit through the unique lens of musical training.

    PubMed

    Moreno, Sylvain; Bidelman, Gavin M

    2014-02-01

    Training programs aimed at alleviating or improving auditory-cognitive abilities have either experienced mixed success or remain to be fully validated. The limited benefits of such regimens are largely attributable to our weak understanding of (i) how (and which) interventions provide the most robust and long-lasting improvements to cognitive and perceptual abilities and (ii) how the neural mechanisms which underlie such abilities are positively modified by certain activities and experience. Recent studies indicate that music training provides robust, long-lasting biological benefits to auditory function. Importantly, the behavioral advantages conferred by musical experience extend beyond simple enhancements to perceptual abilities and even impact non-auditory functions necessary for higher-order aspects of cognition (e.g., working memory, intelligence). Collectively, preliminary findings indicate that alternative forms of arts engagement (e.g., visual arts training) may not yield such widespread enhancements, suggesting that music expertise uniquely taps and refines a hierarchy of brain networks subserving a variety of auditory as well as domain-general cognitive mechanisms. We infer that transfer from specific music experience to broad cognitive benefit might be mediated by the degree to which a listener's musical training tunes lower- (e.g., perceptual) and higher-order executive functions, and the coordination between these processes. Ultimately, understanding the broad impact of music on the brain will not only provide a more holistic picture of auditory processing and plasticity, but may help inform and tailor remediation and training programs designed to improve perceptual and cognitive benefits in human listeners. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Desired clearance around a vehicle while parking or performing low speed maneuvers.

    DOT National Transportation Integrated Search

    2004-10-01

    This experiment examined how close to objects (such as a wall or another vehicle) people would drive when parking. The findings will be used as a basis for visual and/or auditory warnings provided by parking assistance systems. A total of 16 peopl...

  4. Tone Language Speakers and Musicians Share Enhanced Perceptual and Cognitive Abilities for Musical Pitch: Evidence for Bidirectionality between the Domains of Language and Music

    PubMed Central

    Bidelman, Gavin M.; Hutka, Stefanie; Moreno, Sylvain

    2013-01-01

    Psychophysiological evidence suggests that music and language are intimately coupled such that experience/training in one domain can influence processing required in the other domain. While the influence of music on language processing is now well-documented, evidence of language-to-music effects has yet to be firmly established. Here, using a cross-sectional design, we compared the performance of musicians to that of tone-language (Cantonese) speakers on tasks of auditory pitch acuity, music perception, and general cognitive ability (e.g., fluid intelligence, working memory). While musicians demonstrated superior performance on all auditory measures, comparable perceptual enhancements were observed for Cantonese participants, relative to English-speaking nonmusicians. These results provide evidence that tone-language background is associated with higher auditory perceptual performance for music listening. Musicians and Cantonese speakers also showed superior working memory capacity relative to nonmusician controls, suggesting that in addition to basic perceptual enhancements, tone-language background and music training might also be associated with enhanced general cognitive abilities. Our findings support the notion that tone-language speakers and musically trained individuals have higher performance than English-speaking listeners for the perceptual-cognitive processing necessary for basic auditory as well as complex music perception. These results illustrate bidirectional influences between the domains of music and language. PMID:23565267

  5. Aberrant interference of auditory negative words on attention in patients with schizophrenia.

    PubMed

    Iwashiro, Norichika; Yahata, Noriaki; Kawamuro, Yu; Kasai, Kiyoto; Yamasue, Hidenori

    2013-01-01

    Previous research suggests that deficits in attention-emotion interaction are implicated in schizophrenia symptoms. Although disruption in auditory processing is crucial in the pathophysiology of schizophrenia, deficits in the interaction between emotional processing of auditorily presented language stimuli and auditory attention have not yet been clarified. To address this issue, the current study used a dichotic listening task to examine 22 patients with schizophrenia and 24 age-, sex-, parental socioeconomic background-, handedness-, dexterous ear-, and intelligence quotient-matched healthy controls. The participants completed a word recognition task on the attended side in which a word with emotionally valenced content (negative/positive/neutral) was presented to one ear and a different neutral word was presented to the other ear. Participants selectively attended to either ear. In the control subjects, presentation of negative but not positive word stimuli provoked a significantly prolonged reaction time compared with presentation of neutral word stimuli. This interference effect for negative words existed whether or not subjects directed attention to the negative words. The interference effect was significantly smaller in the patients with schizophrenia than in the healthy controls. Furthermore, a smaller interference effect was significantly correlated with more severe positive symptoms and delusional behavior in the patients with schizophrenia. The present findings suggest that aberrant interaction between semantic processing of negative emotional content and auditory attention plays a role in the production of positive symptoms in schizophrenia.

  6. Investigation of in-vehicle speech intelligibility metrics for normal hearing and hearing impaired listeners

    NASA Astrophysics Data System (ADS)

    Samardzic, Nikolina

    The effectiveness of in-vehicle speech communication can be a good indicator of the perception of the overall vehicle quality and customer satisfaction. Currently available speech intelligibility metrics do not account for essential parameters needed for a complete and accurate evaluation of in-vehicle speech intelligibility. These include the directivity and the distance of the talker with respect to the listener, binaural listening, hearing profile of the listener, vocal effort, and multisensory hearing. In the first part of this research the effectiveness of in-vehicle application of these metrics is investigated in a series of studies to reveal their shortcomings, including a wide range of scores resulting from each of the metrics for a given measurement configuration and vehicle operating condition. In addition, the nature of a possible correlation between the scores obtained from each metric is unknown. The metrics and the subjective perception of speech intelligibility using, for example, the same speech material have not been compared in the literature. As a result, in the second part of this research, an alternative method for speech intelligibility evaluation is proposed for use in the automotive industry by utilizing a virtual reality driving environment for ultimately setting targets, including the associated statistical variability, for future in-vehicle speech intelligibility evaluation. The Speech Intelligibility Index (SII) was evaluated at the sentence Speech Reception Threshold (sSRT) for various listening situations and hearing profiles using acoustic perception jury testing and a variety of talker and listener configurations and background noise. In addition, the effect of individual sources and transfer paths of sound in an operating vehicle to the vehicle interior sound, specifically their effect on speech intelligibility, was quantified in the framework of the newly developed speech intelligibility evaluation method. 
Lastly, as an example of the significance of speech intelligibility evaluation in the context of an applicable listening environment, as indicated in this research, it was found that the jury test participants required on average an approximate 3 dB increase in sound pressure level of speech material while driving and listening compared to when just listening, for an equivalent speech intelligibility performance and the same listening task.
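
    At its core, the Speech Intelligibility Index used in this evaluation is a band-importance-weighted sum of audibility. The sketch below shows that core computation under the common simplification that per-band audibility is (SNR + 15)/30 clipped to [0, 1]; the band SNRs and importance weights are illustrative, and the full ANSI S3.5 procedure adds corrections (speech level distortion, masking) that are omitted here.

```python
def sii(snr_db, importance):
    """Simplified Speech Intelligibility Index.

    snr_db     -- per-band speech-to-noise ratios in dB
    importance -- per-band importance weights, summing to 1

    Each band's audibility is mapped from its SNR via (SNR + 15)/30,
    clipped to [0, 1], then weighted by band importance. SII ranges
    from 0 (inaudible) to 1 (fully audible).
    """
    assert abs(sum(importance) - 1.0) < 1e-6, "importance must sum to 1"
    total = 0.0
    for snr, imp in zip(snr_db, importance):
        audibility = min(max((snr + 15.0) / 30.0, 0.0), 1.0)
        total += imp * audibility
    return total
```

    Under this simplification, a uniform 3 dB increase in speech level (as the jury test participants required while driving) raises every band's SNR by 3 dB and hence raises the index by up to 0.1 before clipping.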

  7. Sleep Disrupts High-Level Speech Parsing Despite Significant Basic Auditory Processing.

    PubMed

    Makov, Shiri; Sharon, Omer; Ding, Nai; Ben-Shachar, Michal; Nir, Yuval; Zion Golumbic, Elana

    2017-08-09

    The extent to which the sleeping brain processes sensory information remains unclear. This is particularly true for continuous and complex stimuli such as speech, in which information is organized into hierarchically embedded structures. Recently, novel metrics for assessing the neural representation of continuous speech have been developed using noninvasive brain recordings that have thus far only been tested during wakefulness. Here we investigated, for the first time, the sleeping brain's capacity to process continuous speech at different hierarchical levels using a newly developed Concurrent Hierarchical Tracking (CHT) approach that allows monitoring the neural representation and processing-depth of continuous speech online. Speech sequences were compiled with syllables, words, phrases, and sentences occurring at fixed time intervals such that different linguistic levels correspond to distinct frequencies. This enabled us to distinguish their neural signatures in brain activity. We compared the neural tracking of intelligible versus unintelligible (scrambled and foreign) speech across states of wakefulness and sleep using high-density EEG in humans. We found that neural tracking of stimulus acoustics was comparable across wakefulness and sleep and similar across all conditions regardless of speech intelligibility. In contrast, neural tracking of higher-order linguistic constructs (words, phrases, and sentences) was only observed for intelligible speech during wakefulness and could not be detected at all during nonrapid eye movement or rapid eye movement sleep. These results suggest that, whereas low-level auditory processing is relatively preserved during sleep, higher-level hierarchical linguistic parsing is severely disrupted, thereby revealing the capacity and limits of language processing during sleep. 
SIGNIFICANCE STATEMENT Despite the persistence of some sensory processing during sleep, it is unclear whether high-level cognitive processes such as speech parsing are also preserved. We used a novel approach for studying the depth of speech processing across wakefulness and sleep while tracking neuronal activity with EEG. We found that responses to the auditory sound stream remained intact; however, the sleeping brain did not show signs of hierarchical parsing of the continuous stream of syllables into words, phrases, and sentences. The results suggest that sleep imposes a functional barrier between basic sensory processing and high-level cognitive processing. This paradigm also holds promise for studying residual cognitive abilities in a wide array of unresponsive states. Copyright © 2017 the authors 0270-6474/17/377772-10$15.00/0.

  8. Verbal short-term memory and vocabulary learning in polyglots.

    PubMed

    Papagno, C; Vallar, G

    1995-02-01

    Polyglot and non-polyglot Italian subjects were given tests assessing verbal (phonological) and visuo-spatial short-term and long-term memory, general intelligence, and vocabulary knowledge in their native language. Polyglots had a superior level of performance in verbal short-term memory tasks (auditory digit span and nonword repetition) and in a paired-associate learning test, which assessed the subjects' ability to acquire new (Russian) words. By contrast, the two groups had comparable performance levels in tasks assessing general intelligence, visuo-spatial short-term memory and learning, and paired-associate learning of Italian words. These findings, which are in line with neuropsychological and developmental evidence, as well as with data from normal subjects, suggest a close relationship between the capacity of phonological memory and the acquisition of foreign languages.

  9. Autonomous intelligent cars: proof that the EPSRC Principles are future-proof

    NASA Astrophysics Data System (ADS)

    de Cock Buning, Madeleine; de Bruin, Roeland

    2017-07-01

    Principle 2 of the EPSRC's principles of robotics (AISB workshop on Principles of Robotics, 2016) proves to be future-proof when applied to the current state of the art of law and technology surrounding autonomous intelligent cars (AICs). Humans, not AICs, are responsible agents. AICs should be designed and operated, as far as is practicable, to comply with existing laws and fundamental rights and freedoms, including privacy by design. The paper shows that some legal questions arising from autonomous intelligent driving technology can be answered by the technology itself.

  10. Supertaskers: Profiles in extraordinary multitasking ability.

    PubMed

    Watson, Jason M; Strayer, David L

    2010-08-01

    Theory suggests that driving should be impaired for any motorist who is concurrently talking on a cell phone. But is everybody impaired by this dual-task combination? We tested 200 participants in a high-fidelity driving simulator in both single- and dual-task conditions. The dual task involved driving while performing a demanding auditory version of the operation span (OSPAN) task. Whereas the vast majority of participants showed significant performance decrements in dual-task conditions (compared with single-task conditions for either driving or OSPAN tasks), 2.5% of the sample showed absolutely no performance decrements with respect to performing single and dual tasks. In single-task conditions, these "supertaskers" scored in the top quartile on all dependent measures associated with driving and OSPAN tasks, and Monte Carlo simulations indicated that the frequency of supertaskers was significantly greater than chance. These individual differences help to sharpen our theoretical understanding of attention and cognitive control in naturalistic settings.

  11. Consequences of Stimulus Type on Higher-Order Processing in Single-Sided Deaf Cochlear Implant Users.

    PubMed

    Finke, Mareike; Sandmann, Pascale; Bönitz, Hanna; Kral, Andrej; Büchner, Andreas

    2016-01-01

    Single-sided deaf subjects with a cochlear implant (CI) provide the unique opportunity to compare central auditory processing of the electrical input (CI ear) and the acoustic input (normal-hearing, NH, ear) within the same individual. In these individuals, sensory processing differs between their two ears, while cognitive abilities are the same irrespective of the sensory input. To better understand perceptual-cognitive factors modulating speech intelligibility with a CI, this electroencephalography study examined the central-auditory processing of words, the cognitive abilities, and the speech intelligibility in 10 postlingually single-sided deaf CI users. We found lower hit rates and prolonged response times for word classification during an oddball task for the CI ear when compared with the NH ear. Also, event-related potentials reflecting sensory (N1) and higher-order processing (N2/N4) were prolonged for word classification (targets versus nontargets) with the CI ear compared with the NH ear. Our results suggest that speech processing via the CI ear and the NH ear differs both at sensory (N1) and cognitive (N2/N4) processing stages, thereby affecting the behavioral performance for speech discrimination. These results provide objective evidence for cognition to be a key factor for speech perception under adverse listening conditions, such as the degraded speech signal provided from the CI. © 2016 S. Karger AG, Basel.

  12. The impact of self-driving cars on existing transportation networks

    NASA Astrophysics Data System (ADS)

    Ji, Xiang

    2018-04-01

    In this paper, considering the adoption of self-driving cars, I study the congestion problems of traffic networks at both macro and micro levels. Firstly, the macroscopic mathematical model is established using the Greenshields function, the analytic hierarchy process, and Monte Carlo simulation, where congestion is divided into five levels according to the average vehicle speed. Roads with obvious congestion are investigated first, and their traffic flow and topology are analyzed. By processing the data, I propose a traffic congestion model. In the model, I assume that half of the non-self-driving cars only take the shortest route and the other half choose their paths randomly, while self-driving cars can obtain vehicle-density data for each road and choose paths more reasonably. When a path's traffic density exceeds a specified value, it cannot be selected. To overcome the dimensional differences of the data, I rate the paths by BORDA sorting. A Monte Carlo simulation of the cellular automaton is used to obtain negative-feedback information about the density of the traffic network, with vehicles added to the road network one by one. I then analyze the influence of this negative-feedback information on the path selection of intelligent cars. The conclusion is that increasing the proportion of intelligent vehicles makes the road load more balanced, and the self-driving cars can avoid the peak and reduce the degree of road congestion. Combined with other models, the optimal self-driving ratio is about sixty-two percent. From the microscopic aspect, another model based on the single-lane NS (Nagel-Schreckenberg) traffic rule is established to analyze the road partition scheme. Self-driving traffic is more intelligent, and cooperation among self-driving cars can reduce the random-deceleration probability. From the model, I obtain space-time distributions for different self-driving ratios. 
    I also simulate the case of a dedicated self-driving lane and compare it with the former model. It is concluded that a dedicated lane is more efficient within a certain interval, but offering a separate lane is not recommended. Self-driving also faces the problems of hacker attacks and greater damage after a fault. Thus, when the self-driving ratio exceeds a certain value, the increase in traffic flow rate is small; that value is discussed, and the optimal proportion is determined. Finally, I give a nontechnical explanation of the problem.
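
    The single-lane NS rule used in the microscopic part of the model is the Nagel-Schreckenberg cellular automaton. A minimal sketch of one synchronous update on a circular road follows; the update steps (accelerate, keep a safe gap, brake at random, advance) are the standard NS rules, and all parameter values are illustrative rather than taken from the paper. A lower `p_brake` stands in for the reduced random-deceleration probability attributed to cooperating self-driving cars.

```python
import random

def ns_step(positions, speeds, road_len, v_max=5, p_brake=0.3):
    """One synchronous Nagel-Schreckenberg update on a circular road.

    positions/speeds are parallel lists with cars sorted by position.
    Returns the new (positions, speeds) after one time step.
    """
    n = len(positions)
    new_speeds = []
    for i in range(n):
        v = min(speeds[i] + 1, v_max)                      # 1. accelerate
        gap = (positions[(i + 1) % n] - positions[i]) % road_len
        if gap == 0:                                       # only car on the ring
            gap = road_len
        v = min(v, gap - 1)                                # 2. keep a safe gap
        if v > 0 and random.random() < p_brake:            # 3. random braking
            v -= 1
        new_speeds.append(v)
    new_positions = [(positions[i] + v) % road_len
                     for i, v in enumerate(new_speeds)]
    return new_positions, new_speeds
```

    Iterating `ns_step` and recording positions over time yields the space-time distributions the abstract refers to; mixing cars with different `p_brake` values models mixed self-driving ratios.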

  13. The Study of Intelligent Vehicle Navigation Path Based on Behavior Coordination of Particle Swarm.

    PubMed

    Han, Gaining; Fu, Weiping; Wang, Wen

    2016-01-01

    In the behavior dynamics model, behavior competition leads to the shock problem of the intelligent vehicle navigation path, because of the simultaneous occurrence of the time-variant target behavior and obstacle avoidance behavior. Considering the safety and real-time requirements of the intelligent vehicle, the particle swarm optimization (PSO) algorithm is proposed to solve these problems by optimizing the weight coefficients of the heading angle and the path velocity. Firstly, according to the behavior dynamics model, the fitness function is defined in terms of the intelligent vehicle's driving characteristics, the distance between the vehicle and obstacles, and the distance between the vehicle and the target. Secondly, behavior coordination parameters that minimize the fitness function are obtained by the particle swarm optimization algorithm. Finally, the simulation results show that the optimization method and its fitness function can reduce perturbations of the planned path and improve real-time performance and reliability.

  14. The Study of Intelligent Vehicle Navigation Path Based on Behavior Coordination of Particle Swarm

    PubMed Central

    Han, Gaining; Fu, Weiping; Wang, Wen

    2016-01-01

    In the behavior dynamics model, behavior competition leads to the shock problem of the intelligent vehicle navigation path, because of the simultaneous occurrence of the time-variant target behavior and obstacle avoidance behavior. Considering the safety and real-time requirements of the intelligent vehicle, the particle swarm optimization (PSO) algorithm is proposed to solve these problems by optimizing the weight coefficients of the heading angle and the path velocity. Firstly, according to the behavior dynamics model, the fitness function is defined in terms of the intelligent vehicle's driving characteristics, the distance between the vehicle and obstacles, and the distance between the vehicle and the target. Secondly, behavior coordination parameters that minimize the fitness function are obtained by the particle swarm optimization algorithm. Finally, the simulation results show that the optimization method and its fitness function can reduce perturbations of the planned path and improve real-time performance and reliability. PMID:26880881
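
    The weight-coefficient tuning described above can be illustrated with a generic PSO loop. This is a sketch, not the authors' implementation: the stand-in fitness function, bounds, and swarm parameters are assumptions, with `dim=2` standing in for the two behavior-coordination weights (heading angle and path velocity) and `fitness` for the vehicle-specific cost, which the abstract does not give in closed form.

```python
import random

def pso(fitness, dim=2, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal particle swarm optimization: minimizes `fitness` over a box.

    Each particle tracks its personal best (pbest); the swarm tracks a
    global best (gbest). Velocities blend inertia, attraction to pbest,
    and attraction to gbest, with positions clamped to [lo, hi].
    """
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            f = fitness(pos[i])
            if f < pbest_f[i]:                 # update personal best
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:                # update global best
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

    In the paper's setting, `fitness` would encode the driving characteristics and the obstacle and target distances, and the returned `gbest` would be the coordination weights for the heading angle and path velocity.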

  15. Speech Recognition and Parent Ratings From Auditory Development Questionnaires in Children Who Are Hard of Hearing.

    PubMed

    McCreery, Ryan W; Walker, Elizabeth A; Spratford, Meredith; Oleson, Jacob; Bentler, Ruth; Holte, Lenore; Roush, Patricia

    2015-01-01

    Progress has been made in recent years in the provision of amplification and early intervention for children who are hard of hearing. However, children who use hearing aids (HAs) may have inconsistent access to their auditory environment due to limitations in speech audibility through their HAs or limited HA use. The effects of variability in children's auditory experience on parent-reported auditory skills questionnaires and on speech recognition in quiet and in noise were examined for a large group of children who were followed as part of the Outcomes of Children with Hearing Loss study. Parent ratings on auditory development questionnaires and children's speech recognition were assessed for 306 children who are hard of hearing. Children ranged in age from 12 months to 9 years. Three questionnaires involving parent ratings of auditory skill development and behavior were used, including the LittlEARS Auditory Questionnaire, Parents Evaluation of Oral/Aural Performance in Children rating scale, and an adaptation of the Speech, Spatial, and Qualities of Hearing scale. Speech recognition in quiet was assessed using the Open- and Closed-Set Test, Early Speech Perception test, Lexical Neighborhood Test, and Phonetically Balanced Kindergarten word lists. Speech recognition in noise was assessed using the Computer-Assisted Speech Perception Assessment. Children who are hard of hearing were compared with peers with normal hearing matched for age, maternal educational level, and nonverbal intelligence. The effects of aided audibility, HA use, and language ability on parent responses to auditory development questionnaires and on children's speech recognition were also examined. Children who are hard of hearing had poorer performance than peers with normal hearing on parent ratings of auditory skills and had poorer speech recognition. Significant individual variability among children who are hard of hearing was observed. 
    Children with greater aided audibility through their HAs, more hours of HA use, and better language abilities generally had higher parent ratings of auditory skills and better speech-recognition abilities in quiet and in noise than peers with less audibility, more limited HA use, or poorer language abilities. In addition to the auditory and language factors that were predictive for speech recognition in quiet, phonological working memory was also a positive predictor for word recognition abilities in noise. Children who are hard of hearing continue to experience delays in auditory skill development and speech-recognition abilities compared with peers with normal hearing. However, significant improvements in these domains have occurred in comparison to similar data reported before the adoption of universal newborn hearing screening and early intervention programs for children who are hard of hearing. Increasing the audibility of speech has a direct positive effect on auditory skill development and speech-recognition abilities and also may enhance these skills by improving language abilities in children who are hard of hearing. A greater number of hours of HA use also had a significant positive impact on parent ratings of auditory skills and children's speech recognition.

  16. Auditory plasticity in deaf children with bilateral cochlear implants

    NASA Astrophysics Data System (ADS)

    Litovsky, Ruth

    2005-04-01

    Human children with cochlear implants represent a unique population of individuals who have undergone variable amounts of auditory deprivation prior to being able to hear. Even more unique are children who received bilateral cochlear implants (BICIs), in sequential surgical procedures, several years apart. Auditory deprivation in these individuals consists of a two-stage process, whereby complete deafness is experienced initially, followed by deafness in one ear. We studied the effects of post-implant experience on the ability of deaf children to localize sounds and to understand speech in noise. These are two of the most important functions that are known to depend on binaural hearing. Children were tested at time intervals ranging from 3-months to 24-months following implantation of the second ear, while listening with either implant alone or bilaterally. Our findings suggest that the period during which plasticity occurs in human binaural system is protracted, extending into middle-to-late childhood. The rate at which benefits from bilateral hearing abilities are attained following deprivation is faster for speech intelligibility in noise compared with sound localization. Finally, the age at which the second implant was received may play an important role in the acquisition of binaural abilities. [Work supported by NIH-NIDCD.]

  17. Intelligibility of speech in a virtual 3-D environment.

    PubMed

    MacDonald, Justin A; Balakrishnan, J D; Orosz, Michael D; Karplus, Walter J

    2002-01-01

    In a simulated air traffic control task, improvement in the detection of auditory warnings when using virtual 3-D audio depended on the spatial configuration of the sounds. Performance improved substantially when two of four sources were placed to the left and the remaining two were placed to the right of the participant. Surprisingly, little or no benefits were observed for configurations involving the elevation or transverse (front/back) dimensions of virtual space, suggesting that position on the interaural (left/right) axis is the crucial factor to consider in auditory display design. The relative importance of interaural spacing effects was corroborated in a second, free-field (real space) experiment. Two additional experiments showed that (a) positioning signals to the side of the listener is superior to placing them in front even when two sounds are presented in the same location, and (b) the optimal distance on the interaural axis varies with the amplitude of the sounds. These results are well predicted by the behavior of an ideal observer under the different display conditions. This suggests that guidelines for auditory display design that allow for effective perception of speech information can be developed from an analysis of the physical sound patterns.

  18. Cooperative Adaptive Cruise Control Human Factors Study : Experiment 3 : The Role of Automated Braking and Auditory Alert in Collision Avoidance Response

    DOT National Transportation Integrated Search

    2016-12-01

    This report is the third in a series of four human factors experiments to examine the effects of cooperative adaptive cruise control (CACC) on driver performance in a variety of situations. The experiment reported here was conducted in a driving simu...

  19. Sensory-motor interactions for vocal pitch monitoring in non-primary human auditory cortex.

    PubMed

    Greenlee, Jeremy D W; Behroozmand, Roozbeh; Larson, Charles R; Jackson, Adam W; Chen, Fangxiang; Hansen, Daniel R; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A

    2013-01-01

    The neural mechanisms underlying processing of auditory feedback during self-vocalization are poorly understood. One technique used to study the role of auditory feedback involves shifting the pitch of the feedback that a speaker receives, known as pitch-shifted feedback. We utilized a pitch shift self-vocalization and playback paradigm to investigate the underlying neural mechanisms of audio-vocal interaction. High-resolution electrocorticography (ECoG) signals were recorded directly from auditory cortex of 10 human subjects while they vocalized and received brief downward (-100 cents) pitch perturbations in their voice auditory feedback (speaking task). ECoG was also recorded when subjects passively listened to playback of their own pitch-shifted vocalizations. Feedback pitch perturbations elicited average evoked potential (AEP) and event-related band power (ERBP) responses, primarily in the high gamma (70-150 Hz) range, in focal areas of non-primary auditory cortex on superior temporal gyrus (STG). The AEPs and high gamma responses were both modulated by speaking compared with playback in a subset of STG contacts. From these contacts, a majority showed significant enhancement of high gamma power and AEP responses during speaking while the remaining contacts showed attenuated response amplitudes. The speaking-induced enhancement effect suggests that engaging the vocal motor system can modulate auditory cortical processing of self-produced sounds in such a way as to increase neural sensitivity for feedback pitch error detection. It is likely that mechanisms such as efference copies may be involved in this process, and modulation of AEP and high gamma responses imply that such modulatory effects may affect different cortical generators within distinctive functional networks that drive voice production and control.

  20. Sensory-Motor Interactions for Vocal Pitch Monitoring in Non-Primary Human Auditory Cortex

    PubMed Central

    Larson, Charles R.; Jackson, Adam W.; Chen, Fangxiang; Hansen, Daniel R.; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A.

    2013-01-01

    The neural mechanisms underlying processing of auditory feedback during self-vocalization are poorly understood. One technique used to study the role of auditory feedback involves shifting the pitch of the feedback that a speaker receives, known as pitch-shifted feedback. We utilized a pitch shift self-vocalization and playback paradigm to investigate the underlying neural mechanisms of audio-vocal interaction. High-resolution electrocorticography (ECoG) signals were recorded directly from auditory cortex of 10 human subjects while they vocalized and received brief downward (−100 cents) pitch perturbations in their voice auditory feedback (speaking task). ECoG was also recorded when subjects passively listened to playback of their own pitch-shifted vocalizations. Feedback pitch perturbations elicited average evoked potential (AEP) and event-related band power (ERBP) responses, primarily in the high gamma (70–150 Hz) range, in focal areas of non-primary auditory cortex on superior temporal gyrus (STG). The AEPs and high gamma responses were both modulated by speaking compared with playback in a subset of STG contacts. From these contacts, a majority showed significant enhancement of high gamma power and AEP responses during speaking while the remaining contacts showed attenuated response amplitudes. The speaking-induced enhancement effect suggests that engaging the vocal motor system can modulate auditory cortical processing of self-produced sounds in such a way as to increase neural sensitivity for feedback pitch error detection. It is likely that mechanisms such as efference copies may be involved in this process, and modulation of AEP and high gamma responses imply that such modulatory effects may affect different cortical generators within distinctive functional networks that drive voice production and control. PMID:23577157

  1. Is More Better? - Night Vision Enhancement System's Pedestrian Warning Modes and Older Drivers.

    PubMed

    Brown, Timothy; He, Yefei; Roe, Cheryl; Schnell, Thomas

    2010-01-01

    Pedestrian fatalities as a result of vehicle collisions are much more likely to happen at night than during day time. Poor visibility due to darkness is believed to be one of the causes for the higher vehicle collision rate at night. Existing studies have shown that night vision enhancement systems (NVES) may improve recognition distance, but may increase drivers' workload. The use of automatic warnings (AW) may help minimize workload, improve performance, and increase safety. In this study, we used a driving simulator to examine performance differences of a NVES with six different configurations of warning cues, including: visual, auditory, tactile, auditory and visual, tactile and visual, and no warning. Older drivers between the ages of 65 and 74 participated in the study. An analysis based on the distance to pedestrian threat at the onset of braking response revealed that tactile and auditory warnings performed the best, while visual warnings performed the worst. When tactile or auditory warnings were presented in combination with visual warning, their effectiveness decreased. This result demonstrated that, contrary to the general intuition regarding warning systems, multi-modal warnings involving visual cues degraded the effectiveness of NVES for older drivers.

  2. Is More Better? — Night Vision Enhancement System’s Pedestrian Warning Modes and Older Drivers

    PubMed Central

    Brown, Timothy; He, Yefei; Roe, Cheryl; Schnell, Thomas

    2010-01-01

    Pedestrian fatalities as a result of vehicle collisions are much more likely to happen at night than during day time. Poor visibility due to darkness is believed to be one of the causes for the higher vehicle collision rate at night. Existing studies have shown that night vision enhancement systems (NVES) may improve recognition distance, but may increase drivers' workload. The use of automatic warnings (AW) may help minimize workload, improve performance, and increase safety. In this study, we used a driving simulator to examine performance differences of a NVES with six different configurations of warning cues, including: visual, auditory, tactile, auditory and visual, tactile and visual, and no warning. Older drivers between the ages of 65 and 74 participated in the study. An analysis based on the distance to pedestrian threat at the onset of braking response revealed that tactile and auditory warnings performed the best, while visual warnings performed the worst. When tactile or auditory warnings were presented in combination with visual warning, their effectiveness decreased. This result demonstrated that, contrary to the general intuition regarding warning systems, multi-modal warnings involving visual cues degraded the effectiveness of NVES for older drivers. PMID:21050616

  3. [Effects of auditory integrative training on autistic children].

    PubMed

    Zhang, Gai-qiao; Gong, Qun; Zhang, Feng-ling; Chen, Sun-min; Hu, Li-qun; Liu, Feng; Cui, Rui-hua; He, Lin

    2009-08-18

    To explore the short-term treatment effect of auditory integrative training (AIT) on autistic children and provide clinical support for their rehabilitative treatment. A total of 81 autistic children were selected according to DSM-IV criteria, and a clinical case study design was used. They were divided randomly into an experimental group and a control group; in addition to multiple baseline therapies, the experimental group received auditory integrative training and the control group did not. The patients were assessed on clinical manifestations, the Autism Behavior Checklist (ABC), and intelligence quotient (IQ) before and after six months of treatment. The effect was evaluated through changes in clinical manifestations and in ABC and IQ scores; IQ scores were determined with the Gesell scales and the WPPSI or WISC-R. Compared with the 40 patients of the control group, after six months of auditory integrative training the 41 patients of the experimental group had greatly improved in many aspects, such as language disorders, social interaction, and typical behavioral symptoms, while their abnormal behaviors were unchanged. Their IQ or DQ scores had increased and their ABC scores had dropped; the differences between the two groups were statistically significant (P < 0.01). The decrease in ABC scores and the increase in IQ scores were both negatively correlated with age, and the decrease in ABC scores was positively correlated (by linear regression) with baseline IQ. Auditory integrative training can greatly improve language disorders, difficulties in social interaction, typical behavioral symptoms, and developmental levels, indicating a positive short-term treatment effect for autistic children.

  4. Recognizing Spoken Words: The Neighborhood Activation Model

    PubMed Central

    Luce, Paul A.; Pisoni, David B.

    2012-01-01

    Objective A fundamental problem in the study of human spoken word recognition concerns the structural relations among the sound patterns of words in memory and the effects these relations have on spoken word recognition. In the present investigation, computational and experimental methods were employed to address a number of fundamental issues related to the representation and structural organization of spoken words in the mental lexicon and to lay the groundwork for a model of spoken word recognition. Design Using a computerized lexicon consisting of transcriptions of 20,000 words, similarity neighborhoods for each of the transcriptions were computed. Among the variables of interest in the computation of the similarity neighborhoods were: 1) the number of words occurring in a neighborhood, 2) the degree of phonetic similarity among the words, and 3) the frequencies of occurrence of the words in the language. The effects of these variables on auditory word recognition were examined in a series of behavioral experiments employing three experimental paradigms: perceptual identification of words in noise, auditory lexical decision, and auditory word naming. Results The results of each of these experiments demonstrated that the number and nature of words in a similarity neighborhood affect the speed and accuracy of word recognition. A neighborhood probability rule was developed that adequately predicted identification performance. This rule, based on Luce's (1959) choice rule, combines stimulus word intelligibility, neighborhood confusability, and frequency into a single expression. Based on this rule, a model of auditory word recognition, the neighborhood activation model, was proposed. This model describes the effects of similarity neighborhood structure on the process of discriminating among the acoustic-phonetic representations of words in memory. 
The results of these experiments have important implications for current conceptions of auditory word recognition in normal and hearing-impaired populations of children and adults. PMID:9504270
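The neighborhood probability rule summarized above can be sketched as a frequency-weighted, Luce-style choice ratio: the stimulus word's own support (intelligibility times frequency) divided by that support plus the summed support of its similarity neighbors. The function below is an illustrative reconstruction under that reading, not the authors' published formulation; the variable names and toy numbers are assumptions.

```python
def neighborhood_probability(stim_prob, stim_freq, neighbors):
    """Frequency-weighted neighborhood probability rule (Luce-style
    choice ratio).

    stim_prob  -- stimulus-word intelligibility (0..1)
    stim_freq  -- stimulus-word frequency weight
    neighbors  -- iterable of (confusability, frequency) pairs,
                  one per similarity neighbor
    """
    target = stim_prob * stim_freq
    competition = sum(conf * freq for conf, freq in neighbors)
    return target / (target + competition)

# Toy numbers: the same word is identified more reliably in a sparse,
# low-frequency neighborhood than in a dense, high-frequency one.
sparse = neighborhood_probability(0.8, 50, [(0.1, 10)])
dense = neighborhood_probability(0.8, 50, [(0.3, 200), (0.2, 150), (0.25, 90)])
assert sparse > dense
```

The ratio captures the model's central claim: recognition depends not just on the stimulus word itself but on how much competing support its neighbors contribute.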

  5. Towards an intelligent wheelchair system for users with cerebral palsy.

    PubMed

    Montesano, Luis; Díaz, Marta; Bhaskar, Sonu; Minguez, Javier

    2010-04-01

    This paper describes and evaluates an intelligent wheelchair, adapted for users with cognitive disabilities and mobility impairment. The study focuses on patients with cerebral palsy, one of the most common disorders affecting muscle control and coordination, thereby impairing movement. The wheelchair concept is an assistive device that allows the user to select arbitrary local destinations through a tactile screen interface. The device incorporates an automatic navigation system that drives the vehicle, avoiding obstacles even in unknown and dynamic scenarios. It provides the user with a high degree of autonomy, independent from a particular environment, i.e., not restricted to predefined conditions. To evaluate the rehabilitation device, a study was carried out with four subjects with cognitive impairments, between 11 and 16 years of age. They were first trained so as to get acquainted with the tactile interface and then were recruited to drive the wheelchair. Based on the experience with the subjects, an extensive evaluation of the intelligent wheelchair was provided from two perspectives: 1) based on the technical performance of the entire system and its components and 2) based on the behavior of the user (execution analysis, activity analysis, and competence analysis). The results indicated that the intelligent wheelchair effectively provided mobility and autonomy to the target population.

  6. Attention and driving performance modulations due to anger state: Contribution of electroencephalographic data.

    PubMed

    Techer, Franck; Jallais, Christophe; Corson, Yves; Moreau, Fabien; Ndiaye, Daniel; Piechnick, Bruno; Fort, Alexandra

    2017-01-01

    A driver's internal state, including emotion, can have negative impacts on road safety. Studies have shown that an anger state can provoke aggressive behavior and impair driving performance. Beyond driving itself, anger can also influence attentional processing and increase the benefit derived from auditory alerts. However, to our knowledge, no prior event-related potential study has assessed this impact on attention during simulated driving. Therefore, the aim of this study was to investigate the impact of anger on attentional processing and its consequences on driving performance. For this purpose, 33 participants completed a simulated driving scenario once in an anger state and once during a control session. Results indicated that anger affected both driving performance and attention, provoking an increase in lateral variations while reducing the amplitude of the visual N1 peak. The observed effects are discussed as a result of the high arousal and mind-wandering associated with anger. This kind of physiological data may be used to monitor a driver's internal state and provide specific assistance corresponding to their current needs. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  7. Analysis of masking effects on speech intelligibility with respect to moving sound stimulus

    NASA Astrophysics Data System (ADS)

    Chen, Chiung Yao

    2004-05-01

    The purpose of this study was to compare the degree of speech disturbance caused by a stationary noise source with that caused by an apparently moving one (AMN). In studies of sound localization, source-directional sensitivity (SDS) has been found to be closely associated with the magnitude of the interaural cross-correlation (IACC). Ando et al. [Y. Ando, S. H. Kang, and H. Nagamatsu, J. Acoust. Soc. Jpn. (E) 8, 183-190 (1987)] reported that the correlation of potentials between the left and right inferior colliculus along the auditory pathway is consistent with the correlation function of the amplitudes entering the two ear-canal entrances. We hypothesized that the degree of disturbance under an apparently moving noise source differs from that under a source fixed in front of the listener at a constant distance in a free field (no reflections). We then found that a moving source and a fixed source, each generated from 1/3-octave narrow-band noise with a center frequency of 2 kHz, influence speech intelligibility differently. However, the contribution of the moving speed to the masking of speech intelligibility remained uncertain.

  8. Temporal Resolution Needed for Auditory Communication: Measurement With Mosaic Speech

    PubMed Central

    Nakajima, Yoshitaka; Matsuda, Mizuki; Ueda, Kazuo; Remijn, Gerard B.

    2018-01-01

    Temporal resolution needed for Japanese speech communication was measured. A new experimental paradigm that can reflect the spectro-temporal resolution necessary for healthy listeners to perceive speech is introduced. As a first step, we report listeners' intelligibility scores for Japanese speech with systematically degraded temporal resolution, so-called “mosaic speech”: speech mosaicized in the coordinates of time and frequency. The results of two experiments show that mosaic speech cut into short static segments was almost perfectly intelligible at a temporal resolution of 40 ms or finer. Intelligibility dropped at a temporal resolution of 80 ms, but was still around the 50%-correct level. The data are in line with previous results showing that speech signals separated into short temporal segments of <100 ms can be remarkably robust in terms of linguistic-content perception against drastic manipulations within each segment, such as partial signal omission or temporal reversal. The human perceptual system thus can extract meaning from unexpectedly rough temporal information in speech. The process resembles that of the visual system stringing together static movie frames of ~40 ms into vivid motion. PMID:29740295
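The mosaicization described above, replacing every fixed time-frequency cell of a spectrogram with its average power, can be sketched on a spectrogram matrix. This is a minimal illustration of the idea, not the authors' processing chain; the cell sizes are placeholders and the resynthesis of the mosaicized spectrogram back to audio is omitted.

```python
import numpy as np

def mosaicize(spectrogram, f_cells, t_cells):
    """Flatten a power spectrogram (frequency x time) into a mosaic:
    each f_cells-by-t_cells cell is replaced by its mean power, so any
    spectro-temporal detail finer than the cell size is discarded."""
    out = spectrogram.astype(float).copy()
    n_freq, n_time = out.shape
    for f0 in range(0, n_freq, f_cells):
        for t0 in range(0, n_time, t_cells):
            cell = out[f0:f0 + f_cells, t0:t0 + t_cells]
            cell[...] = cell.mean()  # in-place: `cell` is a view of `out`
    return out
```

With hypothetical 10-ms analysis frames, `t_cells=4` would correspond to the 40-ms temporal resolution at which intelligibility stayed near ceiling; total power is preserved while within-cell structure is destroyed.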

  9. [The effect of white matter abnormality on auditory and speech rehabilitation after cochlear implantation in prelingually deaf children].

    PubMed

    Zhang, X Y; Liang, M J; Liu, J H; Li, X H; Zhen, Y Q; Weng, Y L

    2017-04-20

    Objective: To investigate the effect of white matter abnormality on auditory and speech rehabilitation after cochlear implantation in prelingually deaf children. Method: Thirty-five children with white matter abnormality were included in this study. The degree of leukoaraiosis was evaluated with the Scheltens scale based on MRI. Hearing and speech recovery was rated with the auditory behavior grading standard (CAP) and the speech intelligibility grading standard (SIR) at 6 months, 12 months, and 24 months post operation. Result: The CAP and SIR scores of the children with white matter abnormality were lower than those of the control group at 6 months after operation (P < 0.05). The SIR scores of the children with white matter abnormality at 12 months and 24 months post operation were also significantly lower than those of the control group, whereas there was no statistically significant difference between the CAP scores of the two groups at 12 and 24 months after operation (P > 0.05). Scheltens classification had a greater impact on SIR scores than on CAP scores. Conclusion: The effect of white matter abnormality on auditory and speech rehabilitation after cochlear implantation was related to the degree of leukoencephalopathy: the larger the white matter lesion, the lower the level of hearing and verbal rehabilitation, with speech rehabilitation affected the most by the degree of the white matter lesions. Copyright© by the Editorial Department of Journal of Clinical Otorhinolaryngology Head and Neck Surgery.

  10. Abnormal Complex Auditory Pattern Analysis in Schizophrenia Reflected in an Absent Missing Stimulus Mismatch Negativity.

    PubMed

    Salisbury, Dean F; McCathern, Alexis G

    2016-11-01

    The simple mismatch negativity (MMN) to tones deviating physically (in pitch, loudness, duration, etc.) from repeated standard tones is robustly reduced in schizophrenia. Although generally interpreted to reflect memory or cognitive processes, simple MMN likely contains some activity from non-adapted sensory cells, clouding what process is affected in schizophrenia. Research in healthy participants has demonstrated that MMN can be elicited by deviations from abstract auditory patterns and complex rules that do not cause sensory adaptation. Whether persons with schizophrenia show abnormalities in the complex MMN is unknown. Fourteen schizophrenia participants and 16 matched healthy controls underwent EEG recording while listening to 400 groups of 6 tones 330 ms apart, separated by 800 ms. Occasional deviant groups were missing the 4th or 6th tone (50 groups each). Healthy participants generated a robust response to a missing but expected tone. The schizophrenia group was significantly impaired in activating the missing stimulus MMN, generating no significant activity at all. Schizophrenia affects the ability of "primitive sensory intelligence" and pre-attentive perceptual mechanisms to form implicit groups in the auditory environment. Importantly, this deficit must relate to abnormalities in abstract complex pattern analysis rather than sensory problems in the disorder. The results indicate a deficit in parsing of the complex auditory scene, which likely has a negative impact on successful social navigation in schizophrenia. Knowledge of the location and circuit architecture underlying the true novelty-related MMN and its pathophysiology in schizophrenia will help target future interventions.

  11. Spectral context affects temporal processing in awake auditory cortex

    PubMed Central

    Beitel, Ralph E.; Vollmer, Maike; Heiser, Marc A; Schreiner, Christoph E.

    2013-01-01

    Amplitude modulation encoding is critical for human speech perception and complex sound processing in general. The modulation transfer function (MTF) is a staple of auditory psychophysics, and has been shown to predict speech intelligibility performance in a range of adverse listening conditions and hearing impairments, including cochlear implant-supported hearing. Although both tonal and broadband carriers have been employed in psychophysical studies of modulation detection and discrimination, relatively little is known about differences in the cortical representation of such signals. We obtained MTFs in response to sinusoidal amplitude modulation (SAM) for both narrowband tonal carriers and 2-octave bandwidth noise carriers in the auditory core of awake squirrel monkeys. MTFs spanning modulation frequencies from 4 to 512 Hz were obtained using 16 channel linear recording arrays sampling across all cortical laminae. Carrier frequency for tonal SAM and center frequency for noise SAM was set at the estimated best frequency for each penetration. Changes in carrier type affected both rate and temporal MTFs in many neurons. Using spike discrimination techniques, we found that discrimination of modulation frequency was significantly better for tonal SAM than for noise SAM, though the differences were modest at the population level. Moreover, spike trains elicited by tonal and noise SAM could be readily discriminated in most cases. Collectively, our results reveal remarkable sensitivity to the spectral content of modulated signals, and indicate substantial interdependence between temporal and spectral processing in neurons of the core auditory cortex. PMID:23719811
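Generating the two kinds of SAM stimuli contrasted above is straightforward: a modulating envelope (1 + m·sin 2πf_m·t) multiplied onto either a tonal or a noise carrier. The sketch below is an illustration with assumed parameter defaults; the actual carrier and modulation frequencies in the study were set per recording site, and restricting the noise carrier to a 2-octave band would require an additional band-pass filtering step, omitted here.

```python
import numpy as np

def sam_tone(fc, fm, depth=1.0, dur=0.5, fs=48000):
    """Sinusoidally amplitude-modulated (SAM) tone:
    (1 + depth*sin(2*pi*fm*t)) * sin(2*pi*fc*t)."""
    t = np.arange(int(dur * fs)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * fm * t)
    return envelope * np.sin(2 * np.pi * fc * t)

def sam_noise(fm, depth=1.0, dur=0.5, fs=48000, seed=0):
    """The same SAM envelope applied to a Gaussian noise carrier
    (broadband; band-limiting around a center frequency omitted)."""
    t = np.arange(int(dur * fs)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * fm * t)
    carrier = np.random.default_rng(seed).standard_normal(t.size)
    return envelope * carrier
```

For example, `sam_tone(2000, 16)` gives a 2-kHz carrier fully modulated at 16 Hz; sweeping `fm` over 4-512 Hz would trace out the stimulus set used for the MTFs.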

  12. Exploring the roles of spectral detail and intonation contour in speech intelligibility: an FMRI study.

    PubMed

    Kyong, Jeong S; Scott, Sophie K; Rosen, Stuart; Howe, Timothy B; Agnew, Zarinah K; McGettigan, Carolyn

    2014-08-01

    The melodic contour of speech forms an important perceptual aspect of tonal and nontonal languages and an important limiting factor on the intelligibility of speech heard through a cochlear implant. Previous work exploring the neural correlates of speech comprehension identified a left-dominant pathway in the temporal lobes supporting the extraction of an intelligible linguistic message, whereas the right anterior temporal lobe showed an overall preference for signals clearly conveying dynamic pitch information [Johnsrude, I. S., Penhune, V. B., & Zatorre, R. J. Functional specificity in the right human auditory cortex for perceiving pitch direction. Brain, 123, 155-163, 2000; Scott, S. K., Blank, C. C., Rosen, S., & Wise, R. J. Identification of a pathway for intelligible speech in the left temporal lobe. Brain, 123, 2400-2406, 2000]. The current study combined modulations of overall intelligibility (through vocoding and spectral inversion) with a manipulation of pitch contour (normal vs. falling) to investigate the processing of spoken sentences in functional MRI. Our overall findings replicate and extend those of Scott et al. [Scott, S. K., Blank, C. C., Rosen, S., & Wise, R. J. Identification of a pathway for intelligible speech in the left temporal lobe. Brain, 123, 2400-2406, 2000], where greater sentence intelligibility was predominantly associated with increased activity in the left STS, and the greatest response to normal sentence melody was found in right superior temporal gyrus. These data suggest a spatial distinction between brain areas associated with intelligibility and those involved in the processing of dynamic pitch information in speech. By including a set of complexity-matched unintelligible conditions created by spectral inversion, this is additionally the first study reporting a fully factorial exploration of spectrotemporal complexity and spectral inversion as they relate to the neural processing of speech intelligibility. Perhaps surprisingly, there was little evidence for an interaction between the two factors; we discuss the implications for the processing of sound and speech in the dorsolateral temporal lobes.

  13. Utilising reinforcement learning to develop strategies for driving auditory neural implants.

    PubMed

    Lee, Geoffrey W; Zambetta, Fabio; Li, Xiaodong; Paolini, Antonio G

    2016-08-01

    In this paper we propose a novel application of reinforcement learning to the area of auditory neural stimulation. We aim to develop a simulation environment based on real neurological responses to auditory and electrical stimulation in the cochlear nucleus (CN) and inferior colliculus (IC) of an animal model. Using this simulator we implement closed-loop reinforcement learning algorithms to determine which methods are most effective at learning effective acoustic neural stimulation strategies. By recording a comprehensive set of acoustic frequency presentations and neural responses from a set of animals, we created a large database of neural responses to acoustic stimulation. Extensive electrical stimulation in the CN and the recording of neural responses in the IC provide a mapping of how the auditory system responds to electrical stimuli. The combined dataset is used as the foundation for the simulator, which is used to implement and test learning algorithms. Reinforcement learning, utilising a modified n-armed bandit solution, is implemented to demonstrate the model's function. We show the ability to effectively learn stimulation patterns which mimic the cochlea's ability to convert acoustic frequencies to neural activity. Learning effective replication via neural stimulation takes less than 20 min under continuous testing. These results show the utility of reinforcement learning in the field of neural stimulation and can be coupled with existing sound processing technologies to develop new auditory prosthetics that adapt to the recipient's current auditory pathway. The same process can theoretically be abstracted to other sensory and motor systems to develop similar electrical replication of neural signals.
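The paper's modified n-armed bandit is not specified in the abstract, but a standard epsilon-greedy bandit conveys the closed-loop idea: each arm is a candidate electrical stimulation pattern, and the reward scores how closely the simulated evoked response matches the target response to the acoustic stimulus. Everything below, including the reward landscape, arm count, and hyperparameters, is an illustrative assumption rather than the authors' algorithm.

```python
import random

def epsilon_greedy_bandit(reward_fn, n_arms, steps=2000, epsilon=0.1, seed=0):
    """Epsilon-greedy n-armed bandit with incremental value estimates.
    Each arm stands for one candidate stimulation pattern; reward_fn(arm)
    scores the match between evoked and target neural responses."""
    rng = random.Random(seed)
    values = [0.0] * n_arms
    counts = [0] * n_arms
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                       # explore
        else:
            arm = max(range(n_arms), key=values.__getitem__)  # exploit
        reward = reward_fn(arm)
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]   # running mean
    return values

# Hypothetical reward landscape: arm 3 best matches the target response.
random.seed(1)
true_match = [0.2, 0.4, 0.5, 0.9, 0.3]
values = epsilon_greedy_bandit(lambda a: true_match[a] + random.gauss(0, 0.05),
                               n_arms=5)
assert max(range(5), key=values.__getitem__) == 3
```

In the paper's setting, the reward would be computed from the recorded CN/IC response database rather than a fixed vector, but the learning loop is the same shape.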

  14. Developmental Profiling of Spiral Ganglion Neurons Reveals Insights into Auditory Circuit Assembly

    PubMed Central

    Lu, Cindy C.; Appler, Jessica M.; Houseman, E. Andres; Goodrich, Lisa V.

    2011-01-01

    The sense of hearing depends on the faithful transmission of sound information from the ear to the brain by spiral ganglion (SG) neurons. However, how SG neurons develop the connections and properties that underlie auditory processing is largely unknown. We catalogued gene expression in mouse SG neurons from embryonic day 12 (E12), when SG neurons first extend projections, up until postnatal day 15 (P15), after the onset of hearing. For comparison, we also analyzed the closely-related vestibular ganglion (VG). Gene ontology analysis confirmed enriched expression of genes associated with gene regulation and neurite outgrowth at early stages, with the SG and VG often expressing different members of the same gene family. At later stages, the neurons transcribe more genes related to mature function, and exhibit a dramatic increase in immune gene expression. Comparisons of the two populations revealed enhanced expression of TGFβ pathway components in SG neurons and established new markers that consistently distinguish auditory and vestibular neurons. Unexpectedly, we found that Gata3, a transcription factor commonly associated with auditory development, is also expressed in VG neurons at early stages. We therefore defined new cohorts of transcription factors and axon guidance molecules that are uniquely expressed in SG neurons and may drive auditory-specific aspects of their differentiation and wiring. We show that one of these molecules, the receptor guanylyl cyclase Npr2, is required for bifurcation of the SG central axon. Hence, our data set provides a useful resource for uncovering the molecular basis of specific auditory circuit assembly events. PMID:21795542

  15. Neuroanatomical Evidence for Catecholamines as Modulators of Audition and Acoustic Behavior in a Vocal Teleost.

    PubMed

    Forlano, Paul M; Sisneros, Joseph A

    2016-01-01

    The plainfin midshipman fish (Porichthys notatus) is a well-studied model to understand the neural and endocrine mechanisms underlying vocal-acoustic communication across vertebrates. It is well established that steroid hormones such as estrogen drive seasonal peripheral auditory plasticity in female Porichthys in order to better encode the male's advertisement call. However, little is known of the neural substrates that underlie the motivation and coordinated behavioral response to auditory social signals. Catecholamines, which include dopamine and noradrenaline, are good candidates for this function, as they are thought to modulate the salience of and reinforce appropriate behavior to socially relevant stimuli. This chapter summarizes our recent studies which aimed to characterize catecholamine innervation in the central and peripheral auditory system of Porichthys as well as test the hypotheses that innervation of the auditory system is seasonally plastic and catecholaminergic neurons are activated in response to conspecific vocalizations. Of particular significance is the discovery of direct dopaminergic innervation of the saccule, the main hearing end organ, by neurons in the diencephalon, which also robustly innervate the cholinergic auditory efferent nucleus in the hindbrain. Seasonal changes in dopamine innervation in both these areas appear dependent on reproductive state in females and may ultimately function to modulate the sensitivity of the peripheral auditory system as an adaptation to the seasonally changing soundscape. Diencephalic dopaminergic neurons are indeed active in response to exposure to midshipman vocalizations and are in a perfect position to integrate the detection and appropriate motor response to conspecific acoustic signals for successful reproduction.

  16. Predictors of Hearing-Aid Outcomes

    PubMed Central

    Johannesen, Peter T.; Pérez-González, Patricia; Blanco, José L.; Kalluri, Sridhar; Edwards, Brent

    2017-01-01

    Over 360 million people worldwide suffer from disabling hearing loss. Most of them can be treated with hearing aids. Unfortunately, performance with hearing aids and the benefit obtained from using them vary widely across users. Here, we investigate the reasons for such variability. Sixty-eight hearing-aid users or candidates were fitted bilaterally with nonlinear hearing aids using standard procedures. Treatment outcome was assessed by measuring aided speech intelligibility in a time-reversed two-talker background and self-reported improvement in hearing ability. Statistical predictive models of these outcomes were obtained using linear combinations of 19 predictors, including demographic and audiological data, indicators of cochlear mechanical dysfunction and auditory temporal processing skills, hearing-aid settings, working memory capacity, and pretreatment self-perceived hearing ability. Aided intelligibility tended to be better for younger hearing-aid users with good unaided intelligibility in quiet and with good temporal processing abilities. Intelligibility tended to improve by increasing amplification for low-intensity sounds and by using more linear amplification for high-intensity sounds. Self-reported improvement in hearing ability was hard to predict but tended to be smaller for users with better working memory capacity. Indicators of cochlear mechanical dysfunction, alone or in combination with hearing settings, did not affect outcome predictions. The results may be useful for improving hearing aids and setting patients’ expectations. PMID:28929903

  17. Design and validation of an intelligent wheelchair towards a clinically-functional outcome.

    PubMed

    Boucher, Patrice; Atrash, Amin; Kelouwani, Sousso; Honoré, Wormser; Nguyen, Hai; Villemure, Julien; Routhier, François; Cohen, Paul; Demers, Louise; Forget, Robert; Pineau, Joelle

    2013-06-17

    Many people with mobility impairments, who require the use of powered wheelchairs, have difficulty completing basic maneuvering tasks during their activities of daily living (ADL). In order to provide assistance to this population, robotic and intelligent system technologies have been used to design an intelligent powered wheelchair (IPW). This paper provides a comprehensive overview of the design and validation of the IPW. The main contributions of this work are three-fold. First, we present a software architecture for robot navigation and control in constrained spaces. Second, we describe a decision-theoretic approach for achieving robust speech-based control of the intelligent wheelchair. Third, we present an evaluation protocol motivated by a meaningful clinical outcome, in the form of the Robotic Wheelchair Skills Test (RWST). This allows us to perform a thorough characterization of the performance and safety of the system, involving 17 test subjects (8 non-PW users, 9 regular PW users), 32 complete RWST sessions, 25 total hours of testing, and 9 kilometers of total running distance. User tests with the RWST show that the navigation architecture reduced collisions by more than 60% compared to other recent intelligent wheelchair platforms. On the tasks of the RWST, we measured an average decrease of 4% in performance score and 3% in safety score (not statistically significant), compared to the scores obtained in the conventional driving mode. This analysis was performed with regular users who had over 6 years of wheelchair driving experience, compared to approximately one half-hour of training with the autonomous mode. The platform tested in these experiments is among the most extensively validated robotic wheelchairs in realistic contexts. The results establish that proficient powered wheelchair users can achieve the same level of performance with the intelligent command mode as with the conventional command mode.

  18. Dense Neighborhoods and Mechanisms of Learning: Evidence from Children with Phonological Delay

    ERIC Educational Resources Information Center

    Gierut, Judith A.; Morrisette, Michele L.

    2015-01-01

    There is a noted advantage of dense neighborhoods in language acquisition, but the learning mechanism that drives the effect is not well understood. Two hypotheses--long-term auditory word priming and phonological working memory--have been advanced in the literature as viable accounts. These were evaluated in two treatment studies enrolling twelve…

  19. Humans, Intelligent Technology, and Their Interface: A Study of Brown’s Point

    DTIC Science & Technology

    2017-12-01

    known about the role of drivers. When combining humans and intelligent technology (machines), such as self-driving vehicles, how people think about...disrupt the entire transportation industry and potentially change how society moves people and goods. The findings of the investigation are likely...The power of suggestion is very important to understand and consider when framing and bringing meaning to new technology, which points to looking at

  20. Speech recognition and parent-ratings from auditory development questionnaires in children who are hard of hearing

    PubMed Central

    McCreery, Ryan W.; Walker, Elizabeth A.; Spratford, Meredith; Oleson, Jacob; Bentler, Ruth; Holte, Lenore; Roush, Patricia

    2015-01-01

    Objectives: Progress has been made in recent years in the provision of amplification and early intervention for children who are hard of hearing. However, children who use hearing aids (HA) may have inconsistent access to their auditory environment due to limitations in speech audibility through their HAs or limited HA use. The effects of variability in children’s auditory experience on parent-report auditory skills questionnaires and on speech recognition in quiet and in noise were examined for a large group of children who were followed as part of the Outcomes of Children with Hearing Loss study. Design: Parent ratings on auditory development questionnaires and children’s speech recognition were assessed for 306 children who are hard of hearing. Children ranged in age from 12 months to 9 years of age. Three questionnaires involving parent ratings of auditory skill development and behavior were used, including the LittlEARS Auditory Questionnaire, Parents Evaluation of Oral/Aural Performance in Children Rating Scale, and an adaptation of the Speech, Spatial and Qualities of Hearing scale. Speech recognition in quiet was assessed using the Open and Closed set task, Early Speech Perception Test, Lexical Neighborhood Test, and Phonetically-balanced Kindergarten word lists. Speech recognition in noise was assessed using the Computer-Assisted Speech Perception Assessment. Children who are hard of hearing were compared to peers with normal hearing matched for age, maternal educational level and nonverbal intelligence. The effects of aided audibility, HA use and language ability on parent responses to auditory development questionnaires and on children’s speech recognition were also examined. Results: Children who are hard of hearing had poorer performance than peers with normal hearing on parent ratings of auditory skills and had poorer speech recognition. Significant individual variability among children who are hard of hearing was observed. Children with greater aided audibility through their HAs, more hours of HA use and better language abilities generally had higher parent ratings of auditory skills and better speech recognition abilities in quiet and in noise than peers with less audibility, more limited HA use or poorer language abilities. In addition to the auditory and language factors that were predictive for speech recognition in quiet, phonological working memory was also a positive predictor for word recognition abilities in noise. Conclusions: Children who are hard of hearing continue to experience delays in auditory skill development and speech recognition abilities compared to peers with normal hearing. However, significant improvements in these domains have occurred in comparison to similar data reported prior to the adoption of universal newborn hearing screening and early intervention programs for children who are hard of hearing. Increasing the audibility of speech has a direct positive effect on auditory skill development and speech recognition abilities, and may also enhance these skills by improving language abilities in children who are hard of hearing. A greater number of hours of HA use also had a significant positive impact on parent ratings of auditory skills and children’s speech recognition. PMID:26731160

  1. 77 FR 15086 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-14

    ... Defense Intelligence Information System (DoDIIS) Customer Relationship Management System. The records will... instructions for submitting comments. * Mail: Federal Docket Management System Office, 4800 Mark Center Drive...

  2. Sensory hair cell development and regeneration: similarities and differences

    PubMed Central

    Atkinson, Patrick J.; Huarcaya Najarro, Elvis; Sayyid, Zahra N.; Cheng, Alan G.

    2015-01-01

    Sensory hair cells are mechanoreceptors of the auditory and vestibular systems and are crucial for hearing and balance. In adult mammals, auditory hair cells are unable to regenerate, and damage to these cells results in permanent hearing loss. By contrast, hair cells in the chick cochlea and the zebrafish lateral line are able to regenerate, prompting studies into the signaling pathways, morphogen gradients and transcription factors that regulate hair cell development and regeneration in various species. Here, we review these findings and discuss how various signaling pathways and factors function to modulate sensory hair cell development and regeneration. By comparing and contrasting development and regeneration, we also highlight the utility and limitations of using defined developmental cues to drive mammalian hair cell regeneration. PMID:25922522

  3. DriveID: safety innovation through individuation.

    PubMed

    Sawyer, Ben; Teo, Grace; Mouloua, Mustapha

    2012-01-01

    The driving task is highly complex and places considerable perceptual, physical and cognitive demands on the driver. As driving is fundamentally an information processing activity, distracted or impaired drivers have diminished safety margins compared with non-distracted drivers (Hancock and Parasuraman, 1992; TRB 1998 a & b). This competition for sensory and decision-making capacities can lead to failures that cost lives. Some groups, teen and elderly drivers for example, show patterns of systematically poor perceptual, physical and cognitive performance while driving. Although technologies have been developed to aid these different drivers, such systems are often misused and underutilized. The DriveID project aims to design and develop a passive, automated face identification system capable of robustly identifying the driver of the vehicle, retrieving a stored profile, and intelligently prescribing specific accident prevention systems and driving environment customizations.

  4. Leukoaraiosis Significantly Worsens Driving Performance of Ordinary Older Drivers

    PubMed Central

    Zheng, Rencheng; Fang, Fang; Ohori, Masanori; Nakamura, Hiroki; Kumagai, Yasuhiho; Okada, Hiroshi; Teramura, Kazuhiko; Nakayama, Satoshi; Irimajiri, Akinori; Taoka, Hiroshi; Okada, Satoshi

    2014-01-01

    Background: Leukoaraiosis is defined as extracellular space caused mainly by atherosclerotic or demyelinated changes in the brain tissue and is commonly found in the brains of healthy older people. A significant association between leukoaraiosis and traffic crashes was reported in our previous study; however, the reason for this is still unclear. Method: This paper presents a comprehensive evaluation of driving performance in ordinary older drivers with leukoaraiosis. First, the degree of leukoaraiosis was examined in 33 participants, who underwent an actual-vehicle driving examination on a standard driving course, and a driver skill rating was also collected while the driver carried out a paced auditory serial addition test, which is a calculating task given verbally. At the same time, a steering entropy method was used to estimate steering operation performance. Results: The experimental results indicated that a normal older driver with leukoaraiosis was readily affected by external disturbances and made more operation errors and steered less smoothly than one without leukoaraiosis during driving; at the same time, their steering skill significantly deteriorated. Conclusions: Leukoaraiosis worsens the driving performance of older drivers because of their increased vulnerability to distraction. PMID:25295736
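The steering entropy method mentioned above can be sketched as follows, assuming the commonly cited formulation: each steering angle is predicted from the three preceding samples by second-order extrapolation, the prediction errors are sorted into nine bins whose boundaries are scaled by a baseline 90th-percentile error alpha, and the entropy of the bin distribution (base 9) is reported. This is a sketch of that general method, not the exact implementation used in the study.

```python
import math

def steering_entropy(angles, alpha):
    """Steering entropy: higher values mean less smooth, less
    predictable steering corrections.

    angles -- sampled steering-wheel angles
    alpha  -- 90th-percentile prediction error from a baseline run
    """
    # Second-order extrapolation from the three preceding samples.
    errors = []
    for n in range(3, len(angles)):
        pred = (angles[n-1]
                + (angles[n-1] - angles[n-2])
                + 0.5 * ((angles[n-1] - angles[n-2])
                         - (angles[n-2] - angles[n-3])))
        errors.append(angles[n] - pred)
    # Nine bins with boundaries at +/-0.5, 1, 2.5, and 5 times alpha.
    edges = [c * alpha for c in (-5, -2.5, -1, -0.5, 0.5, 1, 2.5, 5)]
    counts = [0] * 9
    for e in errors:
        counts[sum(e >= b for b in edges)] += 1
    total = len(errors)
    # Shannon entropy of the bin proportions, base 9 (max is 1.0).
    return -sum((c / total) * math.log(c / total, 9)
                for c in counts if c)
```

Perfectly smooth (linearly changing) steering yields zero prediction error and hence zero entropy; erratic corrections spread the errors across outer bins and push the entropy toward 1.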

  5. Computer-based auditory phoneme discrimination training improves speech recognition in noise in experienced adult cochlear implant listeners.

    PubMed

    Schumann, Annette; Serman, Maja; Gefeller, Olaf; Hoppe, Ulrich

    2015-03-01

    Specific computer-based auditory training may be a useful complement to the rehabilitation process for cochlear implant (CI) listeners working to achieve sufficient speech intelligibility. This study evaluated the effectiveness of a computerized phoneme-discrimination training programme. The study employed a pretest-post-test design; participants were randomly assigned to the training or control group. Over a period of three weeks, the training group was instructed to train in phoneme discrimination via computer twice a week. Sentence recognition in different noise conditions (moderate to difficult) was tested pre- and post-training, and six months after the training was completed. The control group was tested and retested within one month. Twenty-seven adult CI listeners who had been using cochlear implants for more than two years participated in the programme: 15 adults in the training group and 12 in the control group. Besides significant improvements on the trained phoneme-identification task, a generalized training effect was noted via significantly improved sentence recognition in moderate noise. No significant changes were noted in the difficult noise conditions. Improved performance was maintained over an extended period. Phoneme-discrimination training improves experienced CI listeners' speech perception in noise. Additional research is needed to optimize auditory training for individual benefit.

  6. Making non-fluent aphasics speak: sing along!

    PubMed

    Racette, Amélie; Bard, Céline; Peretz, Isabelle

    2006-10-01

    A classic observation in neurology is that aphasics can sing words they cannot pronounce otherwise. To further assess this claim, we investigated the production of sung and spoken utterances in eight brain-damaged patients suffering from a variety of speech disorders as a consequence of a left-hemisphere lesion. In Experiment 1, the patients were tested in the repetition and recall of words and notes of familiar material. Lyrics of familiar songs, as well as words of proverbs and prayers, were not better pronounced in singing than in speaking. Notes were better produced than words. In Experiment 2, the aphasic patients repeated and recalled lyrics from novel songs. Again, they did not produce more words in singing than in speaking. In Experiment 3, when allowed to sing or speak along with an auditory model while learning novel songs, aphasics repeated and recalled more words when singing than when speaking. Reduced speed or shadowing cannot account for this advantage of singing along over speaking in unison. The results suggest that singing in synchrony with an auditory model--choral singing--is more effective than choral speech, at least in French, in improving word intelligibility because choral singing may entrain more than one auditory-vocal interface. Thus, choral singing appears to be an effective means of speech therapy.

  7. Pediatric Auditory Brainstem Implant Surgery: A New Option for Auditory Habilitation in Congenital Deafness?

    PubMed

    Shah, Parth V; Kozin, Elliott D; Kaplan, Alyson B; Lee, Daniel J

    2016-01-01

    The auditory brainstem implant (ABI) is a neuroprosthetic device that provides sound sensations to individuals with profound hearing loss who are not candidates for a cochlear implant (CI) because of anatomic constraints. Herein we describe the ABI for family physicians. PubMed was searched to identify articles relevant to the ABI, as well as articles that contain outcomes data for pediatric patients (age <18 years) who have undergone ABI surgery. The ABI was originally developed for patients with neurofibromatosis type 2 (NF2) who become deaf from bilateral vestibular schwannomas. Over the past decade, indications for an ABI have expanded to adult patients without tumors (without NF2) who cannot receive a CI and to children with no cochlea or cochlear nerve. Outcomes among NF2 ABI users are modest compared to those of cochlear implant patients, but recent studies from Europe suggest that some non-tumor adult and pediatric ABI users achieve speech perception. The ABI is a reasonable surgical option for children with profound hearing loss due to severe cochlear or cochlear nerve deformities. Continued prospective data collection from several clinical trials in the U.S. will provide greater understanding of long-term outcomes, with a focus on speech intelligibility. © Copyright 2016 by the American Board of Family Medicine.

  8. Influence of seasonal variation in mood and behavior on cognitive test performance among young adults.

    PubMed

    Merikanto, Ilona; Lahti, Tuuli; Castaneda, Anu E; Tuulio-Henriksson, Annamari; Aalto-Setälä, Terhi; Suvisaari, Jaana; Partonen, Timo

    2012-10-01

    Seasonal variations in mood and behavior are common among the general population and may have a deteriorating effect on cognitive functions. In this study, the effect of seasonal affective disorder (SAD)-like symptoms on cognitive test performance was evaluated in more detail. The data were derived from the study Mental Health in Early Adulthood in Finland. Participants (n = 481) filled in a modified Seasonal Pattern Assessment Questionnaire (SPAQ) and performed cognitive tests of verbal and visual skills, attention, and general intelligence. SAD-like symptoms, especially seasonal variations in weight and appetite, had a significant effect on working memory (Digit Span Backward, P = 0.008) and on auditory attention and short-term memory (Digit Span Forward, P = 0.004). Seasonal variations in sleep duration and mood had an effect on auditory attention and short-term memory (Digit Span Forward, P = 0.02 and P = 0.0002, respectively). Seasonal variations in social activity and energy level had no effect. Seasonal changes in mood, appetite and weight have an impairing effect on auditory attention and processing speed. If performance tests are not to be repeated in different seasons, attention needs to be given to choosing the most appropriate season in which to test.

  9. Auditory and visual interactions between the superior and inferior colliculi in the ferret.

    PubMed

    Stitt, Iain; Galindo-Leon, Edgar; Pieper, Florian; Hollensteiner, Karl J; Engler, Gerhard; Engel, Andreas K

    2015-05-01

    The integration of visual and auditory spatial information is important for building an accurate perception of the external world, but the fundamental mechanisms governing such audiovisual interaction have only partially been resolved. The earliest interface between the auditory and visual processing pathways is in the midbrain, where the superior (SC) and inferior colliculi (IC) are reciprocally connected in an audiovisual loop. Here, we investigate the mechanisms of audiovisual interaction in the midbrain by recording neural signals from the SC and IC simultaneously in anesthetized ferrets. Visual stimuli reliably produced band-limited phase locking of IC local field potentials (LFPs) in two distinct frequency bands: 6-10 and 15-30 Hz. These visual LFP responses co-localized with robust auditory responses that were characteristic of the IC. Imaginary coherence analysis confirmed that visual responses in the IC were not volume-conducted signals from the neighboring SC. Visual responses in the IC occurred later than those in retinally driven superficial SC layers and earlier than those in deep SC layers that receive indirect visual inputs, suggesting that retinal inputs do not drive visually evoked responses in the IC. In addition, SC and IC recording sites with overlapping visual spatial receptive fields displayed stronger functional connectivity than sites with separate receptive fields, indicating that visual spatial maps are aligned across both midbrain structures. Reciprocal coupling between the IC and SC therefore probably serves the dynamic integration of visual and auditory representations of space. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  10. Autonomous driving in urban environments: approaches, lessons and challenges.

    PubMed

    Campbell, Mark; Egerstedt, Magnus; How, Jonathan P; Murray, Richard M

    2010-10-13

    The development of autonomous vehicles for urban driving has seen rapid progress in the past 30 years. This paper provides a summary of the current state of the art in autonomous driving in urban environments, based primarily on the experiences of the authors in the 2007 DARPA Urban Challenge (DUC). The paper briefly summarizes the approaches that different teams used in the DUC, with the goal of describing some of the challenges that the teams faced in driving in urban environments. The paper also highlights the long-term research challenges that must be overcome in order to enable autonomous driving and points to opportunities for new technologies to be applied in improving vehicle safety, exploiting intelligent road infrastructure and enabling robotic vehicles operating in human environments.

  11. Drive Control System for Pipeline Crawl Robot Based on CAN Bus

    NASA Astrophysics Data System (ADS)

    Chen, H. J.; Gao, B. T.; Zhang, X. H.; Deng, Z. Q.

    2006-10-01

    The drive control system plays an important role in a pipeline robot. In order to inspect flaws and corrosion in seabed crude oil pipelines, an original mobile pipeline robot was developed with a crawler drive unit, a power and monitoring unit, a central control unit, and an ultrasonic wave inspection device. A CAN bus connects these different functional units and provides a reliable information channel. Considering the limited space, a compact hardware system was designed based on an ARM processor with two CAN controllers. With a made-to-order CAN protocol for the crawl robot, an intelligent drive control system was developed. The implementation of the crawl robot demonstrates that the presented drive control scheme can meet the motion control requirements of the underwater pipeline crawl robot.
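
    The record does not give the robot's actual CAN message set; as a generic illustration of what such a bus interface involves, here is a small Python sketch that packs and unpacks classic CAN 2.0 frames in the Linux SocketCAN binary layout. The 0x120 "speed command" ID and its payload format are invented for the example, not taken from the paper.

```python
import struct

# Linux SocketCAN frame layout: 32-bit CAN ID, 1-byte data length code,
# 3 padding bytes, then up to 8 data bytes.
CAN_FRAME_FMT = "<IB3x8s"

def pack_can_frame(can_id, data):
    """Pack one classic CAN 2.0 frame (at most 8 data bytes)."""
    if len(data) > 8:
        raise ValueError("classic CAN payload is at most 8 bytes")
    return struct.pack(CAN_FRAME_FMT, can_id, len(data), data.ljust(8, b"\x00"))

def unpack_can_frame(frame):
    """Return (can_id, payload) from a packed frame."""
    can_id, dlc, data = struct.unpack(CAN_FRAME_FMT, frame)
    return can_id, data[:dlc]

# Hypothetical 'set crawler speed' command for a drive unit:
# two signed 16-bit track speeds in mm/s (layout is illustrative only).
speed_cmd = pack_can_frame(0x120, struct.pack("<hh", 250, -250))
```

    On a real system the packed frame would be written to a CAN socket or controller register; the point here is only the fixed-width framing that makes the bus a reliable channel between function units.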

  12. A Strain-Based Method to Detect Tires' Loss of Grip and Estimate Lateral Friction Coefficient from Experimental Data by Fuzzy Logic for Intelligent Tire Development.

    PubMed

    Yunta, Jorge; Garcia-Pozuelo, Daniel; Diaz, Vicente; Olatunbosun, Oluremi

    2018-02-06

    Tires are a key vehicle sub-system that bears a large responsibility for comfort, fuel consumption and traffic safety. However, current tires are merely passive rubber elements which do not contribute actively to improving the driving experience or vehicle safety. The lack of information from the tire during driving motivates the development of an intelligent tire. The aim of the intelligent tire is therefore to monitor tire working conditions in real time, providing useful information to other systems and becoming an active system. In this paper, tire tread deformation is measured to provide a strong experimental base, with different experiments and test results obtained by means of a tire fitted with sensors. Tests under different working conditions, such as vertical load or slip angle, have been carried out with an indoor tire test rig. The experimental data analysis shows the strong relation that exists between lateral force and the maximum tensile and compressive strain peaks when the tire is not working at the limit of grip. In the last section, an estimation system built from the experimental data is developed and implemented in Simulink to show the potential of strain sensors for developing intelligent tire systems; its major outputs are a signal that detects the tire's loss of grip and estimates of the lateral friction coefficient.
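
    The paper's fuzzy-logic estimator is implemented in Simulink and not reproduced in this record. As a generic illustration of the technique it names, a minimal Mamdani-style estimator over a single normalised strain input could look like the following Python; the membership-function points, the rule outputs, and the single-input simplification are all invented for the sketch.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def grip_estimate(strain_ratio):
    """Map a normalised strain-peak ratio in [0, 1] to a grip margin.

    Tiny rule base: low strain -> large grip margin, medium strain ->
    moderate margin, high strain -> near the limit of grip. The rule
    firing strengths are defuzzified by a weighted average.
    """
    rules = [
        (tri(strain_ratio, -0.5, 0.0, 0.5), 0.9),  # low strain  -> margin 0.9
        (tri(strain_ratio,  0.2, 0.5, 0.8), 0.5),  # medium      -> margin 0.5
        (tri(strain_ratio,  0.5, 1.0, 1.5), 0.1),  # high strain -> margin 0.1
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.5
```

    A production system would fuzzify several inputs (strain peaks, vertical load, slip angle) and tune the membership functions against test-rig data, but the inference step has this shape.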

  13. A Strain-Based Method to Detect Tires’ Loss of Grip and Estimate Lateral Friction Coefficient from Experimental Data by Fuzzy Logic for Intelligent Tire Development

    PubMed Central

    Garcia-Pozuelo, Daniel; Diaz, Vicente; Olatunbosun, Oluremi

    2018-01-01

    Tires are a key vehicle sub-system that bears a large responsibility for comfort, fuel consumption and traffic safety. However, current tires are merely passive rubber elements which do not contribute actively to improving the driving experience or vehicle safety. The lack of information from the tire during driving motivates the development of an intelligent tire. The aim of the intelligent tire is therefore to monitor tire working conditions in real time, providing useful information to other systems and becoming an active system. In this paper, tire tread deformation is measured to provide a strong experimental base, with different experiments and test results obtained by means of a tire fitted with sensors. Tests under different working conditions, such as vertical load or slip angle, have been carried out with an indoor tire test rig. The experimental data analysis shows the strong relation that exists between lateral force and the maximum tensile and compressive strain peaks when the tire is not working at the limit of grip. In the last section, an estimation system built from the experimental data is developed and implemented in Simulink to show the potential of strain sensors for developing intelligent tire systems; its major outputs are a signal that detects the tire's loss of grip and estimates of the lateral friction coefficient. PMID:29415513

  14. Mate choice in the eye and ear of the beholder? Female multimodal sensory configuration influences her preferences.

    PubMed

    Ronald, Kelly L; Fernández-Juricic, Esteban; Lucas, Jeffrey R

    2018-05-16

    A common assumption in sexual selection studies is that receivers decode signal information similarly. However, receivers may vary in how they rank signallers if signal perception varies with an individual's sensory configuration. Furthermore, receivers may vary in their weighting of different elements of multimodal signals based on their sensory configuration. This could lead to complex levels of selection on signalling traits. We tested whether multimodal sensory configuration could affect preferences for multimodal signals. We used brown-headed cowbird (Molothrus ater) females to examine how auditory sensitivity and auditory filters, which influence auditory spectral and temporal resolution, affect song preferences, and how visual spatial resolution and visual temporal resolution, which influence resolution of a moving visual signal, affect visual display preferences. Our results show that multimodal sensory configuration significantly affects preferences for male displays: females with better auditory temporal resolution preferred songs that were shorter, with lower Wiener entropy, and higher frequency; and females with better visual temporal resolution preferred males with less intense visual displays. Our findings provide new insights into mate-choice decisions and receiver signal processing. Furthermore, our results challenge a long-standing assumption in animal communication which can affect how we address honest signalling, assortative mating and sensory drive. © 2018 The Author(s).

  15. Combining computerized social cognitive training with neuroplasticity-based auditory training in schizophrenia.

    PubMed

    Sacks, Stephanie; Fisher, Melissa; Garrett, Coleman; Alexander, Phillip; Holland, Christine; Rose, Demian; Hooker, Christine; Vinogradov, Sophia

    2013-01-01

    Social cognitive deficits are an important treatment target in schizophrenia, but it is unclear to what degree they require specialized interventions and which specific components of behavioral interventions are effective. In this pilot study, we explored the effects of a novel computerized neuroplasticity-based auditory training delivered in conjunction with computerized social cognition training (SCT) in patients with schizophrenia. Nineteen clinically stable schizophrenia subjects performed 50 hours of computerized exercises that place implicit, increasing demands on auditory perception, plus 12 hours of computerized training in emotion identification, social perception, and theory of mind tasks. All subjects were assessed with MATRICS-recommended measures of neurocognition and social cognition, plus a measure of self-referential source memory before and after the computerized training. Subjects showed significant improvements on multiple measures of neurocognition. Additionally, subjects showed significant gains on measures of social cognition, including the MSCEIT Perceiving Emotions, MSCEIT Managing Emotions, and self-referential source memory, plus a significant decrease in positive symptoms. Computerized training of auditory processing/verbal learning in schizophrenia results in significant basic neurocognitive gains. Further, addition of computerized social cognition training results in significant gains in several social cognitive outcome measures. Computerized cognitive training that directly targets social cognitive processes can drive improvements in these crucial functions.

  16. A comparative study of simple auditory reaction time in blind (congenitally) and sighted subjects.

    PubMed

    Gandhi, Pritesh Hariprasad; Gokhale, Pradnya A; Mehta, H B; Shah, C J

    2013-07-01

    Reaction time is the time interval between the application of a stimulus and the appearance of an appropriate voluntary response by a subject. It involves stimulus processing, decision making, and response programming. Reaction time studies have been popular because of their implications for sports physiology. Reaction time has also been widely studied because its practical implications can be of great consequence; for example, a slower than normal reaction time while driving can have grave results. The aims were to study simple auditory reaction time in congenitally blind subjects and in age- and sex-matched sighted subjects, and to compare simple auditory reaction time between congenitally blind subjects and healthy control subjects. The study was carried out in two groups: the first comprised 50 congenitally blind subjects and the second 50 healthy controls. It was carried out on a Multiple Choice Reaction Time Apparatus, Inco Ambala Ltd. (accuracy ±0.001 s), in a sitting position at Government Medical College and Hospital, Bhavnagar, and at a blind school, PNR campus, Bhavnagar, Gujarat, India. The simple auditory reaction time response to four different types of sound (horn, bell, ring, and whistle) was recorded in both groups. According to our study, there is no significant difference in reaction time between congenitally blind and normal healthy persons. Blind individuals commonly rely on tactual and auditory cues for information and orientation; this reliance on touch and audition, together with more practice in using these modalities to guide behavior, is often reflected in better performance of blind relative to sighted participants in tactile or auditory discrimination tasks, but there is no difference in reaction time between congenitally blind and sighted people.

  17. Novel Propulsion and Power Concepts for 21st Century Aviation

    NASA Technical Reports Server (NTRS)

    Sehra, Arun K.

    2003-01-01

    Air transportation for the new millennium will require revolutionary solutions to meet public demand for improved safety, reliability, environmental compatibility, and affordability. NASA's vision for 21st century aircraft is to develop propulsion systems that are intelligent, virtually inaudible (outside the airport boundaries), and have near-zero harmful emissions (CO2 and NOx). This vision includes intelligent engines that will be capable of adapting to changing internal and external conditions to optimally accomplish the mission with minimal human intervention. Distributed vectored propulsion will replace two to four wing-mounted or fuselage-mounted engines with a large number of small, mini, or micro engines, and electric drive propulsion based on fuel cell power will generate electric power, which in turn will drive propulsors to produce the desired thrust. Such a system will completely eliminate harmful emissions.

  18. Intelligent vehicle safety control strategy in various driving situations

    NASA Astrophysics Data System (ADS)

    Moon, Seungwuk; Cho, Wanki; Yi, Kyongsu

    2010-12-01

    This paper describes a safety control strategy for intelligent vehicles with the objective of optimally coordinating the throttle, brake, and active front steering actuator inputs to obtain both lateral stability and longitudinal safety. The control system consists of a supervisor, control algorithms, and a coordinator. From the measurement and estimation signals, the supervisor determines the active control modes among normal driving, longitudinal safety, lateral stability, and integrated safety control mode. The control algorithms consist of longitudinal and lateral stability controllers. The longitudinal controller is designed to improve the driver's comfort during normal, safe-driving situations, and to avoid rear-end collision in vehicle-following situations. The lateral stability controller is designed to obtain the required manoeuvrability and to limit the vehicle body's side-slip angle. To obtain both longitudinal safety and lateral stability control in various driving situations, the coordinator optimally determines the throttle, brake, and active front steering inputs based on the current status of the subject vehicle. Closed-loop simulations with the driver-vehicle-controller system are conducted to investigate the performance of the proposed control strategy. From these simulation results, it is shown that the proposed control algorithm assists the driver in combined severe braking/large steering manoeuvring so that the driver can maintain good manoeuvrability and prevent the vehicle from crashing in vehicle-following situations.
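
    The abstract describes the supervisor only in prose. As a hedged illustration of the mode-determination idea (every threshold, the constant-time-headway gap rule, and the side-slip limit are invented for this sketch, not taken from the paper), a minimal selector over the following gap and the body side-slip angle could look like this:

```python
def supervisor(gap, v_ego, v_lead, beta,
               beta_max=0.05, t_headway=1.5, d_min=5.0):
    """Choose among the four modes named in the abstract from a
    following gap [m], ego/lead speeds [m/s], and body side-slip
    angle beta [rad]. Thresholds are illustrative only."""
    # Longitudinal risk: gap below a constant-time-headway distance
    # while not already opening (ego at least as fast as the leader).
    longitudinal_risk = gap < d_min + t_headway * v_ego and v_ego >= v_lead
    # Lateral risk: body side-slip angle beyond its allowed band.
    lateral_risk = abs(beta) > beta_max
    if longitudinal_risk and lateral_risk:
        return "integrated_safety"
    if longitudinal_risk:
        return "longitudinal_safety"
    if lateral_risk:
        return "lateral_stability"
    return "normal"
```

    In the paper's architecture, the chosen mode then drives how the coordinator allocates throttle, brake, and active front steering inputs; this sketch covers only the mode decision itself.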

  19. Sensor Systems for Vehicle Environment Perception in a Highway Intelligent Space System

    PubMed Central

    Tang, Xiaofeng; Gao, Feng; Xu, Guoyan; Ding, Nenggen; Cai, Yao; Ma, Mingming; Liu, Jianxing

    2014-01-01

    A Highway Intelligent Space System (HISS) is proposed in this paper to study vehicle environment perception. The nature of HISS is that a space sensor system using laser, ultrasonic or radar sensors is installed in the highway environment, and communication technology is used to realize information exchange between the HISS server and vehicles, which provides vehicles with the surrounding road information. Considering the high speeds of vehicles on highways, when a vehicle is about to pass a stretch of road ahead that is prone to accidents, the vehicle's driving state should be predicted so that drivers have road environment perception information in advance, thereby ensuring driving safety and stability. In order to verify the accuracy and feasibility of the HISS, a traditional vehicle-mounted sensor system for environment perception is used to obtain the relative driving state. Furthermore, an inter-vehicle dynamics model is built and a model predictive control approach is used to predict the driving state over the following period. Finally, the simulation results show that using the HISS for environment perception arrives at the same results detected by a traditional vehicle-mounted sensor system. Meanwhile, we can further conclude that using the HISS to realize vehicle environment perception can ensure system stability, thereby demonstrating the method's feasibility. PMID:24834907
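
    The record names a model-predictive approach but gives no equations. As a toy stand-in for the paper's inter-vehicle dynamics model, a constant-acceleration propagation of the following gap (all parameters and the Euler integration scheme are invented for illustration) shows the kind of look-ahead such prediction involves:

```python
def predict_gap(gap, v_follow, v_lead, a_follow, a_lead, horizon=3.0, dt=0.1):
    """Propagate a simple constant-acceleration inter-vehicle model
    forward and return the predicted gap trajectory.

    gap       current bumper-to-bumper distance [m]
    v_*       current speeds [m/s]
    a_*       assumed constant accelerations [m/s^2]
    """
    gaps = []
    steps = int(round(horizon / dt))
    for k in range(1, steps + 1):
        t = k * dt
        # Speeds cannot go negative (vehicles do not reverse here).
        vf = max(0.0, v_follow + a_follow * t)
        vl = max(0.0, v_lead + a_lead * t)
        # Euler step on the relative motion.
        gap += (vl - vf) * dt
        gaps.append(gap)
    return gaps
```

    A genuine MPC scheme would additionally optimize the follower's inputs against constraints at each step; this sketch covers only the forward prediction that such a controller repeats every cycle.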

  20. Multi-objective decoupling algorithm for active distance control of intelligent hybrid electric vehicle

    NASA Astrophysics Data System (ADS)

    Luo, Yugong; Chen, Tao; Li, Keqiang

    2015-12-01

    The paper presents a novel active distance control strategy for intelligent hybrid electric vehicles (IHEV) with the purpose of guaranteeing optimal performance of the driving functions together with safety, fuel economy and ride comfort. Considering the complexity of driving situations, the objectives of safety and ride comfort are decoupled from that of fuel economy, and a hierarchical control architecture is adopted to improve real-time performance and adaptability. The hierarchical control structure consists of four layers: active distance control object determination, comprehensive driving and braking torque calculation, comprehensive torque distribution, and torque coordination. The safety distance control and emergency stop algorithms are designed to achieve the safety and ride comfort goals. An optimal rule-based energy management algorithm for the hybrid electric system is developed to improve fuel economy. A torque coordination control strategy is proposed to regulate engine torque, motor torque and hydraulic braking torque to improve ride comfort. The strategy is verified by simulation and experiment using a forward simulation platform and a prototype vehicle. The results show that the novel control strategy achieves integrated and coordinated control of the vehicle's multiple subsystems, guaranteeing good performance of the driving functions together with safety, fuel economy and ride comfort.

  1. The rhesus monkey (Macaca mulatta) as a flight candidate

    NASA Technical Reports Server (NTRS)

    Debourne, M. N. G.; Bourne, G. H.; Mcclure, H. M.

    1977-01-01

    The intelligence and ruggedness of rhesus monkeys, together with the abundance of normative data on their anatomy, physiology, and biochemistry and the availability of captive-bred animals, qualify them for selection as candidates for orbital flight and weightlessness studies. Baseline data discussed include: physical characteristics, auditory thresholds, visual acuity, blood, serological taxonomy, immunogenetics, cytogenetics, circadian rhythms, respiration, cardiovascular values, corticosteroid response to chair restraint, microscopy of tissues, pathology, nutrition, and learning skills. Results from the various tests used to establish the baseline data are presented in tables.

  2. Blue-Enriched White Light Enhances Physiological Arousal But Not Behavioral Performance during Simulated Driving at Early Night

    PubMed Central

    Rodríguez-Morilla, Beatriz; Madrid, Juan A.; Molina, Enrique; Correa, Angel

    2017-01-01

    Vigilance usually deteriorates over prolonged driving at non-optimal times of day. Exposure to blue-enriched light has been shown to enhance arousal, leading to behavioral benefits in some cognitive tasks. However, the cognitive effects of long-wavelength light have been less studied, and its effects on driving performance remained to be addressed. We tested the effects of a blue-enriched white light (BWL) and a long-wavelength orange light (OL) vs. a control condition of dim light on subjective, physiological and behavioral measures at 21:45 h. Neurobehavioral tests included the Karolinska Sleepiness Scale and a subjective mood scale, recording of the distal-proximal temperature gradient (DPG, an index of physiological arousal), accuracy in simulated driving, and reaction time in the auditory psychomotor vigilance task. The results showed that BWL decreased the DPG (reflecting enhanced arousal), while it did not improve reaction time or driving performance. Instead, blue light produced larger driving errors than OL, while performance under OL was stable over time on task. These data suggest that physiological arousal induced by light does not necessarily imply cognitive improvement. Indeed, excessive arousal might deteriorate accuracy in complex tasks requiring precision, such as driving. PMID:28690558

  3. Did You Listen to the Beat? Auditory Steady-State Responses in the Human Electroencephalogram at 4 and 7 Hz Modulation Rates Reflect Selective Attention.

    PubMed

    Jaeger, Manuela; Bleichner, Martin G; Bauer, Anna-Katharina R; Mirkovic, Bojana; Debener, Stefan

    2018-02-27

    The acoustic envelope of human speech correlates with the syllabic rate (4-8 Hz) and carries important information for intelligibility, which is typically compromised in multi-talker, noisy environments. In order to better understand the dynamics of selective auditory attention to low-frequency modulated sound sources, we conducted a two-stream auditory steady-state response (ASSR) selective attention electroencephalogram (EEG) study. The two streams consisted of 4 and 7 Hz amplitude- and frequency-modulated sounds presented from the left and right side. One of the two streams had to be attended while the other had to be ignored. The attended stream always contained a target, allowing for behavioral confirmation of the attention manipulation. EEG ASSR power analysis revealed a significant increase in 7 Hz power for the attend condition compared to the ignore condition. There was no significant difference in 4 Hz power when the 4 Hz stream had to be attended compared to when it had to be ignored. This lack of 4 Hz attention modulation could be explained by a distracting effect of a third frequency at 3 Hz (the beat frequency) perceivable when the 4 and 7 Hz streams are presented simultaneously. Taken together, our results show that low-frequency modulations at the syllabic rate are modulated by selective spatial attention. Whether attention effects act as enhancement of the attended stream or suppression of the to-be-ignored stream may depend on how well auditory streams can be segregated.
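
    The ASSR power analysis in the record is a full EEG pipeline that is not reproduced here. As a minimal illustration of reading out power at the 4 and 7 Hz modulation rates, a single-bin Goertzel estimate on a synthetic two-component signal might look like the following; the sampling rate, duration, and test signal are invented for the sketch.

```python
import math

def goertzel_power(samples, freq, fs):
    """Power at one frequency bin via the Goertzel algorithm -- a cheap
    way to read out narrow-band (e.g. ASSR) power without a full FFT."""
    n = len(samples)
    k = round(freq * n / fs)          # nearest DFT bin to the target
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # Magnitude squared of the bin (un-normalized).
    return s_prev2**2 + s_prev**2 - coeff * s_prev * s_prev2

# A toy stand-in for the two modulation rates in the paradigm: the sum
# of 4 and 7 Hz components beats audibly at the 3 Hz difference, even
# though no spectral line sits at 3 Hz itself.
fs, dur = 250, 4.0
t = [i / fs for i in range(int(fs * dur))]
sig = [math.sin(2 * math.pi * 4 * ti) + math.sin(2 * math.pi * 7 * ti) for ti in t]
```

    On this synthetic signal, the 4 and 7 Hz bins carry essentially all the power while an off-target bin (e.g. 5.5 Hz) is near zero, which is the contrast an ASSR analysis exploits when comparing attend and ignore conditions.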

  4. Good Holders, Bad Shufflers: An Examination of Working Memory Processes and Modalities in Children with and without Attention-Deficit/Hyperactivity Disorder.

    PubMed

    Simone, Ashley N; Bédard, Anne-Claude V; Marks, David J; Halperin, Jeffrey M

    2016-01-01

    The aim of this study was to examine working memory (WM) modalities (visual-spatial and auditory-verbal) and processes (maintenance and manipulation) in children with and without attention-deficit/hyperactivity disorder (ADHD). The sample consisted of 63 8-year-old children with ADHD and an age- and sex-matched non-ADHD comparison group (N=51). Auditory-verbal and visual-spatial WM were assessed using the Digit Span and Spatial Span subtests from the Wechsler Intelligence Scale for Children Integrated - Fourth Edition. WM maintenance and manipulation were assessed via forward and backward span indices, respectively. Data were analyzed using a 3-way Group (ADHD vs. non-ADHD)×Modality (Auditory-Verbal vs. Visual-Spatial)×Condition (Forward vs. Backward) Analysis of Variance (ANOVA). Secondary analyses examined differences between Combined and Predominantly Inattentive ADHD presentations. Significant Group×Condition (p=.02) and Group×Modality (p=.03) interactions indicated differentially poorer performance by those with ADHD on backward relative to forward and visual-spatial relative to auditory-verbal tasks, respectively. The 3-way interaction was not significant. Analyses targeting ADHD presentations yielded a significant Group×Condition interaction (p=.009) such that children with ADHD-Predominantly Inattentive Presentation performed differentially poorer on backward relative to forward tasks compared to the children with ADHD-Combined Presentation. Findings indicate a specific pattern of WM weaknesses (i.e., WM manipulation and visual-spatial tasks) for children with ADHD. Furthermore, differential patterns of WM performance were found for children with ADHD-Predominantly Inattentive versus Combined Presentations. (JINS, 2016, 22, 1-11).

  5. Brain activity during driving with distraction: an immersive fMRI study

    PubMed Central

    Schweizer, Tom A.; Kan, Karen; Hung, Yuwen; Tam, Fred; Naglie, Gary; Graham, Simon J.

    2013-01-01

    Introduction: Non-invasive measurements of brain activity have an important role to play in understanding driving ability. The current study aimed to identify the neural underpinnings of human driving behavior by visualizing the areas of the brain involved in driving under different levels of demand, such as driving while distracted or making left turns at busy intersections. Materials and Methods: To capture brain activity during driving, we placed a driving simulator with a fully functional steering wheel and pedals in a 3.0 Tesla functional magnetic resonance imaging (fMRI) system. To identify the brain areas involved while performing different real-world driving maneuvers, participants completed tasks ranging from simple (right turns) to more complex (left turns at busy intersections). To assess the effects of driving while distracted, participants were asked to perform an auditory task while driving analogous to speaking on a hands-free device and driving. Results: A widely distributed brain network was identified, especially when making left turns at busy intersections compared to more simple driving tasks. During distracted driving, brain activation shifted dramatically from the posterior, visual and spatial areas to the prefrontal cortex. Conclusions: Our findings suggest that the distracted brain sacrificed areas in the posterior brain important for visual attention and alertness to recruit enough brain resources to perform a secondary, cognitive task. The present findings offer important new insights into the scientific understanding of the neuro-cognitive mechanisms of driving behavior and lay down an important foundation for future clinical research. PMID:23450757

  6. Age-related changes in event-cued visual and auditory prospective memory proper.

    PubMed

    Uttl, Bob

    2006-06-01

    We rely upon prospective memory proper (ProMP) to bring back to awareness previously formed plans and intentions at the right place and time, and to enable us to act upon those plans and intentions. To examine age-related changes in ProMP, younger and older participants made decisions about simple stimuli (ongoing task) and at the same time were required to respond to a ProM cue, either a picture (visually cued ProM test) or a sound (auditorily cued ProM test), embedded in a simultaneously presented series of similar stimuli (either pictures or sounds). The cue display size or loudness increased across trials until a response was made. The cue size and cue loudness at the time of response indexed ProMP. The main results showed that both visual and auditory ProMP declined with age, and that such declines were mediated by age declines in sensory functions (visual acuity and hearing level), processing resources, working memory, intelligence, and ongoing task resource allocation.

  7. Cross-modal extinction in a boy with severely autistic behaviour and high verbal intelligence.

    PubMed

    Bonneh, Yoram S; Belmonte, Matthew K; Pei, Francesca; Iversen, Portia E; Kenet, Tal; Akshoomoff, Natacha; Adini, Yael; Simon, Helen J; Moore, Christopher I; Houde, John F; Merzenich, Michael M

    2008-07-01

    Anecdotal reports from individuals with autism suggest a loss of awareness to stimuli from one modality in the presence of stimuli from another. Here we document such a case in a detailed study of A.M., a 13-year-old boy with autism in whom significant autistic behaviours are combined with an uneven IQ profile of superior verbal and low performance abilities. Although A.M.'s speech is often unintelligible, and his behaviour is dominated by motor stereotypies and impulsivity, he can communicate by typing or pointing independently within a letter board. A series of experiments using simple and highly salient visual, auditory, and tactile stimuli demonstrated a hierarchy of cross-modal extinction, in which auditory information extinguished other modalities at various levels of processing. A.M. also showed deficits in shifting and sustaining attention. These results provide evidence for monochannel perception in autism and suggest a general pattern of winner-takes-all processing in which a stronger stimulus-driven representation dominates behaviour, extinguishing weaker representations.

  8. Can you hear me yet? An intracranial investigation of speech and non-speech audiovisual interactions in human cortex.

    PubMed

    Rhone, Ariane E; Nourski, Kirill V; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A; McMurray, Bob

In everyday conversation, viewing a talker's face can provide information about the timing and content of an upcoming speech signal, resulting in improved intelligibility. Using electrocorticography, we tested whether human auditory cortex in Heschl's gyrus (HG) and on superior temporal gyrus (STG) and motor cortex on precentral gyrus (PreC) were responsive to visual/gestural information prior to the onset of sound and whether early stages of auditory processing were sensitive to the visual content (speech syllable versus non-speech motion). Event-related band power (ERBP) in the high gamma band was content-specific prior to acoustic onset on STG and PreC, and ERBP in the beta band differed in all three areas. Following sound onset, we found no evidence for content-specificity in HG, evidence for visual specificity in PreC, and specificity for both modalities in STG. These results support models of audio-visual processing in which sensory information is integrated in non-primary cortical areas.
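The event-related band power measure mentioned above is commonly computed by bandpass filtering a signal and taking the squared magnitude of its analytic (Hilbert-transformed) version. A minimal sketch of that generic technique, assuming scipy is available; the sampling rate, band edges, and synthetic burst below are illustrative, not the study's recording parameters:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power_envelope(x, fs, lo, hi, order=4):
    """Band-limited power envelope: bandpass filter, then the squared
    magnitude of the analytic signal obtained via the Hilbert transform."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, x)          # zero-phase bandpass
    return np.abs(hilbert(filtered)) ** 2  # instantaneous band power

fs = 1000.0                      # illustrative sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
# Synthetic trial: an 80 Hz "high gamma" burst in the second half only.
x = np.random.default_rng(1).normal(0, 0.1, t.size)
x[t >= 1.0] += np.sin(2 * np.pi * 80 * t[t >= 1.0])

power = band_power_envelope(x, fs, 70, 150)
# The envelope during the burst half greatly exceeds the quiet half.
```

In practice, such envelopes are averaged across trials and baseline-normalized before comparing conditions.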

  9. Contributions of local speech encoding and functional connectivity to audio-visual speech perception

    PubMed Central

    Giordano, Bruno L; Ince, Robin A A; Gross, Joachim; Schyns, Philippe G; Panzeri, Stefano; Kayser, Christoph

    2017-01-01

Seeing a speaker’s face enhances speech intelligibility in adverse environments. We investigated the underlying network mechanisms by quantifying local speech representations and directed connectivity in MEG data obtained while human participants listened to speech of varying acoustic SNR and visual context. During high acoustic SNR, speech encoding by temporally entrained brain activity was strong in temporal and inferior frontal cortex, while during low SNR, strong entrainment emerged in premotor and superior frontal cortex. These changes in local encoding were accompanied by changes in directed connectivity along the ventral stream and the auditory-premotor axis. Importantly, the behavioral benefit arising from seeing the speaker’s face was not predicted by changes in local encoding but rather by enhanced functional connectivity between temporal and inferior frontal cortex. Our results demonstrate a role of auditory-frontal interactions in visual speech representations and suggest that functional connectivity along the ventral pathway facilitates speech comprehension in multisensory environments. DOI: http://dx.doi.org/10.7554/eLife.24763.001 PMID:28590903

  10. Auditory Training Effects on the Listening Skills of Children With Auditory Processing Disorder.

    PubMed

    Loo, Jenny Hooi Yin; Rosen, Stuart; Bamiou, Doris-Eva

    2016-01-01

Children with auditory processing disorder (APD) typically present with "listening difficulties," including problems understanding speech in noisy environments. The authors examined, in a group of such children, whether a 12-week computer-based auditory training program with speech material improved speech-in-noise test performance and functional listening skills as assessed by parental and teacher listening and communication questionnaires. The authors hypothesized that after the intervention, (1) trained children would show greater improvements in speech-in-noise perception than untrained controls; (2) this improvement would correlate with improvements in observer-rated behaviors; and (3) the improvement would be maintained for at least 3 months after the end of training. This was a prospective randomized controlled trial of 39 children with normal nonverbal intelligence, ages 7 to 11 years, all diagnosed with APD. This diagnosis required a normal pure-tone audiogram and deficits in at least two clinical auditory processing tests. The APD children were randomly assigned to (1) a control group that received only the current standard treatment for children diagnosed with APD, employing various listening/educational strategies at school (N = 19); or (2) an intervention group that undertook a 3-month 5-day/week computer-based auditory training program at home, consisting of a wide variety of speech-based listening tasks with competing sounds, in addition to the current standard treatment. All 39 children were assessed for language and cognitive skills at baseline and on three outcome measures at baseline and immediate postintervention. Outcome measures were repeated 3 months postintervention in the intervention group only, to assess the sustainability of treatment effects.
The outcome measures were (1) the mean speech reception threshold obtained from the four subtests of the Listening in Spatialized Noise test that assesses sentence perception in various configurations of masking speech, and in which the target speakers and test materials were unrelated to the training materials; (2) the Children's Auditory Performance Scale that assesses listening skills, completed by the children's teachers; and (3) the Clinical Evaluation of Language Fundamentals-4 pragmatic profile that assesses pragmatic language use, completed by parents. All outcome measures significantly improved at immediate postintervention in the intervention group only, with effect sizes ranging from 0.76 to 1.7. Improvements in speech-in-noise performance correlated with improved scores in the Children's Auditory Performance Scale questionnaire in the trained group only. Baseline language and cognitive assessments did not predict better training outcome. Improvements in speech-in-noise performance were sustained 3 months postintervention. Broad speech-based auditory training led to improved auditory processing skills as reflected in speech-in-noise test performance and in better functional listening in real life. The observed correlation between improved functional listening and improved speech-in-noise perception in the trained group suggests that improved listening was a direct generalization of the auditory training.

  11. SOUTHEAST SIDE, TAKEN FROM LOWER PARKING LOT, WITH ABUTTING FACILITY ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    SOUTHEAST SIDE, TAKEN FROM LOWER PARKING LOT, WITH ABUTTING FACILITY 346 IN FOREGROUND. - U.S. Naval Base, Pearl Harbor, Joint Intelligence Center, Makalapa Drive in Makalapa Administration Area, Pearl City, Honolulu County, HI

  12. Driving safely into the future with applied technology

    DOT National Transportation Integrated Search

    1999-10-01

    Driver error remains the leading cause of highway crashes. Through the Intelligent Vehicle Initiative (IVI), the Department of Transportation hopes to reduce crashes by helping drivers avoid hazardous mistakes. IVI aims to accelerate the development ...

  13. Convergent evolution of complex brains and high intelligence

    PubMed Central

    Roth, Gerhard

    2015-01-01

    Within the animal kingdom, complex brains and high intelligence have evolved several to many times independently, e.g. among ecdysozoans in some groups of insects (e.g. blattoid, dipteran, hymenopteran taxa), among lophotrochozoans in octopodid molluscs, among vertebrates in teleosts (e.g. cichlids), corvid and psittacid birds, and cetaceans, elephants and primates. High levels of intelligence are invariantly bound to multimodal centres such as the mushroom bodies in insects, the vertical lobe in octopodids, the pallium in birds and the cerebral cortex in primates, all of which contain highly ordered associative neuronal networks. The driving forces for high intelligence may vary among the mentioned taxa, e.g. needs for spatial learning and foraging strategies in insects and cephalopods, for social learning in cichlids, instrumental learning and spatial orientation in birds and social as well as instrumental learning in primates. PMID:26554042

  14. Intelligent systems installed in building of research centre for research purposes

    NASA Astrophysics Data System (ADS)

    Matusov, Jozef; Mokry, Marian; Kolkova, Zuzana; Sedivy, Stefan

    2016-06-01

The attractiveness of intelligent buildings today is directly connected with a higher level of comfort and with economical energy consumption for heating, cooling, and the overall electricity use of electrical devices. Compared with conventional solutions, intelligent-building technologies allow dynamic optimization in real time and simplify operational reporting. Functionally, these systems can be divided into two areas: economically sophisticated residential systems that provide for the comfort of people in the building, and security features. The paper describes the intelligent systems installed in a building of the Research Centre. The building is equipped with the latest technology for the utilization of renewable energy, as well as the latest systems for controlling and driving all devices, which contribute to economical operation while achieving the highest thermal comfort and overall safety.

  15. The effect of compression and attention allocation on speech intelligibility. II

    NASA Astrophysics Data System (ADS)

    Choi, Sangsook; Carrell, Thomas

    2004-05-01

    Previous investigations of the effects of amplitude compression on measures of speech intelligibility have shown inconsistent results. Recently, a novel paradigm was used to investigate the possibility of more consistent findings with a measure of speech perception that is not based entirely on intelligibility (Choi and Carrell, 2003). That study exploited a dual-task paradigm using a pursuit rotor online visual-motor tracking task (Dlhopolsky, 2000) along with a word repetition task. Intensity-compressed words caused reduced performance on the tracking task as compared to uncompressed words when subjects engaged in a simultaneous word repetition task. This suggested an increased cognitive load when listeners processed compressed words. A stronger result might be obtained if a single resource (linguistic) is required rather than two (linguistic and visual-motor) resources. In the present experiment a visual lexical decision task and an auditory word repetition task were used. The visual stimuli for the lexical decision task were blurred and presented in a noise background. The compressed and uncompressed words for repetition were placed in speech-shaped noise. Participants with normal hearing and vision conducted word repetition and lexical decision tasks both independently and simultaneously. The pattern of results is discussed and compared to the previous study.

  16. An Integrated Architecture for Grounded Intelligence in Its Development, Experimental, Environmental, and Social Context

    DTIC Science & Technology

    2007-05-01

supervisory system lie core drives, such as hunger, boredom, attention-seeking, and other domain-specific drives (such as task success), modeled as scalar...the control of routing activities. Cognitive Neuropsychology, 17:297-338. [Davies and Stone, 1995] Davies, M. and Stone, T. (1995). Introduction. In...Thornton, I., J., P., and Shiffrar, M. (1998). The visual perception of human locomotion. Cognitive Neuropsychology, 15:535-552. [Wilson, 2001] Wilson

  17. The role of cognitive versus emotional intelligence in Iowa Gambling Task performance: What's emotion got to do with it?

    PubMed

    Webb, Christian A; DelDonno, Sophie; Killgore, William D S

    2014-01-01

    Debate persists regarding the relative role of cognitive versus emotional processes in driving successful performance on the widely used Iowa Gambling Task (IGT). From the time of its initial development, patterns of IGT performance were commonly interpreted as primarily reflecting implicit, emotion-based processes. Surprisingly, little research has tried to directly compare the extent to which measures tapping relevant cognitive versus emotional competencies predict IGT performance in the same study. The current investigation attempts to address this question by comparing patterns of associations between IGT performance, cognitive intelligence (Wechsler Abbreviated Scale of Intelligence; WASI) and three commonly employed measures of emotional intelligence (EI; Mayer-Salovey-Caruso Emotional Intelligence Test, MSCEIT; Bar-On Emotional Quotient Inventory, EQ-i; Self-Rated Emotional Intelligence Scale, SREIS). Results indicated that IGT performance was more strongly associated with cognitive, than emotional, intelligence. To the extent that the IGT indeed mimics "real-world" decision-making, our findings, coupled with the results of existing research, may highlight the role of deliberate, cognitive capacities over implicit, emotional processes in contributing to at least some domains of decision-making relevant to everyday life.

  19. Experimental research of flow servo-valve

    NASA Astrophysics Data System (ADS)

    Takosoglu, Jakub

Positional control of pneumatic drives is particularly important in pneumatic systems. Several methods of positioning pneumatic cylinders for changeover and tracking control are known; the choking (throttling) method is the most development-oriented and has the greatest potential. An optimal and effective control method, particularly for pneumatic drives, has long been sought, and sophisticated control systems with algorithms based on artificial intelligence methods are being designed for this purpose. Designing such a control algorithm requires knowledge of the real parameters of the servo-valves used in the control systems of electro-pneumatic servo-drives. The paper presents experimental research on a flow servo-valve.

  20. Effects on driving performance of interacting with an in-vehicle music player: a comparison of three interface layout concepts for information presentation.

    PubMed

    Mitsopoulos-Rubens, Eve; Trotter, Margaret J; Lenné, Michael G

    2011-05-01

Interface design is an important factor in assessing the potential effects on safety of interacting with an in-vehicle information system while driving. In the current study, the layout of information on a visual display was manipulated to explore its effect on driving performance in the context of music selection. The comparative effects of an auditory-verbal (cognitive) task were also explored. The driving performance of 30 participants was assessed under both baseline and dual task conditions using the Lane Change Test. Concurrent completion of the music selection task with driving resulted in significant impairment to lateral driving performance (mean lane deviation and percentage of correct lane changes) relative to the baseline, and significantly greater mean lane deviation relative to the combined driving and cognitive-task condition. The magnitude of these effects on driving performance was independent of layout concept, although significant differences in subjective workload estimates and performance on the music selection task across layout concepts highlight that uncertainty about how to use a design, as conveyed through its layout concept, could be disadvantageous. The implications of these results for interface design and safety are discussed. Copyright © 2010 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  1. Development of a Pitch Discrimination Screening Test for Preschool Children.

    PubMed

    Abramson, Maria Kulick; Lloyd, Peter J

    2016-04-01

There is a critical need for tests of auditory discrimination for young children as this skill plays a fundamental role in the development of speaking, prereading, reading, language, and more complex auditory processes. Frequency discrimination is important with regard to basic sensory processing affecting phonological processing, dyslexia, measurements of intelligence, auditory memory, Asperger syndrome, and specific language impairment. This study was performed to determine the clinical feasibility of the Pitch Discrimination Test (PDT) to screen the preschool child's ability to discriminate some of the acoustic demands of speech perception, primarily pitch discrimination, without linguistic content. The PDT used brief speech-frequency tones to gather normative data from preschool children aged 3 to 5 yrs. A cross-sectional study was used to gather data regarding the pitch discrimination abilities of a sample of typically developing preschool children between 3 and 5 yrs of age. The PDT consists of ten trials using two pure tones of 100-msec duration each, and was administered in an AA or AB forced-choice response format. Data from 90 typically developing preschool children between the ages of 3 and 5 yrs were used to provide normative data. Nonparametric Mann-Whitney U-testing was used to examine the effect of age as a continuous variable on pitch discrimination. The Kruskal-Wallis test was used to determine the significance of age on performance on the PDT. Spearman rank correlation was used to determine the correlation of age and performance on the PDT. Pitch discrimination of brief tones improved significantly from age 3 yrs to age 4 yrs, as well as from age 3 yrs to the combined 4- and 5-yr age group. Results indicated that between ages 3 and 4 yrs, children's auditory discrimination of pitch improved on the PDT. The data showed that children can be screened for auditory discrimination of pitch beginning at age 4 yrs.
The PDT proved to be a time efficient, feasible tool for a simple form of frequency discrimination screening in the preschool population before the age where other diagnostic tests of auditory processing disorders can be used. American Academy of Audiology.
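The nonparametric analyses named above can be sketched with scipy.stats on hypothetical score data; the score distributions and per-group counts below are invented for illustration, and only the test choices (Mann-Whitney U, Kruskal-Wallis, Spearman rank) follow the abstract:

```python
import numpy as np
from scipy.stats import kruskal, mannwhitneyu, spearmanr

rng = np.random.default_rng(2)
# Hypothetical PDT scores (correct trials out of 10) by age group,
# improving from age 3 to ages 4-5 as in the normative sample.
scores_3 = rng.binomial(10, 0.55, 30)
scores_4 = rng.binomial(10, 0.75, 30)
scores_5 = rng.binomial(10, 0.80, 30)

# Mann-Whitney U: are 3-yr-olds' scores stochastically lower than 4-yr-olds'?
u, p_u = mannwhitneyu(scores_3, scores_4, alternative="less")
# Kruskal-Wallis: does performance differ across the three age groups?
h, p_k = kruskal(scores_3, scores_4, scores_5)
# Spearman rank: monotonic association between age and score.
ages = np.repeat([3, 4, 5], 30)
rho, p_r = spearmanr(ages, np.concatenate([scores_3, scores_4, scores_5]))
print(f"Mann-Whitney p={p_u:.4f}, Kruskal-Wallis p={p_k:.4f}, rho={rho:.2f}")
```

These rank-based tests make no normality assumption, which suits bounded trial counts like a 10-item screening score.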

  2. Same or different? Clarifying the relationship of need for cognition to personality and intelligence.

    PubMed

    Fleischhauer, Monika; Enge, Sören; Brocke, Burkhard; Ullrich, Johannes; Strobel, Alexander; Strobel, Anja

    2010-01-01

    Need for cognition (NFC) refers to an individual's tendency to engage in and enjoy effortful cognitive processing. So far, little attention has been paid to a systematic evaluation of the distinctiveness of NFC from traits with similar conceptualization and from intelligence. The present research contributes to filling this gap by examining the relation of NFC to well-established personality concepts (Study 1) and to a comprehensive measure of intelligence in a sample with broad educational backgrounds (Study 2). We observed NFC to be positively correlated with openness, emotional stability, and traits indicating goal orientation. Using confirmatory factor analysis and event-related potentials, incremental validity of NFC and openness to ideas was demonstrated, showing that NFC is more predictive of drive-related and goal-oriented behavior and attentional resource allocation. Regarding intelligence, NFC was more associated with fluid than with crystallized aspects of intelligence. Altogether, the results provide strong support for the conceptual autonomy of NFC.

  3. Expanding the phenotypic profile of Kleefstra syndrome: A female with low-average intelligence and childhood apraxia of speech.

    PubMed

    Samango-Sprouse, Carole; Lawson, Patrick; Sprouse, Courtney; Stapleton, Emily; Sadeghin, Teresa; Gropman, Andrea

    2016-05-01

Kleefstra syndrome (KS) is a rare neurogenetic disorder most commonly caused by deletion in the 9q34.3 chromosomal region and is associated with intellectual disabilities, severe speech delay, and motor planning deficits. To our knowledge, this is the first patient (PQ, a 6-year-old female) with a 9q34.3 deletion who has near normal intelligence, and developmental dyspraxia with childhood apraxia of speech (CAS). At 6, the Wechsler Preschool and Primary Intelligence testing (WPPSI-III) revealed a Verbal IQ of 81 and Performance IQ of 79. The Beery Buktenica Test of Visual Motor Integration, 5th Edition (VMI) indicated severe visual motor deficits: VMI = 51; Visual Perception = 48; Motor Coordination < 45. On the Receptive One Word Picture Vocabulary Test-R (ROWPVT-R), she had standard scores of 96 and 99, in contrast to Expressive One Word Picture Vocabulary Test-R (EOWPVT-R) standard scores of 73 and 82, revealing a discrepancy between vocabulary domains on both evaluations. The Preschool Language Scale-4 (PLS-4) at PQ's first evaluation revealed a significant difference between auditory comprehension and expressive communication, with standard scores of 78 and 57, respectively, further supporting the presence of CAS. This patient's near normal intelligence expands the phenotypic profile as well as the prognosis associated with KS. The identification of CAS in this patient provides a novel explanation for the previously reported speech delay and expressive language disorder. Further research is warranted on the impact of CAS on intelligence and behavioral outcome in KS. Therapeutic and prognostic implications are discussed. © 2016 Wiley Periodicals, Inc.

  4. OBLIQUE SHOWING NORTHEAST END AND NORTHWEST SIDE. FACILITY 252 PORTION ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    OBLIQUE SHOWING NORTHEAST END AND NORTHWEST SIDE. FACILITY 252 PORTION OF BUILDING IS ON LEFT. - U.S. Naval Base, Pearl Harbor, Combat Intelligence Center, Makalapa Drive in Makalapa Administration Area, Pearl City, Honolulu County, HI

  5. OBLIQUE OF SOUTHWEST END AND SOUTHEAST SIDE, WITH ADJACENT FACILITY ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    OBLIQUE OF SOUTHWEST END AND SOUTHEAST SIDE, WITH ADJACENT FACILITY 391 IN THE FOREGROUND. - U.S. Naval Base, Pearl Harbor, Joint Intelligence Center, Makalapa Drive in Makalapa Administration Area, Pearl City, Honolulu County, HI

  6. OBLIQUE OF THE NORTHEAST END (MAIN ENTRY) AND NORTHWEST SIDE, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    OBLIQUE OF THE NORTHEAST END (MAIN ENTRY) AND NORTHWEST SIDE, WITH FACILITY 346 ON LEFT. - U.S. Naval Base, Pearl Harbor, Joint Intelligence Center, Makalapa Drive in Makalapa Administration Area, Pearl City, Honolulu County, HI

  7. Analysis of older driver safety interventions : a human factors taxonomic approach

    DOT National Transportation Integrated Search

    1999-03-01

The careful application of human factors design principles and guidelines is integral to the development of safe, efficient and usable Intelligent Transportation Systems (ITS). One segment of the driving population that may significantly benefit ...

  8. Revolutionary Propulsion Systems for 21st Century Aviation

    NASA Technical Reports Server (NTRS)

    Sehra, Arun K.; Shin, Jaiwon

    2003-01-01

Air transportation for the new millennium will require revolutionary solutions to meet public demand for improving safety, reliability, environmental compatibility, and affordability. NASA's vision for 21st Century Aircraft is to develop propulsion systems that are intelligent, virtually inaudible (outside the airport boundaries), and have near-zero harmful emissions (CO2 and NOx). This vision includes intelligent engines that will be capable of adapting to changing internal and external conditions to optimally accomplish the mission with minimal human intervention. Distributed vectored propulsion will replace two to four wing-mounted or fuselage-mounted engines with a large number of small, mini, or micro engines, and electric-drive propulsion based on fuel cell power will generate electric power, which in turn will drive propulsors to produce the desired thrust. Such a system will completely eliminate harmful emissions. This paper reviews future propulsion and power concepts that are currently under development at NASA Glenn Research Center.

  9. The RetroX auditory implant for high-frequency hearing loss.

    PubMed

    Garin, P; Genard, F; Galle, C; Jamart, J

    2004-07-01

The objective of this study was to analyze the subjective satisfaction and measure the hearing gain provided by the RetroX (Auric GmbH, Rheine, Germany), an auditory implant of the external ear. We conducted a retrospective case review at a tertiary referral center at a university hospital. We studied 10 adults with high-frequency sensorineural hearing loss (ski-slope audiogram). The RetroX consists of an electronic unit sited in the postaural sulcus connected to a titanium tube implanted under the auricle between the sulcus and the entrance of the external auditory canal. Implanting requires only minor surgery under local anesthesia. Main outcome measures were a satisfaction questionnaire, pure-tone audiometry in quiet, speech audiometry in quiet, speech audiometry in noise, and azimuth audiometry (hearing threshold as a function of sound source location within the horizontal plane at ear level). Subjectively, all 10 patients are satisfied or even extremely satisfied with the hearing improvement provided by the RetroX. They wear the implant daily, from morning to evening. We observe a statistically significant improvement of pure-tone thresholds at 1, 2, and 4 kHz. In quiet, the speech reception threshold improves by 9 dB. Speech audiometry in noise shows that intelligibility improves by 26% for a signal-to-noise ratio of -5 dB, by 18% for a signal-to-noise ratio of 0 dB, and by 13% for a signal-to-noise ratio of +5 dB. Localization audiometry indicates that the skull masks sound contralateral to the implanted ear. Of the 10 patients, one had acoustic feedback and one presented with a granulomatous reaction to the foreign body that necessitated removing the implant. The RetroX auditory implant is a semi-implantable hearing aid without occlusion of the external auditory canal. It provides a new therapeutic alternative for managing high-frequency hearing loss.

  10. Audio-visual speech processing in age-related hearing loss: Stronger integration and increased frontal lobe recruitment.

    PubMed

    Rosemann, Stephanie; Thiel, Christiane M

    2018-07-15

Hearing loss is associated with difficulties in understanding speech, especially under adverse listening conditions. In these situations, seeing the speaker improves speech intelligibility in hearing-impaired participants. On the neuronal level, previous research has shown cross-modal plastic reorganization in the auditory cortex following hearing loss, leading to altered processing of auditory, visual and audio-visual information. However, how reduced auditory input affects audio-visual speech perception in hearing-impaired subjects is largely unknown. We here investigated the impact of mild to moderate age-related hearing loss on processing audio-visual speech using functional magnetic resonance imaging. Normal-hearing and hearing-impaired participants performed two audio-visual speech integration tasks: a sentence detection task inside the scanner and the McGurk illusion outside the scanner. Both tasks consisted of congruent and incongruent audio-visual conditions, as well as auditory-only and visual-only conditions. We found a significantly stronger McGurk illusion in the hearing-impaired participants, which indicates stronger audio-visual integration. Neurally, hearing loss was associated with an increased recruitment of frontal brain areas when processing incongruent audio-visual, auditory and also visual speech stimuli, which may reflect the increased effort to perform the task. Hearing loss modulated both the audio-visual integration strength measured with the McGurk illusion and brain activation in frontal areas in the sentence task, showing stronger integration and higher brain activation with increasing hearing loss. Incongruent compared to congruent audio-visual speech revealed an opposite brain activation pattern in left ventral postcentral gyrus in both groups, with higher activation in hearing-impaired participants in the incongruent condition.
Our results indicate that already mild to moderate hearing loss impacts audio-visual speech processing accompanied by changes in brain activation particularly involving frontal areas. These changes are modulated by the extent of hearing loss. Copyright © 2018 Elsevier Inc. All rights reserved.

  11. The influence of music on mental effort and driving performance.

    PubMed

    Ünal, Ayça Berfu; Steg, Linda; Epstude, Kai

    2012-09-01

The current research examined the influence of loud music on driving performance, and whether mental effort mediated this effect. Participants (N=69) drove in a driving simulator either with or without listening to music. In order to test whether music would have similar effects on driving performance in different situations, we manipulated the simulated traffic environment such that the driving context consisted of both complex and monotonous driving situations. In addition, we systematically kept track of drivers' mental load by making the participants verbally report their mental effort at certain moments while driving. We found that listening to music increased mental effort while driving, irrespective of the driving situation being complex or monotonous, providing support to the general assumption that music can be a distracting auditory stimulus while driving. However, drivers who listened to music performed as well as the drivers who did not listen to music, indicating that music did not impair their driving performance. Importantly, the increases in mental effort while listening to music indicated that drivers try to regulate their mental effort as a cognitive compensatory strategy to deal with task demands. Interestingly, we observed significant improvements in driving performance in two of the driving situations. It seems that mental effort might mediate the effect of music on driving performance in situations requiring sustained attention. Other process variables, such as arousal and boredom, should also be incorporated into study designs in order to reveal more about the nature of how music affects driving. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Long-term neurocognitive outcome and auditory event-related potentials after complex febrile seizures in children.

    PubMed

    Tsai, Min-Lan; Hung, Kun-Long; Tsan, Ying-Ying; Tung, William Tao-Hsin

    2015-06-01

Whether prolonged or complex febrile seizures (FS) produce long-term injury to the hippocampus is a critical question concerning the neurocognitive outcome of these seizures. Event-related potential (ERP) recording from the scalp is a noninvasive technique reflecting the sensory and cognitive processes associated with attention tasks. This study aimed to investigate the long-term outcome of neurocognitive and attention functions and to evaluate auditory event-related potentials in children who have experienced complex FS in comparison with other types of FS. One hundred and forty-seven children aged more than 6 years who had experienced complex FS, simple single FS, simple recurrent FS, or afebrile seizures (AFS) after FS and age-matched healthy controls were enrolled. Patients were evaluated with Wechsler Intelligence Scale for Children (WISC; Chinese WISC-IV) scores, behavior test scores (Chinese version of Conners' continuous performance test, CPT II V.5), and behavior rating scales. Auditory ERPs were recorded in each patient. Patients who had experienced complex FS exhibited significantly lower full-scale intelligence quotient (FSIQ), perceptual reasoning index, and working memory index scores than did the control group but did not show significant differences in CPT scores, behavior rating scales, or ERP latencies and amplitude compared with the other groups with FS. We found a significant decrease in the FSIQ and four indices of the WISC-IV, higher behavior rating scales, a trend of increased CPT II scores, and significantly delayed P300 latency and reduced P300 amplitude in the patients with AFS after FS. We conclude that there is an effect on cognitive function in children who have experienced complex FS and patients who developed AFS after FS. The results indicated that the WISC-IV is more sensitive in detecting cognitive abnormality than ERP.
Cognition impairment, including perceptual reasoning and working memory defects, was identified in patients with prolonged, multiple, or focal FS. These results may have implications for the pathogenesis of complex FS. Further comprehensive psychological evaluation and educational programs are suggested. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Speech processing: from peripheral to hemispheric asymmetry of the auditory system.

    PubMed

    Lazard, Diane S; Collette, Jean-Louis; Perrot, Xavier

    2012-01-01

Language processing from the cochlea to auditory association cortices shows side-dependent specificities with an apparent left hemispheric dominance. The aim of this article was to propose to nonspeech specialists a didactic review of two complementary theories about hemispheric asymmetry in speech processing. Starting from anatomico-physiological and clinical observations of auditory asymmetry and interhemispheric connections, this review then presents behavioral (dichotic listening paradigm) as well as functional (functional magnetic resonance imaging and positron emission tomography) experiments that assessed hemispheric specialization for speech processing. Even though speech at an early phonological level is regarded as being processed bilaterally, a left-hemispheric dominance exists for higher-level processing. This asymmetry may arise from a segregation of the speech signal, broken apart within nonprimary auditory areas in two distinct temporal integration windows--a fast one on the left and a slower one on the right--modeled through the asymmetric sampling in time theory or a spectro-temporal trade-off, with a higher temporal resolution in the left hemisphere and a higher spectral resolution in the right hemisphere, modeled through the spectral/temporal resolution trade-off theory. Both theories deal with the concept that lower-order tuning principles for acoustic signal might drive higher-order organization for speech processing. However, the precise nature, mechanisms, and origin of speech processing asymmetry are still being debated. Finally, an example of hemispheric asymmetry alteration, which has direct clinical implications, is given through the case of auditory aging that mixes peripheral disorder and modifications of central processing. Copyright © 2011 The American Laryngological, Rhinological, and Otological Society, Inc.

  14. Network Receptive Field Modeling Reveals Extensive Integration and Multi-feature Selectivity in Auditory Cortical Neurons.

    PubMed

    Harper, Nicol S; Schoppe, Oliver; Willmore, Ben D B; Cui, Zhanfeng; Schnupp, Jan W H; King, Andrew J

    2016-11-01

Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex, neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1-7 sub-receptive fields that interact nonlinearly, consistently better predicts neural responses to auditory stimuli than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context.
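As a rough illustration of the modeling approach described above (a toy sketch, not the authors' fitting procedure or data; the stimulus, target neuron, and all parameter values are invented), a "network receptive field" with a few nonlinearly interacting sub-fields can be fit by gradient descent and compared against an ordinary linear receptive field:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stimulus: flattened spectrogram patches (trials x features).
n_trials, n_feat = 2000, 40
X = rng.normal(size=(n_trials, n_feat))

# Toy "conjunctive" neuron: responds only when two spectrotemporal
# features are present together (an AND-like nonlinearity).
w1 = np.zeros(n_feat); w1[:5] = 1.0
w2 = np.zeros(n_feat); w2[20:25] = 1.0
y = np.maximum(X @ w1, 0.0) * np.maximum(X @ w2, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# "Network receptive field": a few nonlinear sub-fields (hidden units)
# whose outputs are combined linearly; trained by plain gradient descent.
n_hidden = 4
W = rng.normal(scale=0.1, size=(n_feat, n_hidden))
v = rng.normal(scale=0.1, size=n_hidden)
lr, losses = 0.05, []
for _ in range(500):
    H = sigmoid(X @ W)                     # sub-field activations
    err = H @ v - y
    losses.append(float(np.mean(err ** 2)))
    grad_v = H.T @ err / n_trials          # backprop, one hidden layer
    grad_H = np.outer(err, v) * H * (1.0 - H)
    W -= lr * (X.T @ grad_H / n_trials)
    v -= lr * grad_v

# Linear receptive field baseline: ordinary least squares on same data.
w_lin, *_ = np.linalg.lstsq(X, y, rcond=None)
mse_lin = float(np.mean((X @ w_lin - y) ** 2))
print(losses[0], losses[-1], mse_lin)
```

Because the toy target is conjunctive, a purely linear fit cannot capture the feature interaction, which is the motivation the abstract gives for the network model.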

  15. Network Receptive Field Modeling Reveals Extensive Integration and Multi-feature Selectivity in Auditory Cortical Neurons

    PubMed Central

    Willmore, Ben D. B.; Cui, Zhanfeng; Schnupp, Jan W. H.; King, Andrew J.

    2016-01-01

Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex, neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1–7 sub-receptive fields that interact nonlinearly, consistently better predicts neural responses to auditory stimuli than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context. PMID:27835647

  16. Design and validation of an intelligent wheelchair towards a clinically-functional outcome

    PubMed Central

    2013-01-01

Background Many people with mobility impairments, who require the use of powered wheelchairs, have difficulty completing basic maneuvering tasks during their activities of daily living (ADL). In order to provide assistance to this population, robotic and intelligent system technologies have been used to design an intelligent powered wheelchair (IPW). This paper provides a comprehensive overview of the design and validation of the IPW. Methods The main contributions of this work are three-fold. First, we present a software architecture for robot navigation and control in constrained spaces. Second, we describe a decision-theoretic approach for achieving robust speech-based control of the intelligent wheelchair. Third, we present an evaluation protocol motivated by a meaningful clinical outcome, in the form of the Robotic Wheelchair Skills Test (RWST). This allows us to perform a thorough characterization of the performance and safety of the system, involving 17 test subjects (8 non-PW users, 9 regular PW users), 32 complete RWST sessions, 25 total hours of testing, and 9 kilometers of total running distance. Results User tests with the RWST show that the navigation architecture reduced collisions by more than 60% compared to other recent intelligent wheelchair platforms. On the tasks of the RWST, we measured an average decrease of 4% in performance score and 3% in safety score (not statistically significant), compared to the scores obtained with the conventional driving mode. This analysis was performed with regular users that had over 6 years of wheelchair driving experience, compared to approximately one half-hour of training with the autonomous mode. Conclusions The platform tested in these experiments is among the most experimentally validated robotic wheelchairs in realistic contexts. The results establish that proficient powered wheelchair users can achieve the same level of performance with the intelligent command mode as with the conventional command mode.
PMID:23773851

  17. Assessing underwater noise levels during pile-driving at an offshore windfarm and its potential effects on marine mammals.

    PubMed

    Bailey, Helen; Senior, Bridget; Simmons, Dave; Rusin, Jan; Picken, Gordon; Thompson, Paul M

    2010-06-01

Marine renewable developments have raised concerns over impacts of underwater noise on marine species, particularly from pile-driving for wind turbines. Environmental assessments typically use generic sound propagation models, but empirical tests of these models are lacking. In 2006, two 5 MW wind turbines were installed off NE Scotland. The turbines were in deep (>40 m) water, 25 km from the Moray Firth Special Area of Conservation (SAC), potentially affecting a protected population of bottlenose dolphins. We measured pile-driving noise at distances from 0.1 km (maximum broadband peak-to-peak sound level 205 dB re 1 μPa) to 80 km (no longer distinguishable above background noise). These sound levels were related to noise exposure criteria for marine mammals to assess possible effects. For bottlenose dolphins, auditory injury would only have occurred within 100 m of the pile-driving, and behavioural disturbance, defined as modifications in behaviour, could have occurred up to 50 km away.
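For orientation, the distance-dependent levels reported above follow the general shape of a standard underwater transmission-loss model. A hedged sketch using a generic practical-spreading formula (not the authors' site-specific propagation model; the spreading coefficient and the back-extrapolated source level below are illustrative assumptions):

```python
import math

def received_level(sl_db, r_m, k=15.0, alpha_db_per_km=0.0):
    """Generic transmission loss: RL = SL - k*log10(r) - alpha*r.
    k = 20 (spherical), 10 (cylindrical), ~15 ("practical" spreading);
    real values are site-specific and frequency-dependent."""
    return sl_db - k * math.log10(r_m) - alpha_db_per_km * r_m / 1000.0

# Back-extrapolate an illustrative source level (dB re 1 uPa at 1 m)
# from the reported ~205 dB measurement at 100 m, then project outward.
sl = 205.0 + 15.0 * math.log10(100.0)
for r in (100, 1_000, 10_000, 50_000):
    print(f"{r:>6} m : {received_level(sl, r):6.1f} dB re 1 uPa")
```

In practice such a formula would be fit to field measurements like those in the study, since seabed, bathymetry, and frequency content all shift both the coefficient and the absorption term.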

  18. Convergent evolution of complex brains and high intelligence.

    PubMed

    Roth, Gerhard

    2015-12-19

    Within the animal kingdom, complex brains and high intelligence have evolved several to many times independently, e.g. among ecdysozoans in some groups of insects (e.g. blattoid, dipteran, hymenopteran taxa), among lophotrochozoans in octopodid molluscs, among vertebrates in teleosts (e.g. cichlids), corvid and psittacid birds, and cetaceans, elephants and primates. High levels of intelligence are invariantly bound to multimodal centres such as the mushroom bodies in insects, the vertical lobe in octopodids, the pallium in birds and the cerebral cortex in primates, all of which contain highly ordered associative neuronal networks. The driving forces for high intelligence may vary among the mentioned taxa, e.g. needs for spatial learning and foraging strategies in insects and cephalopods, for social learning in cichlids, instrumental learning and spatial orientation in birds and social as well as instrumental learning in primates. © 2015 The Author(s).

  19. Neuropsychological assessment of driving ability and self-evaluation: a comparison between driving offenders and a control group.

    PubMed

    Zingg, Christina; Puelschen, Dietrich; Soyka, Michael

    2009-12-01

The relationship between performance in neuropsychological tests and actual driving performance is unclear and results of studies on this topic differ. This makes it difficult to use neuropsychological tests to assess driving ability. The ability to compensate cognitive deficits plays a crucial role in this context. We compared neuropsychological test results and self-evaluation ratings between three groups: driving offenders with a psychiatric diagnosis relevant for driving ability (mainly alcohol dependence), driving offenders without such a diagnosis and a control group of non-offending drivers. Subjects were divided into two age categories (19-39 and 40-66 years). It was assumed that drivers with a psychiatric diagnosis relevant for driving ability and younger driving offenders without a psychiatric diagnosis would be less able to adequately assess their own capabilities than the control group. The driving offenders with a psychiatric diagnosis showed poorer concentration, reactivity, cognitive flexibility and problem solving, and tended to overassess their abilities in intelligence and attentional functions, compared to the other two groups. Conversely, younger drivers tended to underassess their performance.

  20. OBLIQUE OF NORTHEAST END WITH FACILITY 252 PORTION OF BUILDING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    OBLIQUE OF NORTHEAST END WITH FACILITY 252 PORTION OF BUILDING (FIRST-FLOOR CONCRETE PORTION) IN FOREGROUND. - U.S. Naval Base, Pearl Harbor, Combat Intelligence Center, Makalapa Drive in Makalapa Administration Area, Pearl City, Honolulu County, HI

  1. Spatial and temporal relationships of electrocorticographic alpha and gamma activity during auditory processing.

    PubMed

    Potes, Cristhian; Brunner, Peter; Gunduz, Aysegul; Knight, Robert T; Schalk, Gerwin

    2014-08-15

Neuroimaging approaches have implicated multiple brain sites in musical perception, including the posterior part of the superior temporal gyrus and adjacent perisylvian areas. However, the detailed spatial and temporal relationship of neural signals that support auditory processing is largely unknown. In this study, we applied a novel inter-subject analysis approach to electrophysiological signals recorded from the surface of the brain (electrocorticography, ECoG) in ten human subjects. This approach allowed us to reliably identify those ECoG features that were related to the processing of a complex auditory stimulus (i.e., continuous piece of music) and to investigate their spatial, temporal, and causal relationships. Our results identified stimulus-related modulations in the alpha (8-12 Hz) and high gamma (70-110 Hz) bands at neuroanatomical locations implicated in auditory processing. Specifically, we identified stimulus-related ECoG modulations in the alpha band in areas adjacent to primary auditory cortex, which are known to receive afferent auditory projections from the thalamus (80 of a total of 15,107 tested sites). In contrast, we identified stimulus-related ECoG modulations in the high gamma band not only in areas close to primary auditory cortex but also in other perisylvian areas known to be involved in higher-order auditory processing, and in superior premotor cortex (412/15,107 sites). Across all implicated areas, modulations in the high gamma band preceded those in the alpha band by 280 ms, and activity in the high gamma band causally predicted alpha activity, but not vice versa (Granger causality, p < 1e-8). Additionally, detailed analyses using Granger causality identified causal relationships of high gamma activity between distinct locations in early auditory pathways within superior temporal gyrus (STG) and posterior STG, between posterior STG and inferior frontal cortex, and between STG and premotor cortex.
Evidence suggests that these relationships reflect direct cortico-cortical connections rather than common driving input from subcortical structures such as the thalamus. In summary, our inter-subject analyses defined the spatial and temporal relationships between music-related brain activity in the alpha and high gamma bands. They provide experimental evidence supporting current theories about the putative mechanisms of alpha and gamma activity, i.e., reflections of thalamo-cortical interactions and local cortical neural activity, respectively, and the results are also in agreement with existing functional models of auditory processing. Copyright © 2014 Elsevier Inc. All rights reserved.
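A minimal sketch of the directionality analysis mentioned above: a pairwise Granger test asks whether the past of one band envelope improves prediction of another beyond that signal's own past. This is a generic OLS implementation on synthetic data, not the study's pipeline; lag count, coupling strength, and noise levels are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def granger_F(x, y, p=4):
    """F statistic for 'past of x improves prediction of y', p lags, plain OLS."""
    n = len(y)
    Y = y[p:]
    lag_y = np.column_stack([y[p - i:n - i] for i in range(1, p + 1)])
    lag_x = np.column_stack([x[p - i:n - i] for i in range(1, p + 1)])
    ones = np.ones((len(Y), 1))
    Xr = np.hstack([ones, lag_y])          # restricted: y's own past only
    Xf = np.hstack([ones, lag_y, lag_x])   # full: plus x's past
    def rss(A):
        beta, *_ = np.linalg.lstsq(A, Y, rcond=None)
        return float(np.sum((Y - A @ beta) ** 2))
    rss_r, rss_f = rss(Xr), rss(Xf)
    dof = len(Y) - Xf.shape[1]
    return ((rss_r - rss_f) / p) / (rss_f / dof)

# Synthetic band envelopes: 'alpha' follows 'gamma' at a 2-sample lag
# (toy data mimicking the reported gamma -> alpha directionality).
n = 2000
gamma = rng.normal(size=n)
alpha = np.empty(n)
alpha[:2] = rng.normal(size=2)
for t in range(2, n):
    alpha[t] = 0.6 * gamma[t - 2] + 0.3 * alpha[t - 1] + 0.5 * rng.normal()

print(granger_F(gamma, alpha), granger_F(alpha, gamma))
```

On these toy series the gamma-to-alpha F statistic is large while the reverse is near chance, reproducing in miniature the asymmetry the abstract reports.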

  2. Neural Substrates of Auditory Emotion Recognition Deficits in Schizophrenia.

    PubMed

    Kantrowitz, Joshua T; Hoptman, Matthew J; Leitman, David I; Moreno-Ortega, Marta; Lehrfeld, Jonathan M; Dias, Elisa; Sehatpour, Pejman; Laukka, Petri; Silipo, Gail; Javitt, Daniel C

    2015-11-04

    Deficits in auditory emotion recognition (AER) are a core feature of schizophrenia and a key component of social cognitive impairment. AER deficits are tied behaviorally to impaired ability to interpret tonal ("prosodic") features of speech that normally convey emotion, such as modulations in base pitch (F0M) and pitch variability (F0SD). These modulations can be recreated using synthetic frequency modulated (FM) tones that mimic the prosodic contours of specific emotional stimuli. The present study investigates neural mechanisms underlying impaired AER using a combined event-related potential/resting-state functional connectivity (rsfMRI) approach in 84 schizophrenia/schizoaffective disorder patients and 66 healthy comparison subjects. Mismatch negativity (MMN) to FM tones was assessed in 43 patients/36 controls. rsfMRI between auditory cortex and medial temporal (insula) regions was assessed in 55 patients/51 controls. The relationship between AER, MMN to FM tones, and rsfMRI was assessed in the subset who performed all assessments (14 patients, 21 controls). As predicted, patients showed robust reductions in MMN across FM stimulus type (p = 0.005), particularly to modulations in F0M, along with impairments in AER and FM tone discrimination. MMN source analysis indicated dipoles in both auditory cortex and anterior insula, whereas rsfMRI analyses showed reduced auditory-insula connectivity. MMN to FM tones and functional connectivity together accounted for ∼50% of the variance in AER performance across individuals. These findings demonstrate that impaired preattentive processing of tonal information and reduced auditory-insula connectivity are critical determinants of social cognitive dysfunction in schizophrenia, and thus represent key targets for future research and clinical intervention. 
Schizophrenia patients show deficits in the ability to infer emotion based upon tone of voice [auditory emotion recognition (AER)] that drive impairments in social cognition and global functional outcome. This study evaluated neural substrates of impaired AER in schizophrenia using a combined event-related potential/resting-state fMRI approach. Patients showed impaired mismatch negativity response to emotionally relevant frequency modulated tones along with impaired functional connectivity between auditory and medial temporal (anterior insula) cortex. These deficits contributed in parallel to impaired AER and accounted for ∼50% of variance in AER performance. Overall, these findings demonstrate the importance of both auditory-level dysfunction and impaired auditory/insula connectivity in the pathophysiology of social cognitive dysfunction in schizophrenia. Copyright © 2015 the authors.
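The prosody-mimicking FM stimuli described above can be sketched as a sinusoid whose instantaneous frequency follows a slow contour with a specified mean (F0M) and variability (F0SD). The modulation rate, duration, and sample rate below are illustrative assumptions, not the study's stimulus parameters:

```python
import numpy as np

def fm_tone(f0_mean, f0_sd, mod_rate_hz=4.0, dur_s=1.0, fs=16000):
    """Sine tone whose instantaneous frequency follows a slow contour with
    mean f0_mean (F0M) and standard deviation f0_sd (F0SD)."""
    t = np.arange(int(dur_s * fs)) / fs
    # Slow sinusoidal pitch contour; the sd of a sinusoid is 1/sqrt(2) of
    # its amplitude, so scale by sqrt(2) to hit the requested F0SD.
    contour = f0_mean + f0_sd * np.sqrt(2.0) * np.sin(2 * np.pi * mod_rate_hz * t)
    phase = 2 * np.pi * np.cumsum(contour) / fs   # integrate instantaneous freq
    return contour, np.sin(phase)

contour, tone = fm_tone(f0_mean=200.0, f0_sd=20.0)
print(round(contour.mean(), 1), round(contour.std(), 1))  # → 200.0 20.0
```

Varying `f0_mean` and `f0_sd` independently is what lets such stimuli isolate base-pitch modulation (F0M) from pitch-variability modulation (F0SD) in discrimination tasks.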

  3. Toward a Nonspeech Test of Auditory Cognition: Semantic Context Effects in Environmental Sound Identification in Adults of Varying Age and Hearing Abilities

    PubMed Central

    Sheft, Stanley; Norris, Molly; Spanos, George; Radasevich, Katherine; Formsma, Paige; Gygi, Brian

    2016-01-01

    Objective Sounds in everyday environments tend to follow one another as events unfold over time. The tacit knowledge of contextual relationships among environmental sounds can influence their perception. We examined the effect of semantic context on the identification of sequences of environmental sounds by adults of varying age and hearing abilities, with an aim to develop a nonspeech test of auditory cognition. Method The familiar environmental sound test (FEST) consisted of 25 individual sounds arranged into ten five-sound sequences: five contextually coherent and five incoherent. After hearing each sequence, listeners identified each sound and arranged them in the presentation order. FEST was administered to young normal-hearing, middle-to-older normal-hearing, and middle-to-older hearing-impaired adults (Experiment 1), and to postlingual cochlear-implant users and young normal-hearing adults tested through vocoder-simulated implants (Experiment 2). Results FEST scores revealed a strong positive effect of semantic context in all listener groups, with young normal-hearing listeners outperforming other groups. FEST scores also correlated with other measures of cognitive ability, and for CI users, with the intelligibility of speech-in-noise. Conclusions Being sensitive to semantic context effects, FEST can serve as a nonspeech test of auditory cognition for diverse listener populations to assess and potentially improve everyday listening skills. PMID:27893791

  4. An intelligent tutoring system for the investigation of high performance skill acquisition

    NASA Technical Reports Server (NTRS)

    Fink, Pamela K.; Herren, L. Tandy; Regian, J. Wesley

    1991-01-01

The issue of training high performance skills is of increasing concern. These skills include tasks such as driving a car, playing the piano, and flying an aircraft. Traditionally, the training of high performance skills has been accomplished through the use of expensive, high-fidelity, 3-D simulators, and/or on-the-job training using the actual equipment. Such an approach to training is quite expensive. This paper describes the design, implementation, and deployment of an intelligent tutoring system developed to study the effectiveness of skill acquisition using lower-cost, lower-physical-fidelity, 2-D simulation. Preliminary experimental results are quite encouraging, indicating that intelligent tutoring systems are a cost-effective means of training high performance skills.

  5. Cueing listeners to attend to a target talker progressively improves word report as the duration of the cue-target interval lengthens to 2,000 ms.

    PubMed

    Holmes, Emma; Kitterick, Padraig T; Summerfield, A Quentin

    2018-04-25

    Endogenous attention is typically studied by presenting instructive cues in advance of a target stimulus array. For endogenous visual attention, task performance improves as the duration of the cue-target interval increases up to 800 ms. Less is known about how endogenous auditory attention unfolds over time or the mechanisms by which an instructive cue presented in advance of an auditory array improves performance. The current experiment used five cue-target intervals (0, 250, 500, 1,000, and 2,000 ms) to compare four hypotheses for how preparatory attention develops over time in a multi-talker listening task. Young adults were cued to attend to a target talker who spoke in a mixture of three talkers. Visual cues indicated the target talker's spatial location or their gender. Participants directed attention to location and gender simultaneously ("objects") at all cue-target intervals. Participants were consistently faster and more accurate at reporting words spoken by the target talker when the cue-target interval was 2,000 ms than 0 ms. In addition, the latency of correct responses progressively shortened as the duration of the cue-target interval increased from 0 to 2,000 ms. These findings suggest that the mechanisms involved in preparatory auditory attention develop gradually over time, taking at least 2,000 ms to reach optimal configuration, yet providing cumulative improvements in speech intelligibility as the duration of the cue-target interval increases from 0 to 2,000 ms. These results demonstrate an improvement in performance for cue-target intervals longer than those that have been reported previously in the visual or auditory modalities.

  6. Is Intelligent Speed Adaptation ready for deployment?

    PubMed

    Carsten, Oliver

    2012-09-01

    There have been 30 years of research on Intelligent Speed Adaptation (ISA), the in-vehicle system that is designed to promote compliance with speed limits. Extensive trials of ISA in real-world driving have shown that ISA can significantly reduce speeding, users have been found to have generally positive attitudes and at least some sections of the public have been shown to be willing to purchase ISA systems. Yet large-scale deployment of a system that could deliver huge accident reductions is still by no means guaranteed. Copyright © 2012. Published by Elsevier Ltd.

  7. Concept of Operations for Integrated Intelligent Flight Deck Displays and Decision Support Technologies

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Prinzel, Lawrence J.; Kramer, Lynda J.; Young, Steve D.

    2011-01-01

The document describes a Concept of Operations for Flight Deck Display and Decision Support technologies which may help enable emerging Next Generation Air Transportation System capabilities while also maintaining, or improving upon, flight safety. This concept of operations is used as the driving function within a spiral program of research, development, test, and evaluation for the Integrated Intelligent Flight Deck (IIFD) project. As such, the concept will be updated at each cycle within the spiral to reflect the latest research results and emerging developments.

  8. DETAIL OF EAVES AND HOODS OVER WINDOWS ON NORTHEAST END ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL OF EAVES AND HOODS OVER WINDOWS ON NORTHEAST END OF NORTHWEST SIDE, WITH SEABEE STATUE IN BACKGROUND. - U.S. Naval Base, Pearl Harbor, Joint Intelligence Center, Makalapa Drive in Makalapa Administration Area, Pearl City, Honolulu County, HI

9. The 13th Annual Intelligent Ground Vehicle Competition: intelligent ground vehicles created by intelligent teams

    NASA Astrophysics Data System (ADS)

    Theisen, Bernard L.

    2005-10-01

The Intelligent Ground Vehicle Competition (IGVC) is one of three unmanned-systems student competitions that were founded by the Association for Unmanned Vehicle Systems International (AUVSI) in the 1990s. The IGVC is a multidisciplinary exercise in product realization that challenges college engineering student teams to integrate advanced control theory, machine vision, vehicular electronics, and mobile platform fundamentals to design and build an unmanned system. Teams from around the world focus on developing a suite of dual-use technologies to equip ground vehicles of the future with intelligent driving capabilities. Over the past 13 years, the competition has challenged undergraduate, graduate and Ph.D. students with real world applications in intelligent transportation systems, the military and manufacturing automation. To date, teams from over 50 universities and colleges have participated. This paper describes some of the applications of the technologies required by this competition and discusses the educational benefits. The primary goal of the IGVC is to advance engineering education in intelligent vehicles and related technologies. The employment and professional networking opportunities created for students and industrial sponsors through a series of technical events over the three-day competition are highlighted. Finally, an assessment of the competition based on participant feedback is presented.

  10. The twelfth annual Intelligent Ground Vehicle Competition: team approaches to intelligent vehicles

    NASA Astrophysics Data System (ADS)

    Theisen, Bernard L.; Maslach, Daniel

    2004-10-01

The Intelligent Ground Vehicle Competition (IGVC) is one of three unmanned-systems student competitions that were founded by the Association for Unmanned Vehicle Systems International (AUVSI) in the 1990s. The IGVC is a multidisciplinary exercise in product realization that challenges college engineering student teams to integrate advanced control theory, machine vision, vehicular electronics, and mobile platform fundamentals to design and build an unmanned system. Both U.S. and international teams focus on developing a suite of dual-use technologies to equip ground vehicles of the future with intelligent driving capabilities. Over the past 12 years, the competition has challenged undergraduate, graduate and Ph.D. students with real world applications in intelligent transportation systems, the military and manufacturing automation. To date, teams from over 43 universities and colleges have participated. This paper describes some of the applications of the technologies required by this competition and discusses the educational benefits. The primary goal of the IGVC is to advance engineering education in intelligent vehicles and related technologies. The employment and professional networking opportunities created for students and industrial sponsors through a series of technical events over the three-day competition are highlighted. Finally, an assessment of the competition based on participant feedback is presented.

  11. Pairing Increases Activation of V1aR, but not OTR, in Auditory Regions of Zebra Finches: The Importance of Signal Modality in Nonapeptide-Social Behavior Relationships.

    PubMed

    Tomaszycki, Michelle L; Atchley, Derek

    2017-10-01

    Social relationships are complex, involving the production and comprehension of signals, individual recognition, and close coordination of behavior between two or more individuals. The nonapeptides oxytocin and vasopressin are widely believed to regulate social relationships. These findings come largely from prairie voles, in which nonapeptide receptors in olfactory neural circuits drive pair bonding. This research is assumed to apply to all species. Previous reviews have offered two competing hypotheses. The work of Sarah Newman has implicated a common neural network across species, the Social Behavior Network. In contrast, others have suggested that there are signal modality-specific networks that regulate social behavior. Our research focuses on evaluating these two competing hypotheses in the zebra finch, a species that relies heavily on vocal/auditory signals for communication, specifically the neural circuits underlying singing in males and song perception in females. We have demonstrated that the quality of vocal interactions is highly important for the formation of long-term monogamous bonds in zebra finches. Qualitative evidence at first suggests that nonapeptide receptor distributions are very different between monogamous rodents (olfactory species) and monogamous birds (vocal/auditory species). However, we have demonstrated that social bonding behaviors are not only correlated with activation of nonapeptide receptors in vocal and auditory circuits, but also involve regions of the common Social Behavior Network. Here, we show increased Vasopressin 1a receptor, but not oxytocin receptor, activation in two auditory regions following formation of a pair bond. To our knowledge, this is the first study to suggest a role of nonapeptides in the auditory circuit in pair bonding. Thus, we highlight converging mechanisms of social relationships and also point to the importance of studying multiple species to understand mechanisms of behavior.

  12. Sensitivity of subjective questionnaires to cognitive loading while driving with navigation aids: a pilot study.

    PubMed

    Smyth, Christopher C

    2007-05-01

    Developers of future forces are implementing automated aiding for driving tasks. In designing such systems, the effect of cognitive task interference on driving performance is important. The crew of such vehicles may have to occasionally perform communication and planning tasks while driving. Subjective questionnaires may help researchers parse out the sources of task interference in crew station designs. In this preliminary study, sixteen participants drove a vehicle simulator with automated road-turn cues (i.e., visual, audio, combined, or neither) along a course marked on a map display while replying to spoken test questions (i.e., repeating sentences, math and logical puzzles, route planning, or none) and reporting other vehicles in the scenario. Following each trial, a battery of subjective questionnaires was administered to determine the perceived effects of the loading on their cognitive functionality. In terms of performance, the participants drove significantly faster with the road-turn cues than with just the map. They recalled fewer vehicle sightings with the cognitive tests than without them. Questionnaire results showed that their reasoning was more straightforward, the quantity of information for understanding higher, and the trust greater with the combined cues than the map-only condition. They reported higher perceived workload with the cognitive tests. The capacity for maintaining situational awareness was reduced with the cognitive tests because of the increased division of attention and the increase in the instability, variability, and complexity of the demands. The association and intuitiveness of cognitive processing were lowest and the subjective stress highest for the route planning test. Finally, the confusability in reasoning was greater for the auditory cue with the route planning than for the auditory cue without the cognitive tests. The subjective questionnaires are thus sensitive to the effects of cognitive loading and may be useful for guiding the development of automated aid designs.

  13. The effect of compression speed on intelligibility: simulated hearing-aid processing with and without original temporal fine structure information.

    PubMed

    Hopkins, Kathryn; King, Andrew; Moore, Brian C J

    2012-09-01

    Hearing aids use amplitude compression to compensate for the effects of loudness recruitment. The compression speed that gives the best speech intelligibility varies among individuals. Moore [(2008). Trends Amplif. 12, 300-315] suggested that an individual's sensitivity to temporal fine structure (TFS) information may affect which compression speed gives most benefit. This hypothesis was tested using normal-hearing listeners with a simulated hearing loss. Sentences in a competing talker background were processed using multi-channel fast or slow compression followed by a simulation of threshold elevation and loudness recruitment. Signals were either tone vocoded with 1-ERB(N)-wide channels (where ERB(N) is the bandwidth of normal auditory filters) to remove the original TFS information, or not processed further. In a second experiment, signals were vocoded with either 1- or 2-ERB(N)-wide channels, to test whether the available spectral detail affects the optimal compression speed. Intelligibility was significantly better for fast than slow compression regardless of vocoder channel bandwidth. The results suggest that the availability of original TFS or detailed spectral information does not affect the optimal compression speed. This conclusion is tentative, since while the vocoder processing removed the original TFS information, listeners may have used the altered TFS in the vocoded signals.
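    The fast-versus-slow compression contrast in this abstract can be illustrated with a minimal single-channel dynamic-range compressor. The sketch below is not the study's multi-channel hearing-aid processing; the threshold, ratio, and attack/release constants are illustrative assumptions, with "fast" versus "slow" compression corresponding to short versus long time constants:

```python
import numpy as np

def compress(signal, fs, threshold_db=-30.0, ratio=3.0,
             attack_ms=5.0, release_ms=50.0):
    """Single-channel dynamic-range compressor.

    'Fast' vs 'slow' compression corresponds to short vs long
    attack/release time constants.
    """
    # Per-sample smoothing coefficients from the time constants.
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))

    level = np.abs(signal) + 1e-12     # instantaneous level (avoid log of 0)
    env = np.empty_like(level)         # smoothed envelope
    e = level[0]
    for i, x in enumerate(level):
        a = a_att if x > e else a_rel  # attack when rising, release when falling
        e = a * e + (1.0 - a) * x
        env[i] = e

    env_db = 20.0 * np.log10(env)
    # Above threshold, the output level grows at 1/ratio of the input rate.
    gain_db = np.where(env_db > threshold_db,
                       (threshold_db - env_db) * (1.0 - 1.0 / ratio),
                       0.0)
    return signal * 10.0 ** (gain_db / 20.0)
```

    For a steady tone at 0.5 full scale (about -6 dB) with a -30 dB threshold and 3:1 ratio, the steady-state gain reduction is (-30 - (-6)) x 2/3, roughly -16 dB.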

  14. Examining explanations for fundamental frequency's contribution to speech intelligibility in noise

    NASA Astrophysics Data System (ADS)

    Schlauch, Robert S.; Miller, Sharon E.; Watson, Peter J.

    2005-09-01

    Laures and Weismer [JSLHR, 42, 1148 (1999)] reported that speech with natural variation in fundamental frequency (F0) is more intelligible in noise than speech with a flattened F0 contour. Cognitive-linguistic based explanations have been offered to account for this drop in intelligibility for the flattened condition, but a lower-level mechanism related to auditory streaming may be responsible. Numerous psychoacoustic studies have demonstrated that modulating a tone enables a listener to segregate it from background sounds. To test these rival hypotheses, speech recognition in noise was measured for sentences with six different F0 contours: unmodified, flattened at the mean, natural but exaggerated, reversed, and frequency modulated (rates of 2.5 and 5.0 Hz). The 180 stimulus sentences were produced by five talkers (30 sentences per condition). Speech recognition scores for fifteen listeners replicated earlier findings showing that flattening the F0 contour results in a roughly 10% reduction in recognition of key words compared with the natural condition. Although the exaggerated condition produced results comparable to those of the flattened condition, the other conditions with unnatural F0 contours all yielded significantly poorer performance than the flattened condition. These results support the cognitive, linguistic-based explanations for the reduction in performance.

  15. Speech intelligibility in complex acoustic environments in young children

    NASA Astrophysics Data System (ADS)

    Litovsky, Ruth

    2003-04-01

    While the auditory system undergoes tremendous maturation during the first few years of life, it has become clear that in complex scenarios when multiple sounds occur and when echoes are present, children's performance is significantly worse than their adult counterparts. The ability of children (3-7 years of age) to understand speech in a simulated multi-talker environment and to benefit from spatial separation of the target and competing sounds was investigated. In these studies, competing sources vary in number, location, and content (speech, modulated or unmodulated speech-shaped noise and time-reversed speech). The acoustic spaces were also varied in size and amount of reverberation. Finally, children with chronic otitis media who received binaural training were tested pre- and post-training on a subset of conditions. Results indicated the following. (1) Children experienced significantly more masking than adults, even in the simplest conditions tested. (2) When the target and competing sounds were spatially separated speech intelligibility improved, but the amount varied with age, type of competing sound, and number of competitors. (3) In a large reverberant classroom there was no benefit of spatial separation. (4) Binaural training improved speech intelligibility performance in children with otitis media. Future work includes similar studies in children with unilateral and bilateral cochlear implants. [Work supported by NIDCD, DRF, and NOHR.]

  16. The effect of varying talker identity and listening conditions on gaze behavior during audiovisual speech perception.

    PubMed

    Buchan, Julie N; Paré, Martin; Munhall, Kevin G

    2008-11-25

    During face-to-face conversation the face provides auditory and visual linguistic information, and also conveys information about the identity of the speaker. This study investigated behavioral strategies involved in gathering visual information while watching talking faces. The effects of varying talker identity and varying the intelligibility of speech (by adding acoustic noise) on gaze behavior were measured with an eyetracker. Varying the intelligibility of the speech by adding noise had a noticeable effect on the location and duration of fixations. When noise was present subjects adopted a vantage point that was more centralized on the face by reducing the frequency of the fixations on the eyes and mouth and lengthening the duration of their gaze fixations on the nose and mouth. Varying talker identity resulted in a more modest change in gaze behavior that was modulated by the intelligibility of the speech. Although subjects generally used similar strategies to extract visual information in both talker variability conditions, when noise was absent there were more fixations on the mouth when viewing a different talker every trial as opposed to the same talker every trial. These findings provide a useful baseline for studies examining gaze behavior during audiovisual speech perception and perception of dynamic faces.

  17. Neuromimetic Sound Representation for Percept Detection and Manipulation

    NASA Astrophysics Data System (ADS)

    Zotkin, Dmitry N.; Chi, Taishih; Shamma, Shihab A.; Duraiswami, Ramani

    2005-12-01

    The acoustic wave received at the ears is processed by the human auditory system to separate different sounds along the intensity, pitch, and timbre dimensions. Conventional Fourier-based signal processing, while endowed with fast algorithms, is unable to easily represent a signal along these attributes. In this paper, we discuss the creation of maximally separable sounds in auditory user interfaces and use a recently proposed cortical sound representation, which performs a biomimetic decomposition of an acoustic signal, to represent and manipulate sound for this purpose. We briefly overview algorithms for obtaining, manipulating, and inverting a cortical representation of a sound and describe algorithms for manipulating signal pitch and timbre separately. The algorithms are also used to create the sound of an instrument between a "guitar" and a "trumpet." Excellent sound quality can be achieved if processing time is not a concern, and intelligible signals can be reconstructed in reasonable processing time (about ten seconds of computational time for a one-second signal). Work on bringing the algorithms into the real-time processing domain is ongoing.

  18. Intelligent Control for Drag Reduction on the X-48B Vehicle

    NASA Technical Reports Server (NTRS)

    Griffin, Brian Joseph; Brown, Nelson Andrew; Yoo, Seung Yeun

    2011-01-01

    This paper focuses on the development of an intelligent control technology for in-flight drag reduction. The system is integrated with and demonstrated on the full X-48B nonlinear simulation. The intelligent control system utilizes a peak-seeking control method implemented with a time-varying Kalman filter. Performance functional coordinate and magnitude measurements, or independent and dependent parameters respectively, are used by the Kalman filter to provide the system with gradient estimates of the designed performance function, which are used to drive the system toward a local minimum in a steepest-descent approach. To ensure ease of integration and algorithm performance, a single-input single-output approach was chosen. The framework, specific implementation considerations, simulation results, and flight feasibility issues related to this platform are discussed.
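    The peak-seeking scheme described above, where a Kalman filter supplies gradient estimates that drive a steepest-descent update, can be sketched in scalar (single-input single-output) form. This is a toy illustration, not the X-48B implementation: the dither magnitude, the noise covariances q and r, and the random-walk model for the gradient are all assumptions made for the sketch:

```python
import numpy as np

def peak_seek(f, u0, steps=200, step_size=0.1, q=1e-3, r=1e-2):
    """Scalar minimum-seeking loop: a one-state Kalman filter tracks the
    local gradient of the measured performance function f, and the input
    u is driven downhill by steepest descent on that estimate."""
    rng = np.random.default_rng(0)
    u = u0
    g_hat, p = 0.0, 1.0                 # gradient estimate and its variance
    u_prev, y_prev = u, f(u)
    for _ in range(steps):
        # Descent step plus a small dither so consecutive inputs differ.
        u = u - step_size * g_hat + 0.01 * rng.standard_normal()
        y = f(u)
        du = u - u_prev
        if abs(du) > 1e-9:
            z = (y - y_prev) / du       # finite-difference gradient measurement
            p = p + q                   # predict: gradient modeled as a random walk
            k = p / (p + r)             # Kalman gain
            g_hat = g_hat + k * (z - g_hat)
            p = (1.0 - k) * p
        u_prev, y_prev = u, y
    return u
```

    On a quadratic performance function the loop settles near the minimizer, with the Kalman gain trading off responsiveness against measurement noise in the finite-difference gradient.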

  19. Practical and generalizable architecture for an intelligent tutoring system

    NASA Astrophysics Data System (ADS)

    Kaplan, Randy M.; Trenholm, Harriet

    1993-03-01

    In this paper we describe an intelligent tutoring system (ITS) called HYDRIVE (hydraulics interactive video experience). This system is built using several novel approaches to intelligent tutoring. The underlying rationale for HYDRIVE is based on the results of a cognitive task analysis. The reasoning component of the system makes extensive use of a hierarchical knowledge representation. Reasoning within the system is accomplished using a logic-based approach and is linked to a highly interactive interface using multimedia. The knowledge representation contains information that drives the multimedia elements of the system, and the reasoning components select the appropriate information to assess student knowledge or guide the student at any particular moment. As this system will be deployed throughout the Air Force maintenance function, the implementation platform is the IBM PC.

  20. Older Adults with Mild Cognitive Impairments Show Less Driving Errors after a Multiple Sessions Simulator Training Program but Do Not Exhibit Long Term Retention.

    PubMed

    Teasdale, Normand; Simoneau, Martin; Hudon, Lisa; Germain Robitaille, Mathieu; Moszkowicz, Thierry; Laurendeau, Denis; Bherer, Louis; Duchesne, Simon; Hudon, Carol

    2016-01-01

    The driving performance of individuals with mild cognitive impairment (MCI) is suboptimal when compared to healthy older adults. It is expected that driving will worsen with the progression of cognitive decline and thus, whether or not these individuals should continue to drive is a matter of debate. The aim of the study was to provide support for the claim that individuals with MCI can benefit from a training program and improve their overall driving performance in a driving simulator. Fifteen older drivers with MCI participated in five training sessions in a simulator (over a 21-day period) and in a 6-month recall session. During training, they received automated auditory feedback on their performance when an error was noted in any of several maneuvers known to be suboptimal in MCI individuals (for instance, weaving, or omitting to indicate a lane change, verify a blind spot, or engage in a visual search before crossing an intersection). The number of errors was compiled for eight different maneuvers for all sessions. For the initial five sessions, a gradual and significant decrease in the number of errors was observed, indicating learning and safer driving. The level of performance, however, was not maintained at the 6-month recall session. Nevertheless, the initial learning observed opens up possibilities to undertake more regular interventions to maintain driving skills and safe driving in MCI individuals.

  1. Sustainable Mobility Initiative | Transportation Research | NREL

    Science.gov Websites

    optimize mobility and significantly reduce related energy consumption. This concept of an intelligent … measures to explore these technologies' effects on transportation energy use, emissions, and overall system … Efficient driving with smoother starts, stops, and accelerations to reduce energy consumption and …

  2. Automated feedback to foster safe driving in young drivers : Phase 2.

    DOT National Transportation Integrated Search

    2015-12-01

    Intelligent Speed Adaptation (ISA) represents a promising approach to reduce speeding. A core principle for ISA systems is that they provide real-time feedback to drivers, prompting them to reduce speed when some threshold at or above the limit is re...

  3. Ontology-Based Architecture for Intelligent Transportation Systems Using a Traffic Sensor Network.

    PubMed

    Fernandez, Susel; Hadfi, Rafik; Ito, Takayuki; Marsa-Maestre, Ivan; Velasco, Juan R

    2016-08-15

    Intelligent transportation systems are a set of technological solutions used to improve the performance and safety of road transportation. A crucial element for the success of these systems is the exchange of information, not only between vehicles, but also among other components in the road infrastructure through different applications. One of the most important information sources in this kind of system is sensors. Sensors can be within vehicles or as part of the infrastructure, such as bridges, roads or traffic signs. Sensors can provide information related to weather conditions and traffic situation, which is useful to improve the driving process. To facilitate the exchange of information between the different applications that use sensor data, a common framework of knowledge is needed to allow interoperability. In this paper, an ontology-driven architecture to improve the driving environment through a traffic sensor network is proposed. The system performs different tasks automatically to increase driver safety and comfort using the information provided by the sensors.

  4. Ontology-Based Architecture for Intelligent Transportation Systems Using a Traffic Sensor Network

    PubMed Central

    Fernandez, Susel; Hadfi, Rafik; Ito, Takayuki; Marsa-Maestre, Ivan; Velasco, Juan R.

    2016-01-01

    Intelligent transportation systems are a set of technological solutions used to improve the performance and safety of road transportation. A crucial element for the success of these systems is the exchange of information, not only between vehicles, but also among other components in the road infrastructure through different applications. One of the most important information sources in this kind of system is sensors. Sensors can be within vehicles or as part of the infrastructure, such as bridges, roads or traffic signs. Sensors can provide information related to weather conditions and traffic situation, which is useful to improve the driving process. To facilitate the exchange of information between the different applications that use sensor data, a common framework of knowledge is needed to allow interoperability. In this paper, an ontology-driven architecture to improve the driving environment through a traffic sensor network is proposed. The system performs different tasks automatically to increase driver safety and comfort using the information provided by the sensors. PMID:27537878

  5. Smart sensorless prediction diagnosis of electric drives

    NASA Astrophysics Data System (ADS)

    Kruglova, TN; Glebov, NA; Shoshiashvili, ME

    2017-10-01

    In this paper, a method for diagnosis and prediction of the technical condition of an electric motor using an artificial-intelligence approach based on a combination of fuzzy logic and neural networks is discussed. The fuzzy sub-model determines the degree of development of each fault. The neural network determines the state of the object as a whole and the number of serviceable work periods for the motor's actuator. The combination of advanced techniques reduces the learning time and increases the forecasting accuracy. The experimental implementation of the method for electric drive diagnosis and associated equipment is carried out at different speeds. As a result, it was found that this method allows troubleshooting the drive at any given speed.
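    The two-stage structure described in the abstract, a fuzzy sub-model grading the degree of development of each fault feeding a neural network that judges the overall state, can be sketched as follows. The fault names, membership-function breakpoints, and fixed network weights are purely illustrative assumptions; in the paper's method the network weights would be learned from drive data:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular fuzzy membership: feet at a and c, peak at b."""
    return float(np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0))

def fault_degrees(vibration, temperature):
    """Fuzzy sub-model: degree of development of each (hypothetical) fault."""
    return {
        "bearing_wear": tri(vibration, 2.0, 6.0, 10.0),       # mm/s RMS, illustrative
        "overheating":  tri(temperature, 60.0, 90.0, 120.0),  # deg C, illustrative
    }

def overall_state(degrees, w_hidden, w_out, b=0.0):
    """Tiny fixed-weight feed-forward net mapping fault degrees to a health
    score in (0, 1); in the paper's method such weights would be learned."""
    x = np.array(list(degrees.values()))
    h = np.tanh(w_hidden @ x)                       # hidden layer
    return float(1.0 / (1.0 + np.exp(-(w_out @ h + b))))  # sigmoid output
```

    With hand-set weights, a healthy reading (low vibration, low temperature) yields a low score and a degraded reading a high one; the fuzzy layer keeps the fault degrees interpretable while the network aggregates them.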

  6. Auditory cortex controls sound-driven innate defense behaviour through corticofugal projections to inferior colliculus.

    PubMed

    Xiong, Xiaorui R; Liang, Feixue; Zingg, Brian; Ji, Xu-ying; Ibrahim, Leena A; Tao, Huizhong W; Zhang, Li I

    2015-06-11

    Defense against environmental threats is essential for animal survival. However, the neural circuits responsible for transforming unconditioned sensory stimuli and generating defensive behaviours remain largely unclear. Here, we show that corticofugal neurons in the auditory cortex (ACx) targeting the inferior colliculus (IC) mediate an innate, sound-induced flight behaviour. Optogenetic activation of these neurons, or their projection terminals in the IC, is sufficient for initiating flight responses, while the inhibition of these projections reduces sound-induced flight responses. Corticocollicular axons monosynaptically innervate neurons in the cortex of the IC (ICx), and optogenetic activation of the projections from the ICx to the dorsal periaqueductal gray is sufficient for provoking flight behaviours. Our results suggest that ACx can both amplify innate acoustic-motor responses and directly drive flight behaviours in the absence of sound input through corticocollicular projections to ICx. Such corticofugal control may be a general feature of innate defense circuits across sensory modalities.

  7. Identifying musical pieces from fMRI data using encoding and decoding models.

    PubMed

    Hoefle, Sebastian; Engel, Annerose; Basilio, Rodrigo; Alluri, Vinoo; Toiviainen, Petri; Cagy, Maurício; Moll, Jorge

    2018-02-02

    Encoding models can reveal and decode neural representations in the visual and semantic domains. However, a thorough understanding of how distributed information in auditory cortices and the temporal evolution of music contribute to model performance is still lacking in the musical domain. We measured fMRI responses during naturalistic music listening and constructed a two-stage approach that first mapped musical features in auditory cortices and then decoded novel musical pieces. We then probed the influence of stimulus duration (number of time points) and spatial extent (number of voxels) on decoding accuracy. Our approach revealed a linear increase in accuracy with duration and a point of optimal model performance for the spatial extent. We further showed that Shannon entropy is a driving factor, boosting accuracy up to 95% for music with the highest information content. These findings provide key insights for future decoding and reconstruction algorithms and open new avenues for possible clinical applications.
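    The two-stage encode-then-decode idea (fit a model mapping stimulus features to responses, then identify the piece whose predicted response best matches the measured response), together with the Shannon-entropy measure the authors highlight, can be sketched on synthetic data. The ridge regularizer, correlation-based matching, and histogram binning below are assumptions of this sketch, not the paper's actual estimator:

```python
import numpy as np

def shannon_entropy(x, bins=8):
    """Shannon entropy (bits) of a 1-D feature sequence, via histogram."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

def identify(train_feat, train_resp, test_feat, test_resp, lam=1.0):
    """Two-stage identification: fit a ridge encoding model (features ->
    responses) on training music, then pick the candidate piece whose
    predicted response correlates best with the measured response."""
    F, R = train_feat, train_resp            # (time, features), (time, voxels)
    W = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ R)
    scores = []
    for feat in test_feat:                   # candidate pieces
        pred = feat @ W
        c = np.corrcoef(pred.ravel(), test_resp.ravel())[0, 1]
        scores.append(c)
    return int(np.argmax(scores))
```

    In this toy setting a piece with higher feature entropy carries more distinguishing information, which is the intuition behind the paper's entropy result.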

  8. A 20-channel magnetoencephalography system based on optically pumped magnetometers

    NASA Astrophysics Data System (ADS)

    Borna, Amir; Carter, Tony R.; Goldberg, Josh D.; Colombo, Anthony P.; Jau, Yuan-Yu; Berry, Christopher; McKay, Jim; Stephen, Julia; Weisend, Michael; Schwindt, Peter D. D.

    2017-12-01

    We describe a multichannel magnetoencephalography (MEG) system that uses optically pumped magnetometers (OPMs) to sense the magnetic fields of the human brain. The system consists of an array of 20 OPM channels conforming to the human subject’s head, a person-sized magnetic shield containing the array and the human subject, a laser system to drive the OPM array, and various control and data acquisition systems. We conducted two MEG experiments: auditory evoked magnetic field and somatosensory evoked magnetic field, on three healthy male subjects, using both our OPM array and a 306-channel Elekta-Neuromag superconducting quantum interference device (SQUID) MEG system. The described OPM array measures the tangential components of the magnetic field as opposed to the radial component measured by most SQUID-based MEG systems. Herein, we compare the results of the OPM- and SQUID-based MEG systems on the auditory and somatosensory data recorded in the same individuals on both systems.

  9. [Chinese medicine industry 4.0:advancing digital pharmaceutical manufacture toward intelligent pharmaceutical manufacture].

    PubMed

    Cheng, Yi-Yu; Qu, Hai-Bin; Zhang, Bo-Li

    2016-01-01

    A perspective analysis of technological innovation in the pharmaceutical engineering of Chinese medicine unveils a vision of the "Future Factory" of the Chinese medicine industry. The strategy and technical roadmap of "Chinese medicine industry 4.0" are proposed, along with a projection of the related core technology system, clarifying the industry's technical development path from digital manufacture to intelligent manufacture. On the basis of precisely defined technical terms such as process control, on-line detection, and process quality monitoring for Chinese medicine manufacture, the technical concepts and characteristics of intelligent and digital pharmaceutical manufacture are elaborated. Wide application of digital manufacturing technology for Chinese medicine is strongly recommended. Through fully informationized manufacturing processes and multi-disciplinary cluster innovation, intelligent manufacturing technology for Chinese medicine should be developed, providing a new driving force for the industry in technology upgrades, product quality enhancement, and efficiency improvement.

  10. Big cats as a model system for the study of the evolution of intelligence.

    PubMed

    Borrego, Natalia

    2017-08-01

    Currently, carnivores, and felids in particular, are vastly underrepresented in cognitive literature, despite being an ideal model system for tests of social and ecological intelligence hypotheses. Within Felidae, big cats (Panthera) are uniquely suited to studies investigating the evolutionary links between social, ecological, and cognitive complexity. Intelligence likely did not evolve in a unitary way but instead evolved as the result of mutually reinforcing feedback loops within the physical and social environments. The domain-specific social intelligence hypothesis proposes that social complexity drives the evolution of cognitive abilities adapted only to social domains. The domain-general hypothesis proposes that the unique demands of social life serve as a bootstrap for the evolution of superior general cognition. Big cats are one of the few systems in which we can directly address conflicting predictions of the domain-general and domain-specific hypotheses by comparing cognition among closely related species that face roughly equivalent ecological complexity but vary considerably in social complexity.

  11. The Relationships of Working Memory, Secondary Memory, and General Fluid Intelligence: Working Memory is Special

    PubMed Central

    Shelton, Jill Talley; Elliott, Emily M.; Matthews, Russell A.; Hill, B. D.; Gouvier, Wm. Drew

    2010-01-01

    Recent efforts have been made to elucidate the commonly observed link between working memory and reasoning ability. The results have been inconsistent, with some work suggesting the emphasis placed on retrieval from secondary memory by working memory tests is the driving force behind this association (Mogle, Lovett, Stawski, & Sliwinski, 2008), while other research suggests retrieval from secondary memory is only partly responsible for the observed link between working memory and reasoning (Unsworth & Engle, 2006, 2007b). The present study investigates the relationship between processing speed, working memory, secondary memory, primary memory, and fluid intelligence. Although our findings show all constructs are significantly correlated with fluid intelligence, working memory, but not secondary memory, accounts for significant unique variance in fluid intelligence. Our data support predictions made by Unsworth and Engle, and suggest that the combined need for maintenance and retrieval processes present in working memory tests makes them “special” in their prediction of higher-order cognition. PMID:20438278

  12. Can enforced behaviour change attitudes: exploring the influence of Intelligent Speed Adaptation.

    PubMed

    Chorlton, Kathryn; Conner, Mark

    2012-09-01

    The Theory of Planned Behaviour model (Ajzen, 1985) was used to determine whether long-term experience with Intelligent Speed Adaptation (ISA) prompts a change in speed-related cognitions. The study examines data collected as part of a project examining driver behaviour with an intervening but overridable ISA system. Data were collected in four six-month field trials. The trials followed an A-B-A design (28 days driving with no ISA, 112 days driving with ISA, 28 days driving without ISA) to monitor changes in speeding behaviour as a result of the ISA system and any carry-over effect of the system. Findings suggested that following experience with the system, drivers' intention to speed significantly weakened, beyond the removal of ISA support. Drivers were also less likely to believe that exceeding the speed limit would 'get them to their destination more quickly' and less likely to believe that 'being in a hurry' would facilitate speeding. However, the positive change in intentions and beliefs failed to translate into behaviour. Experience with the ISA system significantly reduced the percentage of distance travelled whilst exceeding the speed limit but this effect was not evident when the ISA support was removed.

  13. Propulsion and power for 21st century aviation

    NASA Astrophysics Data System (ADS)

    Sehra, Arun K.; Whitlow, Woodrow

    2004-05-01

    Air transportation in the new millennium will require revolutionary solutions to meet public demand for improving safety, reliability, environmental compatibility, and affordability. NASA's vision for 21st century aircraft is to develop propulsion systems that are intelligent, highly efficient, virtually inaudible (outside airport boundaries), and have near zero harmful emissions (CO2 and NOx). This vision includes intelligent engines capable of adapting to changing internal and external conditions to optimally accomplish missions with either minimal or no human intervention. Distributed vectored propulsion will replace current two to four wing mounted and fuselage mounted engine configurations with a large number of small, mini, or micro engines. Other innovative concepts, such as the pulse detonation engine (PDE), which potentially can replace conventional gas turbine engines, also are reviewed. It is envisioned that a hydrogen economy will drive the propulsion system revolution towards the ultimate goal of silent aircraft with zero harmful emissions. Finally, it is envisioned that electric drive propulsion based on fuel cell power will generate electric power, which in turn will drive propulsors to produce the desired thrust. This paper reviews future propulsion and power concepts that are under development at the National Aeronautics and Space Administration's (NASA) John H. Glenn Research Center at Lewis Field, Cleveland, Ohio, USA.

  14. A Brief Overview of NASA Glenn Research Center Sensor and Electronics Activities

    NASA Technical Reports Server (NTRS)

    Hunter, Gary W.

    2012-01-01

    Aerospace applications require a range of sensing technologies. NASA GRC is developing a range of sensor and sensor system technologies that use microfabrication and micromachining to form smart sensor systems and intelligent microsystems, driving system intelligence down to the local (sensor) level in the form of distributed smart sensor systems. Examples of sensor and sensor system development include: (1) thin-film physical sensors, (2) high-temperature electronics and wireless devices, and (3) "lick and stick" technology. NASA GRC is a world leader in aerospace sensor technology with a broad range of development and application experience, and its core microsystems technology is applicable to a range of application environments.

  15. On the Inevitable Intertwining of Requirements and Architecture

    NASA Astrophysics Data System (ADS)

    Sutcliffe, Alistair

    The chapter investigates the relationship between architecture and requirements, arguing that architectural issues need to be addressed early in the RE process. Three trends are driving architectural implications for RE: the growth of intelligent, context-aware, and adaptable systems. First, the relationship between architecture and requirements is considered from a theoretical viewpoint of problem frames and abstract conceptual models. The relationship between architectural decisions and non-functional requirements is reviewed, and then the impact of architecture on the RE process is assessed using a case study of developing configurable, semi-intelligent software to support medical researchers in e-science domains.

  16. Hemispheric asymmetry in auditory processing of speech envelope modulations in prereading children.

    PubMed

    Vanvooren, Sophie; Poelmans, Hanne; Hofmann, Michael; Ghesquière, Pol; Wouters, Jan

    2014-01-22

    The temporal envelope of speech is an important cue contributing to speech intelligibility. Theories about the neural foundations of speech perception postulate that the left and right auditory cortices are functionally specialized in analyzing speech envelope information at different time scales: the right hemisphere is thought to be specialized in processing syllable rate modulations, whereas a bilateral or left hemispheric specialization is assumed for phoneme rate modulations. Recently, it has been found that this functional hemispheric asymmetry is different in individuals with language-related disorders such as dyslexia. Most studies were, however, performed in adults and school-aged children, and little is known about how neural auditory processing at these specific rates manifests and develops in very young children before reading acquisition. Yet, studying hemispheric specialization for processing syllable and phoneme rate modulations in preliterate children may reveal early neural markers for dyslexia. In the present study, human cortical evoked potentials to syllable and phoneme rate modulations were measured in 5-year-old children at high and low hereditary risk for dyslexia. The results demonstrate a right hemispheric preference for processing syllable rate modulations and a symmetric pattern for phoneme rate modulations, regardless of hereditary risk for dyslexia. These results suggest that, while hemispheric specialization for processing syllable rate modulations seems to be mature in prereading children, hemispheric specialization for phoneme rate modulation processing may still be developing. These findings could have important implications for the development of phonological and reading skills.
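    The syllable-rate versus phoneme-rate analysis described above can be illustrated with a toy computation. The sketch below is an illustration only, not the study's method: it assumes typical modulation rates (~4 Hz for syllables, ~20 Hz for phonemes) and measures envelope energy at a single modulation rate via one DFT bin.

```python
import math

def modulation_energy(envelope, fs, rate_hz):
    """Energy of an amplitude envelope at one modulation rate, computed
    as a single DFT bin.  Toy stand-in for modulation-spectrum analysis;
    the 4 Hz (syllable) and 20 Hz (phoneme) rates used below are typical
    values from the literature, not the study's stimulus parameters."""
    n = len(envelope)
    re = im = 0.0
    for i, v in enumerate(envelope):
        ang = 2 * math.pi * rate_hz * i / fs
        re += v * math.cos(ang)
        im -= v * math.sin(ang)
    return (re * re + im * im) / n

# A synthetic one-second envelope modulated at the syllable rate (4 Hz).
fs = 1000  # envelope sampling rate in Hz (hypothetical)
env = [1 + 0.5 * math.sin(2 * math.pi * 4 * t / fs) for t in range(fs)]
```

    For the envelope above, `modulation_energy(env, fs, 4)` is large while `modulation_energy(env, fs, 20)` is near zero, mirroring the syllable- versus phoneme-rate distinction the study measures neurally.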

  17. Executive abilities in children with congenital visual impairment in mid-childhood.

    PubMed

    Bathelt, Joe; de Haan, Michelle; Salt, Alison; Dale, Naomi Jane

    2018-02-01

    The role of vision and vision deprivation in the development of executive function (EF) abilities in childhood is little understood; aspects of EF such as initiative, attention orienting, inhibition, planning and performance monitoring are often measured through visual tasks. Studying the development and integrity of EF abilities in children with congenital visual impairment (VI) may provide important insights into the development of EF and also its possible relationship with vision and non-visual senses. The current study investigates non-visual EF abilities in 18 school-age children of average verbal intelligence with VI of differing levels of severity arising from congenital disorders affecting the eye, retina, or anterior optic nerve. Standard auditory neuropsychological assessments of sustained and divided attention, phonemic, semantic and switching verbal fluency, verbal working memory, and ratings of everyday executive abilities by parents were undertaken. Executive skills were compared to age-matched typically-sighted (TS) typically-developing children and across levels of vision (mild to moderate VI [MVI] or severe to profound VI [SPVI]). The results do not indicate significant differences or deficits on direct assessments of verbal and auditory EF between the groups. However, parent ratings suggest difficulties with everyday executive abilities, with the greatest difficulties in those with SPVI. The findings are discussed as possibly reflecting increased demands of behavioral executive skills for children with VI in everyday situations despite auditory and verbal EF abilities in the typical range for their age. These findings have potential implications for clinical and educational practices.

  18. Long-term pitch memory for music recordings is related to auditory working memory precision.

    PubMed

    Van Hedger, Stephen C; Heald, Shannon Lm; Nusbaum, Howard C

    2018-04-01

    Most individuals have reliable long-term memories for the pitch of familiar music recordings. This pitch memory (1) appears to be normally distributed in the population, (2) does not depend on explicit musical training and (3) only seems to be weakly related to differences in listening frequency estimates. The present experiment was designed to assess whether individual differences in auditory working memory could explain variance in long-term pitch memory for music recordings. In Experiment 1, participants first completed a musical note adjustment task that has been previously used to assess working memory of musical pitch. Afterward, participants were asked to judge the pitch of well-known music recordings, which either had or had not been shifted in pitch. We found that performance on the pitch working memory task was significantly related to performance in the pitch memory task using well-known recordings, even when controlling for overall musical experience and familiarity with each recording. In Experiment 2, we replicated these findings in a separate group of participants while additionally controlling for fluid intelligence and non-pitch-based components of auditory working memory. In Experiment 3, we demonstrated that participants could not accurately judge the pitch of unfamiliar recordings, suggesting that our method of pitch shifting did not result in unwanted acoustic cues that could have aided participants in Experiments 1 and 2. These results, taken together, suggest that the ability to maintain pitch information in working memory might lead to more accurate long-term pitch memory.

  19. Neural time course of visually enhanced echo suppression.

    PubMed

    Bishop, Christopher W; London, Sam; Miller, Lee M

    2012-10-01

    Auditory spatial perception plays a critical role in day-to-day communication. For instance, listeners utilize acoustic spatial information to segregate individual talkers into distinct auditory "streams" to improve speech intelligibility. However, spatial localization is an exceedingly difficult task in everyday listening environments with numerous distracting echoes from nearby surfaces, such as walls. Listeners' brains overcome this unique challenge by relying on acoustic timing and, quite surprisingly, visual spatial information to suppress short-latency (1-10 ms) echoes through a process known as "the precedence effect" or "echo suppression." In the present study, we employed electroencephalography (EEG) to investigate the neural time course of echo suppression both with and without the aid of coincident visual stimulation in human listeners. We find that echo suppression is a multistage process initialized during the auditory N1 (70-100 ms) and followed by space-specific suppression mechanisms from 150 to 250 ms. Additionally, we find a robust correlate of listeners' spatial perception (i.e., suppressing or not suppressing the echo) over central electrode sites from 300 to 500 ms. Contrary to our hypothesis, vision's powerful contribution to echo suppression occurs late in processing (250-400 ms), suggesting that vision contributes primarily during late sensory or decision making processes. Together, our findings support growing evidence that echo suppression is a slow, progressive mechanism modifiable by visual influences during late sensory and decision making stages. Furthermore, our findings suggest that audiovisual interactions are not limited to early, sensory-level modulations but extend well into late stages of cortical processing.

  20. Cochlear implantation outcomes in children with common cavity deformity; a retrospective study.

    PubMed

    Zhang, Li; Qiu, Jianxin; Qin, Feifei; Zhong, Mei; Shah, Gyanendra

    2017-09-01

    A common cavity deformity (CCD) is a deformed inner ear in which the cochlea and vestibule are confluent, forming a common rudimentary cystic cavity that results in profound hearing loss. Few studies have focused on common cavity deformity. Our group observed the improvement of auditory and verbal abilities in children who had received cochlear implantation (CI), and compared these outcomes between children with common cavity deformity and children with normal inner ear structure. A retrospective study was conducted in 12 patients with profound hearing loss, divided into a common cavity group and a control group of six each, matched in sex, age and time of implantation, based on inner ear structure. Categories of Auditory Performance (CAP) and speech intelligibility rating (SIR) scores and aided hearing thresholds were collected and compared between the two groups. All patients had worn their CI for more than 1 year at the Cochlear Center of Anhui Medical University from 2011 to 2015. Postoperative CAP and SIR scores were higher than preoperative scores in both groups (p < 0.05), although the scores were lower in the CCD group than in the control group (p < 0.05). The aided threshold was also lower in the control group than in the CCD group (p < 0.05). Even though audiological improvement in children with CCD was not as good as in those without CCD, CI provides benefits in auditory perception and communication skills in these children.

  1. Auditory Scene Analysis: The Sweet Music of Ambiguity

    PubMed Central

    Pressnitzer, Daniel; Suied, Clara; Shamma, Shihab A.

    2011-01-01

    In this review paper aimed at the non-specialist, we explore the use that neuroscientists and musicians have made of perceptual illusions based on ambiguity. The pivotal issue is auditory scene analysis (ASA), or what enables us to make sense of complex acoustic mixtures in order to follow, for instance, a single melody in the midst of an orchestra. In general, ASA uncovers the most likely physical causes that account for the waveform collected at the ears. However, the acoustical problem is ill-posed and it must be solved from noisy sensory input. Recently, the neural mechanisms implicated in the transformation of ambiguous sensory information into coherent auditory scenes have been investigated using so-called bistability illusions (where an unchanging ambiguous stimulus evokes a succession of distinct percepts in the mind of the listener). After reviewing some of those studies, we turn to music, which arguably provides some of the most complex acoustic scenes that a human listener will ever encounter. Interestingly, musicians will not always aim at making each physical source intelligible, but rather express one or more melodic lines with a small or large number of instruments. By means of a few musical illustrations and by using a computational model inspired by neuro-physiological principles, we suggest that this relies on a detailed (if perhaps implicit) knowledge of the rules of ASA and of its inherent ambiguity. We then put forward the opinion that some degree of perceptual ambiguity may participate in our appreciation of music. PMID:22174701

  2. In a Concurrent Memory and Auditory Perception Task, the Pupil Dilation Response Is More Sensitive to Memory Load Than to Auditory Stimulus Characteristics.

    PubMed

    Zekveld, Adriana A; Kramer, Sophia E; Rönnberg, Jerker; Rudner, Mary

    2018-06-19

    Speech understanding may be cognitively demanding, but it can be enhanced when semantically related text cues precede auditory sentences. The present study aimed to determine whether (a) providing text cues reduces pupil dilation, a measure of cognitive load, during listening to sentences, (b) repeating the sentences aloud affects recall accuracy and pupil dilation during recall of cue words, and (c) semantic relatedness between cues and sentences affects recall accuracy and pupil dilation during recall of cue words. Sentence repetition following text cues and recall of the text cues were tested. Twenty-six participants (mean age, 22 years) with normal hearing listened to masked sentences. On each trial, a set of four-word cues was presented visually as text preceding the auditory presentation of a sentence whose meaning was either related or unrelated to the cues. On each trial, participants first read the cue words, then listened to a sentence. Following this they spoke aloud either the cue words or the sentence, according to instruction, and finally on all trials orally recalled the cues. Peak pupil dilation was measured throughout listening and recall on each trial. Additionally, participants completed a test measuring the ability to perceive degraded verbal text information and three working memory tests (a reading span test, a size-comparison span test, and a test of memory updating). Cue words that were semantically related to the sentence facilitated sentence repetition but did not reduce pupil dilation. Recall was poorer and there were more intrusion errors when the cue words were related to the sentences. Recall was also poorer when sentences were repeated aloud. Both behavioral effects were associated with greater pupil dilation. Larger reading span capacity and smaller size-comparison span were associated with larger peak pupil dilation during listening. 
Furthermore, larger reading span and greater memory updating ability were both associated with better cue recall overall. Although sentence-related word cues facilitate sentence repetition, our results indicate that they do not reduce cognitive load during listening in noise with a concurrent memory load. As expected, higher working memory capacity was associated with better recall of the cues. Unexpectedly, however, semantic relatedness with the sentence reduced word cue recall accuracy and increased intrusion errors, suggesting an effect of semantic confusion. Further, speaking the sentence aloud also reduced word cue recall accuracy, probably due to articulatory suppression. Importantly, imposing a memory load during listening to sentences resulted in the absence of formerly established strong effects of speech intelligibility on the pupil dilation response. This nullified intelligibility effect demonstrates that the pupil dilation response to a cognitive (memory) task can completely overshadow the effect of perceptual factors on the pupil dilation response. This highlights the importance of taking cognitive task load into account during auditory testing. This is an open access article distributed under the Creative Commons Attribution License 4.0 (CC BY), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

  3. Still Building Rafts, Juggling Balls and Driving Tanks?

    ERIC Educational Resources Information Center

    Beard, Colin; Wilson, John

    2002-01-01

    A model presents experiential learning as a combination lock. Outdoor environmental elements, activities, senses, emotions, forms of intelligence, and ways of learning are grouped into six "tumblers" that can be arranged into combinations that best help learners interact with the external environment through their senses, thus generating…

  4. Peerless

    ERIC Educational Resources Information Center

    Stuart, Reginald

    2011-01-01

    When Norman Francis arrived at Xavier University of Louisiana in 1948 as a first-generation college student fresh out of high school from the poor side of Lafayette, Louisiana, his drive, intelligence, discipline and winning personality quickly earned him election as freshman class president. It was the start of something big. Today, Dr. Francis…

  5. Evaluation report for ITS for voluntary emission reduction : an ITS operational test for real-time vehicle emissions detection

    DOT National Transportation Integrated Search

    1997-05-01

    The Intelligent Transport Systems (ITS) Operation Test Project was designed to assess the potential of ITS to support cleaner air by providing real-time vehicle tailpipe emissions information (carbon monoxide levels) to the driving public. It made...

  6. Fuzzylot: a novel self-organising fuzzy-neural rule-based pilot system for automated vehicles.

    PubMed

    Pasquier, M; Quek, C; Toh, M

    2001-10-01

    This paper presents part of our research work concerned with the realisation of an Intelligent Vehicle and the technologies required for its routing, navigation, and control. An automated driver prototype has been developed using a self-organising fuzzy rule-based system (POPFNN-CRI(S)) to model and subsequently emulate human driving expertise. The ability of fuzzy logic to represent vague information using linguistic variables makes it a powerful tool for developing rule-based control systems when an exact working model is not available, as is the case for any vehicle-driving task. Designing a fuzzy system, however, is a complex endeavour, due to the need to define the variables and their associated fuzzy sets, and to determine a suitable rule base. Many efforts have thus been devoted to automating this process, yielding the development of learning and optimisation techniques. One of them is the family of POP-FNNs, or Pseudo-Outer Product Fuzzy Neural Networks (TVR, AARS(S), AARS(NS), CRI, Yager). These generic self-organising neural networks developed at the Intelligent Systems Laboratory (ISL/NTU) are based on formal fuzzy mathematical theory and are able to objectively extract a fuzzy rule base from training data. In this application, a driving simulator has been developed that integrates a detailed model of the car dynamics, complete with engine characteristics and environmental parameters, and an OpenGL-based 3D-simulation interface coupled with a driving wheel and accelerator/brake pedals. The simulator has been used on various road scenarios to record, from a human pilot, driving data consisting of steering and speed control actions associated with road features. Specifically, the POPFNN-CRI(S) system is used to cluster the data and extract a fuzzy rule base modelling the human driving behaviour. Finally, the effectiveness of the generated rule base has been validated using the simulator in autopilot mode.
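    The POPFNN-CRI(S) rule base itself is learned from data, but the kind of fuzzy rule-based control it produces can be sketched by hand. The following minimal Mamdani-style speed controller is a hypothetical illustration: the membership functions, linguistic terms, and rules are invented for this example and are not taken from the paper.

```python
# Minimal fuzzy rule-based speed controller (illustrative only; the
# membership functions and rules are hypothetical, not from POPFNN-CRI(S)).

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Input: distance to the vehicle ahead (metres), as linguistic terms.
DIST = {
    "near":   lambda d: tri(d, -1, 0, 25),
    "medium": lambda d: tri(d, 15, 40, 65),
    "far":    lambda d: tri(d, 50, 100, 151),
}

# Rules: each maps one input term to a crisp target speed (km/h) singleton.
RULES = [("near", 20.0), ("medium", 60.0), ("far", 90.0)]

def target_speed(distance_m):
    """Fire all rules, then defuzzify by the weighted average of consequents."""
    num = den = 0.0
    for term, speed in RULES:
        w = DIST[term](distance_m)  # degree to which this rule fires
        num += w * speed
        den += w
    return num / den if den else 0.0
```

    With these invented rules, an obstacle 0 m ahead yields the "near" target of 20 km/h, an open road of 100 m yields 90 km/h, and intermediate distances blend the rule outputs smoothly.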

  7. Audio-visual speech perception in prelingually deafened Japanese children following sequential bilateral cochlear implantation.

    PubMed

    Yamamoto, Ryosuke; Naito, Yasushi; Tona, Risa; Moroto, Saburo; Tamaya, Rinko; Fujiwara, Keizo; Shinohara, Shogo; Takebayashi, Shinji; Kikuchi, Masahiro; Michida, Tetsuhiko

    2017-11-01

    An effect of audio-visual (AV) integration is observed when the auditory and visual stimuli are incongruent (the McGurk effect). In general, AV integration is helpful especially in subjects wearing hearing aids or cochlear implants (CIs). However, the influence of AV integration on spoken word recognition in individuals with bilateral CIs (Bi-CIs) has not been fully investigated so far. In this study, we investigated AV integration in children with Bi-CIs. The study sample included thirty one prelingually deafened children who underwent sequential bilateral cochlear implantation. We assessed their responses to congruent and incongruent AV stimuli with three CI-listening modes: only the 1st CI, only the 2nd CI, and Bi-CIs. The responses were assessed in the whole group as well as in two sub-groups: a proficient group (syllable intelligibility ≥80% with the 1st CI) and a non-proficient group (syllable intelligibility < 80% with the 1st CI). We found evidence of the McGurk effect in each of the three CI-listening modes. AV integration responses were observed in a subset of incongruent AV stimuli, and the patterns observed with the 1st CI and with Bi-CIs were similar. In the proficient group, the responses with the 2nd CI were not significantly different from those with the 1st CI whereas in the non-proficient group the responses with the 2nd CI were driven by visual stimuli more than those with the 1st CI. Our results suggested that prelingually deafened Japanese children who underwent sequential bilateral cochlear implantation exhibit AV integration abilities, both in monaural listening as well as in binaural listening. We also observed a higher influence of visual stimuli on speech perception with the 2nd CI in the non-proficient group, suggesting that Bi-CIs listeners with poorer speech recognition rely on visual information more compared to the proficient subjects to compensate for poorer auditory input. 
Nevertheless, poorer quality auditory input with the 2nd CI did not interfere with AV integration with binaural listening (with Bi-CIs). Overall, the findings of this study might be used to inform future research to identify the best strategies for speech training using AV integration effectively in prelingually deafened children. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. A Study of Lane Detection Algorithm for Personal Vehicle

    NASA Astrophysics Data System (ADS)

    Kobayashi, Kazuyuki; Watanabe, Kajiro; Ohkubo, Tomoyuki; Kurihara, Yosuke

    By the term “personal vehicle”, we mean a simple and lightweight vehicle expected to emerge as a personal ground transportation device. The motorcycle, electric wheelchair, motor-powered bicycle, etc. are examples of the personal vehicle and have been developed as useful personal transportation. Recently, a new type of intelligent personal vehicle called the Segway has been developed, which is controlled and stabilized using on-board intelligent multiple sensors. Demand for such personal vehicles is increasing, in order to 1) enhance human mobility, 2) support mobility for elderly persons, and 3) reduce environmental burdens. As the personal vehicle market grows rapidly, the number of accidents caused by human error is also increasing, and these accidents depend on the vehicle's drivability. To enhance or support drivability, as well as to prevent accidents, intelligent assistance is necessary. One of the most important elemental functions for a personal vehicle is robust lane detection. In this paper, we develop a robust lane detection method for personal vehicles in outdoor environments. The proposed lane detection method employs a 360-degree omnidirectional camera and a robust image processing algorithm. To detect lanes, a combination of template matching and the Hough transform is employed. The validity of the proposed lane detection algorithm is confirmed with an actual developed vehicle under various outdoor sunlight conditions.

  9. 77 FR 22290 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-13

    ... of Defense Intelligence Information System (DoDIIS) Customer Relationship Management.'' System.... * Mail: Federal Docket Management System Office, 4800 Mark Center Drive, East Tower, 2nd Floor, Suite... the Office of Management and Budget (OMB) pursuant to paragraph 4c of Appendix I to OMB Circular No. A...

  10. 75 FR 13643 - ITS Joint Program Office; Intelligent Transportation Systems Program Advisory Committee; Notice...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-22

    ... Advisory Committee; Notice of Meeting AGENCY: Research and Innovative Technology Administration, U.S... Plan; (5) Evolution of IntelliDrive(SM); (6) ITS Strategic Research Plan, 2010-2014; (7) University... Technology Administration, ITS Joint Program Office, Attention: Stephen Glasscock, 1200 New Jersey Avenue, SE...

  11. Criminal Organizations and Illicit Trafficking in Guatemala’s Border Communities

    DTIC Science & Technology

    2011-12-01

    concentrated in the army intelligence services and the Presidential General Staff (Estado Mayor Presidencial). Notably, the return to civilian rule did not... Trade liberalization in the 1980s hurt the region's farmers, particularly corn producers, driving them to migrate or seek other work. Until the

  12. The 21st annual intelligent ground vehicle competition: robotists for the future

    NASA Astrophysics Data System (ADS)

    Theisen, Bernard L.

    2013-12-01

    The Intelligent Ground Vehicle Competition (IGVC) is one of four, unmanned systems, student competitions that were founded by the Association for Unmanned Vehicle Systems International (AUVSI). The IGVC is a multidisciplinary exercise in product realization that challenges college engineering student teams to integrate advanced control theory, machine vision, vehicular electronics and mobile platform fundamentals to design and build an unmanned system. Teams from around the world focus on developing a suite of dual-use technologies to equip ground vehicles of the future with intelligent driving capabilities. Over the past 21 years, the competition has challenged undergraduate, graduate and Ph.D. students with real world applications in intelligent transportation systems, the military and manufacturing automation. To date, teams from over 80 universities and colleges have participated. This paper describes some of the applications of the technologies required by this competition and discusses the educational benefits. The primary goal of the IGVC is to advance engineering education in intelligent vehicles and related technologies. The employment and professional networking opportunities created for students and industrial sponsors through a series of technical events over the four-day competition are highlighted. Finally, an assessment of the competition based on participation is presented.

  13. Ventral striatal prediction error signaling is associated with dopamine synthesis capacity and fluid intelligence

    PubMed Central

    Schlagenhauf, Florian; Rapp, Michael A.; Huys, Quentin J. M.; Beck, Anne; Wüstenberg, Torsten; Deserno, Lorenz; Buchholz, Hans-Georg; Kalbitzer, Jan; Buchert, Ralph; Kienast, Thorsten; Cumming, Paul; Plotkin, Michail; Kumakura, Yoshitaka; Grace, Anthony A.; Dolan, Raymond J.; Heinz, Andreas

    2013-01-01

    Fluid intelligence represents the capacity for flexible problem solving and rapid behavioral adaptation. Rewards drive flexible behavioral adaptation, in part via a teaching signal expressed as reward prediction errors in the ventral striatum, which has been associated with phasic dopamine release in animal studies. We examined a sample of 28 healthy male adults using multimodal imaging and biological parametric mapping with 1) functional magnetic resonance imaging during a reversal learning task and 2) in a subsample of 17 subjects also with positron emission tomography using 6-[18F]fluoro-L-DOPA to assess dopamine synthesis capacity. Fluid intelligence was measured using a battery of nine standard neuropsychological tests. Ventral striatal BOLD correlates of reward prediction errors were positively correlated with fluid intelligence and, in the right ventral striatum, also inversely correlated with dopamine synthesis capacity (FDOPA Kinapp). When exploring aspects of fluid intelligence, we observed that prediction error signaling correlates with complex attention and reasoning. These findings indicate that individual differences in the capacity for flexible problem solving may be driven by ventral striatal activation during reward-related learning, which in turn proved to be inversely associated with ventral striatal dopamine synthesis capacity. PMID:22344813

  14. The 20th annual intelligent ground vehicle competition: building a generation of robotists

    NASA Astrophysics Data System (ADS)

    Theisen, Bernard L.; Kosinski, Andrew

    2013-01-01

    The Intelligent Ground Vehicle Competition (IGVC) is one of four, unmanned systems, student competitions that were founded by the Association for Unmanned Vehicle Systems International (AUVSI). The IGVC is a multidisciplinary exercise in product realization that challenges college engineering student teams to integrate advanced control theory, machine vision, vehicular electronics and mobile platform fundamentals to design and build an unmanned system. Teams from around the world focus on developing a suite of dual-use technologies to equip ground vehicles of the future with intelligent driving capabilities. Over the past 20 years, the competition has challenged undergraduate, graduate and Ph.D. students with real world applications in intelligent transportation systems, the military and manufacturing automation. To date, teams from over 80 universities and colleges have participated. This paper describes some of the applications of the technologies required by this competition and discusses the educational benefits. The primary goal of the IGVC is to advance engineering education in intelligent vehicles and related technologies. The employment and professional networking opportunities created for students and industrial sponsors through a series of technical events over the four-day competition are highlighted. Finally, an assessment of the competition based on participation is presented.

  15. From CNTNAP2 to Early Expressive Language in Infancy: The Mediation Role of Rapid Auditory Processing.

    PubMed

    Riva, Valentina; Cantiani, Chiara; Benasich, April A; Molteni, Massimo; Piazza, Caterina; Giorda, Roberto; Dionne, Ginette; Marino, Cecilia

    2018-06-01

    Although it is clear that early language acquisition can be a target of CNTNAP2, the pathway between gene and language is still largely unknown. This research focused on the mediation role of rapid auditory processing (RAP). We tested RAP at 6 months of age by the use of event-related potentials, as a mediator between common variants of the CNTNAP2 gene (rs7794745 and rs2710102) and 20-month-old language outcome in a prospective longitudinal study of 96 Italian infants. The mediation model examines the hypothesis that language outcome is explained by a sequence of effects involving RAP and CNTNAP2. The ability to discriminate spectrotemporally complex auditory frequency changes at 6 months of age mediates the contribution of rs2710102 to expressive vocabulary at 20 months. The indirect effect revealed that rs2710102 C/C was associated with lower P3 amplitude in the right hemisphere, which, in turn, predicted poorer expressive vocabulary at 20 months of age. These findings add to a growing body of literature implicating RAP as a viable marker in genetic studies of language development. The results demonstrate a potential developmental cascade of effects, whereby CNTNAP2 drives RAP functioning that, in turn, contributes to early expressive outcome.

  16. Does assisted driving behavior lead to safety-critical encounters with unequipped vehicles' drivers?

    PubMed

    Preuk, Katharina; Stemmler, Eric; Schießl, Caroline; Jipp, Meike

    2016-10-01

    With Intelligent Transport Systems (e.g., traffic light assistance systems), assisted drivers are able to adapt their driving behavior in anticipation of upcoming traffic situations. In the years to come, the penetration rate of such systems will be low, so the majority of vehicles will not be equipped with them. Drivers of unequipped vehicles may therefore not expect the driving behavior of assisted drivers, and because drivers' predictions and expectations play a significant role in their reaction times, safety issues could arise when unequipped vehicles' drivers encounter the driving behavior of assisted drivers. We therefore tested how unequipped vehicles' drivers (N=60) interpreted and reacted to the driving behavior of an assisted driver. We used a multi-driver simulator with three drivers driving in a line: the lead driver was a confederate, followed by two unequipped vehicles' drivers. We varied the equipment of the confederate: the confederate drove either with or without a traffic light assistance system. The traffic light assistance system provided a start-up maneuver before a light turned green. As a result, the assisted confederate appeared to show unusual deceleration behavior, coming to a halt at an unusual distance from the stop line at the red traffic light. This distance was varied, as we tested a moderate (4 m from the stop line) and an extreme (10 m from the stop line) parameterization of the system. Our results showed that the extreme parameterization resulted in shorter minimal time-to-collision for the unequipped vehicles' drivers. One rear-end crash was observed. These results provide initial evidence that safety issues can arise when unequipped vehicles' drivers encounter assisted driving behavior. We recommend that future research identify countermeasures to prevent these safety issues. Moreover, we recommend that system developers discuss the best parameterizations of their systems to ensure both their benefits and safety in encounters with unequipped vehicles' drivers. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Early musical training is linked to gray matter structure in the ventral premotor cortex and auditory-motor rhythm synchronization performance.

    PubMed

    Bailey, Jennifer Anne; Zatorre, Robert J; Penhune, Virginia B

    2014-04-01

    Evidence in animals and humans indicates that there are sensitive periods during development, times when experience or stimulation has a greater influence on behavior and brain structure. Sensitive periods are the result of an interaction between maturational processes and experience-dependent plasticity mechanisms. Previous work from our laboratory has shown that adult musicians who begin training before the age of 7 show enhancements in behavior and white matter structure compared with those who begin later. Plastic changes in white matter and gray matter are hypothesized to co-occur; therefore, the current study investigated possible differences in gray matter structure between early-trained (ET; <7) and late-trained (LT; >7) musicians, matched for years of experience. Gray matter structure was assessed using voxel-wise analysis techniques (optimized voxel-based morphometry, traditional voxel-based morphometry, and deformation-based morphometry) and surface-based measures (cortical thickness, surface area and mean curvature). Deformation-based morphometry analyses identified group differences between ET and LT musicians in right ventral premotor cortex (vPMC), which correlated with performance on an auditory motor synchronization task and with age of onset of musical training. In addition, cortical surface area in vPMC was greater for ET musicians. These results are consistent with evidence that premotor cortex shows greatest maturational change between the ages of 6-9 years and that this region is important for integrating auditory and motor information. We propose that the auditory and motor interactions required by musical practice drive plasticity in vPMC and that this plasticity is greatest when maturation is near its peak.

  18. A flexible routing scheme for patients with topographical disorientation.

    PubMed

    Torres-Solis, Jorge; Chau, Tom

    2007-11-28

    Individuals with topographical disorientation have difficulty navigating through indoor environments. Recent literature has suggested that ambient intelligence technologies may provide patients with navigational assistance through auditory or graphical instructions delivered via embedded devices. We describe an automatic routing engine for such an ambient intelligence system. The method routes patients with topographical disorientation through indoor environments by repeatedly computing the route of minimal cost from the current location of the patient to a specified destination. The cost of a given path not only reflects the physical distance between end points, but also incorporates individual patient abilities, the presence of mobility-impeding physical barriers within a building and the dynamic nature of the indoor environment. We demonstrate the method by routing simulated patients with either topographical disorientation or physical disabilities. Additionally, we exemplify the ability to route a patient from source to destination while taking into account changes to the building interior. When compared to a random walk, the proposed routing scheme offers potential cost-savings even when the patient follows only a subset of instructions. The routing method presented reduces the navigational effort for patients with topographical disorientation in indoor environments, accounting for physical abilities of the patient, environmental barriers and dynamic building changes. The routing algorithm and database proposed could be integrated into wearable and mobile platforms within the context of an ambient intelligence solution.
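The repeated minimal-cost routing described above can be sketched with a standard shortest-path search over a building graph whose edge costs add a patient-specific penalty for mobility barriers. This is only an illustrative sketch: the graph layout, barrier tags, penalty weights and function names are assumptions, not the authors' implementation.

```python
import heapq

def route(graph, start, goal, penalty):
    """Minimal-cost route via Dijkstra's algorithm.

    graph: {node: [(neighbor, distance, barrier_tag), ...]}
    penalty: {barrier_tag: extra_cost} for this patient (absent tags cost 0).
    Returns (total_cost, path)."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, dist, tag in graph.get(node, []):
            if nbr not in visited:
                step = dist + penalty.get(tag, 0.0)
                heapq.heappush(frontier, (cost + step, nbr, path + [nbr]))
    return float("inf"), []

# Hypothetical floor plan: the direct corridor to C uses stairs.
graph = {"A": [("C", 1.0, "stairs"), ("B", 1.0, "flat")],
         "B": [("C", 1.0, "flat")],
         "C": []}
print(route(graph, "A", "C", {}))                  # ambulatory patient
print(route(graph, "A", "C", {"stairs": 100.0}))   # wheelchair user avoids stairs
```

Re-running `route()` from the patient's current position whenever they deviate from the instructions gives the "repeatedly computed" behavior described in the abstract; dynamic building changes (e.g., a closed corridor) map onto edge-cost updates before the recomputation.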

  19. Evaluation of Speech Intelligibility and Sound Localization Abilities with Hearing Aids Using Binaural Wireless Technology.

    PubMed

    Ibrahim, Iman; Parsa, Vijay; Macpherson, Ewan; Cheesman, Margaret

    2013-01-02

    Wireless synchronization of the digital signal processing (DSP) features between two hearing aids in a bilateral hearing aid fitting is a fairly new technology. This technology is expected to preserve the differences in time and intensity between the two ears by co-ordinating the bilateral DSP features such as multichannel compression, noise reduction, and adaptive directionality. The purpose of this study was to evaluate the benefits of wireless communication as implemented in two commercially available hearing aids. More specifically, this study measured speech intelligibility and sound localization abilities of normal hearing and hearing impaired listeners using bilateral hearing aids with wireless synchronization of multichannel Wide Dynamic Range Compression (WDRC). Twenty subjects participated; 8 had normal hearing and 12 had bilaterally symmetrical sensorineural hearing loss. Each individual completed the Hearing in Noise Test (HINT) and a sound localization test with two types of stimuli. No specific benefit from wireless WDRC synchronization was observed for the HINT; however, hearing impaired listeners had better localization with the wireless synchronization. Binaural wireless technology in hearing aids may improve localization abilities although the possible effect appears to be small at the initial fitting. With adaptation, the hearing aids with synchronized signal processing may lead to an improvement in localization and speech intelligibility. Further research is required to demonstrate the effect of adaptation to the hearing aids with synchronized signal processing on different aspects of auditory performance.

  20. General Mathematical Ability Predicts PASAT Performance in MS Patients: Implications for Clinical Interpretation and Cognitive Reserve.

    PubMed

    Sandry, Joshua; Paxton, Jessica; Sumowski, James F

    2016-03-01

    The Paced Auditory Serial Addition Test (PASAT) is used to assess cognitive status in multiple sclerosis (MS). Although the mathematical demands of the PASAT seem minor (single-digit arithmetic), cognitive psychology research links greater mathematical ability (e.g., algebra, calculus) to more rapid retrieval of single-digit math facts (e.g., 5+6=11). The present study evaluated the hypotheses that (a) mathematical ability is related to PASAT performance and (b) mathematical ability mediates both the relationship between intelligence and PASAT performance and the relationship between education and PASAT performance. Forty-five MS patients were assessed using the Wechsler Test of Adult Reading, the PASAT and the Calculation subtest of the Woodcock-Johnson-III. Regression-based path analysis and bootstrapping were used to compute 95% confidence intervals and test for mediation. Mathematical ability (a) was related to PASAT performance (β=.61; p<.001) and (b) fully mediated the relationship between intelligence and PASAT (β=.76; 95% confidence interval (CI95)=.28, 1.45; direct effect of intelligence, β=.42; CI95=-.39, 1.23) as well as the relationship between education and PASAT (β=2.43, CI95=.81, 5.16; direct effect of education, β=.83, CI95=-1.95, 3.61). Mathematical ability represents a source of error in the clinical interpretation of cognitive decline using the PASAT. Domain-specific cognitive reserve is discussed.
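The bootstrapped mediation test mentioned above (a percentile confidence interval for the indirect effect X → M → Y) can be sketched as follows. The data here are simulated and the variable names (`iq`, `math_ab`, `pasat`) are illustrative assumptions, not the study's data or code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 45
# Simulated predictor (intelligence), mediator (math ability), outcome (PASAT).
iq = rng.normal(100, 15, n)
math_ab = 0.5 * iq + rng.normal(0, 10, n)
pasat = 0.6 * math_ab + 0.1 * iq + rng.normal(0, 10, n)

def slope(x, y, covar=None):
    """OLS coefficient of x predicting y, optionally adjusting for covar."""
    cols = [np.ones_like(x), x] + ([covar] if covar is not None else [])
    return np.linalg.lstsq(np.column_stack(cols), y, rcond=None)[0][1]

def indirect(idx):
    a = slope(iq[idx], math_ab[idx])                # path X -> M
    b = slope(math_ab[idx], pasat[idx], iq[idx])    # path M -> Y, controlling X
    return a * b

# Percentile bootstrap over resampled cases.
boots = [indirect(rng.integers(0, n, n)) for _ in range(2000)]
ci_lo, ci_hi = np.percentile(boots, [2.5, 97.5])
# Mediation is supported when the CI for the indirect effect excludes zero.
print(f"indirect effect CI95 = [{ci_lo:.2f}, {ci_hi:.2f}]")
```

This is the same logic as regression-based path analysis with bootstrapped CI95s: the indirect effect is the product of the X→M slope and the M→Y slope controlling for X, resampled many times to obtain its sampling distribution.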

  1. United States Air Force High School Apprenticeship Program: 1989 Program Management Report. Volume 3

    DTIC Science & Technology

    1988-12-01

    If the content of a visual or auditory stimulus exposed to the eyes or ears is at a level below normal threshold, it is possible to perceive the subliminal stimuli...usually too small or vague to be consciously recognized, but they are declared to influence the viewer's subconscious sex drive. Stimulation below...programs the mechanisms to stimulate career interests in science and technology in high school students showing promise in these areas. The Air Force High

  2. Computational Modeling of Age-Differences In a Visually Demanding Driving Task: Vehicle Detection

    DTIC Science & Technology

    1997-10-07

    overall estimate of d’ for each scene was calculated from the two levels using the method described in MacMillan and Creelman [13]. MODELING VEHICLE...Scialfa, "Visual and auditory aging," In J. Birren & K. W. Schaie (Eds.), Handbook of the Psychology of Aging (4th edition), 1996, New York: Academic...Computational Models of Visual Processing, 1991, Boston MA: MIT Press. [13] N. A. MacMillan & C. D. Creelman, Detection Theory: A User's Guide, 1991
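The snippet above cites MacMillan and Creelman's detection-theory method for computing d′. The standard yes/no computation, d′ = z(hit rate) − z(false-alarm rate), can be sketched as below; the counts are invented, and the log-linear correction shown is one common convention rather than necessarily the one used in the report.

```python
from statistics import NormalDist

def d_prime(hits, misses, fas, crs):
    """Sensitivity index d' from raw yes/no counts.

    Applies a log-linear correction (add 0.5 to each cell) so that
    hit/false-alarm rates of exactly 0 or 1 do not yield infinite z-scores."""
    h = (hits + 0.5) / (hits + misses + 1.0)     # corrected hit rate
    f = (fas + 0.5) / (fas + crs + 1.0)          # corrected false-alarm rate
    z = NormalDist().inv_cdf                     # inverse standard-normal CDF
    return z(h) - z(f)

# e.g., 45 hits / 5 misses and 10 false alarms / 40 correct rejections:
print(round(d_prime(45, 5, 10, 40), 2))
```

A d′ of 0 corresponds to chance performance (equal hit and false-alarm rates); larger values indicate better discrimination of target scenes from non-target scenes.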

  3. A Risk Radar driven by Internet of intelligences serving for emergency management in community.

    PubMed

    Huang, Chongfu; Wu, Tong; Renn, Ortwin

    2016-07-01

    Today, most commercial risk radars can only display risks, much like a set of risk matrices. In this paper, we develop the Internet of intelligences (IOI) to drive a risk radar that monitors dynamic risks for emergency management in a community. An IOI scans risks in a community in four stages: collecting information and experience about risks; evaluating risk incidents; verifying; and showing risks. Employing the information diffusion method, we optimized the processing of the available information for calculating risk values. A specific case demonstrates the reliability and practicability of the risk radar. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Use of Questionnaire-Based Measures in the Assessment of Listening Difficulties in School-Aged Children

    PubMed Central

    Tomlin, Danielle; Moore, David R.; Dillon, Harvey

    2015-01-01

    Objectives: In this study, the authors assessed the potential utility of a recently developed questionnaire (Evaluation of Children’s Listening and Processing Skills [ECLiPS]) for supporting the clinical assessment of children referred for auditory processing disorder (APD). Design: A total of 49 children (35 referred for APD assessment and 14 from mainstream schools) were assessed for auditory processing (AP) abilities, cognitive abilities, and symptoms of listening difficulty. Four questionnaires were used to capture the symptoms of listening difficulty from the perspective of parents (ECLiPS and Fisher’s auditory problem checklist), teachers (Teacher’s Evaluation of Auditory Performance), and children, that is, self-report (Listening Inventory for Education). Correlation analyses tested for convergence between the questionnaires and both cognitive and AP measures. Discriminant analyses were performed to determine the best combination of tests for discriminating between typically developing children and children referred for APD. Results: All questionnaires were sensitive to the presence of difficulty, that is, children referred for assessment had significantly more symptoms of listening difficulty than typically developing children. There was, however, no evidence of more listening difficulty in children meeting the diagnostic criteria for APD. Some AP tests were significantly correlated with ECLiPS factors measuring related abilities providing evidence for construct validity. All questionnaires correlated to a greater or lesser extent with the cognitive measures in the study. Discriminant analysis suggested that the best discrimination between groups was achieved using a combination of ECLiPS factors, together with nonverbal Intelligence Quotient (cognitive) and AP measures (i.e., dichotic digits test and frequency pattern test). 
Conclusions: The ECLiPS was particularly sensitive to cognitive difficulties, an important aspect of many children referred for APD, as well as correlating with some AP measures. It can potentially support the preliminary assessment of children referred for APD. PMID:26002277

  5. Executive Function, Visual Attention and the Cocktail Party Problem in Musicians and Non-Musicians.

    PubMed

    Clayton, Kameron K; Swaminathan, Jayaganesh; Yazdanbakhsh, Arash; Zuk, Jennifer; Patel, Aniruddh D; Kidd, Gerald

    2016-01-01

    The goal of this study was to investigate how cognitive factors influence performance in a multi-talker, "cocktail-party" like environment in musicians and non-musicians. This was achieved by relating performance in a spatial hearing task to cognitive processing abilities assessed using measures of executive function (EF) and visual attention in musicians and non-musicians. For the spatial hearing task, a speech target was presented simultaneously with two intelligible speech maskers that were either colocated with the target (0° azimuth) or were symmetrically separated from the target in azimuth (at ±15°). EF assessment included measures of cognitive flexibility, inhibition control and auditory working memory. Selective attention was assessed in the visual domain using a multiple object tracking task (MOT). For the MOT task, the observers were required to track target dots (n = 1,2,3,4,5) in the presence of interfering distractor dots. Musicians performed significantly better than non-musicians in the spatial hearing task. For the EF measures, musicians showed better performance on measures of auditory working memory compared to non-musicians. Furthermore, across all individuals, a significant correlation was observed between performance on the spatial hearing task and measures of auditory working memory. This result suggests that individual differences in performance in a cocktail party-like environment may depend in part on cognitive factors such as auditory working memory. Performance in the MOT task did not differ between groups. However, across all individuals, a significant correlation was found between performance in the MOT and spatial hearing tasks. A stepwise multiple regression analysis revealed that musicianship and performance on the MOT task significantly predicted performance on the spatial hearing task. 
Overall, these findings confirm the relationship between musicianship and cognitive factors including domain-general selective attention and working memory in solving the "cocktail party problem".

  7. The Auditory-Brainstem Response to Continuous, Non-repetitive Speech Is Modulated by the Speech Envelope and Reflects Speech Processing

    PubMed Central

    Reichenbach, Chagit S.; Braiman, Chananel; Schiff, Nicholas D.; Hudspeth, A. J.; Reichenbach, Tobias

    2016-01-01

    The auditory-brainstem response (ABR) to short and simple acoustical signals is an important clinical tool used to diagnose the integrity of the brainstem. The ABR is also employed to investigate the auditory brainstem in a multitude of tasks related to hearing, such as processing speech or selectively focusing on one speaker in a noisy environment. Such research measures the response of the brainstem to short speech signals such as vowels or words. Because the voltage signal of the ABR has a tiny amplitude, several hundred to a thousand repetitions of the acoustic signal are needed to obtain a reliable response. The large number of repetitions poses a challenge to assessing cognitive functions due to neural adaptation. Here we show that continuous, non-repetitive speech, lasting several minutes, may be employed to measure the ABR. Because the speech is not repeated during the experiment, the precise temporal form of the ABR cannot be determined. We show, however, that important structural features of the ABR can nevertheless be inferred. In particular, the brainstem responds at the fundamental frequency of the speech signal, and this response is modulated by the envelope of the voiced parts of speech. We accordingly introduce a novel measure that assesses the ABR as modulated by the speech envelope, at the fundamental frequency of speech and at the characteristic latency of the response. This measure has a high signal-to-noise ratio and can hence be employed effectively to measure the ABR to continuous speech. We use this novel measure to show that the ABR is weaker to intelligible speech than to unintelligible, time-reversed speech. The methods presented here can be employed for further research on speech processing in the auditory brainstem and can lead to the development of future clinical diagnosis of brainstem function. PMID:27303286

  8. Using Mobile Laser Scanning Data for Features Extraction of High Accuracy Driving Maps

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Liu, Yuan; Liang, Fuxun; Dong, Zhen

    2016-06-01

    High Accuracy Driving Maps (HADMs) are the core component of Intelligent Drive Assistant Systems (IDAS), which can effectively reduce traffic accidents due to human error and provide more comfortable driving experiences. Vehicle-based mobile laser scanning (MLS) systems provide an efficient solution for rapidly capturing three-dimensional (3D) point clouds of road environments with high flexibility and precision. This paper proposes a novel method to extract road features (e.g., road surfaces, road boundaries, road markings, buildings, guardrails, street lamps, traffic signs, roadside trees, power lines and vehicles) for HADMs in highway environments. Quantitative evaluations show that the proposed algorithm attains an average precision of 90.6% and an average recall of 91.2% in extracting road features. Results demonstrate the efficiency and feasibility of the proposed method for extracting road features for HADMs.
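For reference, the precision and recall figures quoted in such evaluations are defined from true-positive, false-positive and false-negative counts. The counts in this sketch are invented, chosen only so the ratios land near percentages of the reported magnitude; they are not the paper's data.

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# e.g., 906 correctly extracted features, 94 spurious, 87 missed:
p, r = precision_recall(906, 94, 87)
print(f"precision={p:.1%}, recall={r:.1%}")
```

Precision penalizes spurious extractions (clutter labeled as a road feature), while recall penalizes missed features; reporting both guards against trivially maximizing one at the expense of the other.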

  9. Single- and dual-task performance during on-the-road driving at a low and moderate dose of alcohol: A comparison between young novice and more experienced drivers.

    PubMed

    Jongen, Stefan; van der Sluiszen, Nick N J J M; Brown, Dennis; Vuurman, Eric F P M

    2018-05-01

    Driving experience and alcohol are two factors associated with a higher risk of crash involvement in young novice drivers. Driving a car is a complex task involving multiple tasks leading to dividing attention. The aim of this study was to compare the single and combined effects of a low and moderate dose of alcohol on single- and dual-task performance between young novice and more experienced young drivers during actual driving. Nine healthy novice drivers were compared with 9 more experienced drivers in a three-way, placebo-controlled, cross-over study design. Driving performance was measured in actual traffic, with standard deviation of lateral position as the primary outcome variable. Secondary task performance was measured with an auditory word learning test during driving. Results showed that standard deviation of lateral position increased dose-dependently at a blood alcohol concentration (BAC) of 0.2 and 0.5 g/L in both novice and experienced drivers. Secondary task performance was impaired in both groups at a BAC of 0.5 g/L. Furthermore, it was found that driving performance in novice drivers was already impaired at a BAC of 0.2 g/L during dual-task performance. The findings suggest that young inexperienced drivers are especially vulnerable to increased mental load while under the influence of alcohol. © 2018 The Authors Human Psychopharmacology: Clinical and Experimental Published by John Wiley & Sons Ltd.

  10. Ambient Intelligence 2.0: Towards Synergetic Prosperity

    NASA Astrophysics Data System (ADS)

    Aarts, Emile; Grotenhuis, Frits

    Ten years of research in Ambient Intelligence have revealed that the original ideas and assertions about the way the concept should develop no longer hold and should be substantially revised. Early scenarios in Ambient Intelligence envisioned a world in which individuals could maximally exploit personalized, context-aware, wireless devices, enabling them to become maximally productive while living at an unprecedented pace. Environments would become smart and proactive, enriching and enhancing the experience of participants and supporting maximum leisure, possibly even at the risk of alienation. New insights have revealed that these brave-new-world scenarios are no longer desirable and that people prefer a balanced approach in which technology serves people instead of driving them to the max. We call this novel approach Synergetic Prosperity, referring to meaningful digital solutions that balance mind and body, and society and earth, thus contributing to the prosperous and sustainable development of mankind.

  11. Intelligent mobility for robotic vehicles in the army after next

    NASA Astrophysics Data System (ADS)

    Gerhart, Grant R.; Goetz, Richard C.; Gorsich, David J.

    1999-07-01

    The TARDEC Intelligent Mobility program addresses several essential technologies necessary to support the army after next (AAN) concept. Ground forces in the AAN time frame will deploy robotic unmanned ground vehicles (UGVs) in high-risk missions to avoid exposing soldiers to both friendly and unfriendly fire. Prospective robotic systems will include RSTA/scout vehicles, combat engineering/mine clearing vehicles, indirect fire artillery and missile launch platforms. The AAN concept requires high on-road and off-road mobility, survivability, transportability/deployability and low logistics burden. TARDEC is developing a robotic vehicle systems integration laboratory (SIL) to evaluate technologies and their integration into future UGV systems. Example technologies include the following: in-hub electric drive, omni-directional wheel and steering configurations, off-road tires, adaptive tire inflation, articulated vehicles, active suspension, mine blast protection, detection avoidance and evasive maneuver. This paper will describe current developments in these areas relative to the TARDEC intelligent mobility program.

  12. Speech therapy in adolescents with Down syndrome: In pursuit of communication as a fundamental human right.

    PubMed

    Rvachew, Susan; Folden, Marla

    2018-02-01

    The achievement of speech intelligibility by persons with Down syndrome facilitates their participation in society. Denial of speech therapy services by virtue of low cognitive skills is a violation of their fundamental human rights as proclaimed in the Universal Declaration of Human Rights in general and in Article 19 in particular. Here, we describe the differential response of an adolescent with Down syndrome to three speech therapy interventions and demonstrate the use of a single subject randomisation design to identify effective treatments for children with complex communication disorders. Over six weeks, 18 speech therapy sessions were provided with treatment conditions randomly assigned to targets and sessions within weeks, specifically comparing auditory-motor integration prepractice and phonological planning prepractice to a control condition that included no prepractice. All treatments involved high intensity practice of nonsense word targets paired with tangible referents. A measure of generalisation from taught words to untaught real words in phrases revealed superior learning in the auditory-motor integration condition. The intervention outcomes may serve to justify the provision of appropriate supports to persons with Down syndrome so that they may achieve their full potential to receive information and express themselves.

  13. Reconstruction of audio waveforms from spike trains of artificial cochlea models

    PubMed Central

    Zai, Anja T.; Bhargava, Saurabh; Mesgarani, Nima; Liu, Shih-Chii

    2015-01-01

    Spiking cochlea models describe the analog processing and spike generation process within the biological cochlea. Reconstructing the audio input from the artificial cochlea spikes is therefore useful for understanding the fidelity of the information preserved in the spikes. The reconstruction process is challenging particularly for spikes from the mixed signal (analog/digital) integrated circuit (IC) cochleas because of multiple non-linearities in the model and the additional variance caused by random transistor mismatch. This work proposes an offline method for reconstructing the audio input from spike responses of both a particular spike-based hardware model called the AEREAR2 cochlea and an equivalent software cochlea model. This method was previously used to reconstruct the auditory stimulus based on the peri-stimulus histogram of spike responses recorded in the ferret auditory cortex. The reconstructed audio from the hardware cochlea is evaluated against an analogous software model using objective measures of speech quality and intelligibility; and further tested in a word recognition task. The reconstructed audio under low signal-to-noise (SNR) conditions (SNR < –5 dB) gives a better classification performance than the original SNR input in this word recognition task. PMID:26528113

  14. Learning-induced neural plasticity of speech processing before birth

    PubMed Central

    Partanen, Eino; Kujala, Teija; Näätänen, Risto; Liitola, Auli; Sambeth, Anke; Huotilainen, Minna

    2013-01-01

    Learning, the foundation of adaptive and intelligent behavior, is based on plastic changes in neural assemblies, reflected by the modulation of electric brain responses. In infancy, auditory learning implicates the formation and strengthening of neural long-term memory traces, improving discrimination skills, in particular those forming the prerequisites for speech perception and understanding. Although previous behavioral observations show that newborns react differentially to unfamiliar sounds vs. familiar sound material that they were exposed to as fetuses, the neural basis of fetal learning has not thus far been investigated. Here we demonstrate direct neural correlates of human fetal learning of speech-like auditory stimuli. We presented variants of words to fetuses; unlike infants with no exposure to these stimuli, the exposed fetuses showed enhanced brain activity (mismatch responses) in response to pitch changes for the trained variants after birth. Furthermore, a significant correlation existed between the amount of prenatal exposure and brain activity, with greater activity being associated with a higher amount of prenatal speech exposure. Moreover, the learning effect was generalized to other types of similar speech sounds not included in the training material. Consequently, our results indicate neural commitment specifically tuned to the speech features heard before birth and their memory representations. PMID:23980148

  15. Amplitude modulation detection by human listeners in sound fields.

    PubMed

    Zahorik, Pavel; Kim, Duck O; Kuwada, Shigeyuki; Anderson, Paul W; Brandewie, Eugene; Srinivasan, Nirmal

    2011-10-01

    The temporal modulation transfer function (TMTF) approach allows techniques from linear systems analysis to be used to predict how the auditory system will respond to arbitrary patterns of amplitude modulation (AM). Although this approach forms the basis for a standard method of predicting speech intelligibility based on estimates of the acoustical modulation transfer function (MTF) between source and receiver, human sensitivity to AM as characterized by the TMTF has not been extensively studied under realistic listening conditions, such as in reverberant sound fields. Here, TMTFs (octave bands from 2–512 Hz) were obtained in three listening conditions simulated using virtual auditory space techniques: diotic, anechoic sound field, and reverberant room sound field. TMTFs were then related to acoustical MTFs estimated using two different methods in each of the listening conditions. Both diotic and anechoic data were found to be in good agreement with classic results, but AM thresholds in the reverberant room were lower than predictions based on acoustical MTFs. This result suggests that simple linear systems techniques may not be appropriate for predicting TMTFs from acoustical MTFs in reverberant sound fields, and may be suggestive of mechanisms that functionally enhance modulation during reverberant listening.

  16. Outcomes of cochlear implantation in deaf children of deaf parents: comparative study.

    PubMed

    Hassanzadeh, S

    2012-10-01

    This retrospective study compared the cochlear implantation outcomes of first- and second-generation deaf children. The study group consisted of seven deaf, cochlear-implanted children with deaf parents. An equal number of deaf children with normal-hearing parents were selected by matched sampling as a reference group. Participants were matched based on onset and severity of deafness, duration of deafness, age at cochlear implantation, duration of cochlear implantation, gender, and cochlear implant model. We used the Persian Auditory Perception Test for the Hearing Impaired, the Speech Intelligibility Rating scale, and the Sentence Imitation Test, in order to measure participants' speech perception, speech production and language development, respectively. Both groups of children showed auditory and speech development. However, the second-generation deaf children (i.e. deaf children of deaf parents) exceeded the cochlear implantation performance of the deaf children with hearing parents. This study confirms that second-generation deaf children exceed deaf children of hearing parents in terms of cochlear implantation performance. Encouraging deaf children to communicate in sign language from a very early age, before cochlear implantation, appears to improve their ability to learn spoken language after cochlear implantation.

  17. 78 FR 22525 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-16

    ... Management System Office, 4800 Mark Center Drive; East Tower, 2nd Floor, Suite 02G09, Alexandria, VA 22350... 10-0004 System name: Occupational, Safety, Health, and Environmental Management Records (July 2, 2010...; System of Records AGENCY: Defense Intelligence Agency, DoD. ACTION: Notice to alter a System of Records...

  18. Application of Artificial Intelligence to the DoD Corporate Information Management (CIM) Program

    DTIC Science & Technology

    1992-04-01

    problem of balancing the investments of the corporation between several possible assets; buildings, machine tools, training, R&D and "information...and quality of worklife /learning/empowerment. For the moment the driving factor for the DoD has been identified as cost reduction, however it is clear

  19. 77 FR 39488 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-03

    ... entry and replace with ``Operations Management Branch, Attn: DAL-2B, Defense Intelligence Agency, 200.... * Mail: Federal Docket Management System Office, 4800 Mark Center Drive, East Tower, 2nd Floor, Suite... Management and Budget (OMB) pursuant to paragraph 4c of Appendix I to OMB Circular No. A-130, ``Federal...

  20. MOVANAID: An Interactive Aid for Analysis of Movement Capabilities.

    ERIC Educational Resources Information Center

    Cooper, George E.; And Others

A computer-driven interactive aid for movement analysis, called MOVANAID, has been developed to assist in the performance of certain Army intelligence processing tasks in a tactical environment. It can compute fastest travel times and paths through road networks for military units of various types, as well as fastest times in which…

  1. A Qualitative Synthesis of the Flynn Effect

    ERIC Educational Resources Information Center

    Ceci, Stephen J.; Williams, Wendy M.

    2016-01-01

Clark et al. focus on the likely drivers of the Flynn effect (sociocultural, educational, technological), and imply that it is not a single causal agent driving the upward climb in IQ scores but perhaps multiple causes with different onsets. Given the authors' conception of intelligence in terms of underlying attentional and cognitive resources…

  2. In Search of a New Paradigm for Higher Education

    ERIC Educational Resources Information Center

    Schejbal, David

    2012-01-01

    In this essay I argue that online education, artificial intelligence, and market pressures are driving higher education to adopt the industrial model and to find a new paradigm for delivering education at low costs. In addition, there is tremendous pressure from the federal government to make universities more accountable while making higher…

  3. Functional role of delta and theta band oscillations for auditory feedback processing during vocal pitch motor control

    PubMed Central

    Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Robin, Donald A.; Larson, Charles R.

    2015-01-01

    The answer to the question of how the brain incorporates sensory feedback and links it with motor function to achieve goal-directed movement during vocalization remains unclear. We investigated the mechanisms of voice pitch motor control by examining the spectro-temporal dynamics of EEG signals when non-musicians (NM), relative pitch (RP), and absolute pitch (AP) musicians maintained vocalizations of a vowel sound and received randomized ± 100 cents pitch-shift stimuli in their auditory feedback. We identified a phase-synchronized (evoked) fronto-central activation within the theta band (5–8 Hz) that temporally overlapped with compensatory vocal responses to pitch-shifted auditory feedback and was significantly stronger in RP and AP musicians compared with non-musicians. A second component involved a non-phase-synchronized (induced) frontal activation within the delta band (1–4 Hz) that emerged at approximately 1 s after the stimulus onset. The delta activation was significantly stronger in the NM compared with RP and AP groups and correlated with the pitch rebound error (PRE), indicating the degree to which subjects failed to re-adjust their voice pitch to baseline after the stimulus offset. We propose that the evoked theta is a neurophysiological marker of enhanced pitch processing in musicians and reflects mechanisms by which humans incorporate auditory feedback to control their voice pitch. We also suggest that the delta activation reflects adaptive neural processes by which vocal production errors are monitored and used to update the state of sensory-motor networks for driving subsequent vocal behaviors. This notion is corroborated by our findings showing that larger PREs were associated with greater delta band activity in the NM compared with RP and AP groups. These findings provide new insights into the neural mechanisms of auditory feedback processing for vocal pitch motor control. PMID:25873858

  4. An intelligent multi-media human-computer dialogue system

    NASA Technical Reports Server (NTRS)

    Neal, J. G.; Bettinger, K. E.; Byoun, J. S.; Dobes, Z.; Thielman, C. Y.

    1988-01-01

    Sophisticated computer systems are being developed to assist in the human decision-making process for very complex tasks performed under stressful conditions. The human-computer interface is a critical factor in these systems. The human-computer interface should be simple and natural to use, require a minimal learning period, assist the user in accomplishing his task(s) with a minimum of distraction, present output in a form that best conveys information to the user, and reduce cognitive load for the user. In pursuit of this ideal, the Intelligent Multi-Media Interfaces project is devoted to the development of interface technology that integrates speech, natural language text, graphics, and pointing gestures for human-computer dialogues. The objective of the project is to develop interface technology that uses the media/modalities intelligently in a flexible, context-sensitive, and highly integrated manner modelled after the manner in which humans converse in simultaneous coordinated multiple modalities. As part of the project, a knowledge-based interface system, called CUBRICON (CUBRC Intelligent CONversationalist) is being developed as a research prototype. The application domain being used to drive the research is that of military tactical air control.

  5. Driver state examination--Treading new paths.

    PubMed

    Wascher, Edmund; Getzmann, Stephan; Karthaus, Melanie

    2016-06-01

    A large proportion of crashes in road driving can be attributed to driver fatigue. Several types of fatigue are discussed, comprising sleep-related fatigue, active task-related fatigue (as a consequence of workload in demanding driving situations) as well as passive task-related fatigue (as related to monotonous driving situations). The present study investigated actual states of fatigue in a monotonous driving situation, using EEG measures and a long-lasting driving simulation experiment, in which drivers had to keep the vehicle on track by compensating crosswind of different strength. Performance data and electrophysiological correlates of mental fatigue (EEG Alpha and Theta power, Inter Trial Coherence (ITC), and auditory event-related potentials to short sound stimuli) were analyzed. Driving errors and driving lane variability increased with time on task and with increasing crosswind. The posterior Alpha and Theta power also increased with time on task, but decreased with stronger crosswind. The P3a to sound stimuli decreased with time on task when the crosswind was weak, but remained stable when the crosswind was strong. The analysis of ITC revealed less frontal Alpha and Theta band synchronization with time on task, but no effect of crosswind. The results suggest that Alpha power in monotonous driving situations reflects boredom or attentional withdrawal due to monotony rather than the decline of processing abilities as a consequence of high mental effort. A more valid indicator of declining mental resources with increasing time on task seems to be provided by brain oscillatory synchronization measures and event-related activity. Copyright © 2016 Elsevier Ltd. All rights reserved.
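Posterior Alpha and Theta power of the kind analyzed in this study is conventionally estimated from the EEG power spectral density. The following sketch (a synthetic signal and a standard Welch-based band-power estimate; not the study's data or pipeline) illustrates the computation:

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, band):
    """Mean power spectral density of x within a frequency band (Hz)."""
    f, psd = welch(x, fs=fs, nperseg=fs * 2)  # 0.5 Hz resolution
    lo, hi = band
    mask = (f >= lo) & (f <= hi)
    return psd[mask].mean()

# Synthetic "EEG": a strong 10 Hz alpha rhythm plus broadband noise.
fs = 256
t = np.arange(30 * fs) / fs
rng = np.random.default_rng(1)
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(len(t))

alpha = band_power(eeg, fs, (8, 13))   # contains the 10 Hz peak
theta = band_power(eeg, fs, (4, 8))    # noise floor only
```

With the 10 Hz component present, alpha-band power dominates theta-band power, mirroring the kind of band contrast tracked over time on task.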

  6. Initiative for safe driving and enhanced utilization of crash data

    NASA Astrophysics Data System (ADS)

    Wagner, John F.

    1994-03-01

    This initiative addresses the utilization of current technology to increase the efficiency of police officers to complete required Driving Under the Influence (DUI) forms and to enhance their ability to acquire and record crash and accident information. The project is a cooperative program among the New Mexico Alliance for Transportation Research (ATR), Science Applications International Corporation (SAIC), Los Alamos National Laboratory, and the New Mexico State Highway and Transportation Department. The approach utilizes an in-car computer and associated sensors for information acquisition and recording. Los Alamos artificial intelligence technology is leveraged to ensure ease of data entry and use.

  7. The University of Western Ontario Pediatric Audiological Monitoring Protocol (UWO PedAMP)

    PubMed Central

    Moodie, Sheila T.; Malandrino, April C.; Richert, Frances M.; Clench, Debbie A.; Scollie, Susan D.

    2011-01-01

    This study proposed and evaluated a guideline for outcome evaluation for infants and children with hearing loss who wear hearing aids. The University of Western Ontario Pediatric Audiological Monitoring Protocol (UWO PedAMP) was developed following a critical review of pediatric outcome evaluation tools and was systematically examined by the Network of Pediatric Audiologists of Canada. It consists of tools to gather clinical process outcomes as well as functional caregiver reports. The UWO PedAMP was administered to a clinical population of infants and children with hearing aids. Sixty-eight children were administered the functional outcome evaluation tools (i.e., caregiver reports) a total of 133 times. Clinical process outcomes of hearing aid verification (e.g., real-ear-to-coupler difference) revealed typical aided audibility (e.g., Speech Intelligibility Index). Results for the LittlEARS® questionnaire revealed that typically developing children with hearing loss who wear hearing aids are meeting auditory development milestones. Children with mild to moderate comorbidities displayed typical auditory development during the 1st year of life after which development began to decline. Children with complex factors related to hearing aid use had lower scores on the LittlEARS, but auditory development was in parallel to norms. Parents’ Evaluation of Aural/Oral Performance (PEACH) results indicated no age effect on scoring for children above 2 years of age; however, the effect of degree of hearing loss was significant. This work provides clinicians with a systematic, evidence-based outcome evaluation protocol to implement as part of a complete pediatric hearing aid fitting. PMID:22194316

  8. Processing of voices in deafness rehabilitation by auditory brainstem implant.

    PubMed

    Coez, Arnaud; Zilbovicius, Monica; Ferrary, Evelyne; Bouccara, Didier; Mosnier, Isabelle; Ambert-Dahan, Emmanuèle; Kalamarides, Michel; Bizaguet, Eric; Syrota, André; Samson, Yves; Sterkers, Olivier

    2009-10-01

The superior temporal sulcus (STS) is specifically involved in processing the human voice. Profound acquired deafness from post-meningitis ossified cochlea and from bilateral vestibular schwannoma in neurofibromatosis type 2 patients are two indications for auditory brainstem implantation (ABI). In order to objectively measure cortical voice processing in a group of ABI patients, we studied the activation of the human temporal voice areas (TVA) by H(2)(15)O PET, performed in a group of implanted deaf adults (n=7) with more than two years of auditory brainstem implant experience and an average intelligibility score of 17%+/-17 [mean+/-SD]. Relative cerebral blood flow (rCBF) was measured in the three following conditions: during silence, while passively listening to human voice, and to non-voice stimuli. Compared to silence, the activations induced by voice and non-voice stimuli were bilaterally located in the superior temporal regions. However, compared to non-voice stimuli, the voice stimuli did not induce specific supplementary activation of the TVA along the STS. The comparison of the ABI group with a normal-hearing control group (n=7) showed that TVA activations were significantly enhanced in the control group. ABI allowed the transmission of sound stimuli to temporal brain regions but failed to transmit the specific cues of the human voice to the TVA. Moreover, during the silent condition, visual brain regions showed higher rCBF in the ABI group, whereas temporal brain regions showed higher rCBF in the control group. ABI patients had consequently developed enhanced visual strategies to keep interacting with their environment.

  9. The effect of phosphatidylserine administration on memory and symptoms of attention-deficit hyperactivity disorder: a randomised, double-blind, placebo-controlled clinical trial.

    PubMed

    Hirayama, S; Terasawa, K; Rabeler, R; Hirayama, T; Inoue, T; Tatsumi, Y; Purpura, M; Jäger, R

    2014-04-01

    Attention-deficit hyperactivity disorder (ADHD) is the most commonly diagnosed behavioural disorder of childhood, affecting 3-5% of school-age children. The present study investigated whether the supplementation of soy-derived phosphatidylserine (PS), a naturally occurring phospholipid, improves ADHD symptoms in children. Thirty six children, aged 4-14 years, who had not previously received any drug treatment related to ADHD, received placebo (n = 17) or 200 mg day(-1) PS (n = 19) for 2 months in a randomised, double-blind manner. Main outcome measures included: (i) ADHD symptoms based on DSM-IV-TR; (ii) short-term auditory memory and working memory using the Digit Span Test of the Wechsler Intelligence Scale for Children; and (iii) mental performance to visual stimuli (GO/NO GO task). PS supplementation resulted in significant improvements in: (i) ADHD (P < 0.01), AD (P < 0.01) and HD (P < 0.01); (ii) short-term auditory memory (P < 0.05); and (iii) inattention (differentiation and reverse differentiation, P < 0.05) and inattention and impulsivity (P < 0.05). No significant differences were observed in other measurements and in the placebo group. PS was well-tolerated and showed no adverse effects. PS significantly improved ADHD symptoms and short-term auditory memory in children. PS supplementation might be a safe and natural nutritional strategy for improving mental performance in young children suffering from ADHD. © 2013 The Authors Journal of Human Nutrition and Dietetics © 2013 The British Dietetic Association Ltd.

  10. Is there a hearing aid for the thinking person?

    PubMed

    Hafter, Ervin R

    2010-10-01

    The history of auditory prosthesis has generally concentrated on bottom-up processing, that is, on audibility. However, a growing interest in top-down processing has focused on correlations between success with a hearing aid and such higher order processing as the patient's intelligence, problem solving and language skills, and the perceived effort of day-to-day listening. Examination of two cases of cognitive effects in hearing that illustrate less-often-studied issues: (1) Individual subjects in a study use different listening strategies, a fact that, if not known to the experimenter, can lead to errors in interpretation; (2) A measure of shared attention can point to otherwise unknown functional effects of an algorithm used in hearing aids. In the two examples described above: (1) Patients with cochlear implants served in a study of the binaural precedence effect, that is, echo suppression. (2) Individuals identifying speech-in-noise benefit from noise reduction (NR) when the criterion was improved performance in simultaneous tests of verbal memory or visual reaction times. Studies of hearing impairment, either in the laboratory or in a fitting session, should include study of the complex stimuli that make up the natural environment, conditions where the thinking auditory brain adopts strategies for dealing with large amounts of input data. In addition to well-known factors that must be included in communication, such things as familiarity, syntax, and semantics, the work here shows that strategic listening can affect even how we deal with seemingly simpler requirements, localizing sounds in a reverberant auditory scene and listening for speech in noise when busy with other cognitive tasks. American Academy of Audiology.

  11. Cognitive, sensory, and psychosocial characteristics in patients with Bardet-Biedl syndrome.

    PubMed

    Brinckman, Danielle D; Keppler-Noreuil, Kim M; Blumhorst, Catherine; Biesecker, Leslie G; Sapp, Julie C; Johnston, Jennifer J; Wiggs, Edythe A

    2013-12-01

    Forty-two patients with a clinical diagnosis of Bardet-Biedl syndrome ages 2-61 years were given a neuropsychological test battery to evaluate cognitive, sensory, and behavioral functioning. These tests included the Wechsler scales of intelligence, Rey Auditory Verbal Learning Test, Boston Naming Test, D-KEFS Verbal Fluency Test, D-KEFS Color-Word Interference Test, D-KEFS Sorting Test, Wide Range Achievement Test: Math and Reading Subtests, Purdue Pegboard, The University of Pennsylvania Smell Identification Test, Social Communication Questionnaire, Social Responsiveness Scale, and Behavior Assessment System for Children, Second Edition, Parent Rating Scale. On the age appropriate Wechsler scale, the mean Verbal Comprehension was 81 (n = 36), Working Memory was 81 (n = 36), Perceptual Reasoning was 78 (n = 24) and Full Scale IQ was 75 (n = 26). Memory for a word list (Rey Auditory Verbal Learning Test) was in the average range with a mean of 89 (n = 19). Fine motor speed was slow on the Purdue with mean scores 3-4 standard deviations below norms. All subjects were microsmic on the University of Pennsylvania Smell Identification Test. Of these 42 patients, only 6 were able to complete all auditory and visual tests; 52% were unable to complete the visual tests due to impaired vision. A wide range of behavioral issues were endorsed on questionnaires given to parents. Most had social skill deficits but no pattern of either externalizing or internalizing problems. We identify a characteristic neuro-behavioral profile in our cohort comprised of reduced IQ, impaired fine-motor function, and decreased olfaction. © 2013 Wiley Periodicals, Inc.

  12. Neurocognitive Correlates of Young Drivers' Performance in a Driving Simulator.

    PubMed

    Guinosso, Stephanie A; Johnson, Sara B; Schultheis, Maria T; Graefe, Anna C; Bishai, David M

    2016-04-01

Differences in neurocognitive functioning may contribute to driving performance among young drivers. However, few studies have examined this relation. This pilot study investigated whether common neurocognitive measures were associated with driving performance among young drivers in a driving simulator. Young drivers (mean age 19.8 years, standard deviation [SD] = 1.9; N = 74) participated in a battery of neurocognitive assessments measuring general intellectual capacity (Full-Scale Intelligence Quotient, FSIQ) and executive functioning, including the Stroop Color-Word Test (cognitive inhibition), Wisconsin Card Sort Test-64 (cognitive flexibility), and Attention Network Task (alerting, orienting, and executive attention). Participants then drove in a simulated vehicle under two conditions: a baseline and a driving challenge. During the driving challenge, participants completed a verbal working memory task to increase demand on executive attention. Multiple regression models were used to evaluate the relations between the neurocognitive measures and driving performance under the two conditions. FSIQ, cognitive inhibition, and alerting were associated with better driving performance at baseline. FSIQ and cognitive inhibition were also associated with better driving performance during the verbal challenge. Measures of cognitive flexibility, orienting, and conflict executive control were not associated with driving performance under either condition. FSIQ and, to some extent, measures of executive function are associated with driving performance in a driving simulator. Further research is needed to determine whether executive function is associated with more advanced driving performance under conditions that demand greater cognitive load. Copyright © 2016 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  13. Game-like tasks for comparative research: leveling the playing field

    NASA Technical Reports Server (NTRS)

    Washburn, D. A.; Gulledge, J. P.; Rumbaugh, D. M. (Principal Investigator)

    1995-01-01

    Game-like computer tasks offer many benefits for psychological research. In this paper, the usefulness of such tasks to bridge population differences (e.g., age, intelligence, species) is discussed and illustrated. A task called ALVIN was used to assess humans' and monkeys' working memory for sequences of colors with or without tones. Humans repeated longer lists than did the monkeys, and only humans benefited when the visual stimuli were accompanied by auditory cues. However, the monkeys did recall sequences at levels comparable to those reported elsewhere for children. Comparison of similarities and differences between the species is possible because the two groups were tested with exactly the same game-like paradigm.

  14. Placement from community-based mental retardation programs: how well do clients do?

    PubMed

    Schalock, R L; Harper, R S

    1978-11-01

    Mentally retarded clients (N = 131) placed during a 2-year period from either an independent living or competitive employment training program were evaluated as to placement success. Thirteen percent returned to the training program. Successful independent living placement was related to intelligence and demonstrated skills in symbolic operations, personal maintenance, clothing care and use, socially appropriate behavior, and functional academics. Successful employment was related to sensorimotor, visual-auditory processing, language, and symbolic-operations skills. Major reasons for returning from a job to the competitive employment training program included inappropriate behavior or need for more training; returning from community living placement was related to money management, apartment cleanliness, social behavior, and meal preparation.

  15. Game-like tasks for comparative research: leveling the playing field.

    PubMed

    Washburn, D A; Gulledge, J P

    1995-01-01

    Game-like computer tasks offer many benefits for psychological research. In this paper, the usefulness of such tasks to bridge population differences (e.g., age, intelligence, species) is discussed and illustrated. A task called ALVIN was used to assess humans' and monkeys' working memory for sequences of colors with or without tones. Humans repeated longer lists than did the monkeys, and only humans benefited when the visual stimuli were accompanied by auditory cues. However, the monkeys did recall sequences at levels comparable to those reported elsewhere for children. Comparison of similarities and differences between the species is possible because the two groups were tested with exactly the same game-like paradigm.

  16. A laboratory study for assessing speech privacy in a simulated open-plan office.

    PubMed

    Lee, P J; Jeon, J Y

    2014-06-01

The aim of this study is to assess speech privacy in open-plan offices using two recently introduced single-number quantities: the spatial decay rate of speech, DL(2,S) [dB], and the A-weighted sound pressure level of speech at a distance of 4 m, L(p,A,S,4m) [dB]. Open-plan offices were modeled using a DL(2,S) of 4, 8, and 12 dB, and L(p,A,S,4m) was varied in three steps, from 43 to 57 dB. Auditory experiments were conducted at three locations with source-receiver distances of 8, 16, and 24 m, while the background noise level was fixed at 30 dBA. A total of 20 subjects were asked to rate the speech intelligibility and listening difficulty of 240 Korean sentences in these surroundings. The speech intelligibility scores were not affected by DL(2,S) or L(p,A,S,4m) at a source-receiver distance of 8 m; however, listening difficulty ratings changed significantly with increasing DL(2,S) and L(p,A,S,4m) values. At the other locations, the influences of DL(2,S) and L(p,A,S,4m) on both speech intelligibility and listening difficulty ratings were significant. It was also found that the speech intelligibility scores and listening difficulty ratings changed considerably with increasing distraction distance (r(D)). Furthermore, listening difficulty is more sensitive than intelligibility scores to variations in DL(2,S) and L(p,A,S,4m) for sound fields with high speech transmission performance. The recently introduced single-number quantities in the ISO standard, based on the spatial distribution of sound pressure level, were associated with speech privacy in an open-plan office. The results support these single-number quantities as suitable for assessing speech privacy, mainly at large distances. This new information can be considered when designing open-plan offices and drafting acoustic guidelines for them.
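The two single-number quantities combine into a simple spatial decay model: the speech level drops by DL(2,S) dB for each doubling of distance beyond the 4 m reference. A minimal sketch over the study's conditions (the 50 dB reference level is an assumed midpoint of the 43-57 dB range, not a value from the study):

```python
import math

def speech_level(l_p_a_s_4m, dl2_s, distance_m):
    """A-weighted speech level at a given distance under a spatial decay
    model: the level drops dl2_s dB per doubling of distance beyond 4 m."""
    return l_p_a_s_4m - dl2_s * math.log2(distance_m / 4.0)

# Conditions from the study: DL(2,S) of 4, 8, 12 dB; background noise 30 dBA;
# receiver distances of 8, 16, and 24 m.
noise_dba = 30.0
for dl2_s in (4, 8, 12):
    for d in (8, 16, 24):
        lvl = speech_level(50.0, dl2_s, d)  # 50.0 dB reference is assumed
        snr = lvl - noise_dba               # speech-to-noise ratio at the seat
```

For example, with DL(2,S) = 8 dB, the level at 8 m (one doubling) is 42 dB; a faster spatial decay yields a lower speech-to-noise ratio at distant workstations, which is what improves privacy.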

  17. Speech enhancement based on neural networks improves speech intelligibility in noise for cochlear implant users.

    PubMed

    Goehring, Tobias; Bolner, Federico; Monaghan, Jessica J M; van Dijk, Bas; Zarowski, Andrzej; Bleeck, Stefan

    2017-02-01

    Speech understanding in noisy environments is still one of the major challenges for cochlear implant (CI) users in everyday life. We evaluated a speech enhancement algorithm based on neural networks (NNSE) for improving speech intelligibility in noise for CI users. The algorithm decomposes the noisy speech signal into time-frequency units, extracts a set of auditory-inspired features and feeds them to the neural network to produce an estimation of which frequency channels contain more perceptually important information (higher signal-to-noise ratio, SNR). This estimate is used to attenuate noise-dominated and retain speech-dominated CI channels for electrical stimulation, as in traditional n-of-m CI coding strategies. The proposed algorithm was evaluated by measuring the speech-in-noise performance of 14 CI users using three types of background noise. Two NNSE algorithms were compared: a speaker-dependent algorithm, that was trained on the target speaker used for testing, and a speaker-independent algorithm, that was trained on different speakers. Significant improvements in the intelligibility of speech in stationary and fluctuating noises were found relative to the unprocessed condition for the speaker-dependent algorithm in all noise types and for the speaker-independent algorithm in 2 out of 3 noise types. The NNSE algorithms used noise-specific neural networks that generalized to novel segments of the same noise type and worked over a range of SNRs. The proposed algorithm has the potential to improve the intelligibility of speech in noise for CI users while meeting the requirements of low computational complexity and processing delay for application in CI devices. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
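The channel-selection step described above (retain speech-dominated channels, attenuate noise-dominated ones, as in n-of-m coding) can be sketched as follows. This is a schematic illustration only: the per-frame mask logic and the 20 dB attenuation are assumptions, not the published algorithm's parameters, and the SNR estimates stand in for the neural network's output.

```python
import numpy as np

def select_channels(envelopes, snr_estimates, n, atten_db=20.0):
    """n-of-m style selection: per frame, keep the n channels with the
    highest estimated SNR and attenuate the rest by atten_db.
    envelopes, snr_estimates: arrays of shape (m_channels, n_frames)."""
    m_channels, n_frames = envelopes.shape
    out = envelopes.copy()
    gain = 10.0 ** (-atten_db / 20.0)
    for f in range(n_frames):
        order = np.argsort(snr_estimates[:, f])[::-1]  # best SNR first
        drop = order[n:]            # noise-dominated channels this frame
        out[drop, f] *= gain
    return out

env = np.ones((8, 4))                            # toy: 8 channels, 4 frames
snr = np.tile(np.arange(8.0)[:, None], (1, 4))   # channel 7 has the best SNR
out = select_channels(env, snr, n=4)             # channels 0-3 attenuated
```

The four highest-SNR channels pass unchanged while the rest are scaled by 0.1 (a 20 dB cut), mimicking the attenuate-versus-retain decision applied before electrical stimulation.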

  18. Predictions of Speech Chimaera Intelligibility Using Auditory Nerve Mean-Rate and Spike-Timing Neural Cues.

    PubMed

    Wirtzfeld, Michael R; Ibrahim, Rasha A; Bruce, Ian C

    2017-10-01

    Perceptual studies of speech intelligibility have shown that slow variations of acoustic envelope (ENV) in a small set of frequency bands provides adequate information for good perceptual performance in quiet, whereas acoustic temporal fine-structure (TFS) cues play a supporting role in background noise. However, the implications for neural coding are prone to misinterpretation because the mean-rate neural representation can contain recovered ENV cues from cochlear filtering of TFS. We investigated ENV recovery and spike-time TFS coding using objective measures of simulated mean-rate and spike-timing neural representations of chimaeric speech, in which either the ENV or the TFS is replaced by another signal. We (a) evaluated the levels of mean-rate and spike-timing neural information for two categories of chimaeric speech, one retaining ENV cues and the other TFS; (b) examined the level of recovered ENV from cochlear filtering of TFS speech; (c) examined and quantified the contribution to recovered ENV from spike-timing cues using a lateral inhibition network (LIN); and (d) constructed linear regression models with objective measures of mean-rate and spike-timing neural cues and subjective phoneme perception scores from normal-hearing listeners. The mean-rate neural cues from the original ENV and recovered ENV partially accounted for perceptual score variability, with additional variability explained by the recovered ENV from the LIN-processed TFS speech. The best model predictions of chimaeric speech intelligibility were found when both the mean-rate and spike-timing neural cues were included, providing further evidence that spike-time coding of TFS cues is important for intelligibility when the speech envelope is degraded.
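ENV/TFS chimaeras of the kind studied here are conventionally built with the Hilbert transform: the analytic-signal envelope of one source is imposed on the instantaneous-phase fine structure of another. A minimal single-band sketch (the actual stimuli are constructed per frequency band and then summed; the tone signals below are illustrative):

```python
import numpy as np
from scipy.signal import hilbert

def chimaera(env_source, tfs_source):
    """Single-band chimaera: Hilbert envelope of env_source imposed on the
    temporal fine structure (cosine of instantaneous phase) of tfs_source."""
    env = np.abs(hilbert(env_source))
    tfs = np.cos(np.angle(hilbert(tfs_source)))
    return env * tfs

fs = 16000
t = np.arange(fs) / fs
a = np.sin(2 * np.pi * 5 * t) * np.sin(2 * np.pi * 440 * t)  # 5 Hz AM on 440 Hz
b = np.sin(2 * np.pi * 220 * t)                              # unmodulated 220 Hz
c = chimaera(a, b)   # envelope of a, fine structure of b
```

The result carries a's slow amplitude fluctuations on b's carrier; since |cos| ≤ 1, the chimaera never exceeds the donor envelope, which is the sense in which ENV and TFS are cleanly separated.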

  19. Acoustic richness modulates the neural networks supporting intelligible speech processing.

    PubMed

    Lee, Yune-Sang; Min, Nam Eun; Wingfield, Arthur; Grossman, Murray; Peelle, Jonathan E

    2016-03-01

    The information contained in a sensory signal plays a critical role in determining what neural processes are engaged. Here we used interleaved silent steady-state (ISSS) functional magnetic resonance imaging (fMRI) to explore how human listeners cope with different degrees of acoustic richness during auditory sentence comprehension. Twenty-six healthy young adults underwent scanning while hearing sentences that varied in acoustic richness (high vs. low spectral detail) and syntactic complexity (subject-relative vs. object-relative center-embedded clause structures). We manipulated acoustic richness by presenting the stimuli as unprocessed full-spectrum speech, or noise-vocoded with 24 channels. Importantly, although the vocoded sentences were spectrally impoverished, all sentences were highly intelligible. These manipulations allowed us to test how intelligible speech processing was affected by orthogonal linguistic and acoustic demands. Acoustically rich speech showed stronger activation than acoustically less-detailed speech in a bilateral temporoparietal network with more pronounced activity in the right hemisphere. By contrast, listening to sentences with greater syntactic complexity resulted in increased activation of a left-lateralized network including left posterior lateral temporal cortex, left inferior frontal gyrus, and left dorsolateral prefrontal cortex. Significant interactions between acoustic richness and syntactic complexity occurred in left supramarginal gyrus, right superior temporal gyrus, and right inferior frontal gyrus, indicating that the regions recruited for syntactic challenge differed as a function of acoustic properties of the speech. Our findings suggest that the neural systems involved in speech perception are finely tuned to the type of information available, and that reducing the richness of the acoustic signal dramatically alters the brain's response to spoken language, even when intelligibility is high. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. The perception of sexuality in older adults and its relationship with cognitive functioning.

    PubMed

    Hartmans, Carien; Comijs, Hannie; Jonker, Cees

    2015-03-01

    Investigating whether cognitive functioning is associated with the perception of one's sexuality in old age. Cross-sectional analysis, using observation cycle 2005/2006 of the population-based prospective cohort of the Longitudinal Aging Study Amsterdam. Municipal registries in three Dutch regions. 1,908 older adults (mean [standard deviation] age: 71 [8.87] years; 54% women). Sexuality and intimacy were assessed using four questions. Four cognitive domains were assessed: general cognitive functioning (Mini-Mental State Examination), memory performance (Auditory Verbal Learning Test), processing speed (Coding Task), and fluid intelligence (Raven's Coloured Progressive Matrices). Multinomial regression analysis was used, with sexuality as outcome. The interaction effect between gender and sexuality was also tested. Lower fluid intelligence was associated with perceiving sexuality as unimportant; lower general cognitive functioning was associated with perceiving sexuality as unimportant; and lower immediate memory recall was associated with evaluating sexual life as unpleasant. Associations were also found between lower fluid intelligence, processing speed, and general cognitive functioning, and agreeing with sexuality no longer being important. Lower processing speed, general cognitive functioning, and delayed memory recall were associated with disagreeing with a remaining need for intimacy when getting older. Finally, the association between fluid intelligence and perceiving sexuality as important, and the association between immediate memory recall score and evaluating sexual life as pleasant, was only significant in women. The association between lower general cognitive functioning and perceiving sexuality as unimportant seemed stronger in women compared with men. Higher cognitive functioning was associated with the way in which older people perceive their current sexuality. Copyright © 2015 American Association for Geriatric Psychiatry. 
Published by Elsevier Inc. All rights reserved.

  1. Multi-time resolution analysis of speech: evidence from psychophysics

    PubMed Central

    Chait, Maria; Greenberg, Steven; Arai, Takayuki; Simon, Jonathan Z.; Poeppel, David

    2015-01-01

    How speech signals are analyzed and represented remains a foundational challenge both for cognitive science and neuroscience. A growing body of research, employing various behavioral and neurobiological experimental techniques, now points to the perceptual relevance of both phoneme-sized (10–40 Hz modulation frequency) and syllable-sized (2–10 Hz modulation frequency) units in speech processing. However, it is not clear how information associated with such different time scales interacts in a manner relevant for speech perception. We report behavioral experiments on speech intelligibility employing a stimulus that allows us to investigate how distinct temporal modulations in speech are treated separately and whether they are combined. We created sentences in which the slow (~4 Hz; S_low) and rapid (~33 Hz; S_high) modulations—corresponding to ~250 and ~30 ms, the average duration of syllables and certain phonetic properties, respectively—were selectively extracted. Although S_low and S_high have low intelligibility when presented separately, dichotic presentation of S_high with S_low results in supra-additive performance, suggesting a synergistic relationship between low- and high-modulation frequencies. A second experiment desynchronized presentation of the S_low and S_high signals. Desynchronizing signals relative to one another had no impact on intelligibility when delays were less than ~45 ms. Longer delays resulted in a steep intelligibility decline, providing further evidence of integration or binding of information within restricted temporal windows. Our data suggest that human speech perception uses multi-time resolution processing. Signals are concurrently analyzed on at least two separate time scales, the intermediate representations of these analyses are integrated, and the resulting bound percept has significant consequences for speech intelligibility—a view compatible with recent insights from neuroscience implicating multi-timescale auditory processing. 
PMID:26136650

  2. Contribution of Binaural Masking Release to Improved Speech Intelligibility for different Masker types.

    PubMed

    Sutojo, Sarinah; van de Par, Steven; Schoenmaker, Esther

    2018-06-01

    In situations with competing talkers or in the presence of masking noise, speech intelligibility can be improved by spatially separating the target speaker from the interferers. This advantage is generally referred to as spatial release from masking (SRM), and different mechanisms have been suggested to explain it. One proposed mechanism to benefit from spatial cues is binaural masking release, which is purely stimulus driven. According to this mechanism, the spatial benefit results from differences in the binaural cues of target and masker, which need to appear simultaneously in time and frequency to improve signal detection. In an alternative proposed mechanism, the differences in the interaural cues improve the segregation of auditory streams, a process that involves top-down processing rather than being purely stimulus driven. Other than the cues that produce binaural masking release, the interaural cue differences between target and interferer required to improve stream segregation do not have to appear simultaneously in time and frequency. This study is concerned with the contribution of binaural masking release to SRM for three masker types that differ with respect to the amount of energetic masking they exert. Speech intelligibility was measured, employing a stimulus manipulation that inhibits binaural masking release, and analyzed with a metric to account for the number of better-ear glimpses. Results indicate that the contribution of the stimulus-driven binaural masking release plays a minor role, while binaural stream segregation and the availability of glimpses in the better ear had a stronger influence on improving speech intelligibility. This article is protected by copyright. All rights reserved.

  3. The maximum intelligible range of the human voice

    NASA Astrophysics Data System (ADS)

    Boren, Braxton

    This dissertation examines the acoustics of the spoken voice at high levels and the maximum number of people who could hear such a voice unamplified in the open air. In particular, it revisits an early auditory experiment in which Benjamin Franklin sought to determine the maximum intelligible crowd for the Anglican preacher George Whitefield in the eighteenth century. Using Franklin's description of the experiment and a noise source on Front Street, the geometry and diffraction effects of that noise source are examined to pinpoint more precisely Franklin's position when Whitefield's voice ceased to be intelligible. Based on historical maps, drawings, and prints, the geometry and materials of Market Street are reconstructed as a computer model, which is then used for acoustic cone tracing. Based on minimal values of the Speech Transmission Index (STI) at Franklin's position, Whitefield's on-axis Sound Pressure Level (SPL) at 1 m is determined, leading to estimates centering around 90 dBA. Recordings of trained actors and singers are carried out to determine their maximum time-averaged SPL at 1 m. These suggest that the greatest average SPL achievable by the human voice is 90-91 dBA, similar to the median estimates for Whitefield's voice. The sites of Whitefield's largest crowds are acoustically modeled based on historical evidence and maps. Together with Whitefield's SPL, the minimal STI value, and the crowd's background noise, this allows a prediction of the minimally intelligible area for each site. These yield maximum crowd estimates of 50,000 under ideal conditions, while crowds of 20,000 to 30,000 seem more reasonable when the crowd was reasonably quiet and Whitefield's voice was near 90 dBA.
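    The crowd estimates above follow from elementary free-field acoustics. As a rough sketch (not Boren's cone-tracing model): assume spherical spreading, so SPL falls by 20·log10(r) dB re 1 m, and take hypothetical values for the background noise, required SNR margin, and crowd density:

```python
import math

def audible_radius(source_dba_at_1m, background_dba, required_snr_db=0.0):
    """Distance (m) at which the speech level drops to the background
    noise plus a required signal-to-noise margin, assuming free-field
    spherical spreading (SPL falls by 20*log10(r) dB re 1 m)."""
    headroom = source_dba_at_1m - background_dba - required_snr_db
    return 10 ** (headroom / 20.0)

def max_crowd(source_dba_at_1m, background_dba, density_per_m2=2.0):
    """Crude upper bound on the intelligible crowd: people packed at a
    given (hypothetical) density over a semicircle facing the speaker."""
    r = audible_radius(source_dba_at_1m, background_dba)
    return density_per_m2 * math.pi * r * r / 2.0

# A 90 dBA voice over a 50 dBA quiet crowd reaches 100 m; a semicircle
# of that radius holds ~31,000 people at 2 people/m^2 -- the same order
# of magnitude as the dissertation's estimates.
print(round(audible_radius(90, 50)))  # 100
print(round(max_crowd(90, 50)))
```

    The numbers here are illustrative; the dissertation derives Whitefield's SPL from STI thresholds and site-specific models rather than a bare inverse-square law.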

  4. Multi-modal assessment of on-road demand of voice and manual phone calling and voice navigation entry across two embedded vehicle systems.

    PubMed

    Mehler, Bruce; Kidd, David; Reimer, Bryan; Reagan, Ian; Dobres, Jonathan; McCartt, Anne

    2016-03-01

    One purpose of integrating voice interfaces into embedded vehicle systems is to reduce drivers' visual and manual distractions with 'infotainment' technologies. However, there is scant research on actual benefits in production vehicles or how different interface designs affect attentional demands. Driving performance, visual engagement, and indices of workload (heart rate, skin conductance, subjective ratings) were assessed in 80 drivers randomly assigned to drive a 2013 Chevrolet Equinox or Volvo XC60. The Chevrolet MyLink system allowed completing tasks with one voice command, while the Volvo Sensus required multiple commands to navigate the menu structure. When calling a phone contact, both voice systems reduced visual demand relative to the visual-manual interfaces, with reductions for drivers in the Equinox being greater. The Equinox 'one-shot' voice command showed advantages during contact calling but had significantly higher error rates than Sensus during destination address entry. For both secondary tasks, neither voice interface entirely eliminated visual demand. Practitioner Summary: The findings reinforce the observation that most, if not all, automotive auditory-vocal interfaces are multi-modal interfaces in which the full range of potential demands (auditory, vocal, visual, manipulative, cognitive, tactile, etc.) need to be considered in developing optimal implementations and evaluating drivers' interaction with the systems. Social Media: In-vehicle voice-interfaces can reduce visual demand but do not eliminate it and all types of demand need to be taken into account in a comprehensive evaluation.

  5. Potential Mechanisms Underlying Intercortical Signal Regulation via Cholinergic Neuromodulators

    PubMed Central

    Whittington, Miles A.; Kopell, Nancy J.

    2015-01-01

    The dynamical behavior of the cortex is extremely complex, with different areas and even different layers of a cortical column displaying different temporal patterns. A major open question is how the signals from different layers and different brain regions are coordinated in a flexible manner to support function. Here, we considered interactions between primary auditory cortex and adjacent association cortex. Using a biophysically based model, we show how top-down signals in the beta and gamma regimes can interact with a bottom-up gamma rhythm to provide regulation of signals between the cortical areas and among layers. The flow of signals depends on cholinergic modulation: with only glutamatergic drive, we show that top-down gamma rhythms may block sensory signals. In the presence of cholinergic drive, top-down beta rhythms can lift this blockade and allow signals to flow reciprocally between primary sensory and parietal cortex. SIGNIFICANCE STATEMENT Flexible coordination of multiple cortical areas is critical for complex cognitive functions, but how this is accomplished is not understood. Using computational models, we studied the interactions between primary auditory cortex (A1) and association cortex (Par2). Our model is capable of replicating interaction patterns observed in vitro and the simulations predict that the coordination between top-down gamma and beta rhythms is central to the gating process regulating bottom-up sensory signaling projected from A1 to Par2 and that cholinergic modulation allows this coordination to occur. PMID:26558772

  6. Number line estimation and complex mental calculation: Is there a shared cognitive process driving the two tasks?

    PubMed

    Montefinese, Maria; Semenza, Carlo

    2018-05-17

    It is widely accepted that different number-related tasks, including solving simple addition and subtraction, may induce attentional shifts on the so-called mental number line, which represents larger numbers on the right and smaller numbers on the left. Recently, it has been shown that different number-related tasks also employ spatial attention shifts along with general cognitive processes. Here we investigated for the first time whether number line estimation and complex mental arithmetic recruit a common mechanism in healthy adults. Participants' performance in two-digit mental additions and subtractions using visual stimuli was compared with their performance in a mental bisection task using auditory numerical intervals. Results showed significant correlations between participants' performance in number line bisection and that in two-digit mental arithmetic operations, especially in additions, providing a first proof of a shared cognitive mechanism (or multiple shared cognitive mechanisms) between auditory number bisection and complex mental calculation.

  7. A 20-channel magnetoencephalography system based on optically pumped magnetometers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borna, Amir; Carter, Tony R.; Goldberg, Josh D.

    In this paper, we describe a multichannel magnetoencephalography (MEG) system that uses optically pumped magnetometers (OPMs) to sense the magnetic fields of the human brain. The system consists of an array of 20 OPM channels conforming to the human subject's head, a person-sized magnetic shield containing the array and the human subject, a laser system to drive the OPM array, and various control and data acquisition systems. We conducted two MEG experiments: auditory evoked magnetic field and somatosensory evoked magnetic field, on three healthy male subjects, using both our OPM array and a 306-channel Elekta-Neuromag superconducting quantum interference device (SQUID) MEG system. The described OPM array measures the tangential components of the magnetic field, as opposed to the radial component measured by most SQUID-based MEG systems. Finally, we compare the results of the OPM- and SQUID-based MEG systems on the auditory and somatosensory data recorded in the same individuals on both systems.

  8. A 20-channel magnetoencephalography system based on optically pumped magnetometers

    DOE PAGES

    Borna, Amir; Carter, Tony R.; Goldberg, Josh D.; ...

    2017-10-16

    In this paper, we describe a multichannel magnetoencephalography (MEG) system that uses optically pumped magnetometers (OPMs) to sense the magnetic fields of the human brain. The system consists of an array of 20 OPM channels conforming to the human subject's head, a person-sized magnetic shield containing the array and the human subject, a laser system to drive the OPM array, and various control and data acquisition systems. We conducted two MEG experiments: auditory evoked magnetic field and somatosensory evoked magnetic field, on three healthy male subjects, using both our OPM array and a 306-channel Elekta-Neuromag superconducting quantum interference device (SQUID) MEG system. The described OPM array measures the tangential components of the magnetic field, as opposed to the radial component measured by most SQUID-based MEG systems. Finally, we compare the results of the OPM- and SQUID-based MEG systems on the auditory and somatosensory data recorded in the same individuals on both systems.

  9. Auditory cortex controls sound-driven innate defense behaviour through corticofugal projections to inferior colliculus

    PubMed Central

    Xiong, Xiaorui R.; Liang, Feixue; Zingg, Brian; Ji, Xu-ying; Ibrahim, Leena A.; Tao, Huizhong W.; Zhang, Li I.

    2015-01-01

    Defense against environmental threats is essential for animal survival. However, the neural circuits responsible for transforming unconditioned sensory stimuli and generating defensive behaviours remain largely unclear. Here, we show that corticofugal neurons in the auditory cortex (ACx) targeting the inferior colliculus (IC) mediate an innate, sound-induced flight behaviour. Optogenetic activation of these neurons, or their projection terminals in the IC, is sufficient for initiating flight responses, while the inhibition of these projections reduces sound-induced flight responses. Corticocollicular axons monosynaptically innervate neurons in the cortex of the IC (ICx), and optogenetic activation of the projections from the ICx to the dorsal periaqueductal gray is sufficient for provoking flight behaviours. Our results suggest that ACx can both amplify innate acoustic-motor responses and directly drive flight behaviours in the absence of sound input through corticocollicular projections to ICx. Such corticofugal control may be a general feature of innate defense circuits across sensory modalities. PMID:26068082

  10. Design of a robotic vehicle with self-contained intelligent wheels

    NASA Astrophysics Data System (ADS)

    Poulson, Eric A.; Jacob, John S.; Gunderson, Robert W.; Abbott, Ben A.

    1998-08-01

    The Center for Intelligent Systems has developed a small robotic vehicle named the Advanced Rover Chassis 3 (ARC 3) with six identical intelligent wheel units attached to a payload via a passive linkage suspension system. All wheels are steerable, so the ARC 3 can move in any direction while rotating at any rate allowed by the terrain and motors. Each intelligent wheel unit contains a drive motor, steering motor, batteries, and computer. All wheel units are identical, so manufacturing, programming, and spare replacement are greatly simplified. The intelligent wheel concept would allow the number and placement of wheels on the vehicle to be changed with no changes to the control system, except to list the position of all the wheels relative to the vehicle center. The task of controlling the ARC 3 is distributed between one master computer and the wheel computers. Tasks such as controlling the steering motors and calculating the speed of each wheel relative to the vehicle speed in a corner are dependent on the location of a wheel relative to the vehicle center and are processed by the wheel computers. Conflicts between the wheels are eliminated by computing the vehicle velocity control in the master computer. Various approaches to this distributed control problem, and various low-level control methods, have been explored.
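    The per-wheel speed and steering computation described above is standard rigid-body kinematics; a minimal sketch (function name and signature are illustrative, not the ARC 3 code):

```python
import math

def wheel_command(vx, vy, omega, wheel_x, wheel_y):
    """Speed and steering angle for one steerable wheel, given the
    vehicle's planar velocity (vx, vy, m/s), yaw rate omega (rad/s),
    and the wheel's position relative to the vehicle center (m).
    Rigid-body kinematics: v_wheel = v_vehicle + omega x r."""
    wx = vx - omega * wheel_y   # planar cross product omega x r
    wy = vy + omega * wheel_x
    speed = math.hypot(wx, wy)
    angle = math.atan2(wy, wx)  # steering angle in the vehicle frame
    return speed, angle

# A wheel 1 m ahead of center during a pure spin of 1 rad/s must roll
# at 1 m/s, steered 90 degrees in the vehicle frame.
speed, angle = wheel_command(0.0, 0.0, 1.0, 1.0, 0.0)
```

    Because each wheel computer needs only its own (wheel_x, wheel_y) offset, adding or relocating wheels changes nothing but that listed position, matching the abstract's claim.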

  11. Designing feedback to mitigate teen distracted driving: A social norms approach.

    PubMed

    Merrikhpour, Maryam; Donmez, Birsen

    2017-07-01

    The purpose of this research is to investigate teens' perceived social norms and whether providing normative information can reduce distracted driving behaviors among them. Parents are among the most important social referents for teens; they have significant influences on teens' driving behaviors, including distracted driving, which significantly contributes to teens' crash risks. Social norms interventions have been successfully applied in various domains including driving; however, this approach is yet to be explored for mitigating driver distraction among teens. Forty teens completed a driving simulator experiment while performing a self-paced visual-manual secondary task in four between-subject conditions: a) social norms feedback that provided a report at the end of each drive on teens' distracted driving behavior, comparing their distraction engagement to their parent's, b) post-drive feedback that provided just the report on teens' distracted driving behavior without information on their parents, c) real-time feedback in the form of auditory warnings based on eyes-off-road time, and d) no feedback as control. Questionnaires were administered to collect data on these teens' and their parents' self-reported engagement in driver distractions and the associated social norms. Social norms and real-time feedback conditions resulted in significantly smaller average off-road glance duration, rate of long (>2s) off-road glances, and standard deviation of lane position compared to no feedback. Further, social norms feedback decreased brake response time and percentage of time not looking at the road compared to no feedback. No major effect was observed for post-drive feedback. Questionnaire results suggest that teens appeared to overestimate parental norms, but no effect of feedback was found on their perceptions. Feedback systems that leverage social norms can help mitigate driver distraction among teens. 
Overall, both social norms and real-time feedback induced positive driving behaviors, with social norms feedback outperforming real-time feedback. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Response, Emergency Staging, Communications, Uniform Management, and Evacuation (R.E.S.C.U.M.E.) : report on functional and performance requirements, and high-level data and communication needs.

    DOT National Transportation Integrated Search

    1995-06-01

    Intelligent Vehicle Initiative (IVI). Abstract: The goal of the TravTek camera car study was to furnish a detailed evaluation of driving and navigation performance, system usability, and safety for the TravTek system. To achieve this goal, an instrume...

  13. SLA: A Time for New Initiatives-- CI Division Formed, Fundraising Goal Set, Certification Coming

    ERIC Educational Resources Information Center

    DiMattia, Susan; Blumenstein, Lynn

    2004-01-01

    Several initiatives were announced at the Special Libraries Association (d.b.a. SLA) conference, June 4-9, at the Gaylord Opryland Hotel in Nashville. A petition drive resulted in formation of a Competitive Intelligence (CI) Division from the former CI Section of the Leadership and Management Division. The board encouraged the member-driven…

  14. The Relationships of Working Memory, Secondary Memory, and General Fluid Intelligence: Working Memory Is Special

    ERIC Educational Resources Information Center

    Shelton, Jill Talley; Elliott, Emily M.; Matthews, Russell A.; Hill, B. D.; Gouvier, Wm. Drew

    2010-01-01

    Recent efforts have been made to elucidate the commonly observed link between working memory and reasoning ability. The results have been inconsistent, with some work suggesting that the emphasis placed on retrieval from secondary memory by working memory tests is the driving force behind this association (Mogle, Lovett, Stawski, & Sliwinski,…

  15. Auditory processing assessment suggests that Wistar audiogenic rat neural networks are prone to entrainment.

    PubMed

    Pinto, Hyorrana Priscila Pereira; Carvalho, Vinícius Rezende; Medeiros, Daniel de Castro; Almeida, Ana Flávia Santos; Mendes, Eduardo Mazoni Andrade Marçal; Moraes, Márcio Flávio Dutra

    2017-04-07

    Epilepsy is a neurological disease related to the occurrence of pathological oscillatory activity, but the basic physiological mechanisms of seizure remain to be understood. Our working hypothesis is that specific sensory processing circuits may present abnormally enhanced predisposition for coordinated firing in the dysfunctional brain. Such facilitated entrainment could share a similar mechanistic process as those expediting the propagation of epileptiform activity throughout the brain. To test this hypothesis, we employed the Wistar audiogenic rat (WAR) reflex animal model, which is characterized by having seizures triggered reliably by sound. Sound stimulation was modulated in amplitude to produce an auditory steady-state-evoked response (ASSR; 53.71 Hz) that covers bottom-up and top-down processing in a time scale compatible with the dynamics of the epileptic condition. Data from inferior colliculus (IC) c-Fos immunohistochemistry and electrographic recordings were gathered for both the control Wistar group and WARs. Under 85-dB SPL auditory stimulation, compared to controls, the WARs presented a higher number of Fos-positive cells (at IC and auditory temporal lobe) and a significant increase in ASSR-normalized energy. Similarly, the 110-dB SPL sound stimulation also statistically increased ASSR-normalized energy during ictal and post-ictal periods. However, at the transition from the physiological to pathological state (pre-ictal period), the WAR ASSR analysis demonstrated a decline in normalized energy and a significant increase in circular variance values compared to controls. These results indicate an enhanced coordinated firing state for WARs, except immediately before seizure onset (suggesting pre-ictal neuronal desynchronization with the external sensory drive). These results suggest a competing myriad of interferences among different networks that after seizure onset converge to a massive oscillatory circuit. Copyright © 2017 IBRO. 
Published by Elsevier Ltd. All rights reserved.
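    The circular variance reported for the pre-ictal ASSR is a standard phase-locking statistic: one minus the length of the mean resultant vector of the trial phases (0 for perfectly phase-locked responses, 1 for uniformly scattered phases). A minimal sketch:

```python
import cmath
import math

def circular_variance(phases_rad):
    """Circular variance of phase angles: 1 - |mean resultant vector|.
    0 means the trials are perfectly phase-locked to the stimulus;
    1 means the phases are uniformly scattered (no entrainment)."""
    n = len(phases_rad)
    resultant = sum(cmath.exp(1j * p) for p in phases_rad) / n
    return 1.0 - abs(resultant)

print(round(circular_variance([0.3, 0.3, 0.3]), 6))                    # 0.0
print(round(circular_variance([0, math.pi/2, math.pi, 3*math.pi/2]), 6))  # 1.0
```

    A pre-ictal rise in this value, as in the WAR data, means the evoked phases scatter relative to the stimulus even while overall activity is building toward seizure.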

  16. Distribution of dilemma zone after intelligent transportation system established

    NASA Astrophysics Data System (ADS)

    Deng, Yuanchang; Yang, Huiqin; Wu, Linying

    2017-03-01

    Dilemma zone refers to an area where vehicles can neither clear the intersection during the yellow interval nor stop safely before the stop line. The purpose of this paper is to analyze the distribution of two types of dilemma zone after an intelligent transportation system (ITS) was established at Outer Ring Road signalized intersections in Guangzhou Higher Education Mega Center. A drone aircraft was used to collect field data. When calculating the distribution of the type II dilemma zone, we considered drivers' aggressiveness, classified by driving speed and by the type I dilemma zone. We also compared the distributions of both types of dilemma zone before and after the ITS was established and analyzed the changes it brought about.
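    The type I dilemma zone can be illustrated with the classic kinematic formulation: a zone exists wherever the minimum stopping distance exceeds the maximum distance from which the intersection can still be cleared during the yellow. The parameter values below are hypothetical defaults, not the paper's field data:

```python
def dilemma_zone(v, yellow, reaction_t=1.0, decel=3.0,
                 intersection_w=20.0, vehicle_l=5.0):
    """Type I dilemma zone, classic kinematic formulation.
    x_stop:  minimum distance (m) needed to stop comfortably,
             v*t_reaction + v^2 / (2*decel).
    x_clear: maximum distance (m) from which the intersection
             (width + vehicle length) can be cleared during the
             yellow interval at constant speed v (m/s).
    A dilemma zone exists wherever x_stop > x_clear."""
    x_stop = v * reaction_t + v * v / (2.0 * decel)
    x_clear = v * yellow - (intersection_w + vehicle_l)
    return x_stop, x_clear

# At 60 km/h (16.7 m/s) with a 3 s yellow, the zone spans roughly
# 25-63 m upstream of the stop line under these assumed parameters.
x_stop, x_clear = dilemma_zone(v=16.7, yellow=3.0)
if x_stop > x_clear:
    print(f"dilemma zone: {x_clear:.1f}-{x_stop:.1f} m from stop line")
```

    A more aggressive driver profile (shorter reaction time, harder braking), as classified in the paper, shrinks x_stop and hence the zone, which is why the type II distribution was conditioned on aggressiveness.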

  17. Intrusion-Tolerant Location Information Services in Intelligent Vehicular Networks

    NASA Astrophysics Data System (ADS)

    Yan, Gongjun; Yang, Weiming; Shaner, Earl F.; Rawat, Danda B.

    Intelligent Vehicular Networks, known as Vehicle-to-Vehicle and Vehicle-to-Roadside wireless communications (also called Vehicular Ad hoc Networks), are revolutionizing our daily driving with better safety and more infotainment. Most, if not all, applications will depend on accurate location information. Thus, it is important to provide intrusion-tolerant location information services. In this paper, we describe an adaptive algorithm that detects and filters false location information injected by intruders. Given a noisy environment of mobile vehicles, the algorithm estimates the high-resolution location of a vehicle by refining low-resolution location input. We also investigate simulation results and evaluate the quality of the intrusion-tolerant location service.
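    The abstract does not specify the authors' adaptive algorithm, so the following is only one illustrative approach to rejecting injected location reports: a per-axis median/median-absolute-deviation outlier test followed by averaging of the surviving reports (an assumption for illustration, not the paper's method):

```python
import statistics

def filter_reports(positions, k=3.0):
    """Estimate a vehicle's position from multiple reports, rejecting
    injected outliers per axis with a median / median-absolute-deviation
    (MAD) test, then averaging what remains.
    positions: list of (x, y) reports for one vehicle."""
    def keep_mask(vals):
        med = statistics.median(vals)
        mad = statistics.median(abs(v - med) for v in vals) or 1e-9
        return [abs(v - med) <= k * mad for v in vals]

    xs = [p[0] for p in positions]
    ys = [p[1] for p in positions]
    keep = [a and b for a, b in zip(keep_mask(xs), keep_mask(ys))]
    kept = [p for p, ok in zip(positions, keep) if ok]
    if not kept:
        raise ValueError("all reports rejected")
    n = len(kept)
    return (sum(p[0] for p in kept) / n, sum(p[1] for p in kept) / n)

# Three honest reports cluster near (10, 5); the fourth is injected.
reports = [(10.0, 5.0), (10.2, 5.1), (9.9, 4.9), (250.0, -80.0)]
print(filter_reports(reports))
```

    The MAD-based threshold adapts to the noise level of the honest reports, which loosely mirrors the "refining low-resolution input in a noisy environment" idea without claiming fidelity to the published algorithm.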

  18. Universality of accelerating change

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Shlesinger, Michael F.

    2018-03-01

    On large time scales the progress of human technology follows an exponential growth trend that is termed accelerating change. The exponential growth trend is commonly considered to be the amalgamated effect of consecutive technology revolutions - where the progress brought by each technology revolution follows an S-curve, and where the aging of each technology revolution drives humanity to push for the next technology revolution. Thus, as a collective, mankind is the 'intelligent designer' of accelerating change. In this paper we establish that the exponential growth trend - and only this trend - emerges universally, on large time scales, from systems that combine together two elements: randomness and amalgamation. Hence, the universal generation of accelerating change can be attained by systems with no 'intelligent designer'.
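    The amalgamation mechanism can be sketched numerically: summing consecutive logistic S-curves whose contributions grow geometrically yields an envelope that settles onto exponential growth. All parameters here are invented for illustration:

```python
import math

def s_curve(t, onset, width, amplitude):
    """Logistic S-curve representing one technology revolution."""
    return amplitude / (1.0 + math.exp(-(t - onset) / width))

def amalgamated_progress(t, n_revolutions=8, spacing=10.0,
                         width=2.0, growth=2.0):
    """Sum of consecutive S-curves whose amplitudes grow geometrically:
    each revolution contributes 'growth' times more than the last.
    On large time scales the sum's envelope is exponential."""
    return sum(s_curve(t, onset=k * spacing, width=width,
                       amplitude=growth ** k)
               for k in range(n_revolutions))

# Sampled once per revolution spacing, total progress multiplies by
# roughly the 'growth' factor each interval (here ~2x per 10 units).
levels = [amalgamated_progress(t) for t in (20, 30, 40, 50)]
ratios = [b / a for a, b in zip(levels, levels[1:])]
```

    The stair-step of individual S-curves washes out in the sum, leaving only the geometric trend, which is the amalgamation effect the abstract describes (here deterministic for clarity; the paper's result concerns the randomized case).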

  19. Age-group differences in speech identification despite matched audiometrically normal hearing: contributions from auditory temporal processing and cognition

    PubMed Central

    Füllgrabe, Christian; Moore, Brian C. J.; Stone, Michael A.

    2015-01-01

    Hearing loss with increasing age adversely affects the ability to understand speech, an effect that results partly from reduced audibility. The aims of this study were to establish whether aging reduces speech intelligibility for listeners with normal audiograms, and, if so, to assess the relative contributions of auditory temporal and cognitive processing. Twenty-one older normal-hearing (ONH; 60–79 years) participants with bilateral audiometric thresholds ≤ 20 dB HL at 0.125–6 kHz were matched to nine young (YNH; 18–27 years) participants in terms of mean audiograms, years of education, and performance IQ. Measures included: (1) identification of consonants in quiet and in noise that was unmodulated or modulated at 5 or 80 Hz; (2) identification of sentences in quiet and in co-located or spatially separated two-talker babble; (3) detection of modulation of the temporal envelope (TE) at frequencies 5–180 Hz; (4) monaural and binaural sensitivity to temporal fine structure (TFS); (5) various cognitive tests. Speech identification was worse for ONH than YNH participants in all types of background. This deficit was not reflected in self-ratings of hearing ability. Modulation masking release (the improvement in speech identification obtained by amplitude modulating a noise background) and spatial masking release (the benefit obtained from spatially separating masker and target speech) were not affected by age. Sensitivity to TE and TFS was lower for ONH than YNH participants, and was correlated positively with speech-in-noise (SiN) identification. Many cognitive abilities were lower for ONH than YNH participants, and generally were correlated positively with SiN identification scores. The best predictors of the intelligibility of SiN were composite measures of cognition and TFS sensitivity. 
These results suggest that declines in speech perception in older persons are partly caused by cognitive and perceptual changes separate from age-related changes in audiometric sensitivity. PMID:25628563

  20. Effects of Non-Driving Related Task Modalities on Takeover Performance in Highly Automated Driving.

    PubMed

    Wandtner, Bernhard; Schömig, Nadja; Schmidt, Gerald

    2018-04-01

    Aim of the study was to evaluate the impact of different non-driving related tasks (NDR tasks) on takeover performance in highly automated driving. During highly automated driving, it is allowed to engage in NDR tasks temporarily. However, drivers must be able to take over control when reaching a system limit. There is evidence that the type of NDR task has an impact on takeover performance, but little is known about the specific task characteristics that account for performance decrements. Thirty participants drove in a simulator using a highly automated driving system. Each participant faced five critical takeover situations. Based on assumptions of Wickens's multiple resource theory, stimulus and response modalities of a prototypical NDR task were systematically manipulated. Additionally, in one experimental group, the task was locked out simultaneously with the takeover request. Task modalities had significant effects on several measures of takeover performance. A visual-manual texting task degraded performance the most, particularly when performed handheld. In contrast, takeover performance with an auditory-vocal task was comparable to a baseline without any task. Task lockout was associated with faster hands-on-wheel times but not altered brake response times. Results showed that NDR task modalities are relevant factors for takeover performance. An NDR task lockout was highly accepted by the drivers and showed moderate benefits for the first takeover reaction. Knowledge about the impact of NDR task characteristics is an enabler for adaptive takeover concepts. In addition, it might help regulators to make decisions on allowed NDR tasks during automated driving.

  1. Functional modeling of the human auditory brainstem response to broadband stimulation

    PubMed Central

    Verhulst, Sarah; Bharadwaj, Hari M.; Mehraei, Golbarg; Shera, Christopher A.; Shinn-Cunningham, Barbara G.

    2015-01-01

    Population responses such as the auditory brainstem response (ABR) are commonly used for hearing screening, but the relationship between single-unit physiology and scalp-recorded population responses are not well understood. Computational models that integrate physiologically realistic models of single-unit auditory-nerve (AN), cochlear nucleus (CN) and inferior colliculus (IC) cells with models of broadband peripheral excitation can be used to simulate ABRs and thereby link detailed knowledge of animal physiology to human applications. Existing functional ABR models fail to capture the empirically observed 1.2–2 ms ABR wave-V latency-vs-intensity decrease that is thought to arise from level-dependent changes in cochlear excitation and firing synchrony across different tonotopic sections. This paper proposes an approach where level-dependent cochlear excitation patterns, which reflect human cochlear filter tuning parameters, drive AN fibers to yield realistic level-dependent properties of the ABR wave-V. The number of free model parameters is minimal, producing a model in which various sources of hearing-impairment can easily be simulated on an individualized and frequency-dependent basis. The model fits latency-vs-intensity functions observed in human ABRs and otoacoustic emissions while maintaining rate-level and threshold characteristics of single-unit AN fibers. The simulations help to reveal which tonotopic regions dominate ABR waveform peaks at different stimulus intensities. PMID:26428802

  2. [Effect of fatigue on the fitness to drive].

    PubMed

    Makowiec-Dabrowska, Teresa; Bortkiewicz, Alicja; Siedlecka, Jadwiga; Gadzicka, Elzbieta

    2011-01-01

    The 1995 U.S. Department of Transportation files state that driver fatigue has been a major problem among professional road vehicle drivers, and that the participation in public road traffic of drivers affected by fatigue represents a serious threat to public safety. Therefore, studies on the causes and consequences of fatigue in drivers are of significant practical value. The authors discuss definitions of fatigue and its classifications by the location of the functional changes (physical and mental fatigue; general and local, i.e., muscular, ocular, or auditory) and by intensity and duration (acute, sub-acute, and chronic fatigue, and weariness). Particular attention has been paid to the factors contributing to fatigue in drivers. These may be classified into two groups: 1. sleep-related (SR), i.e. cumulative sleep deficit, long wake time, and time of the day; 2. task-related (TR), i.e. factors related to vehicle driving and working (driving) time. Studies on the effect of fatigue on driving performance (longer reaction time, poorer vigilance, slower information processing, impaired recent memory) have been analyzed. The major effect of driver fatigue is that the driver becomes gradually diverted from the road and road traffic, with the resultant poorer driving performance. Thus, the effects of fatigue in a driver are comparable to those of alcohol intake. This paper also discusses the methods used to counteract and prevent fatigue.

  3. Evaluation of Speech Intelligibility and Sound Localization Abilities with Hearing Aids Using Binaural Wireless Technology

    PubMed Central

    Ibrahim, Iman; Parsa, Vijay; Macpherson, Ewan; Cheesman, Margaret

    2012-01-01

    Wireless synchronization of the digital signal processing (DSP) features between two hearing aids in a bilateral hearing aid fitting is a fairly new technology. This technology is expected to preserve the differences in time and intensity between the two ears by co-ordinating the bilateral DSP features such as multichannel compression, noise reduction, and adaptive directionality. The purpose of this study was to evaluate the benefits of wireless communication as implemented in two commercially available hearing aids. More specifically, this study measured speech intelligibility and sound localization abilities of normal hearing and hearing impaired listeners using bilateral hearing aids with wireless synchronization of multichannel Wide Dynamic Range Compression (WDRC). Twenty subjects participated; 8 had normal hearing and 12 had bilaterally symmetrical sensorineural hearing loss. Each individual completed the Hearing in Noise Test (HINT) and a sound localization test with two types of stimuli. No specific benefit from wireless WDRC synchronization was observed for the HINT; however, hearing impaired listeners had better localization with the wireless synchronization. Binaural wireless technology in hearing aids may improve localization abilities although the possible effect appears to be small at the initial fitting. With adaptation, the hearing aids with synchronized signal processing may lead to an improvement in localization and speech intelligibility. Further research is required to demonstrate the effect of adaptation to the hearing aids with synchronized signal processing on different aspects of auditory performance. PMID:26557339

  4. Acoustic assessment of speech privacy curtains in two nursing units

    PubMed Central

    Pope, Diana S.; Miller-Klein, Erik T.

    2016-01-01

    Hospitals have complex soundscapes that create challenges to patient care. Extraneous noise and high reverberation rates impair speech intelligibility, which leads to raised voices. In an unintended spiral, the increasing noise may result in diminished speech privacy, as people speak loudly to be heard over the din. The products available to improve hospital soundscapes include construction materials that absorb sound (acoustic ceiling tiles, carpet, wall insulation) and reduce reverberation rates. Enhanced privacy curtains are now available and offer potential for a relatively simple way to improve speech privacy and speech intelligibility by absorbing sound at the hospital patient's bedside. Acoustic assessments were performed over 2 days on two nursing units with a similar design in the same hospital. One unit was built with the 1970s’ standard hospital construction and the other was newly refurbished (2013) with sound-absorbing features. In addition, we determined the effect of an enhanced privacy curtain versus standard privacy curtains using acoustic measures of speech privacy and speech intelligibility indexes. Privacy curtains provided auditory protection for the patients. In general, that protection was increased by the use of enhanced privacy curtains. On average, the enhanced curtain improved sound absorption from 20% to 30%; however, there was considerable variability, depending on the configuration of the rooms tested. Enhanced privacy curtains provide measurable improvement to the acoustics of patient rooms but cannot overcome larger acoustic design issues. To shorten reverberation time, additional absorption and more compact, fragmented nursing unit floor plate shapes should be considered. PMID:26780959

  5. Longitudinal follow-up to evaluate speech disorders in early-treated patients with infantile-onset Pompe disease.

    PubMed

    Zeng, Yin-Ting; Hwu, Wuh-Liang; Torng, Pao-Chuan; Lee, Ni-Chung; Shieh, Jeng-Yi; Lu, Lu; Chien, Yin-Hsiu

    2017-05-01

    Patients with infantile-onset Pompe disease (IOPD) can be treated by recombinant human acid alpha glucosidase (rhGAA) replacement beginning at birth with excellent survival rates, but they still commonly present with speech disorders. This study investigated the progress of speech disorders in these early-treated patients and ascertained the relationship with treatments. Speech disorders, including hypernasal resonance, articulation disorders, and speech intelligibility, were scored by speech-language pathologists using auditory perception in seven early-treated patients over a period of 6 years. Statistical analysis of the first and last evaluations of the patients was performed with the Wilcoxon signed-rank test. A total of 29 speech samples were analyzed. All the patients suffered from hypernasality, articulation disorder, and impairment in speech intelligibility at the age of 3 years. The conditions were stable, and 2 patients developed normal or near-normal speech during follow-up. Speech therapy and a high dose of rhGAA appeared to improve articulation in 6 of the 7 patients (86%, p = 0.028) by decreasing the omission of consonants, which consequently increased speech intelligibility (p = 0.041). Severity of hypernasality was greatly reduced in only 2 patients (29%, p = 0.131). Speech disorders were common even in early and successfully treated patients with IOPD; however, aggressive speech therapy and high-dose rhGAA could improve their speech disorders. Copyright © 2016 European Paediatric Neurology Society. Published by Elsevier Ltd. All rights reserved.

  6. Acoustic assessment of speech privacy curtains in two nursing units.

    PubMed

    Pope, Diana S; Miller-Klein, Erik T

    2016-01-01

    Hospitals have complex soundscapes that create challenges to patient care. Extraneous noise and high reverberation rates impair speech intelligibility, which leads to raised voices. In an unintended spiral, the increasing noise may result in diminished speech privacy, as people speak loudly to be heard over the din. The products available to improve hospital soundscapes include construction materials that absorb sound (acoustic ceiling tiles, carpet, wall insulation) and reduce reverberation rates. Enhanced privacy curtains are now available and offer potential for a relatively simple way to improve speech privacy and speech intelligibility by absorbing sound at the hospital patient's bedside. Acoustic assessments were performed over 2 days on two nursing units with a similar design in the same hospital. One unit was built with the 1970s' standard hospital construction and the other was newly refurbished (2013) with sound-absorbing features. In addition, we determined the effect of an enhanced privacy curtain versus standard privacy curtains using acoustic measures of speech privacy and speech intelligibility indexes. Privacy curtains provided auditory protection for the patients. In general, that protection was increased by the use of enhanced privacy curtains. On average, the enhanced curtain improved sound absorption from 20% to 30%; however, there was considerable variability, depending on the configuration of the rooms tested. Enhanced privacy curtains provide measurable improvement to the acoustics of patient rooms but cannot overcome larger acoustic design issues. To shorten reverberation time, additional absorption and more compact, fragmented nursing unit floor plate shapes should be considered.

  7. Biosignal Analysis to Assess Mental Stress in Automatic Driving of Trucks: Palmar Perspiration and Masseter Electromyography

    PubMed Central

    Zheng, Rencheng; Yamabe, Shigeyuki; Nakano, Kimihiko; Suda, Yoshihiro

    2015-01-01

    Nowadays, insight into human-machine interaction is a critical topic given the large-scale development of intelligent vehicles. Biosignal analysis can provide a deeper understanding of driver behavior that may inform the practical use of automation technology. This study therefore concentrates on biosignal analysis to quantitatively evaluate the mental stress of drivers during automatic driving of trucks, with vehicles platooning at close gap distances to reduce air resistance and save energy. Using two wearable sensor systems, palmar perspiration and masseter electromyography were measured continuously, and a biosignal processing method was proposed to assess mental stress levels. In a driving simulator experiment, ten participants completed automatic driving with 4, 8, and 12 m gap distances from the preceding vehicle, and manual driving with about a 25 m gap distance as a reference. Mental stress increased significantly as the gap distance decreased, and an abrupt rise in mental stress was also observed accompanying a sudden change of the gap distance during automatic driving, corresponding to significantly higher ride discomfort according to subjective reports. PMID:25738768
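
The kind of biosignal processing described above can be sketched in outline. The following Python toy (the function names and synthetic traces are illustrative assumptions, not the authors' actual pipeline) shows the common rectify-and-smooth step that turns a raw EMG trace into an amplitude envelope, whose mean level can serve as a simple muscle-tension proxy:

```python
import math

def emg_envelope(samples, window=5):
    """Full-wave rectify an EMG trace, then smooth with a moving average.

    An amplitude envelope of this kind is a common proxy for muscle
    tension (and hence mental stress) in biosignal studies.
    """
    rectified = [abs(s) for s in samples]
    env = []
    for i in range(len(rectified)):
        lo = max(0, i - window + 1)
        env.append(sum(rectified[lo:i + 1]) / (i + 1 - lo))
    return env

def mean_level(env):
    """Average envelope level over the whole recording."""
    return sum(env) / len(env)

# Synthetic "EMG" traces: larger oscillation amplitude stands in for
# stronger masseter activity, e.g. at a closer gap distance.
relaxed = [0.2 * math.sin(0.9 * t) for t in range(200)]
tense = [0.8 * math.sin(0.9 * t) for t in range(200)]
```

Comparing `mean_level(emg_envelope(tense))` with the relaxed trace then gives a single scalar per condition that can be tested across gap distances.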

  8. Analyzing Vehicle Fuel Saving Opportunities through Intelligent Driver Feedback

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonder, J.; Earleywine, M.; Sparks, W.

    2012-06-01

    Driving style changes, e.g., improving driver efficiency and motivating driver behavior changes, could deliver significant petroleum savings. This project examines eliminating stop-and-go driving and unnecessary idling, and also adjusting acceleration rates and cruising speeds to ideal levels to quantify fuel savings. Such extreme adjustments can result in dramatic fuel savings of over 30%, but would in reality only be achievable through automated control of vehicles and traffic flow. In real-world driving, efficient driving behaviors could reduce fuel use by 20% on aggressively driven cycles and by 5-10% on more moderately driven trips. A literature survey of driver behavior influences was conducted, and pertinent factors from on-road experiments with different driving styles were observed. This effort highlighted important driver influences such as surrounding vehicle behavior, anxiety over trying to get somewhere quickly, and the power/torque available from the vehicle. Existing feedback approaches often deliver efficiency information and instruction. Three recommendations for maximizing fuel savings from potential drive cycle improvement are: (1) leveraging applications with enhanced incentives, (2) using an approach that is easy and widely deployable to motivate drivers, and (3) utilizing connected vehicle and automation technologies to achieve large and widespread efficiency improvements.

  9. The Relationship between Central Auditory Processing, Language, and Cognition in Children Being Evaluated for Central Auditory Processing Disorder.

    PubMed

    Brenneman, Lauren; Cash, Elizabeth; Chermak, Gail D; Guenette, Linda; Masters, Gay; Musiek, Frank E; Brown, Mallory; Ceruti, Julianne; Fitzegerald, Krista; Geissler, Kristin; Gonzalez, Jennifer; Weihing, Jeffrey

    2017-09-01

    Pediatric central auditory processing disorder (CAPD) is frequently comorbid with other childhood disorders. However, few studies have examined the relationship between commonly used CAPD, language, and cognition tests within the same sample. The present study examined the relationship between diagnostic CAPD tests and "gold standard" measures of language and cognitive ability, the Clinical Evaluation of Language Fundamentals (CELF) and the Wechsler Intelligence Scale for Children (WISC). A retrospective study. Twenty-seven patients referred for CAPD testing who scored average or better on the CELF and low average or better on the WISC were initially included. Seven children who scored below the CELF and/or WISC inclusion criteria were then added to the dataset for a second analysis, yielding a sample size of 34. Participants were administered a CAPD battery that included at least the following three CAPD tests: Frequency Patterns (FP), Dichotic Digits (DD), and Competing Sentences (CS). In addition, they were administered the CELF and WISC. Relationships between scores on CAPD, language (CELF), and cognition (WISC) tests were examined using correlation analysis. DD and FP showed significant correlations with Full Scale Intelligence Quotient, and the DD left ear and the DD interaural difference measures both showed significant correlations with working memory. However, ∼80% or more of the variance in these CAPD tests was unexplained by language and cognition measures. Language and cognition measures were more strongly correlated with each other than were the CAPD tests with any CELF or WISC scale. Additional correlations with the CAPD tests were revealed when patients who scored in the mild-moderate deficit range on the CELF and/or in the borderline low intellectual functioning range on the WISC were included in the analysis. While both the DD and FP tests showed significant correlations with one or more cognition measures, the majority of the variance in these CAPD measures went unexplained by cognition. Unlike DD and FP, the CS test was not correlated with cognition. Additionally, language measures were not significantly correlated with any of the CAPD tests. Our findings emphasize that the outcomes and interpretation of results vary as a function of the subject inclusion criteria that are applied for the CELF and WISC. Including participants with poorer cognition and/or language scores increased the number of significant correlations observed. For this reason, it is important that studies investigating the relationship between CAPD and other domains or disorders report the specific inclusion criteria used for all tests. American Academy of Audiology

  10. Identification of mine rescue equipment reduction gears technical condition

    NASA Astrophysics Data System (ADS)

    Gerike, B. L.; Klishin, V. I.; Kuzin, E. G.

    2017-09-01

    The article presents the rationale for adopting intelligent, condition-based servicing of mine belt conveyor drives, in which technical condition is evaluated with diagnostic techniques instead of regular preventive maintenance. It reports diagnostic results for the condition of belt conveyor drive reduction gears, taking into account lubricating oil parameters, vibration, and temperature. Using such a combined approach to evaluate technical condition improves the reliability of the forecast, which makes it possible not only to prevent accidental breakdowns and eliminate unscheduled downtime, but also to bring substantial economic benefits through reducing the duration and scope of overhaul work.

  11. Design of automatic curtain controlled by wireless based on single chip 51 microcomputer

    NASA Astrophysics Data System (ADS)

    Han, Dafeng; Chen, Xiaoning

    2017-08-01

    To realize wireless control of domestic intelligent curtains, a wireless intelligent curtain control system based on the 51 single-chip microcomputer was designed. The curtain can work in manual, automatic, and sleep modes, cycled through by a push button or a mobile phone app. A photoresistor module and a pyroelectric infrared (PIR) sensor collect the indoor light level and detect whether a person is in the room; after processing by the microcontroller, a motor driver module controls the forward and reverse rotation of an asynchronous motor, realizing intelligent opening and closing of the curtain. The motor can be stopped by a switch, and curtain opening, closing, and timed switching can be controlled through the buttons and the mobile phone app. The light intensity, working mode, curtain state, and system time are displayed on an LCD1602. The system showed high reliability and security in practical testing, and with the growing popularity of smart homes the design has broad market prospects.
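
As a rough illustration of the control logic described (the real system runs as firmware on an 8051-family chip; the Python below, its mode names, and its thresholds are hypothetical stand-ins), one control cycle could be decided like this:

```python
# Mode logic for an intelligent curtain controller, sketched in Python
# as a stand-in for the 8051 firmware. Thresholds are illustrative.
LIGHT_OPEN_THRESHOLD = 300   # ADC counts from the photoresistor module
LIGHT_CLOSE_THRESHOLD = 100

def decide_curtain(mode, light_level, person_present, manual_command=None):
    """Return 'open', 'close', or 'hold' for one control cycle."""
    if mode == "manual":
        # Button / phone-app command wins; otherwise do nothing.
        return manual_command or "hold"
    if mode == "sleep":
        return "close"          # sleep mode keeps the curtain shut
    if mode == "auto":
        if not person_present:  # PIR sensor: nobody in the room
            return "hold"
        if light_level >= LIGHT_OPEN_THRESHOLD:
            return "open"       # daylight: open the curtain
        if light_level <= LIGHT_CLOSE_THRESHOLD:
            return "close"      # dark: close it
        return "hold"           # between thresholds: no change
    raise ValueError("unknown mode: %s" % mode)
```

The firmware version would additionally debounce the button input and map 'open'/'close' onto the forward/reverse drive signals of the asynchronous motor.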

  12. The potential impact of intelligent power wheelchair use on social participation: perspectives of users, caregivers and clinicians.

    PubMed

    Rushton, Paula W; Kairy, Dahlia; Archambault, Philippe; Pituch, Evelina; Torkia, Caryne; El Fathi, Anas; Stone, Paula; Routhier, François; Forget, Robert; Pineau, Joelle; Gourdeau, Richard; Demers, Louise

    2015-05-01

    To explore power wheelchair users', caregivers' and clinicians' perspectives regarding the potential impact of intelligent power wheelchair use on social participation. Semi-structured interviews were conducted with power wheelchair users (n = 12), caregivers (n = 4) and clinicians (n = 12). An illustrative video was used to facilitate discussion. The transcribed interviews were analyzed using thematic analysis. Three main themes were identified based on the experiences of the power wheelchair users, caregivers and clinicians: (1) increased social participation opportunities, (2) changing how social participation is experienced and (3) decreased risk of accidents during social participation. Findings from this study suggest that an intelligent power wheelchair would enhance social participation in a variety of important ways, thereby providing support for continued design and development of this assistive technology. An intelligent power wheelchair has the potential to: Increase social participation opportunities by overcoming challenges associated with navigating through crowds and small spaces. Change how social participation is experienced through "normalizing" social interactions and decreasing the effort required to drive a power wheelchair. Decrease the risk of accidents during social participation by reducing the need for dangerous compensatory strategies and minimizing the impact of the physical environment.

  13. Using neuropsychological profiles to classify neglected children with or without physical abuse.

    PubMed

    Nolin, Pierre; Ethier, Louise

    2007-06-01

    The aim of this study is twofold: First, to investigate whether cognitive functions can contribute to differentiating neglected children with or without physical abuse compared to comparison participants; second, to demonstrate the detrimental impact of children being victimized by a combination of different types of maltreatment. Seventy-nine children aged 6-12 years and currently receiving Child Protection Services because of one of two types of maltreatment (neglect with physical abuse, n=56; neglect without physical abuse, n=28) were compared with a control group of 53 children matched for age, gender, and annual family income. The neuropsychological assessment focused on motor performance, attention, memory and learning, visual-motor integration, language, frontal/executive functions, and intelligence. Discriminant analysis identified auditory attention and response set, and visual-motor integration (Function 1), and problem solving, abstraction, and planning (Function 2) as the two sets of variables that most distinguished the groups. Discriminant analysis predicted group membership in 80% of the cases. Children who were neglected with physical abuse showed cognitive deficits in auditory attention and response set, and visual-motor integration (Function 1) and problem solving, abstraction, and planning (Function 2). Children who were neglected without physical abuse differed from the control group in that they obtained lower scores in auditory attention and response set, and visual-motor integration (Function 1). Surprisingly, these same children demonstrated a greater capacity for problem solving, abstraction, and planning (Function 2) than the physically abused neglected and control children. The present study underscores the relevance of neuropsychology to maltreatment research. The results support the heterogeneity of cognitive deficits in children based on different types of maltreatment and the fact that neglect with physical abuse is more harmful than neglect alone.
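
Discriminant analysis of the kind used here separates groups by a weighted combination of test scores. A minimal two-group Fisher discriminant, sketched in pure Python on made-up 2-D "scores" (an illustration of the general technique, not the study's actual multi-group analysis), looks like this:

```python
def mean_vec(points):
    """Component-wise mean of a list of equal-length tuples."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def fisher_direction(group_a, group_b):
    """Two-class Fisher discriminant for 2-D data, assuming diagonal
    within-class scatter for simplicity: w ~ S_w^-1 (m_a - m_b)."""
    ma, mb = mean_vec(group_a), mean_vec(group_b)
    sw = [0.0, 0.0]  # within-class scatter, per dimension
    for pts, m in ((group_a, ma), (group_b, mb)):
        for p in pts:
            for i in (0, 1):
                sw[i] += (p[i] - m[i]) ** 2
    return [(ma[i] - mb[i]) / (sw[i] or 1.0) for i in (0, 1)]

def classify(point, w, threshold):
    """Project a score vector onto w and compare to the threshold."""
    score = sum(wi * xi for wi, xi in zip(w, point))
    return "a" if score > threshold else "b"

# Hypothetical 2-D score pairs (e.g. attention, visual-motor integration)
group_a = [(2.0, 3.0), (2.2, 2.8), (1.8, 3.1), (2.1, 3.2)]
group_b = [(0.5, 1.0), (0.7, 0.9), (0.4, 1.2), (0.6, 1.1)]

w = fisher_direction(group_a, group_b)
ma, mb = mean_vec(group_a), mean_vec(group_b)
# Decision threshold: midpoint between the projected group means.
thr = 0.5 * (sum(wi * mi for wi, mi in zip(w, ma))
             + sum(wi * mi for wi, mi in zip(w, mb)))
```

On real data, the proportion of cases classified back into their own group (80% in the study) is the usual headline figure for such an analysis.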

  14. Linguistic processing in idiopathic generalized epilepsy: an auditory event-related potential study.

    PubMed

    Henkin, Yael; Kishon-Rabin, Liat; Pratt, Hillel; Kivity, Sara; Sadeh, Michelle; Gadoth, Natan

    2003-09-01

    Auditory processing of increasing acoustic and linguistic complexity was assessed in children with idiopathic generalized epilepsy (IGE) by using auditory event-related potentials (AERPs) as well as reaction time and performance accuracy. Twenty-four children with IGE [12 with generalized tonic-clonic seizures (GTCSs), and 12 with absence seizures (ASs)] with average intelligence and age-appropriate scholastic skills, uniformly medicated with valproic acid (VPA), and 20 healthy controls, performed oddball discrimination tasks that consisted of the following stimuli: (a) pure tones; (b) nonmeaningful monosyllables that differed by their phonetic features (i.e., phonetic stimuli); and (c) meaningful monosyllabic words from two semantic categories (i.e., semantic stimuli). AERPs elicited by nonlinguistic stimuli were similar in healthy children and in children with epilepsy, whereas those elicited by linguistic stimuli (i.e., phonetic and semantic) differed significantly in latency, amplitude, and scalp distribution. In children with GTCSs, phonetic and semantic processing were characterized by slower processing time, manifested by prolonged N2 and P3 latencies during phonetic processing, and prolongation of all AERP latencies during semantic processing. In children with ASs, phonetic and semantic processing were characterized by increased allocation of attentional resources, manifested by enhanced N2 amplitudes. Semantic processing also was characterized by prolonged P3 latency. In both patient groups, processing of linguistic stimuli resulted in different patterns of brain-activity lateralization compared with that in healthy controls. Reaction time and performance accuracy did not differ among the study groups. AERPs exposed linguistic-processing deficits related to seizure type in children with IGE. Neurologic follow-up should therefore include evaluation of linguistic functions, and remedial intervention should be provided accordingly.

  15. Cochlear implant rehabilitation outcomes in Waardenburg syndrome children.

    PubMed

    de Sousa Andrade, Susana Margarida; Monteiro, Ana Rita Tomé; Martins, Jorge Humberto Ferreira; Alves, Marisa Costa; Santos Silva, Luis Filipe; Quadros, Jorge Manuel Cardoso; Ribeiro, Carlos Alberto Reis

    2012-09-01

    The purpose of this study was to review the outcomes of children with documented Waardenburg syndrome implanted in the ENT Department of Centro Hospitalar de Coimbra, concerning postoperative speech perception and production, in comparison to the rest of non-syndromic implanted children. A retrospective chart review was performed for children congenitally deaf who had undergone cochlear implantation with multichannel implants, diagnosed as having Waardenburg syndrome, between 1992 and 2011. Postoperative performance outcomes were assessed and confronted with results obtained by children with non-syndromic congenital deafness also implanted in our department. Open-set auditory perception skills were evaluated by using European Portuguese speech discrimination tests (vowels test, monosyllabic word test, number word test and words in sentence test). Meaningful auditory integration scales (MAIS) and categories of auditory performance (CAP) were also measured. Speech production was further assessed and included results on meaningful use of speech Scale (MUSS) and speech intelligibility rating (SIR). To date, 6 implanted children were clinically identified as having WS type I, and one met the diagnosis of type II. All WS children received multichannel cochlear implants, with a mean age at implantation of 30.6±9.7months (ranging from 19 to 42months). Postoperative outcomes in WS children were similar to other nonsyndromic children. In addition, in number word and vowels discrimination test WS group showed slightly better performances, as well as in MUSS and MAIS assessment. Our study has shown that cochlear implantation should be considered a rehabilitative option for Waardenburg syndrome children with profound deafness, enabling the development and improvement of speech perception and production abilities in this group of patients, reinforcing their candidacy for this audio-oral rehabilitation method. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  16. Disproportionately severe memory deficit in relation to normal intellectual functioning after closed head injury.

    PubMed Central

    Levin, H S; Goldstein, F C; High, W M; Eisenberg, H M

    1988-01-01

    The presence of disproportionate memory impairment with relatively preserved intellectual functioning was examined in 87 survivors of moderate or severe closed head injury. Approximately one-fourth of the patients tested at 5 to 15 and/or 16 to 42 months after injury manifested defective memory on both auditory and pictorial measures despite obtaining Wechsler Verbal and Performance Intelligence Quotients within the average range. The findings indicate that disproportionately severe memory deficit persists in a subgroup of closed head injured survivors which is reminiscent in some cases of the amnesic disturbance arising from other causes. Evaluation of long term memory in relation to cognitive ability could potentially identify important distinctions for prognosis and rehabilitation in head injured patients. PMID:3225586

  17. 2016 Summer Series - Terry Fong - Planetary Exploration Reinvented

    NASA Image and Video Library

    2016-07-07

    The allure of deep space drives humanity’s curiosity to further explore the universe, but the risks associated with spaceflight are still limiting. Technological advancements in robotics and data processing are pushing the envelope of human planetary exploration and habitation. Dr. Terry Fong from the NASA Ames’ Intelligent Robotics Group will describe how we are reinventing the approach to exploring the universe.

  18. NASA Technology Transfer - Human Robot Teaming

    NASA Image and Video Library

    2016-12-23

    Produced for Intelligent Robotics Group to show at January 2017 Consumer Electronics Show (CES). Highlights development of VERVE (Visual Environment for Remote Virtual Exploration) software used on K-10, K-REX, SPHERES and AstroBee projects for 3D awareness. Also mentions transfer of software to Nissan for their development in their Autonomous Vehicle project. Video includes Nissan's self-driving car around NASA Ames.

  19. Binaural model-based dynamic-range compression.

    PubMed

    Ernst, Stephan M A; Kortlang, Steffen; Grimm, Giso; Bisitz, Thomas; Kollmeier, Birger; Ewert, Stephan D

    2018-01-26

    Binaural cues such as interaural level differences (ILDs) are used to organise auditory perception and to segregate sound sources in complex acoustical environments. In bilaterally fitted hearing aids, dynamic-range compression operating independently at each ear potentially alters these ILDs, thus distorting binaural perception and sound source segregation. A binaurally linked, model-based, fast-acting dynamic compression algorithm designed to approximate the normal-hearing basilar membrane (BM) input-output function in hearing-impaired listeners is suggested. A multi-center evaluation in comparison with an alternative binaural fitting and two bilateral fittings was performed to assess the effect of binaural synchronisation on (a) speech intelligibility and (b) perceived quality in realistic conditions. Thirty and 12 hearing-impaired (HI) listeners were individually aided with the algorithms for the two experimental parts, respectively. A small preference towards the proposed model-based algorithm was found in the direct quality comparison. However, no benefit of binaural synchronisation regarding speech intelligibility was found, suggesting a dominant role of the better ear in all experimental conditions. The suggested binaural synchronisation of compression algorithms had a limited effect on the tested outcome measures; however, linking could be situationally beneficial for preserving a natural binaural perception of the acoustical environment.
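
The ILD-distortion problem the authors address can be shown with a toy compressor. In this sketch (the threshold, ratio, and max-level linking rule are illustrative assumptions, not the paper's model-based algorithm), independent compression at each ear shrinks a 10 dB ILD, while applying one linked gain to both ears preserves it:

```python
def comp_gain_db(level_db, threshold_db=50.0, ratio=3.0):
    """Gain (dB) of a simple static compressor: above threshold,
    output level grows at 1/ratio of the input rate."""
    if level_db <= threshold_db:
        return 0.0
    return -(level_db - threshold_db) * (1.0 - 1.0 / ratio)

def independent(left_db, right_db):
    """Bilateral fitting: each ear computes its own gain."""
    return (left_db + comp_gain_db(left_db),
            right_db + comp_gain_db(right_db))

def linked(left_db, right_db):
    """Binaurally linked fitting: one gain, driven here by the louder
    ear, applied to both ears so the ILD is untouched."""
    g = comp_gain_db(max(left_db, right_db))
    return (left_db + g, right_db + g)

# A source to the left: 70 dB at the left ear, 60 dB at the right (ILD = 10 dB).
li, ri = independent(70.0, 60.0)   # independent gains shrink the ILD
ll, rl = linked(70.0, 60.0)        # linked gain preserves it
```

With these settings the independent scheme reduces the 10 dB ILD to about 3.3 dB, which is exactly the kind of cue distortion binaural linking is meant to avoid.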

  20. Perception of speech in reverberant conditions using AM-FM cochlear implant simulation.

    PubMed

    Drgas, Szymon; Blaszak, Magdalena A

    2010-10-01

    This study assessed the effects of speech misidentification and cognitive processing errors in normal-hearing adults listening to degraded auditory input simulating cochlear implants under reverberant conditions. Three variables were controlled: number of vocoder channels (six and twelve), instantaneous-frequency change rate (none, 50 Hz, 400 Hz), and enclosure (different reverberation conditions). The analyses were based on: (a) nonsense-word recognition scores for eight young normal-hearing listeners, (b) 'ease of listening' derived from response time, and (c) a subjective measure of difficulty. The maximum speech intelligibility score in the cochlear implant simulation was 70%, obtained in non-reverberant conditions with a 12-channel vocoder and changes of instantaneous frequency limited to 400 Hz. In the presence of reflections, word misidentification was about 10-20 percentage points higher. There was little difference between the 50 and 400 Hz frequency-modulation cut-offs for the 12-channel vocoder; with six channels, however, the difference was more pronounced. The results of the experiment suggest that the information carried by FM beyond F0 can be sufficient to improve speech intelligibility in real-world conditions.
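    A cochlear-implant vocoder of the kind simulated here can be sketched per channel: band-pass the speech, extract the slowly varying envelope, and use it to modulate a carrier confined to the same band. A minimal single-channel noise-vocoder sketch using crude FFT brick-wall filters; all parameters are illustrative, not the authors' exact processing:

    ```python
    import numpy as np

    def fft_bandpass(x, fs, lo, hi):
        """Crude brick-wall band-pass via the FFT (illustrative only)."""
        spec = np.fft.rfft(x)
        freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
        spec[(freqs < lo) | (freqs > hi)] = 0.0
        return np.fft.irfft(spec, len(x))

    def vocode_channel(x, fs, lo, hi, env_cutoff=50.0, seed=0):
        """One vocoder channel: band envelope modulating a band-limited noise carrier."""
        band = fft_bandpass(x, fs, lo, hi)
        env = fft_bandpass(np.abs(band), fs, 0.0, env_cutoff)  # envelope below 50 Hz
        env = np.maximum(env, 0.0)                             # clip filter undershoot
        rng = np.random.default_rng(seed)
        carrier = fft_bandpass(rng.standard_normal(len(x)), fs, lo, hi)
        return env * carrier
    ```

    A full simulation would sum six or twelve such channels spanning the speech band; limiting the rate of instantaneous-frequency change corresponds to low-pass filtering an additional FM component, which this noise-carrier sketch omits.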

  1. Suppressed Alpha Oscillations Predict Intelligibility of Speech and its Acoustic Details

    PubMed Central

    Weisz, Nathan

    2012-01-01

    Modulations of human alpha oscillations (8–13 Hz) accompany many cognitive processes, but their functional role in auditory perception has proven elusive: Do oscillatory dynamics of alpha reflect acoustic details of the speech signal and are they indicative of comprehension success? Acoustically presented words were degraded in acoustic envelope and spectrum in an orthogonal design, and electroencephalogram responses in the frequency domain were analyzed in 24 participants, who rated word comprehensibility after each trial. First, the alpha power suppression during and after a degraded word depended monotonically on spectral and, to a lesser extent, envelope detail. The magnitude of this alpha suppression exhibited an additional and independent influence on later comprehension ratings. Second, source localization of alpha suppression yielded superior parietal, prefrontal, as well as anterior temporal brain areas. Third, multivariate classification of the time–frequency pattern across participants showed that patterns of late posterior alpha power allowed best for above-chance classification of word intelligibility. Results suggest that both magnitude and topography of late alpha suppression in response to single words can indicate a listener's sensitivity to acoustic features and the ability to comprehend speech under adverse listening conditions. PMID:22100354
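    Alpha power suppression of the kind analysed here reduces, at its core, to comparing band-limited (8-13 Hz) power across conditions. A minimal sketch of band-power estimation from a single channel; a real EEG analysis would use tapered, epoched data rather than a raw periodogram:

    ```python
    import numpy as np

    def band_power(x, fs, lo=8.0, hi=13.0):
        """Mean periodogram power in a frequency band (defaults to alpha, 8-13 Hz)."""
        spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
        freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
        mask = (freqs >= lo) & (freqs <= hi)
        return spec[mask].mean()
    ```

    Suppression would then be quantified as the decrease of this quantity during/after the word relative to a pre-stimulus baseline.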

  2. Implications of intelligent, integrated microsystems for product design and development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MYERS,DAVID R.; MCWHORTER,PAUL J.

    2000-04-19

    Intelligent, integrated microsystems combine some or all of the functions of sensing, processing information, actuation, and communication within a single integrated package, and preferably upon a single silicon chip. As the elements of these highly integrated solutions interact strongly with each other, the microsystem can be neither designed nor fabricated piecemeal, in contrast to the more familiar assembled products. Driven by technological imperatives, microsystems will best be developed by multi-disciplinary teams, most likely within flatter, less hierarchical organizations. Standardization of design and process tools around a single, dominant technology will expedite economically viable operation under a common production infrastructure. The production base for intelligent, integrated microsystems has elements in common with the mathematical theory of chaos. Similar to chaos theory, the development of microsystems technology will be strongly dependent on, and optimized to, the initial product requirements that will drive standardization, thereby further rewarding early entrants to integrated microsystem technology.

  3. Collective intelligence for translational medicine: Crowdsourcing insights and innovation from an interdisciplinary biomedical research community.

    PubMed

    Budge, Eleanor Jane; Tsoti, Sandra Maria; Howgate, Daniel James; Sivakumar, Shivan; Jalali, Morteza

    2015-01-01

    Translational medicine bridges the gap between discoveries in biomedical science and their safe and effective clinical application. Despite the gross opportunity afforded by modern research for unparalleled advances in this field, the process of translation remains protracted. Efforts to expedite science translation have included the facilitation of interdisciplinary collaboration within both academic and clinical environments in order to generate integrated working platforms fuelling the sharing of knowledge, expertise, and tools to align biomedical research with clinical need. However, barriers to scientific translation remain, and further progress is urgently required. Collective intelligence and crowdsourcing applications offer the potential for global online networks, allowing connection and collaboration between a wide variety of fields. This would drive the alignment of biomedical science with biotechnology, clinical need, and patient experience, in order to deliver evidence-based innovation which can revolutionize medical care worldwide. Here we discuss the critical steps towards implementing collective intelligence in translational medicine using the experience of those in other fields of science and public health.

  4. Wireless Control of Smartphones with Tongue Motion Using Tongue Drive Assistive Technology

    PubMed Central

    Kim, Jeonghee; Huo, Xueliang

    2010-01-01

    Tongue Drive System (TDS) is a noninvasive, wireless and wearable assistive technology that helps people with severe disabilities control their environments using their tongue motion. TDS translates specific tongue gestures to commands by detecting a small permanent magnetic tracer on the users’ tongue. We have linked the TDS to a smartphone (iPhone/iPod Touch) with a customized wireless module, added to the iPhone. We also migrated and ran the TDS sensor signal processing algorithm and graphical user interface on the iPhone in real time. The TDS-iPhone interface was evaluated by four able-bodied subjects for dialing 10-digit phone numbers using the standard telephone keypad and three methods of prompting the numbers: visual, auditory, and cognitive. Preliminary results showed that the interface worked quite reliably at a rate of 15.4 digits per minute, on average, with negligible errors. PMID:21096049

  5. How are Inner Hair Cells Stimulated? Evidence for multiple mechanical drives

    PubMed Central

    Guinan, John J.

    2013-01-01

    Recent studies indicate that the gap over outer hair cells (OHCs) between the reticular lamina (RL) and the tectorial membrane (TM) varies cyclically during low-frequency sounds. Variation in the RL-TM gap produces radial fluid flow in the gap that can drive inner hair cell (IHC) stereocilia. Analysis of RL-TM gap changes reveals three IHC drives in addition to classic SHEAR. For upward basilar-membrane (BM) motion, IHC stereocilia are deflected in the excitatory direction by SHEAR and OHC-MOTILITY, but in the inhibitory direction by TM-PUSH and CILIA-SLANT. Upward BM motion causes OHC somatic contraction which tilts the RL, compresses the RL-TM gap over IHCs and expands the RL-TM gap over OHCs, thereby producing an outward (away from the IHCs) radial fluid flow which is the OHC-MOTILITY drive. For upward BM motion, the force that moves the TM upward also compresses the RL-TM gap over OHCs causing inward radial flow past IHCs which is the TM-PUSH drive. Motions that produce large tilting of OHC stereocilia squeeze the supra-OHC RL-TM gap and cause inward radial flow past IHCs which is the CILIA-SLANT drive. Combinations of these drives explain: (1) the reversal at high sound levels of auditory nerve (AN) initial peak (ANIP) responses to clicks, and medial olivocochlear (MOC) inhibition of ANIP responses below, but not above, the ANIP reversal, (2) dips and phase reversals in AN responses to tones in cats and chinchillas, (3) hypersensitivity and phase reversals in tuning-curve tails after OHC ablation, and (4) MOC inhibition of tail-frequency AN responses. The OHC-MOTILITY drive provides another mechanism, in addition to BM motion amplification, that uses active processes to enhance the output of the cochlea. The ability of these IHC drives to explain previously anomalous data provides strong, although indirect, evidence that these drives are significant and presents a new view of how the cochlea works at frequencies below 3 kHz. PMID:22959529

  6. Measuring Working Memory With Digit Span and the Letter-Number Sequencing Subtests From the WAIS-IV: Too Low Manipulation Load and Risk for Underestimating Modality Effects.

    PubMed

    Egeland, Jens

    2015-01-01

    The Wechsler Adult Intelligence Scale (WAIS) is one of the most frequently used tests among psychologists. In the fourth edition of the test (WAIS-IV), the subtests Digit Span and Letter-Number Sequencing are expanded for better measurement of working memory (WM). However, it is not clear whether the new extended tasks contribute sufficient complexity to be sensitive measures of manipulation WM, nor do we know to what degree WM capacity differs between the visual and the auditory modality because the WAIS-IV only tests the auditory modality. Performance by a mixed sample of 226 patients referred for neuropsychological examination on the Digit Span and Letter-Number Sequencing subtests from the WAIS-IV and on Spatial Span from the Wechsler Memory Scale-Third Edition was analyzed in two confirmatory factor analyses to investigate whether a unitary WM model or divisions based on modality or level/complexity best fit the data. The modality model showed the best fit when analyzing summed scores for each task as well as scores for the longest span. The clinician is advised to apply tests with higher manipulation load and to consider testing visual span as well before drawing conclusions about impaired WM from the WAIS-IV.

  7. Monkeys and Humans Share a Common Computation for Face/Voice Integration

    PubMed Central

    Chandrasekaran, Chandramouli; Lemus, Luis; Trubanova, Andrea; Gondan, Matthias; Ghazanfar, Asif A.

    2011-01-01

    Speech production involves the movement of the mouth and other regions of the face resulting in visual motion cues. These visual cues enhance intelligibility and detection of auditory speech. As such, face-to-face speech is fundamentally a multisensory phenomenon. If speech is fundamentally multisensory, it should be reflected in the evolution of vocal communication: similar behavioral effects should be observed in other primates. Old World monkeys share with humans vocal production biomechanics and communicate face-to-face with vocalizations. It is unknown, however, if they, too, combine faces and voices to enhance their perception of vocalizations. We show that they do: monkeys combine faces and voices in noisy environments to enhance their detection of vocalizations. Their behavior parallels that of humans performing an identical task. We explored what common computational mechanism(s) could explain the pattern of results we observed across species. Standard explanations or models such as the principle of inverse effectiveness and a “race” model failed to account for their behavior patterns. Conversely, a “superposition model”, positing the linear summation of activity patterns in response to visual and auditory components of vocalizations, served as a straightforward but powerful explanatory mechanism for the observed behaviors in both species. As such, it represents a putative homologous mechanism for integrating faces and voices across primates. PMID:21998576
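    The two candidate mechanisms can be contrasted in simulation: a race model takes, on each trial, the faster of two independent unisensory detection times, while a superposition model sums the auditory and visual evidence and responds when the combined activity crosses a threshold. A toy sketch with hypothetical reaction-time parameters, not the paper's fitted values:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_trials = 20000

    # Hypothetical unisensory detection times (ms)
    rt_visual = rng.normal(400.0, 60.0, n_trials)
    rt_auditory = rng.normal(420.0, 60.0, n_trials)

    # Race model: on each trial the faster channel triggers the response
    rt_race = np.minimum(rt_visual, rt_auditory)

    # Superposition model: evidence accumulates at the summed unisensory rates
    threshold = 100.0
    drift_v = threshold / rt_visual.mean()
    drift_a = threshold / rt_auditory.mean()
    rt_super = threshold / (drift_v + drift_a)  # deterministic mean prediction
    ```

    Both models predict multisensory facilitation (faster bimodal responses), but they differ in the shape of the predicted RT distribution, which is the property such studies exploit to tell them apart.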

  8. Speech Processing to Improve the Perception of Speech in Background Noise for Children With Auditory Processing Disorder and Typically Developing Peers.

    PubMed

    Flanagan, Sheila; Zorilă, Tudor-Cătălin; Stylianou, Yannis; Moore, Brian C J

    2018-01-01

    Auditory processing disorder (APD) may be diagnosed when a child has listening difficulties but has normal audiometric thresholds. For adults with normal hearing and with mild-to-moderate hearing impairment, an algorithm called spectral shaping with dynamic range compression (SSDRC) has been shown to increase the intelligibility of speech when background noise is added after the processing. Here, we assessed the effect of such processing using 8 children with APD and 10 age-matched control children. The loudness of the processed and unprocessed sentences was matched using a loudness model. The task was to repeat back sentences produced by a female speaker when presented with either speech-shaped noise (SSN) or a male competing speaker (CS) at two signal-to-background ratios (SBRs). Speech identification was significantly better with SSDRC processing than without, for both groups. The benefit of SSDRC processing was greater for the SSN than for the CS background. For the SSN, scores were similar for the two groups at both SBRs. For the CS, the APD group performed significantly more poorly than the control group. The overall improvement produced by SSDRC processing could be useful for enhancing communication in a classroom where the teacher's voice is broadcast using a wireless system.

  9. Congenital Amusia Persists in the Developing Brain after Daily Music Listening

    PubMed Central

    Mignault Goulet, Geneviève; Moreau, Patricia; Robitaille, Nicolas; Peretz, Isabelle

    2012-01-01

    Congenital amusia is a neurodevelopmental disorder that affects about 3% of the adult population. Adults experiencing this musical disorder in the absence of macroscopically visible brain injury are described as cases of congenital amusia under the assumption that the musical deficits have been present from birth. Here, we show that this disorder can be expressed in the developing brain. We found that (10–13 year-old) children exhibit a marked deficit in the detection of fine-grained pitch differences in both musical and acoustical context in comparison to their normally developing peers comparable in age and general intelligence. This behavioral deficit could be traced down to their abnormal P300 brain responses to the detection of subtle pitch changes. The altered pattern of electrical activity does not seem to arise from an anomalous functioning of the auditory cortex, because all early components of the brain potentials, the N100, the MMN, and the P200 appear normal. Rather, the brain and behavioral measures point to disrupted information propagation from the auditory cortex to other cortical regions. Furthermore, the behavioral and neural manifestations of the disorder remained unchanged after 4 weeks of daily musical listening. These results show that congenital amusia can be detected in childhood despite regular musical exposure and normal intellectual functioning. PMID:22606299

  10. Verbal collision avoidance messages during simulated driving: perceived urgency, alerting effectiveness and annoyance.

    PubMed

    Baldwin, Carryl L

    2011-04-01

    Matching the perceived urgency of an alert with the relative hazard level of the situation is critical for effective alarm response. Two experiments describe the impact of acoustic and semantic parameters on ratings of perceived urgency, annoyance and alerting effectiveness and on alarm response speed. Within a simulated driving context, participants rated and responded to collision avoidance system (CAS) messages spoken by a female or male voice (experiments 1 and 2, respectively). Results indicated greater perceived urgency and faster alarm response times as intensity increased from -2 dB signal to noise (S/N) ratio to +10 dB S/N, although annoyance ratings increased as well. CAS semantic content interacted with alarm intensity, indicating that at lower intensity levels participants paid more attention to the semantic content. Results indicate that both acoustic and semantic parameters independently and interactively impact CAS alert perceptions in divided attention conditions and this work can inform auditory alarm design for effective hazard matching. STATEMENT OF RELEVANCE: Results indicate that both acoustic parameters and semantic content can be used to design collision warnings with a range of urgency levels. Further, these results indicate that verbal warnings tailored to a specific hazard situation may improve hazard-matching capabilities without substantial trade-offs in perceived annoyance.

  11. An extended car-following model with consideration of speed guidance at intersections

    NASA Astrophysics Data System (ADS)

    Zhao, Jing; Li, Peng

    2016-11-01

    The main motivation of this paper is to analyze the influence of speed guidance strategies on driving behavior under four different traffic signal conditions, and to investigate an extended car-following model exploring how speed guidance affects two vehicle types, intelligent vehicles and traditional vehicles, during phase-change periods. The numerical results show that the proposed model can qualitatively describe the impact of speed guidance strategies on a vehicle's trajectory, including the acceleration strategy, smooth braking strategy, and deceleration strategy. Moreover, the benefits of speed guidance can be enhanced by lengthening the guiding space range, expanding the permitted guiding speed range, and increasing the percentage of intelligent vehicles.
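    Car-following extensions of this kind commonly start from the full velocity difference (FVD) model, in which acceleration depends on an optimal velocity of the headway plus a relative-speed term; speed guidance can be folded in by blending the optimal velocity toward the guided speed for intelligent vehicles. A minimal sketch whose functional form and parameter values are illustrative, not the paper's calibrated model:

    ```python
    import math

    def optimal_velocity(headway, v_max=30.0, h_c=25.0):
        """Optimal-velocity function of the headway (m -> m/s)."""
        return (v_max / 2.0) * (math.tanh((headway - h_c) / 10.0) + math.tanh(h_c / 10.0))

    def fvd_step(v, headway, dv, kappa=0.4, lam=0.5, guided_v=None, w=0.0, dt=0.1):
        """One Euler step of the FVD model; w in [0, 1] blends toward a guided speed."""
        target = optimal_velocity(headway)
        if guided_v is not None:
            target = (1.0 - w) * target + w * guided_v  # guidance for intelligent vehicles
        accel = kappa * (target - v) + lam * dv
        return v + accel * dt
    ```

    Traditional vehicles correspond to w = 0; intelligent vehicles follow the guided speed to a degree set by w during the phase-change period.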

  12. Recognition of road information using magnetic polarity for intelligent vehicles

    NASA Astrophysics Data System (ADS)

    Kim, Young-Min; Kim, Tae-Gon; Lim, Young-Cheol; Kim, Kwang-Heon; Baek, Seung-Hun; Kim, Eui-Sun

    2005-12-01

    For intelligent vehicle driving using magnetic markers and magnetic sensors, the vehicle can obtain various kinds of road information while moving if that information is encoded in the N/S pole orientation of the markers. If the only aim is to guide the vehicle, control becomes easier the more closely the markers are spaced. However, to recognize the pole direction of a marker reliably, it is preferable that neighbouring markers do not interfere with each other. To obtain road information and move the vehicle autonomously, a method of arranging the magnetic sensors and an algorithm for recognizing the vehicle's position with those sensors are proposed. The effectiveness of the methods was verified by computer simulation.
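    The encoding idea can be made concrete: each marker contributes one bit via its upward-facing pole (say N = 1, S = 0), and fixed-length groups of markers form road-information codes. A hypothetical decoding sketch; the group length and any code table are assumptions, not the paper's scheme:

    ```python
    def decode_markers(polarities, group=4):
        """Map a sequence of 'N'/'S' pole readings to integer road codes (N=1, S=0)."""
        bits = [1 if p == 'N' else 0 for p in polarities]
        codes = []
        for i in range(0, len(bits) - group + 1, group):
            value = 0
            for b in bits[i:i + group]:
                value = (value << 1) | b  # most significant bit first
            codes.append(value)
        return codes
    ```

    A real system would map each integer to a meaning (e.g. curve ahead, speed limit) in a lookup table, and spacing the markers widely enough avoids the pole-interference problem the abstract mentions.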

  13. Audibility and visual biasing in speech perception

    NASA Astrophysics Data System (ADS)

    Clement, Bart Richard

    Although speech perception has been considered a predominantly auditory phenomenon, large benefits from vision in degraded acoustic conditions suggest integration of audition and vision. More direct evidence of this comes from studies of audiovisual disparity that demonstrate vision can bias and even dominate perception (McGurk & MacDonald, 1976). It has been observed that hearing-impaired listeners demonstrate more visual biasing than normally hearing listeners (Walden et al., 1990). It is argued here that stimulus audibility must be equated across groups before true differences can be established. In the present investigation, effects of visual biasing on perception were examined as audibility was degraded for 12 young normally hearing listeners. Biasing was determined by quantifying the degree to which listener identification functions for a single synthetic auditory /ba-da-ga/ continuum changed across two conditions: (1)an auditory-only listening condition; and (2)an auditory-visual condition in which every item of the continuum was synchronized with visual articulations of the consonant-vowel (CV) tokens /ba/ and /ga/, as spoken by each of two talkers. Audibility was altered by presenting the conditions in quiet and in noise at each of three signal-to- noise (S/N) ratios. For the visual-/ba/ context, large effects of audibility were found. As audibility decreased, visual biasing increased. A large talker effect also was found, with one talker eliciting more biasing than the other. An independent lipreading measure demonstrated that this talker was more visually intelligible than the other. For the visual-/ga/ context, audibility and talker effects were less robust, possibly obscured by strong listener effects, which were characterized by marked differences in perceptual processing patterns among participants. 
Some demonstrated substantial biasing whereas others demonstrated little, indicating a strong reliance on audition even in severely degraded acoustic conditions. Listener effects were not correlated with lipreading performance. The large effect of audibility suggests that conclusions regarding an increased reliance on vision among hearing- impaired listeners were premature, and that accurate comparisons only can be made after equating audibility. Further, if after such control, individual hearing- impaired listeners demonstrate the processing differences that were demonstrated in the present investigation, then these findings have the potential to impact aural rehabilitation strategies.

  14. Intelligibility of degraded speech and the relationship between symptoms of inattention, hyperactivity/impulsivity and language impairment in children with suspected auditory processing disorder.

    PubMed

    Ahmmed, Ansar Uddin

    2017-10-01

    To compare the sensitivity and specificity of the Auditory Figure Ground sub-tests of the SCAN-3 battery, using signal to noise ratios (SNR) of +8 dB (AFG+8) and 0 dB (AFG0), in identifying auditory processing disorder (APD). A secondary objective was to evaluate any difference in auditory processing (AP) between children with symptoms of the inattention versus combined sub-types of Attention Deficit Hyperactivity Disorder (ADHD). Data from 201 children, aged 6 to 16 years (mean: 10 years 6 months, SD: 2 years 8 months), who were assessed for suspected APD were reviewed retrospectively. The outcomes of the SCAN-3 APD test battery, Swanson Nolan and Pelham-IV parental rating (SNAP-IV) and Children's Communication Checklist-2 (CCC-2) were analysed. AFG0 had a sensitivity of 56.3% and specificity of 100% in identifying children performing poorly in at least two of six SCAN-3 sub-tests or one of the two questionnaires, in contrast to 42.1% and 80% respectively for AFG+8. Impaired AP was mostly associated with symptoms of ADHD and/or language impairment (LI). LI was present in 92.9% of children with ADHD symptoms. Children with symptoms of combined ADHD plus LI performed significantly more poorly (p < 0.05) than the inattention ADHD plus LI group on the Filtered Words (FW) sub-test, but not on the rest of the SCAN-3 sub-tests. Speech-in-noise tests using an SNR of 0 dB are better than +8 dB for assessing APD. The better FW performance of the inattention ADHD plus LI group may be related to known differences in neural network activity between ADHD sub-types. The findings of the study and the existing literature suggest that neural networks connecting the cerebral hemispheres, basal ganglia and cerebellum are involved in APD, ADHD and LI. Copyright © 2017 Elsevier B.V. All rights reserved.
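    Sensitivity and specificity figures like the 56.3%/100% reported for AFG0 come from a standard confusion-matrix calculation; a generic sketch (the counts in the test below are made up for illustration, not the study's data):

    ```python
    def sensitivity(true_pos, false_neg):
        """Proportion of affected children the sub-test correctly flags."""
        return true_pos / (true_pos + false_neg)

    def specificity(true_neg, false_pos):
        """Proportion of unaffected children the sub-test correctly passes."""
        return true_neg / (true_neg + false_pos)
    ```

    A specificity of 100% means the sub-test produced no false positives in the sample; the moderate sensitivity means it missed a substantial fraction of children flagged by the reference criterion.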

  15. Investigating the influence of working memory capacity when driving behavior is combined with cognitive load: An LCT study of young novice drivers.

    PubMed

    Ross, Veerle; Jongen, Ellen M M; Wang, Weixin; Brijs, Tom; Brijs, Kris; Ruiter, Robert A C; Wets, Geert

    2014-01-01

    Distracted driving has received increasing attention in the literature due to potential adverse safety outcomes. An often-posed solution to alleviate distraction while driving is hands-free technology. Interference by distraction can occur, however, at the sensory input (e.g., visual) level, but also at the cognitive level, where hands-free technology induces working memory (WM) load. Active maintenance of goal-directed behavior in the presence of distraction depends on WM capacity (i.e., Lavie's Load theory), which implies that people with higher WM capacity are less susceptible to distractor interference. This study investigated the interaction between verbal WM load and WM capacity on driving performance to determine whether individuals with higher WM capacity were less affected by verbal WM load, leading to a smaller deterioration of driving performance. Driving performance of 46 young novice drivers (17-25 years old) was measured with the lane change task (LCT). Participants drove without and with verbal WM load of increasing complexity (auditory-verbal response N-back task). Both visuospatial and verbal WM capacity were investigated. Dependent measures were mean deviation in the lane change path (MDEV), lane change initiation (LCI) and percentage of correct lane changes (PCL). Driving experience was included as a covariate. Performance on each dependent measure deteriorated with increasing verbal WM load. Meanwhile, higher WM capacity related to better LCT performance. Finally, for LCI and PCL, participants with higher verbal WM capacity were influenced less by verbal WM load. These findings entail that completely eliminating distraction is necessary to minimize crash risks among young novice drivers. Copyright © 2013 Elsevier Ltd. All rights reserved.
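    The auditory-verbal N-back load task used here can be made concrete: the listener hears a stream of items and must respond whenever the current item matches the one presented n positions earlier; load grows with n. A sketch of target identification and hit-rate scoring (the stimulus stream in the test is hypothetical):

    ```python
    def nback_targets(stimuli, n):
        """Indices at which the current item matches the item n steps back."""
        return [i for i in range(n, len(stimuli)) if stimuli[i] == stimuli[i - n]]

    def nback_hit_rate(stimuli, n, responses):
        """Fraction of targets the participant responded to (responses = set of indices)."""
        targets = nback_targets(stimuli, n)
        if not targets:
            return 1.0
        hits = sum(1 for i in targets if i in responses)
        return hits / len(targets)
    ```

    Increasing n from 0 upward yields the "increasing complexity" manipulation described in the abstract.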

  16. Older Adult Multitasking Performance Using a Gaze-Contingent Useful Field of View.

    PubMed

    Ward, Nathan; Gaspar, John G; Neider, Mark B; Crowell, James; Carbonari, Ronald; Kaczmarski, Hank; Ringer, Ryan V; Johnson, Aaron P; Loschky, Lester C; Kramer, Arthur F

    2018-03-01

    Objective: We implemented a gaze-contingent useful field of view paradigm to examine older adult multitasking performance in a simulated driving environment. Background: Multitasking refers to the ability to manage multiple simultaneous streams of information. Recent work suggests that multitasking declines with age, yet the mechanisms supporting these declines are still debated. One possible framework to better understand this phenomenon is the useful field of view, or the area in the visual field where information can be attended and processed. In particular, the useful field of view allows for the discrimination of two competing theories of real-time multitasking, a general interference account and a tunneling account. Methods: Twenty-five older adult subjects completed a useful field of view task that involved discriminating the orientation of lines in gaze-contingent Gabor patches appearing at varying eccentricities (based on distance from the fovea) as they operated a vehicle in a driving simulator. In half of the driving scenarios, subjects also completed an auditory two-back task to manipulate cognitive workload, and during some trials, wind was introduced as a means to alter general driving difficulty. Results: Consistent with prior work, indices of driving performance were sensitive to both wind and workload. Interestingly, we also observed a decline in Gabor patch discrimination accuracy under high cognitive workload regardless of eccentricity, which provides support for a general interference account of multitasking. Conclusion: The results showed that our gaze-contingent useful field of view paradigm was able to successfully examine older adult multitasking performance in a simulated driving environment. Application: This study represents the first attempt to successfully measure dynamic changes in the useful field of view for older adults completing a multitasking scenario involving driving.
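    "Eccentricity" in a gaze-contingent display is the visual angle between the current gaze point and the probe; with the screen geometry known, it follows from simple trigonometry. A sketch in which the screen parameters (pixels per cm, viewing distance) are hypothetical:

    ```python
    import math

    def eccentricity_deg(gaze_px, probe_px, px_per_cm, viewing_distance_cm):
        """Visual angle (degrees) between gaze position and probe, both given in pixels."""
        dx = (probe_px[0] - gaze_px[0]) / px_per_cm
        dy = (probe_px[1] - gaze_px[1]) / px_per_cm
        offset_cm = math.hypot(dx, dy)
        return math.degrees(math.atan2(offset_cm, viewing_distance_cm))
    ```

    At a viewing distance of 57 cm, 1 cm on the screen subtends roughly 1 degree, a common rule of thumb for placing probes at the desired eccentricities.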

  17. Role of Binaural Temporal Fine Structure and Envelope Cues in Cocktail-Party Listening.

    PubMed

    Swaminathan, Jayaganesh; Mason, Christine R; Streeter, Timothy M; Best, Virginia; Roverud, Elin; Kidd, Gerald

    2016-08-03

    While conversing in a crowded social setting, a listener is often required to follow a target speech signal amid multiple competing speech signals (the so-called "cocktail party" problem). In such situations, separation of the target speech signal in azimuth from the interfering masker signals can lead to an improvement in target intelligibility, an effect known as spatial release from masking (SRM). This study assessed the contributions of two stimulus properties that vary with separation of sound sources, binaural envelope (ENV) and temporal fine structure (TFS), to SRM in normal-hearing (NH) human listeners. Target speech was presented from the front and speech maskers were either colocated with or symmetrically separated from the target in azimuth. The target and maskers were presented either as natural speech or as "noise-vocoded" speech in which the intelligibility was conveyed only by the speech ENVs from several frequency bands; the speech TFS within each band was replaced with noise carriers. The experiments were designed to preserve the spatial cues in the speech ENVs while retaining/eliminating them from the TFS. This was achieved by using the same/different noise carriers in the two ears. A phenomenological auditory-nerve model was used to verify that the interaural correlations in TFS differed across conditions, whereas the ENVs retained a high degree of correlation, as intended. Overall, the results from this study revealed that binaural TFS cues, especially for frequency regions below 1500 Hz, are critical for achieving SRM in NH listeners. Potential implications for studying SRM in hearing-impaired listeners are discussed. Acoustic signals received by the auditory system pass first through an array of physiologically based band-pass filters. 
Conceptually, at the output of each filter, there are two principal forms of temporal information: slowly varying fluctuations in the envelope (ENV) and rapidly varying fluctuations in the temporal fine structure (TFS). The importance of these two types of information in everyday listening (e.g., conversing in a noisy social situation; the "cocktail-party" problem) has not been established. This study assessed the contributions of binaural ENV and TFS cues for understanding speech in multiple-talker situations. Results suggest that, whereas the ENV cues are important for speech intelligibility, binaural TFS cues are critical for perceptually segregating the different talkers and thus for solving the cocktail party problem. Copyright © 2016 the authors 0270-6474/16/368250-08$15.00/0.
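    The key manipulation, same versus different noise carriers at the two ears, can be checked directly: identical carriers give an interaural TFS correlation near 1, independent carriers give a correlation near 0, while a shared envelope leaves the ENV cue intact in both cases. A minimal numerical check of the carrier statistics, not the authors' auditory-nerve model:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 48000  # one second at 48 kHz

    carrier_left = rng.standard_normal(n)
    carrier_same = carrier_left              # same carrier in both ears: TFS correlated
    carrier_diff = rng.standard_normal(n)    # independent carrier: TFS decorrelated

    corr_same = np.corrcoef(carrier_left, carrier_same)[0, 1]
    corr_diff = np.corrcoef(carrier_left, carrier_diff)[0, 1]
    ```

    Multiplying both carriers by the same speech-band envelope (as in noise vocoding) preserves the interaural ENV correlation regardless of which carrier condition is used.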

  18. Local navigation and fuzzy control realization for autonomous guided vehicle

    NASA Astrophysics Data System (ADS)

    El-Konyaly, El-Sayed H.; Saraya, Sabry F.; Shehata, Raef S.

    1996-10-01

    This paper addresses the problem of local navigation for an autonomous guided vehicle (AGV) in a structured environment that contains static and dynamic obstacles. Information about the environment is obtained via a CCD camera. The problem is formulated as a dynamic feedback control problem in which speed and steering decisions are made on the fly while the AGV is moving. A decision element (DE) that uses local information is proposed. The DE guides the vehicle in the environment by producing appropriate navigation decisions. Dynamic models of a three-wheeled vehicle for driving and steering mechanisms are derived. The interaction between them is performed via the local feedback DE. A controller, based on fuzzy logic, is designed to drive the vehicle safely in an intelligent and human-like manner. The effectiveness of the navigation and control strategies in driving the AGV is illustrated and evaluated.
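    A fuzzy steering rule of the kind described maps a fuzzified lateral offset (e.g. "left", "centred", "right") to a steering command via membership functions and weighted-average defuzzification. A toy sketch; the memberships, universe of discourse, and output angles are illustrative, not the paper's controller:

    ```python
    def triangle(x, a, b, c):
        """Triangular membership function peaking at b, zero outside [a, c]."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def fuzzy_steering(offset_m):
        """Map lateral offset (m, positive = right of path) to a steering angle (deg)."""
        mu_left = triangle(offset_m, -2.0, -1.0, 0.0)
        mu_centre = triangle(offset_m, -1.0, 0.0, 1.0)
        mu_right = triangle(offset_m, 0.0, 1.0, 2.0)
        # Rules: offset left -> steer right (+15); centred -> straight; offset right -> steer left (-15)
        weights = [mu_left, mu_centre, mu_right]
        outputs = [15.0, 0.0, -15.0]
        total = sum(weights)
        if total == 0.0:
            return -15.0 if offset_m > 0 else 15.0  # outside the universe: saturate
        return sum(w * o for w, o in zip(weights, outputs)) / total
    ```

    The weighted average makes the command vary smoothly with the offset, which is what gives fuzzy controllers their "human-like" driving feel.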

  19. NEPC Review: "New York Charter Schools Outperform Traditional Selective Public Schools--More Evidence That Cream-Skimming Is Not Driving Charters' Success"

    ERIC Educational Resources Information Center

    Cordes, Sarah A.

    2017-01-01

    A common argument leveled against charter schools is that they attract the most motivated and intelligent students from already struggling public schools. Marcus Winters seeks to examine this claim, known as "cream-skimming," by comparing the performance of New York City's (NYC) charter middle schools with a set of traditional selective…

  20. Bassoon-disruption slows vesicle replenishment and induces homeostatic plasticity at a CNS synapse

    PubMed Central

    Mendoza Schulz, Alejandro; Jing, Zhizi; Sánchez Caro, Juan María; Wetzel, Friederike; Dresbach, Thomas; Strenzke, Nicola; Wichmann, Carolin; Moser, Tobias

    2014-01-01

    Endbulb of Held terminals of auditory nerve fibers (ANF) transmit auditory information at rates of hundreds of spikes per second to bushy cells (BCs) in the anteroventral cochlear nucleus (AVCN). Here, we studied the structure and function of endbulb synapses in mice that lack the presynaptic scaffold bassoon and exhibit reduced ANF input into the AVCN. Endbulb terminals and active zones were normal in number and vesicle complement. Postsynaptic densities, quantal size and vesicular release probability were increased, while vesicle replenishment and the standing pool of readily releasable vesicles were reduced. These opposing effects canceled each other out for the first evoked EPSC, which showed unaltered amplitude. We propose that ANF activity deprivation drives homeostatic plasticity in the AVCN involving synaptic upscaling and increased intrinsic BC excitability. In vivo recordings from individual mutant BCs demonstrated a slightly improved response at sound onset compared to ANF, likely reflecting the combined effects of ANF convergence and homeostatic plasticity. Further, we conclude that bassoon promotes vesicle replenishment and, consequently, a large standing pool of readily releasable synaptic vesicles at the endbulb synapse. PMID:24442636
