Science.gov

Sample records for audio machine text-to-speech

  1. Audio-Visual Teaching Machines.

    ERIC Educational Resources Information Center

    Dorsett, Loyd G.

    An audiovisual teaching machine (AVTM) presents programmed audio and visual material simultaneously to a student and accepts his response. If the response is correct, the machine proceeds with the lesson; if it is incorrect, the machine so indicates and either permits another choice (linear) or automatically presents supplementary material (branching).…

  2. Evaluating Text-to-Speech Synthesizers

    ERIC Educational Resources Information Center

    Cardoso, Walcir; Smith, George; Fuentes, Cesar Garcia

    2015-01-01

    Text-To-Speech (TTS) synthesizers have piqued the interest of researchers for their potential to enhance the L2 acquisition of writing (Kirstein, 2006), vocabulary and reading (Proctor, Dalton, & Grisham, 2007) and pronunciation (Cardoso, Collins, & White, 2012; Soler-Urzua, 2011). Despite their proven effectiveness, there is a need for…

  3. Building a Prototype Text to Speech for Sanskrit

    NASA Astrophysics Data System (ADS)

    Mahananda, Baiju; Raju, C. M. S.; Patil, Ramalinga Reddy; Jha, Narayana; Varakhedi, Shrinivasa; Kishore, Prahallad

    This paper describes the work done in building a prototype text-to-speech system for Sanskrit. A basic prototype is built using a simplified Sanskrit phone set and a unit selection technique, in which prerecorded sub-word units are concatenated to synthesize a sentence. We also discuss the issues involved in building a full-fledged text-to-speech system for Sanskrit.

  4. Choosing and Using Text-to-Speech Software

    ERIC Educational Resources Information Center

    Peters, Tom; Bell, Lori

    2007-01-01

    This article describes a computer-based technology for generating speech called text-to-speech (TTS). This software is ready for widespread use by libraries, other organizations, and individual users. It offers the affordable ability to turn just about any electronic text that is not image-based into an artificially spoken communication. The…

  6. The Study and Implementation of Text-to-Speech System for Agricultural Information

    NASA Astrophysics Data System (ADS)

    Zheng, Huoguo; Hu, Haiyan; Liu, Shihong; Meng, Hong

    Broadcast and television coverage has increased to more than 98% in China. Information services delivered by radio offer wide coverage and low cost and are easily accepted by grass-roots farmers. To make better use of broadcast information services, and to address the shortage of information resources in rural areas, we researched and developed a text-to-speech system. The system consists of two parts, a software subsystem and a hardware device, both of which convert text into audio files. The software subsystem was implemented on top of third-party middleware, and the hardware subsystem was realized with microelectronics technology. Results indicate that the hardware subsystem outperforms the software one. The system has been deployed in Huailai, Hebei Province, where it has converted more than 8,000 audio files for use as program material by the local radio station.

  7. "Look What I Did!": Student Conferences with Text-to-Speech Software

    ERIC Educational Resources Information Center

    Young, Chase; Stover, Katie

    2014-01-01

    The authors describe a strategy that empowers students to edit and revise their own writing. Students input their writing into text-to-speech software that reads the text back aloud. While listening, students make the necessary revisions and edits.

  8. The Effects of Word Prediction and Text-to-Speech on the Writing Process of Translating

    ERIC Educational Resources Information Center

    Cunningham, Robert

    2013-01-01

    The purpose of this study was to determine the effects of the combination of word prediction and text-to-speech software on the writing process of translating. Participants for this study included 10 elementary and middle school students who had a diagnosis of disorder of written expression. A modified multiple case series was used to collect data…

  9. Orthographic Learning and the Role of Text-to-Speech Software in Dutch Disabled Readers

    ERIC Educational Resources Information Center

    Staels, Eva; Van den Broeck, Wim

    2015-01-01

    In this study, we examined whether orthographic learning can be demonstrated in disabled readers learning to read in a transparent orthography (Dutch). In addition, we tested the effect of the use of text-to-speech software, a new form of direct instruction, on orthographic learning. Both research goals were investigated by replicating…

  10. Using Text-to-Speech Reading Support for an Adult with Mild Aphasia and Cognitive Impairment

    ERIC Educational Resources Information Center

    Harvey, Judy; Hux, Karen; Snell, Jeffry

    2013-01-01

    This single case study served to examine text-to-speech (TTS) effects on reading rate and comprehension in an individual with mild aphasia and cognitive impairment. Findings showed faster reading, given TTS presented at a normal speaking rate, but no significant comprehension changes. TTS may support reading in people with aphasia when time…

  12. Perception of synthetic speech produced automatically by rule: Intelligibility of eight text-to-speech systems

    PubMed Central

    GREENE, BETH G.; LOGAN, JOHN S.; PISONI, DAVID B.

    2012-01-01

    We present the results of studies designed to measure the segmental intelligibility of eight text-to-speech systems and a natural speech control, using the Modified Rhyme Test (MRT). Results indicated that the voices tested could be grouped into four categories: natural speech, high-quality synthetic speech, moderate-quality synthetic speech, and low-quality synthetic speech. The overall performance of the best synthesis system, DECtalk-Paul, was equivalent to natural speech only in terms of performance on initial consonants. The findings are discussed in terms of recent work investigating the perception of synthetic speech under more severe conditions. Suggestions for future research on improving the quality of synthetic speech are also considered. PMID:23225916

  13. A Case Study: Design Factors for Instruction with an Audio-Visual Response Teaching Machine

    ERIC Educational Resources Information Center

    Foote, B. L.

    1973-01-01

    Discusses experience of using Dorsett M-86 audio-visual response teaching machines to teach basic statistics courses. Recommends further research in the design of the optimal mix of various teaching methods for a block of instruction. (CC)

  14. Advancements in text-to-speech technology and implications for AAC applications

    NASA Astrophysics Data System (ADS)

    Syrdal, Ann K.

    2003-10-01

    Intelligibility was the initial focus in text-to-speech (TTS) research, since it is clearly a necessary condition for the application of the technology. Sufficiently high intelligibility (approximating human speech) has been achieved in the last decade by the better formant-based and concatenative TTS systems. This led to commercially available TTS systems for highly motivated users, particularly the blind and vocally impaired. Some unnatural qualities of TTS were exploited by these users, such as very fast speaking rates and altered pitch ranges for flagging relevant information. Recently, the focus in TTS research has turned to improving naturalness, so that synthetic speech sounds more human and less robotic. Unit selection approaches to concatenative synthesis have dramatically improved TTS quality, although at the cost of larger and more complex systems. This advancement in naturalness has made TTS technology more acceptable to the general public. The vocally impaired appreciate a more natural voice with which to represent themselves when communicating with others. Unit selection TTS does not achieve such high speaking rates as the earlier TTS systems, however, which is a disadvantage to some AAC device users. An important new research emphasis is to improve and increase the range of emotional expressiveness of TTS.

  15. A Variable Break Prediction Method Using CART in a Japanese Text-to-Speech System

    NASA Astrophysics Data System (ADS)

    Na, Deok-Su; Bae, Myung-Jin

    Break prediction is an important step in text-to-speech systems, as break indices (BIs) strongly influence how prosodic phrase boundaries are represented. Accurate prediction is difficult, however, since BIs are often chosen according to the meaning of a sentence or the reading style of the speaker. In Japanese, predicting the accentual phrase boundary (APB) and the major phrase boundary (MPB) is particularly difficult. This paper therefore presents a method to compensate for APB and MPB prediction errors. First, we define a BI for which the choice between an APB and an MPB is unclear as a variable break (VB), and an explicit BI as a fixed break (FB). VBs are identified using a classification and regression tree (CART), and multiple prosodic targets for pitch and duration are then generated. Finally, unit selection is conducted using the multiple prosodic targets. Experimental results show that the proposed method improves the naturalness of the synthesized speech.
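
    A rough illustration of the CART step, assuming nothing about the paper's actual feature set: the sketch below trains a generic decision tree on invented boundary features and treats low-confidence predictions as variable breaks (VBs), leaving the final choice to unit selection.

      # Hedged sketch: a generic CART-style classifier for prosodic break indices.
      # Feature values and names are illustrative placeholders, not the paper's features.
      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      # Toy features per phrase boundary: [syllables_before, syllables_after, is_particle]
      X = np.array([[3, 4, 0], [7, 2, 1], [2, 6, 0], [5, 5, 1], [8, 3, 0], [4, 7, 1]])
      # Labels: 0 = accentual phrase boundary (APB), 1 = major phrase boundary (MPB)
      y = np.array([0, 1, 0, 1, 1, 0])

      cart = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

      # Treat low-confidence boundaries as "variable breaks" (VB): keep multiple
      # prosodic targets and let unit selection decide, as the paper proposes.
      proba = cart.predict_proba(np.array([[4, 4, 1]]))[0]
      is_variable_break = abs(proba[0] - proba[1]) < 0.4
      print(proba, "VB" if is_variable_break else "FB")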

  16. Supporting Reading Comprehension of At-Risk Pre-Adolescent Readers through the Use of Text-to-Speech Technology Paired with Strategic Instruction

    ERIC Educational Resources Information Center

    Anderson, Susan D.

    2009-01-01

    This research highlighted the use of text-to-speech technology and current shifts in strategy-based reading instruction in order to address the comprehension needs of struggling pre-adolescent readers. The following questions were posed: (a) Does reading comprehension of preadolescent struggling readers improve as the direct result of using…

  17. The Effects of Word Prediction and Text-to-Speech Technologies on the Narrative Writing Skills of Hispanic Students with Specific Learning Disabilities

    ERIC Educational Resources Information Center

    Silio, Monica C.; Barbetta, Patricia M.

    2010-01-01

    A multiple-baseline design across subjects was used to investigate the effects of word prediction and text-to-speech alone and in combination on four narrative composition-writing skills (writing fluency, syntax, spelling accuracy, and overall organization) of six fifth-grade Hispanic boys with specific learning disabilities (SLD). Participants…

  18. Using TTS Voices to Develop Audio Materials for Listening Comprehension: A Digital Approach

    ERIC Educational Resources Information Center

    Sha, Guoquan

    2010-01-01

    This paper reports a series of experiments with text-to-speech (TTS) voices. These experiments have been conducted to develop audio materials for listening comprehension as an alternative technology to traditionally used audio equipment like the compact cassette. The new generation of TTS voices based on unit selection synthesis provides…

  20. Audio 2008: Audio Fixation

    ERIC Educational Resources Information Center

    Kaye, Alan L.

    2008-01-01

    Take a look around the bus or subway and see just how many people are bumping along to an iPod or an MP3 player. What they are listening to is their secret, but the many signature earbuds in sight should give one a real sense of just how pervasive digital audio has become. This article describes how that popularity is mirrored in library audio…

  2. Targeted Audio

    NASA Astrophysics Data System (ADS)

    Olszewski, Dirk

    Targeted audio aims at creating personal listening zones by suitable technical measures. A person inside such a listening zone should be able to perceive acoustically transmitted information without disturbing other persons outside the desired listening zone. To fulfill this demand, a highly directional audible sound beam is favored. The sound beam is aimed at the respective listening-zone target, hence the expression targeted audio.

  3. Audio Restoration

    NASA Astrophysics Data System (ADS)

    Esquef, Paulo A. A.

    The first reproducible recording of human voice was made in 1877 on a tinfoil cylinder phonograph devised by Thomas A. Edison. Since then, much effort has been expended to find better ways to record and reproduce sounds. By the mid-1920s, the first electrical recordings appeared and gradually took over purely acoustic recordings. The development of electronic computers, in conjunction with the ability to record data onto magnetic or optical media, culminated in the standardization of compact disc format in 1980. Nowadays, digital technology is applied to several audio applications, not only to improve the quality of modern and old recording/reproduction techniques, but also to trade off sound quality for less storage space and less taxing transmission capacity requirements.

  4. Detecting double compression of audio signal

    NASA Astrophysics Data System (ADS)

    Yang, Rui; Shi, Yun Q.; Huang, Jiwu

    2010-01-01

    MP3 is the most popular audio format in daily life; for example, music downloaded from the Internet and files saved by digital recorders are often in MP3 format. However, low-bitrate MP3s are often transcoded to a higher bitrate, since high-bitrate files have higher commercial value. Audio recordings made on digital recorders can also be doctored easily with pervasive audio editing software. This paper presents two methods for detecting double MP3 compression, which are essential for identifying fake-quality MP3s and for audio forensics. The proposed methods use support vector machine classifiers with feature vectors formed from the distributions of the first digits of the quantized MDCT (modified discrete cosine transform) coefficients. Extensive experiments demonstrate the effectiveness of the proposed methods. To the best of our knowledge, this is the first work to detect double compression of audio signals.
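
    The abstract outlines the pipeline but not the implementation; the sketch below illustrates the general idea with placeholder data: compute the first-digit distribution of quantized MDCT-like coefficients and feed it to an SVM. Parsing the real MDCT coefficients out of an MP3 bitstream is not shown.

      # Hedged sketch of the feature/classifier pipeline: first-digit statistics of
      # (placeholder) quantized MDCT coefficients fed to an SVM.
      import numpy as np
      from sklearn.svm import SVC

      def first_digit_histogram(coeffs):
          """Return the normalized distribution of leading digits 1..9."""
          mags = np.abs(coeffs[coeffs != 0]).astype(float)
          first_digits = (mags / 10 ** np.floor(np.log10(mags))).astype(int)
          hist = np.bincount(first_digits, minlength=10)[1:10]
          return hist / hist.sum()

      rng = np.random.default_rng(0)
      # Placeholder "single" vs "double" compressed coefficient blocks.
      X = np.array([first_digit_histogram(rng.laplace(0, s, 2048).round())
                    for s in ([30] * 50 + [8] * 50)])
      y = np.array([0] * 50 + [1] * 50)

      clf = SVC(kernel="rbf").fit(X, y)
      print(clf.predict(X[:3]))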

  5. Audio-visual affective expression recognition

    NASA Astrophysics Data System (ADS)

    Huang, Thomas S.; Zeng, Zhihong

    2007-11-01

    Automatic affective expression recognition has attracted increasing attention from researchers in different disciplines, and it will contribute significantly to a new paradigm for human-computer interaction (affect-sensitive interfaces, socially intelligent environments) and advance research in affect-related fields including psychology, psychiatry, and education. Multimodal information integration is a process that enables humans to assess affective states robustly and flexibly. To understand the richness and subtlety of human emotional behavior, the computer should be able to integrate information from multiple sensors. We introduce in this paper our efforts toward machine understanding of audio-visual affective behavior, based on both deliberate and spontaneous displays. Some promising methods are presented to integrate information from both audio and visual modalities. Our experiments show the advantage of audio-visual fusion in affective expression recognition over audio-only or visual-only approaches.

  6. Perceptually Based Audio Coding

    NASA Astrophysics Data System (ADS)

    Houtsma, Adrianus J. M.

    High-quality audio is a concept that is not exactly defined and not always properly understood. To some, it refers directly to the physical similarity between a real sound field and its electroacoustical reproduction. In this viewpoint, acoustical knowledge and electronic technology are the only limiting factors preventing audio quality from being perfect. To others, however, audio quality refers to the audible similarity between a real life sound event and an electronic reproduction. Given this viewpoint, the human auditory system with all its limitations becomes an essential factor determining audio quality.

  7. Audio signal management techniques

    NASA Astrophysics Data System (ADS)

    Anderson, A. P.; Lane, J. K.; Pudliner, B. K.

    1983-02-01

    The objective of the Audio Signal Management technical program was to design and develop an Exploratory Development Model Audio Signal Management System (ASMS). This system is to be used to test and evaluate present and future voice data entry algorithms, processing techniques, and hardware modules. The ASMS consists of internal functions implemented on the RADC PDP 11/70 computer, external functions implemented in stand-alone hardware devices, an Audio Distribution Network (ADN) for shaping and routing audio signals, and an ADP Data entry communication interface/keyboard translator with HP 2645A terminal for function control and transcription.

  8. Audio signal processor

    NASA Technical Reports Server (NTRS)

    Hymer, R. L.

    1970-01-01

    System provides automatic volume control for an audio amplifier or a voice communication system without introducing noise surges during pauses in the input, and without losing the initial signal when the input resumes.

  9. Real World Audio

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Crystal River Engineering was originally featured in Spinoff 1992 with the Convolvotron, a high-speed digital audio processing system that delivers three-dimensional sound over headphones. The Convolvotron was developed for Ames' research on virtual acoustic displays. Crystal River is now a subsidiary of Aureal Semiconductor, Inc., and together they develop and market the technology, a 3-D (three-dimensional) audio technology known commercially today as Aureal 3D (A-3D). The technology has been incorporated into video games, surround sound systems, and sound cards.

  10. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.

  11. Perceptual Audio Hashing Functions

    NASA Astrophysics Data System (ADS)

    Özer, Hamza; Sankur, Bülent; Memon, Nasir; Anarım, Emin

    2005-12-01

    Perceptual hash functions provide a tool for fast and reliable identification of content. We present new audio hash functions based on summarization of the time-frequency spectral characteristics of an audio document. The proposed hash functions are based on the periodicity series of the fundamental frequency and on singular-value description of the cepstral frequencies. They are found, on one hand, to perform very satisfactorily in identification and verification tests, and on the other hand, to be very resilient to a large variety of attacks. Moreover, we address the issue of security of hashes and propose a keying technique, and thereby a key-dependent hash function.
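
    The abstract does not spell out the algorithm, so the sketch below is only a generic stand-in: a Haitsma-Kalker style band-energy fingerprint that summarizes time-frequency characteristics into a bit matrix, rather than the authors' periodicity and cepstral-SVD features.

      # Hedged illustration only: a band-energy difference hash, standing in for the
      # paper's periodicity/cepstral-SVD summarization (which is not reproduced here).
      import numpy as np

      def band_energy_hash(x, fs, n_bands=17, frame=2048, hop=512):
          """Return a binary fingerprint: sign of band-energy differences over time."""
          frames = [x[i:i + frame] * np.hanning(frame)
                    for i in range(0, len(x) - frame, hop)]
          spectra = np.abs(np.fft.rfft(frames, axis=1)) ** 2
          edges = np.linspace(0, spectra.shape[1], n_bands + 1, dtype=int)
          energies = np.array([[s[edges[b]:edges[b + 1]].sum() for b in range(n_bands)]
                               for s in spectra])
          # Bit (t, b) = 1 if the band-to-band energy difference grows frame to frame.
          diff = np.diff(energies, axis=1)
          return (np.diff(diff, axis=0) > 0).astype(np.uint8)

      fs = 16000
      t = np.arange(fs * 2) / fs
      bits = band_energy_hash(np.sin(2 * np.pi * 440 * t), fs)
      print(bits.shape)  # (n_frames - 1, n_bands - 1) bit matrix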

  12. Audio Feedback -- Better Feedback?

    ERIC Educational Resources Information Center

    Voelkel, Susanne; Mello, Luciane V.

    2014-01-01

    National Student Survey (NSS) results show that many students are dissatisfied with the amount and quality of feedback they get for their work. This study reports on two case studies in which we tried to address these issues by introducing audio feedback to one undergraduate (UG) and one postgraduate (PG) class, respectively. In case study one…

  13. Audio Feedback -- Better Feedback?

    ERIC Educational Resources Information Center

    Voelkel, Susanne; Mello, Luciane V.

    2014-01-01

    National Student Survey (NSS) results show that many students are dissatisfied with the amount and quality of feedback they get for their work. This study reports on two case studies in which we tried to address these issues by introducing audio feedback to one undergraduate (UG) and one postgraduate (PG) class, respectively. In case study one…

  14. Efficient audio signal processing for embedded systems

    NASA Astrophysics Data System (ADS)

    Chiu, Leung Kin

    As mobile platforms continue to pack on more computational power, electronics manufacturers have started to differentiate their products by enhancing the audio features. However, consumers also demand smaller devices that can operate for longer, which imposes design constraints. In this research, we investigate two design strategies that allow audio signals to be processed efficiently on embedded systems such as mobile phones and portable electronics. In the first strategy, we exploit properties of the human auditory system to process audio signals. We designed a sound enhancement algorithm to make piezoelectric loudspeakers sound "richer" and "fuller." Piezoelectric speakers have a small form factor but exhibit poor response in the low-frequency region. In the algorithm, we combine psychoacoustic bass extension and dynamic range compression to improve the perceived bass coming out of the tiny speakers. We also developed an audio energy reduction algorithm for loudspeaker power management. The perceptually transparent algorithm extends the battery life of mobile devices and prevents thermal damage in speakers. This method is similar to audio compression algorithms, which encode audio signals in such a way that the compression artifacts are not easily perceivable. Instead of reducing the storage space, however, we suppress the audio content that is below the hearing threshold, thereby reducing the signal energy. In the second strategy, we use low-power analog circuits to process the signal before digitizing it. We designed an analog front-end for sound detection and implemented it on a field-programmable analog array (FPAA). The system is an example of an analog-to-information converter. The sound classifier front-end can be used in a wide range of applications because programmable floating-gate transistors are employed to store the classifier weights. Moreover, we incorporated a feature selection algorithm to simplify the analog front-end. The machine learning algorithm AdaBoost is used to select the most relevant features for a particular sound detection application. In this classifier architecture, we combine simple "base" analog classifiers to form a strong one. We also designed the circuits to implement the AdaBoost-based analog classifier.
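
    As a generic illustration of the kind of low-cost envelope processing discussed (not the thesis's psychoacoustic bass-extension or energy-reduction algorithms), a minimal feed-forward dynamic range compressor can be sketched as follows.

      # Hedged sketch: a basic feed-forward dynamic range compressor with one-pole
      # envelope smoothing; parameters are illustrative defaults.
      import numpy as np

      def compress(x, fs, threshold_db=-20.0, ratio=4.0, attack_ms=5.0, release_ms=50.0):
          """Apply gain reduction above threshold_db, following the signal envelope."""
          att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
          rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
          env = 0.0
          out = np.empty_like(x)
          for n, sample in enumerate(x):
              level = abs(sample)
              coeff = att if level > env else rel
              env = coeff * env + (1.0 - coeff) * level
              level_db = 20.0 * np.log10(max(env, 1e-9))
              over = max(level_db - threshold_db, 0.0)
              gain_db = -over * (1.0 - 1.0 / ratio)   # static compression curve
              out[n] = sample * 10.0 ** (gain_db / 20.0)
          return out

      fs = 16000
      t = np.arange(fs) / fs
      y = compress(0.9 * np.sin(2 * np.pi * 220 * t), fs)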

  15. Audio distribution and Monitoring Circuit

    NASA Technical Reports Server (NTRS)

    Kirkland, J. M.

    1983-01-01

    Versatile circuit accepts and distributes TV audio signals. The three-meter audio distribution and monitoring circuit provides flexibility in monitoring, mixing, and distributing audio inputs and outputs at various signal and impedance levels. Program material is simultaneously monitored on three channels, or a single-channel version can be built to monitor transmitted or received signal levels, drive speakers, interface to building communications, and drive long-line circuits.

  16. Hiding Data in Audio Signal

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Debnath; Dutta, Poulami; Balitanas, Maricel O.; Kim, Tai-Hoon; Das, Purnendu

    This paper describes the LSB technique for secure data transfer. Secret information can be hidden inside all sorts of cover information: text, images, audio, video, and more. Embedding secret messages in digital sound is usually a more difficult process. A variety of techniques for embedding information in digital audio have been established, including parity coding, phase coding, spread spectrum, echo hiding, and LSB insertion. Least significant bit (LSB) insertion is one of the simplest approaches to embedding information in an audio file.
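
    The simplest variant described, plain LSB insertion into PCM samples, can be sketched as follows; the 16-bit sample format and one-bit-per-sample packing are assumptions of this illustration, not details from the paper.

      # Hedged sketch of plain LSB insertion into 16-bit PCM samples,
      # hiding one message bit in the least significant bit of each sample.
      import numpy as np

      def lsb_embed(samples, message: bytes):
          """Hide message bits in the LSB of each int16 sample."""
          bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
          if bits.size > samples.size:
              raise ValueError("cover audio too short for this message")
          stego = samples.copy()
          stego[:bits.size] = (stego[:bits.size] & ~1) | bits
          return stego

      def lsb_extract(stego, n_bytes):
          bits = (stego[:n_bytes * 8] & 1).astype(np.uint8)
          return np.packbits(bits).tobytes()

      cover = np.random.default_rng(1).integers(-2000, 2000, 8000).astype(np.int16)
      stego = lsb_embed(cover, b"secret")
      print(lsb_extract(stego, 6))  # b'secret'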

  17. The Lowdown on Audio Downloads

    ERIC Educational Resources Information Center

    Farrell, Beth

    2010-01-01

    First offered to public libraries in 2004, downloadable audiobooks have grown by leaps and bounds. According to the Audio Publishers Association, their sales today account for 21% of the spoken-word audio market. It hasn't been easy, however. WMA. DRM. MP3. AAC. File extensions small on letters but very big on consequences for librarians,…

  19. Engaging Students with Audio Feedback

    ERIC Educational Resources Information Center

    Cann, Alan

    2014-01-01

    Students express widespread dissatisfaction with academic feedback. Teaching staff perceive a frequent lack of student engagement with written feedback, much of which goes uncollected or unread. Published evidence shows that audio feedback is highly acceptable to students but is underused. This paper explores methods to produce and deliver audio…

  20. Metrological digital audio reconstruction

    DOEpatents

    Fadeyev, Vitaliy; Haber, Carl

    2004-02-19

    Audio information stored in the undulations of grooves in a medium such as a phonograph record may be reconstructed, with little or no contact, by measuring the groove shape using precision metrology methods coupled with digital image processing and numerical analysis. The effects of damage, wear, and contamination may be compensated, in many cases, through image processing and analysis methods. The speed and data handling capacity of available computing hardware make this approach practical. Two examples used a general purpose optical metrology system to study a 50 year old 78 r.p.m. phonograph record and a commercial confocal scanning probe to study a 1920's celluloid Edison cylinder. Comparisons are presented with stylus playback of the samples and with a digitally re-mastered version of an original magnetic recording. There is also a more extensive implementation of this approach, with dedicated hardware and software.

  1. A centralized audio presentation manager

    SciTech Connect

    Papp, A.L. III; Blattner, M.M.

    1994-05-16

    The centralized audio presentation manager addresses the problems which occur when multiple programs running simultaneously attempt to use the audio output of a computer system. Time dependence of sound means that certain auditory messages must be scheduled simultaneously, which can lead to perceptual problems due to psychoacoustic phenomena. Furthermore, the combination of speech and nonspeech audio is examined; each presents its own problems of perceptibility in an acoustic environment composed of multiple auditory streams. The centralized audio presentation manager receives abstract parameterized message requests from the currently running programs, and attempts to create and present a sonic representation in the most perceptible manner through the use of a theoretically and empirically designed rule set.

  2. 47 CFR 73.403 - Digital audio broadcasting service requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... analog audio programming on one of its digital audio programming streams. The DAB audio programming... analog programming service currently provided to listeners. (b) Emergency information. The...

  3. 47 CFR 73.403 - Digital audio broadcasting service requirements.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... analog audio programming on one of its digital audio programming streams. The DAB audio programming... analog programming service currently provided to listeners. (b) Emergency information. The...

  4. 47 CFR 73.403 - Digital audio broadcasting service requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... analog audio programming on one of its digital audio programming streams. The DAB audio programming... analog programming service currently provided to listeners. (b) Emergency information. The...

  5. 47 CFR 73.403 - Digital audio broadcasting service requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... analog audio programming on one of its digital audio programming streams. The DAB audio programming... analog programming service currently provided to listeners. (b) Emergency information. The...

  6. Advances in audio source separation and multisource audio content retrieval

    NASA Astrophysics Data System (ADS)

    Vincent, Emmanuel

    2012-06-01

    Audio source separation aims to extract the signals of individual sound sources from a given recording. In this paper, we review three recent advances which improve the robustness of source separation in real-world challenging scenarios and enable its use for multisource content retrieval tasks, such as automatic speech recognition (ASR) or acoustic event detection (AED) in noisy environments. We present a Flexible Audio Source Separation Toolkit (FASST) and discuss its advantages compared to earlier approaches such as independent component analysis (ICA) and sparse component analysis (SCA). We explain how cues as diverse as harmonicity, spectral envelope, temporal fine structure or spatial location can be jointly exploited by this toolkit. We subsequently present the uncertainty decoding (UD) framework for the integration of audio source separation and audio content retrieval. We show how the uncertainty about the separated source signals can be accurately estimated and propagated to the features. Finally, we explain how this uncertainty can be efficiently exploited by a classifier, both at the training and the decoding stage. We illustrate the resulting performance improvements in terms of speech separation quality and speaker recognition accuracy.

  7. The Timbre Toolbox: extracting audio descriptors from musical signals.

    PubMed

    Peeters, Geoffroy; Giordano, Bruno L; Susini, Patrick; Misdariis, Nicolas; McAdams, Stephen

    2011-11-01

    The analysis of musical signals to extract audio descriptors that can potentially characterize their timbre has been disparate and often too focused on a particular small set of sounds. The Timbre Toolbox provides a comprehensive set of descriptors that can be useful in perceptual research, as well as in music information retrieval and machine-learning approaches to content-based retrieval in large sound databases. Sound events are first analyzed in terms of various input representations (short-term Fourier transform, harmonic sinusoidal components, an auditory model based on the equivalent rectangular bandwidth concept, the energy envelope). A large number of audio descriptors are then derived from each of these representations to capture temporal, spectral, spectrotemporal, and energetic properties of the sound events. Some descriptors are global, providing a single value for the whole sound event, whereas others are time-varying. Robust descriptive statistics are used to characterize the time-varying descriptors. To examine the information redundancy across audio descriptors, correlational analysis followed by hierarchical clustering is performed. This analysis suggests ten classes of relatively independent audio descriptors, showing that the Timbre Toolbox is a multidimensional instrument for the measurement of the acoustical structure of complex sound signals. PMID:22087919
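
    The redundancy analysis step (correlation followed by hierarchical clustering) can be sketched as below; the descriptor matrix is random placeholder data rather than Timbre Toolbox output, and the linkage method and cut threshold are assumptions of this illustration.

      # Hedged sketch of the redundancy analysis: hierarchical clustering of audio
      # descriptors using 1 - |correlation| as the distance between descriptors.
      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from scipy.spatial.distance import squareform

      rng = np.random.default_rng(0)
      descriptors = rng.normal(size=(200, 12))          # 200 sounds x 12 descriptors
      corr = np.corrcoef(descriptors, rowvar=False)     # descriptor-by-descriptor correlation
      dist = 1.0 - np.abs(corr)
      np.fill_diagonal(dist, 0.0)

      Z = linkage(squareform(dist, checks=False), method="average")
      classes = fcluster(Z, t=0.8, criterion="distance")
      print(classes)   # cluster label per descriptor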

  8. Digital Audio Application to Short Wave Broadcasting

    NASA Technical Reports Server (NTRS)

    Chen, Edward Y.

    1997-01-01

    Digital audio is becoming prevalent not only in consumer electronics, but also in different broadcasting media. Terrestrial analog audio broadcasting in the AM and FM bands will eventually be replaced by digital systems.

  9. QRDA: Quantum Representation of Digital Audio

    NASA Astrophysics Data System (ADS)

    Wang, Jian

    2016-03-01

    Multimedia refers to content that uses a combination of different content forms; its two main media are image and audio. However, in contrast with the rapid development of quantum image processing, quantum audio has been studied very little. To change this situation, a quantum representation of digital audio (QRDA) is proposed in this paper to represent quantum audio. QRDA uses two entangled qubit sequences to store the audio amplitude and time information, with both sequences expressed in the basis states |0> and |1>. The QRDA audio preparation procedure from the initial state |0> is given to store an audio signal in quantum computers. Some exemplary quantum audio processing operations are then performed to demonstrate QRDA's usability.
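
    Given the description of two entangled registers for amplitude and time, the representation presumably takes a form like the following (a sketch assuming a q-qubit amplitude register and a t-qubit time register; the notation is assumed, not quoted from the paper).

      % Assumed QRDA-style state: amplitude register entangled with time register.
      \[
        |A\rangle = \frac{1}{\sqrt{2^{t}}} \sum_{i=0}^{2^{t}-1} |a_i\rangle \otimes |i\rangle ,
        \qquad
        |a_i\rangle = |a_i^{q-1} a_i^{q-2} \cdots a_i^{0}\rangle , \quad a_i^{j} \in \{0,1\} ,
      \]
      % i.e. each of the 2^t time points |i> carries its q-bit quantized amplitude |a_i>.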

  12. Audio/ Videoconferencing Packages: Low Cost

    ERIC Educational Resources Information Center

    Treblay, Remy; Fyvie, Barb; Koritko, Brenda

    2005-01-01

    A comparison was conducted of "Voxwire MeetingRoom" and "iVocalize" v4.1.0.3, both Web-conferencing products using voice-over-Internet protocol (VoIP) to provide unlimited, inexpensive, international audio communication, and high-quality Web-conferencing fostering collaborative learning. The study used the evaluation criteria used in earlier…

  13. The Audio-Visual Man.

    ERIC Educational Resources Information Center

    Babin, Pierre, Ed.

    A series of twelve essays discuss the use of audiovisuals in religious education. The essays are divided into three sections: one which draws on the ideas of Marshall McLuhan and other educators to explore the newest ideas about audiovisual language and faith, one that describes how to learn and use the new language of audio and visual images, and…

  14. Radioactive Decay: Audio Data Collection

    ERIC Educational Resources Information Center

    Struthers, Allan

    2009-01-01

    Many phenomena generate interesting audible time series. This data can be collected and processed using audio software. The free software package "Audacity" is used to demonstrate the process by recording, processing, and extracting click times from an inexpensive radiation detector. The high quality of the data is demonstrated with a simple…

  15. A Simple Audio Conductivity Device.

    ERIC Educational Resources Information Center

    Berenato, Gregory; Maynard, David F.

    1997-01-01

    Describes a simple audio conductivity device built to address the problem of the lack of sensitivity needed to measure small differences in conductivity in crude conductivity devices. Uses a 9-V battery as a power supply and allows the relative resistance differences between substances to be detected by the frequency of its audible tones. Presents…

  16. Audio-Visual Materials Catalog.

    ERIC Educational Resources Information Center

    Anderson (M.D.) Hospital and Tumor Inst., Houston, TX.

    This catalog lists 27 audiovisual programs produced by the Department of Medical Communications of the University of Texas M. D. Anderson Hospital and Tumor Institute for public distribution. Video tapes, 16 mm. motion pictures and slide/audio series are presented dealing mostly with cancer and related subjects. The programs are intended for…

  19. 50 CFR 27.72 - Audio equipment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 6 2010-10-01 2010-10-01 false Audio equipment. 27.72 Section 27.72 Wildlife and Fisheries UNITED STATES FISH AND WILDLIFE SERVICE, DEPARTMENT OF THE INTERIOR (CONTINUED) THE... Audio equipment. The operation or use of audio devices including radios, recording and playback...

  20. 36 CFR 2.12 - Audio disturbances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Audio disturbances. 2.12... RESOURCE PROTECTION, PUBLIC USE AND RECREATION § 2.12 Audio disturbances. (a) The following are prohibited..., motorized toy, or an audio device, such as a radio, television set, tape deck or musical instrument, in...

  1. 36 CFR 1002.12 - Audio disturbances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Audio disturbances. 1002.12... RECREATION § 1002.12 Audio disturbances. (a) The following are prohibited: (1) Operating motorized equipment or machinery such as an electric generating plant, motor vehicle, motorized toy, or an audio...

  2. Audio-Visual Aids: Historians in Blunderland.

    ERIC Educational Resources Information Center

    Decarie, Graeme

    1988-01-01

    A history professor relates his experiences producing and using audio-visual material and warns teachers not to rely on audio-visual aids for classroom presentations. Includes examples of popular audio-visual aids on Canada that communicate unintended, inaccurate, or unclear ideas. Urges teachers to exercise caution in the selection and use of…

  3. Audio-visual Materials and Rural Libraries

    ERIC Educational Resources Information Center

    Escolar-Sobrino, Hipolito

    1972-01-01

    Audio-visual materials enlarge the educational work being done in the classroom and the library. This article examines the various types of audio-visual material and equipment and suggests ways in which audio-visual media can be used economically and efficiently in rural libraries. (Author)

  4. Robot Command Interface Using an Audio-Visual Speech Recognition System

    NASA Astrophysics Data System (ADS)

    Ceballos, Alexánder; Gómez, Juan; Prieto, Flavio; Redarce, Tanneguy

    In recent years audio-visual speech recognition has emerged as an active field of research thanks to advances in pattern recognition, signal processing, and machine vision. Its ultimate goal is to allow human-computer communication using voice, taking into account the visual information contained in the audio-visual speech signal. This document presents an automatic command recognition system using audio-visual information. The system is intended to control the da Vinci laparoscopic robot. The audio signal is processed using Mel-frequency cepstral coefficient (MFCC) parametrization. In addition, features based on the points that define the mouth's outer contour according to the MPEG-4 standard are used to extract the visual speech information.

  5. Audio-visual interactions in environment assessment.

    PubMed

    Preis, Anna; Kociński, Jędrzej; Hafke-Dys, Honorata; Wrzosek, Małgorzata

    2015-08-01

    The aim of the study was to examine how visual and audio information influences audio-visual environment assessment. Original audio-visual recordings were made at seven different places in the city of Poznań. Participants in the psychophysical experiments were asked to rate, on a standardized numerical scale, the degree of comfort they would feel if they were in such an environment. The assessments of audio-visual comfort were carried out in a laboratory under four different conditions: (a) audio samples only, (b) original audio-visual samples, (c) video samples only, and (d) mixed audio-visual samples. The general results of this experiment showed a significant difference between the investigated conditions, but not for all the investigated samples. There was a significant improvement in comfort assessment when visual information was added (in only three out of seven cases), when conditions (a) and (b) were compared. On the other hand, the results show that the comfort assessment of audio-visual samples could be changed by manipulating the audio rather than the video part of the sample. Finally, it seems that people differentiate audio-visual representations of a given place in the environment based on the composition of sound sources rather than on the sound level. Object identification is responsible for both landscape and soundscape grouping. PMID:25863510

  6. Aeronautical audio broadcasting via satellite

    NASA Technical Reports Server (NTRS)

    Tzeng, Forrest F.

    1993-01-01

    A system design for aeronautical audio broadcasting, with C-band uplink and L-band downlink, via Inmarsat space segments is presented. Near-transparent-quality compression of 5-kHz bandwidth audio at 20.5 kbit/s is achieved based on a hybrid technique employing linear predictive modeling and transform-domain residual quantization. Concatenated Reed-Solomon/convolutional codes with quadrature phase shift keying are selected for bandwidth and power efficiency. RF bandwidth at 25 kHz per channel, and a decoded bit error rate at 10(exp -6) with E(sub b)/N(sub o) at 3.75 dB are obtained. An interleaver, scrambler, modem synchronization, and frame format were designed, and frequency-division multiple access was selected over code-division multiple access. A link budget computation based on a worst-case scenario indicates sufficient system power margins. Transponder occupancy analysis for 72 audio channels demonstrates ample remaining capacity to accommodate emerging aeronautical services.
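
    For orientation, the quoted figures can be tied together with the standard link relation between carrier-to-noise-density ratio, E_b/N_0, and bit rate; the calculation below is illustrative only and assumes E_b is counted per information bit of the 20.5 kbit/s stream.

      % Illustrative link arithmetic under the stated assumption.
      \[
        \left(\frac{C}{N_0}\right)_{\mathrm{dBHz}}
          = \left(\frac{E_b}{N_0}\right)_{\mathrm{dB}} + 10\log_{10} R_b
          \approx 3.75 + 10\log_{10}(20\,500) \approx 46.9\ \mathrm{dBHz}.
      \]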

  7. Audio indexing using speaker identification

    NASA Astrophysics Data System (ADS)

    Wilcox, Lynn D.; Kimber, Don; Chen, Francine R.

    1994-10-01

    In this paper, a technique for audio indexing based on speaker identification is proposed. When speakers are known a priori, a speaker index can be created in real time using the Viterbi algorithm to segment the audio into intervals from a single talker. Segmentation is performed using a hidden Markov model network consisting of interconnected speaker sub-networks. Speaker training data is used to initialize the sub-network for each speaker. Sub-networks can also be used to model silence, or non-speech sounds such as a musical theme. When no prior knowledge of the speakers is available, unsupervised segmentation is performed using a non-real-time iterative algorithm. The speaker sub-networks are first initialized, and segmentation is performed by iteratively generating a segmentation using the Viterbi algorithm and retraining the sub-networks on the results of the segmentation. Since the accuracy of the speaker segmentation depends on how well the speaker sub-networks are initialized, agglomerative clustering is used to approximately segment the audio by speaker for initialization of the speaker sub-networks. The distance measure for the agglomerative clustering is a likelihood ratio in which speech segments are characterized by Gaussian distributions. The distance between merged segments is recomputed at each stage of the clustering, and a duration model is used to bias the likelihood ratio. Segmentation accuracy using agglomerative clustering initialization matches the accuracy obtained when initializing with speaker-labeled data.
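
    The likelihood-ratio distance driving the clustering can be sketched as follows, modeling each segment (and the merged pair) with a single full-covariance Gaussian; the duration-model bias mentioned in the abstract is omitted, and the cepstral frames are random placeholders.

      # Hedged sketch of a Gaussian likelihood-ratio (GLR) distance between two
      # speech segments, of the kind used to drive agglomerative speaker clustering.
      import numpy as np

      def gaussian_loglik(X):
          """Log-likelihood of frames X under a single full-covariance Gaussian ML fit."""
          n, d = X.shape
          cov = np.cov(X, rowvar=False, bias=True) + 1e-6 * np.eye(d)
          _, logdet = np.linalg.slogdet(cov)
          # For the ML-fit Gaussian the summed Mahalanobis term equals n * d.
          return -0.5 * n * (d * np.log(2 * np.pi) + logdet + d)

      def glr_distance(X, Y):
          """GLR: how much worse one Gaussian explains X and Y together than two do."""
          return (gaussian_loglik(X) + gaussian_loglik(Y)
                  - gaussian_loglik(np.vstack([X, Y])))

      rng = np.random.default_rng(0)
      spk_a = rng.normal(0.0, 1.0, size=(300, 13))   # placeholder cepstral frames
      spk_b = rng.normal(2.0, 1.0, size=(300, 13))
      print(glr_distance(spk_a, spk_a[150:]), glr_distance(spk_a, spk_b))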

  8. Preparation of sound base for a text-to-speech synthesis system

    NASA Astrophysics Data System (ADS)

    Degtyarev, Vladimir M.; Gusev, Mikhail N.

    2005-04-01

    In this report we give several recommendations for choosing the parameters of the sound fragments that make up the sound base used in a Russian text-to-speech synthesis system. It is no secret that the quality of concatenative synthesis is largely determined at the stage of speaker selection and preparation of the base of the speaker's voice samples. The recommendations are derived from a statistical analysis of a large number of texts of various types and concern both individual sound fragments and groups of them. Sound parameters were obtained with an automatic linguistic processor that includes phonetic and prosodic transcriptors. The duration, intensity, and fundamental frequency of sounds in various contexts and intonational contours were analyzed. A sound base produced according to these recommendations improves the intelligibility and naturalness of the synthetic speech by minimizing the changes made to the speaker's voice samples.

  9. Text to Speech: A 4-H Model of Accessibility and Inclusion

    ERIC Educational Resources Information Center

    Green, Jeremy W.

    2012-01-01

    4-H project manuals play an integral part in a youth's ability to achieve mastery in a specific project area. For youth who struggle with reading, written 4-H materials prove inadequate in addressing the needs of the learner. This article proposes a new delivery method of 4-H educational material designed to create a more inclusive and…

  11. Quantitative characterisation of audio data by ordinal symbolic dynamics

    NASA Astrophysics Data System (ADS)

    Aschenbrenner, T.; Monetti, R.; Amigó, J. M.; Bunk, W.

    2013-06-01

    Ordinal symbolic dynamics has developed into a valuable method to describe complex systems. Recently, using the concept of transcripts, the coupling behaviour of systems was assessed, combining the properties of the symmetric group with information theoretic ideas. In this contribution, methods from the field of ordinal symbolic dynamics are applied to the characterisation of audio data. Coupling complexity between frequency bands of solo violin music, as a fingerprint of the instrument, is used for classification purposes within a support vector machine scheme. Our results suggest that coupling complexity is able to capture essential characteristics, sufficient to distinguish among different violins.

  12. pyAudioAnalysis: An Open-Source Python Library for Audio Signal Analysis.

    PubMed

    Giannakopoulos, Theodoros

    2015-01-01

    Audio information plays a rather important role in the increasing digital content that is available today, resulting in a need for methodologies that automatically analyze such content: audio event recognition for home automations and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g. audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures including: feature extraction, classification of audio signals, supervised and unsupervised segmentation and content visualization. pyAudioAnalysis is licensed under the Apache License and is available at GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the wide range of the implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has been already used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation and health applications (e.g. monitoring eating habits). The feedback provided from all these particular audio applications has led to practical enhancement of the library. PMID:26656189
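
    A minimal feature-extraction call looks roughly like the following; module names have shifted between releases, so this sketch assumes a recent version in which short-term features live in ShortTermFeatures, and "speech.wav" is a placeholder mono file.

      # Minimal sketch assuming a recent pyAudioAnalysis release (module names have
      # changed across versions) and a mono WAV file as input.
      from pyAudioAnalysis import audioBasicIO, ShortTermFeatures

      fs, signal = audioBasicIO.read_audio_file("speech.wav")
      # 50 ms frames with a 25 ms hop; returns a (n_features x n_frames) matrix
      # plus the list of feature names (ZCR, energy, MFCCs, chroma, ...).
      features, feature_names = ShortTermFeatures.feature_extraction(
          signal, fs, int(0.050 * fs), int(0.025 * fs))
      print(features.shape, feature_names[:3])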

  14. Analysis of musical expression in audio signals

    NASA Astrophysics Data System (ADS)

    Dixon, Simon

    2003-01-01

    In western art music, composers communicate their work to performers via a standard notation which specifies the musical pitches and relative timings of notes. This notation may also include some higher-level information such as variations in dynamics, tempo and timing. Famous performers are characterised by their expressive interpretation, the ability to convey structural and emotive information within the given framework. The majority of work on audio content analysis focusses on retrieving score-level information; this paper reports on the extraction of parameters describing the performance, a task which requires a much higher degree of accuracy. Two systems are presented: BeatRoot, an off-line beat tracking system which finds the times of musical beats and tracks changes in tempo throughout a performance, and the Performance Worm, a system which provides a real-time visualisation of the two most important expressive dimensions, tempo and dynamics. Both of these systems are being used to process data for a large-scale study of musical expression in classical and romantic piano performance, which uses artificial intelligence (machine learning) techniques to discover fundamental patterns or principles governing expressive performance.

  15. Audio frequency analysis in mobile phones

    NASA Astrophysics Data System (ADS)

    Munguía Aguilar, Horacio

    2016-01-01

    A new experiment using mobile phones is proposed in which the phone's audio frequency response is analyzed by using the audio port to input an external signal and obtain a measurable output. This experiment shows how the limited audio bandwidth used in mobile telephony is the main cause of the poor speech quality in this service. A brief discussion is given of the relationship between voice bandwidth and voice quality.

  16. Three-Dimensional Audio Client Library

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.

    2005-01-01

    The Three-Dimensional Audio Client Library (3DAudio library) is a group of software routines written to facilitate development of both stand-alone (audio only) and immersive virtual-reality application programs that utilize three-dimensional audio displays. The library is intended to enable the development of three-dimensional audio client application programs by use of a code base common to multiple audio server computers. The 3DAudio library calls vendor-specific audio client libraries and currently supports the AuSIM Gold-Server and Lake Huron audio servers. 3DAudio library routines contain common functions for (1) initiation and termination of a client/audio server session, (2) configuration-file input, (3) positioning functions, (4) coordinate transformations, (5) audio transport functions, (6) rendering functions, (7) debugging functions, and (8) event-list-sequencing functions. The 3DAudio software is written in the C++ programming language and currently operates under the Linux, IRIX, and Windows operating systems.

  17. Digital Audio Radio Field Tests

    NASA Technical Reports Server (NTRS)

    Hollansworth, James E.

    1997-01-01

    Radio history continues to be made at the NASA Lewis Research Center with the beginning of phase two of Digital Audio Radio testing conducted by the Consumer Electronics Manufacturers Association (a sector of the Electronic Industries Association) and the National Radio Systems Committee, and cosponsored by the Electronic Industries Association and the National Association of Broadcasters. The bulk of the field testing of the four systems should be complete by the end of October 1996, with results available soon thereafter. Lewis hosted phase one of the testing process, which included laboratory testing of seven proposed digital audio radio systems and their modes. Two of the proposed systems operate in two modes, thus making a total of nine systems for testing. These nine systems are divided into the following types of transmission: in-band on channel (IBOC), in-band adjacent channel (IBAC), and new bands - the L-band (1452 to 1492 MHz) and the S-band (2310 to 2360 MHz).

  18. Plasmon-assisted audio recording.

    PubMed

    Chen, Hao; Bhuiya, Abdul M; Ding, Qing; Toussaint, Kimani C

    2015-01-01

    We present the first demonstration of the recording of optically encoded audio onto a plasmonic nanostructure. Analogous to the "optical sound" approach used in the early twentieth century to store sound on photographic film, we show that arrays of gold, pillar-supported bowtie nanoantennas (pBNAs) could be used in a similar fashion to store sound information that is transferred via an amplitude-modulated optical signal to the near field of an optical microscope. Retrieval of the audio information is achieved using standard imaging optics. We demonstrate that the sound information can be stored either as time-varying waveforms or in the frequency domain as the corresponding amplitude and phase spectra. A "plasmonic musical keyboard" comprising 8 basic musical notes is constructed and used to play a short song. For comparison, we employ the correlation coefficient, which reveals that original and retrieved sound files are similar, with maximum and minimum values of 0.995 and 0.342, respectively. We also show that the pBNAs could be used for basic signal processing by ablating unwanted frequency components on the nanostructure, thereby enabling physical notch filtering of these components. Our work introduces a new application domain for plasmonic nanoantennas and experimentally verifies their potential for information processing. PMID:25773401
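
    The similarity measure quoted above is an ordinary correlation coefficient between original and retrieved waveforms; a minimal sketch, assuming the two signals are aligned and of equal length, is:

        import numpy as np

        def waveform_similarity(original, retrieved):
            """Pearson correlation coefficient between two equal-length audio signals."""
            original = np.asarray(original, dtype=float)
            retrieved = np.asarray(retrieved, dtype=float)
            return float(np.corrcoef(original, retrieved)[0, 1])

        t = np.linspace(0, 1, 8000)
        clean = np.sin(2 * np.pi * 440 * t)
        noisy = clean + 0.3 * np.random.randn(t.size)   # stand-in for a retrieved file
        print(waveform_similarity(clean, noisy))        # close to 1 for good retrieval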

  19. Enhancing Manual Scan Registration Using Audio Cues

    NASA Astrophysics Data System (ADS)

    Ntsoko, T.; Sithole, G.

    2014-04-01

    Indoor mapping and modelling requires that acquired data be processed by editing, fusing, and formatting the data, amongst other operations. Currently, the manual interaction the user has with the point cloud (data) while processing it is purely visual. Visual interaction does have limitations, however. One way of dealing with these limitations is to augment point cloud processing with audio. Audio augmentation entails associating points of interest in the point cloud with audio objects. In coarse scan registration, reverberation, intensity and frequency audio cues were exploited to help the user estimate the depth and occupancy of space of points of interest. Depth was estimated reliably when intensity and frequency were both used as depth cues, and coarse changes of depth could be estimated in this manner. The depth between surfaces can therefore be estimated with the aid of the audio objects. Sound reflections of an audio object provided reliable information about the object's surroundings in some instances. For a point or area of interest in the point cloud, these reflections can be used to determine the unseen events around that point or area. Other processing techniques could benefit from this approach, with additional information estimated using other audio cues such as binaural cues and head-related transfer functions. These cues could be used in position estimation of audio objects to aid in tasks such as indoor navigation.
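
    A toy sketch of the sonification idea follows; the particular mapping from depth to pitch and loudness is invented for illustration and is not the authors' calibration:

        import numpy as np

        def depth_cue_tone(depth_m, fs=44100, dur=0.3,
                           f_near=1200.0, f_far=200.0, max_depth=10.0):
            """Map a depth (m) to a tone whose pitch and amplitude fall with distance."""
            d = min(max(depth_m, 0.0), max_depth) / max_depth    # normalize to 0..1
            freq = f_near + d * (f_far - f_near)                 # nearer -> higher pitch
            amp = 1.0 / (1.0 + 3.0 * d)                          # nearer -> louder
            t = np.arange(int(fs * dur)) / fs
            return amp * np.sin(2 * np.pi * freq * t)

        tone = depth_cue_tone(2.5)    # audio cue for a point about 2.5 m away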

  20. Digital Audio: A Sound Design Element.

    ERIC Educational Resources Information Center

    Barron, Ann; Varnadoe, Susan

    1992-01-01

    Discussion of incorporating audio into videodiscs for multimedia educational applications highlights a project developed for the Navy that used digital audio in an interactive video delivery system (IVDS) for training sonar operators. Storage constraints with videodiscs are explained, design requirements for the IVDS are described, and production…

  1. Audio-Tutorial Instruction in Medicine.

    ERIC Educational Resources Information Center

    Boyle, Gloria J.; Herrick, Merlyn C.

    This progress report concerns an audio-tutorial approach used at the University of Missouri-Columbia School of Medicine. Instructional techniques such as slide-tape presentations, compressed speech audio tapes, computer-assisted instruction (CAI), motion pictures, television, microfiche, and graphic and printed materials have been implemented,…

  2. Internet Audio Products (3/3)

    ERIC Educational Resources Information Center

    Schwartz, Linda; de Schutter, Adrienne; Fahrni, Patricia; Rudolph, Jim

    2004-01-01

    Two contrasting additions to the online audio market are reviewed: "iVocalize", a browser-based audio-conferencing software, and "Skype", a PC-to-PC Internet telephone tool. These products are selected for review on the basis of their success in gaining rapid popular attention and usage during 2003-04. The "iVocalize" review emphasizes the…

  3. Digital Audio Sampling for Film and Video.

    ERIC Educational Resources Information Center

    Stanton, Michael J.

    Digital audio sampling is explained, and some of its implications in digital sound applications are discussed. Digital sound equipment is rapidly replacing analog recording devices as the state-of-the-art in audio technology. The philosophy of digital recording involves doing away with the continuously variable analog waveforms and turning the…
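
    A minimal sketch of the sampling-and-quantization step being described: a sine wave is sampled at 44.1 kHz and quantized to 16-bit integers (typical CD parameters, chosen here for illustration):

        import numpy as np

        fs = 44100                                  # samples per second (CD-quality rate)
        bits = 16                                   # quantizer resolution
        t = np.arange(0, 0.01, 1.0 / fs)            # 10 ms of "continuous" time
        analog = 0.8 * np.sin(2 * np.pi * 440 * t)  # the analog waveform being sampled

        full_scale = 2 ** (bits - 1) - 1
        samples = np.round(analog * full_scale).astype(np.int16)   # quantized samples
        reconstructed = samples / full_scale                        # back to +/-1 range
        print(samples[:8], reconstructed[:8])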

  4. The HDTV digital audio matrix

    NASA Astrophysics Data System (ADS)

    Mason, A. J.

    Multichannel sound systems are being studied as part of the Eureka 95 and Radiocommunication Bureau TG10-1 investigations into high definition television. One emerging sound system has five channels; three at the front and two at the back. This raises some compatibility issues. The listener might have only, say, two loudspeakers, or the material to be broadcast may have fewer than five channels. The problem is how best to produce a set of signals to be broadcast, which is suitable for all listeners, from those that are available. To investigate this area, a device has been designed and built which has six input channels and six output channels. Each output signal is a linear combination of the input signals. The inputs and outputs are in AES/EBU digital audio format using BBC-designed AESIC chips. The matrix operation, to produce the six outputs from the six inputs, is performed by a Motorola DSP56001. The user interface and 'housekeeping' are managed by a T222 transputer. The operator of the matrix uses a VDU to enter sets of coefficients and a rotary switch to select which set to use. A set of analog controls is also available and is used to control operations other than the simple compatibility matrixing. The matrix has been very useful for simple tasks: mixing a stereo signal into mono, creating a stereo signal from a mono signal, applying a fixed gain or attenuation to a signal, exchanging the A and B channels of an AES/EBU bitstream, and so on. These are readily achieved using simple sets of coefficients. Additions to the user interface software have led to several more sophisticated applications which still consist of a matrix operation. Different multichannel panning laws have been evaluated: the analog controls adjust the panning, while the audio signals are processed digitally using a matrix operation. A digital SoundField microphone decoder has also been implemented. The design of the audio matrix is such that it can be applied to a wide variety of signal processing tasks. The combination of a dedicated DSP chip programmed in assembly language for speed of operation and a general purpose processor for user interface tasks programmed in a high level language has been found to be extremely useful.
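
    The core operation is a matrix multiply: each of the six outputs is a linear combination of the six inputs, selected by a coefficient set. A sketch of the stereo-to-mono compatibility case (the coefficients are illustrative, not the actual BBC sets):

        import numpy as np

        def audio_matrix(inputs, coeffs):
            """inputs: (6, n_samples) block; coeffs: (6, 6) matrix -> (6, n_samples)."""
            return coeffs @ inputs

        # Example coefficient set: outputs 0 and 1 both carry a mono mix of the
        # stereo pair on inputs 0 and 1; the remaining channels pass straight through.
        coeffs = np.eye(6)
        coeffs[0, :2] = 0.5
        coeffs[1, :2] = 0.5

        block = np.random.randn(6, 1024)          # stand-in for six AES/EBU channels
        out = audio_matrix(block, coeffs)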

  5. Collusion-Resistant Audio Fingerprinting System in the Modulated Complex Lapped Transform Domain

    PubMed Central

    Garcia-Hernandez, Jose Juan; Feregrino-Uribe, Claudia; Cumplido, Rene

    2013-01-01

    The collusion-resistant fingerprinting paradigm seems to be a practical solution to the piracy problem, as it allows media owners to detect any unauthorized copy and trace it back to the dishonest users. Despite the billion-dollar losses in the music industry, most collusion-resistant fingerprinting systems are devoted to digital images and very few to audio signals. In this paper, state-of-the-art collusion-resistant fingerprinting ideas are extended to audio signals and the corresponding parameters and operation conditions are proposed. Moreover, in order to carry out fingerprint detection using just a fraction of the pirate audio clip, block-based embedding and its corresponding detector are proposed. Extensive simulations show the robustness of the proposed system against the average collusion attack. Moreover, by using an efficient Fast Fourier Transform core and standard computer machines, it is shown that the proposed system is suitable for real-world scenarios. PMID:23762455
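
    The attack model at the heart of such systems can be sketched simply: several users average their individually fingerprinted copies, and the detector correlates the colluded copy against each user's spreading sequence. The toy additive model below is a stand-in for the paper's MCLT-domain scheme:

        import numpy as np

        rng = np.random.default_rng(0)
        host = rng.standard_normal(4096)                       # stand-in audio block
        users = rng.choice([-1.0, 1.0], size=(5, host.size))   # per-user fingerprints
        copies = host + 0.05 * users                           # fingerprinted copies

        colluded = copies[:3].mean(axis=0)          # users 0-2 average their copies

        scores = (users @ (colluded - host)) / host.size   # correlate with each sequence
        print(np.argsort(scores)[::-1][:3])                 # top-3 suspects: users 0, 1, 2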

  6. High-Fidelity Piezoelectric Audio Device

    NASA Technical Reports Server (NTRS)

    Woodward, Stanley E.; Fox, Robert L.; Bryant, Robert G.

    2003-01-01

    ModalMax is a very innovative means of harnessing the vibration of a piezoelectric actuator to produce an energy-efficient low-profile device with high-bandwidth, high-fidelity audio response. The piezoelectric audio device outperforms many commercially available speakers made using speaker cones. The piezoelectric device weighs substantially less (4 g) than the speaker cones which use magnets (10 g). ModalMax devices have extreme fabrication simplicity. The entire audio device is fabricated by lamination. The simplicity of the design lends itself to lower cost. The piezoelectric audio device can be used without its acoustic chambers, resulting in a very low thickness of 0.023 in. (0.58 mm). The piezoelectric audio device can be completely encapsulated, which makes it very attractive for use in wet environments. Encapsulation does not significantly alter the audio response. Its small size is applicable to many consumer electronic products, such as pagers, portable radios, headphones, laptop computers, computer monitors, toys, and electronic games. The audio device can also be used in automobile or aircraft sound systems.

  7. Cluster: Metals. Course: Machine Shop. Research Project.

    ERIC Educational Resources Information Center

    Sanford - Lee County Schools, NC.

    The set of 13 units is designed for use with an instructor in actual machine shop practice and is also keyed to audio visual and textual materials. Each unit contains a series of task packages which: specify prerequisites within the series (minimum is Unit 1); provide a narrative rationale for learning; list both general and specific objectives in…

  8. Web Audio/Video Streaming Tool

    NASA Technical Reports Server (NTRS)

    Guruvadoo, Eranna K.

    2003-01-01

    In order to promote a NASA-wide educational outreach program to educate and inform the public about space exploration, NASA, at Kennedy Space Center, is seeking efficient ways to add more content to the web by streaming audio/video files. This project proposes a high-level overview of a framework for the creation, management, and scheduling of audio/video assets over the web. To support short-term goals, the prototype of a web-based tool is designed and demonstrated to automate the process of streaming audio/video files. The tool provides web-enabled user interfaces to manage video assets, create publishable schedules of video assets for streaming, and schedule the streaming events. These operations are performed on user-defined and system-derived metadata of audio/video assets stored in a relational database, while the assets reside on a separate repository. The prototype tool is designed using ColdFusion 5.0.

  9. Virtual Microphones for Multichannel Audio Resynthesis

    NASA Astrophysics Data System (ADS)

    Mouchtaris, Athanasios; Narayanan, Shrikanth S.; Kyriakakis, Chris

    2003-12-01

    Multichannel audio offers significant advantages for music reproduction, including the ability to provide better localization and envelopment, as well as reduced imaging distortion. On the other hand, multichannel audio is a demanding media type in terms of transmission requirements. Often, bandwidth limitations prohibit transmission of multiple audio channels. In such cases, an alternative is to transmit only one or two reference channels and recreate the rest of the channels at the receiving end. Here, we propose a system capable of synthesizing the required signals from a smaller set of signals recorded in a particular venue. These synthesized "virtual" microphone signals can be used to produce multichannel recordings that accurately capture the acoustics of that venue. Applications of the proposed system include transmission of multichannel audio over the current Internet infrastructure and, as an extension of the methods proposed here, remastering existing monophonic and stereophonic recordings for multichannel rendering.

  10. A Study of Audio Tape: Part II

    ERIC Educational Resources Information Center

    Reen, Noel K.

    1975-01-01

    To evaluate reel audio tape, tests were performed to identify: signal-to-noise ratio, total harmonic distortion, dynamic response, frequency response, biased and virgin tape noise, dropout susceptibility and oxide coating uniformity. (SCC)

  11. High quality scalable audio codec

    NASA Astrophysics Data System (ADS)

    Kim, Miyoung; Oh, Eunmi; Kim, JungHoe

    2007-09-01

    The MPEG-4 BSAC (Bit Sliced Arithmetic Coding) is a fine-grain scalable codec with a layered structure which consists of a single base layer and several enhancement layers. The scalable functionality allows us to decode subsets of a full bitstream and to deliver audio contents adaptively under conditions of heterogeneous networks and devices, and user interaction. This bitrate scalability is provided at the cost of high frequency components. It means that the decoded output of BSAC sounds muffled as fewer and fewer layers are transmitted due to degraded network and device conditions. The goal of the proposed technology is to compensate for the missing high frequency components, while maintaining the fine-grain scalability of BSAC. This paper describes the integration of the SBR (Spectral Band Replication) tool into existing MPEG-4 BSAC. Listening test results show that the sound quality of BSAC is improved when the full bitstream is truncated for lower bitrates, and this quality is comparable to that of BSAC using the SBR tool without truncation at the same bitrate.

  12. Interactive Learning of Spoken Words and Their Meanings Through an Audio-Visual Interface

    NASA Astrophysics Data System (ADS)

    Iwahashi, Naoto

    This paper presents a new interactive learning method for spoken word acquisition through human-machine audio-visual interfaces. During the course of learning, the machine makes a decision about whether an orally input word is a word in the lexicon the machine has learned, using both speech and visual cues. Learning is carried out on-line, incrementally, based on a combination of active and unsupervised learning principles. If the machine judges with a high degree of confidence that its decision is correct, it learns the statistical models of the word and a corresponding image category as its meaning in an unsupervised way. Otherwise, it asks the user a question in an active way. The function used to estimate the degree of confidence is also learned adaptively on-line. Experimental results show that the combination of active and unsupervised learning principles enables the machine and the user to adapt to each other, which makes the learning process more efficient.

  13. The Effect Of 3D Audio And Other Audio Techniques On Virtual Reality Experience.

    PubMed

    Brinkman, Willem-Paul; Hoekstra, Allart R D; van Egmond, René

    2015-01-01

    Three studies were conducted to examine the effect of audio on people's experience in a virtual world. The first study showed that people could distinguish between mono, stereo, Dolby surround and 3D audio of a wasp. The second study found significant effects for audio techniques on people's self-reported anxiety, presence, and spatial perception. The third study found that adding sound to a visual virtual world had a significant effect on people's experience (including heart rate), while it found no difference in experience between stereo and 3D audio. PMID:26799877

  14. TRAINING TYPISTS IN THE INDUSTRIAL ENVIRONMENT--PRELIMINARY REPORT OF A PROTOTYPE SYSTEM OF SIMULTANEOUS, MULTILEVEL, MULTIPHASIC AUDIO PROGRAMMING.

    ERIC Educational Resources Information Center

    ADAMS, CHARLES F.

    IN 1965 TEN NEGRO AND PUERTO RICAN GIRLS BEGAN CLERICAL TRAINING IN THE NATIONAL ASSOCIATION OF MANUFACTURERS (NAM) TYPING LABORATORY I (TEELAB-I), A PILOT PROJECT TO DEVELOP A SYSTEM OF TRAINING TYPISTS WITHIN THE INDUSTRIAL ENVIRONMENT. THE INITIAL SYSTEM, AN ADAPTATION OF GREGG AUDIO MATERIALS TO A MACHINE TECHNOLOGY, TAUGHT ACCURACY, SPEED…

  16. Digital Multicasting of Multiple Audio Streams

    NASA Technical Reports Server (NTRS)

    Macha, Mitchell; Bullock, John

    2007-01-01

    The Mission Control Center Voice Over Internet Protocol (MCC VOIP) system comprises hardware and software that effect simultaneous, nearly real-time transmission of as many as 14 different audio streams to authorized listeners via the MCC intranet and/or the Internet. The original version of the MCC VOIP system was conceived to enable flight-support personnel located in offices outside a spacecraft mission control center to monitor audio loops within the mission control center. Different versions of the MCC VOIP system could be used for a variety of public and commercial purposes - for example, to enable members of the general public to monitor one or more NASA audio streams through their home computers, to enable air-traffic supervisors to monitor communication between airline pilots and air-traffic controllers in training, and to monitor conferences among brokers in a stock exchange. At the transmitting end, the audio-distribution process begins with feeding the audio signals to analog-to-digital converters. The resulting digital streams are sent through the MCC intranet, using the User Datagram Protocol (UDP), to a server that converts them to encrypted data packets. The encrypted data packets are then routed to the personal computers of authorized users by use of multicasting techniques. The total data-processing load on the portion of the system upstream of and including the encryption server is the total load imposed by all of the audio streams being encoded, regardless of the number of the listeners or the number of streams being monitored concurrently by the listeners. The personal computer of a user authorized to listen is equipped with special-purpose MCC audio-player software. When the user launches the program, the user is prompted to provide identification and a password. In one of two access-control provisions, the program is hard-coded to validate the user's identity and password against a list maintained on a domain-controller computer at the MCC. In the other access-control provision, the program verifies that the user is authorized to have access to the audio streams. Once both access-control checks are completed, the audio software presents a graphical display that includes audiostream-selection buttons and volume-control sliders. The user can select all or any subset of the available audio streams and can adjust the volume of each stream independently of that of the other streams. The audio-player program spawns a "read" process for the selected stream(s). The spawned process sends, to the router(s), a "multicast-join" request for the selected streams. The router(s) responds to the request by sending the encrypted multicast packets to the spawned process. The spawned process receives the encrypted multicast packets and sends a decryption packet to audio-driver software. As the volume or muting features are changed by the user, interrupts are sent to the spawned process to change the corresponding attributes sent to the audio-driver software. The total latency of this system - that is, the total time from the origination of the audio signals to generation of sound at a listener's computer - lies between four and six seconds.
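
    A bare-bones sketch of the multicast-join step on the listener side, using standard Python sockets; the group address, port, and packet handling are placeholders, and the real system adds encryption, authentication, and an audio driver:

        import socket
        import struct

        GROUP, PORT = "239.1.2.3", 5004       # placeholder multicast group and port

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", PORT))

        # Join the multicast group (the "multicast-join" request handled by the router).
        mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

        while True:
            packet, addr = sock.recvfrom(2048)     # one UDP audio packet
            # decrypt and hand the payload to the audio driver here
            print(f"{len(packet)} bytes from {addr}")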

  17. Could Audio-Described Films Benefit from Audio Introductions? An Audience Response Study

    ERIC Educational Resources Information Center

    Romero-Fresco, Pablo; Fryer, Louise

    2013-01-01

    Introduction: Time constraints limit the quantity and type of information conveyed in audio description (AD) for films, in particular the cinematic aspects. Inspired by introductory notes for theatre AD, this study developed audio introductions (AIs) for "Slumdog Millionaire" and "Man on Wire." Each AI comprised 10 minutes of…

  19. Multimodal audio guide for museums and exhibitions

    NASA Astrophysics Data System (ADS)

    Gebbensleben, Sandra; Dittmann, Jana; Vielhauer, Claus

    2006-02-01

    In our paper we introduce a new Audio Guide concept for exploring buildings, realms and exhibitions. Currently proposed solutions work in most cases with pre-defined devices, which users have to buy or borrow. These systems often involve complex technical installations and require a great degree of user training for device handling. Furthermore, the activation of audio commentary related to the exhibition objects is typically based on additional components like infrared, radio frequency or GPS technology. Besides the necessity of installing specific devices for user localization, these approaches often support only automatic activation with no or limited user interaction. Therefore, elaboration of alternative concepts appears worthwhile. Motivated by these aspects, we introduce a new concept based on the use of the visitor's own mobile smart phone. The advantages of our approach are twofold: firstly, the Audio Guide can be used in various places without any purchase or extensive installation of additional components in or around the exhibition object. Secondly, the visitors can experience the exhibition on individual tours by uploading the Audio Guide only once at a single point of entry, the Audio Guide Service Counter, and keeping it on their personal device. Furthermore, the user is usually quite familiar with the interface of her or his phone and can thus interact with the application easily. Our technical concept makes use of two general ideas for location detection and activation. Firstly, we suggest an enhanced interactive number-based activation that exploits the visual capabilities of modern smart phones, and secondly we outline an active digital audio watermarking approach, where information about objects is transmitted via an analog audio channel.

  20. Audio stream classification for multimedia database search

    NASA Astrophysics Data System (ADS)

    Artese, M.; Bianco, S.; Gagliardi, I.; Gasparini, F.

    2013-03-01

    Search and retrieval of huge archives of multimedia data is a challenging task. A classification step is often used to reduce the number of entries on which to perform the subsequent search. In particular, when new entries of the database are continuously added, a fast classification based on simple threshold evaluation is desirable. In this work we present a CART-based (Classification And Regression Tree [1]) classification framework for audio streams belonging to multimedia databases. The database considered is the Archive of Ethnography and Social History (AESS) [2], which is mainly composed of popular songs and other audio records describing the popular traditions handed down generation by generation, such as traditional fairs and customs. The peculiarities of this database are that it is continuously updated; the audio recordings are acquired in unconstrained environments; and it is difficult for the non-expert human user to create the ground truth labels. In our experiments, half of all the available audio files have been randomly extracted and used as the training set. The remaining ones have been used as the test set. The classifier has been trained to distinguish among three different classes: speech, music, and song. All the audio files in the dataset have been previously manually labeled into the three classes defined above by domain experts.
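
    A minimal sketch of the CART step, assuming per-file feature vectors have already been extracted; scikit-learn's DecisionTreeClassifier stands in for the authors' CART implementation, and the features and labels below are random placeholders:

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        X = rng.standard_normal((300, 12))                # per-file audio features
        y = rng.integers(0, 3, size=300)                  # 0=speech, 1=music, 2=song

        # Half the files for training, half for testing, as in the paper's protocol.
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

        tree = DecisionTreeClassifier(max_depth=5, random_state=0)
        tree.fit(X_tr, y_tr)
        print("test accuracy:", tree.score(X_te, y_te))   # near chance on random data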

  1. High performance MPEG-audio decoder IC

    NASA Technical Reports Server (NTRS)

    Thorn, M.; Benbassat, G.; Cyr, K.; Li, S.; Gill, M.; Kam, D.; Walker, K.; Look, P.; Eldridge, C.; Ng, P.

    1993-01-01

    The emerging digital audio and video compression technology brings both an opportunity and a new challenge to IC design. The pervasive application of compression technology to consumer electronics will require high-volume, low-cost ICs and fast time to market of the prototypes and production units. At the same time, the algorithms used in the compression technology result in complex VLSI ICs. The conflicting challenges of algorithm complexity, low cost, and fast time to market have an impact on device architecture and design methodology. The work presented in this paper is about the design of a dedicated, high-precision, Moving Picture Experts Group (MPEG) audio decoder.

  2. Nonlinear dynamic macromodeling techniques for audio systems

    NASA Astrophysics Data System (ADS)

    Ogrodzki, Jan; Bieńkowski, Piotr

    2015-09-01

    This paper develops a modelling method and a model identification technique for nonlinear dynamic audio systems. Identification is performed by means of a behavioral approach based on a polynomial approximation. This approach makes use of the Discrete Fourier Transform and the Harmonic Balance Method. A model of an audio system is first created and identified, and then it is simulated in real time using an algorithm of low computational complexity. The algorithm consists of real-time emulation of the system response rather than simulation of the system itself. The proposed software is written in the Python language using object-oriented programming techniques. The code is optimized for a multithreaded environment.
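
    A simplified sketch of the behavioral, polynomial part of such a model: fit a static polynomial nonlinearity to measured input/output samples and then use it to emulate the response. This ignores the dynamic (memory) terms and the DFT/harmonic-balance identification the paper actually uses:

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.uniform(-1, 1, 2000)                      # measured input samples
        y = x + 0.15 * x**2 - 0.2 * x**3 + 0.01 * rng.standard_normal(x.size)  # output

        coeffs = np.polyfit(x, y, deg=3)                  # identify the polynomial model
        model = np.poly1d(coeffs)

        test = 0.8 * np.sin(2 * np.pi * np.linspace(0, 1, 512))
        emulated = model(test)                            # emulate the system response
        print(np.round(coeffs, 3))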

  3. Free audio archive brings legends to life

    NASA Astrophysics Data System (ADS)

    Banks, Michael

    2009-09-01

    A free online archive that contains thousands of hours of interviews with physicists and astronomers has been launched by the American Institute of Physics (AIP). The archive currently contains full transcripts of interviews with over 400 physicists that were recorded by historians and journalists from 1960 onwards as well as selected parts of the audio files. By the end of the year, the archive, belonging to the AIP's Niels Bohr Library and Archives in Washington, DC, will contain 500 online transcripts and over a dozen audio interviews.

  4. Enhancing Navigation Skills through Audio Gaming

    PubMed Central

    Sánchez, Jaime; Sáenz, Mauricio; Pascual-Leone, Alvaro; Merabet, Lotfi

    2014-01-01

    We present the design, development and initial cognitive evaluation of an Audio-based Environment Simulator (AbES). This software allows a blind user to navigate through a virtual representation of a real space for the purposes of training orientation and mobility skills. Our findings indicate that users feel satisfied and self-confident when interacting with the audio-based interface, and the embedded sounds allow them to correctly orient themselves and navigate within the virtual world. Furthermore, users are able to transfer spatial information acquired through virtual interactions into real world navigation and problem solving tasks. PMID:25505796

  5. Audio data hiding using perceptual masking effect of HAS

    NASA Astrophysics Data System (ADS)

    Cao, Hanqiang; Zhang, Xinyu; Cao, Chao; Wei, Fang

    2007-11-01

    Audio data hiding is an important branch of information hiding technology. In this paper, a novel digital audio data hiding scheme is proposed which hides secret messages in audio signals, including telephone speech, wideband speech, and wideband audio. The advantage of this scheme is its full use of the perceptual masking effect of the human auditory system (HAS). Before the secret message is embedded into the host signal, it is modulated according to the perceptual masking characteristic of the host signal. Therefore, the hidden message is not easily detected. The scheme also uses cryptographic techniques before hiding the message in the host audio signal to ensure security. Moreover, the extraction of the desired message does not need the host audio signal because of the use of a pseudorandom sequence. Experimental results show that the embedded audio signal is not easily detected and the bit error rate of the blindly extracted message is small.

  6. Agency Video, Audio and Imagery Library

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney

    2015-01-01

    The purpose of this presentation was to inform the ISS International Partners of the new NASA Agency Video, Audio and Imagery Library (AVAIL) website. AVAIL is a new resource for the public to search for and download NASA-related imagery, and is not intended to replace the current process by which the International Partners receive their Space Station imagery products.

  7. Providing Students with Formative Audio Feedback

    ERIC Educational Resources Information Center

    Brearley, Francis Q.; Cullen, W. Rod

    2012-01-01

    The provision of timely and constructive feedback is increasingly challenging for busy academics. Ensuring effective student engagement with feedback is equally difficult. Increasingly, studies have explored provision of audio recorded feedback to enhance effectiveness and engagement with feedback. Few, if any, of these focus on purely formative…

  8. An ESL Audio-Script Writing Workshop

    ERIC Educational Resources Information Center

    Miller, Carla

    2012-01-01

    The roles of dialogue, collaborative writing, and authentic communication have been explored as effective strategies in second language writing classrooms. In this article, the stages of an innovative, multi-skill writing method, which embeds students' personal voices into the writing process, are explored. A 10-step ESL Audio Script Writing Model…

  9. Sound for Film: Audio Education for Filmmakers.

    ERIC Educational Resources Information Center

    Lazar, Wanda

    1998-01-01

    Identifies the specific, unique, and important elements of audio education required by film professionals. Presents a model unit to be included in a film studies program, either as a separate course or as part of a film production or introduction to film course. Offers a model syllabus for such a course or unit on sound in film. (SR)

  11. 50 CFR 27.72 - Audio equipment.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 50 Wildlife and Fisheries 9 2014-10-01 2014-10-01 false Audio equipment. 27.72 Section 27.72 Wildlife and Fisheries UNITED STATES FISH AND WILDLIFE SERVICE, DEPARTMENT OF THE INTERIOR (CONTINUED) THE NATIONAL WILDLIFE REFUGE SYSTEM PROHIBITED ACTS Disturbing Violations: Filming, Photography, and Light...

  12. 50 CFR 27.72 - Audio equipment.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 9 2013-10-01 2013-10-01 false Audio equipment. 27.72 Section 27.72 Wildlife and Fisheries UNITED STATES FISH AND WILDLIFE SERVICE, DEPARTMENT OF THE INTERIOR (CONTINUED) THE NATIONAL WILDLIFE REFUGE SYSTEM PROHIBITED ACTS Disturbing Violations: Filming, Photography, and Light...

  13. Audio/Visual Ratios in Commercial Filmstrips.

    ERIC Educational Resources Information Center

    Gulliford, Nancy L.

    Developed by the Westinghouse Electric Corporation, Video Audio Compressed (VIDAC) is a compressed time, variable rate, still picture television system. This technology made it possible for a centralized library of audiovisual materials to be transmitted over a television channel in very short periods of time. In order to establish specifications…

  14. Building Digital Audio Preservation Infrastructure and Workflows

    ERIC Educational Resources Information Center

    Young, Anjanette; Olivieri, Blynne; Eckler, Karl; Gerontakos, Theodore

    2010-01-01

    In 2009 the University of Washington (UW) Libraries special collections received funding for the digital preservation of its audio indigenous language holdings. The university libraries, where the authors work in various capacities, had begun digitizing image and text collections in 1997. Because of this, at the onset of the project, workflows (a…

  15. Spanish for Agricultural Purposes: The Audio Program.

    ERIC Educational Resources Information Center

    Mainous, Bruce H.; And Others

    The manual, which is meant to accompany and supplement the basic manual and to serve as support to the audio component of "Spanish for Agricultural Purposes," a one-semester course for North American agriculture specialists preparing to work in Latin America, consists of exercises to supplement readings presented in the course's basic manual and to…

  17. Improving Audio Quality in Distance Learning Applications.

    ERIC Educational Resources Information Center

    Richardson, Craig H.

    This paper discusses common causes of problems encountered with audio systems in distance learning networks and offers practical suggestions for correcting the problems. Problems and discussions are divided into nine categories: (1) acoustics, including reverberant classrooms leading to distorted or garbled voices, as well as one-dimensional audio…

  18. Structuring Broadcast Audio for Information Access

    NASA Astrophysics Data System (ADS)

    Gauvain, Jean-Luc; Lamel, Lori

    2003-12-01

    One rapidly expanding application area for state-of-the-art speech recognition technology is the automatic processing of broadcast audiovisual data for information access. Since much of the linguistic information is found in the audio channel, speech recognition is a key enabling technology which, when combined with information retrieval techniques, can be used for searching large audiovisual document collections. Audio indexing must take into account the specificities of audio data such as needing to deal with the continuous data stream and an imperfect word transcription. Other important considerations are dealing with language specificities and facilitating language portability. At Laboratoire d'Informatique pour la Mécanique et les Sciences de l'Ingénieur (LIMSI), broadcast news transcription systems have been developed for seven languages: English, French, German, Mandarin, Portuguese, Spanish, and Arabic. The transcription systems have been integrated into prototype demonstrators for several application areas such as audio data mining, structuring audiovisual archives, selective dissemination of information, and topic tracking for media monitoring. As examples, this paper addresses the spoken document retrieval and topic tracking tasks.

  19. Study of audio speakers containing ferrofluid.

    PubMed

    Rosensweig, R E; Hirota, Y; Tsuda, S; Raj, K

    2008-05-21

    This work validates a method for increasing the radial restoring force on the voice coil in audio speakers containing ferrofluid. In addition, a study is made of factors influencing splash loss of the ferrofluid due to shock. Ferrohydrodynamic analysis is employed throughout to model behavior, and predictions are compared to experimental data. PMID:21694276

  20. Using Audio and the Language Laboratory.

    ERIC Educational Resources Information Center

    Helot, Christine

    The role of the language laboratory in current language teaching and learning is discussed. Four main aspects of audio technology and its relationship to language learning are covered: (1) the technological aspect: what a language lab is and what kinds of labs are available in Ireland; (2) the research aspect: what kind of research is being…

  1. 47 CFR 10.520 - Common audio attention signal.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Common audio attention signal. 10.520 Section... Equipment Requirements § 10.520 Common audio attention signal. A Participating CMS Provider and equipment manufacturers may only market devices for public use under part 10 that include an audio attention signal...

  2. 47 CFR 73.403 - Digital audio broadcasting service requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Digital audio broadcasting service requirements... SERVICES RADIO BROADCAST SERVICES Digital Audio Broadcasting § 73.403 Digital audio broadcasting service requirements. (a) Broadcast radio stations using IBOC must transmit at least one over-the-air digital...

  3. 43 CFR 8365.2-2 - Audio devices.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 43 Public Lands: Interior 2 2011-10-01 2011-10-01 false Audio devices. 8365.2-2 Section 8365.2-2..., DEPARTMENT OF THE INTERIOR RECREATION PROGRAMS VISITOR SERVICES Rules of Conduct § 8365.2-2 Audio devices. On... audio device such as a radio, television, musical instrument, or other noise producing device...

  4. To Make a Long Story Short: Abridged Audio at 10.

    ERIC Educational Resources Information Center

    Annichiarico, Mark

    1996-01-01

    Examines the history of abridged audio publishing 10 years after the formation of the Audio Publishers Association. Topics include abridged versus unabridged versions for bookstores and libraries; vendors and publishers; future possibilities for CDs and DVD (Digital Versatile Disc); and audio leasing for libraries. (LRW)

  5. AudioMUD: a multiuser virtual environment for blind people.

    PubMed

    Sánchez, Jaime; Hassler, Tiago

    2007-03-01

    A number of virtual environments have been developed during the last years. Among them there are some applications for blind people based on different types of audio, from simple sounds to 3-D audio. In this study, we pursued a different approach. We designed AudioMUD by using spoken text to describe the environment, navigation, and interaction. We have also introduced some collaborative features into the interaction between blind users. The core of a multiuser MUD game is a networked textual virtual environment. We developed AudioMUD by adding some collaborative features to the basic idea of a MUD and placed a simulated virtual environment inside the human body. This paper presents the design and usability evaluation of AudioMUD. Blind learners were motivated when interacting with AudioMUD, and their feedback helped to improve the interaction through audio and interface design elements. PMID:17436871

  6. Database machines

    NASA Technical Reports Server (NTRS)

    Stiefel, M. L.

    1983-01-01

    The functions and performance characteristics of database machines (DBM), including machines currently being studied in research laboratories and those currently offered on a commercial basis, are discussed. The cost/benefit considerations that must be recognized in selecting a DBM are discussed, as well as the future outlook for such machines.

  7. Audio feature extraction using probability distribution function

    NASA Astrophysics Data System (ADS)

    Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.

    2015-05-01

    Voice recognition has been one of the popular applications in the robotics field. It is also known to be used recently for biometric and multimedia information retrieval systems. This technology has grown out of successive research on audio feature extraction and analysis. The probability distribution function (PDF) is a statistical method which is usually used as one of the processes in complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed which uses only the PDF as the feature extraction method itself for speech analysis purposes. Certain pre-processing techniques are performed prior to the proposed feature extraction method. Subsequently, the PDF values for each frame of the sampled voice signals, obtained from a number of individuals, are plotted. From the experimental results obtained, it can be seen visually from the plotted data that each individual's voice has comparable PDF values and shapes.
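
    A sketch of the idea, under the assumption that a frame's "PDF" is approximated by a normalized amplitude histogram (the bin count and frame length are arbitrary):

        import numpy as np

        def frame_pdfs(signal, frame_len=400, n_bins=32):
            """Approximate per-frame PDFs of sample amplitudes as normalized histograms."""
            signal = signal / (np.abs(signal).max() + 1e-12)
            pdfs = []
            for start in range(0, len(signal) - frame_len, frame_len):
                frame = signal[start:start + frame_len]
                hist, _ = np.histogram(frame, bins=n_bins, range=(-1, 1), density=True)
                pdfs.append(hist)
            return np.array(pdfs)       # each row is one frame's PDF estimate

        pdfs = frame_pdfs(np.random.randn(8000))
        print(pdfs.shape)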

  8. Digital audio and video broadcasting by satellite

    NASA Astrophysics Data System (ADS)

    Yoshino, Takehiko

    In parallel with the progress of the practical use of satellite broadcasting and Hi-Vision or high-definition television technologies, research activities are also in progress to replace the conventional analog broadcasting services with a digital version. What we call 'digitalization' is not a mere technical matter but an important subject which will help promote multichannel or multimedia applications and, accordingly, can change the old concept of mass media, such as television or radio. NHK Science and Technical Research Laboratories has promoted studies of digital bandwidth compression, transmission, and application techniques. The following topics are covered: the trend of digital broadcasting; features of Integrated Services Digital Broadcasting (ISDB); compression encoding and transmission; transmission bit rate in 12 GHz band; number of digital TV transmission channels; multichannel pulse code modulation (PCM) audio broadcasting system via communication satellite; digital Hi-Vision broadcasting; and development of digital audio broadcasting (DAB) for mobile reception in Japan.

  9. Capacity-optimized mp2 audio watermarking

    NASA Astrophysics Data System (ADS)

    Steinebach, Martin; Dittmann, Jana

    2003-06-01

    Today a number of audio watermarking algorithms have been proposed, some of them at a quality making them suitable for commercial applications. The focus of most of these algorithms is copyright protection. Therefore, transparency and robustness are the most discussed and optimised parameters. But other applications for audio watermarking can also be identified that stress other parameters like complexity or payload. In our paper, we introduce a new mp2 audio watermarking algorithm optimised for high payload. Our algorithm uses the scale factors of an mp2 file for watermark embedding. They are grouped and masked based on a pseudo-random pattern generated from a secret key. In each group, we embed one bit. Depending on the bit to embed, we change the scale factors by adding 1 where necessary until the group includes either more even or more uneven scale factors. An uneven group has a 1 embedded, an even group a 0. The same rule is later applied to detect the watermark. The group size can be increased or decreased for a transparency/payload trade-off. We embed 160 bits or more per second in an mp2 file without reducing perceived quality. As an application example, we introduce a prototypic Karaoke system displaying song lyrics embedded as a watermark.
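
    The embedding rule described above can be sketched directly: within each key-selected group of scale factors, add 1 to values until the parity majority matches the bit to embed. This toy version works on plain integers and ignores the mp2 bitstream handling and psychoacoustic constraints:

        def extract_bit(group):
            """Majority parity of a group: 1 if more odd scale factors, else 0."""
            odd = sum(v % 2 for v in group)
            return 1 if 2 * odd > len(group) else 0

        def embed_bit(group, bit):
            """Add 1 to selected scale factors until the group's parity majority equals `bit`."""
            group = list(group)
            unwanted = 0 if bit else 1        # the parity we need fewer values of
            i = 0
            while extract_bit(group) != bit:
                if group[i] % 2 == unwanted:
                    group[i] += 1             # +1 flips this value's parity
                i += 1
            return group

        sf = [17, 22, 8, 13, 40, 6]           # stand-in scale factors for one group
        marked = embed_bit(sf, 1)
        print(marked, extract_bit(marked))    # odd-majority group decodes as bit 1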

  10. An active headrest for personal audio.

    PubMed

    Elliott, Stephen J; Jones, Matthew

    2006-05-01

    There is an increasing need for personal audio systems, which generate sounds that are clearly audible to one listener but are not audible to other listeners nearby. Of particular interest in this paper are listeners sitting in adjacent seats in aircraft or land vehicles. Although personal audio could then be achieved with headsets, it would be safer and more comfortable if loudspeakers in the seat headrests could be actively controlled to generate an acceptable level of acoustic isolation. In this paper a number of approaches to this problem are investigated, but the most successful involves a pair of loudspeakers on one side of the headrest, driven together to reproduce an audio signal for a listener in that seat and also to attenuate the pressures in the adjacent seat. The performance of this technique is investigated using simple analytic models and also with a practical implementation, tested in an anechoic chamber and a small room. It is found that significant attenuations, of between 5 and 25 dB, can be achieved in the crosstalk between the seats for frequencies up to about 2 kHz. PMID:16708929

  11. Audio processing technology for law enforcement

    NASA Astrophysics Data System (ADS)

    Walter, Sharon M.; Cofano, Maria; Ratley, Roy J.

    1999-01-01

    The Air Force Research Laboratory Multi-Sensor Exploitation Branch (AFRL/IFEC) has been a Department of Defense leader in research and development (R&D) in speech and audio processing for over 25 years. Their primary thrust in these R&D areas has focused on developing technology to improve the collection, handling, identification, and intelligibility of military communication signals. The National Law Enforcement and Corrections Technology Center for the Northeast (NLECTC-NE) is collocated with the AFRL Rome Research Site at the Griffiss Technology Park in upstate New York. The NLECTC-NE supports sixteen (16) states in the northeast sector of the United States, and is funded and supported by the National Institute of Justice (NIJ). Since the inception of the NLECTC-NE in 1995, the AFRL Rome Research Site has expanded the military applications of their expertise to address law enforcement and corrections requirements. AFRL/IFEC's speech and audio processing technology is unique and particularly appropriate for application to law enforcement requirements. It addresses the similar military need for time-critical decisions and actions, operation within noisy environments, and use by uncooperative speakers in tactical, real-time applications. Audio and speech processing technology for both application domains must also often deal with short-utterance communications (less than five seconds of speech) and transmission-to-transmission channel variability.

  12. Interaction with Machine Improvisation

    NASA Astrophysics Data System (ADS)

    Assayag, Gerard; Bloch, George; Cont, Arshia; Dubnov, Shlomo

    We describe two multi-agent architectures for improvisation-oriented musician-machine interaction systems that learn in real time from human performers. The improvisation kernel is based on sequence modeling and statistical learning. We present two frameworks of interaction with this kernel. In the first, the stylistic interaction is guided by a human operator in front of an interactive computer environment. In the second framework, the stylistic interaction is delegated to machine intelligence and therefore, knowledge propagation and decision are taken care of by the computer alone. The first framework involves a hybrid architecture using two popular composition/performance environments, Max and OpenMusic, that are put to work and communicate together, each one handling the process at a different time/memory scale. The second framework shares the same representational schemes with the first but uses an Active Learning architecture based on collaborative, competitive and memory-based learning to handle stylistic interactions. Both systems are capable of processing real-time audio/video as well as MIDI. After discussing the general cognitive background of improvisation practices, the statistical modelling tools and the concurrent agent architecture are presented. Then, an Active Learning scheme is described and considered in terms of using different improvisation regimes for improvisation planning. Finally, we provide more details about the different system implementations and describe several performances with the system.

  13. Text-to-Speech and Reading While Listening: Reading Support for Individuals with Severe Traumatic Brain Injury

    ERIC Educational Resources Information Center

    Harvey, Judy

    2013-01-01

    Individuals with severe traumatic brain injury (TBI) often have reading challenges. They maintain or reestablish basic decoding and word recognition skills following injury, but problems with reading comprehension often persist. Practitioners have the potential to accommodate struggling readers by changing the presentational mode of text in a…

  14. Supported eText: Effects of Text-to-Speech on Access and Achievement for High School Students with Disabilities

    ERIC Educational Resources Information Center

    Izzo, Margo Vreeburg; Yurick, Amanda; McArrell, Bianca

    2009-01-01

    Students with disabilities often lack the skills required to access the general education curriculum and achieve success in school and postschool environments. Evidence suggests that using assistive technologies such as digital texts and translational supports enhances outcomes for these students (Anderson-Inman & Horney, 2007). The purpose of the…

  16. A digital audio/video interleaving system. [for Shuttle Orbiter

    NASA Technical Reports Server (NTRS)

    Richards, R. W.

    1978-01-01

    A method of interleaving an audio signal with its associated video signal for simultaneous transmission or recording, and the subsequent separation of the two signals, is described. Comparisons are made between the new audio signal interleaving system and the Skylab PAM audio/video interleaving system, pointing out improvements gained by using the digital audio/video interleaving system. It was found that the digital technique is the simplest, most effective and most reliable method for interleaving audio and/or other types of data into the video signal for the Shuttle Orbiter application. Details of the design of a multiplexer capable of accommodating two basic data channels, each consisting of a single 31.5-kb/s digital bit stream, are given. An adaptive slope delta modulation system is introduced to digitize audio signals, producing a high immunity of word intelligibility to channel errors, primarily due to the robust nature of the delta-modulation algorithm.
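
    A minimal sketch of the delta-modulation idea behind the audio digitizer, using plain linear delta modulation with a fixed step; the flight system uses an adaptive-slope variant that tracks the signal more closely:

        import numpy as np

        def delta_modulate(signal, step=0.05):
            """Encode a signal as a 1-bit stream by tracking it with a fixed-step staircase."""
            bits, estimate = [], 0.0
            for sample in signal:
                bit = 1 if sample >= estimate else 0
                estimate += step if bit else -step
                bits.append(bit)
            return np.array(bits)

        def delta_demodulate(bits, step=0.05):
            """Rebuild the staircase approximation by integrating the same steps."""
            estimate, out = 0.0, []
            for bit in bits:
                estimate += step if bit else -step
                out.append(estimate)
            return np.array(out)

        t = np.linspace(0, 1, 2000)
        x = 0.5 * np.sin(2 * np.pi * 5 * t)
        y = delta_demodulate(delta_modulate(x))   # staircase reconstruction of x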

  17. The Digital Audio Editor as a Teaching and Laboratory Tool

    NASA Astrophysics Data System (ADS)

    Latta, Gregory

    2001-10-01

    Digital audio editors such as Software Audio Workshop and Cool Edit Pro are powerful tools used in the radio and audio recording fields for editing digital audio. However, they are also powerful tools in the physics classroom and laboratory. During this presentation the author will show how a digital audio editor, combined with a library of audio .wav files produced by the author as part of sabbatical work, can be used to: 1. demonstrate quantitatively and qualitatively the relationship between the decibel, sound intensity, and loudness perception, 2. demonstrate quantitatively and qualitatively the relationship between frequency and pitch perception, 3. perform additive and subtractive sound synthesis, 4. demonstrate comb filtering, 5. demonstrate constructive and destructive interference, and 6. turn the computer into an accurate signal generator (sine wave, square wave, etc.) with a frequency resolution of 1Hz. Availability of the required software and .wav file library will also be discussed.
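
    The "signal generator" use can be reproduced with a few lines of standard-library Python: synthesize a sine of a chosen frequency and write it as a .wav file that any audio editor will open (the parameters are arbitrary examples):

        import math
        import struct
        import wave

        fs, freq, dur, amp = 44100, 440, 2.0, 0.5   # sample rate, Hz, seconds, level

        frames = bytearray()
        for n in range(int(fs * dur)):
            sample = int(amp * 32767 * math.sin(2 * math.pi * freq * n / fs))
            frames += struct.pack("<h", sample)      # 16-bit little-endian PCM

        with wave.open("tone_440Hz.wav", "wb") as wav:
            wav.setnchannels(1)
            wav.setsampwidth(2)
            wav.setframerate(fs)
            wav.writeframes(bytes(frames))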

  18. A haptic-inspired audio approach for structural health monitoring decision-making

    NASA Astrophysics Data System (ADS)

    Mao, Zhu; Todd, Michael; Mascareñas, David

    2015-03-01

    Haptics is the field at the interface of human touch (tactile sensation) and classification, whereby tactile feedback is used to train and inform a decision-making process. In structural health monitoring (SHM) applications, haptic devices have been introduced and applied in a simplified laboratory-scale scenario, in which nonlinearity, representing the presence of damage, was encoded into a vibratory manual interface. In this paper, the "spirit" of haptics is adopted, but here ultrasonic guided wave scattering information is transformed into audio-range (rather than tactile) signals. After sufficient training, the structural damage condition, including occurrence and location, can be identified through the encoded audio waveforms. Different algorithms are employed in this paper to generate the transformed audio signals; the performance of each encoding algorithm is compared, and also compared with standard machine learning classifiers. In the long run, the haptic decision-making approach aims to detect and classify structural damage in more rigorous environments, approaching a baseline-free fashion with embedded temperature compensation.

  19. A content-based digital audio watermarking algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Liping; Zhao, Yi; Xu, Wen Li

    2015-12-01

    Digital audio watermarking embeds inaudible information into digital audio data for the purposes of copyright protection, ownership verification, covert communication, and/or auxiliary data carrying. In this paper, we present a novel watermarking scheme to embed a meaningful gray image into digital audio by quantizing the wavelet coefficients (using an integer lifting wavelet transform) of audio samples. Our audio-dependent watermarking procedure directly exploits temporal and frequency perceptual masking of the human auditory system (HAS) to guarantee that the embedded watermark image is inaudible and robust. The watermark is constructed by utilizing a still image compression technique, breaking each audio clip into smaller segments, selecting the perceptually significant audio segments to wavelet transform, and quantizing the perceptually significant wavelet coefficients. The proposed watermarking algorithm can extract the watermark image without help from the original digital audio signals. We also demonstrate the robustness of the watermarking procedure to audio degradations and distortions, e.g., those that result from noise addition, MPEG compression, low-pass filtering, resampling, and requantization.

  20. ISDN audio-graphics teleconferencing systems

    NASA Astrophysics Data System (ADS)

    Tanaka, Kiyoto; Oyaizu, Ikuro

    1993-10-01

    An audio-graphics teleconferencing system has been developed that uses ordinary personal computers interconnected over a basic rate (2B + D) ISDN line. The system supports high-speed transmission of 200-dpi resolution documents read in by an optical scanner and presented on the displays of the conference participants. While looking at the same material, the conferees can interactively converse and make handwritten notations on the document via an LCD tablet for all the participants to see. We describe the configuration and performance of the system, focusing mainly on the ISDN based multimedia transmission method and the method for reducing and enlarging binary images.

  1. Instructional Audio Guidelines: Four Design Principles to Consider for Every Instructional Audio Design Effort

    ERIC Educational Resources Information Center

    Carter, Curtis W.

    2012-01-01

    This article contends that instructional designers and developers should attend to four particular design principles when creating instructional audio. Support for this view is presented by referencing the limited research that has been done in this area, and by indicating how and why each of the four principles is important to the design process.…

  3. Investigating the impact of audio instruction and audio-visual biofeedback for lung cancer radiation therapy

    NASA Astrophysics Data System (ADS)

    George, Rohini

    Lung cancer accounts for 13% of all cancers in the United States and is the leading cause of cancer deaths among both men and women. The five-year survival for lung cancer patients is approximately 15% (ACS Facts & Figures). Respiratory motion decreases the accuracy of thoracic radiotherapy during imaging and delivery. To account for respiration, margins are generally added during radiation treatment planning, which may cause substantial dose delivery to normal tissues and increase normal tissue toxicity. To alleviate these effects of respiratory motion, several motion management techniques are available that can reduce the doses to normal tissues, thereby reducing treatment toxicity and allowing dose escalation to the tumor. This may increase the survival probability of lung cancer patients receiving radiation therapy. However, the accuracy of these motion management techniques is limited by respiratory irregularity. The rationale of this thesis was to study whether breathing coaching using audio instructions and audio-visual biofeedback improves the regularity of respiratory motion in lung cancer patients. A total of 331 patient respiratory motion traces, each four minutes in length, were collected from 24 lung cancer patients enrolled in an IRB-approved breathing-training protocol. It was determined that audio-visual biofeedback significantly improved the regularity of respiratory motion compared to free breathing and audio instruction, thus improving the accuracy of respiratory-gated radiotherapy. It was also observed that duty cycles below 30% showed an insignificant reduction in residual motion, while above 50% there was a sharp increase in residual motion. The reproducibility of exhale-based gating was higher than that of inhale-based gating. When the respiratory cycles were modeled, the cosine and cosine-to-the-fourth-power models correlated best with individual respiratory cycles, and the overall respiratory motion probability distribution function could be approximated by a normal distribution. A statistical analysis was also performed to investigate whether a patient's physical, tumor, or general characteristics played a role in identifying whether he or she responded positively to the coaching type, as signified by a reduction in the variability of respiratory motion. The analysis demonstrated that, although some characteristics such as disease type and dose per fraction were significant in the time-independent analysis, no significant time trends were observed in the inter-session or intra-session analysis. Based on patient feedback on the audio-visual biofeedback system used for the study and on research into other feedback systems, an improved audio-visual biofeedback system was designed. It is hoped that widespread clinical implementation of audio-visual biofeedback for radiotherapy will improve the accuracy of lung cancer radiotherapy.

  4. Machine Learning.

    ERIC Educational Resources Information Center

    Kirrane, Diane E.

    1990-01-01

    As scientists seek to develop machines that can "learn," that is, solve problems by imitating the human brain, a gold mine of information on the processes of human learning is being discovered, expert systems are being improved, and human-machine interactions are being enhanced. (SK)

  5. Electric machine

    DOEpatents

    El-Refaie, Ayman Mohamed Fawzi; Reddy, Patel Bhageerath

    2012-07-17

    An interior permanent magnet electric machine is disclosed. The interior permanent magnet electric machine comprises a rotor comprising a plurality of radially placed magnets each having a proximal end and a distal end, wherein each magnet comprises a plurality of magnetic segments and at least one magnetic segment towards the distal end comprises a high resistivity magnetic material.

  6. Cortical Integration of Audio-Visual Information

    PubMed Central

    Vander Wyk, Brent C.; Ramsay, Gordon J.; Hudac, Caitlin M.; Jones, Warren; Lin, David; Klin, Ami; Lee, Su Mei; Pelphrey, Kevin A.

    2013-01-01

    We investigated the neural basis of audio-visual processing in speech and non-speech stimuli. Physically identical auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses) were used in this fMRI experiment. Relative to unimodal stimuli, each of the multimodal conjunctions showed increased activation in largely non-overlapping areas. The conjunction of Ellipse and Speech, which most resembles naturalistic audiovisual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. The conjunction of Circle and Tone, an arbitrary audio-visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. The conjunction of Circle and Speech showed activation in lateral occipital cortex, and the conjunction of Ellipse and Tone did not show increased activation relative to unimodal stimuli. Further analysis revealed that middle temporal regions, although identified as multimodal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multimodal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which speech or non-speech percepts are evoked. PMID:20709442

  7. Segmentation of expiratory and inspiratory sounds in baby cry audio recordings using hidden Markov models.

    PubMed

    Aucouturier, Jean-Julien; Nonaka, Yulri; Katahira, Kentaro; Okanoya, Kazuo

    2011-11-01

    The paper describes an application of machine learning techniques to identify expiratory and inspiratory phases in audio recordings of human baby cries. Crying episodes were recorded from 14 infants, spanning four vocalization contexts in their first 12 months of age; recordings from three individuals were annotated manually to identify expiratory and inspiratory sounds and used as training examples to automatically segment the recordings of the other 11 individuals. The proposed algorithm uses a hidden Markov model architecture, in which state likelihoods are estimated either with Gaussian mixture models or by converting the classification decisions of a support vector machine. The algorithm yields up to 95% classification precision (86% average), and it generalizes across different babies, ages, and vocalization contexts. The technique offers an opportunity to quantify expiration duration, crying rate, and other time-related characteristics of baby crying for screening, diagnosis, and research purposes over large populations of infants. PMID:22087925
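
    As a loose illustration of the segmentation stage (the paper's features and its GMM/SVM likelihood estimators are not reproduced here; names and parameters are assumptions), a two-state Gaussian HMM from the hmmlearn package can label per-frame acoustic features:

        # Illustrative only: unsupervised two-state HMM over frame features such as MFCCs;
        # the paper instead trains on manually annotated expiration/inspiration frames.
        from hmmlearn import hmm

        def segment_breathing(features, n_states=2, seed=0):
            # features: (n_frames, n_dims) array of acoustic features
            model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                                    n_iter=50, random_state=seed)
            model.fit(features)
            # Viterbi state sequence; which state is expiration vs. inspiration must
            # still be identified afterwards (e.g., from frame energy).
            return model.predict(features)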

  8. Effect of Audio vs. Video on Aural Discrimination of Vowels

    ERIC Educational Resources Information Center

    McCrocklin, Shannon

    2012-01-01

    Despite the growing use of media in the classroom, the effects of using audio versus video in pronunciation teaching have been largely ignored. To analyze the impact of the use of audio or video training on aural discrimination of vowels, 61 participants (all students at a large American university) took a pre-test followed by two training…

  9. Beyond Podcasting: Creative Approaches to Designing Educational Audio

    ERIC Educational Resources Information Center

    Middleton, Andrew

    2009-01-01

    This paper discusses a university-wide pilot designed to encourage academics to creatively explore learner-centred applications for digital audio. Participation in the pilot was diverse in terms of technical competence, confidence and contextual requirements and there was little prior experience of working with digital audio. Many innovative…

  10. A Case Study on Audio Feedback with Geography Undergraduates

    ERIC Educational Resources Information Center

    Rodway-Dyer, Sue; Knight, Jasper; Dunne, Elizabeth

    2011-01-01

    Several small-scale studies have suggested that audio feedback can help students to reflect on their learning and to develop deep learning approaches that are associated with higher attainment in assessments. For this case study, Geography undergraduates were given audio feedback on a written essay assignment, alongside traditional written…

  11. Use of Video and Audio Texts in EFL Listening Test

    ERIC Educational Resources Information Center

    Basal, Ahmet; Gülözer, Kaine; Demir, Ibrahim

    2015-01-01

    The study aims to discover whether audio or video modality in a listening test is more beneficial to test takers. In this study, the posttest-only control group design was utilized and quantitative data were collected in order to measure participant performances concerning two types of modality (audio or video) in a listening test. The…

  12. Use of Audio Modification in Science Vocabulary Assessment

    ERIC Educational Resources Information Center

    Adiguzel, Tufan

    2011-01-01

    The purposes of this study were to examine the utilization of audio modification in vocabulary assessment in school subject areas, specifically in elementary science, and to present a web-based key vocabulary assessment tool for the elementary school level. Audio-recorded readings were used to replace independent student readings as the task…

  13. Using Audio Books to Improve Reading and Academic Performance

    ERIC Educational Resources Information Center

    Montgomery, Joel R.

    2009-01-01

    This article highlights significant research about what below grade-level reading means in middle school classrooms and suggests a tested approach to improve reading comprehension levels significantly by using audio books. The use of these audio books can improve reading and academic performance for both English language learners (ELLs) and for…

  14. Effective Use of Audio Media in Multimedia Presentations.

    ERIC Educational Resources Information Center

    Kerr, Brenda

    This paper emphasizes research-based reasons for adding audio to multimedia presentations. The first section summarizes suggestions from a review of research on the effectiveness of audio media when accompanied by other forms of media; types of research studies (e.g., evaluation, intra-medium, and aptitude treatment interaction studies) are also…

  15. Audio Podcasting in a Tablet PC-Enhanced Biochemistry Course

    ERIC Educational Resources Information Center

    Lyles, Heather; Robertson, Brian; Mangino, Michael; Cox, James R.

    2007-01-01

    This report describes the effects of making audio podcasts of all lectures in a large, basic biochemistry course promptly available to students. The audio podcasts complement a previously described approach in which a tablet PC is used to annotate PowerPoint slides with digital ink to produce electronic notes that can be archived. The fundamentals…

  16. Getting Started with CD Audio in HyperCard.

    ERIC Educational Resources Information Center

    Decker, Donald A.

    1992-01-01

    This article examines the use of the Voyager Compact Disk (CD) AudioStack to provide HyperCard stacks designed to promote language learning with the ability to play precisely specified portions of off-the-shelf audio compact disks in a CD-ROM drive. Four German and Russian HyperCard stacks are described and their construction outlined.…

  17. Audio Design: Creating Multi-sensory Images for the Mind.

    ERIC Educational Resources Information Center

    Ferrington, Gary

    1994-01-01

    Explores the concept of "theater of the mind" and discusses design factors in creating audio works that effectively stimulate mental pictures, including: narrative format in audio scripting; qualities of voice; use of concrete language; music; noise versus silence; and the creation of the illusion of space using monaural, stereophonic, and…

  18. An Audio Stream Redirector for the Ethernet Speaker

    ERIC Educational Resources Information Center

    Mandrekar, Ishan; Prevelakis, Vassilis; Turner, David Michael

    2004-01-01

    The authors have developed the "Ethernet Speaker" (ES), a network-enabled single board computer embedded into a conventional audio speaker. Audio streams are transmitted in the local area network using multicast packets, and the ES can select any one of them and play it back. A key requirement for the ES is that it must be capable of playing any…

  19. Technical Evaluation Report. 65. Video-Conferencing with Audio Software

    ERIC Educational Resources Information Center

    Baggaley, Jon; Klaas, Jim

    2006-01-01

    An online conference is illustrated using the format of a TV talk show. The conference combined live audio discussion with visual images spontaneously selected by the moderator in the manner of a TV control-room director. A combination of inexpensive online collaborative tools was used for the event, based on the browser-based audio-conferencing…

  20. Making the Most of Audio. Technology in Language Learning Series.

    ERIC Educational Resources Information Center

    Barley, Anthony

    Prepared for practicing language teachers, this book's aim is to help them make the most of audio, a readily accessible resource. The book shows, with the help of numerous practical examples, how a range of language skills can be developed. Most examples are in French. Chapters cover the following information: (1) making the most of audio (e.g.,…

  2. 47 CFR 10.520 - Common audio attention signal.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Common audio attention signal. 10.520 Section 10.520 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL COMMERCIAL MOBILE ALERT SYSTEM... audio attention signal must be restricted to use for Alert Messages under part 10. (e) A device...

  3. 47 CFR 10.520 - Common audio attention signal.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 1 2012-10-01 2012-10-01 false Common audio attention signal. 10.520 Section 10.520 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL COMMERCIAL MOBILE ALERT SYSTEM... audio attention signal must be restricted to use for Alert Messages under part 10. (e) A device...

  4. 47 CFR 10.520 - Common audio attention signal.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 1 2014-10-01 2014-10-01 false Common audio attention signal. 10.520 Section 10.520 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL WIRELESS EMERGENCY ALERTS... audio attention signal must be restricted to use for Alert Messages under part 10. (e) A device...

  5. 47 CFR 10.520 - Common audio attention signal.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 1 2013-10-01 2013-10-01 false Common audio attention signal. 10.520 Section 10.520 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL WIRELESS EMERGENCY ALERTS... audio attention signal must be restricted to use for Alert Messages under part 10. (e) A device...

  6. The Audio-Visual Equipment Directory. Seventeenth Edition.

    ERIC Educational Resources Information Center

    Herickes, Sally, Ed.

    The following types of audiovisual equipment are catalogued: 8 mm. and 16 mm. motion picture projectors, filmstrip and sound filmstrip projectors, slide projectors, random access projection equipment, opaque, overhead, and micro-projectors, record players, special purpose projection equipment, audio tape recorders and players, audio tape…

  7. The Effect of Audio and Animation in Multimedia Instruction

    ERIC Educational Resources Information Center

    Koroghlanian, Carol; Klein, James D.

    2004-01-01

    This study investigated the effects of audio, animation, and spatial ability in a multimedia computer program for high school biology. Participants completed a multimedia program that presented content by way of text or audio with lean text. In addition, several instructional sequences were presented either with static illustrations or animations.…

  8. Tune in the Net with RealAudio.

    ERIC Educational Resources Information Center

    Buchanan, Larry

    1997-01-01

    Describes how to connect to the RealAudio Web site to download a player that provides sound from Web pages to the computer through streaming technology. Explains hardware and software requirements and provides addresses for other RealAudio Web sites, including weather information and current news. (LRW)

  9. Selected Audio-Visual Materials for Consumer Education. [New Version.

    ERIC Educational Resources Information Center

    Johnston, William L.

    Ninety-two films, filmstrips, multi-media kits, slides, and audio cassettes, produced between 1964 and 1974, are listed in this selective annotated bibliography on consumer education. The major portion of the bibliography is devoted to films and filmstrips. The main topics of the audio-visual materials include purchasing, advertising, money…

  10. Recent Audio-Visual Materials on the Soviet Union.

    ERIC Educational Resources Information Center

    Clarke, Edith Campbell

    1981-01-01

    Identifies and describes audio-visual materials (films, filmstrips, and audio cassette tapes) about the Soviet Union which have been produced since 1977. For each entry, information is presented on title, time required, date of release, cost (purchase and rental), and an abstract. (DB)

  11. Layered indexing of home video based on audio signals

    NASA Astrophysics Data System (ADS)

    Ogawa, Tomomi; Aizawa, Kiyoharu

    2003-12-01

    In this paper, we propose a layered audio-based indexing method for home video that detects events using both a rules-based method and a GMM-based method. Conventional approaches segment and classify audio into mutually exclusive classes, but in practice various sounds overlap; such overlapping audio is expressed here by a layered labeling index. In the rules-based method, low-level audio features are used to determine indexes among four classes: speech, silence, music, and environment noise (EVN). The GMM-based method, which uses the same features as the rules-based method, also classifies the audio into the four classes, and smoothing is applied to determine the final index. We present experiments on several home video recordings.
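
    As a rough sketch of the GMM-based labelling described above (the features, mixture sizes, and smoothing window here are assumptions, not the authors' settings), one Gaussian mixture can be trained per class and each frame labelled by maximum likelihood with simple majority-vote smoothing:

        # Hedged sketch using scikit-learn; not the paper's implementation.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        CLASSES = ["speech", "silence", "music", "environment_noise"]

        def train_models(features_per_class, n_components=4):
            # features_per_class: dict mapping each class name to (n_frames, n_dims) features
            return {c: GaussianMixture(n_components=n_components).fit(X)
                    for c, X in features_per_class.items()}

        def classify_frames(models, features, smooth=11):
            scores = np.stack([models[c].score_samples(features) for c in CLASSES])
            raw = scores.argmax(axis=0)                    # per-frame maximum-likelihood class
            half = smooth // 2
            smoothed = [np.bincount(raw[max(0, i - half): i + half + 1]).argmax()
                        for i in range(len(raw))]          # majority vote over a sliding window
            return [CLASSES[k] for k in smoothed]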

  12. Horatio Audio-Describes Shakespeare's "Hamlet": Blind and Low-Vision Theatre-Goers Evaluate an Unconventional Audio Description Strategy

    ERIC Educational Resources Information Center

    Udo, J. P.; Acevedo, B.; Fels, D. I.

    2010-01-01

    Audio description (AD) has been introduced as one solution for providing people who are blind or have low vision with access to live theatre, film and television content. However, there is little research to inform the process, user preferences and presentation style. We present a study of a single live audio-described performance of Hart House…

  13. Audio-visual biofeedback for respiratory-gated radiotherapy: Impact of audio instruction and audio-visual biofeedback on respiratory-gated radiotherapy

    SciTech Connect

    George, Rohini; Chung, Theodore D.; Vedam, Sastry S.; Ramakrishnan, Viswanathan; Mohan, Radhe; Weiss, Elisabeth; Keall, Paul J. . E-mail: pjkeall@vcu.edu

    2006-07-01

    Purpose: Respiratory gating is a commercially available technology for reducing the deleterious effects of motion during imaging and treatment. The efficacy of gating is dependent on the reproducibility within and between respiratory cycles during imaging and treatment. The aim of this study was to determine whether audio-visual biofeedback can improve respiratory reproducibility by decreasing residual motion and therefore increasing the accuracy of gated radiotherapy. Methods and Materials: A total of 331 respiratory traces were collected from 24 lung cancer patients. The protocol consisted of five breathing training sessions spaced about a week apart. Within each session the patients initially breathed without any instruction (free breathing), with audio instructions and with audio-visual biofeedback. Residual motion was quantified by the standard deviation of the respiratory signal within the gating window. Results: Audio-visual biofeedback significantly reduced residual motion compared with free breathing and audio instruction. Displacement-based gating has lower residual motion than phase-based gating. Little reduction in residual motion was found for duty cycles less than 30%; for duty cycles above 50% there was a sharp increase in residual motion. Conclusions: The efficiency and reproducibility of gating can be improved by: incorporating audio-visual biofeedback, using a 30-50% duty cycle, gating during exhalation, and using displacement-based gating.
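
    The residual-motion metric quoted above is simple to state; a minimal sketch for displacement-based gating (assuming the gate is placed around end-exhale, i.e., the lowest displacement values, with the window fixed by the duty cycle) might read:

        # Sketch of the metric only: standard deviation of the respiratory displacement
        # samples that fall inside the gating window for a given duty cycle.
        import numpy as np

        def residual_motion(displacement, duty_cycle=0.4):
            threshold = np.quantile(displacement, duty_cycle)   # window around end-exhale
            gated = displacement[displacement <= threshold]     # beam-on samples
            return np.std(gated)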

  14. A direct broadcast satellite-audio experiment

    NASA Technical Reports Server (NTRS)

    Vaisnys, Arvydas; Abbe, Brian; Motamedi, Masoud

    1992-01-01

    System studies have been carried out over the past three years at the Jet Propulsion Laboratory (JPL) on digital audio broadcasting (DAB) via satellite. The thrust of the work to date has been on designing power and bandwidth efficient systems capable of providing reliable service to fixed, mobile, and portable radios. It is very difficult to predict performance in an environment which produces random periods of signal blockage, such as encountered in mobile reception where a vehicle can quickly move from one type of terrain to another. For this reason, some signal blockage mitigation techniques were built into an experimental DAB system and a satellite experiment was conducted to obtain both qualitative and quantitative measures of performance in a range of reception environments. This paper presents results from the experiment and some conclusions on the effectiveness of these blockage mitigation techniques.

  15. Robust audio-visual speech recognition under noisy audio-video conditions.

    PubMed

    Stewart, Darryl; Seymour, Rowan; Pass, Adrian; Ming, Ji

    2014-02-01

    This paper presents the maximum weighted stream posterior (MWSP) model as a robust and efficient stream integration method for audio-visual speech recognition in environments where the audio or video streams may be subjected to unknown and time-varying corruption. A significant advantage of MWSP is that it does not require any specific measurements of the signal in either stream to calculate appropriate stream weights during recognition, and as such it is modality-independent. This also means that MWSP complements and can be used alongside many of the other approaches that have been proposed in the literature for this problem. For evaluation we used the large XM2VTS database for speaker-independent audio-visual speech recognition. The extensive tests include both clean and corrupted utterances with corruption added in either or both the video and audio streams using a variety of types (e.g., MPEG-4 video compression) and levels of noise. The experiments show that this approach gives excellent performance in comparison to another well-known dynamic stream weighting approach and also compared to any fixed-weighted integration approach, both in clean conditions and when noise is added to either stream. Furthermore, our experiments show that the MWSP approach dynamically selects suitable integration weights on a frame-by-frame basis according to the level of noise in the streams and also according to the naturally fluctuating relative reliability of the modalities, even in clean conditions. The MWSP approach is shown to maintain robust recognition performance in all tested conditions, while requiring no prior knowledge about the type or level of noise. PMID:23757540
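
    The MWSP weight-selection rule itself is not reproduced here, but the general mechanics of dynamic stream weighting that it builds on can be sketched as follows (a generic illustration; the weight grid and selection heuristic are assumptions, not the published algorithm):

        # Generic frame-by-frame stream weighting: combine audio and video class
        # log-likelihoods with a weight in [0, 1] chosen per frame.
        import numpy as np

        def pick_weights(loglik_audio, loglik_video, grid=np.linspace(0.0, 1.0, 11)):
            # loglik_*: (n_frames, n_classes) arrays; returns one weight per frame
            weights = np.zeros(loglik_audio.shape[0])
            for i in range(loglik_audio.shape[0]):
                best_post, best_lam = -1.0, 0.5
                for lam in grid:
                    combined = lam * loglik_audio[i] + (1.0 - lam) * loglik_video[i]
                    post = np.exp(combined - combined.max())
                    post /= post.sum()
                    if post.max() > best_post:              # favour the most confident fusion
                        best_post, best_lam = post.max(), lam
                weights[i] = best_lam
            return weights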

  16. Noise-Canceling Helmet Audio System

    NASA Technical Reports Server (NTRS)

    Seibert, Marc A.; Culotta, Anthony J.

    2007-01-01

    A prototype helmet audio system has been developed to improve voice communication for the wearer in a noisy environment. The system was originally intended to be used in a space suit, wherein noise generated by airflow of the spacesuit life-support system can make it difficult for remote listeners to understand the astronaut's speech and can interfere with the astronaut's attempt to issue vocal commands to a voice-controlled robot. The system could be adapted to terrestrial use in helmets of protective suits that are typically worn in noisy settings: examples include biohazard, fire, rescue, and diving suits. The system includes an array of microphones and small loudspeakers mounted at fixed positions in a helmet, amplifiers and signal-routing circuitry, and a commercial digital signal processor (DSP). Notwithstanding the fixed positions of the microphones and loudspeakers, the system can accommodate itself to any normal motion of the wearer's head within the helmet. The system operates in conjunction with a radio transceiver. An audio signal arriving via the transceiver intended to be heard by the wearer is adjusted in volume and otherwise conditioned and sent to the loudspeakers. The wearer's speech is collected by the microphones, the outputs of which are logically combined (phased) so as to form a microphone-array directional sensitivity pattern that discriminates in favor of sounds coming from the vicinity of the wearer's mouth and against sounds coming from elsewhere. In the DSP, digitized samples of the microphone outputs are processed to filter out airflow noise and to eliminate feedback from the loudspeakers to the microphones. The resulting conditioned version of the wearer's speech signal is sent to the transceiver.
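
    The report does not spell out the DSP processing, but one standard way to "phase" a microphone array toward the wearer's mouth is a delay-and-sum beamformer; the following sketch (with invented names and integer-sample delays) shows the idea:

        # Hedged illustration only: align and average the channels so that sound from
        # the look direction adds coherently while off-axis sound partially cancels.
        import numpy as np

        def delay_and_sum(mic_signals, delays_samples):
            # mic_signals: list of equal-length 1-D arrays; delays_samples: per-mic integer
            # delays that time-align arrivals from the look direction (e.g., the mouth).
            out = np.zeros(len(mic_signals[0]))
            for sig, d in zip(mic_signals, delays_samples):
                out += np.roll(sig, -d)        # np.roll wraps around; acceptable for a sketch
            return out / len(mic_signals)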

  17. Space Shuttle Orbiter audio subsystem. [to communication and tracking system

    NASA Technical Reports Server (NTRS)

    Stewart, C. H.

    1978-01-01

    The selection of the audio multiplex control configuration for the Space Shuttle Orbiter audio subsystem is discussed and special attention is given to the evaluation criteria of cost, weight and complexity. The specifications and design of the subsystem are described and detail is given to configurations of the audio terminal and audio central control unit (ATU, ACCU). The audio input from the ACCU, at a signal level of -12.2 to 14.8 dBV, nominal range, at 1 kHz, was found to have balanced source impedance and a balanced local impedance of 6000 ± 600 ohms at 1 kHz, dc isolated. The Lyndon B. Johnson Space Center (JSC) electroacoustic test laboratory, an audio engineering facility consisting of a collection of acoustic test chambers, analyzed problems of speaker and headset performance, multiplexed control data coupled with audio channels, and the Orbiter cabin acoustic effects on the operational performance of voice communications. This system allows technical management and project engineering to address key constraining issues, such as identifying design deficiencies of the headset interface unit and the assessment of the Orbiter cabin performance of voice communications, which affect the subsystem development.

  18. Effects of aging on audio-visual speech integration.

    PubMed

    Huyse, Aurélie; Leybaert, Jacqueline; Berthommier, Frédéric

    2014-10-01

    This study investigated the impact of aging on audio-visual speech integration. A syllable identification task was presented in auditory-only, visual-only, and audio-visual congruent and incongruent conditions. Visual cues were either degraded or unmodified. Stimuli were embedded in stationary noise alternating with modulated noise. Fifteen young adults and 15 older adults participated in this study. Results showed that older adults had preserved lipreading abilities when the visual input was clear but not when it was degraded. The impact of aging on audio-visual integration also depended on the quality of the visual cues. In the visual clear condition, the audio-visual gain was similar in both groups and analyses in the framework of the fuzzy-logical model of perception confirmed that older adults did not differ from younger adults in their audio-visual integration abilities. In the visual reduction condition, the audio-visual gain was reduced in the older group, but only when the noise was stationary, suggesting that older participants could compensate for the loss of lipreading abilities by using the auditory information available in the valleys of the noise. The fuzzy-logical model of perception confirmed the significant impact of aging on audio-visual integration by showing an increased weight of audition in the older group. PMID:25324091
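
    For readers unfamiliar with the fuzzy-logical model of perception (FLMP) used in the analysis, its standard combination rule multiplies the unimodal supports and normalises across response alternatives; a small sketch (not the authors' fitting code) is:

        # FLMP combination for a two-alternative syllable identification task.
        import numpy as np

        def flmp_response_probs(audio_support, visual_support):
            # per-alternative supports in [0, 1] from each modality
            fused = np.asarray(audio_support, float) * np.asarray(visual_support, float)
            return fused / fused.sum()

        # Example: strong auditory evidence for alternative 1, weak visual evidence.
        print(flmp_response_probs([0.9, 0.1], [0.4, 0.6]))   # ~[0.857, 0.143]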

  19. Noninvertible watermarking methods for MPEG-encoded audio

    NASA Astrophysics Data System (ADS)

    Qiao, Lintian; Nahrstedt, Klara

    1999-04-01

    Multimedia technology in distributed environments has become a reality, and the issue of multimedia copyright protection is becoming more and more important. Various digital watermarking techniques have been proposed in recent years as methods to protect the copyright of multimedia data. Although, conceptually, these techniques can easily be extended to protect digital audio data, it is challenging to apply them to MPEG Audio streams because the watermarking schemes need to work directly in the compressed data domain. In this paper, we present watermarking methods that embed the watermark directly into MPEG audio bit streams rather than going through an expensive decoding/encoding process to apply watermarking in the uncompressed data domain. Of the two presented schemes, one embeds the watermark into the scale factors of the MPEG audio streams and the other embeds the watermark into the MPEG-encoded samples. Our experimental results show that both methods perform well and that the distortion can be kept at a minimal level. While we use MPEG Audio Layer II streams in our experimental tests, the proposed schemes can be applied to MPEG Audio Layers I and III. Furthermore, by enforcing creation of the watermark through a standard encryption function such as DES, the proposed schemes can help resolve rightful ownership of watermarked MPEG audio.

  20. A Low-Cost Audio Prescription Labeling System Using RFID for Thai Visually-Impaired People.

    PubMed

    Lertwiriyaprapa, Titipong; Fakkheow, Pirapong

    2015-01-01

    This research aims to develop a low-cost audio prescription labeling (APL) system for visually-impaired people by using the RFID system. The developed APL system includes the APL machine and APL software. The APL machine is for visually-impaired people, while the APL software allows caregivers to record all important information into the APL machine. The main objective of the APL machine design is to reduce cost and size by fitting all of the electronic devices onto one printed circuit board. It is also designed to be easy to use and to serve as an electronic aid for daily living. The developed APL software is based on Java and MySQL, both of which can operate on various operating platforms and are easy to develop as commercial software. The developed APL system was first evaluated by 5 experts. The APL system was also evaluated by 50 actual visually-impaired people (30 elders and 20 blind individuals) and 20 caregivers, pharmacists, and nurses. After using the APL system, evaluations were carried out, and it can be concluded from the evaluation results that the proposed APL system can effectively help visually-impaired people with self-medication. PMID:26427743

  1. Musical examination to bridge audio data and sheet music

    NASA Astrophysics Data System (ADS)

    Pan, Xunyu; Cross, Timothy J.; Xiao, Liangliang; Hei, Xiali

    2015-03-01

    The digitalization of audio is commonly implemented for the purpose of convenient storage and transmission of music and songs in today's digital age. Analyzing digital audio for an insightful look at a specific musical characteristic, however, can be quite challenging for various types of applications. Many existing musical analysis techniques can examine a particular piece of audio data. For example, the frequency of digital sound can be easily read and identified at a specific section in an audio file. Based on this information, we could determine the musical note being played at that instant, but what if you want to see a list of all the notes played in a song? While most existing methods help to provide information about a single piece of the audio data at a time, few of them can analyze the available audio file on a larger scale. The research conducted in this work considers how to further utilize the examination of audio data by storing more information from the original audio file. In practice, we develop a novel musical analysis system Musicians Aid to process musical representation and examination of audio data. Musicians Aid solves the previous problem by storing and analyzing the audio information as it reads it rather than tossing it aside. The system can provide professional musicians with an insightful look at the music they created and advance their understanding of their work. Amateur musicians could also benefit from using it solely for the purpose of obtaining feedback about a song they were attempting to play. By comparing our system's interpretation of traditional sheet music with their own playing, a musician could ensure what they played was correct. More specifically, the system could show them exactly where they went wrong and how to adjust their mistakes. In addition, the application could be extended over the Internet to allow users to play music with one another and then review the audio data they produced. This would be particularly useful for teaching music lessons on the web. The developed system is evaluated with songs played with guitar, keyboard, violin, and other popular musical instruments (primarily electronic or stringed instruments). The Musicians Aid system is successful at both representing and analyzing audio data and it is also powerful in assisting individuals interested in learning and understanding music.
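
    The record does not describe the internals of Musicians Aid, but the step it alludes to, turning a detected frequency into the note being played, is a one-line conversion through the MIDI note number; the helper below is purely illustrative:

        # note = 69 + 12 * log2(f / 440 Hz), rounded to the nearest semitone.
        import math

        NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

        def freq_to_note(freq_hz):
            midi = int(round(69 + 12 * math.log2(freq_hz / 440.0)))
            return f"{NOTE_NAMES[midi % 12]}{midi // 12 - 1}"   # MIDI 60 -> C4

        print(freq_to_note(440.0))    # A4
        print(freq_to_note(261.63))   # C4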

  2. Survey of compressed domain audio features and their expressiveness

    NASA Astrophysics Data System (ADS)

    Pfeiffer, Silvia; Vincent, Thomas

    2003-01-01

    We give an overview of existing audio analysis approaches in the compressed domain and incorporate them into a coherent formal structure. After examining the kinds of information accessible in an MPEG-1 compressed audio stream, we describe a coherent approach to deriving features from them and report on a number of applications they enable. Most of these aim at creating an index to the audio stream by segmenting it into temporally coherent regions, which may be classified into pre-specified types of sounds such as music, speech, speakers, animal sounds, sound effects, or silence. Other applications centre on recognition tasks such as gender, beat, or speech recognition.

  3. Workout Machine

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The Orbotron is a tri-axle exercise machine patterned after a NASA training simulator for astronaut orientation in the microgravity of space. It has three orbiting rings corresponding to roll, pitch and yaw. The user is in the middle of the inner ring with the stomach remaining in the center of all axes, eliminating dizziness. Human power starts the rings spinning, unlike the NASA air-powered system. Marketed by Fantasy Factory (formerly Orbotron, Inc.), the machine can improve aerobic capacity, strength and endurance in five to seven minute workouts.

  4. 37 CFR 201.28 - Statements of Account for digital audio recording devices or media.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... digital audio recording devices or media. 201.28 Section 201.28 Patents, Trademarks, and Copyrights... of Account for digital audio recording devices or media. (a) General. This section prescribes rules... United States any digital audio recording device or digital audio recording medium. (b) Definitions....

  5. 17 CFR 232.304 - Graphic, image, audio and video material.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 17 Commodity and Securities Exchanges 2 2010-04-01 2010-04-01 false Graphic, image, audio and... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio..., image, audio or video material is presented in the delivered version, or they may be listed in...

  6. 17 CFR 232.304 - Graphic, image, audio and video material.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 17 Commodity and Securities Exchanges 3 2014-04-01 2014-04-01 false Graphic, image, audio and... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio..., image, audio or video material is presented in the delivered version, or they may be listed in...

  7. 17 CFR 232.304 - Graphic, image, audio and video material.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 17 Commodity and Securities Exchanges 2 2011-04-01 2011-04-01 false Graphic, image, audio and... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio..., image, audio or video material is presented in the delivered version, or they may be listed in...

  8. 17 CFR 232.304 - Graphic, image, audio and video material.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 17 Commodity and Securities Exchanges 2 2013-04-01 2013-04-01 false Graphic, image, audio and... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio..., image, audio or video material is presented in the delivered version, or they may be listed in...

  9. 17 CFR 232.304 - Graphic, image, audio and video material.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 17 Commodity and Securities Exchanges 2 2012-04-01 2012-04-01 false Graphic, image, audio and... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio..., image, audio or video material is presented in the delivered version, or they may be listed in...

  10. Communicative Competence in Audio Classrooms: A Position Paper for the CADE 1991 Conference.

    ERIC Educational Resources Information Center

    Burge, Liz

    Classroom practitioners need to move their attention away from the technological and logistical competencies required for audio conferencing (AC) to the required communicative competencies in order to advance their skills in handling the psychodynamics of audio virtual classrooms which include audio alone and audio with graphics. While the…

  11. Wacky Machines

    ERIC Educational Resources Information Center

    Fendrich, Jean

    2002-01-01

    Collectors everywhere know that local antique shops and flea markets are treasure troves just waiting to be plundered. Science teachers might take a hint from these hobbyists, for the next community yard sale might be a repository of old, quirky items that are just the things to get students thinking about simple machines. By introducing some…

  12. Direct broadcast satellite-audio, portable and mobile reception tradeoffs

    NASA Technical Reports Server (NTRS)

    Golshan, Nasser

    1992-01-01

    This paper reports on the findings of a systems tradeoffs study on direct broadcast satellite-radio (DBS-R). Based on emerging advanced subband and transform audio coding systems, four ranges of bit rates are identified for DBS-R: 16-32 kbps, 48-64 kbps, 96-128 kbps, and 196-256 kbps. The corresponding grades of audio quality will be subjectively comparable to AM broadcasting, monophonic FM, stereophonic FM, and CD-quality audio, respectively. The satellite EIRPs needed for mobile DBS-R reception in suburban areas are sufficient for portable reception in most single-family houses when allowance is made for the higher G/T of portable table-top receivers. As an example, the variation of the space segment cost as a function of frequency, audio quality, coverage capacity, and beam size is explored for a typical DBS-R system.

  13. Audio CAPTCHA for SIP-Based VoIP

    NASA Astrophysics Data System (ADS)

    Soupionis, Yannis; Tountas, George; Gritzalis, Dimitris

    Voice over IP (VoIP) introduces new ways of communication, while utilizing existing data networks to provide inexpensive voice communications worldwide as a promising alternative to traditional PSTN telephony. SPam over Internet Telephony (SPIT) is one potential source of future annoyance in VoIP. A common way to launch a SPIT attack is the use of an automated procedure (bot), which generates calls and produces audio advertisements. In this paper, our goal is to design an appropriate CAPTCHA to fight such bots. We focus on and develop an audio CAPTCHA, as the audio format is more suitable for VoIP environments, and we implement it in a SIP-based VoIP environment. Furthermore, we suggest and evaluate the specific attributes that audio CAPTCHA should incorporate in order to be effective, and test it against an open source bot implementation.

  14. Audio and Video Cassettes; Friend or Foe of the Librarian?

    ERIC Educational Resources Information Center

    Poulos, Arthur

    1972-01-01

    Audio and video tape cassettes pose some special problems for the librarian. A better understanding of what these products can -- and cannot -- do will help the librarian make optimum use of the new formats. (Author/NH)

  15. Content-based retrieval of music and audio

    NASA Astrophysics Data System (ADS)

    Foote, Jonathan T.

    1997-10-01

    Though many systems exist for content-based retrieval of images, little work has been done on the audio portion of the multimedia stream. This paper presents a system to retrieve audio documents by acoustic similarity. The similarity measure is based on statistics derived from a supervised vector quantizer, rather than matching simple pitch or spectral characteristics. The system is thus able to learn distinguishing audio features while ignoring unimportant variation. Both theoretical and experimental results are presented, including quantitative measures of retrieval performance. Retrieval was tested on a corpus of simple sounds as well as a corpus of musical excerpts. The system is purely data-driven and does not depend on particular audio characteristics. Given a suitable parameterization, this method may thus be applicable to image retrieval as well.
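
    In the spirit of the paper (though the original uses a supervised quantizer and its own distance measure), a simplified version of quantizer-histogram retrieval can be sketched with an ordinary k-means codebook; all names and sizes below are assumptions:

        # Each recording becomes a normalised histogram of codeword counts; documents
        # are then ranked by histogram similarity.
        import numpy as np
        from sklearn.cluster import KMeans

        def build_codebook(training_frames, n_codes=64, seed=0):
            return KMeans(n_clusters=n_codes, n_init=10, random_state=seed).fit(training_frames)

        def histogram(codebook, frames):
            counts = np.bincount(codebook.predict(frames), minlength=codebook.n_clusters)
            return counts / counts.sum()

        def cosine_similarity(h1, h2):
            return float(np.dot(h1, h2) / (np.linalg.norm(h1) * np.linalg.norm(h2) + 1e-12))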

  16. Robustness evaluation of transactional audio watermarking systems

    NASA Astrophysics Data System (ADS)

    Neubauer, Christian; Steinebach, Martin; Siebenhaar, Frank; Pickel, Joerg

    2003-06-01

    Distribution via the Internet is of increasing importance. Easy access, transmission, and consumption of digitally represented music is very attractive to the consumer but has also led directly to a growing problem of illegal copying. To cope with this problem, watermarking is a promising concept, since it provides a useful mechanism to track illicit copies by persistently attaching property rights information to the material. Especially for online music distribution, the use of so-called transaction watermarking, also denoted by the term bitstream watermarking, is beneficial since it offers the opportunity to embed watermarks directly into perceptually encoded material without the need for full decompression/compression. Besides the concept of bitstream watermarking, former publications presented its complexity, audio quality, and detection performance. These results are now extended by an assessment of the robustness of such schemes. The detection performance before and after applying selected attacks is presented for MPEG-1/2 Layer 3 (MP3) and MPEG-2/4 AAC bitstream watermarking, contrasted with the performance of PCM spread spectrum watermarking.

  17. Video and audio data integration for conferencing

    NASA Astrophysics Data System (ADS)

    Pappas, Thrasyvoulos N.; Hinds, Raynard O.

    1995-04-01

    In videoconferencing applications the perceived quality of the video signal is affected by the presence of an audio signal (speech). To achieve high compression rates, video coders must compromise image quality in terms of spatial resolution, grayscale resolution, and frame rate, and may introduce various kinds of artifacts. We consider tradeoffs in grayscale resolution and frame rate, and use subjective evaluations to assess the perceived quality of the video signal in the presence of speech. In particular we explore the importance of lip synchronization. In our experiment we used an original grayscale sequence at QCIF resolution, 30 frames/second, and 256 gray levels. We compared the 256-level sequence at different frame rates with a two-level version of the sequence at 30 frames/sec. The viewing distance was 20 image heights, or roughly two feet from an SGI workstation. We used uncoded speech. To obtain the two-level sequence we used an adaptive clustering algorithm for segmentation of video sequences. The binary sketches it creates move smoothly and preserve the main characteristics of the face, so that it is easily recognizable. More importantly, the rendering of lip and eye movements is very accurate. The test results indicate that when the frame rate of the full grayscale sequence is low (less than 5 frames/sec), most observers prefer the two-level sequence.

  18. Personal audio with a planar bright zone.

    PubMed

    Coleman, Philip; Jackson, Philip J B; Olik, Marek; Pedersen, Jan Abildgaard

    2014-10-01

    Reproduction of multiple sound zones, in which personal audio programs may be consumed without the need for headphones, is an active topic in acoustical signal processing. Many approaches to sound zone reproduction do not consider control of the bright zone phase, which may lead to self-cancellation problems if the loudspeakers surround the zones. Conversely, control of the phase in a least-squares sense comes at a cost of decreased level difference between the zones and decreased frequency range of cancellation. Single-zone approaches have considered plane wave reproduction by focusing the sound energy into a point in the wavenumber domain. In this article, a planar bright zone is reproduced via planarity control, which constrains the bright zone energy to impinge from a narrow range of angles via projection into a spatial domain. Simulation results using a circular array surrounding two zones show the method to produce superior contrast to the least-squares approach, and superior planarity to the contrast maximization approach. Practical performance measurements obtained in an acoustically treated room verify the conclusions drawn under free-field conditions. PMID:25324075
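
    Planarity control itself is not reproduced here, but the regularised least-squares baseline it is compared against can be sketched in a few lines (a generic pressure-matching formulation with a dark-zone penalty; matrix names and penalty weights are illustrative):

        # Choose loudspeaker weights w that reproduce a target pressure in the bright
        # zone while penalising energy in the dark zone:
        # solve (Gb^H Gb + kappa * Gd^H Gd + lambda * I) w = Gb^H p_target.
        import numpy as np

        def sound_zone_weights(G_bright, G_dark, p_target, kappa=1.0, lam=1e-3):
            L = G_bright.shape[1]
            A = (G_bright.conj().T @ G_bright
                 + kappa * G_dark.conj().T @ G_dark
                 + lam * np.eye(L))
            return np.linalg.solve(A, G_bright.conj().T @ p_target)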

  19. Tubes and transistors in audio amplifiers

    NASA Astrophysics Data System (ADS)

    Reshetnikov, O. M.; Khestanov, R. K.; Chernykh, Y. V.

    1985-03-01

    The alleged differences between tube and transistor high-fidelity sound reproduction channels were studied in terms of subjective versus objective evaluation of reproduction quality. For testing, the preamplifier stage behind the phonograph pickup was singled out, with the Audio Research SP-6C tube preamplifier compared against a specially built RIAA transistor preamplifier-corrector, so as to separate the behavior of this stage from the interaction of the output power stage with the acoustic part of the system. Tests were performed over a period of two months using the blind method, with each test performed twice on an A/B/X comparator: first in positions A (tube) and B (transistor) only, and then also in position X, which presented a randomized signal equiprobably identical to the A or B sound, if those had appeared different. According to the expert listeners, there is no objective difference between the tube and transistor preamplifier stages in terms of perceived sound quality; an apparent difference was detected in only two tests and was not subsequently reproducible.
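
    As an aside not taken from the article: results of an A/B/X comparison are normally judged against chance with a one-sided binomial test, which is a one-liner in SciPy (version 1.7 or later is assumed for binomtest):

        # Example: 12 correct X identifications out of 16 trials.
        from scipy.stats import binomtest

        result = binomtest(k=12, n=16, p=0.5, alternative="greater")
        print(result.pvalue)   # ~0.038, i.e., better than guessing at the 5% level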

  20. Virtual environment interaction through 3D audio by blind children.

    PubMed

    Sánchez, J; Lumbreras, M

    1999-01-01

    Interactive software is actively used for learning, cognition, and entertainment purposes. Educational entertainment software is not very popular among blind children because most computer games and electronic toys have interfaces that are only accessible through visual cues. This work applies the concept of interactive hyperstories to blind children. Hyperstories are implemented in a 3D acoustic virtual world. In past studies we have conceptualized a model to design hyperstories. This study illustrates the feasibility of the model. It also provides an introduction to researchers to the field of entertainment software for blind children. As a result, we have designed and field tested AudioDoom, a virtual environment interacted through 3D Audio by blind children. AudioDoom is also a software that enables testing nontrivial interfaces and cognitive tasks with blind children. We explored the construction of cognitive spatial structures in the minds of blind children through audio-based entertainment and spatial sound navigable experiences. Children playing AudioDoom were exposed to first person experiences by exploring highly interactive virtual worlds through the use of 3D aural representations of the space. This experience was structured in several cognitive tasks where they had to build concrete models of their spatial representations constructed through the interaction with AudioDoom by using Lego™ blocks. We analyze our preliminary results after testing AudioDoom with Chilean children from a school for blind children. We discuss issues such as interactivity in software without visual cues, the representation of spatial sound navigable experiences, and entertainment software such as computer games for blind children. We also evaluate the feasibility to construct virtual environments through the design of dynamic learning materials with audio cues. PMID:19178246

  1. The power of digital audio in interactive instruction: An unexploited medium

    SciTech Connect

    Pratt, J.; Trainor, M.

    1989-01-01

    Widespread use of audio in computer-based training (CBT) occurred with the advent of the interactive videodisc technology. This paper discusses the alternative of digital audio, which, unlike videodisc audio, enables one to rapidly revise the audio used in the CBT and which may be used in nonvideo CBT applications as well. We also discuss techniques used in audio script writing, editing, and production. Results from evaluations indicate a high degree of user satisfaction. 4 refs.

  2. Joint application of audio spectral envelope and tonality index in an e-asthma monitoring system.

    PubMed

    Wiśniewski, Marcin; Zieliński, Tomasz P

    2015-05-01

    This paper presents in detail a recently introduced, highly efficient method for automatic detection of asthmatic wheezing in breathing sounds. The fluctuation of the audio spectral envelope (ASE) from the MPEG-7 standard and the value of the tonality index (TI) from the MPEG-2 Audio specification are jointly used as discriminative features for wheezy sounds, while a support vector machine (SVM) with a polynomial kernel serves as the classifier. The advantages of the proposed approach are described in the paper (e.g., detection of weak wheezes, very good ROC characteristics, independence from noise color). Since the method is not computationally complex, it is suitable for remote asthma monitoring using mobile devices (personal medical assistants). The main contribution of this paper is to present, for the first time, all the implementation details of the proposed approach, i.e., the pseudocode of the method and the adjustment of the ASE and TI parameter values after which only one FFT (rather than two) is required to analyze the next overlapping signal fragment. The efficiency of the method has also been additionally confirmed by the AdaBoost classifier with a built-in feature-ranking mechanism, as well as by a previously performed minimal-redundancy-maximal-relevance test. PMID:25167561
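
    Feature extraction for the ASE fluctuation and the tonality index is beyond a short sketch, but the classification stage the paper describes, an SVM with a polynomial kernel over those two features, is straightforward to set up (the scaling step and hyperparameters below are assumptions):

        # Minimal sketch of the classifier stage only.
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def train_wheeze_classifier(X_train, y_train, degree=3):
            # X_train: (n_frames, 2) array of [ASE fluctuation, tonality index]; y_train: 0/1
            clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=degree, C=1.0))
            clf.fit(X_train, y_train)
            return clf

        # predictions = train_wheeze_classifier(X_train, y_train).predict(X_test)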

  3. Charging machine

    DOEpatents

    Medlin, John B.

    1976-05-25

    A charging machine for loading fuel slugs into the process tubes of a nuclear reactor includes a tubular housing connected to the process tube, a charging trough connected to the other end of the tubular housing, a device for loading the charging trough with a group of fuel slugs, means for equalizing the coolant pressure in the charging trough with the pressure in the process tubes, means for pushing the group of fuel slugs into the process tube and a latch and a seal engaging the last object in the group of fuel slugs to prevent the fuel slugs from being ejected from the process tube when the pusher is removed and to prevent pressure liquid from entering the charging machine.

  4. Fullerene Machines

    NASA Technical Reports Server (NTRS)

    Globus, Al; Saini, Subhash (Technical Monitor)

    1998-01-01

    Fullerenes possess remarkable properties and many investigators have examined the mechanical, electronic and other characteristics of carbon sp2 systems in some detail. In addition, C-60 can be functionalized with many classes of molecular fragments and we may expect the caps of carbon nanotubes to have a similar chemistry. Finally, carbon nanotubes have been attached to the end of scanning probe microscope (SPM) tips. SPMs can be manipulated with sub-angstrom accuracy. Together, these investigations suggest that complex molecular machines made of fullerenes may someday be created and manipulated with very high accuracy. We have studied some such systems computationally (primarily functionalized carbon nanotube gears and computer components). If such machines can be combined appropriately, a class of materials may be created that can sense their environment, calculate a response, and act. The implications of such hypothetical materials are substantial.

  5. Fullerene Machines

    NASA Technical Reports Server (NTRS)

    Globus, Al; Saini, Subhash

    1998-01-01

    Recent computational efforts at NASA Ames Research Center and computation and experiment elsewhere suggest that a nanotechnology of machine phase functionalized fullerenes may be synthetically accessible and of great interest. We have computationally demonstrated that molecular gears fashioned from (14,0) single-walled carbon nanotubes and benzyne teeth should operate well at 50-100 gigahertz. Preliminary results suggest that these gears can be cooled by a helium atmosphere and a laser motor can power fullerene gears if a positive and negative charge have been added to form a dipole. In addition, we have unproven concepts based on experimental and computational evidence for support structures, computer control, a system architecture, a variety of components, and manufacture. Combining fullerene machines with the remarkable mechanical properties of carbon nanotubes, there is some reason to believe that a focused effort to develop fullerene nanotechnology could yield materials with tremendous properties.

  6. Induction machine

    DOEpatents

    Owen, Whitney H.

    1980-01-01

    A polyphase rotary induction machine for use as a motor or generator utilizing a single rotor assembly having two series connected sets of rotor windings, a first stator winding disposed around the first rotor winding and means for controlling the current induced in one set of the rotor windings compared to the current induced in the other set of the rotor windings. The rotor windings may be wound rotor windings or squirrel cage windings.

  7. Applying Spatial Audio to Human Interfaces: 25 Years of NASA Experience

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.; Godfrey, Martine; Miller, Joel D.; Anderson, Mark R.

    2010-01-01

    From the perspective of human factors engineering, the inclusion of spatial audio within a human-machine interface is advantageous from several perspectives. Demonstrated benefits include the ability to monitor multiple streams of speech and non-speech warning tones using a cocktail party advantage, and for aurally-guided visual search. Other potential benefits include the spatial coordination and interaction of multimodal events, and evaluation of new communication technologies and alerting systems using virtual simulation. Many of these technologies were developed at NASA Ames Research Center, beginning in 1985. This paper reviews examples and describes the advantages of spatial sound in NASA-related technologies, including space operations, aeronautics, and search and rescue. The work has involved hardware and software development as well as basic and applied research.

  8. Digital Audio Radio Broadcast Systems Laboratory Testing Nearly Complete

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Radio history continues to be made at the NASA Lewis Research Center with the completion of phase one of the digital audio radio (DAR) testing conducted by the Consumer Electronics Group of the Electronic Industries Association. This satellite, satellite/terrestrial, and terrestrial digital technology will open up new audio broadcasting opportunities both domestically and worldwide. It will significantly improve the current quality of amplitude-modulated/frequency-modulated (AM/FM) radio with a new digitally modulated radio signal and will introduce true compact-disc-quality (CD-quality) sound for the first time. Lewis is hosting the laboratory testing of seven proposed digital audio radio systems and modes. Two of the proposed systems operate in two modes each, for a total of nine systems under test. The nine systems are divided into the following types of transmission: in-band on-channel (IBOC), in-band adjacent-channel (IBAC), and new bands. The laboratory testing was conducted by the Consumer Electronics Group of the Electronic Industries Association. Subjective assessments of the audio recordings for each of the nine systems were conducted by the Communications Research Center in Ottawa, Canada, under contract to the Electronic Industries Association. The Communications Research Center has the only CCIR-qualified (Consultative Committee for International Radio) audio testing facility in North America. The main goals of the U.S. testing process are to (1) provide technical data to the Federal Communications Commission (FCC) so that it can establish a standard for digital audio receivers and transmitters and (2) provide the receiver and transmitter industries with the proper standards upon which to build their equipment. In addition, the data will be forwarded to the International Telecommunication Union to help in the establishment of international standards for digital audio receivers and transmitters, thus allowing U.S. manufacturers to compete in the world market.

  9. Audio-visual integration through the parallel visual pathways.

    PubMed

    Kaposvári, Péter; Csete, Gergő; Bognár, Anna; Csibri, Péter; Tóth, Eszter; Szabó, Nikoletta; Vécsei, László; Sáry, Gyula; Kincses, Zsigmond Tamás

    2015-10-22

    Audio-visual integration has been shown to be present in a wide range of different conditions, some of which are processed through the dorsal, and others through the ventral visual pathway. Whereas neuroimaging studies have revealed integration-related activity in the brain, there has been no imaging study of the possible role of segregated visual streams in audio-visual integration. We set out to determine how the different visual pathways participate in this communication. We investigated how audio-visual integration can be supported through the dorsal and ventral visual pathways during the double flash illusion. Low-contrast and chromatic isoluminant stimuli were used to preferentially drive the dorsal and ventral pathways, respectively. In order to identify the anatomical substrates of the audio-visual interaction in the two conditions, the psychophysical results were correlated with the white matter integrity as measured by diffusion tensor imaging. The psychophysiological data revealed a robust double flash illusion in both conditions. A correlation between the psychophysical results and local fractional anisotropy was found in the occipito-parietal white matter in the low-contrast condition, while a similar correlation was found in the infero-temporal white matter in the chromatic isoluminant condition. Our results indicate that both of the parallel visual pathways may play a role in the audio-visual interaction. PMID:26165152

  10. Audio-guided audiovisual data segmentation, indexing, and retrieval

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Kuo, C.-C. Jay

    1998-12-01

    While current approaches for video segmentation and indexing are mostly focused on visual information, audio signals may actually play a primary role in video content parsing. In this paper, we present an approach for automatic segmentation, indexing, and retrieval of audiovisual data, based on audio content analysis. The accompanying audio signal of audiovisual data is first segmented and classified into basic types, i.e., speech, music, environmental sound, and silence. This coarse-level segmentation and indexing step is based upon morphological and statistical analysis of several short-term features of the audio signals. Then, environmental sounds are classified into finer classes, such as applause, explosions, bird sounds, etc. This fine-level classification and indexing step is based upon time- frequency analysis of audio signals and the use of the hidden Markov model as the classifier. On top of this archiving scheme, an audiovisual data retrieval system is proposed. Experimental results show that the proposed approach has an accuracy rate higher than 90 percent for the coarse-level classification, and higher than 85 percent for the fine-level classification. Examples of audiovisual data segmentation and retrieval are also provided.
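
    The coarse-level step described above (labelling short frames as speech, music, environmental sound, or silence from short-term features) can be pictured with the toy rule-based classifier below. The energy and zero-crossing-rate thresholds are placeholder assumptions and do not reflect the paper's morphological and statistical analysis; the finer HMM-based classification is not shown.

      # Toy coarse-level frame labelling from two short-term features; the
      # thresholds are illustrative assumptions, not the paper's statistics.
      import numpy as np

      def short_term_features(frame):
          energy = float(np.mean(frame ** 2))
          zcr = float(np.mean(np.abs(np.diff(np.sign(frame))) / 2.0))
          return energy, zcr

      def coarse_label(frame, silence_thresh=1e-4, zcr_speech=0.1):
          energy, zcr = short_term_features(frame)
          if energy < silence_thresh:
              return "silence"
          # speech tends to show higher zero-crossing activity than music here;
          # a single-frame rule is only a crude stand-in for the real method
          return "speech" if zcr > zcr_speech else "music_or_environmental"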

  11. Audio-video feature correlation: faces and speech

    NASA Astrophysics Data System (ADS)

    Durand, Gwenael; Montacie, Claude; Caraty, Marie-Jose; Faudemay, Pascal

    1999-08-01

    This paper presents a study of the correlation of features automatically extracted from the audio stream and the video stream of audiovisual documents. In particular, we were interested in finding out whether speech analysis tools could be combined with face detection methods, and to what extend they should be combined. A generic audio signal partitioning algorithm as first used to detect Silence/Noise/Music/Speech segments in a full length movie. A generic object detection method was applied to the keyframes extracted from the movie in order to detect the presence or absence of faces. The correlation between the presence of a face in the keyframes and of the corresponding voice in the audio stream was studied. A third stream, which is the script of the movie, is warped on the speech channel in order to automatically label faces appearing in the keyframes with the name of the corresponding character. We naturally found that extracted audio and video features were related in many cases, and that significant benefits can be obtained from the joint use of audio and video analysis methods.

  12. Talker variability in audio-visual speech perception

    PubMed Central

    Heald, Shannon L. M.; Nusbaum, Howard C.

    2014-01-01

    A change in talker is a change in the context for the phonetic interpretation of acoustic patterns of speech. Different talkers have different mappings between acoustic patterns and phonetic categories and listeners need to adapt to these differences. Despite this complexity, listeners are adept at comprehending speech in multiple-talker contexts, albeit at a slight but measurable performance cost (e.g., slower recognition). So far, this talker variability cost has been demonstrated only in audio-only speech. Other research in single-talker contexts has shown, however, that when listeners are able to see a talker’s face, speech recognition is improved under adverse listening (e.g., noise or distortion) conditions that can increase uncertainty in the mapping between acoustic patterns and phonetic categories. Does seeing a talker’s face reduce the cost of word recognition in multiple-talker contexts? We used a speeded word-monitoring task in which listeners make quick judgments about target word recognition in single- and multiple-talker contexts. Results show faster recognition performance in single-talker conditions compared to multiple-talker conditions for both audio-only and audio-visual speech. However, recognition time in a multiple-talker context was slower in the audio-visual condition compared to the audio-only condition. These results suggest that seeing a talker’s face during speech perception may slow recognition by increasing the importance of talker identification, signaling to the listener that a change in talker has occurred. PMID:25076919

  13. TEMPO machine

    SciTech Connect

    Rohwein, G.J.; Lancaster, K.T.; Lawson, R.N.

    1986-06-01

    TEMPO is a transformer-powered megavolt pulse generator with an output pulse of 100 ns duration. The machine was designed for burst-mode operation at pulse repetition rates up to 10 Hz with minimum pulse-to-pulse voltage variations. To meet the requirements for pulse duration and a 20-Ω output impedance within reasonable size constraints, the pulse-forming transmission line was designed as two parallel water-insulated, strip-type Blumleins. Stray capacitance and electric fields along the edges of the line elements were controlled by lining the tank with plastic sheet.

  14. Audio-visual event detection based on mining of semantic audio-visual labels

    NASA Astrophysics Data System (ADS)

    Goh, King-Shy; Miyahara, Koji; Radhakrishnan, Regunathan; Xiong, Ziyou; Divakaran, Ajay

    2003-12-01

    Removing commercials from television programs is a much sought-after feature for a personal video recorder. In this paper, we employ an unsupervised clustering scheme (CM_Detect) to detect commercials in television programs. Each program is first divided into 8-minute chunks, and we extract audio and visual features from each of these chunks. Next, we apply k-means clustering to assign each chunk a commercial/program label. In contrast to other methods, we do not make any assumptions regarding the program content. Thus, our method is highly content-adaptive and computationally inexpensive. Through empirical studies on various content, including American news, Japanese news, and sports programs, we demonstrate that our method is able to filter out most of the commercials without falsely removing the regular program.
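
    A minimal sketch of that unsupervised labelling step: split the recording into fixed-length chunks, compute a small feature vector per chunk, and cluster the chunks with k-means (k = 2). The specific audio features, the chunk length default, and the rule for picking the commercial cluster below are assumptions made for illustration, not the CM_Detect feature set.

      # Unsupervised commercial/program labelling sketch: per-chunk features
      # plus two-cluster k-means. Feature choice and cluster interpretation
      # are illustrative assumptions.
      import numpy as np
      from sklearn.cluster import KMeans

      def chunk_features(audio, sr, chunk_sec=8 * 60):
          size = int(chunk_sec * sr)
          feats = []
          for start in range(0, len(audio) - size + 1, size):
              c = audio[start:start + size]
              energy = float(np.mean(c ** 2))
              zcr = float(np.mean(np.abs(np.diff(np.sign(c)))) / 2.0)
              feats.append([energy, zcr])
          return np.array(feats)

      # labels = KMeans(n_clusters=2, n_init=10).fit_predict(chunk_features(audio, sr))
      # Heuristically, the smaller cluster is then treated as the commercial candidate.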

  15. Tunneling machine

    SciTech Connect

    Snyder, L.L.

    1980-02-19

    A diametrically compact tunneling machine for boring tunnels is disclosed. The machine includes a tubular support frame having a hollow piston mounted therein which is movable from a retracted position in the support frame to an extended position. A drive shaft is rotatably mounted in the hollow piston and carries a cutter head at one end. The hollow piston is restrained against rotational movement relative to the support frame and the drive shaft is constrained against longitudinal movement relative to the hollow piston. A plurality of radially extendible feet project from the support frame to the tunnel wall to grip the tunnel wall during a tunneling operation wherein the hollow piston is driven forwardly so that the cutter head works on the tunnel face. When the hollow piston is fully extended, a plurality of extendible support feet, which are fixed to the rearward and forward ends of the hollow piston, are extended, the radially extendible feet are retracted and the support frame is shifted forwardly by the piston so that a further tunneling operation may be initiated.

  16. Multi-channel spatialization system for audio signals

    NASA Technical Reports Server (NTRS)

    Begault, Durand R. (Inventor)

    1995-01-01

    Synthetic head related transfer functions (HRTF's) for imposing reprogrammable spatial cues to a plurality of audio input signals included, for example, in multiple narrow-band audio communications signals received simultaneously are generated and stored in interchangeable programmable read only memories (PROM's) which store both head related transfer function impulse response data and source positional information for a plurality of desired virtual source locations. The analog inputs of the audio signals are filtered and converted to digital signals from which synthetic head related transfer functions are generated in the form of linear phase finite impulse response filters. The outputs of the impulse response filters are subsequently reconverted to analog signals, filtered, mixed and fed to a pair of headphones.
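
    The core signal path described here (convolve each input with a left/right head-related impulse response pair chosen for its virtual position, then mix for headphones) can be sketched as below. The HRIR data, and all function and variable names, are assumptions for illustration and are not taken from the patent text.

      # Headphone spatialization sketch: FIR filtering of each mono source with
      # its HRIR pair (assumed to come from a measured or synthetic HRTF set),
      # then mixing into a stereo feed.
      import numpy as np

      def spatialize(sources, hrirs):
          """sources: list of mono signals; hrirs: list of (left_ir, right_ir) pairs."""
          ir_len = max(max(len(h_l), len(h_r)) for h_l, h_r in hrirs)
          length = max(len(s) for s in sources) + ir_len - 1
          left, right = np.zeros(length), np.zeros(length)
          for s, (h_l, h_r) in zip(sources, hrirs):
              y_l, y_r = np.convolve(s, h_l), np.convolve(s, h_r)
              left[:len(y_l)] += y_l          # each source filtered with its HRIR pair
              right[:len(y_r)] += y_r
          return np.stack([left, right], axis=1)   # stereo headphone feed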

  17. Multi-channel spatialization systems for audio signals

    NASA Technical Reports Server (NTRS)

    Begault, Durand R. (Inventor)

    1993-01-01

    Synthetic head related transfer functions (HRTF's) for imposing reprogrammable spatial cues to a plurality of audio input signals included, for example, in multiple narrow-band audio communications signals received simultaneously are generated and stored in interchangeable programmable read only memories (PROM's) which store both head related transfer function impulse response data and source positional information for a plurality of desired virtual source locations. The analog inputs of the audio signals are filtered and converted to digital signals from which synthetic head related transfer functions are generated in the form of linear phase finite impulse response filters. The outputs of the impulse response filters are subsequently reconverted to analog signals, filtered, mixed, and fed to a pair of headphones.

  18. Virtual environment display for a 3D audio room simulation

    NASA Technical Reports Server (NTRS)

    Chapin, William L.; Foster, Scott H.

    1992-01-01

    The development of a virtual environment simulation system integrating a 3D acoustic audio model with an immersive 3D visual scene is discussed. The system complements the acoustic model and is specified to: allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, are discussed through the development of four iterative configurations.

  19. Note-accurate audio segmentation based on MPEG-7

    NASA Astrophysics Data System (ADS)

    Wellhausen, Jens

    2003-12-01

    Segmenting audio data into the smallest musical components is the basis for many further meta data extraction algorithms. For example, an automatic music transcription system needs to know where the exact boundaries of each tone are. In this paper a note-accurate audio segmentation algorithm based on MPEG-7 low level descriptors is introduced. For a reliable detection of different notes, features in both the time and the frequency domain are used. Because of this, polyphonic instrument mixes and even melodies characterized by human voices can be examined with this algorithm. For testing and verification of the note-accurate segmentation, a simple music transcription system was implemented. The dominant frequency within each segment is used to build a MIDI file representing the processed audio data.
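
    The transcription step mentioned at the end (dominant frequency of each segment mapped to a MIDI note) reduces to a peak pick on the magnitude spectrum plus the standard frequency-to-MIDI conversion. The sketch below assumes the segment boundaries are already provided by the MPEG-7-based segmentation, which is not shown.

      # Dominant-frequency estimate per segment and standard 12-tone
      # equal-temperament mapping, with A4 (440 Hz) = MIDI note 69.
      import numpy as np

      def dominant_frequency(segment, sr):
          spec = np.abs(np.fft.rfft(segment * np.hanning(len(segment))))
          freqs = np.fft.rfftfreq(len(segment), 1.0 / sr)
          return float(freqs[int(np.argmax(spec))])

      def to_midi_note(freq):
          return int(round(69 + 12 * np.log2(freq / 440.0))) if freq > 0 else None

      # notes = [to_midi_note(dominant_frequency(seg, sr)) for seg in segments]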

  20. Audio-visual active speaker tracking in cluttered indoors environments.

    PubMed

    Talantzis, Fotios; Pnevmatikakis, Aristodemos; Constantinides, Anthony G

    2008-06-01

    We propose a system for detecting the active speaker in cluttered and reverberant environments where more than one person speaks and moves. Rather than using only audio information, the system utilizes audiovisual information from multiple acoustic and video sensors that feed separate audio and video tracking modules. The audio module operates using a particle filter (PF) and an information-theoretic framework to provide accurate acoustic source location under reverberant conditions. The video subsystem combines in 3-D a number of 2-D trackers based on a variation of Stauffer's adaptive background algorithm with spatiotemporal adaptation of the learning parameters and a Kalman tracker in a feedback configuration. Extensive experiments show that gains are to be expected when fusion of the separate modalities is performed to detect the active speaker. PMID:18558543

  1. Audio-visual active speaker tracking in cluttered indoors environments.

    PubMed

    Talantzis, Fotios; Pnevmatikakis, Aristodemos; Constantinides, Anthony G

    2009-02-01

    We propose a system for detecting the active speaker in cluttered and reverberant environments where more than one person speaks and moves. Rather than using only audio information, the system utilizes audiovisual information from multiple acoustic and video sensors that feed separate audio and video tracking modules. The audio module operates using a particle filter (PF) and an information-theoretic framework to provide accurate acoustic source location under reverberant conditions. The video subsystem combines in 3-D a number of 2-D trackers based on a variation of Stauffer's adaptive background algorithm with spatiotemporal adaptation of the learning parameters and a Kalman tracker in a feedback configuration. Extensive experiments show that gains are to be expected when fusion of the separate modalities is performed to detect the active speaker. PMID:19150757

  2. Music Identification System Using MPEG-7 Audio Signature Descriptors

    PubMed Central

    You, Shingchern D.; Chen, Wei-Hwa; Chen, Woei-Kae

    2013-01-01

    This paper describes a multiresolution system based on MPEG-7 audio signature descriptors for music identification. Such an identification system may be used to detect illegally copied music circulated over the Internet. In the proposed system, low-resolution descriptors are used to search likely candidates, and then full-resolution descriptors are used to identify the unknown (query) audio. With this arrangement, the proposed system achieves both high speed and high accuracy. To deal with the problem that a piece of query audio may not be inside the system's database, we suggest two different methods to find the decision threshold. Simulation results show that the proposed method II can achieve an accuracy of 99.4% for query inputs both inside and outside the database. Overall, it is highly possible to use the proposed system for copyright control. PMID:23533359
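
    The two-stage search strategy described above (coarse pruning with low-resolution signatures, then a final decision with full-resolution signatures plus a rejection threshold for out-of-database queries) could be organized roughly as follows. The fingerprint arrays, the squared-error distance, and the threshold value are placeholders; MPEG-7 audio signature extraction itself is not shown.

      # Coarse-to-fine lookup sketch for fingerprint-based music identification.
      import numpy as np

      def identify(query_lo, query_hi, db_lo, db_hi, n_candidates=10, threshold=0.1):
          # stage 1: coarse distances on low-resolution descriptors prune the database
          coarse = [float(np.mean((query_lo - ref) ** 2)) for ref in db_lo]
          candidates = np.argsort(coarse)[:n_candidates]
          # stage 2: full-resolution distances decide among the shortlisted candidates
          fine = {int(i): float(np.mean((query_hi - db_hi[i]) ** 2)) for i in candidates}
          best = min(fine, key=fine.get)
          # reject queries whose best match is still too far (not in the database)
          return best if fine[best] < threshold else None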

  3. Say What? The Role of Audio in Multimedia Video

    NASA Astrophysics Data System (ADS)

    Linder, C. A.; Holmes, R. M.

    2011-12-01

    Audio, including interviews, ambient sounds, and music, is a critical-yet often overlooked-part of an effective multimedia video. In February 2010, Linder joined scientists working on the Global Rivers Observatory Project for two weeks of intensive fieldwork in the Congo River watershed. The team's goal was to learn more about how climate change and deforestation are impacting the river system and coastal ocean. Using stills and video shot with a lightweight digital SLR outfit and audio recorded with a pocket-sized sound recorder, Linder documented the trials and triumphs of working in the heart of Africa. Using excerpts from the six-minute Congo multimedia video, this presentation will illustrate how to record and edit an engaging audio track. Topics include interview technique, collecting ambient sounds, choosing and using music, and editing it all together to educate and entertain the viewer.

  4. Audio signal recognition for speech, music, and environmental sounds

    NASA Astrophysics Data System (ADS)

    Ellis, Daniel P. W.

    2003-10-01

    Human listeners are very good at all kinds of sound detection and identification tasks, from understanding heavily accented speech to noticing a ringing phone underneath music playing at full blast. Efforts to duplicate these abilities on computer have been particularly intense in the area of speech recognition, and it is instructive to review which approaches have proved most powerful, and which major problems still remain. The features and models developed for speech have found applications in other audio recognition tasks, including musical signal analysis, and the problems of analyzing the general ``ambient'' audio that might be encountered by an auditorily endowed robot. This talk will briefly review statistical pattern recognition for audio signals, giving examples in several of these domains. Particular emphasis will be given to common aspects and lessons learned.

  5. Highlight summarization in golf videos using audio signals

    NASA Astrophysics Data System (ADS)

    Kim, Hyoung-Gook; Kim, Jin Young

    2008-01-01

    In this paper, we present automatic summarization of highlights in golf videos based on audio information alone, without video information. The proposed highlight summarization system relies on semantic audio segmentation and the detection of action units from audio signals. Studio speech, field speech, music, and applause are segmented by means of sound classification. Swings are detected by impulse onset detection. A swing sound followed by applause forms a complete action unit, while studio speech and music parts are used to anchor the program structure. Thanks to the highly precise detection of applause, highlights are extracted effectively. Our experimental results show high classification precision on 18 golf games, demonstrating that the proposed system is effective and computationally efficient enough to be applied in embedded consumer electronic devices.
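
    The highlight rule implied by this record (a swing-like impulsive onset followed shortly by applause) can be expressed compactly as below. The energy-jump onset detector and the time-gap parameter are crude placeholders rather than the authors' detectors, and the applause segments are assumed to come from a separate audio classifier.

      # Highlight extraction sketch: pair impulsive onsets with nearby applause.
      import numpy as np

      def impulse_onsets(audio, sr, win=1024, ratio=6.0):
          # flag windows whose energy jumps sharply relative to the previous window
          energy = np.array([float(np.sum(audio[i:i + win] ** 2))
                             for i in range(0, len(audio) - win + 1, win)])
          jumps = np.where(energy[1:] > ratio * (energy[:-1] + 1e-9))[0] + 1
          return jumps * win / sr            # candidate swing times in seconds

      def highlights(onset_times, applause_segments, max_gap=5.0):
          # applause_segments: list of (start_sec, end_sec) from an audio classifier
          return [t for t in onset_times
                  if any(0.0 < a_start - t <= max_gap for a_start, _ in applause_segments)]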

  6. Three dimensional audio versus head down TCAS displays

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Pittman, Marc T.

    1994-01-01

    The advantage of a head up auditory display was evaluated in an experiment designed to measure and compare the acquisition time for capturing visual targets under two conditions: Standard head down traffic collision avoidance system (TCAS) display, and three-dimensional (3-D) audio TCAS presentation. Ten commercial airline crews were tested under full mission simulation conditions at the NASA Ames Crew-Vehicle Systems Research Facility Advanced Concepts Flight Simulator. Scenario software generated targets corresponding to aircraft which activated a 3-D aural advisory or a TCAS advisory. Results showed a significant difference in target acquisition time between the two conditions, favoring the 3-D audio TCAS condition by 500 ms.

  7. Influence of audio triggered emotional attention on video perception

    NASA Astrophysics Data System (ADS)

    Torres, Freddy; Kalva, Hari

    2014-02-01

    Perceptual video coding methods attempt to improve compression efficiency by discarding visual information not perceived by end users. Most of the current approaches for perceptual video coding only use visual features, ignoring the auditory component. Many psychophysical studies have demonstrated that auditory stimuli affect our visual perception. In this paper we present our study of audio triggered emotional attention and its applicability to perceptual video coding. Experiments with movie clips show that the reaction time to detect video compression artifacts was longer when video was presented with the audio information. The results reported are statistically significant with p=0.024.

  8. The representation of line graphs through audio-images

    SciTech Connect

    Mansur, D.L.; Blattner, M.M.; Joy, K.I.

    1984-09-25

    Sound-graphs, or graphs in sound, provide an alternative method for forming a holistic image of numerical data, specifically, line graphs. A prototype sound system was constructed to form an audio-image of line graphs with time plotted against pitch as the coordinate system. Software tools to manipulate the audio-image and allow individual exploration of the sound-graph are described. Human factors studies were conducted on the important features of graph characteristics in the sound-graph system as well as on tactile graphs.
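
    A toy sound-graph in the spirit of this record maps the x axis to time and the y axis to pitch, rendering the data series as a sequence of sine tones. The pitch range and tone duration chosen below are arbitrary assumptions, not the prototype system's settings.

      # Sonify a line graph: each data point becomes a short tone whose pitch
      # encodes its (normalized) y value.
      import numpy as np

      def sound_graph(y_values, sr=22050, tone_sec=0.15, f_lo=220.0, f_hi=880.0):
          y = np.asarray(y_values, dtype=float)
          y_norm = (y - y.min()) / (np.ptp(y) + 1e-12)
          freqs = f_lo * (f_hi / f_lo) ** y_norm        # log-spaced pitch mapping
          t = np.arange(int(sr * tone_sec)) / sr
          return np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])

      # audio = sound_graph([3, 5, 9, 4, 7])   # write to a WAV file to listen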

  9. Video-assisted segmentation of speech and audio track

    NASA Astrophysics Data System (ADS)

    Pandit, Medha; Yusoff, Yusseri; Kittler, Josef; Christmas, William J.; Chilton, E. H. S.

    1999-08-01

    Video database research is commonly concerned with the storage and retrieval of visual information involving sequence segmentation, shot representation and video clip retrieval. In multimedia applications, video sequences are usually accompanied by a sound track. The sound track contains potential cues to aid shot segmentation, such as different speakers, background music, singing and distinctive sounds. These different acoustic categories can be modeled to allow for an effective database retrieval. In this paper, we address the problem of automatic segmentation of the audio track of multimedia material. This audio-based segmentation can be combined with video scene shot detection in order to achieve partitioning of the multimedia material into semantically significant segments.

  10. Evaluation of robustness and transparency of multiple audio watermark embedding

    NASA Astrophysics Data System (ADS)

    Steinebach, Martin; Zmudzinski, Sascha

    2008-02-01

    As digital watermarking becomes an accepted and widely applied technology, a number of concerns regarding its reliability in typical application scenarios come up. One important and often discussed question is the robustness of digital watermarks against multiple embedding. This means that one cover is marked several times by various users with the same watermarking algorithm but with different keys and different watermark messages. In our paper we discuss the behavior of our PCM audio watermarking algorithm when applying multiple watermark embedding. This includes evaluation of robustness and transparency. Test results for multiple hours of audio content ranging from spoken words to music are provided.

  11. MedlinePlus FAQ: Is audio description available for videos on MedlinePlus?

    MedlinePLUS

    ... nih.gov/medlineplus/faq/audiodescription.html Question: Is audio description available for videos on MedlinePlus? Answer: Audio description of videos helps make the content of ...

  12. Audio-Described Educational Materials: Ugandan Teachers' Experiences

    ERIC Educational Resources Information Center

    Wormnaes, Siri; Sellaeg, Nina

    2013-01-01

    This article describes and discusses a qualitative, descriptive, and exploratory study of how 12 visually impaired teachers in Uganda experienced audio-described educational video material for teachers and student teachers. The study is based upon interviews with these teachers and observations while they were using the material either…

  13. Exploratory Evaluation of Audio Email Technology in Formative Assessment Feedback

    ERIC Educational Resources Information Center

    Macgregor, George; Spiers, Alex; Taylor, Chris

    2011-01-01

    Formative assessment generates feedback on students' performance, thereby accelerating and improving student learning. Anecdotal evidence gathered by a number of evaluations has hypothesised that audio feedback may be capable of enhancing student learning more than other approaches. In this paper we report on the preliminary findings of a…

  14. Integrated Spacesuit Audio System Enhances Speech Quality and Reduces Noise

    NASA Technical Reports Server (NTRS)

    Huang, Yiteng Arden; Chen, Jingdong; Chen, Shaoyan Sharyl

    2009-01-01

    A new approach has been proposed for increasing astronaut comfort and speech capture. Currently, the special design of a spacesuit forms an extreme acoustic environment making it difficult to capture clear speech without compromising comfort. The proposed Integrated Spacesuit Audio (ISA) system is to incorporate the microphones into the helmet and use software to extract voice signals from background noise.

  15. Developing a Framework for Effective Audio Feedback: A Case Study

    ERIC Educational Resources Information Center

    Hennessy, Claire; Forrester, Gillian

    2014-01-01

    The increase in the use of technology-enhanced learning in higher education has included a growing interest in new approaches to enhance the quality of feedback given to students. Audio feedback is one method that has become more popular, yet evaluating its role in feedback delivery is still an emerging area for research. This paper is based on a…

  16. Digital Audio Broadcasting in the Short Wave Bands

    NASA Technical Reports Server (NTRS)

    Vaisnys, Arvydas

    1998-01-01

    For many decades the Short Wave broadcasting service has used high power, double-sideband AM signals to reach audiences far and wide. While audio quality was usually not very high, inexpensive receivers could be used to tune into broadcasts from distant countries.

  17. Audio and Video Reflections to Promote Social Justice

    ERIC Educational Resources Information Center

    Boske, Christa

    2011-01-01

    Purpose: The purpose of this paper is to examine how 15 graduate students enrolled in a US school leadership preparation program understand issues of social justice and equity through a reflective process utilizing audio and/or video software. Design/methodology/approach: The study is based on the tradition of grounded theory. The researcher…

  18. Audio-Visual Training in Children with Reading Disabilities

    ERIC Educational Resources Information Center

    Magnan, Annie; Ecalle, Jean

    2006-01-01

    This study tested the effectiveness of audio-visual training in the discrimination of the phonetic feature of voicing on the recognition of written words by young children deemed to be at risk of dyslexia (experiment 1) as well as on dyslexic children's phonological skills (experiment 2). In addition, the third experiment studied the effectiveness of…

  19. Survey of the State of Audio Collections in Academic Libraries

    ERIC Educational Resources Information Center

    Smith, Abby; Allen, David Randal; Allen, Karen

    2004-01-01

    The goal of this survey was to collect and analyze baseline information about the status of audio collections held by a set of research institutions. This information can help shape the national preservation plan now being developed by the National Recording Preservation Board (NRPB) and the Library of Congress to preserve "sound recordings that…

  20. Sounds in CD-ROM--Integrating Audio in Multimedia Products.

    ERIC Educational Resources Information Center

    Rosebush, Judson

    1992-01-01

    Describes how audio technology is being integrated into CD-ROMs to create multimedia products. Computer hardware and software are discussed, including the use of HyperCard to combine still pictures, moving video pictures, and sound; and specific new multimedia products produced by the Voyager Company are described. (LRW)

  1. The 7 Habits of Highly Effective Families Audio System. [Audiotapes].

    ERIC Educational Resources Information Center

    Covey, Stephen R.

    Intended to help families build rewarding relationships, this set of audio tapes communicates the importance of family, shows the role of leadership in creating "beautiful family culture," and provides a frame of reference in which to solve problems. The first of the four tapes explores the concept of principles as the foundation of family, and…

  2. Multi-pose lipreading and audio-visual speech recognition

    NASA Astrophysics Data System (ADS)

    Estellers, Virginia; Thiran, Jean-Philippe

    2012-12-01

    In this article, we study the adaptation of visual and audio-visual speech recognition systems to non-ideal visual conditions. We focus on overcoming the effects of a changing pose of the speaker, a problem encountered in natural situations where the speaker moves freely and does not keep a frontal pose with relation to the camera. To handle these situations, we introduce a pose normalization block in a standard system and generate virtual frontal views from non-frontal images. The proposed method is inspired by pose-invariant face recognition and relies on linear regression to find an approximate mapping between images from different poses. We integrate the proposed pose normalization block at different stages of the speech recognition system and quantify the loss of performance related to pose changes and pose normalization techniques. In audio-visual experiments we also analyze the integration of the audio and visual streams. We show that an audio-visual system should account for non-frontal poses and normalization techniques in terms of the weight assigned to the visual stream in the classifier.
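
    The pose-normalization idea described above (a linear regression mapping non-frontal visual features to frontal ones, applied before recognition) can be summarized in a few lines. Paired training features for the two poses are assumed to be available; the variable names are illustrative.

      # Linear pose-mapping sketch: learn W by least squares, then project
      # non-frontal features into an approximate "virtual frontal view".
      import numpy as np

      def fit_pose_mapping(X_nonfrontal, X_frontal):
          # least-squares solution of X_nonfrontal @ W ~ X_frontal
          W, *_ = np.linalg.lstsq(X_nonfrontal, X_frontal, rcond=None)
          return W

      def normalize_pose(features, W):
          return features @ W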

  3. Adding Audio Description: Does It Make a Difference?

    ERIC Educational Resources Information Center

    Schmeidler, Emilie; Kirchner, Corinne

    2001-01-01

    A study involving 111 adults with blindness examined the impact of watching television science programs with and without audio description. Results indicate respondents gained and retained more information from watching programs with description. They reported that the description makes the program more enjoyable, interesting, and informative.…

  4. Audio-Described Educational Materials: Ugandan Teachers' Experiences

    ERIC Educational Resources Information Center

    Wormnaes, Siri; Sellaeg, Nina

    2013-01-01

    This article describes and discusses a qualitative, descriptive, and exploratory study of how 12 visually impaired teachers in Uganda experienced audio-described educational video material for teachers and student teachers. The study is based upon interviews with these teachers and observations while they were using the material either…

  5. The Role of Audio Media in the Lives of Children.

    ERIC Educational Resources Information Center

    Christenson, Peter G.; Lindlof, Thomas R.

    Mass communication researchers have largely ignored the role of audio media and popular music in the lives of children, yet the available evidence shows that children do listen. Extant studies yield a consistent developmental portrait of children's listening frequency, but there is a notable lack of programmatic research over the past decade, one…

  6. Evaluation of an Audio Cassette Tape Lecture Course

    ERIC Educational Resources Information Center

    Blank, Jerome W.

    1975-01-01

    An audio-cassette continuing education course (Selected Topics in Pharmacology) from Extension Services in Pharmacy at the University of Wisconsin was offered to a selected test market of pharmacists and evaluated using a pre-, post-test design. Results showed significant increase in cognitive knowledge and strong approval of students. (JT)

  7. Audio-Visual Perception System for a Humanoid Robotic Head

    PubMed Central

    Viciana-Abad, Raquel; Marfil, Rebeca; Perez-Lorenzo, Jose M.; Bandera, Juan P.; Romero-Garces, Adrian; Reche-Lopez, Pedro

    2014-01-01

    One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus, they may incur difficulties when constrained to the sensors with which a robot can be equipped. Besides, within the scope of interactive autonomous robots, there is a lack of evaluation of the benefits of audio-visual attention mechanisms, compared to audio-only or visual-only approaches, in real scenarios. Most of the tests conducted have been within controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with a Bayes inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared by considering the technical limitations of unimodal systems. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework. PMID:24878593

  8. Audio-visual perception system for a humanoid robotic head.

    PubMed

    Viciana-Abad, Raquel; Marfil, Rebeca; Perez-Lorenzo, Jose M; Bandera, Juan P; Romero-Garces, Adrian; Reche-Lopez, Pedro

    2014-01-01

    One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus, they may incur difficulties when constrained to the sensors with which a robot can be equipped. Besides, within the scope of interactive autonomous robots, there is a lack of evaluation of the benefits of audio-visual attention mechanisms, compared to audio-only or visual-only approaches, in real scenarios. Most of the tests conducted have been within controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with a Bayes inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared by considering the technical limitations of unimodal systems. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework. PMID:24878593

  9. Effectiveness of Audio on Screen Captures in Software Application Instruction

    ERIC Educational Resources Information Center

    Veronikas, Susan Walsh; Maushak, Nancy

    2005-01-01

    Presentation of software instruction has been supported by manuals and textbooks consisting of screen captures, but a multimedia approach may increase learning outcomes. This study investigated the effects of modality (text, audio, or dual) on the achievement and attitudes of college students learning a software application through the computer.…

  10. The Audio-Visual Marketing Handbook for Independent Schools.

    ERIC Educational Resources Information Center

    Griffith, Tom

    This how-to booklet offers specific advice on producing video or slide/tape programs for marketing independent schools. Five chapters present guidelines for various stages in the process: (1) Audio-Visual Marketing in Context (aesthetics and economics of audiovisual marketing); (2) A Question of Identity (identifying the audience and deciding on…

  11. 47 CFR 87.483 - Audio visual warning systems.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... audio visual warning system (AVWS) is a radar-based obstacle avoidance system. AVWS activates... exist. The continuously operating radar calculates the location, direction and groundspeed of nearby... obstacle. (a) Radiodetermination (radar) frequencies. Frequencies authorized under § 87.475(b)(8) of...

  12. Streaming Audio and Video: New Challenges and Opportunities for Museums.

    ERIC Educational Resources Information Center

    Spadaccini, Jim

    Streaming audio and video present new challenges and opportunities for museums. Streaming media is easier to author and deliver to Internet audiences than ever before; digital video editing is commonplace now that the tools--computers, digital video cameras, and hard drives--are so affordable; the cost of serving video files across the Internet…

  13. Audio-Visual Communications, A Tool for the Professional

    ERIC Educational Resources Information Center

    Journal of Environmental Health, 1976

    1976-01-01

    The manner in which the Cuyahoga County, Ohio Department of Environmental Health utilizes audio-visual presentations for communication with business and industry, professional public health agencies and the general public is presented. Subjects including food sanitation, radiation protection and safety are described. (BT)

  14. Sounds Good: Using Digital Audio for Evaluation Feedback

    ERIC Educational Resources Information Center

    Rotheram, Bob

    2009-01-01

    Feedback on student work is problematic for faculty and students in British higher education. Evaluation feedback takes faculty much time to produce and students are often dissatisfied with its quantity, timing, and clarity. The Sounds Good project has been experimenting with the use of digital audio for feedback, aiming to save faculty time and…

  15. An Evaluation of the Audio Workbook System. R & D Report.

    ERIC Educational Resources Information Center

    Andrulis, Richard S.

    The Cassette Review Program (CRP), developed by The American College of Life Underwriters, is organized into 10 sections corresponding to the 10 courses of the American College C.L.U. diploma program. It includes both audio tapes and notebooks. The formative evaluation of the CRP carried out in 1971 resulted in a restructuring of both the tapes…

  16. 78 FR 38093 - Seventh Meeting: RTCA Special Committee 226, Audio Systems and Equipment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-25

    ... Federal Aviation Administration Seventh Meeting: RTCA Special Committee 226, Audio Systems and Equipment... Notice of RTCA Special Committee 226, Audio Systems and Equipment. SUMMARY: The FAA is issuing this notice to advise the public of the seventh meeting of the RTCA Special Committee 226, Audio Systems...

  17. 37 CFR 202.22 - Acquisition and deposit of unpublished audio and audiovisual transmission programs.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... unpublished audio and audiovisual transmission programs. 202.22 Section 202.22 Patents, Trademarks, and... REGISTRATION OF CLAIMS TO COPYRIGHT § 202.22 Acquisition and deposit of unpublished audio and audiovisual... and copies of unpublished audio and audiovisual transmission programs by the Library of Congress...

  18. 78 FR 18416 - Sixth Meeting: RTCA Special Committee 226, Audio Systems and Equipment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-26

    ... Federal Aviation Administration Sixth Meeting: RTCA Special Committee 226, Audio Systems and Equipment... Notice of RTCA Special Committee 226, Audio Systems and Equipment. SUMMARY: The FAA is issuing this notice to advise the public of the sixth meeting of the RTCA Special Committee 226, Audio Systems...

  19. 77 FR 37733 - Third Meeting: RTCA Special Committee 226, Audio Systems and Equipment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-22

    ... Federal Aviation Administration Third Meeting: RTCA Special Committee 226, Audio Systems and Equipment... Notice of RTCA Special Committee 226, Audio Systems and Equipment. SUMMARY: The FAA is issuing this notice to advise the public of the third meeting of RTCA Special Committee 226, Audio Systems...

  20. Design and Usability Testing of an Audio Platform Game for Players with Visual Impairments

    ERIC Educational Resources Information Center

    Oren, Michael; Harding, Chris; Bonebright, Terri L.

    2008-01-01

    This article reports on the evaluation of a novel audio platform game that creates a spatial, interactive experience via audio cues. A pilot study with players with visual impairments, and usability testing comparing the visual and audio game versions using both sighted players and players with visual impairments, revealed that all the…

  1. 16 CFR 307.8 - Requirements for disclosure in audiovisual and audio advertising.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... and audio advertising. 307.8 Section 307.8 Commercial Practices FEDERAL TRADE COMMISSION REGULATIONS... ACT OF 1986 Advertising Disclosures § 307.8 Requirements for disclosure in audiovisual and audio... and in graphics so that it is easily legible. If the advertisement has an audio component, the...

  2. Audio Use in E-Learning: What, Why, When, and How?

    ERIC Educational Resources Information Center

    Calandra, Brendan; Barron, Ann E.; Thompson-Sellers, Ingrid

    2008-01-01

    Decisions related to the implementation of audio in e-learning are perplexing for many instructional designers, and deciphering theory and principles related to audio use can be difficult for practitioners. Yet, as bandwidth on the Internet increases, digital audio is becoming more common in online courses. This article provides a review of…

  3. 77 FR 58209 - Fourth Meeting: RTCA Special Committee 226, Audio Systems and Equipment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-19

    ... Federal Aviation Administration Fourth Meeting: RTCA Special Committee 226, Audio Systems and Equipment... notice of RTCA Special Committee 226, Audio Systems and Equipment. SUMMARY: The FAA is issuing this notice to advise the public of the fourth meeting of the RTCA Special Committee 226, Audio Systems...

  4. The Use of Asynchronous Audio Feedback with Online RN-BSN Students

    ERIC Educational Resources Information Center

    London, Julie E.

    2013-01-01

    The use of audio technology by online nursing educators is a recent phenomenon. Research has been conducted in the area of audio technology in different domains and populations, but very few researchers have focused on nursing. Preliminary results have indicated that using audio in place of text can increase student cognition and socialization.…

  5. 76 FR 79755 - First Meeting: RTCA Special Committee 226 Audio Systems and Equipment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-22

    ... Federal Aviation Administration First Meeting: RTCA Special Committee 226 Audio Systems and Equipment... RTCA Special Committee 226, Audio Systems and Equipment. SUMMARY: The FAA is issuing this notice to advise the public of a meeting of RTCA Special Committee 226, Audio Systems and Equipment, for the...

  6. 78 FR 57673 - Eighth Meeting: RTCA Special Committee 226, Audio Systems and Equipment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-19

    ... Federal Aviation Administration Eighth Meeting: RTCA Special Committee 226, Audio Systems and Equipment... Notice of RTCA Special Committee 226, Audio Systems and Equipment. SUMMARY: The FAA is issuing this notice to advise the public of the eighth meeting of the RTCA Special Committee 226, Audio Systems...

  7. 47 CFR 73.9005 - Compliance requirements for covered demodulator products: Audio.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... products: Audio. 73.9005 Section 73.9005 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED....9005 Compliance requirements for covered demodulator products: Audio. Except as otherwise provided in §§ 73.9003(a) or 73.9004(a), covered demodulator products shall not output the audio portions...

  8. 37 CFR 201.27 - Initial notice of distribution of digital audio recording devices or media.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... distribution of digital audio recording devices or media. 201.27 Section 201.27 Patents, Trademarks, and... Initial notice of distribution of digital audio recording devices or media. (a) General. This section... required by section 1003(b) of the Audio Home Recording Act of 1992, Public Law 102-563, title 17 of...

  9. Hearing You Loud and Clear: Student Perspectives of Audio Feedback in Higher Education

    ERIC Educational Resources Information Center

    Gould, Jill; Day, Pat

    2013-01-01

    The use of audio feedback for students in a full-time community nursing degree course is appraised. The aim of this mixed methods study was to examine student views on audio feedback for written assignments. Questionnaires and a focus group were used to capture student opinion of this pilot project. The majority of students valued audio feedback…

  10. 77 FR 37732 - Fourteenth Meeting: RTCA Special Committee 224, Audio Systems and Equipment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-22

    ... Federal Aviation Administration Fourteenth Meeting: RTCA Special Committee 224, Audio Systems and...: Meeting Notice of RTCA Special Committee 224, Audio Systems and Equipment. SUMMARY: The FAA is issuing this notice to advise the public of the fourteenth meeting of RTCA Special Committee 224, Audio...

  11. 77 FR 16890 - Second Meeting: RTCA Special Committee 226, Audio Systems and Equipment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-22

    ... Federal Aviation Administration Second Meeting: RTCA Special Committee 226, Audio Systems and Equipment... meeting RTCA Special Committee 226, Audio Systems and Equipment. SUMMARY: The FAA is issuing this notice to advise the public of the second meeting of RTCA Special Committee 226, Audio Systems and...

  12. Responding Effectively to Composition Students: Comparing Student Perceptions of Written and Audio Feedback

    ERIC Educational Resources Information Center

    Bilbro, J.; Iluzada, C.; Clark, D. E.

    2013-01-01

    The authors compared student perceptions of audio and written feedback in order to assess what types of students may benefit from receiving audio feedback on their essays rather than written feedback. Many instructors previously have reported the advantages they see in audio feedback, but little quantitative research has been done on how the…

  13. Hearing You Loud and Clear: Student Perspectives of Audio Feedback in Higher Education

    ERIC Educational Resources Information Center

    Gould, Jill; Day, Pat

    2013-01-01

    The use of audio feedback for students in a full-time community nursing degree course is appraised. The aim of this mixed methods study was to examine student views on audio feedback for written assignments. Questionnaires and a focus group were used to capture student opinion of this pilot project. The majority of students valued audio feedback…

  14. Responding Effectively to Composition Students: Comparing Student Perceptions of Written and Audio Feedback

    ERIC Educational Resources Information Center

    Bilbro, J.; Iluzada, C.; Clark, D. E.

    2013-01-01

    The authors compared student perceptions of audio and written feedback in order to assess what types of students may benefit from receiving audio feedback on their essays rather than written feedback. Many instructors previously have reported the advantages they see in audio feedback, but little quantitative research has been done on how the…

  15. Development and Evaluation of a Feedback Support System with Audio and Playback Strokes

    ERIC Educational Resources Information Center

    Li, Kai; Akahori, Kanji

    2008-01-01

    This paper describes the development and evaluation of a handwritten correction support system with audio and playback strokes used to teach Japanese writing. The study examined whether audio and playback strokes have a positive effect on students using honorific expressions in Japanese writing. The results showed that error feedback with audio

  16. Audio Use in E-Learning: What, Why, When, and How?

    ERIC Educational Resources Information Center

    Calandra, Brendan; Barron, Ann E.; Thompson-Sellers, Ingrid

    2008-01-01

    Decisions related to the implementation of audio in e-learning are perplexing for many instructional designers, and deciphering theory and principles related to audio use can be difficult for practitioners. Yet, as bandwidth on the Internet increases, digital audio is becoming more common in online courses. This article provides a review of…

  17. Parametric Packet-Layer Model for Evaluating Audio Quality in Multimedia Streaming Services

    NASA Astrophysics Data System (ADS)

    Egi, Noritsugu; Hayashi, Takanori; Takahashi, Akira

    We propose a parametric packet-layer model for monitoring audio quality in multimedia streaming services such as Internet protocol television (IPTV). This model estimates audio quality of experience (QoE) on the basis of quality degradation due to coding and packet loss of an audio sequence. The input parameters of this model are audio bit rate, sampling rate, frame length, packet-loss frequency, and average burst length. Audio bit rate, packet-loss frequency, and average burst length are calculated from header information in received IP packets. For sampling rate, frame length, and audio codec type, the values or the names used in monitored services are input into this model directly. We performed a subjective listening test to examine the relationships between these input parameters and perceived audio quality. The codec used in this test was the Advanced Audio Codec-Low Complexity (AAC-LC), which is one of the international standards for audio coding. On the basis of the test results, we developed an audio quality evaluation model. The verification results indicate that audio quality estimated by the proposed model has a high correlation with perceived audio quality.
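
    A heavily simplified parametric form in the spirit of this model is sketched below: estimated quality saturates with audio bit rate and is degraded by packet-loss frequency, with average burst length modulating the loss impact. The functional shapes and all coefficients (a through e) are invented placeholders; the actual model's mapping and coefficients come from the authors' subjective listening tests and are not reproduced here.

      # Toy packet-layer audio quality estimate on a 1..5 MOS-like scale.
      # All coefficients are made-up placeholders for illustration only.
      import math

      def estimate_audio_mos(bitrate_kbps, loss_freq, avg_burst_len,
                             a=1.0, b=3.5, c=0.02, d=8.0, e=0.3):
          # coding quality rises and saturates with bit rate
          coding_q = a + b * (1.0 - math.exp(-c * bitrate_kbps))
          # packet-loss impact grows with loss frequency and burstiness
          loss_penalty = 1.0 - math.exp(-d * loss_freq * (1.0 + e * (avg_burst_len - 1.0)))
          mos = 1.0 + (coding_q - 1.0) * (1.0 - loss_penalty)
          return max(1.0, min(5.0, mos))

      # estimate_audio_mos(bitrate_kbps=96, loss_freq=0.01, avg_burst_len=2.0)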

  18. Development and Assessment of Web Courses That Use Streaming Audio and Video Technologies.

    ERIC Educational Resources Information Center

    Ingebritsen, Thomas S.; Flickinger, Kathleen

    Iowa State University, through a program called Project BIO (Biology Instructional Outreach), has been using RealAudio technology for about 2 years in college biology courses that are offered entirely via the World Wide Web. RealAudio is a type of streaming media technology that can be used to deliver audio content and a variety of other media…

  19. Machine wanting.

    PubMed

    McShea, Daniel W

    2013-12-01

    Wants, preferences, and cares are physical things or events, not ideas or propositions, and therefore no chain of pure logic can conclude with a want, preference, or care. It follows that no pure-logic machine will ever want, prefer, or care. And its behavior will never be driven in the way that deliberate human behavior is driven, in other words, it will not be motivated or goal directed. Therefore, if we want to simulate human-style interactions with the world, we will need to first understand the physical structure of goal-directed systems. I argue that all such systems share a common nested structure, consisting of a smaller entity that moves within and is driven by a larger field that contains it. In such systems, the smaller contained entity is directed by the field, but also moves to some degree independently of it, allowing the entity to deviate and return, to show the plasticity and persistence that is characteristic of goal direction. If all this is right, then human want-driven behavior probably involves a behavior-generating mechanism that is contained within a neural field of some kind. In principle, for goal directedness generally, the containment can be virtual, raising the possibility that want-driven behavior could be simulated in standard computational systems. But there are also reasons to believe that goal-direction works better when containment is also physical, suggesting that a new kind of hardware may be necessary. PMID:23792091

  20. Machine musicianship

    NASA Astrophysics Data System (ADS)

    Rowe, Robert

    2002-05-01

    The training of musicians begins by teaching basic musical concepts, a collection of knowledge commonly known as musicianship. Computer programs designed to implement musical skills (e.g., to make sense of what they hear, perform music expressively, or compose convincing pieces) can similarly benefit from access to a fundamental level of musicianship. Recent research in music cognition, artificial intelligence, and music theory has produced a repertoire of techniques that can make the behavior of computer programs more musical. Many of these were presented in a recently published book/CD-ROM entitled Machine Musicianship. For use in interactive music systems, we are interested in those which are fast enough to run in real time and that need only make reference to the material as it appears in sequence. This talk will review several applications that are able to identify the tonal center of musical material during performance. Beyond this specific task, the design of real-time algorithmic listening through the concurrent operation of several connected analyzers is examined. The presentation includes discussion of a library of C++ objects that can be combined to perform interactive listening and a demonstration of their capability.
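
    One standard way to identify a tonal center during performance, in the spirit of the listening applications described above, is to correlate an accumulated pitch-class histogram with major and minor key profiles. The sketch below uses the Krumhansl-Kessler profiles; it illustrates the general technique only and is not the specific algorithm from the book or its C++ library.

      # Minimal key-finding sketch: correlate a 12-bin pitch-class histogram
      # with rotated major/minor key profiles (Krumhansl-Kessler values).
      import numpy as np

      MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                        2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
      MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                        2.54, 4.75, 3.98, 2.69, 3.34, 3.17])
      NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

      def tonal_center(pitch_class_hist):
          """Return (key name, correlation) best matching the histogram."""
          hist = np.asarray(pitch_class_hist, dtype=float)
          best = (None, -2.0)
          for mode, profile in (('major', MAJOR), ('minor', MINOR)):
              for tonic in range(12):
                  r = np.corrcoef(np.roll(profile, tonic), hist)[0, 1]
                  if r > best[1]:
                      best = (f"{NAMES[tonic]} {mode}", r)
          return best

      if __name__ == "__main__":
          hist = np.zeros(12)
          for pc in (0, 2, 4, 5, 7, 9, 11):   # notes of a C major scale
              hist[pc] += 1
          print(tonal_center(hist))           # should report C major for this input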

  1. Deutsch Durch Audio-Visuelle Methode: An Audio-Lingual-Oral Approach to the Teaching of German.

    ERIC Educational Resources Information Center

    Dickinson Public Schools, ND. Instructional Media Center.

    This teaching guide, designed to accompany Chilton's "Deutsch Durch Audio-Visuelle Methode" for German 1 and 2 in a three-year secondary school program, focuses major attention on the operational plan of the program and a student orientation unit. A section on teaching a unit discusses four phases: (1) presentation, (2) explanation, (3)…

  2. Planning Schools for Use of Audio-Visual Materials. No. 3: The Audio-Visual Materials Center.

    ERIC Educational Resources Information Center

    National Education Association, Washington, DC. Dept. of Audiovisual Instruction.

    This manual discusses the role, organizational patterns, expected services, and space and housing needs of the audio-visual instructional materials center. In considering the housing of basic functions, photographs, floor layouts, diagrams, and specifications of equipment are presented. An appendix includes a 77-item bibliography, a 7-page list of…

  3. The method of narrow-band audio classification based on universal noise background model

    NASA Astrophysics Data System (ADS)

    Rui, Rui; Bao, Chang-chun

    2013-03-01

    Audio classification is the basis of content-based audio analysis and retrieval. Conventional classification methods depend mainly on feature extraction over whole audio clips, which considerably increases the time required for classification. An approach for classifying narrow-band audio streams based on frame-level feature extraction is presented in this paper. The audio signals are divided into speech, instrumental music, song with accompaniment, and noise using the Gaussian mixture model (GMM). To cope with changing acoustic environments, a universal noise background model (UNBM) covering white noise, street noise, factory noise, and car interior noise is built. In addition, three feature schemes are considered to optimize feature selection. The experimental results show that the proposed algorithm achieves high classification accuracy, especially under each of the noise backgrounds used, and keeps the classification time below one second.
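
    A minimal sketch of the GMM-based classification step is given below: one Gaussian mixture is fitted per class on frame-level features, and a stream is assigned to the class whose model gives the highest log-likelihood. The features shown are simple stand-ins; the paper's three feature schemes and its universal noise background model are not reproduced.

      # Sketch of frame-level GMM audio classification (illustrative features only).
      import numpy as np
      from sklearn.mixture import GaussianMixture

      def frame_features(signal, sr, frame_len=400, hop=200):
          """Very simple frame-level features: log energy and spectral centroid."""
          feats = []
          window = np.hanning(frame_len)
          freqs = np.fft.rfftfreq(frame_len, 1.0 / sr)
          for start in range(0, len(signal) - frame_len, hop):
              frame = signal[start:start + frame_len]
              spec = np.abs(np.fft.rfft(frame * window))
              energy = np.log(np.sum(frame ** 2) + 1e-12)
              centroid = np.sum(freqs * spec) / (np.sum(spec) + 1e-12)
              feats.append([energy, centroid])
          return np.array(feats)

      def train_models(labelled_features, n_components=8):
          """Fit one GMM per class from a {label: feature matrix} dictionary."""
          return {label: GaussianMixture(n_components, covariance_type='diag',
                                         random_state=0).fit(x)
                  for label, x in labelled_features.items()}

      def classify(models, features):
          """Pick the class whose GMM gives the highest average log-likelihood."""
          scores = {label: gmm.score(features) for label, gmm in models.items()}
          return max(scores, key=scores.get)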

  4. Maintaining high-quality IP audio services in lossy IP network environments

    NASA Astrophysics Data System (ADS)

    Barton, Robert J., III; Chodura, Hartmut

    2000-07-01

    In this paper we present our research activities in the area of digital audio processing and transmission. Today's available teleconference audio solutions lack flexibility, robustness and fidelity. There was a need to enhance the quality of audio for IP-based applications in order to guarantee optimal services under varying conditions. Multiple tests and user evaluations have shown that a reliable audio communication toolkit is essential for any teleconference application. This paper summarizes our research activities and gives an overview of the developed applications. In a first step, the parameters that influence audio quality were evaluated. All of these parameters have to be optimized to achieve the best possible quality, so it was necessary to enhance existing schemes or develop new methods. Applications were developed for Internet telephony, broadcast of live music, and spatial audio for Virtual Reality environments. This paper describes these applications and the issues of delivering high-quality digital audio services over lossy IP networks.

  5. Machine Shop Lathes.

    ERIC Educational Resources Information Center

    Dunn, James

    This guide, the second in a series of five machine shop curriculum manuals, was designed for use in machine shop courses in Oklahoma. The purpose of the manual is to equip students with basic knowledge and skills that will enable them to enter the machine trade at the machine-operator level. The curriculum is designed so that it can be used in…

  6. Applied machine vision

    SciTech Connect

    Not Available

    1984-01-01

    This book presents the papers given at a conference on robot vision. Topics considered at the conference included the link between fixed and flexible automation, general applications of machine vision, the development of a specification for a machine vision system, machine vision technology, machine vision non-contact gaging, and vision in electronics manufacturing.

  7. Sony's Data Discman: A Look at These New Portable Information Machines and What They Mean for CD-ROM Developers.

    ERIC Educational Resources Information Center

    Bonime, Andrew

    1992-01-01

    Describes a portable CD-ROM machine intended for the mass market that provides access to searchable text, graphics, and audio through a user-friendly interface. Six search modes and other system features are reviewed, and electronic texts for the unit are introduced. A table compares features of the two available models. (NRP)

  8. Characterization of HF Propagation for Digital Audio Broadcasting

    NASA Technical Reports Server (NTRS)

    Vaisnys, Arvydas

    1997-01-01

    The purpose of this presentation is to give a brief overview of some propagation measurements in the Short Wave (3-30 MHz) bands, made in support of a digital audio transmission system design for the Voice of America. This task is a follow-on to the Digital Broadcast Satellite Radio task, during which several mitigation techniques were developed that are applicable to digital audio in the Short Wave bands as well, in spite of the differences in propagation impairments between the two bands. Two series of propagation measurements were made to quantify the range of impairments that could be expected. An assessment of the performance of a prototype version of the receiver was also made.

  9. Machine and process characterization

    SciTech Connect

    Love, L.W.

    1992-12-01

    A study was conducted to statistically characterize 11 precision machining centers to determine their operating characteristics and process capabilities. Measurement probes and a ball plate were used for measurement analysis. A generic test part designed with geometric features that the department typically manufactures was machined using various machining processes. A better understanding of each machine's characteristics and process capability was realized through repeating these methods on each machine.

  10. NFL Films audio, video, and film production facilities

    NASA Astrophysics Data System (ADS)

    Berger, Russ; Schrag, Richard C.; Ridings, Jason J.

    2003-04-01

    The new NFL Films 200,000 sq. ft. headquarters is home for the critically acclaimed film production that preserves the NFL's visual legacy week-to-week during the football season, and is also the technical plant that processes and archives football footage from the earliest recorded media to the current network broadcasts. No other company in the country shoots more film than NFL Films, and the inclusion of cutting-edge video and audio formats demands that their technical spaces continually integrate the latest in the ever-changing world of technology. This facility houses a staggering array of acoustically sensitive spaces where music and sound are equal partners with the visual medium. Over 90,000 sq. ft. of sound critical technical space is comprised of an array of sound stages, music scoring stages, audio control rooms, music writing rooms, recording studios, mixing theaters, video production control rooms, editing suites, and a screening theater. Every production control space in the building is designed to monitor and produce multi channel surround sound audio. An overview of the architectural and acoustical design challenges encountered for each sophisticated listening, recording, viewing, editing, and sound critical environment will be discussed.

  11. Robust video and audio-based synchronization of multimedia files

    NASA Astrophysics Data System (ADS)

    Raichel, Benjamin A.; Bajcsy, Peter

    2010-02-01

    This paper addresses the problem of robust and automated synchronization of multiple audio and video signals. The input signals are from a set of independent multimedia recordings coming from several camcorders and microphones. While the camcorders are static, the microphones are mobile as they are attached to people. The motivation for synchronizing all signals is to support studies of human interaction in decision support environments, studies that have so far been limited by the difficulty of automatically processing observations made during decision-making sessions. The data sets for this work were acquired during training exercises of response teams, rescue workers, and fire fighters at multiple locations. The synchronization methodology developed for a set of independent multimedia recordings is based on introducing aural and visual landmarks with a bell and room light switches. Our approach to synchronization is based on detecting the landmarks in the audio and video signals per camcorder and per microphone, and then fusing the results to increase the robustness and accuracy of the synchronization. We report results that demonstrate the accuracy of synchronization based on video and audio.
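
    The landmark idea can be illustrated with a minimal sketch: locate the bell in each recording as the largest jump in short-time energy and use the difference of landmark times as the alignment offset. This only illustrates the principle; the per-sensor detection and fusion steps of the paper are not shown.

      # Sketch of aural-landmark synchronization: find the bell as the largest
      # short-time energy increase in each recording, then difference the times.
      import numpy as np

      def bell_landmark_time(signal, sr, frame_len=1024, hop=512):
          """Return the time (s) of the largest short-time energy increase."""
          n_frames = 1 + (len(signal) - frame_len) // hop
          energy = np.array([np.sum(signal[i * hop:i * hop + frame_len] ** 2)
                             for i in range(n_frames)])
          onset_strength = np.diff(energy)        # energy increase per frame
          frame = int(np.argmax(onset_strength)) + 1
          return frame * hop / sr

      def sync_offset(signal_a, signal_b, sr):
          """Offset (s) to add to recording B's timestamps to align it with A."""
          return bell_landmark_time(signal_a, sr) - bell_landmark_time(signal_b, sr)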

  12. Young children's recall and reconstruction of audio and audiovisual narratives.

    PubMed

    Gibbons, J; Anderson, D R; Smith, R; Field, D E; Fischer, C

    1986-08-01

    It has been claimed that the visual component of audiovisual media dominates young children's cognitive processing. This experiment examines the effects of input modality while controlling the complexity of the visual and auditory content and while varying the comprehension task (recall vs. reconstruction). 4- and 7-year-olds were presented brief stories through either audio or audiovisual media. The audio version consisted of narrated character actions and character utterances. The narrated actions were matched to the utterances on the basis of length and propositional complexity. The audiovisual version depicted the actions visually by means of stop animation instead of by auditory narrative statements. The character utterances were the same in both versions. Audiovisual input produced superior performance on explicit information in the 4-year-olds and produced more inferences at both ages. Because performance on utterances was superior in the audiovisual condition as compared to the audio condition, there was no evidence that visual input inhibits processing of auditory information. Actions were more likely to be produced by the younger children than utterances, regardless of input medium, indicating that prior findings of visual dominance may have been due to the salience of narrative action. Reconstruction, as compared to recall, produced superior depiction of actions at both ages as well as more constrained relevant inferences and narrative conventions. PMID:3757597

  13. Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap

    PubMed Central

    Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin’Ya

    2013-01-01

    It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this is necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audio visual speech; see Experiment 1) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; see Experiment 2) we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair alone can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap. PMID:23658549

  14. Iowa Virtual Literacy Protocol: A Pre-Experimental Design Using Kurzweil 3000 Text-to-Speech Software with Incarcerated Adult Learners

    ERIC Educational Resources Information Center

    McCulley, Yvette K.

    2012-01-01

    The problem: The increasingly competitive global economy demands literate, educated workers. Both men and women experience the effects of education on employment rates and income. Racial and ethnic minorities, English language learners, and especially those with prison records are most deeply affected by the economic consequences of dropping out…

  15. The Effect of Embedded Text-to-Speech and Vocabulary eBook Scaffolds on the Comprehension of Students with Reading Disabilities

    ERIC Educational Resources Information Center

    Gonzalez, Michelle

    2014-01-01

    Limited research exists concerning the effect of interactive electronic texts or eBooks on the reading comprehension of students with reading disabilities. The purpose of this study was to determine if there was a significant difference in oral retelling and comprehension performance on multiple-choice questions when 17 students with reading…

  16. Hard Machinable Machining of Cobalt Super Alloys

    NASA Astrophysics Data System (ADS)

    Čep, Robert; Janásek, Adam; Petrů, Jana; Čepová, Lenka; Sadílek, Marek; Kratochvíl, Jiří

    2012-12-01

    The article deals with difficult-to-machine cobalt super alloys. The main aim is to test the basic properties of the cobalt super alloy with the designation 188 and to propose suitable cutting materials and machining parameters for machining it. Although the development of technology in chipless machining, such as moulding, precision casting and other manufacturing methods, continues to advance, machining is still the leading choice for piece production, typical for energy and chemical engineering. Nowadays, super alloys are commonly used in those regions of turbine engines that are subject to high temperatures, which requires high strength, high-temperature resistance, phase stability, as well as corrosion or oxidation resistance.

  17. Guidelines for the integration of audio cues into computer user interfaces

    SciTech Connect

    Sumikawa, D.A.

    1985-06-01

    Throughout the history of computers, vision has been the main channel through which information is conveyed to the computer user. As the complexities of man-machine interactions increase, more and more information must be transferred from the computer to the user and then successfully interpreted by the user. A logical next step in the evolution of the computer-user interface is the incorporation of sound, thereby engaging the sense of "hearing" in the computer experience. This allows our visual and auditory capabilities to work naturally together in unison, leading to more effective and efficient interpretation of all information received by the user from the computer. This thesis presents an initial set of guidelines to assist interface developers in designing an effective sight and sound user interface. This study is a synthesis of various aspects of sound, human communication, computer-user interfaces, and psychoacoustics. We introduce the notion of an earcon. Earcons are audio cues used in the computer-user interface to provide information and feedback to the user about some computer object, operation, or interaction. A possible construction technique for earcons, the use of earcons in the interface, how earcons are learned and remembered, and the effects of earcons on their users are investigated. This study takes the point of view that earcons are a language and human/computer communication issue and are therefore analyzed according to the three dimensions of linguistics: syntactics, semantics, and pragmatics.

  18. High capacity reversible watermarking for audio by histogram shifting and predicted error expansion.

    PubMed

    Wang, Fei; Xie, Zhaoxin; Chen, Zuo

    2014-01-01

    Being reversible, the watermarking information embedded in audio signals can be extracted while the original audio data are recovered losslessly. Currently, the few reversible audio watermarking algorithms are confronted with the following problems: relatively low SNR (signal-to-noise ratio) of the embedded audio; a large amount of auxiliary embedded location information; and the absence of accurate capacity control. In this paper, we present a novel reversible audio watermarking scheme based on improved prediction error expansion and histogram shifting. First, we use a differential evolution algorithm to optimize the prediction coefficients and then apply prediction error expansion to output the stego data. Second, in order to reduce the length of the location map, we introduce a histogram shifting scheme. Meanwhile, the prediction error modification threshold for a given embedding capacity can be computed by the proposed scheme. Experiments show that this algorithm improves the SNR of the embedded audio signals and the embedding capacity, drastically reduces the location map length, and enhances capacity control. PMID:25097883
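
    The reversible-embedding principle can be illustrated with classic difference expansion, a simplified relative of prediction error expansion in which the first sample of each pair serves as the prediction of the second. The sketch below omits the paper's optimized prediction coefficients, histogram shifting, and overflow handling (location map).

      # Sketch of reversible embedding by expansion of a prediction error
      # (classic pairwise difference expansion, a simplified relative of PEE).
      import numpy as np

      def embed(samples, bits):
          """Embed one bit per sample pair into a copy of an integer array."""
          s = np.array(samples, dtype=np.int64)
          for i, b in enumerate(bits):
              x, y = s[2 * i], s[2 * i + 1]
              h, l = x - y, (x + y) // 2       # prediction error and base value
              h2 = 2 * h + int(b)              # expand the error, append the bit
              s[2 * i] = l + (h2 + 1) // 2
              s[2 * i + 1] = l - h2 // 2
          return s

      def extract(stego, n_bits):
          """Recover the bits and restore the original samples losslessly."""
          s = np.array(stego, dtype=np.int64)
          bits = []
          for i in range(n_bits):
              x, y = s[2 * i], s[2 * i + 1]
              h2, l = x - y, (x + y) // 2
              bits.append(int(h2 % 2))
              h = h2 // 2                      # undo the expansion
              s[2 * i] = l + (h + 1) // 2
              s[2 * i + 1] = l - h // 2
          return bits, s

      if __name__ == "__main__":
          audio = [100, 101, 50, 48, -20, -21, 7, 7]
          stego = embed(audio, [1, 0, 1, 1])
          bits, restored = extract(stego, 4)
          assert bits == [1, 0, 1, 1] and restored.tolist() == audio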

  19. High Capacity Reversible Watermarking for Audio by Histogram Shifting and Predicted Error Expansion

    PubMed Central

    Wang, Fei; Chen, Zuo

    2014-01-01

    Being reversible, the watermarking information embedded in audio signals can be extracted while the original audio data are recovered losslessly. Currently, the few reversible audio watermarking algorithms are confronted with the following problems: relatively low SNR (signal-to-noise ratio) of the embedded audio; a large amount of auxiliary embedded location information; and the absence of accurate capacity control. In this paper, we present a novel reversible audio watermarking scheme based on improved prediction error expansion and histogram shifting. First, we use a differential evolution algorithm to optimize the prediction coefficients and then apply prediction error expansion to output the stego data. Second, in order to reduce the length of the location map, we introduce a histogram shifting scheme. Meanwhile, the prediction error modification threshold for a given embedding capacity can be computed by the proposed scheme. Experiments show that this algorithm improves the SNR of the embedded audio signals and the embedding capacity, drastically reduces the location map length, and enhances capacity control. PMID:25097883

  20. AudioSense: Enabling Real-time Evaluation of Hearing Aid Technology In-Situ

    PubMed Central

    Hasan, Syed Shabih; Lai, Farley; Chipara, Octav; Wu, Yu-Hsiang

    2014-01-01

    AudioSense integrates mobile phones and web technology to measure hearing aid performance in real-time and in-situ. Measuring the performance of hearing aids in the real world poses significant challenges as it depends on the patient's listening context. AudioSense uses Ecological Momentary Assessment methods to evaluate both the perceived hearing aid performance as well as to characterize the listening environment using electronic surveys. AudioSense further characterizes a patient's listening context by recording their GPS location and sound samples. By creating a time-synchronized record of listening performance and listening contexts, AudioSense will allow researchers to understand the relationship between listening context and hearing aid performance. Performance evaluation shows that AudioSense is reliable, energy-efficient, and can estimate Signal-to-Noise Ratio (SNR) levels from captured audio samples. PMID:25013874
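
    As an illustration of estimating an SNR from a captured sound sample, the sketch below treats the quietest frames as noise and the loudest frames as speech plus noise. This heuristic is an assumption made here for illustration; it is not the estimator used by the AudioSense system.

      # Rough SNR estimate from frame energies of a mono signal (illustrative heuristic).
      import numpy as np

      def estimate_snr_db(signal, frame_len=1024):
          n = len(signal) // frame_len
          frames = np.reshape(signal[:n * frame_len], (n, frame_len))
          power = np.mean(frames ** 2, axis=1)
          power.sort()
          noise = np.mean(power[:max(1, n // 10)])                 # quietest 10% of frames
          speech_plus_noise = np.mean(power[-max(1, n // 10):])    # loudest 10% of frames
          signal_power = max(speech_plus_noise - noise, 1e-12)
          return 10.0 * np.log10(signal_power / max(noise, 1e-12))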

  1. AudioSense: Enabling Real-time Evaluation of Hearing Aid Technology In-Situ.

    PubMed

    Hasan, Syed Shabih; Lai, Farley; Chipara, Octav; Wu, Yu-Hsiang

    2013-01-01

    AudioSense integrates mobile phones and web technology to measure hearing aid performance in real-time and in-situ. Measuring the performance of hearing aids in the real world poses significant challenges as it depends on the patient's listening context. AudioSense uses Ecological Momentary Assessment methods to evaluate both the perceived hearing aid performance as well as to characterize the listening environment using electronic surveys. AudioSense further characterizes a patient's listening context by recording their GPS location and sound samples. By creating a time-synchronized record of listening performance and listening contexts, AudioSense will allow researchers to understand the relationship between listening context and hearing aid performance. Performance evaluation shows that AudioSense is reliable, energy-efficient, and can estimate Signal-to-Noise Ratio (SNR) levels from captured audio samples. PMID:25013874

  2. Stirling machine operating experience

    NASA Technical Reports Server (NTRS)

    Ross, Brad; Dudenhoefer, James E.

    1991-01-01

    Numerous Stirling machines have been built and operated, but the operating experience of these machines is not well known. It is important to examine this operating experience in detail, because it largely substantiates the claim that Stirling machines are capable of reliable and lengthy lives. The amount of data that exists is impressive, considering that many of the machines that have been built are developmental machines intended to show proof of concept, and were not expected to operate for any lengthy period of time. Some Stirling machines (typically free-piston machines) achieve long life through non-contact bearings, while other Stirling machines (typically kinematic) have achieved long operating lives through regular seal and bearing replacements. In addition to engine and system testing, life testing of critical components is also considered.

  3. Women, Men, and Machines.

    ERIC Educational Resources Information Center

    Form, William; McMillen, David Byron

    1983-01-01

    Data from the first national study of technological change show that proportionately more women than men operate machines, are more exposed to machines that have alienating effects, and suffer more from the negative effects of technological change. (Author/SSH)

  4. Spatial Audio on the Web: Or Why Can't I hear Anything Over There?

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Schlickenmaier, Herbert (Technical Monitor); Johnson, Gerald (Technical Monitor); Frey, Mary Anne (Technical Monitor); Schneider, Victor S. (Technical Monitor); Ahunada, Albert J. (Technical Monitor)

    1997-01-01

    Auditory complexity, freedom of movement and interactivity are not always possible in a "true" virtual environment, much less in web-based audio. However, many of the perceptual and engineering constraints (and frustrations) that researchers, engineers and listeners have experienced in virtual audio are relevant to spatial audio on the web. My talk will discuss some of these engineering constraints and their perceptual consequences, and attempt to relate these issues to implementation on the web.

  5. Reverse vending machine update

    SciTech Connect

    Rypins, S.; Papke, C.

    1986-02-01

    The document discusses reverse vending machines. Placed outdoors in supermarket parking lots or indoors in the lobby of the grocery market, these high-tech machines exchange aluminum cans (or other containers in more specialized machines) for cash, coupons or redeemable receipts. The placement of reverse vendors (RVs) in or near supermarkets has made recycling more visible and more convenient, although the machines have yet to fully reach industry goals.

  6. Music and audio - oh how they can stress your network

    NASA Astrophysics Data System (ADS)

    Fletcher, R.

    Nearly ten years ago a paper written by the Audio Engineering Society (AES)[1] made a number of interesting statements: (1) the current Internet is inadequate for transmitting music and professional audio; (2) performance and collaboration across a distance stress the quality of service beyond acceptable bounds; (3) audio and music provide test cases in which the bounds of the network are quickly reached and through which the defects in a network are readily perceived. Given these key points, where are we now? Have we started to solve any of the problems from the musician's point of view? What is it that a musician would like to do that can cause the network so many problems? To understand this we need to appreciate that a trained musician's ears are extremely sensitive to very subtle shifts in temporal material and localisation information. A shift of a few milliseconds can cause difficulties. So, can modern networks provide the temporal accuracy demanded at this level? The sample and bit rates needed to represent music in the digital domain are still contentious, but a general consensus in the professional world is for 96 kHz and IEEE 64-bit floating point. If this were run between two points on the network across 24 channels in near real time to allow for collaborative composition, production and performance, with QoS settings demanding latency and jitter as close to zero as possible, it can be seen that the network indeed has to perform very well.
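
    A back-of-the-envelope calculation makes the network load implied by these figures explicit: 24 channels of 96 kHz audio at 64 bits per sample amount to roughly 147.5 Mbit/s before any packetization overhead.

      # Raw audio bitrate implied by the figures quoted in the abstract.
      def raw_bitrate_mbps(channels=24, sample_rate_hz=96_000, bits_per_sample=64):
          return channels * sample_rate_hz * bits_per_sample / 1e6

      print(raw_bitrate_mbps())   # ~147.5 Mbit/s, excluding packet overhead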

  7. Informed spectral analysis: audio signal parameter estimation using side information

    NASA Astrophysics Data System (ADS)

    Fourer, Dominique; Marchand, Sylvain

    2013-12-01

    Parametric models are of great interest for representing and manipulating sounds. However, the quality of the resulting signals depends on the precision of the parameters. When the signals are available, these parameters can be estimated, but the presence of noise decreases the resulting precision of the estimation. Furthermore, the Cramér-Rao bound shows the minimal error reachable with the best estimator, which can be insufficient for demanding applications. These limitations can be overcome by using the coding approach, which consists in directly transmitting the parameters with the best precision using the minimal bitrate. However, this approach does not take advantage of the information provided by estimation from the signal, may require a larger bitrate, and entails a loss of compatibility with existing file formats. The purpose of this article is to propose a compromise approach, called the 'informed approach,' which combines analysis with (coded) side information in order to increase the precision of parameter estimation using a lower bitrate than pure coding approaches, the audio signal being known. Thus, the analysis problem is presented in a coder/decoder configuration where the side information is computed and inaudibly embedded into the mixture signal at the coder. At the decoder, the extra information is extracted and is used to assist the analysis process. This study proposes applying this approach to audio spectral analysis using sinusoidal modeling, which is a well-known model with practical applications and for which theoretical bounds have been calculated. This work aims at uncovering new approaches for audio quality-based applications. It provides a solution for challenging problems like active listening of music, source separation, and realistic sound transformations.
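
    The estimation step that the informed approach improves upon can be sketched as classic sinusoidal analysis: pick a spectral peak and refine its frequency and amplitude by parabolic interpolation of the log-magnitude spectrum. The code below shows only this baseline estimator; the embedding and use of coded side information are not shown.

      # Sketch of sinusoidal-model parameter estimation by FFT peak picking
      # with parabolic interpolation on the log-magnitude spectrum.
      import numpy as np

      def estimate_sinusoid(frame, sr):
          """Estimate (frequency in Hz, amplitude) of the dominant sinusoid."""
          n = len(frame)
          window = np.hanning(n)
          spec = np.fft.rfft(frame * window)
          mag_db = 20.0 * np.log10(np.abs(spec) + 1e-12)
          k = int(np.argmax(mag_db[1:-1])) + 1             # peak bin, not an edge bin
          a, b, c = mag_db[k - 1], mag_db[k], mag_db[k + 1]
          delta = 0.5 * (a - c) / (a - 2 * b + c)          # parabolic offset in bins
          freq = (k + delta) * sr / n
          peak_db = b - 0.25 * (a - c) * delta
          amplitude = 2.0 * (10 ** (peak_db / 20.0)) / np.sum(window)
          return freq, amplitude

      if __name__ == "__main__":
          sr, n = 48000, 2048
          t = np.arange(n) / sr
          frame = 0.5 * np.sin(2 * np.pi * 440.3 * t)
          print(estimate_sinusoid(frame, sr))   # close to (440.3, 0.5)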

  8. Realization of guitar audio effects using methods of digital signal processing

    NASA Astrophysics Data System (ADS)

    Buś, Szymon; Jędrzejewski, Konrad

    2015-09-01

    The paper is devoted to the possibilities of realizing guitar audio effects by means of digital signal processing. As a result of this research, selected audio effects suited to the specifics of the guitar sound were realized as a real-time system called the Digital Guitar Multi-effect. Before implementation in the system, the selected effects were investigated using a dedicated application with a graphical user interface created in the Matlab environment. In the second stage, the real-time system, based on a microcontroller and an audio codec, was designed and realized. The system performs audio effects on the output signal of an electric guitar.
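
    Two typical guitar effects of the kind described can be sketched with basic DSP: a tanh soft-clipping overdrive followed by a feedback delay. This illustrates the class of processing involved, not the actual Digital Guitar Multi-effect implementation.

      # Sketch of two common guitar effects: soft-clipping distortion and feedback delay.
      import numpy as np

      def soft_clip(x, drive=5.0):
          """Overdrive-style waveshaper: tanh soft clipping, normalised to +/-1."""
          return np.tanh(drive * x) / np.tanh(drive)

      def feedback_delay(x, sr, delay_ms=350.0, feedback=0.4, mix=0.35):
          """Single delay line with feedback, mixed with the dry signal."""
          d = int(sr * delay_ms / 1000.0)
          y = np.zeros_like(x)
          buf = np.zeros(len(x) + d)
          for n in range(len(x)):
              delayed = buf[n]                  # value written d samples earlier
              buf[n + d] = x[n] + feedback * delayed
              y[n] = (1.0 - mix) * x[n] + mix * delayed
          return y

      if __name__ == "__main__":
          sr = 44100
          t = np.arange(sr) / sr
          guitar = 0.3 * np.sin(2 * np.pi * 196.0 * t)   # open G string stand-in
          out = feedback_delay(soft_clip(guitar), sr)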

  9. A Virtual Audio Guidance and Alert System for Commercial Aircraft Operations

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.; Shrum, Richard; Miller, Joel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    Our work in virtual reality systems at NASA Ames Research Center includes the area of aurally-guided visual search, using specially-designed audio cues and spatial audio processing (also known as virtual or "3-D audio") techniques (Begault, 1994). Previous studies at Ames had revealed that use of 3-D audio for Traffic Collision Avoidance System (TCAS) advisories significantly reduced head-down time, compared to a head-down map display (0.5 sec advantage) or no display at all (2.2 sec advantage) (Begault, 1993, 1995; Begault & Pittman, 1994; see Wenzel, 1994, for an audio demo). Since the crew must keep their head up and looking out the window as much as possible when taxiing under low-visibility conditions, and the potential for "blunder" is increased under such conditions, it was sensible to evaluate the audio spatial cueing for a prototype audio ground collision avoidance warning (GCAW) system, and a 3-D audio guidance system. Results were favorable for GCAW, but not for the audio guidance system.

  10. ASTP video tape recorder ground support equipment (audio/CTE splitter/interleaver). Operations manual

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A descriptive handbook for the audio/CTE splitter/interleaver (RCA part No. 8673734-502) was presented. This unit is designed to perform two major functions: extract audio and time data from an interleaved video/audio signal (splitter section), and provide a test interleaved video/audio/CTE signal for the system (interleaver section). It is a rack mounting unit 7 inches high, 19 inches wide, 20 inches deep, mounted on slides for retracting from the rack, and weighs approximately 40 pounds. The following information is provided: installation, operation, principles of operation, maintenance, schematics and parts lists.

  11. Apprentice Machine Theory Outline.

    ERIC Educational Resources Information Center

    Connecticut State Dept. of Education, Hartford. Div. of Vocational-Technical Schools.

    This volume contains outlines for 16 courses in machine theory that are designed for machine tool apprentices. Addressed in the individual course outlines are the following topics: basic concepts; lathes; milling machines; drills, saws, and shapers; heat treatment and metallurgy; grinders; quality control; hydraulics and pneumatics;…

  12. Your Sewing Machine.

    ERIC Educational Resources Information Center

    Peacock, Marion E.

    The programed instruction manual is designed to aid the student in learning the parts, uses, and operation of the sewing machine. Drawings of sewing machine parts are presented, and space is provided for the student's written responses. Following an introductory section identifying sewing machine parts, the manual deals with each part and its…

  13. TV audio and video on the same channel

    NASA Technical Reports Server (NTRS)

    Hopkins, J. B.

    1979-01-01

    The transmitting technique adds audio to the video signal during the vertical blanking interval. SIVI (signal in the vertical interval) is used by TV networks and stations to transmit cuing and automatic-switching tone signals to augment automatic and manual operations. It can also be used to transmit one-way instructional information, such as bulletin alerts, program changes, and commercial-cutaway aural cues from the networks to affiliates. Additionally, it can be used as an extra sound channel for second-language transmission to bilingual stations.

  14. An assessment of individualized technical ear training for audio production.

    PubMed

    Kim, Sungyoung

    2015-07-01

    An individualized technical ear training method is compared to a non-individualized method. The efficacy of the individualized method is assessed using a standardized test conducted before and after the training period. Participants who received individualized training improved more on the test than the control group. Results indicate the importance of individualized training for the acquisition of spectrum-identification and spectrum-matching skills. Individualized training, therefore, should be implemented by default into technical ear training programs used in the audio production industry and in education. PMID:26233051

  15. Detecting Hubs in Music Audio Based on Network Analysis

    NASA Astrophysics Data System (ADS)

    Nanopoulos, Alexandros

    Spectral similarity measures are considered among the best-performing audio-based music similarity measures. However, they tend to produce hubs, i.e., songs measured closely to many other songs, to which they have no perceptual similarity. In this paper, we define a novel way to measure the hubness of songs. Based on network analysis methods, we propose a hubness score that is computed by analyzing the interaction of songs in the similarity space. We experimentally evaluate the effectiveness of the proposed approach.
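
    A simple way to detect hubs from a pairwise distance matrix is the k-occurrence count: how often each song appears in other songs' k-nearest-neighbour lists. The paper's network-analysis-based hubness score is a more elaborate quantity; the sketch below only illustrates how hub-like behaviour can be quantified from pairwise similarities.

      # Sketch of a k-occurrence hubness measure from a pairwise distance matrix.
      import numpy as np

      def k_occurrence(distances, k=5):
          """distances: (n, n) symmetric matrix. Returns N_k for each item."""
          n = distances.shape[0]
          counts = np.zeros(n, dtype=int)
          for i in range(n):
              order = np.argsort(distances[i])
              neighbours = [j for j in order if j != i][:k]
              counts[neighbours] += 1
          return counts   # items with unusually large N_k behave as hubs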

  16. Audio-vocal responses elicited in adult cochlear implant users

    PubMed Central

    Loucks, Torrey M.; Suneel, Deepa; Aronoff, Justin M.

    2015-01-01

    Auditory deprivation experienced prior to receiving a cochlear implant could compromise neural connections that allow for modulation of vocalization using auditory feedback. In this report, pitch-shift stimuli were presented to adult cochlear implant users to test whether compensatory motor changes in vocal F0 could be elicited. In five of six participants, rapid adjustments in vocal F0 were detected following the stimuli, which resemble the cortically mediated pitch-shift responses observed in typical hearing individuals. These findings suggest that cochlear implants can convey vocal F0 shifts to the auditory pathway that might benefit audio-vocal monitoring. PMID:26520350

  17. Incorporating Auditory Models in Speech/Audio Applications

    NASA Astrophysics Data System (ADS)

    Krishnamoorthi, Harish

    2011-12-01

    Following the success in incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly/indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes possible solutions to overcome high complexity issues for use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at addressing the high computational complexity involved in solving perceptual objective functions that require repeated application of auditory model for evaluation of different candidate solutions. In this dissertation, a frequency pruning and a detector pruning algorithm is developed that efficiently implements the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to 80-90 % reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals and employs the proposed auditory pattern combining technique together with a look-up table to store representative auditory patterns. The second problem obtains an estimate of the auditory representation that minimizes a perceptual objective function and transforms the auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of auditory model stages to test different candidate time/frequency vectors in minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages that ensures obtaining a time/frequency mapping corresponding to the estimated auditory representation. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.

  18. Fast Huffman encoding algorithms in MPEG-4 advanced audio coding

    NASA Astrophysics Data System (ADS)

    Brzuchalski, Grzegorz

    2014-11-01

    This paper addresses the optimisation problem of Huffman encoding in the MPEG-4 Advanced Audio Coding standard. First, the Huffman encoding problem and the need to encode two side-info parameters, the scale factor and the Huffman codebook, are presented. Next, the Two Loop Search, Maximum Noise Mask Ratio and Trellis Based bit allocation algorithms are briefly described. Further, Huffman encoding optimisations are shown. The new methods try to check and change scale factor bands as little as possible while estimating the bitrate cost or its change. Finally, the complexity of the old and new methods is calculated and compared, and the measured encoding time is given.
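
    As a reminder of the coding step whose cost such optimisations estimate, the sketch below builds a Huffman code for a block of quantised values using a binary heap. AAC's predefined Huffman codebooks and its scale-factor coding are not reproduced here.

      # Minimal Huffman code construction with heapq (general technique, not AAC's codebooks).
      import heapq
      from collections import Counter

      def huffman_code(symbols):
          """Build a {symbol: bitstring} code from an iterable of symbols."""
          freq = Counter(symbols)
          if len(freq) == 1:
              return {next(iter(freq)): "0"}
          heap = [[count, [sym, ""]] for sym, count in freq.items()]
          heapq.heapify(heap)
          while len(heap) > 1:
              lo = heapq.heappop(heap)
              hi = heapq.heappop(heap)
              for pair in lo[1:]:
                  pair[1] = "0" + pair[1]       # prefix codes in the lighter subtree with 0
              for pair in hi[1:]:
                  pair[1] = "1" + pair[1]       # and the heavier subtree with 1
              heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
          return dict(heapq.heappop(heap)[1:])

      if __name__ == "__main__":
          coeffs = [0, 0, 1, -1, 0, 2, 0, 0, -1, 0]   # stand-in quantised spectral values
          code = huffman_code(coeffs)
          bitstream = "".join(code[c] for c in coeffs)
          print(code, len(bitstream), "bits")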

  19. Audio-vocal responses elicited in adult cochlear implant users.

    PubMed

    Loucks, Torrey M; Suneel, Deepa; Aronoff, Justin M

    2015-10-01

    Auditory deprivation experienced prior to receiving a cochlear implant could compromise neural connections that allow for modulation of vocalization using auditory feedback. In this report, pitch-shift stimuli were presented to adult cochlear implant users to test whether compensatory motor changes in vocal F0 could be elicited. In five of six participants, rapid adjustments in vocal F0 were detected following the stimuli, which resemble the cortically mediated pitch-shift responses observed in typical hearing individuals. These findings suggest that cochlear implants can convey vocal F0 shifts to the auditory pathway that might benefit audio-vocal monitoring. PMID:26520350

  20. Statistical Lip-Appearance Models Trained Automatically Using Audio Information

    NASA Astrophysics Data System (ADS)

    Daubias, Philippe; Deléglise, Paul

    2002-12-01

    We aim at modeling the appearance of the lower face region to assist visual feature extraction for audio-visual speech processing applications. In this paper, we present a neural network based statistical appearance model of the lips which classifies pixels as belonging to the lips, skin, or inner mouth classes. This model requires labeled examples to be trained, and we propose to label images automatically by employing a lip-shape model and a red-hue energy function. To improve the performance of lip-tracking, we propose to use blue marked-up image sequences of the same subject uttering the identical sentences as natural nonmarked-up ones. The easily extracted lip shapes from blue images are then mapped to the natural ones using acoustic information. The lip-shape estimates obtained simplify lip-tracking on the natural images, as they reduce the parameter space dimensionality in the red-hue energy minimization, thus yielding better contour shape and location estimates. We applied the proposed method to a small audio-visual database of three subjects, achieving errors in pixel classification around 6%, compared to 3% for hand-placed contours and 20% for filtered red-hue.

  1. Information-Driven Active Audio-Visual Source Localization

    PubMed Central

    Schult, Niclas; Reineking, Thomas; Kluss, Thorsten; Zetzsche, Christoph

    2015-01-01

    We present a system for sensorimotor audio-visual source localization on a mobile robot. We utilize a particle filter for the combination of audio-visual information and for the temporal integration of consecutive measurements. Although the system only measures the current direction of the source, the position of the source can be estimated because the robot is able to move and can therefore obtain measurements from different directions. These actions by the robot successively reduce uncertainty about the source’s position. An information gain mechanism is used for selecting the most informative actions in order to minimize the number of actions required to achieve accurate and precise position estimates in azimuth and distance. We show that this mechanism is an efficient solution to the action selection problem for source localization, and that it is able to produce precise position estimates despite simplified unisensory preprocessing. Because of the robot’s mobility, this approach is suitable for use in complex and cluttered environments. We present qualitative and quantitative results of the system’s performance and discuss possible areas of application. PMID:26327619
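
    The estimation core of such a system can be sketched as a particle filter that fuses bearing-only measurements taken from different robot positions into a two-dimensional source position estimate. The information-gain action selection described in the paper is not shown, and the measurement noise level and motion jitter below are illustrative assumptions.

      # Sketch of a particle filter fusing bearing-only measurements from several positions.
      import numpy as np

      rng = np.random.default_rng(0)

      def update(particles, weights, robot_pos, measured_bearing, sigma=np.deg2rad(10)):
          """Re-weight particles by the likelihood of one bearing measurement."""
          dx = particles[:, 0] - robot_pos[0]
          dy = particles[:, 1] - robot_pos[1]
          predicted = np.arctan2(dy, dx)
          err = np.angle(np.exp(1j * (predicted - measured_bearing)))  # wrap to [-pi, pi]
          weights = weights * np.exp(-0.5 * (err / sigma) ** 2)
          return weights / np.sum(weights)

      def resample(particles, weights):
          """Draw particles proportionally to weight, with a little jitter."""
          idx = rng.choice(len(particles), size=len(particles), p=weights)
          new_particles = particles[idx] + rng.normal(0, 0.05, particles.shape)
          return new_particles, np.full(len(particles), 1.0 / len(particles))

      if __name__ == "__main__":
          source = np.array([3.0, 2.0])
          particles = rng.uniform(-5, 5, (2000, 2))
          weights = np.full(len(particles), 1.0 / len(particles))
          for robot_pos in ([0, 0], [2, -1], [-1, 3]):   # robot moves between measurements
              bearing = np.arctan2(source[1] - robot_pos[1], source[0] - robot_pos[0])
              weights = update(particles, weights, robot_pos, bearing)
              particles, weights = resample(particles, weights)
          print(np.average(particles, axis=0, weights=weights))   # near [3.0, 2.0]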

  2. Audio-tactile integration and the influence of musical training.

    PubMed

    Kuchenbuch, Anja; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Pantev, Christo

    2014-01-01

    Perception of our environment is a multisensory experience; information from different sensory systems like the auditory, visual and tactile is constantly integrated. Complex tasks that require high temporal and spatial precision of multisensory integration put strong demands on the underlying networks but it is largely unknown how task experience shapes multisensory processing. Long-term musical training is an excellent model for brain plasticity because it shapes the human brain at functional and structural levels, affecting a network of brain areas. In the present study we used magnetoencephalography (MEG) to investigate how audio-tactile perception is integrated in the human brain and if musicians show enhancement of the corresponding activation compared to non-musicians. Using a paradigm that allowed the investigation of combined and separate auditory and tactile processing, we found a multisensory incongruency response, generated in frontal, cingulate and cerebellar regions, an auditory mismatch response generated mainly in the auditory cortex and a tactile mismatch response generated in frontal and cerebellar regions. The influence of musical training was seen in the audio-tactile as well as in the auditory condition, indicating enhanced higher-order processing in musicians, while the sources of the tactile MMN were not influenced by long-term musical training. Consistent with the predictive coding model, more basic, bottom-up sensory processing was relatively stable and less affected by expertise, whereas areas for top-down models of multisensory expectancies were modulated by training. PMID:24465675

  3. Head Tracking of Auditory, Visual, and Audio-Visual Targets

    PubMed Central

    Leung, Johahn; Wei, Vincent; Burgess, Martin; Carlile, Simon

    2016-01-01

    The ability to actively follow a moving auditory target with our heads remains unexplored even though it is a common behavioral response. Previous studies of auditory motion perception have focused on the condition where the subjects are passive. The current study examined head tracking behavior to a moving auditory target along a horizontal 100° arc in the frontal hemisphere, with velocities ranging from 20 to 110°/s. By integrating high fidelity virtual auditory space with a high-speed visual presentation we compared tracking responses of auditory targets against visual-only and audio-visual “bisensory” stimuli. Three metrics were measured: onset, RMS, and gain error. The results showed that tracking accuracy (RMS error) varied linearly with target velocity, with a significantly higher rate in audition. Also, when the target moved faster than 80°/s, onset and RMS error were significantly worse in audition than in the other modalities, while responses in the visual and bisensory conditions were statistically identical for all metrics measured. Lastly, audio-visual facilitation was not observed when tracking bisensory targets. PMID:26778952

  4. Audio annotation watermarking with robustness against DA/AD conversion

    NASA Astrophysics Data System (ADS)

    Qian, Kun; Kraetzer, Christian; Biermann, Michael; Dittmann, Jana

    2010-01-01

    In the paper we present a watermarking scheme developed to meet the specific requirements of audio annotation watermarking robust against DA/AD conversion (watermark detection after playback by loudspeaker and recording with a microphone). Additionally, the described approach tries to achieve a comparably low detection complexity, so that in the near future it could be embedded in low-end devices (e.g. mobile phones or other portable devices). We assume in the field of annotation watermarking that there is no specific motivation for attackers to attack the developed scheme. The basic idea for the watermark generation and embedding scheme is to combine traditional frequency-domain spread spectrum watermarking with psychoacoustic modeling to guarantee transparency, and alphabet substitution to improve the robustness. The synchronization and extraction scheme is designed to be much less computationally complex than the embedder. The performance of the scheme is evaluated in terms of transparency, robustness, complexity and capacity. The tests reveal that 44% of the 375 tested audio files pass the simulation test for robustness, while the most appropriate category shows even 100% robustness. Additionally, the introduced prototype shows an average transparency of -1.69 in SDG, while at the same time having a capacity satisfactory to the chosen application scenario.
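
    The underlying spread-spectrum idea can be sketched in a few lines: add a low-amplitude, key-derived pseudo-random sequence to the signal and detect it later by correlation. The paper's frequency-domain embedding, psychoacoustic shaping, and alphabet substitution are omitted, so this is only an illustration of the principle.

      # Sketch of basic spread-spectrum watermark embedding and correlation detection.
      import numpy as np

      def embed(audio, key, alpha=0.005):
          """Add a key-derived +/-1 sequence at low amplitude."""
          rng = np.random.default_rng(key)
          prn = rng.choice([-1.0, 1.0], size=len(audio))
          return audio + alpha * prn

      def detect(audio, key, threshold=0.0025):
          """Correlate with the same key-derived sequence; report presence and score."""
          rng = np.random.default_rng(key)
          prn = rng.choice([-1.0, 1.0], size=len(audio))
          corr = float(np.dot(audio, prn)) / len(audio)
          return corr > threshold, corr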

  5. Audio-Tactile Integration and the Influence of Musical Training

    PubMed Central

    Kuchenbuch, Anja; Paraskevopoulos, Evangelos; Herholz, Sibylle C.; Pantev, Christo

    2014-01-01

    Perception of our environment is a multisensory experience; information from different sensory systems like the auditory, visual and tactile is constantly integrated. Complex tasks that require high temporal and spatial precision of multisensory integration put strong demands on the underlying networks but it is largely unknown how task experience shapes multisensory processing. Long-term musical training is an excellent model for brain plasticity because it shapes the human brain at functional and structural levels, affecting a network of brain areas. In the present study we used magnetoencephalography (MEG) to investigate how audio-tactile perception is integrated in the human brain and if musicians show enhancement of the corresponding activation compared to non-musicians. Using a paradigm that allowed the investigation of combined and separate auditory and tactile processing, we found a multisensory incongruency response, generated in frontal, cingulate and cerebellar regions, an auditory mismatch response generated mainly in the auditory cortex and a tactile mismatch response generated in frontal and cerebellar regions. The influence of musical training was seen in the audio-tactile as well as in the auditory condition, indicating enhanced higher-order processing in musicians, while the sources of the tactile MMN were not influenced by long-term musical training. Consistent with the predictive coding model, more basic, bottom-up sensory processing was relatively stable and less affected by expertise, whereas areas for top-down models of multisensory expectancies were modulated by training. PMID:24465675

  6. Perspex machine: VII. The universal perspex machine

    NASA Astrophysics Data System (ADS)

    Anderson, James A. D. W.

    2006-01-01

    The perspex machine arose from the unification of projective geometry with the Turing machine. It uses a total arithmetic, called transreal arithmetic, that contains real arithmetic and allows division by zero. Transreal arithmetic is redefined here. The new arithmetic has both a positive and a negative infinity which lie at the extremes of the number line, and a number nullity that lies off the number line. We prove that nullity, 0/0, is a number. Hence a number may have one of four signs: negative, zero, positive, or nullity. It is, therefore, impossible to encode the sign of a number in one bit, as floating-point arithmetic attempts to do, resulting in the difficulty of having both positive and negative zeros and NaNs. Transrational arithmetic is consistent with Cantor arithmetic. In an extension to real arithmetic, the product of zero, an infinity, or nullity with its reciprocal is nullity, not unity. This avoids the usual contradictions that follow from allowing division by zero. Transreal arithmetic has a fixed algebraic structure and does not admit options as IEEE, floating-point arithmetic does. Most significantly, nullity has a simple semantics that is related to zero. Zero means "no value" and nullity means "no information." We argue that nullity is as useful to a manufactured computer as zero is to a human computer. The perspex machine is intended to offer one solution to the mind-body problem by showing how the computable aspects of mind and, perhaps, the whole of mind relates to the geometrical aspects of body and, perhaps, the whole of body. We review some of Turing's writings and show that he held the view that his machine has spatial properties. In particular, that it has the property of being a 7D lattice of compact spaces. Thus, we read Turing as believing that his machine relates computation to geometrical bodies. We simplify the perspex machine by substituting an augmented Euclidean geometry for projective geometry. This leads to a general-linear perspex-machine which is very much easier to program than the original perspex-machine. We then show how to map the whole of perspex space into a unit cube. This allows us to construct a fractal of perspex machines with the cardinality of a real-numbered line or space. This fractal is the universal perspex machine. It can solve, in unit time, the halting problem for itself and for all perspex machines instantiated in real-numbered space, including all Turing machines. We cite an experiment that has been proposed to test the physical reality of the perspex machine's model of time, but we make no claim that the physical universe works this way or that it has the cardinality of the perspex machine. We leave it that the perspex machine provides an upper bound on the computational properties of physical things, including manufactured computers and biological organisms, that have a cardinality no greater than the real-number line.
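
    The transreal arithmetic described above can be illustrated with a totalised division operation in which 1/0 is positive infinity, -1/0 is negative infinity, and 0/0 is nullity. The representation chosen below (strings for the non-finite values) is an assumption made for illustration, not Anderson's formulation.

      # Sketch of transreal division: real division extended with +inf, -inf and nullity.
      POS_INF, NEG_INF, NULLITY = "+inf", "-inf", "nullity"

      def trans_div(a, b):
          """Totalised division: every pair of transreal inputs has a defined result."""
          if a == NULLITY or b == NULLITY:
              return NULLITY                  # nullity absorbs everything
          a_inf = a in (POS_INF, NEG_INF)
          b_inf = b in (POS_INF, NEG_INF)
          if a_inf and b_inf:
              return NULLITY                  # infinity / infinity carries no information
          if b_inf:
              return 0                        # finite / infinity is zero
          if a_inf:                           # infinity / finite (zero included)
              positive = (a == POS_INF) == (b >= 0)
              return POS_INF if positive else NEG_INF
          if b == 0:
              if a == 0:
                  return NULLITY              # 0 / 0 is nullity, a number off the line
              return POS_INF if a > 0 else NEG_INF
          return a / b

      print(trans_div(1, 0), trans_div(-1, 0), trans_div(0, 0))   # +inf -inf nullity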

  7. Audio-Based versus Text-Based Asynchronous Online Discussion: Two Case Studies

    ERIC Educational Resources Information Center

    Hew, Khe Foon; Cheung, Wing Sum

    2013-01-01

    The main objective of this paper is to examine the use of audio- versus text-based asynchronous online discussions. We report two case studies conducted within the context of semester-long teacher education courses at an Asian Pacific university. Forty-one graduate students participated in Study I. After the online discussions (both audio-based as…

  8. Investigating Expectations and Experiences of Audio and Written Assignment Feedback in First-Year Undergraduate Students

    ERIC Educational Resources Information Center

    Fawcett, Hannah; Oldfield, Jeremy

    2016-01-01

    Previous research suggests that audio feedback may be an important mechanism for facilitating effective and timely assignment feedback. The present study examined expectations and experiences of audio and written feedback provided through "turnitin for iPad®" from students within the same cohort and assignment. The results showed that…

  9. 76 FR 591 - Determination of Rates and Terms for Preexisting Subscription and Satellite Digital Audio Radio...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-05

    ..., respectively. 72 FR 71795 (December 19, 2007), 73 FR 4080 (January 24, 2008). Section 804(b)(3)(B) of the... Audio Radio Services AGENCY: Copyright Royalty Board, Library of Congress. ACTION: Notice announcing... subscription and satellite digital audio radio services for the digital performance of sound recordings and...

  10. 47 CFR 73.4275 - Tone clusters; audio attention-getting devices.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Tone clusters; audio attention-getting devices. 73.4275 Section 73.4275 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST... clusters; audio attention-getting devices. See Public Notice, FCC 76-610, dated July 2, 1976. 60 FCC 2d...

  11. "Listen to This!" Utilizing Audio Recordings to Improve Instructor Feedback on Writing in Mathematics

    ERIC Educational Resources Information Center

    Weld, Christopher

    2014-01-01

    Providing audio files in lieu of written remarks on graded assignments is arguably a more effective means of feedback, allowing students to better process and understand the critique and improve their future work. With emerging technologies and software, this audio feedback alternative to the traditional paradigm of providing written comments…

  12. Age Matters: Student Experiences with Audio Learning Guides in University-Based Continuing Education

    ERIC Educational Resources Information Center

    Mercer, Lorraine; Pianosi, Birgit

    2012-01-01

    The primary objective of this research was to explore the experiences of undergraduate distance education students using sample audio versions (provided on compact disc) of the learning guides for their courses. The results of this study indicated that students responded positively to the opportunity to have word-for-word audio versions of their…

  13. When I Stopped Writing on Their Papers: Accommodating the Needs of Student Writers with Audio Comments

    ERIC Educational Resources Information Center

    Bauer, Sara

    2011-01-01

    The author finds using software to make audio comments on students' writing improves students' understanding of her responses and increases their willingness to take her suggestions for revision more seriously. In the process of recording audio comments, she came to a new understanding of her students' writing needs and her responsibilities as…

  14. Effects of Audio-Visual Information on the Intelligibility of Alaryngeal Speech

    ERIC Educational Resources Information Center

    Evitts, Paul M.; Portugal, Lindsay; Van Dine, Ami; Holler, Aline

    2010-01-01

    Background: There is minimal research on the contribution of visual information on speech intelligibility for individuals with a laryngectomy (IWL). Aims: The purpose of this project was to determine the effects of mode of presentation (audio-only, audio-visual) on alaryngeal speech intelligibility. Method: Twenty-three naive listeners were…

  15. Rethinking the Red Ink: Audio-Feedback in the ESL Writing Classroom.

    ERIC Educational Resources Information Center

    Johanson, Robert

    1999-01-01

    This paper describes audio-feedback as a teaching method for English-as-a-Second-Language (ESL) writing classes. Using this method, writing instructors respond to students' compositions by recording their comments onto an audiocassette, then returning the paper and cassette to the students. The first section describes audio-feedback and explains…

  16. Active Learning in the Online Environment: The Integration of Student-Generated Audio Files

    ERIC Educational Resources Information Center

    Bolliger, Doris U.; Armier, David Des, Jr.

    2013-01-01

    Educators have integrated instructor-produced audio files in a variety of settings and environments for purposes such as content presentation, lecture reviews, student feedback, and so forth. Few instructors, however, require students to produce audio files and share them with peers. The purpose of this study was to obtain empirical data on…

  19. 36 CFR 5.5 - Commercial filming, still photography, and audio recording.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... schedule for still photography conducted under a permit issued under 43 CFR part 5 applies to audio... of 43 CFR part 5, subpart A. Failure to comply with any provision of 43 CFR part 5 is a violation of... photography, and audio recording. 5.5 Section 5.5 Parks, Forests, and Public Property NATIONAL PARK...

  20. "Listen to This!" Utilizing Audio Recordings to Improve Instructor Feedback on Writing in Mathematics

    ERIC Educational Resources Information Center

    Weld, Christopher

    2014-01-01

    Providing audio files in lieu of written remarks on graded assignments is arguably a more effective means of feedback, allowing students to better process and understand the critique and improve their future work. With emerging technologies and software, this audio feedback alternative to the traditional paradigm of providing written comments…

  2. Students' Attitudes to and Usage of Academic Feedback Provided via Audio Files

    ERIC Educational Resources Information Center

    Merry, Stephen; Orsmond, Paul

    2008-01-01

    This study explores students' attitudes to the provision of formative feedback on academic work using audio files together with the ways in which students implement such feedback within their learning. Fifteen students received audio file feedback on written work and were subsequently interviewed regarding their utilisation of that feedback within…

  3. Experiments With Audio-Tutorial Learning Systems at I.U.B.

    ERIC Educational Resources Information Center

    Indiana Univ., Bloomington. Teaching Resources Center.

    An audio-tutorial approach that has been implemented at Indiana University in Bloomington focuses on student learning rather than the mechanism of teaching. The basic component of an audio-tutorial course is a taped presentation that acts as a tutor, guiding the students through a sequence of readings, filmstrips, diagrams, tables, and other…

  4. Seeing to Hear Better: Evidence for Early Audio-Visual Interactions in Speech Identification

    ERIC Educational Resources Information Center

    Schwartz, Jean-Luc; Berthommier, Frederic; Savariaux, Christophe

    2004-01-01

    Lip reading is the ability to partially understand speech by looking at the speaker's lips. It improves the intelligibility of speech in noise when audio-visual perception is compared with audio-only perception. A recent set of experiments showed that seeing the speaker's lips also enhances "sensitivity" to acoustic information, decreasing the…

  5. Changes of the Prefrontal EEG (Electroencephalogram) Activities According to the Repetition of Audio-Visual Learning.

    ERIC Educational Resources Information Center

    Kim, Yong-Jin; Chang, Nam-Kee

    2001-01-01

    Investigates the changes of neuronal response according to a four time repetition of audio-visual learning. Obtains EEG data from the prefrontal (Fp1, Fp2) lobe from 20 subjects at the 8th grade level. Concludes that the habituation of neuronal response shows up in repetitive audio-visual learning and brain hemisphericity can be changed by…

  6. Guidelines for the Production of Audio Materials for Print Handicapped Readers.

    ERIC Educational Resources Information Center

    National Library of Australia, Canberra.

    Procedural guidelines developed by the Audio Standards Committee of the National Library of Australia to help improve the overall quality of production of audio materials for visually handicapped readers are presented. This report covers the following areas: selection of narrators and the narration itself; copyright; recording of books, magazines,…

  7. 47 CFR 73.4275 - Tone clusters; audio attention-getting devices.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 4 2012-10-01 2012-10-01 false Tone clusters; audio attention-getting devices. 73.4275 Section 73.4275 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST... clusters; audio attention-getting devices. See Public Notice, FCC 76-610, dated July 2, 1976. 60 FCC 2d...

  8. 47 CFR 73.4275 - Tone clusters; audio attention-getting devices.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 4 2014-10-01 2014-10-01 false Tone clusters; audio attention-getting devices. 73.4275 Section 73.4275 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST... clusters; audio attention-getting devices. See Public Notice, FCC 76-610, dated July 2, 1976. 60 FCC 2d...

  9. 47 CFR 73.4275 - Tone clusters; audio attention-getting devices.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 4 2013-10-01 2013-10-01 false Tone clusters; audio attention-getting devices. 73.4275 Section 73.4275 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST... clusters; audio attention-getting devices. See Public Notice, FCC 76-610, dated July 2, 1976. 60 FCC 2d...

  10. 47 CFR 73.4275 - Tone clusters; audio attention-getting devices.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 4 2011-10-01 2011-10-01 false Tone clusters; audio attention-getting devices. 73.4275 Section 73.4275 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST... clusters; audio attention-getting devices. See Public Notice, FCC 76-610, dated July 2, 1976. 60 FCC 2d...

  11. LiveDescribe: Can Amateur Describers Create High-Quality Audio Description?

    ERIC Educational Resources Information Center

    Branje, Carmen J.; Fels, Deborah I.

    2012-01-01

    Introduction: The study presented here evaluated the usability of the audio description software LiveDescribe and explored the acceptance rates of audio description created by amateur describers who used LiveDescribe to facilitate the creation of their descriptions. Methods: Twelve amateur describers with little or no previous experience with…

  12. Temporal Interval Discrimination Thresholds Depend on Perceived Synchrony for Audio-Visual Stimulus Pairs

    ERIC Educational Resources Information Center

    van Eijk, Rob L. J.; Kohlrausch, Armin; Juola, James F.; van de Par, Steven

    2009-01-01

    Audio-visual stimulus pairs presented at various relative delays are commonly judged as being "synchronous" over a range of delays from about -50 ms (audio leading) to +150 ms (video leading). The center of this range is an estimate of the point of subjective simultaneity (PSS). The judgment boundaries, where "synchronous" judgments yield to a…

  13. Planning Schools for Use of Audio-Visual Materials. No. 1--Classrooms, 3rd Edition.

    ERIC Educational Resources Information Center

    National Education Association, Washington, DC.

    Intended to inform school board administrators and teachers of the current (1958) thinking on audio-visual instruction for use in planning new buildings, purchasing equipment, and planning instruction. Attention is given the problem of overcoming obstacles to the incorporation of audio-visual materials into the curriculum. Discussion includes--(1)…

  14. The SWRL Audio Laboratory System (ALS): An Integrated Configuration for Psychomusicology Research. Technical Report 51.

    ERIC Educational Resources Information Center

    Williams, David Brian; Hoskin, Richard K.

    This report describes features of the Audio Laboratory System (ALS), a device which supports research activities of the Southwest Regional Laboratory's Music Program. The ALS is used primarily to generate recorded audio tapes for psychomusicology research related to children's perception and learning of music concepts such as pitch, loudness,…

  15. An Interactive Concert Program Based on Infrared Watermark and Audio Synthesis

    NASA Astrophysics Data System (ADS)

    Wang, Hsi-Chun; Lee, Wen-Pin Hope; Liang, Feng-Ju

    The objective of this research is to propose a video/audio system that allows the user to listen to the typical music notes in a concert program under infrared detection. The system synthesizes audio with different pitches and tempi in accordance with the data encoded in a 2-D barcode embedded in the infrared watermark. A digital halftoning technique is used to fabricate the infrared watermark, composed of halftone dots, by both amplitude modulation (AM) and frequency modulation (FM). The results show that this interactive system successfully recognizes the barcode and synthesizes audio under infrared detection of a concert program that remains valid for human reading of its contents. This interactive video/audio system greatly extends the capability of printed paper to audio display and also has many potential value-added applications.
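
    As a hedged sketch of only the synthesis step, the code below renders decoded (pitch, duration) pairs, the kind of data the paper stores in the 2-D barcode, into a waveform. The barcode format, halftoning, and infrared detection are not reproduced, and the note list, tempo, and envelope are placeholder assumptions.

      # Hedged sketch: synthesize audio from decoded (MIDI pitch, beats) pairs.
      # The melody, tempo, and envelope below are hypothetical placeholders.
      import numpy as np, wave

      fs, bpm = 22050, 96
      def midi_to_hz(m):                       # equal temperament
          return 440.0 * 2 ** ((m - 69) / 12)

      def render(notes):                       # notes: list of (midi_number, beats)
          chunks = []
          for midi, beats in notes:
              dur = beats * 60.0 / bpm
              t = np.arange(int(fs * dur)) / fs
              env = np.minimum(1.0, 10 * (dur - t) / dur)   # simple release envelope
              chunks.append(0.3 * env * np.sin(2 * np.pi * midi_to_hz(midi) * t))
          return np.concatenate(chunks)

      melody = [(60, 1), (64, 1), (67, 1), (72, 2)]          # hypothetical decoded data
      pcm = (render(melody) * 32767).astype(np.int16)
      with wave.open("melody.wav", "w") as w:
          w.setnchannels(1); w.setsampwidth(2); w.setframerate(fs)
          w.writeframes(pcm.tobytes())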

  16. Fault Tolerant State Machines

    NASA Technical Reports Server (NTRS)

    Burke, Gary R.; Taft, Stephanie

    2004-01-01

    State machines are commonly used to control sequential logic in FPGAs and ASICs. An errant state machine can cause considerable damage to the device it is controlling. For example, in space applications the FPGA might be controlling pyros, which, when fired at the wrong time, will cause a mission failure. Even a well-designed state machine can be subject to random errors as a result of SEUs from the radiation environment in space. There are various ways to encode the states of a state machine, and the type of encoding makes a large difference in the susceptibility of the state machine to radiation. In this paper we compare four methods of state machine encoding and determine which method gives the best fault tolerance, as well as the resources needed for each method.
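
    As a hedged illustration of why the choice of encoding matters (not a reproduction of the four methods evaluated in the paper), the sketch below compares the minimum Hamming distance between valid state codes for a few common encodings; when that distance is at least 2, a single upset bit cannot turn one valid state directly into another valid state. The specific codewords are assumptions chosen for illustration.

      # Hypothetical sketch: a single-event upset flips one bit; it is detectable
      # whenever the minimum Hamming distance between valid state codes is >= 2.
      from itertools import combinations

      def hamming(a: str, b: str) -> int:
          return sum(x != y for x, y in zip(a, b))

      encodings = {
          "binary":  ["000", "001", "010", "011"],               # dense, distance 1
          "one-hot": ["0001", "0010", "0100", "1000"],           # distance 2
          "coded":   ["000000", "001011", "010101", "011110"],   # example codewords, distance >= 3
      }

      for name, codes in encodings.items():
          dmin = min(hamming(a, b) for a, b in combinations(codes, 2))
          verdict = "single bit flips land on invalid states" if dmin >= 2 \
              else "a single bit flip can reach another valid state"
          print(f"{name:8s} min distance = {dmin}: {verdict}")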

  17. Machine tool locator

    DOEpatents

    Hanlon, John A.; Gill, Timothy J.

    2001-01-01

    Machine tools can be accurately measured and positioned on manufacturing machines within very small tolerances by means of an autocollimator on a 3-axis mount, positioned so as to focus on a reference tooling ball or a machine tool; a digital camera connected to the viewing end of the autocollimator; and a marker-and-measure generator that receives digital images from the camera, displays or measures distances between the projection reticle and the reference reticle on the monitoring screen, and relates those distances to the actual position of the autocollimator relative to the reference tooling ball. The images and measurements are used to set the position of the machine tool, to measure the size and shape of the machine tool tip, and to examine cutting-edge wear.

  18. Machine-augmented composites

    NASA Astrophysics Data System (ADS)

    Hawkins, Gary F.; O'Brien, Michael; Zaldivar, Rafael; von Bremen, Hubertus

    2002-07-01

    We have recently demonstrated that composites with unique properties can be manufactured by embedding many small simple machines in a matrix instead of fibers. We have been referring to these as Machine Augmented Composites (MAC). The simple machines modify the forces inside the material in a manner chosen by the material designer. When these machines are densely packed, the MAC takes on the properties of the machines as a fiber-reinforced composite takes on the properties of the fibers. In this paper we describe the Machine Augmented Composite concept and give the results of both theoretical and experimental studies. Applications for the material in clamping mechanisms, fasteners, gaskets and seals are presented. In addition, manufacturing issues are discussed showing how the material can be produced inexpensively.

  19. Debugging the virtual machine

    SciTech Connect

    Miller, P.; Pizzi, R.

    1994-09-02

    A computer program is really nothing more than a virtual machine built to perform a task. The program's source code expresses abstract constructs using low-level language features. When a virtual machine breaks, it can be very difficult to debug because typical debuggers provide only low-level machine implementation information to the software engineer. We believe that the debugging task can be simplified by introducing aspects of the abstract design into the source code. We introduce OODIE, an object-oriented language extension that allows programmers to specify a virtual debugging environment which includes the design and abstract data types of the virtual machine.

  20. Evaluation of machinability data

    SciTech Connect

    Jin, L.Z.; Sandstroem, R. (Division of Materials Technology)

    1994-05-01

    Systematic materials selection is essential to fulfilling design criteria, and reliable information on material properties is, in turn, a vital factor in approaching such an objective. The machinability of engineering metals, owing to its marked influence on production costs, has to be taken into account in the process of materials selection. In an attempt to develop a method for estimating the machinability of engineering metals, machinability data collected from the laboratory and the literature are assessed. A rating system derived from the metal removal rate is proposed for estimating the relative machinability of carbon and alloy steels, stainless steels, and aluminum, copper, and magnesium alloys.

  1. Chaotic Boltzmann machines

    PubMed Central

    Suzuki, Hideyuki; Imura, Jun-ichi; Horio, Yoshihiko; Aihara, Kazuyuki

    2013-01-01

    The chaotic Boltzmann machine proposed in this paper is a chaotic pseudo-billiard system that works as a Boltzmann machine. Chaotic Boltzmann machines are shown numerically to have computing abilities comparable to conventional (stochastic) Boltzmann machines. Since no randomness is required, efficient hardware implementation is expected. Moreover, the ferromagnetic phase transition of the Ising model is shown to be characterised by the largest Lyapunov exponent of the proposed system. In general, a method to relate probabilistic models to nonlinear dynamics by derandomising Gibbs sampling is presented. PMID:23558425
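
    For readers unfamiliar with the baseline being derandomised, the sketch below shows a conventional stochastic Gibbs update for a small Boltzmann machine. It is only the probabilistic reference point that the paper's chaotic pseudo-billiard dynamics replaces, not the proposed method itself, and the weights, biases, and temperature are random placeholders.

      # Minimal sketch of the stochastic baseline: each unit is resampled from its
      # conditional distribution given the others (one Gibbs sweep per iteration).
      import numpy as np

      rng = np.random.default_rng(0)
      n = 8
      W = rng.normal(0, 0.5, (n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)
      b = rng.normal(0, 0.1, n)
      s = rng.integers(0, 2, n).astype(float)

      def gibbs_sweep(s, W, b, T=1.0):
          for i in range(len(s)):
              z = b[i] + W[i] @ s                      # local field of unit i
              p_on = 1.0 / (1.0 + np.exp(-z / T))      # conditional P(s_i = 1 | rest)
              s[i] = float(rng.random() < p_on)
          return s

      for _ in range(100):
          s = gibbs_sweep(s, W, b)
      print("final state:", s.astype(int))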

  2. Perspex machine II: visualization

    NASA Astrophysics Data System (ADS)

    Anderson, James A. D. W.

    2005-01-01

    We review the perspex machine and improve it by reducing its halting conditions to one condition. We also introduce a data structure, called the "access column," that can accelerate a wide class of perspex programs. We show how the perspex can be visualised as a tetrahedron, artificial neuron, computer program, and as a geometrical transformation. We discuss the temporal properties of the perspex machine, dissolve the famous time travel paradox, and present a hypothetical time machine. Finally, we discuss some mental properties and show how the perspex machine solves the mind-body problem and, specifically, how it provides one physical explanation for the occurrence of paradigm shifts.

  3. Perspex machine II: visualization

    NASA Astrophysics Data System (ADS)

    Anderson, James A. D. W.

    2004-12-01

    We review the perspex machine and improve it by reducing its halting conditions to one condition. We also introduce a data structure, called the "access column," that can accelerate a wide class of perspex programs. We show how the perspex can be visualised as a tetrahedron, artificial neuron, computer program, and as a geometrical transformation. We discuss the temporal properties of the perspex machine, dissolve the famous time travel paradox, and present a hypothetical time machine. Finally, we discuss some mental properties and show how the perspex machine solves the mind-body problem and, specifically, how it provides one physical explanation for the occurrence of paradigm shifts.

  4. The Brussels Mood Inductive Audio Stories (MIAS) database.

    PubMed

    Bertels, Julie; Deliens, Gaétane; Peigneux, Philippe; Destrebecqz, Arnaud

    2014-12-01

    Through this study, we aimed to validate a new tool for inducing moods in experimental contexts. Five audio stories with sad, joyful, frightening, erotic, or neutral content were presented to 60 participants (33 women, 27 men) in a within-subjects design, each for about 10 min. Participants were asked (1) to report their moods before and after listening to each story, (2) to assess the emotional content of the excerpts on various emotional scales, and (3) to rate their level of projection into the stories. The results confirmed our a priori emotional classification. The emotional stories were effective in inducing the desired mood, with no difference found between male and female participants. These stories therefore constitute a valuable corpus for inducing moods in French-speaking participants, and they are made freely available for use in scientific research. PMID:24519495

  5. Audio-visual speech perception: a developmental ERP investigation

    PubMed Central

    Knowland, Victoria CP; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael SC

    2014-01-01

    Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development. PMID:24176002

  6. Audio-visual speech perception: a developmental ERP investigation.

    PubMed

    Knowland, Victoria C P; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael S C

    2014-01-01

    Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development. PMID:24176002

  7. Comparing the Effects of Classroom Audio-Recording and Video-Recording on Preservice Teachers' Reflection of Practice

    ERIC Educational Resources Information Center

    Bergman, Daniel

    2015-01-01

    This study examined the effects of audio and video self-recording on preservice teachers' written reflections. Participants (n = 201) came from a secondary teaching methods course and its school-based (clinical) fieldwork. The audio group (n_A = 106) used audio recorders to monitor their teaching in fieldwork placements; the video group…

  8. 47 CFR 25.144 - Licensing provisions for the 2.3 GHz satellite digital audio radio service.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... digital audio radio service. 25.144 Section 25.144 Telecommunication FEDERAL COMMUNICATIONS COMMISSION....144 Licensing provisions for the 2.3 GHz satellite digital audio radio service. (a) Qualification... digital audio radio service in the 2310-2360 MHz band shall describe in detail the proposed...

  10. Diamond machine tool face lapping machine

    DOEpatents

    Yetter, H.H.

    1985-05-06

    An apparatus for shaping, sharpening, and polishing diamond-tipped single-point machine tools. Isolating the rotating grinding wheel from its driving apparatus with an air bearing, and moving the tool being shaped, polished, or sharpened across the surface of the grinding wheel so that it does not remain at one radius for more than a single rotation, has been found to readily yield machine tools of a quality previously obtainable only by the most tedious and costly processing procedures and unattainable by simple lapping techniques.

  11. Drum cutter mining machine

    SciTech Connect

    Oberste-beulmann, K.; Schupphaus, H.

    1980-02-19

    A drum cutter mining machine includes a machine frame with a winch having a drive wheel to engage a rack or chain which extends along the path of travel by the mining machine to propel the machine along a mine face. The mining machine is made up of discrete units which include a machine body and machine housings joined to opposite sides of the machine body. The winch is either coupled through a drive train with a feed drive motor or coupled to the drive motor for cutter drums. The machine housings each support a pivot shaft coupled by an arm to a drum cutter. One of these housings includes a removable end cover and a recess adapted to receive a support housing for a spur gear system used to transmit torque from a feed drive motor to a reduction gear system which is, in turn, coupled to the drive wheel of the winch. In one embodiment, a removable end cover on the machine housing provides access to the feed drive motor. The feed drive motor is arranged so that the rotational axis of its drive output shaft extends transversely to the stow side of the machine frame. In another embodiment, the reduction gear system is arranged at one side of the pivot shaft for the cutter drum while the drive motor therefor is arranged at the other side of the pivot shaft and coupled thereto through the spur gear system. In a further embodiment, the reduction gear system is disposed between the feed motor and the pivot shaft.

  12. Automatic soldering machine

    NASA Technical Reports Server (NTRS)

    Stein, J. A.

    1974-01-01

    Fully-automatic tube-joint soldering machine can be used to make leakproof joints in aluminum tubes of 3/16 to 2 in. in diameter. Machine consists of temperature-control unit, heater transformer and heater head, vibrator, and associated circuitry controls, and indicators.

  13. Simple Machines Made Simple.

    ERIC Educational Resources Information Center

    St. Andre, Ralph E.

    Simple machines have become a lost point of study in elementary schools as teachers continue to have more material to cover. This manual provides hands-on, cooperative learning activities for grades three through eight concerning the six simple machines: wheel and axle, inclined plane, screw, pulley, wedge, and lever. Most activities can be…

  14. Compound taper milling machine

    NASA Technical Reports Server (NTRS)

    Campbell, N. R.

    1969-01-01

    Simple, inexpensive milling machine tapers panels from a common apex to a uniform height at panel edge regardless of the panel perimeter configuration. The machine consists of an adjustable angled beam upon which the milling tool moves back and forth above a rotatable table upon which the workpiece is held.

  15. The Hooey Machine.

    ERIC Educational Resources Information Center

    Scarnati, James T.; Tice, Craig J.

    1992-01-01

    Describes how students can make and use Hooey Machines to learn how mechanical energy can be transferred from one object to another within a system. The Hooey Machine is made using a pencil, eight thumbtacks, one pushpin, tape, scissors, graph paper, and a plastic lid. (PR)

  16. Simple Machine Junk Cars

    ERIC Educational Resources Information Center

    Herald, Christine

    2010-01-01

    During the month of May, the author's eighth-grade physical science students study the six simple machines through hands-on activities, reading assignments, videos, and notes. At the end of the month, they can easily identify the six types of simple machine: inclined plane, wheel and axle, pulley, screw, wedge, and lever. To conclude this unit,…

  18. Machine Translation Project

    NASA Technical Reports Server (NTRS)

    Bajis, Katie

    1993-01-01

    The characteristics and capabilities of existing machine translation systems were examined and procurement recommendations were developed. Four systems, SYSTRAN, GLOBALINK, PC TRANSLATOR, and STYLUS, were determined to meet the NASA requirements for a machine translation system. Initially, four language pairs were selected for implementation. These are Russian-English, French-English, German-English, and Japanese-English.

  19. Stirling machine operating experience

    SciTech Connect

    Ross, B.; Dudenhoefer, J.E.

    1994-09-01

    Numerous Stirling machines have been built and operated, but the operating experience of these machines is not well known. It is important to examine this operating experience in detail, because it largely substantiates the claim that Stirling machines are capable of reliable and lengthy operating lives. The amount of data that exists is impressive, considering that many of the machines that have been built are developmental machines intended to show proof of concept, and are not expected to operate for lengthy periods of time. Some Stirling machines (typically free-piston machines) achieve long life through non-contact bearings, while other Stirling machines (typically kinematic) have achieved long operating lives through regular seal and bearing replacements. In addition to engine and system testing, life testing of critical components is also considered. The record in this paper is not complete, due to the reluctance of some organizations to release operational data and because several organizations were not contacted. The authors intend to repeat this assessment in three years, hoping for even greater participation.

  20. Machine tool evaluation and machining operation development

    SciTech Connect

    Morris, T.O.; Kegg, R.

    1997-03-15

    The purpose of this CRADA was to support Cincinnati Milacron's needs in fabricating precision components from difficult-to-machine materials, while maintaining and enhancing the precision manufacturing skills of the Oak Ridge Complex. Oak Ridge and Cincinnati Milacron personnel worked in a team relationship wherein each contributed equally to the success of the program. Process characterization, control technologies, machine tool capabilities, and environmental issues were the primary focus areas. In general, Oak Ridge contributed a wider range of expertise in machine tool testing and monitoring, and in environmental testing of machining fluids, to the defined tasks, while Cincinnati Milacron personnel provided equipment, operations-specific knowledge, and shop-floor services to each task. Cincinnati Milacron was very pleased with the results of all of the CRADA tasks. However, some of the environmental tasks were not carried through to the desired completion because the perceived needs expanded as the work progressed, and this expansion of goals exceeded the time length of the CRADA. Discussions are underway on continuing these tasks under either a Work for Others agreement or some alternate funding.

  1. Laboratory and in-flight experiments to evaluate 3-D audio display technology

    NASA Technical Reports Server (NTRS)

    Ericson, Mark; Mckinley, Richard; Kibbe, Marion; Francis, Daniel

    1994-01-01

    Laboratory and in-flight experiments were conducted to evaluate 3-D audio display technology for cockpit applications. A 3-D audio display generator was developed which digitally encodes naturally occurring direction information onto any audio signal and presents the binaural sound over headphones. The acoustic image is stabilized for head movement by use of an electromagnetic head-tracking device. In the laboratory, a 3-D audio display generator was used to spatially separate competing speech messages to improve the intelligibility of each message. Up to a 25 percent improvement in intelligibility was measured for spatially separated speech at high ambient noise levels (115 dB SPL). During the in-flight experiments, pilots reported that spatial separation of speech communications provided a noticeable improvement in intelligibility. The use of 3-D audio for target acquisition was also investigated. In the laboratory, 3-D audio enabled the acquisition of visual targets in about two seconds average response time at 17 degrees accuracy. During the in-flight experiments, pilots correctly identified ground targets 50, 75, and 100 percent of the time at separation angles of 12, 20, and 35 degrees, respectively. In general, pilot performance in the field with the 3-D audio display generator was as expected, based on data from laboratory experiments.
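
    A minimal sketch of the underlying idea of encoding direction onto an audio signal follows, using only interaural time and level differences rather than the measured head-related transfer functions and head tracking of the display described above. The azimuth, maximum delay, and attenuation constants are assumed placeholder values, and the sketch handles only sources to the listener's right.

      # Hedged sketch: crude binaural panning with ITD/ILD cues only (assumed head model).
      import numpy as np, wave

      fs = 44100
      t = np.arange(int(0.5 * fs)) / fs
      mono = 0.3 * np.sin(2 * np.pi * 440 * t)         # test tone

      azimuth_deg = 45.0                               # source to the right (azimuth >= 0 assumed)
      itd = 0.0007 * np.sin(np.radians(azimuth_deg))   # interaural time difference, ~0.7 ms max
      ild = 10 ** (-6.0 * np.sin(np.radians(azimuth_deg)) / 20)  # far-ear attenuation, ~6 dB max

      delay = int(itd * fs)                            # samples by which the far (left) ear lags
      left  = ild * np.concatenate([np.zeros(delay), mono])   # delayed and attenuated far ear
      right = np.concatenate([mono, np.zeros(delay)])         # near ear: full level, no delay

      stereo = np.stack([left, right], axis=1)
      with wave.open("panned.wav", "w") as w:
          w.setnchannels(2); w.setsampwidth(2); w.setframerate(fs)
          w.writeframes((stereo * 32767).astype(np.int16).tobytes())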

  2. TECHNICAL NOTE: Portable audio electronics for impedance-based measurements in microfluidics

    NASA Astrophysics Data System (ADS)

    Wood, Paul; Sinton, David

    2010-08-01

    We demonstrate the use of audio electronics-based signals to perform on-chip electrochemical measurements. Cell phones and portable music players are examples of consumer electronics that are easily operated and are ubiquitous worldwide. Audio output (play) and input (record) signals are voltage based and contain frequency and amplitude information. A cell phone, laptop soundcard and two compact audio players are compared with respect to frequency response; the laptop soundcard provides the most uniform frequency response, while the cell phone performance is found to be insufficient. The audio signals in the common portable music players and laptop soundcard operate in the range of 20 Hz to 20 kHz and are found to be applicable, as voltage input and output signals, to impedance-based electrochemical measurements in microfluidic systems. Validated impedance-based measurements of concentration (0.1-50 mM), flow rate (2-120 µL/min) and particle detection (32 µm diameter) are demonstrated. The prevailing, lossless, wave audio file format is found to be suitable for data transmission to and from external sources, such as a centralized lab, and the cost of all hardware (in addition to audio devices) is ~10 USD. The utility demonstrated here, in combination with the ubiquitous nature of portable audio electronics, presents new opportunities for impedance-based measurements in portable microfluidic systems.
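
    The sketch below illustrates the kind of measurement the audio channel enables: a sine excitation in the 20 Hz to 20 kHz band and software lock-in detection of the response amplitude and phase. The response is simulated here; routing the signals through an actual soundcard's play and record channels, as the authors do, is not shown, and the amplitude, phase, and noise values are placeholders.

      # Hedged sketch: audio-band excitation plus a software lock-in, with the
      # device under test replaced by a simulated attenuated, phase-shifted response.
      import numpy as np

      fs, f0, dur = 44100, 1000.0, 0.2                 # audio-band excitation
      t = np.arange(int(fs * dur)) / fs
      stimulus = np.sin(2 * np.pi * f0 * t)

      # Simulated response (placeholder for the microfluidic cell).
      response = 0.42 * np.sin(2 * np.pi * f0 * t - 0.3) + 0.01 * np.random.randn(t.size)

      # Lock-in demodulation at f0 recovers the response amplitude and phase.
      i = 2 * np.mean(response * np.cos(2 * np.pi * f0 * t))
      q = 2 * np.mean(response * np.sin(2 * np.pi * f0 * t))
      amplitude, phase = np.hypot(i, q), np.arctan2(i, q)
      print(f"response amplitude ~ {amplitude:.3f}, phase ~ {phase:.3f} rad")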

  3. Audio representations of multi-channel EEG: a new tool for diagnosis of brain disorders

    PubMed Central

    Vialatte, François B; Dauwels, Justin; Musha, Toshimitsu; Cichocki, Andrzej

    2012-01-01

    Objective: The objective of this paper is to develop audio representations of electroencephalographic (EEG) multichannel signals, useful for medical practitioners and neuroscientists. The fundamental question explored in this paper is whether clinically valuable information contained in the EEG, not available from the conventional graphical EEG representation, might become apparent through audio representations. Methods and Materials: Music scores are generated from sparse time-frequency maps of EEG signals. Specifically, EEG signals of patients with mild cognitive impairment (MCI) and (healthy) control subjects are considered. Statistical differences in the audio representations of MCI patients and control subjects are assessed through mathematical complexity indexes as well as a perception test; in the latter, participants try to distinguish between audio sequences from MCI patients and control subjects. Results: Several characteristics of the audio sequences, including sample entropy, number of notes, and synchrony, are significantly different in MCI patients and control subjects (Mann-Whitney p < 0.01). Moreover, the participants of the perception test were able to accurately classify the audio sequences (89% correctly classified). Conclusions: The proposed audio representation of multi-channel EEG signals helps to understand the complex structure of EEG. Promising results were obtained on a clinical EEG data set. PMID:23383399
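
    As a hedged sketch of one of the complexity indexes mentioned above, the code below computes sample entropy for a toy one-dimensional signal. The paper applies such indexes to audio sequences generated from EEG time-frequency maps, which is not reproduced here, and the embedding parameters are conventional defaults rather than the authors' settings.

      # Hedged sketch: sample entropy of a toy signal (higher = more irregular).
      import numpy as np

      def sample_entropy(x, m=2, r=None):
          x = np.asarray(x, dtype=float)
          if r is None:
              r = 0.2 * np.std(x)                      # conventional tolerance
          def count_matches(m):
              emb = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
              d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)
              n = len(emb)
              return (np.sum(d <= r) - n) / 2          # unordered pairs, no self-matches
          B, A = count_matches(m), count_matches(m + 1)
          return -np.log(A / B) if A > 0 and B > 0 else np.inf

      rng = np.random.default_rng(1)
      regular = np.sin(np.linspace(0, 40 * np.pi, 1000))
      noisy = rng.standard_normal(1000)
      print("SampEn regular:", sample_entropy(regular))
      print("SampEn noise:  ", sample_entropy(noisy))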

  4. Interactive MPEG-4 low-bit-rate speech/audio transmission over the Internet

    NASA Astrophysics Data System (ADS)

    Liu, Fang; Kim, JongWon; Kuo, C.-C. Jay

    1999-11-01

    The recently developed MPEG-4 technology enables the coding and transmission of natural and synthetic audio-visual data in the form of objects. In an effort to extend the object-based functionality of MPEG-4 to real-time Internet applications, architectural prototypes of the multiplex layer and transport layer tailored for transmission of MPEG-4 data over IP are under debate within the Internet Engineering Task Force (IETF) and the MPEG-4 Systems Ad Hoc group. In this paper, we present an architecture for an interactive MPEG-4 speech/audio transmission system over the Internet. It utilizes a framework of Real Time Streaming Protocol (RTSP) over Real-time Transport Protocol (RTP) to provide controlled, on-demand delivery of real-time speech/audio data. Based on a client-server model, two low-bit-rate bit streams (real-time speech/audio and pre-encoded speech/audio) are multiplexed and transmitted via a single RTP channel to the receiver. The MPEG-4 Scene Description (SD) and Object Descriptor (OD) bit streams are securely sent through the RTSP control channel. Upon reception, an initial MPEG-4 audio-visual scene is constructed after de-multiplexing, decoding of the bit streams, and scene composition. A receiver may manipulate the initial audio-visual scene presentation locally, or interactively arrange scene changes by sending requests to the server. A server may also choose to update the client with new streams and a list of contents for user selection.
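
    To make the transport layer concrete, here is a minimal, hedged sketch of packing an encoded frame into an RTP packet with the fixed 12-byte header from RFC 3550 and sending it over UDP. The MPEG-4 multiplex layer, the RTSP control channel, and the SD/OD streams discussed above are not reproduced, and the payload type, destination address, and port are arbitrary placeholders.

      # Hedged sketch: build an RTP fixed header (RFC 3550) around an audio payload.
      import struct, socket

      def rtp_packet(payload: bytes, seq: int, timestamp: int,
                     ssrc: int = 0x1234ABCD, payload_type: int = 97) -> bytes:
          version, padding, extension, csrc_count, marker = 2, 0, 0, 0, 0
          byte0 = (version << 6) | (padding << 5) | (extension << 4) | csrc_count
          byte1 = (marker << 7) | payload_type
          header = struct.pack("!BBHII", byte0, byte1, seq & 0xFFFF,
                               timestamp & 0xFFFFFFFF, ssrc)
          return header + payload

      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      frame = b"\x00" * 160                       # stand-in for one coded audio frame
      for seq in range(3):
          pkt = rtp_packet(frame, seq=seq, timestamp=seq * 160)
          sock.sendto(pkt, ("127.0.0.1", 5004))   # hypothetical receiver address/port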

  5. Micro-machining.

    PubMed

    Brinksmeier, Ekkard; Preuss, Werner

    2012-08-28

    Manipulating bulk material at the atomic level is considered to be the domain of physics, chemistry and nanotechnology. However, precision engineering, especially micro-machining, has become a powerful tool for controlling the surface properties and sub-surface integrity of the optical, electronic and mechanical functional parts in a regime where continuum mechanics is left behind and the quantum nature of matter comes into play. The surprising subtlety of micro-machining results from the extraordinary precision of tools, machines and controls expanding into the nanometre range, a hundred times more precise than the wavelength of light. In this paper, we will outline the development of precision engineering, highlight modern achievements of ultra-precision machining and discuss the necessity of a deeper physical understanding of micro-machining. PMID:22802498

  6. Dictionary machine (for VLSI)

    SciTech Connect

    Ottmann, T.A.; Rosenberg, A.L.; Stockmeyer, L.J.

    1982-09-01

    The authors present the design of a dictionary machine that is suitable for VLSI implementation, and discuss how to realize this implementation efficiently. The machine supports the operations of search, insert, delete, and extract-min on an arbitrary ordered set. Each of these operations takes time O(log n), where n is the number of entries present when the operation is performed. Moreover, arbitrary sequences of these instructions can be pipelined through the machine at a constant rate (i.e., independent of n and the capacity of the machine). The O(log n) time is an improvement over previous VLSI designs of dictionary machines, which require time O(log N) per operation, where N is the maximum number of keys that can be stored. 10 references.

  7. 14. Interior, Machine Shop, Roundhouse Machine Shop Extension, Southern Pacific ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    14. Interior, Machine Shop, Roundhouse Machine Shop Extension, Southern Pacific Railroad Carlin Shops, view to north (90mm lens). - Southern Pacific Railroad, Carlin Shops, Roundhouse Machine Shop Extension, Foot of Sixth Street, Carlin, Elko County, NV

  8. BRITISH MOLDING MACHINE, PBQ AUTOMATIC COPE AND DRAG MOLDING MACHINE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    BRITISH MOLDING MACHINE, PBQ AUTOMATIC COPE AND DRAG MOLDING MACHINE MAKES BOTH MOLD HALVES INDIVIDUALLY WHICH ARE LATER ROTATED, ASSEMBLED, AND LOWERED TO POURING CONVEYORS BY ASSISTING MACHINES. - Southern Ductile Casting Company, Casting, 2217 Carolina Avenue, Bessemer, Jefferson County, AL

  9. Method for Reading Sensors and Controlling Actuators Using Audio Interfaces of Mobile Devices

    PubMed Central

    Aroca, Rafael V.; Burlamaqui, Aquiles F.; Gonçalves, Luiz M. G.

    2012-01-01

    This article presents a novel closed loop control architecture based on audio channels of several types of computing devices, such as mobile phones and tablet computers, but not restricted to them. The communication is based on an audio interface that relies on the exchange of audio tones, allowing sensors to be read and actuators to be controlled. As an application example, the presented technique is used to build a low cost mobile robot, but the system can also be used in a variety of mechatronics applications and sensor networks, where smartphones are the basic building blocks. PMID:22438726
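
    A minimal sketch of the tone-exchange idea follows: a sensor reading is mapped to a tone frequency for the audio output and recovered on the receiving side by locating the spectral peak. The frequency range, tone duration, and noise level are assumptions, and actual playback and recording through a device's audio jack, as in the paper, is replaced by a simulated channel.

      # Hedged sketch: encode a sensor value as a tone frequency, decode via FFT peak.
      import numpy as np

      fs = 8000
      def encode(value, vmin=0.0, vmax=100.0, fmin=1000.0, fmax=3000.0, dur=0.1):
          f = fmin + (value - vmin) / (vmax - vmin) * (fmax - fmin)
          t = np.arange(int(fs * dur)) / fs
          return np.sin(2 * np.pi * f * t)

      def decode(tone, vmin=0.0, vmax=100.0, fmin=1000.0, fmax=3000.0):
          spectrum = np.abs(np.fft.rfft(tone))
          f = np.fft.rfftfreq(len(tone), 1 / fs)[np.argmax(spectrum)]
          return vmin + (f - fmin) / (fmax - fmin) * (vmax - vmin)

      reading = 37.5                                 # e.g. a hypothetical sensor value
      tone = encode(reading) + 0.05 * np.random.randn(int(fs * 0.1))   # noisy channel
      print("decoded sensor value ~", round(decode(tone), 1))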

  10. Design and implementation of a two-way real-time communication system for audio over CATV networks

    NASA Astrophysics Data System (ADS)

    Cho, Choong Sang; Oh, Yoo Rhee; Lee, Young Han; Kim, Hong Kook

    2007-09-01

    In this paper, we design and implement a two-way real-time communication system for audio over cable television (CATV) networks to provide audio-based interaction between the CATV broadcasting station and CATV subscribers. The two-way real-time communication system consists of a real-time audio encoding/decoding module, a payload formatter based on the transmission control protocol/Internet protocol (TCP/IP), and a cable network. At the broadcasting station, audio signals from a microphone are encoded by an audio codec implemented on a digital signal processor (DSP); MPEG-2 Layer II is used as the audio codec and a TMS320C6416 as the DSP. Next, a payload formatter constructs a TCP/IP packet from the audio bitstream for transmission to a cable modem. Another payload formatter at the subscriber unpacks the TCP/IP packet decoded from the cable modem into an audio bitstream. This bitstream is decoded by the MPEG-2 Layer II audio decoder. Finally, the decoded audio signals are played out through the speaker. We confirmed that the system worked in real time, with a measured delay of around 150 ms including the algorithmic and processing time delays.
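
    The sketch below illustrates, under assumptions, one simple way a payload formatter can frame an audio bitstream for a TCP byte stream: each encoded frame is length-prefixed so the receiver can split the stream back into discrete frames. It is a generic framing example, not the authors' actual packet format, and the codec and cable-modem path are outside its scope.

      # Hedged sketch: length-prefixed framing of audio frames over a TCP byte stream.
      import struct

      def pack_frame(bitstream: bytes) -> bytes:
          return struct.pack("!I", len(bitstream)) + bitstream

      def unpack_frames(buffer: bytes):
          frames, offset = [], 0
          while offset + 4 <= len(buffer):
              (length,) = struct.unpack_from("!I", buffer, offset)
              if offset + 4 + length > len(buffer):
                  break                                 # wait for more TCP data
              frames.append(buffer[offset + 4: offset + 4 + length])
              offset += 4 + length
          return frames, buffer[offset:]

      # Two frames sent back-to-back arrive as one TCP byte stream and are recovered.
      stream = pack_frame(b"frame-1") + pack_frame(b"frame-2")
      frames, leftover = unpack_frames(stream)
      print([f.decode() for f in frames], leftover)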

  11. The Basic Anaesthesia Machine

    PubMed Central

    Gurudatt, CL

    2013-01-01

    After WTG Morton's first public demonstration of ether as an anaesthetic agent in 1846, anaesthesiologists did not, for many years, require a machine to deliver anaesthesia to patients. After the introduction of oxygen and nitrous oxide in the form of compressed gases in cylinders, there was a need to mount these cylinders on a metal frame, which stimulated many people to attempt to construct an anaesthesia machine. HEG Boyle in 1917 modified Gwathmey's machine, and this became popular as the Boyle anaesthesia machine. Though many changes have been made to the original Boyle machine, the basic structure remains the same. The subsequent changes have been made mainly to improve the safety of patients. Knowing the details of the basic machine will help the trainee understand the additional improvements. It is also important for every practicing anaesthesiologist to have a thorough knowledge of the basic anaesthesia machine for the safe conduct of anaesthesia. PMID:24249876

  12. Machine Learning and Radiology

    PubMed Central

    Wang, Shijun; Summers, Ronald M.

    2012-01-01

    In this paper, we give a short introduction to machine learning and survey its applications in radiology. We focused on six categories of applications in radiology: medical image segmentation, registration, computer aided detection and diagnosis, brain function or activity analysis and neurological disease diagnosis from fMR images, content-based image retrieval systems for CT or MRI images, and text analysis of radiology reports using natural language processing (NLP) and natural language understanding (NLU). This survey shows that machine learning plays a key role in many radiology applications. Machine learning identifies complex patterns automatically and helps radiologists make intelligent decisions on radiology data such as conventional radiographs, CT, MRI, and PET images and radiology reports. In many applications, the performance of machine learning-based automatic detection and diagnosis systems has been shown to be comparable to that of a well-trained and experienced radiologist. Technology development in machine learning and radiology will benefit from each other in the long run. Key contributions and common characteristics of machine learning techniques in radiology are discussed. We also discuss the problem of translating machine learning applications to the radiology clinical setting, including advantages and potential barriers. PMID:22465077
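
    As a hedged, generic illustration of the computer-aided detection pattern surveyed here, the snippet below trains a linear classifier on synthetic feature vectors standing in for image-derived lesion descriptors; it uses scikit-learn and makes no claim about the specific methods reviewed in the paper. The feature counts and split are arbitrary placeholders.

      # Hedged sketch: a generic CAD-style classifier on synthetic image features.
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      # Pretend each row is a lesion candidate described by texture/shape features.
      X, y = make_classification(n_samples=500, n_features=12, n_informative=6,
                                 random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
      auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
      print("AUC on held-out candidates:", round(auc, 3))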

  13. Study of Decision Factors in Planning DBPH Audio Services.

    ERIC Educational Resources Information Center

    Kuipers, J. W.; Thorpe, R. W.

    A study was made to aid in planning for future operations of the "listening services" for the Blind and Physically Handicapped (DBPH), Library of Congress. Available data was gathered and organized. DBPH operations were analyzed as were economic models and functional factors for decision-making regarding recording media and playback machines. The…

  14. DNA-based machines.

    PubMed

    Wang, Fuan; Willner, Bilha; Willner, Itamar

    2014-01-01

    The base sequence in nucleic acids encodes substantial structural and functional information into the biopolymer. This encoded information provides the basis for the tailoring and assembly of DNA machines. A DNA machine is defined as a molecular device that exhibits the following fundamental features. (1) It performs a fuel-driven mechanical process that mimics macroscopic machines. (2) The mechanical process requires an energy input, "fuel." (3) The mechanical operation is accompanied by an energy consumption process that leads to "waste products." (4) The cyclic operation of the DNA devices involves the use of "fuel" and "anti-fuel" ingredients. A variety of DNA-based machines are described, including the construction of "tweezers," "walkers," "robots," "cranes," "transporters," "springs," "gears," and interlocked cyclic DNA structures acting as reconfigurable catenanes, rotaxanes, and rotors. Different "fuels", such as nucleic acid strands, pH (H+/OH-), metal ions, and light, are used to trigger the mechanical functions of the DNA devices. The operation of the devices in solution and on surfaces is described, and a variety of optical, electrical, and photoelectrochemical methods to follow the operations of the DNA machines are presented. We further address the possible applications of DNA machines and the future perspectives of molecular DNA devices. These include the application of DNA machines as functional structures for the construction of logic gates and computing, for the programmed organization of metallic nanoparticle structures and the control of plasmonic properties, and for controlling chemical transformations by DNA machines. We further discuss the future applications of DNA machines for intracellular sensing, controlling intracellular metabolic pathways, and the use of the functional nanostructures for drug delivery and medical applications. PMID:24647836

  15. Machinability of Titanium Alloys

    NASA Astrophysics Data System (ADS)

    Rahman, Mustafizur; Wong, Yoke San; Zareena, A. Rahmath

    Titanium and its alloys find wide application in many industries because of their excellent and unique combination of high strength-to-weight ratio and high resistance to corrosion. The machinability of titanium and its alloys is impaired by their high chemical reactivity, low modulus of elasticity and low thermal conductivity. The literature on machining titanium alloys with conventional tools and advanced cutting-tool materials is reviewed. The results obtained from a study on high-speed machining of Ti-6Al-4V alloys with cubic boron nitride (CBN), binderless cubic boron nitride (BCBN) and polycrystalline diamond (PCD) tools are also summarized.

  16. The pendulum wave machine

    NASA Astrophysics Data System (ADS)

    Zetie, K. P.

    2015-05-01

    There are many examples on the internet of videos of ‘pendulum wave machines’ and how to make them (for example, www.instructables.com/id/Wave-Pendulum/). The machine is simply a set of pendula of different lengths which, when viewed end on, produce wave-like patterns from the positions of the bobs. These patterns change with time, with new patterns emerging as the bobs change phase. In this article, the physics of the machine is explored and explained, along with tips on how to build such a device.
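
    A common design rule for such machines (an assumption here, not taken from the article) is to choose the length of pendulum n so that it completes N + n full swings in a shared period T, after which the starting pattern recurs. The sketch below computes the lengths from the small-amplitude period formula for placeholder values of N, T, and the number of bobs.

      # Hedged sketch: pendulum lengths for a wave machine, L_n = g * (T / (2*pi*(N+n)))^2.
      import numpy as np

      g, T, N, bobs = 9.81, 60.0, 50, 12        # assumed: 60 s common period, 12 bobs
      n = np.arange(bobs)
      lengths = g * (T / (2 * np.pi * (N + n))) ** 2
      for i, L in enumerate(lengths):
          print(f"bob {i:2d}: length = {L * 100:6.2f} cm, {N + i} swings per {T:.0f} s")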

  17. Machine tools get smarter

    SciTech Connect

    Valenti, M.

    1995-11-01

    This article describes how, using software, sensors, and controllers, a new generation of intelligent machine tools is optimizing grinding, milling, and molding processes. A paradox of manufacturing parts is that the faster the parts are made, the less accurate they are, and vice versa. However, a combination of software, sensors, controllers, and mechanical innovations is being used to create a new generation of intelligent machine tools capable of optimizing their own grinding, milling, and molding processes. These brainy tools are allowing manufacturers to machine more-complex, higher-quality parts in shorter cycle times. The technology also lowers scrap rates and reduces or eliminates the need for polishing inadequately finished parts.

  18. Machine Tool Software

    NASA Technical Reports Server (NTRS)

    1988-01-01

    A NASA-developed software package has played a part in the technical education of students who major in Mechanical Engineering Technology at William Rainey Harper College. Professor Hack has been using Automatically Programmed Tool (APT) software since 1969 in his CAD/CAM (computer-aided design and manufacturing) curriculum. Professor Hack teaches the use of APT programming languages for the control of metal-cutting machines. Machine tool instructions are geometry definitions written in the APT language that constitute a "part program." The part program is processed by the machine tool. CAD/CAM students go from writing a program to cutting steel in the course of a semester.

  19. OPTICAM machine design

    NASA Astrophysics Data System (ADS)

    Liedes, Jyrki T.

    1992-01-01

    Rank Pneumo has worked with the Center of Optics Manufacturing to design a multiple-axis flexible machining center for spherical lens fabrication. The OPTICAM/SM prototype machine has been developed in cooperation with the Center's Manufacturing Advisory Board. The SM will generate, fine grind, pre-polish, and center a spherical lens surface in one setup sequence. Unique features of the design incorporate machine resident metrology to provide RQM (Real-time Quality Management) and closed-loop feedback control that corrects for lens thickness, diameter, and centering error. SPC (Statistical Process Control) software can compensate for process drift and QA data collection is provided without additional labor.

  20. Machine phase fullerene nanotechnology

    NASA Astrophysics Data System (ADS)

    Globus, Al; Bauschlicher, Charles W., Jr.; Han, Jie; Jaffe, Richard L.; Levit, Creon; Srivastava, Deepak

    1998-09-01

    Recent advances in fullerene science and technology suggest that it may be possible, in the distant future, to design and build atomically precise programmable machines composed largely of functionalized fullerenes. Large numbers of such machines with appropriate interconnections could conceivably create a material able to react to the environment and repair itself. This paper reviews some of the experimental and theoretical work relating to these materials, sometimes called machine phase, including the fullerene gears and high-density memory recently designed and simulated in our laboratory.

  1. Effects of audio-visual stimulation on the incidence of restraint ulcers on the Wistar rat

    NASA Technical Reports Server (NTRS)

    Martin, M. S.; Martin, F.; Lambert, R.

    1979-01-01

    The role of sensory stimulation in restrained rats was investigated. Both mixed audio-visual and pure sound stimuli, ineffective in themselves, were found to cause a significant increase in the incidence of restraint ulcers in the Wistar rat.

  2. Worldwide survey of direct-to-listener digital audio delivery systems development since WARC-1992

    NASA Technical Reports Server (NTRS)

    Messer, Dion D.

    1993-01-01

    Each country was allocated frequency band(s) for direct-to-listener digital audio broadcasting at WARC-92. These allocations were near 1500, 2300, and 2600 MHz. In addition, some countries are encouraging the development of digital audio broadcasting services for terrestrial delivery only, in the VHF bands (at frequencies from roughly 50 to 300 MHz) and in the medium-wave (AM) broadcasting band (from roughly 0.5 to 1.7 MHz). The resulting increase in development activity has been explosive. Current development, as of February 1993 and as known to the author, is summarized. The information given includes the following characteristics, as appropriate, for each planned system: coverage areas, audio quality, number of audio channels, delivery via satellite, terrestrial, or both, carrier frequency bands, modulation methods, source coding, and channel coding. Most proponents claim that they will be operational in 3 or 4 years.

  3. Improvements of ModalMax High-Fidelity Piezoelectric Audio Device

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.

    2005-01-01

    ModalMax audio speakers have been enhanced by innovative means of tailoring the vibration response of thin piezoelectric plates to produce a high-fidelity audio response. The ModalMax audio speakers are 1 mm in thickness. The device completely supplants the need for a separate driver and speaker cone. ModalMax speakers can serve the same applications as cone speakers, but unlike cone speakers, ModalMax speakers can function in harsh environments such as high humidity or extreme wetness. New design features allow the speakers to be completely submersed in salt water, making them well suited for maritime applications. The sound produced by ModalMax audio speakers has spatial resolution that is readily discernible to headset users.

  4. Learning one-to-many mapping functions for audio-visual integrated perception

    NASA Astrophysics Data System (ADS)

    Lim, Jung-Hui; Oh, Do-Kwan; Lee, Soo-Young

    2010-04-01

    In noisy environments, human speech perception utilizes visual lip-reading as well as audio phonetic classification. This audio-visual integration may be done by combining the two sensory features at an early stage. Top-down attention may also integrate the two modalities. For the sensory feature fusion we introduce mapping functions between the audio and visual manifolds. In particular, we present an algorithm that provides a one-to-many mapping function for the video-to-audio mapping. The top-down attention is also presented to integrate both the sensory features and the classification results of the two modalities, which is able to explain the McGurk effect. Each classifier is separately implemented by a hidden Markov model (HMM), but the two classifiers are combined at the top level and interact through the top-down attention.
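
    A minimal sketch of the one-to-many idea, not the authors' algorithm: if each visual (lip) feature can correspond to several audio realizations, one workable approximation is to cluster the audio space and fit one linear video-to-audio map per cluster, returning several audio candidates per visual input. The data, feature dimensions, and the use of NumPy/scikit-learn below are illustrative assumptions.

    # Sketch: one-to-many video-to-audio feature mapping via cluster-conditional
    # linear regression. Hypothetical stand-in for the paper's mapping functions;
    # feature dimensions and data are synthetic.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    # Synthetic paired features: 12-D visual (lip) and 20-D audio (phonetic) vectors.
    n, dv, da = 2000, 12, 20
    V = rng.normal(size=(n, dv))
    # Two audio "modes" per visual configuration -> the mapping is one-to-many.
    mode = rng.integers(0, 2, size=n)
    W0, W1 = rng.normal(size=(dv, da)), rng.normal(size=(dv, da))
    A = np.where(mode[:, None] == 0, V @ W0, V @ W1) + 0.1 * rng.normal(size=(n, da))

    # Cluster the audio space, then fit one linear map per cluster.
    k = 2
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(A)
    maps = [np.linalg.lstsq(V[labels == c], A[labels == c], rcond=None)[0] for c in range(k)]

    def video_to_audio(v):
        """Return k candidate audio feature vectors for one visual feature vector."""
        return [v @ W for W in maps]

    candidates = video_to_audio(V[0])
    print([c.shape for c in candidates])  # k candidate 20-D audio vectors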

  5. Magazine Production: A Selected, Annotated Bibliography of Audio-Visual Materials.

    ERIC Educational Resources Information Center

    Applegate, Edd

    This bibliography, which contains 13 annotations, is designed to help instructors choose appropriate audio-visual materials for a course in magazine production. Names and addresses of institutions from which the materials may be secured have been included. (MS)

  6. Frequency allocations for a new satellite service - Digital audio broadcasting

    NASA Technical Reports Server (NTRS)

    Reinhart, Edward E.

    1992-01-01

    The allocation in the range 500-3000 MHz for digital audio broadcasting (DAB) is described in terms of key issues such as the transmission-system architectures. Attention is given to the optimal amount of spectrum for allocation and the technological considerations relevant to downlink bands for satellite and terrestrial transmissions. Proposals for DAB allocations are compared, and reference is made to factors impinging on the provision of ground/satellite feeder links. The allocation proposals describe the implementation of 50-60-MHz bandwidths for broadcasting in the ranges near 800 MHz, below 1525 MHz, near 2350 MHz, and near 2600 MHz. Three specific proposals are examined in terms of characteristics such as service areas, coverage per beam, channels per satellite beam, and FCC license status. Several existing problems are identified, including bands already crowded with existing services, the need for new bands in the 1000-3000-MHz range, and implementations of existing allocations that vary in nature and intensity from country to country.

  7. Information optimization in coupled audio-visual cortical maps.

    PubMed

    Kardar, Mehran; Zee, A

    2002-12-10

    Barn owls hunt in the dark by using cues from both sight and sound to locate their prey. This task is facilitated by topographic maps of the external space formed by neurons (e.g., in the optic tectum) that respond to visual or aural signals from a specific direction. Plasticity of these maps has been studied in owls forced to wear prismatic spectacles that shift their visual field. Adaptive behavior in young owls is accompanied by a compensating shift in the response of (mapped) neurons to auditory signals. We model the receptive fields of such neurons by linear filters that sample correlated audio-visual signals and search for filters that maximize the gathered information while subject to the costs of rewiring neurons. Assuming a higher fidelity of visual information, we find that the corresponding receptive fields are robust and unchanged by artificial shifts. The shape of the aural receptive field, however, is controlled by correlations between sight and sound. In response to prismatic glasses, the aural receptive fields shift in the compensating direction, although their shape is modified due to the costs of rewiring. PMID:12446848

  8. Breathing rate estimation during sleep using audio signal analysis.

    PubMed

    Dafna, E; Rosenwein, T; Tarasiuk, A; Zigel, Y

    2015-08-01

    Sleep is associated with important changes in respiratory rate and ventilation. Currently, breathing rate (BR) is measured during sleep using an array of contact and wearable sensors, including airflow sensors and respiratory belts; there is a need for a simplified and more comfortable approach to monitoring respiration. Here, we present a new method for BR evaluation during sleep using a non-contact microphone. The basic idea behind this approach is that during sleep the upper airway becomes narrower due to muscle relaxation, which leads to louder breathing sounds that can be captured by an ambient microphone. In this study we developed a signal processing algorithm that emphasizes breathing sounds, extracts breathing-related features, and estimates BR during sleep. A comparison between audio-based BR estimation and BR calculated using the traditional (gold-standard) respiratory belts during an in-laboratory polysomnography (PSG) study was performed on 204 subjects. Pearson's correlation between subjects' averaged BR for the two approaches was R=0.97. Epoch-by-epoch (30 s) BR comparison revealed a mean relative error of 2.44% and a Pearson's correlation of 0.68. This study shows reliable and promising results for non-contact BR estimation. PMID:26737654
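
    The abstract does not give the algorithm's details, so the following is only a hedged sketch of the general approach it describes: band-pass the audio to emphasize breathing sounds, form a frame-level energy envelope, and read the breathing period from the envelope's autocorrelation. The synthetic signal, band limits, and frame rate are assumptions.

    # Sketch of audio-based breathing-rate estimation (not the paper's algorithm):
    # band-pass the audio, take a smoothed energy envelope, and read the breathing
    # period from the envelope's autocorrelation. The signal here is synthetic.
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 8000                       # audio sample rate (Hz)
    t = np.arange(0, 60, 1 / fs)    # one minute of "sleep audio"
    true_br = 14                    # breaths per minute
    breath_env = 0.5 * (1 + np.sin(2 * np.pi * (true_br / 60) * t))
    audio = breath_env * np.random.randn(t.size)   # breath-modulated noise

    # 1) Emphasize breathing sounds (simple band-pass, 200-2000 Hz).
    b, a = butter(4, [200 / (fs / 2), 2000 / (fs / 2)], btype="band")
    x = filtfilt(b, a, audio)

    # 2) Energy envelope, decimated to a 10 Hz frame rate.
    frame = fs // 10
    env = np.array([np.sum(x[i:i + frame] ** 2) for i in range(0, x.size - frame, frame)])
    env = env - env.mean()

    # 3) Breathing period from the envelope autocorrelation (search 6-30 breaths/min).
    ac = np.correlate(env, env, mode="full")[env.size - 1:]
    fr = 10.0                                   # envelope frame rate (Hz)
    lags = np.arange(env.size) / fr             # lag in seconds
    valid = (lags >= 2.0) & (lags <= 10.0)      # 6-30 breaths/min
    period = lags[valid][np.argmax(ac[valid])]
    print(f"estimated breathing rate: {60 / period:.1f} breaths/min")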

  9. Interactive video audio system: communication server for INDECT portal

    NASA Astrophysics Data System (ADS)

    Mikulec, Martin; Voznak, Miroslav; Safarik, Jakub; Partila, Pavol; Rozhon, Jan; Mehic, Miralem

    2014-05-01

    The paper presents the IVAS system developed within the EU FP7 INDECT project. The INDECT project aims at developing tools for enhancing the security of citizens and protecting the confidentiality of recorded and stored information; it is part of the Seventh Framework Programme of the European Union. We participate in the INDECT portal and the Interactive Video Audio System (IVAS). The IVAS system provides a communication gateway between police officers working in a dispatching centre and police officers in the field. The officers in the dispatching centre can obtain information about all online police officers in the field, command them via text messages, voice or video calls, and manage multimedia files from CCTV cameras or other sources that may be of interest to officers in the field. The police officers in the field are equipped with smartphones or tablets. Besides common communication, they can view pictures or videos sent by the commander in the dispatching centre and respond to commands via text or multimedia messages taken by their devices. Our IVAS system is unique because it is being developed according to specific requirements from the Police of the Czech Republic. The IVAS communication system is designed to use modern Voice over Internet Protocol (VoIP) services. The whole solution is based on open-source software, including the Linux and Android operating systems. The technical details of our solution are presented in the paper.

  10. The effect of reverberation on personal audio devices.

    PubMed

    Simón-Gálvez, Marcos F; Elliott, Stephen J; Cheer, Jordan

    2014-05-01

    Personal audio refers to the creation of a listening zone within which a person, or a group of people, hears a given sound program, without being annoyed by other sound programs being reproduced in the same space. Generally, these different sound zones are created by arrays of loudspeakers. Although these devices have the capacity to achieve different sound zones in an anechoic environment, they are ultimately used in normal rooms, which are reverberant environments. At high frequencies, reflections from the room surfaces create a diffuse pressure component which is uniform throughout the room volume and thus decreases the directional characteristics of the device. This paper shows how the reverberant performance of an array can be modeled, knowing the anechoic performance of the radiator and the acoustic characteristics of the room. A formulation is presented whose results are compared to practical measurements in reverberant environments. Due to reflections from the room surfaces, pressure variations are introduced in the transfer responses of the array. This aspect is assessed by means of simulations where random noise is added to create uncertainties, and by performing measurements in a real environment. These results show how the robustness of an array is increased when it is designed for use in a reverberant environment. PMID:24815249
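
    As a rough numerical illustration of the paper's main point (not its formulation), one can add a uniform diffuse-field term to an array's anechoic bright- and dark-zone responses and watch the acoustic contrast shrink as the room becomes more reverberant. The room-constant model and all numbers below are illustrative assumptions in arbitrary units.

    # Hedged sketch of the qualitative effect: a zoning array's acoustic contrast
    # drops once the room's diffuse field is added to its anechoic response.
    # Simple model: mean-square pressure = anechoic part + 4*W/Rc, with room
    # constant Rc = S*alpha/(1 - alpha). Numbers are illustrative only.
    import numpy as np

    # Hypothetical anechoic mean-square pressures produced by the array.
    p2_bright_anechoic = 1.0
    p2_dark_anechoic = 1.0e-3          # 30 dB anechoic contrast

    def contrast_db(S, alpha, source_power=1.0e-2):
        Rc = S * alpha / (1.0 - alpha)                # room constant (m^2)
        p2_diffuse = 4.0 * source_power / Rc          # uniform reverberant contribution
        bright = p2_bright_anechoic + p2_diffuse
        dark = p2_dark_anechoic + p2_diffuse
        return 10.0 * np.log10(bright / dark)

    for alpha in (0.9, 0.3, 0.1):      # from a well-damped room to a live one
        print(f"alpha={alpha:.1f}: contrast = {contrast_db(S=200.0, alpha=alpha):.1f} dB")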

  11. Audio Effects Based on Biorthogonal Time-Varying Frequency Warping

    NASA Astrophysics Data System (ADS)

    Evangelista, Gianpaolo; Cavaliere, Sergio

    2001-12-01

    We illustrate the mathematical background and musical use of a class of audio effects based on frequency warping. These effects alter the frequency content of a signal via spectral mapping. They can be implemented in dispersive tapped delay lines based on a chain of all-pass filters. In a homogeneous line with first-order all-pass sections, the signal formed by the output samples at a given time is related to the input via the Laguerre transform. However, most musical signals require a time-varying frequency modification in order to be properly processed. Vibrato in musical instruments or voice intonation in the case of vocal sounds may be modeled as small and slow pitch variations. Simulation of these effects requires techniques for time-varying pitch and/or brightness modification that are very useful for sound processing. The basis for time-varying frequency warping is a time-varying version of the Laguerre transformation. The corresponding implementation structure is obtained as a dispersive tapped delay line, where each frequency-dependent delay element has its own phase response. Thus, time-varying warping results in a space-varying, inhomogeneous propagation structure. We show that time-varying frequency warping is associated with an expansion over biorthogonal sets generalizing the discrete Laguerre basis. Slowly time-varying characteristics lead to slowly varying parameter sequences. The corresponding sound transformation does not suffer from the discontinuities typical of delay lines based on unit delays.
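
    A minimal sketch of the static (time-invariant) case underlying the paper: a tapped delay line whose unit delays are replaced by identical first-order all-pass sections implements Laguerre-type frequency warping. The paper's contribution, the generalization to time-varying warping parameters, is not reproduced here; function names and parameters below are illustrative.

    # Sketch: frequency warping with a dispersive tapped delay line made of
    # first-order all-pass sections H(z) = (z^-1 - a) / (1 - a z^-1).
    import numpy as np
    from scipy.signal import lfilter

    def warped_fir(x, taps, a):
        """Filter x with an FIR whose unit delays are replaced by all-pass sections.

        taps : FIR coefficients applied to successive all-pass outputs
        a    : warping parameter in (-1, 1); a = 0 gives an ordinary FIR
        """
        y = np.zeros_like(x, dtype=float)
        state = x.astype(float)
        for h in taps:
            y += h * state                                # tap the current stage
            state = lfilter([-a, 1.0], [1.0, -a], state)  # next all-pass stage
        return y

    # Example: pass an impulse through a 16-tap line; a > 0 warps the frequency
    # axis so the structure has finer resolution at low frequencies.
    x = np.zeros(256); x[0] = 1.0
    taps = np.hanning(16)
    y_plain = warped_fir(x, taps, a=0.0)
    y_warped = warped_fir(x, taps, a=0.5)
    print(y_plain[:5], y_warped[:5])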

  12. Sensorimotor synchronization with audio-visual stimuli: limited multisensory integration.

    PubMed

    Armstrong, Alan; Issartel, Johann

    2014-11-01

    Understanding how we synchronize our actions with stimuli from different sensory modalities plays a central role in helping to establish how we interact with our multisensory environment. Recent research has shown better performance with multisensory over unisensory stimuli; however, the type of stimuli used has mainly been auditory and tactile. The aim of this article was to expand our understanding of sensorimotor synchronization with multisensory audio-visual stimuli and compare these findings to their individual unisensory counterparts. This research also aims to assess the role of spatio-temporal structure for each sensory modality. The visual and/or auditory stimuli had either temporal or spatio-temporal information available and were presented to the participants in unimodal and bimodal conditions. Globally, the performance was significantly better for the bimodal compared to the unimodal conditions; however, this benefit was limited to only one of the bimodal conditions. In terms of the unimodal conditions, the level of synchronization with visual stimuli was better than auditory, and while there was an observed benefit with the spatio-temporal compared to temporal visual stimulus, this was not replicated with the auditory stimulus. PMID:25027792

  13. Human performance measures for interactive haptic-audio-visual interfaces.

    PubMed

    Jia, Dawei; Bhatti, Asim; Nahavandi, Saeid; Horan, Ben

    2013-01-01

    Virtual reality and simulation are becoming increasingly important in modern society and it is essential to improve our understanding of system usability and efficacy from the users' perspective. This paper introduces a novel evaluation method designed to assess human user capability when undertaking technical and procedural training using virtual training systems. The evaluation method falls under the user-centered design and evaluation paradigm and draws on theories of cognitive, skill-based and affective learning outcomes. The method focuses on user interaction with haptic-audio-visual interfaces and the complexities related to variability in users' performance, and the adoption and acceptance of the technologies. A large scale user study focusing on object assembly training tasks involving selecting, rotating, releasing, inserting, and manipulating three-dimensional objects was performed. The study demonstrated the advantages of the method in obtaining valuable multimodal information for accurate and comprehensive evaluation of virtual training system efficacy. The study investigated how well users learn, perform, adapt to, and perceive the virtual training. The results of the study revealed valuable aspects of the design and evaluation of virtual training systems contributing to an improved understanding of more usable virtual training systems. PMID:24808267

  14. Advanced BCD technology for automotive, audio and power applications

    NASA Astrophysics Data System (ADS)

    Wessels, Piet; Swanenberg, Maarten; van Zwol, Hans; Krabbenborg, Benno; Boezen, Henk; Berkhout, Marco; Grakist, Alfred

    2007-02-01

    NXP's family of SOI-based advanced bipolar CMOS DMOS (A-BCD) technologies is presented. The technology is very successful in automotive, audio and power applications. This paper introduces the technology, the device concepts and the applications. The advantage of BCD technology on SOI is the ability to have all devices fully dielectrically isolated. This enables various device-biasing conditions (such as high-side or below-substrate voltages) that are not easy to realise on bulk, creating a competitive advantage in the mentioned applications. For example, it enables extremely robust EMC (electromagnetic compatibility) and EMI (electromagnetic immunity) circuitry for CAN (controller area network) or LIN (local interconnect network) transceivers in automotive. The leakage currents of the devices are much lower compared to bulk, and the same holds for parasitic capacitances towards the substrate. LIGBTs can be built without suffering from minority carriers being injected into the substrate. The area of power devices is in general very small due to the use of the double-RESURF principle and trench isolation; this small area pays off for high-voltage analogue circuits. Special topics on self-heating and ESD are treated, where it is demonstrated that performance is comparable to bulk. Three applications where SOI-based BCD generates a functionality advantage are explained. The SOI-based technology is an excellent starting point for the development of future products where monolithic solutions can be built with embedded power or even embedded MEMS technology.

  15. Audio-visual assistance in co-creating transition knowledge

    NASA Astrophysics Data System (ADS)

    Hezel, Bernd; Broschkowski, Ephraim; Kropp, Jürgen P.

    2013-04-01

    Earth system and climate impact research results point to the tremendous ecologic, economic and societal implications of climate change. Specifically, people will have to adopt lifestyles that are very different from those they currently strive for in order to mitigate severe changes of our known environment. It will most likely not suffice to transfer the scientific findings into international agreements and appropriate legislation. A transition is rather reliant on pioneers that define new role models, on change agents that mainstream the concept of sufficiency, and on narratives that make different futures appealing. In order for the research community to be able to provide sustainable transition pathways that are viable, an integration of the physical constraints and the societal dynamics is needed. Hence the necessary transition knowledge is to be co-created by social and natural science and society. To this end, the Climate Media Factory - in itself a massively transdisciplinary venture - strives to provide an audio-visual connection between the different scientific cultures and a bi-directional link to stakeholders and society. Since the methodologies, particular languages and knowledge levels of those involved are not the same, we develop new entertaining formats on the basis of a "complexity on demand" approach. They present scientific information in an integrated and entertaining way with different levels of detail that provide entry points to users with different requirements. Two examples illustrate the advantages and restrictions of the approach.

  16. Effectiveness and Comparison of Various Audio Distraction Aids in Management of Anxious Dental Paediatric Patients

    PubMed Central

    Johri, Nikita; Khan, Suleman Abbas; Singh, Rahul Kumar; Chadha, Dheera; Navit, Pragati; Sharma, Anshul; Bahuguna, Rachana

    2015-01-01

    Background Dental anxiety is a widespread phenomenon and a concern for paediatric dentistry. The inability of children to deal with threatening dental stimuli often manifests as behaviour management problems. Nowadays, the use of non-aversive behaviour management techniques, which are more acceptable to parents, patients and practitioners, is increasingly advocated. The present study was therefore conducted to find out which audio aid was the most effective in managing anxious children. Aims and Objectives The aim of the present study was to compare the efficacy of audio-distraction aids in reducing the anxiety of paediatric patients while undergoing various stressful and invasive dental procedures. The objectives were to ascertain whether audio distraction is an effective means of anxiety management and which type of audio aid is the most effective. Materials and Methods A total of 150 children, aged between 6 and 12 years, randomly selected amongst the patients who came for their first dental check-up, were placed in five groups of 30 each: a control group, an instrumental music group, a musical nursery rhymes group, a movie songs group and an audio stories group. The control group was treated under a normal set-up, and the audio groups listened to the corresponding audio presentations during treatment. Each child had four visits. In each visit, after the procedure was completed, the anxiety levels of the children were measured by the Venham’s Picture Test (VPT), Venham’s Clinical Rating Scale (VCRS) and pulse rate measurement with a pulse oximeter. Results A significant difference was seen between the groups for the mean pulse rate, with an increase in subsequent visits. However, no significant difference was seen in the VPT and VCRS scores between the groups. Audio aids in general reduced anxiety in comparison to the control group, and the most significant reduction in anxiety level was observed in the audio stories group. Conclusion The conclusion derived from the present study was that audio distraction was effective in reducing anxiety and audio stories were the most effective. PMID:26816984

  17. Data Machine Independence

    Energy Science and Technology Software Center (ESTSC)

    1994-12-30

    Data-machine independence achieved by using four technologies (ASN.1, XDR, SDS, and ZEBRA) has been evaluated by encoding two different applications in each of the above and comparing the results against the standard programming method using C.
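
    ASN.1, XDR, SDS, and ZEBRA are not reproduced here, but the principle they share can be sketched: serialize data with an explicit, architecture-independent byte order and width so the bytes decode identically on any machine. Below is a minimal illustration using Python's struct module with XDR-style big-endian layout; the record fields and names are hypothetical.

    # Sketch of the idea behind data-machine independence: serialize values with
    # an explicit, fixed byte order and width (big-endian "network order", as XDR
    # uses) instead of whatever the local machine happens to use. This is not
    # ASN.1/XDR/SDS/ZEBRA themselves, just the principle they share.
    import struct

    def encode_record(run_id: int, temperature: float, name: str) -> bytes:
        data = name.encode("ascii")
        # ">" forces big-endian regardless of host architecture:
        # int32, float64, int32 length prefix, then the raw bytes.
        return struct.pack(f">idi{len(data)}s", run_id, temperature, len(data), data)

    def decode_record(blob: bytes):
        run_id, temperature, n = struct.unpack_from(">idi", blob)
        (name,) = struct.unpack_from(f">{n}s", blob, offset=struct.calcsize(">idi"))
        return run_id, temperature, name.decode("ascii")

    blob = encode_record(42, 293.15, "detector-A")
    print(decode_record(blob))   # identical result on little- and big-endian hosts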

  18. Tunnel boring machine

    SciTech Connect

    Snyder, L. L.

    1985-07-09

    A tunnel boring machine for controlled boring of a curvilinear tunnel including a rotating cutter wheel mounted on the forward end of a thrust cylinder assembly having a central longitudinal axis aligned with the cutter wheel axis of rotation; the thrust cylinder assembly comprising a cylinder barrel and an extendable and retractable thrust arm received therein. An anchoring assembly is pivotally attached to the rear end of the cylinder barrel for anchoring the machine during a cutting stroke and providing a rear end pivot axis during curved cutting strokes. A pair of laterally extending, extendable and retractable arms are fixedly mounted at a forward portion of the cylinder barrel for providing lateral displacement in a laterally curved cutting mode and for anchoring the machine between cutting strokes and during straight line boring. Forward and rear transverse displacement and support assemblies are provided to facilitate cutting in a transversely curved cutting mode and to facilitate machine movement between cutting strokes.

  19. Zigzags in Turing Machines

    NASA Astrophysics Data System (ADS)

    Gajardo, Anahí; Guillon, Pierre

    We study one-head machines through symbolic and topological dynamics. In particular, a subshift is associated to each machine, and we are interested in its complexity in terms of real-time recognition. We emphasize the class of one-head machines whose subshift can be recognized by a deterministic pushdown automaton. We prove that this class corresponds to particular restrictions on the head movement, and to equicontinuity in the associated dynamical systems.
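
    The construction itself is beyond the scope of an abstract, but the following hypothetical toy (not the paper's machine) shows the kind of object involved: run a one-head machine and record the (state, read symbol, move) sequence seen at the head; the shift-closure of such traces forms an associated subshift.

    # Hypothetical illustration only: a toy one-head machine whose step-by-step
    # trace is the sort of symbolic sequence whose recognition complexity the
    # paper studies. The rule table is invented.
    from collections import defaultdict

    # (state, symbol) -> (write, move, new state) on alphabet {0, 1}
    RULES = {
        ("A", 0): (1, +1, "B"),
        ("A", 1): (0, -1, "B"),
        ("B", 0): (1, -1, "A"),
        ("B", 1): (1, +1, "A"),
    }

    def trace(steps=20):
        tape = defaultdict(int)     # blank symbol 0 on an unbounded tape
        pos, state = 0, "A"
        out = []
        for _ in range(steps):
            sym = tape[pos]
            write, move, state_next = RULES[(state, sym)]
            out.append((state, sym, "R" if move > 0 else "L"))
            tape[pos] = write
            pos += move
            state = state_next
        return out

    print(trace(10))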

  20. Sealing intersecting vane machines

    DOEpatents

    Martin, Jedd N.; Chomyszak, Stephen M.

    2005-06-07

    The invention provides a toroidal intersecting vane machine incorporating intersecting rotors to form primary and secondary chambers whose porting configurations minimize friction and maximize efficiency. Specifically, it is an object of the invention to provide a toroidal intersecting vane machine that greatly reduces the frictional losses through intersecting surfaces without the need for external gearing by modifying the width of one or both tracks at the point of intermeshing. The inventions described herein relate to these improvements.

  1. Sealing intersecting vane machines

    DOEpatents

    Martin, Jedd N. (Providence, RI); Chomyszak, Stephen M. (Attleboro, MA)

    2007-06-05

    The invention provides a toroidal intersecting vane machine incorporating intersecting rotors to form primary and secondary chambers whose porting configurations minimize friction and maximize efficiency. Specifically, it is an object of the invention to provide a toroidal intersecting vane machine that greatly reduces the frictional losses through intersecting surfaces without the need for external gearing by modifying the width of one or both tracks at the point of intermeshing. The inventions described herein relate to these improvements.

  2. Human-machine interactions

    DOEpatents

    Forsythe, J. Chris; Xavier, Patrick G.; Abbott, Robert G.; Brannon, Nathan G.; Bernard, Michael L.; Speed, Ann E.

    2009-04-28

    Digital technology utilizing a cognitive model based on human naturalistic decision-making processes, including pattern recognition and episodic memory, can reduce the dependency of human-machine interactions on the abilities of a human user and can enable a machine to more closely emulate human-like responses. Such a cognitive model can enable digital technology to use cognitive capacities fundamental to human-like communication and cooperation to interact with humans.

  3. Flexible machining systems described

    NASA Astrophysics Data System (ADS)

    Butters, H. J.

    1985-03-01

    The rationalization and gradual automation of the manufacture of short rotationally symmetric parts at the Saalfeld VEB Machine Tool Factory were carried out in three stages: (1) part-specific manufacturing; (2) an automated production line for manufacturing toothed gears; and (3) an automated manufacturing section for short rotationally symmetric parts. The development of numerically controlled machine tools and of industrial robot technology made automated manufacturing possible. The design of current facilities is explored, manufacturing control is examined, and operating experience is reported.

  4. Doubly fed induction machine

    DOEpatents

    Skeist, S. Merrill; Baker, Richard H.

    2005-10-11

    An electro-mechanical energy conversion system coupled between an energy source and an energy load including an energy converter device having a doubly fed induction machine coupled between the energy source and the energy load to convert the energy from the energy source and to transfer the converted energy to the energy load and an energy transfer multiplexer coupled to the energy converter device to control the flow of power or energy through the doubly fed induction machine.

  5. Metalworking and machining fluids

    SciTech Connect

    Erdemir, Ali; Sykora, Frank; Dorbeck, Mark

    2010-10-12

    Improved boron-based metalworking and machining fluids. Boric acid and boron-based additives that, when mixed with certain carrier fluids such as water, cellulose and/or cellulose derivatives, polyhydric alcohol, polyalkylene glycol, polyvinyl alcohol, starch, and dextrin, in solid and/or solvated forms, result in improved metalworking and machining of metallic workpieces. Fluids manufactured with boric acid or boron-based additives effectively reduce friction and prevent galling and severe wear problems on cutting and forming tools.

  6. Could a machine think

    SciTech Connect

    Churchland, P.M.; Churchland, P.S. )

    1990-01-01

    There are many reasons for saying yes. One of the earliest and deepest reasons lay in two important results in computational theory. The first was Church's thesis, which states that every effectively computable function is recursively computable. The second important result was Alan M. Turing's demonstration that any recursively computable function can be computed in finite time by a maximally simple sort of symbol-manipulating machine that has come to be called a universal Turing machine. This machine is guided by a set of recursively applicable rules that are sensitive to the identity, order and arrangement of the elementary symbols it encounters as input. The authors reject the Turing test as a sufficient condition for conscious intelligence. They base their position on the specific behavioral failures of the classical SM machines and on the specific virtues of machines with a more brain-like architecture. These contrasts show that certain computational strategies have vast and decisive advantages over others where typical cognitive tasks are concerned, advantages that are empirically inescapable. Clearly, the brain is making systematic use of these computational advantages, but it need not be the only physical system capable of doing so. Artificial intelligence, in a nonbiological but massively parallel machine, remains a compelling and discernible prospect.

  7. 16. Interior, Machine Shop, Roundhouse Machine Shop Extension, Southern Pacific ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    16. Interior, Machine Shop, Roundhouse Machine Shop Extension, Southern Pacific Railroad Carlin Shops, view to south (90mm lens). Note the large segmental-arched doorway to move locomotives in and out of Machine Shop. - Southern Pacific Railroad, Carlin Shops, Roundhouse Machine Shop Extension, Foot of Sixth Street, Carlin, Elko County, NV

  8. An Audio Architecture Integrating Sound and Live Voice for Virtual Environments

    NASA Astrophysics Data System (ADS)

    Krebs, Eric M.

    2002-09-01

    The purpose of this thesis was to design and implement an audio system architecture, both in hardware and in software, for use in virtual environments. The hardware and software design requirements were aimed at providing acoustical models, such as reverberation and occlusion, and live audio streaming to any simulation employing this architecture. Several free or open-source sound APIs were evaluated, and DirectSound3D was selected as the core component of the audio architecture. Creative Technology Ltd.'s Environmental Audio Extensions (EAX 3.0) were integrated into the architecture to provide environmental effects such as reverberation, occlusion, obstruction, and exclusion. Voice over IP (VoIP) technology was evaluated to provide live, streaming voice to any virtual environment. DirectVoice was selected as the voice component of the VoIP architecture due to its integration with DirectSound3D. However, extremely high latency with DirectVoice, and with any other VoIP application or software, required further research into alternative live-voice architectures for inclusion in virtual environments. Ausim3D's GoldServe Audio System was evaluated and integrated into the hardware component of the audio architecture to provide an extremely low-latency, live, streaming voice capability.

  9. Audio-vocal interaction in single neurons of the monkey ventrolateral prefrontal cortex.

    PubMed

    Hage, Steffen R; Nieder, Andreas

    2015-05-01

    Complex audio-vocal integration systems depend on a strong interconnection between the auditory and the vocal motor system. To gain cognitive control over audio-vocal interaction during vocal motor control, the PFC needs to be involved. Neurons in the ventrolateral PFC (VLPFC) have been shown to separately encode the sensory perceptions and motor production of vocalizations. It is unknown, however, whether single neurons in the PFC reflect audio-vocal interactions. We therefore recorded single-unit activity in the VLPFC of rhesus monkeys (Macaca mulatta) while they produced vocalizations on command or passively listened to monkey calls. We found that 12% of randomly selected neurons in VLPFC modulated their discharge rate in response to acoustic stimulation with species-specific calls. Almost three-fourths of these auditory neurons showed an additional modulation of their discharge rates either before and/or during the monkeys' motor production of vocalization. Based on these audio-vocal interactions, the VLPFC might be well positioned to combine higher order auditory processing with cognitive control of the vocal motor output. Such audio-vocal integration processes in the VLPFC might constitute a precursor for the evolution of complex learned audio-vocal integration systems, ultimately giving rise to human speech. PMID:25948255

  10. StirMark Benchmark: audio watermarking attacks based on lossy compression

    NASA Astrophysics Data System (ADS)

    Steinebach, Martin; Lang, Andreas; Dittmann, Jana

    2002-04-01

    StirMark Benchmark is a well-known evaluation tool for watermarking robustness, and additional attacks are added to it continuously. To enable application-based evaluation, in this paper we address attacks against audio watermarks based on lossy audio compression algorithms, to be included in the test environment. We discuss the effect of different lossy compression algorithms such as MPEG-2 Audio Layer 3, Ogg or VQF on a selection of audio test data. Our focus is on changes to basic characteristics of the audio data, such as spectrum and average power, and on removal of embedded watermarks. Furthermore, we compare results of different watermarking algorithms and show that lossy compression is still a challenge for most of them. There are two strategies for adding evaluation of robustness against lossy compression to StirMark Benchmark: (a) use of existing free compression algorithms, and (b) implementation of a generic lossy compression simulation. We discuss how such a model can be implemented based on the results of our tests. This method is less complex, as no real psychoacoustic model has to be applied. Our model can be used for audio watermarking evaluation in numerous application fields. As an example, we describe its importance for e-commerce applications with watermarking security.
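
    A hedged sketch of strategy (b), a generic lossy-compression simulation, under our own simplifying assumptions: keep only the strongest spectral bins per frame and quantize them coarsely, then measure how much of the signal survives. This only approximates what MP3, Ogg or VQF actually do (no psychoacoustic model, as the authors note); function names and parameters are illustrative.

    # Generic lossy-compression simulation usable as a watermark robustness attack.
    import numpy as np
    from scipy.signal import stft, istft

    def lossy_attack(x, fs, keep_fraction=0.15, q_step=0.02):
        f, t, X = stft(x, fs=fs, nperseg=1024)
        mag, phase = np.abs(X), np.angle(X)
        # Keep only the strongest bins of each frame (crude stand-in for a
        # codec's bit allocation), zeroing the rest.
        thresh = np.quantile(mag, 1.0 - keep_fraction, axis=0, keepdims=True)
        mag = np.where(mag >= thresh, mag, 0.0)
        # Coarse magnitude quantization relative to each frame's peak.
        peak = mag.max(axis=0, keepdims=True) + 1e-12
        mag = np.round(mag / (q_step * peak)) * (q_step * peak)
        _, y = istft(mag * np.exp(1j * phase), fs=fs, nperseg=1024)
        return y

    fs = 16000
    x = np.random.randn(fs * 2)            # stand-in for a watermarked test signal
    y = lossy_attack(x, fs)
    n = min(len(x), len(y))
    print("surviving correlation:", np.corrcoef(x[:n], y[:n])[0, 1])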

  11. Expansion Techniques of Embedding Audio Watermark Data Rate for Constructing Ubiquitous Acoustic Spaces

    NASA Astrophysics Data System (ADS)

    Modegi, Toshio

    We have proposed “Ubiquitous Acoustic Spaces”, in which each sound source can emit address information along with its audio signal and let users automatically access the related cyber space with handheld devices such as cellphones. In order to realize this concept, we have considered three types of extraction methods: acoustic modulation, audio fingerprinting, and audio watermarking. We then proposed a novel audio watermarking technique, which enables contactless, asynchronous detection of embedded audio watermarks through speaker and microphone devices. However, its embedding data rate was around 10 bps, which was not sufficient for embedding commonly used URL texts. Therefore, we have extended the embedding frequency range and proposed a duplicated embedding algorithm, which uses the previously proposed frequency-division method together with a temporal-division method. With these improvements, the possible embedding data rate could be extended to 61.5 bps, and we could extract watermarks through public telephone networks, even from a cell-phone sound source. In this paper, we describe our improved watermark embedding and extracting algorithms and present experimental results on watermark extraction precision under several audio signal capturing conditions.
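
    The paper's embedding algorithm is not reproduced here; the following is a hypothetical, much-simplified illustration of the frequency-division idea only: one bit per frame is embedded by biasing the energy ratio of two sub-bands, and detection compares the two energies. Band choices, frame size, and gain are invented for the sketch.

    # Toy frequency-division watermark: bias the energy ratio of two sub-bands.
    import numpy as np

    fs, frame = 16000, 2048
    band_a = slice(60, 90)       # FFT bins compared against band_b at detection
    band_b = slice(90, 120)

    def embed(x, bits, gain=2.0):
        y = x.copy()
        for i, bit in enumerate(bits):
            seg = y[i * frame:(i + 1) * frame]
            X = np.fft.rfft(seg)
            X[band_a if bit else band_b] *= gain          # boost one band per bit
            y[i * frame:(i + 1) * frame] = np.fft.irfft(X, n=frame)
        return y

    def extract(y, n_bits):
        bits = []
        for i in range(n_bits):
            X = np.fft.rfft(y[i * frame:(i + 1) * frame])
            ea = np.sum(np.abs(X[band_a]) ** 2)
            eb = np.sum(np.abs(X[band_b]) ** 2)
            bits.append(1 if ea > eb else 0)
        return bits

    x = np.random.randn(frame * 8)          # stand-in for host audio
    payload = [1, 0, 1, 1, 0, 0, 1, 0]
    print(extract(embed(x, payload), len(payload)) == payload)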

  12. Do gender differences in audio-visual benefit and visual influence in audio-visual speech perception emerge with age?

    PubMed Central

    Alm, Magnus; Behne, Dawn

    2015-01-01

    Gender and age have been found to affect adults’ audio-visual (AV) speech perception. However, research on adult aging focuses on adults over 60 years, who have an increasing likelihood of cognitive and sensory decline, which may confound positive effects of age-related AV experience and its interaction with gender. Observed age and gender differences in AV speech perception may also depend on measurement sensitivity and AV task difficulty. Consequently, both AV benefit and visual influence were used to measure visual contribution for gender-balanced groups of young (20–30 years) and middle-aged adults (50–60 years), with task difficulty varied using AV syllables from different talkers in alternative auditory backgrounds. Females had better speech-reading performance than males. Whereas no gender differences in AV benefit or visual influence were observed for young adults, visually influenced responses were significantly greater for middle-aged females than middle-aged males. That speech-reading performance did not influence AV benefit may be explained by visual speech extraction and AV integration constituting independent abilities. By contrast, the gender difference in visually influenced responses in middle adulthood may reflect an experience-related shift in females’ general AV perceptual strategy. Although young females’ speech-reading proficiency may not readily contribute to greater visual influence, between young and middle adulthood recurrent confirmation of the contribution of visual cues, induced by speech-reading proficiency, may gradually shift females’ AV perceptual strategy toward more visually dominated responses. PMID:26236274

  13. [A new machinability test machine and the machinability of composite resins for core built-up].

    PubMed

    Iwasaki, N

    2001-06-01

    A new machinability test machine especially for dental materials was devised. The purpose of this study was to evaluate the effects of grinding conditions on the machinability of core built-up resins using this machine, and to examine the relationship between machinability and other properties of composite resins. The experimental machinability test machine consisted of a dental air-turbine handpiece, a control weight unit, a driving unit for the stage that fixes the test specimen, and so on. Machinability was evaluated as the change in volume after grinding with a diamond point. Five kinds of core built-up resins and human teeth were used in this study. The machinability of these composite resins increased with increasing load during grinding, and decreased with repeated grinding. There was no obvious correlation between machinability and Vickers hardness; however, a negative correlation was observed between machinability and scratch width. PMID:11496409

  14. Non-traditional machining techniques

    SciTech Connect

    Day, Robert D; Fierro, Frank; Garcia, Felix P; Hatch, Douglass J; Randolph, Randall B; Reardon, Patrick T; Rivera, Gerald

    2008-01-01

    During the course of machining targets for various experiments it sometimes becomes necessary to adapt fixtures or machines, which are designed for one function, to another function. When adapting a machine or fixture is not adequate, it may be necessary to acquire a machine specifically designed to produce the component required. In addition to the above scenarios, the features of a component may dictate that multi-step machining processes are necessary to produce the component. This paper discusses the machining of four components where adaptation, specialized machine design, or multi-step processes were necessary to produce the components.

  15. Machining of fiber reinforced composites

    NASA Astrophysics Data System (ADS)

    Komanduri, Ranga; Zhang, Bi; Vissa, Chandra M.

    Factors involved in machining of fiber-reinforced composites are reviewed. Consideration is given to properties of composites reinforced with boron filaments, glass fibers, aramid fibers, carbon fibers, and silicon carbide fibers and to polymer (organic) matrix composites, metal matrix composites, and ceramic matrix composites, as well as to the processes used in conventional machining of boron-titanium composites and of composites reinforced by each of these fibers. Particular attention is given to the methods of nonconventional machining, such as laser machining, water jet cutting, electrical discharge machining, and ultrasonic assisted machining. Also discussed are safety precautions which must be taken during machining of fiber-containing composites.

  16. No, there is no 150 ms lead of visual speech on auditory speech, but a range of audiovisual asynchronies varying from small audio lead to large audio lag.

    PubMed

    Schwartz, Jean-Luc; Savariaux, Christophe

    2014-07-01

    An increasing number of neuroscience papers capitalize on the assumption published in this journal that visual speech would be typically 150 ms ahead of auditory speech. It happens that the estimation of audiovisual asynchrony in the reference paper is valid only in very specific cases, for isolated consonant-vowel syllables or at the beginning of a speech utterance, in what we call "preparatory gestures". However, when syllables are chained in sequences, as they are typically in most parts of a natural speech utterance, asynchrony should be defined in a different way. This is what we call "comodulatory gestures" providing auditory and visual events more or less in synchrony. We provide audiovisual data on sequences of plosive-vowel syllables (pa, ta, ka, ba, da, ga, ma, na) showing that audiovisual synchrony is actually rather precise, varying between 20 ms audio lead and 70 ms audio lag. We show how more complex speech material should result in a range typically varying between 40 ms audio lead and 200 ms audio lag, and we discuss how this natural coordination is reflected in the so-called temporal integration window for audiovisual speech perception. Finally we present a toy model of auditory and audiovisual predictive coding, showing that visual lead is actually not necessary for visual prediction. PMID:25079216

  17. Small Weakly Universal Turing Machines

    NASA Astrophysics Data System (ADS)

    Neary, Turlough; Woods, Damien

    We give small universal Turing machines with state-symbol pairs of (6,2), (3,3) and (2,4). These machines are weakly universal, which means that they have an infinitely repeated word to the left of their input and another to the right. They simulate Rule 110 and are currently the smallest known weakly universal Turing machines. Despite their small size these machines are efficient polynomial time simulators of Turing machines.
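
    For reference, the cellular automaton these machines simulate can be stated in a few lines; the machines themselves (and their infinitely repeated tape words) are not reproduced here. A minimal Rule 110 sketch:

    # Minimal Rule 110 cellular automaton, the system the small weakly universal
    # machines above simulate. Initial condition and tape length are arbitrary.
    def rule110_step(cells):
        # New cell value from (left, centre, right): Rule 110 truth table.
        table = {(1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
                 (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}
        n = len(cells)
        return [table[(cells[(i-1) % n], cells[i], cells[(i+1) % n])] for i in range(n)]

    cells = [0] * 31 + [1]          # single live cell on a circular tape
    for _ in range(12):
        print("".join(".#"[c] for c in cells))
        cells = rule110_step(cells)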

  18. The Bearingless Electrical Machine

    NASA Technical Reports Server (NTRS)

    Bichsel, J.

    1992-01-01

    Electromagnetic bearings allow the suspension of solids. For rotary applications, the most important physical effect is the force of a magnetic circuit on a highly permeable armature, called the Maxwell force. In contrast to the commonly used Maxwell bearings, the bearingless electrical machine takes advantage of the reaction force on a conductor carrying a current in a magnetic field. This kind of force, called the Lorentz force, generates the torque in direct-current, asynchronous and synchronous machines. The magnetic field, which already exists in electrical machines and helps to build up the torque, can also be used for the suspension of the rotor. Besides the normal stator winding, a special winding was added which generates forces for levitation. Thus a radial bearing, integrated directly in the active part of the machine, and the motor use the laminated core simultaneously. The levitation winding was constructed in such a way that commercially available standard AC inverters for drives can be used. Besides wholly magnetically suspended machines, there is a wide range of applications for normal drives with ball bearings: resonances of the rotor, especially at critical speeds, can be damped actively.

  19. Extreme ultraviolet lithography machine

    DOEpatents

    Tichenor, Daniel A.; Kubiak, Glenn D.; Haney, Steven J.; Sweeney, Donald W.

    2000-01-01

    An extreme ultraviolet lithography (EUVL) machine or system for producing integrated circuit (IC) components, such as transistors, formed on a substrate. The EUVL machine utilizes a laser plasma point source directed via an optical arrangement onto a mask or reticle, which is reflected by a multiple-mirror system onto the substrate or target. The EUVL machine operates with soft x-ray photons in the 10-14 nm wavelength range. Basically, the EUVL machine includes an evacuated source chamber and an evacuated main or projection chamber interconnected by a transport tube arrangement, wherein a laser beam is directed into a plasma generator which produces an illumination beam; this beam is directed by optics from the source chamber through the connecting tube into the projection chamber and onto the reticle or mask, from which a patterned beam is reflected by optics in a projection optics (PO) box mounted in the main or projection chamber onto the substrate. In one embodiment of an EUVL machine, nine optical components are utilized, with four of the optical components located in the PO box. The main or projection chamber includes vibration isolators for the PO box and a vibration-isolated mounting for the substrate, with the main or projection chamber being mounted on a support structure and being isolated.

  20. Detection of emetic activity in the cat by monitoring venous pressure and audio signals

    NASA Technical Reports Server (NTRS)

    Nagahara, A.; Fox, Robert A.; Daunton, Nancy G.; Elfar, S.

    1991-01-01

    To investigate the use of audio signals as a simple, noninvasive measure of emetic activity, the relationship between the somatic events and sounds associated with retching and vomiting was studied. Thoracic venous pressure obtained from an implanted external jugular catheter was shown to provide a precise measure of the somatic events associated with retching and vomiting. Changes in thoracic venous pressure, monitored through an indwelling external jugular catheter, were compared with audio signals obtained from a microphone located above the animal in a test chamber. In addition, two independent observers visually monitored emetic episodes. Retching and vomiting were induced by injection of xylazine (0.66 mg/kg s.c.) or by motion. A unique audio signal at a frequency of approximately 250 Hz is produced at the time of the negative thoracic venous pressure change associated with retching. Sounds with higher frequencies (around 2500 Hz) occur in conjunction with the positive pressure changes associated with vomiting. These specific signals could be discriminated reliably by individuals reviewing the audio recordings of the sessions. Retching and those emetic episodes associated with positive venous pressure changes were detected accurately by audio monitoring, with 90 percent of retches and 100 percent of emetic episodes correctly identified. Retching was detected more accurately (p < .05) by audio monitoring than by direct visual observation. However, with visual observation a few incidents in which stomach contents were expelled in the absence of positive pressure changes or detectable sounds were identified. These data suggest that in emetic situations the expulsion of stomach contents may be accomplished by more than one neuromuscular system and that audio signals can be used to detect emetic episodes associated with thoracic venous pressure changes.
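
    A hedged sketch, not the study's analysis: given the reported signatures (about 250 Hz for retching, about 2500 Hz for expulsion), a simple detector can flag frames whose band-limited energy in either region stands well above its typical level. The sample rate, band edges, and thresholds below are invented.

    # Toy band-energy event detector for the two reported acoustic signatures.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    fs = 11025

    def band_energy(x, lo, hi):
        sos = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band", output="sos")
        return sosfiltfilt(sos, x) ** 2

    def flag_events(audio, frame_s=0.25, factor=10.0):
        e_low = band_energy(audio, 200, 300)      # ~250 Hz retch-like signature
        e_high = band_energy(audio, 2000, 3000)   # ~2500 Hz expulsion-like signature
        n = int(frame_s * fs)
        starts = range(0, len(audio) - n, n)
        lo = np.array([e_low[i:i + n].sum() for i in starts])
        hi = np.array([e_high[i:i + n].sum() for i in starts])
        events = []
        for k, i in enumerate(starts):
            if lo[k] > factor * np.median(lo):
                events.append((round(i / fs, 2), "retch-like"))
            elif hi[k] > factor * np.median(hi):
                events.append((round(i / fs, 2), "expulsion-like"))
        return events

    # Synthetic demo: quiet noise with a brief 250 Hz burst about one second in.
    audio = 0.01 * np.random.randn(fs * 3)
    audio[fs:fs + 2000] += np.sin(2 * np.pi * 250 * np.arange(2000) / fs)
    print(flag_events(audio))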

  1. Monitoring frog communities: An application of machine learning

    SciTech Connect

    Taylor, A.; Watson, G.; Grigg, G.; McCallum, H.

    1996-12-31

    Automatic recognition of animal vocalizations would be a valuable tool for a variety of biological research and environmental monitoring applications. We report the development of a software system which can recognize the vocalizations of 22 species of frogs that occur in an area of northern Australia. This software system will be used in unattended operation to monitor the effect of the introduced Cane Toad on frog populations. The system is based around classification of local peaks in the spectrogram of the audio signal using Quinlan's machine learning system, C4.5. Unreliable identifications of individual peaks are aggregated using a hierarchical structure of segments based on the species' typical temporal vocalization patterns. This produces robust system performance.
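
    A hedged sketch of the core classification step, with scikit-learn's CART decision tree standing in for Quinlan's C4.5 and synthetic calls standing in for field recordings: describe the loudest spectrogram frames by simple peak features and train a tree to label them by species. Feature choices and data are assumptions.

    # Toy spectrogram-peak classifier in the spirit of the described system.
    import numpy as np
    from scipy.signal import spectrogram
    from sklearn.tree import DecisionTreeClassifier

    fs = 22050

    def call(f0, dur=0.3, pulses=8):
        """Toy pulsed frog call at carrier frequency f0."""
        t = np.arange(0, dur, 1 / fs)
        return np.sin(2 * np.pi * f0 * t) * (np.sin(2 * np.pi * pulses / dur * t) > 0)

    def peak_features(x):
        """One feature vector per loud spectrogram frame: peak freq, bandwidth, energy."""
        f, t, S = spectrogram(x, fs=fs, nperseg=512)
        feats = []
        for j in np.argsort(S.sum(axis=0))[-20:]:          # loudest frames
            col = S[:, j]
            k = int(np.argmax(col))
            bw = float(np.sum(col > 0.1 * col[k]))          # bins above -10 dB of peak
            feats.append([f[k], bw, float(np.log10(col.sum() + 1e-12))])
        return feats

    # Two "species" with different carriers and pulse rates.
    X, y = [], []
    for label, (f0, pulses) in enumerate([(1200, 6), (2800, 14)]):
        for _ in range(30):
            noisy = call(f0, pulses=pulses) + 0.05 * np.random.randn(int(0.3 * fs))
            for v in peak_features(noisy):
                X.append(v)
                y.append(label)

    clf = DecisionTreeClassifier(max_depth=4).fit(X, y)
    print("training accuracy:", clf.score(X, y))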

  2. Micro-machined resonator

    DOEpatents

    Godshall, N.A.; Koehler, D.R.; Liang, A.Y.; Smith, B.K.

    1993-03-30

    A micro-machined resonator, typically quartz, with upper and lower micro-machinable support members, or covers, having etched wells which may be lined with conductive electrode material, between the support members is a quartz resonator having an energy trapping quartz mesa capacitively coupled to the electrode through a diaphragm; the quartz resonator is supported by either micro-machined cantilever springs or by thin layers extending over the surfaces of the support. If the diaphragm is rigid, clock applications are available, and if the diaphragm is resilient, then transducer applications can be achieved. Either the thin support layers or the conductive electrode material can be integral with the diaphragm. In any event, the covers are bonded to form a hermetic seal and the interior volume may be filled with a gas or may be evacuated. In addition, one or both of the covers may include oscillator and interface circuitry for the resonator.

  3. The Bateman Flotation Machine

    SciTech Connect

    Bezuidenhout, G.

    1995-12-31

    The newly developed Bateman Flotation Machine has proven its versatility in roughing and cleaning flotation circuits. This mechanical flotation machine has the dual capability of suspending solids and dispersing air at relatively low power inputs without compromising either of these two important fundamentals. The new development has been successfully marketed to a wide cross-section of concentrator mineral processes. The mechanical design of the flotation mechanism has been optimized to reduce operational costs and to lower manufacturing costs. Production process environments were utilized to verify the scale-up of each rated cell volume size of the mechanism. These thorough investigations produced performance data which could be accurately quoted. This paper is a historical account of the Bateman Flotation Machine. Technical details of the development are covered with descriptions of the operational applications.

  4. Constructing Time Machines

    NASA Astrophysics Data System (ADS)

    Shore, G. M.

    The existence of time machines, understood as spacetime constructions exhibiting physically realised closed timelike curves (CTCs), would raise fundamental problems with causality and challenge our current understanding of classical and quantum theories of gravity. In this paper, we investigate three proposals for time machines which share some common features: cosmic strings in relative motion, where the conical spacetime appears to allow CTCs; colliding gravitational shock waves, which in Aichelburg-Sexl coordinates imply discontinuous geodesics; and the superluminal propagation of light in gravitational-radiation metrics in a modified electrodynamics featuring violations of the strong equivalence principle. While we show that ultimately none of these constructions creates a working time machine, their study illustrates the subtle levels at which causal self-consistency imposes itself, and we consider what intuition can be drawn from these examples for future theories.

  5. Micro-machined resonator

    DOEpatents

    Godshall, Ned A. (Albuquerque, NM); Koehler, Dale R. (Albuquerque, NM); Liang, Alan Y. (Albuquerque, NM); Smith, Bradley K. (Albuquerque, NM)

    1993-01-01

    A micro-machined resonator, typically quartz, with upper and lower micro-machinable support members, or covers, having etched wells which may be lined with conductive electrode material, between the support members is a quartz resonator having an energy trapping quartz mesa capacitively coupled to the electrode through a diaphragm; the quartz resonator is supported by either micro-machined cantilever springs or by thin layers extending over the surfaces of the support. If the diaphragm is rigid, clock applications are available, and if the diaphragm is resilient, then transducer applications can be achieved. Either the thin support layers or the conductive electrode material can be integral with the diaphragm. In any event, the covers are bonded to form a hermetic seal and the interior volume may be filled with a gas or may be evacuated. In addition, one or both of the covers may include oscillator and interface circuitry for the resonator.

  6. Refrigerating machine oil

    SciTech Connect

    Nozawa, K.

    1981-03-17

    Refrigerating machine oil to be charged into a sealed motor-compressor unit constituting a refrigerating cycle system, including an electric refrigerator, an electric cold-storage box, a small-scale electric refrigerating show-case, a small-scale electric cold-storage show-case and the like, is arranged to have a specifically enhanced property whereby lower initial driving power consumption of the sealed motor-compressor and easier supply of the predetermined amount of refrigerating machine oil to the refrigerating system are both guaranteed even at rather low environmental temperatures.

  7. New photolithography stepping machine

    SciTech Connect

    Hale, L.; Klingmann, J.; Markle, D.

    1995-03-08

    A joint development project to design a new photolithography stepping machine capable of 150 nanometer overlay accuracy was completed by Ultratech Stepper and the Lawrence Livermore National Laboratory. The principal result of the project is a next-generation product that will strengthen the US position in step-and-repeat photolithography. The significant challenges addressed and solved in the project are the subject of this report. Design methods and new devices that have broader application to precision machine design are presented in greater detail, while project-specific information serves primarily as background and motivation.

  8. Intersecting vane machines

    DOEpatents

    Bailey, H. Sterling; Chomyszak, Stephen M.

    2007-01-16

    The invention provides a toroidal intersecting vane machine incorporating intersecting rotors to form primary and secondary chambers whose porting configurations minimize friction and maximize efficiency. Specifically, it is an object of the invention to provide a toroidal intersecting vane machine that greatly reduces the frictional losses through meshing surfaces without the need for external gearing by modifying the function of one or the other of the rotors from that of "fluid moving" to that of "valving" thereby reducing the pressure loads and associated inefficiencies at the interface of the meshing surfaces. The inventions described herein relate to these improvements.

  9. Automated fiber pigtailing machine

    DOEpatents

    Strand, O.T.; Lowry, M.E.

    1999-01-05

    The Automated Fiber Pigtailing Machine (AFPM) aligns and attaches optical fibers to optoelectronic (OE) devices such as laser diodes, photodiodes, and waveguide devices without operator intervention. The so-called pigtailing process is completed with sub-micron accuracies in less than 3 minutes. The AFPM operates unattended for one hour, is modular in design and is compatible with a mass production manufacturing environment. This machine can be used to build components which are used in military aircraft navigation systems, computer systems, communications systems and in the construction of diagnostics and experimental systems. 26 figs.

  10. Automated fiber pigtailing machine

    DOEpatents

    Strand, Oliver T.; Lowry, Mark E.

    1999-01-01

    The Automated Fiber Pigtailing Machine (AFPM) aligns and attaches optical fibers to optoelectronic (OE) devices such as laser diodes, photodiodes, and waveguide devices without operator intervention. The so-called pigtailing process is completed with sub-micron accuracies in less than 3 minutes. The AFPM operates unattended for one hour, is modular in design and is compatible with a mass production manufacturing environment. This machine can be used to build components which are used in military aircraft navigation systems, computer systems, communications systems and in the construction of diagnostics and experimental systems.

  11. Precision Robotic Assembly Machine

    SciTech Connect

    2009-08-14

    The world's largest laser system is the National Ignition Facility (NIF), located at Lawrence Livermore National Laboratory. NIF's 192 laser beams are amplified to extremely high energy, and then focused onto a tiny target about the size of a BB, containing frozen hydrogen gas. The target must be perfectly machined to incredibly demanding specifications. The Laboratory's scientists and engineers have developed a device called the "Precision Robotic Assembly Machine" for this purpose. Its unique design won a prestigious R&D-100 award from R&D Magazine.

  12. Paradigms for machine learning

    NASA Technical Reports Server (NTRS)

    Schlimmer, Jeffrey C.; Langley, Pat

    1991-01-01

    Five paradigms are described for machine learning: connectionist (neural network) methods, genetic algorithms and classifier systems, empirical methods for inducing rules and decision trees, analytic learning methods, and case-based approaches. The dimensions along which these paradigms vary in their approach to learning are considered, and the basic methods used within each framework are reviewed, together with open research issues. It is argued that the similarities among the paradigms are more important than their differences, and that future work should attempt to bridge the existing boundaries. Finally, some recent developments in the field of machine learning are discussed, and their impact on both research and applications is examined.

  13. Precision Robotic Assembly Machine

    ScienceCinema

    None

    2010-09-01

    The world's largest laser system is the National Ignition Facility (NIF), located at Lawrence Livermore National Laboratory. NIF's 192 laser beams are amplified to extremely high energy, and then focused onto a tiny target about the size of a BB, containing frozen hydrogen gas. The target must be perfectly machined to incredibly demanding specifications. The Laboratory's scientists and engineers have developed a device called the "Precision Robotic Assembly Machine" for this purpose. Its unique design won a prestigious R&D-100 award from R&D Magazine.

  14. Investigation des correlations existant entre la perception de qualite audio et les reactions physiologiques d'un auditeur

    NASA Astrophysics Data System (ADS)

    Baudot, Matthias

    Subjective listening tests are used to evaluate the reproduction fidelity of audio coding systems (codecs). The project presented here assesses the possibility of using physiological reactions (electrodermal, cardiac, muscular, and cerebral activity) in place of a score given by the listener in order to characterize codec performance. This would provide an evaluation method closer to the subject's actual perception of audio quality. Listening tests involving well-known audio degradations, combined with measurement of physiological reactions, were carried out with 4 listeners. Analysis of the results shows that certain physiological features provide reliable information about perceived audio quality, repeatably for nearly 70% of the audio signals tested within one subject and nearly 60% of the audio sequences tested across all subjects. This supports the feasibility of such a subjective evaluation method for audio codecs. Keywords: subjective listening test, audio codec evaluation, physiological measurements, perceived audio quality, electrodermal conductance, photoplethysmography, electromyogram, electroencephalogram

  15. Lagrange constraint neural network for audio varying BSS

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.; Hsu, Charles C.

    2002-03-01

    Lagrange Constraint Neural Network (LCNN) is a statistical-mechanical ab initio model that does not assume the artificial neural network (ANN) model at all but derives it from the first principles of the Hamilton and Lagrange methodology: H(S,A) = f(S) − λ·C(S, A(x,t)), which incorporates the measurement constraint C(S, A(x,t)) = λ·([A]S − X) + (λ0 − 1)(Σ_i s_i − 1) using the vector Lagrange multiplier λ and the a priori Shannon entropy f(S) = −Σ_i s_i log s_i as the contrast function of an unknown number of independent sources s_i. Szu et al. first solved, in 1997, the general Blind Source Separation (BSS) problem for a spatial-temporal varying mixing matrix in real-world remote sensing, where a large pixel footprint implies that the mixing matrix [A(x,t)] is necessarily filled with diurnal and seasonal variations. Because the ground truth is difficult to ascertain in remote sensing, we illustrate in this paper each step of the LCNN algorithm for simulated spatial-temporal varying BSS in speech and music audio mixing. We review and compare LCNN with the popular a posteriori maximum entropy methodology defined by the ANN weight matrix [W] with sigmoid post-processing, H(Y = sigma([W]X)), due to Bell-Sejnowski, Amari and Oja (BSAO) and called Independent Component Analysis (ICA). Both are mirror-symmetric MaxEnt methodologies and work for a constant unknown mixing matrix [A]; the major difference is whether the ensemble average is taken over neighborhood pixel data X in BSAO or over the a priori source variables S in LCNN, which dictates which method works for a spatial-temporal varying [A(x,t)] that does not allow the neighborhood pixel average. We expected sharper de-mixing by the LCNN method in a controlled ground-truth experiment simulating a time-variant mixture of two pieces of music of similar kurtosis (15 seconds composed of Saint-Saens' Swan and Rachmaninov's cello concerto).
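
    In standard notation, the constrained objective described above can be written (sign conventions and grouping inferred from the abstract, given here in LaTeX) as

        H(S,A) \;=\; f(S) \;-\; \boldsymbol{\lambda}^{\mathsf T}\bigl([A]\,S - X\bigr) \;-\; (\lambda_0 - 1)\Bigl(\sum_i s_i - 1\Bigr),
        \qquad
        f(S) \;=\; -\sum_i s_i \log s_i ,

    where the vector multiplier λ enforces the measurement constraint [A]S = X and λ0 enforces the source normalization Σ_i s_i = 1.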

  16. Reduction in time-to-sleep through EEG based brain state detection and audio stimulation.

    PubMed

    Zhuo Zhang; Cuntai Guan; Ti Eu Chan; Juanhong Yu; Aung Aung Phyo Wai; Chuanchu Wang; Haihong Zhang

    2015-08-01

    We developed an EEG- and audio-based sleep sensing and enhancing system, called iSleep (interactive Sleep enhancement apparatus). The system adopts a closed-loop approach which optimizes the audio recording selection based on the user's sleep status detected through our online EEG computing algorithm. The iSleep prototype comprises two major parts: 1) a sleeping mask integrated with a single channel EEG electrode and amplifier, a pair of stereo earphones and a microcontroller with wireless circuit for control and data streaming; 2) a mobile app to receive EEG signals for online sleep monitoring and audio playback control. In this study we attempt to validate our hypothesis that appropriate audio stimulation in relation to brain state can induce faster onset of sleep and improve the quality of a nap. We conduct experiments on 28 healthy subjects, each undergoing two nap sessions - one with a quiet background and one with our audio stimulation. We compare the time-to-sleep in both sessions between two groups of subjects, i.e., fast and slow sleep-onset groups. The p-value obtained from the Wilcoxon Signed Rank Test is 1.22e-04 for the slow-onset group, which demonstrates that iSleep can significantly reduce the time-to-sleep for people with difficulty falling asleep. PMID:26738161
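
    A minimal sketch (hypothetical data values; standard scipy usage, not the authors' code) of the paired Wilcoxon signed-rank comparison of time-to-sleep between the quiet and audio-stimulation sessions described above:

        # Paired, non-parametric comparison of time-to-sleep (minutes) per subject.
        import numpy as np
        from scipy.stats import wilcoxon

        # Hypothetical values for a slow-onset group: quiet nap vs. audio-stimulated nap.
        quiet = np.array([32.0, 41.5, 28.0, 37.2, 45.1, 30.8])
        audio = np.array([21.0, 25.3, 19.8, 24.0, 27.6, 22.4])

        stat, p_value = wilcoxon(quiet, audio)  # signed-rank test on the paired differences
        print(f"Wilcoxon statistic = {stat}, p = {p_value:.4g}")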

  17. A high efficiency PWM CMOS class-D audio power amplifier

    NASA Astrophysics Data System (ADS)

    Zhangming, Zhu; Lianxi, Liu; Yintang, Yang; Han, Lei

    2009-02-01

    Based on a differential closed-loop feedback technique and a differential pre-amplifier, a high efficiency PWM CMOS class-D audio power amplifier is proposed. A rail-to-rail PWM comparator with window function has been embedded in the class-D audio power amplifier. Design results based on the CSMC 0.5 μm CMOS process show that the maximum efficiency is 90%, the PSRR is -75 dB, the power supply voltage range is 2.5-5.5 V, the THD+N at 1 kHz input frequency is less than 0.20%, the quiescent current with no load is 2.8 mA, and the shutdown current is 0.5 μA. The active area of the class-D audio power amplifier is about 1.47 × 1.52 mm². With this good performance, the class-D audio power amplifier can be applied to several audio power systems.

  18. Three-dimensional audio versus head-down traffic alert and collision avoidance system displays.

    PubMed

    Begault, D R; Pittman, M T

    1996-01-01

    The advantage of a head-up auditory display for situational awareness was evaluated in an experiment designed to measure and compare the acquisition time for capturing visual targets under two conditions: standard head-down Traffic Alert and Collision Avoidance System display and three-dimensional (3-D) audio Traffic Alert and Collision Avoidance System presentation. (The technology used for 3-D audio presentation allows a stereo headphone user to potentially localize a sound at any externalized position in 3-D auditory space). Ten commercial airline crews were tested under full-mission simulation conditions at the NASA-Ames Crew-Vehicle Systems Research Facility Advanced Concepts Flight Simulator. Scenario software generated targets corresponding to aircraft that activated a 3-D aural advisory (the head-up auditory condition) or a standard, visual-audio TCAS advisory (map display with monaural audio alert). Results showed a significant difference in target acquisition time between the two conditions, favoring the 3-D audio Traffic Alert and Collision Avoidance System condition by 500 ms. PMID:11539173

  19. Audio-visual interaction and perceptual assessment of water features used over road traffic noise.

    PubMed

    Galbrun, Laurent; Calarco, Francesca M A

    2014-11-01

    This paper examines the audio-visual interaction and perception of water features used over road traffic noise, including their semantic aural properties, as well as their categorization and evocation properties. The research focused on a wide range of small to medium sized water features that can be used in gardens and parks to promote peacefulness and relaxation. Paired comparisons highlighted the inter-dependence between uni-modal (audio-only or visual-only) and bi-modal (audio-visual) perception, indicating that equal attention should be given to the design of both stimuli. In general, natural looking features tended to increase preference scores (compared to audio-only paired comparison scores), while manmade looking features decreased them. Semantic descriptors showed significant correlations with preferences and were found to be more reliable design criteria than physical parameters. A principal component analysis identified three components within the nine semantic attributes tested: "emotional assessment," "sound quality," and "envelopment and temporal variation." The first two showed significant correlations with audio-only preferences, "emotional assessment" being the most important predictor of preferences, and its attributes naturalness, relaxation, and freshness also being significantly correlated with preferences. Categorization results indicated that natural stream sounds are easily identifiable (unlike waterfalls and fountains), while evocation results showed no unique relationship with preferences. PMID:25373962
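
    A minimal sketch (hypothetical ratings matrix; scikit-learn, not the study's toolchain) of a principal component analysis over nine semantic attributes, retaining three components as reported above:

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        # Hypothetical standardized ratings: 40 water-feature stimuli x 9 semantic attributes.
        ratings = rng.standard_normal((40, 9))

        pca = PCA(n_components=3)
        scores = pca.fit_transform(ratings)    # per-stimulus component scores
        loadings = pca.components_             # 3 x 9 attribute loadings
        print("explained variance ratios:", pca.explained_variance_ratio_)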

  20. Stress Reduction through Audio Distraction in Anxious Pediatric Dental Patients: An Adjunctive Clinical Study

    PubMed Central

    Samadi, Firoza; Jaiswal, JN; Tripathi, Abhay Mani

    2014-01-01

    ABSTRACT Aim: The purpose of the present study was to evaluate the efficacy of 'audio distraction' in anxious pediatric dental patients. Materials and methods: Sixty children were randomly selected and equally divided into two groups of thirty each. The first group was the control group (group A) and the second group was the music group (group B). The dental procedure employed was extraction for both groups. The children included in the music group were allowed to hear an audio presentation throughout the treatment procedure. Anxiety was measured by using Venham's picture test, pulse rate, blood pressure and oxygen saturation. Results: 'Audio distraction' was found efficacious in alleviating the anxiety of pediatric dental patients. Conclusion: 'Audio distraction' did decrease the anxiety in pediatric patients to a significant extent. How to cite this article: Singh D, Samadi F, Jaiswal JN, Tripathi AM. Stress Reduction through Audio Distraction in Anxious Pediatric Dental Patients: An Adjunctive Clinical Study. Int J Clin Pediatr Dent 2014;7(3):149-152. PMID:25709291

  1. Introduction to Exploring Machines

    ERIC Educational Resources Information Center

    Early Childhood Today, 2006

    2006-01-01

    Young children are fascinated by how things "work." They are at a stage of development where they want to experiment with the many ways to use an object or take things apart and put them back together. In the process of exploring tools and machines, children use the scientific method and problem-solving skills. They observe how things work, wonder…

  2. Electrical Discharge Machining.

    ERIC Educational Resources Information Center

    Montgomery, C. M.

    The manual is for use by students learning electrical discharge machining (EDM). It consists of eight units divided into several lessons, each designed to meet one of the stated objectives for the unit. The units deal with: introduction to and advantages of EDM, the EDM process, basic components of EDM, reaction between forming tool and workpiece,…

  3. A Turing Machine Simulator.

    ERIC Educational Resources Information Center

    Navarro, Aaron B.

    1981-01-01

    Presents a program in Level II BASIC for a TRS-80 computer that simulates a Turing machine and discusses the nature of the device. The program is run interactively and is designed to be used as an educational tool by computer science or mathematics students studying computational or automata theory. (MP)

  4. Machine Aids to Translation.

    ERIC Educational Resources Information Center

    Brinkmann, Karl-Heinz

    1981-01-01

    Describes the TEAM Program System of the Siemens Language Services Department, particularly the main features of its terminology data bank. Discusses criteria to which stored terminology must conform and methods of data bank utilization. Concludes by summarizing the consequences that machine-aided translation development has had for the…

  5. The Art Machine.

    ERIC Educational Resources Information Center

    Vertelney, Harry; Grossberger, Lucia

    1983-01-01

    Introduces educators to possibilities of computer graphics using an inexpensive computer system which takes advantage of existing equipment (35mm camera, super 8 movie camera, VHS video cassette recorder). The concept of the "art machine" is explained, highlighting input and output devices (X-Y plotter, graphic tablets, video digitizers). (EJS)

  6. Laser machining of explosives

    SciTech Connect

    Perry, Michael D.; Stuart, Brent C.; Banks, Paul S.; Myers, Booth R.; Sefcik, Joseph A.

    2000-01-01

    The invention consists of a method for machining (cutting, drilling, sculpting) explosives (e.g., TNT, TATB, PETN, RDX, etc.). By using pulses with durations in the range of 5 femtoseconds to 50 picoseconds, extremely precise and rapid machining can be achieved with essentially no heat- or shock-affected zone. In this method, material is removed by a nonthermal mechanism. A combination of multiphoton and collisional ionization creates a critical-density plasma on a time scale much shorter than that on which electron kinetic energy is transferred to the lattice. The resulting plasma is far from thermal equilibrium. The material is, in essence, converted from its initial solid state directly into a fully ionized plasma on a time scale too short for thermal equilibrium to be established with the lattice. As a result, there is negligible heat conduction beyond the region removed, resulting in negligible thermal stress or shock to the material beyond a few microns from the laser-machined surface. Hydrodynamic expansion of the plasma eliminates the need for any ancillary techniques to remove material and produces extremely high quality machined surfaces. There is no detonation or deflagration of the explosive in the process, and the material which is removed is rendered inert.
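
    For reference, the critical electron density above which the plasma reflects the incident laser light (a standard plasma-physics relation, not stated in the patent abstract) is

        n_c \;=\; \frac{\varepsilon_0\, m_e\, \omega^{2}}{e^{2}} \;\approx\; \frac{1.1\times10^{21}}{\lambda_{\mu\mathrm{m}}^{2}}\ \mathrm{cm^{-3}},

    where ω is the laser angular frequency and λ_μm is the wavelength in microns.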

  7. Cybernetic anthropomorphic machine systems

    NASA Technical Reports Server (NTRS)

    Gray, W. E.

    1974-01-01

    Functional descriptions are provided for a number of cybernetic man machine systems that augment the capacity of normal human beings in the areas of strength, reach or physical size, and environmental interaction, and that are also applicable to aiding the neurologically handicapped. Teleoperators, computer control, exoskeletal devices, quadruped vehicles, space maintenance systems, and communications equipment are considered.

  8. Machine-Aided Indexing.

    ERIC Educational Resources Information Center

    Jacobs, Charles R.

    Progress is reported at the 1,000,000 word level on the development of a partial syntactic analysis technique for indexing text. A new indexing subroutine for hyphens is provided. New grammars written and programmed for Machine Aided Indexing (MAI) are discussed. (ED 069 290 is a related document) (Author)

  9. The Answer Machine.

    ERIC Educational Resources Information Center

    Feldman, Susan

    2000-01-01

    Discusses information retrieval systems and the need to have them adapt to user needs, integrate information in any format, reveal patterns and trends in information, and answer questions. Topics include statistics and probability; natural language processing; intelligent agents; concept mapping; machine-aided indexing; text mining; filtering;…

  10. Support vector machines

    NASA Technical Reports Server (NTRS)

    Garay, Michael J.; Mazzoni, Dominic; Davies, Roger; Wagstaff, Kiri

    2004-01-01

    Support Vector Machines (SVMs) are a type of supervised learning algorithm, other examples of which are Artificial Neural Networks (ANNs), Decision Trees, and Naive Bayesian Classifiers. Supervised learning algorithms are used to classify objects labeled by a 'supervisor' - typically a human 'expert.'
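
    A minimal sketch of the supervised-learning setting described above (illustrative scikit-learn usage on a stock dataset, not related to the cited work):

        from sklearn.datasets import load_iris
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        X, y = load_iris(return_X_y=True)      # objects labeled by a "supervisor"
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        clf = SVC(kernel="rbf", C=1.0)         # RBF-kernel support vector classifier
        clf.fit(X_train, y_train)              # learn a decision boundary from labeled examples
        print("held-out accuracy:", clf.score(X_test, y_test))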

  11. Working with Simple Machines

    ERIC Educational Resources Information Center

    Norbury, John W.

    2006-01-01

    A set of examples is provided that illustrate the use of work as applied to simple machines. The ramp, pulley, lever and hydraulic press are common experiences in the life of a student, and their theoretical analysis therefore makes the abstract concept of work more real. The mechanical advantage of each of these systems is also discussed so that…

  12. Working with Simple Machines

    ERIC Educational Resources Information Center

    Norbury, John W.

    2006-01-01

    A set of examples is provided that illustrate the use of work as applied to simple machines. The ramp, pulley, lever and hydraulic press are common experiences in the life of a student, and their theoretical analysis therefore makes the abstract concept of work more real. The mechanical advantage of each of these systems is also discussed so that…
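
    A worked illustration (standard textbook relations, not taken from the article) of how mechanical advantage follows from work conservation for an ideal machine:

        W_{\text{in}} = W_{\text{out}} \;\Rightarrow\; F_{\text{effort}}\, d_{\text{effort}} = F_{\text{load}}\, d_{\text{load}},
        \qquad
        \mathrm{MA} \;=\; \frac{F_{\text{load}}}{F_{\text{effort}}} \;=\; \frac{d_{\text{effort}}}{d_{\text{load}}} .

    For the machines named above this gives, ideally, MA = slope length / height for the ramp, MA = number of supporting rope segments for the pulley system, MA = effort-arm length / load-arm length for the lever, and MA = output-piston area / input-piston area for the hydraulic press.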

  13. Biomimetic machine vision system.

    PubMed

    Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael

    2005-01-01

    Real-time digital imaging for machine vision has proven prohibitive within control systems that employ low-power single processors without compromising the scope of vision or the resolution of captured images. Development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. Development of a single sensor is accomplished, representing a single facet of the fly's eye. This new sensor is then incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. This system "preprocesses" incoming image data, resulting in minimal data processing to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating the resolution issues found in digital vision systems. In this paper, we discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We also discuss the process of developing an analog-based sensor that mimics the characteristics of interest in the biological vision system. The paper concludes with a discussion of how an array of these sensors can be applied toward solving real-world machine vision issues. PMID:15850101

  14. BRASS FOUNDRY MACHINE ROOM USED TO MACHINE CAST BRONZE PIECES ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    BRASS FOUNDRY MACHINE ROOM USED TO MACHINE CAST BRONZE PIECES FOR VALVES AND PREPARE BRONZE VALVE BODIES FOR ASSEMBLY. - Stockham Pipe & Fittings Company, Brass Foundry, 4000 Tenth Avenue North, Birmingham, Jefferson County, AL

  15. 42. MACHINE SHOP Machine shop area with small parts bins ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    42. MACHINE SHOP Machine shop area with small parts bins on the right and pipe storage racks on the left. Remains of the power drive system are suspended from the ceiling. - Hovden Cannery, 886 Cannery Row, Monterey, Monterey County, CA

  16. 12. Photocopied August 1978. CHANNELING MACHINES, NOVEMBER 1898. THESE MACHINES ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    12. Photocopied August 1978. CHANNELING MACHINES, NOVEMBER 1898. THESE MACHINES BLOCKED OUT SECTIONS IN THE ROCK CUT IN PREPARATION FOR DRILLING AND BLASTING. (17) - Michigan Lake Superior Power Company, Portage Street, Sault Ste. Marie, Chippewa County, MI

  17. 8. VIEW OF THE MACHINE SHOP. BY 1966, THE MACHINE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    8. VIEW OF THE MACHINE SHOP. BY 1966, THE MACHINE SHOP HANDLED PRIMARILY STAINLESS STEEL COMPONENTS, WHICH WERE SENT TO THE MACHINE SHOP TO BE FORMED INTO THEIR FINAL SHAPES. (7/24/70) - Rocky Flats Plant, General Manufacturing, Support, Records-Central Computing, Southern portion of Plant, Golden, Jefferson County, CO

  18. Hierarchical structure for audio-video based semantic classification of sports video sequences

    NASA Astrophysics Data System (ADS)

    Kolekar, M. H.; Sengupta, S.

    2005-07-01

    A hierarchical structure for sports event classification based on audio and video content analysis is proposed in this paper. Compared to event classification in other games, that of cricket is very challenging and as yet unexplored. We have successfully solved the cricket video classification problem using a six-level hierarchical structure. The first level performs event detection based on audio energy and the Zero Crossing Rate (ZCR) of the short-time audio signal. In the subsequent levels, we classify the events based on video features using a Hidden Markov Model implemented through Dynamic Programming (HMM-DP), using color or motion as a likelihood function. For some of the game-specific decisions, a rule-based classification is also performed. Our proposed hierarchical structure can easily be applied to other sports. Our results are very promising, and we have moved a step forward towards addressing semantic classification problems in general.
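
    A minimal sketch (assumed frame length and hop size, not the authors' implementation) of the first-level audio features named above, short-time energy and zero-crossing rate:

        import numpy as np

        def short_time_features(x, frame_len=1024, hop=512):
            """Per-frame energy and zero-crossing rate for a mono signal x."""
            energies, zcrs = [], []
            for start in range(0, len(x) - frame_len + 1, hop):
                frame = x[start:start + frame_len]
                energies.append(float(np.sum(frame ** 2)))
                # Fraction of consecutive sample pairs whose sign changes.
                zcrs.append(float(np.mean(np.abs(np.diff(np.sign(frame))) > 0)))
            return np.array(energies), np.array(zcrs)

        # Example on a synthetic 1-second signal at 16 kHz.
        fs = 16000
        t = np.arange(fs) / fs
        x = np.sin(2 * np.pi * 200 * t) + 0.1 * np.random.default_rng(0).standard_normal(fs)
        energy, zcr = short_time_features(x)
        print(energy[:3], zcr[:3])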

  19. Multidimensional QoE of Multiview Video and Selectable Audio IP Transmission.

    PubMed

    Nunome, Toshiro; Ishida, Takuya

    2015-01-01

    We evaluate QoE of multiview video and selectable audio (MVV-SA), in which users can switch not only video but also audio according to a viewpoint change request, transmitted over IP networks by a subjective experiment. The evaluation is performed by the semantic differential (SD) method with 13 adjective pairs. In the subjective experiment, we ask assessors to evaluate 40 stimuli which consist of two kinds of UDP load traffic, two kinds of fixed additional delay, five kinds of playout buffering time, and selectable or unselectable audio (i.e., MVV-SA or the previous MVV-A). As a result, MVV-SA gives higher presence to the user than MVV-A and then enhances QoE. In addition, we employ factor analysis for subjective assessment results to clarify the component factors of QoE. We then find that three major factors affect QoE in MVV-SA. PMID:26106640

  20. Multidimensional QoE of Multiview Video and Selectable Audio IP Transmission

    PubMed Central

    Nunome, Toshiro; Ishida, Takuya

    2015-01-01

    We evaluate QoE of multiview video and selectable audio (MVV-SA), in which users can switch not only video but also audio according to a viewpoint change request, transmitted over IP networks by a subjective experiment. The evaluation is performed by the semantic differential (SD) method with 13 adjective pairs. In the subjective experiment, we ask assessors to evaluate 40 stimuli which consist of two kinds of UDP load traffic, two kinds of fixed additional delay, five kinds of playout buffering time, and selectable or unselectable audio (i.e., MVV-SA or the previous MVV-A). As a result, MVV-SA gives higher presence to the user than MVV-A and then enhances QoE. In addition, we employ factor analysis for subjective assessment results to clarify the component factors of QoE. We then find that three major factors affect QoE in MVV-SA. PMID:26106640

  1. Synchronized and noise-robust audio recordings during realtime magnetic resonance imaging scans (L)

    PubMed Central

    Bresch, Erik; Nielsen, Jon; Nayak, Krishna; Narayanan, Shrikanth

    2007-01-01

    This letter describes a data acquisition setup for recording, and processing, running speech from a person in a magnetic resonance imaging (MRI) scanner. The main focus is on ensuring synchronicity between image and audio acquisition, and in obtaining good signal to noise ratio to facilitate further speech analysis and modeling. A field-programmable gate array based hardware design for synchronizing the scanner image acquisition to other external data such as audio is described. The audio setup itself features two fiber optical microphones and a noise-canceling filter. Two noise cancellation methods are described including a novel approach using a pulse sequence specific model of the gradient noise of the MRI scanner. The setup is useful for scientific speech production studies. Sample results of speech and singing data acquired and processed using the proposed method are given. PMID:17069275
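
    The letter's own noise suppression uses a pulse-sequence-specific model of the gradient noise; as a generic point of comparison only (not the authors' method), a standard LMS adaptive noise canceller driven by a separate noise reference can be sketched as:

        import numpy as np

        def lms_cancel(primary, reference, n_taps=64, mu=1e-3):
            """Subtract from `primary` the component predictable from `reference` (noise)."""
            primary = np.asarray(primary, dtype=float)
            reference = np.asarray(reference, dtype=float)
            w = np.zeros(n_taps)
            cleaned = np.zeros_like(primary)
            for n in range(n_taps, len(primary)):
                x = reference[n - n_taps:n][::-1]   # most recent reference samples
                noise_est = w @ x                   # estimated noise leaking into the speech channel
                e = primary[n] - noise_est          # error signal = cleaned-speech estimate
                w += 2.0 * mu * e * x               # LMS weight update
                cleaned[n] = e
            return cleaned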

  2. Portable Fatigue-Testing Machine

    NASA Technical Reports Server (NTRS)

    Lewis, J.; Daugherty, C.

    1984-01-01

    Portable machine constructed for fatigue testing of structural materials or machinery parts subjected to fatigue loads. Piezoelectric crystal stack adds oscillatory force to constant force. Machine tests wider variety of objects than with usual rotating-beam fatigue tests.

  3. An Audio-Visual Resource Notebook for Adult Consumer Education. An Annotated Bibliography of Selected Audio-Visual Aids for Adult Consumer Education, with Special Emphasis on Materials for Elderly, Low-Income and Handicapped Consumers.

    ERIC Educational Resources Information Center

    Virginia State Dept. of Agriculture and Consumer Services, Richmond, VA.

    This document is an annotated bibliography of audio-visual aids in the field of consumer education, intended especially for use among low-income, elderly, and handicapped consumers. It was developed to aid consumer education program planners in finding audio-visual resources to enhance their presentations. Materials listed include 293 resources…

  4. An Audio-Visual Resource Notebook for Adult Consumer Education. An Annotated Bibliography of Selected Audio-Visual Aids for Adult Consumer Education, with Special Emphasis on Materials for Elderly, Low-Income and Handicapped Consumers.

    ERIC Educational Resources Information Center

    Virginia State Dept. of Agriculture and Consumer Services, Richmond, VA.

    This document is an annotated bibliography of audio-visual aids in the field of consumer education, intended especially for use among low-income, elderly, and handicapped consumers. It was developed to aid consumer education program planners in finding audio-visual resources to enhance their presentations. Materials listed include 293 resources…

  5. Audio-magnetotelluric survey to characterize the Sunnyside porphyry copper system in the Patagonia Mountains, Arizona

    USGS Publications Warehouse

    Sampson, Jay A.; Rodriguez, Brian D.

    2010-01-01

    The Sunnyside porphyry copper system is part of the concealed San Rafael Valley porphyry system located in the Patagonia Mountains of Arizona. The U.S. Geological Survey is conducting a series of multidisciplinary studies as part of the Assessment Techniques for Concealed Mineral Resources project. To help characterize the size, resistivity, and skin depth of the polarizable mineral deposit concealed beneath thick overburden, a regional east-west audio-magnetotelluric sounding profile was acquired. The purpose of this report is to release the audio-magnetotelluric sounding data collected along that east-west profile. No interpretation of the data is included.

  6. Instructional Insights: Audio Feedback as Means of Engaging the Occupational Therapy Student.

    PubMed

    Nielsen, Sarah K

    2016-01-01

    Constructivist learning approaches require faculty to engage students in the reflective learning process, yet students can begin to view this process as mundane and at times do not engage in it or use the feedback provided. This article describes the results of applying audio feedback to overcome these obstacles in a practicum integration course. Student reports and assignment performance indicated increased learning and engagement. The instructor found giving audio feedback more efficient than written feedback, as it overcame the inflection issues associated with the written word. Recorded files also reduced the need for additional student appointments to clarify the feedback. PMID:26295848

  7. Soda pop vending machine injuries.

    PubMed

    Cosio, M Q

    1988-11-11

    Fifteen male patients, 15 to 24 years of age, sustained injuries after rocking soda machines. The machines fell onto the victims, resulting in a variety of injuries. Three were killed. The remaining 12 required hospitalization for their injuries. Unless changes are made to safeguard these machines, people will continue to suffer severe and possibly fatal injuries from what are largely preventable accidents. PMID:3184337

  8. Hydraulic Fatigue-Testing Machine

    NASA Technical Reports Server (NTRS)

    Hodo, James D.; Moore, Dennis R.; Morris, Thomas F.; Tiller, Newton G.

    1987-01-01

    Fatigue-testing machine applies fluctuating tension to number of specimens at same time. When sample breaks, machine continues to test remaining specimens. Series of tensile tests needed to determine fatigue properties of materials performed more rapidly than in conventional fatigue-testing machine.

  9. Sequence invariant state machines

    NASA Technical Reports Server (NTRS)

    Whitaker, S.; Manjunath, S.

    1990-01-01

    A synthesis method and new VLSI architecture are introduced to realize sequential circuits that have the ability to implement any state machine having N states and m inputs, regardless of the actual sequence specified in the flow table. A design method is proposed that utilizes BTS logic to implement regular and dense circuits. A given state sequence can be programmed with power supply connections or dynamically reallocated if stored in a register. Arbitrary flow table sequences can be modified or programmed to dynamically alter the function of the machine. This allows VLSI controllers to be designed with the programmability of a general purpose processor but with the compact size and performance of dedicated logic.
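
    As a software analogy only (it does not model the BTS logic or the VLSI circuit itself), the idea of a machine whose state sequence is held in a reloadable register can be sketched as a table-driven state machine whose flow table is replaced at run time:

        class ProgrammableFSM:
            """State machine realized by a stored flow table rather than fixed logic."""

            def __init__(self, table, start):
                self.table = dict(table)   # (state, input) -> next state
                self.state = start

            def reprogram(self, table):
                """Dynamically reallocate the stored sequence without changing the machine."""
                self.table = dict(table)

            def step(self, symbol):
                self.state = self.table[(self.state, symbol)]
                return self.state

        # A 3-state, 2-input flow table; a different table can be loaded later via reprogram().
        fsm = ProgrammableFSM({("A", 0): "B", ("A", 1): "A",
                               ("B", 0): "C", ("B", 1): "A",
                               ("C", 0): "A", ("C", 1): "B"}, start="A")
        print([fsm.step(s) for s in [0, 0, 1, 0]])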

  10. Prediction of Machine Tool Condition Using Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Wang, Peigong; Meng, Qingfeng; Zhao, Jian; Li, Junjie; Wang, Xiufeng

    2011-07-01

    Condition monitoring and prediction for CNC machine tools are investigated in this paper. Because CNC machine tool condition data often comprise only small numbers of samples, a condition prediction method for CNC machine tools based on support vector machines (SVMs) is proposed, and one-step and multi-step condition prediction models are constructed. The support vector machine prediction models are used to predict working-condition trends of a certain type of CNC worm wheel and gear grinding machine from sequence data of the vibration signal collected during machining. The relationship between different eigenvalues of the CNC vibration signal and machining quality is also discussed. The test results show that the trend of the vibration-signal peak-to-peak value in the surface normal direction is most strongly related to the trend of the surface roughness value. In working-condition trend prediction, the support vector machine achieves higher prediction accuracy in both short-term (one-step) and long-term (multi-step) prediction than the autoregressive (AR) model and the RBF neural network. Experimental results show that it is feasible to apply support vector machines to CNC machine tool condition prediction.
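
    A minimal sketch (assumed sliding-window features and RBF-kernel support vector regression; the paper's exact feature extraction is not reproduced) of one-step-ahead prediction of a vibration-signal trend such as the peak-to-peak value:

        import numpy as np
        from sklearn.svm import SVR

        def make_windows(series, lag=5):
            """Turn a 1-D trend into (lagged window, next value) training pairs."""
            X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
            y = series[lag:]
            return X, y

        rng = np.random.default_rng(1)
        # Hypothetical peak-to-peak trend over 200 successive machining runs.
        trend = 1.0 + np.cumsum(rng.normal(0.05, 0.02, 200))

        X, y = make_windows(trend, lag=5)
        model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:150], y[:150])
        pred = model.predict(X[150:])          # one-step-ahead predictions on the held-out tail
        print("mean absolute error:", float(np.mean(np.abs(pred - y[150:]))))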

  11. Effect of Machining Velocity in Nanoscale Machining Operations

    NASA Astrophysics Data System (ADS)

    Islam, Sumaiya; Ibrahim, Raafat; Khondoker, Noman

    2015-04-01

    The aim of this study is to investigate the generated forces and deformations of single-crystal Cu with (100), (110) and (111) crystallographic orientations in nanoscale machining operations. A nanoindenter equipped with a nanoscratching attachment was used for the machining operations and in-situ observation of a nanoscale groove. As a machining parameter, the machining velocity was varied to measure the normal and cutting forces. At a fixed machining velocity, different levels of normal and cutting forces were generated due to the different crystallographic orientations of the specimens. Moreover, after the machining operation the percentage of elastic recovery was measured, and it was found that both elastic and plastic deformation were responsible for producing a nanoscale groove within the range of machining velocities from 250 to 1000 nm/s.
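
    For reference, the percentage of elastic recovery in a nanoscratch test is commonly defined (standard definition, not necessarily the exact metric used in this study) as

        \mathrm{ER}\,(\%) \;=\; \frac{d_{\mathrm{p}} - d_{\mathrm{r}}}{d_{\mathrm{p}}} \times 100,

    where d_p is the penetration depth under load during scratching and d_r is the residual groove depth measured after unloading.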

  12. An Examination of the Effectiveness of Embedded Audio Feedback for English as a Foreign Language Students in Asynchronous Online Discussions

    ERIC Educational Resources Information Center

    Olesova, Larisa A.

    2011-01-01

    This study examined the effect of asynchronous embedded audio feedback on English as a Foreign Language (EFL) students' higher-order learning and perception of the audio feedback versus text-based feedback when the students participated in asynchronous online discussions. In addition, this study examined how the impact and perceptions…

  13. 47 CFR 25.214 - Technical requirements for space stations in the satellite digital audio radio service and...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false Technical requirements for space stations in the satellite digital audio radio service and associated terrestrial repeaters. 25.214 Section 25.214... Technical Standards § 25.214 Technical requirements for space stations in the satellite digital audio...

  14. Audio and Video Podcasts of Lectures for Campus-Based Students: Production and Evaluation of Student Use

    ERIC Educational Resources Information Center

    Copley, Jonathan

    2007-01-01

    Podcasting has become a popular medium for accessing and assimilating information and podcasts are increasingly being used to deliver audio recordings of lectures to campus-based students. This paper describes a simple, cost-effective and file size-efficient method for producing video podcasts combining lecture slides and audio without a…

  15. Audio and Video Podcasts of Lectures for Campus-Based Students: Production and Evaluation of Student Use

    ERIC Educational Resources Information Center

    Copley, Jonathan

    2007-01-01

    Podcasting has become a popular medium for accessing and assimilating information and podcasts are increasingly being used to deliver audio recordings of lectures to campus-based students. This paper describes a simple, cost-effective and file size-efficient method for producing video podcasts combining lecture slides and audio without a…

  16. ACES Human Sexuality Training Network Handbook. A Compilation of Sexuality Course Syllabi and Audio-Visual Material.

    ERIC Educational Resources Information Center

    American Association for Counseling and Development, Alexandria, VA.

    This handbook contains a compilation of human sexuality course syllabi and audio-visual materials. It was developed to enable sex educators to identify and contact one another, to compile Human Sexuality Course Syllabi from across the country, and to bring to attention audio-visual materials which are available for teaching Human Sexuality…

  17. Audio-Tutorial Elementary School Science Instruction as a Method for Study of Children's Concept Learning: Particulate Nature of Matter

    ERIC Educational Resources Information Center

    Hibbard, K. Michael; Novak, Joseph D.

    1975-01-01

    The treatment group of first-graders received audio-tutorial instruction in the particulate nature of matter; the control group received audio-tutorial instruction in a nonscience subject. The treatment group used a particulate model to explain the nature of smells much more effectively than the control group. (MLH)

  18. An Introduction to Boiler Water Chemistry for the Marine Engineer: A Text of Audio-Tutorial Instruction.

    ERIC Educational Resources Information Center

    Schlenker, Richard M.; And Others

    Presented is a manuscript for an introductory boiler water chemistry course for marine engineer education. The course is modular, self-paced, audio-tutorial, contract-graded, and taught through a combined lecture-laboratory format. Lectures are presented to students individually via audio tapes and 35 mm slides. The course consists of a total of 17 modules -…

  19. An Examination of the Effectiveness of Embedded Audio Feedback for English as a Foreign Language Students in Asynchronous Online Discussions

    ERIC Educational Resources Information Center

    Olesova, Larisa A.

    2011-01-01

    This study examined the effect of asynchronous embedded audio feedback on English as a Foreign Language (EFL) students' higher-order learning and perception of the audio feedback versus text-based feedback when the students participated in asynchronous online discussions. In addition, this study examined how the impact and perceptions…

  20. GED Preparation via the Sundial Network. An Audio Teleconferencing System. Final Report. A 310/Special Demonstration Project 1984-1985.

    ERIC Educational Resources Information Center

    Rio Salado Community Coll., AZ.

    A project was conducted to deliver general educational development (GED) instruction through an audio teleconferencing system to adult students in Arizona. Using a previously existing audio teleconferencing system owned by Rio Salado Community College in Phoenix, Arizona, project staff developed a series of credit and noncredit teleconferencing…