Sample records for audio streaming technology

  1. Development and Assessment of Web Courses That Use Streaming Audio and Video Technologies.

    ERIC Educational Resources Information Center

    Ingebritsen, Thomas S.; Flickinger, Kathleen

    Iowa State University, through a program called Project BIO (Biology Instructional Outreach), has been using RealAudio technology for about 2 years in college biology courses that are offered entirely via the World Wide Web. RealAudio is a type of streaming media technology that can be used to deliver audio content and a variety of other media…

  2. Tune in the Net with RealAudio.

    ERIC Educational Resources Information Center

    Buchanan, Larry

    1997-01-01

    Describes how to connect to the RealAudio Web site to download a player that provides sound from Web pages to the computer through streaming technology. Explains hardware and software requirements and provides addresses for other RealAudio Web sites, including weather information and current news. (LRW)

  3. Interactive MPEG-4 low-bit-rate speech/audio transmission over the Internet

    NASA Astrophysics Data System (ADS)

    Liu, Fang; Kim, JongWon; Kuo, C.-C. Jay

    1999-11-01

    The recently developed MPEG-4 technology enables the coding and transmission of natural and synthetic audio-visual data in the form of objects. In an effort to extend the object-based functionality of MPEG-4 to real-time Internet applications, architectural prototypes of the multiplex layer and transport layer tailored for transmission of MPEG-4 data over IP are under debate within the Internet Engineering Task Force (IETF) and the MPEG-4 Systems Ad Hoc group. In this paper, we present an architecture for an interactive MPEG-4 speech/audio transmission system over the Internet. It utilizes a framework of Real Time Streaming Protocol (RTSP) over Real-time Transport Protocol (RTP) to provide controlled, on-demand delivery of real-time speech/audio data. Based on a client-server model, a couple of low-bit-rate bit streams (real-time speech/audio, pre-encoded speech/audio) are multiplexed and transmitted via a single RTP channel to the receiver. The MPEG-4 Scene Description (SD) and Object Descriptor (OD) bit streams are securely sent through the RTSP control channel. Upon reception, an initial MPEG-4 audio-visual scene is constructed after de-multiplexing, decoding of bit streams, and scene composition. A receiver is allowed to manipulate the initial audio-visual scene presentation locally, or interactively arrange scene changes by sending requests to the server. A server may also choose to update the client with new streams and a list of contents for user selection.
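
    As a concrete illustration of the RTSP-over-RTP control flow described above, the following minimal sketch issues the DESCRIBE/SETUP/PLAY requests a client would send before RTP audio packets start to arrive. The server name, stream URL, and client ports are hypothetical placeholders, not details taken from the paper.

```python
# Minimal sketch of an RTSP control exchange (assumed server rtsp://example.org/talk,
# a placeholder). Error handling and full reply parsing are omitted for brevity.
import socket

SERVER, PORT = "example.org", 554                  # 554 is the standard RTSP port
URL = f"rtsp://{SERVER}/talk"

def rtsp_request(sock, method, cseq, extra_headers=""):
    """Send one RTSP request and return the server's raw text reply."""
    request = f"{method} {URL} RTSP/1.0\r\nCSeq: {cseq}\r\n{extra_headers}\r\n"
    sock.sendall(request.encode("ascii"))
    return sock.recv(4096).decode("ascii", errors="replace")

with socket.create_connection((SERVER, PORT)) as s:
    # DESCRIBE fetches the SDP session description (codecs, stream identifiers).
    print(rtsp_request(s, "DESCRIBE", 1, "Accept: application/sdp\r\n"))
    # SETUP asks for RTP over UDP on client ports 5004 (RTP) and 5005 (RTCP).
    reply = rtsp_request(s, "SETUP", 2,
                         "Transport: RTP/AVP;unicast;client_port=5004-5005\r\n")
    session_line = next(l for l in reply.splitlines() if l.startswith("Session:"))
    session = session_line.split()[1].split(";")[0]
    # PLAY starts on-demand delivery; audio then flows on the negotiated RTP channel.
    print(rtsp_request(s, "PLAY", 3, f"Session: {session}\r\n"))
```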

  4. Ad Hoc Selection of Voice over Internet Streams

    NASA Technical Reports Server (NTRS)

    Macha, Mitchell G. (Inventor); Bullock, John T. (Inventor)

    2014-01-01

    A method and apparatus for a communication system technique involving ad hoc selection of at least two audio streams is provided. Each of the at least two audio streams is a packetized version of an audio source. A data connection exists between a server and a client where a transport protocol actively propagates the at least two audio streams from the server to the client. Furthermore, software instructions executable on the client indicate a presence of the at least two audio streams, allow selection of at least one of the at least two audio streams, and direct the selected at least one of the at least two audio streams for audio playback.

  5. Ad Hoc Selection of Voice over Internet Streams

    NASA Technical Reports Server (NTRS)

    Macha, Mitchell G. (Inventor); Bullock, John T. (Inventor)

    2008-01-01

    A method and apparatus for a communication system technique involving ad hoc selection of at least two audio streams is provided. Each of the at least two audio streams is a packetized version of an audio source. A data connection exists between a server and a client where a transport protocol actively propagates the at least two audio streams from the server to the client. Furthermore, software instructions executable on the client indicate a presence of the at least two audio streams, allow selection of at least one of the at least two audio streams, and direct the selected at least one of the at least two audio streams for audio playback.

  6. Next-Gen Video

    ERIC Educational Resources Information Center

    Arnn, Barbara

    2007-01-01

    This article discusses how schools across the US are using the latest videoconference and audio/video streaming technologies creatively to move to the next level of their very specific needs. At the Georgia Institute of Technology in Atlanta, the technology that is the backbone of the school's extensive distance learning program has to be…

  7. Online Class Review: Using Streaming-Media Technology

    ERIC Educational Resources Information Center

    Loudon, Marc; Sharp, Mark

    2006-01-01

    We present an automated system that allows students to replay both audio and video from a large nonmajors' organic chemistry class as streaming RealMedia. Once established, this system requires no technical intervention and is virtually transparent to the instructor. This gives students access to online class review at any time. Assessment has…

  8. Video Streaming in Online Learning

    ERIC Educational Resources Information Center

    Hartsell, Taralynn; Yuen, Steve Chi-Yin

    2006-01-01

    The use of video in teaching and learning is a common practice in education today. As learning online becomes more of a common practice in education, streaming video and audio will play a bigger role in delivering course materials to online learners. This form of technology brings courses alive by allowing online learners to use their visual and…

  9. Structuring Broadcast Audio for Information Access

    NASA Astrophysics Data System (ADS)

    Gauvain, Jean-Luc; Lamel, Lori

    2003-12-01

    One rapidly expanding application area for state-of-the-art speech recognition technology is the automatic processing of broadcast audiovisual data for information access. Since much of the linguistic information is found in the audio channel, speech recognition is a key enabling technology which, when combined with information retrieval techniques, can be used for searching large audiovisual document collections. Audio indexing must take into account the specificities of audio data, such as the need to deal with a continuous data stream and an imperfect word transcription. Other important considerations are dealing with language specificities and facilitating language portability. At Laboratoire d'Informatique pour la Mécanique et les Sciences de l'Ingénieur (LIMSI), broadcast news transcription systems have been developed for seven languages: English, French, German, Mandarin, Portuguese, Spanish, and Arabic. The transcription systems have been integrated into prototype demonstrators for several application areas such as audio data mining, structuring audiovisual archives, selective dissemination of information, and topic tracking for media monitoring. As examples, this paper addresses the spoken document retrieval and topic tracking tasks.

  10. Video streaming into the mainstream.

    PubMed

    Garrison, W

    2001-12-01

    Changes in Internet technology are making possible the delivery of a richer mixture of media through data streaming. High-quality, dynamic content, such as video and audio, can be incorporated into Websites simply, flexibly and interactively. Technologies such as G3 mobile communication, ADSL, cable and satellites enable new ways of delivering medical services, information and learning. Systems such as Quicktime, Windows Media and Real Video provide reliable data streams as video-on-demand and users can tailor the experience to their own interests. The Learning Development Centre at the University of Portsmouth has successfully used streaming technologies together with e-learning tools such as dynamic HTML, Flash, 3D objects and online assessment to deliver online course content in economics and earth science. The Lifesign project--to develop, catalogue and stream health sciences media for teaching--is described and future medical applications are discussed.

  11. Digital Multicasting of Multiple Audio Streams

    NASA Technical Reports Server (NTRS)

    Macha, Mitchell; Bullock, John

    2007-01-01

    The Mission Control Center Voice Over Internet Protocol (MCC VOIP) system (see figure) comprises hardware and software that effect simultaneous, nearly real-time transmission of as many as 14 different audio streams to authorized listeners via the MCC intranet and/or the Internet. The original version of the MCC VOIP system was conceived to enable flight-support personnel located in offices outside a spacecraft mission control center to monitor audio loops within the mission control center. Different versions of the MCC VOIP system could be used for a variety of public and commercial purposes - for example, to enable members of the general public to monitor one or more NASA audio streams through their home computers, to enable air-traffic supervisors to monitor communication between airline pilots and air-traffic controllers in training, and to monitor conferences among brokers in a stock exchange. At the transmitting end, the audio-distribution process begins with feeding the audio signals to analog-to-digital converters. The resulting digital streams are sent through the MCC intranet, using a user datagram protocol (UDP), to a server that converts them to encrypted data packets. The encrypted data packets are then routed to the personal computers of authorized users by use of multicasting techniques. The total data-processing load on the portion of the system upstream of and including the encryption server is the total load imposed by all of the audio streams being encoded, regardless of the number of the listeners or the number of streams being monitored concurrently by the listeners. The personal computer of a user authorized to listen is equipped with special-purpose MCC audio-player software. When the user launches the program, the user is prompted to provide identification and a password. In one of two access-control provisions, the program is hard-coded to validate the user's identity and password against a list maintained on a domain-controller computer at the MCC. In the other access-control provision, the program verifies that the user is authorized to have access to the audio streams. Once both access-control checks are completed, the audio software presents a graphical display that includes audio-stream-selection buttons and volume-control sliders. The user can select all or any subset of the available audio streams and can adjust the volume of each stream independently of that of the other streams. The audio-player program spawns a "read" process for the selected stream(s). The spawned process sends, to the router(s), a "multicast-join" request for the selected streams. The router(s) responds to the request by sending the encrypted multicast packets to the spawned process. The spawned process receives the encrypted multicast packets and sends a decryption packet to audio-driver software. As the volume or muting features are changed by the user, interrupts are sent to the spawned process to change the corresponding attributes sent to the audio-driver software. The total latency of this system - that is, the total time from the origination of the audio signals to generation of sound at a listener's computer - lies between four and six seconds.
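
    The "multicast-join" step described above can be sketched at the socket level. The group address, port, and payload handling below are hypothetical placeholders; this is an illustration of IP multicast membership, not the NASA MCC software.

```python
# Minimal sketch: a listener-side process joins an IP multicast group and receives
# UDP audio packets. Group address and port are hypothetical placeholders.
import socket
import struct

GROUP, PORT = "239.1.2.3", 5004   # hypothetical multicast group carrying one audio loop

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# The "multicast-join" step: ask the local router for membership in the selected group.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    packet, addr = sock.recvfrom(2048)
    # A real client would decrypt the packet and hand the PCM payload to an audio driver;
    # here we only report that data is arriving.
    print(f"received {len(packet)} bytes from {addr}")
```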

  12. 47 CFR 73.403 - Digital audio broadcasting service requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... programming stream at no direct charge to listeners. In addition, a broadcast radio station must simulcast its analog audio programming on one of its digital audio programming streams. The DAB audio programming... analog programming service currently provided to listeners. (b) Emergency information. The emergency...

  13. A New Species of Science Education: Harnessing the Power of Interactive Technology to Teach Laboratory Science

    ERIC Educational Resources Information Center

    Reddy, Christopher

    2014-01-01

    Interactive television is a type of distance education that uses streaming audio and video technology for real-time student-teacher interaction. Here, I discuss the design and logistics for developing a high school laboratory-based science course taught to students at a distance using interactive technologies. The goal is to share a successful…

  14. Web Audio/Video Streaming Tool

    NASA Technical Reports Server (NTRS)

    Guruvadoo, Eranna K.

    2003-01-01

    In order to promote a NASA-wide educational outreach program to educate and inform the public of space exploration, NASA, at Kennedy Space Center, is seeking efficient ways to add more content to the web by streaming audio/video files. This project proposes a high-level overview of a framework for the creation, management, and scheduling of audio/video assets over the web. To support short-term goals, the prototype of a web-based tool is designed and demonstrated to automate the process of streaming audio/video files. The tool provides web-enabled user interfaces to manage video assets, create publishable schedules of video assets for streaming, and schedule the streaming events. These operations are performed on user-defined and system-derived metadata of audio/video assets stored in a relational database while the assets reside on a separate repository. The prototype tool is designed using ColdFusion 5.0.

  15. Coexistence issues for a 2.4 GHz wireless audio streaming in presence of bluetooth paging and WLAN

    NASA Astrophysics Data System (ADS)

    Pfeiffer, F.; Rashwan, M.; Biebl, E.; Napholz, B.

    2015-11-01

    Nowadays, customers expect to integrate their mobile electronic devices (smartphones and laptops) in a vehicle to form a wireless network. Typically, IEEE 802.11 is used to provide a high-speed wireless local area network (WLAN) and Bluetooth is used for cable replacement applications in a wireless personal area network (PAN). In addition, Daimler uses KLEER as a third wireless technology in the unlicensed (UL) 2.4 GHz-ISM-band to transmit full CD-quality digital audio. Since Bluetooth, IEEE 802.11, and KLEER operate in the same frequency band, it has to be ensured that all three technologies can be used simultaneously without interference. In this paper, we focus on the impact of Bluetooth and IEEE 802.11 as interferers in the presence of a KLEER audio transmission.

  16. Robust audio-visual speech recognition under noisy audio-video conditions.

    PubMed

    Stewart, Darryl; Seymour, Rowan; Pass, Adrian; Ming, Ji

    2014-02-01

    This paper presents the maximum weighted stream posterior (MWSP) model as a robust and efficient stream integration method for audio-visual speech recognition in environments where the audio or video streams may be subjected to unknown and time-varying corruption. A significant advantage of MWSP is that it does not require any specific measurements of the signal in either stream to calculate appropriate stream weights during recognition, and as such it is modality-independent. This also means that MWSP complements and can be used alongside many of the other approaches that have been proposed in the literature for this problem. For evaluation we used the large XM2VTS database for speaker-independent audio-visual speech recognition. The extensive tests include both clean and corrupted utterances with corruption added in either or both the video and audio streams using a variety of types (e.g., MPEG-4 video compression) and levels of noise. The experiments show that this approach gives excellent performance in comparison to another well-known dynamic stream weighting approach and also compared to any fixed-weighted integration approach in both clean conditions and when noise is added to either stream. Furthermore, our experiments show that the MWSP approach dynamically selects suitable integration weights on a frame-by-frame basis according to the level of noise in the streams and also according to the naturally fluctuating relative reliability of the modalities even in clean conditions. The MWSP approach is shown to maintain robust recognition performance in all tested conditions, while requiring no prior knowledge about the type or level of noise.

  17. Real-Time Transmission and Storage of Video, Audio, and Health Data in Emergency and Home Care Situations

    NASA Astrophysics Data System (ADS)

    Barbieri, Ivano; Lambruschini, Paolo; Raggio, Marco; Stagnaro, Riccardo

    2007-12-01

    The increase in the availability of bandwidth for wireless links, network integration, and the computational power on fixed and mobile platforms at affordable costs nowadays allows for the handling of audio and video data at a quality suitable for medical applications. These information streams can support both continuous monitoring and emergency situations. According to this scenario, the authors have developed and implemented the mobile communication system which is described in this paper. The system is based on the ITU-T H.323 multimedia terminal recommendation, suitable for real-time data/video/audio and telemedical applications. The video and audio codecs, H.264 and G.723.1 respectively, were implemented and optimized in order to obtain high performance on the system target processors. Offline media streaming storage and retrieval functionalities were supported by integrating a relational database in the hospital central system. The system is based on low-cost consumer technologies such as general packet radio service (GPRS) and wireless local area network (WLAN or WiFi) for low-band data/video transmission. Implementation and testing were carried out for medical emergency and telemedicine application. In this paper, the emergency case study is described.

  18. Audio-video feature correlation: faces and speech

    NASA Astrophysics Data System (ADS)

    Durand, Gwenael; Montacie, Claude; Caraty, Marie-Jose; Faudemay, Pascal

    1999-08-01

    This paper presents a study of the correlation of features automatically extracted from the audio stream and the video stream of audiovisual documents. In particular, we were interested in finding out whether speech analysis tools could be combined with face detection methods, and to what extent they should be combined. A generic audio signal partitioning algorithm was first used to detect Silence/Noise/Music/Speech segments in a full-length movie. A generic object detection method was applied to the keyframes extracted from the movie in order to detect the presence or absence of faces. The correlation between the presence of a face in the keyframes and of the corresponding voice in the audio stream was studied. A third stream, which is the script of the movie, is warped onto the speech channel in order to automatically label faces appearing in the keyframes with the name of the corresponding character. We naturally found that extracted audio and video features were related in many cases, and that significant benefits can be obtained from the joint use of audio and video analysis methods.

  19. Audio Steganography with Embedded Text

    NASA Astrophysics Data System (ADS)

    Teck Jian, Chua; Chai Wen, Chuah; Rahman, Nurul Hidayah Binti Ab.; Hamid, Isredza Rahmi Binti A.

    2017-08-01

    Audio steganography is about hiding a secret message inside audio. It is a technique used to secure the transmission of secret information or hide its existence. It may also provide confidentiality to the secret message if the message is encrypted. To date, most steganography software such as Mp3Stego and DeepSound uses a block cipher such as the Advanced Encryption Standard or the Data Encryption Standard to encrypt the secret message. This is good security practice. However, the encrypted message may become too long to embed in the audio and cause distortion of the cover audio if the secret message is too long. Hence, there is a need to encrypt the message with a stream cipher before embedding it into the audio. This is because a stream cipher provides bit-by-bit encryption, whereas a block cipher encrypts fixed-length blocks, which results in a longer output compared to a stream cipher. Hence, an audio steganography system that embeds text encrypted with the Rivest Cipher 4 (RC4) stream cipher is designed, developed, and tested in this project.
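
    The two steps described above (RC4 encryption of the text, then least-significant-bit embedding in PCM samples) can be sketched as follows. The key, message, and placeholder samples are illustrative only; this is not the project's actual tool.

```python
# Minimal sketch: RC4-encrypt a text message, then hide the ciphertext bits in the
# least-significant bits of 16-bit PCM samples. Real audio steganography needs more care.
def rc4(key: bytes, data: bytes) -> bytes:
    S = list(range(256))                       # key-scheduling algorithm (KSA)
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0              # pseudo-random generation (PRGA)
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def embed_lsb(samples: list, payload: bytes) -> list:
    bits = [(b >> k) & 1 for b in payload for k in range(8)]
    assert len(bits) <= len(samples), "cover audio too short"
    return [(s & ~1) | bit for s, bit in zip(samples, bits)] + samples[len(bits):]

cipher = rc4(b"secret-key", "meet at dawn".encode())   # hypothetical key and message
stego = embed_lsb([0] * 200, cipher)                   # placeholder samples stand in for PCM
```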

  20. Applying Spatial Audio to Human Interfaces: 25 Years of NASA Experience

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.; Godfrey, Martine; Miller, Joel D.; Anderson, Mark R.

    2010-01-01

    From the perspective of human factors engineering, the inclusion of spatial audio within a human-machine interface is advantageous from several perspectives. Demonstrated benefits include the ability to monitor multiple streams of speech and non-speech warning tones using a cocktail party advantage, and for aurally-guided visual search. Other potential benefits include the spatial coordination and interaction of multimodal events, and evaluation of new communication technologies and alerting systems using virtual simulation. Many of these technologies were developed at NASA Ames Research Center, beginning in 1985. This paper reviews examples and describes the advantages of spatial sound in NASA-related technologies, including space operations, aeronautics, and search and rescue. The work has involved hardware and software development as well as basic and applied research.

  1. Enhancing Online Education Using Collaboration Solutions

    ERIC Educational Resources Information Center

    Ge, Shuzhi Sam; Tok, Meng Yong

    2003-01-01

    With the advances in Internet technologies, online education is fast gaining ground as an extension to traditional education. Webcast allows lectures conducted on campus to be viewed by students located at remote sites by streaming the audio and video content over Internet Protocol (IP) networks. However when used alone, webcast does not provide…

  2. Streaming Audio and Video: New Challenges and Opportunities for Museums.

    ERIC Educational Resources Information Center

    Spadaccini, Jim

    Streaming audio and video present new challenges and opportunities for museums. Streaming media is easier to author and deliver to Internet audiences than ever before; digital video editing is commonplace now that the tools--computers, digital video cameras, and hard drives--are so affordable; the cost of serving video files across the Internet…

  3. Temporal Structure and Complexity Affect Audio-Visual Correspondence Detection

    PubMed Central

    Denison, Rachel N.; Driver, Jon; Ruff, Christian C.

    2013-01-01

    Synchrony between events in different senses has long been considered the critical temporal cue for multisensory integration. Here, using rapid streams of auditory and visual events, we demonstrate how humans can use temporal structure (rather than mere temporal coincidence) to detect multisensory relatedness. We find psychophysically that participants can detect matching auditory and visual streams via shared temporal structure for crossmodal lags of up to 200 ms. Performance on this task reproduced features of past findings based on explicit timing judgments but did not show any special advantage for perfectly synchronous streams. Importantly, the complexity of temporal patterns influences sensitivity to correspondence. Stochastic, irregular streams – with richer temporal pattern information – led to higher audio-visual matching sensitivity than predictable, rhythmic streams. Our results reveal that temporal structure and its complexity are key determinants for human detection of audio-visual correspondence. The distinctive emphasis of our new paradigms on temporal patterning could be useful for studying special populations with suspected abnormalities in audio-visual temporal perception and multisensory integration. PMID:23346067

  4. Singingfish: Advancing the Art of Multimedia Search.

    ERIC Educational Resources Information Center

    Fritz, Mark

    2003-01-01

    Singingfish provides multimedia search services that enable Internet users to locate audio and video online. Over the last few years, the company has cataloged and indexed over 30 million streams and downloadable MP3s, with 150,000 to 250,000 more being added weekly. This article discusses a deal with Microsoft; the technology; improving the…

  5. Subtlenoise: sonification of distributed computing operations

    NASA Astrophysics Data System (ADS)

    Love, P. A.

    2015-12-01

    The operation of distributed computing systems requires comprehensive monitoring to ensure reliability and robustness. There are two components found in most monitoring systems: one being visually rich time-series graphs and the other being notification systems for alerting operators under certain pre-defined conditions. In this paper the sonification of monitoring messages is explored using an architecture that fits easily within existing infrastructures based on mature open-source technologies such as ZeroMQ, Logstash, and SuperCollider (a synth engine). Message attributes are mapped onto audio attributes based on a broad classification of the message (continuous or discrete metrics) while keeping the audio stream subtle in nature. The benefits of audio rendering are described in the context of distributed computing operations and may provide a less intrusive way to understand the operational health of these systems.
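
    A minimal sketch of the message-to-audio mapping idea (not the Subtlenoise code itself): it subscribes to a hypothetical ZeroMQ publisher of JSON monitoring messages and maps one continuous metric onto pitch. A real system would hand the result to a synth engine such as SuperCollider.

```python
# Sketch: subscribe to monitoring messages over ZeroMQ and map a metric to a pitch.
# The publisher address and message fields ("rate", "type") are assumed placeholders.
import json
import zmq

ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://localhost:5556")
sub.setsockopt_string(zmq.SUBSCRIBE, "")       # receive every monitoring message

def rate_to_pitch(rate: float) -> float:
    """Map a continuous metric onto a subtle pitch range (220-880 Hz)."""
    return 220.0 + min(max(rate, 0.0), 1.0) * 660.0

while True:
    msg = json.loads(sub.recv_string())
    pitch = rate_to_pitch(msg.get("rate", 0.0))
    # A real system would send this to a synth engine (e.g. via OSC); here we just print.
    print(f"event {msg.get('type', '?')}: play tone at {pitch:.0f} Hz")
```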

  6. Co-streaming classes: a follow-up study in improving the user experience to better reach users.

    PubMed

    Hayes, Barrie E; Handler, Lara J; Main, Lindsey R

    2011-01-01

    Co-streaming classes have enabled library staff to extend open classes to distance education students and other users. Student evaluations showed that the model could be improved. Two areas required attention: audio problems experienced by online participants and staff teaching methods. Staff tested equipment and adjusted software configuration to improve user experience. Staff training increased familiarity with specialized teaching techniques and troubleshooting procedures. Technology testing and staff training were completed, and best practices were developed and applied. Class evaluations indicate improvements in classroom experience. Future plans include expanding co-streaming to more classes and on-going data collection, evaluation, and improvement of classes.

  7. The Evolution of Qualitative and Quantitative Research Classes when Delivered via Distance Education.

    ERIC Educational Resources Information Center

    Hecht, Jeffrey B.; Klass, Patricia H.

    This study examined whether new streamed Internet audio and video technology could be used for primary instruction in off-campus research classes. Several different off-campus student cohorts at Illinois State University enrolled in both a fall semester qualitative research methods class and a spring semester quantitative research methods class.…

  8. StreamWorks: the live and on-demand audio/video server and its applications in medical information systems

    NASA Astrophysics Data System (ADS)

    Akrout, Nabil M.; Gordon, Howard; Palisson, Patrice M.; Prost, Remy; Goutte, Robert

    1996-05-01

    Facing a world undergoing fundamental and rapid change, healthcare organizations are seeking ways to increase innovation, quality, productivity, and patient value, keys to more effective care. Individual clinics acting alone can respond in only a limited way, so re-engineering the processes by which services are delivered demands real-time collaborative technology that provides immediate information sharing, improving the management and coordination of information in cross-functional teams. StreamWorks is a development-stage architecture that uses a distribution technique to deliver an advanced information management system for telemedicine. The challenge of StreamWorks in telemedicine is to use telecommunications and information technology to extend an equitable quality of health care to patients in less favored regions, such as India or China, where the quality of medical care varies greatly by region but where some very current communications facilities exist.

  9. Eye movements while viewing narrated, captioned, and silent videos

    PubMed Central

    Ross, Nicholas M.; Kowler, Eileen

    2013-01-01

    Videos are often accompanied by narration delivered either by an audio stream or by captions, yet little is known about saccadic patterns while viewing narrated video displays. Eye movements were recorded while viewing video clips with (a) audio narration, (b) captions, (c) no narration, or (d) concurrent captions and audio. A surprisingly large proportion of time (>40%) was spent reading captions even in the presence of a redundant audio stream. Redundant audio did not affect the saccadic reading patterns but did lead to skipping of some portions of the captions and to delays of saccades made into the caption region. In the absence of captions, fixations were drawn to regions with a high density of information, such as the central region of the display, and to regions with high levels of temporal change (actions and events), regardless of the presence of narration. The strong attraction to captions, with or without redundant audio, raises the question of what determines how time is apportioned between captions and video regions so as to minimize information loss. The strategies of apportioning time may be based on several factors, including the inherent attraction of the line of sight to any available text, the moment by moment impressions of the relative importance of the information in the caption and the video, and the drive to integrate visual text accompanied by audio into a single narrative stream. PMID:23457357

  10. Tera-node Network Technology (Task 3) Scalable Personal Telecommunications

    DTIC Science & Technology

    2000-03-14

    Simulation results of this work may be found at http://north.east.isi.edu/spt/audio.html. The excerpted task list includes multimedia proxy caching, experiments with the Rate Adaptation Protocol (RAP), an end-to-end architecture for quality-adaptive streaming applications over the Internet, and providing leadership and innovation to the Internet Research Task Force (IRTF) Reliable Multicast Research Group (RMRG).

  11. Constructing a Streaming Video-Based Learning Forum for Collaborative Learning

    ERIC Educational Resources Information Center

    Chang, Chih-Kai

    2004-01-01

    As web-based courses using videos have become popular in recent years, the issue of managing audio-visual aids has become pertinent. Generally, the contents of audio-visual aids may include a lecture, an interview, a report, or an experiment, which may be transformed into a streaming format capable of making the quality of Internet-based videos…

  12. Integration of Geographical Information Systems and Geophysical Applications with Distributed Computing Technologies.

    NASA Astrophysics Data System (ADS)

    Pierce, M. E.; Aktas, M. S.; Aydin, G.; Fox, G. C.; Gadgil, H.; Sayar, A.

    2005-12-01

    We examine the application of Web Service Architectures and Grid-based distributed computing technologies to geophysics and geo-informatics. We are particularly interested in the integration of Geographical Information System (GIS) services with distributed data mining applications. GIS services provide the general purpose framework for building archival data services, real time streaming data services, and map-based visualization services that may be integrated with data mining and other applications through the use of distributed messaging systems and Web Service orchestration tools. Building upon our previous work in these areas, we present our current research efforts. These include fundamental investigations into increasing XML-based Web service performance, supporting real time data streams, and integrating GIS mapping tools with audio/video collaboration systems for shared display and annotation.

  13. About subjective evaluation of adaptive video streaming

    NASA Astrophysics Data System (ADS)

    Tavakoli, Samira; Brunnström, Kjell; Garcia, Narciso

    2015-03-01

    The usage of HTTP Adaptive Streaming (HAS) technology by content providers is increasing rapidly. Having the video content available in multiple qualities, HAS allows the quality of the downloaded video to be adapted to the current network conditions, providing smooth video playback. However, the time-varying video quality by itself introduces a new type of impairment. The quality adaptation can be done in different ways. In order to find the best adaptation strategy maximizing users' perceptual quality it is necessary to investigate the subjective perception of adaptation-related impairments. However, the novelty of these impairments and their comparably long duration make most standardized assessment methodologies less suited for studying HAS degradation. Furthermore, in traditional testing methodologies, the quality of the video in audiovisual services is often evaluated separately and not in the presence of audio. Nevertheless, the requirement of jointly evaluating the audio and the video within a subjective test is a relatively under-explored research field. In this work, we address the research question of determining the appropriate assessment methodology to evaluate sequences with time-varying quality due to the adaptation. This was done by studying the influence of different adaptation-related parameters through two different subjective experiments using a methodology developed to evaluate long test sequences. In order to study the impact of audio presence on quality assessment by the test subjects, one of the experiments was done in the presence of audio stimuli. The experimental results were subsequently compared with another experiment using the standardized single stimulus Absolute Category Rating (ACR) methodology.

  14. Method and apparatus for obtaining complete speech signals for speech recognition applications

    NASA Technical Reports Server (NTRS)

    Abrash, Victor (Inventor); Cesari, Federico (Inventor); Franco, Horacio (Inventor); George, Christopher (Inventor); Zheng, Jing (Inventor)

    2009-01-01

    The present invention relates to a method and apparatus for obtaining complete speech signals for speech recognition applications. In one embodiment, the method continuously records an audio stream comprising a sequence of frames to a circular buffer. When a user command to commence or terminate speech recognition is received, the method obtains a number of frames of the audio stream occurring before or after the user command in order to identify an augmented audio signal for speech recognition processing. In further embodiments, the method analyzes the augmented audio signal in order to locate starting and ending speech endpoints that bound at least a portion of speech to be processed for recognition. At least one of the speech endpoints is located using a Hidden Markov Model.
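
    The circular-buffer idea is easy to sketch. The frame size, look-back depth, and callback names below are hypothetical; this is only an illustration of pre-roll capture, not the patented implementation.

```python
# Minimal sketch: a ring buffer keeps the most recent audio frames so that speech
# starting slightly before the user command is not lost.
from collections import deque

PRE_ROLL_FRAMES = 50                 # e.g. 50 x 20 ms frames = 1 s of look-back

ring = deque(maxlen=PRE_ROLL_FRAMES) # oldest frames are discarded automatically

def on_audio_frame(frame: bytes) -> None:
    """Called continuously for every captured audio frame."""
    ring.append(frame)

def on_push_to_talk() -> bytes:
    """When the user command arrives, prepend buffered audio to the live capture."""
    return b"".join(ring)            # augmented signal: audio from before the command
```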

  15. Podcasting by Synchronising PowerPoint and Voice: What Are the Pedagogical Benefits?

    ERIC Educational Resources Information Center

    Griffin, Darren K.; Mitchell, David; Thompson, Simon J.

    2009-01-01

    The purpose of this study was to investigate the efficacy of audio-visual synchrony in podcasting and its possible pedagogical benefits. "Synchrony" in this study refers to the simultaneous playback of audio and video data streams, so that the transitions between presentation slides occur at "lecturer chosen" points in the audio commentary.…

  16. A Scalable Multimedia Streaming Scheme with CBR-Transmission of VBR-Encoded Videos over the Internet

    ERIC Educational Resources Information Center

    Kabir, Md. H.; Shoja, Gholamali C.; Manning, Eric G.

    2006-01-01

    Streaming audio/video contents over the Internet requires large network bandwidth and timely delivery of media data. A streaming session is generally long and also needs a large I/O bandwidth at the streaming server. A streaming server, however, has limited network and I/O bandwidth. For this reason, a streaming server alone cannot scale a…

  17. JXTA: A Technology Facilitating Mobile P2P Health Management System

    PubMed Central

    Rajkumar, Rajasekaran; Nallani Chackravatula Sriman, Narayana Iyengar

    2012-01-01

    Objectives Mobile JXTA (Juxtapose) is gaining momentum and has attracted the interest of doctors and patients through a P2P service that transmits messages. Audio and video can also be transmitted through JXTA. The use of a mobile streaming mechanism with the support of a mobile hospital management and healthcare system would enable better interaction between doctors, nurses, and the hospital. Experimental results demonstrate good performance in comparison with conventional systems. This study evaluates P2P JXTA/JXME (JXTA functionality for MIDP devices), which facilitates peer-to-peer applications using resource-constrained mobile devices. Also, a proven learning algorithm was used to automatically send sorted patient data to nurses and process it. Methods From December 2010 to December 2011, a total of 500 patients were referred to our hospital due to minor health problems and were monitored. We selected all of the peer groups and the control server, which controlled the BMO (Block Medical Officer) peer groups and the BMO through the doctor peer groups, and prescriptions were delivered to the patients' mobile phones through the JXTA/JXME network. Results All 500 patients were registered in the JXTA network. Among these, 300 patient histories were referred to the record peer group by the doctors, 100 patients were referred to the external doctor peer group, and 100 patients were registered as new users in the JXTA/JXME network. Conclusion This system was developed for mobile streaming applications and was designed to support the mobile health management system using JXTA/JXME. The simulated results show that this system can carry out streaming audio and video applications. Controlling and monitoring by the doctor peer group makes the system more flexible and structured. Enhanced studies are needed to improve knowledge mining and cloud-based m-health management technology in comparison with the traditional system. PMID:24159509

  18. Huffman coding in advanced audio coding standard

    NASA Astrophysics Data System (ADS)

    Brzuchalski, Grzegorz

    2012-05-01

    This article presents several hardware architectures of the Advanced Audio Coding (AAC) Huffman noiseless encoder, its optimisations, and a working implementation. Much attention has been paid to optimising the demand on hardware resources, especially memory size. The aim of the design was to produce as short a binary stream as possible within this standard. The Huffman encoder, together with the whole audio-video system, has been implemented in FPGA devices.

  19. Streaming Media Seminar--Effective Development and Distribution of Streaming Multimedia in Education

    ERIC Educational Resources Information Center

    Mainhart, Robert; Gerraughty, James; Anderson, Kristine M.

    2004-01-01

    Concisely defined, "streaming media" is moving video and/or audio transmitted over the Internet for immediate viewing/listening by an end user. However, at Saint Francis University's Center of Excellence for Remote and Medically Under-Served Areas (CERMUSA), streaming media is approached from a broader perspective. The working definition includes…

  20. Data streaming in telepresence environments.

    PubMed

    Lamboray, Edouard; Würmlin, Stephan; Gross, Markus

    2005-01-01

    In this paper, we discuss data transmission in telepresence environments for collaborative virtual reality applications. We analyze data streams in the context of networked virtual environments and classify them according to their traffic characteristics. Special emphasis is put on geometry-enhanced (3D) video. We review architectures for real-time 3D video pipelines and derive theoretical bounds on the minimal system latency as a function of the transmission and processing delays. Furthermore, we discuss bandwidth issues of differential update coding for 3D video. In our telepresence system, the blue-c, we use a point-based 3D video technology which allows for differentially encoded 3D representations of human users. While we discuss the considerations which lead to the design of our three-stage 3D video pipeline, we also elucidate some critical implementation details regarding decoupling of acquisition, processing and rendering frame rates, and audio/video synchronization. Finally, we demonstrate the communication and networking features of the blue-c system in its full deployment. We show how the system can possibly be controlled to face processing or networking bottlenecks by adapting the multiple system components like audio, application data, and 3D video.

  1. Audio stream classification for multimedia database search

    NASA Astrophysics Data System (ADS)

    Artese, M.; Bianco, S.; Gagliardi, I.; Gasparini, F.

    2013-03-01

    Search and retrieval of huge archives of multimedia data is a challenging task. A classification step is often used to reduce the number of entries on which to perform the subsequent search. In particular, when new entries of the database are continuously added, a fast classification based on simple threshold evaluation is desirable. In this work we present a CART-based (Classification And Regression Tree [1]) classification framework for audio streams belonging to multimedia databases. The database considered is the Archive of Ethnography and Social History (AESS) [2], which is mainly composed of popular songs and other audio records describing the popular traditions handed down generation by generation, such as traditional fairs and customs. The peculiarities of this database are that it is continuously updated; the audio recordings are acquired in unconstrained environments; and it is difficult for the non-expert human user to create the ground-truth labels. In our experiments, half of all the available audio files have been randomly extracted and used as the training set. The remaining ones have been used as the test set. The classifier has been trained to distinguish among three different classes: speech, music, and song. All the audio files in the dataset have been previously manually labeled by domain experts into the three classes defined above.
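
    A minimal sketch of a CART-style classifier on precomputed audio features: the feature vectors and labels below are random placeholders rather than AESS data, and scikit-learn's DecisionTreeClassifier is used as a stand-in for the paper's CART framework.

```python
# Sketch: train a CART-style decision tree to separate speech, music, and song from
# precomputed feature vectors (placeholders for real spectral/temporal descriptors).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))                     # e.g. 12 audio features per clip
y = rng.integers(0, 3, size=300)                   # 0 = speech, 1 = music, 2 = song

# Half of the files for training, half for testing, mirroring the paper's split.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

tree = DecisionTreeClassifier(criterion="gini", max_depth=5)   # CART uses Gini impurity
tree.fit(X_train, y_train)
print(f"test accuracy on placeholder data: {tree.score(X_test, y_test):.2f}")
```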

  2. 47 CFR 73.1201 - Station identification.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... offerings. Television and Class A television broadcast stations may make these announcements visually or... multicast audio programming streams, in a manner that appropriately alerts its audience to the fact that it is listening to a digital audio broadcast. No other insertion between the station's call letters and...

  3. All Source Sensor Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trease, Harold (PNNL)

    2012-10-10

    ASSA is a software application that processes binary data into summarized index tables that can be used to organize features contained within the data. ASSA's index tables can also be used to search for user specified features. ASSA is designed to organize and search for patterns in unstructured binary data streams or archives, such as video, images, audio, and network traffic. ASSA is basically a very general search engine used to search for any pattern in any binary data stream. It has uses in video analytics, image analysis, audio analysis, searching hard-drives, monitoring network traffic, etc.

  4. Indexing method of digital audiovisual medical resources with semantic Web integration.

    PubMed

    Cuggia, Marc; Mougin, Fleur; Le Beux, Pierre

    2003-01-01

    The digitalization of audio-visual resources combined with the performance of networks offers many possibilities which are the subject of intensive work in the scientific and industrial sectors. Indexing such resources is a major challenge. Recently, the Moving Picture Experts Group (MPEG) has been developing MPEG-7, a standard for describing multimedia content. The goal of this standard is to develop a rich set of standardized tools to enable fast, efficient retrieval from digital archives or filtering of audiovisual broadcasts on the Internet. How could this kind of technology be used in the medical context? In this paper, we propose a simpler indexing system, based on the Dublin Core standard and compliant with MPEG-7. We use MeSH and UMLS to introduce conceptual navigation. We also present a video platform which enables encoding of, and access to, audio-visual resources in streaming mode.

  5. Delivering Instruction via Streaming Media: A Higher Education Perspective.

    ERIC Educational Resources Information Center

    Mortensen, Mark; Schlieve, Paul; Young, Jon

    2000-01-01

    Describes streaming media, an audio/video presentation that is delivered across a network so that it is viewed while being downloaded onto the user's computer, including a continuous stream of video that can be pre-recorded or live. Discusses its use for nontraditional students in higher education and reports on implementation experiences. (LRW)

  6. A centralized audio presentation manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papp, A.L. III; Blattner, M.M.

    1994-05-16

    The centralized audio presentation manager addresses the problems which occur when multiple programs running simultaneously attempt to use the audio output of a computer system. Time dependence of sound means that certain auditory messages must be scheduled simultaneously, which can lead to perceptual problems due to psychoacoustic phenomena. Furthermore, the combination of speech and nonspeech audio is examined; each presents its own problems of perceptibility in an acoustic environment composed of multiple auditory streams. The centralized audio presentation manager receives abstract parameterized message requests from the currently running programs, and attempts to create and present a sonic representation in the most perceptible manner through the use of a theoretically and empirically designed rule set.

  7. Focus on the post-DVD formats

    NASA Astrophysics Data System (ADS)

    He, Hong; Wei, Jingsong

    2005-09-01

    As digital TV (DTV) technologies develop rapidly in their standards, hardware, software models, and the interfaces between DTV and the home network, worldwide broadcasting of High Definition TV (HDTV) programs is scheduled. Enjoying high-quality TV programs at home is not a far-off dream for people. As for the main recording media, which optical storage technology will become the mainstream for meeting the HDTV requirements is becoming a great concern. At present, there are a few kinds of post-DVD formats which are competing on technology, standards, and market. Here we give a review of the co-existing post-DVD formats in the world and discuss the basic parameters of the optical disks, the video/audio coding strategies, and the system performance for HDTV programs.

  8. A digital audio/video interleaving system. [for Shuttle Orbiter

    NASA Technical Reports Server (NTRS)

    Richards, R. W.

    1978-01-01

    A method of interleaving an audio signal with its associated video signal for simultaneous transmission or recording, and the subsequent separation of the two signals, is described. Comparisons are made between the new audio signal interleaving system and the Skylab PAM audio/video interleaving system, pointing out improvements gained by using the digital audio/video interleaving system. It was found that the digital technique is the simplest, most effective and most reliable method for interleaving audio and/or other types of data into the video signal for the Shuttle Orbiter application. Details of the design of a multiplexer capable of accommodating two basic data channels, each consisting of a single 31.5-kb/s digital bit stream, are given. An adaptive slope delta modulation system is introduced to digitize audio signals, producing a high immunity of word intelligibility to channel errors, primarily due to the robust nature of the delta-modulation algorithm.
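
    An adaptive slope delta modulator can be sketched in a few lines. The adaptation rule below (grow the step on runs of identical bits, shrink it otherwise) is an assumed textbook-style variant for illustration, not the Shuttle Orbiter hardware design.

```python
# Sketch: one-bit adaptive-slope delta modulation of an audio sample stream.
def adm_encode(samples, step=1.0, min_step=1.0, max_step=64.0):
    bits, estimate, prev_bit = [], 0.0, None
    for x in samples:
        bit = 1 if x >= estimate else 0
        # Slope adaptation: runs of identical bits mean the estimate is lagging, so grow
        # the step; alternating bits mean it is hunting around the signal, so shrink it.
        step = min(step * 1.5, max_step) if bit == prev_bit else max(step / 1.5, min_step)
        estimate += step if bit else -step
        bits.append(bit)
        prev_bit = bit
    return bits

print(adm_encode([0, 3, 8, 15, 20, 18, 10, 2, -5]))   # toy input, not real audio
```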

  9. Multimodal Speaker Diarization.

    PubMed

    Noulas, A; Englebienne, G; Krose, B J A

    2012-01-01

    We present a novel probabilistic framework that fuses information coming from the audio and video modality to perform speaker diarization. The proposed framework is a Dynamic Bayesian Network (DBN) that is an extension of a factorial Hidden Markov Model (fHMM) and models the people appearing in an audiovisual recording as multimodal entities that generate observations in the audio stream, the video stream, and the joint audiovisual space. The framework is very robust to different contexts, makes no assumptions about the location of the recording equipment, and does not require labeled training data as it acquires the model parameters using the Expectation Maximization (EM) algorithm. We apply the proposed model to two meeting videos and a news broadcast video, all of which come from publicly available data sets. The results acquired in speaker diarization are in favor of the proposed multimodal framework, which outperforms the single modality analysis results and improves over the state-of-the-art audio-based speaker diarization.

  10. Remotely supported prehospital ultrasound: A feasibility study of real-time image transmission and expert guidance to aid diagnosis in remote and rural communities.

    PubMed

    Eadie, Leila; Mulhern, John; Regan, Luke; Mort, Alasdair; Shannon, Helen; Macaden, Ashish; Wilson, Philip

    2017-01-01

    Introduction Our aim is to expedite prehospital assessment of remote and rural patients using remotely-supported ultrasound and satellite/cellular communications. In this paradigm, paramedics are remotely-supported ultrasound operators, guided by hospital-based specialists, to record images before receiving diagnostic advice. Technology can support users in areas with little access to medical imaging and suboptimal communications coverage by connecting to multiple cellular networks and/or satellites to stream live ultrasound and audio-video. Methods An ambulance-based demonstrator system captured standard trauma and novel transcranial ultrasound scans from 10 healthy volunteers at 16 locations across the Scottish Highlands. Volunteers underwent brief scanning training before receiving expert guidance via the communications link. Ultrasound images were streamed with an audio/video feed to reviewers for interpretation. Two sessions were transmitted via satellite and 21 used cellular networks. Reviewers rated image and communication quality, and their utility for diagnosis. Transmission latency and bandwidth were recorded, and effects of scanner and reviewer experience were assessed. Results Appropriate views were provided in 94% of the simulated trauma scans. The mean upload rate was 835/150 kbps and mean latency was 114/2072 ms for cellular and satellite networks, respectively. Scanning experience had a significant impact on time to achieve a diagnostic image, and review of offline scans required significantly less time than live-streamed scans. Discussion This prehospital ultrasound system could facilitate early diagnosis and streamlining of treatment pathways for remote emergency patients, being particularly applicable in rural areas worldwide with poor communications infrastructure and extensive transport times.

  11. Combining Live Video and Audio Broadcasting, Synchronous Chat, and Asynchronous Open Forum Discussions in Distance Education

    ERIC Educational Resources Information Center

    Teng, Tian-Lih; Taveras, Marypat

    2004-01-01

    This article outlines the evolution of a unique distance education program that began as a hybrid--combining face-to-face instruction with asynchronous online teaching--and evolved to become an innovative combination of synchronous education using live streaming video, audio, and chat over the Internet, blended with asynchronous online discussions…

  12. Telearch - Integrated visual simulation environment for collaborative virtual archaeology.

    NASA Astrophysics Data System (ADS)

    Kurillo, Gregorij; Forte, Maurizio

    Archaeologists collect vast amounts of digital data around the world; however, they lack tools for integration and collaborative interaction to support the reconstruction and interpretation process. The TeleArch software aims to integrate different data sources and provide real-time interaction tools for remote collaboration among geographically distributed scholars inside a shared virtual environment. The framework also includes audio, 2D, and 3D video streaming technology to facilitate remote presence of users. In this paper, we present several experimental case studies to demonstrate the integration and interaction with 3D models and geographical information system (GIS) data in this collaborative environment.

  13. Securing Digital Audio using Complex Quadratic Map

    NASA Astrophysics Data System (ADS)

    Suryadi, MT; Satria Gunawan, Tjandra; Satria, Yudi

    2018-03-01

    In this digital era, exchanging data is common and easy to do; it is therefore vulnerable to attack and manipulation by unauthorized parties. One data type that is vulnerable to attack is digital audio. So, we need a data-securing method that is fast and not vulnerable. One of the methods that matches all of those criteria is securing the data using a chaos function. The chaos function that is used in this research is the complex quadratic map (CQM). There are parameter values for which the key stream generated by the CQM function passes all 15 NIST tests, which means that the key stream generated using this CQM is proven to be random. In addition, samples of the encrypted digital sound, when tested using a goodness-of-fit test, are shown to be uniform, so securing digital audio using this method is not vulnerable to frequency analysis attacks. The key space is very large, about 8.1×10^31 possible keys, and the key sensitivity is very small, about 10^-10, therefore this method is also not vulnerable to brute-force attack. And finally, the processing speed for both the encryption and decryption process is on average about 450 times faster than the digital audio duration.
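
    A minimal sketch of the keystream idea (not the published scheme): iterate the complex quadratic map z → z² + c, fold orbit values into bytes, and XOR them with the audio bytes. The parameter c = -2 is an assumed choice that keeps the orbit bounded and chaotic, and the byte-extraction rule is also an assumption.

```python
# Sketch of chaos-based audio encryption with the complex quadratic map (CQM).
# Parameters and the byte-extraction rule are illustrative assumptions, not the paper's.
def cqm_keystream(z: complex, c: complex, n: int) -> bytes:
    out = bytearray()
    for _ in range(n):
        z = z * z + c                              # complex quadratic map iteration
        out.append(int(abs(z.real) * 1e6) % 256)   # fold an orbit value into a byte
    return bytes(out)

def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, key))

audio = bytes(range(16))                    # placeholder for raw PCM audio bytes
key = cqm_keystream(z=0.3 + 0.0j, c=-2.0 + 0.0j, n=len(audio))
encrypted = xor_cipher(audio, key)
assert xor_cipher(encrypted, key) == audio  # decryption is the same XOR with the same key
```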

  14. SNR-adaptive stream weighting for audio-MES ASR.

    PubMed

    Lee, Ki-Seung

    2008-08-01

    Myoelectric signals (MESs) from the speaker's mouth region have been successfully shown to improve the noise robustness of automatic speech recognizers (ASRs), thus promising to extend their usability in implementing noise-robust ASR. In the recognition system presented herein, extracted audio and facial MES features were integrated by a decision fusion method, where the likelihood score of the audio-MES observation vector was given by a linear combination of class-conditional observation log-likelihoods of two classifiers, using appropriate weights. We developed a weighting process adaptive to SNRs. The main objective of the paper involves determining the optimal SNR classification boundaries and constructing a set of optimum stream weights for each SNR class. These two parameters were determined by a method based on a maximum mutual information criterion. Acoustic and facial MES data were collected from five subjects, using a 60-word vocabulary. Four types of acoustic noise, including babble, car, aircraft, and white noise, were acoustically added to clean speech signals with SNR ranging from -14 to 31 dB. The classification accuracy of the audio ASR was as low as 25.5%, whereas the classification accuracy of the MES ASR was 85.2%. The classification accuracy could be further improved by employing the proposed audio-MES weighting method, reaching as high as 89.4% in the case of babble noise. A similar result was also found for the other types of noise.
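
    The decision fusion rule described above (a linear combination of class-conditional log-likelihoods with SNR-dependent weights) can be written generically as follows; the notation is ours and the exact form used in the paper may differ.

```latex
\log p(o_t \mid q) \;=\;
  \lambda_{\mathrm{SNR}} \, \log p\!\left(o_t^{\mathrm{audio}} \mid q\right)
  \;+\; \left(1 - \lambda_{\mathrm{SNR}}\right) \log p\!\left(o_t^{\mathrm{MES}} \mid q\right),
\qquad 0 \le \lambda_{\mathrm{SNR}} \le 1
```

    where o_t^audio and o_t^MES are the audio and MES observations at frame t, q is the class (word/state) hypothesis, and λ_SNR is the stream weight selected for the estimated SNR class.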

  15. The Use of Asynchronous Audio Feedback with Online RN-BSN Students

    ERIC Educational Resources Information Center

    London, Julie E.

    2013-01-01

    The use of audio technology by online nursing educators is a recent phenomenon. Research has been conducted in the area of audio technology in different domains and populations, but very few researchers have focused on nursing. Preliminary results have indicated that using audio in place of text can increase student cognition and socialization.…

  16. A Bit Stream Scalable Speech/Audio Coder Combining Enhanced Regular Pulse Excitation and Parametric Coding

    NASA Astrophysics Data System (ADS)

    Riera-Palou, Felip; den Brinker, Albertus C.

    2007-12-01

    This paper introduces a new audio and speech broadband coding technique based on the combination of a pulse excitation coder and a standardized parametric coder, namely, MPEG-4 high-quality parametric coder. After presenting a series of enhancements to regular pulse excitation (RPE) to make it suitable for the modeling of broadband signals, it is shown how pulse and parametric codings complement each other and how they can be merged to yield a layered bit stream scalable coder able to operate at different points in the quality bit rate plane. The performance of the proposed coder is evaluated in a listening test. The major result is that the extra functionality of the bit stream scalability does not come at the price of a reduced performance since the coder is competitive with standardized coders (MP3, AAC, SSC).

  17. Feature Representations for Neuromorphic Audio Spike Streams.

    PubMed

    Anumula, Jithendar; Neil, Daniel; Delbruck, Tobi; Liu, Shih-Chii

    2018-01-01

    Event-driven neuromorphic spiking sensors such as the silicon retina and the silicon cochlea encode the external sensory stimuli as asynchronous streams of spikes across different channels or pixels. Combining state-of-the-art deep neural networks with the asynchronous outputs of these sensors has produced encouraging results on some datasets but remains challenging. While the lack of effective spiking networks to process the spike streams is one reason, the other reason is that the pre-processing methods required to convert the spike streams to frame-based features needed for the deep networks still require further investigation. This work investigates the effectiveness of synchronous and asynchronous frame-based features generated using spike count and constant event binning, in combination with the use of a recurrent neural network, for solving a classification task on the N-TIDIGITS18 dataset. This spike-based dataset consists of recordings from the Dynamic Audio Sensor, a spiking silicon cochlea sensor, in response to the TIDIGITS audio dataset. We also propose a new pre-processing method which applies an exponential kernel on the output cochlea spikes so that the interspike timing information is better preserved. The results from the N-TIDIGITS18 dataset show that the exponential features perform better than the spike count features, with over 91% accuracy on the digit classification task. This accuracy corresponds to an improvement of at least 2.5% over the use of spike count features, establishing a new state of the art for this dataset.
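
    The exponential-kernel idea can be sketched as follows: each spike adds one unit to its channel's trace, the trace decays exponentially between spikes, and the trace is sampled once per frame. The frame length, time constant, and normalization below are illustrative choices, not the paper's exact settings.

    ```python
    import numpy as np

    def exp_kernel_frames(spike_times, spike_channels, n_channels,
                          frame_len=0.005, tau=0.005, duration=1.0):
        """Frame-based features from an audio spike stream via an exponential kernel."""
        n_frames = int(np.ceil(duration / frame_len))
        frames = np.zeros((n_frames, n_channels))
        trace = np.zeros(n_channels)      # per-channel trace, held at time last_t
        last_t = 0.0
        order = np.argsort(spike_times)
        spikes = list(zip(np.asarray(spike_times, float)[order],
                          np.asarray(spike_channels, int)[order]))
        i = 0
        for f in range(n_frames):
            frame_end = (f + 1) * frame_len
            while i < len(spikes) and spikes[i][0] <= frame_end:
                t, ch = spikes[i]
                trace *= np.exp(-(t - last_t) / tau)   # decay up to this spike
                trace[ch] += 1.0                       # add the spike
                last_t = t
                i += 1
            frames[f] = trace * np.exp(-(frame_end - last_t) / tau)
        return frames

    # Two cochlea channels: a short burst on channel 0, one later spike on channel 1.
    feats = exp_kernel_frames([0.001, 0.002, 0.012], [0, 0, 1],
                              n_channels=2, duration=0.02)
    print(feats.shape)   # (4, 2)
    ```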

  18. Feature Representations for Neuromorphic Audio Spike Streams

    PubMed Central

    Anumula, Jithendar; Neil, Daniel; Delbruck, Tobi; Liu, Shih-Chii

    2018-01-01

    Event-driven neuromorphic spiking sensors such as the silicon retina and the silicon cochlea encode the external sensory stimuli as asynchronous streams of spikes across different channels or pixels. Combining state-of-the-art deep neural networks with the asynchronous outputs of these sensors has produced encouraging results on some datasets but remains challenging. While the lack of effective spiking networks to process the spike streams is one reason, the other reason is that the pre-processing methods required to convert the spike streams to frame-based features needed for the deep networks still require further investigation. This work investigates the effectiveness of synchronous and asynchronous frame-based features generated using spike count and constant event binning, in combination with the use of a recurrent neural network, for solving a classification task on the N-TIDIGITS18 dataset. This spike-based dataset consists of recordings from the Dynamic Audio Sensor, a spiking silicon cochlea sensor, in response to the TIDIGITS audio dataset. We also propose a new pre-processing method which applies an exponential kernel on the output cochlea spikes so that the interspike timing information is better preserved. The results from the N-TIDIGITS18 dataset show that the exponential features perform better than the spike count features, with over 91% accuracy on the digit classification task. This accuracy corresponds to an improvement of at least 2.5% over the use of spike count features, establishing a new state of the art for this dataset. PMID:29479300

  19. Aerospace Communications Security Technologies Demonstrated

    NASA Technical Reports Server (NTRS)

    Griner, James H.; Martzaklis, Konstantinos S.

    2003-01-01

    In light of the events of September 11, 2001, NASA senior management requested an investigation of technologies and concepts to enhance aviation security. The investigation was to focus on near-term technologies that could be demonstrated within 90 days and implemented in less than 2 years. In response to this request, an internal NASA Glenn Research Center Communications, Navigation, and Surveillance Aviation Security Tiger Team was assembled. The 2-year plan developed by the team included an investigation of multiple aviation security concepts, multiple aircraft platforms, and extensively leveraged datalink communications technologies. It incorporated industry partners from NASA's Graphical Weather-in-the-Cockpit research, which is within NASA's Aviation Safety Program. Two concepts from the plan were selected for demonstration: remote "black box," and cockpit/cabin surveillance. The remote "black box" concept involves real-time downlinking of aircraft parameters for remote monitoring and archiving of aircraft data, which would assure access to the data following the loss or inaccessibility of an aircraft. The cockpit/cabin surveillance concept involves remote audio and/or visual surveillance of cockpit and cabin activity, which would allow immediate response to any security breach and would serve as a possible deterrent to such breaches. The datalink selected for the demonstrations was VDL Mode 2 (VHF digital link), the first digital datalink for air-ground communications designed for aircraft use. VDL Mode 2 is beginning to be implemented through the deployment of ground stations and aircraft avionics installations, with the goal of being operational in 2 years. The first demonstration was performed December 3, 2001, onboard the LearJet 25 at Glenn. NASA worked with Honeywell, Inc., for the broadcast VDL Mode 2 datalink capability and with actual Boeing 757 aircraft data. This demonstration used a cockpit-mounted camera for video surveillance and a coupling to the intercom system for audio surveillance. Audio, video, and "black box" data were simultaneously streamed to the ground, where they were displayed to a Glenn audience of senior management and aviation security team members.

  20. Integrated approach to multimodal media content analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Kuo, C.-C. Jay

    1999-12-01

    In this work, we present a system for the automatic segmentation, indexing and retrieval of audiovisual data based on the combination of audio, visual and textual content analysis. The video stream is demultiplexed into audio, image and caption components. Then, a semantic segmentation of the audio signal based on audio content analysis is conducted, and each segment is indexed as one of the basic audio types. The image sequence is segmented into shots based on visual information analysis, and keyframes are extracted from each shot. Meanwhile, keywords are detected from the closed caption. Index tables are designed for both linear and non-linear access to the video. Experiments show that the proposed methods for multimodal media content analysis are effective and that the integrated framework achieves satisfactory results for video information filtering and retrieval.

  1. Application discussion of source coding standard in voyage data recorder

    NASA Astrophysics Data System (ADS)

    Zong, Yonggang; Zhao, Xiandong

    2018-04-01

    This paper analyzes the disadvantages of the audio and video compression coding technology used by the voyage data recorder, in light of the improved performance of audio and video acquisition equipment. An approach to improving the voyage data recorder's audio and video compression coding technology is proposed, and the feasibility of adopting the new compression coding technology is analyzed from both economic and technical perspectives.

  2. Maximizing ship-to-shore connections via telepresence technologies

    NASA Astrophysics Data System (ADS)

    Fundis, A. T.; Kelley, D. S.; Proskurowski, G.; Delaney, J. R.

    2012-12-01

    Live connections to offshore oceanographic research via telepresence technologies enable onshore scientists, students, and the public to observe and participate in active research as it is happening. As part of the ongoing construction effort of the NSF's Ocean Observatories Initiative's cabled network, the VISIONS'12 expedition included a wide breadth of activities to allow the public, students, and scientists to interact with a sea-going expedition. Here we describe our successes and lessons learned in engaging these onshore audiences through the various outreach efforts employed during the expedition including: 1) live high-resolution video and audio streams from the seafloor and ship; 2) live connections to science centers, aquaria, movie theaters, and undergraduate classrooms; 3) social media interactions; and 4) an onboard immersion experience for undergraduate and graduate students.

  3. The Function of Consciousness in Multisensory Integration

    ERIC Educational Resources Information Center

    Palmer, Terry D.; Ramsey, Ashley K.

    2012-01-01

    The function of consciousness was explored in two contexts of audio-visual speech, cross-modal visual attention guidance and McGurk cross-modal integration. Experiments 1, 2, and 3 utilized a novel cueing paradigm in which two different flash-suppressed lip streams co-occurred with speech sounds matching one of these streams. A visual target was…

  4. Developing a Consensus-Driven, Core Competency Model to Shape Future Audio Engineering Technology Curriculum: A Web-Based Modified Delphi Study

    ERIC Educational Resources Information Center

    Tough, David T.

    2009-01-01

    The purpose of this online study was to create a ranking of essential core competencies and technologies required by AET (audio engineering technology) programs 10 years in the future. The study was designed to facilitate curriculum development and improvement in the rapidly expanding number of small to medium sized audio engineering technology…

  5. Audio in Courseware: Design Knowledge Issues.

    ERIC Educational Resources Information Center

    Aarntzen, Diana

    1993-01-01

    Considers issues that need to be addressed when incorporating audio in courseware design. Topics discussed include functions of audio in courseware; the relationship between auditive and visual information; learner characteristics in relation to audio; events of instruction; and audio characteristics, including interactivity and speech technology.…

  6. Audio Spectrogram Representations for Processing with Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Wyse, L.

    2017-05-01

    One of the decisions that arise when designing a neural network for any application is how the data should be represented in order to be presented to, and possibly generated by, a neural network. For audio, the choice is less obvious than it seems to be for visual images, and a variety of representations have been used for different applications including the raw digitized sample stream, hand-crafted features, machine discovered features, MFCCs and variants that include deltas, and a variety of spectral representations. This paper reviews some of these representations and issues that arise, focusing particularly on spectrograms for generating audio using neural networks for style transfer.
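
    As a concrete illustration of the spectrogram representation discussed here, the following Python sketch computes a log-magnitude spectrogram and reshapes it for a convolutional network; the window, hop, and scaling are illustrative design choices, not values taken from the paper.

    ```python
    import numpy as np
    from scipy.signal import spectrogram

    fs = 16000
    t = np.arange(fs) / fs
    audio = np.sin(2 * np.pi * 440 * t)        # stand-in for a real recording

    freqs, times, sxx = spectrogram(audio, fs=fs, nperseg=512, noverlap=384)
    log_spec = np.log1p(sxx)                   # compress the dynamic range

    # A CNN typically expects (channels, height, width): 1 x freq_bins x frames.
    cnn_input = log_spec[np.newaxis, :, :]
    print(cnn_input.shape)
    ```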

  7. Communicative Competence in Audio Classrooms: A Position Paper for the CADE 1991 Conference.

    ERIC Educational Resources Information Center

    Burge, Liz

    Classroom practitioners need to move their attention away from the technological and logistical competencies required for audio conferencing (AC) to the required communicative competencies in order to advance their skills in handling the psychodynamics of audio virtual classrooms which include audio alone and audio with graphics. While the…

  8. Summarizing Audiovisual Contents of a Video Program

    NASA Astrophysics Data System (ADS)

    Gong, Yihong

    2003-12-01

    In this paper, we focus on video programs that are intended to disseminate information and knowledge, such as news, documentaries, seminars, etc., and present an audiovisual summarization system that summarizes the audio and visual contents of the given video separately and then integrates the two summaries with a partial alignment. The audio summary is created by selecting spoken sentences that best present the main content of the audio speech, while the visual summary is created by eliminating duplicates/redundancies and preserving visually rich contents in the image stream. The alignment operation aims to synchronize each spoken sentence in the audio summary with its corresponding speaker's face and to preserve the rich content in the visual summary. A bipartite graph-based audiovisual alignment algorithm is developed to efficiently find the best alignment solution that satisfies these alignment requirements. With the proposed system, we strive to produce a video summary that: (1) provides a natural visual and audio content overview, and (2) maximizes the coverage for both audio and visual contents of the original video without having to sacrifice either of them.
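
    The alignment step can be illustrated with a small bipartite assignment between spoken sentences and shots, solved here with SciPy's Hungarian-method solver; the cost used below (temporal distance between sentence and shot mid-times) is a stand-in for the paper's actual alignment criteria.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Mid-times (seconds) of selected spoken sentences and of selected shots.
    sentence_times = np.array([12.0, 45.0, 80.0])
    shot_times = np.array([10.0, 30.0, 50.0, 78.0])

    # Illustrative cost: temporal distance between each sentence and each shot.
    cost = np.abs(sentence_times[:, None] - shot_times[None, :])

    rows, cols = linear_sum_assignment(cost)   # minimum-cost bipartite matching
    for s, v in zip(rows, cols):
        print(f"sentence {s} (t={sentence_times[s]}s) -> shot {v} (t={shot_times[v]}s)")
    ```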

  9. Real World Audio

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Crystal River Engineering was originally featured in Spinoff 1992 with the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. The Convolvotron was developed for Ames' research on virtual acoustic displays. Crystal River is now a subsidiary of Aureal Semiconductor, Inc., and together they develop and market the technology, which is a 3-D (three dimensional) audio technology known commercially today as Aureal 3D (A-3D). The technology has been incorporated into video games, surround sound systems, and sound cards.

  10. The sweet-home project: audio technology in smart homes to improve well-being and reliance.

    PubMed

    Vacher, Michel; Istrate, Dan; Portet, François; Joubert, Thierry; Chevalier, Thierry; Smidtas, Serge; Meillon, Brigitte; Lecouteux, Benjamin; Sehili, Mohamed; Chahuara, Pedro; Méniard, Sylvain

    2011-01-01

    The Sweet-Home project aims at providing audio-based interaction technology that lets the user have full control over their home environment, at detecting distress situations and at easing the social inclusion of the elderly and frail population. This paper presents an overview of the project focusing on the multimodal sound corpus acquisition and labelling and on the investigated techniques for speech and sound recognition. The user study and the recognition performances show the interest of this audio technology.

  11. Neural network retuning and neural predictors of learning success associated with cello training.

    PubMed

    Wollman, Indiana; Penhune, Virginia; Segado, Melanie; Carpentier, Thibaut; Zatorre, Robert J

    2018-06-26

    The auditory and motor neural systems are closely intertwined, enabling people to carry out tasks such as playing a musical instrument whose mapping between action and sound is extremely sophisticated. While the dorsal auditory stream has been shown to mediate these audio-motor transformations, little is known about how such mapping emerges with training. Here, we use longitudinal training on a cello as a model for brain plasticity during the acquisition of specific complex skills, including continuous and many-to-one audio-motor mapping, and we investigate individual differences in learning. We trained participants with no musical background to play on a specially designed MRI-compatible cello and scanned them before and after 1 and 4 wk of training. Activation of the auditory-to-motor dorsal cortical stream emerged rapidly during the training and was similarly activated during passive listening and cello performance of trained melodies. This network activation was independent of performance accuracy and therefore appears to be a prerequisite of music playing. In contrast, greater recruitment of regions involved in auditory encoding and motor control over the training was related to better musical proficiency. Additionally, pre-supplementary motor area activity and its connectivity with the auditory cortex during passive listening before training was predictive of final training success, revealing the integrative function of this network in auditory-motor information processing. Together, these results clarify the critical role of the dorsal stream and its interaction with auditory areas in complex audio-motor learning.

  12. StreaMorph: A Case for Synthesizing Energy-Efficient Adaptive Programs Using High-Level Abstractions

    DTIC Science & Technology

    2013-08-12

    technique when switching from using eight cores to one core. 1. Introduction Real-time streaming of media data is growing in popularity. This includes...both capture and processing of real-time video and audio, and delivery of video and audio from servers; recent usage numbers show over 800 million...source of data, when that source is a real-time source, and it is generally not necessary to get ahead of the sink. Even with real-time sources and sinks

  13. Using online handwriting and audio streams for mathematical expressions recognition: a bimodal approach

    NASA Astrophysics Data System (ADS)

    Medjkoune, Sofiane; Mouchère, Harold; Petitrenaud, Simon; Viard-Gaudin, Christian

    2013-01-01

    The work reported in this paper concerns the problem of mathematical expression recognition. This task is known to be a very hard one. We propose to alleviate the difficulties by taking into account two complementary modalities: handwriting and audio. To combine the signals coming from both modalities, various fusion methods are explored. Performance evaluated on the HAMEX dataset shows a significant improvement compared to a single-modality (handwriting) based system.

  14. Robust Radio Broadcast Monitoring Using a Multi-Band Spectral Entropy Signature

    NASA Astrophysics Data System (ADS)

    Camarena-Ibarrola, Antonio; Chávez, Edgar; Tellez, Eric Sadit

    Monitoring broadcast media content has received a lot of attention lately from both academia and industry due to the technical challenge involved and its economic importance (e.g., in advertising). The problem poses a unique challenge from the pattern recognition point of view because a very high recognition rate is needed under non-ideal conditions. The task consists of comparing a small audio sequence (the commercial ad) with a large audio stream (the broadcast), searching for matches.
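
    A minimal sketch of a multi-band spectral entropy signature in Python; the band layout, frame length, and hop size below are illustrative assumptions, not the authors' parameters. Matching a commercial against the broadcast would then compare such signature sequences.

    ```python
    import numpy as np

    def multiband_spectral_entropy(frame, n_bands=4):
        """Spectral entropy per frequency band for one windowed audio frame."""
        spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
        entropies = []
        for band in np.array_split(spec, n_bands):   # equal-width bands for simplicity
            p = band / (band.sum() + 1e-12)           # normalize to a distribution
            entropies.append(float(-(p * np.log2(p + 1e-12)).sum()))
        return np.array(entropies)

    def signature(audio, frame_len=1024, hop=512, n_bands=4):
        """Per-frame band entropies; the fingerprint of an audio clip or stream."""
        starts = range(0, len(audio) - frame_len + 1, hop)
        return np.array([multiband_spectral_entropy(audio[i:i + frame_len], n_bands)
                         for i in starts])

    sig = signature(np.random.randn(8000))   # 1 s of noise at 8 kHz as a stand-in
    print(sig.shape)                          # (frames, bands)
    ```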

  15. The CloudBoard Research Platform: an interactive whiteboard for corporate users

    NASA Astrophysics Data System (ADS)

    Barrus, John; Schwartz, Edward L.

    2013-03-01

    Over one million interactive whiteboards (IWBs) are sold annually worldwide, predominantly for classroom use, with few sales for corporate use. Unmet needs for IWB corporate use were investigated, and the CloudBoard Research Platform (CBRP) was developed to investigate and test technology for meeting these needs. The CBRP supports audio conferencing with shared remote drawing activity, casual capture of whiteboard activity for long-term storage and retrieval, use of standard formats such as PDF for easy import of documents via the web and email, and easy export of documents. Company RFID badges and key fobs provide secure access to documents at the board, and automatic logout occurs after a period of inactivity. Users manage their documents with a web browser. Analytics and remote device management are provided for administrators. The IWB hardware consists of off-the-shelf components (a Hitachi UST Projector, SMART Technologies, Inc. IWB hardware, Mac Mini, Polycom speakerphone, etc.) and a custom occupancy sensor. The three back-end servers provide the web interface, document storage, and stroke and audio streaming. Ease of use, security, and robustness sufficient for internal adoption were achieved. Five of the 10 boards installed at various Ricoh sites have been in daily or weekly use for the past year, and total system downtime was less than an hour in 2012. Since the CBRP was installed, 65 registered users, 9 of whom use the system regularly, have created over 2600 documents.

  16. Cross-Modal Approach for Karaoke Artifacts Correction

    NASA Astrophysics Data System (ADS)

    Yan, Wei-Qi; Kankanhalli, Mohan S.

    In this chapter, we combine adaptive sampling in conjunction with video analogies (VA) to correct the audio stream in the karaoke environment κ = {κ(t) : κ(t) = (U(t), K(t)), t ∈ (t_s, t_e)}, where t_s and t_e are the start time and end time, respectively, and U(t) is the user multimedia data. We employ multiple streams from the karaoke data K(t) = (K_V(t), K_M(t), K_S(t)), where K_V(t), K_M(t), and K_S(t) are the video, the musical accompaniment, and the original singer's rendition, respectively, along with the user multimedia data U(t) = (U_A(t), U_V(t)), where U_V(t) is the user video captured with a camera and U_A(t) is the user's rendition of the song. We analyze the audio and video streaming features Ψ(κ) = {Ψ(U(t), K(t))} = {Ψ(U(t)), Ψ(K(t))} = {Ψ_U(t), Ψ_K(t)} to produce the corrected singing, namely the output U'(t), which is made as close as possible to the original singer's rendition. Note that Ψ represents any kind of feature processing.

  17. Cross-Modal Approach for Karaoke Artifacts Correction

    NASA Astrophysics Data System (ADS)

    Yan, Wei-Qi; Kankanhalli, Mohan S.

    In this chapter, we combine adaptive sampling in conjunction with video analogies (VA) to correct the audio stream in the karaoke environment κ = {κ(t) : κ(t) = (U(t), K(t)), t ∈ (t_s, t_e)}, where t_s and t_e are the start time and end time, respectively, and U(t) is the user multimedia data. We employ multiple streams from the karaoke data K(t) = (K_V(t), K_M(t), K_S(t)), where K_V(t), K_M(t), and K_S(t) are the video, the musical accompaniment, and the original singer's rendition, respectively, along with the user multimedia data U(t) = (U_A(t), U_V(t)), where U_V(t) is the user video captured with a camera and U_A(t) is the user's rendition of the song. We analyze the audio and video streaming features Ψ(κ) = {Ψ(U(t), K(t))} = {Ψ(U(t)), Ψ(K(t))} = {Ψ_U(t), Ψ_K(t)} to produce the corrected singing, namely the output U'(t), which is made as close as possible to the original singer's rendition. Note that Ψ represents any kind of feature processing.

  18. The power of digital audio in interactive instruction: An unexploited medium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pratt, J.; Trainor, M.

    1989-01-01

    Widespread use of audio in computer-based training (CBT) occurred with the advent of the interactive videodisc technology. This paper discusses the alternative of digital audio, which, unlike videodisc audio, enables one to rapidly revise the audio used in the CBT and which may be used in nonvideo CBT applications as well. We also discuss techniques used in audio script writing, editing, and production. Results from evaluations indicate a high degree of user satisfaction. 4 refs.

  19. Promoting Independence through Assistive Technology: Evaluating Audio Recorders to Support Grocery Shopping

    ERIC Educational Resources Information Center

    Bouck, Emily C.; Satsangi, Rajiv; Bartlett, Whitney; Weng, Pei-Lin

    2012-01-01

    In light of a positive research base regarding technology-based self-operating prompting systems (e.g., iPods), yet a concern about the sustainability of such technologies after a research project is completed, this study sought to explore the effectiveness and efficiency of an audio recorder, a low-cost, more commonly accessible technology to…

  20. Multi-stream face recognition on dedicated mobile devices for crime-fighting

    NASA Astrophysics Data System (ADS)

    Jassim, Sabah A.; Sellahewa, Harin

    2006-09-01

    Automatic face recognition is a useful tool in the fight against crime and terrorism. Technological advances in mobile communication systems and multi-application mobile devices enable the creation of hybrid platforms for active and passive surveillance. A dedicated mobile device that incorporates audio-visual sensors would not only complement existing networks of fixed surveillance devices (e.g. CCTV) but could also provide wide geographical coverage in almost any situation and anywhere. Such a device can hold a small portion of a law-enforcement agency's biometric database that consists of audio and/or visual data of a number of suspects, wanted, or missing persons who are expected to be in a local geographical area. This will assist law-enforcement officers on the ground in identifying persons whose biometric templates are downloaded onto their devices. Biometric data on the device can be regularly updated, which will reduce the number of faces an officer has to remember. Such a dedicated device would act as an active/passive mobile surveillance unit that incorporates automatic identification. This paper is concerned with the feasibility of using wavelet-based face recognition schemes on such devices. The proposed schemes extend our recently developed face verification scheme for implementation on a currently available PDA. In particular we will investigate the use of a combination of wavelet frequency channels for multi-stream face recognition. We shall present experimental results on the performance of our proposed schemes for a number of publicly available face databases, including a new AV database of videos recorded on a PDA.

  1. Using Technology to Improve Student Learning. NCREL Viewpoints, Volume 12

    ERIC Educational Resources Information Center

    Gahala, Jan, Ed.

    2004-01-01

    "Viewpoints" is a multimedia package containing two audio CDs and a short, informative booklet. This volume of "Viewpoints" focuses on how technology can help improve student learning. The audio CDs provide the voices, or viewpoints, of various leaders from the education field who work closely with technology issues. Their…

  2. Application Layer Multicast

    NASA Astrophysics Data System (ADS)

    Allani, Mouna; Garbinato, Benoît; Pedone, Fernando

    An increasing number of Peer-to-Peer (P2P) Internet applications rely today on data dissemination as their cornerstone, e.g., audio or video streaming, multi-party games. These applications typically depend on some support for multicast communication, where peers interested in a given data stream can join a corresponding multicast group. As a consequence, the efficiency, scalability, and reliability guarantees of these applications are tightly coupled with that of the underlying multicast mechanism.

  3. Atomization of metal (Materials Preparation Center)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2010-01-01

    Atomization of metal requires high pressure gas and specialized chambers for cooling and collecting the powders without contamination. The critical step for morphological control is the impingement of the gas on the melt stream. The video is a color video of a liquid metal stream being atomized by high pressure gas. This material was cast at the Ames Laboratory's Materials Preparation Center http://www.mpc.ameslab.gov WARNING - AUDIO IS LOUD.

  4. Design and develop a video conferencing framework for real-time telemedicine applications using secure group-based communication architecture.

    PubMed

    Mat Kiah, M L; Al-Bakri, S H; Zaidan, A A; Zaidan, B B; Hussain, Muzammil

    2014-10-01

    One of the applications of modern technology in telemedicine is video conferencing. An alternative to traveling to attend a conference or meeting, video conferencing is becoming increasingly popular among hospitals. By using this technology, doctors can help patients who are unable to physically visit hospitals. Video conferencing particularly benefits patients from rural areas, where good doctors are not always available. Telemedicine has proven to be a blessing to patients who have no access to the best treatment. A telemedicine system consists of customized hardware and software at two locations, namely, at the patient's and the doctor's end. In such cases, the video streams of the conferencing parties may contain highly sensitive information. Thus, real-time data security is one of the most important requirements when designing video conferencing systems. This study proposes a secure framework for video conferencing systems and a complete management solution for secure video conferencing groups. Java Media Framework Application Programming Interface classes are used to design and test the proposed secure framework. Real-time Transport Protocol over User Datagram Protocol is used to transmit the encrypted audio and video streams, and RSA and AES algorithms are used to provide the required security services. Results show that the encryption algorithm insignificantly increases the video conferencing computation time.
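
    The study implements security with the Java Media Framework, RSA, and AES; purely as an illustration of the symmetric part of that design, the sketch below encrypts one media payload with AES-CTR using the Python cryptography package (a hypothetical stand-in, not the authors' code), with the session key assumed to have been exchanged beforehand, e.g. via RSA.

    ```python
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def encrypt_payload(key: bytes, payload: bytes):
        """AES-CTR encryption of one RTP payload; returns (nonce, ciphertext)."""
        nonce = os.urandom(16)                 # fresh counter block per payload
        enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
        return nonce, enc.update(payload) + enc.finalize()

    def decrypt_payload(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
        dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
        return dec.update(ciphertext) + dec.finalize()

    key = os.urandom(32)                       # 256-bit session key
    nonce, ct = encrypt_payload(key, b"one compressed audio frame")
    assert decrypt_payload(key, nonce, ct) == b"one compressed audio frame"
    ```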

  5. Implementing Audio-CASI on Windows’ Platforms

    PubMed Central

    Cooley, Philip C.; Turner, Charles F.

    2011-01-01

    Audio computer-assisted self interviewing (Audio-CASI) technologies have recently been shown to provide important and sometimes dramatic improvements in the quality of survey measurements. This is particularly true for measurements requiring respondents to divulge highly sensitive information such as their sexual, drug use, or other sensitive behaviors. However, DOS-based Audio-CASI systems that were designed and adopted in the early 1990s have important limitations. Most salient is the poor control they provide for manipulating the video presentation of survey questions. This article reports our experiences adapting Audio-CASI to Microsoft Windows 3.1 and Windows 95 platforms. Overall, our Windows-based system provided the desired control over video presentation and afforded other advantages, including compatibility with a much wider array of audio devices than our DOS-based Audio-CASI technologies. These advantages came at the cost of increased system requirements, including the need for both more RAM and larger hard disks. While these costs will be an issue for organizations converting large inventories of PCs to Windows Audio-CASI today, this will not be a serious constraint for organizations and individuals with small inventories of machines to upgrade or those purchasing new machines today. PMID:22081743

  6. Low-cost synchronization of high-speed audio and video recordings in bio-acoustic experiments.

    PubMed

    Laurijssen, Dennis; Verreycken, Erik; Geipel, Inga; Daems, Walter; Peremans, Herbert; Steckel, Jan

    2018-02-27

    In this paper, we present a method for synchronizing high-speed audio and video recordings of bio-acoustic experiments. By embedding a random signal into the recorded video and audio data, robust synchronization of a diverse set of sensor streams can be performed without the need to keep detailed records. The synchronization can be performed using recording devices without dedicated synchronization inputs. We demonstrate the efficacy of the approach in two sets of experiments: behavioral experiments on different species of echolocating bats and the recordings of field crickets. We present the general operating principle of the synchronization method, discuss its synchronization strength and provide insights into how to construct such a device using off-the-shelf components. © 2018. Published by The Company of Biologists Ltd.
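
    The synchronization principle lends itself to a short sketch: both devices record the same random marker signal, and the position at which each recording best correlates with the marker gives the per-device offset. The code below is an illustrative NumPy/SciPy version of that idea, not the authors' acquisition pipeline, and all rates and offsets are made up.

    ```python
    import numpy as np
    from scipy.signal import correlate

    rng = np.random.default_rng(0)
    fs = 1000                                   # common analysis rate (illustrative)
    marker = rng.standard_normal(fs)            # 1 s shared random marker signal

    def marker_onset(recording: np.ndarray) -> int:
        """Sample index where the embedded marker starts, via cross-correlation."""
        return int(np.argmax(correlate(recording, marker, mode="valid")))

    # Two streams recorded on different devices with different marker offsets.
    stream_a = np.concatenate([np.zeros(300), marker, np.zeros(700)])
    stream_a = stream_a + 0.1 * rng.standard_normal(stream_a.size)
    stream_b = np.concatenate([np.zeros(800), marker, np.zeros(200)])
    stream_b = stream_b + 0.1 * rng.standard_normal(stream_b.size)

    offset_s = (marker_onset(stream_b) - marker_onset(stream_a)) / fs
    print(f"shift stream_b by {offset_s:.3f} s to align it with stream_a")
    ```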

  7. Robust media processing on programmable power-constrained systems

    NASA Astrophysics Data System (ADS)

    McVeigh, Jeff

    2005-03-01

    To achieve consumer-level quality, media systems must process continuous streams of audio and video data while maintaining exacting tolerances on sampling rate, jitter, synchronization, and latency. While it is relatively straightforward to design fixed-function hardware implementations to satisfy worst-case conditions, there is a growing trend to utilize programmable multi-tasking solutions for media applications. The flexibility of these systems enables support for multiple current and future media formats, which can reduce design costs and time-to-market. This paper provides practical engineering solutions to achieve robust media processing on such systems, with specific attention given to power-constrained platforms. The techniques covered in this article utilize the fundamental concepts of algorithm and software optimization, software/hardware partitioning, stream buffering, hierarchical prioritization, and system resource and power management. A novel enhancement to dynamically adjust processor voltage and frequency based on buffer fullness to reduce system power consumption is examined in detail. The application of these techniques is provided in a case study of a portable video player implementation based on a general-purpose processor running a non real-time operating system that achieves robust playback of synchronized H.264 video and MP3 audio from local storage and streaming over 802.11.
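
    The buffer-fullness policy can be sketched in a few lines; the thresholds and frequency steps below are invented for illustration, and a real player would invoke the platform's power-management interface rather than just returning a number.

    ```python
    # Illustrative mapping from decoded-media buffer fullness to a DVFS level.
    FREQ_LEVELS_MHZ = [300, 600, 900, 1200]

    def select_frequency(buffer_fullness: float) -> int:
        """Pick a CPU frequency from buffer fullness (0.0 = empty, 1.0 = full).

        A full buffer means decoding is ahead of playback, so the clock (and
        voltage) can drop to save power; a draining buffer forces a higher clock.
        """
        if buffer_fullness > 0.75:
            return FREQ_LEVELS_MHZ[0]
        if buffer_fullness > 0.50:
            return FREQ_LEVELS_MHZ[1]
        if buffer_fullness > 0.25:
            return FREQ_LEVELS_MHZ[2]
        return FREQ_LEVELS_MHZ[3]

    for fullness in (0.9, 0.6, 0.3, 0.1):
        print(fullness, "->", select_frequency(fullness), "MHz")
    ```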

  8. The challenges of archiving networked-based multimedia performances (Performance cryogenics)

    NASA Astrophysics Data System (ADS)

    Cohen, Elizabeth; Cooperstock, Jeremy; Kyriakakis, Chris

    2002-11-01

    Music archives and libraries have cultural preservation at the core of their charters. New forms of art often race ahead of the preservation infrastructure. The ability to stream multiple synchronized ultra-low latency streams of audio and video across a continent for a distributed interactive performance, such as music and dance with high-definition video and multichannel audio, raises a series of challenges for the architects of digital libraries and those responsible for cultural preservation. The archiving of such performances presents numerous challenges that go beyond simply recording each stream. Case studies of storage and subsequent retrieval issues for Internet2 collaborative performances are discussed. The development of shared reality and immersive environments raises questions such as: What constitutes an archived performance that occurs across a network (in multiple spaces over time)? What are the families of metadata necessary to reconstruct this virtual world in another venue or era? For example, if the network exhibited changes in latency, the performers most likely adapted. In a future recreation, the latency will most likely be completely different. We discuss the parameters of immersive environment acquisition and rendering, network architectures, software architecture, musical/choreographic scores, and environmental acoustics that must be considered to address this problem.

  9. Strategies for Transporting Data Between Classified and Unclassified Networks

    DTIC Science & Technology

    2016-03-01

    datagram protocol (UDP) must be used. The UDP is typically used when speed is a higher priority than data integrity, such as in music or video streaming ...and the exit point of data are separate and can be tightly controlled. This does effectively prevent the comingling of data and is used in industry to...perform functions such as streaming video and audio from secure to insecure networks (ref. 1). A second disadvantage lies in the fact that the

  10. Reduction in time-to-sleep through EEG based brain state detection and audio stimulation.

    PubMed

    Zhuo Zhang; Cuntai Guan; Ti Eu Chan; Juanhong Yu; Aung Aung Phyo Wai; Chuanchu Wang; Haihong Zhang

    2015-08-01

    We developed an EEG- and audio-based sleep sensing and enhancing system, called iSleep (interactive Sleep enhancement apparatus). The system adopts a closed-loop approach which optimizes the audio recording selection based on the user's sleep status detected through our online EEG computing algorithm. The iSleep prototype comprises two major parts: 1) a sleeping mask integrated with a single-channel EEG electrode and amplifier, a pair of stereo earphones and a microcontroller with wireless circuit for control and data streaming; 2) a mobile app to receive EEG signals for online sleep monitoring and audio playback control. In this study we attempt to validate our hypothesis that appropriate audio stimulation in relation to brain state can induce faster onset of sleep and improve the quality of a nap. We conduct experiments on 28 healthy subjects, each undergoing two nap sessions - one with a quiet background and one with our audio stimulation. We compare the time-to-sleep in both sessions between two groups of subjects, i.e., fast and slow sleep-onset groups. The p-value obtained from the Wilcoxon signed rank test is 1.22e-04 for the slow-onset group, which demonstrates that iSleep can significantly reduce the time-to-sleep for people with difficulty falling asleep.
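
    The statistical comparison is a paired, non-parametric test; a minimal sketch with SciPy is shown below, using made-up time-to-sleep values rather than the study's data.

    ```python
    import numpy as np
    from scipy.stats import wilcoxon

    # Made-up time-to-sleep values (minutes); one quiet nap and one
    # audio-stimulated nap per subject, paired by subject.
    quiet = np.array([24.0, 31.5, 28.0, 40.2, 35.1, 26.7, 33.3, 29.9])
    stimulated = np.array([18.2, 25.0, 27.1, 30.5, 28.4, 22.0, 26.8, 24.3])

    stat, p_value = wilcoxon(quiet, stimulated)   # Wilcoxon signed rank test
    print(f"W = {stat:.1f}, p = {p_value:.4f}")
    ```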

  11. Audio-visual integration through the parallel visual pathways.

    PubMed

    Kaposvári, Péter; Csete, Gergő; Bognár, Anna; Csibri, Péter; Tóth, Eszter; Szabó, Nikoletta; Vécsei, László; Sáry, Gyula; Tamás Kincses, Zsigmond

    2015-10-22

    Audio-visual integration has been shown to be present in a wide range of different conditions, some of which are processed through the dorsal, and others through the ventral visual pathway. Whereas neuroimaging studies have revealed integration-related activity in the brain, there has been no imaging study of the possible role of segregated visual streams in audio-visual integration. We set out to determine how the different visual pathways participate in this communication. We investigated how audio-visual integration can be supported through the dorsal and ventral visual pathways during the double flash illusion. Low-contrast and chromatic isoluminant stimuli were used to drive preferably the dorsal and ventral pathways, respectively. In order to identify the anatomical substrates of the audio-visual interaction in the two conditions, the psychophysical results were correlated with the white matter integrity as measured by diffusion tensor imaging. The psychophysiological data revealed a robust double flash illusion in both conditions. A correlation between the psychophysical results and local fractional anisotropy was found in the occipito-parietal white matter in the low-contrast condition, while a similar correlation was found in the infero-temporal white matter in the chromatic isoluminant condition. Our results indicate that both of the parallel visual pathways may play a role in the audio-visual interaction. Copyright © 2015. Published by Elsevier B.V.

  12. 37 CFR 383.2 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 383.2 Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF CONGRESS RATES AND TERMS... make digital audio transmissions as part of a Service (as defined in paragraph (h) of this section...) The audio channels are delivered by digital audio transmissions through a technology that is incapable...

  13. 37 CFR 383.2 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 383.2 Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF CONGRESS RATES AND TERMS... make digital audio transmissions as part of a Service (as defined in paragraph (h) of this section...) The audio channels are delivered by digital audio transmissions through a technology that is incapable...

  14. 37 CFR 383.2 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 383.2 Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF CONGRESS RATES AND TERMS... make digital audio transmissions as part of a Service (as defined in paragraph (h) of this section...) The audio channels are delivered by digital audio transmissions through a technology that is incapable...

  15. 37 CFR 383.2 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 383.2 Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF CONGRESS RATES AND TERMS... make digital audio transmissions as part of a Service (as defined in paragraph (h) of this section...) The audio channels are delivered by digital audio transmissions through a technology that is incapable...

  16. High performance MPEG-audio decoder IC

    NASA Technical Reports Server (NTRS)

    Thorn, M.; Benbassat, G.; Cyr, K.; Li, S.; Gill, M.; Kam, D.; Walker, K.; Look, P.; Eldridge, C.; Ng, P.

    1993-01-01

    The emerging digital audio and video compression technology brings both an opportunity and a new challenge to IC design. The pervasive application of compression technology to consumer electronics will require high volume, low cost ICs and fast time to market of the prototypes and production units. At the same time, the algorithms used in the compression technology result in complex VLSI ICs. The conflicting challenges of algorithm complexity, low cost, and fast time to market have an impact on device architecture and design methodology. The work presented in this paper is about the design of a dedicated, high precision, Moving Picture Experts Group (MPEG) audio decoder.

  17. Determination of the duty cycle of WLAN for realistic radio frequency electromagnetic field exposure assessment.

    PubMed

    Joseph, Wout; Pareit, Daan; Vermeeren, Günter; Naudts, Dries; Verloock, Leen; Martens, Luc; Moerman, Ingrid

    2013-01-01

    Wireless Local Area Networks (WLANs) are commonly deployed in various environments. WLAN data packets are not transmitted continuously, yet worst-case exposure to WLAN is often assessed assuming 100% activity, leading to huge overestimations. Actual duty cycles of WLAN are thus of importance for time-averaging of exposure when checking compliance with international guidelines on limiting adverse health effects. In this paper, duty cycles of WLAN using Wi-Fi technology are determined for exposure assessment on a large scale at 179 locations for different environments and activities (file transfer, video streaming, audio, surfing on the internet, etc.). The median duty cycle equals 1.4% and the 95th percentile is 10.4% (standard deviation SD = 6.4%). The largest duty cycles are observed in urban and industrial environments. For actual applications, the theoretical upper limit for the WLAN duty cycle is 69.8% and 94.7% for the maximum and minimum physical data rates, respectively. For lower data rates, higher duty cycles will occur. Although counterintuitive at first sight, poor WLAN connections result in higher possible exposures. File transfer at the maximum data rate results in median duty cycles of 47.6% (SD = 16%), while it results in median values of 91.5% (SD = 18%) at the minimum data rate. Surfing and audio streaming use the wireless medium less intensively and therefore have median duty cycles lower than 3.2% (SD = 0.5-7.5%). In a specific example, overestimations of up to a factor of 8 in electric field strength occur when 100% activity is assumed instead of realistic duty cycles. Copyright © 2012 Elsevier Ltd. All rights reserved.
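
    A worked example of how a duty cycle and the resulting field overestimation relate, with invented numbers (the paper's values come from measurements at 179 locations): the duty cycle is the summed frame airtime over the observation window, time-averaged power scales linearly with it, and the time-averaged E-field therefore scales with its square root.

    ```python
    # Invented capture: airtime of each Wi-Fi frame seen in a 100 ms window.
    frame_airtimes_s = [0.0002, 0.00015, 0.0003, 0.00025]
    observation_window_s = 0.1

    duty_cycle = sum(frame_airtimes_s) / observation_window_s
    print(f"duty cycle = {duty_cycle:.2%}")

    # Average power scales with the duty cycle, so the time-averaged E-field
    # is sqrt(duty_cycle) times the 100%-activity worst case.
    e_field_ratio = duty_cycle ** 0.5
    print(f"E-field relative to worst case = {e_field_ratio:.3f}")
    ```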

  18. Photo-acoustic and video-acoustic methods for sensing distant sound sources

    NASA Astrophysics Data System (ADS)

    Slater, Dan; Kozacik, Stephen; Kelmelis, Eric

    2017-05-01

    Long range telescopic video imagery of distant terrestrial scenes, aircraft, rockets and other aerospace vehicles can be a powerful observational tool. But what about the associated acoustic activity? A new technology, Remote Acoustic Sensing (RAS), may provide a method to remotely listen to the acoustic activity near these distant objects. Local acoustic activity sometimes weakly modulates the ambient illumination in a way that can be remotely sensed. RAS is a new type of microphone that separates an acoustic transducer into two spatially separated components: 1) a naturally formed in situ acousto-optic modulator (AOM) located within the distant scene and 2) a remote sensing readout device that recovers the distant audio. These two elements are passively coupled over long distances at the speed of light by naturally occurring ambient light energy or other electromagnetic fields. Stereophonic, multichannel and acoustic beam forming are all possible using RAS techniques, and when combined with high-definition video imagery they can help to provide a more cinema-like immersive viewing experience. A practical implementation of a remote acousto-optic readout device can be a challenging engineering problem. The acoustic influence on the optical signal is generally weak and often accompanied by a strong bias term. The optical signal is further degraded by atmospheric seeing turbulence. In this paper, we consider two fundamentally different optical readout approaches: 1) a low pixel count photodiode based RAS photoreceiver and 2) audio extraction directly from a video stream. Most of our RAS experiments to date have used the first method for reasons of performance and simplicity. But there are potential advantages to extracting audio directly from a video stream. These advantages include the straightforward ability to work with multiple AOMs (useful for acoustic beam forming), simpler optical configurations, and a potential ability to use certain preexisting video recordings. However, doing so requires overcoming significant limitations, typically including much lower sample rates, reduced sensitivity and dynamic range, more expensive video hardware, and the need for sophisticated video processing. The ATCOM real time image processing software environment provides many of the needed capabilities for researching video-acoustic signal extraction. ATCOM currently is a powerful tool for the visual enhancement of atmospheric turbulence distorted telescopic views. In order to explore the potential of acoustic signal recovery from video imagery we modified ATCOM to extract audio waveforms from the same telescopic video sources. In this paper, we demonstrate and compare both readout techniques for several aerospace test scenarios to better show where each has advantages.
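
    The video-stream readout approach can be illustrated very simply: the mean brightness of each frame forms a waveform sampled at the video frame rate, which, after removal of the bias term, is a crude audio track. The sketch below is an illustrative NumPy version of that idea, not the ATCOM implementation, and the synthetic frames and rates are made up.

    ```python
    import numpy as np

    def audio_from_frames(frames: np.ndarray) -> np.ndarray:
        """Crude audio waveform from grayscale video frames (n_frames, h, w).

        The waveform is sampled at the video frame rate, so the recoverable
        audio bandwidth is capped at half that rate.
        """
        trace = frames.reshape(frames.shape[0], -1).mean(axis=1)
        trace = trace - trace.mean()              # remove the strong bias term
        peak = np.max(np.abs(trace))
        return trace / peak if peak > 0 else trace

    # 2000 synthetic frames at 1000 fps whose brightness carries a 100 Hz tone.
    fps = 1000
    t = np.arange(2000) / fps
    frames = 0.5 + 0.01 * np.sin(2 * np.pi * 100 * t)[:, None, None] * np.ones((1, 8, 8))
    waveform = audio_from_frames(frames)
    print(waveform.shape)   # (2000,)
    ```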

  19. Promoting Early Literacy for Diverse Learners Using Audio and Video Technology

    ERIC Educational Resources Information Center

    Skouge, James R.; Rao, Kavita; Boisvert, Precille C.

    2007-01-01

    Practical applications of multimedia technologies that support early literacy are described and evaluated, including several variations of recorded books and stories, utilizing mainstream audio and video recording appropriate for libraries and schools. Special emphasis is given to the needs of children with disabilities and children who are…

  20. Laboratory and in-flight experiments to evaluate 3-D audio display technology

    NASA Technical Reports Server (NTRS)

    Ericson, Mark; Mckinley, Richard; Kibbe, Marion; Francis, Daniel

    1994-01-01

    Laboratory and in-flight experiments were conducted to evaluate 3-D audio display technology for cockpit applications. A 3-D audio display generator was developed which digitally encodes naturally occurring direction information onto any audio signal and presents the binaural sound over headphones. The acoustic image is stabilized for head movement by use of an electromagnetic head-tracking device. In the laboratory, a 3-D audio display generator was used to spatially separate competing speech messages to improve the intelligibility of each message. Up to a 25 percent improvement in intelligibility was measured for spatially separated speech at high ambient noise levels (115 dB SPL). During the in-flight experiments, pilots reported that spatial separation of speech communications provided a noticeable improvement in intelligibility. The use of 3-D audio for target acquisition was also investigated. In the laboratory, 3-D audio enabled the acquisition of visual targets in about two seconds average response time at 17 degrees accuracy. During the in-flight experiments, pilots correctly identified ground targets 50, 75, and 100 percent of the time at separation angles of 12, 20, and 35 degrees, respectively. In general, pilot performance in the field with the 3-D audio display generator was as expected, based on data from laboratory experiments.

  1. MPEG-7 audio-visual indexing test-bed for video retrieval

    NASA Astrophysics Data System (ADS)

    Gagnon, Langis; Foucher, Samuel; Gouaillier, Valerie; Brun, Christelle; Brousseau, Julie; Boulianne, Gilles; Osterrath, Frederic; Chapdelaine, Claude; Dutrisac, Julie; St-Onge, Francis; Champagne, Benoit; Lu, Xiaojian

    2003-12-01

    This paper reports on the development status of a Multimedia Asset Management (MAM) test-bed for content-based indexing and retrieval of audio-visual documents within the MPEG-7 standard. The project, called "MPEG-7 Audio-Visual Document Indexing System" (MADIS), specifically targets the indexing and retrieval of video shots and key frames from documentary film archives, based on audio-visual content like face recognition, motion activity, speech recognition and semantic clustering. The MPEG-7/XML encoding of the film database is done off-line. The description decomposition is based on a temporal decomposition into visual segments (shots), key frames and audio/speech sub-segments. The visible outcome will be a web site that allows video retrieval using a proprietary XQuery-based search engine and is accessible to members at the Canadian National Film Board (NFB) Cineroute site. For example, the end user will be able to search for movie shots in the database that were produced in a specific year, that contain the face of a specific actor saying a specific word, and in which there is no motion activity. Video streaming is performed over the high bandwidth CA*net network deployed by CANARIE, a public Canadian Internet development organization.

  2. Audio Teleconferencing: Low Cost Technology for External Studies Networking.

    ERIC Educational Resources Information Center

    Robertson, Bill

    1987-01-01

    This discussion of the benefits of audio teleconferencing for distance education programs and for business and government applications focuses on the recent experience of Canadian educational users. Four successful operating models and their costs are reviewed, and it is concluded that audio teleconferencing is cost efficient and educationally…

  3. Digital Audio Sampling for Film and Video.

    ERIC Educational Resources Information Center

    Stanton, Michael J.

    Digital audio sampling is explained, and some of its implications in digital sound applications are discussed. Digital sound equipment is rapidly replacing analog recording devices as the state-of-the-art in audio technology. The philosophy of digital recording involves doing away with the continuously variable analog waveforms and turning the…

  4. The Sweet-Home project: audio processing and decision making in smart home to improve well-being and reliance.

    PubMed

    Vacher, Michel; Chahuara, Pedro; Lecouteux, Benjamin; Istrate, Dan; Portet, Francois; Joubert, Thierry; Sehili, Mohamed; Meillon, Brigitte; Bonnefond, Nicolas; Fabre, Sébastien; Roux, Camille; Caffiau, Sybille

    2013-01-01

    The Sweet-Home project aims at providing audio-based interaction technology that lets the user have full control over their home environment, at detecting distress situations and at easing the social inclusion of the elderly and frail population. This paper presents an overview of the project focusing on the implemented techniques for speech and sound recognition as context-aware decision making with uncertainty. A user experiment in a smart home demonstrates the interest of this audio-based technology.

  5. 4DCAPTURE: a general purpose software package for capturing and analyzing two- and three-dimensional motion data acquired from video sequences

    NASA Astrophysics Data System (ADS)

    Walton, James S.; Hodgson, Peter; Hallamasek, Karen; Palmer, Jake

    2003-07-01

    4DVideo is creating a general purpose capability for capturing and analyzing kinematic data from video sequences in near real-time. The core element of this capability is a software package designed for the PC platform. The software ("4DCapture") is designed to capture and manipulate customized AVI files that can contain a variety of synchronized data streams -- including audio, video, centroid locations -- and signals acquired from more traditional sources (such as accelerometers and strain gauges.) The code includes simultaneous capture or playback of multiple video streams, and linear editing of the images (together with the ancillary data embedded in the files). Corresponding landmarks seen from two or more views are matched automatically, and photogrammetric algorithms permit multiple landmarks to be tracked in two- and three-dimensions -- with or without lens calibrations. Trajectory data can be processed within the main application or they can be exported to a spreadsheet where they can be processed or passed along to a more sophisticated, stand-alone, data analysis application. Previous attempts to develop such applications for high-speed imaging have been limited in their scope, or by the complexity of the application itself. 4DVideo has devised a friendly ("FlowStack") user interface that assists the end user in capturing and treating image sequences in a natural progression. 4DCapture employs the AVI 2.0 standard and DirectX technology which effectively eliminates the file size limitations found in older applications. In early tests, 4DVideo has streamed three RS-170 video sources to disk for more than an hour without loss of data. At this time, the software can acquire video sequences in three ways: (1) directly, from up to three hard-wired cameras supplying RS-170 (monochrome) signals; (2) directly, from a single camera or video recorder supplying an NTSC (color) signal; and (3) by importing existing video streams in the AVI 1.0 or AVI 2.0 formats. The latter is particularly useful for high-speed applications where the raw images are often captured and stored by the camera before being downloaded. Provision has been made to synchronize data acquired from any combination of these video sources using audio and visual "tags."

  6. Newly available technologies present expanding opportunities for scientific and technical information exchange

    NASA Astrophysics Data System (ADS)

    Tolzman, Jean M.

    1993-03-01

    The potential for expanded communication among researchers, scholars, and students is supported by growth in the capabilities for electronic communication as well as expanding access to various forms of electronic interchange and computing capabilities. Research supported by the National Aeronautics and Space Administration points to a future where workstations with audio and video monitors and screen-sharing protocols are used to support collaborations with colleagues located throughout the world. Instruments and sensors all over the world will produce data streams that will be brought together and analyzed to produce new findings, which in turn can be distributed electronically. New forms of electronic journals will emerge and provide opportunities for researchers and scientists to electronically and interactively exchange information in a wide range of structures and formats. Ultimately, the wide-scale use of these technologies in the dissemination of research results and the stimulation of collegial dialogue will change the way we represent and express our knowledge of the world. A new paradigm will evolve, perhaps a truly worldwide 'invisible college'.

  7. Newly available technologies present expanding opportunities for scientific and technical information exchange

    NASA Technical Reports Server (NTRS)

    Tolzman, Jean M.

    1993-01-01

    The potential for expanded communication among researchers, scholars, and students is supported by growth in the capabilities for electronic communication as well as expanding access to various forms of electronic interchange and computing capabilities. Research supported by the National Aeronautics and Space Administration points to a future where workstations with audio and video monitors and screen-sharing protocols are used to support collaborations with colleagues located throughout the world. Instruments and sensors all over the world will produce data streams that will be brought together and analyzed to produce new findings, which in turn can be distributed electronically. New forms of electronic journals will emerge and provide opportunities for researchers and scientists to electronically and interactively exchange information in a wide range of structures and formats. Ultimately, the wide-scale use of these technologies in the dissemination of research results and the stimulation of collegial dialogue will change the way we represent and express our knowledge of the world. A new paradigm will evolve, perhaps a truly worldwide 'invisible college'.

  8. MWAHCA: a multimedia wireless ad hoc cluster architecture.

    PubMed

    Diaz, Juan R; Lloret, Jaime; Jimenez, Jose M; Sendra, Sandra

    2014-01-01

    Wireless ad hoc networks provide a flexible and adaptable infrastructure to transport data over a great variety of environments. Recently, real-time audio and video data transmission has increased due to the appearance of many multimedia applications. One of the major challenges is to ensure the quality of multimedia streams after they have passed through a wireless ad hoc network; this requires adapting the network architecture to the multimedia QoS requirements. In this paper we propose a new architecture to organize and manage cluster-based ad hoc networks in order to provide multimedia streams. The proposed architecture adapts the wireless network topology in order to improve the quality of audio and video transmissions. To achieve this goal, the architecture uses information such as each node's capacity and the QoS parameters (bandwidth, delay, jitter, and packet loss). The architecture splits the network into clusters which are specialized in specific multimedia traffic. The real-system performance study provided at the end of the paper demonstrates the feasibility of the proposal.
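
    As a rough, hypothetical illustration of the cluster-assignment idea only (not the authors' algorithm), the sketch below scores each node against the QoS needs of a traffic class using the four parameters named in the abstract and places it in the best-matching cluster; the thresholds and node figures are invented for the example.

        # Hypothetical sketch: assign nodes to traffic-specific clusters by QoS fit.
        TRAFFIC_CLASSES = {
            # min bandwidth (kbps), max delay (ms), max jitter (ms), max loss (%)
            "video": {"bw": 2000, "delay": 150, "jitter": 30, "loss": 1.0},
            "audio": {"bw": 64, "delay": 100, "jitter": 20, "loss": 2.0},
            "data": {"bw": 10, "delay": 1000, "jitter": 1000, "loss": 5.0},
        }

        def qos_fit(node, req):
            """Score 0-4: one point per QoS requirement the node satisfies."""
            return ((node["bw"] >= req["bw"]) + (node["delay"] <= req["delay"]) +
                    (node["jitter"] <= req["jitter"]) + (node["loss"] <= req["loss"]))

        def assign_clusters(nodes):
            clusters = {name: [] for name in TRAFFIC_CLASSES}
            for node in nodes:
                best = max(TRAFFIC_CLASSES, key=lambda c: qos_fit(node, TRAFFIC_CLASSES[c]))
                clusters[best].append(node["id"])
            return clusters

        nodes = [
            {"id": "n1", "bw": 5400, "delay": 40, "jitter": 8, "loss": 0.2},
            {"id": "n2", "bw": 300, "delay": 90, "jitter": 15, "loss": 1.5},
            {"id": "n3", "bw": 50, "delay": 400, "jitter": 80, "loss": 3.0},
        ]
        print(assign_clusters(nodes))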

  9. Quality models for audiovisual streaming

    NASA Astrophysics Data System (ADS)

    Thang, Truong Cong; Kim, Young Suk; Kim, Cheon Seog; Ro, Yong Man

    2006-01-01

    Quality is an essential factor in multimedia communication, especially in compression and adaptation. Quality metrics can be divided into three categories: within-modality quality, cross-modality quality, and multi-modality quality. Most research has so far focused on within-modality quality. Moreover, quality is normally considered only from the perceptual perspective. In practice, content may be drastically adapted, even converted to another modality, and in this case the quality should be considered from the semantic perspective as well. In this work, we investigate multi-modality quality from the semantic perspective. To model the semantic quality, we apply the concept of a "conceptual graph", which consists of semantic nodes and relations between the nodes. As a typical multi-modality example, we focus on an audiovisual streaming service. Specifically, we evaluate the amount of information conveyed by audiovisual content in which both the video and audio channels may be strongly degraded, and the audio may even be converted to text. In the experiments, we also consider a perceptual quality model of audiovisual content, so as to see how it differs from the semantic quality model.
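
    One plausible way to make the conceptual-graph idea concrete (a sketch under our own assumptions, not the authors' metric) is to count how many semantic nodes and relations of the original content survive an adaptation such as audio-to-text conversion:

        # Hypothetical sketch: semantic quality as the fraction of concepts and
        # relations preserved after adaptation. A graph is a set of concept labels
        # plus a set of (subject, relation, object) triples.
        def semantic_quality(original, adapted):
            kept_nodes = len(original["nodes"] & adapted["nodes"])
            kept_rels = len(original["relations"] & adapted["relations"])
            total = len(original["nodes"]) + len(original["relations"])
            return (kept_nodes + kept_rels) / total if total else 1.0

        source = {
            "nodes": {"reporter", "flood", "city"},
            "relations": {("reporter", "describes", "flood"), ("flood", "hits", "city")},
        }
        # Audio converted to text: the spoken description survives, ambient detail is lost.
        text_version = {
            "nodes": {"reporter", "flood"},
            "relations": {("reporter", "describes", "flood")},
        }
        print(round(semantic_quality(source, text_version), 2))  # 0.6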

  10. Audio-Enhanced Technology Strengthens Community Building in the Online Classroom

    ERIC Educational Resources Information Center

    Weber, Michele C.; Dereshiwsky, Mary

    2013-01-01

    The purpose of this phenomenological study was to explore the lived experiences of students in an audio-enhanced online classroom. Online students who had participated in such a classroom experience were interviewed. The interviews were analyzed to explain the students' experiences with the technology online and show how each student perceived the…

  11. 37 CFR 201.28 - Statements of Account for digital audio recording devices or media.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... following information for each unique combination of product category, technology, series or model number... quarterly period covered by the statement. (9) Technology of a device or medium is a digital audio recording... Congress. Forms and other information may be requested from the Licensing Division by facsimile...

  12. Sub-Audio Magnetics: Miniature Sensor Technology for Simultaneous Magnetic and Electromagnetic Detection of UXO

    DTIC Science & Technology

    2010-07-01

    is comprised of 4 x 40 m lengths of braided copper wire (Figure 29) with a diameter of 15 mm, capable of passing a 500 amp current. In normal...fuel tank and rubber hoses.

  13. 77 FR 42764 - Distribution of the 2005, 2006, 2007 and 2008 Digital Audio Recording Technology Royalty Funds...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-20

    ... LIBRARY OF CONGRESS Copyright Royalty Board [Docket No. 2010-8 CRB DD 2005-2008 (MW)] Distribution of the 2005, 2006, 2007 and 2008 Digital Audio Recording Technology Royalty Funds for the Musical Works Funds AGENCY: Copyright Royalty Board, Library of Congress. ACTION: Notice announcing commencement...

  14. The future of acoustics distance education at Penn State

    NASA Astrophysics Data System (ADS)

    Brooks, Karen P.; Sparrow, Victor W.; Atchley, Anthony A.

    2005-04-01

    For nearly 20 years Penn State's Graduate Program in Acoustics has offered a graduate distance education program, established in response to Department of Defense needs. Using satellite technology, courses provided synchronous classes incorporating one-way video and two-way audio. Advancements in technology allowed more sophisticated delivery systems to be considered and courses to be offered to employees of industry. Current technology utilizes real-time video streaming and archived lectures to enable individuals anywhere to access course materials. The evolution of technology, expansion of the geographic market and changing needs of the student, among other issues, require a new paradigm. This paradigm must consider issues such as faculty acceptance and questions facing all institutions with regard to blurring the distinction between residence and distance education. Who will be the students? What will be the purpose of education? Will it be to provide professional and/or research degrees? How will the Acoustics Program ensure it remains attractive to all students, while working within the boundaries and constraints of a major research university? This is a look at current practice and issues with an emphasis on those relevant to constructing the Acoustics Program's distance education strategy for the future.

  15. Digital Advances in Contemporary Audio Production.

    ERIC Educational Resources Information Center

    Shields, Steven O.

    Noting that a revolution in sonic high fidelity occurred during the 1980s as digital-based audio production methods began to replace traditional analog modes, this paper offers both an overview of digital audio theory and descriptions of some of the related digital production technologies that have begun to emerge from the mating of the computer…

  16. Design of batch audio/video conversion platform based on JavaEE

    NASA Astrophysics Data System (ADS)

    Cui, Yansong; Jiang, Lianpin

    2018-03-01

    With the rapid development of the digital publishing industry, audio/video publishing is characterized by a diversity of coding standards for audio and video files, massive data volumes, and other significant features. Faced with massive and diverse data, converting it quickly and efficiently to a unified coding format poses great difficulties for digital publishing organizations. In view of this demand, this paper proposes a distributed online audio and video format conversion platform with a B/S structure, built on the Spring+SpringMVC+Mybatis development architecture and combined with the open-source FFMPEG format conversion tool. Based on the Java language, the key technologies and strategies used in the platform architecture are analyzed, and an efficient audio and video format conversion system is designed and developed, composed of a front display system, a core scheduling server, and a conversion server. The test results show that, compared with an ordinary audio and video conversion scheme, the batch audio and video format conversion platform can effectively improve the conversion efficiency of audio and video files and reduce the complexity of the work. Practice has proved that the key technology discussed in this paper can be applied to large-batch file processing and has practical application value.
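
    The platform itself is a Java/Spring system, but the conversion step it schedules reduces to invoking FFMPEG on each file and spreading the jobs across workers. The sketch below illustrates only that idea in Python; the target codecs, worker count and directory names are assumptions for the example, not the paper's configuration.

        import subprocess
        from concurrent.futures import ThreadPoolExecutor
        from pathlib import Path

        def convert(src, dst_dir):
            """Transcode one file to a unified H.264/AAC MP4 via the ffmpeg CLI."""
            dst = dst_dir / (src.stem + ".mp4")
            subprocess.run(
                ["ffmpeg", "-y", "-i", str(src), "-c:v", "libx264", "-c:a", "aac", str(dst)],
                check=True,
            )
            return dst

        def batch_convert(sources, dst_dir, workers=4):
            """Dispatch conversions to a small worker pool, as a scheduling server might."""
            dst_dir.mkdir(parents=True, exist_ok=True)
            with ThreadPoolExecutor(max_workers=workers) as pool:
                return list(pool.map(lambda s: convert(s, dst_dir), sources))

        # Example: convert every AVI in ./incoming into ./converted (requires ffmpeg on PATH).
        # batch_convert(sorted(Path("incoming").glob("*.avi")), Path("converted"))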

  17. Audio Watermark Embedding Technique Applying Auditory Stream Segregation: "G-encoder Mark" Able to Be Extracted by Mobile Phone

    NASA Astrophysics Data System (ADS)

    Modegi, Toshio

    We are developing audio watermarking techniques which enable extraction of embedded data by cell phones. For this we have to embed data in frequency ranges where the auditory response is prominent, so data embedding causes considerable audible noise. Previously we proposed applying a two-channel stereo playback feature, in which noise generated by the data-embedded left-channel signal is reduced by the right-channel signal. However, this proposal has the practical problem of restricting the location of the extracting terminal. In this paper, we propose synthesizing the noise-reducing right-channel signal with the left signal, reducing noise completely by generating an auditory stream segregation phenomenon for the listener. This new proposal makes the separate noise-reducing right-channel signal unnecessary and supports monaural playback. Moreover, we propose a wide-band embedding method causing dual auditory stream segregation phenomena, which enables data embedding over the whole public telephone frequency range and stable extraction with 3G mobile phones. With these proposals, extraction precision becomes higher than with the previously proposed method, while the quality degradation of the embedded signal becomes smaller. In this paper we present an outline of the newly proposed method and experimental results compared with those of the previously proposed method.

  18. Fuzzy Logic-Based Audio Pattern Recognition

    NASA Astrophysics Data System (ADS)

    Malcangi, M.

    2008-11-01

    Audio and audio-pattern recognition is becoming one of the most important technologies for automatically controlling embedded systems. Fuzzy logic may be the most important enabling methodology due to its ability to model such applications rapidly and economically. An audio and audio-pattern recognition engine based on fuzzy logic has been developed for use in very low-cost and deeply embedded systems to automate human-to-machine and machine-to-machine interaction. This engine consists of simple digital signal-processing algorithms for feature extraction and normalization, and a set of pattern-recognition rules tuned manually or automatically by a self-learning process.
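
    To make the pipeline concrete, here is a deliberately small, hypothetical sketch (not the authors' engine): two normalized audio features, triangular membership functions, and a pair of hand-tuned rules that yield a class and a confidence.

        def triangular(x, a, b, c):
            """Triangular membership function peaking at b over the interval [a, c]."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def classify(energy, zcr):
            """Toy fuzzy rules on normalized short-time energy and zero-crossing rate."""
            low_energy = triangular(energy, -0.1, 0.0, 0.5)
            high_energy = triangular(energy, 0.5, 1.0, 1.1)
            low_zcr = triangular(zcr, -0.1, 0.0, 0.5)
            high_zcr = triangular(zcr, 0.5, 1.0, 1.1)
            voiced = min(high_energy, low_zcr)   # rule 1: high energy AND low ZCR
            noise = min(low_energy, high_zcr)    # rule 2: low energy AND high ZCR
            return ("voiced", voiced) if voiced >= noise else ("noise", noise)

        print(classify(energy=0.8, zcr=0.2))  # ('voiced', 0.6)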

  19. "Listen to This!" Utilizing Audio Recordings to Improve Instructor Feedback on Writing in Mathematics

    ERIC Educational Resources Information Center

    Weld, Christopher

    2014-01-01

    Providing audio files in lieu of written remarks on graded assignments is arguably a more effective means of feedback, allowing students to better process and understand the critique and improve their future work. With emerging technologies and software, this audio feedback alternative to the traditional paradigm of providing written comments…

  20. Incentive Mechanisms for Peer-to-Peer Streaming

    ERIC Educational Resources Information Center

    Pai, Vinay

    2011-01-01

    The increasing popularity of high-bandwidth Internet connections has enabled new applications like the online delivery of high-quality audio and video content. Conventional server-client approaches place the entire burden of delivery on the content provider's server, making these services expensive to provide. A peer-to-peer approach allows end…

  1. Inferring Speaker Affect in Spoken Natural Language Communication

    ERIC Educational Resources Information Center

    Pon-Barry, Heather Roberta

    2013-01-01

    The field of spoken language processing is concerned with creating computer programs that can understand human speech and produce human-like speech. Regarding the problem of understanding human speech, there is currently growing interest in moving beyond speech recognition (the task of transcribing the words in an audio stream) and towards…

  2. 78 FR 31800 - Accessible Emergency Information, and Apparatus Requirements for Emergency Information and Video...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-24

    ...] Accessible Emergency Information, and Apparatus Requirements for Emergency Information and Video Description... should be the obligation of the apparatus manufacturer, under section 203, to ensure that the devices are... secondary audio stream on all equipment, including older equipment. In the absence of an industry solution...

  3. ATLAS Live: Collaborative Information Streams

    NASA Astrophysics Data System (ADS)

    Goldfarb, Steven; ATLAS Collaboration

    2011-12-01

    I report on a pilot project launched in 2010 focusing on facilitating communication and information exchange within the ATLAS Collaboration, through the combination of digital signage software and webcasting. The project, called ATLAS Live, implements video streams of information, ranging from detailed detector and data status to educational and outreach material. The content, including text, images, video and audio, is collected, visualised and scheduled using digital signage software. The system is robust and flexible, utilizing scripts to input data from remote sources, such as the CERN Document Server, Indico, or any available URL, and to integrate these sources into professional-quality streams, including text scrolling, transition effects, inter and intra-screen divisibility. Information is published via the encoding and webcasting of standard video streams, viewable on all common platforms, using a web browser or other common video tool. Authorisation is enforced at the level of the streaming and at the web portals, using the CERN SSO system.

  4. Stochastic Packet Loss Model to Evaluate QoE Impairments

    NASA Astrophysics Data System (ADS)

    Hohlfeld, Oliver

    With the provisioning of broadband access for the mass market, even in wireless and mobile networks, multimedia content, especially real-time streaming of high-quality audio and video, is extensively viewed and exchanged over the Internet. Quality of Experience (QoE), describing the service quality perceived by the user, is a vital factor in ensuring customer satisfaction in today's communication networks. Frameworks for assessing quality degradations in streamed video are currently investigated as a complex multi-layered research topic, involving network traffic load, codec functions, and measures of user perception of video quality.
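
    The abstract does not spell out the loss model, so the sketch below uses the generic two-state Gilbert-Elliott Markov chain that is commonly used for bursty packet loss; the transition and loss probabilities are illustrative assumptions, not parameters from this work.

        import random

        def gilbert_elliott(n_packets, p_good_to_bad=0.02, p_bad_to_good=0.3,
                            loss_good=0.001, loss_bad=0.5, seed=42):
            """Simulate bursty packet loss with a two-state (good/bad) Markov chain.

            Returns a list of booleans: True means the packet was lost.
            """
            rng = random.Random(seed)
            state = "good"
            losses = []
            for _ in range(n_packets):
                losses.append(rng.random() < (loss_good if state == "good" else loss_bad))
                flip = p_good_to_bad if state == "good" else p_bad_to_good
                if rng.random() < flip:
                    state = "bad" if state == "good" else "good"
            return losses

        trace = gilbert_elliott(10000)
        print("overall loss rate: {:.3%}".format(sum(trace) / len(trace)))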

  5. Making the Most of Audio. Technology in Language Learning Series.

    ERIC Educational Resources Information Center

    Barley, Anthony

    Prepared for practicing language teachers, this book's aim is to help them make the most of audio, a readily accessible resource. The book shows, with the help of numerous practical examples, how a range of language skills can be developed. Most examples are in French. Chapters cover the following information: (1) making the most of audio (e.g.,…

  6. Reaching Out: The Role of Audio Cassette Communication in Rural Development. Occasional Paper 19.

    ERIC Educational Resources Information Center

    Adhikarya, Ronny; Colle, Royal D.

    This report describes the state-of-the-art of audio cassette technology (ACT) and reports findings from field tests, case studies, and pilot projects in several countries which demonstrate the potential of audio cassettes as a medium for communicating with rural people. Specific guidance is also offered on how a project can use cassettes as a…

  7. Reasons to Rethink the Use of Audio and Video Lectures in Online Courses

    ERIC Educational Resources Information Center

    Stetz, Thomas A.; Bauman, Antonina A.

    2013-01-01

    Recent technological developments allow any instructor to create audio and video lectures for the use in online classes. However, it is questionable if it is worth the time and effort that faculty put into preparing those lectures. This paper presents thirteen factors that should be considered before preparing and using audio and video lectures in…

  8. Sounding Out Science: Incorporating Audio Technology to Assist Students with Learning Differences in Science Education

    NASA Astrophysics Data System (ADS)

    Gomes, Clement V.

    With the current focus on having all students reach scientific literacy in the U.S., there exists a need to support marginalized students, such as those with Learning Disabilities/Differences (LD), to reach the same educational goals as their mainstream counterparts. This dissertation examines the benefits of using audio assistive technology on the iPad to support LD students to achieve comprehension of science vocabulary and semantics. This dissertation is composed of two papers, both of which include qualitative information supported by quantified data. The first paper, titled Using Technology to Overcome Fundamental Literacy Constraints for Students with Learning Differences to Achieve Scientific Literacy, provides quantified evidence from pretest and posttest analysis that audio technology can be beneficial for seventh grade LD students when learning new and unfamiliar science content. Analysis of observations and student interviews supports the findings. The second paper, titled Time, Energy, and Motivation: Utilizing Technology to Ease Science Understanding for Students with Learning Differences, supports the importance of creating technology that is clear, audible, and easy for students to use so they benefit from and desire to utilize the learning tool. Multiple correlation analysis of Likert survey data was used to identify four major items and was supported by analysis of observations of and interviews with students, parents, and educators. This study provides useful information to support the rising number of identified LD students and their parents and teachers by presenting the benefits of using audio assistive technology to learn science.

  9. Understanding the Effect of Audio Communication Delay on Distributed Team Interaction

    DTIC Science & Technology

    2013-06-01

    means for members to socialize and learn about each other, engenders development cooperative relationships, and lays a foundation for future interaction...length will result in increases in task completion time and mental workload. 3. Audiovisual technology will moderate the effect of communication...than audio alone. 4. Audiovisual technology will moderate the effect of communication delays such that task completion time and mental workload will

  10. VIDAC; A New Technology for Increasing the Effectiveness of Television Distribution Networks: Report on a Feasibility Study of a Central Library "Integrated Media" Satellite Delivery System.

    ERIC Educational Resources Information Center

    Diambra, Henry M.; And Others

    VIDAC (Video Audio Compressed), a new technology based upon non-real-time transmission of audiovisual information via conventional television systems, has been invented by the Westinghouse Electric Corporation. This system permits time compression, during storage and transmission of the audio component of a still visual-narrative audio…

  11. Audio-Visual Media and New Technologies at the Service of Distance Education. Programme on Learner Use of Media Paper No. 16.

    ERIC Educational Resources Information Center

    Kirkwood, Adrian

    The first of two papers in this report, "The Present and the Future of Audio-Visual Production Centres in Distance Universities," describes changes in the Open University in Great Britain. The Open University's television and audio materials are increasingly being distributed to students on cassette. Although transmission is still…

  12. Online Distance Teaching of Undergraduate Finance: A Case for Musashi University and Konan University, Japan

    ERIC Educational Resources Information Center

    Kubota, Keiichi; Fujikawa, Kiyoshi

    2007-01-01

    We implemented a synchronous distance course entitled: Introductory Finance designed for undergraduate students. This course was held between two Japanese universities. Stable Internet connections allowing minimum delay and minimum interruptions of the audio-video streaming signals were used. Students were equipped with their own PCs with…

  13. 78 FR 77074 - Accessibility of User Interfaces, and Video Programming Guides and Menus; Accessible Emergency...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-20

    ... Apparatus Requirements for Emergency Information and Video Description: Implementation of the Twenty- First... of apparatus covered by the CVAA to provide access to the secondary audio stream used for audible... availability of accessible equipment and, if so, what those notification requirements should be. The Commission...

  14. MWAHCA: A Multimedia Wireless Ad Hoc Cluster Architecture

    PubMed Central

    Diaz, Juan R.; Jimenez, Jose M.; Sendra, Sandra

    2014-01-01

    Wireless ad hoc networks provide a flexible and adaptable infrastructure to transport data over a great variety of environments. Recently, real-time audio and video data transmission has increased due to the appearance of many multimedia applications. One of the major challenges is to ensure the quality of multimedia streams after they have passed through a wireless ad hoc network; this requires adapting the network architecture to the multimedia QoS requirements. In this paper we propose a new architecture to organize and manage cluster-based ad hoc networks in order to provide multimedia streams. The proposed architecture adapts the wireless network topology in order to improve the quality of audio and video transmissions. To achieve this goal, the architecture uses information such as each node's capacity and the QoS parameters (bandwidth, delay, jitter, and packet loss). The architecture splits the network into clusters which are specialized in specific multimedia traffic. The real-system performance study provided at the end of the paper demonstrates the feasibility of the proposal. PMID:24737996

  15. Perception of the Multisensory Coherence of Fluent Audiovisual Speech in Infancy: Its Emergence & the Role of Experience

    PubMed Central

    Lewkowicz, David J.; Minar, Nicholas J.; Tift, Amy H.; Brandon, Melissa

    2014-01-01

    To investigate the developmental emergence of the ability to perceive the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8–10, and 12–14 month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor the 8–10 month-old infants exhibited audio-visual matching in that neither group exhibited greater looking at the matching monologue. In contrast, the 12–14 month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, they perceived the multisensory coherence of native-language monologues earlier in the test trials than of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12–14 month olds did not depend on audio-visual synchrony whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audio-visual synchrony cues are more important in the perception of the multisensory coherence of non-native than native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing. PMID:25462038

  16. Presence capture cameras - a new challenge to the image quality

    NASA Astrophysics Data System (ADS)

    Peltoketo, Veli-Tapani

    2016-04-01

    Commercial presence capture cameras are coming to the market, and a new era of visual entertainment is starting to take shape. Since true presence capturing is still a very new technology, the actual technical solutions have only just passed the prototyping phase and vary considerably. Presence capture cameras still face the same quality issues as previous phases of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system which can record 3D audio-visual reality as it is has to have several camera modules, several microphones, and especially technology which can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features are still valid in presence capture cameras. Features like color fidelity, noise removal, resolution, and dynamic range create the basis of virtual reality stream quality. However, the co-operation of several cameras brings a new dimension to these quality factors, and new quality features can be validated; for example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how should the stitching be validated? The work describes quality factors which remain valid for presence capture cameras and defines their importance. Moreover, new challenges of presence capture cameras are investigated from an image and video quality point of view. The work includes considerations of how well current measurement methods can be used with presence capture cameras.

  17. Multimedia content description framework

    NASA Technical Reports Server (NTRS)

    Bergman, Lawrence David (Inventor); Mohan, Rakesh (Inventor); Li, Chung-Sheng (Inventor); Smith, John Richard (Inventor); Kim, Michelle Yoonk Yung (Inventor)

    2003-01-01

    A framework is provided for describing multimedia content and a system in which a plurality of multimedia storage devices employing the content description methods of the present invention can interoperate. In accordance with one form of the present invention, the content description framework is a description scheme (DS) for describing streams or aggregations of multimedia objects, which may comprise audio, images, video, text, time series, and various other modalities. This description scheme can accommodate an essentially limitless number of descriptors in terms of features, semantics or metadata, and facilitate content-based search, index, and retrieval, among other capabilities, for both streamed or aggregated multimedia objects.

  18. Realization and optimization of AES algorithm on the TMS320DM6446 based on DaVinci technology

    NASA Astrophysics Data System (ADS)

    Jia, Wen-bin; Xiao, Fu-hai

    2013-03-01

    The application of the AES algorithm in the digital cinema system prevents video data from being illegally stolen or maliciously tampered with, and solves its security problems. At the same time, in order to meet the requirements of real-time, scene, and transparent encryption of high-speed audio and video data streams in the information security field, and based on an in-depth analysis of the AES algorithm principle, this paper proposes specific methods for implementing the AES algorithm in a digital video system, together with optimization solutions, on the TMS320DM6446 hardware platform with the DaVinci software framework. The test results show that digital movies encrypted with AES128 cannot be played normally, which ensures the security of the digital movies. By comparing the performance of the AES128 algorithm before and after optimization, the correctness and validity of the improved algorithm are verified.
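
    The paper targets the TMS320DM6446 DSP and the DaVinci software stack, but the cipher itself is standard AES-128. Purely as a host-side illustration (assuming the third-party Python cryptography package, not the paper's implementation), a compressed audio/video buffer can be encrypted in a streaming-friendly mode such as CTR:

        import os
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        def encrypt_buffer(frame_bytes, key, nonce):
            """Encrypt one audio/video buffer with AES-128 in CTR mode (length preserving)."""
            encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
            return encryptor.update(frame_bytes) + encryptor.finalize()

        key = os.urandom(16)       # 128-bit content key
        nonce = os.urandom(16)     # per-stream initial counter block
        frame = os.urandom(4096)   # stand-in for a compressed video buffer
        cipher_frame = encrypt_buffer(frame, key, nonce)
        assert len(cipher_frame) == len(frame) and cipher_frame != frame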

  19. Ocean Instruments Web Site for Undergraduate, Secondary and Informal Education

    NASA Astrophysics Data System (ADS)

    Farrington, J. W.; Nevala, A.; Dolby, L. A.

    2004-12-01

    An Ocean Instruments web site has been developed that makes available information about ocean sampling and measurement instruments and platforms. The site features text, pictures, diagrams and background information written or edited by experts in ocean science and engineering and contains links to glossaries and multimedia technologies including video streaming, audio packages, and searchable databases. The site was developed after advisory meetings with selected professors teaching undergraduate classes who responded to the question, what could Woods Hole Oceanographic Institution supply to enhance undergraduate education in ocean sciences, life sciences, and geosciences? Prototypes were developed and tested with students, potential users, and potential contributors. The site is hosted by WHOI. The initial five instruments featured were provided by four WHOI scientists and engineers and by one Sea Education Association faculty member. The site is now open to contributions from scientists and engineers worldwide. The site will not advertise or promote the use of individual ocean instruments.

  20. Telestroke ambulances in prehospital stroke management: concept and pilot feasibility study.

    PubMed

    Liman, Thomas G; Winter, Benjamin; Waldschmidt, Carolin; Zerbe, Norman; Hufnagl, Peter; Audebert, Heinrich J; Endres, Matthias

    2012-08-01

    Pre- and intrahospital time delays are major concerns in acute stroke care. Telemedicine-equipped ambulances may improve time management and identify patients with stroke eligible for thrombolysis by an early prehospital stroke diagnosis. The aims of this study were (1) to develop a telestroke ambulance prototype; (2) to test the reliability of stroke severity assessment; and (3) to evaluate its feasibility in the prehospital emergency setting. Mobile, real-time audio-video streaming telemedicine devices were implemented in advanced life support ambulances. Feasibility of telestroke ambulances and reliability of the National Institutes of Health Stroke Scale assessment were tested using current wireless cellular communication technology (third generation) in a prehospital stroke scenario. Two stroke actors were trained in simulation of differing right and left middle cerebral artery stroke syndromes. National Institutes of Health Stroke Scale assessment was performed by a hospital-based stroke physician by telemedicine, by an emergency physician guided by telemedicine, and "a posteriori" on the basis of video documentation. In 18 of 30 scenarios, National Institutes of Health Stroke Scale assessment could not be performed due to absence or loss of audio-video signal. In the remaining 12 completed scenarios, interrater agreement of National Institutes of Health Stroke Scale examination between ambulance and hospital and ambulance and "a posteriori" video evaluation was moderate to good with weighted κ values of 0.69 (95% CI, 0.51-0.87) and 0.79 (95% CI, 0.59-0.98), respectively. Prehospital telestroke examination was not at an acceptable level for clinical use, at least on the basis of the used technology. Further technical development is needed before telestroke is applicable for prehospital stroke management during patient transport.

  1. Vroom: designing an augmented environment for remote collaboration in digital cinema production

    NASA Astrophysics Data System (ADS)

    Margolis, Todd; Cornish, Tracy

    2013-03-01

    As media technologies become increasingly affordable, compact and inherently networked, new generations of telecollaborative platforms continue to arise which integrate these new affordances. Virtual reality has been primarily concerned with creating simulations of environments that can transport participants to real or imagined spaces that replace the "real world". Meanwhile Augmented Reality systems have evolved to interleave objects from Virtual Reality environments into the physical landscape. Perhaps now there is a new class of systems that reverse this precept to enhance dynamic media landscapes and immersive physical display environments to enable intuitive data exploration through collaboration. Vroom (Virtual Room) is a next-generation reconfigurable tiled display environment in development at the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego. Vroom enables freely scalable digital collaboratories, connecting distributed, high-resolution visualization resources for collaborative work in the sciences, engineering and the arts. Vroom transforms a physical space into an immersive media environment with large format interactive display surfaces, video teleconferencing and spatialized audio built on a highspeed optical network backbone. Vroom enables group collaboration for local and remote participants to share knowledge and experiences. Possible applications include: remote learning, command and control, storyboarding, post-production editorial review, high resolution video playback, 3D visualization, screencasting and image, video and multimedia file sharing. To support these various scenarios, Vroom features support for multiple user interfaces (optical tracking, touch UI, gesture interface, etc.), support for directional and spatialized audio, giga-pixel image interactivity, 4K video streaming, 3D visualization and telematic production. This paper explains the design process that has been utilized to make Vroom an accessible and intuitive immersive environment for remote collaboration specifically for digital cinema production.

  2. Design and Uses of an Audio/Video Streaming System for Students with Disabilities

    ERIC Educational Resources Information Center

    Hogan, Bryan J.

    2004-01-01

    Within most educational institutes there are a substantial number of students with varying physical and mental disabilities. These might range from difficulty in reading to difficulty in attending the institute. Whatever their disability, it places a barrier between them and their education. In the past few years there have been rapid and striking…

  3. The Real Who, What, When, and Why of Journalism

    ERIC Educational Resources Information Center

    Huber-Humes, Sonya

    2007-01-01

    Journalism programs across the country have rolled out new curricula and courses emphasizing complex social issues, in-depth reporting, and "new media" such as online news sites with streaming audio and video. Journalism education has rightly taken its cue from media outlets that find themselves not relevant enough for a new generation of readers,…

  4. Teletoxicology: Patient Assessment Using Wearable Audiovisual Streaming Technology.

    PubMed

    Skolnik, Aaron B; Chai, Peter R; Dameff, Christian; Gerkin, Richard; Monas, Jessica; Padilla-Jones, Angela; Curry, Steven

    2016-12-01

    Audiovisual streaming technologies allow detailed remote patient assessment and have been suggested to change management and enhance triage. The advent of wearable, head-mounted devices (HMDs) permits advanced teletoxicology at a relatively low cost. A previously published pilot study supports the feasibility of using the HMD Google Glass® (Google Inc.; Mountain View, CA) for teletoxicology consultation. This study examines the reliability, accuracy, and precision of the poisoned patient assessment when performed remotely via Google Glass®. A prospective observational cohort study was performed on 50 patients admitted to a tertiary care center inpatient toxicology service. Toxicology fellows wore Google Glass® and transmitted secure, real-time video and audio of the initial physical examination to a remote investigator not involved in the subject's care. High-resolution still photos of electrocardiograms (ECGs) were transmitted to the remote investigator. On-site and remote investigators recorded physical examination findings and ECG interpretation. Both investigators completed a brief survey about the acceptability and reliability of the streaming technology for each encounter. Kappa scores and simple agreement were calculated for each examination finding and electrocardiogram parameter. Reliability scores and reliability difference were calculated and compared for each encounter. Data were available for analysis of 17 categories of examination and ECG findings. Simple agreement between on-site and remote investigators ranged from 68 to 100 % (median = 94 %, IQR = 10.5). Kappa scores could be calculated for 11/17 parameters and demonstrated slight to fair agreement for two parameters and moderate to almost perfect agreement for nine parameters (median = 0.653; substantial agreement). The lowest Kappa scores were for pupil size and response to light. On a 100-mm visual analog scale (VAS), mean comfort level was 93 and mean reliability rating was 89 for on-site investigators. For remote users, the mean comfort and reliability ratings were 99 and 86, respectively. The average difference in reliability scores between on-site and remote investigators was 2.6, with the difference increasing as reliability scores decreased. Remote evaluation of poisoned patients via Google Glass® is possible with a high degree of agreement on examination findings and ECG interpretation. Evaluation of pupil size and response to light is limited, likely by the quality of streaming video. Users of Google Glass® for teletoxicology reported high levels of comfort with the technology and found it reliable, though as reported reliability decreased, remote users were most affected. Further study should compare patient-centered outcomes when using HMDs for consultation to those resulting from telephone consultation.
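
    The agreement statistics reported above are the standard ones; for reference, a small sketch of simple agreement and Cohen's kappa for one categorical examination finding is shown below, with invented ratings rather than the study's data.

        from collections import Counter

        def cohens_kappa(rater_a, rater_b):
            """Cohen's kappa for two raters over the same categorical findings."""
            n = len(rater_a)
            p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
            freq_a, freq_b = Counter(rater_a), Counter(rater_b)
            categories = set(rater_a) | set(rater_b)
            p_expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
            return (p_observed - p_expected) / (1 - p_expected)

        # Illustrative presence/absence ratings of one finding for ten encounters.
        onsite = ["yes", "yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes"]
        remote = ["yes", "yes", "no", "yes", "yes", "no", "yes", "no", "no", "yes"]
        print("agreement:", sum(a == b for a, b in zip(onsite, remote)) / len(onsite))
        print("kappa:", round(cohens_kappa(onsite, remote), 2))  # about 0.58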

  5. 37 CFR 383.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 383.2 Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF CONGRESS RATES AND TERMS... digital audio transmissions as part of a Service (as defined in paragraph (h) of this section), and... delivered by digital audio transmissions through a technology that is incapable of tracking the individual...

  6. Flexible server architecture for resource-optimal presentation of Internet multimedia streams to the client

    NASA Astrophysics Data System (ADS)

    Boenisch, Holger; Froitzheim, Konrad

    1999-12-01

    The transfer of live media streams such as video and audio over the Internet is subject to several problems, static and dynamic by nature. Important quality of service (QoS) parameters not only differ between receivers depending on their network access, service provider, and nationality; the QoS also varies over time. Moreover, the installed receiver base is heterogeneous with respect to operating system, browser or client software, and browser version. We present a new concept for serving live media streams. It is no longer based on the current one-size-fits-all paradigm, where the server offers just one stream. Our compresslet system takes the opposite approach: it builds media streams `to order' and `just in time'. Every client subscribing to a media stream uses a servlet loaded into the media server to generate a tailored data stream for its resources and constraints. The server is designed such that commonly used components for media streams are computed once. The compresslets use these prefabricated components, code additional data if necessary, and construct the data stream based on the dynamically available QoS and other client constraints. A client-specific encoding leads to resource-optimal presentation that is especially useful for the presentation of complex multimedia documents on a variety of output devices.
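
    As a rough, hypothetical sketch of the compresslet idea (invented names, not the authors' API): the server computes shared stream components once per frame, and each client's compresslet assembles its own packet from them under that client's current constraints.

        # Hypothetical sketch of per-client stream assembly from shared components.
        class Compresslet:
            """One instance per subscribed client; builds a stream tailored to its QoS."""
            def __init__(self, max_kbps):
                self.max_kbps = max_kbps

            def assemble(self, shared):
                # Pick the prefabricated layer that fits under the client's bandwidth budget.
                fitting = [layer for layer in shared["layers"] if layer["kbps"] <= self.max_kbps]
                chosen = max(fitting, key=lambda layer: layer["kbps"]) if fitting else shared["layers"][0]
                return {"timestamp": shared["timestamp"], "kbps": chosen["kbps"],
                        "payload": chosen["data"]}

        # The server computes commonly used components once per frame...
        shared_components = {
            "timestamp": 0.04,
            "layers": [
                {"kbps": 64, "data": b"<thumbnail layer>"},
                {"kbps": 256, "data": b"<base layer>"},
                {"kbps": 768, "data": b"<enhancement layer>"},
            ],
        }
        # ...and every subscribed client's compresslet tailors its own packet from them.
        for client in (Compresslet(max_kbps=128), Compresslet(max_kbps=1000)):
            print(client.assemble(shared_components)["kbps"])  # 64, then 768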

  7. International Meeting To Discuss Audio Technology as Applied to Library Services for Blind Individuals (3rd, Toronto, Ontario, Canada, April 20-22, 1995). Volumes 1-3.

    ERIC Educational Resources Information Center

    Library of Congress, Washington, DC. National Library Service for the Blind and Physically Handicapped.

    This three-day conference on the subject of audio technology for the production of materials for the blind takes the court reporter approach to recording the speeches and discussions of the meeting. The result is a three-volume set of complete transcripts, one volume for each day of the meeting, but continuous in form. The highlights of each…

  8. Recording Technologies: Sights & Sounds. Resources in Technology.

    ERIC Educational Resources Information Center

    Deal, Walter F., III

    1994-01-01

    Provides information on recording technologies such as laser disks, audio and videotape, and video cameras. Presents a design brief that includes objectives, student outcomes, and a student quiz. (JOW)

  9. The many facets of auditory display

    NASA Technical Reports Server (NTRS)

    Blattner, Meera M.

    1995-01-01

    In this presentation we will examine some of the ways sound can be used in a virtual world. We make the case that many different types of audio experience are available to us. A full range of audio experiences include: music, speech, real-world sounds, auditory displays, and auditory cues or messages. The technology of recreating real-world sounds through physical modeling has advanced in the past few years allowing better simulation of virtual worlds. Three-dimensional audio has further enriched our sensory experiences.

  10. [Evolution of the audio-visual technologies of production and diffusion and the conditions of their application in the Third World].

    PubMed

    Lefebvre, M

    1979-01-01

    Present information production techniques are so inefficient that generalizing them is out of the question. On the other hand, audio-visual communication raises a major political problem, especially for developing countries. Audio-visual equipment has gone through adjustment phases; the example of the tape and cassette recorder is given: two technological improvements have completely modified its use; transistors have allowed a considerable reduction in volume, weight, and energy consumption, and the invention of the cassette has simplified its use. Technological research is following three major directions: the production of equipment which consumes little energy; the improvement of electronic component production techniques (towards cheaper electronic components); and finally, the design of systems allowing large quantities of information to be stored. Communication systems will probably make so much progress in technology and programming that they will soon have very different uses than the present ones. The question is whether our civilizations will let themselves be dominated by these new systems, or whether they will succeed in turning them into tools for progress.

  11. Exploratory Evaluation of Audio Email Technology in Formative Assessment Feedback

    ERIC Educational Resources Information Center

    Macgregor, George; Spiers, Alex; Taylor, Chris

    2011-01-01

    Formative assessment generates feedback on students' performance, thereby accelerating and improving student learning. Anecdotal evidence gathered by a number of evaluations has hypothesised that audio feedback may be capable of enhancing student learning more than other approaches. In this paper we report on the preliminary findings of a…

  12. Audio Visual Technology and the Teaching of Foreign Languages.

    ERIC Educational Resources Information Center

    Halbig, Michael C.

    Skills in comprehending the spoken language source are becoming increasingly important due to the audio-visual orientation of our culture. It would seem natural, therefore, to adjust the learning goals and environment accordingly. The video-cassette machine is an ideal means for creating this learning environment and developing the listening…

  13. Direct broadcast satellite-radio market, legal, regulatory, and business considerations

    NASA Technical Reports Server (NTRS)

    Sood, Des R.

    1991-01-01

    A Direct Broadcast Satellite-Radio (DBS-R) System offers the prospect of delivering high quality audio broadcasts to large audiences at costs lower than or comparable to those incurred using the current means of broadcasting. The maturation of mobile communications technologies, and advances in microelectronics and digital signal processing, now make it possible to bring this technology to the marketplace. Heightened consumer interest in improved audio quality, coupled with the technological and economic feasibility of meeting this demand via DBS-R, makes it opportune to start planning for implementation of DBS-R Systems. NASA-Lewis and the Voice of America, as part of their on-going efforts to improve the quality of international audio broadcasts, have undertaken a number of tasks to more clearly define the technical, marketing, organizational, legal, and regulatory issues underlying implementation of DBS-R Systems. The results are presented, together with an assessment of the business considerations underlying the construction, launch, and operation of DBS-R Systems.

  14. Direct broadcast satellite-radio market, legal, regulatory, and business considerations

    NASA Astrophysics Data System (ADS)

    Sood, Des R.

    1991-03-01

    A Direct Broadcast Satellite-Radio (DBS-R) System offers the prospect of delivering high quality audio broadcasts to large audiences at costs lower than or comparable to those incurred using the current means of broadcasting. The maturation of mobile communications technologies, and advances in microelectronics and digital signal processing, now make it possible to bring this technology to the marketplace. Heightened consumer interest in improved audio quality, coupled with the technological and economic feasibility of meeting this demand via DBS-R, makes it opportune to start planning for implementation of DBS-R Systems. NASA-Lewis and the Voice of America, as part of their on-going efforts to improve the quality of international audio broadcasts, have undertaken a number of tasks to more clearly define the technical, marketing, organizational, legal, and regulatory issues underlying implementation of DBS-R Systems. The results are presented, together with an assessment of the business considerations underlying the construction, launch, and operation of DBS-R Systems.

  15. Visual speech segmentation: using facial cues to locate word boundaries in continuous speech

    PubMed Central

    Mitchel, Aaron D.; Weiss, Daniel J.

    2014-01-01

    Speech is typically a multimodal phenomenon, yet few studies have focused on the exclusive contributions of visual cues to language acquisition. To address this gap, we investigated whether visual prosodic information can facilitate speech segmentation. Previous research has demonstrated that language learners can use lexical stress and pitch cues to segment speech and that learners can extract this information from talking faces. Thus, we created an artificial speech stream that contained minimal segmentation cues and paired it with two synchronous facial displays in which visual prosody was either informative or uninformative for identifying word boundaries. Across three familiarisation conditions (audio stream alone, facial streams alone, and paired audiovisual), learning occurred only when the facial displays were informative to word boundaries, suggesting that facial cues can help learners solve the early challenges of language acquisition. PMID:25018577

  16. New Literacy Tools for Adults.

    ERIC Educational Resources Information Center

    Anderson, Jonathan

    1990-01-01

    Describes an Australian national study of technologies used for adult literacy: traditional technologies (print, radio, television, audio and videotape, teleconferencing, and computers) and new generation technologies (laser discs, CD-ROM, videodiscs, and hypermedia). (SK)

  17. Content-based intermedia synchronization

    NASA Astrophysics Data System (ADS)

    Oh, Dong-Young; Sampath-Kumar, Srihari; Rangan, P. Venkat

    1995-03-01

    Inter-media synchronization methods developed until now have been based on syntactic timestamping of video frames and audio samples. These methods are not fully appropriate for the synchronization of multimedia objects which may have to be accessed individually by their contents, e.g. content-based data retrieval. We propose a content-based multimedia synchronization scheme in which a media stream is viewed as a hierarchical composition of smaller objects which are logically structured based on their contents, and synchronization is achieved by deriving temporal relations among the logical units of the media objects. Content-based synchronization offers several advantages, such as elimination of the need for time stamping, freedom from the limitations of jitter, synchronization of independently captured media objects in video editing, and compensation for inherent asynchronies in the capture times of video and audio.
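
    A hedged sketch of the content-based idea (our own simplification, not the authors' scheme): each stream is decomposed into logical units with known durations, start offsets are derived from unit order rather than per-frame timestamps, and corresponding units in independently captured streams are aligned by name.

        # Hypothetical sketch: derive start offsets for logical units and align two
        # independently captured streams by unit name instead of per-sample timestamps.
        def unit_offsets(units):
            """units: list of (name, duration_in_seconds) in presentation order."""
            offsets, t = {}, 0.0
            for name, duration in units:
                offsets[name] = t
                t += duration
            return offsets

        video_units = [("intro", 12.0), ("interview", 95.0), ("closing", 20.0)]
        audio_units = [("intro", 12.4), ("interview", 94.1), ("closing", 20.3)]

        v_off, a_off = unit_offsets(video_units), unit_offsets(audio_units)
        # The temporal relation between corresponding units tells the player when to
        # start each audio unit against the video, without any embedded timestamps.
        for name in v_off:
            print("{}: audio starts at {:.1f}s when video reaches {:.1f}s".format(
                name, a_off[name], v_off[name]))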

  18. Research Notes. OERI's Regional Laboratory Technology Efforts.

    ERIC Educational Resources Information Center

    Garnette, Cheryl P., Ed.; Withrow, Frank B., Ed.

    1989-01-01

    Examines various educational technology projects that regional laboratories supported by the Office of Educational Research and Improvement (OERI) are undertaking. Highlights include innovative uses of instructional technology; tele-teaching using interactive audio conferencing; making informed decisions about technology; national teleconferences…

  19. Technical Considerations in the Delivery of Audio-Visual Course Content.

    ERIC Educational Resources Information Center

    Lightfoot, Jay M.

    2002-01-01

    In an attempt to provide students with the benefit of the latest technology, some instructors include multimedia content on their class Web sites. This article introduces the basic terms and concepts needed to understand the multimedia domain. Provides a brief tutorial designed to help instructors create good, consistent audio-visual content. (AEF)

  20. FIRRE command and control station (C2)

    NASA Astrophysics Data System (ADS)

    Laird, R. T.; Kramer, T. A.; Cruickshanks, J. R.; Curd, K. M.; Thomas, K. M.; Moneyhun, J.

    2006-05-01

    The Family of Integrated Rapid Response Equipment (FIRRE) is an advanced technology demonstration program intended to develop a family of affordable, scalable, modular, and logistically supportable unmanned systems to meet urgent operational force protection needs and requirements worldwide. The near-term goal is to provide the best available unmanned ground systems to the warfighter in Iraq and Afghanistan. The overarching long-term goal is to develop a fully-integrated, layered force protection system of systems for our forward deployed forces that is networked with the future force C4ISR systems architecture. The intent of the FIRRE program is to reduce manpower requirements, enhance force protection capabilities, and reduce casualties through the use of unmanned systems. FIRRE is sponsored by the Office of the Under Secretary of Defense, Acquisitions, Technology and Logistics (OUSD AT&L), and is managed by the Product Manager, Force Protection Systems (PM-FPS). The FIRRE Command and Control (C2) Station supports two operators, hosts the Joint Battlespace Command and Control Software for Manned and Unmanned Assets (JBC2S), and will be able to host Mission Planning and Rehearsal (MPR) software. The C2 Station consists of an M1152 HMMWV fitted with an S-788 TYPE I shelter. The C2 Station employs five 24" LCD monitors for display of JBC2S software [1], MPR software, and live video feeds from unmanned systems. An audio distribution system allows each operator to select between various audio sources including: AN/PRC-117F tactical radio (SINCGARS compatible), audio prompts from JBC2S software, audio from unmanned systems, audio from other operators, and audio from external sources such as an intercom in an adjacent Tactical Operations Center (TOC). A power distribution system provides battery backup for momentary outages. The Ethernet network, audio distribution system, and audio/video feeds are available for use outside the C2 Station.

  1. Audio-visual biofeedback for respiratory-gated radiotherapy: Impact of audio instruction and audio-visual biofeedback on respiratory-gated radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    George, Rohini; Department of Biomedical Engineering, Virginia Commonwealth University, Richmond, VA; Chung, Theodore D.

    2006-07-01

    Purpose: Respiratory gating is a commercially available technology for reducing the deleterious effects of motion during imaging and treatment. The efficacy of gating is dependent on the reproducibility within and between respiratory cycles during imaging and treatment. The aim of this study was to determine whether audio-visual biofeedback can improve respiratory reproducibility by decreasing residual motion and therefore increasing the accuracy of gated radiotherapy. Methods and Materials: A total of 331 respiratory traces were collected from 24 lung cancer patients. The protocol consisted of five breathing training sessions spaced about a week apart. Within each session the patients initially breathed without any instruction (free breathing), with audio instructions and with audio-visual biofeedback. Residual motion was quantified by the standard deviation of the respiratory signal within the gating window. Results: Audio-visual biofeedback significantly reduced residual motion compared with free breathing and audio instruction. Displacement-based gating has lower residual motion than phase-based gating. Little reduction in residual motion was found for duty cycles less than 30%; for duty cycles above 50% there was a sharp increase in residual motion. Conclusions: The efficiency and reproducibility of gating can be improved by: incorporating audio-visual biofeedback, using a 30-50% duty cycle, gating during exhalation, and using displacement-based gating.
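
    To make the residual-motion metric concrete, the sketch below (synthetic waveform and window placement, not the study's data) selects the samples of a respiratory displacement trace that fall inside a displacement-based gating window sized to a given duty cycle, placed at end-exhalation, and reports their standard deviation.

        import math

        def residual_motion(displacement, duty_cycle):
            """Std. dev. of displacement samples inside a displacement-based gating window.

            The window sits at end-exhalation (lowest displacement) and is widened until
            the requested fraction of samples (the duty cycle) falls inside it.
            """
            ranked = sorted(displacement)
            n_in = max(1, int(round(duty_cycle * len(ranked))))
            window = ranked[:n_in]                      # samples nearest end-exhalation
            mean = sum(window) / n_in
            return math.sqrt(sum((x - mean) ** 2 for x in window) / n_in)

        # Synthetic breathing trace: 4 s period sampled at 25 Hz, 10 mm peak-to-peak motion.
        trace = [5.0 * (1 - math.cos(2 * math.pi * t / 100)) for t in range(400)]
        for duty in (0.3, 0.5, 0.7):
            print("duty cycle {:.0%}: residual motion {:.2f} mm".format(
                duty, residual_motion(trace, duty)))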

  2. The HomePlanet project: a HAVi multi-media network over POF

    NASA Astrophysics Data System (ADS)

    Roycroft, Brendan; Corbett, Brian; Kelleher, Carmel; Lambkin, John; Bareel, Baudouin; Goudeau, Jacques; Skiczuk, Peter

    2005-06-01

    This project has developed a low cost in-home network compatible with network standard IEEE1394b. We have developed all components of the network, from the red resonant cavity LEDs and VCSELs as light sources, the driver circuitry, plastic optical fibres for transmission, up to the network management software. We demonstrate plug-and-play operation of S100 and S200 (125 and 250Mbps) data streams using 650nm RCLEDs, and S400 (500 Mbps) data streams using VCSELs. The network software incorporates Home Audio Video interoperability (HAVi), which allows any HAVi device to be hot-plugged into the network and be instantly recognised and controllable over the network.

  3. Virtual environment display for a 3D audio room simulation

    NASA Technical Reports Server (NTRS)

    Chapin, William L.; Foster, Scott H.

    1992-01-01

    The development of a virtual environment simulation system integrating a 3D acoustic audio model with an immersive 3D visual scene is discussed. The system complements the acoustic model and is specified to: allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, are discussed through the development of four iterative configurations.

  4. Reconsidering the Role of Recorded Audio as a Rich, Flexible and Engaging Learning Space

    ERIC Educational Resources Information Center

    Middleton, Andrew

    2016-01-01

    Audio needs to be recognised as an integral medium capable of extending education's formal and informal, virtual and physical learning spaces. This paper reconsiders the value of educational podcasting through a review of literature and a module case study. It argues that a pedagogical understanding is needed and challenges technology-centred or…

  5. Classroom Audio Distribution in the Postsecondary Setting: A Story of Universal Design for Learning

    ERIC Educational Resources Information Center

    Flagg-Williams, Joan B.; Bokhorst-Heng, Wendy D.

    2016-01-01

    Classroom Audio Distribution Systems (CADS) consist of amplification technology that enhances the teacher's, or sometimes the student's, vocal signal above the background noise in a classroom. Much research has supported the benefits of CADS for student learning, but most of it has focused on elementary school classrooms. This study investigated…

  6. Introduction to Human Services, Chapter III. Video Script Package, Text, and Audio Script Package.

    ERIC Educational Resources Information Center

    Miami-Dade Community Coll., FL.

    Video, textual, and audio components of the third module of a multi-media, introductory course on Human Services are presented. The module packages, developed at Miami-Dade Community College, deal with technology, social change, and problem dependencies. A video cassette script is first provided that explores the "traditional," "inner," and "other…

  7. Tele-auscultation support system with mixed reality navigation.

    PubMed

    Hori, Kenta; Uchida, Yusuke; Kan, Tsukasa; Minami, Maya; Naito, Chisako; Kuroda, Tomohiro; Takahashi, Hideya; Ando, Masahiko; Kawamura, Takashi; Kume, Naoto; Okamoto, Kazuya; Takemura, Tadamasa; Yoshihara, Hiroyuki

    2013-01-01

    The aim of this research is to develop an information support system for tele-auscultation. In auscultation, a doctor needs to understand how the stethoscope is being applied to the patient, in addition to hearing the auscultatory sounds. The proposed system adds an intuitive navigation system for stethoscope operation to the conventional audio streaming of auscultatory sounds and the conventional video conferencing used for telecommunication. Mixed reality technology is applied for intuitive navigation of the stethoscope: information such as position, contact condition, and breath is overlaid on a view of the patient's chest. The contact condition of the stethoscope is measured by e-textile contact sensors, and the breath is measured by a band-type breath sensor. In a simulated tele-auscultation experiment, the stethoscope with the contact sensors and the breath sensor was evaluated. The results show that the presentation of the contact condition was not clear enough to guide stethoscope handling, while the time series of breath phases was usable for the remote doctor to understand the patient's breathing.

  8. Learning Across Senses: Cross-Modal Effects in Multisensory Statistical Learning

    PubMed Central

    Mitchel, Aaron D.; Weiss, Daniel J.

    2014-01-01

    It is currently unknown whether statistical learning is supported by modality-general or modality-specific mechanisms. One issue within this debate concerns the independence of learning in one modality from learning in other modalities. In the present study, the authors examined the extent to which statistical learning across modalities is independent by simultaneously presenting learners with auditory and visual streams. After establishing baseline rates of learning for each stream independently, they systematically varied the amount of audiovisual correspondence across 3 experiments. They found that learners were able to segment both streams successfully only when the boundaries of the audio and visual triplets were in alignment. This pattern of results suggests that learners are able to extract multiple statistical regularities across modalities provided that there is some degree of cross-modal coherence. They discuss the implications of their results in light of recent claims that multisensory statistical learning is guided by modality-independent mechanisms. PMID:21574745

  9. Audio-visual communication and its use in palliative care.

    PubMed

    Coyle, Nessa; Khojainova, Natalia; Francavilla, John M; Gonzales, Gilbert R

    2002-02-01

    The technology of telemedicine has been used for over 20 years, involving different areas of medicine, providing medical care for the geographically isolated patients, and uniting geographically isolated clinicians. Today audio-visual technology may be useful in palliative care for the patients lacking access to medical services due to the medical condition rather than geographic isolation. We report results of a three-month trial of using audio-visual communications as a complementary tool in care for a complex palliative care patient. Benefits of this system to the patient included 1) a daily limited physical examination, 2) screening for a need for a clinical visit or admission, 3) lip reading by the deaf patient, 4) satisfaction by the patient and the caregivers with this form of communication as a complement to telephone communication. A brief overview of the historical prospective on telemedicine and a listing of applied telemedicine programs are provided.

  10. Twenty-Five Years of Dynamic Growth.

    ERIC Educational Resources Information Center

    Pipes, Lana

    1980-01-01

    Discusses developments in instructional technology in the past 25 years in the areas of audio, video, micro-electronics, social evolution, the space race, and living with rapidly changing technology. (CMV)

  11. Development of the ISS EMU Dashboard Software

    NASA Technical Reports Server (NTRS)

    Bernard, Craig; Hill, Terry R.

    2011-01-01

    The EMU (Extra-Vehicular Mobility Unit) Dashboard was developed at NASA's Johnson Space Center to aid in real-time mission support for the ISS (International Space Station) and Shuttle EMU space suit by time-synchronizing down-linked video, space suit data, and audio from the mission control audio loops. Once the input streams are synchronized and recorded, the data can be replayed almost instantly and has proven invaluable in understanding in-flight hardware anomalies and playing back information conveyed by the crew to mission control and the back-room support. This paper will walk through the development from an engineer's idea brought to life by an intern to real-time mission support, and how this tool is evolving today along with its challenges in supporting EVAs (Extra-Vehicular Activities) and human exploration in the 21st century.

  12. 15 CFR 1180.2 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... as that term is defined in Section 4 of the Stevenson-Wydler Technology Innovation Act of 1980, as..., software, audio/video production, technology application assessment generated pursuant to Section 11(c) of...

  13. Highlight summarization in golf videos using audio signals

    NASA Astrophysics Data System (ADS)

    Kim, Hyoung-Gook; Kim, Jin Young

    2008-01-01

    In this paper, we present an automatic summarization of highlights in golf videos based on audio information alone, without video information. The proposed highlight summarization system is based on semantic audio segmentation and the detection of action units from audio signals. Studio speech, field speech, music, and applause are segmented by means of sound classification, and swings are detected by impulse onset detection. A swing sound followed by applause forms a complete action unit, while studio speech and music parts are used to anchor the program structure. With the advantage of highly precise detection of applause, highlights are extracted effectively. Our experimental results show high classification precision on 18 golf games, demonstrating that the proposed system is effective and computationally efficient enough to apply the technology to embedded consumer electronic devices.
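
    The pairing of a detected swing with subsequent applause into an "action unit" can be shown in a few lines of code. The sketch below assumes the classifier and onset detector described above have already produced swing onset times and applause segments; these inputs and the 5-second pairing gap are illustrative placeholders, not values from the paper.

        from dataclasses import dataclass

        @dataclass
        class Highlight:
            swing_time: float       # seconds
            applause_start: float
            applause_end: float

        def extract_highlights(swing_onsets, applause_segments, max_gap=5.0):
            """Pair each swing onset with the first applause segment starting within
            `max_gap` seconds; each such pair is treated as one complete action unit."""
            highlights = []
            for t in swing_onsets:
                for start, end in applause_segments:
                    if 0 <= start - t <= max_gap:
                        highlights.append(Highlight(t, start, end))
                        break
            return highlights

        # Hypothetical detector outputs (seconds).
        swings = [12.4, 95.1, 203.7]
        applause = [(14.0, 19.5), (97.0, 102.0), (250.0, 255.0)]
        for h in extract_highlights(swings, applause):
            print(f"highlight: swing at {h.swing_time}s, applause {h.applause_start}-{h.applause_end}s")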

  14. Using Text-to-Speech (TTS) for Audio Computer-Assisted Self-Interviewing (ACASI)

    ERIC Educational Resources Information Center

    Couper, Mick P.; Berglund, Patricia; Kirgis, Nicole; Buageila, Sarrah

    2016-01-01

    We evaluate the use of text-to-speech (TTS) technology for audio computer-assisted self-interviewing (ACASI). We use a quasi-experimental design, comparing the use of recorded human voice in the 2006-2010 National Survey of Family Growth with the use of TTS in the first year of the 2011-2013 survey, where the essential survey conditions are…

  15. Developing a Framework for Effective Audio Feedback: A Case Study

    ERIC Educational Resources Information Center

    Hennessy, Claire; Forrester, Gillian

    2014-01-01

    The increase in the use of technology-enhanced learning in higher education has included a growing interest in new approaches to enhance the quality of feedback given to students. Audio feedback is one method that has become more popular, yet evaluating its role in feedback delivery is still an emerging area for research. This paper is based on a…

  16. Communication Modes, Persuasiveness, and Decision-Making Quality: A Comparison of Audio Conferencing, Video Conferencing, and a Virtual Environment

    ERIC Educational Resources Information Center

    Lockwood, Nicholas S.

    2011-01-01

    Geographically dispersed teams rely on information and communication technologies (ICTs) to communicate and collaborate. Three ICTs that have received attention are audio conferencing (AC), video conferencing (VC), and, recently, 3D virtual environments (3D VEs). These ICTs offer modes of communication that differ primarily in the number and type…

  17. TRAINING TYPISTS IN THE INDUSTRIAL ENVIRONMENT--PRELIMINARY REPORT OF A PROTOTYPE SYSTEM OF SIMULTANEOUS, MULTILEVEL, MULTIPHASIC AUDIO PROGRAMMING.

    ERIC Educational Resources Information Center

    ADAMS, CHARLES F.

    In 1965, ten Negro and Puerto Rican girls began clerical training in the National Association of Manufacturers (NAM) Typing Laboratory I (TEELAB-I), a pilot project to develop a system of training typists within the industrial environment. The initial system, an adaptation of Gregg audio materials to a machine technology, taught accuracy, speed…

  18. Low Latency Audio Video: Potentials for Collaborative Music Making through Distance Learning

    ERIC Educational Resources Information Center

    Riley, Holly; MacLeod, Rebecca B.; Libera, Matthew

    2016-01-01

    The primary purpose of this study was to examine the potential of LOw LAtency (LOLA), a low latency audio visual technology designed to allow simultaneous music performance, as a distance learning tool for musical styles in which synchronous playing is an integral aspect of the learning process (e.g., jazz, folk styles). The secondary purpose was…

  19. Teaching 'How To' Technologies in Context.

    ERIC Educational Resources Information Center

    Leigh, Patricia Randolph

    The introductory instructional technology course at Iowa State University is a survey course covering various technologies. In this case, the instructor chose to create a situated learning environment using low-technology everyday surroundings to teach the fundamentals of photographic and video production, linking the photography, audio, and video…

  20. The use of ambient audio to increase safety and immersion in location-based games

    NASA Astrophysics Data System (ADS)

    Kurczak, John Jason

    The purpose of this thesis is to propose an alternative type of interface for mobile software being used while walking or running. Our work addresses the problem of visual user interfaces for mobile software being potentially unsafe for pedestrians, and not being very immersive when used for location-based games. In addition, location-based games and applications can be difficult to develop when directly interfacing with the sensors used to track the user's location. These problems need to be addressed because portable computing devices are becoming a popular tool for navigation, playing games, and accessing the internet while walking. This poses a safety problem for mobile users, who may be paying too much attention to their device to notice and react to hazards in their environment. The difficulty of developing location-based games and other location-aware applications may significantly hinder the prevalence of applications that explore new interaction techniques for ubiquitous computing. We created the TREC toolkit to address the issues with tracking sensors while developing location-based games and applications. We have developed functional location-based applications with TREC to demonstrate the amount of work that can be saved by using this toolkit. In order to have a safer and more immersive alternative to visual interfaces, we have developed ambient audio interfaces for use with mobile applications. Ambient audio uses continuous streams of sound over headphones to present information to mobile users without distracting them from walking safely. In order to test the effectiveness of ambient audio, we ran a study to compare ambient audio with handheld visual interfaces in a location-based game. We compared players' ability to safely navigate the environment, their sense of immersion in the game, and their performance at the in-game tasks. We found that ambient audio was able to significantly increase players' safety and sense of immersion compared to a visual interface, while players performed significantly better at the game tasks when using the visual interface. This makes ambient audio a legitimate alternative to visual interfaces for mobile users when safety and immersion are a priority.
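
    The record does not give implementation details, but the core ambient-audio idea can be sketched: game state is mapped continuously onto audio parameters rather than onto a screen. The fragment below is a purely illustrative mapping from the player's position and heading to a gain and stereo pan for a target's ambient sound; it is not the TREC toolkit's actual API.

        import math

        def ambient_audio_params(player_xy, target_xy, heading_rad, max_dist=100.0):
            """Map player-to-target geometry onto ambient-audio gain and stereo pan:
            louder as the player approaches the target, panned toward the side the
            target lies on relative to the player's heading."""
            dx = target_xy[0] - player_xy[0]
            dy = target_xy[1] - player_xy[1]
            dist = math.hypot(dx, dy)
            gain = max(0.0, 1.0 - dist / max_dist)        # 1.0 at the target, 0 beyond max_dist
            bearing = math.atan2(dy, dx) - heading_rad    # target angle relative to heading
            pan = math.sin(bearing)                       # -1 = full left, +1 = full right
            return gain, pan

        print(ambient_audio_params((0, 0), (30, 40), heading_rad=0.0))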

  1. Designing sound and visual components for enhancement of urban soundscapes.

    PubMed

    Hong, Joo Young; Jeon, Jin Yong

    2013-09-01

    The aim of this study is to investigate the effect of audio-visual components on environmental quality to improve soundscape. Natural sounds with road traffic noise and visual components in urban streets were evaluated through laboratory experiments. Waterfall and stream water sounds, as well as bird sounds, were selected to enhance the soundscape. Sixteen photomontages of a streetscape were constructed in combination with two types of water features and three types of vegetation which were chosen as positive visual components. The experiments consisted of audio-only, visual-only, and audio-visual conditions. The preferences and environmental qualities of the stimuli were evaluated by a numerical scale and 12 pairs of adjectives, respectively. The results showed that bird sounds were the most preferred among the natural sounds, while the sound of falling water was found to degrade the soundscape quality when the road traffic noise level was high. The visual effects of vegetation on aesthetic preference were significant, but those of water features relatively small. It was revealed that the perceptual dimensions of the environment were different from the noise levels. Particularly, the acoustic comfort factor related to soundscape quality considerably influenced preference for the overall environment at a higher level of road traffic noise.

  2. Factors Affecting Use of Telepresence Technology in a Global Technology Company

    ERIC Educational Resources Information Center

    Agnor, Robert Joseph

    2013-01-01

    Telepresence uses the latest video conferencing technology, with high definition video, surround sound audio, and specially constructed studios, to create a near face-to-face meeting experience. A Fortune 500 company which markets information technology has organizations distributed around the globe, and has extensive collaboration needs among…

  3. Identification and annotation of erotic film based on content analysis

    NASA Astrophysics Data System (ADS)

    Wang, Donghui; Zhu, Miaoliang; Yuan, Xin; Qian, Hui

    2005-02-01

    The paper brings forward a new method for identifying and annotating erotic films based on content analysis. First, the film is decomposed into video and audio streams. Then, the video stream is segmented into shots and key frames are extracted from each shot. We filter the shots that include potential erotic content by finding the nude human body in key frames, using a Gaussian model in YCbCr color space for detecting skin regions. An external polygon that covers the skin regions is used to approximate the human body. Finally, we estimate the degree of nudity by calculating the ratio of skin area to whole body area with weighted parameters. The results of the experiment show the effectiveness of our method.
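
    A single-Gaussian chroma model of the kind described can be written compactly. The sketch below scores each pixel's (Cb, Cr) pair against a Gaussian skin model and then computes the skin-to-body area ratio inside a body mask; the mean, covariance, and threshold are illustrative placeholders rather than the paper's fitted values.

        import numpy as np

        # Placeholder skin-colour model in the Cb-Cr plane; in practice the mean and
        # covariance would be estimated from labelled skin pixels.
        SKIN_MEAN = np.array([120.0, 150.0])              # (Cb, Cr)
        SKIN_COV_INV = np.linalg.inv(np.array([[60.0, 10.0], [10.0, 40.0]]))

        def skin_likelihood(cb, cr):
            """Unnormalised Gaussian likelihood that a pixel with chroma (cb, cr) is skin."""
            d = np.stack([cb - SKIN_MEAN[0], cr - SKIN_MEAN[1]], axis=-1)
            m = np.einsum('...i,ij,...j->...', d, SKIN_COV_INV, d)    # squared Mahalanobis distance
            return np.exp(-0.5 * m)

        def nudity_ratio(cb, cr, body_mask, threshold=0.5):
            """Ratio of skin-classified pixels to all pixels inside the body polygon mask."""
            skin = (skin_likelihood(cb, cr) > threshold) & body_mask
            return skin.sum() / max(body_mask.sum(), 1)

        # Hypothetical 4x4 chroma planes and a mask covering the whole frame.
        cb = np.full((4, 4), 118.0)
        cr = np.full((4, 4), 152.0)
        print(nudity_ratio(cb, cr, body_mask=np.ones((4, 4), dtype=bool)))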

  4. WORKSHOP ON MINING IMPACTED NATIVE AMERICAN LANDS CD

    EPA Science Inventory

    Multimedia Technology is an exciting mix of cutting-edge Information Technologies that utilize a variety of interactive structures, digital video and audio technologies, 3-D animation, high-end graphics, and peer-reviewed content that are then combined in a variety of user-friend...

  5. Wireless augmented reality communication system

    NASA Technical Reports Server (NTRS)

    Devereaux, Ann (Inventor); Agan, Martin (Inventor); Jedrey, Thomas (Inventor)

    2006-01-01

    The system of the present invention is a highly integrated radio communication system with a multimedia co-processor which allows true two-way multimedia (video, audio, data) access as well as real-time biomedical monitoring in a pager-sized portable access unit. The system is integrated in a network structure including one or more general purpose nodes for providing a wireless-to-wired interface. The network architecture allows video, audio and data (including biomedical data) streams to be connected directly to external users and devices. The portable access units may also be mated to various non-personal devices such as cameras or environmental sensors for providing a method for setting up wireless sensor nets from which reported data may be accessed through the portable access unit. The reported data may alternatively be automatically logged at a remote computer for access and viewing through a portable access unit, including the user's own.

  6. Wireless Augmented Reality Communication System

    NASA Technical Reports Server (NTRS)

    Jedrey, Thomas (Inventor); Agan, Martin (Inventor); Devereaux, Ann (Inventor)

    2014-01-01

    The system of the present invention is a highly integrated radio communication system with a multimedia co-processor which allows true two-way multimedia (video, audio, data) access as well as real-time biomedical monitoring in a pager-sized portable access unit. The system is integrated in a network structure including one or more general purpose nodes for providing a wireless-to-wired interface. The network architecture allows video, audio and data (including biomedical data) streams to be connected directly to external users and devices. The portable access units may also be mated to various non-personal devices such as cameras or environmental sensors for providing a method for setting up wireless sensor nets from which reported data may be accessed through the portable access unit. The reported data may alternatively be automatically logged at a remote computer for access and viewing through a portable access unit, including the user's own.

  7. Wireless Augmented Reality Communication System

    NASA Technical Reports Server (NTRS)

    Agan, Martin (Inventor); Devereaux, Ann (Inventor); Jedrey, Thomas (Inventor)

    2016-01-01

    The system of the present invention is a highly integrated radio communication system with a multimedia co-processor which allows true two-way multimedia (video, audio, data) access as well as real-time biomedical monitoring in a pager-sized portable access unit. The system is integrated in a network structure including one or more general purpose nodes for providing a wireless-to-wired interface. The network architecture allows video, audio and data (including biomedical data) streams to be connected directly to external users and devices. The portable access units may also be mated to various non-personal devices such as cameras or environmental sensors for providing a method for setting up wireless sensor nets from which reported data may be accessed through the portable access unit. The reported data may alternatively be automatically logged at a remote computer for access and viewing through a portable access unit, including the user's own.

  8. Frequency shifting approach towards textual transcription of heartbeat sounds.

    PubMed

    Arvin, Farshad; Doraisamy, Shyamala; Safar Khorasani, Ehsan

    2011-10-04

    Auscultation is an approach for diagnosing many cardiovascular problems. Automatic analysis of heartbeat sounds and extraction of their audio features can assist physicians in diagnosing disease. Textual transcription allows a continuous heart sound stream to be recorded in a text format that requires very little memory in comparison with other audio formats. In addition, text-based data allow indexing and searching techniques to be applied to access critical events. Hence, transcribed heartbeat sounds provide useful information for monitoring a patient's condition over long periods of time. This paper proposes a frequency shifting method to improve the performance of the transcription. The main objective of this study is to transfer the heartbeat sounds to the music domain. The proposed technique is tested with 100 samples recorded from different heart disease categories. The observed results show that the proposed shifting method significantly improves the performance of the transcription.
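
    The abstract does not spell out the shifting algorithm, so the sketch below only illustrates the general idea: scale the dominant frequency of each heart-sound frame up into the musical range and emit the nearest note name, producing a text transcription. The shift factor, frame length, and test signal are assumptions for illustration.

        import numpy as np

        NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

        def dominant_freq(frame, fs):
            """Dominant frequency (Hz) of one windowed frame via the FFT peak."""
            spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
            return np.fft.rfftfreq(len(frame), 1 / fs)[np.argmax(spectrum)]

        def freq_to_note(freq):
            """Nearest equal-tempered note name for a frequency (A4 = 440 Hz)."""
            midi = int(round(69 + 12 * np.log2(max(freq, 1e-6) / 440.0)))
            return f"{NOTE_NAMES[midi % 12]}{midi // 12 - 1}"

        def transcribe(signal, fs, shift_factor=8.0, frame_len=2048):
            """Shift each frame's dominant frequency into the musical range and emit
            one note symbol per frame -- a minimal textual transcription."""
            notes = []
            for start in range(0, len(signal) - frame_len, frame_len):
                f0 = dominant_freq(signal[start:start + frame_len], fs)
                notes.append(freq_to_note(f0 * shift_factor))
            return " ".join(notes)

        # Hypothetical heart-sound-like signal: 40 Hz bursts, sampled at 8 kHz.
        fs = 8000
        t = np.arange(0, 2.0, 1 / fs)
        signal = np.sin(2 * np.pi * 40 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.9)
        print(transcribe(signal, fs))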

  9. The NT digital micro tape recorder

    NASA Technical Reports Server (NTRS)

    Sasaki, Toshikazu; Alstad, John; Younker, Mike

    1993-01-01

    The description of an audio recorder may at first glance seem out of place in a conference which has been dedicated to the discussion of the technology and requirements of mass data storage. However, there are several advanced features of the NT system which will be of interest to the mass storage technologist. Moreover, there are a sufficient number of data storage formats in current use which have evolved from their audio counterparts to recommend a close attention to major innovative introductions of audio storage formats. While the existing analog micro-cassette recorder has been (and will continue to be) adequate for various uses, there are significant benefits to be gained through the application of digital technology. The elimination of background tape hiss and the availability of two relatively wide band channels (for stereo recording), for example, would greatly enhance listenability and speech intelligibility. And with the use of advanced high-density recording and LSI circuit technologies, a digital micro recorder can realize unprecedented compactness with excellent energy efficiency. This is what was accomplished with the NT-1 Digital Micro Recorder. Its remarkably compact size contributes to its portability. The high-density NT format enables up to two hours of low-noise digital stereo recording on a cassette the size of a postage stamp. Its highly energy-efficient mechanical and electrical design results in low power consumption; the unit can be operated up to 7 hours (for continuous recording) on a single AA alkaline battery. Advanced user conveniences include a multifunction LCD readout. The unit's compactness and energy-efficiency, in particular, are attributes that cannot be matched by existing analog and digital audio formats. The size, performance, and features of the NT format are of benefit primarily to those who desire improved portability and audio quality in a personal memo product. The NT Recorder is the result of over ten years of intensive, multi-disciplinary research and development. What follows is a discussion of the technologies that have made the NT possible: (1) NT format mechanics, (2) NT media, (3) NT circuitry and board.

  10. Report on Distance Learning Technologies.

    DTIC Science & Technology

    1995-09-01

    26 cities. The CSX system includes full-motion video, animations, audio, and interactive examples and testing to teach the use of a new computer...video. The change to all-digital media now permits the use of full-motion video, animation, and audio on networks. It is possible to have independent...is possible to download entire multimedia presentations from the network. To date there is not a great deal known about teaching courses using the

  11. An object-oriented, technology-adaptive information model

    NASA Technical Reports Server (NTRS)

    Anyiwo, Joshua C.

    1995-01-01

    The primary objective was to develop a computer information system for effectively presenting NASA's technologies to American industries, for appropriate commercialization. To this end a comprehensive information management model, applicable to a wide variety of situations, and immune to computer software/hardware technological gyrations, was developed. The model consists of four main elements: a DATA_STORE, a data PRODUCER/UPDATER_CLIENT and a data PRESENTATION_CLIENT, anchored to a central object-oriented SERVER engine. This server engine facilitates exchanges among the other model elements and safeguards the integrity of the DATA_STORE element. It is designed to support new technologies, as they become available, such as Object Linking and Embedding (OLE), on-demand audio-video data streaming with compression (such as is required for video conferencing), Worldwide Web (WWW) and other information services and browsing, fax-back data requests, presentation of information on CD-ROM, and regular in-house database management, regardless of the data model in place. The four components of this information model interact through a system of intelligent message agents which are customized to specific information exchange needs. This model is at the leading edge of modern information management models. It is independent of technological changes and can be implemented in a variety of ways to meet the specific needs of any communications situation. This summer a partial implementation of the model has been achieved. The structure of the DATA_STORE has been fully specified and successfully tested using Microsoft's FoxPro 2.6 database management system. Data PRODUCER/UPDATER and PRESENTATION architectures have been developed and also successfully implemented in FoxPro; and work has started on a full implementation of the SERVER engine. The model has also been successfully applied to a CD-ROM presentation of NASA's technologies in support of Langley Research Center's TAG efforts.

  12. Statistical data mining of streaming motion data for fall detection in assistive environments.

    PubMed

    Tasoulis, S K; Doukas, C N; Maglogiannis, I; Plagianakos, V P

    2011-01-01

    The analysis of human motion data is interesting for the purpose of activity recognition or emergency event detection, especially in the case of elderly or disabled people living independently in their homes. Several techniques have been proposed for identifying such distress situations using motion, audio, or video sensors either on the monitored subject (wearable sensors) or in the surrounding environment. The output of such sensors is data streams that require real-time recognition, especially in emergency situations, so traditional classification approaches may not be applicable for immediate alarm triggering or fall prevention. This paper presents a statistical mining methodology that may be used for the specific problem of real-time fall detection. Visual data are captured from the user's environment using overhead cameras, while motion data are collected from accelerometers on the subject's body; both are fed to the fall detection system. The paper includes the details of the stream data mining methodology incorporated in the system along with an initial evaluation of the accuracy achieved in detecting falls.
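
    Stream-oriented fall detection can be illustrated with a very small rule: an impact spike in the acceleration magnitude followed shortly by a period of near-stillness. The thresholds, sampling rate, and test stream below are illustrative assumptions, not the statistical mining method evaluated in the paper.

        from collections import deque

        G = 9.81  # m/s^2

        def detect_falls(samples, fs=50, impact_thresh=2.5 * G,
                         still_thresh=0.3, still_window_s=1.0):
            """Flag a fall when an impact spike is followed, about a second later,
            by a window whose magnitudes stay close to 1 g (subject lying still)."""
            window = deque(maxlen=int(still_window_s * fs))
            impact_at = None
            falls = []
            for i, a in enumerate(samples):          # `a` = acceleration magnitude (m/s^2)
                if a > impact_thresh:
                    impact_at = i
                window.append(a)
                window_full = len(window) == window.maxlen
                if impact_at is not None and window_full and i - impact_at >= window.maxlen:
                    if max(abs(x - G) for x in window) < still_thresh * G:
                        falls.append(impact_at / fs)  # fall time in seconds
                        impact_at = None
            return falls

        # Hypothetical stream: quiet standing, one impact spike, then lying still.
        stream = [G] * 100 + [3.2 * G] + [1.02 * G] * 100
        print(detect_falls(stream))                   # -> [2.0]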

  13. Increasing Educational Efficiency Through Technology (Commission Discussion and Background Materials).

    ERIC Educational Resources Information Center

    Indiana State Commission for Higher Education, Indianapolis.

    A program schedule and background information for Indiana Commission for Higher Education-sponsored discussion of the use of educational technology to increase educational effeciency are presented. The four major topics of discussion to illustrate the uses and advantages/disadvantages of audio, video, and computing technologies are as follows:…

  14. A Novel Technology to Investigate Students' Understandings of Enzyme Representations

    ERIC Educational Resources Information Center

    Linenberger, Kimberly J.; Bretz, Stacey Lowery

    2012-01-01

    Digital pen-and-paper technology, although marketed commercially as a bridge between old and new note-taking capabilities, synchronizes the collection of both written and audio data. This manuscript describes how this technology was used to improve data collection in research regarding students' learning, specifically their understanding of…

  15. Authenticity examination of compressed audio recordings using detection of multiple compression and encoders' identification.

    PubMed

    Korycki, Rafal

    2014-05-01

    Since the appearance of digital audio recordings, audio authentication has been becoming increasingly difficult. The currently available technologies and free editing software allow a forger to cut or paste any single word without audible artifacts. Nowadays, the only method referring to digital audio files commonly approved by forensic experts is the ENF criterion. It consists in fluctuation analysis of the mains frequency induced in electronic circuits of recording devices. Therefore, its effectiveness is strictly dependent on the presence of mains signal in the recording, which is a rare occurrence. Recently, much attention has been paid to authenticity analysis of compressed multimedia files and several solutions were proposed for detection of double compression in both digital video and digital audio. This paper addresses the problem of tampering detection in compressed audio files and discusses new methods that can be used for authenticity analysis of digital recordings. Presented approaches consist in evaluation of statistical features extracted from the MDCT coefficients as well as other parameters that may be obtained from compressed audio files. Calculated feature vectors are used for training selected machine learning algorithms. The detection of multiple compression covers up tampering activities as well as identification of traces of montage in digital audio recordings. To enhance the methods' robustness an encoder identification algorithm was developed and applied based on analysis of inherent parameters of compression. The effectiveness of tampering detection algorithms is tested on a predefined large music database consisting of nearly one million of compressed audio files. The influence of compression algorithms' parameters on the classification performance is discussed, based on the results of the current study. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
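
    As a rough illustration of the feature-extraction-plus-classifier pipeline (not the paper's actual feature set or model), the sketch below computes MDCT coefficients of framed audio with a plain NumPy implementation, summarises their magnitudes with a few statistics, and trains a scikit-learn classifier on hypothetical singly- versus doubly-compressed examples.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def mdct(frame):
            """MDCT of one sine-windowed frame of length 2N, returning N coefficients."""
            two_n = len(frame)
            n_half = two_n // 2
            n = np.arange(two_n)
            x = frame * np.sin(np.pi / two_n * (n + 0.5))
            k = np.arange(n_half)[:, None]
            basis = np.cos(np.pi / n_half * (n[None, :] + 0.5 + n_half / 2) * (k + 0.5))
            return basis @ x

        def mdct_features(signal, frame_len=1024):
            """Illustrative statistical summary of MDCT coefficient magnitudes."""
            coeffs = np.array([mdct(signal[i:i + frame_len])
                               for i in range(0, len(signal) - frame_len, frame_len // 2)])
            flat = np.abs(coeffs).ravel()
            return np.array([flat.mean(), flat.std(), np.median(flat),
                             (flat < 1e-3).mean()])   # fraction of near-zero coefficients

        # Hypothetical training data: label 0 = single compression, 1 = double compression.
        rng = np.random.default_rng(0)
        X = np.array([mdct_features(rng.standard_normal(8192) * (1.0 if label == 0 else 0.5))
                      for label in (0, 1) * 20])
        y = np.array([0, 1] * 20)
        clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
        print(clf.score(X, y))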

  16. Advances in Audio-Based Systems to Monitor Patient Adherence and Inhaler Drug Delivery.

    PubMed

    Taylor, Terence E; Zigel, Yaniv; De Looze, Céline; Sulaiman, Imran; Costello, Richard W; Reilly, Richard B

    2018-03-01

    Hundreds of millions of people worldwide have asthma and COPD. Current medications to control these chronic respiratory diseases can be administered using inhaler devices, such as the pressurized metered dose inhaler and the dry powder inhaler. Provided that they are used as prescribed, inhalers can improve patient clinical outcomes and quality of life. Poor patient inhaler adherence (both time of use and user technique) is, however, a major clinical concern and is associated with poor disease control, increased hospital admissions, and increased mortality rates, particularly in low- and middle-income countries. There are currently limited methods available to health-care professionals to objectively and remotely monitor patient inhaler adherence. This review describes recent sensor-based technologies that use audio-based approaches that show promising opportunities for monitoring inhaler adherence in clinical practice. This review discusses how one form of sensor-based technology, audio-based monitoring systems, can provide clinically pertinent information regarding patient inhaler use over the course of treatment. Audio-based monitoring can provide health-care professionals with quantitative measurements of the drug delivery of inhalers, signifying a clear clinical advantage over other methods of assessment. Furthermore, objective audio-based adherence measures can improve the predictability of patient outcomes to treatment compared with current standard methods of adherence assessment used in clinical practice. Objective feedback on patient inhaler adherence can be used to personalize treatment to the patient, which may enhance precision medicine in the treatment of chronic respiratory diseases. Copyright © 2017 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.

  17. New Integrated Video and Graphics Technology: Digital Video Interactive.

    ERIC Educational Resources Information Center

    Optical Information Systems, 1987

    1987-01-01

    Describes digital video interactive (DVI), a new technology which combines the interactivity of the graphics capabilities in personal computers with the realism of high-quality motion video and multitrack audio in an all-digital integrated system. (MES)

  18. Audio Haptic Videogaming for Developing Wayfinding Skills in Learners Who are Blind

    PubMed Central

    Sánchez, Jaime; de Borba Campos, Marcia; Espinoza, Matías; Merabet, Lotfi B.

    2014-01-01

    Interactive digital technologies are currently being developed as a novel tool for education and skill development. Audiopolis is an audio and haptic based videogame designed for developing orientation and mobility (O&M) skills in people who are blind. We have evaluated the cognitive impact of videogame play on O&M skills by assessing performance on a series of behavioral tasks carried out in both indoor and outdoor virtual spaces. Our results demonstrate that the use of Audiopolis had a positive impact on the development and use of O&M skills in school-aged learners who are blind. The impact of audio and haptic information on learning is also discussed. PMID:25485312

  19. Multiple Frequency Audio Signal Communication as a Mechanism for Neurophysiology and Video Data Synchronization

    PubMed Central

    Topper, Nicholas C.; Burke, S.N.; Maurer, A.P.

    2014-01-01

    BACKGROUND Current methods for aligning neurophysiology and video data are either prepackaged, requiring the additional purchase of a software suite, or use a blinking LED with a stationary pulse-width and frequency. These methods lack significant user interface for adaptation, are expensive, or risk a misalignment of the two data streams. NEW METHOD A cost-effective means to obtain high-precision alignment of behavioral and neurophysiological data is obtained by generating an audio pulse embedded with two domains of information, a low-frequency binary-counting signal and a high, randomly changing frequency. This enabled the derivation of temporal information while maintaining enough entropy in the system for algorithmic alignment. RESULTS The sample-to-frame index constructed using the audio input correlation method described in this paper enables video and data acquisition to be aligned at a sub-frame level of precision. COMPARISONS WITH EXISTING METHOD Traditionally, a synchrony pulse is recorded on-screen via a flashing diode. The higher sampling rate of the audio input of the camcorder enables the timing of an event to be detected with greater precision. CONCLUSIONS While on-line analysis and synchronization using specialized equipment may be the ideal situation in some cases, the method presented in the current paper offers a viable, low-cost alternative and gives the flexibility to interface with custom off-line analysis tools. Moreover, the ease of constructing and implementing the set-up presented in the current paper makes it applicable to a wide variety of applications that require video recording. PMID:25256648

  20. Multiple frequency audio signal communication as a mechanism for neurophysiology and video data synchronization.

    PubMed

    Topper, Nicholas C; Burke, Sara N; Maurer, Andrew Porter

    2014-12-30

    Current methods for aligning neurophysiology and video data are either prepackaged, requiring the additional purchase of a software suite, or use a blinking LED with a stationary pulse-width and frequency. These methods lack significant user interface for adaptation, are expensive, or risk a misalignment of the two data streams. A cost-effective means to obtain high-precision alignment of behavioral and neurophysiological data is obtained by generating an audio pulse embedded with two domains of information, a low-frequency binary-counting signal and a high, randomly changing frequency. This enabled the derivation of temporal information while maintaining enough entropy in the system for algorithmic alignment. The sample-to-frame index constructed using the audio input correlation method described in this paper enables video and data acquisition to be aligned at a sub-frame level of precision. Traditionally, a synchrony pulse is recorded on-screen via a flashing diode. The higher sampling rate of the audio input of the camcorder enables the timing of an event to be detected with greater precision. While on-line analysis and synchronization using specialized equipment may be the ideal situation in some cases, the method presented in the current paper offers a viable, low-cost alternative and gives the flexibility to interface with custom off-line analysis tools. Moreover, the ease of constructing and implementing the set-up presented in the current paper makes it applicable to a wide variety of applications that require video recording. Copyright © 2014 Elsevier B.V. All rights reserved.
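
    Both versions of this record describe the same mechanism, so a single sketch suffices: generate a sync pulse whose low-frequency component on/off-keys a binary counter while a randomly chosen high-frequency tone supplies the entropy, then recover the offset between a reference and a recording by cross-correlation. Frequencies, bit depth, and durations are illustrative assumptions, not the paper's specification.

        import numpy as np

        def make_sync_pulse(counter_value, fs=44100, dur=0.1, rng=None):
            """One sync pulse: an on/off-keyed 500 Hz tone carrying an 8-bit counter
            value plus a randomly chosen high-frequency tone for correlation entropy."""
            rng = rng or np.random.default_rng()
            t = np.arange(int(fs * dur)) / fs
            bits = [(counter_value >> b) & 1 for b in range(8)]
            seg = len(t) // 8
            low = np.concatenate([bit * np.sin(2 * np.pi * 500 * t[:seg]) for bit in bits])
            high = np.sin(2 * np.pi * rng.uniform(4000, 8000) * t[:len(low)])
            return low + 0.5 * high

        def align_offset(reference, recorded):
            """Sample offset of `recorded` relative to `reference` via cross-correlation."""
            corr = np.correlate(recorded, reference, mode="full")
            return int(np.argmax(corr)) - (len(reference) - 1)

        pulse = make_sync_pulse(0b10110011)
        recorded = np.concatenate([np.zeros(1234), pulse])
        recorded = recorded + 0.01 * np.random.randn(recorded.size)
        print(align_offset(pulse, recorded))   # expect ~1234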

  1. An architecture of entropy decoder, inverse quantiser and predictor for multi-standard video decoding

    NASA Astrophysics Data System (ADS)

    Liu, Leibo; Chen, Yingjie; Yin, Shouyi; Lei, Hao; He, Guanghui; Wei, Shaojun

    2014-07-01

    A VLSI architecture for an entropy decoder, inverse quantiser and predictor is proposed in this article. This architecture is used for decoding video streams of three standards on a single chip, i.e. H.264/AVC, AVS (China National Audio Video coding Standard) and MPEG2. The proposed scheme is called MPMP (Macro-block-Parallel based Multilevel Pipeline) and is intended to improve decoding performance to satisfy real-time requirements while maintaining a reasonable area and power consumption. Several techniques, such as slice-level pipelining, MB (Macro-Block) level pipelining, and MB-level parallelism, are adopted. Input and output buffers for the inverse quantiser and predictor are shared by the decoding engines for H.264, AVS and MPEG2, effectively reducing the implementation overhead. Simulation shows that the decoding process consumes 512, 435 and 438 clock cycles per MB in H.264, AVS and MPEG2, respectively. Owing to the proposed techniques, the video decoder can support H.264 HP (High Profile) 1920 × 1088@30fps (frames per second) streams, AVS JP (Jizhun Profile) 1920 × 1088@41fps streams and MPEG2 MP (Main Profile) 1920 × 1088@39fps streams at a 200 MHz working frequency.
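
    The quoted figures can be sanity-checked with a short throughput calculation (assuming 16 x 16 macroblocks): a 1920 x 1088 frame contains 8160 macroblocks, and dividing the 200 MHz clock by the cycles needed per frame gives the sustainable frame rate for each standard.

        # Throughput check for the quoted figures (1920x1088, 16x16 macroblocks, 200 MHz).
        width, height, clock_hz = 1920, 1088, 200e6
        mbs_per_frame = (width // 16) * (height // 16)        # 120 * 68 = 8160 macroblocks

        for standard, cycles_per_mb in [("H.264", 512), ("AVS", 435), ("MPEG2", 438)]:
            fps = clock_hz / (mbs_per_frame * cycles_per_mb)
            print(f"{standard}: {fps:.1f} fps sustainable")
        # H.264 ~48 fps, AVS ~56 fps, MPEG2 ~56 fps -- comfortably above the quoted
        # 30/41/39 fps targets, leaving headroom for the other pipeline stages.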

  2. Adult Literacy and Technology Conference Proceedings (University Park, Pennsylvania, June 4-7, 1987).

    ERIC Educational Resources Information Center

    Meenan, Avis L., Comp.; Burns, Patricia E., Comp.

    These proceedings contain the summaries of 60 presentations. Among those included are: "Desk Top Publishing & Experiential Literacy Material" (Arnold); "A Description of the U.S. Experience in Providing Vocational Skills to Individuals with Low Literacy Skills" (Barbee); "Audio-Disk Technology" (Bixler, MacClay); "Technology for Teachers: A Group…

  3. New Communications Technology and Distance Education: Implications for Commonwealth Countries of the South. Papers on Information Technology No. 239.

    ERIC Educational Resources Information Center

    Bates, A. W.

    This review of the technical possibilities of audio, television, computing, and combination media addresses the main factors influencing decisions about each technology's suitability for distance teaching, including access, costs, symbolic representation, student control, teacher control, existing structures, learning skills to be developed, and…

  4. Music Teacher Perceptions of a Model of Technology Training and Support in Virginia

    ERIC Educational Resources Information Center

    Welch, Lee Arthur

    2013-01-01

    A plethora of technology resources currently exists for the music classroom of the twenty-first century, including digital audio and video, music software, electronic instruments, Web 2.0 tools and more. Research shows a strong need for professional development for teachers to properly implement and integrate instructional technology resources…

  5. Space Operations Learning Center

    NASA Technical Reports Server (NTRS)

    Lui, Ben; Milner, Barbara; Binebrink, Dan; Kuok, Heng

    2012-01-01

    The Space Operations Learning Center (SOLC) is a tool that provides an online learning environment where students can learn science, technology, engineering, and mathematics (STEM) through a series of training modules. SOLC is also an effective medium for NASA to showcase its contributions to the general public. SOLC is a Web-based environment with a learning platform for students to understand STEM through interactive modules on various engineering topics. SOLC is unique in its approach to developing learning materials that teach school-aged students the basic concepts of space operations. SOLC utilizes the latest Web and software technologies to present this educational content in a fun and engaging way for all grade levels. SOLC uses animations, streaming video, cartoon characters, audio narration, interactive games and more to deliver educational concepts. The Web portal organizes all of these training modules in an easily accessible way for visitors worldwide. SOLC provides multiple training modules on various topics. At the time of this reporting, seven modules have been developed: Space Communication, Flight Dynamics, Information Processing, Mission Operations, Kids Zone 1, Kids Zone 2, and Save The Forest. Each of the first four modules contains three components: Flight Training, Flight License, and Fly It! Kids Zone 1 and 2 include a number of educational videos and games designed specifically for grades K-6. Save The Forest is a space operations mission with four simulations and activities to complete, optimized for new touch screen technology. The Kids Zone 1 module has recently been ported to Facebook to attract a wider audience.

  6. Streaming Media Technology: Laying the Foundations for Educational Change.

    ERIC Educational Resources Information Center

    Sircar, Jayanta

    2000-01-01

    Discussion of the delivery of multimedia using streaming technology focuses on its use in engineering education. Highlights include engineering education and instructional technology, including learning approaches based on cognitive development; differences between local and distance education; economic factors; and roles of Web-based streaming,…

  7. Transformations: Technology and the Music Industry.

    ERIC Educational Resources Information Center

    Peters, G. David

    2001-01-01

    Focuses on the companies and organizations of the Music Industry Conference (MIC). Addresses topics such as: changes in companies due to technology, audio compact discs, the music instrument digital interface (MIDI) , digital sound recording, and the MIC on-line music instruction programs offered. (CMK)

  8. Emerging Organizational Electronic Communication Technologies: A Selected Review of the Literature.

    ERIC Educational Resources Information Center

    Hellweg, Susan A.; And Others

    A selective review of research dealing with emerging organizational electronic communication technologies from the communication, management, and organizational psychology literature was divided into four categories: word processing, electronic mail, computer conferencing, and teleconferencing (audio/video). The analysis was directed specifically…

  9. Optical Disk Technology.

    ERIC Educational Resources Information Center

    Abbott, George L.; And Others

    1987-01-01

    This special feature focuses on recent developments in optical disk technology. Nine articles discuss current trends, large scale image processing, data structures for optical disks, the use of computer simulators to create optical disks, videodisk use in training, interactive audio video systems, impacts on federal information policy, and…

  10. 76 FR 75522 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-02

    ... Recorded Interviewing (CARI) technology field test using the 2012 Survey of Income and Program Participation Event History Calendar (SIPP-EHC) Field Test questionnaire. Computer Audio Recorded Interviewing... the technology. Other tests have also been conducted on non-voluntary surveys and proved promising...

  11. Investigating health information needs of community radio stations and applying the World Wide Web to disseminate audio products.

    PubMed

    Snyders, Janus; van Wyk, Elmarie; van Zyl, Hendra

    2010-01-01

    The Web and Media Technologies Platform (WMTP) of the South African Medical Research Council (MRC) conducted a pilot project amongst community radio stations in South Africa. Based on previous research done in Africa, WMTP investigated the following research question: how reliable is the content of health information broadcast by community radio stations? The main objectives of the project were to determine 1) the intervals of health slots on community radio stations, 2) the sources used by community radio stations for health slots, and 3) the types of audio products needed for health slots, and 4) to develop a user-friendly Web site in response to the stations' needs for easy access to audio material on health information.

  12. ESA personal communications and digital audio broadcasting systems based on non-geostationary satellites

    NASA Technical Reports Server (NTRS)

    Logalbo, P.; Benedicto, J.; Viola, R.

    1993-01-01

    Personal Communications and Digital Audio Broadcasting are two new services that the European Space Agency (ESA) is investigating for future European and Global Mobile Satellite systems. ESA is active in promoting these services in their various mission options including non-geostationary and geostationary satellite systems. A Medium Altitude Global Satellite System (MAGSS) for global personal communications at L and S-band, and a Multiregional Highly inclined Elliptical Orbit (M-HEO) system for multiregional digital audio broadcasting at L-band are described. Both systems are being investigated by ESA in the context of future programs, such as Archimedes, which are intended to demonstrate the new services and to develop the technology for future non-geostationary mobile communication and broadcasting satellites.

  13. Multimodal audio guide for museums and exhibitions

    NASA Astrophysics Data System (ADS)

    Gebbensleben, Sandra; Dittmann, Jana; Vielhauer, Claus

    2006-02-01

    In our paper we introduce a new Audio Guide concept for exploring buildings, realms and exhibitions. Currently proposed solutions work in most cases with pre-defined devices, which users have to buy or borrow. These systems often involve complex technical installations and require a great degree of user training for device handling. Furthermore, the activation of audio commentary related to the exhibition objects is typically based on additional components such as infrared, radio frequency or GPS technology. Besides the necessity of installing specific devices for user location, these approaches often support only automatic activation with no or limited user interaction. Therefore, elaboration of alternative concepts appears worthwhile. Motivated by these aspects, we introduce a new concept based on the use of the visitor's own mobile smart phone. The advantages of our approach are twofold: firstly, the Audio Guide can be used in various places without any purchase and extensive installation of additional components in or around the exhibition object. Secondly, visitors can experience the exhibition on individual tours simply by loading the Audio Guide onto their device at a single point of entry, the Audio Guide Service Counter, and keeping it on their personal device. Furthermore, the user is usually quite familiar with the interface of her or his own phone and can thus interact with the application easily. Our technical concept makes use of two general ideas for location detection and activation: firstly, an enhanced interactive number-based activation that exploits the visual capabilities of modern smart phones, and secondly, an active digital audio watermarking approach, where information about objects is transmitted via an analog audio channel.
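
    As a toy illustration of transmitting an object identifier over the audio channel (the paper's watermarking scheme is more sophisticated and is not specified here), the sketch below encodes an exhibit number as one sine tone per digit and decodes it from the FFT peak of each segment. Frequencies, durations, and the exhibit number are assumptions.

        import numpy as np

        DIGIT_FREQS = {str(d): 600 + 100 * d for d in range(10)}   # '0' -> 600 Hz ... '9' -> 1500 Hz
        FS, DIGIT_DUR = 8000, 0.2

        def encode_exhibit_id(exhibit_id):
            """Encode each decimal digit of the exhibit number as a 0.2 s sine tone."""
            t = np.arange(int(FS * DIGIT_DUR)) / FS
            return np.concatenate([np.sin(2 * np.pi * DIGIT_FREQS[d] * t) for d in str(exhibit_id)])

        def decode_exhibit_id(audio):
            """Recover the digits by locating the FFT peak of each fixed-length segment."""
            n = int(FS * DIGIT_DUR)
            freqs = np.fft.rfftfreq(n, 1 / FS)
            digits = []
            for start in range(0, len(audio), n):
                seg = audio[start:start + n]
                peak = freqs[np.argmax(np.abs(np.fft.rfft(seg)))]
                digits.append(min(DIGIT_FREQS, key=lambda d: abs(DIGIT_FREQS[d] - peak)))
            return int("".join(digits))

        print(decode_exhibit_id(encode_exhibit_id(4217)))   # -> 4217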

  14. Digital Audio Radio Broadcast Systems Laboratory Testing Nearly Complete

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Radio history continues to be made at the NASA Lewis Research Center with the completion of phase one of the digital audio radio (DAR) testing conducted by the Consumer Electronics Group of the Electronic Industries Association. This satellite, satellite/terrestrial, and terrestrial digital technology will open up new audio broadcasting opportunities both domestically and worldwide. It will significantly improve the current quality of amplitude-modulated/frequency-modulated (AM/FM) radio with a new digitally modulated radio signal and will introduce true compact-disc-quality (CD-quality) sound for the first time. Lewis is hosting the laboratory testing of seven proposed digital audio radio systems and modes. Two of the proposed systems operate in two modes each, making a total of nine systems being tested. The nine systems are divided into the following types of transmission: in-band on-channel (IBOC), in-band adjacent-channel (IBAC), and new bands. The laboratory testing was conducted by the Consumer Electronics Group of the Electronic Industries Association. Subjective assessments of the audio recordings for each of the nine systems was conducted by the Communications Research Center in Ottawa, Canada, under contract to the Electronic Industries Association. The Communications Research Center has the only CCIR-qualified (Consultative Committee for International Radio) audio testing facility in North America. The main goals of the U.S. testing process are to (1) provide technical data to the Federal Communication Commission (FCC) so that it can establish a standard for digital audio receivers and transmitters and (2) provide the receiver and transmitter industries with the proper standards upon which to build their equipment. In addition, the data will be forwarded to the International Telecommunications Union to help in the establishment of international standards for digital audio receivers and transmitters, thus allowing U.S. manufacturers to compete in the world market.

  15. The Development of Pre-Service Science Teachers' Professional Knowledge in Utilizing ICT to Support Professional Lives

    ERIC Educational Resources Information Center

    Arnold, Savittree Rochanasmita; Padilla, Michael J.; Tunhikorn, Bupphachart

    2009-01-01

    In the rapidly developing digital world, technology is and will be a force in workplaces, communities, and everyday lives in the 21st century. Information and Communication Technology (ICT) including computer hardware/software, networking and other technologies such as audio, video, and other multimedia tools became learning tools for students in…

  16. A Recommendation for a New Internet-based Environment for Studying Literature

    ERIC Educational Resources Information Center

    Kartal, Erdogan; Arikan, Arda

    2010-01-01

    The effects of information and communication technologies, which are rapidly improving and spreading in the current age, can be seen in the field of training and education as well as in all other fields. Unlike previous technologies, the Internet, which is the concrete compound of those technologies, provides users with the trio of audio, text and…

  17. Let Them Have Their Cell Phone (And Let Them Read to It Too): Technology, Writing Instruction and Textual Obsolescence

    ERIC Educational Resources Information Center

    Shahar, Jed

    2012-01-01

    Cell phone ubiquity enables students to record and share audio file versions of their essays for proofreading purposes. Adopting this practice in community college developmental writing classes leads to an investigation of both writing as a technology and the influence of modern technology on composition and composition pedagogy.

  18. Problem Based Learning in Design and Technology Education Supported by Hypermedia-Based Environments

    ERIC Educational Resources Information Center

    Page, Tom; Lehtonen, Miika

    2006-01-01

    Audio-visual advances in virtual reality (VR) technology have given rise to innovative new ways to teach and learn. However, so far teaching and learning processes have been technologically driven as opposed to pedagogically led. This paper identifies the development of a pedagogical model and its application for teaching, studying and learning…

  19. Educational Applications of Podcasting in the Music Classroom

    ERIC Educational Resources Information Center

    Kerstetter, Kathleen

    2009-01-01

    For the music teacher, keeping up with technology can be a daunting task. One of the latest forms of technology, podcasting, has seen explosive growth in educational use over the last two years. Podcasting is a technology that allows listeners to subscribe, download, and listen to audio or audiovisual files at their convenience. Like a magazine…

  20. Using speech recognition to enhance the Tongue Drive System functionality in computer access.

    PubMed

    Huo, Xueliang; Ghovanloo, Maysam

    2011-01-01

    Tongue Drive System (TDS) is a wireless tongue operated assistive technology (AT), which can enable people with severe physical disabilities to access computers and drive powered wheelchairs using their volitional tongue movements. TDS offers six discrete commands, simultaneously available to the users, for pointing and typing as a substitute for mouse and keyboard in computer access, respectively. To enhance the TDS performance in typing, we have added a microphone, an audio codec, and a wireless audio link to its readily available 3-axial magnetic sensor array, and combined it with a commercially available speech recognition software, the Dragon Naturally Speaking, which is regarded as one of the most efficient ways for text entry. Our preliminary evaluations indicate that the combined TDS and speech recognition technologies can provide end users with significantly higher performance than using each technology alone, particularly in completing tasks that require both pointing and text entry, such as web surfing.

  1. A hybrid technique for speech segregation and classification using a sophisticated deep neural network

    PubMed Central

    Nawaz, Tabassam; Mehmood, Zahid; Rashid, Muhammad; Habib, Hafiz Adnan

    2018-01-01

    Recent research on speech segregation and music fingerprinting has led to improvements in speech segregation and music identification algorithms. Speech and music segregation generally involves the identification of music followed by speech segregation; however, music segregation becomes a challenging task in the presence of noise. This paper proposes a novel method of speech segregation for unlabelled stationary noisy audio signals using the deep belief network (DBN) model. The proposed method successfully segregates a music signal from noisy audio streams. A recurrent neural network (RNN)-based hidden layer segregation model is applied to remove stationary noise, and dictionary-based Fisher algorithms are employed for speech classification. The proposed method is tested on three datasets (TIMIT, MIR-1K, and MusicBrainz), and the results indicate the robustness of the proposed method for speech segregation. The qualitative and quantitative analyses carried out on these datasets demonstrate the efficiency of the proposed method compared to state-of-the-art speech segregation and classification-based methods. PMID:29558485
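
    The full DBN/RNN/Fisher pipeline is beyond a short example, but the basic frame-classification idea behind speech/music segregation can be sketched with simple hand-crafted features and an off-the-shelf neural classifier. Everything below (features, training material, network size) is an illustrative stand-in for the learned models described in the record.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        def frame_features(signal, fs=16000, frame_len=512):
            """Per-frame spectral centroid and log energy -- crude stand-ins for the
            learned representations in the cited DBN/RNN pipeline."""
            freqs = np.fft.rfftfreq(frame_len, 1 / fs)
            feats = []
            for i in range(0, len(signal) - frame_len, frame_len):
                spec = np.abs(np.fft.rfft(signal[i:i + frame_len]))
                centroid = (freqs * spec).sum() / (spec.sum() + 1e-9)
                feats.append([centroid, np.log(spec.sum() + 1e-9)])
            return np.array(feats)

        # Hypothetical training material: a tone-like "music" signal vs. a noise-like signal.
        rng = np.random.default_rng(1)
        music = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
        noisy = rng.standard_normal(16000)
        X = np.vstack([frame_features(music), frame_features(noisy)])
        y = np.array([0] * (len(X) // 2) + [1] * (len(X) - len(X) // 2))
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0).fit(X, y)
        print(clf.score(X, y))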

  2. Experiments in MPEG-4 content authoring, browsing, and streaming

    NASA Astrophysics Data System (ADS)

    Puri, Atul; Schmidt, Robert L.; Basso, Andrea; Civanlar, Mehmet R.

    2000-12-01

    In this paper, within the context of the MPEG-4 standard, we report on preliminary experiments in three areas -- authoring of MPEG-4 content, a player/browser for MPEG-4 content, and streaming of MPEG-4 content. MPEG-4 is a new standard for coding of audiovisual objects; the core of the MPEG-4 standard is complete while amendments are in various stages of completion. MPEG-4 addresses compression of audio and visual objects, their integration by scene description, and interactivity of users with such objects. MPEG-4 scene description is based on a VRML-like language for 3D scenes, extended to 2D scenes, and supports integration of 2D and 3D scenes. This scene description language is called BIFS. First, we introduce the basic concepts behind BIFS and then show, with an example, textual authoring of the different components needed to describe an audiovisual scene in BIFS; the textual BIFS is then saved as compressed binary file(s) for storage or transmission. Then, we discuss a high-level design of an MPEG-4 player/browser that uses the main components from authoring, such as the encoded BIFS stream, the media files it refers to, and the multiplexed object descriptor stream, to play an MPEG-4 scene. We also discuss our extensions to such a player/browser. Finally, we present our work in streaming of MPEG-4 -- the payload format, modifications to the client MPEG-4 player/browser, the server-side infrastructure, and example content used in our MPEG-4 streaming experiments.
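
    As a toy illustration of the streaming side only, the sketch below splits an already-encoded elementary stream into fixed-size, timestamped packets for transport. The header layout, field sizes, and tick increment are hypothetical and are not the RTP payload format developed in the paper.

        import struct

        def packetize(es_bytes, stream_id, payload_size=1400, ticks_per_packet=90):
            # Yield (header, payload) pairs; the header carries a stream id,
            # a 16-bit sequence number, and a 32-bit timestamp.
            seq, ts = 0, 0
            for off in range(0, len(es_bytes), payload_size):
                header = struct.pack("!BHI", stream_id, seq & 0xFFFF, ts & 0xFFFFFFFF)
                yield header, es_bytes[off:off + payload_size]
                seq += 1
                ts += ticks_per_packet

        # Example: packetize a dummy 10 kB compressed stream as stream id 3.
        packets = list(packetize(b"\x00" * 10_000, stream_id=3))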

  3. "Disruptive Technologies", "Pedagogical Innovation": What's New? Findings from an In-Depth Study of Students' Use and Perception of Technology

    ERIC Educational Resources Information Center

    Conole, Grainne; de Laat, Maarten; Dillon, Teresa; Darby, Jonathan

    2008-01-01

    The paper describes the findings from a study of students' use and experience of technologies. A series of in-depth case studies were carried out across four subject disciplines, with data collected via survey, audio logs and interviews. The findings suggest that students are immersed in a rich, technology-enhanced learning environment and that…

  4. The Effectiveness of Streaming Video on Medical Student Learning: A Case Study

    PubMed Central

    Bridge, Patrick D.; Jackson, Matt; Robinson, Leah

    2009-01-01

    Information technology helps meet today's medical students’ needs by providing multiple curriculum delivery methods. Video streaming is an e-learning technology that uses the Internet to deliver curriculum while giving the student control of the content's delivery. There have been few studies conducted on the effectiveness of streaming video in medical schools. A 5-year retrospective study was conducted using three groups of students (n = 1736) to determine if the availability of streaming video in Years 1–2 of the basic science curriculum affected overall Step 1 scores for first-time test-takers. The results demonstrated a positive effect on program outcomes as streaming video became more readily available to students. Based on these findings, streaming video technology seems to be a viable tool to complement in-class delivery methods, to accommodate the needs of medical students, and to provide options for meeting the challenges of delivering the undergraduate medical curriculum. Further studies need to be conducted to continue validating the effectiveness of streaming video technology. PMID:20165525

  5. The effectiveness of streaming video on medical student learning: a case study.

    PubMed

    Bridge, Patrick D; Jackson, Matt; Robinson, Leah

    2009-08-19

    Information technology helps meet today's medical students' needs by providing multiple curriculum delivery methods. Video streaming is an e-learning technology that uses the Internet to deliver curriculum while giving the student control of the content's delivery. There have been few studies conducted on the effectiveness of streaming video in medical schools. A 5-year retrospective study was conducted using three groups of students (n = 1736) to determine if the availability of streaming video in Years 1-2 of the basic science curriculum affected overall Step 1 scores for first-time test-takers. The results demonstrated a positive effect on program outcomes as streaming video became more readily available to students. Based on these findings, streaming video technology seems to be a viable tool to complement in-class delivery methods, to accommodate the needs of medical students, and to provide options for meeting the challenges of delivering the undergraduate medical curriculum. Further studies need to be conducted to continue validating the effectiveness of streaming video technology.

  6. A Planning and Development Proposal.

    ERIC Educational Resources Information Center

    Schachter, Rebeca

    In view of the rapidly changing hardware technology along with the quality and quantity of software and general attitudes toward educational technology, the configuration of the Audio-Visual Distribution System and the Science and Engineering Library (SEL) should be flexible enough to incorporate these variables. SEL has made significant thrusts…

  7. New Technologies, Same Ideologies: Learning from Language Revitalization Online

    ERIC Educational Resources Information Center

    Wagner, Irina

    2017-01-01

    Ease of access, production, and distribution have made online technologies popular in language revitalization. By incorporating multimodal resources, audio, video, and games, they attract indigenous communities undergoing language shift in hopes of its reversal. However, by merely expanding language revitalization to the web, many language…

  8. Recognition of Speech from the Television with Use of a Wireless Technology Designed for Cochlear Implants.

    PubMed

    Duke, Mila Morais; Wolfe, Jace; Schafer, Erin

    2016-05-01

    Cochlear implant (CI) recipients often experience difficulty understanding speech in noise and speech that originates from a distance. Many CI recipients also experience difficulty understanding speech originating from a television. Use of hearing assistance technology (HAT) may improve speech recognition in noise and for signals that originate from more than a few feet from the listener; however, there are no published studies evaluating the potential benefits of a wireless HAT designed to deliver audio signals from a television directly to a CI sound processor. The objective of this study was to compare speech recognition in quiet and in noise of CI recipients with the use of their CI alone and with the use of their CI and a wireless HAT (Cochlear Wireless TV Streamer). A two-way repeated measures design was used to evaluate performance differences obtained in quiet and in competing noise (65 dBA) with the CI sound processor alone and with the sound processor coupled to the Cochlear Wireless TV Streamer. Sixteen users of Cochlear Nucleus 24 Freedom, CI512, and CI422 implants were included in the study. Participants were evaluated in four conditions including use of the sound processor alone and use of the sound processor with the wireless streamer in quiet and in the presence of competing noise at 65 dBA. Speech recognition was evaluated in each condition with two full lists of Computer-Assisted Speech Perception Testing and Training Sentence-Level Test sentences presented from a light-emitting diode television. Speech recognition in noise was significantly better with use of the wireless streamer compared to participants' performance with their CI sound processor alone. There was also a nonsignificant trend toward better performance in quiet with use of the TV Streamer. Performance was significantly poorer when evaluated in noise compared to performance in quiet when the TV Streamer was not used. Use of the Cochlear Wireless TV Streamer designed to stream audio from a television directly to a CI sound processor provides better speech recognition in quiet and in noise when compared to performance obtained with use of the CI sound processor alone. American Academy of Audiology.

  9. Satellite sound broadcasting system, portable reception

    NASA Technical Reports Server (NTRS)

    Golshan, Nasser; Vaisnys, Arvydas

    1990-01-01

    Studies are underway at JPL in the emerging area of Satellite Sound Broadcast Service (SSBS) for direct reception by low-cost portable, semi-portable, mobile, and fixed radio receivers. This paper addresses the portable reception of digital broadcasting of monophonic audio with source material band-limited to 5 kHz (source audio comparable to commercial AM broadcasting). The proposed system provides transmission robustness, uniformity of performance over the coverage area, and excellent frequency reuse. Propagation problems associated with indoor portable reception are considered in detail and innovative antenna concepts are suggested to mitigate these problems. It is shown that, with the marriage of proper technologies, a single medium-power satellite can provide substantial direct satellite audio broadcast capability to CONUS in the UHF or L bands for high-quality portable indoor reception by low-cost radio receivers.

  10. New radio meteor detecting and logging software

    NASA Astrophysics Data System (ADS)

    Kaufmann, Wolfgang

    2017-08-01

    A new piece of software, "Meteor Logger", for the radio observation of meteors is described. It analyses an incoming audio stream in the frequency domain to detect a radio meteor signal on the basis of its signature, instead of applying an amplitude threshold. To that end, the distribution of the three frequencies with the highest spectral power is tracked over time (the 3f method). An auto-notch algorithm is developed to prevent radio meteor signal detection from being jammed by any interference line that is present. The results of an exemplary logging session are discussed.
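
    A hedged sketch of the 3f idea as described above: for each short-time spectrum, take the three bins with the most power and flag frames where those bins stay tightly clustered (a narrowband echo) rather than scattering like broadband noise. The window length, clustering tolerance, and persistence threshold below are illustrative choices, not the values used in Meteor Logger.

        import numpy as np

        def top3_frequencies(audio, sr=48000, frame_len=4096, hop=2048):
            # For each windowed frame, return the three frequencies with the most power.
            win = np.hanning(frame_len)
            n = (len(audio) - frame_len) // hop + 1
            freqs = np.fft.rfftfreq(frame_len, 1.0 / sr)
            out = []
            for i in range(n):
                spec = np.abs(np.fft.rfft(audio[i * hop:i * hop + frame_len] * win)) ** 2
                out.append(freqs[np.argsort(spec)[-3:]])    # three strongest bins
            return np.array(out)                             # shape (frames, 3)

        def detect_events(top3, tolerance_hz=30.0, min_frames=3):
            # A frame is a candidate when its three peak frequencies agree to within
            # tolerance_hz; an event needs min_frames consecutive candidate frames.
            candidate = (top3.max(axis=1) - top3.min(axis=1)) < tolerance_hz
            events, run = [], 0
            for i, c in enumerate(candidate):
                run = run + 1 if c else 0
                if run == min_frames:
                    events.append(i - min_frames + 1)        # start frame of the event
            return events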

  11. A Prospectus for the Future Development of a Speech Lab: Hypertext Applications.

    ERIC Educational Resources Information Center

    Berube, David M.

    This paper presents a plan for the next generation of speech laboratories which integrates technologies of modern communication in order to improve and modernize the instructional process. The paper first examines the application of intermediate technologies including audio-video recording and playback, computer assisted instruction and testing…

  12. On Basic Needs and Modest Media.

    ERIC Educational Resources Information Center

    Gunter, Jock

    1978-01-01

    The need for grass-roots participation and local control in whatever technology is used to meet basic educational needs is stressed. Successful uses of the audio cassette recorder and the portable half-inch video recorder are described; the 8-mm sound camera and video player are also suggested as viable "modest" technologies. (JEG)

  13. Use of Audiovisual Texts in University Education Process

    ERIC Educational Resources Information Center

    Aleksandrov, Evgeniy P.

    2014-01-01

    Audio-visual learning technologies offer great opportunities in the development of students' analytical and projective abilities. These technologies can be used in classroom activities and for homework. This article discusses the features of audiovisual media texts use in a series of social sciences and humanities in the University curriculum.

  14. The Lived Experience of In-Service Teachers Using Synchronous Technology: A Phenomenological Study

    ERIC Educational Resources Information Center

    Vasquez, Sarah T.

    2017-01-01

    Unlike most online professional development opportunities, synchronous technology affords immediate communications for discussion and feedback while interacting with participants simultaneously through text, audio, video, and screen sharing. The purpose of this study is to find answers to meet the practical need to inform, design, and implement…

  15. Uses of Technology in Community Colleges: A Resource Book for Community College Teachers and Administrators.

    ERIC Educational Resources Information Center

    Gooler, Dennis D., Ed.

    This resource guide for community college teachers and administrators focuses on hardware and software. The following are discussed: (1) individual technologies--computer-assisted instruction, audio tape, films, filmstrips/slides, dial access, programmed instruction, learning activity packages, video cassettes, cable TV, independent learning labs,…

  16. The Mechanism for Organising and Propelling Educational Technology in China

    ERIC Educational Resources Information Center

    Yongqian, Liu; Dongyuan, Cheng; Xinli, Liu

    2010-01-01

    Having started early in the 1920s as a spontaneously launched educational activity by civil organisations under the influence of American audio-visual theory and practice, Chinese educational technology was later put under governmental management. This paper is composed of five parts covering mainly the historical development of educational…

  17. Cool Tools for the New Frontier: Technological Advances Help Associates Tell Their Story.

    ERIC Educational Resources Information Center

    Hersch, James

    1998-01-01

    Argues that creation of a World Wide Web site that makes good use of the available digital audio and visual technologies can be useful in campus activities planning and advertising. The design of a good Web site and the potential uses of digital video and compact discs are discussed. Costs of these technologies are also outlined. (MSE)

  18. Patients' use of digital audio recordings in four different outpatient clinics.

    PubMed

    Wolderslund, Maiken; Kofoed, Poul-Erik; Holst, René; Ammentorp, Jette

    2015-12-01

    To investigate a new technology of digital audio recording (DAR) of health consultations to provide knowledge about patients' use and evaluation of this recording method. A cross-sectional feasibility analysis of the intervention using log data from the recording platform and data from a patient-administered questionnaire. Four different outpatient clinics at a Danish hospital: Paediatrics, Orthopaedics, Internal Medicine and Urology. Two thousand seven hundred and eighty-four outpatients having their consultation audio recorded by one of 49 participating health professionals. DAR of outpatient consultations provided to patients permitting replay of their consultation either alone or together with their relatives. Replay of the consultation within 90 days from the consultation. In the adult outpatient clinics, one in every three consultations was replayed; however, the rates were significantly lower in the paediatric clinic where one in five consultations was replayed. The usage of the audio recordings was positively associated with increasing patient age and first-time visits to the clinic. Patient gender influenced replays in different ways; for instance, relatives of male patients replayed recordings more often than relatives of female patients did. Approval of future recordings was high among the patients who replayed the consultation. Patients found that recording health consultations was an important information aid, and the digital recording technology was found to be feasible in routine practice. © The Author 2015. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.

  19. Live Educational Outreach for Ocean Exploration: High-Bandwidth Ship-to-Shore Broadcasts Using Internet2

    NASA Astrophysics Data System (ADS)

    Coleman, D. F.; Ballard, R. D.

    2005-12-01

    During the past 3 field seasons, our group at the University of Rhode Island Graduate School of Oceanography, in partnership with the Institute for Exploration and a number of educational institutions, has conducted a series of ocean exploration expeditions with a significant focus on educational outreach through "telepresence" - utilizing live transmissions of video, audio, and data streams across the Internet and Internet2. Our educational partners include Immersion Presents, Boys and Girls Clubs of America, the Jason Foundation for Education, and the National Geographic Society, all of whom provided partial funding for the expeditions. The primary funding agency each year was NOAA's Office of Ocean Exploration and our outreach efforts were conducted in collaboration with them. During each expedition, remotely operated vehicle (ROV) systems were employed to examine interesting geological and archaeological sites on the seafloor. These expeditions included the investigation of ancient shipwrecks in the Black Sea in 2003, a survey of the Titanic shipwreck site in 2004, and a detailed sampling and mapping effort at the Lost City Hydrothermal Field in 2005. High-definition video cameras on the ROVs collected the footage that was then digitally encoded, IP-encapsulated, and streamed across a satellite link to a shore-based hub, where the streams were redistributed. During each expedition, live half-hour-long educational broadcasts were produced 4 times per day for 10 days. These shows were distributed using satellite and internet technologies to a variety of venues, including museums, aquariums, science centers, public schools, and universities. In addition to the live broadcasts, educational products were developed to enhance the learning experience. These include activity modules and curriculum-based material for teachers and informal educators. Each educational partner also maintained a web site that followed the expedition and provided additional background information to supplement the live feeds. This program continues to grow and has proven very effective at distributing interesting scientific content to a wide range of audiences.

  20. A study of topics for distance education-A survey of U.S. Fish and Wildlife Service employees

    USGS Publications Warehouse

    Ratz, Joan M.; Schuster, Rudy M.; Marcy, Ann H.

    2011-01-01

    The purpose of this study was to identify training topics and distance education technologies preferred by U.S. Fish and Wildlife Service employees. This study was conducted on behalf of the National Conservation Training Center to support their distance education strategy planning and implementation. When selecting survey recipients, we focused on employees in positions involving conservation and environmental education and outreach programming. We conducted the study in two phases. First, we surveyed 72 employees to identify useful training topics. The response rate was 61 percent; respondents were from all regions and included supervisors and nonsupervisors. Five topics for training were identified: creating and maintaining partnerships (partnerships), technology, program planning and development (program planning), outreach methods to engage the community (outreach methods), and evaluation methods. In the second phase, we surveyed 1,488 employees to assess preferences for training among the five topics identified in the first survey and preferences among six distance education technologies: satellite television, video conferencing, audio conferencing, computer mediated training, written resources, and audio resources. Two types of instructor-led training were included on the survey to compare to the technology options. Respondents were asked what types of information, such as basic facts or problem solving skills, were needed for each of the five topics. The adjusted response rate was 64 percent; respondents were from all regions and included supervisors and nonsupervisors. The results indicated clear preferences among respondents for certain training topics and technologies. All five training topics were valued, but the topics of partnerships and technology were given equal value and were valued more than the other three topics. Respondents indicated a desire for training on the topics of partnerships, technology, program planning, and outreach methods. For the six distance education technologies, respondents indicated different levels of usability and access. Audio conferencing and written resources were reported to be most usable and accessible. The ratings of technology usability/access differed according to region; respondents in region 9 rated most technologies higher on usability/access. Respondents indicated they would take courses through either onsite or distance education approaches, but they prefer onsite training for most topics and most types of information.

  1. Beaming into the News: A System for and Case Study of Tele-Immersive Journalism.

    PubMed

    Kishore, Sameer; Navarro, Xavi; Dominguez, Eva; de la Pena, Nonny; Slater, Mel

    2016-05-25

    We show how a combination of virtual reality and robotics can be used to beam a physical representation of a person to a distant location, and describe an application of this system in the context of journalism. Full body motion capture data of a person is streamed and mapped in real time, onto the limbs of a humanoid robot present at the remote location. A pair of cameras in the robot's 'eyes' stream stereoscopic video back to the HMD worn by the visitor, and a two-way audio connection allows the visitor to talk to people in the remote destination. By fusing the multisensory data of the visitor with the robot, the visitor's 'consciousness' is transformed to the robot's body. This system was used by a journalist to interview a neuroscientist and a chef 900 miles distant, about food for the brain, resulting in an article published in the popular press.

  2. Beaming into the News: A System for and Case Study of Tele-Immersive Journalism.

    PubMed

    Kishore, Sameer; Navarro, Xavi; Dominguez, Eva; De La Pena, Nonny; Slater, Mel

    2018-03-01

    We show how a combination of virtual reality and robotics can be used to beam a physical representation of a person to a distant location, and describe an application of this system in the context of journalism. Full body motion capture data of a person is streamed and mapped in real time, onto the limbs of a humanoid robot present at the remote location. A pair of cameras in the robot's eyes stream stereoscopic video back to the HMD worn by the visitor, and a two-way audio connection allows the visitor to talk to people in the remote destination. By fusing the multisensory data of the visitor with the robot, the visitor's consciousness is transformed to the robot's body. This system was used by a journalist to interview a neuroscientist and a chef 900 miles distant, about food for the brain, resulting in an article published in the popular press.

  3. Video as a technology for interpersonal communications: a new perspective

    NASA Astrophysics Data System (ADS)

    Whittaker, Steve

    1995-03-01

    Some of the most challenging multimedia applications have involved real- time conferencing, using audio and video to support interpersonal communication. Here we re-examine assumptions about the role, importance and implementation of video information in such systems. Rather than focussing on novel technologies, we present evaluation data relevant to both the classes of real-time multimedia applications we should develop and their design and implementation. Evaluations of videoconferencing systems show that previous work has overestimated the importance of video at the expense of audio. This has strong implications for the implementation of bandwidth allocation and synchronization. Furthermore our recent studies of workplace interaction show that prior work has neglected another potentially vital function of visual information: in assessing the communication availability of others. In this new class of application, rather than providing a supplement to audio information, visual information is used to promote the opportunistic communications that are prevalent in face-to-face settings. We discuss early experiments with such connection applications and identify outstanding design and implementation issues. Finally we examine a different class of application 'video-as-data', where the video image is used to transmit information about the work objects themselves, rather than information about interactants.

  4. Status of Optical Disk Standards and Copy Protection Technology

    DTIC Science & Technology

    2000-01-01

    Technology (IT), the Consumer Electronics (CE) and the Content Providers such as the Motion Picture Association (MPA) and Secure Digital Music ...and Access Control. On audio recording, Secure Digital Music Initiative (SDMI) is leading the effort. Besides these organizations, a worldwide...coordinating organization which is working with the Information Technology Industry Association (ITI), the Content Providers such as the Motion Picture

  5. Digital video technology, today and tomorrow

    NASA Astrophysics Data System (ADS)

    Liberman, J.

    1994-10-01

    Digital video is probably computing's fastest moving technology today. Just three years ago, the zenith of digital video technology on the PC was the successful marriage of digital text and graphics with analog audio and video by means of expensive analog laser disc players and video overlay boards. The state of the art involves two different approaches to fully digital video on computers: hardware-assisted and software-only solutions.

  6. Home telecare system using cable television plants--an experimental field trial.

    PubMed

    Lee, R G; Chen, H S; Lin, C C; Chang, K C; Chen, J H

    2000-03-01

    To address the inconvenience of routine transportation of chronically ill and handicapped patients, this paper proposes a platform based on a hybrid fiber coaxial (HFC) network in Taiwan designed to make a home telecare system feasible. The aim of this home telecare system is to combine biomedical data, including three-channel electrocardiogram (ECG) and blood pressure (BP), video, and audio into a National Television System Committee (NTSC) channel for communication between the patient and healthcare provider. Digitized biomedical data and output from medical devices can be further modulated onto a second audio program (SAP) subchannel, which can be used for second-language audio in NTSC television signals. For long-distance transmission, we translate the digital biomedical data into the frequency domain using frequency shift keying (FSK) technology and insert this signal into an SAP band. The whole system has been implemented and tested. The results obtained using this system clearly demonstrated that real-time video, audio, and biomedical data transmission is very clear with a carrier-to-noise ratio of up to 43 dB.
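
    A minimal binary-FSK sketch of the idea described above: each data bit is mapped to one of two audio-band tones so that digitized biomedical samples can ride on an audio subchannel. The mark/space frequencies, baud rate, and sample rate below are hypothetical, not the parameters used in the trial.

        import numpy as np

        def fsk_modulate(bits, f_mark=2200.0, f_space=1200.0, baud=1200, sr=48000):
            # Map each bit to one tone per bit period, keeping the phase continuous
            # across bit boundaries to avoid clicks in the audio subchannel.
            samples_per_bit = int(sr / baud)
            t = np.arange(samples_per_bit) / sr
            phase, out = 0.0, []
            for b in bits:
                f = f_mark if b else f_space
                out.append(np.sin(2 * np.pi * f * t + phase))
                phase = (phase + 2 * np.pi * f * samples_per_bit / sr) % (2 * np.pi)
            return np.concatenate(out)

        # Example: modulate one 8-bit biomedical sample value (173).
        waveform = fsk_modulate([int(b) for b in format(173, "08b")])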

  7. Software-Based Scoring and Sound Design: An Introductory Guide for Music Technology Instruction

    ERIC Educational Resources Information Center

    Walzer, Daniel A.

    2016-01-01

    This article explores the creative function of virtual instruments, sequencers, loops, and software-based synthesizers to introduce basic scoring and sound design concepts for visual media in an introductory music technology course. Using digital audio workstations with user-focused and configurable options, novice composers can hone a broad range…

  8. Being There: The Case for Telepresence

    ERIC Educational Resources Information Center

    Schaffhauser, Dian

    2010-01-01

    In this article, the author talks about telepresence, a combination of real-time video, audio, and interactive technologies that gives people in distributed locations a collaborative experience that's as close to being in the same room as current technology allows. In a culture that's still adjusting to iPhone-size screen displays and choppy cell…

  9. Teacher Perceptions of Technology Integration Professional Development in a 1:1 Chromebook Environment

    ERIC Educational Resources Information Center

    Yankelevich, Eleonora

    2017-01-01

    A variety of computing devices are available in today's classrooms, but they have not guaranteed the effective integration of technology. Nationally, teachers have ample devices, applications, productivity software, and digital audio and video tools. Despite all this, the literature suggests these tools are not employed to enhance student learning…

  10. Audiovisual Materials for the Engineering Technologies.

    ERIC Educational Resources Information Center

    O'Brien, Janet S., Comp.

    A list of audiovisual materials suitable for use in engineering technology courses is provided. This list includes titles of 16mm films, 8mm film loops, slidetapes, transparencies, audio tapes, and videotapes. Given for each title are: source, format, length of film or tape or number of slides or transparencies, whether color or black-and-white,…

  11. Student Hotline Procedural Manual. Instructional Technology and Design. Rio Salado Community College. Revised.

    ERIC Educational Resources Information Center

    Rio Salado Community Coll., AZ.

    Rio Salado Community College offers a variety of alternative delivery courses utilizing different forms of instructional technology (e.g., broadcast and cable television, radio, audio and video cassettes, and computer-managed instruction) for both credit and non-credit instruction. This manual provides information for student operators of a…

  12. The "Intelligent Classroom": Changing Teaching and Learning with an Evolving Technological Environment.

    ERIC Educational Resources Information Center

    Winer, Laura R.; Cooperstock, Jeremy

    2002-01-01

    Describes the development and use of the Intelligent Classroom collaborative project at McGill University that explored technology use to improve teaching and learning. Explains the hardware and software installation that allows for the automated capture of audio, video, slides, and handwritten annotations during a live lecture, with subsequent…

  13. The Impact of Modern Information and Communication Technologies on Social Movements

    ERIC Educational Resources Information Center

    Konieczny, Piotr

    2012-01-01

    Information and communication technologies (ICTs) have empowered non-state social actors, notably, social movements. They were quick to seize ICTs in the past (printing presses, television, fax machines), which was a major factor in their successes. Mass email campaigns, blogs, their audio- and video- variants (the podcasts and the videocasts),…

  14. Evaluation of a wireless audio streaming accessory to improve mobile telephone performance of cochlear implant users.

    PubMed

    Wolfe, Jace; Morais Duke, Mila; Schafer, Erin; Cire, George; Menapace, Christine; O'Neill, Lori

    2016-01-01

    The objective of this study was to evaluate the potential improvement in word recognition in quiet and in noise obtained with use of a Bluetooth-compatible wireless hearing assistance technology (HAT) relative to the acoustic mobile telephone condition (e.g. the mobile telephone receiver held to the microphone of the sound processor). A two-way repeated measures design was used to evaluate differences in telephone word recognition obtained in quiet and in competing noise in the acoustic mobile telephone condition compared to performance obtained with use of the CI sound processor and a telephone HAT. Sixteen adult users of Nucleus cochlear implants and the Nucleus 6 sound processor were included in this study. Word recognition over the mobile telephone in quiet and in noise was significantly better with use of the wireless HAT compared to performance in the acoustic mobile telephone condition. Word recognition over the mobile telephone was better in quiet when compared to performance in noise. The results of this study indicate that use of a wireless HAT improves word recognition over the mobile telephone in quiet and in noise relative to performance in the acoustic mobile telephone condition for a group of adult cochlear implant recipients.

  15. The Feasibility and Acceptability of Google Glass for Teletoxicology Consults.

    PubMed

    Chai, Peter R; Babu, Kavita M; Boyer, Edward W

    2015-09-01

    Teletoxicology offers the potential for toxicologists to assist in providing medical care at remote locations, via remote, interactive augmented audiovisual technology. This study examined the feasibility of using Google Glass, a head-mounted device that incorporates a webcam, viewing prism, and wireless connectivity, for assessment of poisoned patients by a medical toxicology consult staff. Emergency medicine residents (resident toxicology consultants) rotating on the toxicology service wore Glass during bedside evaluation of poisoned patients; Glass transmitted real-time video of patients' physical examination findings to toxicology fellows and attendings (supervisory consultants), who reviewed these findings. We evaluated the usability (e.g., quality of connectivity and video feeds) of Glass by supervisory consultants, as well as attitudes towards use of Glass. Resident toxicology consultants and supervisory consultants completed 18 consults through Glass. Toxicologists viewing the video stream found the quality of audio and visual transmission usable in 89% of cases. Toxicologists reported their management of the patient changed after viewing the patient through Glass in 56% of cases. Based on findings obtained through Glass, toxicologists recommended specific antidotes in six cases. Head-mounted devices like Google Glass may be effective tools for real-time teletoxicology consultation.

  16. Using Speech Recognition to Enhance the Tongue Drive System Functionality in Computer Access

    PubMed Central

    Huo, Xueliang; Ghovanloo, Maysam

    2013-01-01

    Tongue Drive System (TDS) is a wireless, tongue-operated assistive technology (AT), which can enable people with severe physical disabilities to access computers and drive powered wheelchairs using their volitional tongue movements. TDS offers six discrete commands, simultaneously available to the users, for pointing and typing as a substitute for mouse and keyboard in computer access, respectively. To enhance the TDS performance in typing, we have added a microphone, an audio codec, and a wireless audio link to its readily available 3-axial magnetic sensor array, and combined it with commercially available speech recognition software, Dragon Naturally Speaking, which is regarded as one of the most efficient means of text entry. Our preliminary evaluations indicate that the combined TDS and speech recognition technologies can provide end users with significantly higher performance than using each technology alone, particularly in completing tasks that require both pointing and text entry, such as web surfing. PMID:22255801

  17. Telebation: next-generation telemedicine in remote airway management using current wireless technologies.

    PubMed

    Mosier, Jarrod; Joseph, Bellal; Sakles, John C

    2013-02-01

    Since the first remote intubation with telemedicine guidance, wireless technology has advanced to enable more portable methods of telemedicine involvement in remote airway management. Three voice over Internet protocol (VoIP) services were evaluated for quality of image transmitted, data lag, and audio quality with remotely observed and assisted intubations in an academic emergency department. The VoIP clients evaluated were Apple (Cupertino, CA) FaceTime(®), Skype™ (a division of Microsoft, Luxembourg City, Luxembourg), and Tango(®) (TangoMe, Palo Alto, CA). Each client was tested over a Wi-Fi network as well as cellular third generation (3G) (Skype and Tango). All three VoIP clients provided acceptable image and audio quality. There is a significant data lag in image transmission and quality when VoIP clients are used over cellular broadband (3G) compared with Wi-Fi. Portable remote telemedicine guidance is possible with newer technology devices such as a smartphone or tablet, as well as VoIP clients used over Wi-Fi or cellular broadband.

  18. Multimodal fusion of polynomial classifiers for automatic person recognition

    NASA Astrophysics Data System (ADS)

    Broun, Charles C.; Zhang, Xiaozheng

    2001-03-01

    With the prevalence of the information age, privacy and personalization are at the forefront in today's society. As such, biometrics are viewed as essential components of current evolving technological systems. Consumers demand unobtrusive and non-invasive approaches. In our previous work, we have demonstrated a speaker verification system that meets these criteria. However, there are additional constraints for fielded systems. The required recognition transactions are often performed in adverse environments and across diverse populations, necessitating robust solutions. There are two significant problem areas in current-generation speaker verification systems. The first is the difficulty in acquiring clean audio signals in all environments without encumbering the user with a head-mounted close-talking microphone. Second, unimodal biometric systems do not work with a significant percentage of the population. To combat these issues, multimodal techniques are being investigated to improve system robustness to environmental conditions, as well as improve overall accuracy across the population. We propose a multimodal approach that builds on our current state-of-the-art speaker verification technology. In order to maintain the transparent nature of the speech interface, we focus on optical sensing technology to provide the additional modality, giving us an audio-visual person recognition system. For the audio domain, we use our existing speaker verification system. For the visual domain, we focus on lip motion. This is chosen, rather than static face or iris recognition, because it provides dynamic information about the individual. In addition, the lip dynamics can aid speech recognition to provide liveness testing. The visual processing method makes use of both color and edge information, combined within a Markov random field (MRF) framework, to localize the lips. Geometric features are extracted and input to a polynomial classifier for the person recognition process. A late integration approach, based on a probabilistic model, is employed to combine the two modalities. The system is tested on the XM2VTS database combined with AWGN in the audio domain over a range of signal-to-noise ratios.
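
    For illustration only, the sketch below shows a second-order polynomial feature expansion of the kind a polynomial classifier operates on, and a simple probabilistic late fusion of per-modality scores. The fusion weight and the example scores are hypothetical; this is not the paper's trained system.

        import numpy as np

        def poly2_expand(x):
            # [x1..xn] -> [1, x_i, x_i*x_j for i <= j] (second-order polynomial features).
            x = np.asarray(x, dtype=float)
            cross = [x[i] * x[j] for i in range(len(x)) for j in range(i, len(x))]
            return np.concatenate(([1.0], x, cross))

        def late_fusion(p_audio, p_visual, w_audio=0.7):
            # Combine per-modality claimant probabilities in the log domain.
            return np.exp(w_audio * np.log(p_audio) + (1 - w_audio) * np.log(p_visual))

        # Example: an audio score of 0.9 and a lip-motion score of 0.6 for the same claimant.
        fused = late_fusion(0.9, 0.6)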

  19. Use of a verbal electronic audio reminder with a patient hand hygiene bundle to increase independent patient hand hygiene practices of older adults in an acute care setting.

    PubMed

    Knighton, Shanina C; Dolansky, Mary; Donskey, Curtis; Warner, Camille; Rai, Herleen; Higgins, Patricia A

    2018-06-01

    We hypothesized that the addition of a novel verbal electronic audio reminder to an educational patient hand hygiene bundle would increase performance of self-managed patient hand hygiene. We conducted a 2-group comparative effectiveness study randomly assigning participants to patient hand hygiene bundle 1 (n = 41), which included a video, a handout, and a personalized verbal electronic audio reminder (EAR) that prompted hand cleansing at 3 meal times, or patient hand hygiene bundle 2 (n = 34), which included the identical video and handout, but not the EAR. The primary outcome was alcohol-based hand sanitizer use based on weighing bottles of hand sanitizer. Participants that received the EAR averaged significantly more use of hand sanitizer product over the 3 days of the study (mean ± SD, 29.97 ± 17.13 g) than participants with no EAR (mean ± SD, 10.88 ± 9.27 g; t(73) = 5.822; P ≤ .001). The addition of a novel verbal EAR to a patient hand hygiene bundle resulted in a significant increase in patient hand hygiene performance. Our results suggest that simple audio technology can be used to improve patient self-management of hand hygiene. Future research is needed to determine if the technology can be used to promote other healthy behaviors, reduce infections, and improve patient-centered care without increasing the workload of health care workers. Published by Elsevier Inc.
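
    The reported statistic can be checked from the group summaries above; a quick sketch assuming a pooled-variance two-sample t-test (df = 41 + 34 - 2 = 73):

        import math

        m1, s1, n1 = 29.97, 17.13, 41   # EAR group (grams of sanitizer used)
        m2, s2, n2 = 10.88, 9.27, 34    # no-EAR group

        sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)   # pooled variance
        t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
        print(round(t, 2))   # ~5.83, consistent with the reported t(73) = 5.822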

  20. Investigating the quality of video consultations performed using fourth generation (4G) mobile telecommunications.

    PubMed

    Caffery, Liam J; Smith, Anthony C

    2015-09-01

    The use of fourth-generation (4G) mobile telecommunications to provide real-time video consultations was investigated in this study, with the aims of determining whether 4G is a suitable telecommunications technology and, secondly, of identifying whether variation in perceived audio and video quality was due to underlying network performance. Three patient end-points that used 4G Internet connections were evaluated. Consulting clinicians recorded their perception of audio and video quality using the International Telecommunication Union scales during clinics with these patient end-points. These scores were used to calculate a mean opinion score (MOS). The network performance metrics were obtained for each session and the relationships between these metrics and the session's quality scores were tested. Clinicians scored the quality of 50 hours of video consultations, involving 36 clinic sessions. The MOS for audio was 4.1 ± 0.62 and the MOS for video was 4.4 ± 0.22. Image impairment and effort to listen were also rated favourably. There was no correlation between audio or video quality and the network metrics of packet loss or jitter. These findings suggest that 4G networks are an appropriate telecommunication technology to deliver real-time video consultations. Variations in quality scores observed during this study were not explained by the packet loss and jitter in the underlying network. Before establishing a telemedicine service, the performance of the 4G network should be assessed at the location of the proposed service. This is due to known variability in performance of 4G networks. © The Author(s) 2015.
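
    A short sketch of the two calculations behind findings like these: a mean opinion score (MOS) is simply the mean of 1-5 ratings, and the quality-versus-network test is a correlation of per-session ratings against packet loss or jitter. The sample values below are made up for illustration, not the study's data.

        import numpy as np

        video_scores = np.array([5, 4, 4, 5, 4, 5, 4, 4])                 # hypothetical session ratings (1-5)
        packet_loss = np.array([0.1, 0.4, 0.3, 0.0, 0.5, 0.1, 0.2, 0.3])  # percent, hypothetical

        mos = video_scores.mean()                                 # mean opinion score
        r = np.corrcoef(video_scores, packet_loss)[0, 1]          # Pearson correlation
        print(f"MOS = {mos:.1f}, r = {r:.2f}")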

  1. EMERGING TECHNOLOGY BULLETIN: VOLATILE ORGANIC COMPOUND REMOVAL FROM AIR STREAMS BY MEMBRANES SEPARATION MEMBRANE TECHNOLOGY AND RESEARCH, INC.

    EPA Science Inventory

    This membrane separation technology developed by Membrane Technology and Research (MTR), Incorporated, is designed to remove volatile organic compounds (VOCs) from contaminated air streams. In the process, organic vapor-laden air contacts one side of a membrane that is permeable ...

  2. Microcomputer Software Development: New Strategies for a New Technology.

    ERIC Educational Resources Information Center

    Kehrberg, Kent T.

    1979-01-01

    Provides a guide for the development of educational computer programs for use on microcomputers. Making use of the features of microcomputers, including visual, audio, and tactile techniques, is encouraged. (Author/IRT)

  3. Initial utilization of the CVIRB video production facility

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Busquets, Anthony M.; Hogge, Thomas W.

    1987-01-01

    Video disk technology is one of the central themes of a technology demonstrator workstation being assembled as a man/machine interface for the Space Station Data Management Test Bed at Johnson Space Center. Langley Research Center personnel involved in the conception and implementation of this workstation have assembled a video production facility to allow production of video disk material for this purpose. This paper documents the initial familiarization efforts in the field of video production for those personnel and that facility. Although the entire video disk production cycle was not operational for this initial effort, the production of a simulated disk on video tape did acquaint the personnel with the processes involved and with the operation of the hardware. Invaluable experience in storyboarding, script writing, audio and video recording, and audio and video editing was gained in the production process.

  4. Exploring expressivity and emotion with artificial voice and speech technologies.

    PubMed

    Pauletto, Sandra; Balentine, Bruce; Pidcock, Chris; Jones, Kevin; Bottaci, Leonardo; Aretoulaki, Maria; Wells, Jez; Mundy, Darren P; Balentine, James

    2013-10-01

    Emotion in audio-voice signals, as synthesized by text-to-speech (TTS) technologies, was investigated to formulate a theory of expression for user interface design. Emotional parameters were specified with markup tags, and the resulting audio was further modulated with post-processing techniques. Software was then developed to link a selected TTS synthesizer with an automatic speech recognition (ASR) engine, producing a chatbot that could speak and listen. Using these two artificial voice subsystems, investigators explored both artistic and psychological implications of artificial speech emotion. Goals of the investigation were interdisciplinary, with interest in musical composition, augmentative and alternative communication (AAC), commercial voice announcement applications, human-computer interaction (HCI), and artificial intelligence (AI). The work-in-progress points towards an emerging interdisciplinary ontology for artificial voices. As one study output, HCI tools are proposed for future collaboration.
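
    The abstract above refers to specifying emotional parameters with markup tags. A generic way to express such parameters today is SSML prosody markup; the sketch below simply builds an SSML string, and the tag values are illustrative rather than the study's own tag set or synthesizer.

        def ssml_utterance(text, rate="slow", pitch="-15%", volume="soft"):
            # Wrap plain text in SSML prosody markup that slows, lowers, and softens
            # the rendering, one plausible encoding of a subdued emotional setting.
            return (
                "<speak>"
                f'<prosody rate="{rate}" pitch="{pitch}" volume="{volume}">{text}</prosody>'
                "</speak>"
            )

        print(ssml_utterance("I am sorry to hear that."))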

  5. What Do 2nd and 10th Graders Have in Common? Worms and Technology: Using Technology to Collaborate across Boundaries

    ERIC Educational Resources Information Center

    Culver, Patti; Culbert, Angie; McEntyre, Judy; Clifton, Patrick; Herring, Donna F.; Notar, Charles E.

    2009-01-01

    The article is about the collaboration between two classrooms that enabled a second grade class to participate in a high school biology class. Through the use of modern video conferencing equipment, Mrs. Culbert, with the help of the Dalton State College Educational Technology Training Center (ETTC), set up a live, two way video and audio feed of…

  6. Current Issues and Trends in Multidimensional Sensing Technologies for Digital Media

    NASA Astrophysics Data System (ADS)

    Nagata, Noriko; Ohki, Hidehiro; Kato, Kunihito; Koshimizu, Hiroyasu; Sagawa, Ryusuke; Fujiwara, Takayuki; Yamashita, Atsushi; Hashimoto, Manabu

    Multidimensional sensing (MDS) technologies have numerous applications in the field of digital media, including the development of audio and visual equipment for human-computer interaction (HCI) and manufacture of data storage devices; furthermore, MDS finds applications in the fields of medicine and marketing, i.e., in e-marketing and the development of diagnosis equipment.

  7. School Librarians as Technology Leaders: An Evolution in Practice

    ERIC Educational Resources Information Center

    Wine, Lois D.

    2016-01-01

    The role of school librarians has a history of radical change. School librarians adapted to take on responsibility for technology and audio-visual materials that were introduced in schools in earlier eras. With the advent of the Information Age in the middle of the 20th century and the subsequent development of personal computers and the Internet,…

  8. 1988-2000 Long-Range Plan for Technology of the Texas State Board of Education.

    ERIC Educational Resources Information Center

    Texas State Board of Education, Austin.

    This plan plots the course for meeting educational needs in Texas through such technologies as computer-based systems, devices for storage and retrieval of massive amounts of information, telecommunications for audio, video, and information sharing, and other electronic media devised by the year 2000 that can help meet the instructional and…

  9. Podcasts in Education: Let Their Voices Be Heard

    ERIC Educational Resources Information Center

    Sprague, Debra; Pixley, Cynthia

    2008-01-01

    One technology made possible through Web 2.0 is podcasting. Podcasts are audio, video, text, and other media files that can be played on the computer or downloaded to MP3 players. This article discusses how to create a podcast and ways to use this technology in education. Benefits and issues related to podcasting are also provided.

  10. Electronic (fenceless) control of livestock.

    Treesearch

    A.R. Tiedemann; T.M. Quigley; L.D. White; et al.

    1999-01-01

    During June and August 1992, a new technology designed to exclude cattle from specific areas such as riparian zones was tested. The technology consisted of an eartag worn by an animal that provides an audio warning and electrical impulse to the ear as the animal approaches the zone of influence of a transmitter. The transmitter emits a signal that narrowly defines the...

  11. Hypermedia: The Integrated Learning Environment. Fastback 339.

    ERIC Educational Resources Information Center

    Wishnietsky, Dan H.

    Hypermedia is not a single technology; it is an integrated electronic environment that combines text, audio, and video into one large file. Users can explore information about a subject using several technologies at the same time. Although the technical foundation for hypermedia was established in the early 1970s, it was not until the late 1980s…

  12. I Upload Audio, therefore I Teach

    ERIC Educational Resources Information Center

    Fernandez, Luke

    2007-01-01

    Recording lectures and making them available as MP3s might seem counterintuitive for a course that denies students the use of paper and pencil. The author speculated that online technology might help students get away from writing and allow them to think and learn in new (or perhaps older) ways. As with any other technological invention, it is…

  13. Access to Technology and Readiness to Use It in Learning

    ERIC Educational Resources Information Center

    Kabonoki, S. K.

    2008-01-01

    This case study involved 429 distance education diploma students at the University of Botswana. The aim of the study was to find out whether these students had access to MP3 players and other technologies essential in distance learning. Findings show that, contrary to expectations, learners did not have access to MP3 digital audio devices.…

  14. Beyond "Classroom" Technology: The Equipment Circulation Program at Rasmuson Library, University of Alaska Fairbanks

    ERIC Educational Resources Information Center

    Jensen, Karen

    2008-01-01

    The library at the University of Alaska Fairbanks offers a unique equipment lending program through its Circulation Desk. The program features a wide array of equipment types, generous circulation policies, and unrestricted borrowing, enabling students, staff, and faculty to experiment with the latest in audio, video, and computer technologies,…

  15. Information and Communication Technology in the Classroom: An Empirical Study with an International Perspective.

    ERIC Educational Resources Information Center

    Mueller, Carolyn B.; Jones, Gordon; Ricks, David A.; Schlegelmilch, Bodo B.; Van Deusen, Cheryl A.

    2001-01-01

    Surveyed international business faculty in 14 countries about their perceptions and use of information and communication technology (ICT) in the classroom. Faculty believe the primary advantages of ICT are that they provide positive impact on visual as well as audio learners, and promote greater understanding, excitement, and student interest.…

  16. Strike up Student Interest through Song: Technology and Westward Expansion

    ERIC Educational Resources Information Center

    Steele, Meg

    2014-01-01

    Sheet music, song lyrics, and audio recordings may not be the first primary sources that come to mind when considering ways to teach about changes brought about by technology during westward expansion, but these sources engage students in thought provoking ways. In this article the author presents a 1917 photograph of Mountain Chief, of the Piegan…

  17. Linguistic Layering: Social Language Development in the Context of Multimodal Design and Digital Technologies

    ERIC Educational Resources Information Center

    Domingo, Myrrh

    2012-01-01

    In our contemporary society, digital texts circulate more readily and extend beyond page-bound formats to include interactive representations such as online newsprint with hyperlinks to audio and video files. This is to say that multimodality combined with digital technologies extends grammar to include voice, visual, and music, among other modes…

  18. Audio feature extraction using probability distribution function

    NASA Astrophysics Data System (ADS)

    Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.

    2015-05-01

    Voice recognition has been one of the popular applications in the robotics field. It is also known to have recently been used in biometric and multimedia information retrieval systems. This technology has grown out of successive research on audio feature extraction analysis. The probability distribution function (PDF) is a statistical method that is usually used as one of the processing steps in complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed that uses the PDF alone as the feature extraction method for speech analysis. Certain pre-processing techniques are performed prior to the proposed feature extraction method. Subsequently, the PDF values for each frame of voice signals sampled from a number of individuals are plotted. From the experimental results obtained, it can be seen visually from the plotted data that each individual's voice has comparable PDF values and shapes.
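
    A minimal sketch of the idea described above: estimate an empirical probability distribution of sample amplitudes for each frame (a normalized histogram) and use it as that frame's feature vector. The frame size, hop, and number of bins are arbitrary illustrative choices, not the paper's settings.

        import numpy as np

        def pdf_features(signal, frame_len=400, hop=200, bins=32):
            # Assumes a 1-D signal normalized to the range [-1, 1].
            feats = []
            for start in range(0, len(signal) - frame_len + 1, hop):
                frame = signal[start:start + frame_len]
                hist, _ = np.histogram(frame, bins=bins, range=(-1.0, 1.0), density=True)
                feats.append(hist)                 # per-frame empirical PDF
            return np.array(feats)                 # shape (frames, bins)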

  19. Simulation of prenatal maternal sounds in NICU incubators: a pilot safety and feasibility study.

    PubMed

    Panagiotidis, John; Lahav, Amir

    2010-10-01

    This pilot study evaluated the safety and feasibility of an innovative audio system for transmitting maternal sounds to NICU incubators. A sample of biological sounds, consisting of voice and heartbeat, were recorded from a mother of a premature infant admitted to our unit. The maternal sounds were then played back inside an unoccupied incubator via a specialized audio system originated and compiled in our lab. We performed a series of evaluations to determine the safety and feasibility of using this system in NICU incubators. The proposed audio system was found to be safe and feasible, meeting criteria for humidity and temperature resistance, as well as for safe noise levels. Simulation of maternal sounds using this system seems achievable and applicable and received local support from medical staff. Further research and technology developments are needed to optimize the design of the NICU incubators to preserve the acoustic environment of the womb.

  20. Clinical experience with real-time ultrasound

    NASA Astrophysics Data System (ADS)

    Chimiak, William J.; Wolfman, Neil T.; Covitz, Wesley

    1995-05-01

    After testing the extended multimedia interface (EMMI) product, an asynchronous transfer mode (ATM) user-to-network interface (UNI) from AT&T, at the Society for Computer Applications in Radiology conference in Winston-Salem, the Department of Radiology together with AT&T is implementing a tele-ultrasound system to combine real-time ultrasound with the static imaging features of more traditional digital ultrasound systems. Our current ultrasound system archives digital images to an optical disk system. Static images are sent using our digital radiology systems. This could be transferring images from one Digital Imaging and Communications in Medicine (DICOM)-compliant machine to another, or the current image transfer methodologies. The prototype of a live ultrasound system using the EMMI demonstrated the feasibility of doing live ultrasound. We are now developing scenarios using a mix of the two methodologies. Utilizing EMMI technology, radiologists at the Bowman Gray School of Medicine (BGSM) review at a workstation both static images and real-time scanning done by a technologist on patients at a remote site in order to render an on-line primary diagnosis. Our goal is to test the feasibility of operating an ultrasound laboratory at a remote site utilizing a trained technologist without the necessity of having a full-time radiologist at that site. Initial plans are for a radiologist to review an initial set of static images on a patient taken by the technologist. If further scanning is required, the EMMI is used to transmit real-time imaging and audio, using the audio input of a standard microphone system and the National Television System Committee (NTSC) output of the ultrasound equipment, from the remote site to the radiologist at the department review station. The EMMI digitally encodes this data and places it in an ATM format. This ATM data stream goes to the GCNS2000 and then to the other EMMI, where it is decoded into the live studies and voice communication, which are then received on a television and audio monitor. We also test live transmission of pediatric echocardiograms using the EMMI from a remote hospital to BGSM via a GCNS2000 ATM switch. This replaces the current method of having these studies transferred to a VHS tape and then mailed overnight to our pediatric cardiologist for review. This test should provide valuable insight into the staffing and operational requirements of a tele-ultrasound unit with pediatric echocardiogram capabilities. The EMMI thus provides a means for the radiologist to be in constant communication with the technologist to guide the scanning of areas in question and enable general problem solving. Live scans are sent from one EMMI at the remote site to the other EMMI at the review station in the radiology department via the GCNS2000 switch. This arrangement allows us to test the use of public ATM services for this application, as this switch is a wide-area, central-office ATM switch. Static images are sent using the DICOM standard when available; otherwise, the established institutional digital radiology methods are used.

  1. Experimental Validation: Subscale Aircraft Ground Facilities and Integrated Test Capability

    NASA Technical Reports Server (NTRS)

    Bailey, Roger M.; Hostetler, Robert W., Jr.; Barnes, Kevin N.; Belcastro, Celeste M.; Belcastro, Christine M.

    2005-01-01

    Experimental testing is an important aspect of validating complex integrated safety-critical aircraft technologies. The Airborne Subscale Transport Aircraft Research (AirSTAR) Testbed is being developed at NASA Langley to validate technologies under conditions that cannot be flight validated with full-scale vehicles. The AirSTAR capability comprises a series of flying sub-scale models, associated ground-support equipment, and a base research station at NASA Langley. The subscale model capability utilizes a generic 5.5% scaled transport-class vehicle known as the Generic Transport Model (GTM). The AirSTAR Ground Facilities encompass the hardware and software infrastructure necessary to provide comprehensive support services for the GTM testbed. The ground facilities support remote piloting of the GTM aircraft, and include all subsystems required for data/video telemetry, experimental flight control algorithm implementation and evaluation, GTM simulation, data recording/archiving, and audio communications. The ground facilities include a self-contained, motorized vehicle serving as a mobile research command/operations center, capable of deployment to remote sites when conducting GTM flight experiments. The ground facilities also include a laboratory based at NASA LaRC providing capabilities nearly identical to those of the mobile command/operations center, as well as the capability to receive data/video/audio from, and send data/audio to, the mobile command/operations center during GTM flight experiments.

  2. Progress In Optical Memory Technology

    NASA Astrophysics Data System (ADS)

    Tsunoda, Yoshito

    1987-01-01

    More than 20 years have passed since the concept of optical memory was first proposed in 1966. Since then, considerable progress has been made in this area, together with the creation of completely new markets for optical memory in consumer and computer applications. The first generation of optical memory was developed mainly with holographic recording technology in the late 1960s and early 1970s. A considerable number of developments were made in both analog and digital memory applications. Unfortunately, these technologies never became commercial products. The second generation of optical memory started at the beginning of the 1970s with bit-by-bit recording technology. Read-only optical memories such as video disks and compact audio disks have been extensively investigated. Since laser diodes were first applied to optical video disk readout in 1976, there have been extensive developments of laser diode pick-ups for optical disk memory systems. The third generation of optical memory started in 1978 with bit-by-bit read/write technology using laser diodes. Development of recording materials, both write-once and erasable, has been actively pursued at several research institutes. These technologies are mainly focused on optical memory systems for computer applications. Such practical applications of optical memory technology have resulted in the creation of new products such as compact audio disks and computer file memories.

  3. Effects of Video Streaming Technology on Public Speaking Students' Communication Apprehension and Competence

    ERIC Educational Resources Information Center

    Dupagne, Michel; Stacks, Don W.; Giroux, Valerie Manno

    2007-01-01

    This study examines whether video streaming can reduce trait and state communication apprehension, as well as improve communication competence, in public speaking classes. Video streaming technology has been touted as the next generation of video feedback for public speaking students because it is not limited by time or space and allows Internet…

  4. NFL Films audio, video, and film production facilities

    NASA Astrophysics Data System (ADS)

    Berger, Russ; Schrag, Richard C.; Ridings, Jason J.

    2003-04-01

    The new NFL Films 200,000 sq. ft. headquarters is home for the critically acclaimed film production that preserves the NFL's visual legacy week-to-week during the football season, and is also the technical plant that processes and archives football footage from the earliest recorded media to the current network broadcasts. No other company in the country shoots more film than NFL Films, and the inclusion of cutting-edge video and audio formats demands that their technical spaces continually integrate the latest in the ever-changing world of technology. This facility houses a staggering array of acoustically sensitive spaces where music and sound are equal partners with the visual medium. Over 90,000 sq. ft. of sound-critical technical space comprises an array of sound stages, music scoring stages, audio control rooms, music writing rooms, recording studios, mixing theaters, video production control rooms, editing suites, and a screening theater. Every production control space in the building is designed to monitor and produce multichannel surround-sound audio. An overview of the architectural and acoustical design challenges encountered for each sophisticated listening, recording, viewing, editing, and sound-critical environment will be discussed.

  5. Video Pedagogy as Political Activity.

    ERIC Educational Resources Information Center

    Higgins, John W.

    1991-01-01

    Asserts that the education of students in the technology of video and audio production is a political act. Discusses the structure and style of production, and the ideologies and values contained therein. Offers alternative approaches to critical video pedagogy. (PRA)

  6. Validation protocol for digital audio recorders used in aircraft-noise-certification testing

    DOT National Transportation Integrated Search

    2010-11-01

    The U.S. Department of Transportation, Research and Innovative Technology Administration, John A. Volpe National Transportation Systems Center, Environmental Measurement and Modeling Division (Volpe), is supporting the aircraft noise certification i...

  7. Multimedia Instruction Puts Teachers in the Director's Chair.

    ERIC Educational Resources Information Center

    Trotter, Andrew

    1990-01-01

    Teachers can produce and direct their own instructional videos using computer-driven multimedia. Outlines the basics in combining audio and video technologies to produce videotapes that mix animated and still graphics, sound, and full-motion video. (MLF)

  8. Recording vocalizations with Bluetooth technology.

    PubMed

    Gaona-González, Andrés; Santillán-Doherty, Ana María; Arenas-Rosas, Rita Virginia; Muñoz-Delgado, Jairo; Aguillón-Pantaleón, Miguel Angel; Ordoñez-Gómez, José Domingo; Márquez-Arias, Alejandra

    2011-06-01

    We propose a method for capturing vocalizations that is designed to avoid some of the limiting factors of traditional bioacoustical methods, such as the impossibility of obtaining continuous long-term recordings or of analyzing amplitude because of the continuously changing distance between the subject and the recording system. Using Bluetooth technology, vocalizations are captured and transmitted wirelessly to a receiving system without affecting the quality of the signal. Recordings from the proposed system were compared with reference recordings based on pulse-code modulation (PCM) coding of the signal in WAV audio format without any compression. The evaluation showed p < .05 for the measured quantitative and qualitative parameters. We also describe how the transmitting system is encapsulated and fixed on the animal, and a way to video record a spider monkey's behavior simultaneously with the audio recordings.
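
    As a rough illustration of the kind of comparison described above, the sketch below reads a wireless capture and an uncompressed PCM/WAV reference with Python's standard wave module and computes two simple similarity measures. The file names, the assumption of mono 16-bit audio, and the choice of metrics are illustrative only; this is not the authors' evaluation procedure.

    ```python
    # Minimal sketch (not the authors' pipeline): compare a Bluetooth-captured
    # recording against an uncompressed PCM/WAV reference recording.
    # Assumes mono, 16-bit PCM files; names and metrics are illustrative.
    import wave
    import numpy as np

    def read_wav(path):
        """Read a mono 16-bit PCM WAV file into a float numpy array."""
        with wave.open(path, "rb") as w:
            frames = w.readframes(w.getnframes())
            rate = w.getframerate()
        return rate, np.frombuffer(frames, dtype=np.int16).astype(np.float64)

    def compare(reference_path, bluetooth_path):
        _, ref = read_wav(reference_path)
        _, bt = read_wav(bluetooth_path)
        n = min(len(ref), len(bt))          # naive alignment: trim to same length
        ref, bt = ref[:n], bt[:n]
        corr = np.corrcoef(ref, bt)[0, 1]   # waveform similarity
        noise = ref - bt
        snr_db = 10 * np.log10(np.sum(ref ** 2) / max(np.sum(noise ** 2), 1e-12))
        return corr, snr_db

    if __name__ == "__main__":
        corr, snr_db = compare("reference_pcm.wav", "bluetooth_capture.wav")
        print(f"correlation={corr:.3f}, SNR={snr_db:.1f} dB")
    ```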

  9. European Union RACE program contributions to digital audiovisual communications and services

    NASA Astrophysics Data System (ADS)

    de Albuquerque, Augusto; van Noorden, Leon; Badiqué, Eric

    1995-02-01

    The European Union RACE (R&D in advanced communications technologies in Europe) and the future ACTS (advanced communications technologies and services) programs have been contributing and continue to contribute to world-wide developments in audio-visual services. The paper focuses on research progress in: (1) Image data compression. Several methods of image analysis leading to the use of encoders based on improved hybrid DCT-DPCM (MPEG or not), object-oriented, hybrid region/waveform or knowledge-based coding methods are discussed. (2) Program production in the aspects of 3D imaging, data acquisition, virtual scene construction, pre-processing and sequence generation. (3) Interoperability and multimedia access systems. The diversity of material available and the introduction of interactive or near-interactive audio-visual services led to the development of prestandards for video-on-demand (VoD) and interworking of multimedia services storage systems and customer premises equipment.
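
    The hybrid DCT-DPCM coders mentioned above transform blocks of (prediction-residual) pixels and quantize the coefficients. The toy sketch below shows only that transform/quantization step on a single 8x8 block using SciPy's DCT; the block data and quantization step are made up, and this is not RACE project code.

    ```python
    # Toy illustration of the transform/quantization stage of a hybrid DCT-DPCM
    # coder. In a full hybrid coder the block would be a motion-compensated
    # prediction residual (the DPCM part); here it is just random pixels.
    import numpy as np
    from scipy.fftpack import dct, idct

    def dct2(block):
        return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

    def idct2(coeffs):
        return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

    rng = np.random.default_rng(0)
    block = rng.integers(0, 256, size=(8, 8)).astype(np.float64)  # 8x8 "pixels"

    step = 16.0                              # illustrative quantization step
    coeffs = dct2(block - 128.0)             # level shift, then forward 2-D DCT
    quantized = np.round(coeffs / step)      # coarse scalar quantization
    reconstructed = idct2(quantized * step) + 128.0

    print("max reconstruction error:", np.max(np.abs(block - reconstructed)))
    ```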

  10. Adapting the Speed of Reproduction of Audio Content and Using Text Reinforcement for Maximizing the Learning Outcome though Mobile Phones

    ERIC Educational Resources Information Center

    Munoz-Organero, M.; Munoz-Merino, P. J.; Kloos, Carlos Delgado

    2011-01-01

    The use of technology in learning environments should be targeted at improving the learning outcome of the process. Several technology enhanced techniques can be used for maximizing the learning gain of particular students when having access to learning resources. One of them is content adaptation. Adapting content is especially important when…

  11. Verbal Immediacy and Audio/Video Technology Use in Online Course Delivery: What Do University Agricultural Education Students Think?

    ERIC Educational Resources Information Center

    Murphrey, Theresa Pesl; Arnold, Shannon; Foster, Billye; Degenhart, Shannon H.

    2012-01-01

    As demand for online course delivery increases, it is imperative that those courses be delivered in an effective and efficient manner. While technologies are offering increasingly new and innovative tools to deliver courses, it is not known which of these tools are perceived as useful and beneficial by university agricultural education students.…

  12. The Impact of Multimedia Feedback on Student Perceptions: Video Screencast with Audio Compared to Text Based eMail

    ERIC Educational Resources Information Center

    Perkoski, Robert R.

    2017-01-01

    Computer technology provides a plethora of tools to engage students and make the classroom more interesting. Much research has been conducted on the impact of educational technology regarding instruction but little has been done on students' preferences for the type of instructor feedback (Watts, 2007). Mayer (2005) has developed an integrative,…

  13. Effectiveness of Teaching Café Waitering to Adults with Intellectual Disability through Audio-Visual Technologies

    ERIC Educational Resources Information Center

    Cavkaytar, Atilla; Acungil, Ahmet Turan; Tomris, Gözde

    2017-01-01

    Learning vocational skills and employment are a priority, for adults with intellectual disability (AID) in terms of living independently. Use of technologies for the education of AID is one of the primary goals of World Health Organization. The aim of this research was to determine the effectiveness of teaching café waitering to adults with…

  14. Evaluation of MRI acquisition workflow with lean six sigma method: case study of liver and knee examinations.

    PubMed

    Roth, Christopher J; Boll, Daniel T; Wall, Lisa K; Merkle, Elmar M

    2010-08-01

    The purpose of this investigation was to assess workflow for medical imaging studies, specifically comparing liver and knee MRI examinations by use of the Lean Six Sigma methodologic framework. The hypothesis tested was that the Lean Six Sigma framework can be used to quantify MRI workflow and to identify sources of inefficiency to target for sequence and protocol improvement. Audio-video interleave streams representing individual acquisitions were obtained with graphic user interface screen capture software in the examinations of 10 outpatients undergoing MRI of the liver and 10 outpatients undergoing MRI of the knee. With Lean Six Sigma methods, the audio-video streams were dissected into value-added time (true image data acquisition periods), business value-added time (time spent that provides no direct patient benefit but is requisite in the current system), and non-value-added time (scanner inactivity while awaiting manual input). For overall MRI table time, value-added time was 43.5% (range, 39.7-48.3%) of the time for liver examinations and 89.9% (range, 87.4-93.6%) for knee examinations. Business value-added time was 16.3% of the table time for the liver and 4.3% of the table time for the knee examinations. Non-value-added time was 40.2% of the overall table time for the liver and 5.8% for the knee examinations. Liver MRI examinations consume statistically significantly more non-value-added and business value-added times than do knee examinations, primarily because of respiratory command management and contrast administration. Workflow analyses and accepted inefficiency reduction frameworks can be applied with use of a graphic user interface screen capture program.
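
    A minimal sketch of the bookkeeping behind these figures: given table-time segments labeled value-added (VA), business value-added (BVA), or non-value-added (NVA), each category's share of total table time is its summed duration divided by the total. The segment durations below are invented for illustration, not data from the study.

    ```python
    # Illustrative calculation (not the authors' software): share of MRI table
    # time spent in each Lean Six Sigma category. Durations are made up.
    segments = [
        ("VA", 310.0),   # image data acquisition, seconds
        ("BVA", 95.0),   # e.g., contrast administration
        ("NVA", 240.0),  # scanner idle awaiting manual input
        ("VA", 180.0),
    ]

    totals = {}
    for label, seconds in segments:
        totals[label] = totals.get(label, 0.0) + seconds

    table_time = sum(totals.values())
    for label in ("VA", "BVA", "NVA"):
        share = 100.0 * totals.get(label, 0.0) / table_time
        print(f"{label}: {share:.1f}% of table time")
    ```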

  15. Fiber-channel audio video standard for military and commercial aircraft product lines

    NASA Astrophysics Data System (ADS)

    Keller, Jack E.

    2002-08-01

    Fibre Channel is an emerging high-speed digital network technology that continues to make inroads into the avionics arena. The suitability of Fibre Channel for such applications is largely due to its flexibility in these key areas: Network topologies can be configured in point-to-point, arbitrated loop, or switched fabric connections. The physical layer supports either copper or fiber optic implementations with a Bit Error Rate of less than 10⁻¹². Multiple Classes of Service are available. Multiple Upper Level Protocols are supported. Multiple high-speed data rates offer open-ended growth paths, providing speed negotiation within a single network. Current speeds supported by commercially available hardware are 1 and 2 Gbps, providing effective data rates of 100 and 200 MBps, respectively. Such networks lend themselves well to the transport of digital video and audio data. This paper summarizes an ANSI standard currently in the final approval cycle of the InterNational Committee for Information Technology Standards (INCITS). This standard defines a flexible mechanism whereby digital video, audio, and ancillary data are systematically packaged for transport over a Fibre Channel network. The basic mechanism, called a container, houses audio and video content functionally grouped as elements of the container called objects. Featured in this paper is a specific container mapping called Simple Parametric Digital Video (SPDV), developed particularly to address digital video in avionics systems. SPDV provides pixel-based video with associated ancillary data, typically sourced by various sensors, to be processed and/or distributed in the cockpit for presentation via high-resolution displays. Also highlighted in this paper is a streamlined Upper Level Protocol (ULP) called Frame Header Control Procedure (FHCP), targeted for avionics systems where the functionality of a more complex ULP is not required.
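
    To make the container/object idea concrete, the sketch below models a container as a simple Python structure holding video, audio, and ancillary objects. The field names and grouping are assumptions for illustration; they do not reproduce the actual FC-AV frame layout defined by the standard.

    ```python
    # Schematic sketch of a "container of objects"; illustrative only, not the
    # FC-AV (Fibre Channel Audio Video) wire format.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ContainerObject:
        kind: str        # e.g. "video", "audio", "ancillary"
        payload: bytes   # raw object data to be transported

    @dataclass
    class Container:
        sequence_number: int
        objects: List[ContainerObject] = field(default_factory=list)

        def add(self, kind: str, payload: bytes) -> None:
            self.objects.append(ContainerObject(kind, payload))

    # One container carrying a video object, an audio object, and ancillary data.
    c = Container(sequence_number=1)
    c.add("video", b"\x00" * 16)       # placeholder pixel data
    c.add("audio", b"\x00" * 4)        # placeholder audio samples
    c.add("ancillary", b"sensor-tag")  # placeholder sensor metadata
    print(len(c.objects), "objects in container", c.sequence_number)
    ```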

  16. Delivering Unidata Technology via the Cloud

    NASA Astrophysics Data System (ADS)

    Fisher, Ward; Oxelson Ganter, Jennifer

    2016-04-01

    Over the last two years, Docker has emerged as the clear leader in open-source containerization. Containerization technology provides a means by which software can be pre-configured and packaged into a single unit, i.e. a container. This container can then be easily deployed either on local or remote systems. Containerization is particularly advantageous when moving software into the cloud, as it simplifies the process. Unidata is adopting containerization as part of our commitment to migrate our technologies to the cloud. We are using a two-pronged approach in this endeavor. In addition to migrating our data-portal services to a cloud environment, we are also exploring new and novel ways to use cloud-specific technology to serve our community. This effort has resulted in several new cloud/Docker-specific projects at Unidata: "CloudStream," "CloudIDV," and "CloudControl." CloudStream is a Docker-based technology stack for bringing legacy desktop software to new computing environments, without the need to invest significant engineering/development resources. CloudStream helps make it easier to run existing software in a cloud environment via a technology called "Application Streaming." CloudIDV is a CloudStream-based implementation of the Unidata Integrated Data Viewer (IDV). CloudIDV serves as a practical example of application streaming, and demonstrates how traditional software can be easily accessed and controlled via a web browser. Finally, CloudControl is a web-based dashboard which provides administrative controls for running Docker-based technologies in the cloud, as well as providing user management.
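
    As a hedged illustration of how such a containerized application might be launched programmatically, the sketch below uses the Docker SDK for Python (docker-py). The image name "unidata/cloudidv" and the port mapping are assumptions; consult the Unidata projects for the actual images and run instructions.

    ```python
    # Hedged sketch: start a containerized application with the Docker SDK for
    # Python. Image name and port are assumed for illustration.
    import docker

    client = docker.from_env()  # talk to the local Docker daemon

    container = client.containers.run(
        "unidata/cloudidv",      # assumed image name
        detach=True,             # run in the background
        ports={"80/tcp": 8080},  # expose the streamed application on localhost:8080
    )
    print("started container:", container.short_id)
    ```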

  17. Algorithms for highway-speed acoustic impact-echo evaluation of concrete bridge decks

    NASA Astrophysics Data System (ADS)

    Mazzeo, Brian A.; Guthrie, W. Spencer

    2018-04-01

    A new acoustic impact-echo testing device has been developed for detecting and mapping delaminations in concrete bridge decks at highway speeds. The apparatus produces nearly continuous acoustic excitation of concrete bridge decks through rolling mats of chains that are placed around six wheels mounted to a hinged trailer. The wheels approximately span the width of a traffic lane, and the ability to remotely lower and raise the apparatus using a winch system allows continuous data collection without stationary traffic control or exposure of personnel to traffic. Microphones near the wheels are used to record the acoustic response of the bridge deck during testing. In conjunction with the development of this new apparatus, advances in the algorithms required for data analysis were needed. This paper describes the general framework of the algorithms developed for converting differential global positioning system data and multi-channel audio data into maps that can be used in support of engineering decisions about bridge deck maintenance, rehabilitation, and replacement (MR&R). Acquisition of position and audio data is coordinated on a laptop computer through a custom graphical user interface. All of the streams of data are synchronized with the universal computer time so that audio data can be associated with interpolated position information through data post-processing. The audio segments are individually processed according to particular detection algorithms that can adapt to variations in microphone sensitivity or particular chain excitations. Features that are greater than a predetermined threshold, which is held constant throughout the analysis, are then subjected to further analysis and included in a map that shows the results of the testing. Maps of data collected on a bridge deck using the new acoustic impact-echo testing device at different speeds ranging from approximately 10 km/h to 55 km/h indicate that the collected data are reasonably repeatable. Use of the new acoustic impact-echo testing device is expected to enable more informed decisions about MR&R of concrete bridge decks.
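
    A minimal sketch (not the authors' code) of the post-processing step described above: GPS positions are interpolated onto audio-segment timestamps, a per-segment detection feature is computed, and segments exceeding a fixed threshold are flagged. The GPS fixes, the synthetic segments, and the use of RMS amplitude as the feature are all illustrative assumptions.

    ```python
    # Sketch of aligning audio segments to interpolated positions and applying a
    # constant detection threshold. All numbers and the feature are made up.
    import numpy as np

    # GPS fixes: time (s, computer clock) and position along the deck (m).
    gps_t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    gps_x = np.array([0.0, 3.1, 6.0, 9.2, 12.1])

    # Audio segments: midpoint time and the segment samples (synthetic here).
    seg_times = np.array([0.5, 1.5, 2.5, 3.5])
    rng = np.random.default_rng(1)
    segments = [rng.normal(0.0, amp, 2048) for amp in (0.2, 0.9, 0.25, 1.1)]

    seg_positions = np.interp(seg_times, gps_t, gps_x)   # align audio to position
    features = np.array([np.sqrt(np.mean(s ** 2)) for s in segments])

    THRESHOLD = 0.5                                      # held constant throughout
    for x, f in zip(seg_positions, features):
        flag = "possible delamination" if f > THRESHOLD else "intact"
        print(f"x = {x:4.1f} m, feature = {f:.2f} -> {flag}")
    ```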

  18. Harvesting data from advanced technologies.

    DOT National Transportation Integrated Search

    2014-11-01

    Data streams are emerging everywhere such as Web logs, Web page click streams, sensor data streams, and credit card transaction flows. : Different from traditional data sets, data streams are sequentially generated and arrive one by one rather than b...

  19. Audio-Conferencing and Social Work Education in Alaska: New Technology and Challenges.

    ERIC Educational Resources Information Center

    Kleinkauf, Cecilia; Robinson, Myrna

    1987-01-01

    Reports on the use of teleconferencing to deliver undergraduate courses to geographically remote social work students of the University of Alaska, Anchorage. Describes telecommunications system, courses, teaching methods, and measures of educational effectiveness. (LFL)

  20. Gedanken Experiments in Educational Cost Effectiveness

    ERIC Educational Resources Information Center

    Brudner, Harvey J.

    1978-01-01

    Discusses the effectiveness of cost determining techniques in education. The areas discussed are: education and management; cost-effectiveness models; figures of merit determination; and the implications as they relate to the areas of audio-visual and computer educational technology. (Author/GA)

  1. Weaving together peer assessment, audios and medical vignettes in teaching medical terms.

    PubMed

    Allibaih, Mohammad; Khan, Lateef M

    2015-12-06

    The current study aims to explore the possibility of aligning peer assessment, audiovisuals, and medical case-report extracts (vignettes) in medical terminology teaching. In addition, the study seeks to highlight the effectiveness of audio materials and medical history vignettes in preventing medical students' comprehension, listening, writing, and pronunciation errors. The study also aims to reflect the medical students' attitudes towards the teaching and learning process. The study involved 161 medical students who received an intensive medical terminology course through audio and medical history extracts. Peer assessment and formative assessment platforms were applied through fake quizzes in a pre- and post-test manner. An 18-item survey was distributed amongst students to investigate their attitudes towards and feedback on the teaching and learning process. Quantitative and qualitative data were analysed using SPSS software. The students did better on the post-tests than on the pre-tests for both the audio quizzes and the medical-vignette quizzes, with t-values of -12.09 and -13.60, respectively. Moreover, out of the 133 students, 120 (90.22%) responded to the survey questions. The students expressed positive attitudes towards the application of audios and vignettes in the teaching and learning of medical terminology and towards the learning process. The current study revealed that the teaching and learning of medical terminology has more room for the application of advanced technologies, effective assessment platforms, and active learning strategies in higher education. It also highlights that students are capable of taking on more responsibility for assessment, feedback, and e-learning.

  2. On the definition of adapted audio/video profiles for high-quality video calling services over LTE/4G

    NASA Astrophysics Data System (ADS)

    Ndiaye, Maty; Quinquis, Catherine; Larabi, Mohamed Chaker; Le Lay, Gwenael; Saadane, Hakim; Perrine, Clency

    2014-01-01

    During the last decade, the important advances in and widespread availability of mobile technology (operating systems, GPUs, terminal resolution, and so on) have encouraged the rapid development of voice and video services such as video-calling. While multimedia services have grown considerably on mobile devices, the resulting increase in data consumption is leading to the saturation of mobile networks. In order to provide data at high bit-rates and maintain performance as close as possible to traditional networks, the 3GPP (3rd Generation Partnership Project) worked on a high-performance standard for mobile called Long Term Evolution (LTE). In this paper, we aim to provide recommendations on audio and video media profiles (selection of audio and video codecs, bit-rates, frame-rates, and audio and video formats) for a typical video-calling service carried over LTE/4G mobile networks. These profiles are defined according to the targeted devices (smartphones, tablets) so as to ensure the best possible quality of experience (QoE). The results indicate that for the CIF format (352 x 288 pixels), which is usually used for smartphones, the VP8 codec provides better image quality than the H.264 codec at low bitrates (from 128 to 384 kbps). However, for sequences with high motion, H.264 in slow mode is preferred. Regarding audio, better results are globally achieved using wideband codecs offering good quality, except for the Opus codec (at 12.2 kbps).
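
    The sketch below encodes the reported trends as a simple profile-selection table: VP8 at low bitrates for CIF-class devices, H.264 in slow mode for high-motion content, and a wideband audio codec otherwise. The thresholds and profile contents are assumptions for demonstration, not the authors' normative recommendations.

    ```python
    # Illustrative profile-selection table built from the trends reported above.
    # Thresholds and profile contents are assumed, not normative.
    PROFILES = {
        "smartphone": {"video_format": "CIF (352x288)", "audio": "wideband codec"},
        "tablet":     {"video_format": "VGA (640x480)", "audio": "wideband codec"},
    }

    def pick_video_codec(bitrate_kbps: int, high_motion: bool) -> str:
        """Follow the reported trend: VP8 better at low bitrates for CIF content,
        H.264 in slow mode preferred for high-motion sequences."""
        if high_motion:
            return "H.264 (slow mode)"
        if 128 <= bitrate_kbps <= 384:
            return "VP8"
        return "H.264"

    profile = dict(PROFILES["smartphone"])
    profile["video_codec"] = pick_video_codec(bitrate_kbps=256, high_motion=False)
    print(profile)
    ```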

  3. Potential Applicability of Assembled Chemical Weapons Assessment Technologies to RCRA Waste Streams and Contaminated Media (PDF)

    EPA Pesticide Factsheets

    This report provides an evaluation of the potential applicability of Assembled Chemical Weapons Assessment (ACWA) technologies to RCRA waste streams and contaminated media found at RCRA and Superfund sites.

  4. Researching and Evaluating Digital Storytelling as a Distance Education Tool in Physics Instruction: An Application with Pre-Service Physics Teachers

    ERIC Educational Resources Information Center

    Kotluk, Nihat; Kocakaya, Serhat

    2016-01-01

    Advances in information and communication technology in the 21st century have led to changes in education trends, and today concepts such as computers, multimedia, audio, video, animation, and the internet have become an indispensable part of life. Storytelling is one approach that allows technology to be used in the educational field. The aim…

  5. An Investigation of How the Channel of Input and Access to Test Questions Affect L2 Listening Test Performance

    ERIC Educational Resources Information Center

    Wagner, Elvis

    2013-01-01

    The use of video technology has become widespread in the teaching and testing of second-language (L2) listening, yet research into how this technology affects the learning and testing process has lagged. The current study investigated how the channel of input (audiovisual vs. audio-only) used on an L2 listening test affected test-taker…

  6. Information Technology for Agricultural America. Prepared for the Subcommittee on Department Operations, Research and Foreign Agriculture, 97th Congress, 2d Session. Committee Print.

    ERIC Educational Resources Information Center

    Library of Congress, Washington, DC. Congressional Research Service.

    This summary of the combined Hearing and Workshop on Applications of Computer-Based Information Systems and Services in Agriculture (May 19-20, 1982) offers an overview of the ways in which information technology--computers, telecommunications, microforms, word processing, video and audio devices--may be utilized by American farmers and ranchers.…

  7. MLC Libraries--A School Library's Journey with Students, Staff and Web 2.0 Technologies: Blogs, Wikis and E-Books--Where Are We Going Next?

    ERIC Educational Resources Information Center

    Viner, Jane; Lucas, Amanda; Ricchini, Tracey; Ri, Regina

    2010-01-01

    This workshop paper explores the Web 2.0 journey of the MLC Libraries' teacher-librarians, librarian, library and audio visual technicians. Our journey was initially inspired by Will Richardson and supported by the School Library Association of Victoria (SLAV) Web 2.0 professional development program. The 12 week technological skills program…

  8. Rapid Development of Orion Structural Test Systems

    NASA Astrophysics Data System (ADS)

    Baker, Dave

    2012-07-01

    NASA is currently validating the Orion spacecraft design for human space flight. Three systems developed by G Systems using hardware and software from National Instruments play an important role in the testing of the new Multi-Purpose Crew Vehicle (MPCV). A custom pressurization and venting system enables engineers to apply pressure inside the test article for measuring strain. A custom data acquisition system synchronizes over 1,800 channels of analog data. This data, along with multiple video and audio streams and calculated data, can be viewed, saved, and replayed in real-time on multiple client stations. This paper presents design features and how the systems work together in a distributed fashion.

  9. Signals Intelligence - Processing - Analysis - Classification

    DTIC Science & Technology

    2009-10-01

    Example: Language identification from audio signals. In a certain mission, a set of languages seems important beforehand. These languages will – with a... Uebler, Ulla (2003) The Visualisation of Diverse Intelligence. In Proceedings NATO (Research and Technology Agency) conference on “Military Data

  10. 76 FR 17613 - Aviation Service Regulations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-30

    ...) regarding audio visual warning systems (AVWS). OCAS, Inc. installs such technology under the trademark OCAS... frequencies to activate obstruction lighting and transmit audible warnings to aircraft on a potential... transmit audible warnings to pilots. We seek comment on operational, licensing, eligibility and equipment...

  11. The use of conferencing technologies to support drug policy group knowledge exchange processes: an action case approach.

    PubMed

    Househ, Mowafa Said; Kushniruk, Andre; Maclure, Malcolm; Carleton, Bruce; Cloutier-Fisher, Denise

    2011-04-01

    To describe experiences, lessons, and implications related to the use of conferencing technology to support three drug policy research groups over a three-year period, using the action case research method. An action case research field study was conducted. Three different drug policy groups participated: research, educator, and decision-maker task groups. There were a total of 61 participants in the study. The study was conducted between 2004 and 2007. Each group used audio-teleconferencing, web-conferencing, or both to support their knowledge exchange activities. Data were collected over three years and consisted of observation notes, interviews, and meeting transcripts. Content analysis was used to analyze the data using NVivo qualitative data analysis software. The study found six key lessons regarding the impact of conferencing technologies on knowledge exchange within drug policy groups. We found that 1) groups adapt to technology to facilitate group communication, 2) web-conferencing communication is optimal under certain conditions, 3) audio conferencing is convenient, 4) web-conferencing forces group interactions to be "within text", 5) facilitation contributes to successful knowledge exchange, and 6) technology impacts information sharing. This study highlights lessons related to the use of conferencing technologies to support distant knowledge exchange within drug policy groups. Key lessons from this study can be used by drug policy groups to support successful knowledge exchange activities using conferencing technologies.

  12. From Rain Tanks to Catchments: Use of Low-Impact Development To Address Hydrologic Symptoms of the Urban Stream Syndrome.

    PubMed

    Askarizadeh, Asal; Rippy, Megan A; Fletcher, Tim D; Feldman, David L; Peng, Jian; Bowler, Peter; Mehring, Andrew S; Winfrey, Brandon K; Vrugt, Jasper A; AghaKouchak, Amir; Jiang, Sunny C; Sanders, Brett F; Levin, Lisa A; Taylor, Scott; Grant, Stanley B

    2015-10-06

    Catchment urbanization perturbs the water and sediment budgets of streams, degrades stream health and function, and causes a constellation of flow, water quality, and ecological symptoms collectively known as the urban stream syndrome. Low-impact development (LID) technologies address the hydrologic symptoms of the urban stream syndrome by mimicking natural flow paths and restoring a natural water balance. Over annual time scales, the volumes of stormwater that should be infiltrated and harvested can be estimated from a catchment-scale water-balance given local climate conditions and preurban land cover. For all but the wettest regions of the world, a much larger volume of stormwater runoff should be harvested than infiltrated to maintain stream hydrology in a preurban state. Efforts to prevent or reverse hydrologic symptoms associated with the urban stream syndrome will therefore require: (1) selecting the right mix of LID technologies that provide regionally tailored ratios of stormwater harvesting and infiltration; (2) integrating these LID technologies into next-generation drainage systems; (3) maximizing potential cobenefits including water supply augmentation, flood protection, improved water quality, and urban amenities; and (4) long-term hydrologic monitoring to evaluate the efficacy of LID interventions.
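
    A highly simplified illustration of the catchment-scale accounting idea: the extra annual runoff generated by urbanization is the volume that LID must manage, split between infiltration and harvesting. The runoff coefficients and the split rule below are placeholders, not the paper's water-balance formulation.

    ```python
    # Simplified annual accounting of excess urban runoff; all values are
    # placeholders, not the paper's water balance.
    precip_m = 0.9            # annual precipitation depth (m), illustrative
    area_m2 = 2.0e6           # catchment area (m^2), illustrative
    runoff_coeff_preurban = 0.10
    runoff_coeff_urban = 0.45

    excess_runoff_m3 = (runoff_coeff_urban - runoff_coeff_preurban) * precip_m * area_m2

    # Placeholder split: infiltrate a fixed fraction and harvest the rest. Drier
    # climates push this split strongly toward harvesting, as the abstract notes.
    infiltration_fraction = 0.3
    infiltrate_m3 = infiltration_fraction * excess_runoff_m3
    harvest_m3 = excess_runoff_m3 - infiltrate_m3

    print(f"excess runoff to manage: {excess_runoff_m3:,.0f} m^3/yr")
    print(f"  infiltrate: {infiltrate_m3:,.0f} m^3/yr, harvest: {harvest_m3:,.0f} m^3/yr")
    ```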

  13. Audio-Enhanced Tablet Computers to Assess Children's Food Frequency From Migrant Farmworker Mothers.

    PubMed

    Kilanowski, Jill F; Trapl, Erika S; Kofron, Ryan M

    2013-06-01

    This study sought to improve data collection in children's food frequency surveys from non-English-speaking immigrant/migrant farmworker mothers using audio-enhanced tablet computers (ATCs). We hypothesized that these technological adaptations would improve data capture and therefore reduce lost surveys. The Food Frequency Questionnaire (FFQ), a paper-based dietary assessment tool, was adapted for ATCs and assessed consumption of 66 food items, asking 3 questions for each food item: frequency, quantity of consumption, and serving size. The tablet-based survey was audio-enhanced, with each question "read" to participants and accompanied by food item images, together with an embedded short instructional video. Results indicated that respondents were able to complete the 198 questions from the 66-food-item FFQ on ATCs in approximately 23 minutes. Compared with paper-based FFQs, ATC-based FFQs had less missing data. Despite overall reductions in missing data through the use of ATCs, respondents still appeared to have difficulty with question 2 of the FFQ. Ability to score the FFQ depended on the sections in which missing data were located. Unlike the paper-based FFQs, no ATC-based FFQs were unscored due to the amount or location of missing data. An ATC-based FFQ was feasible and increased the ability to score this survey on children's food patterns from migrant farmworker mothers. This adapted technology may serve as an exemplar for other non-English-speaking immigrant populations.

  14. Safety of the HyperSound® Audio System in Subjects with Normal Hearing.

    PubMed

    Mehta, Ritvik P; Mattson, Sara L; Kappus, Brian A; Seitzman, Robin L

    2015-06-11

    The objective of the study was to assess the safety of the HyperSound® Audio System (HSS), a novel audio system using ultrasound technology, in normal hearing subjects under normal use conditions, using a pre-exposure and post-exposure test design. We investigated primary and secondary outcome measures: i) temporary threshold shift (TTS), defined as a >10 dB shift in pure tone air conduction thresholds and/or a decrement in distortion product otoacoustic emissions (DPOAEs) of >10 dB at two or more frequencies; ii) presence of new-onset otologic symptoms after exposure. Twenty adult subjects with normal hearing underwent a pre-exposure assessment (pure tone air conduction audiometry, tympanometry, DPOAEs, and an otologic symptoms questionnaire) followed by exposure to a 2-h movie with sound delivered through the HSS emitter, followed by a post-exposure assessment. No TTS or new-onset otologic symptoms were identified. HSS demonstrates excellent safety in normal hearing subjects under normal use conditions.

  15. Safety of the HyperSound® Audio System in Subjects with Normal Hearing

    PubMed Central

    Mattson, Sara L.; Kappus, Brian A.; Seitzman, Robin L.

    2015-01-01

    The objective of the study was to assess the safety of the HyperSound® Audio System (HSS), a novel audio system using ultrasound technology, in normal hearing subjects under normal use conditions, using a pre-exposure and post-exposure test design. We investigated primary and secondary outcome measures: i) temporary threshold shift (TTS), defined as a >10 dB shift in pure tone air conduction thresholds and/or a decrement in distortion product otoacoustic emissions (DPOAEs) of >10 dB at two or more frequencies; ii) presence of new-onset otologic symptoms after exposure. Twenty adult subjects with normal hearing underwent a pre-exposure assessment (pure tone air conduction audiometry, tympanometry, DPOAEs, and an otologic symptoms questionnaire) followed by exposure to a 2-h movie with sound delivered through the HSS emitter, followed by a post-exposure assessment. No TTS or new-onset otologic symptoms were identified. HSS demonstrates excellent safety in normal hearing subjects under normal use conditions. PMID:26779330

  16. "Travelers In The Night" in the Old and New Media

    NASA Astrophysics Data System (ADS)

    Grauer, Albert D.

    2015-11-01

    "Travelers in the Night" is a series of 2 minute audio programs based on current research in astronomy and the space sciences.After more than a year of submitting “Travelers In The Night” 2 minute audio pieces to NPR and Community Radio stations with limited success, a parallel effort was initiated by posting the pieces as audio podcasts on Spreaker.com and iTunes.The classic media dispenses programming whose content and schedule is determined by editors and station managers. Riding the wave of new technology, people from every demographic group across the globe are selecting what, when, and how they receive information and entertainment. This change is significant with the Pew Research Center reporting that currently more than 60% of Facebook and Twitter users now get their news and/or links to stories from these sources. What remains constant is the public’s interest in astronomy and space.This poster presents relevant statistics and a discussion of the initial results of these two parallel efforts.

  17. [Media for 21st century--towards human communication media].

    PubMed

    Harashima, H

    2000-05-01

    Today, with the approach of the 21st century, attention is focused on multi-media communications combining computer, visual, and audio technologies. This article discusses the targets of communication media and the technological problems constituting the nucleus of multi-media. The communication media are becoming an environment from which no one can escape. Since the media have such great power, what is needed now is not to predict future technologies, but to envision the future world and take responsibility for future environments.

  18. On the cyclic nature of perception in vision versus audition

    PubMed Central

    VanRullen, Rufin; Zoefel, Benedikt; Ilhan, Barkin

    2014-01-01

    Does our perceptual awareness consist of a continuous stream, or a discrete sequence of perceptual cycles, possibly associated with the rhythmic structure of brain activity? This has been a long-standing question in neuroscience. We review recent psychophysical and electrophysiological studies indicating that part of our visual awareness proceeds in approximately 7–13 Hz cycles rather than continuously. On the other hand, experimental attempts at applying similar tools to demonstrate the discreteness of auditory awareness have been largely unsuccessful. We argue and demonstrate experimentally that visual and auditory perception are not equally affected by temporal subsampling of their respective input streams: video sequences remain intelligible at sampling rates of two to three frames per second, whereas audio inputs lose their fine temporal structure, and thus all significance, below 20–30 samples per second. This does not mean, however, that our auditory perception must proceed continuously. Instead, we propose that audition could still involve perceptual cycles, but the periodic sampling should happen only after the stage of auditory feature extraction. In addition, although visual perceptual cycles can follow one another at a spontaneous pace largely independent of the visual input, auditory cycles may need to sample the input stream more flexibly, by adapting to the temporal structure of the auditory inputs. PMID:24639585
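
    The temporal-subsampling manipulation described above can be sketched as a sample-and-hold operation on an audio-like signal: at a few samples per second the fast amplitude modulation is destroyed, while at tens to hundreds of samples per second it largely survives. The synthetic signal and rates below are illustrative.

    ```python
    # Sketch of temporal subsampling (sample-and-hold) of an audio-like signal.
    # The signal and the subsampling rates are illustrative only.
    import numpy as np

    fs = 8000                                  # original sampling rate (Hz)
    t = np.arange(0, 1.0, 1.0 / fs)
    # A toy "speech-like" signal: a carrier with fast amplitude modulation.
    signal = np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 8 * t))

    def subsample_and_hold(x, fs, samples_per_second):
        """Keep one value per frame and hold it until the next frame."""
        hop = int(fs / samples_per_second)
        return np.repeat(x[::hop], hop)[: len(x)]

    for rate in (3, 30, 300):                  # samples per second
        degraded = subsample_and_hold(signal, fs, rate)
        residual = np.sqrt(np.mean((signal - degraded) ** 2))
        print(f"{rate:4d} samples/s -> RMS distortion {residual:.3f}")
    ```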

  19. Magnetic Recording.

    ERIC Educational Resources Information Center

    Lowman, Charles E.

    A guide to the technology of magnetic recorders used in such fields as audio recording, broadcast and closed-circuit television, instrumentation recording, and computer data systems is presented. Included are discussions of applications, advantages, and limitations of magnetic recording, its basic principles and theory of operation, and its…

  20. Distance-Learning Technologies: Curriculum Equalizers in Rural and Small Schools.

    ERIC Educational Resources Information Center

    Barker, Bruce O.

    1986-01-01

    Discusses potential of new and advancing distance learning methods for meeting educational reform mandates for increased curricular offerings in rural schools. Describes specific successful programs now using interactive television via satellite, audio teleconferencing, videotapes, and microcomputer linking. Provides names, addresses, and…

  1. Evaluation of Side Stream Filtration Technology at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyd, Brian K.

    2014-08-01

    This technology evaluation was performed by Pacific Northwest National Laboratory and Oak Ridge National Laboratory on behalf of the Federal Energy Management Program. The objective was to quantify the benefits side stream filtration provides to a cooling tower system. The evaluation assessed the performance of an existing side stream filtration system at a cooling tower system at Oak Ridge National Laboratory’s Spallation Neutron Source research facility. This location was selected because it offered the opportunity for a side-by-side comparison of a system featuring side stream filtration and an unfiltered system.

  2. Harnessing the Potential of ICTs for Literacy Teaching and Learning: Effective Literacy and Numeracy Programmes Using Radio, TV, Mobile Phones, Tablets, and Computers

    ERIC Educational Resources Information Center

    Hanemann, Ulrike, Ed.

    2014-01-01

    Different technologies have been used for decades to support adult education and learning. These include radio, television and audio and video cassettes. More recently digital ICTs such as computers, tablets, e-books, and mobile technology have spread at great speed and also found their way into the teaching and learning of literacy and numeracy…

  3. Chemistry in the Two-Year College. Proceedings from Two-Year College Chemistry Conference and Papers of Special Interest to the Two-Year College Chemistry Teacher. 1971 No. 1.

    ERIC Educational Resources Information Center

    Chapman, Kenneth, Ed.

    In this publication, issued twice per year, four major topics are discussed: (1) chemistry course content, including chemistry for nonscience students and nurses; (2) using media in chemistry, such as behavioral objectives and audio-tutorial aids; (3) chemical technology, with emphasis on the Chemical Technology Curriculum Project (Chem TeC); and…

  4. Tactile Instrument for Aviation

    DTIC Science & Technology

    2000-07-30

    response times using 8 tactor locations was repeated with a dual memory/tracking task or an air combat simulation to evaluate the effectiveness of the... Global Positioning/Inertial Navigation System technologies into a single system for evaluation in a UH-60 Helicopter. A 10-event test operation was... evaluation of the following technology areas need to be pursued: • Integration of tactile instruments with helmet mounted displays and 3D audio displays

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoiber, Marcus H.; Brown, James B.

    This software implements the first base caller for nanopore data that calls bases directly from raw data. The basecRAWller algorithm has two major advantages over current nanopore base calling software: (1) streaming base calling and (2) base calling from information-rich raw signal. The ability to perform truly streaming base calling as signal is received from the sequencer can be very powerful, as this is one of the major advantages of this technology compared to other sequencing technologies. As such, enabling as much streaming potential as possible will be incredibly important as this technology continues to become more widely applied in biosciences. All other base callers currently employ the Viterbi algorithm, which requires the whole sequence to complete the base calling procedure and thus precludes a natural streaming base calling procedure. The other major advantage of the basecRAWller algorithm is the prediction of bases from raw signal, which contains much richer information than the segmented chunks that current algorithms employ. This leads to the potential for much more accurate base calls, which would make this technology much more valuable to all of the growing user base for this technology.
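
    The sketch below illustrates only the streaming idea, not the basecRAWller model: a stand-in "caller" emits one placeholder symbol per fixed-size chunk of simulated raw signal as the chunks arrive, rather than waiting for the complete read as a Viterbi-style decoder must. Every numeric choice here is arbitrary.

    ```python
    # Toy illustration of streaming base calling; NOT the basecRAWller network.
    import numpy as np

    def raw_signal_chunks(rng, n_chunks=5, chunk_size=250):
        """Simulate chunks of raw nanopore-like current arriving over time."""
        for _ in range(n_chunks):
            yield rng.normal(loc=rng.uniform(60, 120), scale=5.0, size=chunk_size)

    def toy_streaming_caller(chunks):
        """Emit one placeholder base per chunk based on its mean current level."""
        bins = [70, 90, 110]                 # arbitrary current thresholds
        bases = "ACGT"
        for chunk in chunks:
            level = float(np.mean(chunk))
            yield bases[int(np.digitize(level, bins))]

    rng = np.random.default_rng(42)
    for base in toy_streaming_caller(raw_signal_chunks(rng)):
        print(base, end="")                  # bases appear as the data stream in
    print()
    ```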

  6. Virtual environment display for a 3D audio room simulation

    NASA Astrophysics Data System (ADS)

    Chapin, William L.; Foster, Scott

    1992-06-01

    Recent developments in virtual 3D audio and synthetic aural environments have produced a complex acoustical room simulation. The acoustical simulation models a room with walls, ceiling, and floor of selected sound reflecting/absorbing characteristics and unlimited independent localizable sound sources. This non-visual acoustic simulation, implemented with 4 audio Convolvotrons™ by Crystal River Engineering and coupled to the listener with a Polhemus Isotrak™ tracking the listener's head position and orientation, and stereo headphones returning binaural sound, is quite compelling to most listeners with eyes closed. This immersive effect should be reinforced when properly integrated into a full, multi-sensory virtual environment presentation. This paper discusses the design of an interactive, visual virtual environment, complementing the acoustic model and specified to: 1) allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; 2) reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; 3) enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and 4) serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, is discussed through the development of four iterative configurations. The installed system implements a head-coupled, wide-angle, stereo-optic tracker/viewer and multi-computer simulation control. The portable demonstration system implements a head-mounted, wide-angle, stereo-optic display, separate head and pointer electro-magnetic position trackers, a heterogeneous parallel graphics processing system, and object-oriented C++ program code.
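
    The binaural-rendering idea can be sketched with simple interaural time and level differences driven by source azimuth. The real system described above convolved full head-related transfer functions on dedicated Convolvotron hardware; the constants and panning rule below are toy approximations.

    ```python
    # Toy binaural panner using interaural time/level differences (ITD/ILD).
    # Constants are rough approximations, not measured HRTF data.
    import numpy as np

    fs = 44100
    t = np.arange(0, 0.5, 1.0 / fs)
    source = np.sin(2 * np.pi * 440 * t)          # a mono test tone

    def render_binaural(mono, azimuth_deg, fs):
        """Pan a mono source for a source azimuth relative to the listener's head
        (0 deg = straight ahead, +90 = right)."""
        az = np.radians(azimuth_deg)
        itd = 0.0007 * np.sin(az)                 # interaural time difference (s)
        ild = 6.0 * np.sin(az)                    # interaural level difference (dB)
        delay = int(abs(itd) * fs)
        near_gain = 10 ** (+abs(ild) / 40)        # split the ILD between both ears
        far_gain = 10 ** (-abs(ild) / 40)
        delayed = np.concatenate([np.zeros(delay), mono])[: len(mono)]
        if azimuth_deg >= 0:                      # source on the right
            left, right = far_gain * delayed, near_gain * mono
        else:                                     # source on the left
            left, right = near_gain * mono, far_gain * delayed
        return np.stack([left, right], axis=1)

    stereo = render_binaural(source, azimuth_deg=45.0, fs=fs)
    print("binaural buffer shape:", stereo.shape)
    ```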

  7. Weaving together peer assessment, audios and medical vignettes in teaching medical terms

    PubMed Central

    Khan, Lateef M.

    2015-01-01

    Objectives The current study aims to explore the possibility of aligning peer assessment, audiovisuals, and medical case-report extracts (vignettes) in medical terminology teaching. In addition, the study seeks to highlight the effectiveness of audio materials and medical history vignettes in preventing medical students' comprehension, listening, writing, and pronunciation errors. The study also aims to reflect the medical students' attitudes towards the teaching and learning process. Methods The study involved 161 medical students who received an intensive medical terminology course through audio and medical history extracts. Peer assessment and formative assessment platforms were applied through fake quizzes in a pre- and post-test manner. An 18-item survey was distributed amongst students to investigate their attitudes towards and feedback on the teaching and learning process. Quantitative and qualitative data were analysed using SPSS software. Results The students did better on the post-tests than on the pre-tests for both the audio quizzes and the medical-vignette quizzes, with t-values of -12.09 and -13.60, respectively. Moreover, out of the 133 students, 120 (90.22%) responded to the survey questions. The students expressed positive attitudes towards the application of audios and vignettes in the teaching and learning of medical terminology and towards the learning process. Conclusions The current study revealed that the teaching and learning of medical terminology has more room for the application of advanced technologies, effective assessment platforms, and active learning strategies in higher education. It also highlights that students are capable of taking on more responsibility for assessment, feedback, and e-learning. PMID:26637986

  8. Noise Prediction Module for Offset Stream Nozzles

    NASA Technical Reports Server (NTRS)

    Henderson, Brenda S.

    2011-01-01

    A Modern Design of Experiments (MDOE) analysis of data acquired for an offset stream technology was presented. The data acquisition and concept development were funded under a Supersonics NRA NNX07AC62A awarded to Dimitri Papamoschou at the University of California, Irvine. The technology involved the introduction of airfoils in the fan stream of a bypass ratio (BPR) 2 nozzle system operated at transonic exhaust speeds. The vanes deflected the fan stream relative to the core stream and resulted in reduced sideline noise for polar angles in the peak jet noise direction. Noise prediction models were developed for a range of vane configurations. The models interface with an existing ANOPP module and can be used for future system-level studies.

  9. Facial and Periorbital Cellulitis due to Skin Peeling with Jet Stream by an Unauthorized Person.

    PubMed

    Kaptanoglu, Asli Feride; Mullaaziz, Didem; Suer, Kaya

    2014-01-01

    Technologies and devices for cosmetic procedures are developing with each passing day. However, increased and unauthorized use of such emerging technologies may also lead to an increase in unexpected results and complications. Here, we report for the first time a case of facial cellulitis in a 19-year-old female patient after a "beauty parlor" session of skin cleaning with a jet stream peeling device. Complications due to improper and unauthorized use of jet stream peeling devices may also raise doubts about the safety of the technology and impair its reputation. In order to avoid irreversible complications, local authorities should keep up with the technology and update the regulations, and dermatologists should take an active role.

  10. Learning diagnostic models using speech and language measures.

    PubMed

    Peintner, Bart; Jarrold, William; Vergyri, Dimitra; Richey, Colleen; Tempini, Maria Luisa Gorno; Ogar, Jennifer

    2008-01-01

    We describe results that show the effectiveness of machine learning in the automatic diagnosis of certain neurodegenerative diseases, several of which alter speech and language production. We analyzed audio from 9 control subjects and 30 patients diagnosed with one of three subtypes of Frontotemporal Lobar Degeneration. From this data, we extracted features of the audio signal and the words the patient used, which were obtained using our automated transcription technologies. We then automatically learned models that predict the diagnosis of the patient using these features. Our results show that learned models over these features predict diagnosis with accuracy significantly better than random. Future studies using higher quality recordings will likely improve these results.
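
    Schematically, the modeling approach amounts to fitting a classifier to per-subject feature vectors derived from the audio signal and the transcribed words. The sketch below uses scikit-learn with random placeholder features and a binary control-versus-patient label; the actual study used real acoustic and lexical features and distinguished three diagnostic subtypes.

    ```python
    # Schematic sketch of learning a diagnostic model from speech/language
    # features. Feature values are random placeholders, not patient data.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_subjects, n_features = 39, 12                 # 9 controls + 30 patients
    X = rng.normal(size=(n_subjects, n_features))   # acoustic + lexical features (placeholder)
    y = np.array([0] * 9 + [1] * 30)                # 0 = control, 1 = patient

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(model, X, y, cv=5)     # accuracy under 5-fold cross-validation
    print(f"mean CV accuracy: {scores.mean():.2f}")
    ```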

  11. Audio Restoration

    NASA Astrophysics Data System (ADS)

    Esquef, Paulo A. A.

    The first reproducible recording of human voice was made in 1877 on a tinfoil cylinder phonograph devised by Thomas A. Edison. Since then, much effort has been expended to find better ways to record and reproduce sounds. By the mid-1920s, the first electrical recordings appeared and gradually took over purely acoustic recordings. The development of electronic computers, in conjunction with the ability to record data onto magnetic or optical media, culminated in the standardization of compact disc format in 1980. Nowadays, digital technology is applied to several audio applications, not only to improve the quality of modern and old recording/reproduction techniques, but also to trade off sound quality for less storage space and less taxing transmission capacity requirements.

  12. ATS-6 - Television Relay Using Small Terminals Experiment

    NASA Technical Reports Server (NTRS)

    Miller, J. E.

    1975-01-01

    The Television Relay Using Small Terminals (TRUST) Experiment was designed to advance and promote the technology of broadcasting satellites. A constant envelope television FM signal was transmitted at C band to the ATS-6 earth coverage horn and retransmitted at 860 MHz through the 9-m antenna to a low-cost direct-readout ground station. The experiment demonstrated that high-quality television and audio can be received by low-cost direct-receive ground stations. Predetection bandwidths significantly less than predicted by Carson's rule can be utilized with minimal degradation of either monochrome or color pictures. Two separate techniques of dual audio channel transmission have been demonstrated to be suitable for low-cost applications.
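
    Carson's rule, the bandwidth estimate referenced above, approximates the occupied FM bandwidth as twice the sum of the peak frequency deviation and the highest modulating frequency. The deviation and baseband values in the worked example below are assumptions, not the actual ATS-6 TRUST link parameters.

    ```python
    # Worked example of Carson's rule: B ≈ 2 * (peak deviation + highest
    # modulating frequency). Parameter values are illustrative assumptions.
    peak_deviation_mhz = 10.0     # assumed peak FM deviation
    modulating_bw_mhz = 4.2       # assumed highest baseband (video) frequency

    carson_bandwidth_mhz = 2.0 * (peak_deviation_mhz + modulating_bw_mhz)
    print(f"Carson's rule predetection bandwidth ≈ {carson_bandwidth_mhz:.1f} MHz")
    ```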

  13. Automated Assessment of Child Vocalization Development Using LENA

    ERIC Educational Resources Information Center

    Richards, Jeffrey A.; Xu, Dongxin; Gilkerson, Jill; Yapanel, Umit; Gray, Sharmistha; Paul, Terrance

    2017-01-01

    Purpose: To produce a novel, efficient measure of children's expressive vocal development on the basis of automatic vocalization assessment (AVA), child vocalizations were automatically identified and extracted from audio recordings using Language Environment Analysis (LENA) System technology. Method: Assessment was based on full-day audio…

  14. Delivery Systems for Distance Education. ERIC Digest.

    ERIC Educational Resources Information Center

    Schamber, Linda

    This ERIC digest provides a brief overview of the video, audio, and computer technologies that are currently used to deliver instruction for distance education programs. The video systems described include videoconferencing, low-power television (LPTV), closed-circuit television (CCTV), instructional fixed television service (ITFS), and cable…

  15. 34 CFR 388.22 - What priorities does the Secretary consider in making an award?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... education methods, such as interactive audio, video, computer technologies, or existing telecommunications... training materials and practices. The proposed project demonstrates an effective plan to develop and... programs by other State vocational rehabilitation units. (2) Distance education. The proposed project...

  16. 34 CFR 388.22 - What priorities does the Secretary consider in making an award?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... education methods, such as interactive audio, video, computer technologies, or existing telecommunications... training materials and practices. The proposed project demonstrates an effective plan to develop and... programs by other State vocational rehabilitation units. (2) Distance education. The proposed project...

  17. So Wide a Web, So Little Time.

    ERIC Educational Resources Information Center

    McConville, David; And Others

    1996-01-01

    Discusses new trends in the World Wide Web. Highlights include multimedia; digitized audio-visual files; compression technology; telephony; virtual reality modeling language (VRML); open architecture; and advantages of Java, an object-oriented programming language, including platform independence, distributed development, and pay-per-use software.…

  18. 36 CFR 1194.31 - Functional performance criteria.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Functional performance... Performance Criteria § 1194.31 Functional performance criteria. (a) At least one mode of operation and... audio and enlarged print output working together or independently, or support for assistive technology...

  19. An Overview of Audacity

    ERIC Educational Resources Information Center

    Thompson, Douglas Earl

    2014-01-01

    This article is an overview of the open source audio-editing and -recording program, Audacity. Key features are noted, along with significant features not included in the program. A number of music and music technology concepts are identified that could be taught and/or reinforced through using Audacity.

  20. Telecommunications: Preservice, Inservice, Graduate, and Faculty. [SITE 2002 Section].

    ERIC Educational Resources Information Center

    Espinoza, Sue, Ed.

    This document contains the following papers on preservice, inservice, graduate, and faculty use of telecommunications from the SITE (Society for Information Technology & Teacher Education) 2002 conference: (1) "Alternative Classroom Observation through Two-Way Audio/Video Conferencing Systems" (Phyllis K. Adcock and William Austin);…

  1. 34 CFR 388.22 - What priorities does the Secretary consider in making an award?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... education methods, such as interactive audio, video, computer technologies, or existing telecommunications... training materials and practices. The proposed project demonstrates an effective plan to develop and... programs by other State vocational rehabilitation units. (2) Distance education. The proposed project...

  2. 34 CFR 388.22 - What priorities does the Secretary consider in making an award?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... education methods, such as interactive audio, video, computer technologies, or existing telecommunications... training materials and practices. The proposed project demonstrates an effective plan to develop and... programs by other State vocational rehabilitation units. (2) Distance education. The proposed project...

  3. 34 CFR 388.22 - What priorities does the Secretary consider in making an award?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... education methods, such as interactive audio, video, computer technologies, or existing telecommunications... training materials and practices. The proposed project demonstrates an effective plan to develop and... programs by other State vocational rehabilitation units. (2) Distance education. The proposed project...

  4. From rain tanks to catchments: Use of low-impact development to address hydrologic symptoms of the urban stream syndrome

    NASA Astrophysics Data System (ADS)

    Grant, S. B.

    2015-12-01

    Catchment urbanization perturbs the water and sediment budgets of streams, degrades stream health and function, and causes a constellation of flow, water quality and ecological symptoms collectively known as the urban stream syndrome. Low-impact development (LID) technologies address the hydrologic symptoms of the urban stream syndrome by mimicking natural flow paths and restoring a natural water balance. Over annual time scales, the volumes of storm water that should be infiltrated and harvested can be estimated from a catchment-scale water-balance given local climate conditions and pre-urban land cover. For all but the wettest regions of the world, the water balance predicts a much larger volume of storm water runoff should be harvested than infiltrated to restore stream hydrology to a pre-urban state. Efforts to prevent or reverse hydrologic symptoms associated with the urban stream syndrome will therefore require: (1) selecting the right mix of LID technologies that provide regionally tailored ratios of storm water harvesting and infiltration; (2) integrating these LID technologies into next-generation drainage systems; (3) maximizing potential co-benefits including water supply augmentation, flood protection, improved water quality, and urban amenities; and (4) long-term hydrologic monitoring to evaluate the efficacy of LID interventions.
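
    As a rough illustration of the catchment-scale water-balance argument (all figures below are hypothetical and are not taken from the study), the excess runoff created by urbanization can be estimated from annual rainfall and pre- versus post-development runoff coefficients, and then split between harvesting and infiltration:

        # Illustrative annual storm-water balance for a hypothetical catchment.
        rainfall_mm = 800.0              # annual rainfall depth (illustrative)
        area_m2 = 1.0e6                  # 1 km^2 catchment
        runoff_coeff_pre_urban = 0.10    # fraction of rain running off before development
        runoff_coeff_urban = 0.45        # fraction running off after development

        rain_volume = rainfall_mm / 1000.0 * area_m2            # m^3 per year
        excess_runoff = (runoff_coeff_urban - runoff_coeff_pre_urban) * rain_volume

        # The paper's argument is that in most climates far more of this excess must be
        # harvested (stored and used) than infiltrated; the split here is just a
        # placeholder parameter, not a result from the paper.
        harvest_fraction = 0.8
        harvest_volume = harvest_fraction * excess_runoff
        infiltration_volume = excess_runoff - harvest_volume
        print(f"excess urban runoff: {excess_runoff:,.0f} m^3/yr "
              f"(harvest {harvest_volume:,.0f}, infiltrate {infiltration_volume:,.0f})")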

  5. A Low-Cost, Passive Approach for Bacterial Growth and Distribution for Large-Scale Implementation of Bioaugmentation

    DTIC Science & Technology

    2012-07-01

    technologies with significant capital costs, secondary waste streams, the involvement of hazardous materials, and the potential for additional worker...or environmental exposure. A more ideal technology would involve lower capital costs, would not generate secondary waste streams, would be...of bioaugmentation technology in general include low risk to human health and the environment during implementation, low secondary waste generation

  6. Advancing the use of streaming media and digital media technologies at the Connecticut Department of Transportation.

    DOT National Transportation Integrated Search

    2014-03-27

    This final research report culminates a decade-long initiative to demonstrate and implement streaming media technologies at CONNDOT. This effort began in 2001 during an earlier related-study (SPR-2231) that concluded in 2006. This study (SPR-2254) re...

  7. Australian DefenceScience. Volume 16, Number 1, Autumn

    DTIC Science & Technology

    2008-01-01

    are carried via VOIP technology, and multicast IP traffic for audio-visual communications is also supported. The SSATIN system overall is seen to...Artificial Intelligence and Soft Computing Palma de Mallorca, Spain http://iasted.com/conferences/home-628.html 1 - 3 Sep 2008 Visualisation, Imaging and

  8. New Media. [SITE 2001 Section].

    ERIC Educational Resources Information Center

    McNeil, Sara, Ed.

    This document contains the following papers on new media from the SITE (Society for Information Technology & Teacher Education) 2001 conference: "Interactive Multimedia Problem-Based Learning: Evaluating Its Use in Pre-Service Teacher Education" (Peter Albion); "Digital Audio Production for the Web" (Jeffrey W. Bauer and Marianne T. Bauer);…

  9. Podcasting the Sciences: A Practical Overview

    ERIC Educational Resources Information Center

    Barsky, Eugene; Lindstrom, Kevin

    2008-01-01

    University science education has been undergoing a great deal of change since the commercialization of the Internet a decade ago. Mobile technologies in science education can encompass more than the proximal teaching and learning environment. Podcasting, for example, allows audio content from user-selected feeds to be automatically downloaded to…

  10. An Investigation of Technological Innovation: Interactive Television.

    ERIC Educational Resources Information Center

    Robinson, Rhonda S.

    A 5-year case study was implemented to evaluate the two-way Carroll Instructional Television Consortium, which utilizes a cable television network serving four school districts in Illinois. This network permits simultaneous video and audio interactive communication among four high schools. The naturalistic inquiry method employed included…

  11. Doctoral Research in Library Media; Completed and Underway 1970.

    ERIC Educational Resources Information Center

    Anderton, Ray L., Ed.; Mapes, Joseph L., Ed.

    Doctoral theses completed and doctoral theses underway in the subject area of instructional technology are listed in this bibliography under the subtitles of audio literacy, audiovisual techniques, computers in education, library media, media training, programed instruction, projected materials, simulation and games, systems approach, television,…

  12. Web server for priority ordered multimedia services

    NASA Astrophysics Data System (ADS)

    Celenk, Mehmet; Godavari, Rakesh K.; Vetnes, Vermund

    2001-10-01

    In this work, our aim is to provide finer priority levels in the design of a general-purpose Web multimedia server with provisions for CM services. The types of services provided include reading/writing a web page, downloading/uploading an audio/video stream, navigating the Web through browsing, and interactive video teleconferencing. The selected priority encoding levels for such operations follow the order of admin read/write, hot page CM and Web multicasting, CM read, Web read, CM write and Web write. Hot pages are the most requested CM streams (e.g., the newest movies, video clips, and HDTV channels) and Web pages (e.g., portal pages of the commercial Internet search engines). Maintaining a list of these hot Web pages and CM streams in a content addressable buffer enables a server to multicast hot streams with lower latency and higher system throughput. Cold Web pages and CM streams are treated as regular Web and CM requests. Interactive CM operations such as pause (P), resume (R), fast-forward (FF), and rewind (RW) have to be executed without allocation of extra resources. The proposed multimedia server model is a part of the distributed network with load balancing schedulers. The SM is connected to an integrated disk scheduler (IDS), which supervises an allocated disk manager. The IDS follows the same priority handling as the SM, and implements a SCAN disk-scheduling method for improved disk access and higher throughput. Different disks are used for the Web and CM services in order to meet the QoS requirements of CM services. The IDS output is forwarded to an Integrated Transmission Scheduler (ITS). The ITS creates a priority-ordered buffering of the retrieved Web pages and CM data streams that are fed into an autoregressive moving average (ARMA)-based traffic-shaping circuitry before being transmitted through the network.
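
    A minimal sketch of the priority ordering described above, using a single in-memory queue. The request categories follow the paper's stated order, but the class structure, numeric priority values, and payloads are illustrative assumptions rather than the paper's implementation:

        import heapq
        import itertools

        # Priority levels follow the ordering described in the abstract; the numeric
        # values are illustrative.
        PRIORITY = {
            "admin_rw": 0,
            "hot_multicast": 1,   # hot-page CM and Web multicasting
            "cm_read": 2,
            "web_read": 3,
            "cm_write": 4,
            "web_write": 5,
        }

        class PriorityScheduler:
            """Serves multimedia/Web requests in priority order (FIFO within a level)."""
            def __init__(self):
                self._heap = []
                self._counter = itertools.count()  # tie-breaker preserves arrival order

            def submit(self, kind, payload):
                heapq.heappush(self._heap, (PRIORITY[kind], next(self._counter), kind, payload))

            def next_request(self):
                if not self._heap:
                    return None
                _, _, kind, payload = heapq.heappop(self._heap)
                return kind, payload

        sched = PriorityScheduler()
        sched.submit("web_read", "GET /index.html")
        sched.submit("cm_read", "stream movie-42")
        sched.submit("hot_multicast", "multicast hot-clip-7")
        print(sched.next_request())   # the hot-multicast request is served first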

  13. Interface Design Implications for Recalling the Spatial Configuration of Virtual Auditory Environments

    NASA Astrophysics Data System (ADS)

    McMullen, Kyla A.

    Although the concept of virtual spatial audio has existed for almost twenty-five years, only in the past fifteen years has modern computing technology enabled the real-time processing needed to deliver high-precision spatial audio. Until recently, however, the concept of virtually walking through an auditory environment did not exist. Such an interface has numerous potential uses, ranging from enhancing sounds delivered in virtual gaming worlds to conveying spatial locations in real-time emergency response systems. To incorporate this technology in real-world systems, several concerns must be addressed. First, head-related transfer functions (HRTFs) must be inexpensively created for each user. The present study further investigated an HRTF subjective selection procedure previously developed within our research group. Users discriminated auditory cues to subjectively select their preferred HRTF from a publicly available database. Next, the issue of training to find virtual sources was addressed. Listeners participated in a localization training experiment using their selected HRTFs. The training procedure was created from the characterization of successful search strategies in prior auditory search experiments. Search accuracy significantly improved after listeners performed the training procedure. Next, in the investigation of auditory spatial memory, listeners completed three search and recall tasks with differing recall methods. Recall accuracy significantly decreased in tasks that required the storage of sound source configurations in memory. To address practical scenarios, the present work assessed the performance effects of signal uncertainty, visual augmentation, and different attenuation modeling. Fortunately, source uncertainty did not affect listeners' ability to recall or identify sound sources. The present study also found that the presence of visual reference frames significantly increased recall accuracy. Additionally, the incorporation of drastic attenuation significantly improved environment recall accuracy. Through investigating the aforementioned concerns, the present study took initial steps toward guiding the design of virtual auditory environments that support spatial configuration recall.
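
    The core rendering step behind such an interface is binaural filtering: convolving a mono source with the left- and right-ear head-related impulse responses (HRIRs) for the desired direction. The sketch below shows only this generic operation, with placeholder HRIRs; it does not reproduce the study's HRTF selection or training procedures:

        import numpy as np
        from scipy.signal import fftconvolve

        def spatialize(mono, hrir_left, hrir_right):
            """Render a mono signal at the direction encoded by an HRIR pair.

            Convolving the source with the left- and right-ear head-related impulse
            responses is the basic operation behind binaural spatial audio; choosing
            WHICH HRTF/HRIR set fits a given listener is what the study's subjective
            selection procedure addresses.
            """
            left = fftconvolve(mono, hrir_left, mode="full")
            right = fftconvolve(mono, hrir_right, mode="full")
            return np.stack([left, right], axis=-1)  # (samples, 2) binaural signal

        # Toy example with white noise and made-up 128-tap HRIRs; real HRIRs would
        # come from a measured database.
        fs = 44100
        mono = np.random.randn(fs)            # 1 s of noise
        hrir_l = np.random.randn(128) * 0.1   # placeholder left-ear impulse response
        hrir_r = np.random.randn(128) * 0.1   # placeholder right-ear impulse response
        binaural = spatialize(mono, hrir_l, hrir_r)
        print(binaural.shape)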

  14. Ontology-Based Multimedia Authoring Tool for Adaptive E-Learning

    ERIC Educational Resources Information Center

    Deng, Lawrence Y.; Keh, Huan-Chao; Liu, Yi-Jen

    2010-01-01

    More video streaming technologies supporting distance learning systems are becoming popular among distributed network environments. In this paper, the authors develop a multimedia authoring tool for adaptive e-learning by using characterization of extended media streaming technologies. The distributed approach is based on an ontology-based model.…

  15. Technological and Vocational Education in Taiwan.

    ERIC Educational Resources Information Center

    Lee, Lung-Sheng

    Beyond the nine years of compulsory education, Taiwan has the following two additional streams in the educational system: general academic education (GAE) and technological and vocational education (TVE). TVE has the two key features of a complete system to ensure students' horizontal and vertical mobility and a main schooling stream, parallel to…

  16. Audio-Enhanced Tablet Computers to Assess Children’s Food Frequency From Migrant Farmworker Mothers

    PubMed Central

    Kilanowski, Jill F.; Trapl, Erika S.; Kofron, Ryan M.

    2014-01-01

    This study sought to improve data collection in children's food frequency surveys for non-English speaking immigrant/migrant farmworker mothers using audio-enhanced tablet computers (ATCs). We hypothesized that by using technological adaptations, we would be able to improve data capture and therefore reduce lost surveys. This Food Frequency Questionnaire (FFQ), a paper-based dietary assessment tool, was adapted for ATCs and assessed consumption of 66 food items, asking 3 questions for each food item: frequency, quantity of consumption, and serving size. The tablet-based survey was audio enhanced, with each question "read" to participants, accompanied by food item images, together with an embedded short instructional video. Results indicated that respondents were able to complete the 198 questions from the 66-item FFQ on ATCs in approximately 23 minutes. Compared with paper-based FFQs, ATC-based FFQs had less missing data. Despite overall reductions in missing data by use of ATCs, respondents still appeared to have difficulty with question 2 of the FFQ. Ability to score the FFQ depended on which sections the missing data were located in. Unlike the paper-based FFQs, no ATC-based FFQs were unscored due to the amount or location of missing data. An ATC-based FFQ was feasible and increased the ability to score this survey on children's food patterns from migrant farmworker mothers. This adapted technology may serve as an exemplar for other non-English speaking immigrant populations. PMID:25343004

  17. Live Aircraft Encounter Visualization at FutureFlight Central

    NASA Technical Reports Server (NTRS)

    Murphy, James R.; Chinn, Fay; Monheim, Spencer; Otto, Neil; Kato, Kenji; Archdeacon, John

    2018-01-01

    Researchers at the National Aeronautics and Space Administration (NASA) have developed an aircraft data streaming capability that can be used to visualize live aircraft in near real-time. During a joint Federal Aviation Administration (FAA)/NASA Airborne Collision Avoidance System flight series, test sorties between unmanned aircraft and manned intruder aircraft were shown in real-time at NASA Ames' FutureFlight Central tower facility as a virtual representation of the encounter. This capability leveraged existing live surveillance, video, and audio data streams distributed through a Live, Virtual, Constructive test environment, then depicted the encounter from the point of view of any aircraft in the system, showing the proximity of the other aircraft. For the demonstration, position report data were sent to the ground from on-board sensors on the unmanned aircraft. The point of view can be changed dynamically, allowing encounters from all angles to be observed. Visualizing the encounters in real-time provides a safe and effective method for observing live flight testing and a strong alternative to traveling to the remote test range.
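
    One building block of such a proximity display is computing the separation between two position reports. The sketch below uses the standard haversine formula plus the altitude difference; the record fields and units are assumptions for illustration, not NASA's actual data format:

        import math

        def haversine_nm(lat1, lon1, lat2, lon2):
            """Great-circle distance between two position reports, in nautical miles."""
            r_nm = 3440.065  # mean Earth radius in nautical miles
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dphi = math.radians(lat2 - lat1)
            dlmb = math.radians(lon2 - lon1)
            a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
            return 2 * r_nm * math.asin(math.sqrt(a))

        def slant_range_nm(own, intruder):
            """Combine horizontal separation with altitude difference (feet -> NM)."""
            horiz = haversine_nm(own["lat"], own["lon"], intruder["lat"], intruder["lon"])
            vert = abs(own["alt_ft"] - intruder["alt_ft"]) / 6076.12
            return math.hypot(horiz, vert)

        # Hypothetical position reports for the ownship and the intruder.
        own = {"lat": 37.41, "lon": -122.06, "alt_ft": 4000}
        intruder = {"lat": 37.45, "lon": -122.00, "alt_ft": 4500}
        print(f"separation: {slant_range_nm(own, intruder):.2f} NM")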

  18. Telemedicine using free voice over internet protocol (VoIP) technology.

    PubMed

    Miller, David J; Miljkovic, Nikola; Chiesa, Chad; Callahan, John B; Webb, Brad; Boedeker, Ben H

    2011-01-01

    Though dedicated videoteleconference (VTC) systems deliver high quality, low-latency audio and video for telemedical applications, they require expensive hardware and extensive infrastructure. The purpose of this study was to investigate free commercially available Voice over Internet Protocol (VoIP) software as a low cost alternative for telemedicine.

  19. Developing Student Presentation Skills in an Introductory-Level Chemistry Course with Audio Technology

    ERIC Educational Resources Information Center

    Fredricks, Susan M.; Tierney, John; Bodek, Matthew; Fredericks, Margaret

    2016-01-01

    The objective of this article is to explain and provide rubrics for science and communication faculty as a means to help nonscience students, in basic science classes, understand that proper communication and presentation skills are a necessity in all courses and future walks of life.

  20. Challenges in Transcribing Multimodal Data: A Case Study

    ERIC Educational Resources Information Center

    Helm, Francesca; Dooly, Melinda

    2017-01-01

    Computer-mediated communication (CMC) once meant principally text-based communication mediated by computers, but rapid technological advances in recent years have heralded an era of multimodal communication with a growing emphasis on audio and video synchronous interaction. As CMC, in all its variants (text chats, video chats, forums, blogs, SMS,…

  1. The Mediated Classroom: A Systems Approach to Better University Instruction.

    ERIC Educational Resources Information Center

    Ranker, Richard A.

    1995-01-01

    Describes the design and equipment configuration of four mediated classrooms installed at a small university. Topics include audio, visual, and environmental subsystems; the teaching workstation; integration into learning, including teaching faculty how to use it and providing support services; and an instructional technology integration model.…

  2. Music Software and Young Children: Fun and Focused Instruction

    ERIC Educational Resources Information Center

    Peters, G. David

    2009-01-01

    Readers have experienced the acceleration in music technology developments in recent years. The ease with which students and teacher can access digital audio files, video clips of music performances, and online instructional resources is impressive. Creativity "environments" were developed in a game-like format for children to experiment with…

  3. Engaging Students with Pre-Recorded "Live" Reflections on Problem-Solving with "Livescribe" Pens

    ERIC Educational Resources Information Center

    Hickman, Mike

    2013-01-01

    This pilot study, involving PGCE primary student teachers, applies "Livescribe" pen technology to facilitate individual and group reflection on collaborative mathematical problem solving (Hickman 2011). The research question was: How does thinking aloud, supported by digital audio recording, support student teachers' understanding of…

  4. The Effectiveness of Classroom Capture Technology

    ERIC Educational Resources Information Center

    Ford, Maire B.; Burns, Colleen E.; Mitch, Nathan; Gomez, Melissa M.

    2012-01-01

    The use of classroom capture systems (systems that capture audio and video footage of a lecture and attempt to replicate a classroom experience) is becoming increasingly popular at the university level. However, research on the effectiveness of classroom capture systems in the university classroom has been limited due to the recent development and…

  5. Conferencing Tools and the Productivity Paradox

    ERIC Educational Resources Information Center

    Nibourg, Theodorus

    2005-01-01

    The previous report in this series discusses current attitudes to distance education technology, with specific reference to the counter-productive effects of learning management systems. The current paper pursues this theme in relation to the evolution of online audio-conferencing systems in DE, and revisits the notion of the "productivity…

  6. Reaching Out to Rural Learners. Rural Economy Series Bulletin 1.

    ERIC Educational Resources Information Center

    Further Education Unit, London (England).

    This bulletin explores sociological and technological problems encountered in a project undertaken by the East Devon College of Further Education using an audio teleconferencing system to deliver community college courses to the rural unemployed in English villages. The project team identified the characteristics of several types of rural Devon…

  7. Management of Audio-Visual Media Services. Part II. Practical Management Methods.

    ERIC Educational Resources Information Center

    Price, Robert V.

    1978-01-01

    This paper furnishes a framework that allows the local audiovisual administrator to develop a management system necessary for the instructional support of teaching through modern media and educational technology. The structure of this framework rests on organizational patterns which are explained in four categories: complete decentralization,…

  8. DIST/AVC Out-Put Definition.

    ERIC Educational Resources Information Center

    Wilkinson, Gene L.

    The first stage of development of a management information system for DIST/AVC (Division of Instructional Technology/Audio-Visual Center) is the definition of out-put units. Some constraints on the definition of output units are: 1) they should reflect goals of the organization, 2) they should reflect organizational structure and procedures, and…

  9. Things the Teacher of Your Media Utilization Course May Not Have Told You.

    ERIC Educational Resources Information Center

    Ekhaml, Leticia

    1995-01-01

    Discusses maintenance and safety information that may not be covered in a technology training program. Topics include computers, printers, televisions, video and audio equipment, electric roll laminators, overhead and slide projectors, equipment carts, power cords and outlets, batteries, darkrooms, barcode readers, Liquid Crystal Display units,…

  10. Inclusive Schooling: Are We There yet?

    ERIC Educational Resources Information Center

    Causton, Julie; Theoharis, George

    2013-01-01

    Today, when trying to find a way to an unfamiliar destination, many rely on global positioning systems, or GPS technology. "Recalibrating" and "Whenever possible make a legal U-turn" are now ubiquitous phrases in the audio backdrop to many car trips. One can think about modern-day inclusive education in similar terms. The…

  11. A Comparative Study of Compression Video Technology.

    ERIC Educational Resources Information Center

    Keller, Chris A.; And Others

    The purpose of this study was to provide an overview of compression devices used to increase the cost effectiveness of teleconferences by reducing satellite bandwidth requirements for the transmission of television pictures and accompanying audio signals. The main body of the report describes the comparison study of compression rates and their…

  12. Social Operational Information, Competence, and Participation in Online Collective Action

    ERIC Educational Resources Information Center

    Antin, Judd David

    2010-01-01

    Recent advances in interactive web technologies, combined with widespread broadband and mobile device adoption, have made online collective action commonplace. Millions of individuals work together to aggregate, annotate, and share digital text, audio, images, and video. Given the prevalence and importance of online collective action systems,…

  13. Survey on the Sources of Information in Science, Technology and Commerce in the State of Penang, Malaysia

    ERIC Educational Resources Information Center

    Tee, Lim Huck; Fong, Tang Wan

    1973-01-01

    Penang, Malaysia is undergoing rapid industrialization to stimulate its economy. A survey was conducted to determine what technical, scientific, and commercial information sources were available. Areas covered in the survey were library facilities, journals, commercial reference works and audio-visual materials. (DH)

  14. Where Did Distance Education Go Wrong?

    ERIC Educational Resources Information Center

    Baggaley, Jon

    2008-01-01

    Distance education (DE) practices around the world use a wide range of audio-visual technologies to overcome the lack of direct contact between teachers and students. These are not universally adopted by DE teachers, however, nor even encouraged by their institutions. This article discusses the organisational attitudes that can lead to outdated…

  15. Home and School Technology: Wired versus Wireless.

    ERIC Educational Resources Information Center

    Van Horn, Royal

    2001-01-01

    Presents results of informal research on smart homes and appliances, structured home wiring, whole-house audio/video distribution, hybrid cable, and wireless networks. Computer network wiring is tricky to install unless all-in-one jacketed cable is used. Wireless phones help installers avoid pre-wiring problems in homes and schools. (MLH)

  16. MIT Orients Course Materials Online to K-12

    ERIC Educational Resources Information Center

    Cavanagh, Sean

    2008-01-01

    Many science and mathematics educators across the country are taking advantage of a Web site created by the Massachusetts Institute of Technology (MIT), the famed research university located in Cambridge, Massachusetts, which offers free video, audio, and print lectures and course material taken straight from the school's classes. Those resources…

  17. 77 FR 37733 - Third Meeting: RTCA Special Committee 226, Audio Systems and Equipment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-22

    ... adaptive technology during test. Discuss results of proposed committee letter(s) to invite additional... to remove CVR Area Microphone requirements from standard in lieu of ED-112. FAA info letter... noise test requirement to combine with the vibration test. Other Business. Establish agenda for next...

  18. A Novel Method for Real-Time Audio Recording With Intraoperative Video.

    PubMed

    Sugamoto, Yuji; Hamamoto, Yasuyoshi; Kimura, Masayuki; Fukunaga, Toru; Tasaki, Kentaro; Asai, Yo; Takeshita, Nobuyoshi; Maruyama, Tetsuro; Hosokawa, Takashi; Tamachi, Tomohide; Aoyama, Hiromichi; Matsubara, Hisahiro

    2015-01-01

    Although laparoscopic surgery has become widespread, effective and efficient education in laparoscopic surgery is difficult. Instructive laparoscopy videos with appropriate annotations are ideal for initial training in laparoscopic surgery; however, the method we use at our institution for creating laparoscopy videos with audio is not generalized, and there have been no detailed explanations of any such method. Our objectives were to demonstrate the feasibility of low-cost, simple methods for recording surgical videos with audio and to perform a preliminary safety evaluation when obtaining these recordings during operations. We devised a method for the synchronous recording of surgical video with real-time audio in which we connected an amplifier and a wireless microphone to an existing endoscopy system and its equipped video-recording device. We tested this system in 209 cases of laparoscopic surgery in operating rooms between August 2010 and July 2011 and prospectively investigated the results of the audiovisual recording method and examined intraoperative problems. The setting was Numazu City Hospital in Numazu City, Japan; participants were surgeons, instrument nurses, and medical engineers. In all cases, the synchronous input of audio and video was possible. The recording system did not cause any inconvenience to the surgeon, assistants, instrument nurse, sterilized equipment, or electrical medical equipment. Statistically significant differences were not observed between the audiovisual group and control group regarding the operating time, which had been divided into 2 slots, performed by the instructors or by trainees (p > 0.05). This recording method is feasible and considerably safe while posing minimal difficulty in terms of technology, time, and expense. We recommend this method for both surgical trainees who wish to acquire surgical skills effectively and medical instructors who wish to teach surgical skills effectively. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  19. Data-driven analysis of functional brain interactions during free listening to music and speech.

    PubMed

    Fang, Jun; Hu, Xintao; Han, Junwei; Jiang, Xi; Zhu, Dajiang; Guo, Lei; Liu, Tianming

    2015-06-01

    Natural stimulus functional magnetic resonance imaging (N-fMRI) such as fMRI acquired when participants were watching video streams or listening to audio streams has been increasingly used to investigate functional mechanisms of the human brain in recent years. One of the fundamental challenges in functional brain mapping based on N-fMRI is to model the brain's functional responses to continuous, naturalistic and dynamic natural stimuli. To address this challenge, in this paper we present a data-driven approach to exploring functional interactions in the human brain during free listening to music and speech streams. Specifically, we model the brain responses using N-fMRI by measuring the functional interactions on large-scale brain networks with intrinsically established structural correspondence, and perform music and speech classification tasks to guide the systematic identification of consistent and discriminative functional interactions when multiple subjects were listening to music and speech in multiple categories. The underlying premise is that the functional interactions derived from N-fMRI data of multiple subjects should exhibit both consistency and discriminability. Our experimental results show that a variety of brain systems including attention, memory, auditory/language, emotion, and action networks are among the most relevant brain systems involved in classic music, pop music and speech differentiation. Our study provides an alternative approach to investigating the human brain's mechanisms in the comprehension of complex natural music and speech.
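
    A simplified sketch of the general pipeline described here: derive a functional-interaction vector per subject from pairwise correlations of regional time series, then use classification accuracy to gauge discriminability. The network definition, feature choice, labels, and classifier below are generic stand-ins, not the paper's method:

        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.model_selection import cross_val_score

        def functional_interactions(bold):
            """Upper-triangular Pearson correlations between regional time series."""
            corr = np.corrcoef(bold.T)                 # regions x regions
            iu = np.triu_indices_from(corr, k=1)
            return corr[iu]                            # flattened interaction vector

        # Hypothetical inputs: one (time x regions) BOLD matrix per subject/stimulus,
        # with labels 0 = classical music, 1 = pop music, 2 = speech.
        rng = np.random.default_rng(0)
        n_subjects, n_time, n_regions = 30, 200, 50
        X = np.array([functional_interactions(rng.standard_normal((n_time, n_regions)))
                      for _ in range(n_subjects)])
        y = np.repeat([0, 1, 2], 10)                   # placeholder stimulus labels

        # Classification accuracy is then used to identify interactions that are both
        # consistent across subjects and discriminative between stimulus categories.
        scores = cross_val_score(LinearSVC(max_iter=5000), X, y, cv=5)
        print("cross-validated accuracy:", scores.mean())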

  20. Improving collaboration between Primary Care Research Networks using Access Grid technology.

    PubMed

    Nagykaldi, Zsolt; Fox, Chester; Gallo, Steve; Stone, Joseph; Fontaine, Patricia; Peterson, Kevin; Arvanitis, Theodoros

    2008-01-01

    Access Grid (AG) is an Internet2-driven, high performance audio-visual conferencing technology used worldwide by academic and government organisations to enhance communication, human interaction and group collaboration. AG technology is particularly promising for improving academic multi-centre research collaborations. This manuscript describes how the AG technology was utilised by the electronic Primary Care Research Network (ePCRN) that is part of the National Institutes of Health (NIH) Roadmap initiative to improve primary care research and collaboration among practice-based research networks (PBRNs) in the USA. It discusses the design, installation and use of AG implementations, potential future applications, barriers to adoption, and suggested solutions.

  1. Multi-modal highlight generation for sports videos using an information-theoretic excitability measure

    NASA Astrophysics Data System (ADS)

    Hasan, Taufiq; Bořil, Hynek; Sangwan, Abhijeet; Hansen, John H. L.

    2013-12-01

    The ability to detect and organize 'hot spots' representing areas of excitement within video streams is a challenging research problem when techniques rely exclusively on video content. A generic method for sports video highlight selection is presented in this study, which leverages both video/image structure and audio/speech properties. Processing begins by partitioning the video into small segments, from which several multi-modal features are extracted. Excitability is computed based on the likelihood of the segmental features residing in certain regions of their joint probability density function space, regions that are considered both exciting and rare. The proposed measure is used to rank order the partitioned segments to compress the overall video sequence and produce a contiguous set of highlights. Experiments are performed on baseball videos, using signal-processing-based excitement assessment of the commentators' speech, audio energy, slow-motion replay, scene-cut density, and motion activity as features. A detailed analysis of the correlation between user excitability and various speech production parameters is conducted, and an effective scheme is designed to estimate the excitement level of the commentators' speech from the sports videos. Subjective evaluation of excitability and ranking of video segments demonstrate a higher correlation with the proposed measure than with well-established techniques, indicating the effectiveness of the overall approach.
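
    A much-simplified stand-in for the excitability measure: describe each segment by a multi-modal feature vector, fit a joint density over all segments, and rank segments that fall in low-density (rare) regions as candidate highlights. The real measure additionally restricts scoring to regions associated with excitement; everything below is illustrative:

        import numpy as np
        from scipy.stats import multivariate_normal

        def rank_highlights(features, top_k=5):
            """Rank segments by rarity under a Gaussian fit to the joint feature space."""
            mu = features.mean(axis=0)
            cov = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])
            density = multivariate_normal(mean=mu, cov=cov).pdf(features)
            rarity = -np.log(density + 1e-12)          # rare segments score high
            order = np.argsort(rarity)[::-1]
            return order[:top_k], rarity

        # 200 segments, each with 5 placeholder features (e.g., audio energy,
        # speech-pitch statistics, replay flag, cut density, motion activity).
        rng = np.random.default_rng(1)
        segment_features = rng.standard_normal((200, 5))
        idx, scores = rank_highlights(segment_features)
        print("candidate highlight segments:", idx)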

  2. pyAudioAnalysis: An Open-Source Python Library for Audio Signal Analysis.

    PubMed

    Giannakopoulos, Theodoros

    2015-01-01

    Audio information plays a rather important role in the increasing amount of digital content available today, resulting in a need for methodologies that automatically analyze such content: audio event recognition for home automation and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g. audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures, including feature extraction, classification of audio signals, supervised and unsupervised segmentation and content visualization. pyAudioAnalysis is licensed under the Apache License and is available at GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the wide range of the implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has already been used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation and health applications (e.g. monitoring eating habits). The feedback provided by all these audio applications has led to practical enhancement of the library.
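
    A minimal feature-extraction sketch with pyAudioAnalysis, following the project's documented quick-start; module and function names may differ between library releases, and the input file name is a placeholder:

        # Hedged quick-start sketch for pyAudioAnalysis; API names follow the
        # project's documented examples and may vary across versions.
        from pyAudioAnalysis import audioBasicIO, ShortTermFeatures

        fs, signal = audioBasicIO.read_audio_file("speech_sample.wav")  # any local WAV file
        # 50 ms analysis frames with a 25 ms hop, as in the library's examples.
        features, feature_names = ShortTermFeatures.feature_extraction(
            signal, fs, int(0.050 * fs), int(0.025 * fs))
        print(features.shape[0], "short-term features per frame, e.g.:", feature_names[:3])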

  3. pyAudioAnalysis: An Open-Source Python Library for Audio Signal Analysis

    PubMed Central

    Giannakopoulos, Theodoros

    2015-01-01

    Audio information plays a rather important role in the increasing amount of digital content available today, resulting in a need for methodologies that automatically analyze such content: audio event recognition for home automation and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g. audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures, including feature extraction, classification of audio signals, supervised and unsupervised segmentation and content visualization. pyAudioAnalysis is licensed under the Apache License and is available at GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the wide range of the implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has already been used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation and health applications (e.g. monitoring eating habits). The feedback provided by all these audio applications has led to practical enhancement of the library. PMID:26656189

  4. Review of past, present, and future of recording technology

    NASA Astrophysics Data System (ADS)

    Al-Jibouri, Abdul-Rahman

    2003-03-01

    The revolution in information storage and recording has advanced significantly over the past two decades. Since the development of computers by IBM in the early 1950s, data (information) have been stored on magnetic discs by inducing magnetic flux to define the bit direction. The first disc, developed by IBM, had a diameter of 25 inches and stored around 10 kByte. Over the following four decades, the disc drive became more advanced through reduced drive size, increased areal density, and reduced cost. The introduction of new computer operating systems and the Internet in the 1990s created the need for higher areal density. Disc drive manufacturers were therefore pushed to develop new technologies at low cost to survive in a competitive market. Disc drives are based on media (where the data/information is stored) and a head (which writes and reads the data/information). The head and disc are separated; with current technology the spacing between them is about 40 nm. A new technology based on magnetic recording was developed to serve the audio market. This technology, called magnetic tape, is similar to the disc drive, but the medium is tape rather than a rigid disc; another difference is that the head and media are in direct contact. Magnetic tape was developed for audio applications, and a few years later the technology was extended to video, allowing consumers to record and view movies at home. Magnetic tape was also adopted by the computer industry for data backup. It is still used in computing and has advanced further over the past decade; companies such as Quantum Corp have developed digital linear tape.

  5. Paramedic student performance: comparison of online with on-campus lecture delivery methods.

    PubMed

    Hubble, Michael W; Richards, Michael E

    2006-01-01

    Colleges and universities are experiencing increasing demand for online courses in many healthcare disciplines, including emergency medical services (EMS). Development and implementation of online paramedic courses with the quality of education experienced in the traditional classroom setting is essential in order to maintain the integrity of the educational process. Currently, there is conflicting evidence of whether a significant difference exists in student performance between online and traditional nursing and allied health courses. However, there are no published investigations of the effectiveness of online learning by paramedic students. Performance of paramedic students enrolled in an online, undergraduate, research methods course is equivalent to the performance of students enrolled in the same course provided in a traditional, classroom environment. Academic performance, learning styles, and course satisfaction surveys were compared between two groups of students. The course content was identical for both courses and taught by the same instructor during the same semester. The primary difference between the traditional course and the online course was the method of lecture delivery. Lectures for the on-campus students were provided live in a traditional classroom setting using PowerPoint slides. Lectures for the online students were provided using the same PowerPoint slides with prerecorded streaming audio and video. A convenience sample of 23 online and 10 traditional students participated in this study. With the exception of two learning domains, the two groups of students exhibited similar learning styles as assessed using the Grasha-Riechmann Student Learning Style Scales instrument. The online students scored significantly lower in the competitive and dependent dimensions than did the on-campus students. Academic performance was similar between the two groups. The online students devoted slightly more time to the course than did the campus students, although this difference did not reach statistical significance. In general, the online students believed the online audio lectures were more effective than the traditional live lectures. Distance learning technology appears to be an effective mechanism for extending didactic paramedic education off-campus, and may be beneficial particularly to areas that lack paramedic training programs or adequate numbers of qualified instructors.

  6. Using Text Mining to Uncover Students' Technology-Related Problems in Live Video Streaming

    ERIC Educational Resources Information Center

    Abdous, M'hammed; He, Wu

    2011-01-01

    Because of their capacity to sift through large amounts of data, text mining and data mining are enabling higher education institutions to reveal valuable patterns in students' learning behaviours without having to resort to traditional survey methods. In an effort to uncover live video streaming (LVS) students' technology related-problems and to…

  7. Industrial-Strength Streaming Video.

    ERIC Educational Resources Information Center

    Avgerakis, George; Waring, Becky

    1997-01-01

    Corporate training, financial services, entertainment, and education are among the top applications for streaming video servers, which send video to the desktop without downloading the whole file to the hard disk, saving time and eliminating copyrights questions. Examines streaming video technology, lists ten tips for better net video, and ranks…

  8. Video streaming in nursing education: bringing life to online education.

    PubMed

    Smith-Stoner, Marilyn; Willer, Ann

    2003-01-01

    Distance education is a standard form of instruction for many colleges of nursing. Web-based course and program content has been delivered primarily through text-based presentations such as PowerPoint slides and Web search activities. However, the rapid pace of technological innovation is making available more sophisticated forms of delivery such as video streaming. High-quality video streams, created at the instructor's desktop or in basic recording studios, can be produced that build on PowerPoint or create new media for use on the Web. The technology required to design, produce, and upload short video-streamed course content objects to the Internet is described. The preparation of materials, suggested production guidelines, and examples of information presented via desktop video methods are presented.

  9. The Role of Corporate and Government Surveillance in Shifting Journalistic Information Security Practices

    ERIC Educational Resources Information Center

    Shelton, Martin L.

    2015-01-01

    Digital technologies have fundamentally altered how journalists communicate with their sources, enabling them to exchange information through social media as well as video, audio, and text chat. Simultaneously, journalists are increasingly concerned with corporate and government surveillance as a threat to their ability to speak with sources in…

  10. Digitizing the Past: A History Book on CD-ROM.

    ERIC Educational Resources Information Center

    Rosenzweig, Roy

    1993-01-01

    Describes the development of an American history book with interactive CD-ROM technology that includes text, pictures, graphs and charts, audio, and film. Topics discussed include the use of HyperCard software to link information; access to primary sources of information; greater student control over learning; and the concept of collaborative…

  11. The Full Monty: Locating Resources, Creating, and Presenting a Web Enhanced History Course.

    ERIC Educational Resources Information Center

    Bazillion, Richard J.; Braun, Connie L.

    2001-01-01

    Discusses how to develop a history course using the World Wide Web; course development software; full text digitized articles, electronic books, primary documents, images, and audio files; and computer equipment such as LCD projectors and interactive whiteboards. Addresses the importance of support for faculty using technology in teaching. (PAL)

  12. Using Screencasts to Enhance Assessment Feedback: Students' Perceptions and Preferences

    ERIC Educational Resources Information Center

    Marriott, Pru; Teoh, Lim Keong

    2012-01-01

    In the UK, assessment and feedback have been regularly highlighted by the National Student Survey as critical aspects that require improvement. An innovative approach to delivering feedback that has proved successful in non-business-related disciplines is the delivery of audio and visual feedback using screencast technology. The feedback on…

  13. CFOs Talk about Finances: Glimmers of Hope

    ERIC Educational Resources Information Center

    Antolovic, Laurie G.; Horvath, Albert G.; Plympton, Margaret F.

    2009-01-01

    In May 2009, three senior financial leaders in higher education--Laurie G. Antolovic, Albert G. Horvath, and Margaret F. Plympton--participated in a panel session, moderated by Philip J. Goldstein, at the EDUCAUSE Enterprise Information and Technology Conference. The three also shared their thoughts in an audio interview conducted by Gerry Bayne,…

  14. "The Source": An Alternate Reality Game to Spark STEM Interest and Learning among Underrepresented Youth

    ERIC Educational Resources Information Center

    Gilliam, Melissa; Bouris, Alida; Hill, Brandon; Jagoda, Patrick

    2016-01-01

    Alternate Reality Games (ARGs) are multiplayer role-playing games that use the real world as their primary platform and incorporate a range of media, including video, audio, email, mobile technologies, websites, live performance, and social networks. This paper describes the development, implementation, and player reception of "The…

  15. Overview of the Use of Media in Distance Education. I.E.T. Paper on Broadcasting No. 220.

    ERIC Educational Resources Information Center

    Bates, A. W.

    This paper reviews the use of different audio-visual media in distance education, including terrestrial broadcasting, cable satellite, videocassettes, audiocassettes, telephone teaching, viewdata, teletext, microcomputers, and interactive video. Trends in distance education are also summarized and related to trends in media technology development.…

  16. Culture & Technology[TM]. [CD-ROM].

    ERIC Educational Resources Information Center

    2000

    This three CD-ROM set is designed to integrate social studies and science. There are 1,300 lessons developed and field tested by curriculum specialists, teachers, and students over a period of 15 years. Using dramatic video, audio, and photos, students can make connections between diet and temperature, location and climate, safety and energy,…

  17. Interactive Videodisc as a Component in a Multi-Method Approach to Anatomy and Physiology.

    ERIC Educational Resources Information Center

    Wheeler, Donald A.; Wheeler, Mary Jane

    At Cuyahoga Community College (Ohio), computer-controlled interactive videodisc technology is being used as one of several instructional methods to teach anatomy and physiology. The system has the following features: audio-visual instruction, interaction with immediate feedback, self-pacing, fill-in-the-blank quizzes for testing total recall,…

  18. The Nature of Discourse as Students Collaborate on a Mathematics WebQuest

    ERIC Educational Resources Information Center

    Orme, Michelle P.; Monroe, Eula Ewing

    2005-01-01

    Students were audio taped while working in teams on a WebQuest. Although gender-segregated, each team included both fifth- and sixth-graders. Interactions from two tasks were analyzed according to categories (exploratory, cumulative, disputational, tutorial) defined by the Spoken Language and New Technology (SLANT) project (e.g., Wegerif &…

  19. iDocument: How Smartphones and Tablets Are Changing Documentation in Preschool and Primary Classrooms

    ERIC Educational Resources Information Center

    Parnell, Will; Bartlett, Jackie

    2012-01-01

    With the increased prevalence of smartphones, laptops, tablet computers, and other digital technologies, knowledge about and familiarity with the educational uses for these devices is important for early childhood teachers documenting children's learning. Teachers can use smartphones every day to take photos, record video and audio, and make…

  20. Lecture-Recording Technology in Higher Education: Exploring Lecturer and Student Views across the Disciplines

    ERIC Educational Resources Information Center

    Dona, Kulari Lokuge; Gregory, Janet; Pechenkina, Ekaterina

    2017-01-01

    This paper presents findings of an institutional case study investigating how students and lecturers experienced a new opt-out, fully integrated lecture-recording system which enabled audio and presentation screen capture. The study's focus is on how "traditional" students (generally characterised as young, enrolled full-time and…

  1. Application of Multimedia Technologies to Enhance Distance Learning

    ERIC Educational Resources Information Center

    Buckley, Wendy; Smith, Alexandra

    2008-01-01

    Educators' use of multimedia enhances the online learning experience by presenting content in a combination of audio, video, graphics, and text in various formats to address a range of student learning styles. Many personnel preparation programs in visual impairments have turned to online education to serve students over a larger geographic area.…

  2. Profcasts and Class Attendance--Does Year in Program Matter?

    ERIC Educational Resources Information Center

    Holbrook, Jane; Dupont, Christine

    2009-01-01

    The use of technology to capture the audio and visual elements of lectures, to engage students in course concepts, and to provide feedback to assignments has become a mainstream practice in higher education through podcasting and lecture capturing mechanisms. Instructors can create short podcasts or videos to produce "nuggets" of information for…

  3. Using Blogs to Improve Differentiated Instruction

    ERIC Educational Resources Information Center

    Colombo, Michaela W.; Colombo, Paul D.

    2007-01-01

    The authors discuss how the instructional impact of science teachers can be extended by using blogs, a technology popular among students that allows teachers to differentiate their instruction for students with diverse needs. Software now makes it easy for teachers to establish class blogs, Web sites that contain text, audio, and video postings on…

  4. Examination of Tablet Usage by 4 Years Old Pre-School Student

    ERIC Educational Resources Information Center

    Bengisoy, Ayse

    2017-01-01

    Accurate usage of tablet etc. devices in growth period is essential in terms of development performance. Tablet usage for education and teaching supports audio-visual memory; however, an examination of the consequences of continued usage reveals serious problems. Technology is essential in terms of communication and reaching information in the…

  5. Social Technology as a New Medium in the Classroom

    ERIC Educational Resources Information Center

    Yan, Jeffrey

    2008-01-01

    New modes of everyday communication--textual, visual, audio and video--are already part of almost every high school and college student's social life. Can such social networking principles be effective in an educational setting? In this article, the author describes how the students at his school, Rhode Island School of Design (RISD), are provided…

  6. Golden Oldies: Using Digital Recording to Capture History

    ERIC Educational Resources Information Center

    Langhorst, Eric

    2008-01-01

    Analog audio recording has been around for a long time, but today's digital technology makes the process even easier, thanks to inexpensive equipment and free editing software. This year, the author's students at South Valley Junior High in Liberty, Missouri, will embark on an oral history project in which they will record their own family…

  7. MEMS microphone innovations towards high signal to noise ratios (Conference Presentation) (Plenary Presentation)

    NASA Astrophysics Data System (ADS)

    Dehé, Alfons

    2017-06-01

    After decades of research and more than ten years of successful production in very high volumes, silicon MEMS microphones are mature and unbeatable in form factor and robustness. Audio applications such as video recording, noise cancellation and speech recognition are key differentiators in smartphones. Microphones with low self-noise enable those functions. Backplate-free microphones achieve signal-to-noise ratios above 70 dB(A). This talk describes the state-of-the-art MEMS technology of Infineon Technologies. An outlook on future technologies such as the comb sensor microphone will be given.

  8. Newly available technologies present expanding opportunities for scientific and technical information exchange

    NASA Technical Reports Server (NTRS)

    Tolzman, Jean M.

    1993-01-01

    The potential for expanded communication among researchers, scholars, and students is supported by growth in the capabilities for electronic communication as well as expanding access to various forms of electronic interchange and computing capabilities. Increased possibilities for information exchange, collegial dialogue, collaboration, and access to remote resources exist as high-speed networks, increasingly powerful workstations, and large, multi-user computational facilities are more frequently linked and more commonly available. Numerous writers speak of the telecommunications revolution and its impact on the development and dissemination of knowledge and learning. One author offers the phrase 'Scholarly skywriting' to represent a new form of scientific communication that he envisions using electronic networks. In the United States (U.S.), researchers associated with the National Science Foundation (NSF) are exploring 'nationwide collaboratories' and 'digital collaboration.' Research supported by the U.S. National Aeronautics and Space Administration (NASA) points to a future where workstations with built-in audio, video monitors, and screen sharing protocols are used to support collaborations with colleagues located throughout the world. Instruments and sensors located worldwide will produce data streams that will be brought together, analyzed, and distributed as new findings. Researchers will have access to machines that can supply domain-specific information in addition to locator and directory assistance. New forms of electronic journals will emerge and provide opportunities for researchers and scientists to exchange information electronically and interactively in a range of structures and formats. Ultimately, the wide-scale use of these technologies in the dissemination of research results and the stimulation of collegial dialogue will change the way we represent and express our knowledge of the world. A new paradigm will evolve--perhaps a truly worldwide 'invisible college.'

  9. Independent transmission of sign language interpreter in DVB: assessment of image compression

    NASA Astrophysics Data System (ADS)

    Zatloukal, Petr; Bernas, Martin; Dvořák, Lukáš

    2015-02-01

    Sign language on television provides information to deaf viewers that they cannot get from the audio content. If we consider the transmission of the sign language interpreter over an independent data stream, the aim is to ensure sufficient intelligibility and subjective image quality of the interpreter with minimum bit rate. The work deals with ROI-based video compression of a Czech sign language interpreter, implemented in the x264 open-source library. The results of this approach are verified in subjective tests with the deaf. They examine the intelligibility of sign language expressions containing minimal pairs for different levels of compression and various resolutions of the image containing the interpreter, and evaluate the subjective quality of the final image for a good viewing experience.
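
    Conceptually, ROI-based compression of the interpreter amounts to shifting bits toward the interpreter region, for example by lowering the quantizer there and raising it elsewhere. The sketch below only builds such a per-macroblock quantizer-offset map; the offset values, frame size, and bounding box are illustrative, and the actual x264 integration used in the paper is not reproduced here:

        import numpy as np

        def roi_qp_offsets(width, height, roi, qp_bonus=-6.0, qp_penalty=+4.0, mb=16):
            """Build a per-macroblock QP-offset map: negative offsets (more bits)
            inside the interpreter's bounding box, positive offsets elsewhere."""
            mbs_x, mbs_y = (width + mb - 1) // mb, (height + mb - 1) // mb
            offsets = np.full((mbs_y, mbs_x), qp_penalty, dtype=np.float32)
            x0, y0, x1, y1 = roi  # interpreter bounding box in pixels
            offsets[y0 // mb: (y1 + mb - 1) // mb, x0 // mb: (x1 + mb - 1) // mb] = qp_bonus
            return offsets

        # 720x576 frame with the interpreter in the lower-right quadrant (illustrative).
        offsets = roi_qp_offsets(720, 576, roi=(480, 288, 720, 576))
        print(offsets.shape)   # (36, 45) macroblock grid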

  10. Development of a Video Coding Scheme for Analyzing the Usability and Usefulness of Health Information Systems.

    PubMed

    Kushniruk, Andre W; Borycki, Elizabeth M

    2015-01-01

    Usability has been identified as a key issue in health informatics. Worldwide numerous projects have been carried out in an attempt to increase and optimize health system usability. Usability testing, involving observing end users interacting with systems, has been widely applied and numerous publications have appeared describing such studies. However, to date, fewer works have been published describing methodological approaches to analyzing the rich data stream that results from usability testing. This includes analysis of video, audio and screen recordings. In this paper we describe our work in the development and application of a coding scheme for analyzing the usability of health information systems. The phases involved in such analyses are described.

  11. Integrating Streaming Media to Web-based Learning: A Modular Approach.

    ERIC Educational Resources Information Center

    Miltenoff, Plamen

    2000-01-01

    Explains streaming technology and discusses how to integrate it into Web-based instruction based on experiences at St. Cloud State University (Minnesota). Topics include a modular approach, including editing, copyright concerns, digitizing, maintenance, and continuing education needs; the role of the library; and how streaming can enhance…

  12. Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot

    PubMed Central

    Tidoni, Emmanuele; Gergondet, Pierre; Kheddar, Abderrahmane; Aglioti, Salvatore M.

    2014-01-01

    Advancement in brain computer interface (BCI) technology allows people to actively interact in the world through surrogates. Controlling real humanoid robots using BCI as intuitively as we control our body represents a challenge for current research in robotics and neuroscience. In order to successfully interact with the environment, the brain integrates multiple sensory cues to form a coherent representation of the world. Cognitive neuroscience studies demonstrate that multisensory integration may imply a gain with respect to a single modality and ultimately improve the overall sensorimotor performance. For example, reactivity to simultaneous visual and auditory stimuli may be higher than to the sum of the same stimuli delivered in isolation or in temporal sequence. Yet, knowledge about whether audio-visual integration may improve the control of a surrogate is meager. To explore this issue, we provided human footstep sounds as audio feedback to BCI users while controlling a humanoid robot. Participants were asked to steer their robot surrogate and perform a pick-and-place task through BCI-SSVEPs. We found that audio-visual synchrony between footstep sounds and the humanoid's actual walk reduces the time required for steering the robot. Thus, auditory feedback congruent with the humanoid actions may improve the BCI user's motor decisions and strengthen the feeling of control over the robot. Our results shed light on the possibility of increasing control over a robot by providing multisensory feedback to the BCI user. PMID:24987350

  13. Driver's behavioural changes with new intelligent transport system interventions at railway level crossings--A driving simulator study.

    PubMed

    Larue, Grégoire S; Kim, Inhi; Rakotonirainy, Andry; Haworth, Narelle L; Ferreira, Luis

    2015-08-01

    Improving safety at railway level crossings is an important issue for the Australian transport system. Governments, the rail industry and road organisations have tried a variety of countermeasures for many years to improve railway level crossing safety. New types of intelligent transport system (ITS) interventions are now emerging due to the availability and the affordability of technology. These interventions target both actively and passively protected railway level crossings and attempt to address drivers' errors at railway crossings, which are mainly a failure to detect the crossing or the train and misjudgement of the train approach speed and distance. This study aims to assess the effectiveness of three emerging ITS interventions that the rail industry is considering implementing in Australia: a visual in-vehicle ITS, an audio in-vehicle ITS, as well as an on-road flashing beacons intervention. The evaluation was conducted on an advanced driving simulator with 20 participants per trialled technology, each participant driving once without any technology and once with one of the ITS interventions. Every participant drove through a range of active and passive crossings with and without trains approaching. Their approach speed at the crossing, head movements, and stopping compliance were measured. Results showed that driver behaviour was changed with the three ITS interventions at passive crossings, while limited effects were found at active crossings, even with reduced visibility. The on-road intervention trialled was unsuccessful in improving driver behaviour; the audio and visual ITS improved driver behaviour when a train was approaching. A trend toward worsening driver behaviour with the visual ITS was observed when no trains were approaching. This trend was not observed for the audio ITS intervention, which appears to be the ITS intervention with the highest potential for improving safety at passive crossings. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Intern Abstract for Spring 2016

    NASA Technical Reports Server (NTRS)

    Gibson, William

    2016-01-01

    The Human Interface Branch - EV3 - is evaluating organic light-emitting diodes (OLEDs) as an upgrade for current displays on future spacecraft. OLEDs have many advantages over current displays. Conventional displays require constant backlighting, which draws considerable power, whereas OLED pixels generate their own light. OLEDs are lighter, and weight is always a concern with space launches. OLEDs also offer greater viewing angles. OLEDs have been in the commercial market for almost ten years now. What is not known is how they will perform in a space-like environment, specifically deep space far away from the Earth's magnetosphere. In this environment, the OLEDs can be expected to experience vacuum and galactic radiation. The intern's responsibility has been to prepare the OLED for a battery of tests. Unfortunately, it will not be ready for testing at the end of the internship. That being said, much progress has been made: a) Developed procedures to safely disassemble the tablet. b) Inventoried and identified critical electronic components. c) 3D printed a testing apparatus. d) Wrote software in Python that will test the OLED screen while it is being irradiated. e) Built circuits to restart the tablet and the test pattern, and to ensure the tablet does not fall asleep during radiation testing. f) Built an enclosure that will house all of the electronics. Also, the intern has been working on a way to take messages from a simulated Caution and Warning system, process those messages into packets, send the audio packets to a multicast address that audio boxes are listening to, and output spoken audio. Currently, Cautions and Warnings use a tone to alert crew members of a situation, and then crew members have to read through their checklists to determine what the tone means. In urgent situations, EV3 wants to deliver concise and specific alerts to the crew to facilitate any mitigation efforts on their part. Significant progress was made on this project: a) Opened a channel with the simulated Caution and Warning system to acquire messages. b) Configured the audio boxes. c) Gathered pre-recorded audio files. d) Packetized the audio stream. A third project was to implement LED indicator modules for an Omnibus project. The Omnibus project is investigating better ways of designing lighting for the interior of spacecraft, both spacecraft lighting and avionics box status indication. The current scheme contains too much of the blue light spectrum, which disrupts the sleep cycle. The LED indicator modules are to simulate the indicators running on a spacecraft. Lighting data will be gathered by human factors personnel and used in a model under development to model spacecraft lighting. Significant progress was made on this project: a) Designed the circuit layout. b) Tested LEDs at the LETF. c) Created a GUI for the indicators. d) Created code for the Arduino that will illuminate the indicator modules.
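
    A minimal sketch of the packetize-and-multicast step described above, written in Python (the language the abstract mentions for the screen-test software); the multicast group, port, chunk size, and file name are placeholder assumptions rather than the actual EV3 configuration.

      import socket
      import wave

      MCAST_GROUP = "239.1.2.3"   # assumed multicast address the audio boxes listen to
      MCAST_PORT = 5004           # assumed port
      FRAMES_PER_PACKET = 256     # assumed packet size

      def stream_alert(wav_path):
          """Read a pre-recorded alert and send it as a stream of UDP multicast packets."""
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
          sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # keep traffic on the local segment
          with wave.open(wav_path, "rb") as wav:
              while True:
                  frames = wav.readframes(FRAMES_PER_PACKET)
                  if not frames:
                      break
                  sock.sendto(frames, (MCAST_GROUP, MCAST_PORT))

      stream_alert("cabin_pressure_warning.wav")  # hypothetical pre-recorded Caution and Warning audio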

  15. Home Telehealth Video Conferencing: Perceptions and Performance

    PubMed Central

    Morris, Greg; Pech, Joanne; Rechter, Stuart; Carati, Colin; Kidd, Michael R

    2015-01-01

    Background The Flinders Telehealth in the Home trial (FTH trial), conducted in South Australia, was an action research initiative to test and evaluate the inclusion of telehealth services and broadband access technologies for palliative care patients living in the community and home-based rehabilitation services for the elderly at home. Telehealth services at home were supported by video conferencing between a therapist, nurse or doctor, and a patient using the iPad tablet. Objective The aims of this study are to identify which technical factors influence the quality of video conferencing in the home setting and to assess the impact of these factors on the clinical perceptions and acceptance of video conferencing for health care delivery into the home. Finally, we aim to identify any relationships between technical factors and clinical acceptance of this technology. Methods An action research process developed several quantitative and qualitative procedures during the FTH trial to investigate technology performance and users' perceptions of the technology, including measurements of signal power, data transmission throughput, objective assessment of user perceptions of videoconference quality, and questionnaires administered to clinical users. Results The effectiveness of telehealth was judged by clinicians as equivalent to or better than a home visit on 192 (71.6%, 192/268) occasions, and clinicians rated the experience of conducting a telehealth session compared with a home visit as equivalent or better in 90.3% (489/540) of the sessions. The quality of video conferencing when using a third-generation mobile data service (3G) was a concern in comparison with broadband fiber-based services, as 23.5% (220/936) of the calls failed during the telehealth sessions. The experimental field tests indicated that video conferencing audio and video quality was worse when using mobile data services compared with fiber to the home services. In addition, statistically significant associations were found between audio/video quality and patient comfort with the technology as well as the clinician ratings for effectiveness of telehealth. Conclusions These results showed that the quality of video conferencing when using 3G-based mobile data services instead of broadband fiber-based services was lower due to failed calls, audio/video jitter, and video pixelation during the telehealth sessions. Nevertheless, clinicians felt able to deliver effective services to patients at home using 3G-based mobile data services. PMID:26381104

  16. Voice of the Rivers: Quantifying the Sound of Rivers into Streamflow and Using the Audio for Education and Outreach

    NASA Astrophysics Data System (ADS)

    Santos, J.

    2014-12-01

    I have two goals with my research. 1. I proposed that sound recordings can be used to detect the amount of water flowing in a particular river, which could then be used to measure stream flow in rivers that have no instrumentation. My locations are in remote watersheds where hand instrumentation is the only means to collect data. I record 15 minute samples, at varied intervals, of the streams with a stereo microphone suspended above the river perpendicular to stream flow forming a "profile" of the river that can be compared to other stream-flow measurements of these areas over the course of a year. Through waveform analysis, I found a distinct voice for each river and I am quantifying the sound to track the flow based on amplitude, pitch, and wavelengths that these rivers produce. 2. Additionally, I plan to also use my DVD quality sound recordings with professional photos and HD video of these remote sites in education, outreach, and therapeutic venues. The outreach aspect of my research follows my goal of bridging communication between researchers and the public. Wyoming rivers are unique in that we export 85% of our water downstream. I would also like to take these recordings to schools, set up speakers in the four corners of a classroom and let the river flow as the teacher presents on water science. Immersion in an environment can help the learning experience of students. I have seen firsthand the power of drawing someone into an environment through sound and video. I will have my river sounds with me at AGU presented as an interactive touch-screen sound experience.
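
    As a rough illustration of the kind of waveform analysis described above, the following Python sketch reduces one recording to two simple features, RMS amplitude and dominant frequency, which could then be compared against hand-measured streamflow; the file name and the choice of features are assumptions, not the author's actual processing chain.

      import numpy as np
      from scipy.io import wavfile

      def river_features(wav_path):
          rate, samples = wavfile.read(wav_path)
          if samples.ndim > 1:                      # collapse the stereo "profile" to mono
              samples = samples.mean(axis=1)
          samples = samples.astype(np.float64)
          rms = np.sqrt(np.mean(samples ** 2))      # loudness proxy
          spectrum = np.abs(np.fft.rfft(samples))
          freqs = np.fft.rfftfreq(samples.size, d=1.0 / rate)
          dominant_hz = freqs[np.argmax(spectrum)]  # coarse "pitch" of the river
          return rms, dominant_hz

      # rms, hz = river_features("wind_river_sample.wav")  # hypothetical 15-minute recording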

  17. NFL Films music scoring stage and control room space

    NASA Astrophysics Data System (ADS)

    Berger, Russ; Schrag, Richard C.; Ridings, Jason J.

    2003-04-01

    NFL Films' new 200,000 sq. ft. corporate headquarters is home to an orchestral scoring stage used to record custom music scores to support and enhance their video productions. Part of the 90,000 sq. ft. of sound-critical technical space, the music scoring stage and its associated control room are at the heart of the audio facilities. Driving the design were the owner's mandate for natural light, wood textures, and an acoustical environment that would support small rhythm sections, soloists, and a full orchestra. Being an industry leader in cutting-edge video and audio formats, the NFLF required that the technical spaces allow the latest in technology to be continually integrated into the infrastructure. Never was it more important for a project to hold true to the adage of "designing from the inside out." Each audio and video space within the facility had to stand on its own with regard to user functionality, acoustical accuracy, sound isolation, noise control, and monitor presentation. A detailed look at the architectural and acoustical design challenges encountered and the solutions developed for the performance studio and the associated control room space will be discussed.

  18. A Stream Runs through IT: Using Streaming Video to Teach Information Technology

    ERIC Educational Resources Information Center

    Nicholson, Jennifer; Nicholson, Darren B.

    2010-01-01

    Purpose: The purpose of this paper is to report student and faculty perceptions from an introductory management information systems course that uses multimedia, specifically streaming video, as a vehicle for teaching students skills in Microsoft Excel and Access. Design/methodology/approach: Student perceptions are captured via a qualitative…

  19. Using Web 2.0 for Learning in the Community

    ERIC Educational Resources Information Center

    Mason, Robin; Rennie, Frank

    2007-01-01

    This paper describes the use of a range of Web 2.0 technologies to support the development of community for a newly formed Land Trust on the Isle of Lewis, in NW Scotland. The application of social networking tools in text, audio and video has several purposes: informal learning about the area to increase tourism, community interaction,…

  20. Human Language Technology: Opportunities and Challenges

    DTIC Science & Technology

    2005-01-01

    because of the connections to and reliance on signal processing. Audio diarization critically includes indexing of speakers [12], since speaker ...to reduce inter-speaker variability in training. Standard techniques include vocal-tract length normalization, adaptation of acoustic models using...maximum likelihood linear regression (MLLR), and speaker-adaptive training based on MLLR. The acoustic models are mixtures of Gaussians, typically with

  1. Teachers' Perceptions about Teaching Multimodal Composition: The Case Study of Korean English Teachers at Secondary Schools

    ERIC Educational Resources Information Center

    Ryu, Jung; Boggs, George

    2016-01-01

    Twenty-first-century literacy is not confined to communication based on reading and writing only traditional printed texts. New kinds of literacies extend to multimedia projects and multimodal texts, which include visual, audio, and technological elements to create meanings. The purpose of this study is to explore how Korean secondary English…

  2. Using Videos and Multimodal Discourse Analysis to Study How Students Learn a Trade

    ERIC Educational Resources Information Center

    Chan, Selena

    2013-01-01

    The use of video to assist with ethnographical-based research is not a new phenomenon. Recent advances in technology have reduced the costs and technical expertise required to use videos for gathering research data. Audio-visual records of learning activities as they take place, allow for many non-vocal and inter-personal communication…

  3. Audio direct broadcast satellites

    NASA Technical Reports Server (NTRS)

    Miller, J. E.

    1983-01-01

    Satellite sound broadcasting is, as the name implies, the use of satellite techniques and technology to broadcast directly from space to low-cost, consumer-quality receivers the types of sound programs commonly received in the AM and FM broadcast bands. It would be a ubiquitous service available to the general public in the home, in the car, and out in the open.

  4. Anticipating the Exception, Not the Rule: Forming Policy for Student Use of Technology in the Classroom

    ERIC Educational Resources Information Center

    Becker, Daniel

    2013-01-01

    Students across institutions of higher learning come equipped with pocket-sized devices that allow them to record images, audio, and video from their classrooms, and instantaneously edit and share recorded content with a limitless audience. Prior to commencing instruction, postsecondary instructors are advised to learn the policy of their…

  5. Enhancing the Learning and Retention of Biblical Languages for Adult Students

    ERIC Educational Resources Information Center

    Morse, MaryKate

    2004-01-01

    Finding ways to reduce students' anxiety and maximize the value of learning Greek and Hebrew is a continual challenge for biblical language teachers. Some language teachers use technology tools such as web sites or CDs with audio lessons to improve the experience. Though these tools are helpful, this paper explores the value gained from…

  6. A Telepresence Learning Environment for Opera Singing: Distance Lessons Implementations over Internet2

    ERIC Educational Resources Information Center

    Alpiste Penalba, Francisco; Rojas-Rajs, Teresa; Lorente, Pedro; Iglesias, Francisco; Fernández, Joaquín; Monguet, Josep

    2013-01-01

    The Opera eLearning project developed a solution for opera singing distance lessons at the graduate level, using high bandwidth to deliver a quality audio and video experience that has been evaluated by singing teachers, chorus and orchestra directors, singers and other professional musicians. Prior to finding a technological model that suits the…

  7. Sounds Good to Me: Using Digital Audio in the Social Studies Classroom

    ERIC Educational Resources Information Center

    Lipscomb, George B.; Guenther, Lisa Marie; McLeod, Perry

    2007-01-01

    In social studies, the incorporation of technology presents some unique opportunities. With such innovations as blogging, interactive mapping, digital resources and others entering social studies classrooms, there is great potential for teachers, but it is hard to know where to begin. In this article, the authors focus on one familiar, yet rapidly…

  8. Physics Based Modeling in Design and Development for U.S. Defense Held in Denver, Colorado on November 14-17, 2011. Volume 2: Audio and Movie Files

    DTIC Science & Technology

    2011-11-17

    Mr. Frank Salvatore, High Performance Technologies FIXED AND ROTARY WING AIRCRAFT 13274 - “CREATE-AV DaVinci : Model-Based Engineering for Systems... Tools for Reliability Improvement and Addressing Modularity Issues in Evaluation and Physical Testing”, Dr. Richard Heine, Army Materiel Systems

  9. A Methodological Approach to Support Collaborative Media Creation in an E-Learning Higher Education Context

    ERIC Educational Resources Information Center

    Ornellas, Adriana; Muñoz Carril, Pablo César

    2014-01-01

    This article outlines a methodological approach to the creation, production and dissemination of online collaborative audio-visual projects, using new social learning technologies and open-source video tools, which can be applied to any e-learning environment in higher education. The methodology was developed and used to design a course in the…

  10. 3D Sound Interactive Environments for Blind Children Problem Solving Skills

    ERIC Educational Resources Information Center

    Sanchez, Jaime; Saenz, Mauricio

    2006-01-01

    Audio-based virtual environments have been increasingly used to foster cognitive and learning skills. A number of studies have also highlighted that the use of technology can help learners to develop effective skills such as motivation and self-esteem. This study presents the design and usability of 3D interactive environments for children with…

  11. A Research on a Student-Centred Teaching Model in an ICT-Based English Audio-Video Speaking Class

    ERIC Educational Resources Information Center

    Lu, Zhihong; Hou, Leijuan; Huang, Xiaohui

    2010-01-01

    The development and application of Information and Communication Technologies (ICT) in the field of Foreign Language Teaching (FLT) have had a considerable impact on the teaching methodologies in China. With an increasing emphasis on strengthening students' learning initiative and adopting a "student-centred" teaching concept in FLT,…

  12. The Efficacy of Screencasts to Address the Diverse Academic Needs of Students in a Large Lecture Course

    ERIC Educational Resources Information Center

    Pinder-Grover, Tershia; Green, Katie R.; Millunchick, Joanna Mirecki

    2011-01-01

    In large lecture courses, it can be challenging for instructors to address student misconceptions, supplement background knowledge, and identify ways to motivate the various interests of all students during the allotted class time. Instructors can harness instructional technology such as screencasts, recordings that capture audio narration along…

  13. Keeping up with the Technologically Savvy Student: Student Perceptions of Audio Books

    ERIC Educational Resources Information Center

    Gray, H. Joey; Davis, Phillip; Liu, Xiao

    2012-01-01

    The current generation of college students is so adapted to the digital world that they have been labeled the multi-tasking generation (Foehr, 2006; Wallis, 2006). College students routinely use digital playback devices in their lives for entertainment and communication to the point that students being "plugged in" is a ubiquitous image.…

  14. Talk the Talk: Learner-Generated Podcasts as Catalysts for Knowledge Creation

    ERIC Educational Resources Information Center

    Lee, Mark J. W.; McLoughlin, Catherine; Chan, Anthony

    2008-01-01

    Podcasting allows audio content from one or more user-selected feeds or channels to be automatically downloaded to one's computer as it becomes available, then later transferred to a portable player for consumption at a convenient time and place. It is enjoying phenomenal growth in mainstream society, alongside other Web 2.0 technologies that…

  15. Multimedia Case-Based Support of Experiential Teacher Education: Critical Self Reflection and Dialogue in Multi-Cultural Contexts.

    ERIC Educational Resources Information Center

    McCurry, David S.

    This paper describes a qualitative study exploring the efficacy of using selected multimedia technologies to engage preservice and practicing teachers in critical dialogue. Visual representations, such as 360-degree panoramic views of classrooms hyperlinked to text descriptions, audio clips, and video of learning environments are used as anchor…

  16. The Use of an Information Brokering Tool in an Electronic Museum Environment.

    ERIC Educational Resources Information Center

    Zimmermann, Andreas; Lorenz, Andreas; Specht, Marcus

    When art and technology meet, a huge information flow has to be managed. The LISTEN project conducted by the Fraunhofer Institut in St. Augustin (Germany) augments every day environments with audio information. In order to distribute and administer this information in an efficient way, the Institute decided to employ an information brokering tool…

  17. After the Bell, Beyond the Walls

    ERIC Educational Resources Information Center

    Langhorst, Eric

    2007-01-01

    Today, anyone can publish text, audio, pictures, or video on the Web quickly and at no charge using blogs, wikis, podcasts, and videosharing sites like YouTube. An 8th grade American history class has taken advantage of these technologies to expand student learning. Students read books and blog about them with people who live in different states,…

  18. Using Voice Boards: Pedagogical Design, Technological Implementation, Evaluation and Reflections

    ERIC Educational Resources Information Center

    Yaneske, Elisabeth; Oates, Briony

    2011-01-01

    We present a case study to evaluate the use of a Wimba Voice Board to support asynchronous audio discussion. We discuss the learning strategy and pedagogic rationale when a Voice Board was implemented within an MA module for language learners, enabling students to create learning objects and facilitating peer-to-peer learning. Previously students…

  19. Using Voice Boards: Pedagogical Design, Technological Implementation, Evaluation and Reflections

    ERIC Educational Resources Information Center

    Yaneske, Elisabeth; Oates, Briony

    2010-01-01

    We present a case study to evaluate the use of a Wimba Voice Board to support asynchronous audio discussion. We discuss the learning strategy and pedagogic rationale when a Voice Board was implemented within an MA module for language learners, enabling students to create learning objects and facilitating peer-to-peer learning. Previously students…

  20. Use of Short Podcasts to Reinforce Learning Outcomes in Biology

    ERIC Educational Resources Information Center

    Aguiar, Cristina; Carvalho, Ana Amelia; Carvalho, Carla Joana

    2009-01-01

    Podcasts are audio or video files which can be automatically downloaded to one's computer when the episodes become available, then later transferred to a portable player for listening. The technology thereby enables the user to listen to and/or watch the content anywhere at any time. Formerly popular as radio shows, podcasting was rapidly explored…

  1. A Communication Device for Interfacing Slide/Audio Tape Programs with the Microcomputer for Computer-Assisted Self-Instruction.

    ERIC Educational Resources Information Center

    Hostetler, Jerry C.; Englert, Duwayne C.

    1987-01-01

    Presents description of an interface device which ties in microcomputers and slide/tape presentations for computer assisted instruction. Highlights include the use of this technology in an introductory undergraduate zoology course; a discussion of authoring languages with emphasis on SuperPILOT; and hardware and software design for the interface.…

  2. Pathways to Drug and Sexual Risk Behaviors among Detained Adolescents

    ERIC Educational Resources Information Center

    Voisin, Dexter R.; Neilands, Torsten B.; Salazar, Laura F.; Crosby, Richard; DiClemente, Ralph J.

    2008-01-01

    This study recruited 559 youths from detention centers (mean age was 15.4 years; 50.1% of detainees were girls) to investigate pathways that link witnessing community violence in the 12 months before detainment to drug and sexual risk behaviors in the two months preceding detainment. Through the use of audio-computer-assisted technology, data were…

  3. Downloaded Lectures Have Been Shown to Produce Better Assessment Outcomes

    ERIC Educational Resources Information Center

    Parslow, Graham R.

    2009-01-01

    With relevance to current students, the author has observed that when commuting by public transport, there is a near complete use of audio-visual devices by the "plugged-in" under 30 age group. New technology, new generation, and new allocations of time to work and study are combining to diminish lecture attendances. Some colleagues refuse to make…

  4. Telecommunications Technology and Education. A Study Identifying Appropriate Telecommunications Systems for Program Improvement in Postsecondary Vocational Education in Georgia. Final Report.

    ERIC Educational Resources Information Center

    Georgia Univ., Athens. Div. of Vocational Education.

    A study examined teleconferencing applications that can assist educators in meeting Georgia's postsecondary vocational education needs. Three forms of teleconferencing were studied--audio conferencing, computer conferencing, and video conferencing. The study included a literature review, two pilot studies, and a survey to identify the ways in…

  5. Technology Is the Answer, But What Was the Question? Audiotape vs. Videotape for Individualized Instruction.

    ERIC Educational Resources Information Center

    Tabachnick, Barbara Gerson; And Others

    1978-01-01

    In an evaluation of supplementary learning aids students were assigned to one of four learning conditions: (1) videotape plus worksheet, (2) audiotape plus worksheet, (3) combination of audio- and videotape plus worksheet, and (4) worksheet only. Results reported include test scores and ratings of helpfulness, as well as student preferences and…

  6. Here's What We Have to Say! Podcasting in the Early Childhood Classroom

    ERIC Educational Resources Information Center

    Berson, Ilene R.

    2009-01-01

    A podcast is an audio file published to the Internet for playback on mobile devices and personal computers; the meaning of the term has expanded to include video files, or "enhanced podcasts" as well. Many students are already engaged with digital technologies when they first step into early childhood classrooms. Children as young as…

  7. A La Carts: You Want Wireless Mobility? Have a COW

    ERIC Educational Resources Information Center

    Villano, Matt

    2006-01-01

    Computers on wheels, or COWs, combine the wireless technology of today with the audio/visual carts of yesteryear for an entirely new spin on mobility. Increasingly used by districts with laptop computing initiatives, COWs are among the hottest high-tech sellers in schools today, according to market research firm Quality Education Data. In this…

  8. How to Plug into Teleconferencing/Reach Out and Train Somebody.

    ERIC Educational Resources Information Center

    Jenkins, Thomas M.; Cushing, David

    1983-01-01

    Teleconferencing, as an interactive group communication through an electronic medium joining three or more people at two or more locations, can take one of three forms: audio, audiographic, or full-motion video. This multilocation technology is used in training and in conducting meetings and conferences; it works as a money- and time-saving tool.…

  9. Wavelet-based audio embedding and audio/video compression

    NASA Astrophysics Data System (ADS)

    Mendenhall, Michael J.; Claypoole, Roger L., Jr.

    2001-12-01

    Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.
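
    The following Python sketch illustrates the general idea of embedding audio samples in the wavelet coefficients of a video frame using PyWavelets; it is a simplified stand-in, not the authors' algorithm, and the wavelet, embedding strength, and placement in the diagonal detail band are assumptions.

      import numpy as np
      import pywt

      def embed_audio_in_frame(frame, audio, strength=0.01):
          """Additively hide audio samples in the diagonal detail coefficients of one frame."""
          approx, (horiz, vert, diag) = pywt.dwt2(frame.astype(np.float64), "haar")
          flat = diag.flatten()
          n = min(audio.size, flat.size)
          flat[:n] += strength * audio[:n]
          diag = flat.reshape(diag.shape)
          return pywt.idwt2((approx, (horiz, vert, diag)), "haar")

      # frame = np.random.rand(480, 640); audio = np.random.randn(1000)
      # watermarked = embed_audio_in_frame(frame, audio)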

  10. Three-Dimensional Audio Client Library

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.

    2005-01-01

    The Three-Dimensional Audio Client Library (3DAudio library) is a group of software routines written to facilitate development of both stand-alone (audio only) and immersive virtual-reality application programs that utilize three-dimensional audio displays. The library is intended to enable the development of three-dimensional audio client application programs by use of a code base common to multiple audio server computers. The 3DAudio library calls vendor-specific audio client libraries and currently supports the AuSIM Gold-Server and Lake Huron audio servers. 3DAudio library routines contain common functions for (1) initiation and termination of a client/audio server session, (2) configuration-file input, (3) positioning functions, (4) coordinate transformations, (5) audio transport functions, (6) rendering functions, (7) debugging functions, and (8) event-list-sequencing functions. The 3DAudio software is written in the C++ programming language and currently operates under the Linux, IRIX, and Windows operating systems.

  11. CO 2 capture from IGCC gas streams using the AC-ABC process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagar, Anoop; McLaughlin, Elisabeth; Hornbostel, Marc

    The objective of this project was to develop a novel, low-cost CO2 capture process for pre-combustion gas streams. The bench-scale work was conducted at SRI International. A 0.15-MWe integrated pilot plant was constructed and operated for over 700 hours at the National Carbon Capture Center, Wilsonville, AL. The AC-ABC (ammonium carbonate-ammonium bicarbonate) process for capture of CO2 and H2S from the pre-combustion gas stream offers many advantages over Selexol-based technology. The process relies on the simple chemistry of the NH3-CO2-H2O-H2S system and on the ability of the aqueous ammoniated solution to absorb CO2 at near-ambient temperatures and to release it as a high-purity, high-pressure gas at a moderately elevated regeneration temperature. It is estimated that the increase in cost of electricity (COE) with the AC-ABC process will be ~30%, and the cost of CO2 captured is projected to be less than $27/metric ton of CO2 while meeting the 90% CO2 capture goal. The Bechtel Pressure Swing Claus (BPSC) is a complementary technology offered by Bechtel Hydrocarbon Technology Solutions, Inc. BPSC is a high-pressure, sub-dew-point Claus process that allows for nearly complete removal of H2S from a gas stream. It operates at gasifier pressures and moderate temperatures and does not affect CO2 content. When coupled with AC-ABC, the combined technologies allow a nearly pure CO2 stream to be captured at high pressure, something which Selexol and other solvent-based technologies cannot achieve.
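
    For orientation, the absorption/regeneration cycle referred to above can be summarized by the standard ammonium carbonate-bicarbonate equilibria (shown here in LaTeX as a reminder of the underlying chemistry; the project's exact operating conditions and speciation are not reproduced):

      2\,\mathrm{NH_3} + \mathrm{CO_2} + \mathrm{H_2O} \;\rightleftharpoons\; (\mathrm{NH_4})_2\mathrm{CO_3}
      (\mathrm{NH_4})_2\mathrm{CO_3} + \mathrm{CO_2} + \mathrm{H_2O} \;\rightleftharpoons\; 2\,\mathrm{NH_4HCO_3}
      \mathrm{NH_3} + \mathrm{H_2S} \;\rightleftharpoons\; \mathrm{NH_4HS}

    Absorption runs left to right near ambient temperature; heating the rich solution reverses the bicarbonate reaction and releases CO2 as a high-purity, high-pressure stream.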

  12. Binding and unbinding the auditory and visual streams in the McGurk effect.

    PubMed

    Nahorna, Olha; Berthommier, Frédéric; Schwartz, Jean-Luc

    2012-08-01

    Subjects presented with coherent auditory and visual streams generally fuse them into a single percept. This results in enhanced intelligibility in noise, or in visual modification of the auditory percept in the McGurk effect. It is classically considered that processing is done independently in the auditory and visual systems before interaction occurs at a certain representational stage, resulting in an integrated percept. However, some behavioral and neurophysiological data suggest the existence of a two-stage process. A first stage would involve binding together the appropriate pieces of audio and video information before fusion per se in a second stage. Then it should be possible to design experiments leading to unbinding. It is shown here that if a given McGurk stimulus is preceded by an incoherent audiovisual context, the amount of McGurk effect is largely reduced. Various kinds of incoherent contexts (acoustic syllables dubbed on video sentences or phonetic or temporal modifications of the acoustic content of a regular sequence of audiovisual syllables) can significantly reduce the McGurk effect even when they are short (less than 4 s). The data are interpreted in the framework of a two-stage "binding and fusion" model for audiovisual speech perception.

  13. Effect of audio in-vehicle red light-running warning message on driving behavior based on a driving simulator experiment.

    PubMed

    Yan, Xuedong; Liu, Yang; Xu, Yongcun

    2015-01-01

    Drivers' incorrect decisions of crossing signalized intersections at the onset of the yellow change may lead to red light running (RLR), and RLR crashes result in substantial numbers of severe injuries and property damage. In recent years, some Intelligent Transport System (ITS) concepts have focused on reducing RLR by alerting drivers that they are about to violate the signal. The objective of this study is to conduct an experimental investigation on the effectiveness of the red light violation warning system using a voice message. In this study, the prototype concept of the RLR audio warning system was modeled and tested in a high-fidelity driving simulator. According to the concept, when a vehicle is approaching an intersection at the onset of yellow and the time to the intersection is longer than the yellow interval, the in-vehicle warning system can activate the following audio message "The red light is impending. Please decelerate!" The intent of the warning design is to encourage drivers who cannot clear an intersection during the yellow change interval to stop at the intersection. The experimental results showed that the warning message could decrease red light running violations by 84.3 percent. Based on the logistic regression analyses, drivers without a warning were about 86 times more likely to make go decisions at the onset of yellow and about 15 times more likely to run red lights than those with a warning. Additionally, it was found that the audio warning message could significantly reduce RLR severity because the RLR drivers' red-entry times without a warning were longer than those with a warning. This driving simulator study showed a promising effect of the audio in-vehicle warning message on reducing RLR violations and crashes. It is worthwhile to further develop the proposed technology in field applications.

  14. Understanding the Perceptions of Network Gatekeepers on Bandwidth and Online Video Streams in Ahmadu Bello University, Nigeria

    ERIC Educational Resources Information Center

    Odigie, Imoisili Ojeime; Gbaje, Ezra Shiloba

    2017-01-01

    Online video streaming is a learning technology used in today's world and reliant on the availability of bandwidth. This research study sought to understand the perceptions of network gatekeepers about bandwidth and online video streams in Ahmadu Bello University, Nigeria. To achieve this, the interpretive paradigm and the Network Gatekeeping…

  15. Streaming Video--The Wave of the Video Future!

    ERIC Educational Resources Information Center

    Brown, Laura

    2004-01-01

    Videos and DVDs give teachers more flexibility than slide projectors, filmstrips, and 16mm films, but teachers and students are excited about a new technology called streaming. Streaming allows educators to view videos on demand via the Internet, which works through the transfer of digital media like video and voice data that is received…

  16. Improving stream studies with a small-footprint green lidar

    Treesearch

    Jim McKean; Dan Isaak; Wayne Wright

    2009-01-01

    Technology is changing how scientists and natural resource managers describe and study streams and rivers. A new generation of airborne aquatic-terrestrial lidars is being developed that can penetrate water and map the submerged topography inside a stream as well as the adjacent subaerial terrain and vegetation in one integrated mission. A leading example of these new...

  17. Final Report: Efficient Databases for MPC Microdata

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael A. Bender; Martin Farach-Colton; Bradley C. Kuszmaul

    2011-08-31

    The purpose of this grant was to develop the theory and practice of high-performance databases for massive streamed datasets. Over the last three years, we have developed fast indexing technology, that is, technology for rapidly ingesting data and storing that data so that it can be efficiently queried and analyzed. During this project we developed the technology so that high-bandwidth data streams can be indexed and queried efficiently. Our technology has been proven to work on data sets composed of tens of billions of rows when the data stream arrives at over 40,000 rows per second. We achieved these numbers even on a single disk driven by two cores. Our work comprised (1) new write-optimized data structures with better asymptotic complexity than traditional structures, (2) implementation, and (3) benchmarking. We furthermore developed a prototype of TokuFS, a middleware layer that can handle microdata I/O packaged up in an MPI-IO abstraction.

  18. Separation science and technology. Semiannual progress report, October 1993--March 1994

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vandegrift, G.F.; Aase, S.B.; Buchholz, B.

    1997-12-01

    This document reports on the work done by the Separations Science and Technology Programs of the Chemical Technology Division, Argonne National Laboratory (ANL), in the period October 1993-March 1994. This effort is mainly concerned with developing the TRUEX process for removing and concentrating actinides from acidic waste streams contaminated with transuranic (TRU) elements. The objectives of TRUEX processing are to recover valuable TRU elements and to lower disposal costs for the non-TRU waste product of the process. Other projects are underway with the objective of developing (1) evaporation technology for concentrating radioactive waste and product streams such as those generated by the TRUEX process, (2) treatment schemes for liquid wastes stored at or being generated at ANL, (3) a process based on sorbing modified TRUEX solvent on magnetic beads to be used for separation of contaminants from radioactive and hazardous waste streams, and (4) a process that uses low-enriched uranium targets for production of 99Mo for nuclear medicine uses.

  19. Video Skimming and Characterization through the Combination of Image and Language Understanding Techniques

    NASA Technical Reports Server (NTRS)

    Smith, Michael A.; Kanade, Takeo

    1997-01-01

    Digital video is rapidly becoming important for education, entertainment, and a host of multimedia applications. With the size of the video collections growing to thousands of hours, technology is needed to effectively browse segments in a short time without losing the content of the video. We propose a method to extract the significant audio and video information and create a "skim" video which represents a very short synopsis of the original. The goal of this work is to show the utility of integrating language and image understanding techniques for video skimming by extraction of significant information, such as specific objects, audio keywords and relevant video structure. The resulting skim video is much shorter, where compaction is as high as 20:1, and yet retains the essential content of the original segment.

  20. The Study and Implementation of Text-to-Speech System for Agricultural Information

    NASA Astrophysics Data System (ADS)

    Zheng, Huoguo; Hu, Haiyan; Liu, Shihong; Meng, Hong

    Broadcast and television coverage has increased to more than 98% in China. Information services delivered by radio have wide coverage and low cost and are easy for grass-roots farmers to accept. To let broadcast information services play a better role, and to address the lack of information resources in rural areas, we researched and developed a text-to-speech system. The system includes two parts, a software package and a hardware device, both of which can convert text into audio files. The software subsystem was implemented on top of third-party middleware, and the hardware subsystem was realized with microelectronics technology. Results indicate that the hardware performs better than the software. The system has been applied in Huailai City, Hebei Province, where it has converted more than 8,000 audio files as programming material for the local radio station.
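
    A minimal sketch, assuming the third-party pyttsx3 package as a stand-in for the middleware mentioned above, of converting one line of agricultural text into an audio file for local radio use; the file names and speaking rate are illustrative.

      import pyttsx3

      def text_to_audio(text, out_path):
          engine = pyttsx3.init()
          engine.setProperty("rate", 150)          # speaking rate in words per minute
          engine.save_to_file(text, out_path)
          engine.runAndWait()

      text_to_audio("Tomorrow's forecast calls for light rain; delay pesticide spraying.",
                    "broadcast_item_001.wav")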

  1. Teleproctoring laparoscopic operations with off-the-shelf technology.

    PubMed

    Luttmann, D R; Jones, D B; Soper, N J

    1996-01-01

    Teleproctoring may be a viable approach to training surgeons in the near future. It may also be a superior form of instruction, providing instantaneous visual and audio feedback to the participant. Conventional proctors are sometimes tempted to reach in and "help", thus infringing on the learning process of the participant. This is a problem that is averted by use of a teleproctoring system. Teleproctoring thereby challenges the proctor to expand the means by which he teaches. As new technologies mature, teleproctoring may become the gold standard for teaching new surgical techniques.

  2. Optical Laser Technology, Specifically CD-ROM (Compact Disc - Read Only Memory) and Its Application to the Storage and Retrieval of Information.

    DTIC Science & Technology

    1987-06-01

    ...where the resulting mixture of text, color graphics, animation, and audio can be achieved. This technology is in the formative stages, however it is

  3. Embedded Systems and TensorFlow Frameworks as Assistive Technology Solutions.

    PubMed

    Mulfari, Davide; Palla, Alessandro; Fanucci, Luca

    2017-01-01

    In the field of deep learning, this paper presents the design of a wearable computer vision system for visually impaired users. The Assistive Technology solution exploits a powerful single board computer and smart glasses with a camera in order to allow its user to explore the objects within his surrounding environment, while it employs the Google TensorFlow machine learning framework to classify the acquired stills in real time. The proposed aid can therefore increase awareness of the explored environment, and it interacts with its user by means of audio messages.
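
    An illustrative Python sketch of the classification step, using a pretrained MobileNetV2 from tf.keras as a stand-in for whatever model the wearable system actually employs; the image path and the spoken-message format are assumptions.

      import numpy as np
      import tensorflow as tf

      model = tf.keras.applications.MobileNetV2(weights="imagenet")

      def describe_still(image_path):
          img = tf.keras.preprocessing.image.load_img(image_path, target_size=(224, 224))
          batch = np.expand_dims(tf.keras.preprocessing.image.img_to_array(img), axis=0)
          batch = tf.keras.applications.mobilenet_v2.preprocess_input(batch)
          preds = model.predict(batch)
          _, label, score = tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=1)[0][0]
          return "I see a {} ({:.0%} confident).".format(label.replace("_", " "), score)

      # print(describe_still("glasses_camera_frame.jpg"))  # hypothetical still from the smart glasses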

  4. A Virtual Audio Guidance and Alert System for Commercial Aircraft Operations

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.; Shrum, Richard; Miller, Joel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    Our work in virtual reality systems at NASA Ames Research Center includes the area of aurally-guided visual search, using specially-designed audio cues and spatial audio processing (also known as virtual or "3-D audio") techniques (Begault, 1994). Previous studies at Ames had revealed that use of 3-D audio for Traffic Collision Avoidance System (TCAS) advisories significantly reduced head-down time, compared to a head-down map display (0.5 sec advantage) or no display at all (2.2 sec advantage) (Begault, 1993, 1995; Begault & Pittman, 1994; see Wenzel, 1994, for an audio demo). Since the crew must keep their head up and looking out the window as much as possible when taxiing under low-visibility conditions, and the potential for "blunder" is increased under such conditions, it was sensible to evaluate the audio spatial cueing for a prototype audio ground collision avoidance warning (GCAW) system, and a 3-D audio guidance system. Results were favorable for GCAW, but not for the audio guidance system.

  5. The priming function of in-car audio instruction.

    PubMed

    Keyes, Helen; Whitmore, Antony; Naneva, Stanislava; McDermott, Daragh

    2018-05-01

    Studies to date have focused on the priming power of visual road signs, but not the priming potential of audio road scene instruction. Here, the relative priming power of visual, audio, and multisensory road scene instructions was assessed. In a lab-based study, participants responded to target road scene turns following visual, audio, or multisensory road turn primes which were congruent or incongruent to the primes in direction, or control primes. All types of instruction (visual, audio, and multisensory) were successful in priming responses to a road scene. Responses to multisensory-primed targets (both audio and visual) were faster than responses to either audio or visual primes alone. Incongruent audio primes did not affect performance negatively in the manner of incongruent visual or multisensory primes. Results suggest that audio instructions have the potential to prime drivers to respond quickly and safely to their road environment. Peak performance will be observed if audio and visual road instruction primes can be timed to co-occur.

  6. Audio-visual interactions in environment assessment.

    PubMed

    Preis, Anna; Kociński, Jędrzej; Hafke-Dys, Honorata; Wrzosek, Małgorzata

    2015-08-01

    The aim of the study was to examine how visual and audio information influences audio-visual environment assessment. Original audio-visual recordings were made at seven different places in the city of Poznań. Participants of the psychophysical experiments were asked to rate, on a numerical standardized scale, the degree of comfort they would feel if they were in such an environment. The assessments of audio-visual comfort were carried out in a laboratory in four different conditions: (a) audio samples only, (b) original audio-visual samples, (c) video samples only, and (d) mixed audio-visual samples. The general results of this experiment showed a significant difference between the investigated conditions, but not for all the investigated samples. There was a significant improvement in comfort assessment when visual information was added (in only three out of seven cases), when conditions (a) and (b) were compared. On the other hand, the results show that the comfort assessment of audio-visual samples could be changed by manipulating the audio rather than the video part of the audio-visual sample. Finally, it seems that people could differentiate audio-visual representations of a given place in the environment based on the composition of sound sources rather than on the sound level. Object identification is responsible for both landscape and soundscape grouping. Copyright © 2015. Published by Elsevier B.V.

  7. Multi-modal gesture recognition using integrated model of motion, audio and video

    NASA Astrophysics Data System (ADS)

    Goutsu, Yusuke; Kobayashi, Takaki; Obara, Junya; Kusajima, Ikuo; Takeichi, Kazunari; Takano, Wataru; Nakamura, Yoshihiko

    2015-07-01

    Gesture recognition is used in many practical applications such as human-robot interaction, medical rehabilitation and sign language. With increasing motion sensor development, multiple data sources have become available, which leads to the rise of multi-modal gesture recognition. Since our previous approach to gesture recognition depends on a unimodal system, it is difficult to classify similar motion patterns. In order to solve this problem, a novel approach which integrates motion, audio and video models is proposed, using a dataset captured by Kinect. The proposed system can recognize observed gestures by using three models. Recognition results of the three models are integrated using the proposed framework, and the output becomes the final result. The motion and audio models are learned using Hidden Markov Models; a Random Forest, which is the video classifier, is used to learn the video model. In the experiments to test the performance of the proposed system, the motion and audio models most suitable for gesture recognition are chosen by varying feature vectors and learning methods. Additionally, the unimodal and multi-modal models are compared with respect to recognition accuracy. All the experiments are conducted on the dataset provided by the organizer of MMGRC, a workshop for the Multi-Modal Gesture Recognition Challenge. The comparison results show that the multi-modal model composed of three models scores the highest recognition rate. This improvement in recognition accuracy means that the complementary relationship among the three models improves the accuracy of gesture recognition. The proposed system provides application technology to understand human actions of daily life more precisely.
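
    A much-simplified late-fusion sketch in Python of the integration idea described above (not the authors' exact framework): each modality produces a per-class score vector and the final gesture is the class with the highest combined score; the equal weighting is an assumption.

      import numpy as np

      def fuse_modalities(motion_scores, audio_scores, video_scores, weights=(1.0, 1.0, 1.0)):
          """Each argument is a per-class score vector normalized to sum to one."""
          stacked = np.vstack([motion_scores, audio_scores, video_scores])
          combined = np.average(stacked, axis=0, weights=weights)
          return int(np.argmax(combined)), combined

      # Three hypothetical gesture classes: the fused decision picks the middle class.
      # cls, scores = fuse_modalities([0.2, 0.5, 0.3], [0.1, 0.7, 0.2], [0.3, 0.4, 0.3])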

  8. Construction and updating of event models in auditory event processing.

    PubMed

    Huff, Markus; Maurer, Annika E; Brich, Irina; Pagenkopf, Anne; Wickelmaier, Florian; Papenmeier, Frank

    2018-02-01

    Humans segment the continuous stream of sensory information into distinct events at points of change. Between 2 events, humans perceive an event boundary. Present theories propose changes in the sensory information to trigger updating processes of the present event model. Increased encoding effort finally leads to a memory benefit at event boundaries. Evidence from reading time studies (increased reading times with increasing amount of change) suggest that updating of event models is incremental. We present results from 5 experiments that studied event processing (including memory formation processes and reading times) using an audio drama as well as a transcript thereof as stimulus material. Experiments 1a and 1b replicated the event boundary advantage effect for memory. In contrast to recent evidence from studies using visual stimulus material, Experiments 2a and 2b found no support for incremental updating with normally sighted and blind participants for recognition memory. In Experiment 3, we replicated Experiment 2a using a written transcript of the audio drama as stimulus material, allowing us to disentangle encoding and retrieval processes. Our results indicate incremental updating processes at encoding (as measured with reading times). At the same time, we again found recognition performance to be unaffected by the amount of change. We discuss these findings in light of current event cognition theories. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  9. The cortical representation of the speech envelope is earlier for audiovisual speech than audio speech.

    PubMed

    Crosse, Michael J; Lalor, Edmund C

    2014-04-01

    Visual speech can greatly enhance a listener's comprehension of auditory speech when they are presented simultaneously. Efforts to determine the neural underpinnings of this phenomenon have been hampered by the limited temporal resolution of hemodynamic imaging and the fact that EEG and magnetoencephalographic data are usually analyzed in response to simple, discrete stimuli. Recent research has shown that neuronal activity in human auditory cortex tracks the envelope of natural speech. Here, we exploit this finding by estimating a linear forward-mapping between the speech envelope and EEG data and show that the latency at which the envelope of natural speech is represented in cortex is shortened by >10 ms when continuous audiovisual speech is presented compared with audio-only speech. In addition, we use a reverse-mapping approach to reconstruct an estimate of the speech stimulus from the EEG data and, by comparing the bimodal estimate with the sum of the unimodal estimates, find no evidence of any nonlinear additive effects in the audiovisual speech condition. These findings point to an underlying mechanism that could account for enhanced comprehension during audiovisual speech. Specifically, we hypothesize that low-level acoustic features that are temporally coherent with the preceding visual stream may be synthesized into a speech object at an earlier latency, which may provide an extended period of low-level processing before extraction of semantic information.
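
    The forward mapping mentioned above is, in essence, a regularized linear regression from time-lagged copies of the speech envelope onto each EEG channel. The Python sketch below shows that idea for a single channel; the lag range and ridge parameter are illustrative assumptions, not the authors' settings.

      import numpy as np

      def forward_mapping(envelope, eeg, n_lags=50, ridge=1.0):
          """Estimate a temporal response function mapping the envelope to one EEG channel."""
          X = np.column_stack([np.roll(envelope, lag) for lag in range(n_lags)])
          X[:n_lags, :] = 0.0                       # discard samples that wrapped around
          # Closed-form ridge solution: w = (X'X + aI)^-1 X'y
          w = np.linalg.solve(X.T @ X + ridge * np.eye(n_lags), X.T @ eeg)
          return w

      # env = np.random.randn(10000); eeg = np.convolve(env, np.ones(5) / 5, mode="same")
      # trf = forward_mapping(env, eeg)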

  10. Performance evaluation of wavelet-based face verification on a PDA recorded database

    NASA Astrophysics Data System (ADS)

    Sellahewa, Harin; Jassim, Sabah A.

    2006-05-01

    The rise of international terrorism and the rapid increase in fraud and identity theft have added urgency to the task of developing biometric-based person identification as a reliable alternative to conventional authentication methods. Human identification based on face images is a tough challenge in comparison to identification based on fingerprints or iris recognition. Yet, due to its unobtrusive nature, face recognition is the preferred method of identification for security-related applications. The success of such systems will depend on the support of massive infrastructures. Current mobile communication devices (3G smart phones) and PDAs are equipped with a camera that can capture both still images and streaming video clips, and a touch-sensitive display panel. Besides convenience, such devices provide an adequately secure infrastructure for sensitive and financial transactions, by protecting against fraud and repudiation while ensuring accountability. Biometric authentication systems for mobile devices would have obvious advantages in conflict scenarios when communication from beyond enemy lines is essential to save soldier and civilian lives. In areas of conflict or disaster, the luxury of fixed infrastructure is not available or has been destroyed. In this paper, we present a wavelet-based face verification scheme that has been specifically designed and implemented on a currently available PDA. We shall report on its performance on the benchmark audio-visual BANCA database and on a newly developed PDA-recorded audio-visual database that includes indoor and outdoor recordings.

  11. 47 CFR 73.403 - Digital audio broadcasting service requirements.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 4 2012-10-01 2012-10-01 false Digital audio broadcasting service requirements... SERVICES RADIO BROADCAST SERVICES Digital Audio Broadcasting § 73.403 Digital audio broadcasting service requirements. (a) Broadcast radio stations using IBOC must transmit at least one over-the-air digital audio...

  12. 47 CFR 73.403 - Digital audio broadcasting service requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 4 2011-10-01 2011-10-01 false Digital audio broadcasting service requirements... SERVICES RADIO BROADCAST SERVICES Digital Audio Broadcasting § 73.403 Digital audio broadcasting service requirements. (a) Broadcast radio stations using IBOC must transmit at least one over-the-air digital audio...

  13. 47 CFR 73.403 - Digital audio broadcasting service requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 4 2014-10-01 2014-10-01 false Digital audio broadcasting service requirements... SERVICES RADIO BROADCAST SERVICES Digital Audio Broadcasting § 73.403 Digital audio broadcasting service requirements. (a) Broadcast radio stations using IBOC must transmit at least one over-the-air digital audio...

  14. 47 CFR 73.403 - Digital audio broadcasting service requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 4 2013-10-01 2013-10-01 false Digital audio broadcasting service requirements... SERVICES RADIO BROADCAST SERVICES Digital Audio Broadcasting § 73.403 Digital audio broadcasting service requirements. (a) Broadcast radio stations using IBOC must transmit at least one over-the-air digital audio...

  15. KSC-2012-5017

    NASA Image and Video Library

    2012-09-06

    CAPE CANAVERAL, Fla. – During NASA's Innovation Expo at the Kennedy Space Center in Florida, William Merrill, of NASA's Communications Infrastructure Services Division, proposes an innovation that would make mission audio available by way of an Internet radio stream. Kennedy Kick-Start Chair Mike Conroy looks on from the left. As Kennedy continues developing programs and infrastructure to become a 21st century spaceport, many employees are devising ways to do their jobs better and more efficiently. On Sept. 6, 2012, 16 Kennedy employees pitched their innovative ideas for improving the center at the Kennedy Kick-Start event. The competition was part of a center-wide effort designed to increase exposure for innovative ideas and encourage their implementation. For more information, visit http://www.nasa.gov/centers/kennedy/news/kick-start_competition.html Photo credit: NASA/Gianni Woods

  16. [Intermodal timing cues for audio-visual speech recognition].

    PubMed

    Hashimoto, Masahiro; Kumashiro, Masaharu

    2004-06-01

The purpose of this study was to investigate the limitations of lip-reading advantages for Japanese young adults by desynchronizing visual and auditory information in speech. In the experiment, audio-visual speech stimuli were presented under six test conditions: audio-alone, and audio-visually with either 0, 60, 120, 240 or 480 ms of audio delay. The stimuli were video recordings of the face of a female Japanese speaker producing long and short Japanese sentences. The intelligibility of the audio-visual stimuli was measured as a function of audio delay in sixteen untrained young subjects. Speech intelligibility under the audio-delay conditions of less than 120 ms was significantly better than that under the audio-alone condition. On the other hand, the delay of 120 ms corresponded to the mean mora duration measured for the audio stimuli. The results implied that audio delays of up to 120 ms would not disrupt the lip-reading advantage, because visual and auditory information in speech seemed to be integrated on a syllabic time scale. Potential applications of this research include noisy workplaces in which a worker must extract relevant speech from competing noise.
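
    A minimal sketch of how an audio-delay condition like those above can be produced, assuming the soundtrack is available as a NumPy array of samples; the study's actual stimulus preparation is not described at this level of detail.

      # Sketch (assumed implementation): delay an audio track relative to its video
      # by a fixed number of milliseconds by prepending silence.
      import numpy as np

      def delay_audio(samples, delay_ms, sample_rate):
          """Prepend silence so the audio starts delay_ms later than the video."""
          pad = int(round(delay_ms * sample_rate / 1000.0))
          return np.concatenate([np.zeros(pad, dtype=samples.dtype), samples])

      # Example: shift a 1-second 440 Hz tone by 120 ms at 44.1 kHz.
      sr = 44100
      tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
      delayed = delay_audio(tone, delay_ms=120, sample_rate=sr)
      print(len(delayed) - len(tone))   # 5292 samples of leading silence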

  17. How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding

    PubMed Central

    Desantis, Andrea; Haggard, Patrick

    2016-01-01

To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have a strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-presses generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events. PMID:27982063

  18. How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding.

    PubMed

    Desantis, Andrea; Haggard, Patrick

    2016-12-16

To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have a strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-presses generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events.

  19. Storytime Using Ipods: Using Technology to Reach All Learners

    ERIC Educational Resources Information Center

    Boeglin-Quintana, Brenda; Donovan, Loretta

    2013-01-01

    Many educators would agree that one way to enhance reading fluency is by being read to by fluent readers. The purpose of this study was to examine the impact of providing students with audio books via an iPod Shuffle during silent reading time at school. For six weeks, Kindergarten participants spent time either silent reading or listening to a…

  20. Improving Student Learning via Mobile Phone Video Content: Evidence from the BridgeIT India Project

    ERIC Educational Resources Information Center

    Wennersten, Matthew; Quraishy, Zubeeda Banu; Velamuri, Malathi

    2015-01-01

    Past efforts invested in computer-based education technology interventions have generated little evidence of affordable success at scale. This paper presents the results of a mobile phone-based intervention conducted in the Indian states of Andhra Pradesh and Tamil Nadu in 2012-13. The BridgeIT project provided a pool of audio-visual learning…

  1. Creating an Adaptive Technology Using a Cheminformatics System to Read Aloud Chemical Compound Names for People with Visual Disabilities

    ERIC Educational Resources Information Center

    Kamijo, Haruo; Morii, Shingo; Yamaguchi, Wataru; Toyooka, Naoki; Tada-Umezaki, Masahito; Hirobayashi, Shigeki

    2016-01-01

    Various tactile methods, such as Braille, have been employed to enhance the recognition ability of chemical structures by individuals with visual disabilities. However, it is unknown whether reading aloud the names of chemical compounds would be effective in this regard. There are no systems currently available using an audio component to assist…

  2. 26 CFR 1.482-7A - Methods to determine taxable income in connection with a cost sharing arrangement.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... reasonable overhead costs attributable to the project. They also share the cost of a conference facility that... reasonable overhead costs attributable to the project. USP also incurs costs related to field testing of the... Unrelated Third Party (UTP) enter into a cost sharing arrangement to develop new audio technology. In the...

  3. 26 CFR 1.482-7A - Methods to determine taxable income in connection with a cost sharing arrangement.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... reasonable overhead costs attributable to the project. They also share the cost of a conference facility that... reasonable overhead costs attributable to the project. USP also incurs costs related to field testing of the... Unrelated Third Party (UTP) enter into a cost sharing arrangement to develop new audio technology. In the...

  4. Brief Pictorial Description of New Mobile Technologies Used in Cultural Institutions in Japan

    ERIC Educational Resources Information Center

    Awano, Yumi

    2007-01-01

    Many Japanese museums and other cultural institutions have been exploring ways to enrich visitors' experiences using new digital devices. This paper briefly describes some examples in Japan, ranging from a PDA, a mobile phone, podcasting, and an audio guide speaker-equipped vest to a Quick Response (QR) code on a brochure for downloading a short…

  5. Activity-Based Costing Models for Alternative Modes of Delivering On-Line Courses

    ERIC Educational Resources Information Center

    Garbett, Chris

    2011-01-01

    In recent years there has been growth in online distance learning courses. This has been prompted by; new technology such as the Internet, mobile learning, video and audio conferencing: the explosion in student numbers in Higher Education, and the need for outreach to a world wide market. Web-based distance learning is seen as a solution to…

  6. "But They Won't Come to Lectures..." The Impact of Audio Recorded Lectures on Student Experience and Attendance

    ERIC Educational Resources Information Center

    Larkin, Helen E.

    2010-01-01

    The move to increasingly flexible platforms for student learning and experience through provision of online lecture recordings is often interpreted by educators as students viewing attendance at lectures as optional. The trend toward the use of this technology is often met with resistance from some academic staff who argue that student attendance…

  7. 26 CFR 1.482-7A - Methods to determine taxable income in connection with a cost sharing arrangement.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... reasonable overhead costs attributable to the project. They also share the cost of a conference facility that... reasonable overhead costs attributable to the project. USP also incurs costs related to field testing of the... Unrelated Third Party (UTP) enter into a cost sharing arrangement to develop new audio technology. In the...

  8. 26 CFR 1.482-7A - Methods to determine taxable income in connection with a cost sharing arrangement.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... reasonable overhead costs attributable to the project. They also share the cost of a conference facility that... reasonable overhead costs attributable to the project. USP also incurs costs related to field testing of the... Unrelated Third Party (UTP) enter into a cost sharing arrangement to develop new audio technology. In the...

  9. 26 CFR 1.482-7A - Methods to determine taxable income in connection with a cost sharing arrangement.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... reasonable overhead costs attributable to the project. They also share the cost of a conference facility that... reasonable overhead costs attributable to the project. USP also incurs costs related to field testing of the... Unrelated Third Party (UTP) enter into a cost sharing arrangement to develop new audio technology. In the...

  10. A technology prototype system for rating therapist empathy from audio recordings in addiction counseling.

    PubMed

    Xiao, Bo; Huang, Chewei; Imel, Zac E; Atkins, David C; Georgiou, Panayiotis; Narayanan, Shrikanth S

    2016-04-01

Scaling up psychotherapy services such as for addiction counseling is a critical societal need. One challenge is ensuring quality of therapy, due to the heavy cost of manual observational assessment. This work proposes a speech technology-based system to automate the assessment of therapist empathy-a key therapy quality index-from audio recordings of the psychotherapy interactions. We designed a speech processing system that includes voice activity detection and diarization modules, and an automatic speech recognizer plus a speaker role matching module to extract the therapist's language cues. We employed Maximum Entropy models, Maximum Likelihood language models, and a Lattice Rescoring method to characterize high vs. low empathic language. We estimated therapy-session level empathy codes using utterance level evidence obtained from these models. Our experiments showed that the fully automated system achieved a correlation of 0.643 between expert annotated empathy codes and machine-derived estimations, and an accuracy of 81% in classifying high vs. low empathy, in comparison to a 0.721 correlation and 86% accuracy in the oracle setting using manual transcripts. The results show that the system provides useful information that can contribute to automatic quality assurance and therapist training.

  11. A technology prototype system for rating therapist empathy from audio recordings in addiction counseling

    PubMed Central

    Xiao, Bo; Huang, Chewei; Imel, Zac E.; Atkins, David C.; Georgiou, Panayiotis; Narayanan, Shrikanth S.

    2016-01-01

Scaling up psychotherapy services such as for addiction counseling is a critical societal need. One challenge is ensuring quality of therapy, due to the heavy cost of manual observational assessment. This work proposes a speech technology-based system to automate the assessment of therapist empathy—a key therapy quality index—from audio recordings of the psychotherapy interactions. We designed a speech processing system that includes voice activity detection and diarization modules, and an automatic speech recognizer plus a speaker role matching module to extract the therapist's language cues. We employed Maximum Entropy models, Maximum Likelihood language models, and a Lattice Rescoring method to characterize high vs. low empathic language. We estimated therapy-session level empathy codes using utterance level evidence obtained from these models. Our experiments showed that the fully automated system achieved a correlation of 0.643 between expert annotated empathy codes and machine-derived estimations, and an accuracy of 81% in classifying high vs. low empathy, in comparison to a 0.721 correlation and 86% accuracy in the oracle setting using manual transcripts. The results show that the system provides useful information that can contribute to automatic quality assurance and therapist training. PMID:28286867
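
    The sketch below illustrates only the final language-classification stage, with scikit-learn's logistic regression (a maximum-entropy model) standing in for the paper's language models and a few made-up utterances standing in for transcribed sessions; the upstream voice activity detection, diarization, speech recognition, and role matching modules are not shown.

      # Toy sketch under stated assumptions: score therapist utterances as high- vs.
      # low-empathy language and pool the utterance-level probabilities into a
      # session-level empathy estimate.
      import numpy as np
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      # Hypothetical labelled utterances; a real system trains on annotated sessions.
      utterances = [
          "it sounds like that was really hard for you",
          "tell me more about how that felt",
          "you should just stop drinking",
          "that was a bad decision",
      ]
      labels = [1, 1, 0, 0]   # 1 = high-empathy language, 0 = low

      model = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                            LogisticRegression(max_iter=1000))
      model.fit(utterances, labels)

      def session_empathy_score(session_utterances):
          """Average the per-utterance probability of the high-empathy class."""
          return float(np.mean(model.predict_proba(session_utterances)[:, 1]))

      print(session_empathy_score(["i hear how frustrating that has been for you"]))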

  12. Adding Audio Supported Smartboard Lectures to an Introductory Astronomy Online Laboratory

    NASA Astrophysics Data System (ADS)

    Lahaise, U. G. L.

    2003-12-01

SMART Board(TM) and RealProducer(R) Plus technologies were used to develop a series of narrated pre-lab introductory online lectures. Smartboard slides were created by capturing images from internet pages and PowerPoint slides, then annotating them and saving them as web pages using smartboard technology. Short audio files were recorded using the RealProducer Plus software and were then linked to individual slides. WebCT was used to deliver the online laboratory. Students in an Introductory Astronomy of the Solar System online laboratory used the lectures to prepare for laboratory exercises. The narrated pre-lab lectures were added to six of the eight suitable laboratory exercises. A survey was given to the students to investigate their online laboratory experience in general and the impact of the narrated smartboard lectures on their learning success in particular. Data were collected for two accelerated sessions. Results show that students find the online laboratory equally hard as, or harder than, a separate online lecture. The accelerated format created great time pressure, which negatively affected their study habits. About half of the students used the narrated pre-lab lectures consistently. Preliminary findings show that lab scores in the accelerated sessions were brought up to the level of full-semester courses.

  13. State of Practice for Emerging Waste Conversion Technologies

    EPA Science Inventory

    New technologies to convert municipal and other waste streams into fuels and chemical commodities, termed conversion technologies, are rapidly developing. Conversion technologies are garnering increasing interest and demand due primarily to alternative energy initiatives. These t...

  14. Taking Science On-air with Google+

    NASA Astrophysics Data System (ADS)

    Gay, P.

    2014-01-01

    Cost has long been a deterrent when trying to stream live events to large audiences. While streaming providers like UStream have free options, they include advertising and typically limit broadcasts to originating from a single location. In the autumn of 2011, Google premiered a new, free, video streaming tool -- Hangouts on Air -- as part of their Google+ social network. This platform allows up to ten different computers to stream live content to an unlimited audience, and automatically archives that content to YouTube. In this article we discuss best practices for using this technology to stream events over the internet.

  15. 47 CFR 11.51 - EAS code and Attention Signal Transmission requirements.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Message (EOM) codes using the EAS Protocol. The Attention Signal must precede any emergency audio message... audio messages. No Attention Signal is required for EAS messages that do not contain audio programming... EAS messages in the main audio channel. All DAB stations shall also transmit EAS messages on all audio...

  16. 47 CFR 11.51 - EAS code and Attention Signal Transmission requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Message (EOM) codes using the EAS Protocol. The Attention Signal must precede any emergency audio message... audio messages. No Attention Signal is required for EAS messages that do not contain audio programming... EAS messages in the main audio channel. All DAB stations shall also transmit EAS messages on all audio...

  17. 47 CFR 11.51 - EAS code and Attention Signal Transmission requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Message (EOM) codes using the EAS Protocol. The Attention Signal must precede any emergency audio message... audio messages. No Attention Signal is required for EAS messages that do not contain audio programming... EAS messages in the main audio channel. All DAB stations shall also transmit EAS messages on all audio...

  18. The Audio Description as a Physics Teaching Tool

    ERIC Educational Resources Information Center

    Cozendey, Sabrina; Costa, Maria da Piedade

    2016-01-01

This study analyses the use of audio description in teaching physics concepts, aiming to determine the variables that influence the understanding of the concept. One educational resource was audio-described. To create the audio description, the screen was frozen. The video, with and without audio description, was to be presented to students, so that…

  19. ``Recent experiences and future expectations in data storage technology''

    NASA Astrophysics Data System (ADS)

    Pfister, Jack

    1990-08-01

For more than 10 years the conventional media for High Energy Physics has been 9 track magnetic tape in various densities. More recently, especially in Europe, the IBM 3480 technology has been adopted, while in the United States, especially at Fermilab, 8 mm is being used by the largest experiments as a primary recording media and, where possible, they are using 8 mm for the production, analysis and distribution of data summary tapes. VHS and Digital Audio tape have recurrently appeared but seem to serve primarily as a back-up storage medium. The reasons for what appears to be a radical departure are many. Economics (media and controllers are inexpensive), form factor (two gigabytes per shirt pocket), and convenience (fewer mounts/dismounts per minute) are dominant among the reasons. The traditional data media suppliers seem to have been content to evolve the traditional media at their own pace with only modest enhancements, primarily in ``value engineering'' of extant products. Meanwhile, start-up companies providing small systems and workstations sought other media both to reduce the price of their offerings and to respond to the real need for lower cost back-up for lower cost systems. All this is happening in a market context where traditional computer systems vendors are leaving the tape market altogether or shifting to ``3480'' technology, which has certainly created a climate for reconsideration and change. The newest data storage products, in most cases, are coming not from technologies developed by the computing industry but from the audio and video industry. Just where these flopticals, opticals, 19 mm tape and the new underlying technologies, such as ``digital paper,'' may fit in the HEP computing requirements picture will be reviewed. What these technologies do for and to HEP will be discussed, along with some suggestions for a methodology for tracking and evaluating extant and emerging technologies.

  20. Differentiated strategies for improving streaming service quality

    NASA Astrophysics Data System (ADS)

    An, Hui; Chen, Xin-Meng

    2005-02-01

With the explosive growth of streaming services, users are becoming more and more sensitive to their quality of service. To handle these problems, the research community has focused on the application of caching and replication techniques. However, most approaches try to find specific caching or replication strategies that suit the characteristics of streaming services and to design some kind of universal policy to deal with all streaming objects. This paper explores the combination of caching and replication for improving streaming service quality and demonstrates that it makes sense to combine the two technologies. It provides a system model and discusses related issues of how to determine whether a streaming object is refreshable and which refreshment policies a refreshable object should use.
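
    One deliberately simplified reading of how caching and replication might be combined is sketched below: a proxy cache keeps prefixes of streaming objects, treats sufficiently popular objects as refreshable, re-fetches stale refreshable entries from a replica, and evicts cold entries least-recently-used. The thresholds and data structures are assumptions, not the paper's system model.

      # Illustrative sketch (assumed design, not the authors' model).
      import time
      from collections import OrderedDict

      class StreamingPrefixCache:
          def __init__(self, capacity=100, refresh_interval=300, popularity_threshold=5):
              self.capacity = capacity                    # max cached objects
              self.refresh_interval = refresh_interval    # seconds before a copy is stale
              self.popularity_threshold = popularity_threshold
              self.entries = OrderedDict()                # object_id -> [prefix, fetched_at, hits]

          def get(self, object_id, fetch_from_replica):
              """Return a cached prefix, refreshing or fetching from a replica as needed."""
              now = time.time()
              if object_id in self.entries:
                  prefix, fetched_at, hits = self.entries.pop(object_id)
                  hits += 1
                  # Refreshment policy: only popular ("refreshable") objects are re-fetched.
                  if hits >= self.popularity_threshold and now - fetched_at > self.refresh_interval:
                      prefix, fetched_at = fetch_from_replica(object_id), now
                  self.entries[object_id] = [prefix, fetched_at, hits]   # move to MRU position
                  return prefix
              prefix = fetch_from_replica(object_id)
              self.entries[object_id] = [prefix, now, 1]
              if len(self.entries) > self.capacity:
                  self.entries.popitem(last=False)        # evict the least recently used entry
              return prefix

      # Usage with a stand-in replica fetch:
      cache = StreamingPrefixCache(capacity=2, refresh_interval=0.0, popularity_threshold=2)
      fetch = lambda oid: "first-10s-of-" + oid
      for _ in range(3):
          cache.get("movie-A", fetch)    # becomes popular; refreshed once stale
      print(cache.get("movie-A", fetch))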
