Sample records for multiple audio streams

  1. Digital Multicasting of Multiple Audio Streams

    NASA Technical Reports Server (NTRS)

    Macha, Mitchell; Bullock, John

    2007-01-01

    The Mission Control Center Voice Over Internet Protocol (MCC VOIP) system (see figure) comprises hardware and software that effect simultaneous, nearly real-time transmission of as many as 14 different audio streams to authorized listeners via the MCC intranet and/or the Internet. The original version of the MCC VOIP system was conceived to enable flight-support personnel located in offices outside a spacecraft mission control center to monitor audio loops within the mission control center. Different versions of the MCC VOIP system could be used for a variety of public and commercial purposes - for example, to enable members of the general public to monitor one or more NASA audio streams through their home computers, to enable air-traffic supervisors to monitor communication between airline pilots and air-traffic controllers in training, and to monitor conferences among brokers in a stock exchange. At the transmitting end, the audio-distribution process begins with feeding the audio signals to analog-to-digital converters. The resulting digital streams are sent through the MCC intranet, using a user datagram protocol (UDP), to a server that converts them to encrypted data packets. The encrypted data packets are then routed to the personal computers of authorized users by use of multicasting techniques. The total data-processing load on the portion of the system upstream of and including the encryption server is the total load imposed by all of the audio streams being encoded, regardless of the number of listeners or the number of streams being monitored concurrently by the listeners. The personal computer of a user authorized to listen is equipped with special-purpose MCC audio-player software. When the user launches the program, the user is prompted to provide identification and a password.
In one of two access-control provisions, the program is hard-coded to validate the user's identity and password against a list maintained on a domain-controller computer at the MCC. In the other access-control provision, the program verifies that the user is authorized to have access to the audio streams. Once both access-control checks are completed, the audio software presents a graphical display that includes audio-stream-selection buttons and volume-control sliders. The user can select all or any subset of the available audio streams and can adjust the volume of each stream independently of that of the other streams. The audio-player program spawns a "read" process for the selected stream(s). The spawned process sends, to the router(s), a "multicast-join" request for the selected streams. The router(s) respond to the request by sending the encrypted multicast packets to the spawned process. The spawned process receives the encrypted multicast packets and sends a decrypted packet to the audio-driver software. As the volume or muting settings are changed by the user, interrupts are sent to the spawned process to change the corresponding attributes sent to the audio-driver software. The total latency of this system - that is, the total time from the origination of the audio signals to the generation of sound at a listener's computer - lies between four and six seconds.
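
The receive path described above - a spawned reader issues a "multicast-join", after which the router forwards the group's packets to it - can be sketched at the socket level. This is a generic illustration, not MCC code; the group address and port below are invented for the example.

```python
import socket
import struct

def join_multicast(group: str, port: int) -> socket.socket:
    """Subscribe a UDP socket to a multicast group.

    setsockopt(IP_ADD_MEMBERSHIP) makes the OS issue an IGMP join,
    which plays the role of the "multicast-join" request the spawned
    reader process sends: the router then starts forwarding the
    group's packets to this host.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # Group address packed together with the local interface (any).
    mreq = struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

# Hypothetical stream address; a real client would learn it from the server.
# sock = join_multicast("239.1.1.1", 5004)
# packet, addr = sock.recvfrom(2048)   # one encrypted audio packet

```

UDP multicast fits this design well: the sender's load is independent of the number of listeners, exactly as the abstract notes for the encryption server.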

  2. A centralized audio presentation manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papp, A.L. III; Blattner, M.M.

    1994-05-16

    The centralized audio presentation manager addresses the problems which occur when multiple programs running simultaneously attempt to use the audio output of a computer system. Time dependence of sound means that certain auditory messages must be scheduled simultaneously, which can lead to perceptual problems due to psychoacoustic phenomena. Furthermore, the combination of speech and nonspeech audio is examined; each presents its own problems of perceptibility in an acoustic environment composed of multiple auditory streams. The centralized audio presentation manager receives abstract parameterized message requests from the currently running programs, and attempts to create and present a sonic representation in the most perceptible manner through the use of a theoretically and empirically designed rule set.

  3. Ad Hoc Selection of Voice over Internet Streams

    NASA Technical Reports Server (NTRS)

    Macha, Mitchell G. (Inventor); Bullock, John T. (Inventor)

    2014-01-01

    A method and apparatus for a communication system technique involving ad hoc selection of at least two audio streams is provided. Each of the at least two audio streams is a packetized version of an audio source. A data connection exists between a server and a client where a transport protocol actively propagates the at least two audio streams from the server to the client. Furthermore, software instructions executable on the client indicate a presence of the at least two audio streams, allow selection of at least one of the at least two audio streams, and direct the selected at least one of the at least two audio streams for audio playback.

  4. Ad Hoc Selection of Voice over Internet Streams

    NASA Technical Reports Server (NTRS)

    Macha, Mitchell G. (Inventor); Bullock, John T. (Inventor)

    2008-01-01

    A method and apparatus for a communication system technique involving ad hoc selection of at least two audio streams is provided. Each of the at least two audio streams is a packetized version of an audio source. A data connection exists between a server and a client where a transport protocol actively propagates the at least two audio streams from the server to the client. Furthermore, software instructions executable on the client indicate a presence of the at least two audio streams, allow selection of at least one of the at least two audio streams, and direct the selected at least one of the at least two audio streams for audio playback.

  5. 47 CFR 73.403 - Digital audio broadcasting service requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... programming stream at no direct charge to listeners. In addition, a broadcast radio station must simulcast its analog audio programming on one of its digital audio programming streams. The DAB audio programming... analog programming service currently provided to listeners. (b) Emergency information. The emergency...

  6. Web Audio/Video Streaming Tool

    NASA Technical Reports Server (NTRS)

    Guruvadoo, Eranna K.

    2003-01-01

    In order to promote a NASA-wide educational outreach program to educate and inform the public about space exploration, NASA, at Kennedy Space Center, is seeking efficient ways to add more content to the web by streaming audio/video files. This project proposes a high-level overview of a framework for the creation, management, and scheduling of audio/video assets over the web. To support short-term goals, the prototype of a web-based tool is designed and demonstrated to automate the process of streaming audio/video files. The tool provides web-enabled user interfaces to manage video assets, create publishable schedules of video assets for streaming, and schedule the streaming events. These operations are performed on user-defined and system-derived metadata of audio/video assets stored in a relational database, while the assets reside on a separate repository. The prototype tool is designed using ColdFusion 5.0.

  7. The challenges of archiving networked-based multimedia performances (Performance cryogenics)

    NASA Astrophysics Data System (ADS)

    Cohen, Elizabeth; Cooperstock, Jeremy; Kyriakakis, Chris

    2002-11-01

    Music archives and libraries have cultural preservation at the core of their charters. New forms of art often race ahead of the preservation infrastructure. The ability to stream multiple synchronized ultra-low-latency streams of audio and video across a continent for a distributed interactive performance - such as music and dance with high-definition video and multichannel audio - raises a series of challenges for the architects of digital libraries and those responsible for cultural preservation. The archiving of such performances presents numerous challenges that go beyond simply recording each stream. Case studies of storage and subsequent retrieval issues for Internet2 collaborative performances are discussed. The development of shared-reality and immersive environments raises questions such as: What constitutes an archived performance that occurs across a network (in multiple spaces over time)? What are the families of metadata necessary to reconstruct this virtual world in another venue or era? For example, if the network exhibited changes in latency, the performers most likely adapted. In a future recreation, the latency will most likely be completely different. We discuss the parameters of immersive-environment acquisition and rendering, network architectures, software architecture, musical/choreographic scores, and environmental acoustics that must be considered to address this problem.

  8. Development and Assessment of Web Courses That Use Streaming Audio and Video Technologies.

    ERIC Educational Resources Information Center

    Ingebritsen, Thomas S.; Flickinger, Kathleen

    Iowa State University, through a program called Project BIO (Biology Instructional Outreach), has been using RealAudio technology for about 2 years in college biology courses that are offered entirely via the World Wide Web. RealAudio is a type of streaming media technology that can be used to deliver audio content and a variety of other media…

  9. Robust audio-visual speech recognition under noisy audio-video conditions.

    PubMed

    Stewart, Darryl; Seymour, Rowan; Pass, Adrian; Ming, Ji

    2014-02-01

    This paper presents the maximum weighted stream posterior (MWSP) model as a robust and efficient stream integration method for audio-visual speech recognition in environments where the audio or video streams may be subjected to unknown and time-varying corruption. A significant advantage of MWSP is that it does not require any specific measurements of the signal in either stream to calculate appropriate stream weights during recognition, and as such it is modality-independent. This also means that MWSP complements and can be used alongside many of the other approaches that have been proposed in the literature for this problem. For evaluation we used the large XM2VTS database for speaker-independent audio-visual speech recognition. The extensive tests include both clean and corrupted utterances with corruption added in either or both of the video and audio streams using a variety of types (e.g., MPEG-4 video compression) and levels of noise. The experiments show that this approach gives excellent performance in comparison to another well-known dynamic stream weighting approach and also compared to any fixed-weighted integration approach, both in clean conditions and when noise is added to either stream. Furthermore, our experiments show that the MWSP approach dynamically selects suitable integration weights on a frame-by-frame basis according to the level of noise in the streams and also according to the naturally fluctuating relative reliability of the modalities even in clean conditions. The MWSP approach is shown to maintain robust recognition performance in all tested conditions, while requiring no prior knowledge about the type or level of noise.
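
As a rough illustration of per-frame stream weighting (not the paper's exact formulation, which the abstract does not give), one can combine per-class audio and video posteriors geometrically and pick, per frame, the weight that yields the most confident combined posterior:

```python
def combine(p_audio, p_video, lam):
    """Weighted geometric combination: p(c) proportional to
    p_a(c)**lam * p_v(c)**(1 - lam), renormalized over classes."""
    scores = [pa ** lam * pv ** (1.0 - lam)
              for pa, pv in zip(p_audio, p_video)]
    z = sum(scores)
    return [s / z for s in scores]

def max_weighted_posterior(p_audio, p_video,
                           candidates=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Per-frame weight selection: try a few candidate weights and keep
    the one whose combined posterior is most peaked, so the currently
    more reliable stream dominates without any explicit noise estimate."""
    best = max(candidates,
               key=lambda lam: max(combine(p_audio, p_video, lam)))
    return best, combine(p_audio, p_video, best)
```

When the audio posterior is sharply peaked and the video posterior is nearly uniform, the selected weight moves toward the audio stream, and vice versa; this is the modality-independent behavior the abstract highlights.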

  10. Interactive MPEG-4 low-bit-rate speech/audio transmission over the Internet

    NASA Astrophysics Data System (ADS)

    Liu, Fang; Kim, JongWon; Kuo, C.-C. Jay

    1999-11-01

    The recently developed MPEG-4 technology enables the coding and transmission of natural and synthetic audio-visual data in the form of objects. In an effort to extend the object-based functionality of MPEG-4 to real-time Internet applications, architectural prototypes of the multiplex layer and transport layer tailored for transmission of MPEG-4 data over IP are under debate in the Internet Engineering Task Force (IETF) and the MPEG-4 Systems Ad Hoc group. In this paper, we present an architecture for an interactive MPEG-4 speech/audio transmission system over the Internet. It utilizes a framework of Real Time Streaming Protocol (RTSP) over Real-time Transport Protocol (RTP) to provide controlled, on-demand delivery of real-time speech/audio data. Based on a client-server model, a pair of low-bit-rate bit streams (real-time speech/audio, pre-encoded speech/audio) are multiplexed and transmitted via a single RTP channel to the receiver. The MPEG-4 Scene Description (SD) and Object Descriptor (OD) bit streams are securely sent through the RTSP control channel. Upon reception, an initial MPEG-4 audio-visual scene is constructed after de-multiplexing, decoding of bit streams, and scene composition. A receiver is allowed to manipulate the initial audio-visual scene presentation locally, or interactively arrange scene changes by sending requests to the server. A server may also choose to update the client with new streams and a list of contents for user selection.

  11. Audio-video feature correlation: faces and speech

    NASA Astrophysics Data System (ADS)

    Durand, Gwenael; Montacie, Claude; Caraty, Marie-Jose; Faudemay, Pascal

    1999-08-01

    This paper presents a study of the correlation of features automatically extracted from the audio stream and the video stream of audiovisual documents. In particular, we were interested in finding out whether speech analysis tools could be combined with face detection methods, and to what extent they should be combined. A generic audio signal partitioning algorithm was first used to detect Silence/Noise/Music/Speech segments in a full-length movie. A generic object detection method was applied to the keyframes extracted from the movie in order to detect the presence or absence of faces. The correlation between the presence of a face in the keyframes and of the corresponding voice in the audio stream was studied. A third stream, which is the script of the movie, is warped on the speech channel in order to automatically label faces appearing in the keyframes with the name of the corresponding character. As expected, we found that the extracted audio and video features were related in many cases, and that significant benefits can be obtained from the joint use of audio and video analysis methods.

  12. Audio Steganography with Embedded Text

    NASA Astrophysics Data System (ADS)

    Teck Jian, Chua; Chai Wen, Chuah; Rahman, Nurul Hidayah Binti Ab.; Hamid, Isredza Rahmi Binti A.

    2017-08-01

    Audio steganography is about hiding a secret message inside audio. It is a technique used to secure the transmission of secret information or to hide its existence. It may also provide confidentiality for the secret message if the message is encrypted. To date, most steganography software, such as Mp3Stego and DeepSound, uses a block cipher such as the Advanced Encryption Standard or the Data Encryption Standard to encrypt the secret message, which is good security practice. However, the encrypted message may become too long to embed in the audio, causing distortion of the cover audio. Hence, there is a need to encrypt the message with a stream cipher before embedding it into the audio: a stream cipher encrypts bit by bit, whereas a block cipher encrypts fixed-length blocks, which results in a longer output than a stream cipher. Therefore, an audio steganography tool that embeds text encrypted with the Rivest Cipher 4 (RC4) stream cipher is designed, developed, and tested in this project.
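
A minimal sketch of the two pieces involved, assuming 16-bit PCM samples represented as plain integers; function names are invented. RC4 is shown because the project names it, but it is cryptographically broken and appears here for illustration only.

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """RC4 stream cipher (illustrative only; RC4 is broken for real use).

    Encryption and decryption are the same operation: XOR with keystream.
    """
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def embed_lsb(samples: list, payload: bytes) -> list:
    """Hide payload bits in the least significant bit of each sample."""
    bits = [(b >> k) & 1 for b in payload for k in range(7, -1, -1)]
    if len(bits) > len(samples):
        raise ValueError("payload too long for cover audio")
    stego = samples[:]
    for n, bit in enumerate(bits):
        stego[n] = (stego[n] & ~1) | bit
    return stego

def extract_lsb(samples: list, nbytes: int) -> bytes:
    """Recover nbytes of payload from the sample LSBs."""
    out = bytearray()
    for i in range(nbytes):
        byte = 0
        for k in range(8):
            byte = (byte << 1) | (samples[i * 8 + k] & 1)
        out.append(byte)
    return bytes(out)
```

Because each payload byte consumes eight samples but changes each sample by at most one quantization step, the distortion of the cover audio stays near the noise floor, which is the motivation for keeping the ciphertext as short as possible.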

  13. Streaming Audio and Video: New Challenges and Opportunities for Museums.

    ERIC Educational Resources Information Center

    Spadaccini, Jim

    Streaming audio and video present new challenges and opportunities for museums. Streaming media is easier to author and deliver to Internet audiences than ever before; digital video editing is commonplace now that the tools--computers, digital video cameras, and hard drives--are so affordable; the cost of serving video files across the Internet…

  14. Temporal Structure and Complexity Affect Audio-Visual Correspondence Detection

    PubMed Central

    Denison, Rachel N.; Driver, Jon; Ruff, Christian C.

    2013-01-01

    Synchrony between events in different senses has long been considered the critical temporal cue for multisensory integration. Here, using rapid streams of auditory and visual events, we demonstrate how humans can use temporal structure (rather than mere temporal coincidence) to detect multisensory relatedness. We find psychophysically that participants can detect matching auditory and visual streams via shared temporal structure for crossmodal lags of up to 200 ms. Performance on this task reproduced features of past findings based on explicit timing judgments but did not show any special advantage for perfectly synchronous streams. Importantly, the complexity of temporal patterns influences sensitivity to correspondence. Stochastic, irregular streams – with richer temporal pattern information – led to higher audio-visual matching sensitivity than predictable, rhythmic streams. Our results reveal that temporal structure and its complexity are key determinants for human detection of audio-visual correspondence. The distinctive emphasis of our new paradigms on temporal patterning could be useful for studying special populations with suspected abnormalities in audio-visual temporal perception and multisensory integration. PMID:23346067

  15. Cross-Modal Approach for Karaoke Artifacts Correction

    NASA Astrophysics Data System (ADS)

    Yan, Wei-Qi; Kankanhalli, Mohan S.

    In this chapter, we combine adaptive sampling with video analogies (VA) to correct the audio stream in the karaoke environment κ = {κ(t) : κ(t) = (U(t), K(t)), t ∈ (t_s, t_e)}, where t_s and t_e are the start time and end time, respectively, and U(t) is the user multimedia data. We employ multiple streams from the karaoke data K(t) = (K_V(t), K_M(t), K_S(t)), where K_V(t), K_M(t), and K_S(t) are the video, musical accompaniment, and original singer's rendition, respectively, along with the user multimedia data U(t) = (U_A(t), U_V(t)), where U_V(t) is the user video captured with a camera and U_A(t) is the user's rendition of the song. We analyze the audio and video streaming features Ψ(κ) = {Ψ(U(t), K(t))} = {Ψ(U(t)), Ψ(K(t))} = {Ψ_U(t), Ψ_K(t)} to produce the corrected singing, namely the output U'(t), which is made as close as possible to the original singer's rendition. Note that Ψ represents any kind of feature processing.

  17. Robust media processing on programmable power-constrained systems

    NASA Astrophysics Data System (ADS)

    McVeigh, Jeff

    2005-03-01

    To achieve consumer-level quality, media systems must process continuous streams of audio and video data while maintaining exacting tolerances on sampling rate, jitter, synchronization, and latency. While it is relatively straightforward to design fixed-function hardware implementations to satisfy worst-case conditions, there is a growing trend to utilize programmable multi-tasking solutions for media applications. The flexibility of these systems enables support for multiple current and future media formats, which can reduce design costs and time-to-market. This paper provides practical engineering solutions to achieve robust media processing on such systems, with specific attention given to power-constrained platforms. The techniques covered in this article utilize the fundamental concepts of algorithm and software optimization, software/hardware partitioning, stream buffering, hierarchical prioritization, and system resource and power management. A novel enhancement to dynamically adjust processor voltage and frequency based on buffer fullness to reduce system power consumption is examined in detail. The application of these techniques is provided in a case study of a portable video player implementation based on a general-purpose processor running a non real-time operating system that achieves robust playback of synchronized H.264 video and MP3 audio from local storage and streaming over 802.11.
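
The buffer-fullness-driven voltage/frequency enhancement can be illustrated with a toy governor. The frequency levels and thresholds below are invented; the paper's actual platform parameters are not given in the abstract.

```python
# Hypothetical CPU frequency levels (MHz) for a power-constrained device.
FREQ_LEVELS = [200, 400, 600, 800]

def pick_frequency(fullness: float, current: int) -> int:
    """Map decoder output-buffer fullness (0.0-1.0) to a frequency level.

    A full buffer means the decoder is ahead of playback, so the
    processor can slow down and save power; a draining buffer forces a
    speed-up to avoid underflow and dropped frames. The middle band
    provides hysteresis so the governor does not oscillate.
    """
    if fullness < 0.25:
        return FREQ_LEVELS[-1]               # underflow risk: maximum speed
    if fullness < 0.50:
        return max(current, FREQ_LEVELS[2])  # ramp up if currently slower
    if fullness > 0.90:
        return FREQ_LEVELS[0]                # ample headroom: minimum speed
    return current                           # hysteresis band: hold
```

Keeping the decision tied to buffer fullness rather than instantaneous decode time makes the policy robust to the bursty per-frame complexity of H.264 video.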

  18. Eye movements while viewing narrated, captioned, and silent videos

    PubMed Central

    Ross, Nicholas M.; Kowler, Eileen

    2013-01-01

    Videos are often accompanied by narration delivered either by an audio stream or by captions, yet little is known about saccadic patterns while viewing narrated video displays. Eye movements were recorded while viewing video clips with (a) audio narration, (b) captions, (c) no narration, or (d) concurrent captions and audio. A surprisingly large proportion of time (>40%) was spent reading captions even in the presence of a redundant audio stream. Redundant audio did not affect the saccadic reading patterns but did lead to skipping of some portions of the captions and to delays of saccades made into the caption region. In the absence of captions, fixations were drawn to regions with a high density of information, such as the central region of the display, and to regions with high levels of temporal change (actions and events), regardless of the presence of narration. The strong attraction to captions, with or without redundant audio, raises the question of what determines how time is apportioned between captions and video regions so as to minimize information loss. The strategies of apportioning time may be based on several factors, including the inherent attraction of the line of sight to any available text, the moment by moment impressions of the relative importance of the information in the caption and the video, and the drive to integrate visual text accompanied by audio into a single narrative stream. PMID:23457357

  19. Constructing a Streaming Video-Based Learning Forum for Collaborative Learning

    ERIC Educational Resources Information Center

    Chang, Chih-Kai

    2004-01-01

    As web-based courses using videos have become popular in recent years, the issue of managing audio-visual aids has become pertinent. Generally, the contents of audio-visual aids may include a lecture, an interview, a report, or an experiment, which may be transformed into a streaming format capable of making the quality of Internet-based videos…

  20. About subjective evaluation of adaptive video streaming

    NASA Astrophysics Data System (ADS)

    Tavakoli, Samira; Brunnström, Kjell; Garcia, Narciso

    2015-03-01

    The usage of HTTP Adaptive Streaming (HAS) technology by content providers is increasing rapidly. Having the video content available in multiple qualities, HAS allows the quality of the downloaded video to adapt to the current network conditions, providing smooth video playback. However, the time-varying video quality by itself introduces a new type of impairment. The quality adaptation can be done in different ways. In order to find the best adaptation strategy maximizing users' perceptual quality, it is necessary to investigate the subjective perception of adaptation-related impairments. However, the novelty of these impairments and their comparatively long duration make most standardized assessment methodologies ill-suited for studying HAS degradations. Furthermore, in traditional testing methodologies, the quality of the video in audiovisual services is often evaluated separately and not in the presence of audio. Nevertheless, jointly evaluating the audio and the video within a subjective test is a relatively under-explored research field. In this work, we address the research question of determining the appropriate assessment methodology to evaluate sequences with time-varying quality due to adaptation. This was done by studying the influence of different adaptation-related parameters through two subjective experiments using a methodology developed to evaluate long test sequences. In order to study the impact of audio presence on quality assessment by the test subjects, one of the experiments was done in the presence of audio stimuli. The experimental results were subsequently compared with another experiment using the standardized single-stimulus Absolute Category Rating (ACR) methodology.
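
For context, the quality switches whose perception the paper studies are produced by client-side rate adaptation along a bitrate ladder. A minimal throughput-based rule (the ladder values below are invented) looks like:

```python
# Hypothetical bitrate ladder (kbit/s); a real HAS manifest lists its own.
LADDER = [235, 560, 1050, 2350, 4300]

def select_representation(throughput_kbps: float, safety: float = 0.8) -> int:
    """Pick the highest bitrate that fits within a safety margin of the
    measured throughput -- the basic rate-based adaptation whose
    quality switches the subjective tests evaluate."""
    budget = throughput_kbps * safety
    chosen = LADDER[0]          # fall back to the lowest rung
    for rate in LADDER:
        if rate <= budget:
            chosen = rate
    return chosen
```

Each drop in measured throughput moves playback down the ladder, and it is exactly these visible up/down switches that create the "time-varying quality" impairment discussed above.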

  1. Tune in the Net with RealAudio.

    ERIC Educational Resources Information Center

    Buchanan, Larry

    1997-01-01

    Describes how to connect to the RealAudio Web site to download a player that provides sound from Web pages to the computer through streaming technology. Explains hardware and software requirements and provides addresses for other RealAudio Web sites, including weather information and current news. (LRW)

  2. Method and apparatus for obtaining complete speech signals for speech recognition applications

    NASA Technical Reports Server (NTRS)

    Abrash, Victor (Inventor); Cesari, Federico (Inventor); Franco, Horacio (Inventor); George, Christopher (Inventor); Zheng, Jing (Inventor)

    2009-01-01

    The present invention relates to a method and apparatus for obtaining complete speech signals for speech recognition applications. In one embodiment, the method continuously records an audio stream comprising a sequence of frames to a circular buffer. When a user command to commence or terminate speech recognition is received, the method obtains a number of frames of the audio stream occurring before or after the user command in order to identify an augmented audio signal for speech recognition processing. In further embodiments, the method analyzes the augmented audio signal in order to locate starting and ending speech endpoints that bound at least a portion of speech to be processed for recognition. At least one of the speech endpoints is located using a Hidden Markov Model.
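
The continuous-recording idea can be sketched with a fixed-size ring buffer; `collections.deque` with `maxlen` gives the circular behavior directly. Names and sizes here are illustrative, not taken from the patent.

```python
from collections import deque

class FrameRingBuffer:
    """Fixed-size circular buffer of audio frames.

    Continuously recording into such a buffer lets a recognizer reach
    back and prepend frames captured *before* the user command arrived,
    so the start of the utterance is not clipped -- the augmented audio
    signal described in the abstract.
    """
    def __init__(self, capacity: int):
        self._frames = deque(maxlen=capacity)

    def push(self, frame: bytes) -> None:
        self._frames.append(frame)   # oldest frame is dropped when full

    def last(self, n: int) -> list:
        """Return up to the n most recent frames, oldest first."""
        return list(self._frames)[-n:]

ring = FrameRingBuffer(capacity=100)
for i in range(250):             # simulate a continuous audio stream
    ring.push(b"frame-%d" % i)
preroll = ring.last(10)          # frames just before the user command
```

Endpoint detection (the Hidden Markov Model step in the abstract) would then run over the pre-roll plus the newly captured frames to locate the true start and end of speech.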

  3. Applying Spatial Audio to Human Interfaces: 25 Years of NASA Experience

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.; Godfrey, Martine; Miller, Joel D.; Anderson, Mark R.

    2010-01-01

    From the perspective of human factors engineering, the inclusion of spatial audio within a human-machine interface is advantageous from several perspectives. Demonstrated benefits include the ability to monitor multiple streams of speech and non-speech warning tones using a cocktail party advantage, and for aurally-guided visual search. Other potential benefits include the spatial coordination and interaction of multimodal events, and evaluation of new communication technologies and alerting systems using virtual simulation. Many of these technologies were developed at NASA Ames Research Center, beginning in 1985. This paper reviews examples and describes the advantages of spatial sound in NASA-related technologies, including space operations, aeronautics, and search and rescue. The work has involved hardware and software development as well as basic and applied research.

  4. Podcasting by Synchronising PowerPoint and Voice: What Are the Pedagogical Benefits?

    ERIC Educational Resources Information Center

    Griffin, Darren K.; Mitchell, David; Thompson, Simon J.

    2009-01-01

    The purpose of this study was to investigate the efficacy of audio-visual synchrony in podcasting and its possible pedagogical benefits. "Synchrony" in this study refers to the simultaneous playback of audio and video data streams, so that the transitions between presentation slides occur at "lecturer chosen" points in the audio commentary.…

  5. Rapid Development of Orion Structural Test Systems

    NASA Astrophysics Data System (ADS)

    Baker, Dave

    2012-07-01

    NASA is currently validating the Orion spacecraft design for human space flight. Three systems developed by G Systems using hardware and software from National Instruments play an important role in the testing of the new Multi-Purpose Crew Vehicle (MPCV). A custom pressurization and venting system enables engineers to apply pressure inside the test article for measuring strain. A custom data acquisition system synchronizes over 1,800 channels of analog data. This data, along with multiple video and audio streams and calculated data, can be viewed, saved, and replayed in real time on multiple client stations. This paper presents design features and describes how the systems work together in a distributed fashion.

  6. Data streaming in telepresence environments.

    PubMed

    Lamboray, Edouard; Würmlin, Stephan; Gross, Markus

    2005-01-01

    In this paper, we discuss data transmission in telepresence environments for collaborative virtual reality applications. We analyze data streams in the context of networked virtual environments and classify them according to their traffic characteristics. Special emphasis is put on geometry-enhanced (3D) video. We review architectures for real-time 3D video pipelines and derive theoretical bounds on the minimal system latency as a function of the transmission and processing delays. Furthermore, we discuss bandwidth issues of differential update coding for 3D video. In our telepresence system, the blue-c, we use a point-based 3D video technology which allows for differentially encoded 3D representations of human users. While we discuss the considerations which led to the design of our three-stage 3D video pipeline, we also elucidate some critical implementation details regarding the decoupling of acquisition, processing, and rendering frame rates, and audio/video synchronization. Finally, we demonstrate the communication and networking features of the blue-c system in its full deployment. We show how the system can be controlled to cope with processing or networking bottlenecks by adapting the multiple system components, such as audio, application data, and 3D video.

  7. A Scalable Multimedia Streaming Scheme with CBR-Transmission of VBR-Encoded Videos over the Internet

    ERIC Educational Resources Information Center

    Kabir, Md. H.; Shoja, Gholamali C.; Manning, Eric G.

    2006-01-01

    Streaming audio/video contents over the Internet requires large network bandwidth and timely delivery of media data. A streaming session is generally long and also needs a large I/O bandwidth at the streaming server. A streaming server, however, has limited network and I/O bandwidth. For this reason, a streaming server alone cannot scale a…

  8. Learning Across Senses: Cross-Modal Effects in Multisensory Statistical Learning

    PubMed Central

    Mitchel, Aaron D.; Weiss, Daniel J.

    2014-01-01

    It is currently unknown whether statistical learning is supported by modality-general or modality-specific mechanisms. One issue within this debate concerns the independence of learning in one modality from learning in other modalities. In the present study, the authors examined the extent to which statistical learning across modalities is independent by simultaneously presenting learners with auditory and visual streams. After establishing baseline rates of learning for each stream independently, they systematically varied the amount of audiovisual correspondence across 3 experiments. They found that learners were able to segment both streams successfully only when the boundaries of the audio and visual triplets were in alignment. This pattern of results suggests that learners are able to extract multiple statistical regularities across modalities provided that there is some degree of cross-modal coherence. They discuss the implications of their results in light of recent claims that multisensory statistical learning is guided by modality-independent mechanisms. PMID:21574745

  9. Huffman coding in advanced audio coding standard

    NASA Astrophysics Data System (ADS)

    Brzuchalski, Grzegorz

    2012-05-01

    This article presents several hardware architectures of the Advanced Audio Coding (AAC) Huffman noiseless encoder, its optimisations, and a working implementation. Much attention has been paid to minimising the demand for hardware resources, especially memory size. The aim of the design was to produce as short a binary stream as possible within this standard. The Huffman encoder, together with the whole audio-video system, has been implemented in FPGA devices.
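
AAC's noiseless coding stage selects among fixed Huffman codebooks defined by the standard. As a hedged illustration of the underlying principle only, not the AAC tables or the hardware architecture above, a generic Huffman encoder can be sketched in a few lines of Python:

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    # Build a Huffman code table from a symbol sequence.
    # Heap entries are (frequency, tiebreak, tree), where tree is
    # either a symbol or a (left, right) pair of subtrees.
    freq = Counter(symbols)
    heap = [(f, i, s) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate case: one distinct symbol
        return {heap[0][2]: "0"}
    count = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes

data = "abracadabra"
codes = huffman_codes(data)
bitstream = "".join(codes[s] for s in data)
```

Frequent symbols receive shorter codewords, which is exactly how the encoder shortens the binary stream.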

  10. Streaming Media Seminar--Effective Development and Distribution of Streaming Multimedia in Education

    ERIC Educational Resources Information Center

    Mainhart, Robert; Gerraughty, James; Anderson, Kristine M.

    2004-01-01

    Concisely defined, "streaming media" is moving video and/or audio transmitted over the Internet for immediate viewing/listening by an end user. However, at Saint Francis University's Center of Excellence for Remote and Medically Under-Served Areas (CERMUSA), streaming media is approached from a broader perspective. The working definition includes…

  11. Audio stream classification for multimedia database search

    NASA Astrophysics Data System (ADS)

    Artese, M.; Bianco, S.; Gagliardi, I.; Gasparini, F.

    2013-03-01

    Search and retrieval of huge archives of multimedia data is a challenging task. A classification step is often used to reduce the number of entries on which to perform the subsequent search. In particular, when new entries of the database are continuously added, a fast classification based on simple threshold evaluation is desirable. In this work we present a CART-based (Classification And Regression Tree [1]) classification framework for audio streams belonging to multimedia databases. The database considered is the Archive of Ethnography and Social History (AESS) [2], which is mainly composed of popular songs and other audio records describing popular traditions handed down generation by generation, such as traditional fairs and customs. The peculiarities of this database are that it is continuously updated, the audio recordings are acquired in unconstrained environments, and it is difficult for non-expert human users to create ground-truth labels. In our experiments, half of all the available audio files were randomly extracted and used as the training set. The remaining ones were used as the test set. The classifier was trained to distinguish among three classes: speech, music, and song. All the audio files in the dataset had previously been manually labeled into the three classes defined above by domain experts.
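
The "fast classification based on simple threshold evaluation" described above can be illustrated with a hand-built threshold tree. The features and thresholds below are invented for this sketch and are not those of the AESS classifier:

```python
import numpy as np

def features(x, frame_len=400):
    # Two cheap features often used for speech/music discrimination:
    # zero-crossing rate and the variance of per-frame log energy.
    zcr = np.mean(np.abs(np.diff(np.sign(x)))) / 2.0
    frames = x[: len(x) // frame_len * frame_len].reshape(-1, frame_len)
    log_energy = np.log(np.sum(frames**2, axis=1) + 1e-12)
    return zcr, np.var(log_energy)

def cart_classify(zcr, energy_var, t_zcr=0.1, t_var=2.0):
    # A hand-built two-level threshold tree in the spirit of CART:
    # each node is a single comparison, so evaluation is O(depth).
    if zcr > t_zcr:
        return "speech" if energy_var > t_var else "song"
    return "music"
```

In a real CART framework the splits and thresholds would be learned from the labeled training half of the archive rather than set by hand.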

  12. 47 CFR 73.1201 - Station identification.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... offerings. Television and Class A television broadcast stations may make these announcements visually or... multicast audio programming streams, in a manner that appropriately alerts its audience to the fact that it is listening to a digital audio broadcast. No other insertion between the station's call letters and...

  13. All Source Sensor Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    - PNNL, Harold Trease

    2012-10-10

    ASSA is a software application that processes binary data into summarized index tables that can be used to organize features contained within the data. ASSA's index tables can also be used to search for user specified features. ASSA is designed to organize and search for patterns in unstructured binary data streams or archives, such as video, images, audio, and network traffic. ASSA is basically a very general search engine used to search for any pattern in any binary data stream. It has uses in video analytics, image analysis, audio analysis, searching hard-drives, monitoring network traffic, etc.

  14. Data-driven analysis of functional brain interactions during free listening to music and speech.

    PubMed

    Fang, Jun; Hu, Xintao; Han, Junwei; Jiang, Xi; Zhu, Dajiang; Guo, Lei; Liu, Tianming

    2015-06-01

    Natural stimulus functional magnetic resonance imaging (N-fMRI), i.e., fMRI acquired while participants watch video streams or listen to audio streams, has been increasingly used to investigate functional mechanisms of the human brain in recent years. One of the fundamental challenges in functional brain mapping based on N-fMRI is to model the brain's functional responses to continuous, naturalistic and dynamic natural stimuli. To address this challenge, in this paper we present a data-driven approach to exploring functional interactions in the human brain during free listening to music and speech streams. Specifically, we model the brain responses using N-fMRI by measuring the functional interactions on large-scale brain networks with intrinsically established structural correspondence, and perform music and speech classification tasks to guide the systematic identification of consistent and discriminative functional interactions when multiple subjects were listening to music and speech in multiple categories. The underlying premise is that the functional interactions derived from N-fMRI data of multiple subjects should exhibit both consistency and discriminability. Our experimental results show that a variety of brain systems including attention, memory, auditory/language, emotion, and action networks are among the most relevant brain systems involved in classic music, pop music and speech differentiation. Our study provides an alternative approach to investigating the human brain's mechanism in comprehension of complex natural music and speech.

  15. Delivering Instruction via Streaming Media: A Higher Education Perspective.

    ERIC Educational Resources Information Center

    Mortensen, Mark; Schlieve, Paul; Young, Jon

    2000-01-01

    Describes streaming media, an audio/video presentation that is delivered across a network so that it is viewed while being downloaded onto the user's computer, including a continuous stream of video that can be pre-recorded or live. Discusses its use for nontraditional students in higher education and reports on implementation experiences. (LRW)

  16. Remotely supported prehospital ultrasound: A feasibility study of real-time image transmission and expert guidance to aid diagnosis in remote and rural communities.

    PubMed

    Eadie, Leila; Mulhern, John; Regan, Luke; Mort, Alasdair; Shannon, Helen; Macaden, Ashish; Wilson, Philip

    2017-01-01

    Introduction Our aim is to expedite prehospital assessment of remote and rural patients using remotely-supported ultrasound and satellite/cellular communications. In this paradigm, paramedics are remotely-supported ultrasound operators, guided by hospital-based specialists, to record images before receiving diagnostic advice. Technology can support users in areas with little access to medical imaging and suboptimal communications coverage by connecting to multiple cellular networks and/or satellites to stream live ultrasound and audio-video. Methods An ambulance-based demonstrator system captured standard trauma and novel transcranial ultrasound scans from 10 healthy volunteers at 16 locations across the Scottish Highlands. Volunteers underwent brief scanning training before receiving expert guidance via the communications link. Ultrasound images were streamed with an audio/video feed to reviewers for interpretation. Two sessions were transmitted via satellite and 21 used cellular networks. Reviewers rated image and communication quality, and their utility for diagnosis. Transmission latency and bandwidth were recorded, and effects of scanner and reviewer experience were assessed. Results Appropriate views were provided in 94% of the simulated trauma scans. The mean upload rate was 835/150 kbps and mean latency was 114/2072 ms for cellular and satellite networks, respectively. Scanning experience had a significant impact on time to achieve a diagnostic image, and review of offline scans required significantly less time than live-streamed scans. Discussion This prehospital ultrasound system could facilitate early diagnosis and streamlining of treatment pathways for remote emergency patients, being particularly applicable in rural areas worldwide with poor communications infrastructure and extensive transport times.

  17. A digital audio/video interleaving system. [for Shuttle Orbiter

    NASA Technical Reports Server (NTRS)

    Richards, R. W.

    1978-01-01

    A method of interleaving an audio signal with its associated video signal for simultaneous transmission or recording, and the subsequent separation of the two signals, is described. Comparisons are made between the new audio signal interleaving system and the Skylab PAM audio/video interleaving system, pointing out improvements gained by using the digital audio/video interleaving system. It was found that the digital technique is the simplest, most effective and most reliable method for interleaving audio and/or other types of data into the video signal for the Shuttle Orbiter application. Details of the design of a multiplexer capable of accommodating two basic data channels, each consisting of a single 31.5-kb/s digital bit stream, are given. An adaptive slope delta modulation system is introduced to digitize audio signals, producing a high immunity of word intelligibility to channel errors, primarily due to the robust nature of the delta-modulation algorithm.
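
The adaptive slope delta modulation idea, one bit per sample with the step size grown on runs of identical bits and shrunk otherwise, can be sketched as follows. The adaptation constants are illustrative, not those of the Shuttle Orbiter design:

```python
import numpy as np

def asdm_encode(x, step0=0.01, gain=1.5, step_min=0.001, step_max=0.5):
    # Adaptive-slope delta modulation: transmit one bit per sample
    # (is the input above or below the running estimate?), growing
    # the step on runs of identical bits and shrinking it otherwise.
    bits, est, step, prev = [], 0.0, step0, 0
    for s in x:
        b = 1 if s >= est else 0
        bits.append(b)
        step = min(step * gain, step_max) if b == prev else max(step / gain, step_min)
        est += step if b else -step
        prev = b
    return bits

def asdm_decode(bits, step0=0.01, gain=1.5, step_min=0.001, step_max=0.5):
    # The decoder mirrors the encoder's step adaptation exactly,
    # so the single bitstream suffices to reconstruct the waveform.
    out, est, step, prev = [], 0.0, step0, 0
    for b in bits:
        step = min(step * gain, step_max) if b == prev else max(step / gain, step_min)
        est += step if b else -step
        out.append(est)
        prev = b
    return np.array(out)
```

Because each sample costs exactly one bit, a 31.5-kb/s channel carries 31,500 audio samples per second, and a single bit error only perturbs the step adaptation briefly, which is the robustness property the abstract refers to.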

  18. Multimodal Speaker Diarization.

    PubMed

    Noulas, A; Englebienne, G; Krose, B J A

    2012-01-01

    We present a novel probabilistic framework that fuses information coming from the audio and video modality to perform speaker diarization. The proposed framework is a Dynamic Bayesian Network (DBN) that is an extension of a factorial Hidden Markov Model (fHMM) and models the people appearing in an audiovisual recording as multimodal entities that generate observations in the audio stream, the video stream, and the joint audiovisual space. The framework is very robust to different contexts, makes no assumptions about the location of the recording equipment, and does not require labeled training data as it acquires the model parameters using the Expectation Maximization (EM) algorithm. We apply the proposed model to two meeting videos and a news broadcast video, all of which come from publicly available data sets. The results acquired in speaker diarization are in favor of the proposed multimodal framework, which outperforms the single modality analysis results and improves over the state-of-the-art audio-based speaker diarization.

  19. Combining Live Video and Audio Broadcasting, Synchronous Chat, and Asynchronous Open Forum Discussions in Distance Education

    ERIC Educational Resources Information Center

    Teng, Tian-Lih; Taveras, Marypat

    2004-01-01

    This article outlines the evolution of a unique distance education program that began as a hybrid--combining face-to-face instruction with asynchronous online teaching--and evolved to become an innovative combination of synchronous education using live streaming video, audio, and chat over the Internet, blended with asynchronous online discussions…

  20. Real-Time Transmission and Storage of Video, Audio, and Health Data in Emergency and Home Care Situations

    NASA Astrophysics Data System (ADS)

    Barbieri, Ivano; Lambruschini, Paolo; Raggio, Marco; Stagnaro, Riccardo

    2007-12-01

    The increase in the availability of bandwidth for wireless links, network integration, and affordable computational power on fixed and mobile platforms now allows the handling of audio and video data at a quality suitable for medical applications. These information streams can support both continuous monitoring and emergency situations. According to this scenario, the authors have developed and implemented the mobile communication system described in this paper. The system is based on the ITU-T H.323 multimedia terminal recommendation, suitable for real-time data/video/audio and telemedical applications. The video and audio codecs, respectively H.264 and G.723.1, were implemented and optimized to obtain high performance on the system's target processors. Offline media streaming storage and retrieval functionalities were supported by integrating a relational database into the hospital central system. The system is based on low-cost consumer technologies such as general packet radio service (GPRS) and wireless local area network (WLAN or WiFi) for low-band data/video transmission. Implementation and testing were carried out for medical emergency and telemedicine applications. In this paper, the emergency case study is described.

  1. Securing Digital Audio using Complex Quadratic Map

    NASA Astrophysics Data System (ADS)

    Suryadi, MT; Satria Gunawan, Tjandra; Satria, Yudi

    2018-03-01

    In this digital era, exchanging data is common and easy, which makes it vulnerable to attack and manipulation by unauthorized parties. One data type that is vulnerable to attack is digital audio. We therefore need a data-securing method that is both robust and fast. One method that matches all of these criteria is securing the data using a chaos function. The chaos function used in this research is the complex quadratic map (CQM). For certain parameter values, the key stream generated by the CQM function passes all 15 NIST tests, meaning that the key stream generated using this CQM is proven to be random. In addition, samples of the encrypted digital sound, when tested with a goodness-of-fit test, are shown to be uniform, so securing digital audio using this method is not vulnerable to frequency-analysis attack. The key space is very large, about 8.1×10^31 possible keys, and the key sensitivity is very small, about 10^-10, so this method is also not vulnerable to brute-force attack. Finally, the processing speed for both encryption and decryption is on average about 450 times faster than the digital audio's duration.
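
A minimal sketch of keystream encryption driven by a complex quadratic map. The parameter c = -1.8 (chosen for a bounded chaotic orbit) and the byte-extraction rule are illustrative assumptions; the paper's actual parameter values and key derivation are not reproduced here:

```python
import numpy as np

def cqm_keystream(n, c=complex(-1.8, 0.0), z0=complex(0.3, 0.0)):
    # Iterate the complex quadratic map z -> z^2 + c and harvest one
    # key byte per iteration from the low-order digits of |z|.
    # Parameter values here are illustrative, not from the paper.
    z, key = z0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        z = z * z + c
        key[i] = int(abs(z) * 1e6) % 256
    return key

def cqm_crypt(data, c=complex(-1.8, 0.0), z0=complex(0.3, 0.0)):
    # XOR is its own inverse, so the same call with the same key
    # (c, z0) both encrypts and decrypts the audio byte stream.
    samples = np.frombuffer(data, dtype=np.uint8)
    return (samples ^ cqm_keystream(len(samples), c, z0)).tobytes()
```

The key sensitivity claim corresponds to the chaotic property that a tiny change in c or z0 yields a completely different keystream after a few iterations.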

  2. SNR-adaptive stream weighting for audio-MES ASR.

    PubMed

    Lee, Ki-Seung

    2008-08-01

    Myoelectric signals (MESs) from the speaker's mouth region have been successfully shown to improve the noise robustness of automatic speech recognizers (ASRs), thus promising to extend their usability in implementing noise-robust ASR. In the recognition system presented herein, extracted audio and facial MES features were integrated by a decision fusion method, where the likelihood score of the audio-MES observation vector was given by a linear combination of class-conditional observation log-likelihoods of two classifiers, using appropriate weights. We developed a weighting process adaptive to SNRs. The main objective of the paper involves determining the optimal SNR classification boundaries and constructing a set of optimum stream weights for each SNR class. These two parameters were determined by a method based on a maximum mutual information criterion. Acoustic and facial MES data were collected from five subjects, using a 60-word vocabulary. Four types of acoustic noise, including babble, car, aircraft, and white noise, were acoustically added to clean speech signals with SNR ranging from -14 to 31 dB. The classification accuracy of the audio ASR was as low as 25.5%, whereas that of the MES ASR was 85.2%. The classification accuracy could be further improved by employing the proposed audio-MES weighting method, reaching 89.4% in the case of babble noise. A similar result was also found for the other types of noise.
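
The decision-fusion rule described above, a linear combination of class-conditional log-likelihoods with an SNR-dependent weight, can be sketched as follows. The SNR class boundaries and per-class weights below are hand-picked placeholders, whereas the paper derives both from a maximum mutual information criterion:

```python
import numpy as np

# Illustrative SNR class boundaries (dB) and per-class audio-stream
# weights; at low SNR the fusion trusts the MES classifier more.
SNR_BOUNDS = [-5.0, 5.0, 15.0]          # defines 4 SNR classes
AUDIO_WEIGHTS = [0.1, 0.3, 0.6, 0.9]    # one weight per class

def audio_weight(snr_db):
    # Map an SNR estimate to its class and return that class's weight.
    return AUDIO_WEIGHTS[int(np.searchsorted(SNR_BOUNDS, snr_db))]

def fused_scores(ll_audio, ll_mes, snr_db):
    # Decision fusion: per-word log-likelihoods from the two
    # classifiers, combined linearly with an SNR-dependent weight.
    w = audio_weight(snr_db)
    return w * np.asarray(ll_audio) + (1.0 - w) * np.asarray(ll_mes)
```

At -10 dB the fused decision follows the MES stream; at 25 dB it follows the audio stream, mirroring the accuracy gap reported in the abstract.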

  3. A Bit Stream Scalable Speech/Audio Coder Combining Enhanced Regular Pulse Excitation and Parametric Coding

    NASA Astrophysics Data System (ADS)

    Riera-Palou, Felip; den Brinker, Albertus C.

    2007-12-01

    This paper introduces a new audio and speech broadband coding technique based on the combination of a pulse excitation coder and a standardized parametric coder, namely, the MPEG-4 high-quality parametric coder. After presenting a series of enhancements to regular pulse excitation (RPE) to make it suitable for the modeling of broadband signals, it is shown how pulse and parametric codings complement each other and how they can be merged to yield a layered bit stream scalable coder able to operate at different points in the quality/bit-rate plane. The performance of the proposed coder is evaluated in a listening test. The major result is that the extra functionality of bit stream scalability does not come at the price of reduced performance, since the coder is competitive with standardized coders (MP3, AAC, SSC).

  4. 4DCAPTURE: a general purpose software package for capturing and analyzing two- and three-dimensional motion data acquired from video sequences

    NASA Astrophysics Data System (ADS)

    Walton, James S.; Hodgson, Peter; Hallamasek, Karen; Palmer, Jake

    2003-07-01

    4DVideo is creating a general purpose capability for capturing and analyzing kinematic data from video sequences in near real-time. The core element of this capability is a software package designed for the PC platform. The software ("4DCapture") is designed to capture and manipulate customized AVI files that can contain a variety of synchronized data streams -- including audio, video, centroid locations -- and signals acquired from more traditional sources (such as accelerometers and strain gauges). The code includes simultaneous capture or playback of multiple video streams, and linear editing of the images (together with the ancillary data embedded in the files). Corresponding landmarks seen from two or more views are matched automatically, and photogrammetric algorithms permit multiple landmarks to be tracked in two and three dimensions -- with or without lens calibrations. Trajectory data can be processed within the main application or they can be exported to a spreadsheet where they can be processed or passed along to a more sophisticated, stand-alone, data analysis application. Previous attempts to develop such applications for high-speed imaging have been limited in their scope, or by the complexity of the application itself. 4DVideo has devised a friendly ("FlowStack") user interface that assists the end-user to capture and treat image sequences in a natural progression. 4DCapture employs the AVI 2.0 standard and DirectX technology, which effectively eliminates the file size limitations found in older applications. In early tests, 4DVideo has streamed three RS-170 video sources to disk for more than an hour without loss of data. At this time, the software can acquire video sequences in three ways: (1) directly, from up to three hard-wired cameras supplying RS-170 (monochrome) signals; (2) directly, from a single camera or video recorder supplying an NTSC (color) signal; and (3) by importing existing video streams in the AVI 1.0 or AVI 2.0 formats.
The latter is particularly useful for high-speed applications where the raw images are often captured and stored by the camera before being downloaded. Provision has been made to synchronize data acquired from any combination of these video sources using audio and visual "tags." Additional "front-ends," designed for digital cameras, are anticipated.

  5. Feature Representations for Neuromorphic Audio Spike Streams.

    PubMed

    Anumula, Jithendar; Neil, Daniel; Delbruck, Tobi; Liu, Shih-Chii

    2018-01-01

    Event-driven neuromorphic spiking sensors such as the silicon retina and the silicon cochlea encode the external sensory stimuli as asynchronous streams of spikes across different channels or pixels. Combining state-of-the-art deep neural networks with the asynchronous outputs of these sensors has produced encouraging results on some datasets but remains challenging. While the lack of effective spiking networks to process the spike streams is one reason, the other reason is that the pre-processing methods required to convert the spike streams to frame-based features needed for the deep networks still require further investigation. This work investigates the effectiveness of synchronous and asynchronous frame-based features generated using spike count and constant event binning, in combination with the use of a recurrent neural network, for solving a classification task on the N-TIDIGITS18 dataset. This spike-based dataset consists of recordings from the Dynamic Audio Sensor, a spiking silicon cochlea sensor, in response to the TIDIGITS audio dataset. We also propose a new pre-processing method which applies an exponential kernel on the output cochlea spikes so that the interspike timing information is better preserved. The results from the N-TIDIGITS18 dataset show that the exponential features perform better than the spike count features, with over 91% accuracy on the digit classification task. This accuracy corresponds to an improvement of at least 2.5% over the use of spike count features, establishing a new state of the art for this dataset.
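
A hedged sketch of the exponential-kernel pre-processing idea: each channel's feature decays exponentially between spikes, is incremented at each spike, and is sampled once per frame, so inter-spike timing survives in the frame values. The frame size and time constant here are arbitrary, not the paper's settings:

```python
import numpy as np

def exp_kernel_frames(spike_times, spike_channels, n_channels,
                      frame_dt=0.005, tau=0.01, duration=None):
    # Frame-based features from a spike stream: per channel, a value
    # that decays as exp(-dt/tau) and is bumped by 1 at every spike,
    # sampled once per frame boundary. Unlike a plain spike count,
    # two spikes close together score higher than two far apart.
    if duration is None:
        duration = spike_times[-1] + frame_dt
    n_frames = int(np.ceil(duration / frame_dt))
    frames = np.zeros((n_frames, n_channels))
    state = np.zeros(n_channels)
    last_t, idx = 0.0, 0
    for f in range(n_frames):
        t_end = (f + 1) * frame_dt
        while idx < len(spike_times) and spike_times[idx] < t_end:
            t, ch = spike_times[idx], spike_channels[idx]
            state *= np.exp(-(t - last_t) / tau)   # decay to spike time
            state[ch] += 1.0                        # bump spiking channel
            last_t = t
            idx += 1
        # sample the decayed state at the frame boundary
        frames[f] = state * np.exp(-(t_end - last_t) / tau)
    return frames
```

The resulting (frames × channels) matrix is the kind of dense input a recurrent network can consume directly.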

  6. Feature Representations for Neuromorphic Audio Spike Streams

    PubMed Central

    Anumula, Jithendar; Neil, Daniel; Delbruck, Tobi; Liu, Shih-Chii

    2018-01-01

    Event-driven neuromorphic spiking sensors such as the silicon retina and the silicon cochlea encode the external sensory stimuli as asynchronous streams of spikes across different channels or pixels. Combining state-of-the-art deep neural networks with the asynchronous outputs of these sensors has produced encouraging results on some datasets but remains challenging. While the lack of effective spiking networks to process the spike streams is one reason, the other reason is that the pre-processing methods required to convert the spike streams to frame-based features needed for the deep networks still require further investigation. This work investigates the effectiveness of synchronous and asynchronous frame-based features generated using spike count and constant event binning, in combination with the use of a recurrent neural network, for solving a classification task on the N-TIDIGITS18 dataset. This spike-based dataset consists of recordings from the Dynamic Audio Sensor, a spiking silicon cochlea sensor, in response to the TIDIGITS audio dataset. We also propose a new pre-processing method which applies an exponential kernel on the output cochlea spikes so that the interspike timing information is better preserved. The results from the N-TIDIGITS18 dataset show that the exponential features perform better than the spike count features, with over 91% accuracy on the digit classification task. This accuracy corresponds to an improvement of at least 2.5% over the use of spike count features, establishing a new state of the art for this dataset. PMID:29479300

  7. Integrated approach to multimodal media content analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Kuo, C.-C. Jay

    1999-12-01

    In this work, we present a system for the automatic segmentation, indexing and retrieval of audiovisual data based on the combination of audio, visual and textual content analysis. The video stream is demultiplexed into audio, image and caption components. Then, a semantic segmentation of the audio signal based on audio content analysis is conducted, and each segment is indexed as one of the basic audio types. The image sequence is segmented into shots based on visual information analysis, and keyframes are extracted from each shot. Meanwhile, keywords are detected from the closed caption. Index tables are designed for both linear and non-linear access to the video. It is shown by experiments that the proposed methods for multimodal media content analysis are effective, and that the integrated framework achieves satisfactory results for video information filtering and retrieval.

  8. Online Class Review: Using Streaming-Media Technology

    ERIC Educational Resources Information Center

    Loudon, Marc; Sharp, Mark

    2006-01-01

    We present an automated system that allows students to replay both audio and video from a large nonmajors' organic chemistry class as streaming RealMedia. Once established, this system requires no technical intervention and is virtually transparent to the instructor. This gives students access to online class review at any time. Assessment has…

  9. The Function of Consciousness in Multisensory Integration

    ERIC Educational Resources Information Center

    Palmer, Terry D.; Ramsey, Ashley K.

    2012-01-01

    The function of consciousness was explored in two contexts of audio-visual speech, cross-modal visual attention guidance and McGurk cross-modal integration. Experiments 1, 2, and 3 utilized a novel cueing paradigm in which two different flash suppressed lip-streams cooccured with speech sounds matching one of these streams. A visual target was…

  10. Video Streaming in Online Learning

    ERIC Educational Resources Information Center

    Hartsell, Taralynn; Yuen, Steve Chi-Yin

    2006-01-01

    The use of video in teaching and learning is a common practice in education today. As learning online becomes more of a common practice in education, streaming video and audio will play a bigger role in delivering course materials to online learners. This form of technology brings courses alive by allowing online learners to use their visual and…

  11. Multiple Frequency Audio Signal Communication as a Mechanism for Neurophysiology and Video Data Synchronization

    PubMed Central

    Topper, Nicholas C.; Burke, S.N.; Maurer, A.P.

    2014-01-01

    BACKGROUND Current methods for aligning neurophysiology and video data are either prepackaged, requiring the additional purchase of a software suite, or use a blinking LED with a stationary pulse-width and frequency. These methods lack significant user interface for adaptation, are expensive, or risk a misalignment of the two data streams. NEW METHOD A cost-effective means to obtain high-precision alignment of behavioral and neurophysiological data is obtained by generating an audio-pulse embedded with two domains of information, a low-frequency binary-counting signal and a high, randomly changing frequency. This enabled the derivation of temporal information while maintaining enough entropy in the system for algorithmic alignment. RESULTS The sample-to-frame index constructed using the audio input correlation method described in this paper enables video and data acquisition to be aligned at a sub-frame level of precision. COMPARISONS WITH EXISTING METHOD Traditionally, a synchrony pulse is recorded on-screen via a flashing diode. The higher sampling rate of the audio input of the camcorder enables the timing of an event to be detected with greater precision. CONCLUSIONS While on-line analysis and synchronization using specialized equipment may be the ideal situation in some cases, the method presented in the current paper is a viable, low-cost alternative, and gives the flexibility to interface with custom off-line analysis tools. Moreover, the ease of constructing and implementing this set-up makes it applicable to a wide variety of applications that require video recording. PMID:25256648

  12. Multiple frequency audio signal communication as a mechanism for neurophysiology and video data synchronization.

    PubMed

    Topper, Nicholas C; Burke, Sara N; Maurer, Andrew Porter

    2014-12-30

    Current methods for aligning neurophysiology and video data are either prepackaged, requiring the additional purchase of a software suite, or use a blinking LED with a stationary pulse-width and frequency. These methods lack significant user interface for adaptation, are expensive, or risk a misalignment of the two data streams. A cost-effective means to obtain high-precision alignment of behavioral and neurophysiological data is obtained by generating an audio-pulse embedded with two domains of information, a low-frequency binary-counting signal and a high, randomly changing frequency. This enabled the derivation of temporal information while maintaining enough entropy in the system for algorithmic alignment. The sample-to-frame index constructed using the audio input correlation method described in this paper enables video and data acquisition to be aligned at a sub-frame level of precision. Traditionally, a synchrony pulse is recorded on-screen via a flashing diode. The higher sampling rate of the audio input of the camcorder enables the timing of an event to be detected with greater precision. While on-line analysis and synchronization using specialized equipment may be the ideal situation in some cases, the method presented in the current paper is a viable, low-cost alternative, and gives the flexibility to interface with custom off-line analysis tools. Moreover, the ease of constructing and implementing this set-up makes it applicable to a wide variety of applications that require video recording. Copyright © 2014 Elsevier B.V. All rights reserved.
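
The alignment idea, locating an entropy-rich audio tag in the camcorder track by cross-correlation, can be sketched as below. The tone design (a randomly chosen frequency per segment) follows the spirit of the method with invented parameters; the paper's binary-counting component is omitted:

```python
import numpy as np

def make_sync_tone(n_segments=20, seg_len=256, sr=48000, seed=7):
    # Sync tone whose frequency changes randomly per segment: the
    # randomness gives the waveform enough entropy that its
    # autocorrelation has a single sharp peak. (Illustrative sketch.)
    rng = np.random.default_rng(seed)
    freqs = rng.uniform(1000, 6000, n_segments)
    t = np.arange(seg_len) / sr
    return np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])

def find_offset(recording, tone):
    # Align the camcorder audio track to the reference tone by
    # locating the cross-correlation peak; precision at the audio
    # sample rate is much finer than one video frame.
    corr = np.correlate(recording, tone, mode="valid")
    return int(np.argmax(corr))
```

Once the tone's sample offset is known, dividing by samples-per-frame yields the sample-to-frame index used to align the two streams.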

  13. 47 CFR 73.402 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Digital Audio Broadcasting § 73.402 Definitions. (a) DAB. Digital audio broadcast stations are those radio... into multiple channels for additional audio programming uses. (g) Datacasting. Subdividing the digital...

  14. 47 CFR 73.402 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Digital Audio Broadcasting § 73.402 Definitions. (a) DAB. Digital audio broadcast stations are those radio... into multiple channels for additional audio programming uses. (g) Datacasting. Subdividing the digital...

  15. 47 CFR 73.402 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Digital Audio Broadcasting § 73.402 Definitions. (a) DAB. Digital audio broadcast stations are those radio... into multiple channels for additional audio programming uses. (g) Datacasting. Subdividing the digital...

  16. 47 CFR 73.402 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Digital Audio Broadcasting § 73.402 Definitions. (a) DAB. Digital audio broadcast stations are those radio... into multiple channels for additional audio programming uses. (g) Datacasting. Subdividing the digital...

  17. Effects of Exposure to Advertisements on Audience Impressions

    NASA Astrophysics Data System (ADS)

    Hasegawa, Hiroshi; Sato, Mie; Kasuga, Masao; Nagao, Yoshihide; Shono, Toru; Norose, Yuka; Oku, Ritsuya; Nogami, Akira; Miyazawa, Yoshitaka

    This study investigated the effects of listening to and/or watching commercial messages (CMs) on audience impressions. We carried out TV-advertisement presentation experiments under audio-only, video-only, and audio-video conditions. As a result, we confirmed the following two effects: the image-multiple effect, in which the audience brings to mind various images that are not directly expressed in the content, and the marking-up effect, in which the audience concentrates on some images that are directly expressed in the content. The image-multiple effect, in particular, appeared strongly under the audio-only condition. Next, we investigated changes in the following seven subjective responses after exposure to advertisements under audio-only and audio-video conditions: usage image, experience, familiarity, exclusiveness, feeling at home, affection, and willingness to buy. As a result, we found that the image-multiple effect became stronger as the evaluation scores of the responses increased.

  18. Audio Spectrogram Representations for Processing with Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Wyse, L.

    2017-05-01

    One of the decisions that arise when designing a neural network for any application is how the data should be represented in order to be presented to, and possibly generated by, a neural network. For audio, the choice is less obvious than it seems to be for visual images, and a variety of representations have been used for different applications including the raw digitized sample stream, hand-crafted features, machine discovered features, MFCCs and variants that include deltas, and a variety of spectral representations. This paper reviews some of these representations and issues that arise, focusing particularly on spectrograms for generating audio using neural networks for style transfer.
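
A minimal log-magnitude spectrogram of the kind discussed, computed with a plain Hann-windowed short-time Fourier transform; the FFT size and hop length are illustrative choices:

```python
import numpy as np

def spectrogram(x, n_fft=512, hop=128):
    # Log-magnitude STFT spectrogram: Hann-windowed frames, one real
    # FFT per frame; the 2-D time-frequency image typically fed to a
    # convolutional network.
    window = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))
    return np.log1p(mag).T  # shape: (n_fft // 2 + 1, n_frames)
```

A 1 kHz tone sampled at 16 kHz lands in bin 1000 * 512 / 16000 = 32, illustrating how frequency content maps to rows of the image.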

  19. Talker variability in audio-visual speech perception

    PubMed Central

    Heald, Shannon L. M.; Nusbaum, Howard C.

    2014-01-01

    A change in talker is a change in the context for the phonetic interpretation of acoustic patterns of speech. Different talkers have different mappings between acoustic patterns and phonetic categories, and listeners need to adapt to these differences. Despite this complexity, listeners are adept at comprehending speech in multiple-talker contexts, albeit at a slight but measurable performance cost (e.g., slower recognition). So far, this talker variability cost has been demonstrated only in audio-only speech. Other research in single-talker contexts has shown, however, that when listeners are able to see a talker's face, speech recognition is improved under adverse listening conditions (e.g., noise or distortion) that can increase uncertainty in the mapping between acoustic patterns and phonetic categories. Does seeing a talker's face reduce the cost of word recognition in multiple-talker contexts? We used a speeded word-monitoring task in which listeners make quick judgments about target word recognition in single- and multiple-talker contexts. Results show faster recognition performance in single-talker conditions compared to multiple-talker conditions for both audio-only and audio-visual speech. However, recognition time in a multiple-talker context was slower in the audio-visual condition compared to the audio-only condition. These results suggest that seeing a talker's face during speech perception may slow recognition by increasing the importance of talker identification, signaling to the listener that a change in talker has occurred. PMID:25076919

  1. Summarizing Audiovisual Contents of a Video Program

    NASA Astrophysics Data System (ADS)

    Gong, Yihong

    2003-12-01

    In this paper, we focus on video programs intended to disseminate information and knowledge, such as news, documentaries, and seminars, and present an audiovisual summarization system that summarizes the audio and visual contents of a given video separately and then integrates the two summaries through a partial alignment. The audio summary is created by selecting the spoken sentences that best present the main content of the speech, while the visual summary is created by eliminating duplicates/redundancies and preserving visually rich contents in the image stream. The alignment operation aims to synchronize each spoken sentence in the audio summary with its corresponding speaker's face while preserving the rich content in the visual summary. A bipartite-graph-based audiovisual alignment algorithm is developed to efficiently find the best alignment solution that satisfies these requirements. With the proposed system, we strive to produce a video summary that (1) provides a natural visual and audio content overview, and (2) maximizes the coverage of both the audio and visual contents of the original video without sacrificing either of them.
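
    The bipartite alignment step can be sketched with an off-the-shelf maximum-weight matching; the affinity matrix below is invented for illustration and is not the paper's actual scoring function.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical affinity between 3 spoken sentences (rows) and 4 visual
# segments (columns); higher means the sentence's speaker likely appears there.
affinity = np.array([
    [0.9, 0.1, 0.2, 0.0],
    [0.2, 0.8, 0.1, 0.3],
    [0.0, 0.3, 0.1, 0.7],
])

# Maximum-weight bipartite matching: each sentence is paired with at most
# one visual segment (negate because the solver minimizes total cost).
rows, cols = linear_sum_assignment(-affinity)
alignment = dict(zip(rows.tolist(), cols.tolist()))
print(alignment)   # {0: 0, 1: 1, 2: 3}
```

    This is a partial alignment in the abstract's sense: segments with no matching sentence stay in the visual summary untouched.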

  2. Structuring Broadcast Audio for Information Access

    NASA Astrophysics Data System (ADS)

    Gauvain, Jean-Luc; Lamel, Lori

    2003-12-01

    One rapidly expanding application area for state-of-the-art speech recognition technology is the automatic processing of broadcast audiovisual data for information access. Since much of the linguistic information is found in the audio channel, speech recognition is a key enabling technology which, when combined with information retrieval techniques, can be used for searching large audiovisual document collections. Audio indexing must take into account the specificities of audio data such as needing to deal with the continuous data stream and an imperfect word transcription. Other important considerations are dealing with language specificities and facilitating language portability. At Laboratoire d'Informatique pour la Mécanique et les Sciences de l'Ingénieur (LIMSI), broadcast news transcription systems have been developed for seven languages: English, French, German, Mandarin, Portuguese, Spanish, and Arabic. The transcription systems have been integrated into prototype demonstrators for several application areas such as audio data mining, structuring audiovisual archives, selective dissemination of information, and topic tracking for media monitoring. As examples, this paper addresses the spoken document retrieval and topic tracking tasks.

  3. Neural network retuning and neural predictors of learning success associated with cello training.

    PubMed

    Wollman, Indiana; Penhune, Virginia; Segado, Melanie; Carpentier, Thibaut; Zatorre, Robert J

    2018-06-26

    The auditory and motor neural systems are closely intertwined, enabling people to carry out tasks such as playing a musical instrument whose mapping between action and sound is extremely sophisticated. While the dorsal auditory stream has been shown to mediate these audio-motor transformations, little is known about how such mapping emerges with training. Here, we use longitudinal training on a cello as a model for brain plasticity during the acquisition of specific complex skills, including continuous and many-to-one audio-motor mapping, and we investigate individual differences in learning. We trained participants with no musical background to play on a specially designed MRI-compatible cello and scanned them before and after 1 and 4 wk of training. Activation of the auditory-to-motor dorsal cortical stream emerged rapidly during the training and was similarly activated during passive listening and cello performance of trained melodies. This network activation was independent of performance accuracy and therefore appears to be a prerequisite of music playing. In contrast, greater recruitment of regions involved in auditory encoding and motor control over the training was related to better musical proficiency. Additionally, pre-supplementary motor area activity and its connectivity with the auditory cortex during passive listening before training was predictive of final training success, revealing the integrative function of this network in auditory-motor information processing. Together, these results clarify the critical role of the dorsal stream and its interaction with auditory areas in complex audio-motor learning.

  4. StreaMorph: A Case for Synthesizing Energy-Efficient Adaptive Programs Using High-Level Abstractions

    DTIC Science & Technology

    2013-08-12

    technique when switching from using eight cores to one core. 1. Introduction Real-time streaming of media data is growing in popularity. This includes...both capture and processing of real-time video and audio, and delivery of video and audio from servers; recent usage numbers show over 800 million...source of data, when that source is a real-time source, and it is generally not necessary to get ahead of the sink. Even with real-time sources and sinks

  5. Using online handwriting and audio streams for mathematical expressions recognition: a bimodal approach

    NASA Astrophysics Data System (ADS)

    Medjkoune, Sofiane; Mouchère, Harold; Petitrenaud, Simon; Viard-Gaudin, Christian

    2013-01-01

    The work reported in this paper concerns the recognition of mathematical expressions, a task known to be very hard. We propose to alleviate its difficulties by taking into account two complementary modalities: handwriting and audio. To combine the signals coming from the two modalities, various fusion methods are explored. Performance evaluated on the HAMEX dataset shows a significant improvement over a single-modality (handwriting-based) system.
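
    One simple way to combine the two modalities, in the spirit of the score-level fusion methods the paper explores (though not necessarily the exact rule it uses), is a weighted product of per-symbol recognition scores:

```python
# Per-symbol recognition scores from each modality (made-up numbers: "x" and
# the multiplication sign are hard to tell apart from handwriting alone).
hw = {"x": 0.5, "times": 0.5}
audio = {"x": 0.9, "times": 0.1}

def fuse(hw_scores, audio_scores, alpha=0.7):
    """Weighted-product fusion of two score dictionaries, renormalized."""
    symbols = set(hw_scores) | set(audio_scores)
    fused = {s: hw_scores.get(s, 1e-6) ** alpha * audio_scores.get(s, 1e-6) ** (1 - alpha)
             for s in symbols}
    total = sum(fused.values())
    return {s: v / total for s, v in fused.items()}

fused = fuse(hw, audio)
best = max(fused, key=fused.get)
print(best)   # "x": the audio modality resolves the handwriting ambiguity
```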

  6. Robust Radio Broadcast Monitoring Using a Multi-Band Spectral Entropy Signature

    NASA Astrophysics Data System (ADS)

    Camarena-Ibarrola, Antonio; Chávez, Edgar; Tellez, Eric Sadit

    Monitoring broadcast media content has received much attention lately from both academia and industry, owing to the technical challenge involved and its economic importance (e.g., in advertising). The problem poses a unique challenge from the pattern-recognition point of view because a very high recognition rate is needed under non-ideal conditions. The task consists of comparing a short audio sequence (the commercial ad) with a long audio stream (the broadcast), searching for matches.
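
    The matching problem can be sketched as follows: compute a per-frame, per-band spectral entropy signature for both the ad and the stream, then slide the ad's signature along the stream's and keep the offset with the smallest distance. The frame size, band count, and distance measure below are illustrative choices, not those of the cited system.

```python
import numpy as np

def band_entropy_signature(x, n_fft=256, hop=128, n_bands=4):
    """Per-frame spectral entropy in a few frequency bands: (frames, bands)."""
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    sig = np.empty((n_frames, n_bands))
    for i in range(n_frames):
        spec = np.abs(np.fft.rfft(x[i * hop : i * hop + n_fft] * win)) ** 2
        for b, band in enumerate(np.array_split(spec, n_bands)):
            p = band / (band.sum() + 1e-12)
            sig[i, b] = -np.sum(p * np.log2(p + 1e-12))   # Shannon entropy
    return sig

rng = np.random.default_rng(0)
broadcast = rng.standard_normal(8000)        # the long audio stream
ad = broadcast[3072:3072 + 2048]             # the ad occurs at sample 3072

sig_stream = band_entropy_signature(broadcast)
sig_ad = band_entropy_signature(ad)
d = [np.linalg.norm(sig_stream[i:i + len(sig_ad)] - sig_ad)
     for i in range(len(sig_stream) - len(sig_ad) + 1)]
print(int(np.argmin(d)) * 128)               # -> 3072, the recovered offset
```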

  7. Application Layer Multicast

    NASA Astrophysics Data System (ADS)

    Allani, Mouna; Garbinato, Benoît; Pedone, Fernando

    An increasing number of Peer-to-Peer (P2P) Internet applications rely today on data dissemination as their cornerstone, e.g., audio or video streaming and multi-party games. These applications typically depend on some support for multicast communication, where peers interested in a given data stream can join a corresponding multicast group. As a consequence, the efficiency, scalability, and reliability guarantees of these applications are tightly coupled with those of the underlying multicast mechanism.
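
    The core idea of application-layer multicast, peers forwarding the stream to their children in an overlay tree so that the source only uploads to its direct children, can be sketched with a toy dissemination routine (the tree and peer names are invented):

```python
from collections import deque

# Hypothetical overlay tree: parent -> children.
tree = {
    "source": ["peerA", "peerB"],
    "peerA": ["peerC", "peerD"],
    "peerB": ["peerE"],
    "peerC": [], "peerD": [], "peerE": [],
}

def disseminate(tree, root, chunk):
    """Breadth-first push of one stream chunk through the overlay; returns
    the delivery log, i.e. which peer received the chunk from whom."""
    deliveries, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        for child in tree[node]:
            deliveries.append((node, child, chunk))
            queue.append(child)
    return deliveries

log = disseminate(tree, "source", "chunk-0")
print([f"{src}->{dst}" for src, dst, _ in log])
```

    Reliability and scalability then hinge on how this tree is built and repaired when peers join or fail, which is exactly what the surveyed protocols differ in.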

  8. Atomization of metal (Materials Preparation Center)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2010-01-01

    Atomization of metal requires high pressure gas and specialized chambers for cooling and collecting the powders without contamination. The critical step for morphological control is the impingement of the gas on the melt stream. The video is a color video of a liquid metal stream being atomized by high pressure gas. This material was cast at the Ames Laboratory's Materials Preparation Center http://www.mpc.ameslab.gov WARNING - AUDIO IS LOUD.

  9. Reducing audio stimulus presentation latencies across studies, laboratories, and hardware and operating system configurations.

    PubMed

    Babjack, Destiny L; Cernicky, Brandon; Sobotka, Andrew J; Basler, Lee; Struthers, Devon; Kisic, Richard; Barone, Kimberly; Zuccolotto, Anthony P

    2015-09-01

    Using differing computer platforms and audio output devices to deliver audio stimuli often introduces (1) substantial variability across labs and (2) variable time between the intended and actual sound delivery (the sound onset latency). Fast, accurate audio onset latencies are particularly important when audio stimuli need to be delivered precisely as part of studies that depend on accurate timing (e.g., electroencephalographic, event-related potential, or multimodal studies), or in multisite studies in which standardization and strict control over the computer platforms used is not feasible. This research describes the variability introduced by using differing configurations and introduces a novel approach to minimizing audio sound latency and variability. A stimulus presentation and latency assessment approach is presented using E-Prime and Chronos (a new multifunction, USB-based data presentation and collection device). The present approach reliably delivers audio stimuli with low latencies that vary by ≤1 ms, independent of hardware and Windows operating system (OS)/driver combinations. The Chronos audio subsystem adopts a buffering, aborting, querying, and remixing approach to the delivery of audio, to achieve a consistent 1-ms sound onset latency for single-sound delivery, and precise delivery of multiple sounds that achieves standard deviations of 1/10th of a millisecond without the use of advanced scripting. Chronos's sound onset latencies are small, reliable, and consistent across systems. Testing of standard audio delivery devices and configurations highlights the need for careful attention to consistency between labs, experiments, and multiple study sites in their hardware choices, OS selections, and adoption of audio delivery systems designed to sidestep the audio latency variability issue.

  10. Aerospace Communications Security Technologies Demonstrated

    NASA Technical Reports Server (NTRS)

    Griner, James H.; Martzaklis, Konstantinos S.

    2003-01-01

    In light of the events of September 11, 2001, NASA senior management requested an investigation of technologies and concepts to enhance aviation security. The investigation was to focus on near-term technologies that could be demonstrated within 90 days and implemented in less than 2 years. In response to this request, an internal NASA Glenn Research Center Communications, Navigation, and Surveillance Aviation Security Tiger Team was assembled. The 2-year plan developed by the team included an investigation of multiple aviation security concepts, multiple aircraft platforms, and extensively leveraged datalink communications technologies. It incorporated industry partners from NASA's Graphical Weather-in-the-Cockpit research, which is within NASA's Aviation Safety Program. Two concepts from the plan were selected for demonstration: remote "black box," and cockpit/cabin surveillance. The remote "black box" concept involves real-time downlinking of aircraft parameters for remote monitoring and archiving of aircraft data, which would assure access to the data following the loss or inaccessibility of an aircraft. The cockpit/cabin surveillance concept involves remote audio and/or visual surveillance of cockpit and cabin activity, which would allow immediate response to any security breach and would serve as a possible deterrent to such breaches. The datalink selected for the demonstrations was VDL Mode 2 (VHF digital link), the first digital datalink for air-ground communications designed for aircraft use. VDL Mode 2 is beginning to be implemented through the deployment of ground stations and aircraft avionics installations, with the goal of being operational in 2 years. The first demonstration was performed December 3, 2001, onboard the LearJet 25 at Glenn. NASA worked with Honeywell, Inc., for the broadcast VDL Mode 2 datalink capability and with actual Boeing 757 aircraft data. 
This demonstration used a cockpit-mounted camera for video surveillance and a coupling to the intercom system for audio surveillance. Audio, video, and "black box" data were simultaneously streamed to the ground, where they were displayed to a Glenn audience of senior management and aviation security team members.

  11. Three-Dimensional Audio Client Library

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.

    2005-01-01

    The Three-Dimensional Audio Client Library (3DAudio library) is a group of software routines written to facilitate development of both stand-alone (audio only) and immersive virtual-reality application programs that utilize three-dimensional audio displays. The library is intended to enable the development of three-dimensional audio client application programs by use of a code base common to multiple audio server computers. The 3DAudio library calls vendor-specific audio client libraries and currently supports the AuSIM Gold-Server and Lake Huron audio servers. 3DAudio library routines contain common functions for (1) initiation and termination of a client/audio server session, (2) configuration-file input, (3) positioning functions, (4) coordinate transformations, (5) audio transport functions, (6) rendering functions, (7) debugging functions, and (8) event-list-sequencing functions. The 3DAudio software is written in the C++ programming language and currently operates under the Linux, IRIX, and Windows operating systems.

  12. Photo-acoustic and video-acoustic methods for sensing distant sound sources

    NASA Astrophysics Data System (ADS)

    Slater, Dan; Kozacik, Stephen; Kelmelis, Eric

    2017-05-01

    Long-range telescopic video imagery of distant terrestrial scenes, aircraft, rockets and other aerospace vehicles can be a powerful observational tool. But what about the associated acoustic activity? A new technology, Remote Acoustic Sensing (RAS), may provide a method to remotely listen to the acoustic activity near these distant objects. Local acoustic activity sometimes weakly modulates the ambient illumination in a way that can be remotely sensed. RAS is a new type of microphone that separates an acoustic transducer into two spatially separated components: 1) a naturally formed in situ acousto-optic modulator (AOM) located within the distant scene and 2) a remote sensing readout device that recovers the distant audio. These two elements are passively coupled over long distances at the speed of light by naturally occurring ambient light energy or other electromagnetic fields. Stereophonic, multichannel, and acoustic beam-forming configurations are all possible using RAS techniques, and when combined with high-definition video imagery RAS can help to provide a more cinema-like immersive viewing experience. A practical implementation of a remote acousto-optic readout device can be a challenging engineering problem. The acoustic influence on the optical signal is generally weak and often comes with a strong bias term. The optical signal is further degraded by atmospheric seeing turbulence. In this paper, we consider two fundamentally different optical readout approaches: 1) a low-pixel-count photodiode-based RAS photoreceiver and 2) audio extraction directly from a video stream. Most of our RAS experiments to date have used the first method for reasons of performance and simplicity. But there are potential advantages to extracting audio directly from a video stream. These advantages include the straightforward ability to work with multiple AOMs (useful for acoustic beam forming), simpler optical configurations, and a potential ability to use certain preexisting video recordings.
However, doing so requires overcoming significant limitations typically including much lower sample rates, reduced sensitivity and dynamic range, more expensive video hardware, and the need for sophisticated video processing. The ATCOM real time image processing software environment provides many of the needed capabilities for researching video-acoustic signal extraction. ATCOM currently is a powerful tool for the visual enhancement of atmospheric turbulence distorted telescopic views. In order to explore the potential of acoustic signal recovery from video imagery we modified ATCOM to extract audio waveforms from the same telescopic video sources. In this paper, we demonstrate and compare both readout techniques for several aerospace test scenarios to better show where each has advantages.
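
    The video-readout idea can be illustrated with synthetic data: each frame's mean brightness carries a weak acoustic modulation on top of a strong bias, and the audio is recovered as the de-biased per-frame mean. All signal parameters below (frame rate, tone frequency, noise level) are invented for the demonstration.

```python
import numpy as np

fps = 1000                                   # high-speed video frame rate
t = np.arange(fps) / fps                     # 1 s of video
tone = 0.02 * np.sin(2 * np.pi * 50 * t)     # weak 50 Hz acoustic modulation
noise = 0.001 * np.random.default_rng(1).standard_normal((fps, 8, 8))
frames = 0.5 + tone[:, None, None] + noise   # 8x8-pixel frames with a bias

audio = frames.mean(axis=(1, 2))             # one audio sample per frame
audio -= audio.mean()                        # remove the strong bias term

spectrum = np.abs(np.fft.rfft(audio))
print(int(np.argmax(spectrum)))              # -> 50: the tone is recovered
```

    The sketch also makes the abstract's limitation concrete: the recovered audio bandwidth is capped at half the video frame rate.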

  13. Subtlenoise: sonification of distributed computing operations

    NASA Astrophysics Data System (ADS)

    Love, P. A.

    2015-12-01

    The operation of distributed computing systems requires comprehensive monitoring to ensure reliability and robustness. Two components are found in most monitoring systems: visually rich time-series graphs, and notification systems that alert operators under certain pre-defined conditions. In this paper the sonification of monitoring messages is explored using an architecture that fits easily within existing infrastructures, based on mature open-source technologies such as ZeroMQ, Logstash, and SuperCollider (a synthesis engine). Message attributes are mapped onto audio attributes based on a broad classification of the message (continuous or discrete metrics), while keeping the audio stream subtle in nature. The benefits of audio rendering are described in the context of distributed computing operations; it may provide a less intrusive way to understand the operational health of these systems.
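
    The mapping stage can be sketched as a small function from message attributes to audio parameters; the message types, pitches, and amplitudes below are illustrative inventions, not Subtlenoise's actual mapping.

```python
def message_to_audio(msg):
    """Map a monitoring message onto audio attributes (pitch, amplitude)."""
    base_pitch = {"transfer": 220.0, "job": 330.0, "error": 440.0}
    pitch = base_pitch.get(msg["type"], 110.0)
    if msg.get("kind") == "continuous":          # continuous metric: scale pitch
        pitch *= 1.0 + min(msg.get("value", 0.0), 1.0)
    # Keep the stream subtle: only errors are rendered loudly.
    amplitude = 0.8 if msg["type"] == "error" else 0.2
    return {"pitch_hz": pitch, "amp": amplitude}

print(message_to_audio({"type": "transfer", "kind": "discrete"}))
print(message_to_audio({"type": "error", "kind": "continuous", "value": 0.5}))
```

    In the described architecture this function would sit between the message transport (e.g., ZeroMQ) and the synthesis engine.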

  14. Low-cost synchronization of high-speed audio and video recordings in bio-acoustic experiments.

    PubMed

    Laurijssen, Dennis; Verreycken, Erik; Geipel, Inga; Daems, Walter; Peremans, Herbert; Steckel, Jan

    2018-02-27

    In this paper, we present a method for synchronizing high-speed audio and video recordings of bio-acoustic experiments. By embedding a random signal into the recorded video and audio data, robust synchronization of a diverse set of sensor streams can be performed without the need to keep detailed records. The synchronization can be performed using recording devices without dedicated synchronization inputs. We demonstrate the efficacy of the approach in two sets of experiments: behavioral experiments on different species of echolocating bats and the recordings of field crickets. We present the general operating principle of the synchronization method, discuss its synchronization strength and provide insights into how to construct such a device using off-the-shelf components. © 2018. Published by The Company of Biologists Ltd.
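
    The synchronization principle, finding the lag that maximizes cross-correlation with the shared random signal embedded in every stream, can be demonstrated on synthetic data (signal lengths and noise levels are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
ref = rng.standard_normal(500)               # shared pseudo-random sync signal

# Two recordings of different lengths; the sync signal starts at sample 120
# in the audio stream and at sample 300 in the video-frame stream.
audio = np.concatenate([np.zeros(120), ref]) + 0.1 * rng.standard_normal(620)
video = (np.concatenate([np.zeros(300), ref, np.zeros(80)])
         + 0.1 * rng.standard_normal(880))

lag_audio = int(np.argmax(np.correlate(audio, ref, mode="valid")))
lag_video = int(np.argmax(np.correlate(video, ref, mode="valid")))
print(lag_video - lag_audio)                 # -> 180, the inter-stream offset
```

    Because the reference is random, the correlation peak is sharp even in noise, which is what makes the method robust across heterogeneous recording devices.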

  15. Strategies for Transporting Data Between Classified and Unclassified Networks

    DTIC Science & Technology

    2016-03-01

    datagram protocol (UDP) must be used. The UDP is typically used when speed is a higher priority than data integrity, such as in music or video streaming ...and the exit point of data are separate and can be tightly controlled. This does effectively prevent the commingling of data and is used in industry to...perform functions such as streaming video and audio from secure to insecure networks (ref. 1). A second disadvantage lies in the fact that the

  16. Reduction in time-to-sleep through EEG based brain state detection and audio stimulation.

    PubMed

    Zhuo Zhang; Cuntai Guan; Ti Eu Chan; Juanhong Yu; Aung Aung Phyo Wai; Chuanchu Wang; Haihong Zhang

    2015-08-01

    We developed an EEG- and audio-based sleep sensing and enhancement system called iSleep (interactive Sleep enhancement apparatus). The system adopts a closed-loop approach that optimizes the audio recording selection based on the user's sleep status, detected through our online EEG computing algorithm. The iSleep prototype comprises two major parts: 1) a sleeping mask integrated with a single-channel EEG electrode and amplifier, a pair of stereo earphones, and a microcontroller with a wireless circuit for control and data streaming; 2) a mobile app that receives the EEG signals for online sleep monitoring and controls audio playback. In this study we attempt to validate our hypothesis that appropriate audio stimulation in relation to brain state can induce faster onset of sleep and improve the quality of a nap. We conduct experiments on 28 healthy subjects, each undergoing two nap sessions: one with a quiet background and one with our audio stimulation. We compare the time-to-sleep in both sessions between two groups of subjects, i.e., fast and slow sleep-onset groups. The p-value obtained from the Wilcoxon signed-rank test is 1.22e-04 for the slow-onset group, which demonstrates that iSleep can significantly reduce the time-to-sleep for people who have difficulty falling asleep.
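
    The reported comparison can be reproduced in outline with SciPy's Wilcoxon signed-rank test; the paired time-to-sleep values below are made up for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired time-to-sleep values in minutes (quiet vs. stimulated
# nap) for ten subjects in a slow-onset group.
quiet = np.array([30, 35, 28, 40, 44, 33, 38, 29, 41, 50])
stimulated = np.array([27, 30, 21, 31, 33, 20, 23, 12, 22, 29])

stat, p = wilcoxon(quiet, stimulated)        # paired, non-parametric test
print(stat, p < 0.05)                        # consistently faster onset
```

    A non-parametric paired test is the natural choice here because time-to-sleep is unlikely to be normally distributed and each subject serves as their own control.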

  17. Audio-visual integration through the parallel visual pathways.

    PubMed

    Kaposvári, Péter; Csete, Gergő; Bognár, Anna; Csibri, Péter; Tóth, Eszter; Szabó, Nikoletta; Vécsei, László; Sáry, Gyula; Tamás Kincses, Zsigmond

    2015-10-22

    Audio-visual integration has been shown to be present in a wide range of different conditions, some of which are processed through the dorsal, and others through the ventral visual pathway. Whereas neuroimaging studies have revealed integration-related activity in the brain, there has been no imaging study of the possible role of segregated visual streams in audio-visual integration. We set out to determine how the different visual pathways participate in this communication. We investigated how audio-visual integration can be supported through the dorsal and ventral visual pathways during the double flash illusion. Low-contrast and chromatic isoluminant stimuli were used to drive preferably the dorsal and ventral pathways, respectively. In order to identify the anatomical substrates of the audio-visual interaction in the two conditions, the psychophysical results were correlated with the white matter integrity as measured by diffusion tensor imaging. The psychophysiological data revealed a robust double flash illusion in both conditions. A correlation between the psychophysical results and local fractional anisotropy was found in the occipito-parietal white matter in the low-contrast condition, while a similar correlation was found in the infero-temporal white matter in the chromatic isoluminant condition. Our results indicate that both of the parallel visual pathways may play a role in the audio-visual interaction. Copyright © 2015. Published by Elsevier B.V.

  18. MPEG-7 audio-visual indexing test-bed for video retrieval

    NASA Astrophysics Data System (ADS)

    Gagnon, Langis; Foucher, Samuel; Gouaillier, Valerie; Brun, Christelle; Brousseau, Julie; Boulianne, Gilles; Osterrath, Frederic; Chapdelaine, Claude; Dutrisac, Julie; St-Onge, Francis; Champagne, Benoit; Lu, Xiaojian

    2003-12-01

    This paper reports on the development status of a Multimedia Asset Management (MAM) test-bed for content-based indexing and retrieval of audio-visual documents within the MPEG-7 standard. The project, called "MPEG-7 Audio-Visual Document Indexing System" (MADIS), specifically targets the indexing and retrieval of video shots and key frames from documentary film archives, based on audio-visual content like face recognition, motion activity, speech recognition, and semantic clustering. The MPEG-7/XML encoding of the film database is done off-line. The description decomposition is based on a temporal decomposition into visual segments (shots), key frames, and audio/speech sub-segments. The visible outcome will be a web site, accessible to members of the Canadian National Film Board (NFB) Cineroute site, that allows video retrieval using a proprietary XQuery-based search engine. For example, an end-user will be able to ask for movie shots in the database that were produced in a specific year, that contain the face of a specific actor saying a specific word, and in which there is no motion activity. Video streaming is performed over the high-bandwidth CA*net network deployed by CANARIE, a public Canadian Internet development organization.

  19. Coexistence issues for a 2.4 GHz wireless audio streaming in presence of bluetooth paging and WLAN

    NASA Astrophysics Data System (ADS)

    Pfeiffer, F.; Rashwan, M.; Biebl, E.; Napholz, B.

    2015-11-01

    Nowadays, customers expect to integrate their mobile electronic devices (smartphones and laptops) into a vehicle to form a wireless network. Typically, IEEE 802.11 is used to provide a high-speed wireless local area network (WLAN), and Bluetooth is used for cable-replacement applications in a wireless personal area network (PAN). In addition, Daimler uses KLEER as a third wireless technology in the unlicensed 2.4 GHz ISM band to transmit full CD-quality digital audio. Since Bluetooth, IEEE 802.11, and KLEER operate in the same frequency band, it must be ensured that all three technologies can be used simultaneously without interference. In this paper, we focus on the impact of Bluetooth and IEEE 802.11 as interferers in the presence of a KLEER audio transmission.

  20. Co-streaming classes: a follow-up study in improving the user experience to better reach users.

    PubMed

    Hayes, Barrie E; Handler, Lara J; Main, Lindsey R

    2011-01-01

    Co-streaming classes have enabled library staff to extend open classes to distance education students and other users. Student evaluations showed that the model could be improved. Two areas required attention: audio problems experienced by online participants and staff teaching methods. Staff tested equipment and adjusted software configuration to improve user experience. Staff training increased familiarity with specialized teaching techniques and troubleshooting procedures. Technology testing and staff training were completed, and best practices were developed and applied. Class evaluations indicate improvements in classroom experience. Future plans include expanding co-streaming to more classes and on-going data collection, evaluation, and improvement of classes.

  1. MWAHCA: a multimedia wireless ad hoc cluster architecture.

    PubMed

    Diaz, Juan R; Lloret, Jaime; Jimenez, Jose M; Sendra, Sandra

    2014-01-01

    Wireless ad hoc networks provide a flexible and adaptable infrastructure to transport data over a great variety of environments. Recently, real-time audio and video data transmission has increased due to the appearance of many multimedia applications. One of the major challenges is to ensure the quality of multimedia streams after they have passed through a wireless ad hoc network; this requires adapting the network architecture to the multimedia QoS requirements. In this paper we propose a new architecture to organize and manage cluster-based ad hoc networks in order to deliver multimedia streams. The proposed architecture adapts the wireless network topology in order to improve the quality of audio and video transmissions. To achieve this goal, the architecture uses information such as each node's capacity and the QoS parameters (bandwidth, delay, jitter, and packet loss). The architecture splits the network into clusters that are specialized in specific multimedia traffic. The performance study of the real system, provided at the end of the paper, demonstrates the feasibility of the proposal.
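
    The cluster-specialization idea can be sketched as a rule that assigns each node to the traffic class whose QoS requirements it can meet; the thresholds and class names below are invented for illustration, not those of the proposed architecture.

```python
def assign_cluster(node):
    """Pick a traffic-specialized cluster from measured QoS parameters:
    bandwidth in kbit/s, delay and jitter in ms (thresholds are invented)."""
    if node["bandwidth"] >= 2000 and node["delay"] <= 50:
        return "video"
    if node["bandwidth"] >= 128 and node["jitter"] <= 30:
        return "audio"
    return "best-effort"

nodes = [
    {"id": 1, "bandwidth": 5000, "delay": 20, "jitter": 5},
    {"id": 2, "bandwidth": 512, "delay": 80, "jitter": 10},
    {"id": 3, "bandwidth": 64, "delay": 200, "jitter": 90},
]
print({n["id"]: assign_cluster(n) for n in nodes})
```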

  2. Quality models for audiovisual streaming

    NASA Astrophysics Data System (ADS)

    Thang, Truong Cong; Kim, Young Suk; Kim, Cheon Seog; Ro, Yong Man

    2006-01-01

    Quality is an essential factor in multimedia communication, especially in compression and adaptation. Quality metrics can be divided into three categories: within-modality quality, cross-modality quality, and multi-modality quality. Most research has so far focused on within-modality quality. Moreover, quality is normally considered only from the perceptual perspective. In practice, content may be drastically adapted, even converted to another modality; in this case, we should consider quality from the semantic perspective as well. In this work, we investigate multi-modality quality from the semantic perspective. To model the semantic quality, we apply the concept of a "conceptual graph", which consists of semantic nodes and relations between the nodes. As a typical multi-modality example, we focus on an audiovisual streaming service. Specifically, we evaluate the amount of information conveyed by audiovisual content in which both the video and audio channels may be strongly degraded, and the audio may even be converted to text. In the experiments, we also consider the perceptual quality model of audiovisual content, so as to see how it differs from the semantic quality model.
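
    A toy version of such a semantic quality score, counting how much of the original conceptual graph survives adaptation, might look like this (the node/relation sets and weights are invented, not the paper's model):

```python
# Conceptual graphs as sets of concept nodes and (subject, relation, object)
# triples; the adapted version lost the video-only "stadium" concept.
original = {"nodes": {"goal", "player", "stadium", "crowd"},
            "relations": {("player", "scores", "goal"),
                          ("crowd", "in", "stadium")}}
adapted = {"nodes": {"goal", "player", "crowd"},
           "relations": {("player", "scores", "goal")}}

def semantic_quality(orig, adapt, w_node=0.4, w_rel=0.6):
    """Weighted fraction of concept nodes and relations preserved."""
    node_kept = len(orig["nodes"] & adapt["nodes"]) / len(orig["nodes"])
    rel_kept = len(orig["relations"] & adapt["relations"]) / len(orig["relations"])
    return w_node * node_kept + w_rel * rel_kept

print(round(semantic_quality(original, adapted), 2))   # 0.6
```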

  3. Video streaming into the mainstream.

    PubMed

    Garrison, W

    2001-12-01

    Changes in Internet technology are making possible the delivery of a richer mixture of media through data streaming. High-quality, dynamic content, such as video and audio, can be incorporated into Websites simply, flexibly and interactively. Technologies such as 3G mobile communication, ADSL, cable and satellites enable new ways of delivering medical services, information and learning. Systems such as QuickTime, Windows Media and RealVideo provide reliable data streams as video-on-demand, and users can tailor the experience to their own interests. The Learning Development Centre at the University of Portsmouth has successfully used streaming technologies, together with e-learning tools such as dynamic HTML, Flash, 3D objects and online assessment, to deliver online course content in economics and earth science. The Lifesign project--to develop, catalogue and stream health sciences media for teaching--is described, and future medical applications are discussed.

  4. Audio-Visual Speaker Diarization Based on Spatiotemporal Bayesian Fusion.

    PubMed

    Gebru, Israel D; Ba, Sileye; Li, Xiaofei; Horaud, Radu

    2018-05-01

    Speaker diarization consists of assigning speech signals to people engaged in a dialogue. An audio-visual spatiotemporal diarization model is proposed. The model is well suited for challenging scenarios that consist of several participants engaged in multi-party interaction while they move around and turn their heads towards the other participants rather than facing the cameras and the microphones. Multiple-person visual tracking is combined with multiple speech-source localization in order to tackle the speech-to-person association problem. The latter is solved within a novel audio-visual fusion method on the following grounds: binaural spectral features are first extracted from a microphone pair, then a supervised audio-visual alignment technique maps these features onto an image, and finally a semi-supervised clustering method assigns binaural spectral features to visible persons. The main advantage of this method over previous work is that it processes in a principled way speech signals uttered simultaneously by multiple persons. The diarization itself is cast into a latent-variable temporal graphical model that infers speaker identities and speech turns, based on the output of an audio-visual association process, executed at each time slice, and on the dynamics of the diarization variable itself. The proposed formulation yields an efficient exact inference procedure. A novel dataset, that contains audio-visual training data as well as a number of scenarios involving several participants engaged in formal and informal dialogue, is introduced. The proposed method is thoroughly tested and benchmarked with respect to several state-of-the-art diarization algorithms.

  5. Audio Watermark Embedding Technique Applying Auditory Stream Segregation: "G-encoder Mark" Able to Be Extracted by Mobile Phone

    NASA Astrophysics Data System (ADS)

    Modegi, Toshio

    We are developing audio watermarking techniques that enable extraction of embedded data by mobile phones. To do so, we must embed data in frequency ranges where the auditory response is prominent, so data embedding causes considerable audible noise. Previously, we proposed applying a two-channel stereo playback scheme in which the noise generated by a data-embedded left-channel signal is reduced by the right-channel signal. However, this proposal has the practical problem of restricting the location of the extracting terminal. In this paper, we propose synthesizing the noise-reducing right-channel signal with the left-channel signal, reducing the noise completely by inducing an auditory stream segregation phenomenon in listeners. This new proposal makes a separate noise-reducing right-channel signal unnecessary and supports monaural playback. Moreover, we propose a wide-band embedding method that causes dual auditory stream segregation phenomena, which enables data embedding across the whole public telephone frequency range and stable extraction with 3G mobile phones. With these proposals, extraction precision becomes higher than with the previously proposed method, while the quality degradation of the embedded signals becomes smaller. In this paper we present an outline of our newly proposed method and experimental results compared with those of the previously proposed method.

  6. 47 CFR 73.756 - System specifications for double-sideband (DSB) modulated emissions in the HF broadcasting service.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    .... Nominal carrier frequencies shall be integral multiples of 5 kHz. (2) Audio-frequency band. The upper limit of the audio-frequency band (at −3 dB) of the transmitter shall not exceed 4.5 kHz and the lower... processing. If audio-frequency signal processing is used, the dynamic range of the modulating signal shall be...

  7. 47 CFR 73.756 - System specifications for double-sideband (DSB) modulated emissions in the HF broadcasting service.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    .... Nominal carrier frequencies shall be integral multiples of 5 kHz. (2) Audio-frequency band. The upper limit of the audio-frequency band (at −3 dB) of the transmitter shall not exceed 4.5 kHz and the lower... processing. If audio-frequency signal processing is used, the dynamic range of the modulating signal shall be...

  8. 47 CFR 73.756 - System specifications for double-sideband (DSB) modulated emissions in the HF broadcasting service.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    .... Nominal carrier frequencies shall be integral multiples of 5 kHz. (2) Audio-frequency band. The upper limit of the audio-frequency band (at −3 dB) of the transmitter shall not exceed 4.5 kHz and the lower... processing. If audio-frequency signal processing is used, the dynamic range of the modulating signal shall be...

  9. 47 CFR 73.756 - System specifications for double-sideband (DSB) modulated emissions in the HF broadcasting service.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    .... Nominal carrier frequencies shall be integral multiples of 5 kHz. (2) Audio-frequency band. The upper limit of the audio-frequency band (at −3 dB) of the transmitter shall not exceed 4.5 kHz and the lower... processing. If audio-frequency signal processing is used, the dynamic range of the modulating signal shall be...

  10. Aeronautical audio broadcasting via satellite

    NASA Technical Reports Server (NTRS)

    Tzeng, Forrest F.

    1993-01-01

    A system design for aeronautical audio broadcasting, with C-band uplink and L-band downlink, via Inmarsat space segments is presented. Near-transparent-quality compression of 5-kHz-bandwidth audio at 20.5 kbit/s is achieved based on a hybrid technique employing linear predictive modeling and transform-domain residual quantization. Concatenated Reed-Solomon/convolutional codes with quadrature phase shift keying are selected for bandwidth and power efficiency. An RF bandwidth of 25 kHz per channel and a decoded bit error rate of 10^-6 at an Eb/N0 of 3.75 dB are obtained. An interleaver, scrambler, modem synchronization, and frame format were designed, and frequency-division multiple access was selected over code-division multiple access. A link budget computation based on a worst-case scenario indicates sufficient system power margins. Transponder occupancy analysis for 72 audio channels demonstrates ample remaining capacity to accommodate emerging aeronautical services.
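    The bit-rate and Eb/N0 figures above are tied together by the standard link-budget relation C/N0 = Eb/N0 + 10·log10(Rb). The following is a minimal sketch of that conversion using the abstract's numbers; the function name is my own, and the result is an illustration of the relation, not a figure from the paper.

```python
import math

def required_cn0(ebn0_db: float, bit_rate_bps: float) -> float:
    """Carrier-to-noise-density ratio (dB-Hz) needed to deliver a
    given Eb/N0 (dB) at a given information bit rate (bit/s)."""
    return ebn0_db + 10 * math.log10(bit_rate_bps)

# Figures from the abstract: Eb/N0 = 3.75 dB at 20.5 kbit/s.
cn0 = required_cn0(3.75, 20500)
print(round(cn0, 2))  # ≈ 46.87 dB-Hz
```

    Comparing this required C/N0 against the C/N0 available from transponder EIRP and path losses is what the abstract's worst-case link budget margin amounts to.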

  11. Incentive Mechanisms for Peer-to-Peer Streaming

    ERIC Educational Resources Information Center

    Pai, Vinay

    2011-01-01

    The increasing popularity of high-bandwidth Internet connections has enabled new applications like the online delivery of high-quality audio and video content. Conventional server-client approaches place the entire burden of delivery on the content provider's server, making these services expensive to provide. A peer-to-peer approach allows end…

  12. Inferring Speaker Affect in Spoken Natural Language Communication

    ERIC Educational Resources Information Center

    Pon-Barry, Heather Roberta

    2013-01-01

    The field of spoken language processing is concerned with creating computer programs that can understand human speech and produce human-like speech. Regarding the problem of understanding human speech, there is currently growing interest in moving beyond speech recognition (the task of transcribing the words in an audio stream) and towards…

  13. Enhancing Online Education Using Collaboration Solutions

    ERIC Educational Resources Information Center

    Ge, Shuzhi Sam; Tok, Meng Yong

    2003-01-01

    With the advances in Internet technologies, online education is fast gaining ground as an extension to traditional education. Webcast allows lectures conducted on campus to be viewed by students located at remote sites by streaming the audio and video content over Internet Protocol (IP) networks. However when used alone, webcast does not provide…

  14. Next-Gen Video

    ERIC Educational Resources Information Center

    Arnn, Barbara

    2007-01-01

    This article discusses how schools across the US are using the latest videoconference and audio/video streaming technologies creatively to move to the next level of their very specific needs. At the Georgia Institute of Technology in Atlanta, the technology that is the backbone of the school's extensive distance learning program has to be…

  15. 78 FR 31800 - Accessible Emergency Information, and Apparatus Requirements for Emergency Information and Video...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-24

    ...] Accessible Emergency Information, and Apparatus Requirements for Emergency Information and Video Description... should be the obligation of the apparatus manufacturer, under section 203, to ensure that the devices are... secondary audio stream on all equipment, including older equipment. In the absence of an industry solution...

  16. Authenticity examination of compressed audio recordings using detection of multiple compression and encoders' identification.

    PubMed

    Korycki, Rafal

    2014-05-01

    Since the appearance of digital audio recordings, audio authentication has become increasingly difficult. Currently available technologies and free editing software allow a forger to cut or paste any single word without audible artifacts. Nowadays, the only method for digital audio files commonly approved by forensic experts is the ENF criterion. It consists in analyzing fluctuations of the mains frequency induced in the electronic circuits of recording devices. Its effectiveness is therefore strictly dependent on the presence of the mains signal in the recording, which is a rare occurrence. Recently, much attention has been paid to authenticity analysis of compressed multimedia files, and several solutions have been proposed for detecting double compression in both digital video and digital audio. This paper addresses the problem of tampering detection in compressed audio files and discusses new methods that can be used for authenticity analysis of digital recordings. The presented approaches consist in evaluating statistical features extracted from the MDCT coefficients, as well as other parameters that may be obtained from compressed audio files. The calculated feature vectors are used to train selected machine learning algorithms. The detection of multiple compression covers tampering activities as well as the identification of traces of montage in digital audio recordings. To enhance the methods' robustness, an encoder identification algorithm based on analysis of inherent compression parameters was developed and applied. The effectiveness of the tampering detection algorithms is tested on a predefined large music database consisting of nearly one million compressed audio files. The influence of the compression algorithms' parameters on classification performance is discussed, based on the results of the current study. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  17. ATLAS Live: Collaborative Information Streams

    NASA Astrophysics Data System (ADS)

    Goldfarb, Steven; ATLAS Collaboration

    2011-12-01

    I report on a pilot project launched in 2010 focusing on facilitating communication and information exchange within the ATLAS Collaboration, through the combination of digital signage software and webcasting. The project, called ATLAS Live, implements video streams of information, ranging from detailed detector and data status to educational and outreach material. The content, including text, images, video and audio, is collected, visualised and scheduled using digital signage software. The system is robust and flexible, utilizing scripts to input data from remote sources, such as the CERN Document Server, Indico, or any available URL, and to integrate these sources into professional-quality streams, including text scrolling, transition effects, inter- and intra-screen divisibility. Information is published via the encoding and webcasting of standard video streams, viewable on all common platforms, using a web browser or other common video tool. Authorisation is enforced at the level of the streaming and at the web portals, using the CERN SSO system.

  18. Stochastic Packet Loss Model to Evaluate QoE Impairments

    NASA Astrophysics Data System (ADS)

    Hohlfeld, Oliver

    With the provisioning of broadband access for the mass market—even in wireless and mobile networks—multimedia content, especially real-time streaming of high-quality audio and video, is extensively viewed and exchanged over the Internet. Quality of Experience (QoE), describing the service quality perceived by the user, is a vital factor in ensuring customer satisfaction in today's communication networks. Frameworks for assessing quality degradations in streamed video are currently being investigated as a complex multi-layered research topic, involving network traffic load, codec functions and measures of user perception of video quality.
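    A stochastic packet-loss model of the kind studied here is often realized as a two-state Markov chain (the classic Gilbert-Elliott model); the sketch below is one such simulation under my own assumed parameters, not the specific model of this paper.

```python
import random

def gilbert_elliott(n, p_gb, p_bg, loss_good=0.0, loss_bad=0.5, seed=1):
    """Simulate n packets through a two-state Gilbert-Elliott loss process.
    p_gb: P(good -> bad) per packet; p_bg: P(bad -> good) per packet.
    Returns a list of 0/1 flags (1 = packet lost)."""
    rng = random.Random(seed)
    state_bad = False
    losses = []
    for _ in range(n):
        # Markov state transition
        if state_bad and rng.random() < p_bg:
            state_bad = False
        elif not state_bad and rng.random() < p_gb:
            state_bad = True
        # Loss probability depends on the current channel state
        p_loss = loss_bad if state_bad else loss_good
        losses.append(1 if rng.random() < p_loss else 0)
    return losses

trace = gilbert_elliott(10000, p_gb=0.01, p_bg=0.2, loss_bad=0.5)
print(sum(trace) / len(trace))  # bursty loss; mean rate near p_gb/(p_gb+p_bg) * loss_bad
```

    The burstiness such a chain produces (runs of consecutive losses) is what makes its QoE impact differ from an independent-loss model at the same mean loss rate.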

  19. Innovations: clinical computing: an audio computer-assisted self-interviewing system for research and screening in public mental health settings.

    PubMed

    Bertollo, David N; Alexander, Mary Jane; Shinn, Marybeth; Aybar, Jalila B

    2007-06-01

    This column describes the nonproprietary software Talker, used to adapt screening instruments to audio computer-assisted self-interviewing (ACASI) systems for low-literacy populations and other populations. Talker supports ease of programming, multiple languages, on-site scoring, and the ability to update a central research database. Key features include highly readable text display, audio presentation of questions and audio prompting of answers, and optional touch screen input. The scripting language for adapting instruments is briefly described as well as two studies in which respondents provided positive feedback on its use.

  20. Online Distance Teaching of Undergraduate Finance: A Case for Musashi University and Konan University, Japan

    ERIC Educational Resources Information Center

    Kubota, Keiichi; Fujikawa, Kiyoshi

    2007-01-01

    We implemented a synchronous distance course entitled: Introductory Finance designed for undergraduate students. This course was held between two Japanese universities. Stable Internet connections allowing minimum delay and minimum interruptions of the audio-video streaming signals were used. Students were equipped with their own PCs with…

  1. Singingfish: Advancing the Art of Multimedia Search.

    ERIC Educational Resources Information Center

    Fritz, Mark

    2003-01-01

    Singingfish provides multimedia search services that enable Internet users to locate audio and video online. Over the last few years, the company has cataloged and indexed over 30 million streams and downloadable MP3s, with 150,000 to 250,000 more being added weekly. This article discusses a deal with Microsoft; the technology; improving the…

  2. 78 FR 77074 - Accessibility of User Interfaces, and Video Programming Guides and Menus; Accessible Emergency...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-20

    ... Apparatus Requirements for Emergency Information and Video Description: Implementation of the Twenty- First... of apparatus covered by the CVAA to provide access to the secondary audio stream used for audible... availability of accessible equipment and, if so, what those notification requirements should be. The Commission...

  3. MWAHCA: A Multimedia Wireless Ad Hoc Cluster Architecture

    PubMed Central

    Diaz, Juan R.; Jimenez, Jose M.; Sendra, Sandra

    2014-01-01

    Wireless ad hoc networks provide a flexible and adaptable infrastructure for transporting data across a great variety of environments. Recently, real-time audio and video transmission has increased owing to the appearance of many multimedia applications. One of the major challenges is to ensure the quality of multimedia streams after they have passed through a wireless ad hoc network; this requires adapting the network architecture to the multimedia QoS requirements. In this paper we propose a new architecture for organizing and managing cluster-based ad hoc networks in order to deliver multimedia streams. The proposed architecture adapts the wireless network topology to improve the quality of audio and video transmissions. To achieve this goal, the architecture uses information such as each node's capacity and the QoS parameters (bandwidth, delay, jitter, and packet loss). The architecture splits the network into clusters that are specialized in specific multimedia traffic. The performance study of a real system provided at the end of the paper demonstrates the feasibility of the proposal. PMID:24737996
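    The core decision the architecture makes is matching a flow's traffic class to a cluster whose measured QoS (bandwidth, delay, jitter) can carry it. A minimal sketch of that selection step follows; the class names, thresholds, and cluster figures are illustrative assumptions of mine, not values from the paper.

```python
# Illustrative per-class QoS requirements (values are assumptions, not from the paper).
QOS_REQUIREMENTS = {
    "audio": {"bandwidth_kbps": 64,   "max_delay_ms": 150, "max_jitter_ms": 30},
    "video": {"bandwidth_kbps": 2000, "max_delay_ms": 300, "max_jitter_ms": 50},
}

def pick_cluster(traffic_class, clusters):
    """Return the first cluster whose measured QoS satisfies the class requirements."""
    req = QOS_REQUIREMENTS[traffic_class]
    for name, qos in clusters.items():
        if (qos["bandwidth_kbps"] >= req["bandwidth_kbps"]
                and qos["delay_ms"] <= req["max_delay_ms"]
                and qos["jitter_ms"] <= req["max_jitter_ms"]):
            return name
    return None  # no cluster can carry this flow

clusters = {
    "c1": {"bandwidth_kbps": 128,  "delay_ms": 90,  "jitter_ms": 10},
    "c2": {"bandwidth_kbps": 4000, "delay_ms": 200, "jitter_ms": 25},
}
print(pick_cluster("audio", clusters))  # c1
print(pick_cluster("video", clusters))  # c2
```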

  4. Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap.

    PubMed

    Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin'ya

    2013-01-01

    It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this is necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audio visual speech; see Experiment 1) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; see Experiment 2) we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair alone can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap.

  5. Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap

    PubMed Central

    Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin'ya

    2013-01-01

    It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this is necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audio visual speech; see Experiment 1) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; see Experiment 2) we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair alone can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap. PMID:23658549

  6. Perception of the Multisensory Coherence of Fluent Audiovisual Speech in Infancy: Its Emergence & the Role of Experience

    PubMed Central

    Lewkowicz, David J.; Minar, Nicholas J.; Tift, Amy H.; Brandon, Melissa

    2014-01-01

    To investigate the developmental emergence of the ability to perceive the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8–10, and 12–14 month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor the 8–10 month-old infants exhibited audio-visual matching in that neither group exhibited greater looking at the matching monologue. In contrast, the 12–14 month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, they perceived the multisensory coherence of native-language monologues earlier in the test trials than of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12–14 month olds did not depend on audio-visual synchrony whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audio-visual synchrony cues are more important in the perception of the multisensory coherence of non-native than native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing. PMID:25462038

  7. Multimedia content description framework

    NASA Technical Reports Server (NTRS)

    Bergman, Lawrence David (Inventor); Mohan, Rakesh (Inventor); Li, Chung-Sheng (Inventor); Smith, John Richard (Inventor); Kim, Michelle Yoonk Yung (Inventor)

    2003-01-01

    A framework is provided for describing multimedia content and a system in which a plurality of multimedia storage devices employing the content description methods of the present invention can interoperate. In accordance with one form of the present invention, the content description framework is a description scheme (DS) for describing streams or aggregations of multimedia objects, which may comprise audio, images, video, text, time series, and various other modalities. This description scheme can accommodate an essentially limitless number of descriptors in terms of features, semantics or metadata, and facilitate content-based search, index, and retrieval, among other capabilities, for both streamed and aggregated multimedia objects.

  8. Design and Uses of an Audio/Video Streaming System for Students with Disabilities

    ERIC Educational Resources Information Center

    Hogan, Bryan J.

    2004-01-01

    Within most educational institutes there are a substantial number of students with varying physical and mental disabilities. These might range from difficulty in reading to difficulty in attending the institute. Whatever their disability, it places a barrier between them and their education. In the past few years there have been rapid and striking…

  9. The Evolution of Qualitative and Quantitative Research Classes when Delivered via Distance Education.

    ERIC Educational Resources Information Center

    Hecht, Jeffrey B.; Klass, Patricia H.

    This study examined whether new streamed Internet audio and video technology could be used for primary instruction in off-campus research classes. Several different off-campus student cohorts at Illinois State University enrolled in both a fall semester qualitative research methods class and a spring semester quantitative research methods class.…

  10. The Real Who, What, When, and Why of Journalism

    ERIC Educational Resources Information Center

    Huber-Humes, Sonya

    2007-01-01

    Journalism programs across the country have rolled out new curricula and courses emphasizing complex social issues, in-depth reporting, and "new media" such as online news sites with streaming audio and video. Journalism education has rightly taken its cue from media outlets that find themselves not relevant enough for a new generation of readers,…

  11. How we give personalised audio feedback after summative OSCEs.

    PubMed

    Harrison, Christopher J; Molyneux, Adrian J; Blackwell, Sara; Wass, Valerie J

    2015-04-01

    Students often receive little feedback after summative objective structured clinical examinations (OSCEs) to enable them to improve their performance. Electronic audio feedback has shown promise in other educational areas. We investigated the feasibility of electronic audio feedback in OSCEs. An electronic OSCE system was designed, comprising (1) an application for iPads allowing examiners to mark in the key consultation skill domains, provide "tick-box" feedback identifying strengths and difficulties, and record voice feedback; (2) a feedback website giving students the opportunity to view/listen in multiple ways to the feedback. Acceptability of the audio feedback was investigated, using focus groups with students and questionnaires with both examiners and students. 87 (95%) students accessed the examiners' audio comments; 83 (90%) found the comments useful and 63 (68%) reported changing the way they perform a skill as a result of the audio feedback. They valued its highly personalised, relevant nature and found it much more useful than written feedback. Eighty-nine per cent of examiners gave audio feedback to all students on their stations. Although many found the method easy, lack of time was a factor. Electronic audio feedback provides timely, personalised feedback to students after a summative OSCE provided enough time is allocated to the process.

  12. Multi-channel spatialization systems for audio signals

    NASA Technical Reports Server (NTRS)

    Begault, Durand R. (Inventor)

    1993-01-01

    Synthetic head related transfer functions (HRTF's) for imposing reprogrammable spatial cues to a plurality of audio input signals included, for example, in multiple narrow-band audio communications signals received simultaneously are generated and stored in interchangeable programmable read only memories (PROM's) which store both head related transfer function impulse response data and source positional information for a plurality of desired virtual source locations. The analog inputs of the audio signals are filtered and converted to digital signals from which synthetic head related transfer functions are generated in the form of linear phase finite impulse response filters. The outputs of the impulse response filters are subsequently reconverted to analog signals, filtered, mixed, and fed to a pair of headphones.
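    The spatialization stage this patent describes is, at its core, per-ear FIR filtering: the mono input is convolved with a left-ear and a right-ear head-related impulse response. A minimal pure-Python sketch of that stage follows; the 3-tap impulse responses are toy placeholders, not real HRIR data, and real systems use much longer filters.

```python
def fir_filter(signal, impulse_response):
    """Direct-form FIR convolution: y[n] = sum_k h[k] * x[n-k]."""
    h = impulse_response
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, hk in enumerate(h):
            if n - k >= 0:
                acc += hk * signal[n - k]
        out.append(acc)
    return out

def spatialize(mono, hrir_left, hrir_right):
    """Render a mono source to a stereo pair using per-ear impulse responses."""
    return fir_filter(mono, hrir_left), fir_filter(mono, hrir_right)

# A unit impulse through toy left/right HRIRs reproduces the HRIRs themselves.
left, right = spatialize([1.0, 0.0, 0.0], [0.5, 0.3, 0.1], [0.2, 0.6, 0.1])
print(left, right)  # [0.5, 0.3, 0.1] [0.2, 0.6, 0.1]
```

    Swapping the PROM-stored impulse responses for a different source position changes only the filter coefficients, which is what makes the spatial cues reprogrammable.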

  13. Multi-channel spatialization system for audio signals

    NASA Technical Reports Server (NTRS)

    Begault, Durand R. (Inventor)

    1995-01-01

    Synthetic head related transfer functions (HRTF's) for imposing reprogrammable spatial cues to a plurality of audio input signals included, for example, in multiple narrow-band audio communications signals received simultaneously are generated and stored in interchangeable programmable read only memories (PROM's) which store both head related transfer function impulse response data and source positional information for a plurality of desired virtual source locations. The analog inputs of the audio signals are filtered and converted to digital signals from which synthetic head related transfer functions are generated in the form of linear phase finite impulse response filters. The outputs of the impulse response filters are subsequently reconverted to analog signals, filtered, mixed and fed to a pair of headphones.

  14. Flexible server architecture for resource-optimal presentation of Internet multimedia streams to the client

    NASA Astrophysics Data System (ADS)

    Boenisch, Holger; Froitzheim, Konrad

    1999-12-01

    The transfer of live media streams such as video and audio over the Internet is subject to several problems, static and dynamic in nature. Important quality of service (QoS) parameters not only differ between receivers depending on their network access, service provider, and nationality; the QoS also varies in time. Moreover, the installed receiver base is heterogeneous with respect to operating system, browser or client software, and browser version. We present a new concept for serving live media streams. It is no longer based on the current one-size-fits-all paradigm, where the server offers just one stream. Our compresslet system takes the opposite approach: it builds media streams 'to order' and 'just in time'. Every client subscribing to a media stream uses a servlet loaded into the media server to generate a data stream tailored to its resources and constraints. The server is designed such that commonly used components of media streams are computed once. The compresslets use these prefabricated components, code additional data if necessary, and construct the data stream based on the dynamically available QoS and other client constraints. Client-specific encoding leads to a resource-optimal presentation that is especially useful for presenting complex multimedia documents on a variety of output devices.

  15. Audio-visual affective expression recognition

    NASA Astrophysics Data System (ADS)

    Huang, Thomas S.; Zeng, Zhihong

    2007-11-01

    Automatic affective expression recognition has attracted increasing attention from researchers in different disciplines; it will contribute significantly to a new paradigm for human-computer interaction (affect-sensitive interfaces, socially intelligent environments) and advance research in affect-related fields including psychology, psychiatry, and education. Multimodal information integration is a process that enables humans to assess affective states robustly and flexibly. To understand the richness and subtlety of human emotional behavior, the computer should be able to integrate information from multiple sensors. We introduce in this paper our efforts toward machine understanding of audio-visual affective behavior, based on both deliberate and spontaneous displays. Some promising methods are presented to integrate information from both audio and visual modalities. Our experiments show the advantage of audio-visual fusion in affective expression recognition over audio-only or visual-only approaches.

  16. A New Species of Science Education: Harnessing the Power of Interactive Technology to Teach Laboratory Science

    ERIC Educational Resources Information Center

    Reddy, Christopher

    2014-01-01

    Interactive television is a type of distance education that uses streaming audio and video technology for real-time student-teacher interaction. Here, I discuss the design and logistics for developing a high school laboratory-based science course taught to students at a distance using interactive technologies. The goal is to share a successful…

  17. Effect of Audio Coaching on Correlation of Abdominal Displacement With Lung Tumor Motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakamura, Mitsuhiro; Narita, Yuichiro; Matsuo, Yukinori

    2009-10-01

    Purpose: To assess the effect of audio coaching on the time-dependent behavior of the correlation between abdominal motion and lung tumor motion and the corresponding lung tumor position mismatches. Methods and Materials: Six patients who had a lung tumor with a motion range >8 mm were enrolled in the present study. Breathing-synchronized fluoroscopy was performed initially without audio coaching, followed by fluoroscopy with recorded audio coaching for multiple days. Two different measurements, anteroposterior abdominal displacement using the real-time positioning management system and superoinferior (SI) lung tumor motion by X-ray fluoroscopy, were performed simultaneously. Their sequential images were recorded using one display system. The lung tumor position was automatically detected with a template matching technique. The relationship between the abdominal and lung tumor motion was analyzed with and without audio coaching. Results: The mean SI tumor displacement was 10.4 mm without audio coaching and increased to 23.0 mm with audio coaching (p < .01). The correlation coefficients ranged from 0.89 to 0.97 with free breathing. Applying audio coaching, the correlation coefficients improved significantly (range, 0.93-0.99; p < .01), and the SI lung tumor position mismatches became larger in 75% of all sessions. Conclusion: Audio coaching served to increase the degree of correlation and make it more reproducible. In addition, the phase shifts between tumor motion and abdominal displacement were improved; however, all patients breathed more deeply, and the SI lung tumor position mismatches became slightly larger with audio coaching than without audio coaching.

  18. Visual speech segmentation: using facial cues to locate word boundaries in continuous speech

    PubMed Central

    Mitchel, Aaron D.; Weiss, Daniel J.

    2014-01-01

    Speech is typically a multimodal phenomenon, yet few studies have focused on the exclusive contributions of visual cues to language acquisition. To address this gap, we investigated whether visual prosodic information can facilitate speech segmentation. Previous research has demonstrated that language learners can use lexical stress and pitch cues to segment speech and that learners can extract this information from talking faces. Thus, we created an artificial speech stream that contained minimal segmentation cues and paired it with two synchronous facial displays in which visual prosody was either informative or uninformative for identifying word boundaries. Across three familiarisation conditions (audio stream alone, facial streams alone, and paired audiovisual), learning occurred only when the facial displays were informative about word boundaries, suggesting that facial cues can help learners solve the early challenges of language acquisition. PMID:25018577

  19. An FEC Adaptive Multicast MAC Protocol for Providing Reliability in WLANs

    NASA Astrophysics Data System (ADS)

    Basalamah, Anas; Sato, Takuro

    For wireless multicast applications like multimedia conferencing, voice over IP, and video/audio streaming, reliable transmission of packets within a short delivery delay is needed. Moreover, reliability is crucial to the performance of error-intolerant applications like file transfer, distributed computing, chat, and whiteboard sharing. Forward Error Correction (FEC) is frequently used in wireless multicast to enhance Packet Error Rate (PER) performance, but cannot assure full reliability unless coupled with Automatic Repeat Request, forming what is known as Hybrid-ARQ. While reliable FEC can be deployed at different levels of the protocol stack, it cannot be deployed on the MAC layer of the unreliable IEEE 802.11 WLAN due to its inability to exchange ACKs with multiple recipients. In this paper, we propose a multicast MAC protocol that enhances WLAN reliability by using adaptive FEC and study its performance through mathematical analysis and simulation. Our results show that our protocol can deliver high reliability and throughput performance.
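As a minimal illustration of packet-level FEC for multicast, the sketch below adds a single XOR parity packet per block of data packets, which lets any one lost packet in the block be rebuilt without a retransmission. This is far simpler than the adaptive scheme the paper proposes; it only shows the basic repair mechanism.

```python
# Minimal XOR-parity FEC sketch (illustrative, not the paper's adaptive
# scheme): for every block of k equal-length data packets, one parity
# packet is the XOR of them all; any single loss can be rebuilt.

def xor_packets(packets):
    out = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            out[i] ^= b
    return bytes(out)

def encode_block(data_packets):
    # Append one parity packet to the block.
    return data_packets + [xor_packets(data_packets)]

def recover(block, lost_index):
    # XOR of all surviving packets (parity included) restores the loss.
    survivors = [p for i, p in enumerate(block) if i != lost_index]
    return xor_packets(survivors)

block = encode_block([b"abcd", b"efgh", b"ijkl"])
rebuilt = recover(block, lost_index=1)   # suppose packet b"efgh" was lost
```

With one parity packet per block the overhead is fixed; an adaptive scheme like the paper's would instead vary the amount of redundancy with the observed packet error rate.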

  20. CREMA-D: Crowd-sourced Emotional Multimodal Actors Dataset

    PubMed Central

    Cao, Houwei; Cooper, David G.; Keutmann, Michael K.; Gur, Ruben C.; Nenkova, Ani; Verma, Ragini

    2014-01-01

    People convey their emotional state in their face and voice. We present an audio-visual data set uniquely suited for the study of multi-modal emotion expression and perception. The data set consists of facial and vocal emotional expressions in sentences spoken in a range of basic emotional states (happy, sad, anger, fear, disgust, and neutral). 7,442 clips of 91 actors with diverse ethnic backgrounds were rated by multiple raters in three modalities: audio, visual, and audio-visual. Categorical emotion labels and real-valued intensity values for the perceived emotion were collected using crowd-sourcing from 2,443 raters. Human recognition of the intended emotion for the audio-only, visual-only, and audio-visual data is 40.9%, 58.2%, and 63.6%, respectively. Recognition rates are highest for neutral, followed by happy, anger, disgust, fear, and sad. Average intensity levels of emotion are rated highest for visual-only perception. The accurate recognition of disgust and fear requires simultaneous audio-visual cues, while anger and happiness can be well recognized based on evidence from a single modality. The large dataset we introduce can be used to probe other questions concerning the audio-visual perception of emotion. PMID:25653738

  1. Tera-node Network Technology (Task 3) Scalable Personal Telecommunications

    DTIC Science & Technology

    2000-03-14

    Simulation results of this work may be found in http://north.east.isi.edu/spt/audio.html. 6. Internet Research Task Force Reliable Multicast...Adaptation, 4. Multimedia Proxy Caching, 5. Experiments with the Rate Adaptation Protocol (RAP) 6. Providing leadership and innovation to the Internet ... Research Task Force (IRTF) Reliable Multicast Research Group (RMRG) 1. End-to-end Architecture for Quality-adaptive Streaming Applications over the

  2. Content-based intermedia synchronization

    NASA Astrophysics Data System (ADS)

    Oh, Dong-Young; Sampath-Kumar, Srihari; Rangan, P. Venkat

    1995-03-01

    Inter-media synchronization methods developed until now have been based on syntactic timestamping of video frames and audio samples. These methods are not fully appropriate for the synchronization of multimedia objects which may have to be accessed individually by their contents, e.g., content-based data retrieval. We propose a content-based multimedia synchronization scheme in which a media stream is viewed as a hierarchical composition of smaller objects which are logically structured based on their contents, and synchronization is achieved by deriving temporal relations among the logical units of a media object. Content-based synchronization offers several advantages, such as elimination of the need for timestamping, freedom from limitations of jitter, synchronization of independently captured media objects in video editing, and compensation for inherent asynchronies in the capture times of video and audio.

  3. The HomePlanet project: a HAVi multi-media network over POF

    NASA Astrophysics Data System (ADS)

    Roycroft, Brendan; Corbett, Brian; Kelleher, Carmel; Lambkin, John; Bareel, Baudouin; Goudeau, Jacques; Skiczuk, Peter

    2005-06-01

    This project has developed a low-cost in-home network compatible with the network standard IEEE 1394b. We have developed all components of the network, from the red resonant-cavity LEDs and VCSELs as light sources, through the driver circuitry and plastic optical fibres for transmission, up to the network management software. We demonstrate plug-and-play operation of S100 and S200 (125 and 250 Mbps) data streams using 650 nm RCLEDs, and S400 (500 Mbps) data streams using VCSELs. The network software incorporates Home Audio Video interoperability (HAVi), which allows any HAVi device to be hot-plugged into the network and be instantly recognised and controllable over the network.

  4. An Analysis of Students' Perceptions of the Value and Efficacy of Instructors' Auditory and Text-Based Feedback Modalities across Multiple Conceptual Levels

    ERIC Educational Resources Information Center

    Ice, Phil; Swan, Karen; Diaz, Sebastian; Kupczynski, Lori; Swan-Dagen, Allison

    2010-01-01

    This article used work from the writing assessment literature to develop a framework for assessing the impact and perceived value of written, audio, and combined written and audio feedback strategies across four global and 22 discrete dimensions of feedback. Using a quasi-experimental research design, students at three U.S. universities were…

  5. Language Teaching with the Help of Multiple Methods. Collection d'"Etudes linguistiques," No. 21.

    ERIC Educational Resources Information Center

    Nivette, Jos, Ed.

    This book presents articles on language teaching media. Among the titles are: (1) "Il Foreign Language Teaching e l'impiego degli audio-visivi" (Foreign Language Teaching and the Use of Audio Visual Methods) by D'Agostino, (2) "Le role et la nature de l'image dans l'enseignement programme de l'anglais, langue seconde" (The Role and Nature of the…

  6. StreamWorks: the live and on-demand audio/video server and its applications in medical information systems

    NASA Astrophysics Data System (ADS)

    Akrout, Nabil M.; Gordon, Howard; Palisson, Patrice M.; Prost, Remy; Goutte, Robert

    1996-05-01

    Facing a world undergoing fundamental and rapid change, healthcare organizations are seeking ways to increase innovation, quality, productivity, and patient value, keys to more effective care. Individual clinics acting alone can respond in only a limited way, so re-engineering the processes by which services are delivered demands real-time collaborative technology that provides immediate information sharing, improving the management and coordination of information in cross-functional teams. StreamWorks is a development-stage architecture that uses a distribution technique to deliver an advanced information management system for telemedicine. The challenge for StreamWorks in telemedicine is to use telecommunications and information technology to extend equitable quality of health care to patients in less-favored regions, such as India or China, where the quality of medical care varies greatly by region but where some very modern communications facilities exist.

  7. Implementing Audio Digital Feedback Loop Using the National Instruments RIO System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, G.; Byrd, J. M.

    2006-11-20

    Development of systems for high-precision RF distribution and laser synchronization at Berkeley Lab has been ongoing for several years. Successful operation of these systems requires multiple audio-bandwidth feedback loops running at relatively high gains. Stable operation of the feedback loops requires careful design of the feedback transfer function. To allow for a flexible and compact implementation, we have developed digital feedback loops on the National Instruments Reconfigurable Input/Output (RIO) platform. This platform uses an FPGA and multiple I/Os and can provide eight parallel channels running different filters. We present the design and preliminary experimental results of this system.

  8. Development of the ISS EMU Dashboard Software

    NASA Technical Reports Server (NTRS)

    Bernard, Craig; Hill, Terry R.

    2011-01-01

    The EMU (Extra-Vehicular Mobility Unit) Dashboard was developed at NASA's Johnson Space Center to aid in real-time mission support for the ISS (International Space Station) and Shuttle EMU space suit by time-synchronizing down-linked video, space suit data, and audio from the mission control audio loops. Once the input streams are synchronized and recorded, the data can be replayed almost instantly and has proven invaluable in understanding in-flight hardware anomalies and playing back information conveyed by the crew to mission control and the back-room support. This paper walks through the development, from an engineer's idea brought to life by an intern to real-time mission support, and describes how this tool is evolving today and the challenges it faces in supporting EVAs (Extra-Vehicular Activities) and human exploration in the 21st century.

  9. On application of kernel PCA for generating stimulus features for fMRI during continuous music listening.

    PubMed

    Tsatsishvili, Valeri; Burunat, Iballa; Cong, Fengyu; Toiviainen, Petri; Alluri, Vinoo; Ristaniemi, Tapani

    2018-06-01

    There has been growing interest in naturalistic neuroimaging experiments, which deepen our understanding of how the human brain processes and integrates incoming streams of multifaceted sensory information, as commonly occurs in the real world. Music is a good example of such a complex continuous phenomenon. In a few recent fMRI studies examining neural correlates of music in continuous-listening settings, multiple perceptual attributes of the music stimulus were represented by a set of high-level features, produced as linear combinations of acoustic descriptors computationally extracted from the stimulus audio. NEW METHOD: fMRI data from a naturalistic music-listening experiment were employed here. Kernel principal component analysis (KPCA) was applied to acoustic descriptors extracted from the stimulus audio to generate a set of nonlinear stimulus features. Subsequently, perceptual and neural correlates of the generated high-level features were examined. The generated features captured musical percepts that were hidden from the linear PCA features, namely Rhythmic Complexity and Event Synchronicity. Neural correlates of the new features revealed activations associated with the processing of complex rhythms, including auditory, motor, and frontal areas. Results were compared with the findings of a previously published study, which analyzed the same fMRI data but applied linear PCA for generating stimulus features. To enable comparison of the results, the methodology for finding stimulus-driven functional maps was adopted from the previous study. Exploiting nonlinear relationships among acoustic descriptors can lead to novel high-level stimulus features, which can in turn reveal new brain structures involved in music processing. Copyright © 2018 Elsevier B.V. All rights reserved.
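A sketch of kernel PCA with an RBF kernel, one common way to derive such nonlinear features from acoustic descriptors; the kernel choice, its width, and the random toy data below are illustrative assumptions, not the study's settings.

```python
# Sketch of kernel PCA: build an RBF kernel matrix over the frames of
# acoustic descriptors, center it in feature space, and project onto
# the leading kernel principal axes. Parameters are illustrative only.
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    # RBF kernel matrix over all sample pairs.
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    # Center the kernel in feature space.
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # Eigen-decomposition; keep the leading components.
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Projections of the training samples onto the kernel axes.
    return vecs * np.sqrt(np.clip(vals, 0, None))

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(50, 6))   # 50 frames x 6 acoustic descriptors
features = kernel_pca(descriptors, n_components=2)
```

Replacing the RBF kernel with a linear kernel here recovers ordinary PCA, which is exactly the comparison the abstract draws between the two studies.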

  10. Stochastic modeling of soundtrack for efficient segmentation and indexing of video

    NASA Astrophysics Data System (ADS)

    Naphade, Milind R.; Huang, Thomas S.

    1999-12-01

    Tools for efficient and intelligent management of digital content are essential for digital video data management. An extremely challenging research area in this context is that of multimedia analysis and understanding. The capabilities of audio analysis in particular for video data management are yet to be fully exploited. We present a novel scheme for indexing and segmentation of video by analyzing the audio track. This analysis is then applied to the segmentation and indexing of movies. We build models for some interesting events in the motion picture soundtrack. The models built include music, human speech, and silence. We propose the use of hidden Markov models to model the dynamics of the soundtrack and detect audio events. Using these models we segment and index the soundtrack. A practical problem in motion picture soundtracks is that the audio in the track is of a composite nature. This corresponds to the mixing of sounds from different sources. Speech in the foreground and music in the background are common examples. The coexistence of multiple individual audio sources forces us to model such events explicitly. Experiments reveal that explicit modeling gives better results than modeling individual audio events separately.
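The HMM idea can be pictured with a toy Viterbi decoder over three soundtrack states. All probabilities below are invented for illustration (a real system trains them on labeled soundtrack data), and the observations stand in for quantized frame features.

```python
# Toy Viterbi decoder over three soundtrack states, in the spirit of
# HMM-based audio-event segmentation. All probabilities are made up.
import math

states = ["silence", "speech", "music"]
start = {"silence": 0.5, "speech": 0.3, "music": 0.2}
# Sticky transitions: a state tends to persist frame to frame.
trans = {s: {t: (0.8 if s == t else 0.1) for t in states} for s in states}
# Observations are quantized frame features: "low", "mid", "high" energy.
emit = {
    "silence": {"low": 0.8, "mid": 0.15, "high": 0.05},
    "speech":  {"low": 0.1, "mid": 0.7,  "high": 0.2},
    "music":   {"low": 0.1, "mid": 0.2,  "high": 0.7},
}

def viterbi(obs):
    # Log-domain Viterbi: best score and backpointer per state per frame.
    V = [{s: math.log(start[s]) + math.log(emit[s][obs[0]]) for s in states}]
    back = []
    for o in obs[1:]:
        row, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: V[-1][p] + math.log(trans[p][s]))
            row[s] = V[-1][prev] + math.log(trans[prev][s]) + math.log(emit[s][o])
            ptr[s] = prev
        V.append(row)
        back.append(ptr)
    # Trace the best path backwards from the best final state.
    path = [max(states, key=lambda s: V[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

segmentation = viterbi(["low", "low", "mid", "mid", "high", "high"])
```

The sticky transition matrix is what turns per-frame classification into segmentation: isolated noisy frames are absorbed into the surrounding segment.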

  11. Enhancing Battlemind: Preventing PTSD by Coping with Intrusive Thoughts

    DTIC Science & Technology

    2015-09-01

    Characteristics of Participant-Soldiers Demographics Demographic Characteristics N = 1,524 Sex Male 90.6% Female 9.4...consultants • Workshops also included time for live practice, including audio and video taping of trainers’ delivery of modules • One-on-one in person...additional audio/ video taping • Culminated with a certification test in which trainer was rated on multiple domains and content areas by PI, PC, other

  12. Hamming and Accumulator Codes Concatenated with MPSK or QAM

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Samuel

    2009-01-01

    In a proposed coding-and-modulation scheme, a high-rate binary data stream would be processed as follows: 1. The input bit stream would be demultiplexed into multiple bit streams. 2. The multiple bit streams would be processed simultaneously into a high-rate outer Hamming code that would comprise multiple short constituent Hamming codes - a distinct constituent Hamming code for each stream. 3. The streams would be interleaved. The interleaver would have a block structure that would facilitate parallelization for high-speed decoding. 4. The interleaved streams would be further processed simultaneously into an inner two-state, rate-1 accumulator code that would comprise multiple constituent accumulator codes - a distinct accumulator code for each stream. 5. The resulting bit streams would be mapped into symbols to be transmitted by use of a higher-order modulation - for example, M-ary phase-shift keying (MPSK) or quadrature amplitude modulation (QAM). The novelty of the scheme lies in the concatenation of the multiple-constituent Hamming and accumulator codes and the corresponding parallel architectures of the encoder and decoder circuitry (see figure) needed to process the multiple bit streams simultaneously. As in the cases of other parallel-processing schemes, one advantage of this scheme is that the overall data rate could be much greater than the data rate of each encoder and decoder stream and, hence, the encoder and decoder could handle data at an overall rate beyond the capability of the individual encoder and decoder circuits.
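The demultiplexing, outer Hamming, and inner accumulator steps can be sketched as follows. This is a minimal illustration using one common Hamming(7,4) layout and a running-XOR accumulator; the interleaver and the MPSK/QAM mapping are omitted.

```python
# Sketch of the concatenation: a short Hamming(7,4) outer code followed
# by a rate-1 accumulator (running XOR) inner code, applied per
# demultiplexed stream. Interleaving and modulation are omitted.

def hamming74_encode(d):
    # d: 4 data bits; one common (7,4) layout with parity at 1, 2, 4.
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def accumulate(bits):
    # Rate-1 accumulator: each output bit is the XOR of all inputs so far.
    out, acc = [], 0
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def encode_stream(bits):
    coded = []
    for i in range(0, len(bits), 4):        # outer Hamming per 4-bit block
        coded.extend(hamming74_encode(bits[i:i + 4]))
    return accumulate(coded)                # inner accumulator

# Step 1: demultiplex a high-rate input into two streams, each encoded
# by its own constituent encoder, as in the parallel architecture.
data = [1, 0, 1, 1, 0, 1, 0, 0]
streams = [data[0::2], data[1::2]]
encoded = [encode_stream(s) for s in streams]
```

Because each stream runs through its own constituent encoder, the per-stream clock rate is a fraction of the aggregate data rate, which is the parallelization advantage the record describes.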

  13. The use of ambient audio to increase safety and immersion in location-based games

    NASA Astrophysics Data System (ADS)

    Kurczak, John Jason

    The purpose of this thesis is to propose an alternative type of interface for mobile software being used while walking or running. Our work addresses the problem of visual user interfaces for mobile software being potentially unsafe for pedestrians, and not being very immersive when used for location-based games. In addition, location-based games and applications can be difficult to develop when directly interfacing with the sensors used to track the user's location. These problems need to be addressed because portable computing devices are becoming a popular tool for navigation, playing games, and accessing the internet while walking. This poses a safety problem for mobile users, who may be paying too much attention to their device to notice and react to hazards in their environment. The difficulty of developing location-based games and other location-aware applications may significantly hinder the prevalence of applications that explore new interaction techniques for ubiquitous computing. We created the TREC toolkit to address the issues with tracking sensors while developing location-based games and applications. We have developed functional location-based applications with TREC to demonstrate the amount of work that can be saved by using this toolkit. In order to have a safer and more immersive alternative to visual interfaces, we have developed ambient audio interfaces for use with mobile applications. Ambient audio uses continuous streams of sound over headphones to present information to mobile users without distracting them from walking safely. In order to test the effectiveness of ambient audio, we ran a study to compare ambient audio with handheld visual interfaces in a location-based game. We compared players' ability to safely navigate the environment, their sense of immersion in the game, and their performance at the in-game tasks.
We found that ambient audio was able to significantly increase players' safety and sense of immersion compared to a visual interface, while players performed significantly better at the game tasks when using the visual interface. This makes ambient audio a legitimate alternative to visual interfaces for mobile users when safety and immersion are a priority.

  14. Designing sound and visual components for enhancement of urban soundscapes.

    PubMed

    Hong, Joo Young; Jeon, Jin Yong

    2013-09-01

    The aim of this study is to investigate the effect of audio-visual components on environmental quality to improve the soundscape. Natural sounds with road traffic noise and visual components in urban streets were evaluated through laboratory experiments. Waterfall and stream water sounds, as well as bird sounds, were selected to enhance the soundscape. Sixteen photomontages of a streetscape were constructed in combination with two types of water features and three types of vegetation, which were chosen as positive visual components. The experiments consisted of audio-only, visual-only, and audio-visual conditions. The preferences and environmental qualities of the stimuli were evaluated by a numerical scale and 12 pairs of adjectives, respectively. The results showed that bird sounds were the most preferred among the natural sounds, while the sound of falling water was found to degrade the soundscape quality when the road traffic noise level was high. The visual effects of vegetation on aesthetic preference were significant, while those of water features were relatively small. The perceived dimensions of the environment were found to differ with the noise level. In particular, the acoustic comfort factor related to soundscape quality considerably influenced preference for the overall environment at higher levels of road traffic noise.

  15. Identification and annotation of erotic film based on content analysis

    NASA Astrophysics Data System (ADS)

    Wang, Donghui; Zhu, Miaoliang; Yuan, Xin; Qian, Hui

    2005-02-01

    The paper brings forward a new method for identifying and annotating erotic films based on content analysis. First, the film is decomposed into video and audio streams. Then, the video stream is segmented into shots and key frames are extracted from each shot. We filter the shots that include potential erotic content by finding the nude human body in key frames. A Gaussian model in YCbCr color space for detecting skin regions is presented. An external polygon that covers the skin regions is used as an approximation of the human body. Finally, we quantify the degree of nudity by calculating the ratio of skin area to whole-body area with weighted parameters. The results of the experiment show the effectiveness of our method.
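A minimal sketch of such a Gaussian skin model over the (Cb, Cr) chrominance plane: luminance is dropped and each pixel is scored by a 2-D Gaussian. The mean and covariance below are illustrative stand-ins, not the values fitted in the paper.

```python
# Gaussian skin model in YCbCr: convert RGB to chrominance, then score
# the pixel with a diagonal-covariance Gaussian over (Cb, Cr).
# MEAN and COV_INV are illustrative, not the paper's fitted values.
import math

def rgb_to_cbcr(r, g, b):
    # ITU-R BT.601 chrominance components for 8-bit RGB.
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

MEAN = (110.0, 155.0)                          # assumed skin-cluster center
COV_INV = ((1 / 80.0, 0.0), (0.0, 1 / 60.0))   # assumed inverse covariance

def skin_likelihood(r, g, b):
    cb, cr = rgb_to_cbcr(r, g, b)
    dx, dy = cb - MEAN[0], cr - MEAN[1]
    m = dx * (COV_INV[0][0] * dx) + dy * (COV_INV[1][1] * dy)  # Mahalanobis^2
    return math.exp(-0.5 * m)

skin_score = skin_likelihood(200, 140, 120)   # skin-like tone
sky_score = skin_likelihood(40, 90, 200)      # blue, non-skin
```

Thresholding this likelihood per pixel yields the skin regions over which the bounding polygon and skin-to-body area ratio can then be computed.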

  16. Automated Cough Assessment on a Mobile Platform

    PubMed Central

    2014-01-01

    The development of an Automated System for Asthma Monitoring (ADAM) is described. This consists of a consumer-electronics mobile platform running a custom application. The application acquires an audio signal from an external user-worn microphone connected to the device's analog-to-digital converter (microphone input). This signal is processed to determine the presence or absence of cough sounds. Symptom tallies and raw audio waveforms are recorded and made easily accessible for later review by a healthcare provider. The symptom detection algorithm is based upon standard speech recognition and machine learning paradigms and consists of an audio feature extraction step followed by a Hidden Markov Model-based Viterbi decoder that has been trained on a large database of audio examples from a variety of subjects. Multiple Hidden Markov Model topologies and orders are studied. Performance of the recognizer is presented in terms of the sensitivity and the rate of false alarm as determined in a cross-validation test. PMID:25506590
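The feature-extraction step can be illustrated with two classic short-time features, frame energy and zero-crossing rate. This is a generic sketch of that kind of front end, not ADAM's actual feature set.

```python
# Generic audio front-end sketch: frame the signal, then compute
# short-time energy and zero-crossing rate per frame. These are classic
# features that could feed an HMM decoder; not ADAM's actual features.

def frames(signal, size, hop):
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, hop)]

def short_time_energy(frame):
    return sum(x * x for x in frame) / len(frame)

def zero_crossing_rate(frame):
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return crossings / (len(frame) - 1)

# A noisy alternating burst has a high ZCR; a flat segment has zero.
burst = [1.0, -1.0] * 8
flat = [0.2] * 16
features = [(short_time_energy(f), zero_crossing_rate(f))
            for f in frames(burst + flat, size=16, hop=16)]
```

Each frame's feature vector would then become one observation in the Viterbi decoder's input sequence.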

  17. JXTA: A Technology Facilitating Mobile P2P Health Management System

    PubMed Central

    Rajkumar, Rajasekaran; Nallani Chackravatula Sriman, Narayana Iyengar

    2012-01-01

    Objectives Mobile JXTA (Juxtapose) is gaining momentum and has attracted the interest of doctors and patients through P2P services that transmit messages. Audio and video can also be transmitted through JXTA. The use of a mobile streaming mechanism with the support of mobile hospital management and healthcare systems would enable better interaction between doctors, nurses, and the hospital. Experimental results demonstrate good performance in comparison with conventional systems. This study evaluates P2P JXTA/JXME (JXTA functionality for MIDP devices), which facilitates peer-to-peer applications using mobile constrained devices. Also, a proven learning algorithm was used to automatically send and process sorted patient data to nurses. Methods From December 2010 to December 2011, a total of 500 patients were referred to our hospital due to minor health problems and were monitored. We selected all of the peer groups and the control server, which controlled the BMO (Block Medical Officer) peer groups; through the BMO and doctor peer groups, prescriptions were delivered to the patients' mobile phones over the JXTA/JXME network. Results All 500 patients were registered in the JXTA network. Among these, 300 patient histories were referred to the record peer group by the doctors, 100 patients were referred to the external doctor peer group, and 100 patients were registered as new users in the JXTA/JXME network. Conclusion This system was developed for mobile streaming applications and was designed to support the mobile health management system using JXTA/JXME. The simulated results show that this system can carry out streaming audio and video applications. Controlling and monitoring by the doctor peer group makes the system more flexible and structured. Further studies are needed to improve knowledge mining and cloud-based m-health management technology in comparison with the traditional system. PMID:24159509

  18. Wireless augmented reality communication system

    NASA Technical Reports Server (NTRS)

    Devereaux, Ann (Inventor); Agan, Martin (Inventor); Jedrey, Thomas (Inventor)

    2006-01-01

    The system of the present invention is a highly integrated radio communication system with a multimedia co-processor which allows true two-way multimedia (video, audio, data) access as well as real-time biomedical monitoring in a pager-sized portable access unit. The system is integrated in a network structure including one or more general purpose nodes for providing a wireless-to-wired interface. The network architecture allows video, audio and data (including biomedical data) streams to be connected directly to external users and devices. The portable access units may also be mated to various non-personal devices such as cameras or environmental sensors for providing a method for setting up wireless sensor nets from which reported data may be accessed through the portable access unit. The reported data may alternatively be automatically logged at a remote computer for access and viewing through a portable access unit, including the user's own.

  19. Wireless Augmented Reality Communication System

    NASA Technical Reports Server (NTRS)

    Jedrey, Thomas (Inventor); Agan, Martin (Inventor); Devereaux, Ann (Inventor)

    2014-01-01

    The system of the present invention is a highly integrated radio communication system with a multimedia co-processor which allows true two-way multimedia (video, audio, data) access as well as real-time biomedical monitoring in a pager-sized portable access unit. The system is integrated in a network structure including one or more general purpose nodes for providing a wireless-to-wired interface. The network architecture allows video, audio and data (including biomedical data) streams to be connected directly to external users and devices. The portable access units may also be mated to various non-personal devices such as cameras or environmental sensors for providing a method for setting up wireless sensor nets from which reported data may be accessed through the portable access unit. The reported data may alternatively be automatically logged at a remote computer for access and viewing through a portable access unit, including the user's own.

  20. Wireless Augmented Reality Communication System

    NASA Technical Reports Server (NTRS)

    Agan, Martin (Inventor); Devereaux, Ann (Inventor); Jedrey, Thomas (Inventor)

    2016-01-01

    The system of the present invention is a highly integrated radio communication system with a multimedia co-processor which allows true two-way multimedia (video, audio, data) access as well as real-time biomedical monitoring in a pager-sized portable access unit. The system is integrated in a network structure including one or more general purpose nodes for providing a wireless-to-wired interface. The network architecture allows video, audio and data (including biomedical data) streams to be connected directly to external users and devices. The portable access units may also be mated to various non-personal devices such as cameras or environmental sensors for providing a method for setting up wireless sensor nets from which reported data may be accessed through the portable access unit. The reported data may alternatively be automatically logged at a remote computer for access and viewing through a portable access unit, including the user's own.

  1. Indexing method of digital audiovisual medical resources with semantic Web integration.

    PubMed

    Cuggia, Marc; Mougin, Fleur; Le Beux, Pierre

    2003-01-01

    Digitalization of audio-visual resources, combined with the performance of networks, offers many possibilities that are the subject of intensive work in the scientific and industrial sectors. Indexing such resources is a major challenge. Recently, the Motion Pictures Expert Group (MPEG) has been developing MPEG-7, a standard for describing multimedia content. The goal of this standard is to develop a rich set of standardized tools to enable fast, efficient retrieval from digital archives and filtering of audiovisual broadcasts on the Internet. How could this kind of technology be used in the medical context? In this paper, we propose a simpler indexing system, based on the Dublin Core standard and compliant with MPEG-7. We use MeSH and UMLS to introduce conceptual navigation. We also present a video platform which enables encoding of audio-visual resources and access to them in streaming mode.

  2. Frequency shifting approach towards textual transcription of heartbeat sounds.

    PubMed

    Arvin, Farshad; Doraisamy, Shyamala; Safar Khorasani, Ehsan

    2011-10-04

    Auscultation is an approach for diagnosing many cardiovascular problems. Automatic analysis of heartbeat sounds and extraction of their audio features can assist physicians in diagnosing diseases. Textual transcription allows a continuous heart-sound stream to be recorded in a text format, which requires very little memory in comparison with other audio formats. In addition, text-based data allow indexing and searching techniques to be applied to access critical events. Hence, the transcribed heartbeat sounds provide useful information for monitoring the behavior of a patient over long durations. This paper proposes a frequency-shifting method to improve the performance of the transcription. The main objective of this study is to transfer the heartbeat sounds to the music domain. The proposed technique was tested with 100 samples recorded from different heart-disease categories. The observed results show that the proposed shifting method significantly improves the performance of the transcription.
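One way to picture the shift into the music domain: scale a low heart-sound frequency upward, then quantize it to the nearest note so the stream can be written as text. The shift factor and the note mapping below are illustrative assumptions, not the paper's method.

```python
# Illustrative frequency-to-note transcription: shift a low heart-sound
# frequency into the musical range, quantize to the nearest MIDI note,
# and emit a text note name. The shift factor is an assumption.
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def freq_to_midi(freq_hz):
    # MIDI note 69 is A4 = 440 Hz; 12 semitones per octave.
    return round(69 + 12 * math.log2(freq_hz / 440.0))

def transcribe(freq_hz, shift_factor=8):
    # Shift the heart-sound frequency up into the musical range first.
    midi = freq_to_midi(freq_hz * shift_factor)
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

note = transcribe(55.0)   # a 55 Hz heart-sound component, shifted up 3 octaves
```

A sequence of such note names is the kind of compact, searchable text stream the abstract describes.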

  3. The effect of audio tours on learning and social interaction: An evaluation at Carlsbad Caverns National Park

    NASA Astrophysics Data System (ADS)

    Novey, Levi T.; Hall, Troy E.

    2007-03-01

    Auditory forms of nonpersonal communication have rarely been evaluated in informal settings like parks and museums. This study evaluated the effect of an interpretive audio tour on visitor knowledge and social behavior at Carlsbad Caverns National Park. A cross-sectional pretest/posttest quasi-experimental design compared the responses of audio tour users (n = 123) and nonusers (n = 131) on several knowledge questions. Observations (n = 700) conducted at seven sites within the caverns documented sign reading, time spent listening to the audio, within group conversation, and other social behaviors for a different sample of visitors. Pretested tour users and nonusers did not differ in visitor characteristics, knowledge, or attitude variables, suggesting the two populations were similar. On a 12-item knowledge quiz, tour users' scores increased from 5.7 to 10.3, and nonusers' scores increased from 6.2 to 8.4. Most visitors were able to identify some of the park's major messages when presented with a multiple-choice question, but more audio users than nonusers identified resource preservation as a primary message in an open-ended question. Based on observations, audio tour users and nonusers did not differ substantially in their interactions with other members of their group or in their reading of interpretive signs in the cave. Audio tour users had positive reactions to the tour, and these reactions, coupled with the positive learning outcomes and negligible effects on social interaction, suggest that audio tours can be an effective communication medium in informal educational settings.

  4. Evaluating the Use of Auditory Systems to Improve Performance in Combat Search and Rescue

    DTIC Science & Technology

    2012-03-01

    take advantage of human binaural hearing to present spatial information through auditory stimuli as it would occur in the real world. This allows the...multiple operators unambiguously and in a short amount of time. Spatial audio basics Spatial audio works with human binaural hearing to generate... binaural recordings “sound better” when heard in the same location where the recordings were made. While this appears to be related to the acoustic

  5. Attending to Multiple Visual Streams: Interactions between Location-Based and Category-Based Attentional Selection

    ERIC Educational Resources Information Center

    Fagioli, Sabrina; Macaluso, Emiliano

    2009-01-01

    Behavioral studies indicate that subjects are able to divide attention between multiple streams of information at different locations. However, it is still unclear to what extent the observed costs reflect processes specifically associated with spatial attention, versus more general interference due to the concurrent monitoring of multiple streams of…

  6. Fiber-channel audio video standard for military and commercial aircraft product lines

    NASA Astrophysics Data System (ADS)

    Keller, Jack E.

    2002-08-01

    Fibre Channel is an emerging high-speed digital network technology that is making inroads into the avionics arena. The suitability of fibre channel for such applications is largely due to its flexibility in these key areas: network topologies can be configured in point-to-point, arbitrated loop or switched fabric connections; the physical layer supports either copper or fiber optic implementations with a bit error rate of less than 10^-12; multiple classes of service are available; multiple upper level protocols are supported; and multiple high-speed data rates offer open-ended growth paths, providing speed negotiation within a single network. Current speeds supported by commercially available hardware are 1 and 2 Gbps, providing effective data rates of 100 and 200 MBps, respectively. Such networks lend themselves well to the transport of digital video and audio data. This paper summarizes an ANSI standard currently in the final approval cycle of the InterNational Committee for Information Technology Standardization (INCITS). This standard defines a flexible mechanism whereby digital video, audio and ancillary data are systematically packaged for transport over a fibre channel network. The basic mechanism, called a container, houses audio and video content functionally grouped as elements of the container called objects. Featured in this paper is a specific container mapping called Simple Parametric Digital Video (SPDV), developed particularly to address digital video in avionics systems. SPDV provides pixel-based video with associated ancillary data, typically sourced by various sensors, to be processed and/or distributed in the cockpit for presentation via high-resolution displays. Also highlighted in this paper is a streamlined Upper Level Protocol (ULP) called Frame Header Control Procedure (FHCP), targeted for avionics systems where the functionality of a more complex ULP is not required.

  7. Multiple stress response of lowland stream benthic macroinvertebrates depends on habitat type.

    PubMed

    Graeber, Daniel; Jensen, Tinna M; Rasmussen, Jes J; Riis, Tenna; Wiberg-Larsen, Peter; Baattrup-Pedersen, Annette

    2017-12-01

    Worldwide, lowland stream ecosystems are exposed to multiple anthropogenic stress due to the combination of water scarcity, eutrophication, and fine sedimentation. The understanding of the effects of such multiple stress on stream benthic macroinvertebrates has been growing in recent years. However, the interdependence of multiple stress and stream habitat characteristics has received little attention, although single stressor studies indicate that habitat characteristics may be decisive in shaping the macroinvertebrate response. We conducted an experiment in large outdoor flumes to assess the effects of low flow, fine sedimentation, and nutrient enrichment on the structure of the benthic macroinvertebrate community in riffle and run habitats of lowland streams. For most taxa, we found a negative effect of low flow on macroinvertebrate abundance in the riffle habitat, an effect which was mitigated by fine sedimentation for overall community composition and the dominant shredder species (Gammarus pulex) and by nutrient enrichment for the dominant grazer species (Baetis rhodani). In contrast, fine sediment in combination with low flow rapidly affected macroinvertebrate composition in the run habitat, with decreasing abundances of many species. We conclude that the effects of typical multiple stressor scenarios on lowland stream benthic macroinvertebrates are highly dependent on habitat conditions and that high habitat diversity needs to be given priority by stream managers to maximize the resilience of stream macroinvertebrate communities to multiple stress. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Statistical data mining of streaming motion data for fall detection in assistive environments.

    PubMed

    Tasoulis, S K; Doukas, C N; Maglogiannis, I; Plagianakos, V P

    2011-01-01

    The analysis of human motion data is interesting for the purpose of activity recognition or emergency event detection, especially in the case of elderly or disabled people living independently in their homes. Several techniques have been proposed for identifying such distress situations using either motion, audio or video sensors on the monitored subject (wearable sensors) or in the surrounding environment. The output of such sensors consists of data streams that require real-time recognition, especially in emergency situations; traditional classification approaches may therefore not be applicable for immediate alarm triggering or fall prevention. This paper presents a statistical mining methodology that may be used for the specific problem of real-time fall detection. Visual data captured from the user's environment using overhead cameras, along with motion data collected from accelerometers on the subject's body, are fed to the fall detection system. The paper includes the details of the stream data mining methodology incorporated in the system along with an initial evaluation of the achieved accuracy in detecting falls.
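    The abstract leaves the statistical mining method unspecified; as a hedged illustration of the kind of real-time rule such a system might apply to an accelerometer magnitude stream, here is a toy impact-then-stillness detector. All names, thresholds, and the synthetic signal are invented for illustration and are not from the paper:

```python
import numpy as np

def detect_falls(acc_mag_g, fs, impact_g=2.5, still_dev_g=0.3, window_s=1.0):
    """Flag sample indices where a large impact spike is followed by a
    window of near-stillness (magnitude staying close to 1 g)."""
    w = int(window_s * fs)
    falls = []
    i = 0
    while i < len(acc_mag_g) - w:
        if acc_mag_g[i] > impact_g:
            post = acc_mag_g[i + 1:i + 1 + w]
            if np.mean(np.abs(post - 1.0)) < still_dev_g:
                falls.append(i)
                i += w                  # skip past the detected event
        i += 1
    return falls

# Synthetic stream: 1 g baseline with a single 3 g impact at sample 100.
fs = 50
acc = np.ones(300)
acc[100] = 3.0
events = detect_falls(acc, fs)
```

    A deployed system would fuse this with the visual stream and use learned rather than hand-set thresholds.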

  9. Selective Attention Modulates the Direction of Audio-Visual Temporal Recalibration

    PubMed Central

    Ikumi, Nara; Soto-Faraco, Salvador

    2014-01-01

    Temporal recalibration of cross-modal synchrony has been proposed as a mechanism to compensate for timing differences between sensory modalities. However, far from the rich complexity of everyday sensory environments, most studies to date have examined recalibration on isolated cross-modal pairings. Here, we hypothesize that selective attention might provide an effective filter to help resolve which stimuli are selected when multiple events compete for recalibration. We addressed this question by testing audio-visual recalibration following an adaptation phase in which two opposing audio-visual asynchronies were present. The direction of voluntary visual attention, and therefore exposure to one of the two possible asynchronies (flash leading or flash lagging), was manipulated using colour as a selection criterion. We found a shift in the point of subjective audio-visual simultaneity as a function of whether the observer had focused attention on audio-then-flash or on flash-then-audio groupings during the adaptation phase. A baseline adaptation condition revealed that this effect of endogenous attention was only effective toward the lagging flash. This hints at the role of exogenous capture and/or additional endogenous effects producing an asymmetry toward the leading flash. We conclude that selective attention helps promote selected audio-visual pairings to be combined and subsequently adjusted in time, but stimulus organization exerts a strong impact on recalibration. We tentatively hypothesize that the resolution of recalibration in complex scenarios involves the orchestration of top-down selection mechanisms and stimulus-driven processes. PMID:25004132

  10. Selective attention modulates the direction of audio-visual temporal recalibration.

    PubMed

    Ikumi, Nara; Soto-Faraco, Salvador

    2014-01-01

    Temporal recalibration of cross-modal synchrony has been proposed as a mechanism to compensate for timing differences between sensory modalities. However, far from the rich complexity of everyday sensory environments, most studies to date have examined recalibration on isolated cross-modal pairings. Here, we hypothesize that selective attention might provide an effective filter to help resolve which stimuli are selected when multiple events compete for recalibration. We addressed this question by testing audio-visual recalibration following an adaptation phase in which two opposing audio-visual asynchronies were present. The direction of voluntary visual attention, and therefore exposure to one of the two possible asynchronies (flash leading or flash lagging), was manipulated using colour as a selection criterion. We found a shift in the point of subjective audio-visual simultaneity as a function of whether the observer had focused attention on audio-then-flash or on flash-then-audio groupings during the adaptation phase. A baseline adaptation condition revealed that this effect of endogenous attention was only effective toward the lagging flash. This hints at the role of exogenous capture and/or additional endogenous effects producing an asymmetry toward the leading flash. We conclude that selective attention helps promote selected audio-visual pairings to be combined and subsequently adjusted in time, but stimulus organization exerts a strong impact on recalibration. We tentatively hypothesize that the resolution of recalibration in complex scenarios involves the orchestration of top-down selection mechanisms and stimulus-driven processes.

  11. Vroom: designing an augmented environment for remote collaboration in digital cinema production

    NASA Astrophysics Data System (ADS)

    Margolis, Todd; Cornish, Tracy

    2013-03-01

    As media technologies become increasingly affordable, compact and inherently networked, new generations of telecollaborative platforms continue to arise which integrate these new affordances. Virtual reality has been primarily concerned with creating simulations of environments that can transport participants to real or imagined spaces that replace the "real world". Meanwhile Augmented Reality systems have evolved to interleave objects from Virtual Reality environments into the physical landscape. Perhaps now there is a new class of systems that reverse this precept to enhance dynamic media landscapes and immersive physical display environments to enable intuitive data exploration through collaboration. Vroom (Virtual Room) is a next-generation reconfigurable tiled display environment in development at the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego. Vroom enables freely scalable digital collaboratories, connecting distributed, high-resolution visualization resources for collaborative work in the sciences, engineering and the arts. Vroom transforms a physical space into an immersive media environment with large format interactive display surfaces, video teleconferencing and spatialized audio built on a high-speed optical network backbone. Vroom enables group collaboration for local and remote participants to share knowledge and experiences. Possible applications include: remote learning, command and control, storyboarding, post-production editorial review, high resolution video playback, 3D visualization, screencasting and image, video and multimedia file sharing. To support these various scenarios, Vroom features support for multiple user interfaces (optical tracking, touch UI, gesture interface, etc.), support for directional and spatialized audio, giga-pixel image interactivity, 4K video streaming, 3D visualization and telematic production. 
This paper explains the design process that has been utilized to make Vroom an accessible and intuitive immersive environment for remote collaboration specifically for digital cinema production.

  12. Integration of Geographical Information Systems and Geophysical Applications with Distributed Computing Technologies.

    NASA Astrophysics Data System (ADS)

    Pierce, M. E.; Aktas, M. S.; Aydin, G.; Fox, G. C.; Gadgil, H.; Sayar, A.

    2005-12-01

    We examine the application of Web Service Architectures and Grid-based distributed computing technologies to geophysics and geo-informatics. We are particularly interested in the integration of Geographical Information System (GIS) services with distributed data mining applications. GIS services provide the general purpose framework for building archival data services, real time streaming data services, and map-based visualization services that may be integrated with data mining and other applications through the use of distributed messaging systems and Web Service orchestration tools. Building upon our previous work in these areas, we present our current research efforts. These include fundamental investigations into increasing XML-based Web service performance, supporting real time data streams, and integrating GIS mapping tools with audio/video collaboration systems for shared display and annotation.

  13. Reconstruction of audio waveforms from spike trains of artificial cochlea models

    PubMed Central

    Zai, Anja T.; Bhargava, Saurabh; Mesgarani, Nima; Liu, Shih-Chii

    2015-01-01

    Spiking cochlea models describe the analog processing and spike generation process within the biological cochlea. Reconstructing the audio input from the artificial cochlea spikes is therefore useful for understanding the fidelity of the information preserved in the spikes. The reconstruction process is challenging particularly for spikes from the mixed signal (analog/digital) integrated circuit (IC) cochleas because of multiple non-linearities in the model and the additional variance caused by random transistor mismatch. This work proposes an offline method for reconstructing the audio input from spike responses of both a particular spike-based hardware model called the AEREAR2 cochlea and an equivalent software cochlea model. This method was previously used to reconstruct the auditory stimulus based on the peri-stimulus histogram of spike responses recorded in the ferret auditory cortex. The reconstructed audio from the hardware cochlea is evaluated against an analogous software model using objective measures of speech quality and intelligibility; and further tested in a word recognition task. The reconstructed audio under low signal-to-noise (SNR) conditions (SNR < –5 dB) gives a better classification performance than the original SNR input in this word recognition task. PMID:26528113

  14. PROTAX-Sound: A probabilistic framework for automated animal sound identification

    PubMed Central

    Somervuo, Panu; Ovaskainen, Otso

    2017-01-01

    Autonomous audio recording is a stimulating new field in bioacoustics, with great promise for conducting cost-effective species surveys. One major current challenge is the lack of reliable classifiers capable of multi-species identification. We present PROTAX-Sound, a statistical framework to perform probabilistic classification of animal sounds. PROTAX-Sound is based on a multinomial regression model, and it can utilize as predictors any kind of sound features or classifications produced by other existing algorithms. PROTAX-Sound combines audio and image processing techniques to scan environmental audio files. It identifies regions of interest (a segment of the audio file that contains a vocalization to be classified), extracts acoustic features from them and compares with samples in a reference database. The output of PROTAX-Sound is the probabilistic classification of each vocalization, including the possibility that it represents species not present in the reference database. We demonstrate the performance of PROTAX-Sound by classifying audio from a species-rich case study of tropical birds. The best performing classifier achieved 68% classification accuracy for 200 bird species. PROTAX-Sound improves the classification power of current techniques by combining information from multiple classifiers in a manner that yields calibrated classification probabilities. PMID:28863178
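    PROTAX-Sound's core is a multinomial regression over acoustic predictors; its calibrated class probabilities can be sketched with a plain softmax layer, with one extra class standing in for "species not in the reference database". This is a simplified stand-in for the actual model; the names, shapes, and random inputs are illustrative only:

```python
import numpy as np

def classify_sounds(features, W, b):
    """Multinomial (softmax) regression over acoustic feature vectors.
    The last output column is reserved here for an 'unknown species' class."""
    z = features @ W + b
    z -= z.max(axis=1, keepdims=True)       # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))   # 5 detected vocalizations, 8 acoustic features
W = rng.normal(size=(8, 4))   # 3 known species + 1 "unknown" class
b = np.zeros(4)
P = classify_sounds(X, W, b)  # each row sums to 1: a calibrated distribution
```

    In the real framework the predictors may themselves be outputs of other classifiers, and the weights are fitted to labelled reference recordings.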

  15. PROTAX-Sound: A probabilistic framework for automated animal sound identification.

    PubMed

    de Camargo, Ulisses Moliterno; Somervuo, Panu; Ovaskainen, Otso

    2017-01-01

    Autonomous audio recording is a stimulating new field in bioacoustics, with great promise for conducting cost-effective species surveys. One major current challenge is the lack of reliable classifiers capable of multi-species identification. We present PROTAX-Sound, a statistical framework to perform probabilistic classification of animal sounds. PROTAX-Sound is based on a multinomial regression model, and it can utilize as predictors any kind of sound features or classifications produced by other existing algorithms. PROTAX-Sound combines audio and image processing techniques to scan environmental audio files. It identifies regions of interest (a segment of the audio file that contains a vocalization to be classified), extracts acoustic features from them and compares with samples in a reference database. The output of PROTAX-Sound is the probabilistic classification of each vocalization, including the possibility that it represents species not present in the reference database. We demonstrate the performance of PROTAX-Sound by classifying audio from a species-rich case study of tropical birds. The best performing classifier achieved 68% classification accuracy for 200 bird species. PROTAX-Sound improves the classification power of current techniques by combining information from multiple classifiers in a manner that yields calibrated classification probabilities.

  16. An architecture of entropy decoder, inverse quantiser and predictor for multi-standard video decoding

    NASA Astrophysics Data System (ADS)

    Liu, Leibo; Chen, Yingjie; Yin, Shouyi; Lei, Hao; He, Guanghui; Wei, Shaojun

    2014-07-01

    A VLSI architecture for an entropy decoder, inverse quantiser and predictor is proposed in this article. This architecture is used for decoding video streams of three standards on a single chip, i.e. H.264/AVC, AVS (China National Audio Video coding Standard) and MPEG2. The proposed scheme is called MPMP (Macro-block-Parallel based Multilevel Pipeline) and is intended to improve decoding performance to satisfy real-time requirements while maintaining reasonable area and power consumption. Several techniques, such as slice-level pipelining, MB (Macro-Block) level pipelining and MB-level parallelism, are adopted. Input and output buffers for the inverse quantiser and predictor are shared by the decoding engines for H.264, AVS and MPEG2, effectively reducing the implementation overhead. Simulation shows that the decoding process consumes 512, 435 and 438 clock cycles per MB in H.264, AVS and MPEG2, respectively. Owing to the proposed techniques, the video decoder can support H.264 HP (High Profile) 1920 × 1088@30fps (frames per second) streams, AVS JP (Jizhun Profile) 1920 × 1088@41fps streams and MPEG2 MP (Main Profile) 1920 × 1088@39fps streams when exploiting a 200 MHz working frequency.
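    As a sanity check on the quoted figures, the per-MB cycle counts imply an upper bound on frame rate at 200 MHz, computed from the macroblock count of a 1920 × 1088 frame. The bounds comfortably exceed the supported frame rates, the gap presumably absorbing pipeline stalls and memory traffic; this back-of-the-envelope calculation is ours, not the article's:

```python
# Upper bound on decode frame rate implied by the per-MB cycle counts,
# ignoring pipeline stalls and memory contention.
CLK_HZ = 200e6
mbs_per_frame = (1920 // 16) * (1088 // 16)   # 120 * 68 = 8160 macroblocks
fps = {name: CLK_HZ / (cycles * mbs_per_frame)
       for name, cycles in [("H.264", 512), ("AVS", 435), ("MPEG2", 438)]}
# H.264: ~47.9 fps bound vs 30 fps supported; AVS: ~56.3 vs 41; MPEG2: ~56.0 vs 39
```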

  17. Estimation of inhalation flow profile using audio-based methods to assess inhaler medication adherence.

    PubMed

    Taylor, Terence E; Lacalle Muls, Helena; Costello, Richard W; Reilly, Richard B

    2018-01-01

    Asthma and chronic obstructive pulmonary disease (COPD) patients are required to inhale forcefully and deeply to receive medication when using a dry powder inhaler (DPI). There is a clinical need to objectively monitor the inhalation flow profile of DPIs in order to remotely monitor patient inhalation technique. Audio-based methods have been previously employed to accurately estimate flow parameters such as the peak inspiratory flow rate of inhalations; however, these methods required multiple calibration inhalation audio recordings. In this study, an audio-based method is presented that accurately estimates inhalation flow profile using only one calibration inhalation audio recording. Twenty healthy participants were asked to perform 15 inhalations through a placebo Ellipta™ DPI at a range of inspiratory flow rates. Inhalation flow signals were recorded using a pneumotachograph spirometer while inhalation audio signals were recorded simultaneously using the Inhaler Compliance Assessment device attached to the inhaler. The acoustic (amplitude) envelope was estimated from each inhalation audio signal. Using only one recording, linear and power law regression models were employed to determine which model best described the relationship between the inhalation acoustic envelope and flow signal. Each model was then employed to estimate the flow signals of the remaining 14 inhalation audio recordings. This process was repeated until each of the 15 recordings had been employed to calibrate single models while testing on the remaining 14 recordings. It was observed that power law models generated the highest average flow estimation accuracy across all participants (90.89±0.9% for power law models and 76.63±2.38% for linear models). The method also generated sufficient accuracy in estimating inhalation parameters such as peak inspiratory flow rate and inspiratory capacity in the presence of noise. 
Estimating inhaler inhalation flow profiles using audio-based methods may be clinically beneficial for inhaler technique training and the remote monitoring of patient adherence.
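    The power-law calibration step described above can be sketched as fitting flow ≈ a · envelope^b from a single calibration recording by least squares in log-log space. The data below are synthetic and the function names are ours; the study's exact fitting procedure may differ:

```python
import numpy as np

def fit_power_law(envelope, flow):
    """Fit flow ~ a * envelope**b by linear least squares in log-log space."""
    b, log_a = np.polyfit(np.log(envelope), np.log(flow), 1)
    return np.exp(log_a), b

# Synthetic calibration pair obeying an exact power law (a=3.2, b=1.8).
env = np.linspace(0.1, 1.0, 50)        # acoustic amplitude envelope
flow = 3.2 * env ** 1.8                # spirometer flow signal
a, b = fit_power_law(env, flow)
est = a * env ** b                     # flow estimated from audio alone
```

    In the study, the model fitted on one recording is then applied to the envelopes of the remaining recordings to estimate their flow profiles.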

  18. Estimation of inhalation flow profile using audio-based methods to assess inhaler medication adherence

    PubMed Central

    Lacalle Muls, Helena; Costello, Richard W.; Reilly, Richard B.

    2018-01-01

    Asthma and chronic obstructive pulmonary disease (COPD) patients are required to inhale forcefully and deeply to receive medication when using a dry powder inhaler (DPI). There is a clinical need to objectively monitor the inhalation flow profile of DPIs in order to remotely monitor patient inhalation technique. Audio-based methods have been previously employed to accurately estimate flow parameters such as the peak inspiratory flow rate of inhalations; however, these methods required multiple calibration inhalation audio recordings. In this study, an audio-based method is presented that accurately estimates inhalation flow profile using only one calibration inhalation audio recording. Twenty healthy participants were asked to perform 15 inhalations through a placebo Ellipta™ DPI at a range of inspiratory flow rates. Inhalation flow signals were recorded using a pneumotachograph spirometer while inhalation audio signals were recorded simultaneously using the Inhaler Compliance Assessment device attached to the inhaler. The acoustic (amplitude) envelope was estimated from each inhalation audio signal. Using only one recording, linear and power law regression models were employed to determine which model best described the relationship between the inhalation acoustic envelope and flow signal. Each model was then employed to estimate the flow signals of the remaining 14 inhalation audio recordings. This process was repeated until each of the 15 recordings had been employed to calibrate single models while testing on the remaining 14 recordings. It was observed that power law models generated the highest average flow estimation accuracy across all participants (90.89±0.9% for power law models and 76.63±2.38% for linear models). The method also generated sufficient accuracy in estimating inhalation parameters such as peak inspiratory flow rate and inspiratory capacity in the presence of noise. 
Estimating inhaler inhalation flow profiles using audio-based methods may be clinically beneficial for inhaler technique training and the remote monitoring of patient adherence. PMID:29346430

  19. Distribution of model uncertainty across multiple data streams

    NASA Astrophysics Data System (ADS)

    Wutzler, Thomas

    2014-05-01

    When confronting biogeochemical models with a diversity of observational data streams, we are faced with the problem of weighting the data streams. Without weighting, or with multiple blocked cost functions, model uncertainty is allocated to the sparse data streams, and possible bias in processes that are strongly constrained is exported to processes that are constrained only by sparse data streams. In this study we propose an approach that aims at making model uncertainty a factor of observation uncertainty that is constant across all data streams. Further, we propose an implementation based on Markov chain Monte Carlo sampling combined with simulated annealing that is able to determine this variance factor. The method is exemplified both with very simple models and artificial data, and with an inversion of the DALEC ecosystem carbon model against multiple observations of Howland forest. We argue that the presented approach may help resolve the problem of bias export to sparse data streams.
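    The cost function implied by the abstract can be sketched as squared residuals normalized by observation variance, with that variance inflated by one factor shared across all streams so that no stream absorbs a disproportionate share of model uncertainty. This is a hedged illustration; the study's exact formulation and the MCMC/annealing machinery that estimates the factor are not reproduced here:

```python
import numpy as np

def multi_stream_cost(residuals, obs_sigma, variance_factor):
    """Total cost over several data streams: each stream's observation
    variance is inflated by one shared factor, keeping model uncertainty
    a constant multiple of observation uncertainty across streams."""
    return sum(np.sum(r**2 / (variance_factor * s**2))
               for r, s in zip(residuals, obs_sigma))

# Two streams of very different size, each with residuals equal to 1 sigma:
# the dense stream contributes 100 and the sparse stream 10, i.e. each
# observation counts equally once normalized.
r_dense, r_sparse = np.full(100, 0.5), np.full(10, 2.0)
cost = multi_stream_cost([r_dense, r_sparse], [0.5, 2.0], variance_factor=1.0)
```

    Doubling the shared variance factor halves the cost uniformly, which is what allows it to be sampled as a single extra parameter.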

  20. A hybrid technique for speech segregation and classification using a sophisticated deep neural network

    PubMed Central

    Nawaz, Tabassam; Mehmood, Zahid; Rashid, Muhammad; Habib, Hafiz Adnan

    2018-01-01

    Recent research on speech segregation and music fingerprinting has led to improvements in speech segregation and music identification algorithms. Speech and music segregation generally involves the identification of music followed by speech segregation. However, music segregation becomes a challenging task in the presence of noise. This paper proposes a novel method of speech segregation for unlabelled stationary noisy audio signals using the deep belief network (DBN) model. The proposed method successfully segregates a music signal from noisy audio streams. A recurrent neural network (RNN)-based hidden layer segregation model is applied to remove stationary noise. Dictionary-based Fisher algorithms are employed for speech classification. The proposed method is tested on three datasets (TIMIT, MIR-1K, and MusicBrainz), and the results indicate the robustness of the proposed method for speech segregation. The qualitative and quantitative analyses carried out on the three datasets demonstrate the efficiency of the proposed method compared to state-of-the-art speech segregation and classification methods. PMID:29558485

  1. Experiments in MPEG-4 content authoring, browsing, and streaming

    NASA Astrophysics Data System (ADS)

    Puri, Atul; Schmidt, Robert L.; Basso, Andrea; Civanlar, Mehmet R.

    2000-12-01

    In this paper, within the context of the MPEG-4 standard, we report on preliminary experiments in three areas -- authoring of MPEG-4 content, a player/browser for MPEG-4 content, and streaming of MPEG-4 content. MPEG-4 is a new standard for coding of audiovisual objects; the core of the MPEG-4 standard is complete while amendments are in various stages of completion. MPEG-4 addresses compression of audio and visual objects, their integration by scene description, and interactivity of users with such objects. MPEG-4 scene description is based on a VRML-like language for 3D scenes, extended to 2D scenes, and supports integration of 2D and 3D scenes. This scene description language is called BIFS. First, we introduce the basic concepts behind BIFS and then show, with an example, textual authoring of the different components needed to describe an audiovisual scene in BIFS; the textual BIFS is then saved as compressed binary files for storage or transmission. Then, we discuss a high-level design of an MPEG-4 player/browser that uses the main components from authoring, such as the encoded BIFS stream, the media files it refers to, and the multiplexed object descriptor stream, to play an MPEG-4 scene. We also discuss our extensions to such a player/browser. Finally, we present our work in streaming of MPEG-4 -- the payload format, modifications to the client MPEG-4 player/browser, server-side infrastructure and example content used in our MPEG-4 streaming experiments.

  2. Remote listening and passive acoustic detection in a 3-D environment

    NASA Astrophysics Data System (ADS)

    Barnhill, Colin

    Teleconferencing environments are a necessity in business, education and personal communication. They allow for the communication of information to remote locations without the need for travel and the necessary time and expense required for that travel. Visual information can be communicated using cameras and monitors. The advantage of visual communication is that an image can capture multiple objects and convey them, using a monitor, to a large group of people regardless of the receiver's location. This is not the case for audio. Currently, most experimental teleconferencing systems' audio is based on stereo recording and reproduction techniques. The problem with this solution is that it is only effective for one or two receivers. To accurately capture a sound environment consisting of multiple sources and to recreate that for a group of people is an unsolved problem. This work will focus on new methods of multiple source 3-D environment sound capture and applications using these captured environments. Using spherical microphone arrays, it is now possible to capture a true 3-D environment A spherical harmonic transform on the array's surface allows us to determine the basis functions (spherical harmonics) for all spherical wave solutions (up to a fixed order). This spherical harmonic decomposition (SHD) allows us to not only look at the time and frequency characteristics of an audio signal but also the spatial characteristics of an audio signal. In this way, a spherical harmonic transform is analogous to a Fourier transform in that a Fourier transform transforms a signal into the frequency domain and a spherical harmonic transform transforms a signal into the spatial domain. The SHD also decouples the input signals from the microphone locations. 
Using the SHD of a soundfield, new algorithms are available for remote listening, acoustic detection, and signal enhancement. The new algorithms presented in this paper show distinct advantages over previous detection and listening algorithms, especially for multiple speech sources and room environments. The algorithms use high-order (spherical harmonic) beamforming and power signal characteristics for source localization and signal enhancement. These methods are applied to remote listening, surveillance, and teleconferencing.
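The order-1 core of such a spherical harmonic decomposition can be sketched numerically. The following is a minimal illustration, not the paper's algorithm: it integrates a cardioid-like pressure field against the real first-order spherical harmonics by Monte Carlo quadrature and reads a source direction from the order-1 coefficients. The field, the sampling scheme, and the order limit are all assumptions of the sketch; the paper uses higher orders and real microphone arrays.

```python
import math
import random

# Real spherical harmonics up to order 1, evaluated at a unit vector (x, y, z).
def sh_order1(x, y, z):
    c0 = 0.5 / math.sqrt(math.pi)           # Y_0^0
    c1 = math.sqrt(3.0 / (4.0 * math.pi))   # order-1 normalization
    return (c0, c1 * y, c1 * z, c1 * x)     # (Y_0^0, Y_1^-1, Y_1^0, Y_1^1)

def estimate_direction(pressure, n=50000, seed=1):
    """Monte Carlo SHD of a pressure field sampled on the unit sphere;
    returns the direction implied by the order-1 coefficients."""
    rng = random.Random(seed)
    acc = [0.0, 0.0, 0.0]
    for _ in range(n):
        z = rng.uniform(-1.0, 1.0)                     # uniform point on sphere
        phi = rng.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(1.0 - z * z)
        x, y = r * math.cos(phi), r * math.sin(phi)
        p = pressure(x, y, z)
        _, y1m1, y10, y11 = sh_order1(x, y, z)
        acc[0] += p * y11   # projects onto source x component
        acc[1] += p * y1m1  # projects onto source y component
        acc[2] += p * y10   # projects onto source z component
    norm = math.sqrt(sum(a * a for a in acc))
    return tuple(a / norm for a in acc)

# Cardioid-like field produced by a source in unit direction s.
s = (0.6, 0.64, 0.48)
field = lambda x, y, z: 1.0 + x * s[0] + y * s[1] + z * s[2]
est = estimate_direction(field)
```

The order-0 coefficient (omitted from the direction estimate) carries the omnidirectional part of the field; the three order-1 coefficients together point at the source.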

  3. SOUND SURVEY DESIGNS CAN FACILITATE INTEGRATING STREAM MONITORING DATA ACROSS MULTIPLE PROGRAMS

    EPA Science Inventory

    Multiple agencies in the Pacific Northwest monitor the condition of stream networks or their watersheds. Some agencies use a stream "network" perspective to report on the fraction or length of the network that either meets or violates particular criteria. Other agencies use a "wa...

  4. Audio-Visual Perception System for a Humanoid Robotic Head

    PubMed Central

    Viciana-Abad, Raquel; Marfil, Rebeca; Perez-Lorenzo, Jose M.; Bandera, Juan P.; Romero-Garces, Adrian; Reche-Lopez, Pedro

    2014-01-01

    One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus, they may incur difficulties when constrained to the sensors with which a robot can be equipped. Besides, within the scope of interactive autonomous robots, there is a lack in terms of evaluating the benefits of audio-visual attention mechanisms, compared to only audio or visual approaches, in real scenarios. Most of the tests conducted have been within controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with a Bayes inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared via considering the technical limitations of unimodal systems. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework. PMID:24878593
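The gain from Bayes fusion of a coarse audio bearing with a more precise visual one can be sketched in a few lines. This is a generic naive-Bayes illustration, not the paper's system: the Gaussian likelihood shapes, the sigma values, and the flat prior are all assumptions of the sketch.

```python
import math

def gaussian(x, mu, sigma):
    """Unnormalized Gaussian likelihood."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fuse(audio_obs, visual_obs, sigma_a=15.0, sigma_v=4.0):
    """Naive Bayes fusion over candidate azimuths (degrees):
    posterior(theta) ~ p(audio | theta) * p(visual | theta), flat prior."""
    angles = range(-90, 91)
    post = {th: gaussian(audio_obs, th, sigma_a) * gaussian(visual_obs, th, sigma_v)
            for th in angles}
    return max(post, key=post.get)

# Coarse audio bearing (30 deg, sigma 15) fused with a sharper visual one (22 deg, sigma 4).
best = fuse(audio_obs=30.0, visual_obs=22.0)
```

The fused estimate lands close to the more reliable visual observation while still being pulled slightly toward the audio cue, which is the qualitative behavior the abstract argues for.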

  5. Semantic Indexing of Multimedia Content Using Visual, Audio, and Text Cues

    NASA Astrophysics Data System (ADS)

    Adams, W. H.; Iyengar, Giridharan; Lin, Ching-Yung; Naphade, Milind Ramesh; Neti, Chalapathy; Nock, Harriet J.; Smith, John R.

    2003-12-01

    We present a learning-based approach to the semantic indexing of multimedia content using cues derived from audio, visual, and text features. We approach the problem by developing a set of statistical models for a predefined lexicon. Novel concepts are then mapped in terms of the concepts in the lexicon. To achieve robust detection of concepts, we exploit features from multiple modalities, namely, audio, video, and text. Concept representations are modeled using Gaussian mixture models (GMM), hidden Markov models (HMM), and support vector machines (SVM). Models such as Bayesian networks and SVMs are used in a late-fusion approach to model concepts that are not explicitly modeled in terms of features. Our experiments indicate promise in the proposed classification and fusion methodologies: our proposed fusion scheme achieves more than 10% relative improvement over the best unimodal concept detector.
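The late-fusion step described above can be illustrated with the two classic combination rules. This is a generic sketch, not the paper's trained Bayesian-network/SVM combiner: the modality scores and weights below are invented for illustration.

```python
def sum_rule(scores, weights):
    """Late fusion by weighted averaging of unimodal detector scores in [0, 1]."""
    z = sum(weights)
    return sum(w * s for w, s in zip(weights, scores)) / z

def product_rule(scores):
    """Late fusion by the product rule (assumes conditionally
    independent modalities; a single low score vetoes the concept)."""
    p = 1.0
    for s in scores:
        p *= s
    return p

# Illustrative per-modality scores for one concept.
audio, video, text = 0.9, 0.6, 0.3
fused = sum_rule([audio, video, text], [0.5, 0.3, 0.2])
```

In practice the weights (or a full combiner model) are learned on held-out data, which is where the paper's reported >10% relative improvement comes from.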

  6. New radio meteor detecting and logging software

    NASA Astrophysics Data System (ADS)

    Kaufmann, Wolfgang

    2017-08-01

    A new piece of software, "Meteor Logger", for the radio observation of meteors is described. It analyses an incoming audio stream in the frequency domain to detect a radio meteor signal on the basis of its signature, instead of applying an amplitude threshold. To this end, the distribution of the three frequencies with the highest spectral power is tracked over time (the 3f method). An auto-notch algorithm was developed to prevent radio meteor signal detection from being jammed by an interference line. The results of an exemplary logging session are discussed.
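The idea behind the 3f criterion can be sketched as follows: a narrowband meteor echo concentrates the three strongest spectral bins close together, whereas noise or broadband content scatters them. This is only an illustration of that idea, with an invented spread threshold and frame size; the published detection logic may differ.

```python
import cmath
import math

N = 64  # frame length in samples (illustrative)

def dft_mag(frame):
    """Magnitude spectrum of one frame (naive DFT, positive bins only)."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]

def top3_bins(frame):
    """Indices of the three strongest spectral bins (the '3f' of the method)."""
    mags = dft_mag(frame)
    return sorted(range(len(mags)), key=lambda k: mags[k], reverse=True)[:3]

def is_signal(frame, max_spread=3):
    """Flag a frame when the three strongest bins cluster together."""
    bins = top3_bins(frame)
    return max(bins) - min(bins) <= max_spread

# Narrowband 'echo' (off-bin tone, so leakage fills the adjacent bins)
tone = [math.sin(2 * math.pi * 10.3 * t / N) for t in range(N)]
# Broadband content: three well-separated tones
multi = [sum(math.sin(2 * math.pi * f * t / N) for f in (5, 20, 27))
         for t in range(N)]
```

Running the test over successive frames of the audio stream yields the detection log; the auto-notch step would additionally exclude a bin occupied by a persistent interference line.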

  7. Detection of Metallic and Electronic Radar Targets by Acoustic Modulation of Electromagnetic Waves

    DTIC Science & Technology

    2017-07-01

    reradiated wave is captured by the radar’s receive antenna. The presence of measurable EM energy at any discrete multiple of the audio frequency away...the radar receiver (Rx). The presence of measurable EM energy at any discrete multiple of faudio away from the original RF carrier fRF (i.e., at any n

  8. Acoustic classification of multiple simultaneous bird species: a multi-instance multi-label approach

    Treesearch

    F. Briggs; B. Lakshminarayanan; L. Neal; X.Z. Fern; R. Raich; S.F. Hadley; A.S. Hadley; M.G. Betts

    2012-01-01

    Although field-collected recordings typically contain multiple simultaneously vocalizing birds of different species, acoustic species classification in this setting has received little study so far. This work formulates the problem of classifying the set of species present in an audio recording using the multi-instance multi-label (MIML) framework for machine learning...

  9. Vision-based Detection of Acoustic Timed Events: a Case Study on Clarinet Note Onsets

    NASA Astrophysics Data System (ADS)

    Bazzica, A.; van Gemert, J. C.; Liem, C. C. S.; Hanjalic, A.

    2017-05-01

    Acoustic events often have a visual counterpart. Knowledge of visual information can aid the understanding of complex auditory scenes, even when only a stereo mixdown is available in the audio domain, e.g., identifying which musicians are playing in large musical ensembles. In this paper, we consider a vision-based approach to note onset detection. As a case study we focus on challenging, real-world clarinetist videos and carry out preliminary experiments on a 3D convolutional neural network based on multiple streams and purposely avoiding temporal pooling. We release an audiovisual dataset with 4.5 hours of clarinetist videos together with cleaned annotations which include about 36,000 onsets and the coordinates for a number of salient points and regions of interest. By performing several training trials on our dataset, we learned that the problem is challenging. We found that the CNN model is highly sensitive to the optimization algorithm and hyper-parameters, and that treating the problem as binary classification may prevent the joint optimization of precision and recall. To encourage further research, we publicly share our dataset, annotations and all models and detail which issues we came across during our preliminary experiments.

  10. A Multiple Streams analysis of the decisions to fund gender-neutral HPV vaccination in Canada.

    PubMed

    Shapiro, Gilla K; Guichon, Juliet; Prue, Gillian; Perez, Samara; Rosberger, Zeev

    2017-07-01

    In Canada, the human papillomavirus (HPV) vaccine is licensed and recommended for females and males. Although all Canadian jurisdictions fund school-based HPV vaccine programs for girls, only six jurisdictions fund school-based HPV vaccination for boys. The research aimed to analyze the factors that underpin government decisions to fund HPV vaccine for boys using a theoretical policy model, Kingdon's Multiple Streams framework. This approach assesses policy development by examining three concurrent, but independent, streams that guide analysis: Problem Stream, Policy Stream, and Politics Stream. Analysis from the Problem Stream highlights that males are affected by HPV-related diseases and are involved in transmitting HPV infection to their sexual partners. Policy Stream analysis makes clear that while the inclusion of males in HPV vaccine programs is suitable, equitable, and acceptable, there is debate regarding cost-effectiveness. Politics Stream analysis identifies the perspectives of six different stakeholder groups and highlights the contribution of government officials at the provincial and territorial level. Kingdon's Multiple Streams framework helps clarify the opportunities and barriers for HPV vaccine policy change. This analysis identified that the interpretation of cost-effectiveness models and advocacy of stakeholders such as citizen-advocates and HPV-affected politicians have been particularly important in galvanizing policy change. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Beaming into the News: A System for and Case Study of Tele-Immersive Journalism.

    PubMed

    Kishore, Sameer; Navarro, Xavi; Dominguez, Eva; de la Pena, Nonny; Slater, Mel

    2016-05-25

    We show how a combination of virtual reality and robotics can be used to beam a physical representation of a person to a distant location, and describe an application of this system in the context of journalism. Full body motion capture data of a person is streamed and mapped in real time, onto the limbs of a humanoid robot present at the remote location. A pair of cameras in the robot's 'eyes' stream stereoscopic video back to the HMD worn by the visitor, and a two-way audio connection allows the visitor to talk to people in the remote destination. By fusing the multisensory data of the visitor with the robot, the visitor's 'consciousness' is transformed to the robot's body. This system was used by a journalist to interview a neuroscientist and a chef 900 miles distant, about food for the brain, resulting in an article published in the popular press.

  12. Beaming into the News: A System for and Case Study of Tele-Immersive Journalism.

    PubMed

    Kishore, Sameer; Navarro, Xavi; Dominguez, Eva; De La Pena, Nonny; Slater, Mel

    2018-03-01

    We show how a combination of virtual reality and robotics can be used to beam a physical representation of a person to a distant location, and describe an application of this system in the context of journalism. Full body motion capture data of a person is streamed and mapped in real time, onto the limbs of a humanoid robot present at the remote location. A pair of cameras in the robot's 'eyes' stream stereoscopic video back to the HMD worn by the visitor, and a two-way audio connection allows the visitor to talk to people in the remote destination. By fusing the multisensory data of the visitor with the robot, the visitor's 'consciousness' is transformed to the robot's body. This system was used by a journalist to interview a neuroscientist and a chef 900 miles distant, about food for the brain, resulting in an article published in the popular press.

  13. Unsupervised real-time speaker identification for daily movies

    NASA Astrophysics Data System (ADS)

    Li, Ying; Kuo, C.-C. Jay

    2002-07-01

    The problem of identifying speakers for movie content analysis is addressed in this paper. While most previous work on speaker identification was carried out in a supervised mode using pure audio data, more robust results can be obtained in real time by integrating knowledge from multiple media sources in an unsupervised mode. In this work, both audio and visual cues are employed and subsequently combined in a probabilistic framework to identify speakers. In particular, audio information is used to identify speakers with a maximum likelihood (ML)-based approach, while visual information is adopted to distinguish speakers by detecting and recognizing their talking faces based on face detection/recognition and mouth tracking techniques. Moreover, to accommodate speakers' acoustic variations over time, we update their models on the fly by adapting to their newly contributed speech data. Encouraging results have been achieved through extensive experiments, which suggests a promising future for the proposed audiovisual-based unsupervised speaker identification system.

  14. Multi-stream face recognition on dedicated mobile devices for crime-fighting

    NASA Astrophysics Data System (ADS)

    Jassim, Sabah A.; Sellahewa, Harin

    2006-09-01

    Automatic face recognition is a useful tool in the fight against crime and terrorism. Technological advances in mobile communication systems and multi-application mobile devices enable the creation of hybrid platforms for active and passive surveillance. A dedicated mobile device that incorporates audio-visual sensors would not only complement existing networks of fixed surveillance devices (e.g. CCTV) but could also provide wide geographical coverage in almost any situation and anywhere. Such a device can hold a small portion of a law-enforcement agency's biometric database consisting of audio and/or visual data for a number of suspects, wanted persons, or missing persons who are expected to be in a local geographical area. This will assist law-enforcement officers on the ground in identifying persons whose biometric templates are downloaded onto their devices. Biometric data on the device can be regularly updated, which will reduce the number of faces an officer has to remember. Such a dedicated device would act as an active/passive mobile surveillance unit that incorporates automatic identification. This paper is concerned with the feasibility of using wavelet-based face recognition schemes on such devices. The proposed schemes extend our recently developed face verification scheme for implementation on a currently available PDA. In particular, we investigate the use of a combination of wavelet frequency channels for multi-stream face recognition. We present experimental results on the performance of our proposed schemes for a number of publicly available face databases, including a new AV database of videos recorded on a PDA.

  15. Transfer Learning for Improved Audio-Based Human Activity Recognition.

    PubMed

    Ntalampiras, Stavros; Potamitis, Ilyas

    2018-06-25

    Human activities are accompanied by characteristic sound events, the processing of which might provide valuable information for automated human activity recognition. This paper presents a novel approach addressing the case where one or more human activities are associated with limited audio data, resulting in a potentially highly imbalanced dataset. Data augmentation is based on transfer learning; more specifically, the proposed method: (a) identifies the classes which are statistically close to the ones associated with limited data; (b) learns a multiple input, multiple output transformation; and (c) transforms the data of the closest classes so that it can be used for modeling the ones associated with limited data. Furthermore, the proposed framework includes a feature set extracted out of signal representations of diverse domains, i.e., temporal, spectral, and wavelet. Extensive experiments demonstrate the relevance of the proposed data augmentation approach under a variety of generative recognition schemes.

  16. Stream network and stream segment temperature models software

    USGS Publications Warehouse

    Bartholow, John

    2010-01-01

    This set of programs simulates steady-state stream temperatures throughout a dendritic stream network handling multiple time periods per year. The software requires a math co-processor and 384K RAM. Also included is a program (SSTEMP) designed to predict the steady state stream temperature within a single stream segment for a single time period.

  17. Blind source separation and localization using microphone arrays

    NASA Astrophysics Data System (ADS)

    Sun, Longji

    The blind source separation and localization problem for audio signals is studied using microphone arrays. Pure-delay mixtures of source signals, typically encountered in outdoor environments, are considered. Our proposed approach utilizes subspace methods, including the multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithms, to estimate the directions of arrival (DOAs) of the sources from the collected mixtures. Since audio signals are generally considered broadband, the DOA estimates at the frequencies with the largest sums of squared amplitudes are combined to obtain the final DOA estimates. Using the estimated DOAs, the corresponding mixing and demixing matrices are computed, and the source signals are recovered using the inverse short-time Fourier transform. Subspace methods take advantage of the spatial covariance matrix of the collected mixtures to achieve robustness to noise. While subspace methods have been studied for localizing radio frequency signals, audio signals have special properties: they are nonstationary, naturally broadband, and analog. All of these make separation and localization more challenging for audio signals. Moreover, our algorithm is essentially equivalent to the beamforming technique, which suppresses the signals in unwanted directions and recovers only the signals in the estimated DOAs. Several crucial issues related to our algorithm and their solutions are discussed, including source number estimation, spatial aliasing, artifact filtering, different ways of mixture generation, and source coordinate estimation using multiple arrays. Additionally, comprehensive simulations and experiments have been conducted to examine various aspects of the algorithm. 
Unlike the existing blind source separation and localization methods, which are generally time consuming, our algorithm needs signal mixtures of only a short duration and therefore supports real-time implementation.
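The subspace step at the heart of this approach can be illustrated with a minimal narrowband MUSIC sketch for a uniform linear array. The abstract works with broadband audio and combines estimates across frequencies; this single-frequency example, with invented array geometry, source angles, and noise level (and assuming NumPy is available), only shows the core noise-subspace scan.

```python
import numpy as np

def music_doa(X, n_sources, d=0.5):
    """Narrowband MUSIC for a uniform linear array.
    X: (sensors, snapshots) complex baseband data; d: spacing in wavelengths."""
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]           # spatial covariance estimate
    w, V = np.linalg.eigh(R)                  # eigenvalues ascending
    En = V[:, : m - n_sources]                # noise subspace
    grid = np.arange(-90.0, 90.5, 0.5)
    k = np.arange(m)[:, None]
    A = np.exp(2j * np.pi * d * k * np.sin(np.deg2rad(grid)))   # steering matrix
    P = 1.0 / np.linalg.norm(En.conj().T @ A, axis=0) ** 2      # pseudospectrum
    peaks = [i for i in range(1, len(grid) - 1)
             if P[i] > P[i - 1] and P[i] > P[i + 1]]            # local maxima
    peaks.sort(key=lambda i: P[i], reverse=True)
    return sorted(float(grid[i]) for i in peaks[:n_sources])

# Synthetic scene: 8 sensors, two sources at -20 and +30 degrees, mild noise.
rng = np.random.default_rng(0)
m, n = 8, 400
true_doas = np.array([-20.0, 30.0])
k = np.arange(m)[:, None]
steer = np.exp(2j * np.pi * 0.5 * k * np.sin(np.deg2rad(true_doas)))
S = (rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n)))
est = music_doa(steer @ S + noise, 2)
```

In the broadband audio setting, this scan would be repeated per STFT frequency bin and the per-bin estimates merged, as the abstract describes.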

  18. A PC-based telemetry system for acquiring and reducing data from multiple PCM streams

    NASA Astrophysics Data System (ADS)

    Simms, D. A.; Butterfield, C. P.

    1991-07-01

    The Solar Energy Research Institute's (SERI) Wind Research Program is using Pulse Code Modulation (PCM) Telemetry Data-Acquisition Systems to study horizontal-axis wind turbines. Many PCM systems are combined for use in test installations that require accurate measurements from a variety of different locations. SERI has found them ideal for data-acquisition from multiple wind turbines and meteorological towers in wind parks. A major problem has been in providing the capability to quickly combine and examine incoming data from multiple PCM sources in the field. To solve this problem, SERI has developed a low-cost PC-based PCM Telemetry Data-Reduction System (PC-PCM System) to facilitate quick, in-the-field multiple-channel data analysis. The PC-PCM System consists of two basic components. First, PC-compatible hardware boards are used to decode and combine multiple PCM data streams. Up to four hardware boards can be installed in a single PC, which provides the capability to combine data from four PCM streams directly to PC disk or memory. Each stream can have up to 62 data channels. Second, a software package written for use under DOS was developed to simplify data-acquisition control and management. The software, called the Quick-Look Data Management Program, provides a quick, easy-to-use interface between the PC and multiple PCM data streams. The Quick-Look Data Management Program is a comprehensive menu-driven package used to organize, acquire, process, and display information from incoming PCM data streams. The paper describes both hardware and software aspects of the SERI PC-PCM system, concentrating on features that make it useful in an experiment test environment to quickly examine and verify incoming data from multiple PCM streams. Also discussed are problems and techniques associated with PC-based telemetry data-acquisition, processing, and real-time display.
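The decode-and-combine step the PC-PCM boards perform can be sketched in software. The frame layout below is purely illustrative (the abstract does not specify SERI's format): a 16-bit sync word followed by 62 16-bit channel words per stream, with 0xEB90 used as a conventional telemetry sync pattern.

```python
import struct

SYNC = 0xEB90   # illustrative 16-bit frame sync word
CHANNELS = 62   # data channels per PCM stream, per the abstract

def decommutate(frame_bytes):
    """Unpack one PCM minor frame: sync word then 62 big-endian 16-bit words."""
    words = struct.unpack(f">{CHANNELS + 1}H", frame_bytes)
    if words[0] != SYNC:
        raise ValueError("lost frame sync")
    return list(words[1:])

def merge_streams(frames):
    """Combine simultaneous frames from multiple PCM streams into one record,
    as the PC-PCM hardware does for up to four streams."""
    record = []
    for f in frames:
        record.extend(decommutate(f))
    return record

# Build a synthetic frame and merge two identical streams.
frame = struct.pack(f">{CHANNELS + 1}H", SYNC, *range(CHANNELS))
merged = merge_streams([frame, frame])
```

A quick-look display layer, like the Quick-Look Data Management Program, would then map channel indices in the merged record to engineering units for plotting.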

  19. Through the Looking Glass: The Multiple Layers of Multimedia.

    ERIC Educational Resources Information Center

    D'Ignazio, Fred

    1990-01-01

    Describes possible future uses of multimedia computers for instructional applications. Highlights include databases; publishing; telecommunications; computers and videocassette recorders (VCRs); audio and video digitizing; video overlay, or genlock; still-image video; videodiscs and CD-ROM; and hypermedia. (LRW)

  20. Telearch - Integrated visual simulation environment for collaborative virtual archaeology.

    NASA Astrophysics Data System (ADS)

    Kurillo, Gregorij; Forte, Maurizio

    Archaeologists collect vast amounts of digital data around the world; however, they lack tools for integration and collaborative interaction to support the reconstruction and interpretation process. The TeleArch software aims to integrate different data sources and provide real-time interaction tools for remote collaboration of geographically distributed scholars inside a shared virtual environment. The framework also includes audio, 2D, and 3D video streaming technology to facilitate the remote presence of users. In this paper, we present several experimental case studies to demonstrate integration and interaction with 3D models and geographical information system (GIS) data in this collaborative environment.

  1. The role of laryngoscopy in the diagnosis of spasmodic dysphonia.

    PubMed

    Daraei, Pedram; Villari, Craig R; Rubin, Adam D; Hillel, Alexander T; Hapner, Edie R; Klein, Adam M; Johns, Michael M

    2014-03-01

    Spasmodic dysphonia (SD) can be difficult to diagnose, and patients often see multiple physicians for many years before diagnosis. Improving the speed of diagnosis for individuals with SD may decrease the time to treatment and improve patient quality of life more quickly. To assess whether the diagnosis of SD can be accurately predicted through auditory cues alone without the assistance of visual cues offered by laryngoscopic examination. Single-masked, case-control study at a specialized referral center that included patients who underwent laryngoscopic examination as part of a multidisciplinary workup for dysphonia. Twenty-two patients were selected in total: 10 with SD, 5 with vocal tremor, and 7 controls without SD or vocal tremor. The laryngoscopic examination was recorded, deidentified, and edited to make 3 media clips for each patient: video alone, audio alone, and combined video and audio. These clips were randomized and presented to 3 fellowship-trained laryngologist raters (A.D.R., A.T.H., and A.M.K.), who established the most probable diagnosis for each clip. Intrarater and interrater reliability were evaluated using repeat clips incorporated in the presentations. We measured diagnostic accuracy for video-only, audio-only, and combined multimedia clips. These measures were established before data collection. Data analysis was accomplished with analysis of variance and Tukey honestly significant differences. Of patients with SD, diagnostic accuracy was 10%, 73%, and 73% for video-only, audio-only, and combined, respectively (P < .001, df = 2). Of patients with vocal tremor, diagnostic accuracy was 93%, 73%, and 100% for video-only, audio-only, and combined, respectively (P = .05, df = 2). Of the controls, diagnostic accuracy was 81%, 19%, and 62% for video-only, audio-only, and combined, respectively (P < .001, df = 2). The diagnosis of SD during examination is based primarily on auditory cues. 
Viewing combined audio and video clips afforded no change in diagnostic accuracy compared with audio alone. Laryngoscopy serves an important role in the diagnosis of SD by excluding other pathologic causes and identifying vocal tremor.

  2. Content-based audio authentication using a hierarchical patchwork watermark embedding

    NASA Astrophysics Data System (ADS)

    Gulbis, Michael; Müller, Erika

    2010-05-01

    Content-based audio authentication watermarking techniques extract perceptually relevant audio features, which are robustly embedded into the audio file to be protected. Manipulations of the audio file are detected on the basis of changes between the originally embedded feature information and the features extracted anew during verification. The main challenges of content-based watermarking are, on the one hand, the identification of a suitable audio feature to distinguish between content-preserving and malicious manipulations and, on the other hand, the development of a watermark that is robust against content-preserving modifications and able to carry the whole authentication information. The payload requirements are significantly higher compared to transaction watermarking or copyright protection. Finally, the watermark embedding should not influence the feature extraction, to avoid false alarms. Current systems still lack a sufficient alignment of watermarking algorithm and feature extraction. In previous work we developed a content-based audio authentication watermarking approach. The feature is based on changes in the DCT domain over time. A watermark based on a patchwork algorithm was used to embed multiple one-bit watermarks. The embedding process uses the feature domain without inflicting distortions on the feature. The watermark payload is limited by the feature extraction, more precisely the critical bands. The payload is inversely proportional to the segment duration of the audio file segmentation. Transparency behavior was analyzed as a function of segment size, and thus of watermark payload. At a segment duration of about 20 ms the transparency shows an optimum (measured in units of Objective Difference Grade). Transparency and robustness fall off rapidly for working points outside this region, which makes those working points unsuitable for gaining the further payload needed to embed the whole authentication information. 
In this paper we present a hierarchical extension of the watermark method to overcome the limitations imposed by the feature extraction. The approach applies the patchwork algorithm recursively to its own patches, with a modified patch selection to ensure a better signal-to-noise ratio for the watermark embedding. Robustness was evaluated with compression (MP3, Ogg, AAC), normalization, and several attacks from the StirMark Benchmark for Audio suite. Compared at the same payload and transparency, the hierarchical approach shows improved robustness.

  3. StreamThermal: A software package for calculating thermal metrics from stream temperature data

    USGS Publications Warehouse

    Tsang, Yin-Phan; Infante, Dana M.; Stewart, Jana S.; Wang, Lizhu; Tingly, Ralph; Thornbrugh, Darren; Cooper, Arthur; Wesley, Daniel

    2016-01-01

    Improved quality and availability of continuous stream temperature data allow natural resource managers, particularly in fisheries, to understand associations between different characteristics of stream thermal regimes and stream fishes. However, there has been no convenient tool for efficiently characterizing multiple metrics that reflect stream thermal regimes from the increasing amount of data. This article describes a software program packaged as a library in R to facilitate this process. With this freely available package, users can quickly summarize metrics that describe five categories of stream thermal regimes: magnitude, variability, frequency, timing, and rate of change. The installation and usage instructions for the package, the definitions of the calculated thermal metrics, and the output format are described, along with an application showing the utility of multiple metrics. We believe this package can be widely utilized by interested stakeholders and can support further studies in fisheries.
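The five metric categories can be illustrated with a toy summary over a daily temperature series. StreamThermal itself is an R package and defines many metrics per category; the definitions below are simplified stand-ins, one per category, invented for this sketch.

```python
def thermal_metrics(daily_temp):
    """One illustrative metric per thermal-regime category, from a list of
    mean daily temperatures (one entry per day, degrees C)."""
    n = len(daily_temp)
    mean = sum(daily_temp) / n
    var = sum((t - mean) ** 2 for t in daily_temp) / n
    rates = [abs(b - a) for a, b in zip(daily_temp, daily_temp[1:])]
    return {
        "magnitude_mean": mean,                                      # magnitude
        "variability_sd": var ** 0.5,                                # variability
        "frequency_days_above_20": sum(t > 20 for t in daily_temp),  # frequency
        "timing_day_of_max": daily_temp.index(max(daily_temp)),      # timing
        "rate_of_change_max": max(rates),                            # rate of change
    }

# One week of illustrative daily means.
m = thermal_metrics([14, 16, 19, 21, 22, 20, 17])
```

A real workflow would compute such summaries per site and season and join them to fish community data, which is the association the article targets.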

  4. Discrete Event Simulation of Distributed Team Communication

    DTIC Science & Technology

    2012-03-22

    performs, and auditory information that is provided through multiple audio devices with speech response. This paper extends previous discrete event workload...2008, pg. 1) notes that “Architecture modeling furnishes abstractions for use in managing complexities, allowing engineers to visualise the proposed

  5. Evaluation of MRI acquisition workflow with lean six sigma method: case study of liver and knee examinations.

    PubMed

    Roth, Christopher J; Boll, Daniel T; Wall, Lisa K; Merkle, Elmar M

    2010-08-01

    The purpose of this investigation was to assess workflow for medical imaging studies, specifically comparing liver and knee MRI examinations by use of the Lean Six Sigma methodologic framework. The hypothesis tested was that the Lean Six Sigma framework can be used to quantify MRI workflow and to identify sources of inefficiency to target for sequence and protocol improvement. Audio-video interleave streams representing individual acquisitions were obtained with graphic user interface screen capture software in the examinations of 10 outpatients undergoing MRI of the liver and 10 outpatients undergoing MRI of the knee. With Lean Six Sigma methods, the audio-video streams were dissected into value-added time (true image data acquisition periods), business value-added time (time spent that provides no direct patient benefit but is requisite in the current system), and non-value-added time (scanner inactivity while awaiting manual input). For overall MRI table time, value-added time was 43.5% (range, 39.7-48.3%) of the time for liver examinations and 89.9% (range, 87.4-93.6%) for knee examinations. Business value-added time was 16.3% of the table time for the liver and 4.3% of the table time for the knee examinations. Non-value-added time was 40.2% of the overall table time for the liver and 5.8% for the knee examinations. Liver MRI examinations consume statistically significantly more non-value-added and business value-added times than do knee examinations, primarily because of respiratory command management and contrast administration. Workflow analyses and accepted inefficiency reduction frameworks can be applied with use of a graphic user interface screen capture program.
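The value-added / business value-added / non-value-added split reported above is straightforward time accounting over the classified acquisition intervals. A sketch, with invented interval durations (only the category arithmetic mirrors the study):

```python
def workflow_breakdown(intervals):
    """Percent of table time in each Lean Six Sigma category.
    intervals: list of (seconds, category), category one of
    'VA' (value-added), 'BVA' (business value-added), 'NVA' (non-value-added)."""
    total = sum(sec for sec, _ in intervals)
    pct = {c: 0.0 for c in ("VA", "BVA", "NVA")}
    for sec, cat in intervals:
        pct[cat] += 100.0 * sec / total
    return pct

# Illustrative liver-MRI-like session (durations invented for the sketch).
liver = [(300, "VA"), (120, "BVA"), (90, "NVA"),
         (135, "VA"), (75, "NVA"), (280, "NVA")]
pct = workflow_breakdown(liver)
```

Applied to intervals labeled from the screen-capture recordings, this yields percentages like the study's 43.5% value-added time for liver examinations.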

  6. Determination of the duty cycle of WLAN for realistic radio frequency electromagnetic field exposure assessment.

    PubMed

    Joseph, Wout; Pareit, Daan; Vermeeren, Günter; Naudts, Dries; Verloock, Leen; Martens, Luc; Moerman, Ingrid

    2013-01-01

    Wireless Local Area Networks (WLANs) are commonly deployed in various environments. WLAN data packets are not transmitted continuously, yet worst-case exposure of WLAN is often assessed by assuming 100% activity, leading to large overestimations. Actual duty cycles of WLAN are thus important for the time-averaging of exposure when checking compliance with international guidelines on limiting adverse health effects. In this paper, duty cycles of WLAN using Wi-Fi technology are determined for large-scale exposure assessment at 179 locations in different environments and for different activities (file transfer, video streaming, audio, surfing the internet, etc.). The median duty cycle equals 1.4% and the 95th percentile is 10.4% (standard deviation SD = 6.4%). The largest duty cycles are observed in urban and industrial environments. For actual applications, the theoretical upper limit for the WLAN duty cycle is 69.8% and 94.7% for the maximum and minimum physical data rates, respectively. For lower data rates, higher duty cycles will occur. Although counterintuitive at first sight, poor WLAN connections result in higher possible exposures. File transfer at the maximum data rate results in median duty cycles of 47.6% (SD = 16%), while it results in median values of 91.5% (SD = 18%) at the minimum data rate. Surfing and audio streaming use the wireless medium less intensively and therefore have median duty cycles lower than 3.2% (SD = 0.5-7.5%). For a specific example, overestimations up to a factor of 8 for electric fields occur when 100% activity is assumed instead of realistic duty cycles. Copyright © 2012 Elsevier Ltd. All rights reserved.
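The factor-of-8 figure can be reproduced from the reported median duty cycle: transmitted power averages with the duty cycle, and the electric field scales with the square root of power, so assuming 100% activity inflates the field estimate by sqrt(1/duty). A minimal sketch (the sqrt scaling is the standard power-field relation; 1.4% is the abstract's median):

```python
import math

def field_overestimation(duty_cycle):
    """Factor by which the E-field is overestimated when 100% activity is
    assumed: power averages with the duty cycle, E ~ sqrt(power)."""
    return math.sqrt(1.0 / duty_cycle)

# Median measured WLAN duty cycle from the study: 1.4%.
factor = field_overestimation(0.014)
```

The result is about 8.5, consistent with the "up to a factor 8" overestimation the abstract reports.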

  7. Characterizing multiple timescales of stream and storage zone interaction that affect solute fate and transport in streams

    USGS Publications Warehouse

    Choi, Jungyill; Harvey, Judson W.; Conklin, Martha H.

    2000-01-01

    The fate of contaminants in streams and rivers is affected by exchange and biogeochemical transformation in slowly moving or stagnant flow zones that interact with rapid flow in the main channel. In a typical stream, there are multiple types of slowly moving flow zones in which exchange and transformation occur, such as stagnant or recirculating surface water as well as subsurface hyporheic zones. However, most investigators use transport models with just a single storage zone in their modeling studies, which assumes that the effects of multiple storage zones can be lumped together. Our study addressed the following question: Can a single‐storage zone model reliably characterize the effects of physical retention and biogeochemical reactions in multiple storage zones? We extended an existing stream transport model with a single storage zone to include a second storage zone. With the extended model we generated 500 data sets representing transport of nonreactive and reactive solutes in stream systems that have two different types of storage zones with variable hydrologic conditions. The one storage zone model was tested by optimizing the lumped storage parameters to achieve a best fit for each of the generated data sets. Multiple storage processes were categorized as possessing I, additive; II, competitive; or III, dominant storage zone characteristics. The classification was based on the goodness of fit of generated data sets, the degree of similarity in mean retention time of the two storage zones, and the relative distributions of exchange flux and storage capacity between the two storage zones. For most cases (>90%) the one storage zone model described either the effect of the sum of multiple storage processes (category I) or the dominant storage process (category III). 
Failure of the one storage zone model occurred mainly for category II, that is, when one of the storage zones had a much longer mean retention time (ts ratio > 5.0) and when the dominance of storage capacity and exchange flux occurred in different storage zones. We also used the one storage zone model to estimate a “single” lumped rate constant representing the net removal of a solute by biogeochemical reactions in multiple storage zones. For most cases the lumped rate constant that was optimized by one storage zone modeling estimated the flux‐weighted rate constant for multiple storage zones. Our results explain how the relative hydrologic properties of multiple storage zones (retention time, storage capacity, exchange flux, and biogeochemical reaction rate constant) affect the reliability of lumped parameters determined by a one storage zone transport model. We conclude that stream transport models with a single storage compartment will in most cases reliably characterize the dominant physical processes of solute retention and biogeochemical reactions in streams with multiple storage zones.

  8. Algorithms for highway-speed acoustic impact-echo evaluation of concrete bridge decks

    NASA Astrophysics Data System (ADS)

    Mazzeo, Brian A.; Guthrie, W. Spencer

    2018-04-01

    A new acoustic impact-echo testing device has been developed for detecting and mapping delaminations in concrete bridge decks at highway speeds. The apparatus produces nearly continuous acoustic excitation of concrete bridge decks through rolling mats of chains that are placed around six wheels mounted to a hinged trailer. The wheels approximately span the width of a traffic lane, and the ability to remotely lower and raise the apparatus using a winch system allows continuous data collection without stationary traffic control or exposure of personnel to traffic. Microphones near the wheels are used to record the acoustic response of the bridge deck during testing. In conjunction with the development of this new apparatus, advances in the algorithms required for data analysis were needed. This paper describes the general framework of the algorithms developed for converting differential global positioning system data and multi-channel audio data into maps that can be used in support of engineering decisions about bridge deck maintenance, rehabilitation, and replacement (MR&R). Acquisition of position and audio data is coordinated on a laptop computer through a custom graphical user interface. All of the streams of data are synchronized with the universal computer time so that audio data can be associated with interpolated position information through data post-processing. The audio segments are individually processed according to particular detection algorithms that can adapt to variations in microphone sensitivity or particular chain excitations. Features that are greater than a predetermined threshold, which is held constant throughout the analysis, are then subjected to further analysis and included in a map that shows the results of the testing. Maps of data collected on a bridge deck using the new acoustic impact-echo testing device at different speeds ranging from approximately 10 km/h to 55 km/h indicate that the collected data are reasonably repeatable. 
Use of the new acoustic impact-echo testing device is expected to enable more informed decisions about MR&R of concrete bridge decks.
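
    The association of audio segments with interpolated position information described above can be sketched as linear interpolation between timestamped GPS fixes, since both streams share the computer clock. This is an illustrative reading of the post-processing step, not the authors' implementation; all names are hypothetical:

```python
import bisect

def interpolate_position(gps_times, gps_positions, audio_time):
    """Linearly interpolate a position (e.g. distance along the deck)
    for an audio timestamp, given GPS fixes at coarser intervals.

    gps_times must be sorted ascending; all times are on one clock.
    Timestamps outside the GPS record clamp to the nearest fix.
    """
    i = bisect.bisect_right(gps_times, audio_time)
    if i == 0:
        return gps_positions[0]
    if i == len(gps_times):
        return gps_positions[-1]
    t0, t1 = gps_times[i - 1], gps_times[i]
    p0, p1 = gps_positions[i - 1], gps_positions[i]
    frac = (audio_time - t0) / (t1 - t0)
    return p0 + frac * (p1 - p0)

# GPS fixes at 1 s intervals; an audio segment recorded at t = 2.5 s
print(interpolate_position([0, 1, 2, 3], [0.0, 4.0, 8.0, 12.0], 2.5))  # 10.0
```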

  9. Use of multiple dispersal pathways facilitates amphibian persistence in stream networks.

    PubMed

    Campbell Grant, Evan H; Nichols, James D; Lowe, Winsor H; Fagan, William F

    2010-04-13

    Although populations of amphibians are declining worldwide, there is no evidence that salamanders occupying small streams are experiencing enigmatic declines, and populations of these species seem stable. Theory predicts that dispersal through multiple pathways can stabilize populations, preventing extinction in habitat networks. However, empirical data to support this prediction are absent for most species, especially those at risk of decline. Our mark-recapture study of stream salamanders reveals both a strong upstream bias in dispersal and a surprisingly high rate of overland dispersal to adjacent headwater streams. This evidence of route-dependent variation in dispersal rates suggests a spatial mechanism for population stability in headwater-stream salamanders. Our results link the movement behavior of stream salamanders to network topology, and they underscore the importance of identifying and protecting critical dispersal pathways when addressing region-wide population declines.

  10. Use of multiple dispersal pathways facilitates amphibian persistence in stream networks

    USGS Publications Warehouse

    Campbell Grant, E.H.; Nichols, J.D.; Lowe, W.H.; Fagan, W.F.

    2010-01-01

    Although populations of amphibians are declining worldwide, there is no evidence that salamanders occupying small streams are experiencing enigmatic declines, and populations of these species seem stable. Theory predicts that dispersal through multiple pathways can stabilize populations, preventing extinction in habitat networks. However, empirical data to support this prediction are absent for most species, especially those at risk of decline. Our mark-recapture study of stream salamanders reveals both a strong upstream bias in dispersal and a surprisingly high rate of overland dispersal to adjacent headwater streams. This evidence of route-dependent variation in dispersal rates suggests a spatial mechanism for population stability in headwater-stream salamanders. Our results link the movement behavior of stream salamanders to network topology, and they underscore the importance of identifying and protecting critical dispersal pathways when addressing region-wide population declines.

  11. Use of multiple dispersal pathways facilitates amphibian persistence in stream networks

    PubMed Central

    Campbell Grant, Evan H.; Nichols, James D.; Lowe, Winsor H.; Fagan, William F.

    2010-01-01

    Although populations of amphibians are declining worldwide, there is no evidence that salamanders occupying small streams are experiencing enigmatic declines, and populations of these species seem stable. Theory predicts that dispersal through multiple pathways can stabilize populations, preventing extinction in habitat networks. However, empirical data to support this prediction are absent for most species, especially those at risk of decline. Our mark-recapture study of stream salamanders reveals both a strong upstream bias in dispersal and a surprisingly high rate of overland dispersal to adjacent headwater streams. This evidence of route-dependent variation in dispersal rates suggests a spatial mechanism for population stability in headwater-stream salamanders. Our results link the movement behavior of stream salamanders to network topology, and they underscore the importance of identifying and protecting critical dispersal pathways when addressing region-wide population declines. PMID:20351269

  12. A novel multiple description scalable coding scheme for mobile wireless video transmission

    NASA Astrophysics Data System (ADS)

    Zheng, Haifeng; Yu, Lun; Chen, Chang Wen

    2005-03-01

    In this paper we propose a novel multiple description scalable coding (MDSC) scheme based on the in-band motion compensated temporal filtering (IBMCTF) technique in order to achieve high video coding performance and robust video transmission. The input video sequence is first split into equal-sized groups of frames (GOFs). Within a GOF, each frame is hierarchically decomposed by the discrete wavelet transform. Since there is a direct relationship between wavelet coefficients and what they represent in the image content after wavelet decomposition, we are able to reorganize the spatial orientation trees to generate multiple bit-streams and employ the SPIHT algorithm to achieve high coding efficiency. We show that multiple bit-stream transmission is very effective in combating error propagation in both Internet video streaming and mobile wireless video. Furthermore, we adopt the IBMCTF scheme to remove redundancy between frames along the temporal direction using motion compensated temporal filtering, so that high coding performance and flexible scalability can be provided. To make the compressed video resilient to channel errors and to guarantee robust transmission over mobile wireless channels, we add redundancy to each bit-stream and apply an error concealment strategy for lost motion vectors. Unlike traditional multiple description schemes, the integration of these techniques enables us to generate more than two bit-streams, which may be more appropriate for multiple-antenna transmission of compressed video. Simulation results on standard video sequences show that the proposed scheme provides a flexible tradeoff between coding efficiency and error resilience.

  13. A secure cluster-based multipath routing protocol for WMSNs.

    PubMed

    Almalkawi, Islam T; Zapata, Manel Guerrero; Al-Karaki, Jamal N

    2011-01-01

    The new characteristics of Wireless Multimedia Sensor Networks (WMSNs) and the design issues brought by handling different traffic classes of multimedia content (video streams, audio, and still images) as well as scalar data over the network make the routing protocols proposed for typical WSNs not directly applicable to WMSNs. Handling real-time multimedia data requires both energy efficiency and QoS assurance in order to ensure efficient use of sensor resources and correct delivery of the collected information. In this paper, we propose a Secure Cluster-based Multipath Routing protocol for WMSNs, SCMR, to satisfy the requirements of delivering different data types and supporting high-data-rate multimedia traffic. SCMR exploits the hierarchical structure of powerful cluster heads and optimized multiple paths to support timely and reliable high-data-rate multimedia communication with minimum energy dissipation. We also present a lightweight distributed security mechanism for key management in order to secure the communication between sensor nodes and protect the network against different types of attacks. Performance evaluation from simulation results demonstrates a significant performance improvement compared with existing protocols (which do not provide any kind of security feature) in terms of average end-to-end delay, network throughput, packet delivery ratio, and energy consumption.

  14. Selective encryption for H.264/AVC video coding

    NASA Astrophysics Data System (ADS)

    Shi, Tuo; King, Brian; Salama, Paul

    2006-02-01

    Due to the ease with which digital data can be manipulated and due to the ongoing advancements that have brought us closer to pervasive computing, the secure delivery of video and images has become a challenging problem. Despite the advantages and opportunities that digital video provides, illegal copying and distribution as well as plagiarism of digital audio, images, and video remain ongoing problems. In this paper we describe two techniques for securing H.264 coded video streams. The first technique, SEH264Algorithm1, groups the data into the following blocks: (1) a block that contains the sequence parameter set and the picture parameter set, (2) a block containing a compressed intra coded frame, (3) a block containing the slice header of a P slice, all the headers of the macroblocks within the same P slice, and all the luma and chroma DC coefficients belonging to all the macroblocks within the same slice, (4) a block containing all the AC coefficients, and (5) a block containing all the motion vectors. The first three blocks are encrypted whereas the last two are not. The second method, SEH264Algorithm2, relies on the use of multiple slices per coded frame. The algorithm searches the compressed video sequence for start codes (0x000001) and then encrypts the next N bits of data.
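
    The start-code scan in SEH264Algorithm2 can be sketched as follows. The paper does not specify the cipher, so the XOR keystream below is a stand-in assumption, and bytes are used in place of bits for simplicity:

```python
def se_h264_algorithm2(bitstream: bytes, n: int, keystream) -> bytes:
    """Sketch of the second method: scan for 3-byte start codes
    (0x000001) and encrypt the N bytes that follow each one.

    `keystream(length)` stands in for a real stream cipher; it is an
    assumption, not part of the paper. Start codes themselves are left
    in the clear so the stream remains parseable.
    """
    out = bytearray(bitstream)
    i = 0
    while i <= len(out) - 3:
        if out[i] == 0 and out[i + 1] == 0 and out[i + 2] == 1:
            start = i + 3
            ks = keystream(n)
            for j in range(min(n, len(out) - start)):
                out[start + j] ^= ks[j]
            i = start + n  # skip past the encrypted region
        else:
            i += 1
    return bytes(out)

# Toy keystream for illustration only -- NOT cryptographically secure.
demo = se_h264_algorithm2(b"\x00\x00\x01\xaa\xbb\xcc", 2, lambda n: b"\xff" * n)
print(demo.hex())  # start code preserved; the next 2 bytes are XORed with 0xff
```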

  15. A Secure Cluster-Based Multipath Routing Protocol for WMSNs

    PubMed Central

    Almalkawi, Islam T.; Zapata, Manel Guerrero; Al-Karaki, Jamal N.

    2011-01-01

    The new characteristics of Wireless Multimedia Sensor Network (WMSN) and its design issues brought by handling different traffic classes of multimedia content (video streams, audio, and still images) as well as scalar data over the network, make the proposed routing protocols for typical WSNs not directly applicable for WMSNs. Handling real-time multimedia data requires both energy efficiency and QoS assurance in order to ensure efficient utility of different capabilities of sensor resources and correct delivery of collected information. In this paper, we propose a Secure Cluster-based Multipath Routing protocol for WMSNs, SCMR, to satisfy the requirements of delivering different data types and support high data rate multimedia traffic. SCMR exploits the hierarchical structure of powerful cluster heads and the optimized multiple paths to support timeliness and reliable high data rate multimedia communication with minimum energy dissipation. Also, we present a light-weight distributed security mechanism of key management in order to secure the communication between sensor nodes and protect the network against different types of attacks. Performance evaluation from simulation results demonstrates a significant performance improvement comparing with existing protocols (which do not even provide any kind of security feature) in terms of average end-to-end delay, network throughput, packet delivery ratio, and energy consumption. PMID:22163854

  16. Analysis of threats to research validity introduced by audio recording clinic visits: Selection bias, Hawthorne effect, both, or neither?

    PubMed Central

    Henry, Stephen G.; Jerant, Anthony; Iosif, Ana-Maria; Feldman, Mitchell D.; Cipri, Camille; Kravitz, Richard L.

    2015-01-01

    Objective: To identify factors associated with participant consent to record visits and to estimate effects of recording on patient-clinician interactions. Methods: Secondary analysis of data from a randomized trial studying communication about depression; participants were asked for optional consent to audio record study visits. Multiple logistic regression was used to model the likelihood of patient and clinician consent. Multivariable regression and propensity score analyses were used to estimate effects of audio recording on 6 dependent variables: discussion of depressive symptoms, preventive health, and depression diagnosis; depression treatment recommendations; visit length; and visit difficulty. Results: Of 867 visits involving 135 primary care clinicians, 39% were recorded. For clinicians, only working in academic settings (P=0.003) and having worked longer at their current practice (P=0.02) were associated with increased likelihood of consent. For patients, white race (P=0.002) and diabetes (P=0.03) were associated with increased likelihood of consent. Neither multivariable regression nor propensity score analyses revealed any significant effects of recording on the variables examined. Conclusion: Few clinician or patient characteristics were significantly associated with consent. Audio recording had no significant effect on any dependent variables. Practice Implications: The benefits of recording clinic visits likely outweigh the risks of bias in this setting. PMID:25837372

  17. Fast algorithm for automatically computing Strahler stream order

    USGS Publications Warehouse

    Lanfear, Kenneth J.

    1990-01-01

    An efficient algorithm was developed to determine Strahler stream order for segments of stream networks represented in a Geographic Information System (GIS). The algorithm correctly assigns Strahler stream order in topologically complex situations such as braided streams and multiple drainage outlets. Execution time varies nearly linearly with the number of stream segments in the network. This technique is expected to be particularly useful for studying the topology of dense stream networks derived from digital elevation model data.
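
    The Strahler ordering rule the algorithm implements can be sketched recursively: headwater segments have order 1, and a downstream segment's order increases only when two or more inflows share the highest order. This is an illustrative version, not the paper's GIS-optimized algorithm:

```python
def strahler_order(upstream):
    """Compute the Strahler order of each stream segment.

    `upstream` maps each segment id to the list of segments that flow
    into it (empty list for headwater segments).
    """
    order = {}

    def visit(seg):
        if seg in order:
            return order[seg]
        inflows = [visit(u) for u in upstream[seg]]
        if not inflows:                      # headwater: order 1
            o = 1
        else:
            top = max(inflows)
            # Two (or more) inflows of the highest order raise it by one.
            o = top + 1 if inflows.count(top) >= 2 else top
        order[seg] = o
        return o

    for seg in upstream:
        visit(seg)
    return order

# Two first-order headwaters join to form a second-order segment,
# which a third headwater then joins without raising the order.
net = {"a": [], "b": [], "c": ["a", "b"], "d": [], "e": ["c", "d"]}
print(strahler_order(net))  # {'a': 1, 'b': 1, 'c': 2, 'd': 1, 'e': 2}
```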

  18. Multimodal integration of micro-Doppler sonar and auditory signals for behavior classification with convolutional networks.

    PubMed

    Dura-Bernal, Salvador; Garreau, Guillaume; Georgiou, Julius; Andreou, Andreas G; Denham, Susan L; Wennekers, Thomas

    2013-10-01

    The ability to recognize the behavior of individuals is of great interest in the general field of safety (e.g. building security, crowd control, transport analysis, independent living for the elderly). Here we report a new real-time acoustic system for human action and behavior recognition that integrates passive audio and active micro-Doppler sonar signatures over multiple time scales. The system architecture is based on a six-layer convolutional neural network, trained and evaluated using a dataset of 10 subjects performing seven different behaviors. Probabilistic combination of system output through time for each modality separately yields 94% (passive audio) and 91% (micro-Doppler sonar) correct behavior classification; probabilistic multimodal integration increases classification performance to 98%. This study supports the efficacy of micro-Doppler sonar systems in characterizing human actions, which can then be efficiently classified using ConvNets. It also demonstrates that the integration of multiple sources of acoustic information can significantly improve the system's performance.
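
    One common way to realize the "probabilistic combination of system output through time" is to sum per-frame log-probabilities before picking the most likely class; the paper's exact rule may differ, and the class names below are invented for illustration:

```python
import math

def combine_probabilities(frame_probs):
    """Combine per-frame class probabilities over time (and, in the same
    way, across modalities) by summing log-probabilities, then return
    the highest-scoring class.

    frame_probs: list of per-frame dicts {class_name: probability}.
    """
    classes = frame_probs[0].keys()
    scores = {c: sum(math.log(p[c]) for p in frame_probs) for c in classes}
    return max(scores, key=scores.get)

# Individual frames are noisy, but the evidence accumulates toward 'walking'.
frames = [
    {"walking": 0.6, "running": 0.4},
    {"walking": 0.4, "running": 0.6},
    {"walking": 0.7, "running": 0.3},
]
print(combine_probabilities(frames))  # walking
```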

  19. Acquisition and management of continuous data streams for crop water management

    USDA-ARS?s Scientific Manuscript database

    Wireless sensor network systems for decision support in crop water management offer many advantages including larger spatial coverage and multiple types of data input. However, collection and management of multiple and continuous data streams for near real-time post analysis can be problematic. Thi...

  20. Podcasting the Anthropocene as a model for student and faculty science communication

    NASA Astrophysics Data System (ADS)

    Osborne, M. C.; Traer, M. M.; Hayden, T.

    2015-12-01

    Generation Anthropocene is an ongoing audio podcast that is produced at Stanford and is currently hosted by Smithsonian.com. This experimental program involves teaching a project-based course in which students pitch, collaborate on, and produce audio stories under the guidance of the instructors. Stories are then published online and are freely available to the general public via multiple outlets, including iTunes, genanthro.com, and Smithsonian.com. Here we describe how the program came into existence, how it currently operates, and how we view the show as a model for integrating curriculum with outreach. We also present data about our listenership and explore the potential for the project to expand to other academic institutions.

  1. Multiple bio-monitoring system using visible light for electromagnetic-wave free indoor healthcare

    NASA Astrophysics Data System (ADS)

    An, Jinyoung; Pham, Ngoc Quan; Chung, Wan-Young

    2017-12-01

    In this paper, a multiple biomedical data transmission system based on visible light communication (VLC) is proposed for electromagnetic-wave-free indoor healthcare. VLC technology has emerged as an alternative to radio-frequency (RF) wireless systems due to its various merits, e.g., ubiquity, power efficiency, absence of RF radiation, and security. With VLC, critical biomedical signals, including electrocardiography (ECG), can be transmitted in places where RF radiation is restricted. This potential advantage of VLC could save more lives in emergency situations. A time hopping (TH) scheme is employed to transfer multiple medical data streams in real time with a simple system design. Multiple data streams are transmitted using identical color LEDs and received by an optical detector. The received data streams are demodulated and rearranged using a TH-based demodulator. The medical data are then monitored and managed to provide the necessary medical care for each patient.

  2. Code division multiple access signaling for modulated reflector technology

    DOEpatents

    Briles, Scott D [Los Alamos, NM

    2012-05-01

    A method and apparatus for utilizing code division multiple access in modulated reflectance transmissions comprises the steps of generating a phase-modulated reflectance data bit stream; modifying the modulated reflectance data bit stream; providing the modified modulated reflectance data bit stream to a switch that connects an antenna to an infinite impedance in the event a "+1" is to be sent, or connects the antenna to ground in the event a "0" or a "-1" is to be sent.

  3. Audio spectrum and sound pressure levels vary between pulse oximeters.

    PubMed

    Chandra, Deven; Tessler, Michael J; Usher, John

    2006-01-01

    The variable-pitch pulse oximeter is an important intraoperative patient monitor. Our ability to hear its auditory signal depends on its acoustical properties and our hearing. This study quantitatively describes the audio spectrum and sound pressure levels of the monitoring tones produced by five variable-pitch pulse oximeters. We compared the Datex-Ohmeda Capnomac Ultima, Hewlett-Packard M1166A, Datex-Engstrom AS/3, Ohmeda Biox 3700, and Datex-Ohmeda 3800 oximeters. Three machines of each of the five models were assessed for sound pressure levels (using a precision sound level meter) and audio spectrum (using a Hann-windowed fast Fourier transform of three beats at saturations of 99%, 90%, and 85%). The widest range of sound pressure levels was produced by the Hewlett-Packard M1166A (46.5 +/- 1.74 dB to 76.9 +/- 2.77 dB). The loudest model was the Datex-Engstrom AS/3 (89.2 +/- 5.36 dB). Three oximeters, when set to the lower ranges of their volume settings, were indistinguishable from background operating room noise. Each model produced sounds with different audio spectra. Although each model produced a fundamental tone with multiple harmonic overtones, the number of harmonics varied with each model, from three harmonic tones on the Hewlett-Packard M1166A to 12 on the Ohmeda Biox 3700. There were variations between models, and between individual machines of the same model, with respect to the fundamental tone associated with a given saturation. There is considerable variance in the sound pressure and audio spectrum of commercially available pulse oximeters. Further studies are warranted in order to establish standards.
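
    The spectral analysis described (a Hann-windowed FFT of the recorded tones) can be sketched with a synthetic tone; the 880 Hz fundamental and single harmonic below are illustrative, not values from the study:

```python
import numpy as np

def tone_spectrum(signal, fs):
    """Hann-windowed FFT magnitude spectrum of a recorded tone.

    Windowing reduces spectral leakage, so the fundamental and its
    harmonics appear as distinct peaks.
    """
    window = np.hanning(len(signal))
    mag = np.abs(np.fft.rfft(signal * window))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs, mag

# Synthetic tone: a fundamental at 880 Hz plus one weaker harmonic at 1760 Hz
fs = 44100
t = np.arange(fs) / fs                 # 1 s of audio -> 1 Hz bin spacing
signal = np.sin(2 * np.pi * 880 * t) + 0.3 * np.sin(2 * np.pi * 1760 * t)
freqs, mag = tone_spectrum(signal, fs)
print(freqs[np.argmax(mag)])  # the strongest peak sits at the 880 Hz fundamental
```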

  4. Attention to sound improves auditory reliability in audio-tactile spatial optimal integration.

    PubMed

    Vercillo, Tiziana; Gori, Monica

    2015-01-01

    The role of attention in multisensory processing is still poorly understood. In particular, it is unclear whether directing attention toward a sensory cue dynamically reweights cue reliability during integration of multiple sensory signals. In this study, we investigated the impact of attention on combining audio-tactile signals in an optimal fashion. We used the Maximum Likelihood Estimation (MLE) model to predict audio-tactile spatial localization on the body surface. We developed a new audio-tactile device composed of several small units, each consisting of a speaker and a tactile vibrator independently controllable by external software. We tested participants in an attentional and a non-attentional condition. In the attentional experiment, participants performed a dual-task paradigm: they were required to evaluate the duration of a sound while performing an audio-tactile spatial task. Three unisensory or multisensory stimuli, conflicting or non-conflicting sounds and vibrations arranged along the horizontal axis, were presented sequentially. In the primary task, a space bisection task, participants had to evaluate the position of the second stimulus (the probe) with respect to the others (the standards). In the secondary task, they occasionally had to report changes in the duration of the second auditory stimulus. In the non-attentional task, participants performed only the primary task (space bisection). Our results showed enhanced auditory precision (and auditory weights) in the attentional condition with respect to the non-attentional control condition. The results of this study support the idea that modality-specific attention modulates multisensory integration.
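
    The MLE model referred to above combines cues weighted by their inverse variances, which predicts both the fused estimate and its (reduced) variance. A minimal sketch with made-up numbers:

```python
def mle_combine(x_a, var_a, x_t, var_t):
    """Optimal (MLE) fusion of an auditory and a tactile location estimate.

    Each cue is weighted by its inverse variance, so the fused estimate
    leans toward the more reliable cue, and the fused variance is never
    worse than that of the better single cue.
    """
    w_a = (1 / var_a) / (1 / var_a + 1 / var_t)
    w_t = 1 - w_a
    x = w_a * x_a + w_t * x_t
    var = (var_a * var_t) / (var_a + var_t)
    return x, var

# Touch four times as reliable (variance 1) as audition (variance 4):
# the fused estimate sits much closer to the tactile cue.
x, var = mle_combine(x_a=10.0, var_a=4.0, x_t=6.0, var_t=1.0)
print(round(x, 3), round(var, 3))  # 6.8 0.8
```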

  5. Audiovisual Fundamentals; Basic Equipment Operation and Simple Materials Production.

    ERIC Educational Resources Information Center

    Bullard, John R.; Mether, Calvin E.

    A guide illustrated with simple sketches explains the functions and step-by-step uses of audiovisual (AV) equipment. Principles of projection, audio, AV equipment, lettering, limited-quantity and quantity duplication, and materials preservation are outlined. Apparatus discussed include overhead, opaque, slide-filmstrip, and multiple-loading slide…

  6. Response of nutrients, biofilm, and benthic insects to salmon carcass addition.

    Treesearch

    Shannon M. Claeson; Judith L. Li; Jana E. Compton; Peter A. Bisson

    2006-01-01

    Salmon carcass addition to streams is expected to increase stream productivity at multiple trophic levels. This study examined stream nutrient (nitrogen, phosphorus, and carbon), epilithic biofilm (ash-free dry mass and chlorophyll a), leaf-litter decomposition, and macroinvertebrate (density and biomass) responses to carcass addition in three headwater streams of...

  7. Models of Tidally Induced Gas Filaments in the Magellanic Stream

    NASA Astrophysics Data System (ADS)

    Pardy, Stephen A.; D’Onghia, Elena; Fox, Andrew J.

    2018-04-01

    The Magellanic Stream and the Leading Arm of H I, which stretch from the Large and Small Magellanic Clouds (LMC and SMC) across over 200° of the Southern sky, are thought to be formed from multiple encounters between the LMC and SMC. In this scenario, most of the gas in the Stream and Leading Arm is stripped from the SMC, yet recent observations have shown a bifurcation of the Trailing Arm that reveals LMC origins for some of the gas. Absorption measurements in the Stream also reveal an order of magnitude more gas than in current tidal models. We present hydrodynamical simulations of the multiple encounters between the LMC and SMC at their first pass around the Milky Way, assuming that the Clouds were more extended and gas-rich in the past. Our models create filamentary structures of gas in the Trailing Stream from both the LMC and SMC. While the SMC trailing filament matches the observed Stream location, the LMC filament is offset. In addition, the total observed mass of the Stream in these models is underestimated by a factor of four when the ionized component is accounted for. Our results suggest that there should also be gas stripped from both the LMC and SMC in the Leading Arm, mirroring the bifurcation in the Trailing Stream. This prediction is consistent with recent measurements of spatial variation in chemical abundances in the Leading Arm, which show that gas from multiple sources is present, although its nature is still uncertain.

  8. Audio-visual speech cue combination.

    PubMed

    Arnold, Derek H; Tear, Morgan; Schindel, Ryan; Roseboom, Warrick

    2010-04-16

    Different sources of sensory information can interact, often shaping what we think we have seen or heard. This can enhance the precision of perceptual decisions relative to those made on the basis of a single source of information. From a computational perspective, there are multiple reasons why this might happen, and each predicts a different degree of enhanced precision. Relatively slight improvements can arise when perceptual decisions are made on the basis of multiple independent sensory estimates, as opposed to just one. These improvements can arise as a consequence of probability summation. Greater improvements can occur if two initially independent estimates are summated to form a single integrated code, especially if the summation is weighted in accordance with the variance associated with each independent estimate. This form of combination is often described as a Bayesian maximum likelihood estimate. Still greater improvements are possible if the two sources of information are encoded via a common physiological process. Here we show that the provision of simultaneous audio and visual speech cues can result in substantial sensitivity improvements, relative to decisions based on a single sensory modality. The magnitude of the improvements is greater than can be predicted on the basis of either a Bayesian maximum likelihood estimate or probability summation. Our data suggest that primary estimates of speech content are determined by a physiological process that takes input from both visual and auditory processing, resulting in greater sensitivity than would be possible if initially independent audio and visual estimates were formed and then subsequently combined.

  9. On the cyclic nature of perception in vision versus audition

    PubMed Central

    VanRullen, Rufin; Zoefel, Benedikt; Ilhan, Barkin

    2014-01-01

    Does our perceptual awareness consist of a continuous stream, or a discrete sequence of perceptual cycles, possibly associated with the rhythmic structure of brain activity? This has been a long-standing question in neuroscience. We review recent psychophysical and electrophysiological studies indicating that part of our visual awareness proceeds in approximately 7–13 Hz cycles rather than continuously. On the other hand, experimental attempts at applying similar tools to demonstrate the discreteness of auditory awareness have been largely unsuccessful. We argue and demonstrate experimentally that visual and auditory perception are not equally affected by temporal subsampling of their respective input streams: video sequences remain intelligible at sampling rates of two to three frames per second, whereas audio inputs lose their fine temporal structure, and thus all significance, below 20–30 samples per second. This does not mean, however, that our auditory perception must proceed continuously. Instead, we propose that audition could still involve perceptual cycles, but the periodic sampling should happen only after the stage of auditory feature extraction. In addition, although visual perceptual cycles can follow one another at a spontaneous pace largely independent of the visual input, auditory cycles may need to sample the input stream more flexibly, by adapting to the temporal structure of the auditory inputs. PMID:24639585

  10. Principals' Perceptions of Successful Leadership

    ERIC Educational Resources Information Center

    Childers, Gary L.

    2013-01-01

    The purposes of this qualitative multiple case study were to determine the catalysts and pathways that caused principals to move from managers to effective leaders. Data were collected through a series of interviews with 4 principals who were selected through a purposeful sampling procedure. The interviews were audio recorded, transcribed, and…

  11. Characteristics of Middle School Students Learning Actions in Outdoor Mathematical Activities with the Cellular Phone

    ERIC Educational Resources Information Center

    Daher, Wajeeh; Baya'a, Nimer

    2012-01-01

    Learning in the cellular phone environment enables utilizing the multiple functions of the cellular phone, such as mobility, availability, interactivity, verbal and voice communication, taking pictures or recording audio and video, measuring time and transferring information. These functions together with mathematics-designated cellular phone…

  12. Predicting macroinvertebrate MMI for geographic targeting

    EPA Science Inventory

    The US Environmental Protection Agency surveys the ecological conditions of streams across broad regions. We wish to identify specific streams in poor condition, as well as their regional extent. To identify such streams in Idaho, Oregon and Washington we built multiple regress...

  13. RESPONSE OF NUTRIENTS, BIOFILM, AND BENTHIC INSECTS TO SALMON CARCASS ADDITION

    EPA Science Inventory

    Salmon carcass addition to streams is expected to increase stream productivity at multiple trophic levels. This study examined stream nutrient (nitrogen, phosphorus, and carbon), epilithic biofilm (ash-free dry mass and chlorophyll a), leaf-litter decomposition, and macroinverte...

  14. Influence of wood on invertebrate communities in streams and rivers

    Treesearch

    Arthur Benke; J. Bruce Wallace

    2010-01-01

    Wood plays a major role in creating multiple invertebrate habitats in small streams and large rivers. In small streams, wood debris dams are instrumental in creating a step and pool profile of habitats, enhancing habitat heterogeneity, retaining organic matter, and changing current velocity. Beavers can convert sections of free-flowing streams into ponds and wetlands...

  15. Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach

    NASA Astrophysics Data System (ADS)

    Feldbauer, Christian; Kubin, Gernot; Kleijn, W. Bastiaan

    2005-12-01

    Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation, which is quantized and coded. Upon decoding, it is transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.
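
    The core idea, quantizing in an invertible, perceptually shaped domain so that a complex distortion criterion becomes a simple one, can be illustrated with μ-law companding as a stand-in for the paper's auditory model (μ-law is not the model used in the paper; it is a minimal invertible, perceptually motivated mapping):

```python
import math

MU = 255.0  # standard mu-law constant; stands in for an auditory model

def mu_law(x):
    # Invertible, perceptually motivated compression of [-1, 1].
    return math.copysign(math.log(1 + MU * abs(x)) / math.log(1 + MU), x)

def mu_law_inv(y):
    # Exact inverse of mu_law.
    return math.copysign(((1 + MU) ** abs(y) - 1) / MU, y)

def code(x, levels=32):
    # Uniform quantization is simple in the companded domain, but acts as a
    # perceptually shaped (non-uniform) quantizer back in the acoustic domain.
    q = round(mu_law(x) * levels) / levels
    return mu_law_inv(q)

x = 0.05  # a quiet sample, where perceptual shaping helps most
err_companded = abs(code(x) - x)
err_uniform = abs(round(x * 32) / 32 - x)
print(err_companded < err_uniform)  # True
```
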

  16. Converting laserdisc video to digital video: a demonstration project using brain animations.

    PubMed

    Jao, C S; Hier, D B; Brint, S U

    1995-01-01

    Interactive laserdiscs are of limited value in large group learning situations due to the expense of establishing multiple workstations. The authors implemented an alternative to laserdisc video by using indexed digital video combined with an expert system. High-quality video was captured from a laserdisc player and combined with waveform audio into an audio-video-interleave (AVI) file format in the Microsoft Video-for-Windows environment (Microsoft Corp., Seattle, WA). With the use of an expert system, a knowledge-based computer program provided random access to these indexed AVI files. The program can be played on any multimedia computer without the need for laserdiscs. This system offers a high level of interactive video without the overhead and cost of a laserdisc player.

  17. Responses of stream microbes to multiple anthropogenic stressors in a mesocosm study.

    PubMed

    Nuy, Julia K; Lange, Anja; Beermann, Arne J; Jensen, Manfred; Elbrecht, Vasco; Röhl, Oliver; Peršoh, Derek; Begerow, Dominik; Leese, Florian; Boenigk, Jens

    2018-08-15

    Stream ecosystems are affected by multiple anthropogenic stressors worldwide. Even though effects of many single stressors are comparatively well studied, the effects of multiple stressors are difficult to predict. In particular, bacteria and protists, which are responsible for the majority of ecosystem respiration and element flows, are infrequently studied with respect to multiple-stressor responses. We conducted a stream mesocosm experiment to characterize the responses of microbiota to single and multiple stressors. Two functionally important stream habitats, leaf litter and benthic phototrophic rock biofilms, were exposed to three stressors in a full factorial design: fine sediment deposition, increased chloride concentration (salinization) and reduced flow velocity. We analyzed the microbial composition in the two habitat types of the mesocosms using an amplicon sequencing approach. Community analysis on different taxonomic levels as well as principal coordinate analyses (PCoAs) based on relative abundances of operational taxonomic units (OTUs) showed treatment-specific shifts in the eukaryotic biofilm community. Analysis of variance (ANOVA) revealed that Bacillariophyta responded positively to salinity and sediment increase, while the relative read abundance of chlorophyte taxa decreased. The combined effects of multiple stressors were mainly antagonistic. Therefore, the community composition in multiply stressed environments resembled the composition of the unstressed control community in terms of OTU occurrence and relative abundances. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  18. Tracing Nitrate Contributions to Streams During Varying Flow Regimes at the Sleepers River Research Watershed, Vermont, USA

    NASA Astrophysics Data System (ADS)

    Sebestyen, S. D.; Shanley, J. B.; Boyer, E. W.; Ohte, N.; Doctor, D. H.; Kendall, C.

    2003-12-01

    Quantifying sources and transformations of nitrate in headwater catchments is fundamental to understanding the movement of nitrogen to streams. At the Sleepers River Research Watershed in northeastern Vermont (USA), we are using multiple chemical tracer and mixing model approaches to quantify sources and transport of nitrate to streams under varying flow regimes. We sampled streams, lysimeters, and wells at nested locations from the headwaters to the outlet of the 41 ha W-9 watershed under the entire range of flow regimes observed throughout 2002-2003, including baseflow and multiple events (stormflow and snowmelt). Our results suggest that nitrogen sources, and consequently stream nitrate concentrations, are rapidly regenerated during several weeks of baseflow and nitrogen is flushed from the watershed by stormflow events that follow baseflow periods. Both basic chemistry data (anions, cations, & dissolved organic carbon) and isotopic data (nitrate, dissolved organic carbon, and dissolved inorganic carbon) indicate that nitrogen source contributions vary depending upon the extent of saturation in the watershed, the initiation of shallow subsurface water inputs, and other hydrological processes. Stream nitrate concentrations typically peak with discharge and are higher on the falling than the rising limb of the hydrograph. Our data also indicate the importance of terrestrial and aquatic biogeochemical processes, in addition to hydrological connectivity in controlling how nitrate moves from the terrestrial landscape to streams. Our detailed sampling data from multiple flow regimes are helping to identify and quantify the "hot spots" and "hot moments" of biogeochemical and hydrological processes that control nitrogen fluxes in streams.

  19. Audio-Visual Speech Perception: A Developmental ERP Investigation

    ERIC Educational Resources Information Center

    Knowland, Victoria C. P.; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael S. C.

    2014-01-01

    Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language…

  20. Multipoint Multimedia Conferencing System with Group Awareness Support and Remote Management

    ERIC Educational Resources Information Center

    Osawa, Noritaka; Asai, Kikuo

    2008-01-01

    A multipoint, multimedia conferencing system called FocusShare is described that uses IPv6/IPv4 multicasting for real-time collaboration, enabling video, audio, and group awareness information to be shared. Multiple telepointers provide group awareness information and make it easy to share attention and intention. In addition to pointing with the…

  1. Joint Doctrine for Unmanned Aircraft Systems: The Air Force and the Army Hold the Key to Success

    DTIC Science & Technology

    2010-05-03

    concept, coupled with sensor technologies that provide multiple video streams to multiple ground units, delivers increased capability and capacity to...airborne surveillance” allow one UAS to collect up to ten video transmissions, sending them to ten different users on the ground. Future iterations...of this technology, dubbed Gorgon Stare, will increase to as many as 65 video streams per UAS by 2014. 31 Being able to send multiple views of an

  2. Web server for priority ordered multimedia services

    NASA Astrophysics Data System (ADS)

    Celenk, Mehmet; Godavari, Rakesh K.; Vetnes, Vermund

    2001-10-01

    In this work, our aim is to provide finer priority levels in the design of a general-purpose Web multimedia server with provisions for continuous media (CM) services. The types of services provided include reading/writing a web page, downloading/uploading an audio/video stream, navigating the Web through browsing, and interactive video teleconferencing. The selected priority encoding levels for such operations follow the order of admin read/write, hot page CM and Web multicasting, CM read, Web read, CM write and Web write. Hot pages are the most requested CM streams (e.g., the newest movies, video clips, and HDTV channels) and Web pages (e.g., portal pages of the commercial Internet search engines). Maintaining a list of these hot Web pages and CM streams in a content addressable buffer enables a server to multicast hot streams with lower latency and higher system throughput. Cold Web pages and CM streams are treated as regular Web and CM requests. Interactive CM operations such as pause (P), resume (R), fast-forward (FF), and rewind (RW) have to be executed without allocation of extra resources. The proposed multimedia server model is a part of the distributed network with load balancing schedulers. The SM is connected to an integrated disk scheduler (IDS), which supervises an allocated disk manager. The IDS follows the same priority handling as the SM, and implements a SCAN disk-scheduling method for improved disk access and higher throughput. Different disks are used for the Web and CM services in order to meet the QoS requirements of CM services. The IDS output is forwarded to an Integrated Transmission Scheduler (ITS). The ITS creates a priority-ordered buffering of the retrieved Web pages and CM data streams that are fed into an autoregressive moving average (ARMA) based traffic-shaping circuitry before being transmitted through the network.
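
    The priority ordering described above can be sketched with a heap-based request queue (the class and level names below are hypothetical illustrations, not identifiers from the paper):

```python
import heapq

# Priority levels following the paper's stated ordering
# (lower number = served first):
PRIORITY = {
    "admin_rw": 0, "hot_multicast": 1, "cm_read": 2,
    "web_read": 3, "cm_write": 4, "web_write": 5,
}

class PriorityServer:
    def __init__(self):
        self._queue = []
        self._seq = 0  # sequence number breaks ties FIFO within a level

    def submit(self, kind, request):
        heapq.heappush(self._queue, (PRIORITY[kind], self._seq, request))
        self._seq += 1

    def next_request(self):
        return heapq.heappop(self._queue)[2]

srv = PriorityServer()
srv.submit("web_read", "GET /index.html")
srv.submit("hot_multicast", "STREAM movie-42")
srv.submit("cm_read", "STREAM clip-7")
print(srv.next_request())  # STREAM movie-42 (hot multicast served first)
```
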

  3. LAND USE AND LOTIC DIATOM ASSEMBLAGES: A MULTI-SPATIAL AND TEMPORAL ASSESSMENT

    EPA Science Inventory

    We assessed the effects of land-use at multiple spatial scales (e.g., catchment, stream network, and stream reach) on periphyton from 25 wadeable streams along a land-use gradient in the Willamette River Basin, Oregon, in a dry season. Additional water chemistry samples were col...

  4. Live Aircraft Encounter Visualization at FutureFlight Central

    NASA Technical Reports Server (NTRS)

    Murphy, James R.; Chinn, Fay; Monheim, Spencer; Otto, Neil; Kato, Kenji; Archdeacon, John

    2018-01-01

    Researchers at the National Aeronautics and Space Administration (NASA) have developed an aircraft data streaming capability that can be used to visualize live aircraft in near real-time. During a joint Federal Aviation Administration (FAA)/NASA Airborne Collision Avoidance System flight series, test sorties between unmanned aircraft and manned intruder aircraft were shown in real-time at NASA Ames' FutureFlight Central tower facility as a virtual representation of the encounter. This capability leveraged existing live surveillance, video, and audio data streams distributed through a Live, Virtual, Constructive test environment, then depicted the encounter from the point of view of any aircraft in the system, showing the proximity of the other aircraft. For the demonstration, position report data were sent to the ground from on-board sensors on the unmanned aircraft. The point of view can be changed dynamically, allowing encounters to be observed from all angles. Visualizing the encounters in real-time provides a safe and effective method for observing live flight testing and a strong alternative to traveling to the remote test range.

  5. Biotic interactions modify multiple-stressor effects on juvenile brown trout in an experimental stream food web.

    PubMed

    Bruder, Andreas; Salis, Romana K; Jones, Peter E; Matthaei, Christoph D

    2017-09-01

    Agricultural land use results in multiple stressors affecting stream ecosystems. Flow reduction due to water abstraction, elevated levels of nutrients and chemical contaminants are common agricultural stressors worldwide. Concurrently, stream ecosystems are also increasingly affected by climate change. Interactions among multiple co-occurring stressors result in biological responses that cannot be predicted from single-stressor effects (i.e. synergisms and antagonisms). At the ecosystem level, multiple-stressor effects can be further modified by biotic interactions (e.g. trophic interactions). We conducted a field experiment using 128 flow-through stream mesocosms to examine the individual and combined effects of water abstraction, nutrient enrichment and elevated levels of the nitrification inhibitor dicyandiamide (DCD) on survival, condition and gut content of juvenile brown trout and on benthic abundance of their invertebrate prey. Flow velocity reduction decreased fish survival (-12% compared to controls) and condition (-8% compared to initial condition), whereas effects of nutrient and DCD additions and interactions among these stressors were not significant. Negative effects of flow velocity reduction on fish survival and condition were consistent with effects on fish gut content (-25% compared to controls) and abundance of dominant invertebrate prey (-30% compared to controls), suggesting a negative metabolic balance driving fish mortality and condition decline, which was confirmed by structural equation modelling. Fish mortality under reduced flow velocity increased as maximal daily water temperatures approached the upper limit of their tolerance range, reflecting synergistic interactions between these stressors. 
Our study highlights the importance of indirect stressor effects such as those transferred through trophic interactions, which need to be considered when assessing and managing fish populations and stream food webs in multiple-stressor situations. However, in real streams, compensatory mechanisms and behavioural responses, as well as seasonal and spatial variation, may alter the intensity of stressor effects and the sensitivity of trout populations. © 2017 John Wiley & Sons Ltd.

  6. A Multiple-Tracer Approach for Identifying Sewage Sources to an Urban Stream System

    USGS Publications Warehouse

    Hyer, Kenneth Edward

    2007-01-01

    The presence of human-derived fecal coliform bacteria (sewage) in streams and rivers is recognized as a human health hazard. The source of these human-derived bacteria, however, is often difficult to identify and eliminate, because sewage can be delivered to streams through a variety of mechanisms, such as leaking sanitary sewers or private lateral lines, cross-connected pipes, straight pipes, sewer-line overflows, illicit dumping of septic waste, and vagrancy. A multiple-tracer study was conducted to identify site-specific sources of sewage in Accotink Creek, an urban stream in Fairfax County, Virginia, that is listed on the Commonwealth's priority list of impaired streams for violations of the fecal coliform bacteria standard. Beyond developing this multiple-tracer approach for locating sources of sewage inputs to Accotink Creek, the second objective of the study was to demonstrate how the multiple-tracer approach can be applied to other streams affected by sewage sources. The tracers used in this study were separated into indicator tracers, which are relatively simple and inexpensive to apply, and confirmatory tracers, which are relatively difficult and expensive to analyze. Indicator tracers include fecal coliform bacteria, surfactants, boron, chloride, chloride/bromide ratio, specific conductance, dissolved oxygen, turbidity, and water temperature. Confirmatory tracers include 13 organic compounds that are associated with human waste, including caffeine, cotinine, triclosan, a number of detergent metabolites, several fragrances, and several plasticizers. To identify sources of sewage to Accotink Creek, a detailed investigation of the Accotink Creek main channel, tributaries, and flowing storm drains was undertaken from 2001 to 2004. Sampling was conducted in a series of eight synoptic sampling events, each of which began at the most downstream site and extended upstream through the watershed and into the headwaters of each tributary. 
Using the synoptic sampling approach, 149 sites were sampled at least one time for indicator tracers; 52 of these sites also were sampled for confirmatory tracers at least one time. Through the analysis of multiple-tracer levels in the synoptic samples, three major sewage sources to the Accotink Creek stream network were identified, and several other minor sewage sources to the Accotink Creek system likely deserve additional investigation. Near the end of the synoptic sampling activities, three additional sampling methods were used to gain better understanding of the potential for sewage sources to the watershed. These additional sampling methods included optical brightener monitoring, intensive stream sampling using automated samplers, and additional sampling of several storm-drain networks. The samples obtained by these methods provided further understanding of possible sewage sources to the streams and a better understanding of the variability in the tracer concentrations at a given sampling site. Collectively, these additional sampling methods were a valuable complement to the synoptic sampling approach that was used for the bulk of this study. The study results provide an approach for local authorities to use in applying a relatively simple and inexpensive collection of tracers to locate sewage sources to streams. Although this multiple-tracer approach is effective in detecting sewage sources to streams, additional research is needed to better detect extremely low-volume sewage sources and better enable local authorities to identify the specific sources of the sewage once it is detected in a stream reach.
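
    The two-tier indicator/confirmatory strategy can be sketched as a simple screening rule. The tracer names below appear in the study, but the thresholds and the two-hit criterion are hypothetical illustrations, not the study's actual decision rule:

```python
# Hypothetical screening thresholds for a few inexpensive indicator tracers;
# a site is flagged for (expensive) confirmatory-tracer sampling when
# several indicators exceed their thresholds at once.
THRESHOLDS = {"fecal_coliform": 400, "surfactants": 0.1, "boron": 0.05}

def flag_site(measurements, min_hits=2):
    # Count how many indicator tracers exceed their threshold.
    hits = sum(1 for k, t in THRESHOLDS.items() if measurements.get(k, 0) > t)
    return hits >= min_hits

site = {"fecal_coliform": 1200, "surfactants": 0.25, "boron": 0.02}
print(flag_site(site))  # True -> sample confirmatory tracers here
```
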

  7. Quantifying nutrient sources in an upland catchment using multiple chemical and isotopic tracers

    NASA Astrophysics Data System (ADS)

    Sebestyen, S. D.; Boyer, E. W.; Shanley, J. B.; Doctor, D. H.; Kendall, C.; Aiken, G. R.

    2006-12-01

    To explore processes that control the temporal variation of nutrients in surface waters, we measured multiple environmental tracers at the Sleepers River Research Watershed, an upland catchment in northeastern Vermont, USA. Using a set of high-frequency stream water samples, we quantified the variation of nutrients over a range of stream flow conditions with chemical and isotopic tracers of water, nitrate, and dissolved organic carbon (DOC). Stream water concentrations of nitrogen (predominantly in the forms of nitrate and dissolved organic nitrogen) and DOC reflected mixing of water contributed from distinct sources in the forested landscape. Water isotopic signatures and end-member mixing analysis revealed when solutes entered the stream from these sources and that the sources were linked to the stream by preferential shallow subsurface and overland flow paths. Results from the tracers indicated that freshly-leached, terrestrial organic matter was the overwhelming source of high DOC concentrations in stream water. In contrast, in this region where atmospheric nitrogen deposition is chronically elevated, the highest concentrations of stream nitrate were attributable to atmospheric sources that were transported via melting snow and rainfall. These findings are consistent with a conceptual model of the landscape in which coupled hydrological and biogeochemical processes interact to control stream solute variability over time.
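
    End-member mixing analysis with one conservative tracer reduces to a linear mixing equation. A minimal two-end-member sketch (the δ18O values below are hypothetical, not data from Sleepers River):

```python
def two_endmember_mixing(c_stream, c_a, c_b):
    # Conservative-tracer mass balance: c_stream = f*c_a + (1 - f)*c_b,
    # so the fraction of source A is f = (c_stream - c_b) / (c_a - c_b).
    return (c_stream - c_b) / (c_a - c_b)

# Hypothetical delta-18O values: snowmelt (-18) vs. pre-event groundwater (-11)
f_snow = two_endmember_mixing(-14.0, -18.0, -11.0)
print(round(f_snow, 3))  # 0.429 -> ~43% of streamflow from snowmelt
```

    With more tracers and more end members, the same mass balance becomes an overdetermined linear system solved by least squares, which is the usual multi-tracer generalization.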

  8. Using the Advocacy Coalition Framework and Multiple Streams policy theories to examine the role of evidence, research and other types of knowledge in drug policy.

    PubMed

    Ritter, Alison; Hughes, Caitlin Elizabeth; Lancaster, Kari; Hoppe, Robert

    2018-04-17

    The prevailing 'evidence-based policy' paradigm emphasizes a technical-rational relationship between alcohol and drug research evidence and subsequent policy action. However, policy process theories do not start with this premise, and hence provide an opportunity to consider anew the ways in which evidence, research and other types of knowledge impact upon policy. This paper presents a case study, the police deployment of drug detection dogs, to highlight how two prominent policy theories [the Advocacy Coalition Framework (ACF) and the Multiple Streams (MS) approach] explicate the relationship between evidence and policy. The two theories were interrogated with reference to their descriptions and framings of evidence, research and other types of knowledge. The case study methodology was employed to extract data concerned with evidence and other types of knowledge from a previous detailed historical account and analysis of drug detection dogs in one Australian state (New South Wales). Different types of knowledge employed across the case study were identified and coded, and then analysed with reference to each theory. A detailed analysis of one key 'evidence event' within the case study was also undertaken. Five types of knowledge were apparent in the case study: quantitative program data; practitioner knowledge; legal knowledge; academic research; and lay knowledge. The ACF highlights how these various types of knowledge are only influential inasmuch as they provide the opportunity to alter the beliefs of decision-makers. The MS highlights how multiple types of knowledge may or may not form part of the strategy of policy entrepreneurs to forge the confluence of problems, solutions and politics. Neither the Advocacy Coalition Framework nor the Multiple Streams approach presents an uncomplicated linear relationship between evidence and policy action, nor do they preference any one type of knowledge. 
The implications for research and practice include the contestation of evidence through beliefs (Advocacy Coalition Framework), the importance of venues for debate (Advocacy Coalition Framework), the way in which data and indicators are transformed into problem specification (Multiple Streams) and the importance of the policy ('alternatives') stream (Multiple Streams). © 2018 Society for the Study of Addiction.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruffey, Stephanie H.; Jubin, Robert Thomas; Jordan, J. A.

    U.S. regulations will require the removal of 129I from the off-gas streams of any used nuclear fuel (UNF) reprocessing plant prior to discharge of the off-gas to the environment. Multiple off-gas streams within a UNF reprocessing plant combine prior to release, and each of these streams contains some amount of iodine. For an aqueous UNF reprocessing plant, these streams include the dissolver off-gas, the cell off-gas, the vessel off-gas (VOG), the waste off-gas and the shear off-gas. To achieve regulatory compliance, treatment of multiple off-gas streams within the plant must be performed. Preliminary studies have been completed on the adsorption of I2 onto silver mordenite (AgZ) from prototypical VOG streams. The study reported that AgZ did adsorb I2 from a prototypical VOG stream, but process upsets resulted in an uneven feed stream concentration. The experiments described in this document both improve the characterization of I2 adsorption by AgZ from dilute gas streams and further extend it to include characterization of the adsorption of organic iodides (in the form of CH3I) onto AgZ under prototypical VOG conditions. The design of this extended duration testing was such that information about the rate of adsorption, the penetration of the iodine species, and the effect of sorbent aging on iodine removal in VOG conditions could be inferred.

  10. Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot

    PubMed Central

    Tidoni, Emmanuele; Gergondet, Pierre; Kheddar, Abderrahmane; Aglioti, Salvatore M.

    2014-01-01

    Advances in brain-computer interface (BCI) technology allow people to actively interact in the world through surrogates. Controlling real humanoid robots using a BCI as intuitively as we control our body represents a challenge for current research in robotics and neuroscience. In order to successfully interact with the environment the brain integrates multiple sensory cues to form a coherent representation of the world. Cognitive neuroscience studies demonstrate that multisensory integration may imply a gain with respect to a single modality and ultimately improve the overall sensorimotor performance. For example, reactivity to simultaneous visual and auditory stimuli may be higher than to the sum of the same stimuli delivered in isolation or in temporal sequence. Yet, knowledge about whether audio-visual integration may improve the control of a surrogate is meager. To explore this issue, we provided human footstep sounds as audio feedback to BCI users while controlling a humanoid robot. Participants were asked to steer their robot surrogate and perform a pick-and-place task through BCI-SSVEPs. We found that audio-visual synchrony between footstep sounds and the humanoid's actual walk reduces the time required for steering the robot. Thus, auditory feedback congruent with the humanoid's actions may improve motor decisions of the BCI user and strengthen the feeling of control over the robot. Our results shed light on the possibility of increasing control of a robot through the combination of multisensory feedback to a BCI user. PMID:24987350

  11. Sounding Out Science: Incorporating Audio Technology to Assist Students with Learning Differences in Science Education

    NASA Astrophysics Data System (ADS)

    Gomes, Clement V.

    With the current focus on having all students reach scientific literacy in the U.S., there exists a need to support marginalized students, such as those with Learning Disabilities/Differences (LD), to reach the same educational goals as their mainstream counterparts. This dissertation examines the benefits of using audio assistive technology on the iPad to support LD students in achieving comprehension of science vocabulary and semantics. This dissertation is composed of two papers, both of which include qualitative information supported by quantified data. The first paper, titled Using Technology to Overcome Fundamental Literacy Constraints for Students with Learning Differences to Achieve Scientific Literacy, provides quantified evidence from pretest and posttest analysis that audio technology can be beneficial for seventh grade LD students when learning new and unfamiliar science content. Analysis of observations and student interviews supports the findings. The second paper, titled Time, Energy, and Motivation: Utilizing Technology to Ease Science Understanding for Students with Learning Differences, supports the importance of creating technology that is clear, audible, and easy for students to use, so that they benefit from, and want to use, the learning tool. Multiple correlation analysis of Likert survey responses was used to identify four major items, supported by analysis of observations of and interviews with students, parents, and educators. This study provides useful information to support the rising number of identified LD students and their parents and teachers by presenting the benefits of using audio assistive technology to learn science.

  12. Multi-scale Homogenization of Caddisfly Metacommunities in Human-modified Landscapes

    NASA Astrophysics Data System (ADS)

    Simião-Ferreira, Juliana; Nogueira, Denis Silva; Santos, Anna Claudia; De Marco, Paulo; Angelini, Ronaldo

    2018-04-01

    The multiple scales of stream-network spatial organization reflect the hierarchical arrangement of stream habitats, with increasing levels of complexity from sub-catchments up to entire hydrographic basins. Across these spatial scales, local stream habitats form nested subsets of increasing landscape scale and habitat size, with varying contributions of both alpha and beta diversity to regional diversity. Here, we aimed to test the relative importance of multiple nested hierarchical spatial scales in determining alpha and beta diversity of caddisflies in regions with different levels of landscape degradation in a core Cerrado area in Brazil. We used quantitative environmental variables to test the hypothesis that landscape homogenization affects the contribution of alpha and beta diversity of caddisflies to regional diversity. We found that the contribution of alpha and beta diversity to gamma diversity varied according to landscape degradation. Sub-catchments with more intense agriculture had lower diversity at multiple levels, most markedly alpha and beta diversity. We also found that environmental predictors mainly associated with water quality, channel size, and habitat integrity (lower scores indicate stream degradation) were related to community dissimilarity at the catchment scale. For effective management of headwater caddisfly biodiversity and conservation of these catchments, heterogeneous streams with more pristine riparian vegetation within the river basin need to be preserved in protected areas. Additionally, in the most degraded areas, restoration of riparian vegetation and enlargement of protected areas will be needed.

  13. Multiple Streaming and the Probability Distribution of Density in Redshift Space

    NASA Astrophysics Data System (ADS)

    Hui, Lam; Kofman, Lev; Shandarin, Sergei F.

    2000-07-01

    We examine several aspects of redshift distortions by expressing the redshift-space density in terms of the eigenvalues and orientation of the local Lagrangian deformation tensor. We explore the importance of multiple streaming using the Zeldovich approximation (ZA), and compute the average number of streams in both real and redshift space. We find that multiple streaming can be significant in redshift space but negligible in real space, even at moderate values of the linear fluctuation amplitude (σ_l ≲ 1). Moreover, unlike their real-space counterparts, redshift-space multiple streams can flow past each other with minimal interactions. Such nonlinear redshift-space effects, which are physically distinct from the fingers-of-God due to small-scale virialized motions, might in part explain the well-known departure of redshift distortions from the classic linear prediction by Kaiser, even at relatively large scales where the corresponding density field in real space is well described by linear perturbation theory. We also compute, using the ZA, the probability distribution function (PDF) of the density, as well as S_3, in real and redshift space, and compare it with the PDF measured from N-body simulations. The role of caustics in defining the character of the high-density tail is examined. We find that (non-Lagrangian) smoothing, due to both finite resolution or discreteness and small-scale velocity dispersions, is very effective in erasing caustic structures, unless the initial power spectrum is sufficiently truncated.
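    The shell-crossing effect behind multiple streaming can be sketched in one dimension: in the Zeldovich approximation a particle moves as x(q) = q + D ψ(q), and once the map folds, several Lagrangian coordinates q land at the same Eulerian position x. The toy displacement ψ(q) = sin(q) below is our choice for illustration, not the paper's setup.

```python
import math

# 1-D Zeldovich sketch (illustrative only): count how many Lagrangian
# coordinates q map to a given Eulerian position x0, i.e. the number of
# streams there, by locating sign changes of x(q) - x0 on a fine grid.

def n_streams(x0, D, n=2000, L=2.0 * math.pi):
    """Count q-roots of q + D*sin(q) = x0 via sign changes on the grid."""
    count = 0
    prev = None
    for i in range(n + 1):
        q = L * i / n
        diff = q + D * math.sin(q) - x0   # toy displacement psi(q) = sin(q)
        if prev is not None and prev * diff < 0:
            count += 1
        prev = diff
    return count

print(n_streams(3.0, D=0.5))  # 1: single stream before shell crossing
print(n_streams(3.0, D=1.5))  # 3: three streams after shell crossing
```

    Growing the amplitude D past the point where x(q) becomes non-monotonic is exactly what turns single-stream flow into multi-stream flow.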

  14. A Mesoscale Total Dissolved Solids Quantity and Quality Study Integrating Responses of Multiple Biological Components in Small Stream Communities

    EPA Science Inventory

    A 42-day dosing test with ions comprising an excess of TDS was run using mesocosms continuously fed with natural stream water. In gridded gravel beds, biota from microbes through macroinvertebrates are measured and interact in a manner realistic of stream riffle/run ecology...

  15. Nonverbal Vocal Communication of Emotions in Interviews with Child and Adolescent Psychoanalysts

    ERIC Educational Resources Information Center

    Tokgoz, Tuba

    2014-01-01

    This exploratory study attempted to examine both the words and the prosody/melody of the language within the framework of Bucci's Multiple Code Theory. The sample consisted of twelve audio-recorded, semi-structured interviews of child and adolescent psychoanalysts who were asked to describe their work with patients. It is observed that emotionally…

  16. Using Audio Script Fading and Multiple-Exemplar Training to Increase Vocal Interactions in Children with Autism

    ERIC Educational Resources Information Center

    Garcia-Albea, Elena; Reeve, Sharon A.; Brothers, Kevin J.; Reeve, Kenneth F.

    2014-01-01

    Script-fading procedures have been shown to be effective for teaching children with autism to initiate and participate in social interactions without vocal prompts from adults. In previous script and script-fading research, however, there has been no demonstration of a generalized repertoire of vocal interactions under the control of naturally…

  17. Effects of Asynchronous Audio Feedback on the Story Revision Practices of Students with Emotional/ Behavioral Disorders

    ERIC Educational Resources Information Center

    McKeown, Debra; Kimball, Kathleen; Ledford, Jennifer

    2015-01-01

    Young writers, especially students with disabilities, have difficulty writing complete essays, and when asked to revise often make only surface-level changes. Individualized feedback may lead to gains in writing achievement, but finding class time for feedback is difficult. Using a multiple probe across participants design, the effectiveness of…

  18. Action Research to Improve Methods of Delivery and Feedback in an Access Grid Room Environment

    ERIC Educational Resources Information Center

    McArthur, Lynne C.; Klass, Lara; Eberhard, Andrew; Stacey, Andrew

    2011-01-01

    This article describes a qualitative study which was undertaken to improve the delivery methods and feedback opportunity in honours mathematics lectures which are delivered through Access Grid Rooms. Access Grid Rooms are facilities that provide two-way video and audio interactivity across multiple sites, with the inclusion of smart boards. The…

  19. Digital dashboard design using multiple data streams for disease surveillance with influenza surveillance as an example.

    PubMed

    Cheng, Calvin K Y; Ip, Dennis K M; Cowling, Benjamin J; Ho, Lai Ming; Leung, Gabriel M; Lau, Eric H Y

    2011-10-14

    Great strides have been made in exploring and exploiting new and different sources of disease surveillance data and in developing robust statistical methods for analyzing the collected data. However, there has been less research in the area of dissemination. Proper dissemination of surveillance data can help end users take appropriate actions, maximizing the return on effort invested upstream in the surveillance-to-action loop. The aims of the study were to develop a generic framework for a digital dashboard incorporating features of efficient dashboard design, and to demonstrate this framework by applying it to influenza surveillance in Hong Kong. Based on the merits of existing national surveillance websites and the principles of efficient dashboard design, we designed an automated influenza surveillance digital dashboard as a demonstration of efficient dissemination of surveillance data. We developed the system to synthesize and display multiple influenza surveillance data streams in the dashboard. Different algorithms can be implemented in the dashboard to combine all surveillance data streams into a description of overall influenza activity. We designed and implemented an influenza surveillance dashboard that used self-explanatory figures to display multiple surveillance data streams in panels. Indicators for individual data streams, as well as for overall influenza activity, were summarized on the main page, which can be read at a glance. A data-retrieval function was also incorporated to allow data sharing in a standard format. The influenza surveillance dashboard serves as a template illustrating the efficient synthesis and dissemination of multiple-source surveillance data, and may also be applied to other diseases. Surveillance data from multiple sources can be disseminated efficiently using a dashboard design that facilitates the translation of surveillance information into public health action.
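    One simple algorithm of the kind the abstract alludes to, for combining several surveillance streams into a single overall activity indicator, is to rescale each stream's latest value against its own history and average the results. A minimal sketch with invented stream names and numbers:

```python
# Fuse several surveillance streams into one overall activity index
# (the paper notes that different algorithms can be plugged in):
# scale each stream's latest value by its historical range, then average.
# Stream names and values are invented for illustration.

history = {
    "GP_consultation_rate": [4.1, 5.0, 6.2, 9.8, 12.4],
    "school_absenteeism":   [1.0, 1.2, 2.5, 3.1, 4.0],
    "lab_positivity_pct":   [5.0, 8.0, 15.0, 22.0, 30.0],
}

def stream_level(series):
    """Latest value rescaled to [0, 1] over the stream's own history."""
    lo, hi = min(series), max(series)
    return (series[-1] - lo) / (hi - lo) if hi > lo else 0.0

levels = {name: stream_level(s) for name, s in history.items()}
overall = sum(levels.values()) / len(levels)   # unweighted mean

print(round(overall, 2))  # 1.0: every stream sits at its historical maximum
```

    Weighted means, voting rules, or model-based indices could replace the unweighted mean without changing the dashboard structure.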

  20. Digital Dashboard Design Using Multiple Data Streams for Disease Surveillance With Influenza Surveillance as an Example

    PubMed Central

    Cheng, Calvin KY; Ip, Dennis KM; Cowling, Benjamin J; Ho, Lai Ming; Leung, Gabriel M

    2011-01-01

    Background Great strides have been made in exploring and exploiting new and different sources of disease surveillance data and in developing robust statistical methods for analyzing the collected data. However, there has been less research in the area of dissemination. Proper dissemination of surveillance data can help end users take appropriate actions, maximizing the return on effort invested upstream in the surveillance-to-action loop. Objective The aims of the study were to develop a generic framework for a digital dashboard incorporating features of efficient dashboard design, and to demonstrate this framework by applying it to influenza surveillance in Hong Kong. Methods Based on the merits of existing national surveillance websites and the principles of efficient dashboard design, we designed an automated influenza surveillance digital dashboard as a demonstration of efficient dissemination of surveillance data. We developed the system to synthesize and display multiple influenza surveillance data streams in the dashboard. Different algorithms can be implemented in the dashboard to combine all surveillance data streams into a description of overall influenza activity. Results We designed and implemented an influenza surveillance dashboard that used self-explanatory figures to display multiple surveillance data streams in panels. Indicators for individual data streams, as well as for overall influenza activity, were summarized on the main page, which can be read at a glance. A data-retrieval function was also incorporated to allow data sharing in a standard format. Conclusions The influenza surveillance dashboard serves as a template illustrating the efficient synthesis and dissemination of multiple-source surveillance data, and may also be applied to other diseases. Surveillance data from multiple sources can be disseminated efficiently using a dashboard design that facilitates the translation of surveillance information into public health action. PMID:22001082

  1. Study of the mapping of Navier-Stokes algorithms onto multiple-instruction/multiple-data-stream computers

    NASA Technical Reports Server (NTRS)

    Eberhardt, D. S.; Baganoff, D.; Stevens, K.

    1984-01-01

    Implicit approximate-factored algorithms have certain properties that are suitable for parallel processing. A particular computational fluid dynamics (CFD) code using this algorithm is mapped onto a multiple-instruction/multiple-data-stream (MIMD) computer architecture. An explanation of this mapping procedure is presented, as well as some of the difficulties encountered when trying to run the code concurrently. Timing results are given for runs on the Ames Research Center's MIMD test facility, which consists of two VAX-11/780s with a common MA780 multi-ported memory. The timing results indicated speedups exceeding 1.9 for characteristic CFD runs.
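    The reported figure can be restated with the standard definitions of speedup (S = T1/Tp) and parallel efficiency (E = S/p); the sketch below simply applies them to the quoted speedup of 1.9 on the two-processor facility.

```python
# Speedup S = T1 / Tp (serial time over p-processor time);
# parallel efficiency E = S / p (fraction of ideal linear scaling).

def efficiency(speedup, p):
    return speedup / p

# The abstract's speedup of 1.9 on the two-VAX facility (p = 2):
print(efficiency(1.9, 2))  # 0.95, i.e. 95% of ideal linear scaling
```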

  2. Multi-modal highlight generation for sports videos using an information-theoretic excitability measure

    NASA Astrophysics Data System (ADS)

    Hasan, Taufiq; Bořil, Hynek; Sangwan, Abhijeet; Hansen, John H. L.

    2013-12-01

    The ability to detect and organize 'hot spots' representing areas of excitement within video streams is a challenging research problem when techniques rely exclusively on video content. This study presents a generic method for sports-video highlight selection that leverages both video/image structure and audio/speech properties. Processing begins by partitioning the video into small segments and extracting several multi-modal features from each segment. Excitability is computed from the likelihood of the segmental features residing in regions of their joint probability density function space that are considered both exciting and rare. The proposed measure is used to rank order the partitioned segments, compressing the overall video sequence into a contiguous set of highlights. Experiments are performed on baseball videos using, as features, signal processing advancements for excitement assessment in the commentators' speech, audio energy, slow-motion replay, scene-cut density, and motion activity. A detailed analysis of the correlation between user excitability and various speech production parameters is conducted, and an effective scheme is designed to estimate the excitement level of the commentators' speech from the sports videos. Subjective evaluation of excitability and ranking of video segments demonstrate a higher correlation with the proposed measure than with well-established techniques, indicating the effectiveness of the overall approach.
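    A hedged sketch of the rank-by-rarity idea: fit a simple density model to per-segment features and rank segments by how unlikely they are under it. The diagonal-Gaussian model and feature values below are our simplification for illustration, not the paper's actual estimator.

```python
import math

# Rank video segments by rarity: fit a diagonal Gaussian to per-segment
# features, score each segment by its negative log-likelihood, and keep
# the least likely ("exciting and rare") segments as highlight candidates.
# Feature values are invented for illustration.

segments = {                     # segment_id: (audio_energy, motion)
    "s1": (0.20, 0.10),
    "s2": (0.30, 0.20),
    "s3": (0.25, 0.15),
    "s4": (0.90, 0.80),          # outlier: loud commentary and fast motion
}

def fit_gaussian(rows):
    n, d = len(rows), len(rows[0])
    mean = [sum(r[j] for r in rows) / n for j in range(d)]
    var = [sum((r[j] - mean[j]) ** 2 for r in rows) / n for j in range(d)]
    return mean, var

def neg_log_lik(x, mean, var):
    return sum(0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

mean, var = fit_gaussian(list(segments.values()))
ranked = sorted(segments, key=lambda s: neg_log_lik(segments[s], mean, var),
                reverse=True)
print(ranked[0])  # 's4' ranks first: rarest, hence most "exciting"
```

    Truncating the ranked list at a target duration yields the compressed highlight reel.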

  3. The Health Policy Process in Vietnam: Going Beyond Kingdon’s Multiple Streams Theory

    PubMed Central

    Kane, Sumit

    2016-01-01

    This commentary reflects upon the article along three broad lines. It reflects on the theoretical choices and omissions, particularly highlighting why it is important to adapt the multiple streams framework (MSF) when applying it in a socio-political context like Vietnam’s. The commentary also reflects upon the analytical threads tackled by Ha et al; for instance, it highlights the opportunities offered by, and raises questions about the centrality of the Policy Entrepreneur in getting the policy onto the political agenda and in pushing it through. The commentary also dwells on the implications of the article for development aid policies and practices. Throughout, the commentary signposts possible themes for Ha et al to consider for further analysis, and more generally, for future research using Kingdon’s multiple streams theory. PMID:27694671

  4. pyAudioAnalysis: An Open-Source Python Library for Audio Signal Analysis.

    PubMed

    Giannakopoulos, Theodoros

    2015-01-01

    Audio information plays an increasingly important role in the digital content available today, creating a need for methodologies that automatically analyze such content: audio event recognition for home automation and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g., audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures, including feature extraction, classification of audio signals, supervised and unsupervised segmentation, and content visualization. pyAudioAnalysis is licensed under the Apache License and is available on GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the wide range of implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has already been used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation, and health applications (e.g., monitoring eating habits). The feedback provided by all these audio applications has led to practical enhancement of the library.
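    Short-term feature extraction of the kind pyAudioAnalysis provides slides a window over the signal and emits one feature vector per frame. The stdlib-only sketch below mirrors that windowing scheme with two classic features (energy and zero-crossing rate); it does not call pyAudioAnalysis itself, all function names are ours, and the "signal" is a synthetic sine wave rather than a real recording.

```python
import math

# Short-term windowing in the style of audio feature extractors:
# slide a window over the signal and compute one feature vector per frame.

def frames(signal, win, step):
    for start in range(0, len(signal) - win + 1, step):
        yield signal[start:start + win]

def energy(frame):
    return sum(x * x for x in frame) / len(frame)

def zero_crossing_rate(frame):
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return crossings / (len(frame) - 1)

fs = 8000                                    # 8 kHz sampling rate
signal = [math.sin(2 * math.pi * 440 * t / fs) for t in range(fs)]  # 1 s tone

feats = [(energy(f), zero_crossing_rate(f))
         for f in frames(signal, win=400, step=200)]   # 50 ms window, 25 ms step
print(len(feats))  # 39 frames from one second of audio
```

    Classification and segmentation then operate on sequences of such per-frame vectors.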

  5. pyAudioAnalysis: An Open-Source Python Library for Audio Signal Analysis

    PubMed Central

    Giannakopoulos, Theodoros

    2015-01-01

    Audio information plays an increasingly important role in the digital content available today, creating a need for methodologies that automatically analyze such content: audio event recognition for home automation and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g., audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures, including feature extraction, classification of audio signals, supervised and unsupervised segmentation, and content visualization. pyAudioAnalysis is licensed under the Apache License and is available on GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the wide range of implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has already been used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation, and health applications (e.g., monitoring eating habits). The feedback provided by all these audio applications has led to practical enhancement of the library. PMID:26656189

  6. Understanding Agenda Setting in State Educational Policy: An Application of Kingdon's Multiple Streams Model to the Formation of State Reading Policy

    ERIC Educational Resources Information Center

    Young, Tamara V.; Shepley, Thomas V.; Song, Mengli

    2010-01-01

    Drawing on interview data from reading policy actors in California, Michigan, and Texas, this study applied Kingdon's (1984, 1995) multiple streams model to explain how the issue of reading became prominent on the agenda of state governments during the latter half of the 1990s. A combination of factors influenced the status of a state's reading…

  7. An Advanced Commanding and Telemetry System

    NASA Astrophysics Data System (ADS)

    Hill, Maxwell G. G.

    The Loral Instrumentation System 500, configured as an Advanced Commanding and Telemetry System (ACTS), supports the acquisition of multiple telemetry downlink streams while simultaneously supporting multiple uplink command streams for today's satellite vehicles. By using industry and federal standards, the system is able to support, without relying on a host computer, a true distributed dataflow architecture complemented by state-of-the-art RISC-based workstations and file servers.

  8. Right-Brain/Left-Brain Integrated Associative Processor Employing Convertible Multiple-Instruction-Stream Multiple-Data-Stream Elements

    NASA Astrophysics Data System (ADS)

    Hayakawa, Hitoshi; Ogawa, Makoto; Shibata, Tadashi

    2005-04-01

    A very large scale integrated circuit (VLSI) architecture for a multiple-instruction-stream multiple-data-stream (MIMD) associative processor has been proposed. The processor employs an architecture that enables seamless switching from associative operations to arithmetic operations. The MIMD element is convertible to a regular central processing unit (CPU) while maintaining its high performance as an associative processor. Therefore, the MIMD associative processor can perform not only on-chip perception, i.e., searching for the vector most similar to an input vector throughout the on-chip cache memory, but also arithmetic and logic operations similar to those in ordinary CPUs, both simultaneously in parallel processing. Three key technologies have been developed to generate the MIMD element: associative-operation-and-arithmetic-operation switchable calculation units, a versatile register control scheme within the MIMD element for flexible operations, and a short instruction set for minimizing the memory size for program storage. Key circuit blocks were designed and fabricated using 0.18 μm complementary metal-oxide-semiconductor (CMOS) technology. As a result, the full-featured MIMD element is estimated to be 3 mm², showing the feasibility of an 8-parallel-MIMD-element associative processor in a single chip of 5 mm × 5 mm.
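    In software terms, the associative operation this chip accelerates is a nearest-vector search over stored memory. A minimal sketch follows, using Euclidean distance; the chip's actual similarity measure and memory organization may differ.

```python
# Software sketch of the associative operation performed in hardware:
# search stored memory for the vector most similar to an input query
# (here, smallest squared Euclidean distance). Illustrative only.

def nearest(memory, query):
    def dist2(v):
        return sum((a - b) ** 2 for a, b in zip(v, query))
    return min(range(len(memory)), key=lambda i: dist2(memory[i]))

memory = [
    (0.0, 0.0, 1.0),
    (1.0, 0.0, 0.0),
    (0.5, 0.5, 0.0),
]
print(nearest(memory, (0.9, 0.1, 0.0)))  # 1: index of the closest stored vector
```

    The chip's contribution is doing this search across all stored vectors in parallel, while remaining convertible to an ordinary CPU.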

  9. The suitability of using dissolved gases to determine groundwater discharge to high gradient streams

    USGS Publications Warehouse

    Gleeson, Tom; Manning, Andrew H.; Popp, Andrea; Zane, Mathew; Clark, Jordan F.

    2018-01-01

    Determining groundwater discharge to streams using dissolved gases is known to be useful over a wide range of streamflow rates but the suitability of dissolved gas methods to determine discharge rates in high gradient mountain streams has not been sufficiently tested, even though headwater streams are critical as ecological habitats and water resources. The aim of this study is to test the suitability of using dissolved gases to determine groundwater discharge rates to high gradient streams by field experiments in a well-characterized, high gradient mountain stream and a literature review. At a reach scale (550 m) we combined stream and groundwater radon activity measurements with an in-stream SF6 tracer test. By means of numerical modeling we determined gas exchange velocities and derived very low groundwater discharge rates (∼15% of streamflow). These groundwater discharge rates are below the uncertainty range of physical streamflow measurements and consistent with temperature, specific conductance and streamflow measured at multiple locations along the reach. At a watershed-scale (4 km), we measured CFC-12 and δ18O concentrations and determined gas exchange velocities and groundwater discharge rates with the same numerical model. The groundwater discharge rates along the 4 km stream reach were highly variable, but were consistent with the values derived in the detailed study reach. Additionally, we synthesized literature values of gas exchange velocities for different stream gradients which show an empirical relationship that will be valuable in planning future dissolved gas studies on streams with various gradients. In sum, we show that multiple dissolved gas tracers can be used to determine groundwater discharge to high gradient mountain streams from reach to watershed scales.
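    A simplified steady-state radon mass balance of the kind used in such studies shows how groundwater inflow I and gas exchange k shape the downstream radon profile; the equation form is a common textbook simplification (decay neglected over a short reach), and all parameter values below are invented, not the study's.

```python
# Simplified steady-state radon mass balance along a reach, of the kind
# used to infer groundwater discharge from stream radon profiles:
#   Q dC/dx = I (C_gw - C) - k w C
# with Q streamflow, I groundwater inflow per unit length, C_gw groundwater
# radon, k gas exchange velocity, w stream width. All numbers are invented.

def radon_profile(c0, length, dx, Q, I, c_gw, k, w):
    c, profile = c0, [c0]
    for _ in range(int(length / dx)):          # forward-Euler march downstream
        dcdx = (I * (c_gw - c) - k * w * c) / Q
        c += dcdx * dx
        profile.append(c)
    return profile

prof = radon_profile(c0=100.0, length=550.0, dx=10.0,   # Bq/m3, 550 m reach
                     Q=0.05,                            # m3/s streamflow
                     I=1e-5,                            # m2/s inflow per metre
                     c_gw=5000.0,                       # Bq/m3 in groundwater
                     k=4e-5, w=2.0)                     # m/s, m

print(round(prof[-1], 1))   # radon activity at the downstream end of the reach
```

    Fitting I (and the gas exchange velocity k) to measured radon profiles is, in essence, how such tracer studies estimate groundwater discharge.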

  10. The suitability of using dissolved gases to determine groundwater discharge to high gradient streams

    NASA Astrophysics Data System (ADS)

    Gleeson, Tom; Manning, Andrew H.; Popp, Andrea; Zane, Matthew; Clark, Jordan F.

    2018-02-01

    Determining groundwater discharge to streams using dissolved gases is known to be useful over a wide range of streamflow rates but the suitability of dissolved gas methods to determine discharge rates in high gradient mountain streams has not been sufficiently tested, even though headwater streams are critical as ecological habitats and water resources. The aim of this study is to test the suitability of using dissolved gases to determine groundwater discharge rates to high gradient streams by field experiments in a well-characterized, high gradient mountain stream and a literature review. At a reach scale (550 m) we combined stream and groundwater radon activity measurements with an in-stream SF6 tracer test. By means of numerical modeling we determined gas exchange velocities and derived very low groundwater discharge rates (∼15% of streamflow). These groundwater discharge rates are below the uncertainty range of physical streamflow measurements and consistent with temperature, specific conductance and streamflow measured at multiple locations along the reach. At a watershed-scale (4 km), we measured CFC-12 and δ18O concentrations and determined gas exchange velocities and groundwater discharge rates with the same numerical model. The groundwater discharge rates along the 4 km stream reach were highly variable, but were consistent with the values derived in the detailed study reach. Additionally, we synthesized literature values of gas exchange velocities for different stream gradients which show an empirical relationship that will be valuable in planning future dissolved gas studies on streams with various gradients. In sum, we show that multiple dissolved gas tracers can be used to determine groundwater discharge to high gradient mountain streams from reach to watershed scales.

  11. Telemetry and Communication IP Video Player

    NASA Technical Reports Server (NTRS)

    OFarrell, Zachary L.

    2011-01-01

    Aegis Video Player is the name of the video over IP system for the Telemetry and Communications group of the Launch Services Program. Aegis' purpose is to display video streamed over a network connection to be viewed during launches. To accomplish this task, a VLC ActiveX plug-in was used in C# to provide the basic capabilities of video streaming. The program was then customized to be used during launches. The VLC plug-in can be configured programmatically to display a single stream, but for this project multiple streams needed to be accessed. To accomplish this, an easy to use, informative menu system was added to the program to enable users to quickly switch between videos. Other features were added to make the player more useful, such as watching multiple videos and watching a video in full screen.

  12. Dual streams of auditory afferents target multiple domains in the primate prefrontal cortex

    PubMed Central

    Romanski, L. M.; Tian, B.; Fritz, J.; Mishkin, M.; Goldman-Rakic, P. S.; Rauschecker, J. P.

    2009-01-01

    ‘What’ and ‘where’ visual streams define ventrolateral object and dorsolateral spatial processing domains in the prefrontal cortex of nonhuman primates. We looked for similar streams for auditory–prefrontal connections in rhesus macaques by combining microelectrode recording with anatomical tract-tracing. Injection of multiple tracers into physiologically mapped regions AL, ML and CL of the auditory belt cortex revealed that anterior belt cortex was reciprocally connected with the frontal pole (area 10), rostral principal sulcus (area 46) and ventral prefrontal regions (areas 12 and 45), whereas the caudal belt was mainly connected with the caudal principal sulcus (area 46) and frontal eye fields (area 8a). Thus separate auditory streams originate in caudal and rostral auditory cortex and target spatial and non-spatial domains of the frontal lobe, respectively. PMID:10570492

  13. Improving hand functional use in subjects with multiple sclerosis using a musical keyboard: a randomized controlled trial.

    PubMed

    Gatti, Roberto; Tettamanti, Andrea; Lambiase, Simone; Rossi, Paolo; Comola, Mauro

    2015-06-01

    Playing an instrument induces neuroplasticity in different cerebral regions. This phenomenon has been described in subjects with stroke, suggesting that it could play a role in hand rehabilitation. The aim of this study was to analyse the effectiveness of playing a musical keyboard in improving hand function in subjects with multiple sclerosis. Nineteen hospitalized subjects were randomized into two groups: nine played a turned-on musical keyboard with sequences of finger movements (audio feedback present) and 10 performed the same exercises on a turned-off musical keyboard (audio feedback absent). Training duration was half an hour per day for 15 days. The primary outcome was perceived hand functional use, measured by the ABILHAND Questionnaire. Secondary outcomes were hand dexterity, measured by the Nine-Hole Peg Test, and hand strength, measured by Jamar and Pinch dynamometers. Two-way analysis of variance was used for data analysis. The time × group interaction was significant (p = 0.003) for the ABILHAND Questionnaire, in favour of the experimental group (mean between-group difference 0.99 logit [95% CI: 0.44; 1.54]). The two groups showed a significant time effect for all outcomes except the Jamar measure. Playing a musical keyboard seems a valid method to train the functional use of the hands in subjects with multiple sclerosis. Copyright © 2014 John Wiley & Sons, Ltd.

  14. Cortical Representations of Speech in a Multitalker Auditory Scene.

    PubMed

    Puvvada, Krishna C; Simon, Jonathan Z

    2017-09-20

    The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. 
We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory scene, with both attended and unattended speech streams represented with almost equal fidelity. We also show that higher-order auditory cortical areas, by contrast, represent an attended speech stream separately from, and with significantly higher fidelity than, unattended speech streams. Furthermore, the unattended background streams are represented as a single undivided background object rather than as distinct background objects. Copyright © 2017 the authors 0270-6474/17/379189-08$15.00/0.

  15. Independent transmission of sign language interpreter in DVB: assessment of image compression

    NASA Astrophysics Data System (ADS)

    Zatloukal, Petr; Bernas, Martin; Dvořák, Lukáš

    2015-02-01

    Sign language on television provides information to deaf that they cannot get from the audio content. If we consider the transmission of the sign language interpreter over an independent data stream, the aim is to ensure sufficient intelligibility and subjective image quality of the interpreter with minimum bit rate. The work deals with the ROI-based video compression of Czech sign language interpreter implemented to the x264 open source library. The results of this approach are verified in subjective tests with the deaf. They examine the intelligibility of sign language expressions containing minimal pairs for different levels of compression and various resolution of image with interpreter and evaluate the subjective quality of the final image for a good viewing experience.

  16. Development of a Video Coding Scheme for Analyzing the Usability and Usefulness of Health Information Systems.

    PubMed

    Kushniruk, Andre W; Borycki, Elizabeth M

    2015-01-01

    Usability has been identified as a key issue in health informatics. Worldwide, numerous projects have been carried out in an attempt to increase and optimize health system usability. Usability testing, which involves observing end users interacting with systems, has been widely applied, and numerous publications have appeared describing such studies. To date, however, fewer works have been published describing methodological approaches to analyzing the rich data stream that results from usability testing, including analysis of video, audio, and screen recordings. In this paper we describe our work on the development and application of a coding scheme for analyzing the usability of health information systems. The phases involved in such analyses are described.

  17. Intern Abstract for Spring 2016

    NASA Technical Reports Server (NTRS)

    Gibson, William

    2016-01-01

    The Human Interface Branch - EV3 - is evaluating Organic lighting-emitting diodes (OLEDs) as an upgrade for current displays on future spacecraft. OLEDs have many advantages over current displays. Conventional displays require constant backlighting which draws a lot of power, but with OLEDs they generate light themselves. OLEDs are lighter, and weight is always a concern with space launches. OLEDs also grant greater viewing angles. OLEDs have been in the commercial market for almost ten years now. What is not known is how they will perform in a space-like environment; specifically deep space far away from the Earth's magnetosphere. In this environment, the OLEDs can be expected to experience vacuum and galactic radiation. The intern's responsibility has been to prepare the OLED for a battery of tests. Unfortunately, it will not be ready for testing at the end of the internship. That being said much progress has been made: a) Developed procedures to safely disassemble the tablet. b) Inventoried and identified critical electronic components. c) 3D printed a testing apparatus. d) Wrote software in Python that will test the OLED screen while being radiated. e) Built circuits to restart the tablet and the test pattern, and ensure it doesn't fall asleep during radiation testing. f) Built enclosure that will house all of the electronics Also, the intern has been working on a way to take messages from a simulated Caution and Warnings system, process said messages into packets, send audio packets to a multicast address that audio boxes are listening to, and output spoken audio. Currently, Cautions and Warnings use a tone to alert crew members of a situation, and then crew members have to read through their checklists to determine what the tone means. In urgent situations, EV3 wants to deliver concise and specific alerts to the crew to facilitate any mitigation efforts on their part. 
Significant progress was made on this project: a) Opened a channel with the simulated Caution and Warning system to acquire messages. b) Configured the audio boxes. c) Gathered pre-recorded audio files. d) Packetized the audio stream. A third assigned project was to implement LED indicator modules for an Omnibus project. The Omnibus project is investigating better ways of designing lighting for the interior of spacecraft - both cabin lighting and avionics-box status indication. The current scheme contains too much of the blue light spectrum, which disrupts the sleep cycle. The LED indicator modules are to simulate the indicators running on a spacecraft. Lighting data will be gathered by human factors personnel and used in a model under development for spacecraft lighting. Significant progress was made on this project: a) Designed the circuit layout. b) Tested LEDs at the LETF. c) Created a GUI for the indicators. d) Created code for the Arduino to illuminate the indicator modules.
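
    The Caution-and-Warning audio path described above (packetize pre-recorded audio, send the packets to a multicast address the audio boxes listen to) can be sketched as below. This is a minimal illustration, not the intern's actual code: the multicast group, port, payload size, and the 16-bit sequence header are all assumptions for the example.

    ```python
    import socket
    import struct

    # Hypothetical multicast endpoint; the group/port used by the EV3
    # audio boxes is not given in the abstract.
    MCAST_GROUP = "239.0.0.1"
    MCAST_PORT = 5004
    PACKET_PAYLOAD = 512  # bytes of audio per datagram (illustrative)

    def packetize(audio: bytes, payload_size: int = PACKET_PAYLOAD):
        """Split raw audio bytes into datagrams, each prefixed with a
        16-bit big-endian sequence number so receivers can detect loss."""
        packets = []
        for seq, offset in enumerate(range(0, len(audio), payload_size)):
            chunk = audio[offset:offset + payload_size]
            packets.append(struct.pack(">H", seq & 0xFFFF) + chunk)
        return packets

    def send_multicast(packets):
        """Send each packet to the multicast group over UDP."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        # TTL 1 keeps the datagrams on the local network segment.
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
        try:
            for pkt in packets:
                sock.sendto(pkt, (MCAST_GROUP, MCAST_PORT))
        except OSError:
            pass  # no network route available; skip sending
        finally:
            sock.close()

    packets = packetize(b"\x00\x01" * 1500)  # 3000 bytes of dummy PCM audio
    send_multicast(packets)
    ```

    Any box that joins the group receives every stream without the sender tracking listeners, which is the same property the MCC VOIP system in the head note relies on.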

  18. Multiple Streaming and the Probability Distribution of Density in Redshift Space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hui, Lam; Kofman, Lev; Shandarin, Sergei F.

    2000-07-01

    We examine several aspects of redshift distortions by expressing the redshift-space density in terms of the eigenvalues and orientation of the local Lagrangian deformation tensor. We explore the importance of multiple streaming using the Zeldovich approximation (ZA), and compute the average number of streams in both real and redshift space. We find that multiple streaming can be significant in redshift space but negligible in real space, even at moderate values of the linear fluctuation amplitude (σ_l ≲ 1). Moreover, unlike their real-space counterparts, redshift-space multiple streams can flow past each other with minimal interactions. Such nonlinear redshift-space effects, which are physically distinct from the fingers-of-God due to small-scale virialized motions, might in part explain the well-known departure of redshift distortions from the classic linear prediction by Kaiser, even at relatively large scales where the corresponding density field in real space is well described by linear perturbation theory. We also compute, using the ZA, the probability distribution function (PDF) of the density, as well as S_3, in real and redshift space, and compare it with the PDF measured from N-body simulations. The role of caustics in defining the character of the high-density tail is examined. We find that (non-Lagrangian) smoothing, due to both finite resolution or discreteness and small-scale velocity dispersions, is very effective in erasing caustic structures, unless the initial power spectrum is sufficiently truncated. (c) 2000 The American Astronomical Society.

  19. A Critical Examination of the Introduction of Drug Detection Dogs for Policing of Illicit Drugs in New South Wales, Australia Using Kingdon's "Multiple Streams" Heuristic

    ERIC Educational Resources Information Center

    Lancaster, Kari; Ritter, Alison; Hughes, Caitlin; Hoppe, Robert

    2017-01-01

    This paper critically analyses the introduction of drug detection dogs as a tool for policing of illicit drugs in New South Wales, Australia. Using Kingdon's "multiple streams" heuristic as a lens for analysis, we identify how the issue of drugs policing became prominent on the policy agenda, and the conditions under which the…

  20. Scheduling optimization of design stream line for production research and development projects

    NASA Astrophysics Data System (ADS)

    Liu, Qinming; Geng, Xiuli; Dong, Ming; Lv, Wenyuan; Ye, Chunming

    2017-05-01

    In a development project, efficient design stream line scheduling is difficult and important owing to large design imprecision and the differences in the skills and skill levels of employees. The relative skill levels of employees are denoted as fuzzy numbers. Multiple execution modes are generated by scheduling different employees for design tasks. An optimization model of a design stream line scheduling problem is proposed with the constraints of multiple execution modes, multi-skilled employees and precedence. The model considers the parallel design of multiple projects, different skills of employees, flexible multi-skilled employees and resource constraints. The objective function is to minimize the duration and tardiness of the project. Moreover, a two-dimensional particle swarm algorithm is used to find the optimal solution. To illustrate the validity of the proposed method, a case is examined in this article, and the results support the feasibility and effectiveness of the proposed model and algorithm.

  1. The Health Policy Process in Vietnam: Going Beyond Kingdon's Multiple Streams Theory: Comment on "Shaping the Health Policy Agenda: The Case of Safe Motherhood Policy in Vietnam".

    PubMed

    Kane, Sumit

    2016-04-25

    This commentary reflects upon the article along three broad lines. It reflects on the theoretical choices and omissions, particularly highlighting why it is important to adapt the multiple streams framework (MSF) when applying it in a socio-political context like Vietnam's. The commentary also reflects upon the analytical threads tackled by Ha et al; for instance, it highlights the opportunities offered by, and raises questions about the centrality of the Policy Entrepreneur in getting the policy onto the political agenda and in pushing it through. The commentary also dwells on the implications of the article for development aid policies and practices. Throughout, the commentary signposts possible themes for Ha et al to consider for further analysis, and more generally, for future research using Kingdon's multiple streams theory. © 2016 by Kerman University of Medical Sciences.

  2. Comparison of the Audio and Video Elements of Instructional Films; (Rapid Mass Learning). Technical Report.

    ERIC Educational Resources Information Center

    Nelson, H. E.; And Others

    Two experiments which compare the effectiveness of the auditory and visual elements in instructional films in order to study their relative contributions to learning are described in this paper. Two films dealing with aerodynamics were used in one experiment, and one film dealing with desert survival was used in the other. Multiple choice item…

  3. Multi-Modal Surrogates for Retrieving and Making Sense of Videos: Is Synchronization between the Multiple Modalities Optimal?

    ERIC Educational Resources Information Center

    Song, Yaxiao

    2010-01-01

    Video surrogates can help people quickly make sense of the content of a video before downloading or seeking more detailed information. Visual and audio features of a video are primary information carriers and might become important components of video retrieval and video sense-making. In the past decades, most research and development efforts on…

  4. Building a Web in Science Instruction: Using Multiple Resources in a Swedish Multilingual Middle School Class

    ERIC Educational Resources Information Center

    Jakobson, Britt; Axelsson, Monica

    2017-01-01

    This study, on the unit measuring time, examines classroom use of different resources and their affordances for students' meaning-making. The data, comprising audio and video recordings, fieldnotes, photographs and student texts, were collected during a lesson in a multilingual Swedish grade 5 classroom (students aged 11-12). In order to analyse…

  5. Resistance and change: a multiple streams approach to understanding health policy making in Ghana.

    PubMed

    Kusi-Ampofo, Owuraku; Church, John; Conteh, Charles; Heinmiller, B Timothy

    2015-02-01

    Although much has been written on health policy making in developed countries, the same cannot be said of less developed countries, especially in Africa. Drawing largely on available historical and government records, newspaper publications, parliamentary Hansards, and published books and articles, this article uses John W. Kingdon's multiple streams framework to explain how the problem, politics, and policy streams converged for Ghana's National Health Insurance Scheme (NHIS) to be passed into law in 2003. The article contends that a change in government in the 2000 general election opened a "policy window" for eventual policy change from "cash-and-carry" to the NHIS. Copyright © 2015 by Duke University Press.

  6. Using the storm water management model to predict urban headwater stream hydrological response to climate and land cover change

    Treesearch

    J.Y. Wu; J.R. Thompson; R.K. Kolka; K.J. Franz; T.W. Stewart

    2013-01-01

    Streams are natural features in urban landscapes that can provide ecosystem services for urban residents. However, urban streams are under increasing pressure caused by multiple anthropogenic impacts, including increases in human population and associated impervious surface area, and accelerated climate change. The ability to anticipate these changes and better...

  7. The effects of logging road construction on insect drop into a small coastal stream

    Treesearch

    Lloyd J. Hess

    1969-01-01

    Abstract - Because stream fisheries are so closely associated with forested watersheds, it is necessary that the streams and forests be managed jointly under a system of multiple use. This requires a knowledge of the interrelationships between these resources to yield maximum returns from both. It is the purpose of this paper to relate logging practices to fish...

  8. Exploring the persistence of stream-dwelling trout populations under alternative real-world turbidity regimes with an individual-based model

    Treesearch

    Bret C. Harvey; Steven F. Railsback

    2009-01-01

    We explored the effects of elevated turbidity on stream-resident populations of coastal cutthroat trout Oncorhynchus clarkii clarkii using a spatially explicit individual-based model. Turbidity regimes were contrasted by means of 15-year simulations in a third-order stream in northwestern California. The alternative regimes were based on multiple-year, continuous...

  9. Proxy-assisted multicasting of video streams over mobile wireless networks

    NASA Astrophysics Data System (ADS)

    Nguyen, Maggie; Pezeshkmehr, Layla; Moh, Melody

    2005-03-01

    This work addresses the challenge of providing seamless multimedia services to mobile users by proposing a proxy-assisted multicast architecture for the delivery of video streams. We propose a hybrid system of streaming proxies, interconnected by an application-layer multicast tree, where each proxy acts as a cluster head to stream content to its stationary and mobile users. The architecture is based on our previously proposed Enhanced-NICE protocol, which uses an application-layer multicast tree to deliver layered video streams to multiple heterogeneous receivers. We focused the study on the placement of streaming proxies to enable efficient delivery of live and on-demand video, supporting both stationary and mobile users. The simulation results are evaluated and compared with two baseline scenarios: one with a centralized proxy system serving the entire population, and one with mini-proxies, each serving its local users. The simulations are implemented using the J-SIM simulator. The results show that even though proxies in the hybrid scenario experienced a slightly longer delay, they had the lowest drop rate of video content. This finding illustrates the significance of task sharing across multiple proxies. The resulting load balancing among proxies provided better video quality to a larger audience.

  10. Effects of urban development on stream ecosystems in nine metropolitan study areas across the United States

    USGS Publications Warehouse

    Coles, James F.; McMahon, Gerard; Bell, Amanda H.; Brown, Larry R.; Fitzpatrick, Faith A.; Scudder Eikenberry, Barbara C.; Woodside, Michael D.; Cuffney, Thomas F.; Bryant, Wade L.; Cappiella, Karen; Fraley-McNeal, Lisa; Stack, William P.

    2012-01-01

    Which urban-related stressors are most closely linked to biological community degradation, and how can multiple stressors be managed to protect stream health as a watershed becomes increasingly urbanized?

  11. Spatially intensive sampling by electrofishing for assessing longitudinal discontinuities in fish distribution in a headwater stream

    USGS Publications Warehouse

    Le Pichon, Céline; Tales, Évelyne; Belliard, Jérôme; Torgersen, Christian E.

    2017-01-01

    Spatially intensive sampling by electrofishing is proposed as a method for quantifying spatial variation in fish assemblages at multiple scales along extensive stream sections in headwater catchments. We used this method to sample fish species at 10-m2 points spaced every 20 m throughout 5 km of a headwater stream in France. The spatially intensive sampling design provided information at a spatial resolution and extent that enabled exploration of spatial heterogeneity in fish assemblage structure and aquatic habitat at multiple scales with empirical variograms and wavelet analysis. These analyses were effective for detecting scales of periodicity, trends, and discontinuities in the distribution of species in relation to tributary junctions and obstacles to fish movement. This approach to sampling riverine fishes may be useful in fisheries research and management for evaluating stream fish responses to natural and altered habitats and for identifying sites for potential restoration.

  12. Development of habitat suitability indices for the Candy Darter, with cross-scale validation across representative populations

    USGS Publications Warehouse

    Dunn, Corey G.; Angermeier, Paul

    2016-01-01

    Understanding relationships between habitat associations for individuals and habitat factors that limit populations is a primary challenge for managers of stream fishes. Although habitat use by individuals can provide insight into the adaptive significance of selected microhabitats, not all habitat parameters will be significant at the population level, particularly when distributional patterns partially result from habitat degradation. We used underwater observation to quantify microhabitat selection by an imperiled stream fish, the Candy Darter Etheostoma osburni, in two streams with robust populations. We developed multiple-variable and multiple-life-stage habitat suitability indices (HSIs) from microhabitat selection patterns and used them to assess the suitability of available habitat in streams where Candy Darter populations were extirpated, localized, or robust. Next, we used a comparative framework to examine relationships among (1) habitat availability across streams, (2) projected habitat suitability of each stream, and (3) a rank for the likely long-term viability (robustness) of the population inhabiting each stream. Habitat selection was characterized by ontogenetic shifts from the low-velocity, slightly embedded areas used by age-0 Candy Darters to the swift, shallow areas with little fine sediment and complex substrate, which were used by adults. Overall, HSIs were strongly correlated with population rank. However, we observed weak or inverse relationships between predicted individual habitat suitability and population robustness for multiple life stages and variables. The results demonstrated that microhabitat selection by individuals does not always reflect population robustness, particularly when based on a single life stage or season, which highlights the risk of generalizing habitat selection that is observed during nonstressful periods or for noncritical resources. 
These findings suggest that stream fish managers may need to be cautious when implementing conservation measures based solely on observations of habitat selection by individuals and that detailed study at the individual and population levels may be necessary to identify habitat that limits populations.

  13. Voice of the Rivers: Quantifying the Sound of Rivers into Streamflow and Using the Audio for Education and Outreach

    NASA Astrophysics Data System (ADS)

    Santos, J.

    2014-12-01

    I have two goals with my research. 1. I proposed that sound recordings can be used to detect the amount of water flowing in a particular river, which could then be used to measure stream flow in rivers that have no instrumentation. My locations are in remote watersheds where hand instrumentation is the only means to collect data. I record 15-minute samples, at varied intervals, of the streams with a stereo microphone suspended above the river perpendicular to stream flow, forming a "profile" of the river that can be compared to other stream-flow measurements of these areas over the course of a year. Through waveform analysis, I found a distinct voice for each river, and I am quantifying the sound to track the flow based on the amplitude, pitch, and wavelengths that these rivers produce. 2. Additionally, I plan to use my DVD-quality sound recordings, together with professional photos and HD video of these remote sites, in education, outreach, and therapeutic venues. The outreach aspect of my research follows my goal of bridging communication between researchers and the public. Wyoming rivers are unique in that we export 85% of our water downstream. I would also like to take these recordings to schools, set up speakers in the four corners of a classroom, and let the river flow as the teacher presents on water science. Immersion in an environment can help the learning experience of students; I have seen firsthand the power of drawing someone into an environment through sound and video. I will have my river sounds with me at AGU, presented as an interactive touch-screen sound experience.
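
    The amplitude side of the quantification described above can be illustrated as follows: compute the root-mean-square (RMS) amplitude of each recording window, then map amplitude to discharge through a calibration fit. The power-law rating curve and its coefficients below are purely hypothetical placeholders, not the author's fitted values.

    ```python
    import math

    def rms_amplitude(samples):
        """Root-mean-square amplitude of one window of audio samples."""
        return math.sqrt(sum(s * s for s in samples) / len(samples))

    def windows(samples, size):
        """Yield consecutive non-overlapping windows of the recording."""
        for i in range(0, len(samples) - size + 1, size):
            yield samples[i:i + size]

    # Dummy 'river' waveforms: a louder reach should yield a higher RMS.
    quiet = [0.1 * math.sin(0.01 * n) for n in range(4000)]
    loud = [0.5 * math.sin(0.01 * n) for n in range(4000)]

    quiet_rms = rms_amplitude(quiet)
    loud_rms = rms_amplitude(loud)

    # Hypothetical rating curve Q = a * RMS^b; in practice a and b would
    # be fitted against hand-gauged discharge measurements at each site.
    a, b = 12.0, 1.8
    estimated_discharge = a * loud_rms ** b
    ```

    Because RMS scales linearly with waveform amplitude, a site whose recordings double in loudness shows a predictable shift on such a curve; pitch and spectral features, as the abstract notes, would add further discriminating power.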

  14. Multi-modal gesture recognition using integrated model of motion, audio and video

    NASA Astrophysics Data System (ADS)

    Goutsu, Yusuke; Kobayashi, Takaki; Obara, Junya; Kusajima, Ikuo; Takeichi, Kazunari; Takano, Wataru; Nakamura, Yoshihiko

    2015-07-01

    Gesture recognition is used in many practical applications such as human-robot interaction, medical rehabilitation and sign language. With increasing motion-sensor development, multiple data sources have become available, which has led to the rise of multi-modal gesture recognition. Since our previous approach to gesture recognition depends on a unimodal system, it is difficult to classify similar motion patterns. In order to solve this problem, a novel approach which integrates motion, audio and video models is proposed, using a dataset captured by a Kinect sensor. The proposed system recognizes observed gestures by using the three models; their recognition results are integrated by the proposed framework, and the combined output becomes the final result. The motion and audio models are learned using hidden Markov models, and a random forest classifier is used to learn the video model. In the experiments to test the performance of the proposed system, the motion and audio models most suitable for gesture recognition are chosen by varying the feature vectors and learning methods. Additionally, the unimodal and multi-modal models are compared with respect to recognition accuracy. All the experiments are conducted on the dataset provided by the organizers of the Multi-Modal Gesture Recognition Challenge (MMGRC) workshop. The comparison results show that the multi-modal model composed of the three models achieves the highest recognition rate. This improvement in recognition accuracy indicates that the complementary relationship among the three models improves the accuracy of gesture recognition. The proposed system provides application technology to understand human actions of daily life more precisely.

  15. Method and apparatus of prefetching streams of varying prefetch depth

    DOEpatents

    Gara, Alan [Mount Kisco, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Hoenicke, Dirk [Seebruck-Seeon, DE

    2012-01-24

    Method and apparatus of prefetching streams of varying prefetch depth dynamically changes the depth of prefetching so that the number of multiple streams as well as the hit rate of a single stream are optimized. The method and apparatus in one aspect monitor a plurality of load requests from a processing unit for data in a prefetch buffer, determine an access pattern associated with the plurality of load requests and adjust a prefetch depth according to the access pattern.
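
    The dynamic-depth idea in the claim above (monitor load requests, determine the access pattern, adjust prefetch depth accordingly) can be sketched in miniature as follows. The depth bounds and the one-step adjustment policy are illustrative assumptions, not the patent's claimed mechanism.

    ```python
    class AdaptivePrefetcher:
        """Toy model of depth-adaptive stream prefetching: sequential
        access patterns deepen the prefetch; irregular accesses shallow it."""

        def __init__(self, min_depth=1, max_depth=8):
            self.min_depth = min_depth
            self.max_depth = max_depth
            self.depth = min_depth
            self.last_addr = None

        def access(self, addr):
            """Record one load request and adjust the prefetch depth."""
            if self.last_addr is not None:
                if addr == self.last_addr + 1:   # sequential: stream detected
                    self.depth = min(self.depth + 1, self.max_depth)
                else:                            # irregular: back off
                    self.depth = max(self.depth - 1, self.min_depth)
            self.last_addr = addr
            # Lines the prefetch buffer would now fetch ahead of `addr`.
            return [addr + i for i in range(1, self.depth + 1)]

    pf = AdaptivePrefetcher()
    for line in range(100, 110):   # a sequential stream of cache lines
        prefetched = pf.access(line)
    deep = pf.depth                # depth has ramped up to the maximum
    pf.access(5000)                # a random access
    shallow_step = pf.depth        # depth steps back down
    ```

    Shrinking the depth on irregular accesses is what frees prefetch-buffer capacity for additional concurrent streams, the trade-off the patent abstract describes between stream count and single-stream hit rate.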

  16. Anxiety Levels Are Independently Associated With Cognitive Performance in an Australian Multiple Sclerosis Patient Cohort.

    PubMed

    Ribbons, Karen; Lea, Rodney; Schofield, Peter W; Lechner-Scott, Jeannette

    2017-01-01

    Neurological and psychological symptoms in multiple sclerosis can affect cognitive function. The objective of this study was to explore the relationship between psychological measures and cognitive performance in a patient cohort. In 322 multiple sclerosis patients, psychological symptoms were measured using the Depression Anxiety and Stress Scale, and cognitive function was evaluated using the Audio Recorded Cognitive Screen. Multifactor linear regression analysis, accounting for all clinical covariates, found that anxiety was the only psychological measure to remain a significant predictor of cognitive performance (p<0.001), particularly memory function (p<0.001). Further prospective studies are required to determine whether treatment of anxiety improves cognitive impairment.

  17. Assessing the chemical contamination dynamics in a mixed land use stream system.

    PubMed

    Sonne, Anne Th; McKnight, Ursula S; Rønde, Vinni; Bjerg, Poul L

    2017-11-15

    Traditionally, the monitoring of streams for chemical and ecological status has been limited to surface water concentrations, where the dominant focus has been on general water quality and the risk for eutrophication. Mixed land use stream systems, comprising urban areas and agricultural production, are challenging to assess, with multiple chemical stressors impacting stream corridors. New approaches are urgently needed for identifying relevant sources, pathways and potential impacts for implementation of suitable source management and remedial measures. We developed a method for the risk assessment of chemical stressors in these systems and applied the approach to a 16-km groundwater-fed stream corridor (Grindsted, Denmark). Three methods were combined: (i) in-stream contaminant mass discharge for source quantification, (ii) Toxic Units and (iii) environmental standards. An evaluation of the chemical quality of all three stream compartments - stream water, hyporheic zone, streambed sediment - made it possible to link chemical stressors to their respective sources and obtain new knowledge about source composition and origin. Moreover, toxic unit estimation and comparison to environmental standards revealed the stream water quality was substantially impaired by both geogenic and diffuse anthropogenic sources of metals along the entire corridor, while the streambed was less impacted. Quantification of the contaminant mass discharge originating from a former pharmaceutical factory revealed that several hundred kilograms of chlorinated ethenes and pharmaceutical compounds discharge into the stream every year. The strongly reduced redox conditions in the plume result in high concentrations of dissolved iron and additionally release arsenic, generating the complex contaminant mixture found in the narrow discharge zone. The fingerprint of the plume was observed in the stream several kilometers downgradient, while nutrients, inorganics and pesticides played a minor role for stream health. The results emphasize that future investigations should include multiple compounds and stream compartments, and highlight the need for holistic approaches in the risk assessment of these dynamic systems. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Flood-frequency characteristics of Wisconsin streams

    USGS Publications Warehouse

    Walker, John F.; Peppler, Marie C.; Danz, Mari E.; Hubbard, Laura E.

    2017-05-22

    Flood-frequency characteristics for 360 gaged sites on unregulated rural streams in Wisconsin are presented for percent annual exceedance probabilities ranging from 0.2 to 50, using a statewide skewness map developed for this report. Equations of the relations between flood-frequency and drainage-basin characteristics were developed by multiple-regression analyses. Flood-frequency characteristics for ungaged sites on unregulated, rural streams can be estimated by use of the equations presented in this report. The State was divided into eight areas of similar physiographic characteristics. The most significant basin characteristics are drainage area, soil saturated hydraulic conductivity, main-channel slope, and several land-use variables. The standard error of prediction for the equation for the 1-percent annual exceedance probability flood ranges from 56 to 70 percent for Wisconsin streams; these values are larger than results presented in previous reports. The increase in the standard error of prediction is likely due to increased variability of the annual-peak discharges, resulting in increased variability in the magnitude of flood peaks at higher frequencies. For each of the unregulated rural streamflow-gaging stations, a weighted estimate based on the at-site log-Pearson Type III analysis and the multiple-regression results was determined. The weighted estimate generally has a lower uncertainty than either the log-Pearson Type III or multiple-regression estimates. For regulated streams, a graphical method for estimating flood-frequency characteristics was developed from the relations of discharge and drainage area for selected annual exceedance probabilities. Graphs for the major regulated streams in Wisconsin are presented in the report.

  19. Wavelet-based audio embedding and audio/video compression

    NASA Astrophysics Data System (ADS)

    Mendenhall, Michael J.; Claypoole, Roger L., Jr.

    2001-12-01

    Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.
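
    The embedding idea (hide audio bits in the wavelet detail coefficients of the video signal, then reconstruct) can be illustrated with a one-level integer Haar lifting transform. This roundtrip is an assumption-laden toy, not the paper's codec: the sample values, the LSB embedding rule, and the single-level transform are all stand-ins for the example.

    ```python
    def haar_forward(signal):
        """One level of the integer (lifting) Haar transform.
        Returns (approximation, detail); exactly invertible on integers."""
        approx, detail = [], []
        for a, b in zip(signal[0::2], signal[1::2]):
            d = a - b
            s = b + d // 2          # floor((a + b) / 2)
            approx.append(s)
            detail.append(d)
        return approx, detail

    def haar_inverse(approx, detail):
        """Invert the lifting steps to recover the original samples."""
        signal = []
        for s, d in zip(approx, detail):
            b = s - d // 2
            a = b + d
            signal.extend([a, b])
        return signal

    def embed_bits(detail, bits):
        """Overwrite the LSB of each detail coefficient with one data bit."""
        return [(d & ~1) | bit for d, bit in zip(detail, bits)]

    # Host 'video line' and hidden 'audio' payload (both hypothetical).
    video_line = [52, 55, 61, 59, 70, 64, 80, 77]
    audio_bits = [1, 0, 1, 1]

    approx, detail = haar_forward(video_line)
    watermarked = haar_inverse(approx, embed_bits(detail, audio_bits))

    # Receiver side: re-transform and read the LSBs back out.
    _, detail2 = haar_forward(watermarked)
    recovered = [d & 1 for d in detail2]
    ```

    Because the lifting transform is exactly invertible on integers, the receiver recovers the embedded bits without error, mirroring the paper's claim of error-free audio extraction after compression.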

  20. Developing semi-analytical solution for multiple-zone transient storage model with spatially non-uniform storage

    NASA Astrophysics Data System (ADS)

    Deng, Baoqing; Si, Yinbing; Wang, Jia

    2017-12-01

    Transient storage may vary along a stream due to stream hydraulic conditions and the characteristics of the storage zones. Analytical solutions of transient storage models in the literature have not covered spatially non-uniform storage. A novel integral transform strategy is presented that simultaneously performs integral transforms on the concentrations in the stream and in the storage zones, using a single set of eigenfunctions derived from the advection-diffusion equation of the stream. The semi-analytical solution of the multiple-zone transient storage model with spatially non-uniform storage is obtained by applying the generalized integral transform technique to all partial differential equations in the model. The derived semi-analytical solution is validated against field data from the literature, and good agreement between the computed data and the field data is obtained. Some illustrative examples are formulated to demonstrate applications of the present solution. It is shown that solute transport can be greatly affected by variation of the mass exchange coefficient and the ratio of cross-sectional areas. When the ratio of cross-sectional areas is large or the mass exchange coefficient is small, more reaches are recommended for calibrating the parameters.

  1. Urban development and stream ecosystem health—Science capabilities of the U.S. Geological Survey

    USGS Publications Warehouse

    Reilly, Pamela A.; Szabo, Zoltan; Coles, James F.

    2016-04-29

    Urban development creates multiple stressors that can degrade stream ecosystems by changing stream hydrology, water quality, and physical habitat. Contaminants, habitat destruction, and increasing streamflow variability resulting from urban development have been associated with the disruption of biological communities, particularly the loss of sensitive aquatic biota. Understanding how algal, invertebrate, and fish communities respond to these physical and chemical stressors can provide important clues as to how streams should be managed to protect stream ecosystems as a watershed becomes increasingly urbanized. The U.S. Geological Survey continues to lead monitoring efforts and scientific studies on the effects of urban development on stream ecosystems in metropolitan areas across the United States.

  2. RS-Forest: A Rapid Density Estimator for Streaming Anomaly Detection.

    PubMed

    Wu, Ke; Zhang, Kun; Fan, Wei; Edwards, Andrea; Yu, Philip S

    Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation of high probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS-Forest to systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features high detection rate, fast response, and insensitivity to most of the parameter settings. Algorithm implementations and datasets are available upon request.
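
    A much-simplified sketch of the randomized space trees described above: each tree partitions the attribute space with random splits, a reference window of instances fills the leaf counts, and a new instance is scored by its leaf mass averaged over the forest. This omits the paper's volume normalization, attribute-range estimation, and dual-profile streaming update; tree depth, forest size, and data are illustrative.

    ```python
    import random

    def build_tree(ranges, depth):
        """Recursively split a random attribute at a random point within
        its current range, yielding a binary space-partitioning tree."""
        if depth == 0:
            return {"leaf": True, "count": 0}
        attr = random.randrange(len(ranges))
        lo, hi = ranges[attr]
        split = random.uniform(lo, hi)
        left_r = [r if i != attr else (lo, split) for i, r in enumerate(ranges)]
        right_r = [r if i != attr else (split, hi) for i, r in enumerate(ranges)]
        return {"leaf": False, "attr": attr, "split": split,
                "left": build_tree(left_r, depth - 1),
                "right": build_tree(right_r, depth - 1)}

    def update(tree, x):
        """Record one reference instance in the leaf it falls into."""
        while not tree["leaf"]:
            tree = tree["left"] if x[tree["attr"]] <= tree["split"] else tree["right"]
        tree["count"] += 1

    def score(tree, x):
        """Piecewise-constant estimate: mass of the leaf x falls into."""
        while not tree["leaf"]:
            tree = tree["left"] if x[tree["attr"]] <= tree["split"] else tree["right"]
        return tree["count"]

    random.seed(7)
    ranges = [(0.0, 1.0), (0.0, 1.0)]
    forest = [build_tree(ranges, depth=6) for _ in range(25)]

    # Reference window: a dense cluster near (0.2, 0.2).
    for _ in range(500):
        p = (random.gauss(0.2, 0.03), random.gauss(0.2, 0.03))
        for t in forest:
            update(t, p)

    def density_score(x):
        """Average leaf mass over the forest; low mass = likely anomaly."""
        return sum(score(t, x) for t in forest) / len(forest)

    inlier = density_score((0.2, 0.2))
    outlier = density_score((0.9, 0.9))
    ```

    A point inside the dense cluster lands in heavily populated leaves in most trees, while a point far from the reference data scores near zero, which is the contrast the detector thresholds on.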

  3. RS-Forest: A Rapid Density Estimator for Streaming Anomaly Detection

    PubMed Central

    Wu, Ke; Zhang, Kun; Fan, Wei; Edwards, Andrea; Yu, Philip S.

    2015-01-01

    Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation of high probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS-Forest to systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features high detection rate, fast response, and insensitivity to most of the parameter settings. Algorithm implementations and datasets are available upon request. PMID:25685112

  4. Influences of land use on leaf breakdown in Southern Appalachian headwater streams: a multiple-scale analysis

    Treesearch

    R.A. Sponseller; E.F. Benfield

    2001-01-01

    Stream ecosystems can be strongly influenced by land use within watersheds. The extent of this influence may depend on the spatial distribution of developed land and the scale at which it is evaluated. Effects of land-cover patterns on leaf breakdown were studied in 8 Southern Appalachian headwater streams. Using a GIS, land cover was evaluated at several spatial...

  5. Binding and unbinding the auditory and visual streams in the McGurk effect.

    PubMed

    Nahorna, Olha; Berthommier, Frédéric; Schwartz, Jean-Luc

    2012-08-01

    Subjects presented with coherent auditory and visual streams generally fuse them into a single percept. This results in enhanced intelligibility in noise, or in visual modification of the auditory percept in the McGurk effect. It is classically considered that processing is done independently in the auditory and visual systems before interaction occurs at a certain representational stage, resulting in an integrated percept. However, some behavioral and neurophysiological data suggest the existence of a two-stage process. A first stage would involve binding together the appropriate pieces of audio and video information before fusion per se in a second stage. Then it should be possible to design experiments leading to unbinding. It is shown here that if a given McGurk stimulus is preceded by an incoherent audiovisual context, the amount of McGurk effect is largely reduced. Various kinds of incoherent contexts (acoustic syllables dubbed on video sentences or phonetic or temporal modifications of the acoustic content of a regular sequence of audiovisual syllables) can significantly reduce the McGurk effect even when they are short (less than 4 s). The data are interpreted in the framework of a two-stage "binding and fusion" model for audiovisual speech perception.

  6. Extending Research on a Math Fluency Building Intervention: Applying Taped Problems in a Second-Grade Classroom

    ERIC Educational Resources Information Center

    Windingstad, Sunny; Skinner, Christopher H.; Rowland, Emily; Cardin, Elizabeth; Fearrington, Jamie Y.

    2009-01-01

    A multiple-baseline, across-tasks design was used to extend research on the taped-problems (TP) intervention with an intact, rural, second-grade classroom. During TP sessions an audio recording paced the class through a series of 15 or 16 addition facts four times. Problems and answers were read and students were instructed to attempt to provide…

  7. Kernel-Based Sensor Fusion With Application to Audio-Visual Voice Activity Detection

    NASA Astrophysics Data System (ADS)

    Dov, David; Talmon, Ronen; Cohen, Israel

    2016-12-01

    In this paper, we address the problem of multiple view data fusion in the presence of noise and interferences. Recent studies have approached this problem using kernel methods, by relying particularly on a product of kernels constructed separately for each view. From a graph theory point of view, we analyze this fusion approach in a discrete setting. More specifically, based on a statistical model for the connectivity between data points, we propose an algorithm for selecting the kernel bandwidth, a parameter that, as we show, has important implications for the robustness of this fusion approach to interferences. Then, we consider the fusion of audio-visual speech signals measured by a single microphone and by a video camera pointed at the face of the speaker. Specifically, we address the task of voice activity detection, i.e., the detection of speech and non-speech segments, in the presence of structured interferences such as keyboard taps and office noise. We propose an algorithm for voice activity detection based on the audio-visual signal. Simulation results show that the proposed algorithm outperforms competing fusion and voice activity detection approaches. In addition, we demonstrate that a proper selection of the kernel bandwidth indeed leads to improved performance.
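
    The product-of-kernels fusion the paper analyzes can be illustrated with a Gaussian kernel per view; the bandwidth is the parameter whose selection the paper addresses. A minimal sketch (the toy data and bandwidth values are assumptions, and nothing here reproduces the paper's bandwidth-selection algorithm):

```python
import numpy as np

def gaussian_kernel(X, bandwidth):
    # pairwise Gaussian affinities within a single view
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / bandwidth ** 2)

def fused_kernel(views, bandwidths):
    """Elementwise product of per-view kernels: two samples stay strongly
    connected only if they are close in *every* view, which suppresses
    interferences confined to a single modality."""
    n = views[0].shape[0]
    K = np.ones((n, n))
    for X, h in zip(views, bandwidths):
        K *= gaussian_kernel(X, h)
    return K

# three samples: 0 and 1 are close in both views; 2 is close to 0
# only in the second view (an "interference" in the first)
audio = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 0.0]])
video = np.array([[0.0, 0.0], [0.1, 0.0], [0.1, 0.1]])
K = fused_kernel([audio, video], bandwidths=[1.0, 1.0])
```

    K[0, 1] stays near 1 while K[0, 2] collapses toward 0, because the product demands agreement across views.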

  8. Multisensory Motion Perception in 3–4 Month-Old Infants

    PubMed Central

    Nava, Elena; Grassi, Massimo; Brenna, Viola; Croci, Emanuela; Turati, Chiara

    2017-01-01

    Human infants begin very early in life to take advantage of multisensory information by extracting the invariant amodal information that is conveyed redundantly by multiple senses. Here we addressed the question as to whether infants can bind multisensory moving stimuli, and whether this occurs even if the motion produced by the stimuli is only illusory. Three- to 4-month-old infants were presented with two bimodal pairings: visuo-tactile and audio-visual. Visuo-tactile pairings consisted of apparently vertically moving bars (the Barber Pole illusion) moving in either the same or opposite direction with a concurrent tactile stimulus consisting of strokes given on the infant’s back. Audio-visual pairings consisted of the Barber Pole illusion in its visual and auditory version, the latter giving the impression of a continuously rising or descending pitch. We found that infants were able to discriminate congruently (same direction) vs. incongruently moving (opposite direction) pairs irrespective of modality (Experiment 1). Importantly, we also found that congruently moving visuo-tactile and audio-visual stimuli were preferred over incongruently moving bimodal stimuli (Experiment 2). Our findings suggest that very young infants are able to extract motion as an amodal component and use it to match stimuli that only apparently move in the same direction. PMID:29187829

  9. Preferred Tempo and Low-Audio-Frequency Bias Emerge From Simulated Sub-cortical Processing of Sounds With a Musical Beat

    PubMed Central

    Zuk, Nathaniel J.; Carney, Laurel H.; Lalor, Edmund C.

    2018-01-01

    Prior research has shown that musical beats are salient at the level of the cortex in humans. Yet below the cortex there is considerable sub-cortical processing that could influence beat perception. Some biases, such as a tempo preference and an audio frequency bias for beat timing, could result from sub-cortical processing. Here, we used models of the auditory-nerve and midbrain-level amplitude modulation filtering to simulate sub-cortical neural activity to various beat-inducing stimuli, and we used the simulated activity to determine the tempo or beat frequency of the music. First, irrespective of the stimulus being presented, the preferred tempo was around 100 beats per minute, which is within the range of tempi where tempo discrimination and tapping accuracy are optimal. Second, sub-cortical processing predicted a stronger influence of lower audio frequencies on beat perception. However, the tempo identification algorithm that was optimized for simple stimuli often failed for recordings of music. For music, the most highly synchronized model activity occurred at a multiple of the beat frequency. Using bottom-up processes alone is insufficient to produce beat-locked activity. Instead, a learned and possibly top-down mechanism that scales the synchronization frequency to derive the beat frequency greatly improves the performance of tempo identification. PMID:29896080
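
    At its simplest, picking a tempo as the strongest periodicity within a range of candidate rates can be sketched with an autocorrelation of an onset envelope. This toy stands in for the paper's model-synchronization measure only; it involves none of the auditory-nerve or midbrain modeling described above, and the frame rate and BPM range are arbitrary choices:

```python
import numpy as np

def estimate_tempo(envelope, sr, bpm_range=(60, 180)):
    """Return the tempo (BPM) whose beat period gives the strongest
    autocorrelation peak within the candidate lag range."""
    env = envelope - envelope.mean()
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]
    min_lag = int(sr * 60 / bpm_range[1])   # fastest tempo -> shortest lag
    max_lag = int(sr * 60 / bpm_range[0])   # slowest tempo -> longest lag
    lag = min_lag + int(np.argmax(ac[min_lag:max_lag + 1]))
    return 60.0 * sr / lag

# synthetic onset envelope: impulses every 0.5 s (120 BPM) at a 100 Hz frame rate
sr = 100
env = np.zeros(1000)
env[::50] = 1.0
tempo = estimate_tempo(env, sr)
```

    Note the failure mode the paper reports maps directly onto this sketch: for real music, the strongest periodicity often sits at a multiple of the true beat rate, so a bottom-up peak-pick alone misidentifies the tempo.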

  10. Combining multiple approaches and optimized data resolution for an improved understanding of stream temperature dynamics of a forested headwater basin in the Southern Appalachians

    NASA Astrophysics Data System (ADS)

    Belica, L.; Mitasova, H.; Caldwell, P.; McCarter, J. B.; Nelson, S. A. C.

    2017-12-01

    Thermal regimes of forested headwater streams continue to be an area of active research as climatic, hydrologic, and land cover changes can influence water temperature, a key aspect of aquatic ecosystems. Widespread monitoring of stream temperatures has provided an important data source, yielding insights on temporal and spatial patterns and on the underlying processes that influence stream temperature. However, small forested streams remain challenging to model due to the high spatial and temporal variability of stream temperatures and of the climatic and hydrologic conditions that drive them. Technological advances and increased computational power continue to provide new tools and measurement methods and have allowed spatially explicit analyses of dynamic natural systems at greater temporal resolutions than previously possible. With the goal of understanding how current stream temperature patterns and processes may respond to changing land cover and hydroclimatological conditions, we combined high-resolution, spatially explicit geospatial modeling with deterministic heat flux modeling approaches, using data sources that ranged from traditional hydrological and climatological measurements to emerging remote sensing techniques. Initial analyses of stream temperature monitoring data revealed that high temporal resolution (5 minutes) and measurement resolution (<0.1°C) were needed to adequately describe diel stream temperature patterns and capture the differences between paired 1st-order and 4th-order forest streams draining north- and south-facing slopes. This finding, along with geospatial models of subcanopy solar radiation and channel morphology, was used to develop hypotheses and guide field data collection for further heat flux modeling. By integrating multiple approaches and optimizing data resolution for the processes being investigated, small but ecologically significant differences in stream thermal regimes were revealed. In this case, multi-approach research contributed to the identification of the dominant mechanisms driving stream temperature in the study area and advanced our understanding of current thermal fluxes and how they may change as environmental conditions change in the future.

  11. Public Health Professionals as Policy Entrepreneurs: Arkansas's Childhood Obesity Policy Experience

    PubMed Central

    Craig, Rebekah L.; Felix, Holly C.; Phillips, Martha M.

    2010-01-01

    In response to a nationwide rise in obesity, several states have passed legislation to improve school health environments. Among these was Arkansas's Act 1220 of 2003, the most comprehensive school-based childhood obesity legislation at that time. We used the Multiple Streams Framework to analyze factors that brought childhood obesity to the forefront of the Arkansas legislative agenda and resulted in the passage of Act 1220. When 3 streams (problem, policy, and political) are combined, a policy window is opened and policy entrepreneurs may advance their goals. We documented factors that produced a policy window and allowed entrepreneurs to enact comprehensive legislation. This historical analysis and the Multiple Streams Framework may serve as a roadmap for leaders seeking to influence health policy. PMID:20864715

  12. MAC, A System for Automatically IPR Identification, Collection and Distribution

    NASA Astrophysics Data System (ADS)

    Serrão, Carlos

    Controlling Intellectual Property Rights (IPR) in the digital world is a very hard challenge. The ease of creating multiple bit-by-bit identical copies of original IPR works creates opportunities for digital piracy. The music industry is among those most affected, having sustained huge losses over the last few years as a result. Moreover, this situation is also changing the way music-rights collecting and distributing societies operate to assure correct identification, collection, and distribution of music IPR. In this article, a system for automating this IPR identification, collection, and distribution is presented and described. The system makes use of an advanced automatic audio identification system based on audio fingerprinting technology. This paper presents the details of the system and a use-case scenario where it is being deployed.
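
    A toy illustration of the fingerprinting idea: reduce each audio frame to its dominant spectral peak, then hash consecutive peak pairs so an excerpt can be matched against a reference set. This is not the system described above; real identification systems use far more robust landmark schemes, and the frame sizes and hashing here are arbitrary choices:

```python
import numpy as np

def fingerprint(signal, frame=1024, hop=512):
    """Dominant FFT bin per frame, hashed in consecutive pairs together
    with the pair's frame index."""
    peaks = []
    for start in range(0, len(signal) - frame, hop):
        spectrum = np.abs(np.fft.rfft(signal[start:start + frame]))
        peaks.append(int(np.argmax(spectrum)))
    return {hash((a, b, i)) for i, (a, b) in enumerate(zip(peaks, peaks[1:]))}

def match_score(query, reference):
    # fraction of the query's hashes found in the reference fingerprint
    return len(query & reference) / max(1, len(query))

# two one-second "recordings" at 8 kHz: a 440 Hz tone and an 880 Hz tone
sr = 8000
t = np.arange(sr) / sr
fp_a = fingerprint(np.sin(2 * np.pi * 440 * t))
fp_b = fingerprint(np.sin(2 * np.pi * 880 * t))
```

    A recording matches its own fingerprint perfectly, while an unrelated recording shares essentially no hashes; a rights society could scan broadcast streams against such a reference database to log plays.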

  13. MEG dual scanning: a procedure to study real-time auditory interaction between two persons

    PubMed Central

    Baess, Pamela; Zhdanov, Andrey; Mandel, Anne; Parkkonen, Lauri; Hirvenkari, Lotta; Mäkelä, Jyrki P.; Jousmäki, Veikko; Hari, Riitta

    2012-01-01

    Social interactions fill our everyday life and put strong demands on our brain function. However, the possibilities for studying the brain basis of social interaction are still technically limited, and even modern brain imaging studies of social cognition typically monitor just one participant at a time. We present here a method to connect and synchronize two faraway neuromagnetometers. With this method, two participants at two separate sites can interact with each other through a stable real-time audio connection with minimal delay and jitter. The magnetoencephalographic (MEG) and audio recordings of both laboratories are accurately synchronized for joint offline analysis. The concept can be extended to connecting multiple MEG devices around the world. As a proof of concept of the MEG-to-MEG link, we report the results of time-sensitive recordings of cortical evoked responses to sounds delivered at laboratories separated by 5 km. PMID:22514530

  14. High-speed network for delivery of education-on-demand

    NASA Astrophysics Data System (ADS)

    Cordero, Carlos; Harris, Dale; Hsieh, Jeff

    1996-03-01

    A project to investigate the feasibility of delivering on-demand distance education to the desktop, known as the Asynchronous Distance Education ProjecT (ADEPT), is presently being carried out. A set of Stanford engineering classes is digitized on PC, Macintosh, and UNIX platforms, and is made available on servers. Students on campus and in industry may then access class material on these servers via local and metropolitan area networks. Students can download class video and audio, encoded in QuickTime™ and Show-Me TV™ formats, via file-transfer protocol or the World Wide Web. Alternatively, they may stream a vector-quantized version of the class directly from a server for real-time playback. Students may also download PostScript™ and Adobe Acrobat™ versions of class notes. Off-campus students may connect to ADEPT servers via the Internet, the Silicon Valley Test Track (SVTT), or the Bay-Area Gigabit Network (BAGNet). The SVTT and BAGNet are high-speed metropolitan-area networks, spanning the Bay Area, which provide IP access over asynchronous transfer mode (ATM). Student interaction is encouraged through news groups, electronic mailing lists, and an ADEPT home page. Issues related to having multiple platforms and interoperability are examined in this paper. The ramifications of providing a reliable service are discussed. System performance and the parameters that affect it are then described. Finally, future work on expanding ATM access, real-time delivery of classes, and enhanced student interaction is described.

  15. Space Operations Learning Center

    NASA Technical Reports Server (NTRS)

    Lui, Ben; Milner, Barbara; Binebrink, Dan; Kuok, Heng

    2012-01-01

    The Space Operations Learning Center (SOLC) is a tool that provides an online learning environment where students can learn science, technology, engineering, and mathematics (STEM) through a series of training modules. SOLC is also an effective medium for NASA to showcase its contributions to the general public. SOLC is a Web-based environment with a learning platform for students to understand STEM through interactive modules in various engineering topics. SOLC is unique in its approach to developing learning materials that teach school-aged students the basic concepts of space operations. SOLC utilizes the latest Web and software technologies to present this educational content in a fun and engaging way for all grade levels. SOLC uses animations, streaming video, cartoon characters, audio narration, interactive games, and more to deliver educational concepts. The Web portal organizes all of these training modules in an easily accessible way for visitors worldwide. SOLC provides multiple training modules on various topics. At the time of this reporting, seven modules have been developed: Space Communication, Flight Dynamics, Information Processing, Mission Operations, Kids Zone 1, Kids Zone 2, and Save The Forest. The first four modules each contain three components: Flight Training, Flight License, and Fly It! Kids Zone 1 and 2 include a number of educational videos and games designed specifically for grades K-6. Save The Forest is a space operations mission with four simulations and activities to complete, optimized for new touch screen technology. The Kids Zone 1 module has recently been ported to Facebook to attract a wider audience.

  16. Audio in Courseware: Design Knowledge Issues.

    ERIC Educational Resources Information Center

    Aarntzen, Diana

    1993-01-01

    Considers issues that need to be addressed when incorporating audio in courseware design. Topics discussed include functions of audio in courseware; the relationship between auditive and visual information; learner characteristics in relation to audio; events of instruction; and audio characteristics, including interactivity and speech technology.…

  17. A Virtual Audio Guidance and Alert System for Commercial Aircraft Operations

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.; Shrum, Richard; Miller, Joel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    Our work in virtual reality systems at NASA Ames Research Center includes the area of aurally-guided visual search, using specially-designed audio cues and spatial audio processing (also known as virtual or "3-D audio") techniques (Begault, 1994). Previous studies at Ames had revealed that use of 3-D audio for Traffic Collision Avoidance System (TCAS) advisories significantly reduced head-down time, compared to a head-down map display (0.5 sec advantage) or no display at all (2.2 sec advantage) (Begault, 1993, 1995; Begault & Pittman, 1994; see Wenzel, 1994, for an audio demo). Since the crew must keep their head up and looking out the window as much as possible when taxiing under low-visibility conditions, and the potential for "blunder" is increased under such conditions, it was sensible to evaluate the audio spatial cueing for a prototype audio ground collision avoidance warning (GCAW) system, and a 3-D audio guidance system. Results were favorable for GCAW, but not for the audio guidance system.

  18. The priming function of in-car audio instruction.

    PubMed

    Keyes, Helen; Whitmore, Antony; Naneva, Stanislava; McDermott, Daragh

    2018-05-01

    Studies to date have focused on the priming power of visual road signs, but not the priming potential of audio road scene instruction. Here, the relative priming power of visual, audio, and multisensory road scene instructions was assessed. In a lab-based study, participants responded to target road scene turns following visual, audio, or multisensory road turn primes that were congruent or incongruent with the targets in direction, or control primes. All types of instruction (visual, audio, and multisensory) were successful in priming responses to a road scene. Responses to multisensory-primed targets (both audio and visual) were faster than responses to either audio or visual primes alone. Incongruent audio primes did not affect performance negatively in the manner of incongruent visual or multisensory primes. Results suggest that audio instructions have the potential to prime drivers to respond quickly and safely to their road environment. Peak performance will be observed if audio and visual road instruction primes can be timed to co-occur.

  19. Audio-visual interactions in environment assessment.

    PubMed

    Preis, Anna; Kociński, Jędrzej; Hafke-Dys, Honorata; Wrzosek, Małgorzata

    2015-08-01

    The aim of the study was to examine how visual and audio information influences audio-visual environment assessment. Original audio-visual recordings were made at seven different places in the city of Poznań. Participants of the psychophysical experiments were asked to rate, on a numerical standardized scale, the degree of comfort they would feel if they were in such an environment. The assessments of audio-visual comfort were carried out in a laboratory in four different conditions: (a) audio samples only, (b) original audio-visual samples, (c) video samples only, and (d) mixed audio-visual samples. The general results of this experiment showed a significant difference between the investigated conditions, but not for all the investigated samples. There was a significant improvement in comfort assessment when visual information was added (in only three out of seven cases), when conditions (a) and (b) were compared. On the other hand, the results show that the comfort assessment of audio-visual samples could be changed by manipulating the audio rather than the video part of the audio-visual sample. Finally, it seems that people could differentiate audio-visual representations of a given place in the environment based on the composition of sound sources rather than on the sound level. Object identification is responsible for both landscape and soundscape grouping. Copyright © 2015. Published by Elsevier B.V.

  20. The CloudBoard Research Platform: an interactive whiteboard for corporate users

    NASA Astrophysics Data System (ADS)

    Barrus, John; Schwartz, Edward L.

    2013-03-01

    Over one million interactive whiteboards (IWBs) are sold annually worldwide, predominantly for classroom use, with few sales for corporate use. Unmet needs for corporate IWB use were investigated, and the CloudBoard Research Platform (CBRP) was developed to investigate and test technology for meeting these needs. The CBRP supports audio conferencing with shared remote drawing activity, casual capture of whiteboard activity for long-term storage and retrieval, use of standard formats such as PDF for easy import of documents via the web and email, and easy export of documents. Company RFID badges and key fobs provide secure access to documents at the board, and automatic logout occurs after a period of inactivity. Users manage their documents with a web browser. Analytics and remote device management are provided for administrators. The IWB hardware consists of off-the-shelf components (a Hitachi UST Projector, SMART Technologies, Inc. IWB hardware, Mac Mini, Polycom speakerphone, etc.) and a custom occupancy sensor. The three back-end servers provide the web interface, document storage, and stroke and audio streaming. Ease of use, security, and robustness sufficient for internal adoption were achieved. Five of the 10 boards installed at various Ricoh sites have been in daily or weekly use for the past year, and total system downtime was less than an hour in 2012. Since CBRP was installed, 65 registered users, 9 of whom use the system regularly, have created over 2600 documents.

  1. Construction and updating of event models in auditory event processing.

    PubMed

    Huff, Markus; Maurer, Annika E; Brich, Irina; Pagenkopf, Anne; Wickelmaier, Florian; Papenmeier, Frank

    2018-02-01

    Humans segment the continuous stream of sensory information into distinct events at points of change. Between 2 events, humans perceive an event boundary. Current theories propose that changes in the sensory information trigger updating processes of the present event model. Increased encoding effort finally leads to a memory benefit at event boundaries. Evidence from reading-time studies (increased reading times with an increasing amount of change) suggests that updating of event models is incremental. We present results from 5 experiments that studied event processing (including memory formation processes and reading times) using an audio drama as well as a transcript thereof as stimulus material. Experiments 1a and 1b replicated the event boundary advantage effect for memory. In contrast to recent evidence from studies using visual stimulus material, Experiments 2a and 2b found no support for incremental updating with normally sighted and blind participants for recognition memory. In Experiment 3, we replicated Experiment 2a using a written transcript of the audio drama as stimulus material, allowing us to disentangle encoding and retrieval processes. Our results indicate incremental updating processes at encoding (as measured with reading times). At the same time, we again found recognition performance to be unaffected by the amount of change. We discuss these findings in light of current event cognition theories. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  2. The cortical representation of the speech envelope is earlier for audiovisual speech than audio speech.

    PubMed

    Crosse, Michael J; Lalor, Edmund C

    2014-04-01

    Visual speech can greatly enhance a listener's comprehension of auditory speech when they are presented simultaneously. Efforts to determine the neural underpinnings of this phenomenon have been hampered by the limited temporal resolution of hemodynamic imaging and the fact that EEG and magnetoencephalographic data are usually analyzed in response to simple, discrete stimuli. Recent research has shown that neuronal activity in human auditory cortex tracks the envelope of natural speech. Here, we exploit this finding by estimating a linear forward-mapping between the speech envelope and EEG data and show that the latency at which the envelope of natural speech is represented in cortex is shortened by >10 ms when continuous audiovisual speech is presented compared with audio-only speech. In addition, we use a reverse-mapping approach to reconstruct an estimate of the speech stimulus from the EEG data and, by comparing the bimodal estimate with the sum of the unimodal estimates, find no evidence of any nonlinear additive effects in the audiovisual speech condition. These findings point to an underlying mechanism that could account for enhanced comprehension during audiovisual speech. Specifically, we hypothesize that low-level acoustic features that are temporally coherent with the preceding visual stream may be synthesized into a speech object at an earlier latency, which may provide an extended period of low-level processing before extraction of semantic information.
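
    The linear forward-mapping between the speech envelope and EEG is, in essence, a temporal response function fit over a set of time lags. A minimal synthetic sketch using ridge regression follows; the lag window, simulated response kernel, and noise level are arbitrary assumptions, and this is not the authors' analysis code:

```python
import numpy as np

def lagged_design(stim, n_lags):
    # columns are time-shifted copies of the stimulus (lags 0..n_lags-1)
    X = np.zeros((len(stim), n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stim[:len(stim) - lag]
    return X

def fit_trf(stim, eeg, n_lags, ridge=1.0):
    """Forward mapping stim -> eeg by ridge-regularized least squares."""
    X = lagged_design(stim, n_lags)
    w = np.linalg.solve(X.T @ X + ridge * np.eye(n_lags), X.T @ eeg)
    return w, X

# simulate EEG as a lagged linear response to the envelope plus noise
rng = np.random.default_rng(0)
stim = rng.standard_normal(2000)
eeg = np.zeros(2000)
eeg[2:] += 0.5 * stim[:-2]      # true response at lag 2
eeg[5:] -= 0.3 * stim[:-5]      # true response at lag 5
eeg += 0.05 * rng.standard_normal(2000)

w, X = fit_trf(stim, eeg, n_lags=8)
pred = X @ w
```

    The recovered weights peak at the true lags, and the reverse (decoding) direction the abstract mentions is the same regression with stimulus and response swapped.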

  3. Performance evaluation of wavelet-based face verification on a PDA recorded database

    NASA Astrophysics Data System (ADS)

    Sellahewa, Harin; Jassim, Sabah A.

    2006-05-01

    The rise of international terrorism and the rapid increase in fraud and identity theft have added urgency to the task of developing biometric-based person identification as a reliable alternative to conventional authentication methods. Human identification based on face images is a tough challenge in comparison to identification based on fingerprints or iris recognition. Yet, due to its unobtrusive nature, face recognition is the preferred method of identification for security-related applications. The success of such systems will depend on the support of massive infrastructures. Current mobile communication devices (3G smart phones) and PDAs are equipped with a camera that can capture both still images and streaming video clips, and a touch-sensitive display panel. Besides convenience, such devices provide an adequate secure infrastructure for sensitive and financial transactions, by protecting against fraud and repudiation while ensuring accountability. Biometric authentication systems for mobile devices would have obvious advantages in conflict scenarios where communication from beyond enemy lines is essential to save soldier and civilian lives. In areas of conflict or disaster, the luxury of fixed infrastructure is not available or is destroyed. In this paper, we present a wavelet-based face verification scheme that has been specifically designed and implemented on a currently available PDA. We report on its performance on the benchmark audio-visual BANCA database and on a newly developed PDA-recorded audio-visual database that includes indoor and outdoor recordings.

  4. Methods of natural gas liquefaction and natural gas liquefaction plants utilizing multiple and varying gas streams

    DOEpatents

    Wilding, Bruce M; Turner, Terry D

    2014-12-02

    A method of natural gas liquefaction may include cooling a gaseous NG process stream to form a liquid NG process stream. The method may further include directing a first tail gas stream out of a plant at a first pressure and directing a second tail gas stream out of the plant at a second pressure. An additional method of natural gas liquefaction may include separating CO.sub.2 from a liquid NG process stream and processing the CO.sub.2 to provide a CO.sub.2 product stream. Another method of natural gas liquefaction may include combining a marginal gaseous NG process stream with a secondary substantially pure NG stream to provide an improved gaseous NG process stream. Additionally, a NG liquefaction plant may include a first tail gas outlet and at least a second tail gas outlet, the at least a second tail gas outlet separate from the first tail gas outlet.

  5. 47 CFR 73.403 - Digital audio broadcasting service requirements.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 4 2012-10-01 2012-10-01 false Digital audio broadcasting service requirements... SERVICES RADIO BROADCAST SERVICES Digital Audio Broadcasting § 73.403 Digital audio broadcasting service requirements. (a) Broadcast radio stations using IBOC must transmit at least one over-the-air digital audio...

  6. 47 CFR 73.403 - Digital audio broadcasting service requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 4 2011-10-01 2011-10-01 false Digital audio broadcasting service requirements... SERVICES RADIO BROADCAST SERVICES Digital Audio Broadcasting § 73.403 Digital audio broadcasting service requirements. (a) Broadcast radio stations using IBOC must transmit at least one over-the-air digital audio...

  7. 47 CFR 73.403 - Digital audio broadcasting service requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 4 2014-10-01 2014-10-01 false Digital audio broadcasting service requirements... SERVICES RADIO BROADCAST SERVICES Digital Audio Broadcasting § 73.403 Digital audio broadcasting service requirements. (a) Broadcast radio stations using IBOC must transmit at least one over-the-air digital audio...

  8. 47 CFR 73.403 - Digital audio broadcasting service requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 4 2013-10-01 2013-10-01 false Digital audio broadcasting service requirements... SERVICES RADIO BROADCAST SERVICES Digital Audio Broadcasting § 73.403 Digital audio broadcasting service requirements. (a) Broadcast radio stations using IBOC must transmit at least one over-the-air digital audio...

  9. SIMULATING SUB-DECADAL CHANNEL MORPHOLOGIC CHANGE IN EPHEMERAL STREAM NETWORKS

    EPA Science Inventory

    A distributed watershed model was modified to simulate cumulative channel morphologic
    change from multiple runoff events in ephemeral stream networks. The model incorporates the general design of the event-based Kinematic Runoff and Erosion Model (KINEROS), which describes t...

  10. Continuous monitoring reveals multiple controls on ecosystem metabolism in a suburban stream.

    EPA Science Inventory

    Ecosystem metabolism is an important mechanism for nutrient retention in streams, yet few studies have investigated temporal patterns in gross primary production (GPP) and ecosystem respiration (ER) using high-frequency measurements. This is a potentially important oversig...

  11. Modeling Flow and Pollutant Transport in a Karst Watershed with SWAT

    USDA-ARS?s Scientific Manuscript database

    Karst hydrology is characterized by multiple springs, sinkholes, and losing streams resulting from acidic water percolating through limestone. These features provide direct connections between surface water and groundwater and increase the risk of groundwater, springs and stream contamination. Anthr...

  12. Focus on the post-DVD formats

    NASA Astrophysics Data System (ADS)

    He, Hong; Wei, Jingsong

    2005-09-01

    As digital TV (DTV) technologies develop rapidly across the standard system, hardware, software models, and interfaces between DTV and the home network, worldwide broadcasting of High Definition TV (HDTV) programs is scheduled. Enjoying high-quality TV programs at home is no longer a far-off dream. As for the main recording media, which optical storage technology will become the mainstream for meeting HDTV requirements is a great concern. At present, a few kinds of post-DVD formats are competing on technology, standards, and market. Here we give a review of the co-existing post-DVD formats in the world. We discuss the basic parameters for the optical disk, the video/audio coding strategy, and system performance for HDTV programs.

  13. KSC-2012-5017

    NASA Image and Video Library

    2012-09-06

    CAPE CANAVERAL, Fla. – During NASA's Innovation Expo at the Kennedy Space Center in Florida, William Merrill, of NASA's Communications Infrastructure Services Division, proposes an innovation that would make mission audio available by way of an Internet radio stream. Kennedy Kick-Start Chair Mike Conroy looks on from the left. As Kennedy continues developing programs and infrastructure to become a 21st century spaceport, many employees are devising ways to do their jobs better and more efficiently. On Sept. 6, 2012, 16 Kennedy employees pitched their innovative ideas for improving the center at the Kennedy Kick-Start event. The competition was part of a center-wide effort designed to increase exposure for innovative ideas and encourage their implementation. For more information, visit http://www.nasa.gov/centers/kennedy/news/kick-start_competition.html Photo credit: NASA/Gianni Woods

  14. Maximizing ship-to-shore connections via telepresence technologies

    NASA Astrophysics Data System (ADS)

    Fundis, A. T.; Kelley, D. S.; Proskurowski, G.; Delaney, J. R.

    2012-12-01

    Live connections to offshore oceanographic research via telepresence technologies enable onshore scientists, students, and the public to observe and participate in active research as it is happening. As part of the ongoing construction effort of the NSF's Ocean Observatories Initiative's cabled network, the VISIONS'12 expedition included a wide breadth of activities to allow the public, students, and scientists to interact with a sea-going expedition. Here we describe our successes and lessons learned in engaging these onshore audiences through the various outreach efforts employed during the expedition including: 1) live high-resolution video and audio streams from the seafloor and ship; 2) live connections to science centers, aquaria, movie theaters, and undergraduate classrooms; 3) social media interactions; and 4) an onboard immersion experience for undergraduate and graduate students.

  15. [Intermodal timing cues for audio-visual speech recognition].

    PubMed

    Hashimoto, Masahiro; Kumashiro, Masaharu

    2004-06-01

    The purpose of this study was to investigate the limitations of lip-reading advantages for Japanese young adults by desynchronizing visual and auditory information in speech. In the experiment, audio-visual speech stimuli were presented under six test conditions: audio-alone, and audio-visual with either 0, 60, 120, 240 or 480 ms of audio delay. The stimuli were video recordings of the face of a female Japanese speaker producing long and short Japanese sentences. The intelligibility of the audio-visual stimuli was measured as a function of audio delay in sixteen untrained young subjects. Speech intelligibility under audio-delay conditions of less than 120 ms was significantly better than that under the audio-alone condition. Notably, the delay of 120 ms corresponded to the mean mora duration measured for the audio stimuli. The results implied that audio delays of up to 120 ms would not disrupt the lip-reading advantage, because visual and auditory information in speech seemed to be integrated on a syllabic time scale. Potential applications of this research include noisy workplaces in which a worker must extract relevant speech from all the other competing noises.

  16. How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding

    PubMed Central

    Desantis, Andrea; Haggard, Patrick

    2016-01-01

    To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events. PMID:27982063

  17. How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding.

    PubMed

    Desantis, Andrea; Haggard, Patrick

    2016-12-16

    To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events.

  18. Behavioral Modeling of Adversaries with Multiple Objectives in Counterterrorism.

    PubMed

    Mazicioglu, Dogucan; Merrick, Jason R W

    2018-05-01

    Attacker/defender models have primarily assumed that each decisionmaker optimizes the cost of the damage inflicted and its economic repercussions from their own perspective. Two streams of recent research have sought to extend such models. One stream suggests that it is more realistic to consider attackers with multiple objectives, but this research has not included the adaption of the terrorist with multiple objectives to defender actions. The other stream builds off experimental studies that show that decisionmakers deviate from optimal rational behavior. In this article, we extend attacker/defender models to incorporate multiple objectives that a terrorist might consider in planning an attack. This includes the tradeoffs that a terrorist might consider and their adaption to defender actions. However, we must also consider experimental evidence of deviations from the rationality assumed in the commonly used expected utility model in determining such adaption. Thus, we model the attacker's behavior using multiattribute prospect theory to account for the attacker's multiple objectives and deviations from rationality. We evaluate our approach by considering an attacker with multiple objectives who wishes to smuggle radioactive material into the United States and a defender who has the option to implement a screening process to hinder the attacker. We discuss the problems with implementing such an approach, but argue that research in this area must continue to avoid misrepresenting terrorist behavior in determining optimal defensive actions. © 2017 Society for Risk Analysis.

  19. Mapping longitudinal stream connectivity in the North St. Vrain Creek watershed of Colorado

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wohl, Ellen; Rathburn, Sara; Chignell, Stephen

    We use reach-scale stream gradient as an indicator of longitudinal connectivity for water, sediment, and organic matter in a mountainous watershed in Colorado. Stream reaches with the highest gradient tend to have narrow valley bottoms with limited storage space and attenuation of downstream fluxes, whereas stream reaches with progressively lower gradients have progressively more storage and greater attenuation. We compared the distribution of stream gradient to stream-reach connectivity rankings that incorporated multiple potential control variables, including lithology, upland vegetation, hydroclimatology, road crossings, and flow diversions. We then assessed connectivity rankings using different weighting schemes against stream gradient and against field-based understanding of relative connectivity within the watershed. Here, we conclude that stream gradient, which is simple to map using publicly available data and digital elevation models, is the most robust indicator of relative longitudinal connectivity within the river network.
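    The core computation described in this record is simple: reach-scale gradient is the elevation drop over the reach length, and reaches are ranked by it as a connectivity proxy. A minimal sketch, with invented reach data (the study's actual reaches and DEM workflow are not shown here):

```python
# Hypothetical sketch: rank stream reaches by gradient as a proxy for
# longitudinal connectivity. Reach lengths and elevations (in meters)
# are invented for illustration only.

def reach_gradient(elev_upstream, elev_downstream, length_m):
    """Reach-scale gradient: elevation drop divided by channel length."""
    return (elev_upstream - elev_downstream) / length_m

reaches = {
    "R1": (3200.0, 3150.0, 800.0),   # steep headwater reach
    "R2": (3150.0, 3130.0, 1200.0),  # moderate reach
    "R3": (3130.0, 3125.0, 2000.0),  # low-gradient valley reach
}

gradients = {rid: reach_gradient(*vals) for rid, vals in reaches.items()}

# Higher gradient -> less valley-bottom storage, greater connectivity.
ranked = sorted(gradients, key=gradients.get, reverse=True)
print(ranked)  # ['R1', 'R2', 'R3']
```

    In practice the elevations would come from a DEM sampled at reach endpoints, as the record notes.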

  20. Mapping longitudinal stream connectivity in the North St. Vrain Creek watershed of Colorado

    DOE PAGES

    Wohl, Ellen; Rathburn, Sara; Chignell, Stephen; ...

    2016-05-06

    We use reach-scale stream gradient as an indicator of longitudinal connectivity for water, sediment, and organic matter in a mountainous watershed in Colorado. Stream reaches with the highest gradient tend to have narrow valley bottoms with limited storage space and attenuation of downstream fluxes, whereas stream reaches with progressively lower gradients have progressively more storage and greater attenuation. We compared the distribution of stream gradient to stream-reach connectivity rankings that incorporated multiple potential control variables, including lithology, upland vegetation, hydroclimatology, road crossings, and flow diversions. We then assessed connectivity rankings using different weighting schemes against stream gradient and against field-based understanding of relative connectivity within the watershed. Here, we conclude that stream gradient, which is simple to map using publicly available data and digital elevation models, is the most robust indicator of relative longitudinal connectivity within the river network.

  1. The power of digital audio in interactive instruction: An unexploited medium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pratt, J.; Trainor, M.

    1989-01-01

    Widespread use of audio in computer-based training (CBT) occurred with the advent of interactive videodisc technology. This paper discusses the alternative of digital audio, which, unlike videodisc audio, enables one to rapidly revise the audio used in the CBT and which may be used in non-video CBT applications as well. We also discuss techniques used in audio script writing, editing, and production. Results from evaluations indicate a high degree of user satisfaction. 4 refs.

  2. Making decisions in complex landscapes: Headwater stream management across multiple federal agencies

    USGS Publications Warehouse

    Katz, Rachel; Grant, Evan H. Campbell; Runge, Michael C.; Connery, Bruce; Crockett, Marquette; Herland, Libby; Johnson, Sheela; Kirk, Dawn; Wofford, Jeb; Bennett, Rick; Nislow, Keith; Norris, Marian; Hocking, Daniel; Letcher, Benjamin; Roy, Allison

    2014-01-01

    Headwater stream ecosystems are vulnerable to numerous threats associated with climate and land use change. In the northeastern US, many headwater stream species (e.g., brook trout and stream salamanders) are of special conservation concern and may be vulnerable to climate change influences, such as changes in stream temperature and streamflow. Federal land management agencies (e.g., US Fish and Wildlife Service, National Park Service, USDA Forest Service, Bureau of Land Management and Department of Defense) are required to adopt policies that respond to climate change and may have longer-term institutional support to enforce such policies compared to state, local, non-governmental, or private land managers. However, federal agencies largely make management decisions in regards to headwater stream ecosystems independently. This fragmentation of management resources and responsibilities across the landscape may significantly impede the efficiency and effectiveness of conservation actions, and higher degrees of collaboration may be required to achieve conservation goals. This project seeks to provide an example of cooperative landscape decision-making to address the conservation of headwater stream ecosystems. We identified shared and contrasting objectives of each federal agency and potential collaboration opportunities that may increase efficient and effective management of headwater stream ecosystems in two northeastern US watersheds. These workshops provided useful insights into the adaptive capacity of federal institutions to address threats to headwater stream ecosystems. Our ultimate goal is to provide a decision-making framework and analysis that addresses large-scale conservation threats across multiple stakeholders, as a demonstration of cooperative landscape conservation for aquatic ecosystems. Additionally, we aim to provide new scientific knowledge and a regional perspective to resource managers to help inform local management decisions.

  3. A catchment scale evaluation of multiple stressor effects in headwater streams.

    PubMed

    Rasmussen, Jes J; McKnight, Ursula S; Loinaz, Maria C; Thomsen, Nanna I; Olsson, Mikael E; Bjerg, Poul L; Binning, Philip J; Kronvang, Brian

    2013-01-01

    Mitigation activities to improve water quality and quantity in streams as well as stream management and restoration efforts are conducted in the European Union aiming to improve the chemical, physical and ecological status of streams. Headwater streams are often characterised by impairment of hydromorphological, chemical, and ecological conditions due to multiple anthropogenic impacts. However, they are generally disregarded as water bodies for mitigation activities in the European Water Framework Directive despite their importance for supporting a higher ecological quality in higher order streams. We studied 11 headwater streams in the Hove catchment in the Copenhagen region. All sites had substantial physical habitat and water quality impairments due to anthropogenic influence (intensive agriculture, urban settlements, contaminated sites and low base-flow due to water abstraction activities in the catchment). We aimed to identify the dominating anthropogenic stressors at the catchment scale causing ecological impairment of benthic macroinvertebrate communities and provide a rank-order of importance that could help in prioritising mitigation activities. We identified numerous chemical and hydromorphological impacts of which several were probably causing major ecological impairments, but we were unable to provide a robust rank-ordering of importance suggesting that targeted mitigation efforts on single anthropogenic stressors in the catchment are unlikely to have substantial effects on the ecological quality in these streams. The SPEcies At Risk (SPEAR) index explained most of the variability in the macroinvertebrate community structure, and notably, SPEAR index scores were often very low (<10% SPEAR abundance). An extensive re-sampling of a subset of the streams provided evidence that especially insecticides were probably essential contributors to the overall ecological impairment of these streams. 
Our results suggest that headwater streams should be considered in future management and mitigation plans. Catchment-based management is necessary because several anthropogenic stressors exceeded problematic thresholds, suggesting that more holistic approaches should be preferred. Copyright © 2012 Elsevier B.V. All rights reserved.
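    The SPEAR abundance figure quoted in the record above (<10% SPEAR abundance) is the share of total macroinvertebrate abundance contributed by taxa classified as "at risk" from pesticide exposure. A simplified illustration follows; note that the published SPEAR index applies a log-transform to abundances, and the taxa names and at-risk flags here are invented:

```python
# Simplified sketch of a SPEAR-style abundance share (not the official
# implementation, which log-transforms abundances). Sample data invented.

samples = {
    "Baetis":     (120, True),   # (abundance, classified at-risk?)
    "Chironomus": (800, False),
    "Gammarus":   (60,  True),
}

at_risk = sum(n for n, risk in samples.values() if risk)
total = sum(n for n, _ in samples.values())
spear_pct = 100.0 * at_risk / total
print(round(spear_pct, 1))  # 18.4
```

    A site scoring below 10% on such a metric, as several streams in the study did, indicates a community dominated by pesticide-tolerant taxa.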

  4. 47 CFR 11.51 - EAS code and Attention Signal Transmission requirements.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Message (EOM) codes using the EAS Protocol. The Attention Signal must precede any emergency audio message... audio messages. No Attention Signal is required for EAS messages that do not contain audio programming... EAS messages in the main audio channel. All DAB stations shall also transmit EAS messages on all audio...

  5. 47 CFR 11.51 - EAS code and Attention Signal Transmission requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Message (EOM) codes using the EAS Protocol. The Attention Signal must precede any emergency audio message... audio messages. No Attention Signal is required for EAS messages that do not contain audio programming... EAS messages in the main audio channel. All DAB stations shall also transmit EAS messages on all audio...

  6. 47 CFR 11.51 - EAS code and Attention Signal Transmission requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Message (EOM) codes using the EAS Protocol. The Attention Signal must precede any emergency audio message... audio messages. No Attention Signal is required for EAS messages that do not contain audio programming... EAS messages in the main audio channel. All DAB stations shall also transmit EAS messages on all audio...

  7. Communicative Competence in Audio Classrooms: A Position Paper for the CADE 1991 Conference.

    ERIC Educational Resources Information Center

    Burge, Liz

    Classroom practitioners need to move their attention away from the technological and logistical competencies required for audio conferencing (AC) to the required communicative competencies in order to advance their skills in handling the psychodynamics of audio virtual classrooms which include audio alone and audio with graphics. While the…

  8. The Audio Description as a Physics Teaching Tool

    ERIC Educational Resources Information Center

    Cozendey, Sabrina; Costa, Maria da Piedade

    2016-01-01

    This study analyses the use of audio description in teaching physics concepts, aiming to determine the variables that influence understanding of the concepts. One educational resource was audio-described. To produce the audio description, the screen was frozen. The video, with and without audio description, was to be presented to students, so that…

  9. Modeling stream network-scale variation in coho salmon overwinter survival and smolt size

    EPA Science Inventory

    We used multiple regression and hierarchical mixed-effects models to examine spatial patterns of overwinter survival and size at smolting in juvenile coho salmon Oncorhynchus kisutch in relation to habitat attributes across an extensive stream network in southwestern Oregon over ...

  10. Improving LUC estimation accuracy with multiple classification system for studying impact of urbanization on watershed flood

    NASA Astrophysics Data System (ADS)

    Dou, P.

    2017-12-01

    Guangzhou has experienced a rapid urbanization period called "small change in three years and big change in five years" since the reform of China, resulting in significant land use/cover changes (LUC). To overcome the disadvantages of a single classifier for remote sensing image classification accuracy, a multiple classifier system (MCS) is proposed to improve the quality of remote sensing image classification. The new method combines the advantages of different learning algorithms and achieves higher accuracy (88.12%) than any single classifier did. With the proposed MCS, land use/cover (LUC) on Landsat images from 1987 to 2015 was obtained, and the LUCs were used on three watersheds (Shijing river, Chebei stream, and Shahe stream) to estimate the impact of urbanization on watershed flooding. The results show that with the high-accuracy LUC, the uncertainty in flood simulations is reduced effectively (for Shijing river, Chebei stream, and Shahe stream, the uncertainty was reduced by 15.5%, 17.3% and 19.8% respectively).
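    The multiple classifier system described above combines the outputs of several learning algorithms. One common combination rule is per-pixel majority voting, sketched below; the classifiers, labels, and tie-breaking rule are illustrative assumptions, not the paper's actual setup:

```python
# Minimal sketch of a multiple classifier system (MCS) via majority
# voting over per-classifier land-cover labels. Illustrative only.
from collections import Counter

def majority_vote(predictions):
    """Return the most frequent label; ties broken by first occurrence."""
    counts = Counter(predictions)
    top = counts.most_common(1)[0][1]
    for label in predictions:
        if counts[label] == top:
            return label

# Three hypothetical classifiers label the same pixel:
pixel_preds = ["urban", "water", "urban"]
print(majority_vote(pixel_preds))  # urban
```

    More sophisticated MCS designs weight each classifier by its validated accuracy rather than voting equally.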

  11. Temporal Context in Speech Processing and Attentional Stream Selection: A Behavioral and Neural perspective

    PubMed Central

    Zion Golumbic, Elana M.; Poeppel, David; Schroeder, Charles E.

    2012-01-01

    The human capacity for processing speech is remarkable, especially given that information in speech unfolds over multiple time scales concurrently. Similarly notable is our ability to filter out extraneous sounds and focus our attention on one conversation, epitomized by the ‘Cocktail Party’ effect. Yet, the neural mechanisms underlying on-line speech decoding and attentional stream selection are not well understood. We review findings from behavioral and neurophysiological investigations that underscore the importance of the temporal structure of speech for achieving these perceptual feats. We discuss the hypothesis that entrainment of ambient neuronal oscillations to speech’s temporal structure, across multiple time scales, serves to facilitate its decoding and underlies the selection of an attended speech stream over other competing input. In this regard, speech decoding and attentional stream selection are examples of ‘active sensing’, emphasizing an interaction between proactive and predictive top-down modulation of neuronal dynamics and bottom-up sensory input. PMID:22285024

  12. Vegetation Structure and Function along Ephemeral Streams in the Sonoran Desert

    NASA Astrophysics Data System (ADS)

    Stromberg, J. C.; Katz, G.

    2011-12-01

    Despite being the most prevalent stream type in the American Southwest, far less is known about riparian ecosystems associated with ephemeral streams than with perennial streams. Patterns of plant composition and structure reflect complex environmental gradients, including water availability and flood intensity, which in turn are related to position in the stream network. A survey of washes in the Sonoran Desert near Tucson, Arizona showed species composition of small ephemeral washes to be comprised largely of upland species, including large seeded shrubs such as Acacia spp. and Larrea tridentata. Small seeded disturbance adapted xerophytic shrubs, such as Baccharis sarothroides, Hymenoclea monogyra and Isocoma tenuisecta, were common lower in the stream network on the larger streams that have greater scouring forces. Because ephemeral streams have multiple water sources, including deep (sometimes perched) water tables and seasonally variable rain and flood pulses, multiple plant functional types co-exist within a stream segment. Deep-rooted phreatophytes, including Tamarix and nitrogen-fixing Prosopis, are common on many washes. Such plants are able to access not only water, but also pools of nutrients, several meters below ground thereby affecting nutrient levels and soil moisture content in various soil strata. In addition to the perennial plants, many opportunistic and shallow-rooted annual species establish during the bimodal wet seasons. Collectively, wash vegetation serves to stabilize channel substrates and promote accumulation of fine sediments and organic matter. In addition to the many streams that are ephemeral over their length, ephemeral reaches also occupy extensive sections of interrupted perennial rivers. The differences in hydrologic conditions that occur over the length of interrupted perennial rivers influence plant species diversity and variability through time. 
In one study of three interrupted perennial rivers, patterns of herbaceous species richness varied with temporal scale of analysis, with richness being greater at perennial sites over the short-term but greater at non-perennial sites over the long-term (multiple seasons and years). This latter pattern arose owing to the abundance of light, space, and bare ground at the drier sites, combined with a diverse soil seed bank and periodic supply of seasonal soil moisture sufficient to stimulate establishment of cool-season as well as warm-season annuals. The reduced availability of perennial water sources limits the richness, cover, and competitive dominance of herbaceous perennial species, enabling pronounced diversity response to episodic water pulses in the drier river segments. Thus, non-perennial streams and reaches contribute importantly to river-wide and landscape scale desert riparian diversity, supporting high cumulative richness and distinct composition compared to perennial flow reaches.

  13. Ultraino: An Open Phased-Array System for Narrowband Airborne Ultrasound Transmission.

    PubMed

    Marzo, Asier; Corkett, Tom; Drinkwater, Bruce W

    2018-01-01

    Modern ultrasonic phased-array controllers are electronic systems capable of delaying the transmitted or received signals of multiple transducers. Configurable transmit-receive array systems, capable of electronic steering and shaping of the beam in near real-time, are available commercially, for example, for medical imaging. However, emerging applications, such as ultrasonic haptics, parametric audio, or ultrasonic levitation, require only a small subset of the capabilities provided by the existing controllers. To meet this need, we present Ultraino, a modular, inexpensive, and open platform that provides hardware, software, and example applications specifically aimed at controlling the transmission of narrowband airborne ultrasound. Our system is composed of software, driver boards, and arrays that enable users to quickly and efficiently perform research in various emerging applications. The software can be used to define array geometries, simulate the acoustic field in real time, and control the connected driver boards. The driver board design is based on an Arduino Mega and can control 64 channels with a square wave of up to 17 Vpp and π/5 phase resolution. Multiple boards can be chained together to increase the number of channels. The 40-kHz arrays with flat and spherical geometries are demonstrated for parametric audio generation, acoustic levitation, and haptic feedback.
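    The per-transducer delay computation at the heart of such a phased array is compact: to focus at a point, each element emits with a phase that cancels its propagation delay to that point. A hedged sketch follows; the geometry and speed of sound are illustrative, and this is not Ultraino's actual firmware code:

```python
# Sketch of phased-array focusing for a 40 kHz airborne array: each
# transducer's phase offsets its path length to the focal point so all
# waves arrive in phase. Array layout is invented for illustration.
import math

F = 40_000.0             # transducer frequency, Hz
C = 343.0                # speed of sound in air, m/s
K = 2 * math.pi * F / C  # wavenumber, rad/m

def focus_phases(transducer_xyz, focal_point):
    """Emission phase (radians, in [0, 2*pi)) per transducer."""
    fx, fy, fz = focal_point
    phases = []
    for (x, y, z) in transducer_xyz:
        d = math.sqrt((x - fx) ** 2 + (y - fy) ** 2 + (z - fz) ** 2)
        phases.append((-K * d) % (2 * math.pi))
    return phases

array = [(-0.01, 0.0, 0.0), (0.01, 0.0, 0.0)]  # two elements, 2 cm apart
print(focus_phases(array, (0.0, 0.0, 0.1)))    # equal phases, by symmetry
```

    A controller with π/5 phase resolution, as in the record above, would quantize these continuous phases to ten steps per cycle.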

  14. 47 CFR 73.322 - FM stereophonic sound transmission standards.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... transmission, modulation of the carrier by audio components within the baseband range of 50 Hz to 15 kHz shall... the carrier by audio components within the audio baseband range of 23 kHz to 99 kHz shall not exceed... method described in (a), must limit the modulation of the carrier by audio components within the audio...

  15. 47 CFR 73.322 - FM stereophonic sound transmission standards.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... transmission, modulation of the carrier by audio components within the baseband range of 50 Hz to 15 kHz shall... the carrier by audio components within the audio baseband range of 23 kHz to 99 kHz shall not exceed... method described in (a), must limit the modulation of the carrier by audio components within the audio...

  16. 47 CFR 73.322 - FM stereophonic sound transmission standards.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... transmission, modulation of the carrier by audio components within the baseband range of 50 Hz to 15 kHz shall... the carrier by audio components within the audio baseband range of 23 kHz to 99 kHz shall not exceed... method described in (a), must limit the modulation of the carrier by audio components within the audio...

  17. 47 CFR 73.322 - FM stereophonic sound transmission standards.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... transmission, modulation of the carrier by audio components within the baseband range of 50 Hz to 15 kHz shall... the carrier by audio components within the audio baseband range of 23 kHz to 99 kHz shall not exceed... method described in (a), must limit the modulation of the carrier by audio components within the audio...

  18. Twin Jet

    NASA Technical Reports Server (NTRS)

    Henderson, Brenda; Bozak, Rick

    2010-01-01

    Many subsonic and supersonic vehicles in the current fleet have multiple engines mounted near one another. Some future vehicle concepts may use innovative propulsion systems, such as distributed propulsion, which will result in multiple jets mounted in close proximity. Engine configurations with multiple jets have the ability to exploit jet-by-jet shielding, which may significantly reduce noise. Jet-by-jet shielding is the ability of one jet to shield noise that is emitted by another jet. The sensitivity of jet-by-jet shielding to jet spacing and simulated flight stream Mach number is not well understood. The current experiment investigates the impact of jet spacing, jet operating condition, and flight stream Mach number on the noise radiated from subsonic and supersonic twin jets.

  19. Fault-Tolerant and Elastic Streaming MapReduce with Decentralized Coordination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumbhare, Alok; Frincu, Marc; Simmhan, Yogesh

    2015-06-29

    The MapReduce programming model, due to its simplicity and scalability, has become an essential tool for processing large data volumes in distributed environments. Recent Stream Processing Systems (SPS) extend this model to provide low-latency analysis of high-velocity continuous data streams. However, integrating MapReduce with streaming poses challenges: first, runtime variations in data characteristics such as data rates and key distribution cause resource overload, which in turn leads to fluctuations in the Quality of Service (QoS); and second, the stateful reducers, whose state depends on the complete tuple history, necessitate efficient fault-recovery mechanisms to maintain the desired QoS in the presence of resource failures. We propose an integrated streaming MapReduce architecture leveraging the concept of consistent hashing to support runtime elasticity, along with locality-aware data and state replication to provide efficient load-balancing with low-overhead fault-tolerance and parallel fault-recovery from multiple simultaneous failures. Our evaluation on a private cloud shows up to 2.8x improvement in peak throughput compared to the Apache Storm SPS, and a low recovery latency of 700-1500 ms from multiple failures.
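    Consistent hashing, the mechanism the record above leverages for runtime elasticity, maps each key to the first node clockwise on a hash ring, so adding or removing a worker remaps only the keys near its ring positions. A generic sketch (not the authors' implementation; node names are invented):

```python
# Minimal consistent-hash ring with virtual nodes. Adding or removing a
# worker moves only the keys whose ring position falls in that worker's
# arcs, which is what makes elastic scale-out/in cheap.
import bisect
import hashlib

def _h(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes, vnodes=64):
        # Each node gets `vnodes` positions on the ring for balance.
        self._ring = sorted((_h(f"{n}#{i}"), n)
                            for n in nodes for i in range(vnodes))
        self._keys = [k for k, _ in self._ring]

    def node_for(self, key):
        # First ring position clockwise from the key's hash (wraps around).
        idx = bisect.bisect(self._keys, _h(key)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["worker-1", "worker-2", "worker-3"])
owner = ring.node_for("tuple-key-42")
print(owner in {"worker-1", "worker-2", "worker-3"})  # True
```

    In a streaming MapReduce setting, the key would be the reducer key, so reducer state migrates only for the remapped arcs when the cluster resizes.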

  20. Increasingly complex representations of natural movies across the dorsal stream are shared between subjects.

    PubMed

    Güçlü, Umut; van Gerven, Marcel A J

    2017-01-15

    Recently, deep neural networks (DNNs) have been shown to provide accurate predictions of neural responses across the ventral visual pathway. Here we explore whether they also provide accurate predictions of neural responses across the dorsal visual pathway, which is thought to be devoted to motion processing and action recognition. This is achieved by training deep neural networks to recognize actions in videos and subsequently using them to predict neural responses while subjects are watching natural movies. Moreover, we explore whether dorsal stream representations are shared between subjects. In order to address this question, we examine whether individual subject predictions can be made in a common representational space estimated via hyperalignment. Results show that a DNN trained for action recognition can be used to accurately predict how the dorsal stream responds to natural movies, revealing a correspondence between representations in DNN layers and dorsal stream areas. It is also demonstrated that models operating in a common representational space can generalize to responses of multiple or even unseen individual subjects to novel spatio-temporal stimuli in both encoding and decoding settings, suggesting that a common representational space underlies dorsal stream responses across multiple subjects.

  1. Video content parsing based on combined audio and visual information

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Kuo, C.-C. Jay

    1999-08-01

    While previous research on audiovisual data segmentation and indexing has focused primarily on the pictorial part, significant clues contained in the accompanying audio flow are often ignored. A fully functional system for video content parsing can be achieved more successfully through a proper combination of audio and visual information. By investigating the data structure of different video types, we present tools for both audio and visual content analysis and a scheme for video segmentation and annotation in this research. In the proposed system, video data are segmented into audio scenes and visual shots by detecting abrupt changes in audio and visual features, respectively. Each audio scene is then categorized and indexed as one of the basic audio types, while each visual shot is represented by keyframes and associated image features. An index table is then generated automatically for each video clip based on the integration of outputs from the audio and visual analysis. It is shown that the proposed system provides satisfactory video indexing results.
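    The change-detection step described above can be sketched as a simple thresholding scheme (illustrative only; the authors' actual detectors for audio scenes and visual shots are more elaborate):

```python
import numpy as np

def detect_boundaries(features, threshold):
    """Return indices where a new segment starts, i.e. where consecutive
    per-frame feature vectors (e.g. colour histograms for visual shots,
    short-time energy or spectra for audio scenes) change abruptly."""
    dists = np.linalg.norm(np.diff(features, axis=0), axis=1)
    return [i + 1 for i, d in enumerate(dists) if d > threshold]
```

    Running audio features and visual features through the same detector, each with its own threshold, yields the separate audio-scene and visual-shot boundaries that the index table then integrates.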

  2. Mercury and methylmercury stream concentrations in a Coastal Plain watershed: A multi-scale simulation analysis

    EPA Science Inventory

    Mercury is a ubiquitous global environmental toxicant responsible for most US fish advisories. Processes governing mercury concentrations in rivers and streams are not well understood, particularly at multiple spatial scales. We investigate how insights gained from reach-scale me...

  3. Importance of environmental factors on the richness and distribution of benthic macroinvertebrates in tropical headwater streams

    EPA Science Inventory

    It is essential to understand the interactions between local environmental factors (e.g., physical habitat and water quality) and aquatic assemblages to conserve biodiversity in tropical and subtropical headwater streams. Therefore, we evaluated the relative importance of multipl...

  4. SELECTING LEAST-DISTURBED SURVEY SITES FOR GREAT PLAINS STREAMS AND RIVERS

    EPA Science Inventory

    True reference condition probably does not exist for streams in highly utilized regions such as the Great Plains. Selecting least-disturbed sites for large regions is confounded by the association between human uses and natural gradients, and by multiple kinds of disturbance. U...

  5. Possibilities and Challenges for Modeling Flow and Pollutant Transport in a Karst Watershed with SWAT

    USDA-ARS?s Scientific Manuscript database

    Karst hydrology is characterized by multiple springs, sinkholes, and losing streams resulting from acidic water percolating through limestone. These features provide direct connections between surface water and groundwater and increase the risk of groundwater, spring and stream contamination. Anthro...

  6. Magnetic separator having a multilayer matrix, method and apparatus

    DOEpatents

    Kelland, David R.

    1980-01-01

    A magnetic separator having multiple staggered layers of porous magnetic material positioned to intercept a fluid stream carrying magnetic particles and so placed that a bypass of each layer is effected as the pores of the layer become filled with material extracted from the fluid stream.

  7. Comparing Audio and Video Data for Rating Communication

    PubMed Central

    Williams, Kristine; Herman, Ruth; Bontempo, Daniel

    2013-01-01

    Video recording has become increasingly popular in nursing research, adding rich nonverbal, contextual, and behavioral information. However, the benefits of video over audio data have not been well established. We compared communication ratings of audio versus video data using the Emotional Tone Rating Scale. Twenty raters watched video clips of nursing care and rated staff communication on 12 descriptors that reflect dimensions of person-centered and controlling communication. Another group rated audio-only versions of the same clips. Interrater consistency was high within each group, with ICC(2,1) = .91 for audio and .94 for video. Interrater consistency for both groups combined was also high, with ICC(2,1) = .95 for audio and video. Communication ratings using audio and video data were highly correlated. The assumption that video is superior to audio-recorded data should be evaluated when designing studies of nursing care. PMID:23579475
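    The consistency statistic reported here, ICC(2,1), is the two-way random-effects, absolute-agreement, single-rater intraclass correlation of Shrout and Fleiss (1979). As a minimal sketch (not the authors' analysis code), it can be computed directly from a subjects-by-raters ratings table:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater
    (Shrout & Fleiss, 1979). `ratings` is an (n subjects, k raters) table."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    msr = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)   # subjects
    msc = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)   # raters
    sse = np.sum((x - grand) ** 2) - msr * (n - 1) - msc * (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

    Unlike the consistency form ICC(3,1), this absolute-agreement form penalizes systematic rater offsets, which matters when raters score independently as they did here.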

  8. Present State and Prospects for the Meteor Research in Ukraine

    NASA Astrophysics Data System (ADS)

    Shulga, O.; Voloshchuk, Y.; Kolomiyets, S.; Cherkas, Y.; Kimakovskay, I.; Kimakovsky, S.; Knyazkova, E.; Kozyryev, Y.; Sybiryakova, Y.; Gorbanev, Y.; Stogneeva, I.; Shestopalov, V.; Kozak, P.; Rozhilo, O.; Taranukha, Y.

    2015-03-01

    ODESSA. Systematic study of meteor events has been carried out since 1953. In 2003 the observing technique was completely modernized, and a TV "meteor patrol" based on WATEC LCL902 cameras was created. A wide variety of mounts and objectives are used: from a Schmidt telescope F = 540 mm, F/D = 2.25 (field of view FOV = (0.68x0.51) deg, star limiting magnitude SLM = 13.5 mag, star astrometric accuracy 1-2 arcsec) up to fisheye lenses F = 8 mm, F/D = 3.5 (FOV = (36x49) deg, SLM = 7 mag). The database of observations collected between 2003 and 2012 consists of 6176 registered meteor events. Observational programs of baseline and non-baseline observations in Odessa (Kryzhanovka station) and on Zmeiny Island are presented. A software suite of 12 programs was created for processing meteor TV observations. It enables the whole cycle of data processing to be carried out, from image preprocessing up to determination of orbital elements. Major meteor-particle research directions: statistics, stream areas, precise stream radiants, orbital elements, physics of the phenomena, flare appearance, wakes, afterglow, chemistry, and density. KYIV. The meteor investigation group has been functioning for more than twenty years. Observations are carried out simultaneously from two points placed at a distance of 54 km. Super-isocon low-light camera tubes are used with photo lenses F = 50 mm, F/D = 1.5 (FOV = (23.5x19.0) deg, SLM = 9.5 mag) or F = 85 mm, F/D = 1.5 (FOV = (13x11) deg, SLM = 11.5 mag). Astrometry, photometry, calculation of the meteor trajectory in the Earth's atmosphere, and computation of the heliocentric orbit are realized in the in-house "Falling Star" software. KHARKOV. Meteor radio observations began in 1957. In 1972, the radiolocation system MARS, designed for automatic meteor registration, was recognized as the most sensitive system in the world.
    With the help of this system, 250,000 faint meteors (up to 12 mag) were registered between 1972 and 1978 (frequency 31.1 MHz, particle masses 10^-3 to 10^-6 g). Simultaneously, millions of reflections were registered for even fainter meteors (up to 14 mag). Information about 250,000 meteors and 5160 meteor streams is included in the database. This is unique material that can be used for hypothesis testing, as well as for the creation of new theories about meteor phenomena. Models of the meteor-matter distribution in the Earth's atmosphere, near-Earth space, and the Solar system, and of its influence on spacecraft surfaces, were developed. NIKOLAEV. Optical and radio observations of meteors began in 2011. Two WATEC LCL902 cameras are used with photo lenses F = 85 mm, F/D = 1.8 (FOV = (3.2x4.3) deg, SLM = 12 mag, star astrometric accuracy 1-6 arcsec). Original software was developed for automatic on-line detection of meteors in a video stream. During 2011, 105 meteor events were registered (with angular length (0.5-4.5) deg and brightness (1-5) mag). The error of determination of the meteor trajectory arc is ~(10-12) arcsec; the error of determination of the great-circle pole of the meteor trajectory is ~(3-13) arcmin. In the radio band, meteors are observed by registering the signal reflected from the meteor wake. The over-the-horizon FM station in Kielce (Poland) is used as a signal source. A narrow-beam antenna, a computer with a TV/FM tuner, and audio-recording software are used to perform the radio observations. Original software was developed for automatic detection of meteors in the audio stream.

  9. Development of Techniques for Multiple Data Stream Analysis and Short- Term Forecasting. Volume I. Multiple Data Stream Analysis

    DTIC Science & Technology

    1975-11-15

    ...This table shows great similarity between the NYT and TOL as follows; o...from which the data have been derived. The authors challenge the contention by other data collectors that variation in interaction data derived from... LIC Luxemburg LUX Malagasy MAG Malawi MAW Malaysia MAL Maldive MAD Mali MLI Malta MLT Mauritius MAR Mauritania MAU Mexico MEX Monaco MOC

  10. Predicting the Overall Spatial Quality of Automotive Audio Systems

    NASA Astrophysics Data System (ADS)

    Koya, Daisuke

    The spatial quality of automotive audio systems is often compromised due to their non-ideal listening environments. Automotive audio systems need to be developed quickly due to industry demands. A suitable perceptual model could evaluate the spatial quality of automotive audio systems with similar reliability to formal listening tests but take less time. Such a model is developed in this research project by adapting an existing model of spatial quality for automotive audio use. The requirements for the adaptation were investigated in a literature review. A perceptual model called QESTRAL was reviewed, which predicts the overall spatial quality of domestic multichannel audio systems. It was determined that automotive audio systems are likely to be impaired in terms of the spatial attributes that were not considered in developing the QESTRAL model, but metrics are available that might predict these attributes. To establish whether the QESTRAL model in its current form can accurately predict the overall spatial quality of automotive audio systems, MUSHRA listening tests using headphone auralisation with head tracking were conducted to collect results to be compared against predictions by the model. Based on guideline criteria, the model in its current form could not accurately predict the overall spatial quality of automotive audio systems. To improve prediction performance, the QESTRAL model was recalibrated and modified using existing metrics of the model, those that were proposed from the literature review, and newly developed metrics. The most important metrics for predicting the overall spatial quality of automotive audio systems included those that are interaural cross-correlation (IACC) based, relate to localisation of the frontal audio scene, and account for the perceived scene width in front of the listener. Modifying the model for automotive audio systems did not invalidate its use for domestic audio systems. 
    The resulting model predicts the overall spatial quality of 2- and 5-channel automotive audio systems with a cross-validation performance of R^2 = 0.85 and root-mean-square error (RMSE) = 11.03%.

  11. Exploring the Implementation of Steganography Protocols on Quantum Audio Signals

    NASA Astrophysics Data System (ADS)

    Chen, Kehan; Yan, Fei; Iliyasu, Abdullah M.; Zhao, Jianping

    2018-02-01

    Two quantum audio steganography (QAS) protocols are proposed, each of which manipulates or modifies the least significant qubit (LSQb) of a host quantum audio signal encoded as FRQA (flexible representation of quantum audio) content. The first protocol (the conventional LSQb QAS protocol, or simply the cLSQ stego protocol) is built on exchanges between the qubits encoding the quantum audio message and the LSQb of the amplitude information in the host quantum audio samples. The second protocol implants information from a quantum audio message deep into the constraint-imposed most significant qubit (MSQb) of the host quantum audio samples; we refer to it as the pseudo-MSQb QAS protocol, or simply the pMSQ stego protocol. The cLSQ stego protocol is designed to guarantee high imperceptibility between the host quantum audio and its stego version, whereas the pMSQ stego protocol ensures that the resulting stego quantum audio signal is better immune to illicit tampering and copyright violations (a.k.a. robustness). Built on the circuit model of quantum computation, the circuit networks to execute the embedding and extraction algorithms of both QAS protocols are determined, and simulation-based experiments are conducted to demonstrate their implementation. The outcomes attest that both protocols offer promising trade-offs in terms of imperceptibility and robustness.
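    The cLSQ protocol is the quantum counterpart of classical least-significant-bit (LSB) audio steganography. For intuition only, here is a classical LSB sketch on non-negative integer PCM samples (hypothetical helper names; the paper's protocols operate on FRQA-encoded quantum states, not classical samples):

```python
def lsb_embed(samples, bits):
    """Hide one message bit in the least significant bit of each sample.

    samples: non-negative integer PCM values; bits: 0/1 message bits.
    Each amplitude changes by at most 1, keeping the change inaudible.
    """
    stego = [(s & ~1) | b for s, b in zip(samples, bits)]
    return stego + list(samples[len(bits):])  # untouched tail

def lsb_extract(stego, n_bits):
    """Recover the first n_bits message bits from the stego samples."""
    return [s & 1 for s in stego[:n_bits]]
```

    The imperceptibility/robustness trade-off discussed in the abstract shows up even classically: the LSB is the least audible place to hide data, but also the first bit destroyed by requantization or tampering, which is what motivates the MSQb variant.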

  12. Disentangling the pathways of land use impacts on the functional structure of fish assemblages in Amazon streams

    EPA Science Inventory

    Agricultural land use is a primary driver of environmental impacts on streams. However, the causal processes that shape these impacts operate through multiple pathways and at several spatial scales. This complexity undermines the development of more effective management approache...

  13. REGRESSION MODELS THAT RELATE STREAMS TO WATERSHEDS: COPING WITH NUMEROUS, COLLINEAR PEDICTORS

    EPA Science Inventory

    GIS efforts can produce a very large number of watershed variables (climate, land use/land cover and topography, all defined for multiple areas of influence) that could serve as candidate predictors in a regression model of reach-scale stream features. Invariably, many of these ...

  14. Climate and Land-Cover Change Impacts on Stream Flow in the Southwest U.S.

    EPA Science Inventory

    Vegetation change in arid and semi-arid climatic regions of the American West are a primary concern in sustaining key ecosystem services such as clean, reliable water sources for multiple uses. Land cover and climate change impacts on stream flow were investigated in a southeast ...

  15. The Psychophysics of Contingency Assessment

    ERIC Educational Resources Information Center

    Allan, Lorraine G.; Hannah, Samuel D.; Crump, Matthew J. C.; Siegel, Shepard

    2008-01-01

    The authors previously described a procedure that permits rapid, multiple within-participant evaluations of contingency assessment (the "streamed-trial" procedure, M. J. C. Crump, S. D. Hannah, L. G. Allan, & L. K. Hord, 2007). In the present experiments, they used the streamed-trial procedure, combined with the method of constant stimuli and a…

  16. Comparing audio and video data for rating communication.

    PubMed

    Williams, Kristine; Herman, Ruth; Bontempo, Daniel

    2013-09-01

    Video recording has become increasingly popular in nursing research, adding rich nonverbal, contextual, and behavioral information. However, the benefits of video over audio data have not been well established. We compared communication ratings of audio versus video data using the Emotional Tone Rating Scale. Twenty raters watched video clips of nursing care and rated staff communication on 12 descriptors that reflect dimensions of person-centered and controlling communication. Another group rated audio-only versions of the same clips. Interrater consistency was high within each group, with Intraclass Correlation Coefficient ICC(2,1) = .91 for audio and .94 for video. Interrater consistency for both groups combined was also high, with ICC(2,1) = .95 for audio and video. Communication ratings using audio and video data were highly correlated. The assumption that video is superior to audio-recorded data should be evaluated when designing studies of nursing care.

  17. Influence of forest management on headwater stream amphibians at multiple spatial scales

    USGS Publications Warehouse

    Stoddard, Margo; Hayes, John P.; Erickson, Janet L.

    2004-01-01

    Background: Amphibians are important components of headwater streams in forest ecosystems of the Pacific Northwest (PNW). They comprise the highest vertebrate biomass and density in these systems and are integral to trophic dynamics both as prey and as predators. The most commonly encountered amphibians in PNW headwater streams include the Pacific giant salamander (Dicamptodon tenebrosus), the tailed frog (Ascaphus truei), the southern torrent salamander (Rhyacotriton variegatus), and the Columbia torrent salamander (R. kezeri).

  18. Occurrence, leaching, and degradation of Cry1Ab protein from transgenic maize detritus in agricultural streams

    DOE PAGES

    Griffiths, Natalie A.; Tank, Jennifer L.; Royer, Todd V.; ...

    2017-03-15

    The insecticidal Cry1Ab protein expressed by transgenic (Bt) maize can enter adjacent water bodies via multiple pathways, but its fate in stream ecosystems is not as well studied as in terrestrial systems. In this study, we used a combination of field sampling and laboratory experiments to examine the occurrence, leaching, and degradation of soluble Cry1Ab protein derived from Bt maize in agricultural streams. We surveyed 11 agricultural streams in northwestern Indiana, USA, on 6 dates that encompassed the growing season, crop harvest, and snowmelt/spring flooding, and detected Cry1Ab protein in the water column and in flowing subsurface tile drains at concentrations of 3–60 ng/L. In a series of laboratory experiments, submerged Bt maize leaves leached Cry1Ab into stream water with 1% of the protein remaining in leaves after 70 d. Laboratory experiments suggested that dissolved Cry1Ab protein degraded rapidly in microcosms containing water-column microorganisms, and light did not enhance breakdown by stimulating assimilatory uptake of the protein by autotrophs. Here, the common detection of Cry1Ab protein in streams sampled across an agricultural landscape, combined with laboratory studies showing rapid leaching and degradation, suggests that Cry1Ab may be pseudo-persistent at the watershed scale due to the multiple input pathways from the surrounding terrestrial environment.

  19. Occurrence, leaching, and degradation of Cry1Ab protein from transgenic maize detritus in agricultural streams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griffiths, Natalie A.; Tank, Jennifer L.; Royer, Todd V.

    The insecticidal Cry1Ab protein expressed by transgenic (Bt) maize can enter adjacent water bodies via multiple pathways, but its fate in stream ecosystems is not as well studied as in terrestrial systems. In this study, we used a combination of field sampling and laboratory experiments to examine the occurrence, leaching, and degradation of soluble Cry1Ab protein derived from Bt maize in agricultural streams. We surveyed 11 agricultural streams in northwestern Indiana, USA, on 6 dates that encompassed the growing season, crop harvest, and snowmelt/spring flooding, and detected Cry1Ab protein in the water column and in flowing subsurface tile drains at concentrations of 3–60 ng/L. In a series of laboratory experiments, submerged Bt maize leaves leached Cry1Ab into stream water with 1% of the protein remaining in leaves after 70 d. Laboratory experiments suggested that dissolved Cry1Ab protein degraded rapidly in microcosms containing water-column microorganisms, and light did not enhance breakdown by stimulating assimilatory uptake of the protein by autotrophs. Here, the common detection of Cry1Ab protein in streams sampled across an agricultural landscape, combined with laboratory studies showing rapid leaching and degradation, suggests that Cry1Ab may be pseudo-persistent at the watershed scale due to the multiple input pathways from the surrounding terrestrial environment.

  20. Multi-Scale, Direct and Indirect Effects of the Urban Stream Syndrome on Amphibian Communities in Streams

    PubMed Central

    Canessa, Stefano; Parris, Kirsten M.

    2013-01-01

    Urbanization affects streams by modifying hydrology, increasing pollution and disrupting in-stream and riparian conditions, leading to negative responses by biotic communities. Given the global trend of increasing urbanization, improved understanding of its direct and indirect effects at multiple scales is needed to assist management. The theory of stream ecology suggests that the riverscape and the surrounding landscape are inextricably linked, and watershed-scale processes will also affect in-stream conditions and communities. This is particularly true for species with semi-aquatic life cycles, such as amphibians, which transfer energy between streams and surrounding terrestrial areas. We related measures of urbanization at different scales to frog communities in streams along an urbanization gradient in Melbourne, Australia. We used boosted regression trees to determine the importance of predictors and the shape of species responses. We then used structural equation models to investigate possible indirect effects of watershed imperviousness on in-stream parameters. The proportion of riparian vegetation and road density surrounding the site at the reach scale (500-m radius) had positive and negative effects, respectively, on species richness and on the occurrence of the two most common species in the area (Crinia signifera and Limnodynastes dumerilii). Road density and local aquatic vegetation interacted in influencing species richness, suggesting that isolation of a site can prevent colonization, in spite of apparently good local habitat. Attenuated imperviousness at the catchment scale had a negative effect on local aquatic vegetation, indicating possible indirect effects on frog species not revealed by single-level models. Processes at the landscape scale, particularly related to individual ranging distances, can affect frog species directly and indirectly. 
Catchment imperviousness might not affect adult frogs directly, but by modifying hydrology it can disrupt local vegetation and prove indirectly detrimental. Integrating multiple-scale management actions may help to meet conservation targets for streams in the face of urbanization. PMID:23922963

  1. Revealing the ecological content of long-duration audio-recordings of the environment through clustering and visualisation.

    PubMed

    Phillips, Yvonne F; Towsey, Michael; Roe, Paul

    2018-01-01

    Audio recordings of the environment are an increasingly important technique to monitor biodiversity and ecosystem function. While the acquisition of long-duration recordings is becoming easier and cheaper, the analysis and interpretation of that audio remains a significant research area. The issue addressed in this paper is the automated reduction of environmental audio data to facilitate ecological investigations. We describe a method that first reduces environmental audio to vectors of acoustic indices, which are then clustered. This can reduce the audio data by six to eight orders of magnitude yet retain useful ecological information. We describe techniques to visualise sequences of cluster occurrence (using, for example, diel plots and rose plots) that assist interpretation of environmental audio. Colour-coding acoustic clusters allows months and years of audio data to be visualised in a single image. These techniques are useful in identifying and indexing the contents of long-duration audio recordings. They could also play an important role in monitoring long-term changes in species abundance brought about by habitat degradation and/or restoration.
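    The reduction pipeline ends with clustering the per-segment acoustic-index vectors. A toy version of that stage, using a tiny k-means on hypothetical index vectors (the paper's feature set, cluster count, and algorithm settings are not reproduced here):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Tiny k-means for clustering acoustic-index vectors.

    X: (n_segments, n_indices) float array, one row per audio segment.
    Returns per-segment cluster labels and the cluster centroids.
    """
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each segment to its nearest centroid ...
        labels = np.argmin(((X[:, None] - centroids) ** 2).sum(-1), axis=1)
        # ... then move each centroid to the mean of its members.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids
```

    Once each minute of audio carries a cluster label instead of raw samples, a year of recording collapses to a label sequence that can be colour-coded into the single-image visualisations the paper describes.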

  2. Revealing the ecological content of long-duration audio-recordings of the environment through clustering and visualisation

    PubMed Central

    Towsey, Michael; Roe, Paul

    2018-01-01

    Audio recordings of the environment are an increasingly important technique to monitor biodiversity and ecosystem function. While the acquisition of long-duration recordings is becoming easier and cheaper, the analysis and interpretation of that audio remains a significant research area. The issue addressed in this paper is the automated reduction of environmental audio data to facilitate ecological investigations. We describe a method that first reduces environmental audio to vectors of acoustic indices, which are then clustered. This can reduce the audio data by six to eight orders of magnitude yet retain useful ecological information. We describe techniques to visualise sequences of cluster occurrence (using, for example, diel plots and rose plots) that assist interpretation of environmental audio. Colour-coding acoustic clusters allows months and years of audio data to be visualised in a single image. These techniques are useful in identifying and indexing the contents of long-duration audio recordings. They could also play an important role in monitoring long-term changes in species abundance brought about by habitat degradation and/or restoration. PMID:29494629

  3. Unisensory processing and multisensory integration in schizophrenia: A high-density electrical mapping study

    PubMed Central

    Stone, David B.; Urrea, Laura J.; Aine, Cheryl J.; Bustillo, Juan R.; Clark, Vincent P.; Stephen, Julia M.

    2011-01-01

    In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept - a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to potentials from healthy normal volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder. PMID:21807011

  4. Flexible Robotic Entry Device for nuclear materials production reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heckendorn, F.M.

    1988-01-01

    The Savannah River Laboratory (SRL) has developed and is implementing a Flexible Robotic Entry Device (FRED) for the nuclear materials production reactors at the Savannah River Plant (SRP). FRED is designed for rapid deployment into confinement areas of operating reactors to assess unknown conditions. A unique "smart tether" method has been incorporated into FRED for simultaneous bidirectional transmission of multiple video/audio/control/power signals over a single coaxial cable. 3 figs.

  5. Evaluating local indirect addressing in SIMD processors

    NASA Technical Reports Server (NTRS)

    Middleton, David; Tomboulian, Sherryl

    1989-01-01

    In the design of parallel computers, there exists a tradeoff between the number and power of individual processors. The single instruction stream, multiple data stream (SIMD) model of parallel computers lies at one extreme of the resulting spectrum. The available hardware resources are devoted to creating the largest possible number of processors, and consequently each individual processor must use the fewest possible resources. Disagreement exists as to whether SIMD processors should be able to generate addresses individually into their local data memory, or all processors should access the same address. The tradeoff is examined between the increased capability and the reduced number of processors that occurs in this single instruction stream, multiple, locally addressed, data (SIMLAD) model. Factors are assembled that affect this design choice, and the SIMLAD model is compared with the bare SIMD and the MIMD models.
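    The addressing distinction under study can be made concrete with a toy simulation (hypothetical function names; Python lists stand in for per-processor local memories):

```python
def simd_load(local_mems, addr):
    """Plain SIMD: every processor loads the SAME broadcast address
    from its own local memory."""
    return [mem[addr] for mem in local_mems]

def simlad_load(local_mems, addrs):
    """SIMLAD: each processor generates its OWN local address, enabling
    per-processor indirection such as table lookups and gathers."""
    return [mem[a] for mem, a in zip(local_mems, addrs)]
```

    The extra address register and decode logic behind `simlad_load` is exactly the per-processor cost that, in the paper's tradeoff, reduces how many processors fit in the machine.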

  6. Holographic disk with high data transfer rate: its application to an audio response memory.

    PubMed

    Kubota, K; Ono, Y; Kondo, M; Sugama, S; Nishida, N; Sakaguchi, M

    1980-03-15

    This paper describes a memory that achieves a high data transfer rate using the holographic parallel-processing function, and its application to an audio response system that supplies many audio messages to many terminals simultaneously. Digitized audio messages are recorded as tiny 1-D Fourier-transform holograms on a holographic disk. A hologram recorder and a hologram reader were constructed to test and demonstrate the feasibility of the holographic audio response memory. Experimental results indicate the potential of an audio response system with a 2000-word vocabulary and a 250-Mbit/sec bit transfer rate.

  7. Classroom sound can be used to classify teaching practices in college science courses.

    PubMed

    Owens, Melinda T; Seidel, Shannon B; Wong, Mike; Bejines, Travis E; Lietz, Susanne; Perez, Joseph R; Sit, Shangheng; Subedar, Zahur-Saleh; Acker, Gigi N; Akana, Susan F; Balukjian, Brad; Benton, Hilary P; Blair, J R; Boaz, Segal M; Boyer, Katharyn E; Bram, Jason B; Burrus, Laura W; Byrd, Dana T; Caporale, Natalia; Carpenter, Edward J; Chan, Yee-Hung Mark; Chen, Lily; Chovnick, Amy; Chu, Diana S; Clarkson, Bryan K; Cooper, Sara E; Creech, Catherine; Crow, Karen D; de la Torre, José R; Denetclaw, Wilfred F; Duncan, Kathleen E; Edwards, Amy S; Erickson, Karen L; Fuse, Megumi; Gorga, Joseph J; Govindan, Brinda; Green, L Jeanette; Hankamp, Paul Z; Harris, Holly E; He, Zheng-Hui; Ingalls, Stephen; Ingmire, Peter D; Jacobs, J Rebecca; Kamakea, Mark; Kimpo, Rhea R; Knight, Jonathan D; Krause, Sara K; Krueger, Lori E; Light, Terrye L; Lund, Lance; Márquez-Magaña, Leticia M; McCarthy, Briana K; McPheron, Linda J; Miller-Sims, Vanessa C; Moffatt, Christopher A; Muick, Pamela C; Nagami, Paul H; Nusse, Gloria L; Okimura, Kristine M; Pasion, Sally G; Patterson, Robert; Pennings, Pleuni S; Riggs, Blake; Romeo, Joseph; Roy, Scott W; Russo-Tait, Tatiane; Schultheis, Lisa M; Sengupta, Lakshmikanta; Small, Rachel; Spicer, Greg S; Stillman, Jonathon H; Swei, Andrea; Wade, Jennifer M; Waters, Steven B; Weinstein, Steven L; Willsie, Julia K; Wright, Diana W; Harrison, Colin D; Kelley, Loretta A; Trujillo, Gloriana; Domingo, Carmen R; Schinske, Jeffrey N; Tanner, Kimberly D

    2017-03-21

    Active-learning pedagogies have been repeatedly demonstrated to produce superior learning gains with large effect sizes compared with lecture-based pedagogies. Shifting large numbers of college science, technology, engineering, and mathematics (STEM) faculty to include any active learning in their teaching may retain and more effectively educate far more students than having a few faculty completely transform their teaching, but the extent to which STEM faculty are changing their teaching methods is unclear. Here, we describe the development and application of the machine-learning-derived algorithm Decibel Analysis for Research in Teaching (DART), which can analyze thousands of hours of STEM course audio recordings quickly, with minimal costs, and without need for human observers. DART analyzes the volume and variance of classroom recordings to predict the quantity of time spent on single voice (e.g., lecture), multiple voice (e.g., pair discussion), and no voice (e.g., clicker question thinking) activities. Applying DART to 1,486 recordings of class sessions from 67 courses, a total of 1,720 h of audio, revealed varied patterns of lecture (single voice) and nonlecture activity (multiple and no voice) use. We also found that there was significantly more use of multiple and no voice strategies in courses for STEM majors compared with courses for non-STEM majors, indicating that DART can be used to compare teaching strategies in different types of courses. Therefore, DART has the potential to systematically inventory the presence of active learning with ∼90% accuracy across thousands of courses in diverse settings with minimal effort.
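DART itself is a machine-learned model, but the core idea of labeling classroom activity from the volume and variance of a recording can be illustrated with a toy threshold rule. The frame length, sub-window size, thresholds, and decision logic below are invented for illustration and are not taken from the paper:

```python
import numpy as np

def classify_frame(frame, sub=100, quiet_rms=0.05, env_var=0.01):
    """Label one audio frame as no/single/multiple voice.

    A toy sketch inspired by DART's volume and variance features; the
    thresholds and decision rule are illustrative, not the published
    machine-learned model. Frame length must be a multiple of `sub`.
    """
    rms = np.sqrt(np.mean(frame ** 2))                    # overall volume
    envelope = np.abs(frame).reshape(-1, sub).mean(axis=1)  # loudness over time
    if rms < quiet_rms:
        return "no voice"        # e.g. silent clicker-question thinking
    if np.var(envelope) > env_var:
        return "multiple voice"  # bursty loudness, e.g. pair discussion
    return "single voice"        # steady loudness, e.g. lecture

silence = np.zeros(1000)
lecture = 0.3 * np.ones(1000)   # steady level, single speaker
chatter = np.tile(np.r_[0.8 * np.ones(100), 0.05 * np.ones(100)], 5)
print([classify_frame(f) for f in (silence, lecture, chatter)])
# → ['no voice', 'single voice', 'multiple voice']
```

A real classifier would be trained on labeled recordings rather than hand-set thresholds, but the two features are the same ones the abstract names.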

  8. Classroom sound can be used to classify teaching practices in college science courses

    PubMed Central

    Seidel, Shannon B.; Wong, Mike; Bejines, Travis E.; Lietz, Susanne; Perez, Joseph R.; Sit, Shangheng; Subedar, Zahur-Saleh; Acker, Gigi N.; Akana, Susan F.; Balukjian, Brad; Benton, Hilary P.; Blair, J. R.; Boaz, Segal M.; Boyer, Katharyn E.; Bram, Jason B.; Burrus, Laura W.; Byrd, Dana T.; Caporale, Natalia; Carpenter, Edward J.; Chan, Yee-Hung Mark; Chen, Lily; Chovnick, Amy; Chu, Diana S.; Clarkson, Bryan K.; Cooper, Sara E.; Creech, Catherine; Crow, Karen D.; de la Torre, José R.; Denetclaw, Wilfred F.; Duncan, Kathleen E.; Edwards, Amy S.; Erickson, Karen L.; Fuse, Megumi; Gorga, Joseph J.; Govindan, Brinda; Green, L. Jeanette; Hankamp, Paul Z.; Harris, Holly E.; He, Zheng-Hui; Ingalls, Stephen; Ingmire, Peter D.; Jacobs, J. Rebecca; Kamakea, Mark; Kimpo, Rhea R.; Knight, Jonathan D.; Krause, Sara K.; Krueger, Lori E.; Light, Terrye L.; Lund, Lance; Márquez-Magaña, Leticia M.; McCarthy, Briana K.; McPheron, Linda J.; Miller-Sims, Vanessa C.; Moffatt, Christopher A.; Muick, Pamela C.; Nagami, Paul H.; Nusse, Gloria L.; Okimura, Kristine M.; Pasion, Sally G.; Patterson, Robert; Riggs, Blake; Romeo, Joseph; Roy, Scott W.; Russo-Tait, Tatiane; Schultheis, Lisa M.; Sengupta, Lakshmikanta; Small, Rachel; Spicer, Greg S.; Stillman, Jonathon H.; Swei, Andrea; Wade, Jennifer M.; Waters, Steven B.; Weinstein, Steven L.; Willsie, Julia K.; Wright, Diana W.; Harrison, Colin D.; Kelley, Loretta A.; Trujillo, Gloriana; Domingo, Carmen R.; Schinske, Jeffrey N.; Tanner, Kimberly D.

    2017-01-01

    Active-learning pedagogies have been repeatedly demonstrated to produce superior learning gains with large effect sizes compared with lecture-based pedagogies. Shifting large numbers of college science, technology, engineering, and mathematics (STEM) faculty to include any active learning in their teaching may retain and more effectively educate far more students than having a few faculty completely transform their teaching, but the extent to which STEM faculty are changing their teaching methods is unclear. Here, we describe the development and application of the machine-learning–derived algorithm Decibel Analysis for Research in Teaching (DART), which can analyze thousands of hours of STEM course audio recordings quickly, with minimal costs, and without need for human observers. DART analyzes the volume and variance of classroom recordings to predict the quantity of time spent on single voice (e.g., lecture), multiple voice (e.g., pair discussion), and no voice (e.g., clicker question thinking) activities. Applying DART to 1,486 recordings of class sessions from 67 courses, a total of 1,720 h of audio, revealed varied patterns of lecture (single voice) and nonlecture activity (multiple and no voice) use. We also found that there was significantly more use of multiple and no voice strategies in courses for STEM majors compared with courses for non-STEM majors, indicating that DART can be used to compare teaching strategies in different types of courses. Therefore, DART has the potential to systematically inventory the presence of active learning with ∼90% accuracy across thousands of courses in diverse settings with minimal effort. PMID:28265087

  9. Logistic Stick-Breaking Process

    PubMed Central

    Ren, Lu; Du, Lan; Carin, Lawrence; Dunson, David B.

    2013-01-01

    A logistic stick-breaking process (LSBP) is proposed for non-parametric clustering of general spatially- or temporally-dependent data, imposing the belief that proximate data are more likely to be clustered together. The sticks in the LSBP are realized via multiple logistic regression functions, with shrinkage priors employed to favor contiguous and spatially localized segments. The LSBP is also extended for the simultaneous processing of multiple data sets, yielding a hierarchical logistic stick-breaking process (H-LSBP). The model parameters (atoms) within the H-LSBP are shared across the multiple learning tasks. Efficient variational Bayesian inference is derived, and comparisons are made to related techniques in the literature. Experimental analysis is performed for audio waveforms and images, and it is demonstrated that for segmentation applications the LSBP yields generally homogeneous segments with sharp boundaries. PMID:25258593
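The stick-breaking construction can be sketched directly: each of the first K-1 sticks is a logistic regression of the covariate (here 1-D), and the k-th mixture weight is that stick's break probability times the mass left unallocated by earlier sticks, so the weights always sum to one. The parameter values below are hypothetical, not fitted:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lsbp_weights(x, W, b):
    """Mixture weights of a logistic stick-breaking process at locations x.

    W and b hold the slope and intercept of each of the K-1 logistic
    sticks; the final component absorbs all remaining stick mass.
    """
    K = len(W) + 1
    probs = np.zeros((len(x), K))
    remaining = np.ones(len(x))              # stick mass not yet allocated
    for k in range(K - 1):
        v = sigmoid(W[k] * x + b[k])         # logistic "break" at location x
        probs[:, k] = v * remaining
        remaining *= (1.0 - v)
    probs[:, -1] = remaining                 # last component takes the rest
    return probs

x = np.linspace(-3, 3, 7)
W, b = np.array([4.0, -4.0]), np.array([0.0, 0.0])   # hypothetical sticks
p = lsbp_weights(x, W, b)
print(np.allclose(p.sum(axis=1), 1.0))   # → True
```

Because the breaks depend smoothly on x, nearby locations receive similar weights, which is what yields the contiguous, spatially localized segments described above.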

  10. Data Acquisition and Linguistic Resources

    NASA Astrophysics Data System (ADS)

    Strassel, Stephanie; Christianson, Caitlin; McCary, John; Staderman, William; Olive, Joseph

    All human language technology demands substantial quantities of data for system training and development, plus stable benchmark data to measure ongoing progress. While creation of high quality linguistic resources is both costly and time consuming, such data has the potential to profoundly impact not just a single evaluation program but language technology research in general. GALE's challenging performance targets demand linguistic data on a scale and complexity never before encountered. Resources cover multiple languages (Arabic, Chinese, and English) and multiple genres -- both structured (newswire and broadcast news) and unstructured (web text, including blogs and newsgroups, and broadcast conversation). These resources include significant volumes of monolingual text and speech, parallel text, and transcribed audio combined with multiple layers of linguistic annotation, ranging from word aligned parallel text and Treebanks to rich semantic annotation.

  11. Electrophysiological evidence for Audio-visuo-lingual speech integration.

    PubMed

    Treille, Avril; Vilain, Coriandre; Schwartz, Jean-Luc; Hueber, Thomas; Sato, Marc

    2018-01-31

    Recent neurophysiological studies demonstrate that audio-visual speech integration partly operates through temporal expectations and speech-specific predictions. From these results, one common view is that the binding of auditory and visual (lipread) speech cues relies on their joint probability and prior associative audio-visual experience. The present EEG study examined whether visual tongue movements integrate with relevant speech sounds, despite little associative audio-visual experience between the two modalities. A second objective was to determine possible similarities and differences of audio-visual speech integration between unusual audio-visuo-lingual and classical audio-visuo-labial modalities. To this aim, participants were presented with auditory, visual, and audio-visual isolated syllables, with the visual presentation related to either a sagittal view of the tongue movements or a facial view of the lip movements of a speaker, with lingual and facial movements previously recorded by an ultrasound imaging system and a video camera. In line with previous EEG studies, our results revealed an amplitude decrease and a latency facilitation of P2 auditory evoked potentials in both audio-visuo-lingual and audio-visuo-labial conditions compared to the sum of unimodal conditions. These results argue against the view that auditory and visual speech cues solely integrate based on prior associative audio-visual perceptual experience. Rather, they suggest that dynamic and phonetic informational cues are sharable across sensory modalities, possibly through a cross-modal transfer of implicit articulatory motor knowledge. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. 78 FR 38093 - Seventh Meeting: RTCA Special Committee 226, Audio Systems and Equipment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-25

    ... Committee 226, Audio Systems and Equipment AGENCY: Federal Aviation Administration (FAA), U.S. Department of Transportation (DOT). ACTION: Meeting Notice of RTCA Special Committee 226, Audio Systems and Equipment. SUMMARY... 226, Audio Systems and Equipment

  13. Slow Scan Telemedicine

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Originally developed under contract to NASA by Ball Bros. Research Corporation for acquiring visual information from lunar and planetary spacecraft, the system uses a standard closed-circuit camera connected to a device called a scan converter, which slows the stream of images to match an audio circuit, such as a telephone line. Transmitted to its destination, the image is reconverted by another scan converter and displayed on a monitor. In addition to assisting scans, the technique allows transmission of x-rays, nuclear scans, ultrasonic imagery, thermograms, electrocardiograms, or live views of a patient. It also allows conferencing and consultation among medical centers, general practitioners, specialists, and disease control centers. Commercialized by Colorado Video, Inc., the system's major use is in business and industry for teleconferencing, cable TV news, transmission of scientific/engineering data, security, information retrieval, insurance claim adjustment, instructional programs, and remote viewing of advertising layouts, real estate, construction sites, or products.

  14. Diagnostic accuracy of sleep bruxism scoring in absence of audio-video recording: a pilot study.

    PubMed

    Carra, Maria Clotilde; Huynh, Nelly; Lavigne, Gilles J

    2015-03-01

    Based on the most recent polysomnographic (PSG) research diagnostic criteria, sleep bruxism is diagnosed when >2 rhythmic masticatory muscle activity (RMMA) episodes/h of sleep are scored on the masseter and/or temporalis muscles. These criteria have not yet been validated for portable PSG systems. This pilot study aimed to assess the diagnostic accuracy of scoring sleep bruxism in the absence of audio-video recordings. Ten subjects (mean age 24.7 ± 2.2) with a clinical diagnosis of sleep bruxism spent one night in the sleep laboratory. PSG recordings were performed with a portable system (type 2) while audio-video was recorded. Sleep studies were scored by the same examiner three times: (1) without, (2) with, and (3) without audio-video, in order to test the intra-scoring and intra-examiner reliability of RMMA scoring. The RMMA event-by-event concordance rate between scoring without audio-video and with audio-video was 68.3%. Overall, the RMMA index was overestimated by 23.8% without audio-video. However, the intra-class correlation coefficient (ICC) between scorings with and without audio-video was good (ICC = 0.91; p < 0.001); the intra-examiner reliability was high (ICC = 0.97; p < 0.001). The clinical diagnosis of sleep bruxism was confirmed in 8/10 subjects based on scoring without audio-video and in 6/10 subjects with audio-video. Despite the absence of audio-video recording, the diagnostic accuracy of assessing RMMA with portable PSG systems appeared to remain good, supporting their use for both research and clinical purposes. However, the risk of moderate overestimation in the absence of audio-video must be taken into account.

  15. Using Infrared Thermography to Assess Emotional Responses to Infants

    ERIC Educational Resources Information Center

    Esposito, Gianluca; Nakazawa, Jun; Ogawa, Shota; Stival, Rita; Putnick, Diane L.; Bornstein, Marc H.

    2015-01-01

    Adult-infant interactions operate simultaneously across multiple domains and at multiple levels -- from physiology to behaviour. Unpackaging and understanding them, therefore, involve analysis of multiple data streams. In this study, we tested physiological responses and cognitive preferences for infant and adult faces in adult females and males.…

  16. An 802.11n wireless local area network transmission scheme for wireless telemedicine applications.

    PubMed

    Lin, C F; Hung, S I; Chiang, I H

    2010-10-01

    In this paper, an 802.11n transmission scheme is proposed for wireless telemedicine applications. IEEE 802.11n standards, a power assignment strategy, space-time block coding (STBC), and an object composition Petri net (OCPN) model are adopted. With the proposed wireless system, G.729 audio bit streams, Joint Photographic Experts Group 2000 (JPEG 2000) clinical images, and Moving Picture Experts Group 4 (MPEG-4) video bit streams simultaneously achieve transmission bit error rates (BER) of 10^-7, 10^-4, and 10^-3, respectively. The proposed system meets the requirements prescribed for wireless telemedicine applications. An essential feature of this transmission scheme is that clinical information requiring a high quality of service (QoS) is transmitted at high power with significant error protection. To maximize resource utilization and minimize the total transmission power, STBC and adaptive modulation techniques are used in the proposed 802.11n wireless telemedicine system. Further, low power, direct mapping (DM), a low error-protection scheme, and high-level modulation are adopted for messages that can tolerate a high BER. With the proposed transmission scheme, the required communication reliability can be achieved. Our simulation results show that the proposed 802.11n transmission scheme can be used to develop effective wireless telemedicine systems.

  17. Digital watermarking for secure and adaptive teleconferencing

    NASA Astrophysics Data System (ADS)

    Vorbrueggen, Jan C.; Thorwirth, Niels

    2002-04-01

    The EC-sponsored project ANDROID aims to develop a management system for secure active networks. Active network means allowing the network's customers to execute code (Java-based so-called proxylets) on parts of the network infrastructure. Secure means that the network operator nonetheless retains full control over the network and its resources, and that proxylets use ANDROID-developed facilities to provide secure applications. Management is based on policies and allows autonomous, distributed decisions and actions to be taken. Proxylets interface with the system via policies; among actions they can take is controlling execution of other proxylets or redirection of network traffic. Secure teleconferencing is used as the application to demonstrate the approach's advantages. A way to control a teleconference's data streams is to use digital watermarking of the video, audio and/or shared-whiteboard streams, providing an imperceptible and inseparable side channel that delivers information from originating or intermediate stations to downstream stations. Depending on the information carried by the watermark, these stations can take many different actions. Examples are forwarding decisions based on security classifications (possibly time-varying) at security boundaries, set-up and tear-down of virtual private networks, intelligent and adaptive transcoding, recorder or playback control (e.g., speaking off the record), copyright protection, and sender authentication.

  18. Design and develop a video conferencing framework for real-time telemedicine applications using secure group-based communication architecture.

    PubMed

    Mat Kiah, M L; Al-Bakri, S H; Zaidan, A A; Zaidan, B B; Hussain, Muzammil

    2014-10-01

    One of the applications of modern technology in telemedicine is video conferencing. An alternative to traveling to attend a conference or meeting, video conferencing is becoming increasingly popular among hospitals. By using this technology, doctors can help patients who are unable to physically visit hospitals. Video conferencing particularly benefits patients from rural areas, where good doctors are not always available. Telemedicine has proven to be a blessing to patients who have no access to the best treatment. A telemedicine system consists of customized hardware and software at two locations, namely, at the patient's and the doctor's end. In such cases, the video streams of the conferencing parties may contain highly sensitive information. Thus, real-time data security is one of the most important requirements when designing video conferencing systems. This study proposes a secure framework for video conferencing systems and a complete management solution for secure video conferencing groups. Java Media Framework Application Programming Interface classes are used to design and test the proposed secure framework. Real-time Transport Protocol over User Datagram Protocol is used to transmit the encrypted audio and video streams, and RSA and AES algorithms are used to provide the required security services. Results show that the encryption algorithm insignificantly increases the video conferencing computation time.
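The framework above encrypts the audio and video streams with AES and uses RSA to exchange the session key. The per-packet symmetric step can be sketched with the Python standard library alone; here a SHA-256-derived keystream stands in for AES-CTR (a real system should use a vetted AES implementation), and the key, nonce, and payload values are invented:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Toy CTR-style keystream from SHA-256 (a stand-in for AES-CTR)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt_packet(key: bytes, nonce: bytes, payload: bytes) -> bytes:
    """XOR the payload with the keystream, as a stream cipher does."""
    ks = keystream(key, nonce, len(payload))
    return bytes(a ^ b for a, b in zip(payload, ks))

decrypt_packet = encrypt_packet  # an XOR stream cipher is its own inverse

key, nonce = b"0" * 32, b"rtp-pkt-0001"     # session key via RSA in the paper
audio_payload = b"PCM frame bytes ..."
ct = encrypt_packet(key, nonce, audio_payload)
print(decrypt_packet(key, nonce, ct) == audio_payload)  # → True
```

Note that the ciphertext has the same length as the payload, which is why stream ciphers add essentially no overhead to RTP packets beyond the key exchange, consistent with the small computation-time increase reported above.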

  19. Stream Processors

    NASA Astrophysics Data System (ADS)

    Erez, Mattan; Dally, William J.

    Stream processors, like other multicore architectures, partition their functional units and storage into multiple processing elements. In contrast to typical architectures, which contain symmetric general-purpose cores and a cache hierarchy, stream processors have a significantly leaner design. Stream processors are specifically designed for the stream execution model, in which applications have large amounts of explicit parallel computation, structured and predictable control, and memory accesses that can be performed at a coarse granularity. Applications in the streaming model are expressed in a gather-compute-scatter form, yielding programs with explicit control over transferring data to and from on-chip memory. Relying on these characteristics, which are common to many media processing and scientific computing applications, stream architectures redefine the boundary between software and hardware responsibilities, with software bearing much of the complexity required to manage concurrency, locality, and latency tolerance. Thus, stream processors have minimal control hardware, consisting of fetching medium- and coarse-grained instructions and executing them directly on the many ALUs. Moreover, the on-chip storage hierarchy of stream processors is under explicit software control, as is all communication, eliminating the need for complex reactive hardware mechanisms.
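The gather-compute-scatter form can be mimicked in a few lines. The index arrays and kernel below are illustrative; on a real stream processor the gather and scatter are bulk DMA transfers between off-chip memory and the on-chip stream register file, and the kernel runs data-parallel across the ALUs:

```python
import numpy as np

def gather_compute_scatter(memory, gather_idx, scatter_idx, kernel):
    """Sketch of the stream execution model's three explicit phases."""
    stream = memory[gather_idx]        # gather: bulk transfer on-chip
    result = kernel(stream)            # compute: parallel over elements
    out = memory.copy()
    out[scatter_idx] = result          # scatter: bulk write-back
    return out

mem = np.arange(8.0)
new = gather_compute_scatter(mem,
                             gather_idx=np.array([1, 3, 5]),
                             scatter_idx=np.array([0, 2, 4]),
                             kernel=lambda s: s * 10.0)
print(new)   # positions 0, 2, 4 now hold 10, 30, 50; the rest is unchanged
```

Making the data movement explicit like this is exactly what lets software, rather than a cache hierarchy, manage locality and latency tolerance.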

  20. High-Fidelity Piezoelectric Audio Device

    NASA Technical Reports Server (NTRS)

    Woodward, Stanley E.; Fox, Robert L.; Bryant, Robert G.

    2003-01-01

    ModalMax is a very innovative means of harnessing the vibration of a piezoelectric actuator to produce an energy efficient low-profile device with high-bandwidth high-fidelity audio response. The piezoelectric audio device outperforms many commercially available speakers made using speaker cones. The piezoelectric device weighs substantially less (4 g) than the speaker cones which use magnets (10 g). ModalMax devices have extreme fabrication simplicity. The entire audio device is fabricated by lamination. The simplicity of the design lends itself to lower cost. The piezoelectric audio device can be used without its acoustic chambers and thereby resulting in a very low thickness of 0.023 in. (0.58 mm). The piezoelectric audio device can be completely encapsulated, which makes it very attractive for use in wet environments. Encapsulation does not significantly alter the audio response. Its small size (see Figure 1) is applicable to many consumer electronic products, such as pagers, portable radios, headphones, laptop computers, computer monitors, toys, and electronic games. The audio device can also be used in automobile or aircraft sound systems.

  1. Using Algal Metrics and Biomass to Evaluate Multiple Ways of Defining Concentration-Based Nutrient Criteria in Streams and their Ecological Relevance

    EPA Science Inventory

    We examined the utility of nutrient criteria derived solely from total phosphorus (TP) concentrations in streams (regression models and percentile distributions) and evaluated their ecological relevance to diatom and algal biomass responses. We used a variety of statistics to cha...

  2. Multiple drivers, scales, and interactions influence southern Appalachian stream salamander occupancy

    Treesearch

    Kristen K. Cecala; John C. Maerz; Brian J. Halstead; John R. Frisch; Ted L. Gragson; Jeffrey Hepinstall-Cymerman; David S. Leigh; C. Rhett Jackson; James T. Peterson; Catherine M. Pringle

    2018-01-01

    Understanding how factors that vary in spatial scale relate to population abundance is vital to forecasting species responses to environmental change. Stream and river ecosystems are inherently hierarchical, potentially resulting in organismal responses to fine‐scale changes in patch characteristics that are conditional on the watershed context. Here, we...

  3. A COMPARISON OF SINGLE AND MULTIPLE HABITAT RAPID BIOASSESSMENT SAMPLING METHODS FOR MACROINVERTEBRATES IN PIEDMONT AND NORTHERN PIEDMONT STREAMS

    EPA Science Inventory

    Stream macroinvertebrate collection methods described in the Rapid Bioassessment Protocols (RBPs) have been used widely throughout the U.S. The first edition of the RBP manual in 1989 described a single habitat approach that focused on riffles and runs, where macroinvertebrate d...

  4. Introduction to Parallel Computing

    DTIC Science & Technology

    1992-05-01

    Multiple Instruction Stream, Multiple Data Stream Machines ... Networks of Machines ... independent memory units and connecting them to the processors by an interconnection network. Many different interconnection schemes have been considered, and ... connected to the same processor at the same time. Crossbar switching networks are still too expensive to be practical for connecting large numbers of

  5. Modeling stream network-scale variation in Coho salmon overwinter survival and smolt size

    Treesearch

    Joseph L. Ebersole; Mike E. Colvin; Parker J. Wigington; Scott G. Leibowitz; Joan P. Baker; Jana E. Compton; Bruce A. Miller; Michael A. Carins; Bruce P. Hansen; Henry R. La Vigne

    2009-01-01

    We used multiple regression and hierarchical mixed-effects models to examine spatial patterns of overwinter survival and size at smolting in juvenile coho salmon Oncorhynchus kisutch in relation to habitat attributes across an extensive stream network in southwestern Oregon over 3 years. Contributing basin area explained the majority of spatial...

  6. Subjective audio quality evaluation of embedded-optimization-based distortion precompensation algorithms.

    PubMed

    Defraene, Bruno; van Waterschoot, Toon; Diehl, Moritz; Moonen, Marc

    2016-07-01

    Subjective audio quality evaluation experiments have been conducted to assess the performance of embedded-optimization-based precompensation algorithms for mitigating perceptible linear and nonlinear distortion in audio signals. It is concluded with statistical significance that the perceived audio quality is improved by applying an embedded-optimization-based precompensation algorithm, both in the case of (i) nonlinear distortion and (ii) a combination of linear and nonlinear distortion. Moreover, a significant positive correlation is reported between the collected subjective scores and the objective PEAQ audio quality scores, supporting the validity of using PEAQ to predict the impact of linear and nonlinear distortion on the perceived audio quality.
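The reported subjective-objective relationship is an ordinary correlation between paired score lists. As a minimal sketch, a Pearson coefficient over hypothetical listener ratings and hypothetical PEAQ objective difference grades (all values below are made up, not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two paired lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

subjective = [2.1, 3.4, 3.9, 4.5, 1.8]      # hypothetical listener scores
objective = [-3.2, -1.9, -1.5, -0.8, -3.6]  # hypothetical PEAQ grades
print(pearson_r(subjective, objective))      # close to +1: strong agreement
```

A value near +1, as in this toy example, is what "significant positive correlation" refers to in the abstract.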

  7. Validation of a digital audio recording method for the objective assessment of cough in the horse.

    PubMed

    Duz, M; Whittaker, A G; Love, S; Parkin, T D H; Hughes, K J

    2010-10-01

    To validate the use of digital audio recording and analysis for quantification of coughing in horses. Part A: Nine simultaneous digital audio and video recordings were collected individually from seven stabled horses over a 1 h period using a digital audio recorder attached to the halter. Audio files were analysed using audio analysis software. Video and audio recordings were analysed for cough count and timing by two blinded operators on two occasions using a randomised study design for determination of intra-operator and inter-operator agreement. Part B: Seventy-eight hours of audio recordings obtained from nine horses were analysed once by two blinded operators to assess inter-operator repeatability on a larger sample. Part A: There was complete agreement between audio and video analyses and inter- and intra-operator analyses. Part B: There was >97% agreement between operators on number and timing of 727 coughs recorded over 78 h. The results of this study suggest that the cough monitor methodology used has excellent sensitivity and specificity for the objective assessment of cough in horses and intra- and inter-operator variability of recorded coughs is minimal. Crown Copyright 2010. Published by Elsevier India Pvt Ltd. All rights reserved.

  8. Sensing of metal-transfer mode for process control of GMAW (gas metal arc welding)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, N.M.; Johnson, J.A.; Smartt, H.B.

    1989-01-01

    One of the requirements of a sensing system for feedback control of gas metal arc welding (GMAW) is the capability to determine the metal-transfer mode. Because the operating boundary for the desired transfer mode, expressed as a function of mass input and heat input, may vary due to conditions beyond the control of the system, a means of detecting the transfer mode during welding is necessary. A series of sensing experiments was performed during which the ultrasonic emissions, audio emissions, welding current fluctuations and welding voltage fluctuations were recorded as a function of the transfer mode. In addition, high speed movies (5000 frames/s) of the droplet formation and detachment were taken synchronously with the sensing data. An LED mounted in the camera was used to mark the film at the beginning and end of the data acquisition period. A second LED was pulsed at a 1 kHz rate and the pulses recorded on film and with the sensor data. Thus events recorded on the film can be correlated with the sensor data. Data acquired during globular transfer, spray transfer, and stiff spray or streaming transfer were observed to correlate with droplet detachment and arc shorting. The audio, current, and voltage data can be used to discriminate among these different transfer modes. However, the current and voltage data are also dependent on the characteristic of the welding power supply. 5 refs., 3 figs., 1 tab.

  9. Detection of metal-transfer mode in GMAW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, J.A.; Carlson, N.M.; Smartt, H.B.

    1989-01-01

    One of the requirements of a sensing system for feedback control of gas metal arc welding (GMAW) is the capability to detect information about the metal-transfer mode. Because the operating boundary for the desired transfer mode, expressed as a function of mass input and heat input, may vary due to conditions beyond the control of the system, a means of determining the transfer mode during welding is necessary. A series of sensing experiments is performed during which the ultrasonic emissions, audio emissions, welding current fluctuations, and welding voltage fluctuations are recorded as a function of the transfer mode. In addition, high speed movies (5000 frames/s) of the droplet formation and detachment are taken synchronously with the sensing data. An LED mounted in the camera is used to mark the film at the beginning and end of the data acquisition period. A second LED is pulsed at a 1 kHz rate and the pulses are recorded on film and with the sensor data. Thus events observed on the film can be correlated with the sensor data. Data acquired during globular transfer, spray transfer, and stiff spray or streaming transfer are observed to correlate with droplet detachment and arc shorting. The audio, current, and voltage data can be used to discriminate among these different transfer modes. However, the current and voltage data are also dependent on the characteristics of the welding power supply. 4 refs., 5 figs.

  10. Secure and Usable User-in-a-Context Continuous Authentication in Smartphones Leveraging Non-Assisted Sensors.

    PubMed

    de Fuentes, Jose Maria; Gonzalez-Manzano, Lorena; Ribagorda, Arturo

    2018-04-16

    Smartphones are equipped with a set of sensors that describe the environment (e.g., GPS, noise, etc.) and their current status and usage (e.g., battery consumption, accelerometer readings, etc.). Several works have already addressed how to leverage such data for user-in-a-context continuous authentication, i.e., determining if the porting user is the authorized one and resides in his regular physical environment. This can be useful for an early reaction against robbery or impersonation. However, most previous works depend on assisted sensors, i.e., they rely upon immutable elements (e.g., cell towers, satellites, magnetism), thus being ineffective in their absence. Moreover, they focus on accuracy aspects, neglecting usability ones. To address these issues, in this paper we explore the use of four non-assisted sensors, namely battery, transmitted data, ambient light and noise. Our approach leverages data stream mining techniques and offers a tunable security-usability trade-off. We assess the accuracy, immediacy, usability and readiness of the proposal. Results on 50 users over 24 months show that battery readings alone achieve 97.05% accuracy and 81.35% for audio, light and battery all together. Moreover, when usability is at stake, robbery is detected in 100 s in the case of battery and in 250 s when audio, light and battery are applied. Remarkably, these figures are obtained with moderate training and storage needs, thus making the approach suitable for current devices.
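The paper's classifier uses data stream mining techniques; the flavor of profiling a non-assisted sensor can be conveyed with a much simpler sliding-window baseline that flags a battery reading deviating from the recent mean by more than k standard deviations. The window size, threshold, and readings below are invented, and this is not the authors' method:

```python
from collections import deque

class BatteryProfile:
    """Sliding-window anomaly check over battery-level readings."""

    def __init__(self, window=20, k=3.0):
        self.readings = deque(maxlen=window)
        self.k = k

    def check(self, value):
        """Return True if the reading fits the recent baseline."""
        ok = True
        if len(self.readings) >= 5:          # need a minimal baseline first
            mean = sum(self.readings) / len(self.readings)
            var = sum((r - mean) ** 2 for r in self.readings) / len(self.readings)
            ok = abs(value - mean) <= self.k * max(var ** 0.5, 1e-9)
        self.readings.append(value)
        return ok

profile = BatteryProfile()
normal = [80, 79, 79, 78, 78, 77, 77, 76]    # typical slow drain
print([profile.check(v) for v in normal])    # all True: matches baseline
print(profile.check(40))                     # sudden drop flagged → False
```

Tightening k trades usability for security in the same spirit as the tunable trade-off described above: a smaller k detects anomalies sooner but challenges the legitimate user more often.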

  11. Secure and Usable User-in-a-Context Continuous Authentication in Smartphones Leveraging Non-Assisted Sensors

    PubMed Central

    Gonzalez-Manzano, Lorena; Ribagorda, Arturo

    2018-01-01

    Smartphones are equipped with a set of sensors that describe the environment (e.g., GPS, noise, etc.) and their current status and usage (e.g., battery consumption, accelerometer readings, etc.). Several works have already addressed how to leverage such data for user-in-a-context continuous authentication, i.e., determining if the porting user is the authorized one and resides in his regular physical environment. This can be useful for an early reaction against robbery or impersonation. However, most previous works depend on assisted sensors, i.e., they rely upon immutable elements (e.g., cell towers, satellites, magnetism), thus being ineffective in their absence. Moreover, they focus on accuracy aspects, neglecting usability ones. To address these issues, in this paper we explore the use of four non-assisted sensors, namely battery, transmitted data, ambient light and noise. Our approach leverages data stream mining techniques and offers a tunable security-usability trade-off. We assess the accuracy, immediacy, usability and readiness of the proposal. Results on 50 users over 24 months show that battery readings alone achieve 97.05% accuracy and 81.35% for audio, light and battery all together. Moreover, when usability is at stake, robbery is detected in 100 s in the case of battery and in 250 s when audio, light and battery are applied. Remarkably, these figures are obtained with moderate training and storage needs, thus making the approach suitable for current devices. PMID:29659542

  12. Cortical Tracking of Global and Local Variations of Speech Rhythm during Connected Natural Speech Perception.

    PubMed

    Alexandrou, Anna Maria; Saarinen, Timo; Kujala, Jan; Salmelin, Riitta

    2018-06-19

    During natural speech perception, listeners must track the global speaking rate, that is, the overall rate of incoming linguistic information, as well as transient, local speaking rate variations occurring within the global speaking rate. Here, we address the hypothesis that this tracking mechanism is achieved through coupling of cortical signals to the amplitude envelope of the perceived acoustic speech signals. Cortical signals were recorded with magnetoencephalography (MEG) while participants perceived spontaneously produced speech stimuli at three global speaking rates (slow, normal/habitual, and fast). As is inherent to spontaneously produced speech, these stimuli also featured local variations in speaking rate. The coupling between cortical and acoustic speech signals was evaluated using audio-MEG coherence. Modulations in audio-MEG coherence spatially differentiated between tracking of global speaking rate, highlighting the temporal cortex bilaterally and the right parietal cortex, and sensitivity to local speaking rate variations, emphasizing the left parietal cortex. Cortical tuning to the temporal structure of natural connected speech thus seems to require the joint contribution of both auditory and parietal regions. These findings suggest that cortical tuning to speech rhythm operates on two functionally distinct levels: one encoding the global rhythmic structure of speech and the other associated with online, rapidly evolving temporal predictions. Thus, it may be proposed that speech perception is shaped by evolutionary tuning, a preference for certain speaking rates, and predictive tuning, associated with cortical tracking of the constantly changing rate of linguistic information in a speech stream.
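    Audio-MEG coherence is, at its core, magnitude-squared coherence between the speech amplitude envelope and the cortical signal. The synthetic example below (an assumed segment-averaged FFT estimator, not the study's analysis pipeline) shows coherence at a shared 4 Hz envelope rate standing out against an uncoupled control signal.

```python
import numpy as np

def msc(x, y, n_seg=8):
    """Magnitude-squared coherence |Sxy|^2 / (Sxx * Syy), segment-averaged."""
    seg = len(x) // n_seg
    sxx = syy = sxy = 0
    for k in range(n_seg):
        xf = np.fft.rfft(x[k * seg:(k + 1) * seg])
        yf = np.fft.rfft(y[k * seg:(k + 1) * seg])
        sxx = sxx + np.abs(xf) ** 2
        syy = syy + np.abs(yf) ** 2
        sxy = sxy + xf * np.conj(yf)
    return np.abs(sxy) ** 2 / (sxx * syy + 1e-12)

rng = np.random.default_rng(0)
t = np.arange(4096) / 1000.0                 # 4096 samples at 1 kHz
envelope = np.sin(2 * np.pi * 4 * t)         # 4 Hz speech amplitude envelope
meg = envelope + 0.5 * rng.standard_normal(t.size)   # coupled "cortical" signal
control = rng.standard_normal(t.size)                # uncoupled control signal

freqs = np.fft.rfftfreq(4096 // 8, d=1 / 1000.0)
bin4 = int(np.argmin(np.abs(freqs - 4.0)))   # frequency bin nearest 4 Hz
coh_coupled = msc(envelope, meg)
coh_control = msc(envelope, control)
print(coh_coupled[bin4] > coh_control[bin4])   # coupling shows near 4 Hz -> True
```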

  13. Inter-regional comparison of land-use effects on stream metabolism

    USGS Publications Warehouse

    Bernot, M.J.; Sobota, D.J.; Hall, R.O.; Mulholland, P.J.; Dodds, W.K.; Webster, J.R.; Tank, J.L.; Ashkenas, L.R.; Cooper, L.W.; Dahm, Clifford N.; Gregory, S.V.; Grimm, N.B.; Hamilton, S.K.; Johnson, S.L.; McDowell, W.H.; Meyer, J.L.; Peterson, B.; Poole, G.C.; Valett, H.M.; Arango, C.; Beaulieu, J.J.; Burgin, A.J.; Crenshaw, C.; Helton, A.M.; Johnson, L.; Merriam, J.; Niederlehner, B.R.; O'Brien, J.M.; Potter, J.D.; Sheibley, R.W.; Thomas, S.M.; Wilson, K.

    2010-01-01

    1. Rates of whole-system metabolism (production and respiration) are fundamental indicators of ecosystem structure and function. Although first-order, proximal controls are well understood, assessments of the interactions between proximal controls and distal controls, such as land use and geographic region, are lacking. Thus, the influence of land use on stream metabolism across geographic regions is unknown. Further, there is limited understanding of how land use may alter variability in ecosystem metabolism across regions. 2. Stream metabolism was measured in nine streams in each of eight regions (n = 72) across the United States and Puerto Rico. In each region, three streams were selected from a range of three land uses: agriculturally influenced, urban-influenced, and reference streams. Stream metabolism was estimated from diel changes in dissolved oxygen concentrations in each stream reach with correction for reaeration and groundwater input. 3. Gross primary production (GPP) was highest in regions with little riparian vegetation (sagebrush steppe in Wyoming, desert shrub in Arizona/New Mexico) and lowest in forested regions (North Carolina, Oregon). In contrast, ecosystem respiration (ER) varied both within and among regions. Reference streams had significantly lower rates of GPP than urban or agriculturally influenced streams. 4. GPP was positively correlated with photosynthetically active radiation and autotrophic biomass. Multiple regression models compared using Akaike's information criterion (AIC) indicated GPP increased with water column ammonium and the fraction of the catchment in urban and reference land-use categories. Multiple regression models also identified velocity, temperature, nitrate, ammonium, dissolved organic carbon, GPP, coarse benthic organic matter, fine benthic organic matter and the fraction of all land-use categories in the catchment as regulators of ER. 5. Structural equation modelling indicated significant distal as well as proximal control pathways including a direct effect of land-use on GPP as well as SRP, DIN, and PAR effects on GPP; GPP effects on autotrophic biomass, organic matter, and ER; and organic matter effects on ER. 6. Overall, consideration of the data separated by land-use categories showed reduced inter-regional variability in rates of metabolism, indicating that the influence of agricultural and urban land use can obscure regional differences in stream metabolism. © 2010 Blackwell Publishing Ltd.
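    The diel dissolved-oxygen method summarized in point 2 can be reduced to a toy single-station calculation: estimate ecosystem respiration (ER) from the night-time O2 decline, then correct the daytime net O2 change by that rate to recover GPP. The sketch below omits the reaeration and groundwater corrections the study applied, and its units and synthetic readings are simplifying assumptions.

```python
# Toy diel-oxygen metabolism sketch (reaeration and groundwater corrections
# omitted): ER from the night-time decline, GPP from the respiration-corrected
# daytime change.
def metabolism(do_mg_l, daylight, dt_h=1.0):
    """do_mg_l: hourly dissolved-O2 readings; daylight: one flag per interval."""
    deltas = [(b - a) / dt_h for a, b in zip(do_mg_l, do_mg_l[1:])]
    night = [d for d, lit in zip(deltas, daylight) if not lit]
    er = -sum(night) / len(night)             # respiration rate, mg O2/L/h
    day = [d for d, lit in zip(deltas, daylight) if lit]
    gpp = sum(d + er for d in day) * dt_h     # gross production, mg O2/L
    return gpp, er

# synthetic reach: O2 falls 0.2 mg/L/h at night, rises 0.3 mg/L/h in daylight
do = [8.0, 7.8, 7.6, 7.4, 7.2, 7.0, 7.3, 7.6, 7.9, 8.2, 8.5]
gpp, er = metabolism(do, [False] * 5 + [True] * 5)
print(round(er, 2), round(gpp, 2))   # -> 0.2 2.5
```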

  14. 47 CFR 73.9005 - Compliance requirements for covered demodulator products: Audio.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... products: Audio. 73.9005 Section 73.9005 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED....9005 Compliance requirements for covered demodulator products: Audio. Except as otherwise provided in §§ 73.9003(a) or 73.9004(a), covered demodulator products shall not output the audio portions of...

  15. 36 CFR 1002.12 - Audio disturbances.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 36 Parks, Forests, and Public Property 3 2014-07-01 2014-07-01 false Audio disturbances. 1002.12... RECREATION § 1002.12 Audio disturbances. (a) The following are prohibited: (1) Operating motorized equipment or machinery such as an electric generating plant, motor vehicle, motorized toy, or an audio device...

  16. 36 CFR 1002.12 - Audio disturbances.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 36 Parks, Forests, and Public Property 3 2012-07-01 2012-07-01 false Audio disturbances. 1002.12... RECREATION § 1002.12 Audio disturbances. (a) The following are prohibited: (1) Operating motorized equipment or machinery such as an electric generating plant, motor vehicle, motorized toy, or an audio device...

  17. 50 CFR 27.72 - Audio equipment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 6 2010-10-01 2010-10-01 false Audio equipment. 27.72 Section 27.72 Wildlife and Fisheries UNITED STATES FISH AND WILDLIFE SERVICE, DEPARTMENT OF THE INTERIOR (CONTINUED) THE... Audio equipment. The operation or use of audio devices including radios, recording and playback devices...

  18. 36 CFR 1002.12 - Audio disturbances.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 36 Parks, Forests, and Public Property 3 2011-07-01 2011-07-01 false Audio disturbances. 1002.12... RECREATION § 1002.12 Audio disturbances. (a) The following are prohibited: (1) Operating motorized equipment or machinery such as an electric generating plant, motor vehicle, motorized toy, or an audio device...

  19. 36 CFR 1002.12 - Audio disturbances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Audio disturbances. 1002.12... RECREATION § 1002.12 Audio disturbances. (a) The following are prohibited: (1) Operating motorized equipment or machinery such as an electric generating plant, motor vehicle, motorized toy, or an audio device...

  20. 50 CFR 27.72 - Audio equipment.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 50 Wildlife and Fisheries 8 2011-10-01 2011-10-01 false Audio equipment. 27.72 Section 27.72 Wildlife and Fisheries UNITED STATES FISH AND WILDLIFE SERVICE, DEPARTMENT OF THE INTERIOR (CONTINUED) THE... Audio equipment. The operation or use of audio devices including radios, recording and playback devices...

  1. 50 CFR 27.72 - Audio equipment.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 50 Wildlife and Fisheries 9 2012-10-01 2012-10-01 false Audio equipment. 27.72 Section 27.72 Wildlife and Fisheries UNITED STATES FISH AND WILDLIFE SERVICE, DEPARTMENT OF THE INTERIOR (CONTINUED) THE... Audio equipment. The operation or use of audio devices including radios, recording and playback devices...

  2. 47 CFR 87.483 - Audio visual warning systems.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 5 2014-10-01 2014-10-01 false Audio visual warning systems. 87.483 Section 87... AVIATION SERVICES Stations in the Radiodetermination Service § 87.483 Audio visual warning systems. An audio visual warning system (AVWS) is a radar-based obstacle avoidance system. AVWS activates...

  3. Automated Assessment of Child Vocalization Development Using LENA.

    PubMed

    Richards, Jeffrey A; Xu, Dongxin; Gilkerson, Jill; Yapanel, Umit; Gray, Sharmistha; Paul, Terrance

    2017-07-12

    To produce a novel, efficient measure of children's expressive vocal development on the basis of automatic vocalization assessment (AVA), child vocalizations were automatically identified and extracted from audio recordings using Language Environment Analysis (LENA) System technology. Assessment was based on full-day audio recordings collected in a child's unrestricted, natural language environment. AVA estimates were derived using automatic speech recognition modeling techniques to categorize and quantify the sounds in child vocalizations (e.g., protophones and phonemes). These were expressed as phone and biphone frequencies, reduced to principal components, and inputted to age-based multiple linear regression models to predict independently collected criterion-expressive language scores. From these models, we generated vocal development AVA estimates as age-standardized scores and development age estimates. AVA estimates demonstrated strong statistical reliability and validity when compared with standard criterion expressive language assessments. Automated analysis of child vocalizations extracted from full-day recordings in natural settings offers a novel and efficient means to assess children's expressive vocal development. More research remains to identify specific mechanisms of operation.
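    The modelling chain described here (phone/biphone frequencies, principal components, age-based regression, standard scores) can be sketched end to end with synthetic data. Everything below, from the feature dimensions to the criterion scores, is placeholder data chosen to illustrate the pipeline's shape, not LENA's actual model.

```python
import numpy as np

# Hypothetical sketch of the AVA scoring stages: biphone frequencies reduced
# to principal components, fed to an age-based linear model, and the output
# expressed as a standard score (mean 100, SD 15). All values are synthetic.
rng = np.random.default_rng(1)
freqs = rng.random((40, 50))                # 40 recordings x 50 biphone freqs

# PCA via SVD on the centred feature matrix, keeping 5 components
centred = freqs - freqs.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
components = centred @ vt[:5].T

# age-based linear regression against (placeholder) criterion language scores
ages = rng.uniform(12, 48, size=40)         # age in months
criterion = rng.normal(100, 15, size=40)
X = np.column_stack([np.ones(40), ages, components])
coef, *_ = np.linalg.lstsq(X, criterion, rcond=None)
predicted = X @ coef

# express predictions as age-standardised scores
z = (predicted - predicted.mean()) / predicted.std()
ava_standard = 100 + 15 * z
print(round(float(ava_standard.mean())))    # centred standard scores -> 100
```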

  4. Bayesian networks and information theory for audio-visual perception modeling.

    PubMed

    Besson, Patricia; Richiardi, Jonas; Bourdin, Christophe; Bringoux, Lionel; Mestre, Daniel R; Vercher, Jean-Louis

    2010-09-01

    Thanks to their different senses, human observers acquire multiple sources of information from their environment. Complex cross-modal interactions occur during this perceptual process. This article proposes a framework to analyze and model these interactions through a rigorous and systematic data-driven process. This requires considering the general relationships between the physical events or factors involved in the process, not only in quantitative terms, but also in terms of the influence of one factor on another. We use tools from information theory and probabilistic reasoning to derive relationships between the random variables of interest, where the central notion is that of conditional independence. Using mutual information analysis to guide the model elicitation process, a probabilistic causal model encoded as a Bayesian network is obtained. We exemplify the method by using data collected in an audio-visual localization task for human subjects, and we show that it yields a well-motivated model with good predictive ability. The model elicitation process offers new prospects for the investigation of the cognitive mechanisms of multisensory perception.
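    The mutual-information step that guides model elicitation can be illustrated with discrete samples: a high I(X;Y) argues for an edge between X and Y in the Bayesian network, while I(X;Z) near zero is consistent with (marginal) independence. The variables below are synthetic stand-ins, not the audio-visual data of the study.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """I(X;Y) in bits, estimated from paired discrete samples."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Y copies X (fully dependent); Z is constructed to be independent of X.
xs = [0, 0, 1, 1, 0, 1, 0, 1] * 25
ys = xs[:]
zs = [0, 0, 0, 1, 1, 0, 1, 1] * 25

mi_xy = mutual_information(xs, ys)   # equals H(X) = 1 bit for a fair binary X
mi_xz = mutual_information(xs, zs)   # uniform joint -> 0 bits
print(round(mi_xy, 3), round(mi_xz, 3))   # -> 1.0 0.0
```

    In practice the framework also needs conditional independence tests, since marginal independence alone cannot orient or prune every edge in the network.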

  5. Speech to Text Translation for Malay Language

    NASA Astrophysics Data System (ADS)

    Al-khulaidi, Rami Ali; Akmeliawati, Rini

    2017-11-01

    A speech recognition system is a front-end and back-end process that receives an audio signal uttered by a speaker and converts it into a text transcription. Speech systems can be used in several fields, including therapeutic technology, education, social robotics and computer entertainment. In control tasks, which are the intended application of our system, speed of performance and response is a major concern, because the system should integrate with other control platforms such as voice-controlled robots. This creates a need for flexible platforms that can be easily adapted to the functionality of their surroundings, unlike software such as MATLAB and Phoenix that requires recorded audio and multiple training passes for every entry. In this paper, a speech recognition system for the Malay language is implemented using Microsoft Visual Studio C#. Ninety Malay phrases were tested by ten speakers of both genders in different contexts. The results show that the overall accuracy (calculated from a confusion matrix) is a satisfactory 92.69%.
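    The reported 92.69% figure is an overall accuracy: the trace of the confusion matrix (correctly recognised phrases) divided by the total number of trials. A small sketch with a hypothetical 3-phrase confusion matrix (in Python here, rather than the paper's C#):

```python
# Overall accuracy from a confusion matrix: correct classifications lie on
# the diagonal; the total is the sum of all entries.
def overall_accuracy(confusion):
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return 100.0 * correct / total

# toy example (rows: spoken phrase, columns: recognised phrase)
cm = [[28, 1, 1],
      [2, 27, 1],
      [0, 2, 28]]
print(round(overall_accuracy(cm), 2))   # 83 of 90 correct -> 92.22
```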

  6. Semantic Context Detection Using Audio Event Fusion

    NASA Astrophysics Data System (ADS)

    Chu, Wei-Ta; Cheng, Wen-Huang; Wu, Ja-Ling

    2006-12-01

    Semantic-level content analysis is a crucial issue in achieving efficient content retrieval and management. We propose a hierarchical approach that models audio events over a time series in order to accomplish semantic context detection. Two levels of modeling, audio event and semantic context modeling, are devised to bridge the gap between physical audio features and semantic concepts. In this work, hidden Markov models (HMMs) are used to model four representative audio events, that is, gunshot, explosion, engine, and car braking, in action movies. At the semantic context level, generative (ergodic hidden Markov model) and discriminative (support vector machine (SVM)) approaches are investigated to fuse the characteristics and correlations among audio events, which provide cues for detecting gunplay and car-chasing scenes. The experimental results demonstrate the effectiveness of the proposed approaches and provide a preliminary framework for information mining by using audio characteristics.
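    The event-modelling level can be sketched with tiny discrete HMMs scored by the forward algorithm: each candidate event model assigns the observed feature sequence a likelihood, and the highest-scoring model labels the segment. The two-state models and binary features below are illustrative assumptions, not the paper's audio feature set.

```python
# Forward-algorithm likelihood of an observation sequence under a discrete HMM.
def forward_likelihood(obs, start, trans, emit):
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * trans[p][s] for p in range(n)) * emit[s][o]
                 for s in range(n)]
    return sum(alpha)

# two toy 2-state event models over a binary feature (0 = quiet, 1 = loud burst)
gunshot = dict(start=[0.9, 0.1],
               trans=[[0.2, 0.8], [0.8, 0.2]],   # alternates quickly
               emit=[[0.9, 0.1], [0.1, 0.9]])
engine = dict(start=[0.5, 0.5],
              trans=[[0.9, 0.1], [0.1, 0.9]],    # long steady stretches
              emit=[[0.3, 0.7], [0.7, 0.3]])

burst_seq = [0, 1, 0, 1, 0, 1]                   # impulsive, gunshot-like
scores = {name: forward_likelihood(burst_seq, **m)
          for name, m in [("gunshot", gunshot), ("engine", engine)]}
print(max(scores, key=scores.get))               # -> gunshot
```

    In the paper, the resulting event likelihoods then feed the semantic-context level, where an ergodic HMM or an SVM fuses them into scene labels such as gunplay or car chasing.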

  7. Neural Entrainment to Rhythmically Presented Auditory, Visual, and Audio-Visual Speech in Children

    PubMed Central

    Power, Alan James; Mead, Natasha; Barnes, Lisa; Goswami, Usha

    2012-01-01

    Auditory cortical oscillations have been proposed to play an important role in speech perception. It is suggested that the brain may take temporal “samples” of information from the speech stream at different rates, phase resetting ongoing oscillations so that they are aligned with similar frequency bands in the input (“phase locking”). Information from these frequency bands is then bound together for speech perception. To date, there are no explorations of neural phase locking and entrainment to speech input in children. However, it is clear from studies of language acquisition that infants use both visual speech information and auditory speech information in learning. In order to study neural entrainment to speech in typically developing children, we use a rhythmic entrainment paradigm (underlying 2 Hz or delta rate) based on repetition of the syllable “ba,” presented in either the auditory modality alone, the visual modality alone, or as auditory-visual speech (via a “talking head”). To ensure attention to the task, children aged 13 years were asked to press a button as fast as possible when the “ba” stimulus violated the rhythm for each stream type. Rhythmic violation depended on delaying the occurrence of a “ba” in the isochronous stream. Neural entrainment was demonstrated for all stream types, and individual differences in standardized measures of language processing were related to auditory entrainment at the theta rate. Further, there was significant modulation of the preferred phase of auditory entrainment in the theta band when visual speech cues were present, indicating cross-modal phase resetting. The rhythmic entrainment paradigm developed here offers a method for exploring individual differences in oscillatory phase locking during development. 
In particular, a method for assessing neural entrainment and cross-modal phase resetting would be useful for exploring developmental learning difficulties thought to involve temporal sampling, such as dyslexia. PMID:22833726

  8. 47 CFR 10.520 - Common audio attention signal.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Common audio attention signal. 10.520 Section... Equipment Requirements § 10.520 Common audio attention signal. A Participating CMS Provider and equipment manufacturers may only market devices for public use under part 10 that include an audio attention signal that...

  9. 36 CFR 2.12 - Audio disturbances.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 36 Parks, Forests, and Public Property 1 2012-07-01 2012-07-01 false Audio disturbances. 2.12... RESOURCE PROTECTION, PUBLIC USE AND RECREATION § 2.12 Audio disturbances. (a) The following are prohibited..., motorized toy, or an audio device, such as a radio, television set, tape deck or musical instrument, in a...

  10. 36 CFR 2.12 - Audio disturbances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Audio disturbances. 2.12... RESOURCE PROTECTION, PUBLIC USE AND RECREATION § 2.12 Audio disturbances. (a) The following are prohibited..., motorized toy, or an audio device, such as a radio, television set, tape deck or musical instrument, in a...

  11. 37 CFR 202.22 - Acquisition and deposit of unpublished audio and audiovisual transmission programs.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... unpublished audio and audiovisual transmission programs. 202.22 Section 202.22 Patents, Trademarks, and... REGISTRATION OF CLAIMS TO COPYRIGHT § 202.22 Acquisition and deposit of unpublished audio and audiovisual... and copies of unpublished audio and audiovisual transmission programs by the Library of Congress under...

  12. 36 CFR 1002.12 - Audio disturbances.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 36 Parks, Forests, and Public Property 3 2013-07-01 2012-07-01 true Audio disturbances. § 1002.12... RECREATION § 1002.12 Audio disturbances. (a) The following are prohibited: (1) Operating motorized equipment or machinery such as an electric generating plant, motor vehicle, motorized toy, or an audio device...

  13. 47 CFR 10.520 - Common audio attention signal.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 1 2013-10-01 2013-10-01 false Common audio attention signal. 10.520 Section... Equipment Requirements § 10.520 Common audio attention signal. A Participating CMS Provider and equipment manufacturers may only market devices for public use under part 10 that include an audio attention signal that...

  14. 37 CFR 202.22 - Acquisition and deposit of unpublished audio and audiovisual transmission programs.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... unpublished audio and audiovisual transmission programs. 202.22 Section 202.22 Patents, Trademarks, and... REGISTRATION OF CLAIMS TO COPYRIGHT § 202.22 Acquisition and deposit of unpublished audio and audiovisual... and copies of unpublished audio and audiovisual transmission programs by the Library of Congress under...

  15. 36 CFR 2.12 - Audio disturbances.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 36 Parks, Forests, and Public Property 1 2013-07-01 2013-07-01 false Audio disturbances. 2.12... RESOURCE PROTECTION, PUBLIC USE AND RECREATION § 2.12 Audio disturbances. (a) The following are prohibited..., motorized toy, or an audio device, such as a radio, television set, tape deck or musical instrument, in a...

  16. 37 CFR 202.22 - Acquisition and deposit of unpublished audio and audiovisual transmission programs.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... unpublished audio and audiovisual transmission programs. 202.22 Section 202.22 Patents, Trademarks, and... REGISTRATION OF CLAIMS TO COPYRIGHT § 202.22 Acquisition and deposit of unpublished audio and audiovisual... and copies of unpublished audio and audiovisual transmission programs by the Library of Congress under...

  17. ENERGY STAR Certified Audio Video

    EPA Pesticide Factsheets

    Certified models meet all ENERGY STAR requirements as listed in the Version 3.0 ENERGY STAR Program Requirements for Audio Video Equipment that are effective as of May 1, 2013. A detailed listing of key efficiency criteria is available at http://www.energystar.gov/index.cfm?c=audio_dvd.pr_crit_audio_dvd

  18. 36 CFR 2.12 - Audio disturbances.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 36 Parks, Forests, and Public Property 1 2014-07-01 2014-07-01 false Audio disturbances. 2.12... RESOURCE PROTECTION, PUBLIC USE AND RECREATION § 2.12 Audio disturbances. (a) The following are prohibited..., motorized toy, or an audio device, such as a radio, television set, tape deck or musical instrument, in a...

  19. 47 CFR 11.33 - EAS Decoder.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...: (1) Inputs. Decoders must have the capability to receive at least two audio inputs from EAS... externally, at least two minutes of audio or text messages. A decoder manufactured without an internal means to record and store audio or text must be equipped with a means (such as an audio or digital jack...

  20. 47 CFR 11.33 - EAS Decoder.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...: (1) Inputs. Decoders must have the capability to receive at least two audio inputs from EAS... externally, at least two minutes of audio or text messages. A decoder manufactured without an internal means to record and store audio or text must be equipped with a means (such as an audio or digital jack...

  1. 47 CFR 10.520 - Common audio attention signal.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 1 2014-10-01 2014-10-01 false Common audio attention signal. 10.520 Section... Equipment Requirements § 10.520 Common audio attention signal. A Participating CMS Provider and equipment manufacturers may only market devices for public use under part 10 that include an audio attention signal that...

  2. 37 CFR 202.22 - Acquisition and deposit of unpublished audio and audiovisual transmission programs.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... unpublished audio and audiovisual transmission programs. 202.22 Section 202.22 Patents, Trademarks, and... REGISTRATION OF CLAIMS TO COPYRIGHT § 202.22 Acquisition and deposit of unpublished audio and audiovisual... and copies of unpublished audio and audiovisual transmission programs by the Library of Congress under...

  3. 47 CFR 10.520 - Common audio attention signal.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 1 2012-10-01 2012-10-01 false Common audio attention signal. 10.520 Section... Equipment Requirements § 10.520 Common audio attention signal. A Participating CMS Provider and equipment manufacturers may only market devices for public use under part 10 that include an audio attention signal that...

  4. 47 CFR 11.33 - EAS Decoder.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...: (1) Inputs. Decoders must have the capability to receive at least two audio inputs from EAS... externally, at least two minutes of audio or text messages. A decoder manufactured without an internal means to record and store audio or text must be equipped with a means (such as an audio or digital jack...

  5. 36 CFR 2.12 - Audio disturbances.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 36 Parks, Forests, and Public Property 1 2011-07-01 2011-07-01 false Audio disturbances. 2.12... RESOURCE PROTECTION, PUBLIC USE AND RECREATION § 2.12 Audio disturbances. (a) The following are prohibited..., motorized toy, or an audio device, such as a radio, television set, tape deck or musical instrument, in a...

  6. A comprehensive operating room information system using the Kinect sensors and RFID.

    PubMed

    Nouei, Mahyar Taghizadeh; Kamyad, Ali Vahidian; Soroush, Ahmad Reza; Ghazalbash, Somayeh

    2015-04-01

    Surgeons often need various types of information to be available rapidly, efficiently and safely during surgical procedures. Meanwhile, they need to keep their hands free throughout the surgery, which rules out using a mouse to control applications while maintaining sterility. In addition, they are required to record audio and video files, and to enter and save data. This work develops a comprehensive operating room information system called "MediNav" to tackle all of these issues. The integrated and comprehensive system is compatible with Health Level 7 (HL7) and Digital Imaging and Communications in Medicine (DICOM), a standard for handling, storing, printing, and transmitting information in medical imaging. In addition, a natural user interface (NUI) is designed specifically for operating rooms, where touch-less interactions with finger and hand tracking are in use. Further, the system can both record procedural data automatically and display acquired information graphically from multiple perspectives. A prototype system was tested in a live operating room environment at an Iranian teaching hospital. Contextual interviews and usability satisfaction questionnaires were also conducted with the "MediNav" system to investigate how useful the proposed system could be. The results reveal that integrating these systems into a complete solution is the key not only to streamlining data and workflow but also to maximizing the surgical team's effectiveness. It is now possible to collect and visualize medical information comprehensively, and to access a management tool with a touch-less NUI, in a quick, practical, and safe manner.

  7. NASA Tech Briefs, March 2007

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Topics include: Advanced Systems for Monitoring Underwater Sounds; Wireless Data-Acquisition System for Testing Rocket Engines; Processing Raw HST Data With Up-to-Date Calibration Data; Mobile Collection and Automated Interpretation of EEG Data; System for Secure Integration of Aviation Data; Servomotor and Controller Having Large Dynamic Range; Digital Multicasting of Multiple Audio Streams; Translator for Optimizing Fluid-Handling Components; AIRSAR Web-Based Data Processing; Pattern Matcher for Trees Constructed From Lists; Reducing a Knowledge-Base Search Space When Data Are Missing; Ground-Based Correction of Remote-Sensing Spectral Imagery; State-Chart Autocoder; Pointing History Engine for the Spitzer Space Telescope; Low-Friction, High-Stiffness Joint for Uniaxial Load Cell; Magnet-Based System for Docking of Miniature Spacecraft; Electromechanically Actuated Valve for Controlling Flow Rate; Plumbing Fixture for a Microfluidic Cartridge; Camera Mount for a Head-Up Display; Core-Cutoff Tool; Recirculation of Laser Power in an Atomic Fountain; Simplified Generation of High-Angular-Momentum Light Beams; Imaging Spectrometer on a Chip; Interferometric Quantum-Nondemolition Single-Photon Detectors; Ring-Down Spectroscopy for Characterizing a CW Raman Laser; Complex Type-II Interband Cascade MQW Photodetectors; Single-Point Access to Data Distributed on Many Processors; Estimating Dust and Water Ice Content of the Martian Atmosphere From THEMIS Data; Computing a Stability Spectrum by Use of the HHT; Theoretical Studies of Routes to Synthesis of Tetrahedral N4; Estimation Filter for Alignment of the Spitzer Space Telescope; Antenna for Measuring Electric Fields Within the Inner Heliosphere; Improved High-Voltage Gas Isolator for Ion Thruster; and Hybrid Mobile Communication Networks for Planetary Exploration.

  8. Multiple jet study data correlations. [data correlation for jet mixing flow of air jets

    NASA Technical Reports Server (NTRS)

    Walker, R. E.; Eberhardt, R. G.

    1975-01-01

    Correlations are presented which allow determination of penetration and mixing of multiple cold air jets injected normal to a ducted subsonic heated primary air stream. Correlations were obtained over jet-to-primary stream momentum flux ratios of 6 to 60 for locations from 1 to 30 jet diameters downstream of the injection plane. The range of geometric and operating variables makes the correlations relevant to gas turbine combustors. Correlations were obtained for the mixing efficiency between jets and primary stream using an energy exchange parameter. Also jet centerplane velocity and temperature trajectories were correlated and centerplane dimensionless temperature distributions defined. An assumption of a Gaussian vertical temperature distribution at all stations is shown to result in a reasonable temperature field model. Data are presented which allow comparison of predicted and measured values over the range of conditions specified above.
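    The assumed Gaussian vertical temperature distribution amounts to modelling dimensionless temperature as a Gaussian about the jet centreline. The half-width parameterization and the numbers below are illustrative, not values from the report.

```python
from math import exp, log

# Gaussian vertical profile of dimensionless temperature theta about the jet
# centreline: theta falls to half its centreline value at `half_width`.
def theta(y, y_center, theta_max, half_width):
    sigma2 = half_width ** 2 / (2 * log(2))   # variance giving the half-width
    return theta_max * exp(-((y - y_center) ** 2) / (2 * sigma2))

print(round(theta(0.3, 0.3, 0.8, 0.1), 3))   # centreline -> 0.8
print(round(theta(0.4, 0.3, 0.8, 0.1), 3))   # one half-width away -> 0.4
```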

  9. Sedimentation in mountain streams: A review of methods of measurement

    USGS Publications Warehouse

    Hedrick, Lara B.; Anderson, James T.; Welsh, Stuart A.; Lin, Lian-Shin

    2013-01-01

    The goal of this review paper is to provide a list of methods and devices used to measure sediment accumulation in wadeable streams dominated by cobble and gravel substrate. Quantitative measures of stream sedimentation are useful to monitor and study anthropogenic impacts on stream biota, and stream sedimentation is measurable with multiple sampling methods. Evaluation of sedimentation can be made by measuring the concentration of suspended sediment, or turbidity, and by determining the amount of deposited sediment, or sedimentation on the streambed. Measurements of deposited sediments are more time consuming and labor intensive than measurements of suspended sediments. Traditional techniques for characterizing sediment composition in streams include core sampling, the shovel method, visual estimation along transects, and sediment traps. This paper provides a comprehensive review of methodology, devices that can be used, and techniques for processing and analyzing samples collected to aid researchers in choosing study design and equipment.

  10. A comparative analysis reveals weak relationships between ecological factors and beta diversity of stream insect metacommunities at two spatial levels.

    PubMed

    Heino, Jani; Melo, Adriano S; Bini, Luis Mauricio; Altermatt, Florian; Al-Shami, Salman A; Angeler, David G; Bonada, Núria; Brand, Cecilia; Callisto, Marcos; Cottenie, Karl; Dangles, Olivier; Dudgeon, David; Encalada, Andrea; Göthe, Emma; Grönroos, Mira; Hamada, Neusa; Jacobsen, Dean; Landeiro, Victor L; Ligeiro, Raphael; Martins, Renato T; Miserendino, María Laura; Md Rawi, Che Salmah; Rodrigues, Marciel E; Roque, Fabio de Oliveira; Sandin, Leonard; Schmera, Denes; Sgarbi, Luciano F; Simaika, John P; Siqueira, Tadeu; Thompson, Ross M; Townsend, Colin R

    2015-03-01

    The hypotheses that beta diversity should increase with decreasing latitude and increase with spatial extent of a region have rarely been tested based on a comparative analysis of multiple datasets, and no such study has focused on stream insects. We first assessed how well variability in beta diversity of stream insect metacommunities is predicted by insect group, latitude, spatial extent, altitudinal range, and dataset properties across multiple drainage basins throughout the world. Second, we assessed the relative roles of environmental and spatial factors in driving variation in assemblage composition within each drainage basin. Our analyses were based on a dataset of 95 stream insect metacommunities from 31 drainage basins distributed around the world. We used dissimilarity-based indices to quantify beta diversity for each metacommunity and, subsequently, regressed beta diversity on insect group, latitude, spatial extent, altitudinal range, and dataset properties (e.g., number of sites and percentage of presences). Within each metacommunity, we used a combination of spatial eigenfunction analyses and partial redundancy analysis to partition variation in assemblage structure into environmental, shared, spatial, and unexplained fractions. We found that dataset properties were more important predictors of beta diversity than ecological and geographical factors across multiple drainage basins. In the within-basin analyses, environmental and spatial variables were generally poor predictors of variation in assemblage composition. Our results revealed deviation from general biodiversity patterns because beta diversity did not show the expected decreasing trend with latitude. Our results also call for reconsideration of just how predictable stream assemblages are along ecological gradients, with implications for environmental assessment and conservation decisions. Our findings may also be applicable to other dynamic systems where predictability is low.
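    As a concrete example of a dissimilarity-based beta-diversity measure of the family used in such analyses, the sketch below computes mean pairwise Sørensen dissimilarity over a metacommunity's sites. The specific index and the toy species sets are assumptions, not the study's exact choices.

```python
# Sørensen dissimilarity between two sites' species sets: 0 when identical,
# 1 when no species are shared.
def sorensen(a, b):
    shared = len(a & b)
    return 1 - 2 * shared / (len(a) + len(b))

# beta diversity of a metacommunity as the mean over all site pairs
def mean_beta(sites):
    pairs = [(i, j) for i in range(len(sites)) for j in range(i + 1, len(sites))]
    return sum(sorensen(sites[i], sites[j]) for i, j in pairs) / len(pairs)

sites = [{"mayfly", "caddisfly", "stonefly"},
         {"mayfly", "caddisfly", "midge"},
         {"midge", "blackfly", "stonefly"}]
print(round(mean_beta(sites), 3))   # -> 0.556
```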

  11. Bayesian Tracking within a Feedback Sensing Environment: Estimating Interacting, Spatially Constrained Complex Dynamical Systems from Multiple Sources of Controllable Devices

    DTIC Science & Technology

    2014-07-25

    …composition of simple temporal structures to a speaker diarization task with the goal of segmenting conference audio in the presence of an unknown number of… application domains including neuroimaging, diverse document selection, speaker diarization, stock modeling, and target tracking. We detail each of… recall performance than competing methods in a task of discovering articles preferred by the user… a gold-standard speaker diarization method, as…

  12. Sounding ruins: reflections on the production of an 'audio drift'.

    PubMed

    Gallagher, Michael

    2015-07-01

    This article is about the use of audio media in researching places, which I term 'audio geography'. The article narrates some episodes from the production of an 'audio drift', an experimental environmental sound work designed to be listened to on a portable MP3 player whilst walking in a ruinous landscape. Reflecting on how this work functions, I argue that, as well as representing places, audio geography can shape listeners' attention and bodily movements, thereby reworking places, albeit temporarily. I suggest that audio geography is particularly apt for amplifying the haunted and uncanny qualities of places. I discuss some of the issues raised for research ethics, epistemology and spectral geographies.

  13. Sounding ruins: reflections on the production of an ‘audio drift’

    PubMed Central

    Gallagher, Michael

    2014-01-01

    This article is about the use of audio media in researching places, which I term ‘audio geography’. The article narrates some episodes from the production of an ‘audio drift’, an experimental environmental sound work designed to be listened to on a portable MP3 player whilst walking in a ruinous landscape. Reflecting on how this work functions, I argue that, as well as representing places, audio geography can shape listeners’ attention and bodily movements, thereby reworking places, albeit temporarily. I suggest that audio geography is particularly apt for amplifying the haunted and uncanny qualities of places. I discuss some of the issues raised for research ethics, epistemology and spectral geographies. PMID:29708107

  14. DETECTOR FOR MODULATED AND UNMODULATED SIGNALS

    DOEpatents

    Patterson, H.H.; Webber, G.H.

    1959-08-25

    An r-f signal-detecting device is described, which is embodied in a compact coaxial circuit principally comprising a detecting crystal diode and a modulating crystal diode connected in parallel. Incoming modulated r-f signals are demodulated by the detecting crystal diode to furnish an audio input to an audio amplifier. The detecting diode will not, however, produce an audio signal from an unmodulated r-f signal. In order that unmodulated signals may be detected, such incoming signals have a locally produced audio signal superimposed on them at the modulating crystal diode, and then the "induced or artificially modulated" signal is reflected toward the detecting diode, which in the process of demodulation produces an audio signal for the audio amplifier.
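    The demodulation principle the patent relies on (a detector recovers audio from a modulated carrier but yields nothing from an unmodulated one) can be illustrated numerically. This is a signal-level sketch with an assumed sample rate and assumed frequencies, not a model of the coaxial circuit itself.

```python
# Sketch: envelope detection of an AM signal. Rectify (the diode's role),
# then low-pass filter to strip the carrier; an unmodulated carrier
# leaves an essentially flat output with no audio-rate variation.
import math

fs = 48000           # sample rate (assumed)
fc = 6000.0          # stand-in "r-f" carrier, kept low so fs resolves it
fa = 200.0           # audio modulation frequency

def detect(modulated):
    """Rectify, then moving-average over one carrier cycle."""
    n = int(fs / fc)
    sig = [max(s, 0.0) for s in modulated]
    return [sum(sig[i:i + n]) / n for i in range(len(sig) - n)]

t = [i / fs for i in range(2000)]
am = [(1 + 0.5 * math.sin(2 * math.pi * fa * x)) * math.sin(2 * math.pi * fc * x)
      for x in t]                               # modulated carrier
cw = [math.sin(2 * math.pi * fc * x) for x in t]  # unmodulated carrier

audio_am = detect(am)
audio_cw = detect(cw)
ripple = lambda s: max(s) - min(s)
print(ripple(audio_am) > 5 * ripple(audio_cw))  # -> True
```

    The modulated input produces a detected output that swings at the audio rate, while the unmodulated carrier's output is flat, which is why the patent injects a local audio tone before re-detection.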

  15. Speech information retrieval: a review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hafen, Ryan P.; Henry, Michael J.

    Audio is an information-rich component of multimedia. Information can be extracted from audio in a number of different ways, and thus there are several established audio signal analysis research fields. These fields include speech recognition, speaker recognition, audio segmentation and classification, and audio fingerprinting. The information that can be extracted with tools and methods developed in these fields can greatly enhance multimedia systems. In this paper, we present the current state of research in each of the major audio analysis fields. The goal is to introduce enough background for someone new to the field to quickly gain a high-level understanding, and to provide direction for further study.

  16. Multiple narratives: How underserved urban girls engage in co-authoring life stories and scientific stories

    NASA Astrophysics Data System (ADS)

    Thompson, Jessica Jane

    Contemporary critics of science education have noted that girls often fail to engage in learning because they cannot "see themselves" in science. Yet theory on identity, engagement, and the appropriation of scientific discourse remains underdeveloped. Using identity as a lens, I constructed two 2-week lunchtime science sessions for 17 ethnic-minority high school girls who were failing their science classes. The units of instruction were informed by a pilot study and based on principles from the literature on engagement in identity work and engagement in productive disciplinary discourse. Primary data sources included 19 hours of videotaped lunchtime sessions, 88 hours of audio-taped individual student interviews (over the course of 3-4 years), and 10 hours of audio-taped small group interviews. Secondary data sources included student journals, 48 hours of observations of science classes, teacher surveys about student participation, and academic school records. I used a case-study approach with narrative and discourse analysis. Not only were the girls individually involved in negotiating narratives about themselves and their future selves; collectively, some of the girls productively negotiated multiple identities, appropriated scientific and epistemological discourse, and learned science content. This was accomplished through the use of a hybrid discourse that blended identity talk with science talk. The use of this talk supported these girls in taking ownership of, or becoming advocates for, certain scientific ideas.

  17. Estimating the magnitude of peak discharges for selected flood frequencies on small streams in South Carolina (1975)

    USGS Publications Warehouse

    Whetstone, B.H.

    1982-01-01

    A program to collect and analyze flood data from small streams in South Carolina was conducted from 1967 to 1975 as a cooperative research project with the South Carolina Department of Highways and Public Transportation and the Federal Highway Administration. As a result of that program, a technique is presented for estimating the magnitude and frequency of floods on small streams in South Carolina with drainage areas ranging in size from 1 to 500 square miles. Peak-discharge data from 74 stream-gaging stations (records for 25 small streams were synthesized, whereas 49 stations had long-term records) were used in multiple regression procedures to obtain equations for estimating the magnitude of floods having recurrence intervals of 10, 25, 50, and 100 years on small natural streams. The significant independent variable was drainage area. Equations were developed for the three physiographic provinces of South Carolina (Coastal Plain, Piedmont, and Blue Ridge) and can be used for estimating floods on small streams. (USGS)
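    Regional equations of this kind conventionally take the power-law form Q_T = a * A^b in drainage area A and are fit by log-log least squares. The sketch below uses invented station data, not the report's South Carolina records, and the fitted coefficients are illustrative only.

```python
# Sketch: fitting a regional flood-frequency equation Q_T = a * A^b
# (peak discharge vs. drainage area) by least squares in log space.
import math

def fit_power_law(areas, peaks):
    """Least-squares fit of log Q = log a + b * log A."""
    xs = [math.log(a) for a in areas]
    ys = [math.log(q) for q in peaks]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - b * mx), b

areas = [2, 10, 50, 200, 500]          # drainage area, square miles (invented)
peaks = [180, 560, 1800, 5200, 9800]   # 50-year peak discharge, cfs (invented)
a, b = fit_power_law(areas, peaks)
print(f"Q50 ~= {a:.0f} * A^{b:.2f}")
print(round(a * 25 ** b))              # estimate for an ungaged 25 sq mi basin
```

    Fitting separate (a, b) pairs per physiographic province mirrors the report's approach of province-specific equations.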

  18. The Effectiveness of Streaming Video on Medical Student Learning: A Case Study

    PubMed Central

    Bridge, Patrick D.; Jackson, Matt; Robinson, Leah

    2009-01-01

    Information technology helps meet today's medical students’ needs by providing multiple curriculum delivery methods. Video streaming is an e-learning technology that uses the Internet to deliver curriculum while giving the student control of the content's delivery. There have been few studies conducted on the effectiveness of streaming video in medical schools. A 5-year retrospective study was conducted using three groups of students (n = 1736) to determine if the availability of streaming video in Years 1–2 of the basic science curriculum affected overall Step 1 scores for first-time test-takers. The results demonstrated a positive effect on program outcomes as streaming video became more readily available to students. Based on these findings, streaming video technology seems to be a viable tool to complement in-class delivery methods, to accommodate the needs of medical students, and to provide options for meeting the challenges of delivering the undergraduate medical curriculum. Further studies need to be conducted to continue validating the effectiveness of streaming video technology. PMID:20165525

  19. The effectiveness of streaming video on medical student learning: a case study.

    PubMed

    Bridge, Patrick D; Jackson, Matt; Robinson, Leah

    2009-08-19

    Information technology helps meet today's medical students' needs by providing multiple curriculum delivery methods. Video streaming is an e-learning technology that uses the Internet to deliver curriculum while giving the student control of the content's delivery. There have been few studies conducted on the effectiveness of streaming video in medical schools. A 5-year retrospective study was conducted using three groups of students (n = 1736) to determine if the availability of streaming video in Years 1-2 of the basic science curriculum affected overall Step 1 scores for first-time test-takers. The results demonstrated a positive effect on program outcomes as streaming video became more readily available to students. Based on these findings, streaming video technology seems to be a viable tool to complement in-class delivery methods, to accommodate the needs of medical students, and to provide options for meeting the challenges of delivering the undergraduate medical curriculum. Further studies need to be conducted to continue validating the effectiveness of streaming video technology.

  20. Characteristics of audio and sub-audio telluric signals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Telford, W.M.

    1977-06-01

    Telluric current measurements in the audio and sub-audio frequency range, made in various parts of Canada and South America over the past four years, indicate that the signal amplitude is relatively uniform over 6 to 8 midday hours (LMT) except in Chile and that the signal anisotropy is reasonably constant in azimuth.

  1. 43 CFR 8365.2-2 - Audio devices.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 43 Public Lands: Interior 2 2013-10-01 2013-10-01 false Audio devices. 8365.2-2 Section 8365.2-2..., DEPARTMENT OF THE INTERIOR RECREATION PROGRAMS VISITOR SERVICES Rules of Conduct § 8365.2-2 Audio devices. On... audio device such as a radio, television, musical instrument, or other noise producing device or...

  2. 43 CFR 8365.2-2 - Audio devices.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 43 Public Lands: Interior 2 2012-10-01 2012-10-01 false Audio devices. 8365.2-2 Section 8365.2-2..., DEPARTMENT OF THE INTERIOR RECREATION PROGRAMS VISITOR SERVICES Rules of Conduct § 8365.2-2 Audio devices. On... audio device such as a radio, television, musical instrument, or other noise producing device or...

  3. 43 CFR 8365.2-2 - Audio devices.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 43 Public Lands: Interior 2 2011-10-01 2011-10-01 false Audio devices. 8365.2-2 Section 8365.2-2..., DEPARTMENT OF THE INTERIOR RECREATION PROGRAMS VISITOR SERVICES Rules of Conduct § 8365.2-2 Audio devices. On... audio device such as a radio, television, musical instrument, or other noise producing device or...

  4. 43 CFR 8365.2-2 - Audio devices.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 43 Public Lands: Interior 2 2014-10-01 2014-10-01 false Audio devices. 8365.2-2 Section 8365.2-2..., DEPARTMENT OF THE INTERIOR RECREATION PROGRAMS VISITOR SERVICES Rules of Conduct § 8365.2-2 Audio devices. On... audio device such as a radio, television, musical instrument, or other noise producing device or...

  5. 78 FR 18416 - Sixth Meeting: RTCA Special Committee 226, Audio Systems and Equipment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-26

    ... 226, Audio Systems and Equipment AGENCY: Federal Aviation Administration (FAA), U.S. Department of Transportation (DOT). ACTION: Meeting Notice of RTCA Special Committee 226, Audio Systems and Equipment. SUMMARY... 226, Audio Systems and Equipment. DATES: The meeting will be held April 15-17, 2013 from 9:00 a.m.-5...

  6. Could Audio-Described Films Benefit from Audio Introductions? An Audience Response Study

    ERIC Educational Resources Information Center

    Romero-Fresco, Pablo; Fryer, Louise

    2013-01-01

    Introduction: Time constraints limit the quantity and type of information conveyed in audio description (AD) for films, in particular the cinematic aspects. Inspired by introductory notes for theatre AD, this study developed audio introductions (AIs) for "Slumdog Millionaire" and "Man on Wire." Each AI comprised 10 minutes of…

  7. Audio-Vision: Audio-Visual Interaction in Desktop Multimedia.

    ERIC Educational Resources Information Center

    Daniels, Lee

    Although sophisticated multimedia authoring applications are now available to amateur programmers, the use of audio in these programs has been inadequate. Due to the lack of research on the use of audio in instruction, there are few resources to assist the multimedia producer in using sound effectively and efficiently. This paper addresses the…

  8. Audio Frequency Analysis in Mobile Phones

    ERIC Educational Resources Information Center

    Aguilar, Horacio Munguía

    2016-01-01

    A new experiment using mobile phones is proposed in which its audio frequency response is analyzed using the audio port for inputting external signal and getting a measurable output. This experiment shows how the limited audio bandwidth used in mobile telephony is the main cause of the poor speech quality in this service. A brief discussion is…

  9. A Longitudinal, Quantitative Study of Student Attitudes towards Audio Feedback for Assessment

    ERIC Educational Resources Information Center

    Parkes, Mitchell; Fletcher, Peter

    2017-01-01

    This paper reports on the findings of a three-year longitudinal study investigating the experiences of postgraduate level students who were provided with audio feedback for their assessment. Results indicated that students positively received audio feedback. Overall, students indicated a preference for audio feedback over written feedback. No…

  10. Audio-Tutorial Instruction: A Strategy For Teaching Introductory College Geology.

    ERIC Educational Resources Information Center

    Fenner, Peter; Andrews, Ted F.

    The rationale of audio-tutorial instruction is discussed, and the history and development of the audio-tutorial botany program at Purdue University is described. Audio-tutorial programs in geology at eleven colleges and one school are described, illustrating several ways in which programs have been developed and integrated into courses. Programs…

  11. Deep learning

    NASA Astrophysics Data System (ADS)

    Lecun, Yann; Bengio, Yoshua; Hinton, Geoffrey

    2015-05-01

    Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
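    The backpropagation step described here can be made concrete with a minimal two-layer network; the toy task, shapes, and hyperparameters below are illustrative assumptions, not anything from the review.

```python
# Sketch: backpropagation for a one-hidden-layer tanh network on a toy
# regression task. Gradients flow from the output layer back to the
# input layer via the chain rule, and each layer's parameters are
# nudged against its gradient.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (64, 2))        # toy inputs
y = (X[:, :1] - X[:, 1:]) ** 2         # toy target

W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1

losses = []
for _ in range(500):
    h = np.tanh(X @ W1 + b1)           # hidden representation
    pred = h @ W2 + b2
    err = pred - y
    losses.append(float((err ** 2).mean()))
    # Backpropagation: chain rule, layer by layer, output to input.
    g_pred = 2 * err / len(X)          # dL/dpred
    g_W2 = h.T @ g_pred; g_b2 = g_pred.sum(0)
    g_h = g_pred @ W2.T * (1 - h ** 2)  # back through tanh
    g_W1 = X.T @ g_h;   g_b1 = g_h.sum(0)
    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

    The same update pattern, repeated over many layers and much larger parameter tensors, is what the article credits for the advances in speech, vision, and audio processing.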

  12. Deep learning.

    PubMed

    LeCun, Yann; Bengio, Yoshua; Hinton, Geoffrey

    2015-05-28

    Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

  13. Audio-video decision support for patients: the documentary genre as a basis for decision aids.

    PubMed

    Volandes, Angelo E; Barry, Michael J; Wood, Fiona; Elwyn, Glyn

    2013-09-01

    Decision support tools are increasingly using audio-visual materials. However, disagreement exists about the use of audio-visual materials as they may be subjective and biased. This is a literature review of the major texts for documentary film studies to extrapolate issues of objectivity and bias from film to decision support tools. The key features of documentary films are that they attempt to portray real events and that the attempted reality is always filtered through the lens of the filmmaker. The same key features can be said of decision support tools that use audio-visual materials. Three concerns arising from documentary film studies as they apply to the use of audio-visual materials in decision support tools include whose perspective matters (stakeholder bias), how to choose among audio-visual materials (selection bias) and how to ensure objectivity (editorial bias). Decision science needs to start a debate about how audio-visual materials are to be used in decision support tools. Simply because audio-visual materials may be subjective and open to bias does not mean that we should not use them. Methods need to be found to ensure consensus around balance and editorial control, such that audio-visual materials can be used. © 2011 John Wiley & Sons Ltd.

  14. Using Infrared Thermography to Assess Emotional Responses to Infants.

    PubMed

    Esposito, Gianluca; Nakazawa, Jun; Ogawa, Shota; Stival, Rita; Putnick, Diane L; Bornstein, Marc H

    2015-01-01

    Adult-infant interactions operate simultaneously across multiple domains and at multiple levels, from physiology to behavior. Unpackaging and understanding them therefore involves analysis of multiple data streams. In this study, we tested physiological responses and cognitive preferences for infant and adult faces in adult females and males. Infrared thermography was used to assess facial temperature changes as a measure of emotional valence, and a behavioral rating system was used to assess adults' expressed preferences. We found greater physiological activation in response to infant stimuli in females than in males. As for cognitive preferences, we found greater responses to adult stimuli than to infant stimuli in both males and females. The results are discussed in light of Life History Theory. Finally, we discuss the importance of integrating the two data streams for our conclusions.

  15. An overview of the Columbia Habitat Monitoring Program's (CHaMP) spatial-temporal design framework

    EPA Science Inventory

    We briefly review the concept of a master sample applied to stream networks in which a randomized set of stream sites is selected across a broad region to serve as a list of sites from which a subset of sites is selected to achieve multiple objectives of specific designs. The Col...

  16. Long-term dynamics of organic matter and elements exported as coarse particulates from two Caribbean montane watersheds

    Treesearch

    T. Heartsill Scalley; F.N. Scatena; S. Moya; A.E. Lugo

    2012-01-01

    In heterotrophic streams the retention and export of coarse particulate organic matter and associated elements are fundamental biogeochemical processes that influence water quality, food webs and the structural complexity of forested headwater streams. Nevertheless, few studies have documented the quantity and quality of exported organic matter over multiple years and...

  17. Spatial Variations In The Fate And Transport Of Metals In A Mining-Influenced Stream, North Fork Clear Creek, Colorado

    EPA Science Inventory

    North Fork Clear Creek (NFCC) receives acid-mine drainage (AMD) from multiple abandoned mines in the Clear Creek Watershed. Point sources of AMD originate in the Black Hawk/Central City region of the stream. Water chemistry also is influenced by several non-point sources of AMD,...

  18. Time dependent emission line profiles in the radially streaming particle model of Seyfert galaxy nuclei and quasi-stellar objects

    NASA Technical Reports Server (NTRS)

    Hubbard, R.

    1974-01-01

    The radially-streaming particle model for broad quasar and Seyfert galaxy emission features is modified to include sources of time dependence. The results are suggestive of reported observations of multiple components, variability, and transient features in the wings of Seyfert and quasi-stellar emission lines.

  19. 'What' and 'where' in the human brain.

    PubMed

    Ungerleider, L G; Haxby, J V

    1994-04-01

    Multiple visual areas in the cortex of nonhuman primates are organized into two hierarchically organized and functionally specialized processing pathways, a 'ventral stream' for object vision and a 'dorsal stream' for spatial vision. Recent findings from positron emission tomography activation studies have localized these pathways within the human brain, yielding insights into cortical hierarchies, specialization of function, and attentional mechanisms.

  20. Audio Motor Training at the Foot Level Improves Space Representation.

    PubMed

    Aggius-Vella, Elena; Campus, Claudio; Finocchietti, Sara; Gori, Monica

    2017-01-01

    Spatial representation develops through the integration of visual signals with the other senses, and a lack of vision has been shown to compromise the development of some spatial representations. In this study we tested the effect of a new rehabilitation device called ABBI (Audio Bracelet for Blind Interaction), which produces audio feedback linked to body movement, on space representation. Previous studies from our group showed that this device improves the representation of the space around the upper part of the body in early blind adults. Here we evaluate whether the audio-motor feedback produced by ABBI can also improve the audio spatial representation of sighted individuals in the space around the legs. Forty-five blindfolded sighted subjects participated in the study, subdivided into three experimental groups. An audio space localization (front-back discrimination) task was performed twice by all groups, before and after different kinds of training. One group (experimental) performed audio-motor training with the ABBI device placed on the foot. Another group (control) performed free motor activity without audio feedback associated with body movement. The third group (control) passively listened to the ABBI sound moved at foot level by the experimenter, without producing any body movement. Results showed that only the experimental group, which trained with the audio-motor feedback, showed an improvement in accuracy for sound discrimination; no improvement was observed in the two control groups. These findings suggest that audio-motor training with ABBI improves audio space perception in the space around the legs of sighted individuals as well. This result provides important input for the rehabilitation of spatial representation in the lower part of the body.

  1. Audio Motor Training at the Foot Level Improves Space Representation

    PubMed Central

    Aggius-Vella, Elena; Campus, Claudio; Finocchietti, Sara; Gori, Monica

    2017-01-01

    Spatial representation develops through the integration of visual signals with the other senses, and a lack of vision has been shown to compromise the development of some spatial representations. In this study we tested the effect of a new rehabilitation device called ABBI (Audio Bracelet for Blind Interaction), which produces audio feedback linked to body movement, on space representation. Previous studies from our group showed that this device improves the representation of the space around the upper part of the body in early blind adults. Here we evaluate whether the audio-motor feedback produced by ABBI can also improve the audio spatial representation of sighted individuals in the space around the legs. Forty-five blindfolded sighted subjects participated in the study, subdivided into three experimental groups. An audio space localization (front-back discrimination) task was performed twice by all groups, before and after different kinds of training. One group (experimental) performed audio-motor training with the ABBI device placed on the foot. Another group (control) performed free motor activity without audio feedback associated with body movement. The third group (control) passively listened to the ABBI sound moved at foot level by the experimenter, without producing any body movement. Results showed that only the experimental group, which trained with the audio-motor feedback, showed an improvement in accuracy for sound discrimination; no improvement was observed in the two control groups. These findings suggest that audio-motor training with ABBI improves audio space perception in the space around the legs of sighted individuals as well. This result provides important input for the rehabilitation of spatial representation in the lower part of the body. PMID:29326564

  2. Effectiveness and Comparison of Various Audio Distraction Aids in Management of Anxious Dental Paediatric Patients.

    PubMed

    Navit, Saumya; Johri, Nikita; Khan, Suleman Abbas; Singh, Rahul Kumar; Chadha, Dheera; Navit, Pragati; Sharma, Anshul; Bahuguna, Rachana

    2015-12-01

    Dental anxiety is a widespread phenomenon and a concern for paediatric dentistry. The inability of children to deal with threatening dental stimuli often manifests as behaviour management problems. Nowadays, the use of non-aversive behaviour management techniques, which are more acceptable to parents, patients, and practitioners, is increasingly advocated. The present study was therefore conducted to find out which audio aid was most effective in managing anxious children. The aim was to compare the efficacy of audio-distraction aids in reducing the anxiety of paediatric patients undergoing various stressful and invasive dental procedures; the objectives were to ascertain whether audio distraction is an effective means of anxiety management and, if so, which type of audio aid is most effective. A total of 150 children, aged 6 to 12 years, randomly selected from patients attending their first dental check-up, were placed in five groups of 30 each: a control group, an instrumental music group, a musical nursery rhymes group, a movie songs group, and an audio stories group. The control group was treated in a normal set-up, while the audio groups listened to the various audio presentations during treatment. Each child had four visits. In each visit, after the procedure was completed, the children's anxiety levels were measured with the Venham's Picture Test (VPT), the Venham's Clinical Rating Scale (VCRS), and pulse rate measured with a pulse oximeter. A significant difference was seen between all groups for mean pulse rate, which increased over subsequent visits, but no significant difference was seen in the VPT and VCRS scores. Audio aids in general reduced anxiety compared with the control group, and the most significant reduction in anxiety was observed in the audio stories group. The conclusion derived from the present study was that audio distraction was effective in reducing anxiety and that audio stories were the most effective aid.

  3. EPA Office of Water (OW): 2002 SPARROW Total NP (Catchments)

    EPA Pesticide Factsheets

    SPARROW (SPAtially Referenced Regressions On Watershed attributes) is a watershed modeling tool whose output allows the user to interpret water-quality monitoring data at the regional and sub-regional scale. The model relates in-stream water-quality measurements to spatially referenced characteristics of watersheds, including pollutant sources and the environmental factors that affect rates of pollutant delivery to streams from the land, as well as aquatic, in-stream processing. The core of the model is a nonlinear regression equation describing the non-conservative transport of contaminants from point and non-point (or "diffuse") sources on land to rivers and through the stream and river network. SPARROW estimates contaminant concentrations, loads (or "mass," the product of concentration and streamflow), and yields in streams (mass of nitrogen and of phosphorus entering a stream per acre of land). It empirically estimates the origin and fate of contaminants in streams and receiving bodies, and quantifies uncertainties in model predictions. The model predictions are illustrated through detailed maps that provide information about contaminant loadings and source contributions at multiple scales for specific stream reaches, basins, or other geographic areas.
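    The routing idea at the core of SPARROW, in which loads attenuate with in-stream travel time as they accumulate down the reach network, can be sketched with a first-order decay term. The reach network, source inputs, and decay coefficient below are invented; the real model estimates such coefficients by nonlinear regression against monitored loads.

```python
# Sketch: accumulate contaminant loads down a tiny reach network,
# applying first-order exponential decay over each reach's travel time.
import math

# reach id -> (downstream reach or None, local source input (kg),
#              travel time through the reach (days)); all values invented
network = {
    1: (3, 100.0, 0.5),
    2: (3, 200.0, 1.0),
    3: (None, 50.0, 0.2),
}
decay = 0.3  # assumed per-day first-order in-stream loss coefficient

def outlet_load(network, decay):
    """Route decayed loads from headwaters to the terminal reach."""
    loads = {r: src for r, (_, src, _) in network.items()}
    # ids are sorted so upstream reaches are processed before downstream
    for r in sorted(network):
        down, _, t = network[r]
        attenuated = loads[r] * math.exp(-decay * t)
        if down is None:
            return attenuated
        loads[down] += attenuated

print(round(outlet_load(network, decay), 1))  # -> 267.7
```

    Setting decay to zero makes transport conservative (the outlet sees the full 350 kg of inputs), which shows how the fitted decay term quantifies in-stream processing.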

  4. Multivariate classification of small order watersheds in the Quabbin Reservoir Basin, Massachusetts

    USGS Publications Warehouse

    Lent, R.M.; Waldron, M.C.; Rader, J.C.

    1998-01-01

    A multivariate approach was used to analyze hydrologic, geologic, geographic, and water-chemistry data from small order watersheds in the Quabbin Reservoir Basin in central Massachusetts. Eighty-three small order watersheds were delineated, and landscape attributes defining hydrologic, geologic, and geographic features of the watersheds were compiled from geographic information system data layers. Principal components analysis was used to evaluate 11 chemical constituents collected bi-weekly for 1 year at 15 surface-water stations in order to subdivide the basin into subbasins comprised of watersheds with similar water-quality characteristics. Three principal components accounted for about 90 percent of the variance in the water chemistry data. The principal components were defined as a biogeochemical variable related to wetland density, an acid-neutralization variable, and a road-salt variable related to density of primary roads. Three subbasins were identified. Analysis of variance and multiple comparisons of means were used to identify significant differences in stream water chemistry and landscape attributes among subbasins. All stream water constituents were significantly different among subbasins. Multiple regression techniques were used to relate stream water chemistry to landscape attributes. Important differences in landscape attributes were related to wetlands, slope, and soil type.
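    The principal-components step can be sketched directly from a site-by-constituent data matrix: standardize each constituent, take eigenvectors of the correlation matrix, and report the variance fraction each component explains. The random data below stand in for the bi-weekly measurements, so the explained-variance fractions will not match the roughly 90 percent reported for the study.

```python
# Sketch: principal components of a water-chemistry matrix
# (sampling dates x constituents), via the correlation matrix.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(26, 11))        # 26 bi-weekly dates x 11 constituents

Z = (X - X.mean(0)) / X.std(0)       # standardize each constituent
R = np.corrcoef(Z, rowvar=False)     # 11 x 11 correlation matrix
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]    # largest variance first
explained = eigvals[order] / eigvals.sum()

print("variance explained by first 3 PCs:", explained[:3].round(3))
scores = Z @ eigvecs[:, order[:3]]   # station/date scores on the first 3 PCs
```

    The component loadings (columns of the reordered eigenvector matrix) are what let the authors interpret each PC as a wetland-density, acid-neutralization, or road-salt variable.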

  5. Responding Effectively to Composition Students: Comparing Student Perceptions of Written and Audio Feedback

    ERIC Educational Resources Information Center

    Bilbro, J.; Iluzada, C.; Clark, D. E.

    2013-01-01

    The authors compared student perceptions of audio and written feedback in order to assess what types of students may benefit from receiving audio feedback on their essays rather than written feedback. Many instructors previously have reported the advantages they see in audio feedback, but little quantitative research has been done on how the…

  6. Design and Usability Testing of an Audio Platform Game for Players with Visual Impairments

    ERIC Educational Resources Information Center

    Oren, Michael; Harding, Chris; Bonebright, Terri L.

    2008-01-01

    This article reports on the evaluation of a novel audio platform game that creates a spatial, interactive experience via audio cues. A pilot study with players with visual impairments, and usability testing comparing the visual and audio game versions using both sighted players and players with visual impairments, revealed that all the…

  7. 78 FR 57673 - Eighth Meeting: RTCA Special Committee 226, Audio Systems and Equipment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-19

    ... Committee 226, Audio Systems and Equipment AGENCY: Federal Aviation Administration (FAA), U.S. Department of Transportation (DOT). ACTION: Meeting Notice of RTCA Special Committee 226, Audio Systems and Equipment. SUMMARY... Committee 226, Audio Systems and Equipment. DATES: The meeting will be held October 8-10, 2012 from 9:00 a.m...

  8. 77 FR 37732 - Fourteenth Meeting: RTCA Special Committee 224, Audio Systems and Equipment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-22

    ... Committee 224, Audio Systems and Equipment AGENCY: Federal Aviation Administration (FAA), U.S. Department of Transportation (DOT). ACTION: Meeting Notice of RTCA Special Committee 224, Audio Systems and Equipment. SUMMARY... Committee 224, Audio Systems and Equipment. DATES: The meeting will be held July 11, 2012, from 10 a.m.-4 p...

  9. 76 FR 57923 - Establishment of Rules and Policies for the Satellite Digital Audio Radio Service in the 2310...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-19

    ... Rules and Policies for the Satellite Digital Audio Radio Service in the 2310-2360 MHz Frequency Band... Digital Audio Radio Service (SDARS) Second Report and Order. The information collection requirements were... of these rule sections. See Satellite Digital Audio Radio Service (SDARS) Second Report and Order...

  10. The Use of Asynchronous Audio Feedback with Online RN-BSN Students

    ERIC Educational Resources Information Center

    London, Julie E.

    2013-01-01

    The use of audio technology by online nursing educators is a recent phenomenon. Research has been conducted in the area of audio technology in different domains and populations, but very few researchers have focused on nursing. Preliminary results have indicated that using audio in place of text can increase student cognition and socialization.…

  11. Computerized Audio-Visual Instructional Sequences (CAVIS): A Versatile System for Listening Comprehension in Foreign Language Teaching.

    ERIC Educational Resources Information Center

    Aleman-Centeno, Josefina R.

    1983-01-01

    Discusses the development and evaluation of CAVIS, which consists of an Apple microcomputer used with audiovisual dialogs. Includes research on the effects of three conditions: (1) computer with audio and visual, (2) computer with audio alone and (3) audio alone in short-term and long-term recall. (EKN)

  12. Low-delay predictive audio coding for the HIVITS HDTV codec

    NASA Astrophysics Data System (ADS)

    McParland, A. K.; Gilchrist, N. H. C.

    1995-01-01

    The status of work relating to predictive audio coding, as part of the European project on High Quality Video Telephone and HD(TV) Systems (HIVITS), is reported. The predictive coding algorithm is developed, along with six-channel audio coding and decoding hardware. Demonstrations of the audio codec operating in conjunction with the video codec, are given.

  13. Cost effectiveness of the stream-gaging program in northeastern California

    USGS Publications Warehouse

    Hoffard, S.H.; Pearce, V.F.; Tasker, Gary D.; Doyle, W.H.

    1984-01-01

    Results are documented of a study of the cost effectiveness of the stream-gaging program in northeastern California. Data uses and funding sources were identified for the 127 continuous stream gages currently being operated in the study area. One stream gage was found to have insufficient data use to warrant cooperative Federal funding. Flow-routing and multiple-regression models were used to simulate flows at selected gaging stations. The models may be sufficiently accurate to replace two of the stations. The average standard error of estimate of streamflow records is 12.9 percent. This overall standard error could be reduced to 12.0 percent by using computer-recommended service routes and visit frequencies. (USGS)

  14. Shifting stream planform state decreases stream productivity yet increases riparian animal production.

    PubMed

    Venarsky, Michael P; Walters, David M; Hall, Robert O; Livers, Bridget; Wohl, Ellen

    2018-05-01

    In the Colorado Front Range (USA), disturbance history dictates stream planform. Undisturbed, old-growth streams have multiple channels and large amounts of wood and depositional habitat. Disturbed streams (wildfires and logging < 200 years ago) are single-channeled with mostly erosional habitat. We tested how these opposing stream states influenced organic matter, benthic macroinvertebrate secondary production, emerging aquatic insect flux, and riparian spider biomass. Organic matter and macroinvertebrate production did not differ among sites per unit area (m -2 ), but values were 2 ×-21 × higher in undisturbed reaches per unit of stream valley (m -1 valley) because total stream area was higher in undisturbed reaches. Insect emergence was similar among streams at the per unit area and per unit of stream valley. However, rescaling insect emergence to per meter of stream bank showed that the emerging insect biomass reaching the stream bank was lower in undisturbed sites because multi-channel reaches had 3 × more stream bank than single-channel reaches. Riparian spider biomass followed the same pattern as emerging aquatic insects, and we attribute this to bottom-up limitation caused by the multi-channeled undisturbed sites diluting prey quantity (emerging insects) reaching the stream bank (riparian spider habitat). These results show that historic landscape disturbances continue to influence stream and riparian communities in the Colorado Front Range. However, these legacy effects are only weakly influencing habitat-specific function and instead are primarily influencing stream-riparian community productivity by dictating both stream planform (total stream area, total stream bank length) and the proportional distribution of specific habitat types (pools vs riffles).

  15. The effects of road crossings on prairie stream habitat and function

    USGS Publications Warehouse

    Bouska, Wesley W.; Keane, Timothy; Paukert, Craig P.

    2010-01-01

    Improperly designed stream crossing structures may alter the form and function of stream ecosystems and habitat and prohibit the movement of aquatic organisms. Stream sections adjoining five concrete box culverts, five low-water crossings (concrete slabs vented by one or multiple culverts), and two large, single corrugated culvert vehicle crossings in eastern Kansas streams were compared to reference reaches using a geomorphologic survey and stream classification. Stream reaches were also compared upstream and downstream of crossings, and crossing measurements were used to determine which crossing design best mimicked the natural dimensions of the adjoining stream. Four of five low-water crossings, three of five box culverts, and one of two large, single corrugated pipe culverts changed classification from upstream to downstream of the crossings. Mean riffle spacing upstream at low-water crossings (8.6 bankfull widths) was double that of downstream reaches (mean 4.4 bankfull widths) but was similar upstream and downstream of box and corrugated pipe culverts. There also appeared to be greater deposition of fine sediments directly upstream of these designs. Box and corrugated culverts were more similar to natural streams than low-water crossings at transporting water, sediments, and debris during bankfull flows.

  16. Metal Sounds Stiffer than Drums for Ears, but Not Always for Hands: Low-Level Auditory Features Affect Multisensory Stiffness Perception More than High-Level Categorical Information

    PubMed Central

    Liu, Juan; Ando, Hiroshi

    2016-01-01

    Most real-world events stimulate multiple sensory modalities simultaneously. Usually, the stiffness of an object is perceived haptically. However, auditory signals also contain stiffness-related information, and people can form impressions of stiffness from the different impact sounds of metal, wood, or glass. To understand whether there is any interaction between auditory and haptic stiffness perception, and if so, whether the inferred material category is the most relevant auditory information, we conducted experiments using a force-feedback device and the modal synthesis method to present haptic stimuli and impact sound in accordance with participants’ actions, and to modulate low-level acoustic parameters, i.e., frequency and damping, without changing the inferred material categories of sound sources. We found that metal sounds consistently induced an impression of stiffer surfaces than did drum sounds in the audio-only condition, but participants haptically perceived surfaces with modulated metal sounds as significantly softer than the same surfaces with modulated drum sounds, which directly opposes the impression induced by these sounds alone. This result indicates that, although the inferred material category is strongly associated with audio-only stiffness perception, low-level acoustic parameters, especially damping, are more tightly integrated with haptic signals than the material category is. Frequency played an important role in both audio-only and audio-haptic conditions. Our study provides evidence that auditory information influences stiffness perception differently in unisensory and multisensory tasks. Furthermore, the data demonstrated that sounds with higher frequency and/or shorter decay time tended to be judged as stiffer, and contact sounds of stiff objects had no effect on the haptic perception of soft surfaces. 
We argue that the intrinsic physical relationship between object stiffness and acoustic parameters may be applied as prior knowledge to achieve robust estimation of stiffness in multisensory perception. PMID:27902718

  17. Unisensory processing and multisensory integration in schizophrenia: a high-density electrical mapping study.

    PubMed

    Stone, David B; Urrea, Laura J; Aine, Cheryl J; Bustillo, Juan R; Clark, Vincent P; Stephen, Julia M

    2011-10-01

    In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept - a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to potentials from healthy normal volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. Auditory pathways: anatomy and physiology.

    PubMed

    Pickles, James O

    2015-01-01

    This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external, middle ears, and cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream, and a dorsal mainly pattern recognition stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus in the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from cortex in multiple stages to the organ of Corti of the cochlea, is described. © 2015 Elsevier B.V. All rights reserved.

  19. Effects of land cover, topography, and built structure on seasonal water quality at multiple spatial scales.

    PubMed

    Pratt, Bethany; Chang, Heejun

    2012-03-30

    The relationship among land cover, topography, built structure and stream water quality in the Portland Metro region of Oregon and Clark County, Washington areas, USA, is analyzed using ordinary least squares (OLS) and geographically weighted regression (GWR) models. Two scales of analysis, a sectional watershed and a buffer, offered a local and a global investigation of the sources of stream pollutants. Model accuracy, measured by R(2) values, fluctuated according to the scale, season, and regression method used. While most wet season water quality parameters are associated with urban land covers, most dry season water quality parameters are related to topographic features such as elevation and slope. GWR models, which take into consideration local relations of spatial autocorrelation, had stronger results than OLS regression models. In the multiple regression models, sectioned watershed results were consistently better than the sectioned buffer results, except for dry season pH and stream temperature parameters. This suggests that while riparian land cover does have an effect on water quality, a wider contributing area needs to be included in order to account for distant sources of pollutants. Copyright © 2012 Elsevier B.V. All rights reserved.
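    The OLS-versus-GWR contrast above can be sketched as follows. This is a hypothetical illustration with synthetic station coordinates and a made-up land-cover predictor: a Gaussian distance kernel lets the fitted slope vary by location, while OLS returns one global slope.

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares: a single global coefficient vector."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def gwr_at(X, y, coords, site, bandwidth):
    """Geographically weighted regression at one location: each
    observation is down-weighted by a Gaussian kernel of its distance
    to the site, then a weighted least-squares fit is solved."""
    d = np.linalg.norm(coords - site, axis=1)
    w = np.sqrt(np.exp(-0.5 * (d / bandwidth) ** 2))
    return np.linalg.lstsq(X * w[:, None], y * w, rcond=None)[0]

# Synthetic stations: pollutant response to land cover strengthens eastward.
rng = np.random.default_rng(1)
n = 50
coords = rng.uniform(0, 10, size=(n, 2))       # hypothetical station locations
landcov = rng.uniform(0, 1, n)                 # hypothetical urban-cover fraction
X = np.column_stack([np.ones(n), landcov])
y = (1 + 0.5 * coords[:, 0]) * landcov + rng.normal(0, 0.1, n)

beta_global = ols(X, y)                                       # one global slope
beta_west = gwr_at(X, y, coords, np.array([0.0, 5.0]), 2.0)   # local slope, west
beta_east = gwr_at(X, y, coords, np.array([10.0, 5.0]), 2.0)  # local slope, east
```

    The local slopes bracket the global one, which is the spatial nonstationarity that motivates GWR over OLS in the study.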

  20. Audio-visual biofeedback for respiratory-gated radiotherapy: Impact of audio instruction and audio-visual biofeedback on respiratory-gated radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    George, Rohini; Department of Biomedical Engineering, Virginia Commonwealth University, Richmond, VA; Chung, Theodore D.

    2006-07-01

    Purpose: Respiratory gating is a commercially available technology for reducing the deleterious effects of motion during imaging and treatment. The efficacy of gating is dependent on the reproducibility within and between respiratory cycles during imaging and treatment. The aim of this study was to determine whether audio-visual biofeedback can improve respiratory reproducibility by decreasing residual motion and therefore increasing the accuracy of gated radiotherapy. Methods and Materials: A total of 331 respiratory traces were collected from 24 lung cancer patients. The protocol consisted of five breathing training sessions spaced about a week apart. Within each session the patients initially breathed without any instruction (free breathing), with audio instructions and with audio-visual biofeedback. Residual motion was quantified by the standard deviation of the respiratory signal within the gating window. Results: Audio-visual biofeedback significantly reduced residual motion compared with free breathing and audio instruction. Displacement-based gating has lower residual motion than phase-based gating. Little reduction in residual motion was found for duty cycles less than 30%; for duty cycles above 50% there was a sharp increase in residual motion. Conclusions: The efficiency and reproducibility of gating can be improved by: incorporating audio-visual biofeedback, using a 30-50% duty cycle, gating during exhalation, and using displacement-based gating.
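    The residual-motion metric described above (standard deviation of the respiratory signal inside the gating window) can be sketched for displacement-based gating. The cosine trace and duty cycles below are idealized stand-ins, not patient data.

```python
import numpy as np

def residual_motion(signal, duty_cycle):
    """Displacement-based gating sketch: keep the duty_cycle fraction of
    samples closest to end-exhale (the trace minimum); residual motion is
    the standard deviation of the signal inside that gating window."""
    n_keep = max(1, int(round(duty_cycle * signal.size)))
    gated = np.sort(signal)[:n_keep]       # samples nearest end-exhale
    return float(gated.std())

t = np.linspace(0, 60, 6000)               # 60 s idealized trace at 100 Hz
trace = 1 - np.cos(2 * np.pi * t / 4)      # 4 s breathing cycle, minima at exhale
narrow = residual_motion(trace, 0.3)       # 30% duty cycle
wide = residual_motion(trace, 0.7)         # 70% duty cycle: more residual motion
```

    Widening the duty cycle admits samples farther from end-exhale, which is the duty-cycle trade-off the study quantifies.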

  1. Comparing the Effects of Classroom Audio-Recording and Video-Recording on Preservice Teachers' Reflection of Practice

    ERIC Educational Resources Information Center

    Bergman, Daniel

    2015-01-01

    This study examined the effects of audio and video self-recording on preservice teachers' written reflections. Participants (n = 201) came from a secondary teaching methods course and its school-based (clinical) fieldwork. The audio group (n[subscript A] = 106) used audio recorders to monitor their teaching in fieldwork placements; the video group…

  2. Transana Qualitative Video and Audio Analysis Software as a Tool for Teaching Intellectual Assessment Skills to Graduate Psychology Students

    ERIC Educational Resources Information Center

    Rush, S. Craig

    2014-01-01

    This article draws on the author's experience using qualitative video and audio analysis, most notably through use of the Transana qualitative video and audio analysis software program, as an alternative method for teaching IQ administration skills to students in a graduate psychology program. Qualitative video and audio analysis may be useful for…

  3. Audio Distribution and Monitoring Circuit

    NASA Technical Reports Server (NTRS)

    Kirkland, J. M.

    1983-01-01

    Versatile circuit accepts and distributes TV audio signals. Three-meter audio distribution and monitoring circuit provides flexibility in monitoring, mixing, and distributing audio inputs and outputs at various signal and impedance levels. Program material is simultaneously monitored on three channels, or single-channel version built to monitor transmitted or received signal levels, drive speakers, interface to building communications, and drive long-line circuits.

  4. Hearing You Loud and Clear: Student Perspectives of Audio Feedback in Higher Education

    ERIC Educational Resources Information Center

    Gould, Jill; Day, Pat

    2013-01-01

    The use of audio feedback for students in a full-time community nursing degree course is appraised. The aim of this mixed methods study was to examine student views on audio feedback for written assignments. Questionnaires and a focus group were used to capture student opinion of this pilot project. The majority of students valued audio feedback…

  5. Space Shuttle Orbiter audio subsystem. [to communication and tracking system

    NASA Technical Reports Server (NTRS)

    Stewart, C. H.

    1978-01-01

    The selection of the audio multiplex control configuration for the Space Shuttle Orbiter audio subsystem is discussed and special attention is given to the evaluation criteria of cost, weight and complexity. The specifications and design of the subsystem are described and detail is given to configurations of the audio terminal and audio central control unit (ATU, ACCU). The audio input from the ACCU, at a signal level of -12.2 to 14.8 dBV, nominal range, at 1 kHz, was found to have balanced source impedance and a balanced local impedance of 6000 + or - 600 ohms at 1 kHz, dc isolated. The Lyndon B. Johnson Space Center (JSC) electroacoustic test laboratory, an audio engineering facility consisting of a collection of acoustic test chambers, analyzed problems of speaker and headset performance, multiplexed control data coupled with audio channels, and the Orbiter cabin acoustic effects on the operational performance of voice communications. This system allows technical management and project engineering to address key constraining issues, such as identifying design deficiencies of the headset interface unit and the assessment of the Orbiter cabin performance of voice communications, which affect the subsystem development.

  6. Spatialized audio improves call sign recognition during multi-aircraft control.

    PubMed

    Kim, Sungbin; Miller, Michael E; Rusnock, Christina F; Elshaw, John J

    2018-07-01

    We investigated the impact of a spatialized audio display on response time, workload, and accuracy while monitoring auditory information for relevance. The human ability to differentiate sound direction implies that spatial audio may be used to encode information. Therefore, it is hypothesized that spatial audio cues can be applied to aid differentiation of critical versus noncritical verbal auditory information. We used a human performance model and a laboratory study involving 24 participants to examine the effect of applying a notional, automated parser to present audio in a particular ear depending on information relevance. Operator workload and performance were assessed while subjects listened for and responded to relevant audio cues associated with critical information among additional noncritical information. Encoding relevance through spatial location in a spatial audio display system--as opposed to monophonic, binaural presentation--significantly reduced response time and workload, particularly for noncritical information. Future auditory displays employing spatial cues to indicate relevance have the potential to reduce workload and improve operator performance in similar task domains. Furthermore, these displays have the potential to reduce the dependence of workload and performance on the number of audio cues. Published by Elsevier Ltd.
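    The core display concept above, encoding relevance by ear, can be sketched as simple channel routing of a mono cue into a stereo frame; the function name and sample values are illustrative only.

```python
def route_to_ear(samples, critical):
    """Relevance-coded spatial audio sketch: play a mono cue entirely in
    the left ear when tagged critical, otherwise in the right ear.
    Returns (left, right) sample lists for one stereo frame."""
    silence = [0.0] * len(samples)
    return (list(samples), silence) if critical else (silence, list(samples))

cue = [0.1, 0.5, -0.3]                     # hypothetical mono audio samples
left, right = route_to_ear(cue, critical=True)
```

    A real display would pan or HRTF-render the cue rather than hard-route it, but the information coding is the same: spatial location carries the relevance tag.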

  7. Implementing Audio-CASI on Windows’ Platforms

    PubMed Central

    Cooley, Philip C.; Turner, Charles F.

    2011-01-01

    Audio computer-assisted self interviewing (Audio-CASI) technologies have recently been shown to provide important and sometimes dramatic improvements in the quality of survey measurements. This is particularly true for measurements requiring respondents to divulge highly sensitive information such as their sexual, drug use, or other sensitive behaviors. However, DOS-based Audio-CASI systems that were designed and adopted in the early 1990s have important limitations. Most salient is the poor control they provide for manipulating the video presentation of survey questions. This article reports our experiences adapting Audio-CASI to Microsoft Windows 3.1 and Windows 95 platforms. Overall, our Windows-based system provided the desired control over video presentation and afforded other advantages, including compatibility with a much wider array of audio devices than our DOS-based Audio-CASI technologies. These advantages came at the cost of increased system requirements, including the need for both more RAM and larger hard disks. While these costs will be an issue for organizations converting large inventories of PCs to Windows Audio-CASI today, this will not be a serious constraint for organizations and individuals with small inventories of machines to upgrade or those purchasing new machines today. PMID:22081743

  8. Shifting stream planform state decreases stream productivity yet increases riparian animal production

    USGS Publications Warehouse

    Venarsky, Michael P.; Walters, David M.; Hall, Robert O.; Livers, Bridget; Wohl, Ellen

    2018-01-01

    In the Colorado Front Range (USA), disturbance history dictates stream planform. Undisturbed, old-growth streams have multiple channels and large amounts of wood and depositional habitat. Disturbed streams (wildfires and logging < 200 years ago) are single-channeled with mostly erosional habitat. We tested how these opposing stream states influenced organic matter, benthic macroinvertebrate secondary production, emerging aquatic insect flux, and riparian spider biomass. Organic matter and macroinvertebrate production did not differ among sites per unit area (m−2), but values were 2 ×–21 × higher in undisturbed reaches per unit of stream valley (m−1 valley) because total stream area was higher in undisturbed reaches. Insect emergence was similar among streams at the per unit area and per unit of stream valley. However, rescaling insect emergence to per meter of stream bank showed that the emerging insect biomass reaching the stream bank was lower in undisturbed sites because multi-channel reaches had 3 × more stream bank than single-channel reaches. Riparian spider biomass followed the same pattern as emerging aquatic insects, and we attribute this to bottom-up limitation caused by the multi-channeled undisturbed sites diluting prey quantity (emerging insects) reaching the stream bank (riparian spider habitat). These results show that historic landscape disturbances continue to influence stream and riparian communities in the Colorado Front Range. However, these legacy effects are only weakly influencing habitat-specific function and instead are primarily influencing stream–riparian community productivity by dictating both stream planform (total stream area, total stream bank length) and the proportional distribution of specific habitat types (pools vs riffles).

  9. Experimental service of 3DTV broadcasting relay in Korea

    NASA Astrophysics Data System (ADS)

    Hur, Namho; Ahn, Chung-Hyun; Ahn, Chieteuk

    2002-11-01

    This paper introduces 3D HDTV relay broadcasting experiments of 2002 FIFA World Cup Korea/Japan using a terrestrial and satellite network. We have developed 3D HDTV cameras, 3D HDTV video multiplexer/demultiplexer, a 3D HDTV receiver, and a 3D HDTV OB van for field productions. By using a terrestrial and satellite network, we distributed a compressed 3D HDTV signal to predetermined demonstration venues which are approved by host broadcast services (HBS), KirchMedia, and FIFA. In this case, we transmitted a 40Mbps MPEG-2 transport stream (DVB-ASI) over a DS-3 network specified in ITU-T Rec. G.703. The video/audio compression formats are MPEG-2 main-profile, high-level and Dolby Digital AC-3 respectively. Then at venues, the recovered left and right images by the 3D HDTV receiver are displayed on a screen with polarized beam projectors.

  10. Predictive motor control of sensory dynamics in Auditory Active Sensing

    PubMed Central

    Morillon, Benjamin; Hackett, Troy A.; Kajikawa, Yoshinao; Schroeder, Charles E.

    2016-01-01

    Neuronal oscillations present potential physiological substrates for brain operations that require temporal prediction. We review this idea in the context of auditory perception. Using speech as an exemplar, we illustrate how hierarchically organized oscillations can be used to parse and encode complex input streams. We then consider the motor system as a major source of rhythms (temporal priors) in auditory processing, that act in concert with attention to sharpen sensory representations and link them across areas. We discuss the anatomo-functional pathways that could mediate this audio-motor interaction, and notably the potential role of the somatosensory cortex. Finally, we reposition temporal predictions in the context of internal models, discussing how they interact with feature-based or spatial predictions. We argue that complementary predictions interact synergistically according to the organizational principles of each sensory system, forming multidimensional filters crucial to perception. PMID:25594376

  11. Development and preliminary validation of an interactive remote physical therapy system.

    PubMed

    Mishra, Anup K; Skubic, Marjorie; Abbott, Carmen

    2015-01-01

    In this paper, we present an interactive physical therapy system (IPTS) for remote quantitative assessment of clients in the home. The system consists of two different interactive interfaces connected through a network, for a real-time low latency video conference using audio, video, skeletal, and depth data streams from a Microsoft Kinect. To test the potential of IPTS, experiments were conducted with 5 independent living senior subjects in Kansas City, MO. Also, experiments were conducted in the lab to validate the real-time biomechanical measures calculated using the skeletal data from the Microsoft Xbox 360 Kinect and Microsoft Xbox One Kinect, with ground truth data from a Vicon motion capture system. Good agreements were found in the validation tests. The results show potential capabilities of the IPTS system to provide remote physical therapy to clients, especially older adults, who may find it difficult to visit the clinic.

  12. Realization and optimization of AES algorithm on the TMS320DM6446 based on DaVinci technology

    NASA Astrophysics Data System (ADS)

    Jia, Wen-bin; Xiao, Fu-hai

    2013-03-01

    The application of the AES algorithm in a digital cinema system protects video data from illegal theft and malicious tampering, solving its security problems. At the same time, to meet the requirements for real-time, transparent encryption of high-speed audio and video data streams in the information security field, this paper analyzes the principle of the AES algorithm in depth and, based on the TMS320DM6446 hardware platform and the DaVinci software framework, proposes specific methods for realizing the AES algorithm in a digital video system, along with optimization solutions. The test results show that digital movies encrypted with AES128 cannot play normally, which ensures the security of digital movies. Comparing the performance of the AES128 algorithm before and after optimization verifies the correctness and validity of the improved algorithm.
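    The transparent stream-encryption pattern described above can be sketched in a few lines. AES itself is not in the Python standard library, so this stand-in derives a keystream from SHA-256 with a counter purely to show the symmetric encrypt/decrypt structure; a real codec would use AES-128 in CTR or GCM mode from a cryptographic library.

```python
import hashlib

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Symmetric stream-cipher sketch: XOR the data with a counter-mode
    keystream derived from SHA-256. Encrypting and decrypting are the
    same operation. Illustration only -- not AES, not security advice."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

key, nonce = b"\x01" * 16, b"\x02" * 8     # placeholder key material
frame = b"compressed video payload"        # stand-in for a transport-stream packet
ct = keystream_xor(key, nonce, frame)      # encrypt
pt = keystream_xor(key, nonce, ct)         # the same call decrypts
```

    The point mirrored from the paper: without the key, the ciphertext stream is unplayable; with it, decryption is a transparent per-packet transform suitable for real-time pipelines.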

  13. Ocean Instruments Web Site for Undergraduate, Secondary and Informal Education

    NASA Astrophysics Data System (ADS)

    Farrington, J. W.; Nevala, A.; Dolby, L. A.

    2004-12-01

    An Ocean Instruments web site has been developed that makes available information about ocean sampling and measurement instruments and platforms. The site features text, pictures, diagrams, and background information written or edited by experts in ocean science and engineering and contains links to glossaries and multimedia technologies including video streaming, audio packages, and searchable databases. The site was developed after advisory meetings with selected professors teaching undergraduate classes who responded to the question, "What could Woods Hole Oceanographic Institution supply to enhance undergraduate education in ocean sciences, life sciences, and geosciences?" Prototypes were developed and tested with students, potential users, and potential contributors. The site is hosted by WHOI. The initial five instruments featured were provided by four WHOI scientists and engineers and by one Sea Education Association faculty member. The site is now open to contributions from scientists and engineers worldwide. The site will not advertise or promote the use of individual ocean instruments.

  14. Fast two-stream method for computing diurnal-mean actinic flux in vertically inhomogeneous atmospheres

    NASA Technical Reports Server (NTRS)

    Filyushkin, V. V.; Madronich, S.; Brasseur, G. P.; Petropavlovskikh, I. V.

    1994-01-01

    Based on a derivation of the two-stream daytime-mean equations of radiative flux transfer, a method is proposed for computing daytime-mean actinic fluxes in an absorbing, scattering, vertically inhomogeneous atmosphere. The method applies direct daytime integration of the particular solutions of the two-stream approximations, or of the source functions, and is valid for any averaging period. Its merit is that the multiple-scattering computation is carried out only once for the whole averaging period, and it can be implemented with a number of widely used two-stream approximations. The method's results agree with those of 200-point multiple-scattering calculations. The method was also tested in runs with a 1-km cloud layer of optical depth 10, as well as with a background aerosol. Comparison of the results obtained for a cloud subdivided into 20 layers with those obtained for a one-layer cloud with the same optical parameters showed that direct integration of the particular solutions possesses 'analytical' accuracy. In the case of source-function interpolation, the actinic fluxes calculated above the one-layer and 20-layer clouds agreed within 1%-1.5%, while below the cloud they differed by up to 5% in the worst case. Ways of enhancing the accuracy (in a 'two-stream sense') and computational efficiency of the method are discussed.
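    The daytime-averaging step at the heart of the method can be illustrated in isolation. The sketch below (an assumption for illustration, not the paper's code) averages only the direct-beam transmission exp(-τ/μ0) over daylight hours using the standard cos(SZA) expression; a full implementation would instead integrate the two-stream particular solutions, computed once, over the same hour-angle interval.

    ```python
    import math

    def cos_sza(lat_deg, decl_deg, hour_angle_rad):
        """Cosine of the solar zenith angle for a given latitude, declination, hour angle."""
        lat, dec = math.radians(lat_deg), math.radians(decl_deg)
        return (math.sin(lat) * math.sin(dec)
                + math.cos(lat) * math.cos(dec) * math.cos(hour_angle_rad))

    def daytime_mean_direct(tau, lat_deg=40.0, decl_deg=0.0, steps=200):
        """Daytime-mean direct-beam transmission exp(-tau/mu0), trapezoid rule
        over hour angle from sunrise to sunset (where mu0 > 0)."""
        # Sunset hour angle H satisfies cos(H) = -tan(lat) * tan(dec).
        H = math.acos(max(-1.0, min(1.0, -math.tan(math.radians(lat_deg))
                                         * math.tan(math.radians(decl_deg)))))
        total = 0.0
        for i in range(steps + 1):
            h = -H + 2.0 * H * i / steps
            mu0 = cos_sza(lat_deg, decl_deg, h)
            w = 0.5 if i in (0, steps) else 1.0
            total += w * (math.exp(-tau / mu0) if mu0 > 1e-6 else 0.0)
        return total / steps
    ```

    Because the integrand is evaluated analytically at each hour angle, the quadrature cost is trivial compared with redoing a multiple-scattering solve at every time step, which is precisely the economy the method exploits.
    
    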

  15. Distinct Contributions of the Magnocellular and Parvocellular Visual Streams to Perceptual Selection

    PubMed Central

    Denison, Rachel N.; Silver, Michael A.

    2014-01-01

    During binocular rivalry, conflicting images presented to the two eyes compete for perceptual dominance, but the neural basis of this competition is disputed. In interocular switch (IOS) rivalry, rival images periodically exchanged between the two eyes generate one of two types of perceptual alternation: 1) a fast, regular alternation between the images that is time-locked to the stimulus switches and has been proposed to arise from competition at lower levels of the visual processing hierarchy, or 2) a slow, irregular alternation spanning multiple stimulus switches that has been associated with higher levels of the visual system. The existence of these two types of perceptual alternation has been influential in establishing the view that rivalry may be resolved at multiple hierarchical levels of the visual system. We varied the spatial, temporal, and luminance properties of IOS rivalry gratings and found, instead, an association between fast, regular perceptual alternations and processing by the magnocellular stream and between slow, irregular alternations and processing by the parvocellular stream. The magnocellular and parvocellular streams are two early visual pathways that are specialized for the processing of motion and form, respectively. These results provide a new framework for understanding the neural substrates of binocular rivalry that emphasizes the importance of parallel visual processing streams, and not only hierarchical organization, in the perceptual resolution of ambiguities in the visual environment. PMID:21861685

  16. Multiple Kernel Learning for Heterogeneous Anomaly Detection: Algorithm and Aviation Safety Case Study

    NASA Technical Reports Server (NTRS)

    Das, Santanu; Srivastava, Ashok N.; Matthews, Bryan L.; Oza, Nikunj C.

    2010-01-01

    The world-wide aviation system is one of the most complex dynamical systems ever developed and is generating data at an extremely rapid rate. Most modern commercial aircraft record several hundred flight parameters, including information from the guidance, navigation, and control systems, the avionics and propulsion systems, and the pilot inputs into the aircraft. These parameters may be continuous measurements or binary or categorical measurements recorded at one-second intervals for the duration of the flight. Currently, most approaches to aviation safety are reactive, meaning that they are designed to react to an aviation safety incident or accident. In this paper, we discuss a novel approach based on the theory of multiple kernel learning to detect potential safety anomalies in very large databases of discrete and continuous data from world-wide operations of commercial fleets. We pose a general anomaly detection problem which includes both discrete and continuous data streams, where we assume that the discrete streams have a causal influence on the continuous streams. We also assume that atypical sequences of events in the discrete streams can lead to off-nominal system performance. We discuss the application domain and the novel algorithms, and present results on real-world data sets. Our algorithm uncovers operationally significant events in high-dimensional data streams in the aviation industry which are not detectable using state-of-the-art methods.
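    The kernel-combination idea can be sketched minimally. This is not the authors' algorithm: in the paper the kernel weights are learned, whereas the illustrative fragment below fixes them, uses an RBF kernel for the continuous parameter vectors and a simple position-match kernel for the discrete event sequences, and scores a new flight by its average combined similarity to nominal training flights. All names are assumptions.

    ```python
    import math

    def rbf_kernel(x, y, gamma=0.5):
        """Kernel over continuous sensor vectors."""
        return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

    def match_kernel(s, t):
        """Kernel over equal-length discrete event sequences:
        fraction of positions where the symbols agree."""
        return sum(a == b for a, b in zip(s, t)) / len(s)

    def anomaly_score(cont, disc, train, eta=(0.5, 0.5)):
        """Combine the two kernels with fixed weights eta; lower average
        similarity to the nominal flights means a higher anomaly score."""
        sims = [eta[0] * rbf_kernel(cont, c) + eta[1] * match_kernel(disc, d)
                for c, d in train]
        return 1.0 - sum(sims) / len(sims)
    ```

    In a genuine multiple kernel learning setting the weights `eta` would be optimized jointly with a one-class decision function, letting the data decide how much the discrete and continuous streams each contribute.
    
    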

  17. High capacity reversible watermarking for audio by histogram shifting and predicted error expansion.

    PubMed

    Wang, Fei; Xie, Zhaoxin; Chen, Zuo

    2014-01-01

    Being reversible, the watermarking information embedded in audio signals can be extracted while the original audio data achieve lossless recovery. Currently, the few reversible audio watermarking algorithms are confronted with the following problems: relatively low signal-to-noise ratio (SNR) of the embedded audio; a large amount of auxiliary embedded location information; and the absence of accurate capacity control. In this paper, we present a novel reversible audio watermarking scheme based on improved prediction error expansion and histogram shifting. First, we use a differential evolution algorithm to optimize the prediction coefficients and then apply prediction error expansion to produce the stego data. Second, in order to reduce the length of the location map, we introduce a histogram shifting scheme. Meanwhile, the prediction error modification threshold for a given embedding capacity can be computed by our proposed scheme. Experiments show that this algorithm improves the SNR of embedded audio signals and the embedding capacity, drastically reduces the location map length, and enhances capacity control.
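    The prediction-error-expansion step can be sketched in a few lines. In this illustrative fragment the simplifications are ours: the predictor is just the left neighbor, bits are embedded only at even indices so every prediction uses an unmodified sample, and the paper's optimized coefficients, histogram shifting, threshold, and overflow handling are all omitted. The error e = x - x̂ is expanded to 2e + b, and extraction inverts it exactly.

    ```python
    def embed(samples, bits):
        """Embed one bit per even index i >= 2, predicting from the odd left
        neighbor, which is never modified."""
        y = list(samples)
        it = iter(bits)
        for i in range(2, len(y), 2):
            b = next(it, None)
            if b is None:
                break
            e = y[i] - y[i - 1]          # prediction error
            y[i] = y[i - 1] + 2 * e + b  # expanded error carries the bit
        return y

    def extract(y, n_bits):
        """Recover the embedded bits and losslessly restore the original samples."""
        x = list(y)
        bits = []
        for i in range(2, len(x), 2):
            if len(bits) == n_bits:
                break
            e2 = x[i] - x[i - 1]
            bits.append(e2 & 1)
            x[i] = x[i - 1] + (e2 >> 1)  # floor shift inverts 2e + b for any sign of e
        return bits, x
    ```

    The reversibility rests on the fact that 2e + b determines both e and b uniquely; the scheme above becomes lossy only when expansion overflows the sample range, which is exactly the case the paper's location map and threshold are designed to handle.
    
    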

  18. Audio-visual speech experience with age influences perceived audio-visual asynchrony in speech.

    PubMed

    Alm, Magnus; Behne, Dawn

    2013-10-01

    Previous research indicates that perception of audio-visual (AV) synchrony changes in adulthood. Possible explanations for these age differences include a decline in hearing acuity, a decline in cognitive processing speed, and increased experience with AV binding. The current study aims to isolate the effect of AV experience by comparing synchrony judgments from 20 young adults (20 to 30 yrs) and 20 normal-hearing middle-aged adults (50 to 60 yrs), an age range for which a decline of cognitive processing speed is expected to be minimal. When presented with AV stop consonant syllables with asynchronies ranging from 440 ms audio-lead to 440 ms visual-lead, middle-aged adults showed significantly less tolerance for audio-lead than young adults. Middle-aged adults also showed a greater shift in their point of subjective simultaneity than young adults. Natural audio-lead asynchronies are arguably more predictable than natural visual-lead asynchronies, and this predictability may render audio-lead thresholds more prone to experience-related fine-tuning.

  19. WebGL and web audio software lightweight components for multimedia education

    NASA Astrophysics Data System (ADS)

    Chang, Xin; Yuksel, Kivanc; Skarbek, Władysław

    2017-08-01

    The paper presents the results of our recent work on the development of a contemporary computing platform, DC2, for multimedia education using WebGL and Web Audio, the W3C standards. Using the literate programming paradigm, the WEBSA educational tools were developed. They offer the user (student) access to an expandable collection of WebGL shaders and Web Audio scripts. A unique feature of DC2 is the literate programming option, offered to both the author and the reader in order to improve the interactivity of lightweight WebGL and Web Audio components. For instance, users can define source audio nodes (including synthetic sources), destination audio nodes, and audio-processing nodes for sound wave shaping, spectral band filtering, convolution-based modification, etc. In the case of WebGL, besides classic graphics effects based on mesh and fractal definitions, novel shader-based image processing and analysis is offered, such as nonlinear filtering, histograms of gradients, and Bayesian classifiers.
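    The convolution-based modification mentioned above (in Web Audio, applying an impulse response such as a room reverb via a convolver node) reduces to discrete convolution of the signal with the impulse response. A minimal, language-neutral sketch of that operation, using illustrative names:

    ```python
    def convolve(signal, impulse_response):
        """Direct discrete convolution: y[n] = sum_k x[k] * h[n - k].
        Output length is len(x) + len(h) - 1."""
        n_out = len(signal) + len(impulse_response) - 1
        y = [0.0] * n_out
        for k, x in enumerate(signal):
            for j, h in enumerate(impulse_response):
                y[k + j] += x * h
        return y
    ```

    Real-time audio engines use FFT-based partitioned convolution instead of this O(N*M) loop, but the input/output relationship is the same.
    
    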

  20. Design and implementation of an audio indicator

    NASA Astrophysics Data System (ADS)

    Zheng, Shiyong; Li, Zhao; Li, Biqing

    2017-04-01

    This paper proposes an audio level indicator built around a C9014 transistor amplifier stage, an operational-amplifier LED level indicator, and a CD4017 decade counter/distributor; the circuit can drive neon and holiday lights audibly in time with the signal. The input audio signal is power-amplified by the C9014-based operational amplifier stage, and a potentiometer taps off an adjustable portion of the amplified signal to clock the CD4017 distributor, whose outputs drive the LEDs that display the circuit's operation. Using only a single CD4017 (U1), this simple audio indicator produces a two-color LED chase effect that follows the audio signal, so the LED display gives a general picture of the audio signal's variation, frequency, and corresponding level. The lights can run in four display modes (jumping, gradual change, chasing, and steady lighting), making the circuit suitable for a wide range of uses in homes, hotels, discos, theaters, advertising, and other settings of modern life.
