NASA Technical Reports Server (NTRS)
Wright, Jeffrey; Thakur, Siddharth
2006-01-01
Loci-STREAM is an evolving computational fluid dynamics (CFD) software tool for simulating possibly chemically reacting, possibly unsteady flows in diverse settings, including rocket engines, turbomachines, oil refineries, etc. Loci-STREAM implements a pressure-based flow-solving algorithm that utilizes unstructured grids. (The benefit of low memory usage by pressure-based algorithms is well recognized by experts in the field.) The algorithm is robust for flows at all speeds from zero to hypersonic. The flexibility of arbitrary polyhedral grids enables accurate, efficient simulation of flows in complex geometries, including those of plume-impingement problems. The present version - Loci-STREAM version 0.9 - includes an interface with the Portable, Extensible Toolkit for Scientific Computation (PETSc) library for access to enhanced linear-equation-solving programs therein that accelerate convergence toward a solution. The name "Loci" reflects the creation of this software within the Loci computational framework, which was developed at Mississippi State University for the primary purpose of simplifying the writing of complex multidisciplinary application programs to run in distributed-memory computing environments including clusters of personal computers. Loci has been designed to relieve application programmers of the details of programming for distributed-memory computers.
Detection with Enhanced Energy Windowing Phase I Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bass, David A.; Enders, Alexander L.
2016-12-01
This document reviews the progress of Phase I of the Detection with Enhanced Energy Windowing (DEEW) project. The DEEW project is the implementation of software incorporating an algorithm which reviews data generated by radiation portal monitors and utilizes advanced and novel techniques for detecting radiological and fissile material while not alarming on Naturally Occurring Radioactive Material. Independent testing indicated that the Enhanced Energy Windowing algorithm showed promise at reducing the probability of alarm in the stream of commerce compared to existing algorithms and other developmental algorithms, while still maintaining adequate sensitivity to threats. This document contains a brief description of the project, instructions for setting up and running the applications, and guidance to help make reviewing the output files and source code easier.
Sharifahmadian, Ershad
2006-01-01
The set partitioning in hierarchical trees (SPIHT) algorithm is a very effective and computationally simple technique for image and signal compression. Here, the author modifies the algorithm so that it provides even better performance than the SPIHT algorithm. The enhanced set partitioning in hierarchical trees (ESPIHT) algorithm is faster than the SPIHT algorithm. In addition, the proposed algorithm reduces the number of bits in the bit stream that is stored or transmitted. The author applied it to the compression of multichannel ECG data and also presents a specific procedure based on the modified algorithm for more efficient compression of multichannel ECG data. The method was evaluated on selected records from the MIT-BIH arrhythmia database. According to the experiments, the proposed method attained significant results for the compression of multichannel ECG data. Furthermore, to compress a single signal that is stored for a long time, the proposed multichannel compression method can be utilized efficiently.
Very low cost real time histogram-based contrast enhancer utilizing fixed-point DSP processing
NASA Astrophysics Data System (ADS)
McCaffrey, Nathaniel J.; Pantuso, Francis P.
1998-03-01
A real time contrast enhancement system utilizing histogram-based algorithms has been developed to operate on standard composite video signals. This low-cost DSP-based system is designed with fixed-point algorithms and an off-chip look-up table (LUT) to reduce the cost considerably over other contemporary approaches. This paper describes several real-time contrast enhancing systems advanced at the Sarnoff Corporation for high-speed visible and infrared cameras. The fixed-point enhancer was derived from these high performance cameras. The enhancer digitizes analog video and spatially subsamples the stream to qualify the scene's luminance. Simultaneously, the video is streamed through a LUT that has been programmed with the previous calculation. Reducing division operations by subsampling reduces calculation-cycles and also allows the processor to be used with cameras of nominal resolutions. All values are written to the LUT during blanking so no frames are lost. The enhancer measures 13 cm X 6.4 cm X 3.2 cm, operates off 9 VAC and consumes 12 W. This processor is small and inexpensive enough to be mounted with field deployed security cameras and can be used for surveillance, video forensics and real-time medical imaging.
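To make the LUT approach concrete, here is a minimal Python sketch of the double-buffered histogram-equalization scheme the abstract describes: a LUT is built from a spatially subsampled histogram of one frame and applied to the next. The subsampling factor, 8-bit depth, and all names are illustrative, not the Sarnoff implementation.

```python
import numpy as np

def build_equalization_lut(frame, subsample=4):
    """Build an 8-bit LUT from a spatially subsampled luminance histogram."""
    samples = frame[::subsample, ::subsample]   # subsampling cuts calculation cycles
    hist, _ = np.histogram(samples, bins=256, range=(0, 256))
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                              # normalize to [0, 1]
    return (cdf * 255).astype(np.uint8)

def enhance(frame, lut):
    # Stream the frame through the LUT programmed from the previous calculation.
    return lut[frame]

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
out = enhance(frame, build_equalization_lut(frame))  # LUT rewritten during blanking
```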
An approach to integrate the human vision psychology and perception knowledge into image enhancement
NASA Astrophysics Data System (ADS)
Wang, Hui; Huang, Xifeng; Ping, Jiang
2009-07-01
Image enhancement is a very important image preprocessing technology, especially when the image is captured under poor imaging conditions or when dealing with high-bit-depth images. The beneficiary of image enhancement may be either a human observer or a computer vision process performing some kind of higher-level image analysis, such as target detection or scene understanding. One of the main objectives of image enhancement is obtaining a high-dynamic-range, high-contrast image for human perception or interpretation. It is therefore very useful to integrate either empirical or statistical knowledge of human vision psychology and perception into image enhancement. Human vision psychology holds that humans' perception of and response to an intensity fluctuation δu of a visual signal is weighted by the background stimulus u, instead of being uniform. Three main laws describe this phenomenon in psychology and psychophysics: Weber's law, the Weber-Fechner law, and Stevens's law. This paper integrates these three laws of human vision psychology and perception into a popular image enhancement algorithm named Adaptive Plateau Equalization (APE). Experiments were done on high-bit-depth star images captured in night scenes and on infrared imagery, both static images and video streams. For the jitter problem in video streams, the algorithm uses the difference between the current frame's plateau value and the previous frame's plateau value to correct the current frame's plateau value. To account for random noise, the pixel value mapping process depends not only on the current pixel but also on the pixels in a window surrounding the current pixel; the window size is usually 3×3. The results of the improved algorithm are evaluated by entropy analysis and visual perception analysis. The experiments showed that the improved APE algorithm improves image quality: the target and the surrounding assistant targets can be identified easily, and noise is not amplified much. For low-quality images, the improved algorithm increases the information entropy and improves the aesthetic quality of the image and the video stream, while for high-quality images it does not degrade image quality.
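A minimal sketch of the plateau-equalization idea described above, assuming integer images in [0, bins) and a caller-supplied plateau value. The paper's plateau selection and windowed mapping are more elaborate; only the histogram clipping and the frame-to-frame plateau smoothing against jitter are illustrated.

```python
import numpy as np

def plateau_equalize(img, plateau, bins=16384, out_levels=256):
    """Clip the histogram at `plateau`, then equalize (for high-bit-depth images)."""
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    clipped = np.minimum(hist, plateau)          # suppress dominant background bins
    cdf = np.cumsum(clipped).astype(np.float64)
    cdf /= cdf[-1]
    return (cdf[img] * (out_levels - 1)).astype(np.uint8)

def smooth_plateau(prev_plateau, cur_plateau, alpha=0.5):
    # Anti-jitter correction: pull the current frame's plateau value toward
    # the previous frame's value, as described in the abstract.
    return prev_plateau + alpha * (cur_plateau - prev_plateau)

img = np.random.randint(0, 16384, (256, 256))    # e.g. a 14-bit star image
out = plateau_equalize(img, plateau=200)
```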
Pattern Discovery and Change Detection of Online Music Query Streams
NASA Astrophysics Data System (ADS)
Li, Hua-Fu
In this paper, an efficient stream mining algorithm, called FTP-stream (Frequent Temporal Pattern mining of streams), is proposed to find the frequent temporal patterns over melody sequence streams. In the framework of the proposed algorithm, an effective bit-sequence representation is used to reduce the time and memory needed to slide the windows. The FTP-stream algorithm can calculate the support threshold in only a single pass based on the concept of the bit-sequence representation, taking advantage of the "shift-left" and "AND" operations of the representation. Experiments show that the proposed algorithm scans the music query stream only once, runs significantly faster, and consumes less memory than existing algorithms such as SWFI-stream and Moment.
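The bit-sequence trick can be illustrated in a few lines of Python: each item keeps one bit per sliding-window slot, sliding the window is a shift-left, and pairwise co-occurrence is a bitwise AND. The window size and values are illustrative.

```python
W = 8  # sliding-window length in slots; illustrative

def slide(bits):
    """Shift-left drops the oldest slot and opens a new (empty) one."""
    return (bits << 1) & ((1 << W) - 1)

def support(bits):
    """Number of window slots in which the item occurred."""
    return bin(bits).count("1")

def co_support(bits_a, bits_b):
    # AND marks the slots where both items appear: the basis for counting
    # pairwise temporal patterns in a single pass.
    return support(bits_a & bits_b)

a, b = 0b10110100, 0b10010110
print(support(a), co_support(a, b))  # -> 4 3
```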
A new image encryption algorithm based on the fractional-order hyperchaotic Lorenz system
NASA Astrophysics Data System (ADS)
Wang, Zhen; Huang, Xia; Li, Yu-Xia; Song, Xiao-Na
2013-01-01
We propose a new image encryption algorithm based on the fractional-order hyperchaotic Lorenz system. In the process of generating the key stream, the system parameters and the derivative order are embedded in the proposed algorithm to enhance security. The algorithm is analyzed in detail in terms of security, including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. The experimental results demonstrate that the proposed image encryption scheme has the advantages of a large key space and high security for practical image encryption.
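The keystream-XOR structure of such schemes can be sketched as follows; a logistic map stands in for the fractional-order hyperchaotic Lorenz integration, which is beyond a few lines. Note that the XOR cipher is its own inverse, so applying it twice decrypts.

```python
import numpy as np

def keystream(n, x0=0.61, r=3.9999):
    """Stand-in chaotic keystream (logistic map). The paper instead integrates a
    fractional-order hyperchaotic Lorenz system and embeds its parameters and
    derivative order in the key."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) & 0xFF   # quantize each chaotic state to a key byte
    return out

def xor_cipher(data, key):
    # XOR keystream cipher: applying it twice with the same key decrypts.
    return np.bitwise_xor(data, key)

img = np.random.randint(0, 256, 1024, dtype=np.uint8)  # flattened image bytes
ks = keystream(img.size)
assert np.array_equal(xor_cipher(xor_cipher(img, ks), ks), img)
```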
Fast algorithm for automatically computing Strahler stream order
Lanfear, Kenneth J.
1990-01-01
An efficient algorithm was developed to determine Strahler stream order for segments of stream networks represented in a Geographic Information System (GIS). The algorithm correctly assigns Strahler stream order in topologically complex situations such as braided streams and multiple drainage outlets. Execution time varies nearly linearly with the number of stream segments in the network. This technique is expected to be particularly useful for studying the topology of dense stream networks derived from digital elevation model data.
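The core ordering rule is simple to state in code. Below is a minimal recursive sketch of the textbook Strahler rule (not the paper's GIS implementation, which additionally handles braided streams and multiple outlets); the network encoding is illustrative.

```python
def strahler(segment, upstream):
    """Textbook Strahler rule: order rises only where two tributaries of
    equal (maximal) order meet."""
    orders = [strahler(s, upstream) for s in upstream.get(segment, [])]
    if not orders:
        return 1                                  # headwater segment
    top = max(orders)
    return top + 1 if orders.count(top) >= 2 else top

# Illustrative network: segment -> its upstream tributaries.
upstream = {"outlet": ["a", "b"], "a": ["c", "d"]}  # b, c, d are headwaters
print(strahler("outlet", upstream))  # -> 2
```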
NASA Astrophysics Data System (ADS)
Ho, G.; Donegan, M.; Vandegriff, J.; Wagstaff, K.
We have created a system for predicting the arrival times at Earth of interplanetary (IP) shocks that originate at the Sun. This system is currently available on the web (http://sd-www.jhuapl.edu/UPOS/RISP/index.html) and runs in real-time. The input to our prediction algorithm is energetic particle data from the Electron, Proton, and Alpha Monitor (EPAM) instrument on NASA's Advanced Composition Explorer (ACE) spacecraft. Real-time EPAM data are obtained from the National Oceanic and Atmospheric Administration (NOAA) Space Environment Center (SEC). Our algorithm operates in two stages. First, it watches for a velocity dispersion signature (energetic ions show a flux enhancement followed by subsequent enhancements at lower energies), which is commonly seen upstream of a large IP shock. Once a precursor signature has been detected, a pattern recognition algorithm is used to analyze the time series profile of the particle data and generate an estimate for the shock arrival time. Tests on the algorithm show an average error of roughly 9 hours for predictions made 24 hours before the shock arrival and roughly 5 hours when the shock is 12 hours away. This can provide significant lead-time and deliver critical information to mission planners, satellite operations controllers, and scientists. As of February 4, 2004, the ACE real-time stream has been switched to include data from another detector on EPAM. We are now processing the new real-time data stream and have made improvements to our algorithm based on this data. In this paper, we report prediction results from the updated algorithm.
Frequent statistics of link-layer bit stream data based on AC-IM algorithm
NASA Astrophysics Data System (ADS)
Cao, Chenghong; Lei, Yingke; Xu, Yiming
2017-08-01
At present, there has been much research on data processing using classical pattern matching and its improved algorithms, but little on frequent statistics over link-layer bit stream data. Because classical multi-pattern matching algorithms such as the AC algorithm have high computational complexity and low efficiency and cannot be applied directly to binary bit stream data, this paper adopts a frequent-statistics method for link-layer bit stream data based on the AC-IM algorithm. The method's maximum jump distance in the pattern tree is the length of the shortest pattern string plus 3, with no matches missed. We first give a theoretical analysis of the principle of the algorithm's construction; the experimental results then show that the algorithm can adapt to the binary bit stream data environment and extract frequent sequences more accurately, with an obvious effect. Meanwhile, compared with the classical AC algorithm and other improved algorithms, the AC-IM algorithm has a greater maximum jump distance and lower time consumption.
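For reference, here is a compact Python implementation of the classical Aho-Corasick automaton over the binary alphabet, which AC-IM improves upon; the IM jump-distance optimization itself is the paper's contribution and is not reproduced here.

```python
from collections import deque

def build_ac(patterns):
    """Classical Aho-Corasick automaton over the alphabet {'0', '1'}."""
    goto, fail, out = [{}], [0], [set()]
    for p in patterns:
        s = 0
        for c in p:
            if c not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][c] = len(goto) - 1
            s = goto[s][c]
        out[s].add(p)
    q = deque(goto[0].values())          # depth-1 states fail to the root
    while q:
        r = q.popleft()
        for c, s in goto[r].items():
            q.append(s)
            f = fail[r]
            while f and c not in goto[f]:
                f = fail[f]
            fail[s] = goto[f].get(c, 0)
            out[s] |= out[fail[s]]       # inherit matches via failure links
    return goto, fail, out

def count_matches(bits, patterns):
    """Count (possibly overlapping) occurrences of each pattern in one pass."""
    goto, fail, out = build_ac(patterns)
    counts, s = {p: 0 for p in patterns}, 0
    for c in bits:
        while s and c not in goto[s]:
            s = fail[s]
        s = goto[s].get(c, 0)
        for p in out[s]:
            counts[p] += 1
    return counts

print(count_matches("1011011101", ["101", "110", "0111"]))
# -> {'101': 3, '110': 2, '0111': 1}
```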
PRESEE: An MDL/MML Algorithm to Time-Series Stream Segmenting
Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie
2013-01-01
Time-series stream is one of the most common data types in the data mining field. It is prevalent in fields such as stock market, ecology, and medical care. Segmentation is a key step to accelerate the processing speed of time-series stream mining. Previous algorithms for segmenting mainly focused on the issue of ameliorating precision instead of paying much attention to efficiency. Moreover, the performance of these algorithms depends heavily on parameters, which are hard for the users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmenting. PRESEE is based on both MDL (minimum description length) and MML (minimum message length) methods, which can segment the data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with the state-of-the-art algorithm. The empirical results show that PRESEE is very efficient for real-time stream datasets, improving segmenting speed by nearly ten times. The novelty of this algorithm is further demonstrated by the application of PRESEE in segmenting real-time stream datasets from ChinaFLUX sensor network data streams. PMID:23956693
PRESEE: an MDL/MML algorithm to time-series stream segmenting.
Xu, Kaikuo; Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie
2013-01-01
Time-series stream is one of the most common data types in the data mining field. It is prevalent in fields such as stock market, ecology, and medical care. Segmentation is a key step to accelerate the processing speed of time-series stream mining. Previous algorithms for segmenting mainly focused on the issue of ameliorating precision instead of paying much attention to efficiency. Moreover, the performance of these algorithms depends heavily on parameters, which are hard for the users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmenting. PRESEE is based on both MDL (minimum description length) and MML (minimum message length) methods, which can segment the data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with the state-of-the-art algorithm. The empirical results show that PRESEE is very efficient for real-time stream datasets, improving segmenting speed by nearly ten times. The novelty of this algorithm is further demonstrated by the application of PRESEE in segmenting real-time stream datasets from ChinaFLUX sensor network data streams.
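The parameter-free split criterion behind MDL-style segmentation can be sketched as follows: accept a breakpoint only if the total description length of two linear segments is lower than that of one. The coding cost below is a simple Gaussian-residual proxy, not PRESEE's actual MDL/MML scheme.

```python
import numpy as np

def dl(segment):
    """Description-length proxy: cost of a linear model (2 parameters) plus
    Gaussian-coded residuals."""
    n = len(segment)
    if n < 3:
        return np.inf
    t = np.arange(n)
    slope, intercept = np.polyfit(t, segment, 1)
    resid = segment - (slope * t + intercept)
    var = max(resid.var(), 1e-12)
    return 2 * np.log2(n) + 0.5 * n * np.log2(2 * np.pi * np.e * var)

def best_split(x):
    # Split only if the total description length decreases (parameter-free).
    whole = dl(x)
    cost, k = min((dl(x[:k]) + dl(x[k:]), k) for k in range(3, len(x) - 3))
    return k if cost < whole else None

x = np.concatenate([np.linspace(0, 1, 50), np.linspace(1, -1, 50)])
print(best_split(x))  # a breakpoint near index 50
```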
SOTXTSTREAM: Density-based self-organizing clustering of text streams.
Bryant, Avory C; Cios, Krzysztof J
2017-01-01
A streaming data clustering algorithm is presented building upon the density-based self-organizing stream clustering algorithm SOSTREAM. Many density-based clustering algorithms are limited by their inability to identify clusters with heterogeneous density. SOSTREAM addresses this limitation through the use of local (nearest neighbor-based) density determinations. Additionally, many stream clustering algorithms use a two-phase clustering approach. In the first phase, a micro-clustering solution is maintained online, while in the second phase, the micro-clustering solution is clustered offline to produce a macro solution. By performing self-organization techniques on micro-clusters in the online phase, SOSTREAM is able to maintain a macro clustering solution in a single phase. Leveraging concepts from SOSTREAM, a new density-based self-organizing text stream clustering algorithm, SOTXTSTREAM, is presented that addresses several shortcomings of SOSTREAM. Gains in clustering performance of this new algorithm are demonstrated on several real-world text stream datasets.
Enhancement of A5/1 encryption algorithm
NASA Astrophysics Data System (ADS)
Thomas, Ria Elin; Chandhiny, G.; Sharma, Katyayani; Santhi, H.; Gayathri, P.
2017-11-01
Mobiles have become an integral part of today's world. Various standards have been proposed for mobile communication, one of them being GSM. With the rise in mobile-based crime, it is necessary to improve the security of the information passed in the form of voice or data. GSM uses A5/1 for its encryption. Various attacks have been implemented that exploit the vulnerabilities present within the A5/1 algorithm. Thus, in this paper, we look at what these vulnerabilities are and propose the enhanced A5/1 (E-A5/1), in which we try to improve the security provided by the A5/1 algorithm by XORing the generated key stream with a pseudo-random number, without increasing the time complexity. Studying the vulnerabilities of the base algorithm (A5/1) and improving upon its security will help in future releases of the A5 family of algorithms.
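The proposed whitening step is easy to sketch: XOR the cipher's keystream with an independent pseudo-random bit stream. In the sketch below, a single 19-bit LFSR (with the tap positions of A5/1's first register) stands in for the full three-register, majority-clocked A5/1 generator; seeds and names are illustrative.

```python
import random

def lfsr_stream(seed, taps=(13, 16, 17, 18), nbits=19):
    """One 19-bit LFSR (taps of A5/1's first register); the real A5/1 combines
    three such registers with majority clocking, omitted here."""
    state = seed & ((1 << nbits) - 1)
    while True:
        out = state & 1
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (nbits - 1))
        yield out

def enhanced_keystream(base, prng):
    # The E-A5/1 idea: whiten the base keystream by XORing each bit with an
    # independent pseudo-random bit, at constant extra cost per bit.
    for bit in base:
        yield bit ^ prng.getrandbits(1)

ks = enhanced_keystream(lfsr_stream(0b1011011), random.Random(42))
print([next(ks) for _ in range(16)])
```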
Interactive collision detection for deformable models using streaming AABBs.
Zhang, Xinyu; Kim, Young J
2007-01-01
We present an interactive and accurate collision detection algorithm for deformable, polygonal objects based on the streaming computational model. Our algorithm can detect all possible pairwise primitive-level intersections between two severely deforming models at highly interactive rates. In our streaming computational model, we consider a set of axis-aligned bounding boxes (AABBs) that bound each of the given deformable objects as an input stream and perform massively parallel pairwise overlap tests on the incoming streams. As a result, we are able to prevent performance stalls in the streaming pipeline that can be caused by the expensive indexing mechanisms required by bounding volume hierarchy-based streaming algorithms. At runtime, as the underlying models deform over time, we employ a novel streaming algorithm to update the geometric changes in the AABB streams. Moreover, in order to get only the computed result (i.e., collision results between AABBs) without reading back the entire output streams, we propose a streaming en/decoding strategy that can be performed in a hierarchical fashion. After determining overlapped AABBs, we perform a primitive-level (e.g., triangle) intersection check on a serial computational model such as CPUs. We implemented the entire pipeline of our algorithm using off-the-shelf graphics processors (GPUs), such as the nVIDIA GeForce 7800 GTX, for streaming computations, and Intel Dual Core 3.4 GHz processors for serial computations. We benchmarked our algorithm with different models of varying complexities, ranging from 15K up to 50K triangles, under various deformation motions, and the timings were obtained as 30-100 FPS depending on the complexity of the models and their relative configurations. Finally, we made comparisons with a well-known GPU-based collision detection algorithm, CULLIDE [4], and observed about a three-times performance improvement over the earlier approach. We also made comparisons with a SW-based AABB culling algorithm [2] and observed about a two-times improvement.
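The per-pair test at the heart of the pipeline is the standard AABB overlap check, shown below in plain Python for clarity (the paper evaluates it massively in parallel on the GPU); the refit routine illustrates updating AABBs as vertices deform. All names are illustrative.

```python
def aabb_overlap(a_min, a_max, b_min, b_max):
    """Two axis-aligned boxes overlap iff their intervals overlap on every axis."""
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

def refit(aabbs, vertices, tris):
    # As the model deforms, each triangle's AABB is recomputed from its
    # current vertices instead of rebuilding a bounding-volume hierarchy.
    for t, (i, j, k) in enumerate(tris):
        pts = (vertices[i], vertices[j], vertices[k])
        aabbs[t] = (tuple(min(p[d] for p in pts) for d in range(3)),
                    tuple(max(p[d] for p in pts) for d in range(3)))

print(aabb_overlap((0, 0, 0), (1, 1, 1), (0.5, 0.5, 0.5), (2, 2, 2)))  # True
```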
Final Report: Sampling-Based Algorithms for Estimating Structure in Big Data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matulef, Kevin Michael
The purpose of this project was to develop sampling-based algorithms to discover hidden structure in massive data sets. Inferring structure in large data sets is an increasingly common task in many critical national security applications. These data sets come from myriad sources, such as network traffic, sensor data, and data generated by large-scale simulations. They are often so large that traditional data mining techniques are time consuming or even infeasible. To address this problem, we focus on a class of algorithms that do not compute an exact answer, but instead use sampling to compute an approximate answer using fewer resources. The particular class of algorithms that we focus on are streaming algorithms, so called because they are designed to handle high-throughput streams of data. Streaming algorithms have only a small amount of working storage - much less than the size of the full data stream - so they must necessarily use sampling to approximate the correct answer. We present two results: * A streaming algorithm called HyperHeadTail, which estimates the degree distribution of a graph (i.e., the distribution of the number of connections for each node in a network). The degree distribution is a fundamental graph property, but prior work on estimating the degree distribution in a streaming setting was impractical for many real-world applications. We improve upon prior work by developing an algorithm that can handle streams with repeated edges, and graph structures that evolve over time. * An algorithm for the task of maintaining a weighted subsample of items in a stream, when the items must be sampled according to their weight, and the weights are dynamically changing. To our knowledge, this is the first such algorithm designed for dynamically evolving weights. We expect it may be useful as a building block for other streaming algorithms on dynamic data sets.
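The second result concerns weighted sampling from a stream. As a point of reference, here is the classical Efraimidis-Spirtes A-Res weighted reservoir sampler for static weights; the report's extension to dynamically changing weights is not reproduced, and the stream contents are illustrative.

```python
import heapq, random

def weighted_reservoir(stream, k, rng=random.Random(0)):
    """Efraimidis-Spirtes A-Res: keep the k items with the largest random key
    u**(1/w); inclusion probability then respects the item weights."""
    heap = []                                # min-heap of (key, item)
    for item, w in stream:
        key = rng.random() ** (1.0 / w)
        if len(heap) < k:
            heapq.heappush(heap, (key, item))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, item))
    return [item for _, item in heap]

stream = ((f"edge{i}", 1 + i % 5) for i in range(10000))
print(weighted_reservoir(stream, k=5))
```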
NASA Astrophysics Data System (ADS)
Nurdiyanto, Heri; Rahim, Robbi; Wulan, Nur
2017-12-01
Symmetric cryptography algorithms are known to have many weaknesses in the encryption process compared with asymmetric algorithms. A symmetric stream cipher is an algorithm that works by an XOR operation between the plaintext and the key. To improve the security of the symmetric stream cipher algorithm, we improvise it by using a Triple Transposition Key, developed from the Transposition Cipher, and also apply the Base64 algorithm in the final step of encryption. Experiments show that the resulting ciphertext is sufficiently good and very random.
STREAMFINDER - I. A new algorithm for detecting stellar streams
NASA Astrophysics Data System (ADS)
Malhan, Khyati; Ibata, Rodrigo A.
2018-07-01
We have designed a powerful new algorithm to detect stellar streams in an automated and systematic way. The algorithm, which we call the STREAMFINDER, is well suited for finding dynamically cold and thin stream structures that may lie along any simple or complex orbits in Galactic stellar surveys containing any combination of positional and kinematic information. In the present contribution, we introduce the algorithm, lay out the ideas behind it, explain the methodology adopted to detect streams, and detail its workings by running it on a suite of simulations of mock Galactic survey data of similar quality to that expected from the European Space Agency/Gaia mission. We show that our algorithm is able to detect even ultra-faint stream features lying well below previous detection limits. Tests show that our algorithm will be able to detect distant halo stream structures >10° long containing as few as ∼15 members (Σ_G ∼ 33.6 mag arcsec⁻²) in the Gaia data set.
Delineating baseflow contribution areas for streams - A model and methods comparison
NASA Astrophysics Data System (ADS)
Chow, Reynold; Frind, Michael E.; Frind, Emil O.; Jones, Jon P.; Sousa, Marcelo R.; Rudolph, David L.; Molson, John W.; Nowak, Wolfgang
2016-12-01
This study addresses the delineation of areas that contribute baseflow to a stream reach, also known as stream capture zones. Such areas can be delineated using standard well capture zone delineation methods, with three important differences: (1) natural gradients are smaller compared to those produced by supply wells and are therefore subject to greater numerical errors, (2) stream discharge varies seasonally, and (3) stream discharge varies spatially. This study focuses on model-related uncertainties due to model characteristics, discretization schemes, delineation methods, and particle tracking algorithms. The methodology is applied to the Alder Creek watershed in southwestern Ontario. Four different model codes are compared: HydroGeoSphere, WATFLOW, MODFLOW, and FEFLOW. In addition, two delineation methods are compared: reverse particle tracking and reverse transport, where the latter considers local-scale parameter uncertainty by using a macrodispersion term to produce a capture probability plume. The results from this study indicate that different models can calibrate acceptably well to the same data and produce very similar distributions of hydraulic head, but can produce different capture zones. The stream capture zone is found to be highly sensitive to the particle tracking algorithm. It was also found that particle tracking by itself, if applied to complex systems such as the Alder Creek watershed, would require considerable subjective judgement in the delineation of stream capture zones. Reverse transport is an alternative and more reliable approach that provides probability intervals for the baseflow contribution areas, taking uncertainty into account. The two approaches can be used together to enhance the confidence in the final outcome.
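The backward tracing that underlies capture zone delineation can be sketched in a few lines: integrate particle positions against the flow field. This toy Euler integrator and velocity field are illustrative only; the codes compared in the study use far more careful velocity interpolation, which is precisely why their capture zones differ.

```python
import numpy as np

def reverse_track(x0, velocity, dt=1.0, steps=1000):
    """Backward particle tracking: integrate against the flow field to find
    where water discharging at x0 originated (toy Euler sketch)."""
    path = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        v = velocity(path[-1])
        path.append(path[-1] - dt * v)   # minus sign: reverse tracking
    return np.array(path)

# Toy flow toward a "stream" at x = 0: a particle released on the stream
# traces back toward its recharge location.
vel = lambda p: np.array([-0.001 * (1 + 0.1 * p[1]), 0.0005])
print(reverse_track([0.0, 0.0], vel, steps=5))
```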
Use of NTRIP for optimizing the decoding algorithm for real-time data streams.
He, Zhanke; Tang, Wenda; Yang, Xuhai; Wang, Liming; Liu, Jihua
2014-10-10
As a network transmission protocol, Networked Transport of RTCM via Internet Protocol (NTRIP) is widely used in GPS and Global Orbiting Navigational Satellite System (GLONASS) augmentation systems, such as the Continuous Operational Reference System (CORS), Wide Area Augmentation System (WAAS), and Satellite Based Augmentation Systems (SBAS). With the deployment of the BeiDou Navigation Satellite System (BDS) to serve the Asia-Pacific region, there are increasing needs for ground monitoring of BDS and for the development of high-precision real-time BeiDou products. This paper aims to optimize the decoding algorithm of NTRIP Client data streams and the user authentication strategies of the NTRIP Caster based on NTRIP. The proposed method greatly enhances the handling efficiency and significantly reduces the data transmission delay compared with the Federal Agency for Cartography and Geodesy (BKG) NTRIP. Meanwhile, a transcoding method is proposed to facilitate the data transformation from the BINary EXchange (BINEX) format to the RTCM format. The transformation scheme thus solves the problem of handling real-time data streams from Trimble receivers in the BeiDou Navigation Satellite System indigenously developed by China.
NASA Astrophysics Data System (ADS)
Wei, Chengying; Xiong, Cuilian; Liu, Huanlin
2017-12-01
A maximal multicast stream algorithm based on network coding (NC) can improve network throughput for wavelength-division multiplexing (WDM) networks, but the achieved throughput is still far less than the network's theoretical maximum. Moreover, existing multicast stream algorithms do not provide the stream distribution pattern and the routing at the same time. In this paper, an improved genetic algorithm is proposed to maximize the optical multicast throughput via NC and to determine the multicast stream distribution through a hybrid chromosome construction, for multicast with a single source and multiple destinations. The proposed hybrid chromosomes are constructed from binary chromosomes and integer chromosomes: the binary chromosomes represent the optical multicast routing, while the integer chromosomes indicate the multicast stream distribution. A fitness function is designed to guarantee that each destination can receive the maximum number of decoded multicast streams. The simulation results showed that the proposed method is far superior to typical NC-based maximal multicast stream algorithms in terms of network throughput in WDM networks.
NASA Astrophysics Data System (ADS)
Wang, H. T.; Chen, T. T.; Yan, C.; Pan, H.
2018-05-01
For the App recommendation domain of mobile phone software, a weighted Slope One algorithm is combined with an item-based collaborative filtering algorithm to further improve on the traditional collaborative filtering algorithm's problems of cold start and data matrix sparseness. The recommendation algorithm is then parallelized on the Spark platform, and a real-time streaming computing framework is introduced to improve the real-time performance of the application recommendations.
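The core predictor the abstract builds on can be sketched in plain Python as follows (the Spark parallelization and streaming layer are not shown; the ratings data and names are illustrative). Weighted Slope One averages the rating differentials between items, weighting each differential by its number of co-raters.

```python
from collections import defaultdict

def weighted_slope_one(ratings, user, target):
    """Predict `user`'s rating for `target` from average rating differentials
    to items the user already rated, weighted by co-rater counts."""
    diff_sum, count = defaultdict(float), defaultdict(int)
    for r in ratings.values():                 # one pass over all users
        if target in r:
            for item, val in r.items():
                if item != target:
                    diff_sum[item] += r[target] - val
                    count[item] += 1
    num = den = 0.0
    for item, val in ratings[user].items():    # items the user already rated
        if count[item]:
            num += (diff_sum[item] / count[item] + val) * count[item]
            den += count[item]
    return num / den if den else None

ratings = {"u1": {"appA": 5, "appB": 3, "appC": 2},
           "u2": {"appA": 3, "appB": 4},
           "u3": {"appB": 2, "appC": 5}}
print(weighted_slope_one(ratings, "u3", "appA"))  # -> ~4.33
```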
Delineating baseflow contribution areas for streams - A model and methods comparison.
Chow, Reynold; Frind, Michael E; Frind, Emil O; Jones, Jon P; Sousa, Marcelo R; Rudolph, David L; Molson, John W; Nowak, Wolfgang
2016-12-01
This study addresses the delineation of areas that contribute baseflow to a stream reach, also known as stream capture zones. Such areas can be delineated using standard well capture zone delineation methods, with three important differences: (1) natural gradients are smaller compared to those produced by supply wells and are therefore subject to greater numerical errors, (2) stream discharge varies seasonally, and (3) stream discharge varies spatially. This study focuses on model-related uncertainties due to model characteristics, discretization schemes, delineation methods, and particle tracking algorithms. The methodology is applied to the Alder Creek watershed in southwestern Ontario. Four different model codes are compared: HydroGeoSphere, WATFLOW, MODFLOW, and FEFLOW. In addition, two delineation methods are compared: reverse particle tracking and reverse transport, where the latter considers local-scale parameter uncertainty by using a macrodispersion term to produce a capture probability plume. The results from this study indicate that different models can calibrate acceptably well to the same data and produce very similar distributions of hydraulic head, but can produce different capture zones. The stream capture zone is found to be highly sensitive to the particle tracking algorithm. It was also found that particle tracking by itself, if applied to complex systems such as the Alder Creek watershed, would require considerable subjective judgement in the delineation of stream capture zones. Reverse transport is an alternative and more reliable approach that provides probability intervals for the baseflow contribution areas, taking uncertainty into account. The two approaches can be used together to enhance the confidence in the final outcome.
Online feature selection with streaming features.
Wu, Xindong; Yu, Kui; Ding, Wei; Wang, Hao; Zhu, Xingquan
2013-05-01
We propose a new online feature selection framework for applications with streaming features where the knowledge of the full feature space is unknown in advance. We define streaming features as features that flow in one by one over time, whereas the number of training examples remains fixed. This is in contrast with traditional online learning methods that only deal with sequentially added observations, with little attention being paid to streaming features. The critical challenges for Online Streaming Feature Selection (OSFS) include 1) the continuous growth of feature volumes over time, 2) a large feature space, possibly of unknown or infinite size, and 3) the unavailability of the entire feature set before learning starts. In this paper, we present a novel Online Streaming Feature Selection method to select strongly relevant and nonredundant features on the fly. An efficient Fast-OSFS algorithm is proposed to improve feature selection performance. The proposed algorithms are evaluated extensively on high-dimensional datasets and also with a real-world case study on impact crater detection. Experimental results demonstrate that the algorithms achieve better compactness and higher prediction accuracy than existing streaming feature selection algorithms.
A Fast Density-Based Clustering Algorithm for Real-Time Internet of Things Stream
Ying Wah, Teh
2014-01-01
Data streams are continuously generated over time from Internet of Things (IoT) devices. The faster all of this data is analyzed, its hidden trends and patterns discovered, and new strategies created, the faster action can be taken, creating greater value for organizations. Density-based methods are a prominent class in clustering data streams. They have the ability to detect arbitrary-shape clusters and to handle outliers, and they do not need the number of clusters in advance. Therefore, density-based clustering algorithms are a proper choice for clustering IoT streams. Recently, several density-based algorithms have been proposed for clustering data streams. However, density-based clustering in limited time is still a challenging issue. In this paper, we propose a density-based clustering algorithm for IoT streams. The method has a fast processing time, making it applicable in real-time IoT applications. Experimental results show that the proposed approach obtains high quality results with low computation time on real and synthetic datasets. PMID:25110753
A fast density-based clustering algorithm for real-time Internet of Things stream.
Amini, Amineh; Saboohi, Hadi; Wah, Teh Ying; Herawan, Tutut
2014-01-01
Data streams are continuously generated over time from Internet of Things (IoT) devices. The faster all of this data is analyzed, its hidden trends and patterns discovered, and new strategies created, the faster action can be taken, creating greater value for organizations. Density-based methods are a prominent class in clustering data streams. They have the ability to detect arbitrary-shape clusters and to handle outliers, and they do not need the number of clusters in advance. Therefore, density-based clustering algorithms are a proper choice for clustering IoT streams. Recently, several density-based algorithms have been proposed for clustering data streams. However, density-based clustering in limited time is still a challenging issue. In this paper, we propose a density-based clustering algorithm for IoT streams. The method has a fast processing time, making it applicable in real-time IoT applications. Experimental results show that the proposed approach obtains high quality results with low computation time on real and synthetic datasets.
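As one concrete illustration of the density-based style of stream clustering described above, here is a toy grid-based sketch: points fall into grid cells with exponentially decaying weights, and clusters are connected groups of dense cells, so cluster shapes are arbitrary, outliers stay in sparse cells, and the number of clusters is never fixed in advance. This is an assumption-laden miniature, not the paper's algorithm.

```python
import random
from collections import defaultdict

class DensityGrid:
    """Toy density-grid stream clusterer (illustrative miniature only)."""
    def __init__(self, cell=1.0, decay=0.999, dense_threshold=20.0):
        self.cell, self.decay, self.thr = cell, decay, dense_threshold
        self.weight = defaultdict(float)

    def insert(self, x, y):
        for key in self.weight:              # fade history: the stream evolves
            self.weight[key] *= self.decay
        cx, cy = int(x // self.cell), int(y // self.cell)
        self.weight[(cx, cy)] += 1.0         # outliers stay in sparse cells

    def clusters(self):
        dense = {k for k, w in self.weight.items() if w >= self.thr}
        seen, groups = set(), []
        for start in dense:                  # merge adjacent dense cells ->
            if start in seen:                # arbitrary-shape clusters, no
                continue                     # preset number of clusters
            stack, comp = [start], set()
            while stack:
                c = stack.pop()
                if c in comp:
                    continue
                comp.add(c)
                stack.extend((c[0] + dx, c[1] + dy)
                             for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                             if (c[0] + dx, c[1] + dy) in dense)
            seen |= comp
            groups.append(comp)
        return groups

rnd = random.Random(1)
dg = DensityGrid()
for _ in range(2000):
    dg.insert(rnd.gauss(0, 0.5), rnd.gauss(0, 0.5))
print(len(dg.clusters()), "dense cluster(s)")
```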
Real-time Enhancement, Registration, and Fusion for a Multi-Sensor Enhanced Vision System
NASA Technical Reports Server (NTRS)
Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.
2006-01-01
Over the last few years NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots while flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras. The cameras are placed in an enclosure that is mounted and flown forward-looking underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in real-time and displayed on monitors on-board the aircraft. With proper processing the camera system can provide better-than-human-observed imagery particularly during poor visibility conditions. However, to obtain this goal requires several different stages of processing including enhancement, registration, and fusion, and specialized processing hardware for real-time performance. We are using a real-time implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS system, tested, and demonstrated during flight tests in August and September of 2005. In this paper we briefly discuss the EVS image processing hardware and algorithms. We then discuss implementation issues and show examples of the results obtained during flight tests. Keywords: enhanced vision system, image enhancement, retinex, digital signal processing, sensor fusion
Data Streams: An Overview and Scientific Applications
NASA Astrophysics Data System (ADS)
Aggarwal, Charu C.
In recent years, advances in hardware technology have facilitated the ability to collect data continuously. Simple transactions of everyday life such as using a credit card, a phone, or browsing the web lead to automated data storage. Similarly, advances in information technology have led to large flows of data across IP networks. In many cases, these large volumes of data can be mined for interesting and relevant information in a wide variety of applications. When the volume of the underlying data is very large, it leads to a number of computational and mining challenges: With increasing volume of the data, it is no longer possible to process the data efficiently by using multiple passes. Rather, one can process a data item at most once. This leads to constraints on the implementation of the underlying algorithms. Therefore, stream mining algorithms typically need to be designed so that the algorithms work with one pass of the data. In most cases, there is an inherent temporal component to the stream mining process. This is because the data may evolve over time. This behavior of data streams is referred to as temporal locality. Therefore, a straightforward adaptation of one-pass mining algorithms may not be an effective solution to the task. Stream mining algorithms need to be carefully designed with a clear focus on the evolution of the underlying data. Another important characteristic of data streams is that they are often mined in a distributed fashion. Furthermore, the individual processors may have limited processing and memory. Examples of such cases include sensor networks, in which it may be desirable to perform in-network processing of data stream with limited processing and memory [1, 2]. This chapter will provide an overview of the key challenges in stream mining algorithms which arise from the unique setup in which these problems are encountered. This chapter is organized as follows. In the next section, we will discuss the generic challenges that stream mining poses to a variety of data management and data mining problems. The next section also deals with several issues which arise in the context of data stream management. In Sect. 3, we discuss several mining algorithms on the data stream model. Section 4 discusses various scientific applications of data streams. Section 5 discusses the research directions and conclusions.
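A concrete example of the one-pass constraint discussed above is the classical Misra-Gries frequent-items summary: it touches each stream item exactly once and keeps only k-1 counters, yet guarantees that any item occurring more than n/k times in a stream of n items survives in the summary.

```python
def misra_gries(stream, k):
    """One-pass frequent-items sketch with at most k-1 counters."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # Decrement all counters; evict those that reach zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

print(misra_gries(list("abacabadabacaba"), 3))  # 'a' (count > n/k) survives
```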
Real-time Enhancement, Registration, and Fusion for an Enhanced Vision System
NASA Technical Reports Server (NTRS)
Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.
2006-01-01
Over the last few years NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots while flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras. The cameras are placed in an enclosure that is mounted and flown forward-looking underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in real-time and displayed on monitors on-board the aircraft. With proper processing the camera system can provide better-than-human-observed imagery particularly during poor visibility conditions. However, to obtain this goal requires several different stages of processing including enhancement, registration, and fusion, and specialized processing hardware for real-time performance. We are using a real-time implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS system, tested, and demonstrated during flight tests in August and September of 2005. In this paper we briefly discuss the EVS image processing hardware and algorithms. We then discuss implementation issues and show examples of the results obtained during flight tests.
Hamada, Yuki; O'Connor, Ben L.; Orr, Andrew B.; ...
2016-03-26
Understanding the spatial patterns of ephemeral streams is crucial for understanding how hydrologic processes influence the abundance and distribution of wildlife habitats in desert regions. Available methods for mapping ephemeral streams at the watershed scale typically underestimate the size of channel networks. Although remote sensing is an effective means of collecting data and obtaining information on large, inaccessible areas, conventional techniques for extracting channel features are not sufficient in regions that have small topographic gradients and subtle target-background spectral contrast. Using very high resolution multispectral imagery, we developed a new algorithm that applies landscape information to map ephemeral channels in desert regions of the Southwestern United States where utility-scale solar energy development is occurring. Knowledge about landscape features and structures was integrated into the algorithm through a series of spectral transformations and spatial statistical operations. The algorithm extracted ephemeral stream channels at a local scale, identifying approximately 900% more ephemeral streams than the U.S. Geological Survey's National Hydrography Dataset. The accuracy of the algorithm in detecting channel areas was as high as 92%, and its accuracy in delineating channel center lines was 91% when compared to a subset of channel networks digitized from the very high resolution imagery. Although the algorithm captured stream channels in desert landscapes across various channel sizes and forms, it often underestimated stream headwaters and channels obscured by bright soils and sparse vegetation. While further improvement is warranted, the algorithm provides an effective means of obtaining detailed information about ephemeral streams, and it could make a significant contribution toward improving the hydrological modelling of desert environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamada, Yuki; O'Connor, Ben L.; Orr, Andrew B.
Understanding the spatial patterns of ephemeral streams is crucial for understanding how hydrologic processes influence the abundance and distribution of wildlife habitats in desert regions. Available methods for mapping ephemeral streams at the watershed scale typically underestimate the size of channel networks. Although remote sensing is an effective means of collecting data and obtaining information on large, inaccessible areas, conventional techniques for extracting channel features are not sufficient in regions that have small topographic gradients and subtle target-background spectral contrast. Using very high resolution multispectral imagery, we developed a new algorithm that applies landscape information to map ephemeral channels in desert regions of the Southwestern United States where utility-scale solar energy development is occurring. Knowledge about landscape features and structures was integrated into the algorithm through a series of spectral transformations and spatial statistical operations. The algorithm extracted ephemeral stream channels at a local scale, identifying approximately 900% more ephemeral streams than the U.S. Geological Survey's National Hydrography Dataset. The accuracy of the algorithm in detecting channel areas was as high as 92%, and its accuracy in delineating channel center lines was 91% when compared to a subset of channel networks digitized from the very high resolution imagery. Although the algorithm captured stream channels in desert landscapes across various channel sizes and forms, it often underestimated stream headwaters and channels obscured by bright soils and sparse vegetation. While further improvement is warranted, the algorithm provides an effective means of obtaining detailed information about ephemeral streams, and it could make a significant contribution toward improving the hydrological modelling of desert environments.
Effect of Fourier transform on the streaming in quantum lattice gas algorithms
NASA Astrophysics Data System (ADS)
Oganesov, Armen; Vahala, George; Vahala, Linda; Soe, Min
2018-04-01
All our previous quantum lattice gas algorithms for nonlinear physics have approximated the kinetic energy operator by streaming sequences to neighboring lattice sites. Here, the kinetic energy can be treated to all orders by Fourier transforming the kinetic energy operator with interlaced Dirac-based unitary collision operators. Benchmarking against exact solutions for the 1D nonlinear Schrodinger equation shows an extended range of parameters (soliton speeds and amplitudes) over the Dirac-based near-lattice-site streaming quantum algorithm.
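For comparison, the standard way to treat the kinetic term spectrally for the 1D nonlinear Schrodinger equation i u_t = -u_xx/2 - |u|^2 u is the split-step Fourier method, sketched below (the paper's scheme instead interleaves the Fourier transform with Dirac-based unitary collision operators on a lattice). Grid sizes and the soliton test are illustrative.

```python
import numpy as np

N, L, dt, steps = 256, 40.0, 1e-3, 2000
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)                      # angular wavenumbers
u = (np.sqrt(2.0) / np.cosh(np.sqrt(2.0) * x)).astype(complex)  # bright soliton

for _ in range(steps):
    u *= np.exp(1j * dt * np.abs(u) ** 2)           # nonlinear step in x-space
    u = np.fft.ifft(np.exp(-0.5j * dt * k ** 2) * np.fft.fft(u))  # exact kinetic step

print(np.abs(u).max())   # soliton amplitude stays near sqrt(2) ~ 1.414
```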
Mamouras, Konstantinos; Raghothaman, Mukund; Alur, Rajeev; Ives, Zachary G; Khanna, Sanjeev
2017-06-01
Real-time decision making in emerging IoT applications typically relies on computing quantitative summaries of large data streams in an efficient and incremental manner. To simplify the task of programming the desired logic, we propose StreamQRE, which provides natural and high-level constructs for processing streaming data. Our language has a novel integration of linguistic constructs from two distinct programming paradigms: streaming extensions of relational query languages and quantitative extensions of regular expressions. The former allows the programmer to employ relational constructs to partition the input data by keys and to integrate data streams from different sources, while the latter can be used to exploit the logical hierarchy in the input stream for modular specifications. We first present the core language with a small set of combinators, formal semantics, and a decidable type system. We then show how to express a number of common patterns with illustrative examples. Our compilation algorithm translates the high-level query into a streaming algorithm with precise complexity bounds on per-item processing time and total memory footprint. We also show how to integrate approximation algorithms into our framework. We report on an implementation in Java, and evaluate it with respect to existing high-performance engines for processing streaming data. Our experimental evaluation shows that (1) StreamQRE allows more natural and succinct specification of queries compared to existing frameworks, (2) the throughput of our implementation is higher than comparable systems (for example, two-to-four times greater than RxJava), and (3) the approximation algorithms supported by our implementation can lead to substantial memory savings.
Mamouras, Konstantinos; Raghothaman, Mukund; Alur, Rajeev; Ives, Zachary G.; Khanna, Sanjeev
2017-01-01
Real-time decision making in emerging IoT applications typically relies on computing quantitative summaries of large data streams in an efficient and incremental manner. To simplify the task of programming the desired logic, we propose StreamQRE, which provides natural and high-level constructs for processing streaming data. Our language has a novel integration of linguistic constructs from two distinct programming paradigms: streaming extensions of relational query languages and quantitative extensions of regular expressions. The former allows the programmer to employ relational constructs to partition the input data by keys and to integrate data streams from different sources, while the latter can be used to exploit the logical hierarchy in the input stream for modular specifications. We first present the core language with a small set of combinators, formal semantics, and a decidable type system. We then show how to express a number of common patterns with illustrative examples. Our compilation algorithm translates the high-level query into a streaming algorithm with precise complexity bounds on per-item processing time and total memory footprint. We also show how to integrate approximation algorithms into our framework. We report on an implementation in Java, and evaluate it with respect to existing high-performance engines for processing streaming data. Our experimental evaluation shows that (1) StreamQRE allows more natural and succinct specification of queries compared to existing frameworks, (2) the throughput of our implementation is higher than comparable systems (for example, two-to-four times greater than RxJava), and (3) the approximation algorithms supported by our implementation can lead to substantial memory savings. PMID:29151821
Computing Strongly Connected Components in the Streaming Model
NASA Astrophysics Data System (ADS)
Laura, Luigi; Santaroni, Federico
In this paper we present the first algorithm to compute the Strongly Connected Components of a graph in the data stream model (W-Stream), where the graph is represented by a stream of edges and we are allowed to produce intermediate output streams. The algorithm is simple, effective, and can be implemented with few lines of code: it looks at each edge in the stream, and selects the appropriate action with respect to a tree T, representing the graph connectivity seen so far. We analyze the theoretical properties of the algorithm: correctness, memory occupation (O(n log n)), per item processing time (bounded by the current height of T), and number of passes (bounded by the maximal height of T). We conclude by presenting a brief experimental evaluation of the algorithm against massive synthetic and real graphs that confirms its effectiveness: on graphs with up to 100M nodes and 4G edges, only a few passes are needed, and millions of edges per second are processed.
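For intuition, the single-pass, tree-based flavor of such algorithms can be illustrated with the simpler undirected problem: one-pass connected components via union-find, where the parent array plays the role of a connectivity tree held in small memory. The paper's W-Stream SCC algorithm additionally handles edge direction and cycle contraction, which this sketch does not.

```python
def connected_components(edge_stream, n):
    """One-pass union-find over an edge stream (undirected components)."""
    parent = list(range(n))

    def find(v):                        # path-halving find
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for u, v in edge_stream:            # each edge is examined exactly once
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv             # union; memory stays O(n)
    return len({find(v) for v in range(n)})

print(connected_components([(0, 1), (2, 3), (1, 2), (5, 6)], n=7))  # -> 3
```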
The life-cycle of upper-tropospheric jet streams identified with a novel data segmentation algorithm
NASA Astrophysics Data System (ADS)
Limbach, S.; Schömer, E.; Wernli, H.
2010-09-01
Jet streams are prominent features of the upper-tropospheric atmospheric flow. Through the thermal wind relationship these regions with intense horizontal wind speed (typically larger than 30 m/s) are associated with pronounced baroclinicity, i.e., with regions where extratropical cyclones develop due to baroclinic instability processes. Individual jet streams are non-stationary elongated features that can extend over more than 2000 km in the along-flow and 200-500 km in the across-flow direction, respectively. Their lifetime can vary between a few days and several weeks. In recent years, feature-based algorithms have been developed that allow compiling synoptic climatologies and typologies of upper-tropospheric jet streams based upon objective selection criteria and climatological reanalysis datasets. In this study a novel algorithm to efficiently identify jet streams using an extended region-growing segmentation approach is introduced. This algorithm iterates over a 4-dimensional field of horizontal wind speed from ECMWF analyses and decides at each grid point whether all prerequisites for a jet stream are met. In a single pass the algorithm keeps track of all adjacencies of these grid points and creates the 4-dimensional connected segments associated with each jet stream. In addition to the detection of these sets of connected grid points, the algorithm analyzes the development over time of the distinct 3-dimensional features each segment consists of. Important events in the development of these features, for example mergings and splittings, are detected and analyzed on a per-grid-point and per-feature basis. The output of the algorithm consists of the actual sets of grid-points augmented with information about the particular events, and of the so-called event graphs, which are an abstract representation of the distinct 3-dimensional features and events of each segment. This technique provides comprehensive information about the frequency of upper-tropospheric jet streams, their preferred regions of genesis, merging, splitting, and lysis, and statistical information about their size, amplitude and lifetime. The presentation will introduce the technique, provide example visualizations of the time evolution of the identified 3-dimensional jet stream features, and present results from a first multi-month "climatology" of upper-tropospheric jets. In the future, the technique can be applied to longer datasets, for instance reanalyses and output from global climate model simulations - and provide detailed information about key characteristics of jet stream life cycles.
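At its core, one region-growing step amounts to connected-component labeling of grid points above the wind-speed threshold; a 2D toy version is sketched below (the algorithm described above works on the full 4D field in a single pass and additionally tracks merging and splitting events). The random field is a stand-in for the ECMWF wind-speed analyses.

```python
import numpy as np
from scipy import ndimage

wind = np.random.rand(90, 180) * 40     # toy horizontal wind-speed field (m/s)
mask = wind > 30.0                      # jet criterion: wind speed > 30 m/s
labels, n = ndimage.label(mask)         # region growing = connected labeling
print(n, "candidate jet features")
```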
A real time sorting algorithm to time sort any deterministic time disordered data stream
NASA Astrophysics Data System (ADS)
Saini, J.; Mandal, S.; Chakrabarti, A.; Chattopadhyay, S.
2017-12-01
In new-generation high-intensity high-energy physics experiments, millions of free-streaming high-rate data sources are to be read out. Free-streaming data with an associated time-stamp can only be controlled by thresholds, as no trigger information is available for the readout. Therefore, these readouts are prone to collecting a large amount of noise and unwanted data. For this reason, these experiments can have an output data rate several orders of magnitude higher than the useful signal data rate. It is therefore necessary to perform online processing of the data to extract useful information from the full data set. Without trigger information, pre-processing of the free-streaming data can only be done through time-based correlation among the data set. Multiple data sources have different path delays and bandwidth utilizations, and therefore the unsorted merged data requires significant computational effort for real-time sorting before analysis. The present work reports a new high-speed scalable data stream sorting algorithm with its architectural design, verified through Field-Programmable Gate Array (FPGA) based hardware simulation. Realistic time-based simulated data, such as would be collected in a high-energy physics experiment, have been used to study the performance of the algorithm. The proposed algorithm uses parallel read-write blocks with added memory management and zero-suppression features to make it efficient for high-rate data streams. This algorithm is best suited for online data streams with deterministic time disorder/unsorting on FPGA-like hardware.
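A software analogue of such a sorter is easy to sketch when the time disorder is bounded: a min-heap of fixed size emits items in timestamp order, because once the buffer is full, nothing earlier than its minimum can still arrive. The disorder bound and data below are illustrative; the paper's FPGA design uses parallel read-write memory blocks instead.

```python
import heapq

def time_sort(stream, max_disorder):
    """Re-order a time-stamped stream whose items arrive at most
    `max_disorder` positions late (deterministic disorder bound)."""
    heap = []
    for item in stream:                     # item = (timestamp, payload)
        heapq.heappush(heap, item)
        if len(heap) > max_disorder:
            yield heapq.heappop(heap)       # safe: nothing earlier can arrive
    while heap:
        yield heapq.heappop(heap)           # drain the buffer at end of stream

hits = [(3, "a"), (1, "b"), (2, "c"), (6, "d"), (4, "e"), (5, "f")]
print(list(time_sort(hits, max_disorder=3)))  # emitted in timestamp order
```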
Stream Clustering of Growing Objects
NASA Astrophysics Data System (ADS)
Siddiqui, Zaigham Faraz; Spiliopoulou, Myra
We study incremental clustering of objects that grow and accumulate over time. The objects come from a multi-table stream e.g. streams of
NASA Astrophysics Data System (ADS)
Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn
2015-03-01
Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practices (BMPs) selection and placement. However, practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CP) - each with multiple Total Maximum Daily Load (TMDL) targets - were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all CPs were met at the lowest possible BMP implementation cost. A Genetic Algorithm (GA) and NIMS were both applied and compared. The results showed that NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach near-optimal solutions. The best solution obtained among all the GA executions had a minimized cost of 67.7 million - marginally higher, but approximately equal to that of the NIMS solution. The results highlight the utility for decision making in large-scale watershed simulation-optimization formulations.
RS-Forest: A Rapid Density Estimator for Streaming Anomaly Detection
Wu, Ke; Zhang, Kun; Fan, Wei; Edwards, Andrea; Yu, Philip S.
2015-01-01
Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation of high probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS-Forest to systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features high detection rate, fast response, and insensitivity to most of the parameter settings. Algorithm implementations and datasets are available upon request. PMID:25685112
Streaming data analytics via message passing with application to graph algorithms
Plimpton, Steven J.; Shead, Tim
2014-05-06
The need to process streaming data, which arrives continuously at high-volume in real-time, arises in a variety of contexts including data produced by experiments, collections of environmental or network sensors, and running simulations. Streaming data can also be formulated as queries or transactions which operate on a large dynamic data store, e.g. a distributed database. We describe a lightweight, portable framework named PHISH which enables a set of independent processes to compute on a stream of data in a distributed-memory parallel manner. Datums are routed between processes in patterns defined by the application. PHISH can run on top of either message-passing via MPI or sockets via ZMQ. The former means streaming computations can be run on any parallel machine which supports MPI; the latter allows them to run on a heterogeneous, geographically dispersed network of machines. We illustrate how PHISH can support streaming MapReduce operations, and describe streaming versions of three algorithms for large, sparse graph analytics: triangle enumeration, subgraph isomorphism matching, and connected component finding. Lastly, we also provide benchmark timings for MPI versus socket performance of several kernel operations useful in streaming algorithms.
The Extrapolation of Elementary Sequences
NASA Technical Reports Server (NTRS)
Laird, Philip; Saul, Ronald
1992-01-01
We study sequence extrapolation as a stream-learning problem. Input examples are a stream of data elements of the same type (integers, strings, etc.), and the problem is to construct a hypothesis that both explains the observed sequence of examples and extrapolates the rest of the stream. A primary objective -- and one that distinguishes this work from previous extrapolation algorithms -- is that the same algorithm be able to extrapolate sequences over a variety of different types, including integers, strings, and trees. We define a generous family of constructive data types, and define as our learning bias a stream language called elementary stream descriptions. We then give an algorithm that extrapolates elementary descriptions over constructive datatypes and prove that it learns correctly. For freely-generated types, we prove a polynomial time bound on descriptions of bounded complexity. An especially interesting feature of this work is the ability to provide quantitative measures of confidence in competing hypotheses, using a Bayesian model of prediction.
NASA Astrophysics Data System (ADS)
Nightingale, James; Wang, Qi; Grecos, Christos
2011-03-01
Users of the next-generation wireless paradigm known as multihomed mobile networks expect satisfactory quality of service (QoS) when accessing streamed multimedia content. The recent H.264 Scalable Video Coding (SVC) extension to the Advanced Video Coding standard (AVC) offers the facility to adapt real-time video streams in response to the dynamic conditions of the multiple network paths encountered in multihomed wireless mobile networks. Nevertheless, preexisting streaming algorithms were mainly proposed for AVC delivery over multipath wired networks and were evaluated by software simulation. This paper introduces a practical, hardware-based testbed upon which we implement and evaluate real-time H.264 SVC streaming algorithms in a realistic multihomed wireless mobile network environment. We propose an optimised streaming algorithm with several technical contributions. Firstly, we extended the AVC packet prioritisation schemes to reflect the three-dimensional granularity of SVC. Secondly, we designed a mechanism for evaluating the effects of different streamer 'read ahead window' sizes on real-time performance. Thirdly, we took account of the previously unconsidered path-switching and mobile-network tunnelling overheads encountered in real-world deployments. Finally, we implemented a path condition monitoring and reporting scheme to facilitate intelligent path switching. The proposed system has been experimentally shown to offer a significant improvement in the PSNR of the received stream compared with representative existing algorithms.
Algorithm for Compressing Time-Series Data
NASA Technical Reports Server (NTRS)
Hawkins, S. Edward, III; Darlington, Edward Hugo
2012-01-01
An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
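The essence of the scheme can be illustrated in a few lines. The following sketch uses NumPy's Chebyshev utilities; the block size and polynomial degree are arbitrary illustrative choices, not the flight algorithm's parameters, and the coefficient quantization step is omitted.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def compress_block(samples: np.ndarray, degree: int) -> np.ndarray:
    """Fit a Chebyshev series to one fitting interval; the coefficients
    are the compressed representation of the block."""
    x = np.linspace(-1.0, 1.0, len(samples))  # map the block onto [-1, 1]
    return C.chebfit(x, samples, degree)

def decompress_block(coeffs: np.ndarray, n: int) -> np.ndarray:
    """Reconstruct n samples of the block from the stored coefficients."""
    x = np.linspace(-1.0, 1.0, n)
    return C.chebval(x, coeffs)

# A block of 64 samples stored as 8 coefficients: 8x compression.
t = np.linspace(0.0, 1.0, 64)
block = np.sin(2 * np.pi * 3 * t) + 0.1 * t
coeffs = compress_block(block, degree=7)
recon = decompress_block(coeffs, 64)
print(np.max(np.abs(block - recon)))  # residual is near-uniform over the interval
```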
A new simple technique for improving the random properties of chaos-based cryptosystems
NASA Astrophysics Data System (ADS)
Garcia-Bosque, M.; Pérez-Resa, A.; Sánchez-Azqueta, C.; Celma, S.
2018-03-01
A new technique for improving the security of chaos-based stream ciphers has been proposed and tested experimentally. This technique improves the randomness properties of the generated keystream by preventing the system from falling into the short-period cycles caused by digitization. To test this technique, a stream cipher based on a skew tent map algorithm was implemented on a Virtex 7 FPGA. The randomness of the keystream generated by this system was compared with that of the keystream generated by the same system with the proposed randomness-enhancement technique. By subjecting both keystreams to the National Institute of Standards and Technology (NIST) tests, we show that our method can considerably improve the randomness of the generated keystreams. Incorporating the randomness-enhancement technique required only 41 extra slices, proving that, besides being effective, the method is also efficient in terms of area and hardware resources.
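A minimal software sketch of a skew-tent-map keystream generator follows. The seed, map parameter, and the crude counter-based perturbation (standing in for the paper's randomness-enhancement step) are all illustrative assumptions, not the published design.

```python
def skew_tent_keystream(x0: float, p: float, nbytes: int) -> bytes:
    """Keystream sketch from the skew tent map x -> x/p (x < p),
    (1-x)/(1-p) otherwise, with x0, p in (0, 1) acting as the key."""
    out = bytearray()
    x = x0
    for i in range(nbytes):
        x = x / p if x < p else (1.0 - x) / (1.0 - p)
        # Crude perturbation against digitization-induced short cycles
        # (illustrative only; the paper's mechanism differs).
        x = (x + (i % 257) * 1e-12) % 1.0
        out.append(int(x * 256) % 256)
    return bytes(out)

ks = skew_tent_keystream(0.3141592, 0.499, 16)
cipher = bytes(b ^ k for b, k in zip(b"plaintext bytes!", ks))  # XOR stream cipher
```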
Learning accurate very fast decision trees from uncertain data streams
NASA Astrophysics Data System (ADS)
Liang, Chunquan; Zhang, Yang; Shi, Peng; Hu, Zhengguo
2015-12-01
Most existing works on data stream classification assume the streaming data is precise and definite. Such assumption, however, does not always hold in practice, since data uncertainty is ubiquitous in data stream applications due to imprecise measurement, missing values, privacy protection, etc. The goal of this paper is to learn accurate decision tree models from uncertain data streams for classification analysis. On the basis of very fast decision tree (VFDT) algorithms, we proposed an algorithm for constructing an uncertain VFDT tree with classifiers at tree leaves (uVFDTc). The uVFDTc algorithm can exploit uncertain information effectively and efficiently in both the learning and the classification phases. In the learning phase, it uses Hoeffding bound theory to learn from uncertain data streams and yield fast and reasonable decision trees. In the classification phase, at tree leaves it uses uncertain naive Bayes (UNB) classifiers to improve the classification performance. Experimental results on both synthetic and real-life datasets demonstrate the strong ability of uVFDTc to classify uncertain data streams. The use of UNB at tree leaves has improved the performance of uVFDTc, especially the any-time property, the benefit of exploiting uncertain information, and the robustness against uncertainty.
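The Hoeffding test at the heart of VFDT-style learners is compact enough to state directly. A minimal sketch, assuming heuristic gain values with a known range:

```python
import math

def hoeffding_bound(value_range: float, delta: float, n: int) -> float:
    """Hoeffding epsilon: with probability 1 - delta, the observed mean
    of n samples is within epsilon of the true mean."""
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

def should_split(g_best: float, g_second: float, value_range: float,
                 delta: float, n: int) -> bool:
    """VFDT-style test: split once the best attribute's gain beats the
    runner-up by more than the Hoeffding epsilon."""
    return (g_best - g_second) > hoeffding_bound(value_range, delta, n)

# After 2000 (possibly uncertain) examples at a leaf:
print(should_split(0.42, 0.30, value_range=1.0, delta=1e-6, n=2000))  # True
```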
Ordered Backward XPath Axis Processing against XML Streams
NASA Astrophysics Data System (ADS)
Nizar M., Abdul; Kumar, P. Sreenivasa
Processing of backward XPath axes against XML streams is challenging for two reasons: (i) Data is not cached for future access. (ii) Query contains steps specifying navigation to the data that already passed by. While there are some attempts to process parent and ancestor axes, there are very few proposals to process ordered backward axes namely, preceding and preceding-sibling. For ordered backward axis processing, the algorithm, in addition to overcoming the limitations on data availability, has to take care of ordering constraints imposed by these axes. In this paper, we show how backward ordered axes can be effectively represented using forward constraints. We then discuss an algorithm for XML stream processing of XPath expressions containing ordered backward axes. The algorithm uses a layered cache structure to systematically accumulate query results. Our experiments show that the new algorithm gains remarkable speed up over the existing algorithm without compromising on bufferspace requirement.
CIFAR10-DVS: An Event-Stream Dataset for Object Classification
Li, Hongmin; Liu, Hanchao; Ji, Xiangyang; Li, Guoqi; Shi, Luping
2017-01-01
Neuromorphic vision research requires high-quality and appropriately challenging event-stream datasets to support continuous improvement of algorithms and methods. However, creating event-stream datasets is a time-consuming task, since the data must be recorded with neuromorphic cameras, and only limited event-stream datasets are currently available. In this work, by utilizing the popular computer vision dataset CIFAR-10, we converted 10,000 frame-based images into 10,000 event streams using a dynamic vision sensor (DVS), providing an event-stream dataset of intermediate difficulty in 10 different classes, named "CIFAR10-DVS." The conversion was implemented by a repeated closed-loop smooth (RCLS) movement of the frame-based images. Unlike conversions that move the camera over static images, moving the images themselves is more realistic with respect to practical applications. The repeated closed-loop image movement generates rich local intensity changes in continuous time, which are quantized by each pixel of the DVS camera to generate events. Furthermore, a performance benchmark for event-driven object classification is provided based on state-of-the-art classification algorithms. This work provides a large event-stream dataset and an initial benchmark for comparison, which may boost algorithm development in event-driven pattern recognition and object classification. PMID:28611582
Stream-based Hebbian eigenfilter for real-time neuronal spike discrimination
2012-01-01
Background: Principal component analysis (PCA) has been widely employed for automatic neuronal spike sorting. Calculating principal components (PCs) is computationally expensive and requires complex numerical operations and large memory resources, so substantial hardware resources are needed for hardware implementations of PCA. The generalized Hebbian algorithm (GHA) was proposed for calculating the PCs of neuronal spikes in our previous work, eliminating the computationally expensive covariance analysis and eigenvalue decomposition of conventional PCA algorithms. However, large memory resources are still inherently required for storing a large volume of aligned spikes for training PCs. The large memory consumes substantial hardware resources and contributes significant power dissipation, which makes GHA difficult to implement in portable or implantable multi-channel recording micro-systems. Method: In this paper, we present a new algorithm for PCA-based spike sorting based on GHA, namely the stream-based Hebbian eigenfilter, which eliminates the inherent memory requirements of GHA while preserving spike-sorting accuracy by exploiting the pseudo-stationarity of neuronal spikes. Because the large storage requirements are removed, the proposed algorithm leads to ultra-low hardware resource usage and power consumption, which is critical for future multi-channel micro-systems. Both clinical and synthetic neural recording data sets were employed to evaluate the accuracy of the stream-based Hebbian eigenfilter. The spike-sorting performance and computational complexity of the eigenfilter were rigorously evaluated and compared with conventional PCA algorithms. Field-programmable gate arrays (FPGAs) were employed to implement the proposed algorithm, evaluate the hardware implementations, and demonstrate the reduction in both power consumption and hardware memory achieved by the streaming computation. Results and discussion: Results demonstrate that the stream-based eigenfilter can achieve the same accuracy as, and is 10 times more computationally efficient than, conventional PCA algorithms. Hardware evaluations show that 90.3% of logic resources, 95.1% of power consumption and 86.8% of computing latency can be reduced by the stream-based eigenfilter when compared with PCA hardware. By utilizing the streaming method, 92% of memory resources and 67% of power consumption can be saved when compared with the direct implementation of GHA. Conclusion: The stream-based Hebbian eigenfilter presents a novel approach to enable real-time spike sorting with reduced computational complexity and hardware costs. This new design can be further utilized for multi-channel neuro-physiological experiments or chronic implants. PMID:22490725
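For orientation, the GHA update that the eigenfilter streams can be written in a few lines of NumPy. This floating-point sketch shows Sanger's rule only; the paper's fixed-point hardware design and spike alignment are omitted.

```python
import numpy as np

def gha_update(W: np.ndarray, x: np.ndarray, lr: float) -> np.ndarray:
    """One Generalized Hebbian Algorithm (Sanger's rule) step: nudges the
    k x d weight matrix W toward the top-k principal components using a
    single snippet x, with no covariance matrix or stored dataset."""
    y = W @ x                         # k projections of the snippet
    lower = np.tril(np.outer(y, y))   # lower-triangular part of y y^T
    return W + lr * (np.outer(y, x) - lower @ W)

rng = np.random.default_rng(0)
d, k = 32, 3                          # snippet length, number of PCs
W = rng.standard_normal((k, d)) * 0.01
for _ in range(5000):                 # streaming spike snippets
    x = rng.standard_normal(d)
    x[:8] += 2.0 * rng.standard_normal()  # a correlated segment to learn
    W = gha_update(W, x, lr=1e-3)
```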
NASA Astrophysics Data System (ADS)
Brenden, T. O.; Clark, R. D.; Wiley, M. J.; Seelbach, P. W.; Wang, L.
2005-05-01
Remote sensing and geographic information systems have made it possible to attribute variables for streams at increasingly detailed resolutions (e.g., individual river reaches). Nevertheless, management decisions still must be made at large scales because land and stream managers typically lack sufficient resources to manage on an individual reach basis. Managers thus require a method for identifying stream management units that are ecologically similar and that can be expected to respond similarly to management decisions. We have developed a spatially-constrained clustering algorithm that can merge neighboring river reaches with similar ecological characteristics into larger management units. The clustering algorithm is based on the Cluster Affinity Search Technique (CAST), which was developed for clustering gene expression data. Inputs to the clustering algorithm are the neighbor relationships of the reaches that comprise the digital river network, the ecological attributes of the reaches, and an affinity value, which identifies the minimum similarity for merging river reaches. In this presentation, we describe the clustering algorithm in greater detail and contrast its use with other methods (expert opinion, classification approach, regular clustering) for identifying management units using several Michigan watersheds as a backdrop.
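A minimal sketch of the spatially constrained merging step is given below, assuming exactly the three inputs named above (neighbor relations, reach attributes, affinity threshold) and a simple distance-based similarity; the authors' CAST variant may differ in detail.

```python
import numpy as np

def merge_reaches(neighbors, attrs, affinity):
    """Greedy sketch of spatially constrained clustering: grow a
    management unit from a seed reach, admitting an adjacent reach only
    if its mean similarity to the unit stays above the affinity value."""
    def sim(a, b):  # similarity in (0, 1]
        return 1.0 / (1.0 + np.linalg.norm(attrs[a] - attrs[b]))
    unassigned = set(attrs)
    units = []
    while unassigned:
        seed = unassigned.pop()
        unit, frontier = {seed}, list(neighbors[seed])
        while frontier:
            cand = frontier.pop()
            if cand not in unassigned:
                continue
            # Spatial constraint: only reaches adjacent to the unit are tried.
            if np.mean([sim(cand, m) for m in unit]) >= affinity:
                unit.add(cand)
                unassigned.remove(cand)
                frontier.extend(neighbors[cand])
        units.append(unit)
    return units

reaches = {1: np.array([2.0, 0.1]), 2: np.array([2.1, 0.1]),
           3: np.array([9.0, 3.0])}
adj = {1: [2], 2: [1, 3], 3: [2]}
print(merge_reaches(adj, reaches, affinity=0.5))  # [{1, 2}, {3}]
```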
A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)
NASA Technical Reports Server (NTRS)
Straeter, T. A.; Markos, A. T.
1975-01-01
A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions ensuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.
Multivariate Spatial Condition Mapping Using Subtractive Fuzzy Cluster Means
Sabit, Hakilo; Al-Anbuky, Adnan
2014-01-01
Wireless sensor networks (WSNs) are usually deployed to monitor given physical phenomena taking place in a specific space and over a specific duration of time. The spatio-temporal distribution of these phenomena often correlates with certain physical events. To appropriately characterise these event-phenomenon relationships over a given space for a given time frame, we require continuous monitoring of the conditions. WSNs are well suited to these tasks due to their inherent robustness. This paper presents a subtractive fuzzy cluster means algorithm and its application to data stream mining for wireless sensor systems over a cloud-computing-like architecture, which we call sensor cloud data stream mining. Benchmarking against the standard mining algorithms k-means and FCM, we demonstrate that the subtractive fuzzy cluster means model can perform high-quality distributed data stream mining tasks comparable to centralised data stream mining. PMID:25313495
NASA Technical Reports Server (NTRS)
Ioup, G. E.
1985-01-01
Appendix 5 of the Study of One- and Two-Dimensional Filtering and Deconvolution Algorithms for a Streaming Array Computer includes a résumé of the professional background of the Principal Investigator on the project, lists of his publications and research papers, graduate theses supervised, and grants received.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoiber, Marcus H.; Brown, James B.
This software implements the first base caller for nanopore data that calls bases directly from raw data. The basecRAWller algorithm has two major advantages over current nanopore base-calling software: (1) streaming base calling and (2) base calling from information-rich raw signal. The ability to perform truly streaming base calling as signal is received from the sequencer can be very powerful, as this is one of the major advantages of this technology compared with other sequencing technologies; enabling as much streaming potential as possible will be increasingly important as the technology becomes more widely applied in the biosciences. All other base callers currently employ the Viterbi algorithm, which requires the whole sequence before the complete base-calling procedure can be applied and thus precludes a natural streaming base-calling procedure. The other major advantage of the basecRAWller algorithm is the prediction of bases from raw signal, which contains much richer information than the segmented chunks that current algorithms employ. This leads to the potential for much more accurate base calls, which would make this technology much more valuable to its growing user base.
Design and Implementation of Streaming Media Server Cluster Based on FFMpeg
Zhao, Hong; Zhou, Chun-long; Jin, Bao-zhao
2015-01-01
Poor performance and network congestion are commonly observed in the streaming media single server system. This paper proposes a scheme to construct a streaming media server cluster system based on FFMpeg. In this scheme, different users are distributed to different servers according to their locations and the balance among servers is maintained by the dynamic load-balancing algorithm based on active feedback. Furthermore, a service redirection algorithm is proposed to improve the transmission efficiency of streaming media data. The experiment results show that the server cluster system has significantly alleviated the network congestion and improved the performance in comparison with the single server system. PMID:25734187
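As a rough illustration of feedback-driven dispatch (field names hypothetical; the paper's load metric and redirection protocol are not specified here), new clients can simply be sent to the least-loaded server reported for their region:

```python
def pick_server(servers):
    """Minimal active-feedback selection sketch: each server periodically
    reports a load score (e.g. CPU, bandwidth, sessions); new streaming
    clients in a region are sent to the least-loaded server serving it."""
    return min(servers, key=lambda s: s["load"])

servers = [
    {"host": "edge-a", "region": "east", "load": 0.72},
    {"host": "edge-b", "region": "east", "load": 0.31},
]
east = [s for s in servers if s["region"] == "east"]  # location-based pool
print(pick_server(east)["host"])  # edge-b
```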
NASA Technical Reports Server (NTRS)
Kato, S.; Smith, G. L.; Barker, H. W.
2001-01-01
An algorithm is developed for the gamma-weighted discrete ordinate two-stream approximation that computes profiles of domain-averaged shortwave irradiances for horizontally inhomogeneous cloudy atmospheres. The algorithm assumes that frequency distributions of cloud optical depth at unresolved scales can be represented by a gamma distribution though it neglects net horizontal transport of radiation. This algorithm is an alternative to the one used in earlier studies that adopted the adding method. At present, only overcast cloudy layers are permitted.
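The gamma-weighting idea can be sketched numerically: average any plane-parallel quantity over a gamma distribution of optical depth with the prescribed mean and shape. The toy direct-beam transmittance below is illustrative only, not the discrete ordinate two-stream solution itself.

```python
import numpy as np
from scipy import integrate
from scipy.stats import gamma

def gamma_weighted(f, tau_mean, nu):
    """Average a plane-parallel quantity f(tau) over a gamma distribution
    of cloud optical depth with mean tau_mean and shape parameter nu, as
    the gamma-weighted approach does for unresolved variability."""
    pdf = gamma(a=nu, scale=tau_mean / nu).pdf
    val, _ = integrate.quad(lambda t: f(t) * pdf(t), 0.0, np.inf)
    return val

# Toy direct-beam transmittance at solar zenith cosine mu0 = 0.5:
T_hom = lambda tau: np.exp(-tau / 0.5)
print(T_hom(4.0))                        # homogeneous cloud, tau = 4
print(gamma_weighted(T_hom, 4.0, nu=2))  # same mean tau, inhomogeneous: larger
```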
Harte, P.T.; Mack, Thomas J.
1992-01-01
Hydrogeologic data collected since 1990 were assessed and a ground-water-flow model was refined in this study of the Milford-Souhegan glacial-drift aquifer in Milford, New Hampshire. The hydrogeologic data collected were used to refine estimates of hydraulic conductivity and saturated thickness of the aquifer, which were previously calculated during 1988-90. In October 1990, water levels were measured at 124 wells and piezometers, and at 45 stream-seepage sites on the main stem of the Souhegan River and on small tributary streams overlying the aquifer, to improve understanding of ground-water-flow patterns and stream-seepage gains and losses. Refinement of the ground-water-flow model included a reduction in the number of active cells in layer 2 in the central part of the aquifer, a revision of the simulated hydraulic conductivity in the model layers representing the aquifer, incorporation of a new block-centered finite-difference ground-water-flow model, and incorporation of a new solution algorithm and solver (a preconditioned conjugate-gradient algorithm). Refinements to the model resulted in decreases in the difference between calculated and measured heads at 22 wells. The distribution of gains and losses of stream seepage calculated in simulation with the refined model is similar to that calculated in the previous model simulation. The contributing area to the Savage well, under average pumping conditions, decreased by 0.021 square miles from the area calculated in the previous model simulation. The small difference in the contributing recharge area indicates that the additional data did not substantially alter the model simulation and that the conceptual framework of the previous model is accurate.
Brakebill, J.W.; Preston, S.D.
2003-01-01
The U.S. Geological Survey has developed a methodology for statistically relating nutrient sources and land-surface characteristics to nutrient loads of streams. The methodology is referred to as SPAtially Referenced Regressions On Watershed attributes (SPARROW), and relates measured stream nutrient loads to nutrient sources using nonlinear statistical regression models. A spatially detailed digital hydrologic network of stream reaches, stream-reach characteristics such as mean streamflow, water velocity, reach length, and travel time, and their associated watersheds supports the regression models. This network serves as the primary framework for spatially referencing potential nutrient source information such as atmospheric deposition, septic systems, point-sources, land use, land cover, and agricultural sources and land-surface characteristics such as land use, land cover, average-annual precipitation and temperature, slope, and soil permeability. In the Chesapeake Bay watershed that covers parts of Delaware, Maryland, Pennsylvania, New York, Virginia, West Virginia, and Washington D.C., SPARROW was used to generate models estimating loads of total nitrogen and total phosphorus representing 1987 and 1992 land-surface conditions. The 1987 models used a hydrologic network derived from an enhanced version of the U.S. Environmental Protection Agency's digital River Reach File, and coarse-resolution Digital Elevation Models (DEMs). A new hydrologic network was created to support the 1992 models by generating stream reaches representing surface-water pathways defined by flow direction and flow accumulation algorithms from higher resolution DEMs. On a reach-by-reach basis, stream reach characteristics essential to the modeling were transferred to the newly generated pathways or reaches from the enhanced River Reach File used to support the 1987 models. To complete the new network, watersheds for each reach were generated using the direction of surface-water flow derived from the DEMs. This network improves upon existing digital stream data by increasing the level of spatial detail and providing consistency between the reach locations and topography. The hydrologic network also aids in illustrating the spatial patterns of predicted nutrient loads and sources contributed locally to each stream, and the percentages of nutrient load that reach Chesapeake Bay.
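The flow-direction and flow-accumulation step that generated the 1992 reach network can be illustrated with a minimal D8 sketch; real processing adds pit filling, reach delineation, and the attribute transfer described above.

```python
import numpy as np

def d8_flow_accumulation(dem: np.ndarray) -> np.ndarray:
    """Minimal D8 sketch: each cell drains to its steepest downslope
    neighbor; accumulation counts the cells draining through each cell."""
    rows, cols = dem.shape
    order = np.argsort(dem, axis=None)[::-1]  # visit cells high to low
    acc = np.ones_like(dem, dtype=float)      # each cell contributes itself
    for idx in order:
        r, c = divmod(idx, cols)
        best, target = 0.0, None
        for dr in (-1, 0, 1):                 # scan the 8 neighbors
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols:
                    drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                    if drop > best:
                        best, target = drop, (rr, cc)
        if target is not None:                # pass flow to steepest neighbor
            acc[target] += acc[r, c]
    return acc

dem = np.array([[5., 4., 3.],
                [4., 3., 2.],
                [3., 2., 1.]])
print(d8_flow_accumulation(dem))  # accumulation peaks at the low corner
```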
Dynamic video encryption algorithm for H.264/AVC based on a spatiotemporal chaos system.
Xu, Hui; Tong, Xiao-Jun; Zhang, Miao; Wang, Zhu; Li, Ling-Hao
2016-06-01
Video encryption schemes mostly employ selective encryption to encrypt the parts of the video carrying important and sensitive information, aiming to ensure real-time performance and encryption efficiency. Classic block ciphers are not applicable to video encryption because of their high computational overhead. In this paper, we propose an encryption selection control module that dynamically encrypts video syntax elements under the control of a chaotic pseudorandom sequence. A novel spatiotemporal chaos system and binarization method are used to generate a keystream for encrypting the chosen syntax elements. The proposed scheme enhances resistance against attacks through the dynamic encryption process and a high-security stream cipher. Experimental results show that the proposed method exhibits high security and high efficiency, with little effect on the compression ratio and time cost.
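A schematic of chaos-controlled selective encryption follows. Both pseudorandom inputs stand in for the paper's spatiotemporal chaos system, and the byte-wise XOR is a placeholder for the actual stream cipher; all names are illustrative.

```python
def selective_encrypt(syntax_elements, select_bits, key_bytes):
    """Sketch of dynamic selective encryption: a chaotic pseudorandom bit
    decides per syntax element whether it is encrypted, and a keystream
    byte XORs the chosen ones (toy cipher, illustrative only)."""
    out = []
    for elem, choose, k in zip(syntax_elements, select_bits, key_bytes):
        out.append(bytes(b ^ k for b in elem) if choose else elem)
    return out

elems = [b"\x65\x88", b"\x41\x9f", b"\x12"]  # toy syntax elements
print(selective_encrypt(elems, [1, 0, 1], b"\xa7\x3c\x55"))
```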
Privacy Preserving Sequential Pattern Mining in Data Stream
NASA Astrophysics Data System (ADS)
Huang, Qin-Hua
Privacy-preserving data mining techniques have gained much attention in recent years. For data stream systems, wireless networks, and mobile devices, research on the related stream data mining techniques is still at an early stage. In this paper, a data mining algorithm addressing the privacy-preserving problem in data streams is presented.
STREAMFINDER II: A possible fanning structure parallel to the GD-1 stream in Pan-STARRS1
NASA Astrophysics Data System (ADS)
Malhan, Khyati; Ibata, Rodrigo A.; Goldman, Bertrand; Martin, Nicolas F.; Magnier, Eugene; Chambers, Kenneth
2018-05-01
STREAMFINDER is a new algorithm that we have built to detect stellar streams in an automated and systematic way in astrophysical datasets that possess any combination of positional and kinematic information. In Paper I, we introduced the methodology and the workings of our algorithm and showed that it is capable of detecting ultra-faint and distant halo stream structures containing as few as ~15 members (Σ_G ~ 33.6 mag arcsec^-2) in the Gaia dataset. Here, we test the method with real proper motion data from the Pan-STARRS1 survey, and by selecting targets down to r0 = 18.5 mag we show that it is able to detect the GD-1 stellar stream, whereas the structure remains below a useful detection limit when using a Matched Filter technique. The radial velocity solutions provided by STREAMFINDER for GD-1 candidate members are found to be in good agreement with observations. Furthermore, our algorithm detects a ~40° long structure approximately parallel to GD-1, which fans out from it, possibly a sign of stream-fanning due to the triaxiality of the Galactic potential. This analysis shows the promise of this method for detecting and analysing stellar streams in the upcoming Gaia DR2 catalogue.
A Fast-Time Simulation Environment for Airborne Merging and Spacing Research
NASA Technical Reports Server (NTRS)
Bussink, Frank J. L.; Doble, Nathan A.; Barmore, Bryan E.; Singer, Sharon
2005-01-01
As part of NASA's Distributed Air/Ground Traffic Management (DAG-TM) effort, NASA Langley Research Center is developing concepts and algorithms for merging multiple aircraft arrival streams and precisely spacing aircraft over the runway threshold. An airborne tool has been created for this purpose, called Airborne Merging and Spacing for Terminal Arrivals (AMSTAR). To evaluate the performance of AMSTAR and complement human-in-the-loop experiments, a simulation environment has been developed that enables fast-time studies of AMSTAR operations. The environment is based on TMX, a multiple aircraft desktop simulation program created by the Netherlands National Aerospace Laboratory (NLR). This paper reviews the AMSTAR concept, discusses the integration of the AMSTAR algorithm into TMX and the enhancements added to TMX to support fast-time AMSTAR studies, and presents initial simulation results.
Two-Step Fair Scheduling of Continuous Media Streams over Error-Prone Wireless Channels
NASA Astrophysics Data System (ADS)
Oh, Soohyun; Lee, Jin Wook; Park, Taejoon; Jo, Tae-Chang
In wireless cellular networks, streaming of continuous media (with strict QoS requirements) over wireless links is challenging due to their inherent unreliability characterized by location-dependent, bursty errors. To address this challenge, we present a two-step scheduling algorithm for a base station to provide streaming of continuous media to wireless clients over the error-prone wireless links. The proposed algorithm is capable of minimizing the packet loss rate of individual clients in the presence of error bursts, by transmitting packets in the round-robin manner and also adopting a mechanism for channel prediction and swapping.
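A toy version of the two-step structure (round-robin slot assignment, then swapping slots away from clients whose channels are predicted to be in an error burst) might look like the following; `predict_good` is a hypothetical stand-in for the channel predictor.

```python
def schedule_round(clients, predict_good):
    """Two-step sketch: (1) assign slots round-robin; (2) swap a slot
    whose client channel is predicted bad with a later slot whose client
    channel is predicted good, so slots are not wasted on error bursts."""
    slots = list(clients)                 # step 1: round-robin order
    for i, c in enumerate(slots):         # step 2: swapping
        if not predict_good(c):
            for j in range(i + 1, len(slots)):
                if predict_good(slots[j]):
                    slots[i], slots[j] = slots[j], slots[i]
                    break
    return slots

print(schedule_round(["c1", "c2", "c3"], lambda c: c != "c2"))
# ['c1', 'c3', 'c2']: c2's slot is deferred past its predicted error burst
```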
Rehan, Waqas; Fischer, Stefan; Rehan, Maaz
2016-01-01
Wireless sensor networks (WSNs) have become more and more diversified and are today able to also support high data rate applications, such as multimedia. In this case, per-packet channel handshaking/switching may result in inducing additional overheads, such as energy consumption, delays and, therefore, data loss. One of the solutions is to perform stream-based channel allocation where channel handshaking is performed once before transmitting the whole data stream. Deciding stream-based channel allocation is more critical in case of multichannel WSNs where channels of different quality/stability are available and the wish for high performance requires sensor nodes to switch to the best among the available channels. In this work, we will focus on devising mechanisms that perform channel quality/stability estimation in order to improve the accommodation of stream-based communication in multichannel wireless sensor networks. For performing channel quality assessment, we have formulated a composite metric, which we call channel rank measurement (CRM), that can demarcate channels into good, intermediate and bad quality on the basis of the standard deviation of the received signal strength indicator (RSSI) and the average of the link quality indicator (LQI) of the received packets. CRM is then used to generate a data set for training a supervised machine learning-based algorithm (which we call Normal Equation based Channel quality prediction (NEC) algorithm) in such a way that it may perform instantaneous channel rank estimation of any channel. Subsequently, two robust extensions of the NEC algorithm are proposed (which we call Normal Equation based Weighted Moving Average Channel quality prediction (NEWMAC) algorithm and Normal Equation based Aggregate Maturity Criteria with Beta Tracking based Channel weight prediction (NEAMCBTC) algorithm), that can perform channel quality estimation on the basis of both current and past values of channel rank estimation. In the end, simulations are made using MATLAB, and the results show that the Extended version of NEAMCBTC algorithm (Ext-NEAMCBTC) outperforms the compared techniques in terms of channel quality and stability assessment. It also minimizes channel switching overheads (in terms of switching delays and energy consumption) for accommodating stream-based communication in multichannel WSNs. PMID:27626429
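Two pieces of the approach lend themselves to short sketches: a composite channel-rank measurement built from RSSI variability and mean LQI, and the closed-form normal-equation fit underlying the NEC-style predictor. The exact weighting and feature set used in the paper are not reproduced here; the CRM formula below is illustrative.

```python
import numpy as np

def crm(rssi: np.ndarray, lqi: np.ndarray) -> float:
    """Toy channel rank measurement: high mean LQI raises the rank,
    high RSSI variability lowers it (weights are illustrative)."""
    return lqi.mean() / 255.0 - rssi.std() / 10.0

def fit_normal_equation(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Closed-form least squares, theta = (X^T X)^(-1) X^T y,
    with a bias column prepended."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return np.linalg.solve(Xb.T @ Xb, Xb.T @ y)

# Train on (std RSSI, mean LQI) features -> observed CRM of past windows.
rng = np.random.default_rng(1)
X = rng.uniform([0.0, 80.0], [6.0, 110.0], size=(50, 2))
y = X[:, 1] / 255.0 - X[:, 0] / 10.0 + rng.normal(0, 0.01, 50)
theta = fit_normal_equation(X, y)
print(theta)  # approximately [0, -0.1, 1/255]
```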
Sharp, G C; Kandasamy, N; Singh, H; Folkert, M
2007-10-07
This paper shows how to significantly accelerate cone-beam CT reconstruction and 3D deformable image registration using the stream-processing model. We describe data-parallel designs for the Feldkamp, Davis and Kress (FDK) reconstruction algorithm, and the demons deformable registration algorithm, suitable for use on a commodity graphics processing unit. The streaming versions of these algorithms are implemented using the Brook programming environment and executed on an NVidia 8800 GPU. Performance results using CT data of a preserved swine lung indicate that the GPU-based implementations of the FDK and demons algorithms achieve a substantial speedup--up to 80 times for FDK and 70 times for demons when compared to an optimized reference implementation on a 2.8 GHz Intel processor. In addition, the accuracy of the GPU-based implementations was found to be excellent. Compared with CPU-based implementations, the RMS differences were less than 0.1 Hounsfield unit for reconstruction and less than 0.1 mm for deformable registration.
A Streaming Language Implementation of the Discontinuous Galerkin Method
NASA Technical Reports Server (NTRS)
Barth, Timothy; Knight, Timothy
2005-01-01
We present a Brook streaming language implementation of the 3-D discontinuous Galerkin method for compressible fluid flow on tetrahedral meshes. Efficient implementation of the discontinuous Galerkin method using the streaming model of computation introduces several algorithmic design challenges. Using a cycle-accurate simulator, performance characteristics have been obtained for the Stanford Merrimac stream processor. The current Merrimac design achieves 128 Gflops per chip and the desktop board is populated with 16 chips yielding a peak performance of 2 Teraflops. Total parts cost for the desktop board is less than $20K. Current cycle-accurate simulations for discretizations of the 3-D compressible flow equations yield approximately 40-50% of the peak performance of the Merrimac streaming processor chip. Ongoing work includes the assessment of the performance of the same algorithm on the 2 Teraflop desktop board with a target goal of achieving 1 Teraflop performance.
Extending the FairRoot framework to allow for simulation and reconstruction of free streaming data
NASA Astrophysics Data System (ADS)
Al-Turany, M.; Klein, D.; Manafov, A.; Rybalchenko, A.; Uhlig, F.
2014-06-01
The FairRoot framework is the standard framework for simulation, reconstruction and data analysis for the FAIR experiments. The framework is designed to optimise accessibility for beginners and developers, to be flexible, and to cope with future developments. FairRoot enhances the synergy between the different physics experiments. As a first step toward the simulation of free-streaming data, time-based simulation was introduced into the framework. The next step is the simulation of event sources, achieved via a client-server system: after digitization, so-called "samplers" can be started, where each sampler reads the data of the corresponding detector from the simulation files and makes it available to the reconstruction clients. The system makes it possible to develop and validate online reconstruction algorithms. In this work, the design and implementation of the new architecture and the communication layer are described.
Statistical Methods in Ai: Rare Event Learning Using Associative Rules and Higher-Order Statistics
NASA Astrophysics Data System (ADS)
Iyer, V.; Shetty, S.; Iyengar, S. S.
2015-07-01
Rare-event learning has seen little active research until recently, owing to the unavailability of algorithms that can deal with big samples. This research addresses spatio-temporal streams from multi-resolution sensors to find actionable items, from the perspective of real-time algorithms. The computing framework is independent of the number of input samples, the application domain, and whether streams are labelled or label-less. A sampling-overlap algorithm such as Brooks-Iyengar is used to deal with noisy sensor streams. We extend existing noise pre-processing algorithms using data-cleaning trees. Pre-processing with an ensemble of trees using bagging and multi-target regression showed robustness to random noise and missing data. As spatio-temporal streams are highly statistically correlated, we prove that temporal-window-based sampling from sensor data streams converges after n samples, using Hoeffding bounds, which can be used for fast prediction of new samples in real time. The data-cleaning tree model uses a nonparametric node-splitting technique that can be learned iteratively and scales linearly in memory consumption for an input stream of any size. The improved task-based ensemble extraction is compared with nonlinear computation models using various SVM kernels for speed and accuracy. We show on empirical datasets that the explicit rule-learning computation is linear in time and depends only on the number of leaves present in the tree ensemble. The use of unpruned trees (t) in our proposed ensemble always yields a minimal number (m) of leaves, keeping the pre-processing computation to n × t log m, compared with N² for the Gram matrix. We also show that task-based feature induction yields higher Quality of Data (QoD) in the feature space compared with kernel methods using the Gram matrix.
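The Hoeffding convergence claim translates directly into a sample-size bound that is independent of the total stream length. A one-function sketch (two-sided bound):

```python
import math

def hoeffding_samples(value_range: float, eps: float, delta: float) -> int:
    """Samples needed so the windowed mean lies within eps of the true
    mean with probability at least 1 - delta, regardless of stream size."""
    return math.ceil(value_range ** 2 * math.log(2.0 / delta)
                     / (2.0 * eps ** 2))

print(hoeffding_samples(1.0, eps=0.05, delta=0.01))  # 1060 samples
```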
Overview of implementation of DARPA GPU program in SAIC
NASA Astrophysics Data System (ADS)
Braunreiter, Dennis; Furtek, Jeremy; Chen, Hai-Wen; Healy, Dennis
2008-04-01
This paper reviews the implementation of the DARPA MTO STAP-BOY program, Phases I and II, conducted at Science Applications International Corporation (SAIC). The STAP-BOY program develops fast covariance factorization and tuning techniques for space-time adaptive processing (STAP) algorithm implementation on graphics processing unit (GPU) architectures for embedded systems. The first part of our presentation on the DARPA STAP-BOY program focuses on GPU implementation and algorithm innovations for a prototype radar STAP algorithm. The STAP algorithm is implemented on the GPU using stream programming (from companies such as PeakStream, ATI Technologies' CTM, and NVIDIA) and traditional graphics APIs. It includes fast range-adaptive STAP weight updates and beamforming applications, each of which has been modified to exploit the parallel nature of graphics architectures.
Ensemble of Chaotic and Naive Approaches for Performance Enhancement in Video Encryption
Chandrasekaran, Jeyamala; Thiruvengadam, S. J.
2015-01-01
Owing to the growth of high performance network technologies, multimedia applications over the Internet are increasing exponentially. Applications like video conferencing, video-on-demand, and pay-per-view depend upon encryption algorithms for providing confidentiality. Video communication is characterized by distinct features such as large volume, high redundancy between adjacent frames, video codec compliance, syntax compliance, and application specific requirements. Naive approaches for video encryption encrypt the entire video stream with conventional text based cryptographic algorithms. Although naive approaches are the most secure for video encryption, the computational cost associated with them is very high. This research work aims at enhancing the speed of naive approaches through chaos based S-box design. Chaotic equations are popularly known for randomness, extreme sensitivity to initial conditions, and ergodicity. The proposed methodology employs two-dimensional discrete Henon map for (i) generation of dynamic and key-dependent S-box that could be integrated with symmetric algorithms like Blowfish and Data Encryption Standard (DES) and (ii) generation of one-time keys for simple substitution ciphers. The proposed design is tested for randomness, nonlinearity, avalanche effect, bit independence criterion, and key sensitivity. Experimental results confirm that chaos based S-box design and key generation significantly reduce the computational cost of video encryption with no compromise in security. PMID:26550603
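One common way to realize a key-dependent, bijective S-box from the 2-D Henon map is to rank a segment of the trajectory. The sketch below uses the classic parameters a = 1.4, b = 0.3 and treats (x0, y0) as the key; the paper's exact construction may differ.

```python
def henon_sbox(x0=0.1, y0=0.3, a=1.4, b=0.3):
    """Key-dependent S-box sketch: iterate the 2-D Henon map
    x' = 1 - a*x^2 + y, y' = b*x, then rank 256 trajectory points to
    obtain a permutation of 0..255."""
    x, y = x0, y0
    for _ in range(1000):                 # discard the transient
        x, y = 1.0 - a * x * x + y, b * x
    samples = []
    for _ in range(256):
        x, y = 1.0 - a * x * x + y, b * x
        samples.append(x)
    # Rank positions give a bijective byte substitution table.
    return sorted(range(256), key=lambda i: samples[i])

sbox = henon_sbox()
assert sorted(sbox) == list(range(256))   # bijective by construction
print(sbox[:8])
```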
COLA: Optimizing Stream Processing Applications via Graph Partitioning
NASA Astrophysics Data System (ADS)
Khandekar, Rohit; Hildrum, Kirsten; Parekh, Sujay; Rajan, Deepak; Wolf, Joel; Wu, Kun-Lung; Andrade, Henrique; Gedik, Buğra
In this paper, we describe an optimization scheme for fusing compile-time operators into reasonably-sized run-time software units called processing elements (PEs). Such PEs are the basic deployable units in System S, a highly scalable distributed stream processing middleware system. Finding a high quality fusion significantly benefits the performance of streaming jobs. In order to maximize throughput, our solution approach attempts to minimize the processing cost associated with inter-PE stream traffic while simultaneously balancing load across the processing hosts. Our algorithm computes a hierarchical partitioning of the operator graph based on a minimum-ratio cut subroutine. We also incorporate several fusion constraints in order to support real-world System S jobs. We experimentally compare our algorithm with several other reasonable alternative schemes, highlighting the effectiveness of our approach.
Quality Scalability Aware Watermarking for Visual Content.
Bhowmik, Deepayan; Abhayaratne, Charith
2016-11-01
Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios without affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by a new wavelet-domain blind watermarking algorithm guided by a quantization-based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness requirements. A blind extractor is capable of extracting watermark data from the watermarked images. The algorithm is further extended to incorporate the bit-plane-discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves robustness against the quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality-scalable content adaptation; the proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality-scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows a major improvement for video watermarking.
NASA Technical Reports Server (NTRS)
Kleinman, Leonid S.; Reed, X. B., Jr.
1995-01-01
An algorithm has been developed for time-dependent forced convective diffusion-reaction having convection by a recirculating flow field within the drop that is hydrodynamically coupled at the interface with a convective external flow field that at infinity becomes a uniform free-streaming flow. The concentration field inside the droplet is likewise coupled with that outside by boundary conditions at the interface. A chemical reaction can take place either inside or outside the droplet, or reactions can take place in both phases. The algorithm has been implemented, and for comparison results are shown here for the case of no reaction in either phase and for the case of an external first order reaction, both for unsteady behavior. For pure interphase mass transfer, concentration isocontours, local and average Sherwood numbers, and average droplet concentrations have been obtained as a function of the physical properties and external flow field. For mass transfer enhanced by an external reaction, in addition to the above forms of results, we present the enhancement factor, with the results now also depending upon the (dimensionless) rate of reaction.
NASA Technical Reports Server (NTRS)
Kleinman, Leonid S.; Reed, X. B., Jr.
1995-01-01
An algorithm has been developed for the forced convective diffusion-reaction problem for convection inside and outside a droplet by a recirculating flow field hydrodynamically coupled at the droplet interface with an external flow field that at infinity becomes a uniform streaming flow. The concentration field inside the droplet is likewise coupled with that outside by boundary conditions at the interface. A chemical reaction can take place either inside or outside the droplet or reactions can take place in both phases. The algorithm has been implemented and results are shown here for the case of no reaction and for the case of an external first order reaction, both for unsteady behavior. For pure interphase mass transfer, concentration isocontours, local and average Sherwood numbers, and average droplet concentrations have been obtained as a function of the physical properties and external flow field. For mass transfer enhanced by an external reaction, in addition to the above forms of results, we present the enhancement factor, with the results now also depending upon the (dimensionless) rate of reaction.
The standard WASP7 stream transport model calculates water flow through a branching stream network that may include both free-flowing and ponded segments. This supplemental user manual documents the hydraulic algorithms, including the transport and hydrogeometry equations, the m...
Improving Cyber-Security of Smart Grid Systems via Anomaly Detection and Linguistic Domain Knowledge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ondrej Linda; Todd Vollmer; Milos Manic
The planned large-scale deployment of smart grid network devices will generate a large amount of information exchanged over various types of communication networks. The implementation of these critical systems will require appropriate cyber-security measures. A network anomaly detection solution is considered in this work. In common network architectures multiple communication streams are simultaneously present, making it difficult to build an anomaly detection solution for the entire system. In addition, common anomaly detection algorithms require specification of a sensitivity threshold, which inevitably leads to a tradeoff between false positive and false negative rates. In order to alleviate these issues, this paper proposes a novel anomaly detection architecture. The designed system applies the previously developed network security cyber-sensor method to individual selected communication streams, allowing accurate normal network behavior models to be learned. Furthermore, the developed system dynamically adjusts the sensitivity threshold of each anomaly detection algorithm based on domain knowledge about the specific network system. It is proposed to model this domain knowledge using Interval Type-2 Fuzzy Logic rules, which linguistically describe the relationship between various features of the network communication and the possibility of a cyber attack. The proposed method was tested on an experimental smart grid system, demonstrating enhanced cyber-security.
Remembering the Important Things: Semantic Importance in Stream Reasoning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Rui; Greaves, Mark T.; Smith, William P.
Reasoning and querying over data streams rely on the ability to deliver a sequence of stream snapshots to the processing algorithms. These snapshots are typically provided using windows as views into streams and associated window management strategies. Generally, the goal of any window management strategy is to preserve the most important data in the current window and preferentially evict the rest, so that the retained data can continue to be exploited. A simple timestamp-based strategy is first-in-first-out (FIFO), in which items are replaced in strict order of arrival. All timestamp-based strategies implicitly assume that a temporal ordering reliably reflects importance to the processing task at hand, and thus that window management using timestamps will maximize the ability of the processing algorithms to deliver accurate interpretations of the stream. In this work, we explore a general notion of semantic importance that can be used for window management for streams of RDF data using semantically-aware processing algorithms like deduction or semantic query. Semantic importance exploits the information carried in RDF and surrounding ontologies for ranking window data in terms of its likely contribution to the processing algorithms. We explore the general semantic categories of query contribution, provenance, and trustworthiness, as well as the contribution of domain-specific ontologies. We describe how these categories behave using several concrete examples. Finally, we consider how a stream window management strategy based on semantic importance could improve overall processing performance, especially as available window sizes decrease.
Finding Frequent Closed Itemsets in Sliding Window in Linear Time
NASA Astrophysics Data System (ADS)
Chen, Junbo; Zhou, Bo; Chen, Lu; Wang, Xinyu; Ding, Yiqun
One of the most well-studied problems in data mining is computing the collection of frequent itemsets in large transactional databases. Since the introduction of the famous Apriori algorithm [14], many others have been proposed to find the frequent itemsets. Among such algorithms, the approach of mining closed itemsets has raised much interest in the data mining community. The algorithms taking this approach include TITANIC [8], CLOSET+ [6], DCI-Closed [4], FCI-Stream [3], GC-Tree [15], TGC-Tree [16], etc. Among these algorithms, FCI-Stream, GC-Tree and TGC-Tree are online algorithms that work in sliding-window environments. By the performance evaluation in [16], GC-Tree [15] is the fastest one. In this paper, an improved algorithm based on GC-Tree is proposed, the computational complexity of which is proved to be a linear combination of the average transaction size and the average closed itemset size. The algorithm is based on the essential theorem presented in Sect. 4.2. Empirically, the new algorithm is several orders of magnitude faster than the state-of-the-art algorithm, GC-Tree.
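For readers unfamiliar with the setting, the following minimal sketch shows sliding-window itemset mining done naively: the window is recounted from scratch on every update, which is exactly the cost that GC-Tree-style incremental structures avoid. The toy stream and support threshold are illustrative, not from the paper.

```python
# Sketch: naive frequent-itemset mining over a FIFO sliding window.
# Illustrates the sliding-window setting only; closed-itemset algorithms
# such as GC-Tree maintain their result incrementally instead of recounting.
from collections import deque
from itertools import combinations

def frequent_itemsets(window, min_support):
    """Count every itemset in the current window (exponential; demo only)."""
    counts = {}
    for transaction in window:
        items = sorted(transaction)
        for r in range(1, len(items) + 1):
            for subset in combinations(items, r):
                counts[subset] = counts.get(subset, 0) + 1
    return {s: c for s, c in counts.items() if c >= min_support}

window = deque(maxlen=4)  # sliding window of the 4 most recent transactions
stream = [{'a', 'b'}, {'a', 'b', 'c'}, {'b', 'c'}, {'a', 'c'}, {'a', 'b'}]
for transaction in stream:
    window.append(transaction)  # the oldest transaction is evicted automatically
    print(frequent_itemsets(window, min_support=2))
```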
NASA Astrophysics Data System (ADS)
Tereshin, Alexander A.; Usilin, Sergey A.; Arlazarov, Vladimir V.
2018-04-01
This paper aims to study the problem of multi-class object detection in a video stream with Viola-Jones cascades. An adaptive algorithm for selecting a Viola-Jones cascade, based on a greedy choice strategy for the N-armed bandit problem, is proposed. The efficiency of the algorithm is shown on the problem of detection and recognition of bank card logos in a video stream. The proposed algorithm can be effectively used in document localization and identification, recognition of road scene elements, localization and tracking of lengthy objects, and for solving other problems of rigid object detection in heterogeneous data flows. The computational efficiency of the algorithm makes it possible to use it both on personal computers and on mobile devices based on processors with low power consumption.
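The cascade-selection idea can be illustrated with a small bandit sketch. The paper does not spell out its reward signal, so the reward below (whether the chosen cascade fired a detection) and the epsilon-greedy variant of the greedy strategy are assumptions.

```python
import random

class CascadeSelector:
    """Epsilon-greedy selection among detector cascades (one arm per class).
    A sketch of the bandit idea only; the paper's exact greedy strategy and
    reward definition may differ."""
    def __init__(self, n_cascades, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_cascades
        self.values = [0.0] * n_cascades   # running mean reward per cascade

    def select(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))   # explore
        return max(range(len(self.counts)), key=lambda i: self.values[i])

    def update(self, arm, reward):
        # incremental mean update; reward = 1 if the cascade fired a detection
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

selector = CascadeSelector(n_cascades=5)
for frame in range(1000):                        # simulated video stream
    arm = selector.select()
    detected = random.random() < 0.1 * arm       # stand-in for running the cascade
    selector.update(arm, 1.0 if detected else 0.0)
print(selector.values)
```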
Machine Learning Applied to Dawn/VIR data of Vesta in view of MERTIS/BepiColombo.
NASA Astrophysics Data System (ADS)
Helbert, J.; D'Amore, M.; Le Scaon, R.; Maturilli, A.; Palomba, E.; Longobardo, A.; Hiesinger, H.
2016-12-01
Remote sensing spectroscopy is one of the most commonly used techniques in planetary science, and for recent instruments producing huge amounts of data, classic methods can fail to unlock the full scientific potential buried in the measurements. We explored several machine learning techniques: a multi-step clustering method was developed, using an image segmentation method, a stream algorithm, and hierarchical clustering. The MErcury Radiometer and Thermal infrared Imaging Spectrometer (MERTIS) is part of the payload of the Mercury Planetary Orbiter spacecraft of the ESA-JAXA BepiColombo mission. MERTIS's scientific goals are to infer rock-forming minerals, to map surface composition, and to study surface temperature variations on Mercury. The NASA mission DAWN carries a suite of instruments aimed at understanding the two most massive objects in the main asteroid belt: Vesta and Ceres. DAWN successfully completed the exploration of Vesta in September 2012 and is now in the last phase of the mission around Ceres. To cope with the stream of data that will be delivered by MERTIS, we developed an algorithm that can aggregate new data as they come in during the mission, giving the scientists a guide to the most interesting and novel discoveries on Mercury. The DAWN/VESTA VIR data is a testbed for the algorithm. The algorithm identified the olivine outcrops around two craters on Vesta's surface described in Ammannito et al., 2013. We furthermore mimic the data acquisition process as if the mission were dumping the data live. The algorithm provides insightful information on the novelty and classes in the data as they are collected. This will enhance MERTIS targeting and maximize its scientific return during the BepiColombo mission at Mercury. E. Ammannito et al. "Olivine in an unexpected location on Vesta's surface". In: Nature 504.7478 (2013), pp. 122-125.
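The streaming aspect can be sketched generically: cluster spectra incrementally as batches arrive, and flag novelty by distance to the nearest existing cluster. MiniBatchKMeans is a generic stand-in here, not the authors' multi-step segmentation/stream/hierarchical pipeline, and the random spectra are synthetic.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

# Sketch: incremental clustering of spectra as they are downlinked, with a
# simple novelty score. Stand-in method and synthetic data; not the paper's
# exact pipeline.
rng = np.random.default_rng(0)
model = MiniBatchKMeans(n_clusters=5, random_state=0, n_init="auto")
for batch in range(20):                      # simulated downlink batches
    spectra = rng.normal(size=(64, 32))      # 64 spectra, 32 channels each
    model.partial_fit(spectra)
    # novelty score: distance of each spectrum to its assigned centroid
    d = model.transform(spectra).min(axis=1)
    print(batch, float(d.mean()))
```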
An efficient reversible privacy-preserving data mining technology over data streams.
Lin, Chen-Yi; Kao, Yuan-Hung; Lee, Wei-Bin; Chen, Rong-Chang
2016-01-01
With the popularity of smart handheld devices and the emergence of cloud computing, users and companies can save various data, which may contain private data, to the cloud. Topics relating to data security have therefore received much attention. This study focuses on data stream environments and uses the concept of a sliding window to design a reversible privacy-preserving technology to process continuous data in real time, known as a continuous reversible privacy-preserving (CRP) algorithm. Data with CRP algorithm protection can be accurately recovered through a data recovery process. In addition, by using an embedded watermark, the integrity of the data can be verified. The results from the experiments show that, compared to existing algorithms, CRP is better at preserving knowledge and is more effective in terms of reducing information loss and privacy disclosure risk. In addition, it takes far less time for CRP to process continuous data than existing algorithms. As a result, CRP is confirmed as suitable for data stream environments and fulfills the requirements of being lightweight and energy-efficient for smart handheld devices.
A MULTI-STREAM MODEL FOR VERTICAL MIXING OF A PASSIVE TRACER IN THE CONVECTIVE BOUNDARY LAYER
We study a multi-stream model (MSM) for vertical mixing of a passive tracer in the convective boundary layer, in which the tracer is advected by many vertical streams with different probabilities and diffused by small scale turbulence. We test the MSM algorithm for investigatin...
Evaporative cooling enhanced cold storage system
Carr, Peter
1991-01-01
The invention provides an evaporatively enhanced cold storage system wherein a warm air stream is cooled and the cooled air stream is thereafter passed into contact with a cold storage unit. Moisture is added to the cooled air stream prior to or during contact of the cooled air stream with the cold storage unit to effect enhanced cooling of the cold storage unit due to evaporation of all or a portion of the added moisture. Preferably at least a portion of the added moisture comprises water condensed during the cooling of the warm air stream.
Evaporative cooling enhanced cold storage system
Carr, P.
1991-10-15
The invention provides an evaporatively enhanced cold storage system wherein a warm air stream is cooled and the cooled air stream is thereafter passed into contact with a cold storage unit. Moisture is added to the cooled air stream prior to or during contact of the cooled air stream with the cold storage unit to effect enhanced cooling of the cold storage unit due to evaporation of all or a portion of the added moisture. Preferably at least a portion of the added moisture comprises water condensed during the cooling of the warm air stream. 3 figures.
NASA Technical Reports Server (NTRS)
Eberhardt, D. S.; Baganoff, D.; Stevens, K.
1984-01-01
Implicit approximate-factored algorithms have certain properties that are suitable for parallel processing. A particular computational fluid dynamics (CFD) code, using this algorithm, is mapped onto a multiple-instruction/multiple-data-stream (MIMD) computer architecture. An explanation of this mapping procedure is presented, as well as some of the difficulties encountered when trying to run the code concurrently. Timing results are given for runs on the Ames Research Center's MIMD test facility which consists of two VAX 11/780's with a common MA780 multi-ported memory. Speedups exceeding 1.9 for characteristic CFD runs were indicated by the timing results.
Classifying Imbalanced Data Streams via Dynamic Feature Group Weighting with Importance Sampling.
Wu, Ke; Edwards, Andrea; Fan, Wei; Gao, Jing; Zhang, Kun
2014-04-01
Data stream classification and imbalanced data learning are two important areas of data mining research. Each has been well studied to date, with many interesting algorithms developed. However, only a few approaches reported in the literature address the intersection of these two fields due to their complex interplay. In this work, we propose an importance sampling driven, dynamic feature group weighting framework (DFGW-IS) for classifying data streams of imbalanced distribution. Two components are tightly incorporated into the proposed approach to address the intrinsic characteristics of concept-drifting, imbalanced streaming data. Specifically, the ever-evolving concepts are tackled by a weighted ensemble trained on a set of feature groups, with each sub-classifier (i.e. a single classifier or an ensemble) weighted by its discriminative power and stability. The uneven class distribution, on the other hand, is typically battled by the sub-classifier built in a specific feature group, with the underlying distribution rebalanced by the importance sampling technique. We derived the theoretical upper bound for the generalization error of the proposed algorithm. We also studied the empirical performance of our method on a set of benchmark synthetic and real world data, and significant improvement has been achieved over the competing algorithms in terms of standard evaluation metrics and parallel running time. Algorithm implementations and datasets are available upon request.
Real-Time Joint Streaming Data Processing from Social and Physical Sensors
NASA Astrophysics Data System (ADS)
Kropivnitskaya, Y. Y.; Qin, J.; Tiampo, K. F.; Bauer, M.
2014-12-01
The results of the technological breakthroughs in computing that have taken place over the last few decades make it possible to achieve emergency management objectives that focus on saving human lives and decreasing economic effects. In particular, the integration of a wide variety of information sources, including observations from spatially-referenced physical sensors and new social media sources, enables better real-time seismic hazard analysis through distributed computing networks. The main goal of this work is to utilize innovative computational algorithms for better real-time seismic risk analysis by integrating different data sources and processing tools into streaming and cloud computing applications. The Geological Survey of Canada operates the Canadian National Seismograph Network (CNSN) with over 100 high-gain instruments and 60 low-gain or strong motion seismographs. The processing of the continuous data streams from each station of the CNSN provides the opportunity to detect possible earthquakes in near real-time. The information from physical sources is combined to calculate a location and magnitude for an earthquake. The automatically calculated results are not always sufficiently precise and prompt, which can significantly increase the response time to a felt or damaging earthquake. Social sensors, here represented as Twitter users, can provide information earlier to the general public and more rapidly to the emergency planning and disaster relief agencies. We introduce joint streaming data processing from social and physical sensors in real-time based on the idea that social media observations serve as proxies for physical sensors. By using the streams of data in the form of Twitter messages, each of which has an associated time and location, we can extract information related to a target event and perform enhanced analysis by combining it with physical sensor data. Results of this work suggest that the use of data from social media, in conjunction with the development of innovative computing algorithms and combined with sensor data, can provide a new paradigm for real-time earthquake detection in order to facilitate rapid and inexpensive natural risk reduction.
A novel image encryption algorithm based on chaos maps with Markov properties
NASA Astrophysics Data System (ADS)
Liu, Quan; Li, Pei-yue; Zhang, Ming-chao; Sui, Yong-xin; Yang, Huai-jiang
2015-02-01
In order to construct a high-complexity, secure and low-cost image encryption algorithm, a class of chaos with Markov properties was researched and such an algorithm was proposed. This kind of chaos has higher complexity than the Logistic map and Tent map, while keeping uniformity and low autocorrelation. An improved coupled map lattice based on the chaos with Markov properties is also employed to cover the phase space of the chaos and enlarge the key space, and it has better performance than the original one. A novel image encryption algorithm is constructed on the new coupled map lattice, which is used as a key stream generator. A true random number is used to disturb the key, which can dynamically change the permutation matrix and the key stream. From the experiments, it is known that the key stream can pass the SP800-22 test. The novel image encryption can resist CPA, CCA and differential attacks. The algorithm is sensitive to the initial key and can change the distribution of the pixel values of the image. The correlation of adjacent pixels can also be eliminated. When compared with an algorithm based on the Logistic map, it has higher complexity and better uniformity, and is nearer to a true random number. It is also efficient to implement, which shows its value for common use.
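The keystream-generator structure described above can be sketched compactly. The logistic map below is only a stand-in for the paper's higher-complexity Markov-property chaos and coupled map lattice, and the permutation (confusion) stage is omitted; the point is the keystream-XOR shape of the cipher.

```python
import numpy as np

def logistic_keystream(x0, r, n):
    """Generate n keystream bytes from a logistic map (a simple stand-in
    for the paper's Markov-property chaos and coupled map lattice)."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) & 0xFF
    return out

def encrypt(image, key=(0.3141592, 3.99)):   # (x0, r) play the role of the key
    flat = image.ravel()
    ks = logistic_keystream(key[0], key[1], flat.size)
    return (flat ^ ks).reshape(image.shape)  # XOR diffusion; decrypt = same op

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
enc = encrypt(img)
assert np.array_equal(encrypt(enc), img)     # XOR keystream is its own inverse
```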
Robust media processing on programmable power-constrained systems
NASA Astrophysics Data System (ADS)
McVeigh, Jeff
2005-03-01
To achieve consumer-level quality, media systems must process continuous streams of audio and video data while maintaining exacting tolerances on sampling rate, jitter, synchronization, and latency. While it is relatively straightforward to design fixed-function hardware implementations to satisfy worst-case conditions, there is a growing trend to utilize programmable multi-tasking solutions for media applications. The flexibility of these systems enables support for multiple current and future media formats, which can reduce design costs and time-to-market. This paper provides practical engineering solutions to achieve robust media processing on such systems, with specific attention given to power-constrained platforms. The techniques covered in this article utilize the fundamental concepts of algorithm and software optimization, software/hardware partitioning, stream buffering, hierarchical prioritization, and system resource and power management. A novel enhancement to dynamically adjust processor voltage and frequency based on buffer fullness to reduce system power consumption is examined in detail. The application of these techniques is provided in a case study of a portable video player implementation based on a general-purpose processor running a non real-time operating system that achieves robust playback of synchronized H.264 video and MP3 audio from local storage and streaming over 802.11.
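The buffer-fullness enhancement described above reduces to a simple control mapping: run slower when the playback buffer is comfortably full, faster when it drains. The frequency levels and thresholds below are hypothetical, chosen only to make the idea concrete.

```python
# Sketch of the buffer-fullness heuristic: pick a lower processor frequency
# when the playback buffer is nearly full (we are ahead of the deadline) and
# a higher one as it drains. Levels and thresholds are hypothetical.
FREQ_LEVELS_MHZ = [200, 400, 600, 800]

def choose_frequency(buffer_fill_ratio):
    """Map buffer occupancy in [0, 1] to a frequency level."""
    if buffer_fill_ratio > 0.75:
        return FREQ_LEVELS_MHZ[0]    # well ahead: slow down, save power
    if buffer_fill_ratio > 0.50:
        return FREQ_LEVELS_MHZ[1]
    if buffer_fill_ratio > 0.25:
        return FREQ_LEVELS_MHZ[2]
    return FREQ_LEVELS_MHZ[3]        # nearly empty: race to refill

for fill in (0.9, 0.6, 0.3, 0.1):
    print(fill, choose_frequency(fill), "MHz")
```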
NASA Astrophysics Data System (ADS)
Li, Jiafu; Xiang, Shuiying; Wang, Haoning; Gong, Junkai; Wen, Aijun
2018-03-01
In this paper, a novel image encryption algorithm based on synchronization of physical random bits generated in a cascade-coupled semiconductor ring laser (CCSRL) system is proposed, and a security analysis is performed. In both the transmitter and receiver parts, the CCSRL system is a master-slave configuration consisting of a master semiconductor ring laser (M-SRL) with cross-feedback and a solitary SRL (S-SRL). The proposed image encryption algorithm includes image preprocessing based on conventional chaotic maps, pixel confusion based on a control matrix extracted from the physical random bits, and pixel diffusion based on a random bit stream extracted from the physical random bits. Firstly, the preprocessing method is used to eliminate the correlation between adjacent pixels. Secondly, physical random bits with verified randomness are generated based on chaos in the CCSRL system, and are used to simultaneously generate the control matrix and random bit stream. Finally, the control matrix and random bit stream are used in the encryption algorithm to change the positions and values of pixels, respectively. Simulation results and security analysis demonstrate that the proposed algorithm is effective and able to resist various typical attacks, and thus is an excellent candidate for secure image communication applications.
NASA Technical Reports Server (NTRS)
Das, Santanu; Srivastava, Ashok N.; Matthews, Bryan L.; Oza, Nikunj C.
2010-01-01
The world-wide aviation system is one of the most complex dynamical systems ever developed and is generating data at an extremely rapid rate. Most modern commercial aircraft record several hundred flight parameters including information from the guidance, navigation, and control systems, the avionics and propulsion systems, and the pilot inputs into the aircraft. These parameters may be continuous measurements or binary or categorical measurements recorded in one second intervals for the duration of the flight. Currently, most approaches to aviation safety are reactive, meaning that they are designed to react to an aviation safety incident or accident. In this paper, we discuss a novel approach based on the theory of multiple kernel learning to detect potential safety anomalies in very large databases of discrete and continuous data from world-wide operations of commercial fleets. We pose a general anomaly detection problem which includes both discrete and continuous data streams, where we assume that the discrete streams have a causal influence on the continuous streams. We also assume that an atypical sequence of events in the discrete streams can lead to off-nominal system performance. We discuss the application domain and novel algorithms, and present results on real-world data sets. Our algorithm uncovers operationally significant events in high dimensional data streams in the aviation industry which are not detectable using state-of-the-art methods.
NASA Astrophysics Data System (ADS)
Piotrowski, Adam P.; Napiorkowski, Jaroslaw J.
2018-06-01
A number of physical or data-driven models have been proposed to evaluate stream water temperatures based on hydrological and meteorological observations. However, physical models require a large amount of information that is frequently unavailable, while data-based models ignore the physical processes. Recently the air2stream model has been proposed as an intermediate alternative that is based on physical heat budget processes, but it is so simplified that the model may be applied like data-driven ones. However, the price for simplicity is the need to calibrate eight parameters that, although they have some physical meaning, cannot be measured or evaluated a priori. As a result, the applicability and performance of the air2stream model for a particular stream rely on the efficiency of the calibration method. The original air2stream model uses an inefficient 20-year-old approach called Particle Swarm Optimization with inertia weight. This study aims at finding an effective and robust calibration method for the air2stream model. Twelve different optimization algorithms are examined on six different streams from the northern USA (states of Washington, Oregon and New York), Poland and Switzerland, located in high mountain, hilly and lowland areas. It is found that the performance of the air2stream model depends significantly on the calibration method. Two algorithms lead to the best results for each considered stream. The air2stream model, calibrated with the chosen optimization methods, performs favorably against classical stream water temperature models. The MATLAB code of the air2stream model and the chosen calibration procedure (CoBiDE) are available as Supplementary Material on the Journal of Hydrology web page.
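The calibration task has the familiar shape of global optimization of a handful of model parameters against observed temperatures. The sketch below uses SciPy's differential evolution as a related population-based method (CoBiDE itself is not in SciPy), and `simulate` is a toy surrogate with the same problem shape, not the real air2stream equations.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Sketch: calibrate an 8-parameter stream-temperature model by minimizing
# RMSE against observations. Synthetic data and a stand-in model.
rng = np.random.default_rng(0)
air = 15 + 10 * np.sin(np.linspace(0, 2 * np.pi, 365))        # daily air temp
observed = 5 + 0.6 * air + rng.normal(0, 0.5, air.size)       # "measured" water temp

def simulate(params, air_temp):
    a1, a2, a3, *_ = params        # unused slots mimic the 8-parameter setup
    return a1 + a2 * air_temp + a3 * np.roll(air_temp, 1)

def rmse(params):
    return np.sqrt(np.mean((simulate(params, air) - observed) ** 2))

bounds = [(-20, 20)] * 8
result = differential_evolution(rmse, bounds, seed=1, maxiter=200)
print(result.x[:3], result.fun)
```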
The inverse of winnowing: a FORTRAN subroutine and discussion of unwinnowing discrete data
Bracken, Robert E.
2004-01-01
This report describes an unwinnowing algorithm that utilizes a discrete Fourier transform, and a resulting Fortran subroutine that winnows or unwinnows a 1-dimensional stream of discrete data; the source code is included. The unwinnowing algorithm effectively increases (by integral factors) the number of available data points while maintaining the original frequency spectrum of a data stream. This has utility when an increased data density is required together with an availability of higher order derivatives that honor the original data.
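The core of unwinnowing is spectrum-preserving upsampling: zero-pad the transform, inverse-transform at the longer length, and rescale. A NumPy sketch of that idea follows; it illustrates the technique described above rather than reproducing the Fortran subroutine itself.

```python
import numpy as np

def unwinnow(x, factor):
    """Increase the number of samples by an integral factor while keeping
    the original frequency spectrum, by zero-padding the DFT."""
    n = len(x)
    X = np.fft.rfft(x)
    X_padded = np.zeros(n * factor // 2 + 1, dtype=complex)
    X_padded[:len(X)] = X
    # scale so amplitudes are preserved after the longer inverse transform
    return np.fft.irfft(X_padded, n * factor) * factor

t = np.linspace(0, 1, 16, endpoint=False)
coarse = np.sin(2 * np.pi * 3 * t)
dense = unwinnow(coarse, 4)          # 64 points, same spectrum
print(len(coarse), len(dense))
```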
An algorithm to extract more accurate stream longitudinal profiles from unfilled DEMs
NASA Astrophysics Data System (ADS)
Byun, Jongmin; Seong, Yeong Bae
2015-08-01
Morphometric features observed from a stream longitudinal profile (SLP) reflect channel responses to lithological variation and changes in uplift or climate; therefore, they constitute essential indicators in the studies for the dynamics between tectonics, climate, and surface processes. The widespread availability of digital elevation models (DEMs) and their processing enable semi-automatic extraction of SLPs as well as additional stream profile parameters, thus reducing the time spent for extracting them and simultaneously allowing regional-scale studies of SLPs. However, careful consideration is required to extract SLPs directly from a DEM, because the DEM must be altered by depression filling process to ensure the continuity of flows across it. Such alteration inevitably introduces distortions to the SLP, such as stair steps, bias of elevation values, and inaccurate stream paths. This paper proposes a new algorithm, called maximum depth tracing algorithm (MDTA), to extract more accurate SLPs using depression-unfilled DEMs. The MDTA supposes that depressions in DEMs are not necessarily artifacts to be removed, and that elevation values within them are useful to represent more accurately the real landscape. To ensure the continuity of flows even across the unfilled DEM, the MDTA first determines the outlet of each depression and then reverses flow directions of the cells on the line of maximum depth within each depression, beginning from the outlet and toward the sink. It also calculates flow accumulation without disruption across the unfilled DEM. Comparative analysis with the profiles extracted by the hydrologic functions implemented in the ArcGIS™ was performed to illustrate the benefits from the MDTA. It shows that the MDTA provides more accurate stream paths on depression areas, and consequently reduces distortions of the SLPs derived from the paths, such as exaggerated elevation values and negatively biased slopes that are commonly observed in the SLPs built using the ArcGIS™. The algorithm proposed here, therefore, could aid all the studies requiring more reliable stream paths and SLPs from DEMs.
New method for predicting estrogen receptor status utilizing breast MRI texture kinetic analysis
NASA Astrophysics Data System (ADS)
Chaudhury, Baishali; Hall, Lawrence O.; Goldgof, Dmitry B.; Gatenby, Robert A.; Gillies, Robert; Drukteinis, Jennifer S.
2014-03-01
Magnetic Resonance Imaging (MRI) of breast cancer typically shows that tumors are heterogeneous with spatial variations in blood flow and cell density. Here, we examine the potential link between clinical tumor imaging and the underlying evolutionary dynamics behind heterogeneity in the cellular expression of estrogen receptors (ER) in breast cancer. We assume, in an evolutionary environment, that ER expression will only occur in the presence of significant concentrations of estrogen, which is delivered via the blood stream. Thus, we hypothesize, the expression of ER in breast cancer cells will correlate with blood flow on gadolinium enhanced breast MRI. To test this hypothesis, we performed quantitative analysis of blood flow on dynamic contrast enhanced MRI (DCE-MRI) and correlated it with the ER status of the tumor. Here we present our analytic methods, which utilize a novel algorithm to analyze 20 volumetric DCE-MRI breast cancer tumors. The algorithm generates post initial enhancement (PIE) maps from DCE-MRI and then performs texture features extraction from the PIE map, feature selection, and finally classification of tumors into ER positive and ER negative status. The combined gray level co-occurrence matrices, gray level run length matrices and local binary pattern histogram features allow quantification of breast tumor heterogeneity. The algorithm predicted ER expression with an accuracy of 85% using a Naive Bayes classifier in leave-one-out cross-validation. Hence, we conclude that our data supports the hypothesis that imaging characteristics can, through application of evolutionary principles, provide insights into the cellular and molecular properties of cancer cells.
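The pipeline above (texture features from an enhancement map, feature selection, Naive Bayes with leave-one-out validation) can be sketched with standard libraries. Random arrays stand in for the PIE maps, the labels are hypothetical, and only the co-occurrence features are shown; the paper's run-length and local-binary-pattern features are omitted.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # skimage >= 0.19
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score, LeaveOneOut

def glcm_features(image):
    """Gray-level co-occurrence texture features for one (stand-in) PIE map."""
    glcm = graycomatrix(image, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel()
                      for p in ("contrast", "homogeneity", "energy")])

rng = np.random.default_rng(42)
tumors = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(20)]
er_status = rng.integers(0, 2, 20)            # hypothetical ER labels
X = np.array([glcm_features(t) for t in tumors])
scores = cross_val_score(GaussianNB(), X, er_status, cv=LeaveOneOut())
print("LOO accuracy:", scores.mean())
```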
NASA Astrophysics Data System (ADS)
Zhang, Hong; Hou, Rui; Yi, Lei; Meng, Juan; Pan, Zhisong; Zhou, Yuhuan
2016-07-01
The accurate identification of encrypted data streams helps to regulate illegal data, detect network attacks and protect users' information. In this paper, a novel encrypted data stream identification algorithm is introduced. The proposed method is based on the randomness characteristics of encrypted data streams. We use an l1-norm regularized logistic regression to improve the sparse representation of randomness features and a Fuzzy Gaussian Mixture Model (FGMM) to improve identification accuracy. Experimental results demonstrate that the method can be adopted as an effective technique for encrypted data stream identification.
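The randomness-feature side of such a method can be sketched as follows. The three statistics below (bit balance, run rate, byte entropy) are illustrative stand-ins for the paper's feature set, the FGMM stage is omitted, and the "encrypted" samples are just uniform random bytes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def randomness_features(chunk):
    """Simple randomness statistics of a byte string (illustrative set)."""
    data = np.frombuffer(chunk, dtype=np.uint8)
    bits = np.unpackbits(data)
    runs = np.count_nonzero(np.diff(bits)) + 1
    p = np.bincount(data, minlength=256) / data.size
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return [bits.mean(), runs / bits.size, entropy]

rng = np.random.default_rng(7)
encrypted = [rng.integers(0, 256, 1024, dtype=np.uint8).tobytes()
             for _ in range(50)]                       # near-uniform bytes
plain = [(b"the quick brown fox " * 52)[:1024] for _ in range(50)]
X = np.array([randomness_features(c) for c in encrypted + plain])
y = np.array([1] * 50 + [0] * 50)
# l1 penalty drives uninformative feature weights toward zero
clf = LogisticRegression(penalty="l1", solver="liblinear").fit(X, y)
print(clf.score(X, y), clf.coef_)
```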
New Parallel Algorithms for Landscape Evolution Model
NASA Astrophysics Data System (ADS)
Jin, Y.; Zhang, H.; Shi, Y.
2017-12-01
Most landscape evolution models (LEM) developed in the last two decades solve the diffusion equation to simulate the transportation of surface sediments. This numerical approach is difficult to parallelize due to the computation of the drainage area for each node, which requires a huge amount of communication when run in parallel. In order to overcome this difficulty, we developed two parallel algorithms for LEM with a stream net. One algorithm handles the partition of the grid with traditional methods and applies an efficient global reduction algorithm to compute drainage areas and transport rates for the stream net; the other algorithm is based on a new partition algorithm, which partitions the nodes in catchments between processes first, and then partitions the cells according to the partition of nodes. Both methods focus on decreasing communication between processes and take advantage of massive computing techniques, and numerical experiments show that they are both adequate to handle large scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can be easily coupled with ASPECT.
Feasibility of video codec algorithms for software-only playback
NASA Astrophysics Data System (ADS)
Rodriguez, Arturo A.; Morse, Ken
1994-05-01
Software-only video codecs can provide good playback performance in desktop computers with a 486 or 68040 CPU running at 33 MHz without special hardware assistance. Typically, playback of compressed video can be categorized into three tasks: the actual decoding of the video stream, color conversion, and the transfer of decoded video data from system RAM to video RAM. By current standards, good playback performance is the decoding and display of video streams of 320 by 240 (or larger) compressed frames at 15 (or greater) frames-per-second. Software-only video codecs have evolved by modifying and tailoring existing compression methodologies to suit video playback in desktop computers. In this paper we examine the characteristics used to evaluate software-only video codec algorithms, namely: image fidelity (i.e., image quality), bandwidth (i.e., compression), ease-of-decoding (i.e., playback performance), memory consumption, compression to decompression asymmetry, scalability, and delay. We discuss the tradeoffs among these variables and the compromises that can be made to achieve low numerical complexity for software-only playback. Frame-differencing approaches are described since software-only video codecs typically employ them to enhance playback performance. To complement other papers that appear in this session of the Proceedings, we review methods derived from binary pattern image coding since these methods are amenable for software-only playback. In particular, we introduce a novel approach called pixel distribution image coding.
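A minimal sketch of the frame-differencing idea mentioned above: encode each frame as only the blocks that changed beyond a threshold, so the decoder touches just the changed pixels. Real codecs add entropy coding, motion compensation, and periodic key frames; the block size and threshold here are arbitrary.

```python
import numpy as np

BLOCK = 8

def encode_frame(prev, curr, threshold=4):
    """Return only the 8x8 blocks that changed beyond the threshold."""
    changed = []
    for y in range(0, curr.shape[0], BLOCK):
        for x in range(0, curr.shape[1], BLOCK):
            block = curr[y:y + BLOCK, x:x + BLOCK]
            if np.abs(block.astype(int) -
                      prev[y:y + BLOCK, x:x + BLOCK]).max() > threshold:
                changed.append((y, x, block.copy()))
    return changed                   # only dirty blocks are transmitted

def decode_frame(prev, changed):
    out = prev.copy()
    for y, x, block in changed:
        out[y:y + BLOCK, x:x + BLOCK] = block
    return out

prev = np.zeros((240, 320), dtype=np.uint8)
curr = prev.copy()
curr[100:120, 50:90] = 200           # a small moving object
packet = encode_frame(prev, curr)
assert np.array_equal(decode_frame(prev, packet), curr)
print(len(packet), "dirty blocks instead of", (240 // 8) * (320 // 8))
```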
Analysis of a non-storm time enhancement in outer belt electrons
NASA Astrophysics Data System (ADS)
Schiller, Q.; Li, X.; Godinez, H. C.; Sarris, T. E.; Tu, W.; Malaspina, D.; Turner, D. L.; Blake, J. B.; Koller, J.
2014-12-01
A high-speed solar wind stream impacted Earth's magnetosphere on January 13th, 2013, and is associated with a large enhancement (>2.5 orders) of outer radiation belt electron fluxes despite a small Dst signature (-30 nT). Fortunately, the outer belt was well sampled by a variety of missions during the event, including the Van Allen Probes, THEMIS, and the Colorado Student Space Weather Experiment (CSSWE). In-situ flux and phase space density observations are used from MagEIS (Magnetic Electron Ion Spectrometer) onboard the Van Allen Probes, REPTile (Relativistic Electron and Proton Telescope integrated little experiment) onboard CSSWE, and SST onboard THEMIS. The observations show a rapid increase in 100's keV electron fluxes, followed by a more gradual enhancement of the MeV energies. The 100's keV enhancement is associated with a substorm injection, and the further energization to MeV energies is associated with wave activity as measured by the Van Allen Probes and THEMIS. Furthermore, the phase space density radial profiles show an acceleration region occurring between 5
ENHANCED STREAM WATER QUALITY MODEL (QUAL2EU)
The enhanced stream water quality model QUAL2E and QUAL2E-UNCAS (37) permits simulation of several water quality constituents in a branching stream system using a finite difference solution to the one-dimensional advective-dispersive mass transport and reaction equation. The con...
Data Centric Sensor Stream Reduction for Real-Time Applications in Wireless Sensor Networks
Aquino, Andre Luiz Lins; Nakamura, Eduardo Freire
2009-01-01
This work presents a data-centric strategy to meet deadlines in soft real-time applications in wireless sensor networks. This strategy considers three main aspects: (i) the design of the real-time application to obtain the minimum deadlines; (ii) an analytic model to estimate the ideal sample size used by data-reduction algorithms; and (iii) two data-centric stream-based sampling algorithms to perform data reduction whenever necessary. Simulation results show that our data-centric strategies meet deadlines without losing data representativeness. PMID:22303145
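One standard stream-based sampling technique for this kind of data reduction is reservoir sampling, sketched below: a node keeps a fixed-size uniform sample of the readings seen so far and ships only that. The paper's analytic model would supply the sample size k; here k is fixed for illustration, and this is a generic technique, not necessarily the authors' exact algorithm.

```python
import random

def reservoir_sample(stream, k):
    """Keep a uniform random sample of k items from a stream of unknown length."""
    sample = []
    for i, reading in enumerate(stream):
        if i < k:
            sample.append(reading)
        else:
            j = random.randint(0, i)     # keeps each item with probability k/(i+1)
            if j < k:
                sample[j] = reading
    return sample

readings = (20.0 + 0.01 * i for i in range(10_000))   # simulated sensor data
print(reservoir_sample(readings, k=8))
```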
Tile prediction schemes for wide area motion imagery maps in GIS
NASA Astrophysics Data System (ADS)
Michael, Chris J.; Lin, Bruce Y.
2017-11-01
Wide-area surveillance, traffic monitoring, and emergency management are just several of many applications benefiting from the incorporation of Wide-Area Motion Imagery (WAMI) maps into geographic information systems. Though the use of motion imagery as a GIS base map via the Web Map Service (WMS) standard is not a new concept, effectively streaming imagery is particularly challenging due to its large scale and the multidimensionally interactive nature of clients that use WMS. Ineffective streaming from a server to one or more clients can unnecessarily overwhelm network bandwidth and cause frustratingly large amounts of latency in visualization to the user. Seamlessly streaming WAMI through GIS requires good prediction to accurately guess the tiles of the video that will be traversed in the near future. In this study, we present an experimental framework for such prediction schemes by presenting a stochastic interaction model that represents a human user's interaction with a GIS video map. We then propose several algorithms by which the tiles of the stream may be predicted. Results collected both within the experimental framework and using human analyst trajectories show that, though each algorithm thrives under certain constraints, the novel Markovian algorithm yields the best results overall. Furthermore, we make the argument that the proposed experimental framework is sufficient for the study of these prediction schemes.
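A first-order Markov tile predictor of the kind evaluated above can be sketched briefly: count observed tile-to-tile transitions from past client sessions and prefetch the most likely successors of the tile currently in view. The paper's Markovian algorithm may condition on richer state; this shows only the basic shape.

```python
from collections import Counter, defaultdict

class TilePredictor:
    """First-order Markov model over (row, col) tile coordinates."""
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, trajectory):
        # record every consecutive tile-to-tile move in a client session
        for a, b in zip(trajectory, trajectory[1:]):
            self.transitions[a][b] += 1

    def predict(self, tile, n=3):
        # tiles worth prefetching next, most likely first
        return [t for t, _ in self.transitions[tile].most_common(n)]

predictor = TilePredictor()
predictor.observe([(4, 4), (4, 5), (5, 5), (5, 6)])   # pan to the south-east
predictor.observe([(4, 4), (4, 5), (4, 6)])
print(predictor.predict((4, 5)))
```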
Contemplating Synergistic Algorithms for the NASA ACE Mission
NASA Technical Reports Server (NTRS)
Mace, Gerald G.; Starr, David O.; Marchand, Roger; Ackerman, Steven A.; Platnick, Steven E.; Fridlind, Ann; Cooper, Steven; Vane, Deborah G.; Stephens, Graeme L.
2013-01-01
ACE is a proposed Tier 2 NASA Decadal Survey mission that will focus on clouds, aerosols, and precipitation as well as ocean ecosystems. The primary objective of the clouds component of this mission is to advance our ability to predict changes to the Earth's hydrological cycle and energy balance in response to climate forcings by generating observational constraints on future science questions, especially those associated with the effects of aerosol on clouds and precipitation. ACE will continue and extend the measurement heritage that began with the A-Train and that will continue through Earthcare. ACE planning efforts have identified several data streams that can contribute significantly to characterizing the properties of clouds and precipitation and the physical processes that force these properties. These include dual frequency Doppler radar, high spectral resolution lidar, polarimetric visible imagers, passive microwave and submillimeter wave radiometry. While all these data streams are technologically feasible, their total cost is substantial and likely prohibitive. It is, therefore, necessary to critically evaluate their contributions to the ACE science goals. We have begun developing algorithms to explore this trade space. Specifically, we will describe our early exploratory algorithms that take as input the set of potential ACE-like data streams and evaluate critically to what extent each data stream influences the error in a specific cloud quantity retrieval.
Fast Adapting Ensemble: A New Algorithm for Mining Data Streams with Concept Drift
Ortíz Díaz, Agustín; Ramos-Jiménez, Gonzalo; Frías Blanco, Isvani; Caballero Mota, Yailé; Morales-Bueno, Rafael
2015-01-01
The treatment of large data streams in the presence of concept drifts is one of the main challenges in the field of data mining, particularly when the algorithms have to deal with concepts that disappear and then reappear. This paper presents a new algorithm, called Fast Adapting Ensemble (FAE), which adapts very quickly to both abrupt and gradual concept drifts, and has been specifically designed to deal with recurring concepts. FAE processes the learning examples in blocks of the same size, but it does not have to wait for the batch to be complete in order to adapt its base classification mechanism. FAE incorporates a drift detector to improve the handling of abrupt concept drifts and stores a set of inactive classifiers that represent old concepts, which are activated very quickly when these concepts reappear. We compare our new algorithm with various well-known learning algorithms on common benchmark datasets. The experiments show promising results from the proposed algorithm (regarding accuracy and runtime), handling different types of concept drifts. PMID:25879051
Stream macrophytes are often removed with their sediments to deepen stream channels, stabilize channel banks, or provide habitat for target species. These sediments may support enhanced nitrogen processing. To evaluate sediment nitrogen processing, identify seasonal patterns, and...
Remote Sensing Applications to Water Quality Management in Florida
Increasingly, optical datasets from estuarine and coastal systems are becoming available for remote sensing algorithm development, validation, and application. With validated algorithms, the data streams from satellite sensors can provide unprecedented spatial and temporal data ...
Open-cycle OTEC system performance analysis. [Claude cycle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewandowski, A.A.; Olson, D.A.; Johnson, D.H.
1980-10-01
An algorithm developed to calculate the performance of Claude-cycle ocean thermal energy conversion (OTEC) systems is described. The algorithm treats each component of the system separately and then interfaces them to form a complete system, allowing a component to be changed without changing the rest of the algorithm. Two components that are subject to change are the evaporator and condenser. For this study we developed mathematical models of a channel-flow evaporator and both a horizontal jet and a spray direct-contact condenser. The algorithm was then programmed to run on SERI's CDC 7600 computer and used to calculate the effect on performance of deaerating the warm and cold water streams before entering the evaporator and condenser, respectively. This study indicates that there is no advantage to removing air from these streams compared with removing the air from the condenser.
NASA Astrophysics Data System (ADS)
Lin, Chow-Sing; Yen, Fang-Zhi
With the rapid advances in wireless network communication, multimedia presentation has become more applicable. However, due to limited wireless network resources and the mobility of the Mobile Host (MH), QoS for wireless streaming is much more difficult to maintain. How to decrease the Call Dropping Probability (CDP) in multimedia traffic while still keeping an acceptable Call Blocking Probability (CBP) without sacrificing QoS has become a significant issue in providing wireless streaming services. In this paper, we propose a novel Dynamic Resources Adjustment (DRA) algorithm, which can dynamically borrow idle reserved resources in the serving cell or the target cell for handoffing MHs to compensate for the shortage of bandwidth in media streaming. The experimental simulation results show that, compared with traditional No Reservation (NR), Resource Reservation in the six neighboring cells (RR-nb), and Resource Reservation in the target cell (RR-t), our proposed DRA algorithm can fully utilize unused reserved resources to effectively decrease the CDP while still keeping an acceptable CBP with high bandwidth utilization.
NASA Astrophysics Data System (ADS)
Horstmann, Jan Tobias; Le Garrec, Thomas; Mincu, Daniel-Ciprian; Lévêque, Emmanuel
2017-11-01
Despite the efficiency and low dissipation of the stream-collide scheme of the discrete-velocity Boltzmann equation, which is nowadays implemented in many lattice Boltzmann solvers, a major drawback exists over alternative discretization schemes, i.e. finite-volume or finite-difference, that is the limitation to Cartesian uniform grids. In this paper, an algorithm is presented that combines the positive features of each scheme in a hybrid lattice Boltzmann method. In particular, the node-based streaming of the distribution functions is coupled with a second-order finite-volume discretization of the advection term of the Boltzmann equation under the Bhatnagar-Gross-Krook approximation. The algorithm is established on a multi-domain configuration, with the individual schemes being solved on separate sub-domains and connected by an overlapping interface of at least 2 grid cells. A critical parameter in the coupling is the CFL number equal to unity, which is imposed by the stream-collide algorithm. Nevertheless, a semi-implicit treatment of the collision term in the finite-volume formulation allows us to obtain a stable solution for this condition. The algorithm is validated in the scope of three different test cases on a 2D periodic mesh. It is shown that the accuracy of the combined discretization schemes agrees with the order of each separate scheme involved. The overall numerical error of the hybrid algorithm in the macroscopic quantities is contained between the error of the two individual algorithms. Finally, we demonstrate how such a coupling can be used to adapt to anisotropic flows with some gradual mesh refinement in the FV domain.
Cloud-Hosted Real-time Data Services for the Geosciences (CHORDS)
NASA Astrophysics Data System (ADS)
Daniels, M. D.; Graves, S. J.; Kerkez, B.; Chandrasekar, V.; Vernon, F.; Martin, C. L.; Maskey, M.; Keiser, K.; Dye, M. J.
2015-12-01
The Cloud-Hosted Real-time Data Services for the Geosciences (CHORDS) project, funded as part of NSF's EarthCube initiative, addresses the ever-increasing importance of real-time scientific data, particularly in mission critical scenarios, where informed decisions must be made rapidly. Advances in the distribution of real-time data are leading many new transient phenomena in space-time to be observed, however, real-time decision-making is infeasible in many cases as these streaming data are either completely inaccessible or only available to proprietary in-house tools or displays. This lack of accessibility prohibits advanced algorithm and workflow development that could be initiated or enhanced by these data streams. Small research teams do not have resources to develop tools for the broad dissemination of their valuable real-time data and could benefit from an easy to use, scalable, cloud-based solution to facilitate access. CHORDS proposes to make a very diverse suite of real-time data available to the broader geosciences community in order to allow innovative new science in these areas to thrive. This presentation will highlight recently developed CHORDS portal tools and processing systems aimed at addressing some of the gaps in handling real-time data, particularly in the provisioning of data from the "long-tail" scientific community through a simple interface deployed in the cloud. The CHORDS system will connect these real-time streams via standard services from the Open Geospatial Consortium (OGC) and does so in a way that is simple and transparent to the data provider. Broad use of the CHORDS framework will expand the role of real-time data within the geosciences, and enhance the potential of streaming data sources to enable adaptive experimentation and real-time hypothesis testing. Adherence to community data and metadata standards will promote the integration of CHORDS real-time data with existing standards-compliant analysis, visualization and modeling tools.
Enabling Incremental Query Re-Optimization.
Liu, Mengmeng; Ives, Zachary G; Loo, Boon Thau
2016-01-01
As declarative query processing techniques expand to the Web, data streams, network routers, and cloud platforms, there is an increasing need to re-plan execution in the presence of unanticipated performance changes. New runtime information may affect which query plan we prefer to run. Adaptive techniques require innovation both in terms of the algorithms used to estimate costs, and in terms of the search algorithm that finds the best plan. We investigate how to build a cost-based optimizer that recomputes the optimal plan incrementally given new cost information, much as a stream engine constantly updates its outputs given new data. Our implementation especially shows benefits for stream processing workloads. It lays the foundations upon which a variety of novel adaptive optimization algorithms can be built. We start by leveraging the recently proposed approach of formulating query plan enumeration as a set of recursive datalog queries; we develop a variety of novel optimization approaches to ensure effective pruning in both static and incremental cases. We further show that the lessons learned in the declarative implementation can be equally applied to more traditional optimizer implementations.
Enabling Incremental Query Re-Optimization
Liu, Mengmeng; Ives, Zachary G.; Loo, Boon Thau
2017-01-01
As declarative query processing techniques expand to the Web, data streams, network routers, and cloud platforms, there is an increasing need to re-plan execution in the presence of unanticipated performance changes. New runtime information may affect which query plan we prefer to run. Adaptive techniques require innovation both in terms of the algorithms used to estimate costs, and in terms of the search algorithm that finds the best plan. We investigate how to build a cost-based optimizer that recomputes the optimal plan incrementally given new cost information, much as a stream engine constantly updates its outputs given new data. Our implementation especially shows benefits for stream processing workloads. It lays the foundations upon which a variety of novel adaptive optimization algorithms can be built. We start by leveraging the recently proposed approach of formulating query plan enumeration as a set of recursive datalog queries; we develop a variety of novel optimization approaches to ensure effective pruning in both static and incremental cases. We further show that the lessons learned in the declarative implementation can be equally applied to more traditional optimizer implementations. PMID:28659658
Predicting and Detecting Emerging Cyberattack Patterns Using StreamWorks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, George; Choudhury, Sutanay; Feo, John T.
2014-06-30
The number and sophistication of cyberattacks on industries and governments have dramatically grown in recent years. To counter this movement, new advanced tools and techniques are needed to detect cyberattacks in their early stages such that defensive actions may be taken to avert or mitigate potential damage. From a cybersecurity analysis perspective, detecting cyberattacks may be cast as a problem of identifying patterns in computer network traffic. Logically and intuitively, these patterns may take on the form of a directed graph that conveys how an attack or intrusion propagates through the computers of a network. Such cyberattack graphs could provide cybersecurity analysts with powerful conceptual representations that are natural to express and analyze. We have been researching and developing graph-centric approaches and algorithms for dynamic cyberattack detection. The advanced dynamic graph algorithms we are developing will be packaged into a streaming network analysis framework known as StreamWorks. With StreamWorks, a scientist or analyst may detect and identify precursor events and patterns as they emerge in complex networks. This analysis framework is intended to be used in a dynamic environment where network data is streamed in and is appended to a large-scale dynamic graph. Specific graphical query patterns are decomposed and collected into a graph query library. The individual decomposed subpatterns in the library are continuously and efficiently matched against the dynamic graph as it evolves to identify and detect early, partial subgraph patterns. The scalable emerging subgraph pattern algorithms will match on both structural and semantic network properties.
Incremental isometric embedding of high-dimensional data using connected neighborhood graphs.
Zhao, Dongfang; Yang, Li
2009-01-01
Most nonlinear data embedding methods use bottom-up approaches for capturing the underlying structure of data distributed on a manifold in high dimensional space. These methods often share the first step which defines neighbor points of every data point by building a connected neighborhood graph so that all data points can be embedded to a single coordinate system. These methods are required to work incrementally for dimensionality reduction in many applications. Because the input data stream may be under-sampled or skewed from time to time, building a connected neighborhood graph is crucial to the success of incremental data embedding using these methods. This paper presents algorithms for updating k-edge-connected and k-connected neighborhood graphs after a new data point is added or an old data point is deleted. It further utilizes a simple algorithm for updating all-pair shortest distances on the neighborhood graph. Together with incremental classical multidimensional scaling using iterative subspace approximation, this paper devises an incremental version of Isomap with enhancements to deal with under-sampled or unevenly distributed data. Experiments on both synthetic and real-world data sets show that the algorithm is efficient and maintains low dimensional configurations of high dimensional data under various data distributions.
Comparing Models and Methods for the Delineation of Stream Baseflow Contribution Areas
NASA Astrophysics Data System (ADS)
Chow, R.; Frind, M.; Frind, E. O.; Jones, J. P.; Sousa, M.; Rudolph, D. L.; Nowak, W.
2016-12-01
This study addresses the delineation of areas that contribute baseflow to a stream reach, also known as stream capture zones. Such areas can be delineated using standard well capture zone delineation methods, with three important differences: (1) natural gradients are smaller compared to those produced by supply wells and are therefore subject to greater numerical errors, (2) stream discharge varies seasonally, and (3) stream discharge varies spatially. This study focuses on model-related uncertainties due to parameter non-uniqueness, discretization schemes, and particle tracking algorithms. The methodology is applied to the Alder Creek watershed in southwestern Ontario. Four different model codes are compared: HydroGeoSphere, WATFLOW, MODFLOW, and FEFLOW. In addition, two delineation methods are compared: reverse particle tracking and reverse transport, where the latter considers local-scale parameter uncertainty by using a macrodispersion term to produce a capture probability plume. The results from this study indicate that different models can calibrate acceptably well to the same data and produce very similar distributions of hydraulic head, but can produce different capture zones. The stream capture zone is found to be highly sensitive to the particle tracking algorithm. It was also found that particle tracking by itself, if applied to complex systems such as the Alder Creek watershed, would require considerable subjective judgement in the delineation of stream capture zones. Reverse transport is an alternate approach that provides probability intervals for the baseflow contribution areas. In situations where the two approaches agree, the confidence in the delineation is reinforced.
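The particle-tracking side of the comparison can be sketched minimally: seed particles along the stream reach and integrate backwards against the velocity field, so each path traces the water back to its origin. The velocity field below is a synthetic stand-in; real delineations take velocities from a calibrated groundwater flow solution, and the reverse-transport alternative is not shown.

```python
import numpy as np

def velocity(p):
    """Synthetic 2-D velocity field standing in for a calibrated flow model."""
    x, y = p
    return np.array([1.0 + 0.1 * y, 0.2 * np.sin(x)])

def track_backwards(seed, dt=0.1, steps=200):
    """Reverse particle tracking: explicit Euler steps against the flow."""
    path = [np.asarray(seed, dtype=float)]
    for _ in range(steps):
        p = path[-1]
        path.append(p - dt * velocity(p))
    return np.array(path)

seeds = [(10.0, y) for y in np.linspace(-1, 1, 5)]   # points along the reach
paths = [track_backwards(s) for s in seeds]
print(paths[0][-1])   # approximate upstream origin of the first seed
```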
NASA Astrophysics Data System (ADS)
Obulesu, O.; Rama Mohan Reddy, A., Dr; Mahendra, M.
2017-08-01
Detecting regular and efficient cyclic models is a demanding activity for data analysts due to the unstructured, vigorous and enormous raw information produced from the web. Many existing approaches generate large candidate patterns in the presence of huge and complex databases. In this work, two novel algorithms are proposed and a comparative examination is performed considering scalability and performance parameters. The first algorithm, EFPMA (Extended Regular Model Detection Algorithm), is used to find frequent sequential patterns from spatiotemporal datasets, and the second, ETMA (Enhanced Tree-based Mining Algorithm), detects effective cyclic models with a symbolic database representation. EFPMA is an algorithm that grows models from both ends (prefixes and suffixes) of detected patterns, which results in faster pattern growth because of fewer levels of database projection compared to existing approaches such as PrefixSpan and SPADE. ETMA uses distinct notions to store and manage transaction data horizontally, such as segments, sequences and individual symbols. ETMA exploits a partition-and-conquer method to find maximal patterns by using symbolic notations. Using this algorithm, we can mine cyclic models in full-series sequential patterns, including subsection series. ETMA reduces memory consumption and makes use of efficient symbolic operations. Furthermore, ETMA only records time-series instances dynamically, in terms of character, series and section approaches, respectively. Evaluating the extent of the patterns and proving the efficiency of the reduction and retrieval techniques on synthetic and actual datasets remains an open and challenging mining problem. These techniques are useful in data streams, traffic risk analysis, medical diagnosis, DNA sequence mining, and earthquake prediction applications. Extensive experimental outcomes illustrate that the algorithms outperform ECLAT, STNR and MAFIA approaches in terms of efficiency and scalability.
Characterization of robotics parallel algorithms and mapping onto a reconfigurable SIMD machine
NASA Technical Reports Server (NTRS)
Lee, C. S. G.; Lin, C. T.
1989-01-01
The kinematics, dynamics, Jacobian, and their corresponding inverse computations are six essential problems in the control of robot manipulators. Efficient parallel algorithms for these computations are discussed and analyzed. Their characteristics are identified and a scheme for mapping these algorithms to a reconfigurable parallel architecture is presented. Based on the characteristics, including type of parallelism, degree of parallelism, uniformity of the operations, fundamental operations, data dependencies, and communication requirements, it is shown that most of the algorithms for robotic computations possess highly regular properties and some common structures, especially the linear recursive structure. Moreover, they are well-suited to be implemented on a single-instruction-stream multiple-data-stream (SIMD) computer with a reconfigurable interconnection network. The model of a reconfigurable dual network SIMD machine with internal direct feedback is introduced, and a systematic procedure to map these computations to the proposed machine is presented. A new scheduling problem for SIMD machines is investigated and a heuristic algorithm, called neighborhood scheduling, that reorders the processing sequence of subtasks to reduce the communication time is described. Mapping results of a benchmark algorithm are illustrated and discussed.
Design issues and caching strategies for CD-ROM-based multimedia storage
NASA Astrophysics Data System (ADS)
Shastri, Vijnan; Rajaraman, V.; Jamadagni, H. S.; Venkat-Rangan, P.; Sampath-Kumar, Srihari
1996-03-01
CD-ROMs have proliferated as a distribution media for desktop machines for a large variety of multimedia applications (targeted for a single-user environment) like encyclopedias, magazines and games. With CD-ROM capacities up to 3 GB being available in the near future, they will form an integral part of Video on Demand (VoD) servers to store full-length movies and multimedia. In the first section of this paper we look at issues related to the single-user desktop environment. Since these multimedia applications are highly interactive in nature, we take a pragmatic approach, and have made a detailed study of the multimedia application behavior in terms of the I/O request patterns generated to the CD-ROM subsystem by tracing these patterns. We discuss prefetch buffer design and seek time characteristics in the context of the analysis of these traces. We also propose an adaptive main-memory hosted cache that receives caching hints from the application to reduce the latency when the user moves from one node of the hypergraph to another. In the second section we look at the use of CD-ROM in a VoD server and discuss the problem of scheduling multiple request streams and buffer management in this scenario. We adapt the C-SCAN (Circular SCAN) algorithm to suit the CD-ROM drive characteristics and prove that it is optimal in terms of buffer size management. We provide computationally inexpensive relations by which this algorithm can be implemented. We then propose an admission control algorithm which admits new request streams without disrupting the continuity of playback of the previous request streams. The algorithm also supports operations such as fast forward and replay. Finally, we discuss the problem of optimal placement of MPEG streams on CD-ROMs in the third section.
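The C-SCAN ordering adapted above is simple to state: serve pending requests in increasing track order, then wrap to the lowest outstanding track, so the head sweeps in one direction only. A sketch follows; the buffer sizing and admission control from the paper are not modeled, and the track numbers are arbitrary.

```python
def c_scan_order(requests, head):
    """C-SCAN service order.
    requests: track numbers of pending stream reads; head: current track."""
    ahead = sorted(r for r in requests if r >= head)
    behind = sorted(r for r in requests if r < head)
    return ahead + behind      # wrap to the lowest track after the sweep

print(c_scan_order([98, 183, 37, 122, 14, 124, 65, 67], head=53))
# -> [65, 67, 98, 122, 124, 183, 14, 37]
```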
New Splitting Criteria for Decision Trees in Stationary Data Streams.
Jaworski, Maciej; Duda, Piotr; Rutkowski, Leszek
2018-06-01
The most popular tools for stream data mining are based on decision trees. In the previous 15 years, all designed methods, headed by the very fast decision tree algorithm, relied on Hoeffding's inequality, and hundreds of researchers followed this scheme. Recently, we have demonstrated that although the Hoeffding decision trees are an effective tool for dealing with stream data, they are a purely heuristic procedure; for example, classical decision trees such as ID3 or CART cannot be adapted to data stream mining using Hoeffding's inequality. Therefore, there is an urgent need to develop new algorithms which are both mathematically justified and characterized by good performance. In this paper, we address this problem by developing a family of new splitting criteria for classification in stationary data streams and investigating their probabilistic properties. The new criteria, derived using appropriate statistical tools, are based on the misclassification error and the Gini index impurity measures. A general division of splitting criteria into two types is proposed. Attributes chosen based on type-I splitting criteria guarantee, with high probability, the highest expected value of the split measure. Type-II criteria ensure that the chosen attribute is the same, with high probability, as would be chosen based on the whole infinite data stream. Moreover, two hybrid splitting criteria are proposed, which are combinations of single criteria based on the misclassification error and the Gini index.
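For reference, the Hoeffding-bound split test that the paper critiques has this familiar shape (a generic sketch of the VFDT-style decision, not the authors' new criteria):

```python
import math

# Classical Hoeffding-bound split test used by the very fast decision tree
# (VFDT) family: the heuristic scheme the paper argues should be replaced
# by statistically justified criteria.

def hoeffding_bound(value_range, delta, n):
    # With probability 1 - delta, the empirical mean of n observations of a
    # variable with range `value_range` lies within eps of the true mean.
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

def should_split(gain_best, gain_second, n, value_range=1.0, delta=1e-6):
    # Split when the observed gain advantage of the best attribute over the
    # runner-up exceeds the bound, i.e. it is unlikely to be a sampling artifact.
    eps = hoeffding_bound(value_range, delta, n)
    return (gain_best - gain_second) > eps

print(should_split(gain_best=0.30, gain_second=0.21, n=2000))  # True
```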
NASA Astrophysics Data System (ADS)
Liu, Jiping; Kang, Xiaochen; Dong, Chun; Xu, Shenghua
2017-12-01
Surface area estimation is a widely used tool for resource evaluation in the physical world. When processing large-scale spatial data, input/output (I/O) can easily become the bottleneck in parallelizing the algorithm due to limited physical memory and the very slow disk transfer rate. In this paper, we proposed a stream tiling approach to surface area estimation that first decomposed a spatial data set into tiles with topological expansions. With these tiles, the one-to-one mapping relationship between the input and the computing process was broken. Then, we realized a streaming framework for the scheduling of the I/O processes and computing units. Herein, each computing unit encapsulated the same copy of the estimation algorithm, and multiple asynchronous computing units could work individually in parallel. Finally, the performed experiment demonstrated that our stream tiling estimation can efficiently alleviate the heavy pressure from I/O-bound work, and the measured speedups after optimization greatly outperformed the directly parallelized versions in shared-memory systems with multi-core processors.
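The decoupling of I/O from computation can be pictured as a producer-consumer pipeline. The following toy version (all names and the stand-in estimator are ours, not the paper's) shows one I/O thread feeding several asynchronous computing units, each holding its own copy of the estimator:

```python
import queue
import threading

def load_tiles(n_tiles):
    # Stand-in for tile decomposition with topological expansion.
    for i in range(n_tiles):
        yield {"id": i, "cells": [(1.0, 1.2), (0.8, 1.1)]}  # (planar area, slope factor)

def estimate_surface_area(tile):
    # Stand-in estimator: surface area ~ planar area * slope correction.
    return sum(a * f for a, f in tile["cells"])

def run(n_tiles=100, n_workers=4):
    tiles, results = queue.Queue(maxsize=8), []

    def producer():
        for t in load_tiles(n_tiles):
            tiles.put(t)
        for _ in range(n_workers):
            tiles.put(None)  # one stop sentinel per worker

    def consumer():
        while (t := tiles.get()) is not None:
            results.append(estimate_surface_area(t))

    io = threading.Thread(target=producer)
    workers = [threading.Thread(target=consumer) for _ in range(n_workers)]
    io.start()
    for w in workers:
        w.start()
    io.join()
    for w in workers:
        w.join()
    return sum(results)

print(run())
```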
Handling Dynamic Weights in Weighted Frequent Pattern Mining
NASA Astrophysics Data System (ADS)
Ahmed, Chowdhury Farhan; Tanbeer, Syed Khairuzzaman; Jeong, Byeong-Soo; Lee, Young-Koo
Even though weighted frequent pattern (WFP) mining is more effective than traditional frequent pattern mining because it can consider the different semantic significances (weights) of items, existing WFP algorithms assume that each item has a fixed weight. In real-world scenarios, however, the weight (price or significance) of an item can vary with time. Reflecting these changes in item weight is necessary in several mining applications, such as retail market data analysis and web click stream analysis. In this paper, we introduce the concept of a dynamic weight for each item, and propose an algorithm, DWFPM (dynamic weighted frequent pattern mining), that makes use of this concept. Our algorithm can handle situations where the weight (price or significance) of an item varies dynamically. It exploits a pattern growth mining technique to avoid the level-wise candidate set generation-and-test methodology. Furthermore, it requires only one database scan, so it is eligible for use in stream data mining. An extensive performance analysis shows that our algorithm is efficient and scalable for WFP mining using dynamic weights.
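A toy illustration of why dynamic weights matter (our simplification, not the DWFPM data structure): a pattern's weighted support scales its raw support by the mean of its items' current weights, so the same transactions can cross or fall below a mining threshold as weights change.

```python
# Weighted support under time-varying item weights (toy example).

def weighted_support(pattern, transactions, weights):
    count = sum(1 for t in transactions if set(pattern) <= set(t))
    mean_w = sum(weights[i] for i in pattern) / len(pattern)
    return count * mean_w

transactions = [("bread", "milk"), ("bread", "butter"), ("milk", "butter")]
w_morning = {"bread": 0.4, "milk": 0.9, "butter": 0.7}
w_evening = {"bread": 0.4, "milk": 0.3, "butter": 0.7}
print(weighted_support(("milk",), transactions, w_morning))  # 1.8
print(weighted_support(("milk",), transactions, w_evening))  # 0.6
```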
NASA Astrophysics Data System (ADS)
Hagemann, M. W.; Gleason, C. J.; Durand, M. T.
2017-11-01
The forthcoming Surface Water and Ocean Topography (SWOT) NASA satellite mission will measure water surface width, height, and slope of major rivers worldwide. The resulting data could provide an unprecedented account of river discharge at continental scales, but reliable methods need to be identified prior to launch. Here we present a novel algorithm for discharge estimation from only remotely sensed stream width, slope, and height at multiple locations along a mass-conserved river segment. The algorithm, termed the Bayesian AMHG-Manning (BAM) algorithm, implements a Bayesian formulation of streamflow uncertainty using a combination of Manning's equation and at-many-stations hydraulic geometry (AMHG). Bayesian methods provide a statistically defensible approach to generating discharge estimates in a physically underconstrained system but rely on prior distributions that quantify the a priori uncertainty of unknown quantities including discharge and hydraulic equation parameters. These were obtained from literature-reported values and from a USGS data set of acoustic Doppler current profiler (ADCP) measurements at USGS stream gauges. A data set of simulated widths, slopes, and heights from 19 rivers was used to evaluate the algorithms using a set of performance metrics. Results across the 19 rivers indicate an improvement in performance of BAM over previously tested methods and highlight a path forward in solving discharge estimation using solely satellite remote sensing.
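At the core of BAM is Manning's equation as a forward model. A minimal sketch follows (the wide-channel approximation R ≈ A/W is our simplification): treating cross-sectional area A and roughness n as uncertain quantities with priors, and width W and slope S as the SWOT observables, is what makes the inversion Bayesian.

```python
# Manning's equation as the forward model inside a Bayesian discharge inversion.

def manning_discharge(A, n, W, S):
    R = A / W  # hydraulic radius, wide-channel approximation
    return (1.0 / n) * A * R ** (2.0 / 3.0) * S ** 0.5

# e.g. A = 300 m^2, n = 0.03, W = 100 m, S = 1e-4
print(manning_discharge(300.0, 0.03, 100.0, 1e-4))  # ~208 m^3/s
```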
Backwards compatible high dynamic range video compression
NASA Astrophysics Data System (ADS)
Dolzhenko, Vladimir; Chesnokov, Vyacheslav; Edirisinghe, Eran A.
2014-02-01
This paper presents a two-layer CODEC architecture for high dynamic range video compression. The base layer contains the tone-mapped video stream encoded with 8 bits per component, which can be decoded using conventional equipment. The base layer content is optimized for rendering on low dynamic range displays. The enhancement layer contains the image difference, in a perceptually uniform color space, between the result of the inverse tone-mapped base layer content and the original video stream. Prediction of the high dynamic range content reduces the redundancy in the transmitted data while still preserving highlights and out-of-gamut colors. The perceptually uniform color space enables the use of standard rate-distortion optimization algorithms. We present techniques for efficient implementation and encoding of non-uniform tone mapping operators with low overhead in terms of bitstream size and number of operations. The transform representation is based on a human visual system model and is suitable for global and local tone mapping operators. The compression techniques include predicting the transform parameters from previously decoded frames and from already-decoded data for the current frame. Different video compression techniques are compared: backwards compatible and non-backwards compatible, using AVC and HEVC codecs.
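The two-layer structure can be summarized in a few lines of numpy (a schematic, not the authors' codec; the transfer function and tone-mapping operator below are placeholders):

```python
import numpy as np

def to_perceptual(x, gamma=0.45):
    # Placeholder perceptually uniform transfer; real codecs use a calibrated one.
    return np.power(np.clip(x, 0.0, None), gamma)

def tone_map(hdr):
    # Placeholder global tone-mapping operator.
    return hdr / (1.0 + hdr)

def inverse_tone_map(ldr):
    return ldr / np.clip(1.0 - ldr, 1e-6, None)

hdr = np.random.rand(4, 4) * 100.0              # linear-light HDR frame
base = np.round(tone_map(hdr) * 255) / 255      # quantized 8-bit base layer
residual = to_perceptual(hdr) - to_perceptual(inverse_tone_map(base))
# `base` is coded conventionally; `residual` is coded as the enhancement layer.
```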
Lee, Chaewoo
2014-01-01
Advances in wideband wireless networks support real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances overall system throughput compared to an existing algorithm. PMID:25276862
Scheduling optimization of design stream line for production research and development projects
NASA Astrophysics Data System (ADS)
Liu, Qinming; Geng, Xiuli; Dong, Ming; Lv, Wenyuan; Ye, Chunming
2017-05-01
In a development project, efficient design stream line scheduling is difficult and important owing to large design imprecision and differences in the skills and skill levels of employees. The relative skill levels of employees are denoted as fuzzy numbers. Multiple execution modes are generated by scheduling different employees for design tasks. An optimization model of the design stream line scheduling problem is proposed with the constraints of multiple execution modes, multi-skilled employees and precedence. The model considers the parallel design of multiple projects, different skills of employees, flexible multi-skilled employees and resource constraints. The objective function is to minimize the duration and tardiness of the project. Moreover, a two-dimensional particle swarm algorithm is used to find the optimal solution. To illustrate the validity of the proposed method, a case is examined in this article, and the results support the feasibility and effectiveness of the proposed model and algorithm.
Stream Kriging: Incremental and recursive ordinary Kriging over spatiotemporal data streams
NASA Astrophysics Data System (ADS)
Zhong, Xu; Kealy, Allison; Duckham, Matt
2016-05-01
Ordinary Kriging is widely used for geospatial interpolation and estimation. Due to the O(n³) time complexity of solving the system of linear equations, ordinary Kriging for a large set of source points is computationally intensive. Conducting real-time Kriging interpolation over continuously varying spatiotemporal data streams can therefore be especially challenging. This paper develops and tests two new strategies for improving the performance of an ordinary Kriging interpolator adapted to a stream-processing environment. These strategies rely on the expectation that, over time, source data points will frequently refer to the same spatial locations (for example, where static sensor nodes are generating repeated observations of a dynamic field). First, an incremental strategy improves efficiency in cases where a relatively small proportion of previously processed spatial locations are absent from the source points at any given iteration. Second, a recursive strategy improves efficiency in cases where there is substantial overlap between the sets of spatial locations of source points at the current and previous iterations. These two strategies are evaluated in terms of their computational efficiency in comparison to the standard ordinary Kriging algorithm. The results show that the two strategies can reduce the time taken to perform the interpolation by up to 90%, and approach an average-case time complexity of O(n²) when most but not all source points refer to the same locations over time. By combining the approaches developed in this paper with existing heuristic ordinary Kriging algorithms, the conclusions indicate how further efficiency gains could potentially be accrued. The work ultimately contributes to the development of online ordinary Kriging interpolation algorithms capable of real-time spatial interpolation with large streaming data sets.
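The flavor of the incremental/recursive strategies can be illustrated with a Schur-complement update of the covariance inverse when a single point is added, which costs O(n²) instead of an O(n³) refactorization (our simplified sketch; the full ordinary Kriging system also carries the unbiasedness constraint, omitted here):

```python
import numpy as np

def add_point(K_inv, k_new, k_self):
    """Grow an inverse covariance by one point via the Schur complement.
    K_inv: inverse of the old n x n covariance; k_new: covariances between
    the new point and the old ones (n,); k_self: variance of the new point."""
    u = K_inv @ k_new
    s = k_self - k_new @ u                  # Schur complement (scalar)
    n = K_inv.shape[0]
    out = np.empty((n + 1, n + 1))
    out[:n, :n] = K_inv + np.outer(u, u) / s
    out[:n, n] = -u / s
    out[n, :n] = -u / s
    out[n, n] = 1.0 / s
    return out

# Quick check against direct inversion with a toy exponential covariance:
rng = np.random.default_rng(0)
X = rng.random((5, 2))
Xp = np.vstack([X, [[0.5, 0.5]]])
cov = lambda a, b: np.exp(-np.linalg.norm(a - b))
K = np.array([[cov(a, b) for b in X] for a in X])
Knew = np.array([[cov(a, b) for b in Xp] for a in Xp])
inc = add_point(np.linalg.inv(K), Knew[:5, 5], Knew[5, 5])
print(np.allclose(inc, np.linalg.inv(Knew)))   # True
```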
ERIC Educational Resources Information Center
Tataw, Oben Moses
2013-01-01
Interdisciplinary research in computer science requires the development of computational techniques for practical application in different domains. This usually requires careful integration of different areas of technical expertise. This dissertation presents image and time series analysis algorithms, with practical interdisciplinary applications…
Implementation of Dynamic Extensible Adaptive Locally Exchangeable Measures (IDEALEM) v 0.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sim, Alex; Lee, Dongeun; Wu, K. John
2016-03-04
Handling large streaming data is essential for various applications such as network traffic analysis, social networks, energy cost trends, and environment modeling. However, it is in general intractable to store, compute, search, and retrieve large streaming data. This software addresses a fundamental issue, which is to reduce the size of large streaming data and still obtain accurate statistical analysis. As an example, when a high-speed network such as a 100 Gbps network is monitored, the collected measurement data rapidly grow so that polynomial time algorithms (e.g., Gaussian processes) become intractable. One possible solution to reduce the storage of vast amounts of measured data is to store a random sample, such as one out of 1000 network packets. However, such static sampling methods (linear sampling) have drawbacks: (1) they are not scalable for high-rate streaming data, and (2) there is no guarantee of reflecting the underlying distribution. In this software, we implemented a dynamic sampling algorithm, based on the recent technology of relational dynamic Bayesian online locally exchangeable measures, that reduces the storage of data records on a large scale and still provides accurate analysis of large streaming data. The software can be used for both online and offline data records.
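As a loose illustration of the dynamic-sampling idea (a stand-in, not the IDEALEM algorithm itself): store a new block of measurements only when its empirical distribution is not already represented by a stored block, here using a two-sample Kolmogorov-Smirnov distance as the exchangeability proxy.

```python
import bisect
import random

def ks_distance(a, b):
    # Two-sample Kolmogorov-Smirnov statistic (max ECDF gap).
    a, b = sorted(a), sorted(b)
    def ecdf(s, x):
        return bisect.bisect_right(s, x) / len(s)
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)

def compress_stream(blocks, threshold=0.2):
    stored, encoded = [], []
    for block in blocks:
        match = next((i for i, s in enumerate(stored)
                      if ks_distance(block, s) < threshold), None)
        if match is None:
            stored.append(block)
            encoded.append(("new", len(stored) - 1))
        else:
            encoded.append(("ref", match))  # distribution already represented
    return stored, encoded

random.seed(1)
blocks = [[random.gauss(0, 1) for _ in range(64)] for _ in range(10)]
stored, encoded = compress_stream(blocks)
print(f"{len(stored)} blocks stored for {len(blocks)} observed")
```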
In-camera video-stream processing for bandwidth reduction in web inspection
NASA Astrophysics Data System (ADS)
Jullien, Graham A.; Li, QiuPing; Hajimowlana, S. Hossain; Morvay, J.; Conflitti, D.; Roberts, James W.; Doody, Brian C.
1996-02-01
Automated machine vision systems are now widely used for industrial inspection tasks where video-stream data are taken in by the camera and then sent out to the inspection system for further processing. In this paper we describe a prototype system for on-line programming of arbitrary real-time video data stream bandwidth reduction algorithms; the output of the camera only contains information that has to be further processed by a host computer. The processing system is built into a DALSA CCD camera and uses a microcontroller interface to download bit-stream data to a XILINX™ FPGA. The FPGA is directly connected to the video data stream and outputs data to a low-bandwidth output bus. The camera communicates with a host computer via an RS-232 link to the microcontroller. Static memory is used both to provide a FIFO interface for buffering defect burst data and for off-line examination of defect detection data. In addition to providing arbitrary FPGA architectures, the internal program of the microcontroller can also be changed via the host computer and a ROM monitor. This paper describes a prototype system board, mounted inside a DALSA camera, and discusses some of the algorithms currently being implemented for web inspection applications.
Zhao, De-Zhi; Shi, Chuan; Li, Xiao-Song; Zhu, Ai-Min; Jang, Ben W-L
2012-11-15
At room temperature, the enhancing effect of water vapor on the ozone catalytic oxidation (OZCO) of formaldehyde to CO2 over MnOx catalysts, and on the reaction stability, is reported. In a dry air stream, less than 20% of formaldehyde could be oxidized into CO2 by O3. In humid air streams (RH≥55%), ∼100% of formaldehyde was oxidized into CO2 by O3 and the reaction stability was significantly enhanced. Meanwhile, in situ Diffuse Reflectance Infrared Fourier Transform (DRIFT) spectra of the OZCO of HCHO demonstrate that the amount of both monodentate and bidentate carbonate species on MnOx in the dry stream increased gradually with time on stream (TOS). However, in the humid stream, almost no accumulation of carbonate species on the catalysts was observed. To clarify the enhancement mechanism, formaldehyde surface reactions and CO2 adsorption/desorption on the fresh, O3-treated and O3+H2O-treated MnOx catalysts were examined comparatively.
A contourlet transform based algorithm for real-time video encoding
NASA Astrophysics Data System (ADS)
Katsigiannis, Stamos; Papaioannou, Georgios; Maroulis, Dimitris
2012-06-01
In recent years, real-time video communication over the internet has been widely utilized for applications like video conferencing. Streaming live video over heterogeneous IP networks, including wireless networks, requires video coding algorithms that can support various levels of quality in order to adapt to the network end-to-end bandwidth and transmitter/receiver resources. In this work, a scalable video coding and compression algorithm based on the Contourlet Transform is proposed. The algorithm allows for multiple levels of detail, without re-encoding the video frames, by just dropping the encoded information referring to higher resolution than needed. Compression is achieved by means of lossy and lossless methods, as well as variable bit rate encoding schemes. Furthermore, due to the transformation utilized, it does not suffer from blocking artifacts that occur with many widely adopted compression algorithms. Another highly advantageous characteristic of the algorithm is the suppression of noise induced by low-quality sensors usually encountered in web-cameras, due to the manipulation of the transform coefficients at the compression stage. The proposed algorithm is designed to introduce minimal coding delay, thus achieving real-time performance. Performance is enhanced by utilizing the vast computational capabilities of modern GPUs, providing satisfactory encoding and decoding times at relatively low cost. These characteristics make this method suitable for applications like video-conferencing that demand real-time performance, along with the highest visual quality possible for each user. Through the presented performance and quality evaluation of the algorithm, experimental results show that the proposed algorithm achieves better or comparable visual quality relative to other compression and encoding methods tested, while maintaining a satisfactory compression ratio. Especially at low bitrates, it provides more human-eye friendly images compared to algorithms utilizing block-based coding, like the MPEG family, as it introduces fuzziness and blurring instead of artificial block artifacts.
Richard E. Wehnes
1989-01-01
The quality of streams and stream habitat for aquatic life and terrestrial animals in the central hardwood forest can be maintained or enhanced through careful protection, management, and re-establishment of streamside forests.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mace, Gerald G.
What has made the ASR program unique is the amount of information that is available. The suite of recently deployed instruments significantly expands the scope of the program (Mather and Voyles, 2013). The breadth of this information allows us to pose sophisticated process-level questions. Our ASR project, now entering its third year, has been about developing algorithms that use this information in ways that fully exploit the new capacity of the ARM data streams. Using optimal estimation (OE) and Markov Chain Monte Carlo (MCMC) inversion techniques, we have developed methodologies that allow us to use multiple radar-frequency Doppler spectra along with lidar and passive constraints, where data streams can be added or subtracted efficiently and algorithms can be reformulated for various combinations of hydrometeors by exchanging sets of empirical coefficients. These methodologies have been applied to boundary layer clouds, mixed-phase snow cloud systems, and cirrus.
NASA Astrophysics Data System (ADS)
Geneva, Nicholas; Wang, Lian-Ping
2015-11-01
In the past 25 years, the mesoscopic lattice Boltzmann method (LBM) has become an increasingly popular approach to simulating incompressible flows, including turbulent flows. While LBM solves more solution variables than conventional CFD approaches based on the macroscopic Navier-Stokes equation, it also offers opportunities for more efficient parallelization. In this talk we will describe several different algorithms that have been developed over the past ten-plus years, which can be used to implement the two core steps of LBM, collision and streaming, more effectively than standard approaches. The application of these algorithms spans LBM simulations ranging from basic channel flows to particle-laden flows. We will cover the essential details of the implementation of each algorithm for simple 2D flows, and the challenges one faces when using a given algorithm for more complex simulations. The key is to explore the best use of data structures and cache memory. Two basic data structures will be discussed and the importance of effective data storage to maximize a CPU's cache will be addressed. The performance of a 3D turbulent channel flow simulation using these different algorithms and data structures will be compared, along with important hardware-related issues.
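For concreteness, the two core steps look like this in a textbook D2Q9 BGK sketch (numpy, periodic boundaries; this is the naive baseline, not the cache-optimized kernels the talk compares):

```python
import numpy as np

# D2Q9 lattice velocities and weights.
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return w[:, None, None] * rho * (1.0 + cu + 0.5 * cu**2 - usq)

def step(f, tau=0.6):
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau   # BGK collision
    for i in range(9):                           # streaming (periodic shift)
        f[i] = np.roll(f[i], shift=(c[i, 1], c[i, 0]), axis=(0, 1))
    return f

f = equilibrium(np.ones((32, 32)), np.zeros((32, 32)), np.zeros((32, 32)))
f = step(f)
```

The shift in the streaming loop is exactly the memory traffic that alternative data layouts (swap, in-place, and similar patterns) reorganize to improve cache behavior.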
On Reducing Delay in Mesh-Based P2P Streaming: A Mesh-Push Approach
NASA Astrophysics Data System (ADS)
Liu, Zheng; Xue, Kaiping; Hong, Peilin
The peer-assisted streaming paradigm has recently been widely employed to distribute live video data on the internet. In general, the mesh-based pull approach is more robust and efficient than the tree-based push approach. However, the pull protocol incurs longer streaming delay, caused by the handshaking process of advertising buffer map messages, sending request messages and scheduling the data blocks. In this paper, we propose a new approach, mesh-push, to address this issue. Different from the traditional pull approach, mesh-push implements the block scheduling algorithm at the sender side, where block transmission is initiated by the sender rather than by the receiver. We first formulate the optimal upload bandwidth utilization problem, then present the mesh-push approach, in which a token protocol is designed to avoid block redundancy, a min-cost flow model is employed to derive the optimal scheduling for the push peer, and a push peer selection algorithm is introduced to reduce control overhead. Finally, we evaluate mesh-push through simulation; the results show that mesh-push outperforms pull scheduling in streaming delay while achieving a comparable delivery ratio.
New Algorithms and Lower Bounds for Sequential-Access Data Compression
NASA Astrophysics Data System (ADS)
Gagie, Travis
2009-02-01
This thesis concerns sequential-access data compression, i.e., compression by algorithms that read the input one or more times from beginning to end. In one chapter we consider adaptive prefix coding, for which we must read the input character by character, outputting each character's self-delimiting codeword before reading the next one. We show how to encode and decode each character in constant worst-case time while producing an encoding whose length is worst-case optimal. In another chapter we consider one-pass compression with memory bounded in terms of the alphabet size and context length, and prove a nearly tight tradeoff between the amount of memory we can use and the quality of the compression we can achieve. In a third chapter we consider compression in the read/write streams model, which allows a number of passes and an amount of memory that are both polylogarithmic in the size of the input. We first show how to achieve universal compression using only one pass over one stream. We then show that one stream is not sufficient for achieving good grammar-based compression. Finally, we show that two streams are necessary and sufficient for achieving entropy-only bounds.
The response of macroinvertebrates to artificially enhanced detritus levels in plantation streams
NASA Astrophysics Data System (ADS)
Pretty, J. L.; Dobson, M.
The leaves and wood from vegetation surrounding headwater streams constitute a major food source for aquatic invertebrates, provided they are retained on the streambed and not transported downstream. This study investigated the response of aquatic invertebrates to artificially increased detritus retention, in an effort to reproduce the naturally occurring build-up of dead organic matter associated with streams in old-growth forest. The background detrital standing stock in streams in Kielder Forest (Northumberland, UK) was low, approximately 32 g m⁻². Two streams flowing through dense conifer plantation and one in open broadleaved woodland were manipulated by the addition of logs over a 10 m stream reach. After several months, log addition had significantly enhanced detrital standing stocks in both conifer and broadleaved streams. Total invertebrate abundance, taxon richness and the numbers of certain numerically dominant families were significantly higher in experimental than in reference reaches in both conifer and broadleaved streams. This response was most marked for detritivores, whilst non-detritivore groups often showed no response to the manipulation. Whilst in the short term the responses to enhanced retention may reflect a redistribution of the local fauna, it is argued that over a longer time-scale a genuine increase in invertebrate density and diversity could occur. Allowing old-growth forest to develop in planted valley bottoms may be a viable management option for conservation. If established alongside streams, it would ensure a continuous input of woody material, and the fauna may benefit from the resulting increase in detritus retention.
Machine Learning-based Transient Brokers for Real-time Classification of the LSST Alert Stream
NASA Astrophysics Data System (ADS)
Narayan, Gautham; Zaidi, Tayeb; Soraisam, Monika; ANTARES Collaboration
2018-01-01
The number of transient events discovered by wide-field time-domain surveys already far outstrips the combined follow-up resources of the astronomical community. This number will only increase as we progress towards the commissioning of the Large Synoptic Survey Telescope (LSST), breaking the community's current follow-up paradigm. Transient brokers, software to sift through, characterize, annotate and prioritize events for follow-up, will be a critical tool for managing alert streams in the LSST era. Developing the algorithms that underlie the brokers, and obtaining simulated LSST-like datasets prior to LSST commissioning to train and test these algorithms, are formidable, though not insurmountable, challenges. The Arizona-NOAO Temporal Analysis and Response to Events System (ANTARES) is a joint project of the National Optical Astronomy Observatory and the Department of Computer Science at the University of Arizona. We have been developing completely automated methods to characterize and classify variable and transient events from their multiband optical photometry. We describe the hierarchical ensemble machine learning algorithm we are developing, and test its performance on sparse, unevenly sampled, heteroskedastic data from various existing observational campaigns, as well as our progress towards incorporating these into a real-time event broker working on live alert streams from time-domain surveys.
Formal development of a clock synchronization circuit
NASA Technical Reports Server (NTRS)
Miner, Paul S.
1995-01-01
This talk presents the latest stage in the formal development of a fault-tolerant clock synchronization circuit. The development spans from a high-level specification of the required properties to a circuit realizing the core function of the system. An abstract description of an algorithm has been verified to satisfy the high-level properties using the mechanical verification system EHDM. This abstract description is recast as a behavioral specification input to the Digital Design Derivation system (DDD) developed at Indiana University. DDD provides a formal design algebra for developing correct digital hardware. Using DDD as the principal design environment, a core circuit implementing the clock synchronization algorithm was developed. The design process consisted of standard DDD transformations augmented with an ad hoc refinement justified using the Prototype Verification System (PVS) from SRI International. Subsequent to the above development, Wilfredo Torres-Pomales discovered an area-efficient realization of the same function. Establishing correctness of this optimization requires reasoning in arithmetic, so a general verification is outside the domain of both DDD transformations and model-checking techniques. DDD represents digital hardware by systems of mutually recursive stream equations. A collection of PVS theories was developed to aid in reasoning about DDD-style streams. These theories include a combinator for defining streams that satisfy stream equations, and a means for proving stream equivalence by exhibiting a stream bisimulation. DDD was used to isolate the sub-system involved in Torres-Pomales' optimization. The equivalence between the original design and the optimized design was verified in PVS by exhibiting a suitable bisimulation. The verification depended upon type constraints on the input streams and made extensive use of the PVS type system. The dependent types in PVS provided a useful mechanism for defining an appropriate bisimulation.
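A loose Python analogue of the stream-equation style may help readers unfamiliar with DDD (the actual development is in PVS, where equivalence is established by bisimulation once and for all, not by the finite unrolling shown here):

```python
from itertools import islice

def counter(init):
    # Stream equation: s(0) = init, s(t+1) = s(t) + 1 (mod 8)
    x = init
    while True:
        yield x
        x = (x + 1) % 8

def counter_optimized(init):
    # An "optimized" realization claimed equivalent to `counter`.
    x = init & 7
    while True:
        yield x
        x = (x + 1) & 7

def agree(s1, s2, n=1000):
    # Finite approximation of stream equivalence: compare the first n elements.
    return all(a == b for a, b in zip(islice(s1, n), islice(s2, n)))

print(agree(counter(3), counter_optimized(3)))   # True (finite check only)
```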
Sanders, Michael J.; Markstrom, Steven L.; Regan, R. Steven; Atkinson, R. Dwight
2017-09-15
A module for simulation of daily mean water temperature in a network of stream segments has been developed as an enhancement to the U.S. Geological Survey Precipitation Runoff Modeling System (PRMS). This new module is based on the U.S. Fish and Wildlife Service Stream Network Temperature model, a mechanistic, one-dimensional heat transport model. The new module is integrated in PRMS. Stream-water temperature simulation is activated by selection of the appropriate input flags in the PRMS Control File and by providing the necessary additional inputs in standard PRMS input files. This report includes a comprehensive discussion of the methods relevant to the stream temperature calculations and detailed instructions for model input preparation.
A Review on Data Stream Classification
NASA Astrophysics Data System (ADS)
Haneen, A. A.; Noraziah, A.; Wahab, Mohd Helmy Abd
2018-05-01
At the present time, the significance of data streams cannot be denied, as many researchers have focused on them in the research areas of databases, statistics, and computer science. Data streams are ordered, potentially unbounded sequences of data points generated by information-producing processes that are not stationary. Typical data mining tasks have accordingly been adapted to data streams, including clustering, classification, and frequent pattern mining. This paper presents several density-based data stream clustering approaches and attempts to explain the function of the related algorithms, both semi-supervised and active learning, along with reviews of a number of recent studies.
Category-Specific Comparison of Univariate Alerting Methods for Biosurveillance Decision Support
Elbert, Yevgeniy; Hung, Vivian; Burkom, Howard
2013-01-01
Objective: For a multi-source decision support application, we sought to match univariate alerting algorithms to surveillance data types to optimize detection performance.

Introduction: Temporal alerting algorithms commonly used in syndromic surveillance systems are often adjusted for data features such as cyclic behavior but are subject to overfitting or misspecification errors when applied indiscriminately. In a project for the Armed Forces Health Surveillance Center to enable multivariate decision support, we obtained 4.5 years of outpatient, prescription and laboratory test records from all US military treatment facilities. A proof-of-concept project phase produced 16 events with multiple-evidence corroboration for comparison of alerting algorithms for detection performance. We used the representative streams from each data source to compare the sensitivity of 6 algorithms to injected spikes, and we used all data streams from the 16 known events to compare them for detection timeliness.

Methods: The six methods compared were: (1) the Holt-Winters generalized exponential smoothing method; (2) an automated choice between daily methods, regression and an exponentially weighted moving average (EWMA); (3) an adaptive daily Shewhart-type chart; (4) an adaptive one-sided daily CUSUM; (5) an EWMA applied to 7-day means with a trend correction; and (6) a 7-day temporal scan statistic.

Sensitivity testing: We conducted comparative sensitivity testing for categories of time series with similar scales and seasonal behavior. We added multiples of the standard deviation of each time series as single-day injects in separate algorithm runs. For each candidate method, we then used as a sensitivity measure the proportion of these runs for which the output of each algorithm was below alerting thresholds estimated empirically for each algorithm using simulated data streams. We identified the algorithm(s) whose sensitivity was most consistently high for each data category. For each syndromic query applied to each data source (outpatient, lab test orders, and prescriptions), 502 authentic time series were derived, one for each reporting treatment facility. Data categories were selected in order to group time series with similar expected algorithm performance: Median > 10; 0 < Median ≤ 10; Median = 0; lag-7 autocorrelation coefficient ≥ 0.2; lag-7 autocorrelation coefficient < 0.2.

Timeliness testing: For the timeliness testing, we avoided the artificiality of simulated signals by measuring alerting detection delays in the 16 corroborated outbreaks. The multiple time series from these events gave a total of 141 time series with outbreak intervals for timeliness testing. The following measures were computed to quantify timeliness of detection: Median Detection Delay, the median number of days to detect the outbreak; and Penalized Mean Detection Delay, the mean number of days to detect the outbreak, with outbreak misses penalized as 1 day plus the maximum detection time.

Results: Based on the injection results, the Holt-Winters algorithm was most sensitive among time series with positive medians. The adaptive CUSUM and the Shewhart methods were most sensitive for data streams with median zero. Table 1 provides timeliness results using the 141 outbreak-associated streams on sparse (Median = 0) and non-sparse data categories.

Table 1. Detection delay (days) by data median and method:

                        Holt-    Regression/  Adaptive   Adaptive  Trend-adj.   7-day
Data median  Measure    Winters  EWMA         Shewhart   CUSUM     7-day EWMA   temporal scan
Median = 0   Median     3        2            4          2         4.5          2
             Pen. mean  7.2      7            6.6        6.2       7.3          7.6
Median > 0   Median     2        2            2.5        2         6            4
             Pen. mean  6.1      7            7.2        7.1       7.7          6.6

The methods with the shortest detection delays for sparse and non-sparse data streams (indicated by gray shading in the original table) were as follows. The Holt-Winters method was again superior for non-sparse data. For data with median = 0, the adaptive CUSUM was superior for a daily false alarm probability of 0.01, but the Shewhart method was timelier for more liberal thresholds.

Conclusions: Both kinds of detection performance analysis showed the method based on Holt-Winters exponential smoothing superior on non-sparse time series with day-of-week effects. The adaptive CUSUM and Shewhart methods proved optimal on sparse data and data without weekly patterns.
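For reference, the one-sided CUSUM family that performed best on sparse (median-zero) series has this basic form (a simplified textbook version; the study's adaptive variant re-estimates its baseline over time):

```python
# One-sided CUSUM: S_t = max(0, S_{t-1} + (x_t - mu - k)); alarm when S_t > h.

def cusum_alerts(series, mu, k=0.5, h=4.0):
    s, alerts = 0.0, []
    for t, x in enumerate(series):
        s = max(0.0, s + (x - mu - k))
        if s > h:
            alerts.append(t)
            s = 0.0   # reset after alarm
    return alerts

counts = [0, 0, 1, 0, 0, 2, 3, 4, 5, 0, 0]
print(cusum_alerts(counts, mu=0.3))   # [7, 8]: flags the run of elevated counts
```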
Robust Transmission of H.264/AVC Streams Using Adaptive Group Slicing and Unequal Error Protection
NASA Astrophysics Data System (ADS)
Thomos, Nikolaos; Argyropoulos, Savvas; Boulgouris, Nikolaos V.; Strintzis, Michael G.
2006-12-01
We present a novel scheme for the transmission of H.264/AVC video streams over lossy packet networks. The proposed scheme exploits the error-resilient features of the H.264/AVC codec and employs Reed-Solomon codes to effectively protect the streams. A novel technique for the adaptive classification of macroblocks into three slice groups is also proposed. The optimal classification of macroblocks and the optimal channel rate allocation are achieved by iterating two interdependent steps. Dynamic programming techniques are used for the channel rate allocation process in order to reduce complexity. Simulations clearly demonstrate the superiority of the proposed method over other recent algorithms for the transmission of H.264/AVC streams.
LHCb trigger streams optimization
NASA Astrophysics Data System (ADS)
Derkach, D.; Kazeev, N.; Neychev, R.; Panin, A.; Trofimov, I.; Ustyuzhanin, A.; Vesterinen, M.
2017-10-01
The LHCb experiment stores around 10¹¹ collision events per year. A typical physics analysis deals with a final sample of up to 10⁷ events. Event preselection algorithms (lines) are used for data reduction. Since the data are stored in a format that requires sequential access, the lines are grouped into several output file streams in order to increase the efficiency of user analysis jobs that read these data. The scheme's efficiency heavily depends on the stream composition. By putting similar lines together and balancing the stream sizes it is possible to reduce the overhead. We present a method for finding an optimal stream composition. The method is applied to a part of the LHCb data (the Turbo stream) at the stage where it is prepared for user physics analysis. This results in an expected improvement of 15% in the speed of user analysis jobs, and will be applied to data to be recorded in 2017.
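A toy model of the composition problem (ours, much simplified relative to the paper's method): if every analysis job must read the whole stream containing its line, the cost of a composition trades storage duplication against read volume, and greedy agglomerative merging groups overlapping lines when doing so lowers the cost.

```python
from itertools import combinations

def cost(streams, w_read=0.5):
    # Each event is stored once per stream that selects it (storage term),
    # and each line's users read the whole stream it lives in (read term).
    total = 0.0
    for s in streams:
        events = len(set().union(*s))
        total += events
        total += w_read * events * len(s)
    return total

def optimize(lines):
    streams = [[set(l)] for l in lines]   # one line per stream initially
    while True:
        best = min(
            ((cost(streams[:i] + streams[i+1:j] + streams[j+1:]
                   + [streams[i] + streams[j]]) - cost(streams), i, j)
             for i, j in combinations(range(len(streams)), 2)),
            default=(0.0, None, None))
        delta, i, j = best
        if i is None or delta >= 0:
            return streams
        streams = streams[:i] + streams[i+1:j] + streams[j+1:] + [streams[i] + streams[j]]

lines = [{1, 2, 3}, {2, 3, 4}, {7, 8}, {8, 9}]
print(optimize(lines))   # the two overlapping lines end up sharing a stream
```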
Chromium: A Stream-Processing Framework for Interactive Rendering on Clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphreys, G,; Houston, M.; Ng, Y.-R.
2002-01-11
We describe Chromium, a system for manipulating streams of graphics API commands on clusters of workstations. Chromium's stream filters can be arranged to create sort-first and sort-last parallel graphics architectures that, in many cases, support the same applications while using only commodity graphics accelerators. In addition, these stream filters can be extended programmatically, allowing the user to customize the stream transformations performed by nodes in a cluster. Because our stream processing mechanism is completely general, any cluster-parallel rendering algorithm can be either implemented on top of or embedded in Chromium. In this paper, we give examples of real-world applications that use Chromium to achieve good scalability on clusters of workstations, and describe other potential uses of this stream processing technology. By completely abstracting the underlying graphics architecture, network topology, and API command processing semantics, we allow a variety of applications to run in different environments.
John Day River Subbasin Fish Habitat Enhancement Project, 2002 Annual Report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, Russ M.; Jerome, James P.; Delano, Kenneth H.
2003-03-01
Work undertaken in 2002 included: (1) Seven new fence projects were completed, thereby protecting 6.0 miles of stream. (2) Completion of 0.7 miles of dredge tail leveling on Granite Creek. (3) New fence construction (300 ft) plus one watergap on Indian Creek/Kuhl property. (4) Maintenance of all active project fences (58.76 miles), watergaps (56), spring developments (32) and plantings were checked and repairs performed. (5) Restoration and Enhancement projects protected 3 miles of stream within the basin. (6) Since the initiation of the Fish Habitat Project in 1984 we have 67.21 miles of stream protected using 124.2 miles of fence. With the addition of the Restoration and Enhancement Projects we have 199.06 miles of fence protecting 124.57 miles of stream.
StreamMap: Smooth Dynamic Visualization of High-Density Streaming Points.
Li, Chenhui; Baciu, George; Han, Yu
2018-03-01
Interactive visualization of streaming points for real-time scatterplots and linear blending of correlation patterns is increasingly becoming the dominant mode of visual analytics for both big data and streaming data from active sensors and broadcasting media. To better visualize and interact with inter-stream patterns, it is generally necessary to smooth out gaps or distortions in the streaming data. Previous approaches either animate the points directly or present a sampled static heat-map. We propose a new approach, called StreamMap, to smoothly blend high-density streaming points and create a visual flow that emphasizes the density pattern distributions. In essence, we present three new contributions for the visualization of high-density streaming points. The first contribution is a density-based method called super kernel density estimation that aggregates streaming points using an adaptive kernel to solve the overlapping problem. The second contribution is a robust density morphing algorithm that generates several smooth intermediate frames for a given pair of frames. The third contribution is a trend representation design that can help convey the flow directions of the streaming points. The experimental results on three datasets demonstrate the effectiveness of StreamMap when dynamic visualization and visual analysis of trend patterns on streaming points are required.
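The aggregation step can be pictured with a plain streaming KDE accumulator (an illustration only; the paper's super kernel density estimation adapts the kernel, and the morphing and trend layers are not shown): each arriving point deposits a small Gaussian bump on a fixed grid in O(1).

```python
import numpy as np

def make_kernel(radius=5, sigma=2.0):
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def deposit(grid, x, y, kernel):
    # Add one point's kernel to the density grid (border points skipped for brevity).
    r = kernel.shape[0] // 2
    h, w = grid.shape
    if r <= x < w - r and r <= y < h - r:
        grid[y - r:y + r + 1, x - r:x + r + 1] += kernel

grid = np.zeros((256, 256))
kernel = make_kernel()
rng = np.random.default_rng(7)
for x, y in rng.integers(20, 236, size=(10_000, 2)):
    deposit(grid, x, y, kernel)   # streaming updates, constant work per point
```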
COMPUTER PROGRAM DOCUMENTATION FOR THE ENHANCED STREAM WATER QUALITY MODEL QUAL2E
Presented in the manual are recent modifications and improvements to the widely used stream water quality model QUAL-II. Called QUAL2E, the enhanced model incorporates improvements in eight areas: (1) algal, nitrogen, phosphorus, and dissolved oxygen interactions; (2) algal growt...
Novel medical image enhancement algorithms
NASA Astrophysics Data System (ADS)
Agaian, Sos; McClendon, Stephen A.
2010-01-01
In this paper, we present two novel medical image enhancement algorithms. The first, a global image enhancement algorithm, utilizes an alpha-trimmed mean filter as its backbone to sharpen images. The second algorithm uses a cascaded unsharp masking technique to separate the high-frequency components of an image so that they can be enhanced using a modified adaptive contrast enhancement algorithm. Experimental results from enhancing electron microscopy, radiological, CT scan and MRI scan images, using the MATLAB environment, are then compared to the original images as well as to other enhancement methods, such as histogram equalization and two forms of adaptive contrast enhancement. An image processing scheme for electron microscopy images of Purkinje cells is also implemented and utilized as a comparison tool to evaluate the performance of our algorithm.
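The backbone named for the first algorithm is the alpha-trimmed mean filter; a generic version is sketched below (the sharpening pipeline built around it in the paper is not reproduced). The final commented line shows how an unsharp-mask style step could use the filtered result.

```python
import numpy as np

def alpha_trimmed_mean(img, size=3, d=1):
    # In each size x size window, discard the d lowest and d highest samples
    # before averaging: rejects impulsive outliers while smoothing noise.
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = np.empty_like(img, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            window = np.sort(padded[y:y + size, x:x + size].ravel())
            out[y, x] = window[d:window.size - d].mean()
    return out

noisy = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)
smooth = alpha_trimmed_mean(noisy, size=3, d=2)
# sharp = noisy + 1.5 * (noisy - smooth)   # unsharp-mask style sharpening step
```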
Real Time Coincidence Processing Algorithm for Geiger Mode LADAR using FPGAs
2017-01-09
Real Time Coincidence Processing Algorithm for Geiger-Mode Ladar using FPGAs. Rufo A. Antonio, Alexandru N... the first ever Geiger-mode ladar processing algorithm that is suitable for implementation on an FPGA, enabling real-time processing and data... developed embedded FPGA real-time processing algorithms that take noisy raw data, streaming at upwards of 1 GB/sec, and filter the data to obtain a nearly...
Method for enhanced atomization of liquids
Thompson, Richard E.; White, Jerome R.
1993-01-01
In a process for atomizing a slurry or liquid process stream in which a slurry or liquid is passed through a nozzle to provide a primary atomized process stream, an improvement which comprises subjecting the liquid or slurry process stream to microwave energy as the liquid or slurry process stream exits the nozzle, wherein sufficient microwave heating is provided to flash vaporize the primary atomized process stream.
Comparative Analysis of Rank Aggregation Techniques for Metasearch Using Genetic Algorithm
ERIC Educational Resources Information Center
Kaur, Parneet; Singh, Manpreet; Singh Josan, Gurpreet
2017-01-01
Rank Aggregation techniques have found wide applications for metasearch along with other streams such as Sports, Voting System, Stock Markets, and Reduction in Spam. This paper presents the optimization of rank lists for web queries put by the user on different MetaSearch engines. A metaheuristic approach such as Genetic algorithm based rank…
NASA Technical Reports Server (NTRS)
Platnick, Steven; King, Michael D.; Wind, Galina; Amarasinghe, Nandana; Marchant, Benjamin; Arnold, G. Thomas
2012-01-01
Operational Moderate Resolution Imaging Spectroradiometer (MODIS) retrievals of cloud optical and microphysical properties (part of the archived products MOD06 and MYD06, for MODIS Terra and Aqua, respectively) are currently being reprocessed along with other MODIS Atmosphere Team products. The latest "Collection 6" processing stream, which is expected to begin production by summer 2012, includes updates to the previous cloud retrieval algorithm along with new capabilities. The 1 km retrievals, based on well-known solar reflectance techniques, include cloud optical thickness, effective particle radius, and water path, as well as thermodynamic phase derived from a combination of solar and infrared tests. Being both global and of high spatial resolution requires an algorithm that is computationally efficient and can perform over all surface types. Collection 6 additions and enhancements include: (i) absolute effective particle radius retrievals derived separately from the 1.6 and 3.7 μm bands (instead of differences relative to the standard 2.1 μm retrieval); (ii) comprehensive look-up tables for cloud reflectance and emissivity (no asymptotic theory) with a wind-speed-interpolated Cox-Munk BRDF for ocean surfaces; (iii) retrievals for both liquid water and ice phases for each pixel, and a subsequent determination of the phase based, in part, on effective radius retrieval outcomes for the two phases; (iv) new ice cloud radiative models using roughened particles with a specified habit; (v) updated spatially complete global spectral surface albedo maps derived from MODIS Collection 5; (vi) enhanced pixel-level uncertainty calculations incorporating additional radiative error sources, including the MODIS L1B uncertainty index for assessing band- and scene-dependent radiometric uncertainties; and (vii) use of a new 1 km cloud top pressure/temperature algorithm (also part of MOD06) for atmospheric corrections and low-cloud non-unity emissivity temperature adjustments.
NASA Astrophysics Data System (ADS)
Stough, T.; Green, D. S.
2017-12-01
This collaborative research-to-operations demonstration brings together the data and algorithms from NASA research, technology, and applications-funded projects to deliver relevant data streams, algorithms, predictive models, and visualization tools to the NOAA National Tsunami Warning Center (NTWC) and Pacific Tsunami Warning Center (PTWC). Using real-time GNSS data and models in an operational environment, we will test and evaluate an augmented capability for tsunami early warning. Each of three research groups collects data from a selected network of real-time GNSS stations, exchanges data consisting of independently processed 1 Hz station displacements, and merges the output into a single, more accurate and reliable set. The resulting merged data stream is delivered from three redundant locations to the TWCs with a latency of 5-10 seconds. Data from a number of seismogeodetic stations with collocated GPS and accelerometer instruments are processed for displacements and seismic velocities and also delivered. Algorithms for locating and determining the magnitude of earthquakes, as well as algorithms that compute the source function of a potential tsunami using this new data stream, are included in the demonstration. The delivered data, algorithms, models and tools are hosted on NOAA-operated machines at both warning centers, and, once tested, the results will be evaluated for utility in improving the speed and accuracy of tsunami warnings. This collaboration has the potential to dramatically improve the speed and accuracy of the TWCs' local tsunami information over the current seismometer-only methods. In the first year of this work, we have established and deployed an architecture for data movement and algorithm installation at the TWCs. We are addressing data quality issues and porting algorithms into the TWCs' operating environment. Our initial module deliveries will focus on estimating moment magnitude (Mw) from peak ground displacement (PGD) within 2-3 minutes of the event, and on coseismic displacements converging to static offsets. We will also develop visualizations of module outputs tailored to the operational environment. In the context of this work, we will also discuss the research-to-operations approach and other opportunities within the NASA Applied Science Disaster Program.
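A sketch of the PGD-to-magnitude module mentioned above: published scaling laws take the form log10(PGD) = A + B·Mw + C·Mw·log10(R), with R the hypocentral distance; the coefficients below are illustrative placeholders, not the operational values.

```python
import math

A, B, C = -5.0, 1.2, -0.17   # placeholder scaling-law coefficients

def mw_from_pgd(pgd_cm, r_km):
    # Invert the scaling law for a single station.
    return (math.log10(pgd_cm) - A) / (B + C * math.log10(r_km))

def mw_estimate(stations):
    # Average single-station estimates; an operational module would update
    # this every few seconds as displacements grow toward static offsets.
    return sum(mw_from_pgd(p, r) for p, r in stations) / len(stations)

print(mw_estimate([(35.0, 120.0), (12.0, 260.0)]))  # ~Mw 7.7 with these toy numbers
```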
NASA Astrophysics Data System (ADS)
Rodriguez, Juan; Krista, Larisza
2017-04-01
Enhancements of relativistic electrons in Earth's radiation belts statistically exhibit a 27-day periodicity that is attributable to the interaction of corotating interaction regions (CIRs) with the Earth's magnetosphere. These CIRs are the interfaces between tenuous, high-speed solar wind streams (HSS) emitted by coronal holes (CH) and the denser, slower solar wind emitted from the quiet Sun (QS). At these stream interfaces (SI), the plasma is compressed, resulting in increased number density and magnetic field. Subsequent relativistic electron enhancements have been attributed to southward interplanetary magnetic field (IMF Bz). This includes southward Bz intensified within the CIR as well as southward Bz associated with Alfvenic turbulence in the following HSS. Although this chain of events is broadly accepted, few studies have examined in depth the evolution of a single persistent CH, its solar wind signatures at L1, and the associated recurrent relativistic electron enhancements in the radiation belts. During the second half of 2003, a persistent CH was observed in the northern hemisphere of the Sun. The resulting CIR caused recurrent enhancements in the relativistic electron fluxes observed by the GOES satellites. During these enhancements, the >2 MeV electron fluxes increased from dropout (instrument background) levels to hazardous levels more than an order of magnitude greater than the NOAA SWPC alert level. Moreover, for the first time in Solar Cycle 23 (SC23), the >4 MeV electron fluxes exceeded 100 electrons/(cm² s sr). This happened in five recurrent extended relativistic electron enhancement events during this period. For context, only five such events with >4 MeV electron fluxes exceeding 100 electrons/(cm² s sr) occurred during the rest of SC23, and not in a recurrent fashion. Using this as a geoeffectiveness criterion, neither other CHs during this period, nor the coronal mass ejections (CMEs) in late October and November, were as geoeffective as this persistent CH. This paper addresses the question: how do the properties of this particularly geoeffective CH and its solar wind manifestations at 1 AU vary from rotation to rotation, and how is it distinguished from less geoeffective CHs (and ICMEs) during the same period? The Coronal Hole Automated Recognition and Monitoring (CHARM; Krista and Gallagher, 2009) algorithm is used to identify CHs and to quantify their physical properties (e.g., boundary, area, magnetic field strength and polarity). The Minor Storm (MiSt) algorithm is used to link the CHs to their in situ signatures (e.g., IMF, velocity, number density, temperature) observed by the Advanced Composition Explorer (ACE) satellite. The properties of the CHs and the associated geoeffective solar wind properties are evaluated and compared, as well as the Dst geomagnetic index. With these results, we determine whether any of the characteristics of the CHs and their in situ solar wind signatures distinguish them in their relative geoeffectiveness.
Station Keeping of Small Outboard-Powered Boats
NASA Technical Reports Server (NTRS)
Fisher, A. D.; VanZwieten, J. H., Jr.; VanZwieten, T. S.
2010-01-01
Three station-keeping controllers have been developed that work to minimize the displacement of a small outboard-powered vessel from a desired location. Each of the three controllers has a common initial layer that uses fixed-gain feedback control to calculate the desired heading of the vessel. A second control layer uses a common fixed-gain feedback controller to calculate the net forward thrust, one of two algorithms for controlling engine angle (fixed-gain proportional-integral-derivative (PID) or PID with adaptively augmented gains), and one of two algorithms for differential throttle control (fixed-gain PID or PID with adaptive differential throttle gains), which work together to eliminate heading error. The three selected controllers are evaluated using a numerical simulation of a 33-foot center console vessel with twin outboards that is subject to wave, wind, and current disturbances. Each controller is tested for its ability to maintain position in the presence of three sets of environmental disturbances. These algorithms were tested with a current velocity of 1.5 m/s, a significant wave height of 0.5 m, and wind speeds of 2, 5, and 10 m/s. These values were chosen to model conditions a small vessel may experience in the Gulf Stream off Fort Lauderdale. The fixed-gain PID controller got progressively worse as wind speeds increased, while the controllers using adaptive methodologies showed consistent performance over all weather conditions and decreased heading error by as much as 20%. Thus, enhanced robustness to environmental changes has been gained by using an adaptive algorithm.
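The fixed-gain layer can be sketched as a standard PID loop mapping heading error to an engine-angle command (a minimal sketch; vessel dynamics, thrust allocation, and the adaptive gain-augmentation updates are not modeled):

```python
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def update(self, error):
        # Proportional + integral + derivative terms on the heading error.
        self.integral += error * self.dt
        deriv = (error - self.prev_err) / self.dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

heading_pid = PID(kp=2.0, ki=0.1, kd=0.5, dt=0.1)
engine_angle = heading_pid.update(error=15.0)   # degrees of heading error
print(engine_angle)
```

An adaptive variant of the kind evaluated in the paper would additionally adjust kp, ki, and kd online in response to persistent error, which is what preserved performance as wind speeds increased.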
NASA Astrophysics Data System (ADS)
Budiman, M. A.; Rachmawati, D.; Parlindungan, M. R.
2018-03-01
MDTM is a classical symmetric cryptographic algorithm. As with other classical algorithms, the MDTM cipher is easy to implement, but it is less secure than modern symmetric algorithms. In order to make it more secure, the stream cipher RC4A is added, and the cryptosystem thus becomes a super encryption. In this process, plaintexts derived from PDFs are first encrypted with the MDTM cipher algorithm and then encrypted once more with the RC4A algorithm. The test results show that the complexity is Θ(n²) and that the running time is linearly proportional to the length of the plaintext and the keys entered.
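For reference, the RC4A keystream generator (Paul and Preneel's two-state RC4 variant) can be sketched as follows; key scheduling here is the standard RC4 KSA applied to each state array, and feeding two independent keys directly is an assumption of this sketch.

```python
def ksa(key):
    # Standard RC4 key-scheduling algorithm.
    s, j = list(range(256)), 0
    for i in range(256):
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    return s

def rc4a_keystream(k1, k2, n):
    # Two RC4 states that cross-reference each other, emitting bytes alternately.
    s1, s2 = ksa(k1), ksa(k2)
    i = j1 = j2 = 0
    out = []
    while len(out) < n:
        i = (i + 1) % 256
        j1 = (j1 + s1[i]) % 256
        s1[i], s1[j1] = s1[j1], s1[i]
        out.append(s2[(s1[i] + s1[j1]) % 256])   # output comes from the other array
        j2 = (j2 + s2[i]) % 256
        s2[i], s2[j2] = s2[j2], s2[i]
        out.append(s1[(s2[i] + s2[j2]) % 256])
    return out[:n]

# XOR the keystream over the MDTM-encrypted intermediate text (super encryption).
cipher = bytes(p ^ k for p, k in zip(b"MDTM output...", rc4a_keystream(b"key1", b"key2", 14)))
```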
The dynamics of climate-induced deglacial ice stream acceleration
NASA Astrophysics Data System (ADS)
Robel, A.; Tziperman, E.
2015-12-01
Geological observations indicate that ice streams were a significant contributor to ice flow in the Laurentide Ice Sheet during the Last Glacial Maximum. Conceptual and simple model studies have also argued that the gradual development of ice streams increases the sensitivity of large ice sheets to weak climate forcing. In this study, we use an idealized configuration of the Parallel Ice Sheet Model to explore the role of ice streams in rapid deglaciation. In a growing ice sheet, ice streams develop gradually as the bed warms and the margin expands outward onto the continental shelf. Then, a weak change in equilibrium line altitude commensurate with Milankovitch forcing results in a rapid deglacial response, as ice stream acceleration leads to enhanced calving and surface melting at low elevations. We explain the dynamical mechanism that drives this ice stream acceleration and its broader applicability as a feedback for enhancing ice sheet decay in response to climate forcing. We show how our idealized ice sheet simulations match geomorphological observations of deglacial ice stream variability and previous model-data analyses. We conclude with observations on the potential for interaction between ice streams and other feedback mechanisms within the earth system.
Cloud-Hosted Real-time Data Services for the Geosciences (CHORDS)
NASA Astrophysics Data System (ADS)
Daniels, M. D.; Graves, S. J.; Vernon, F.; Kerkez, B.; Chandra, C. V.; Keiser, K.; Martin, C.
2014-12-01
Access, utilization and management of real-time data continue to be challenging for decision makers, as well as for researchers in several scientific fields. This presentation will highlight infrastructure aimed at addressing some of the gaps in handling real-time data, particularly in increasing the accessibility of these data to the scientific community through cloud services. The Cloud-Hosted Real-time Data Services for the Geosciences (CHORDS) system addresses the ever-increasing importance of real-time scientific data, particularly in mission-critical scenarios where informed decisions must be made rapidly. Advances in the distribution of real-time data are allowing many new transient phenomena in space-time to be observed; however, real-time decision-making is infeasible in many cases that require streaming scientific data, as these data are locked down and sent only to proprietary in-house tools or displays. This lack of accessibility to the broader scientific community prohibits algorithm development and workflows initiated by these data streams. As part of NSF's EarthCube initiative, CHORDS proposes to make real-time data available to the academic community via cloud services. The CHORDS infrastructure will enhance the role of real-time data within the geosciences, specifically expanding the potential of streaming data sources in enabling adaptive experimentation and real-time hypothesis testing. Adherence to community data and metadata standards will promote the integration of CHORDS real-time data with existing standards-compliant analysis, visualization and modeling tools.
Image contrast enhancement using adjacent-blocks-based modification for local histogram equalization
NASA Astrophysics Data System (ADS)
Wang, Yang; Pan, Zhibin
2017-11-01
Infrared images usually have some non-ideal characteristics, such as weak target-to-background contrast and strong noise. Because of these characteristics, it is necessary to apply contrast enhancement algorithms to improve the visual quality of infrared images. The histogram equalization (HE) algorithm is a widely used contrast enhancement algorithm due to its effectiveness and simple implementation, but a drawback of HE is that the local contrast of an image cannot be equally enhanced. Local histogram equalization algorithms have proved to be effective techniques for local image contrast enhancement. However, over-enhancement of noise and artifacts can easily be found in images enhanced by local histogram equalization. In this paper, a new contrast enhancement technique based on local histogram equalization is proposed to overcome the drawbacks mentioned above. The input images are segmented into three kinds of overlapped sub-blocks based on their gradients. To overcome the over-enhancement effect, the histograms of these sub-blocks are then modified using adjacent sub-blocks. More attention is paid to improving the contrast of detail information, while the brightness of flat regions in these sub-blocks is well preserved. It will be shown that the proposed algorithm outperforms other related algorithms by enhancing local contrast without introducing over-enhancement effects or additional noise.
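For reference, a minimal block-wise histogram-equalization sketch of the kind local HE methods build on; the paper's adjacent-block histogram-modification step is not reproduced here, only the baseline per-block equalization, and an 8-bit grayscale input is assumed.

```python
import numpy as np

def blockwise_hist_eq(img: np.ndarray, block: int = 64) -> np.ndarray:
    """Baseline local HE: equalize each non-overlapping block independently.
    Assumes an 8-bit grayscale image. (The proposed method additionally
    modifies each block's histogram using its neighbors to suppress
    over-enhancement; that step is omitted here.)"""
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = img[y:y + block, x:x + block]
            hist = np.bincount(tile.ravel(), minlength=256)
            cdf = np.cumsum(hist).astype(np.float64)
            cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255.0
            out[y:y + block, x:x + block] = cdf[tile].astype(img.dtype)
    return out
```

Equalizing blocks independently is exactly what produces the blocking and noise over-enhancement the paper targets, which motivates its adjacent-block modification.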
NASA Astrophysics Data System (ADS)
Neriani, Kelly E.; Herbranson, Travis J.; Reis, George A.; Pinkus, Alan R.; Goodyear, Charles D.
2006-05-01
While vast numbers of image-enhancing algorithms have already been developed, the majority have not been assessed in terms of their visual performance-enhancing effects using militarily relevant scenarios. The goal of this research was to apply a visual performance-based assessment methodology to evaluate six algorithms that were specifically designed to enhance the contrast of digital images. The image-enhancing algorithms used in this study included three different histogram equalization algorithms, the Autolevels function, the Recursive Rational Filter technique described in Marsi, Ramponi, and Carrato [1], and the multiscale Retinex algorithm described in Rahman, Jobson, and Woodell [2]. The methodology used in the assessment was developed to acquire objective human visual performance data as a means of evaluating the contrast enhancement algorithms. Objective performance metrics, response time and error rate, were used to compare algorithm-enhanced images against two baseline conditions: original non-enhanced images and contrast-degraded images. Observers completed a visual search task using a spatial forced-choice paradigm: they searched images for a target (a military vehicle) hidden among foliage and then indicated in which quadrant of the screen the target was located. Response time and percent correct were measured for each observer. Results of the study and future directions are discussed.
Methods of producing alkylated hydrocarbons from an in situ heat treatment process liquid
Roes, Augustinus Wilhelmus Maria [Houston, TX; Mo, Weijian [Sugar Land, TX; Muylle, Michel Serge Marie [Houston, TX; Mandema, Remco Hugo [Houston, TX; Nair, Vijay [Katy, TX
2009-09-01
A method for producing alkylated hydrocarbons is disclosed. Formation fluid is produced from a subsurface in situ heat treatment process. The formation fluid is separated to produce a liquid stream and a first gas stream. The first gas stream includes olefins. The liquid stream is fractionated to produce at least a second gas stream including hydrocarbons having a carbon number of at least 3. The first gas stream and the second gas stream are introduced into an alkylation unit to produce alkylated hydrocarbons. At least a portion of the olefins in the first gas stream enhance alkylation.
West Antarctic Balance Fluxes: Impact of Smoothing, Algorithm and Topography.
NASA Astrophysics Data System (ADS)
Le Brocq, A.; Payne, A. J.; Siegert, M. J.; Bamber, J. L.
2004-12-01
Grid-based calculations of balance flux and velocity have been widely used to understand the large-scale dynamics of ice masses and as indicators of their state of balance. This research investigates a number of issues relating to their calculation for the West Antarctic Ice Sheet (see below for further details): 1) different topography smoothing techniques; 2) different grid-based flow-apportioning algorithms; 3) the source of the flow direction, whether from smoothed topography or smoothed gravitational driving stress; 4) different flux routing techniques and 5) the impact of different topographic datasets. The different algorithms described below lead to significant differences in both ice stream margins and values of fluxes within them. This encourages caution in the use of grid-based balance flux/velocity distributions and values, especially when considering the state of balance of individual ice streams. 1) Most previous calculations have used the same numerical scheme (Budd and Warner, 1996) applied to a smoothed topography in order to incorporate the longitudinal stresses that smooth ice flow. There are two options to consider when smoothing the topography: the size of the averaging filter and the shape of the averaging function. However, this is not a physically-based approach to incorporating smoothed ice flow, and it also introduces significant flow artefacts when using a variable weighting function. 2) Different algorithms to apportion flow are investigated: using 4 or 8 neighbours, and apportioning flow to all down-slope cells or only two (based on the derived flow direction). 3) A theoretically more acceptable approach of incorporating smoothed ice flow is to use the smoothed gravitational driving stress in x and y components to derive a flow direction. The flux can then be apportioned using the flow direction approach used above. 4) The original scheme (Budd and Warner, 1996) uses an elevation sort technique to calculate the balance flux contribution from all cells to each individual cell. However, elevation sort is only successful when ice cannot flow uphill. Other possible techniques include using a recursive call for each neighbour or using a sparse matrix solution. 5) Two digital elevation models are used as input data, which have significant differences in coastal and mountainous areas and therefore lead to different calculations. Of particular interest is the difference in the Rutford Ice Stream/Carlson Inlet and Kamb Ice Stream (Ice Stream C) fluxes.
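A minimal sketch of the elevation-sort balance-flux idea (Budd and Warner, 1996) referenced in point 4 above, assuming a gridded surface `elev` and accumulation field `acc`; flow is apportioned equally among lower 4-neighbours for brevity, whereas the study compares several apportioning rules.

```python
import numpy as np

def balance_flux(elev: np.ndarray, acc: np.ndarray) -> np.ndarray:
    """Elevation-sort balance flux: visit cells from highest to lowest,
    passing each cell's accumulated flux to its lower 4-neighbours.
    Valid only if ice never flows uphill (the scheme's known limitation)."""
    flux = acc.astype(float).copy()
    order = np.argsort(elev, axis=None)[::-1]          # highest cell first
    h, w = elev.shape
    for idx in order:
        y, x = divmod(idx, w)
        nbrs = [(y + dy, x + dx)
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= y + dy < h and 0 <= x + dx < w
                and elev[y + dy, x + dx] < elev[y, x]]
        for ny, nx in nbrs:                            # equal apportioning
            flux[ny, nx] += flux[y, x] / len(nbrs)
    return flux
```

Swapping the neighbour set (4 vs. 8) or the apportioning weights corresponds to the algorithm variants compared in point 2.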
Influence of wood on invertebrate communities in streams and rivers
Arthur Benke; J. Bruce Wallace
2010-01-01
Wood plays a major role in creating multiple invertebrate habitats in small streams and large rivers. In small streams, wood debris dams are instrumental in creating a step and pool profile of habitats, enhancing habitat heterogeneity, retaining organic matter, and changing current velocity. Beavers can convert sections of free-flowing streams into ponds and wetlands...
Foo, Brian; van der Schaar, Mihaela
2010-11-01
In this paper, we discuss distributed optimization techniques for configuring classifiers in a real-time, informationally distributed stream mining system. Due to the large volume of streaming data, stream mining systems must often cope with overload, which can lead to poor performance and intolerable processing delay for real-time applications. Furthermore, optimizing over an entire system of classifiers is a difficult task, since changing the filtering process at one classifier can impact the feature values of data arriving at classifiers further downstream and, thus, both the classification performance achieved by the ensemble of classifiers and the end-to-end processing delay. To address this problem, this paper makes three main contributions: 1) Based on classification and queuing-theoretic models, we propose a utility metric that captures both the performance and the delay of a binary filtering classifier system. 2) We introduce a low-complexity framework for estimating the system utility by observing, estimating, and/or exchanging parameters between the interrelated classifiers deployed across the system. 3) We provide distributed algorithms to reconfigure the system, and analyze the algorithms based on their convergence properties, optimality, information exchange overhead, and rate of adaptation to non-stationary data sources. We provide results using different video classifier systems.
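As an illustration of contribution 1, a toy utility for a single binary filtering classifier that trades detection quality against queuing delay; the functional form here (detection benefit minus an M/M/1 delay penalty) is an assumption for illustration, not the paper's exact metric.

```python
def classifier_utility(p_detect: float, p_forward: float,
                       arrival_rate: float, service_rate: float,
                       delay_weight: float = 0.1) -> float:
    """Toy utility: classification benefit minus a queuing-delay penalty.
    Forwarding fewer items (lower p_forward) lowers downstream load but
    may drop true positives; an M/M/1 term approximates congestion delay."""
    load = arrival_rate * p_forward
    if load >= service_rate:
        return float("-inf")                 # unstable queue: overloaded
    delay = 1.0 / (service_rate - load)      # M/M/1 mean sojourn time
    return p_detect * p_forward - delay_weight * delay
```

A chain of such terms, one per classifier, is the kind of coupled objective the paper's distributed reconfiguration algorithms optimize.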
Interpolation algorithm for asynchronous ADC-data
NASA Astrophysics Data System (ADS)
Bramburger, Stefan; Zinke, Benny; Killat, Dirk
2017-09-01
This paper presents a modified interpolation algorithm for signals with variable data rate from asynchronous ADCs. The Adaptive weights Conjugate gradient Toeplitz matrix (ACT) algorithm is extended to operate on a continuous data stream. Additional preprocessing of data segments with constant and linear sections, together with a weighted overlap of the signals transformed step by step into the spectral domain, improves the reconstruction of the asynchronous ADC signal. The interpolation method can be used when asynchronous ADC data are fed into synchronous digital signal processing.
Fast Fourier Transform algorithm design and tradeoffs
NASA Technical Reports Server (NTRS)
Kamin, Ray A., III; Adams, George B., III
1988-01-01
The Fast Fourier Transform (FFT) is a mainstay of certain numerical techniques for solving fluid dynamics problems. The Connection Machine CM-2 is the target for an investigation into the design of multidimensional Single Instruction Stream/Multiple Data (SIMD) parallel FFT algorithms for high performance. Critical algorithm design issues are discussed, necessary machine performance measurements are identified and made, and the performance of the developed FFT programs is measured. The resulting Fast Fourier Transform programs are compared to the currently best Cray-2 FFT program.
A New Numerical Scheme for Cosmic-Ray Transport
NASA Astrophysics Data System (ADS)
Jiang, Yan-Fei; Oh, S. Peng
2018-02-01
Numerical solutions of the cosmic-ray (CR) magnetohydrodynamic equations are dogged by a powerful numerical instability, which arises from the constraint that CRs can only stream down their gradient. The standard cure is to regularize by adding artificial diffusion. Besides introducing ad hoc smoothing, this has a significant negative impact on either computational cost or complexity and parallel scalings. We describe a new numerical algorithm for CR transport, with close parallels to two-moment methods for radiative transfer under the reduced speed of light approximation. It stably and robustly handles CR streaming without any artificial diffusion. It allows for both isotropic and field-aligned CR streaming and diffusion, with arbitrary streaming and diffusion coefficients. CR transport is handled explicitly, while source terms are handled implicitly. The overall time step scales linearly with resolution (even when computing CR diffusion) and has a perfect parallel scaling. It is given by the standard Courant condition with respect to a constant maximum velocity over the entire simulation domain. The computational cost is comparable to that of solving the ideal MHD equation. We demonstrate the accuracy and stability of this new scheme with a wide variety of tests, including anisotropic streaming and diffusion tests, CR-modified shocks, CR-driven blast waves, and CR transport in multiphase media. The new algorithm opens doors to much more ambitious and hitherto intractable calculations of CR physics in galaxies and galaxy clusters. It can also be applied to other physical processes with similar mathematical structure, such as saturated, anisotropic heat conduction.
Hydrodynamic enhanced dielectrophoretic particle trapping
Miles, Robin R.
2003-12-09
Hydrodynamically enhanced dielectrophoretic particle trapping is carried out by introducing a side stream into the main stream to squeeze the fluid containing particles close to the electrodes producing the dielectrophoretic forces. The region of the strongest and most effective forces in the manipulating fields is close to the electrodes, within 100 µm of them. The particle trapping arrangement uses a series of electrodes with an AC field placed between pairs of electrodes, which causes trapping of particles along the edges of the electrodes. Forcing an incoming flow stream containing, for example, cells and DNA close to the electrodes by means of another flow stream improves the efficiency of the DNA trapping.
Umatilla River Basin Anadromous Fish Habitat Enhancement Project : 2000 Annual Report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaw, R. Todd
2001-12-31
The Umatilla River Basin Anadromous Fish Habitat Enhancement Project continued to identify impacted stream reaches throughout the Umatilla River Basin for habitat improvements during the 2000 project period. Public outreach efforts, biological and physical monitoring, and continued development of a Umatilla River Basin Watershed Assessment assisted the project in fostering public cooperation, targeting habitat deficiencies and determining habitat recovery measures. Habitat enhancement projects continued to be maintained on 44 private properties, four riparian easements and one in-stream enhancement agreement were secured, two new projects implemented and two existing projects improved to enhance anadromous fish habitat and natural fisheries production capabilities in the Umatilla River Basin. New project locations included sites on the mid Umatilla River and Buckaroo Creek. Improvements were implemented at existing project sites on the upper Umatilla River and Wildhorse Creek. A stream bank stabilization project was implemented at approximately River Mile 37.4 Umatilla River to stabilize 760 feet of eroding stream bank and improve in-stream habitat diversity. Habitat enhancements at this site included construction of six rock barbs with one large conifer root wad incorporated into each barb, stinging approximately 10,000 native willow cuttings, planting 195 tubling willows and 1,800 basin wildrye grass plugs, and seeding 40 pounds of native grass seed. Staff time to assist in development of a subcontract and fence materials were provided to establish eight spring sites for off-stream watering and to protect wetlands within the Buckaroo Creek Watershed. A gravel bar was moved and incorporated into an adjacent point bar to reduce stream energy and stream channel confinement within the existing project area at River Mile 85 Umatilla River. Approximately 10,000 native willow cuttings were stung and trenched into the stream channel margins and stream banks, and 360 basin wildrye grass plugs planted and 190 pounds of native grass seed broadcast on terraces between River Mile 10 and 12.5 within the existing Wildhorse Creek Project Area. Approximately 70 pounds of native grasses were seeded in the existing McKay Creek Project Area at approximately River Mile 21.5. Financial and in-kind cost share assistance was provided by the Confederated Tribes of the Umatilla Indian Reservation, U.S. Bureau of Indian Affairs, U.S. Department of Agriculture, U.S. Fish and Wildlife Service, National Fish and Wildlife Federation and the Umatilla National Forest for the enhancements at River Mile 37.4 Umatilla River and within the Buckaroo Creek Watershed. Monitoring continued to quantify effects of habitat enhancements in the upper basin. Maximum, minimum and average daily stream temperatures were collected from June through September at 22 sites. Suspended sediment samples were obtained at three gage stations to arrive at daily sediment load estimates. Photographs were taken at 94 existing and two newly established photo points to document habitat recovery. Umatilla Basin Watershed Assessment efforts were continued under a subcontract with Washington State University. This endeavor involves compiling existing information, identifying data gaps, determining habitat-limiting factors and recommending actions to improve anadromous fisheries habitat.
This watershed assessment document and working databases will be completed in fiscal year 2002 and made available to assist project personnel with sub-watershed prioritization of habitat needs.
NASA Astrophysics Data System (ADS)
Zhu, Yuxiang; Jiang, Jianmin; Huang, Changxing; Chen, Yongqin David; Zhang, Qiang
2018-04-01
This article, as Part I, introduces three algorithms and applies them to the monthly streamflow and rainfall series of the Xijiang River, southern China. The three algorithms are (1) normalization of the probability distribution, (2) a scanning U test for change points in the correlation between two time series, and (3) a scanning F-test for change points in variances. The normalization algorithm adopts the quantile method to transform data from a non-normal into the normal probability distribution. The scanning U test and F-test share three features: they graft the classical statistics onto the wavelet algorithm, add independence corrections to each statistical criterion at a given confidence level, and provide nearly objective, automatic detection across multiple time scales. In addition, coherency analyses between the two series are carried out for changes in variance. The application results show that changes in the monthly discharge are still controlled by natural precipitation variations in the Xijiang fluvial system. Human activities have disturbed the ecological balance, perhaps to a certain extent and over shorter spells, but have not violated the natural relationships of correlation and variance changes so far.
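A minimal sketch of the quantile (rank-based) normalization in algorithm (1): each sample is mapped through its empirical quantile onto a standard normal. This illustrates the general technique, not the authors' exact implementation.

```python
import numpy as np
from scipy.stats import norm

def quantile_normalize(x: np.ndarray) -> np.ndarray:
    """Map data to N(0, 1) via empirical quantiles: rank each value,
    convert ranks to plotting positions in (0, 1), then apply the
    standard-normal inverse CDF."""
    ranks = np.argsort(np.argsort(x)) + 1        # ranks 1..n
    p = (ranks - 0.5) / len(x)                   # strictly inside (0, 1)
    return norm.ppf(p)
```

After this transform, classical normality-based statistics such as the U test and F-test can be applied to skewed hydrological series.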
NASA Astrophysics Data System (ADS)
Bulan, Orhan; Bernal, Edgar A.; Loce, Robert P.; Wu, Wencheng
2013-03-01
Video cameras are widely deployed along city streets, interstate highways, traffic lights, stop signs and toll booths by entities that perform traffic monitoring and law enforcement. The videos captured by these cameras are typically compressed and stored in large databases. Performing a rapid search for a specific vehicle within a large database of compressed videos is often required and can be a time-critical, life-or-death situation. In this paper, we propose video compression and decompression algorithms that enable fast and efficient vehicle or, more generally, event searches in large video databases. The proposed algorithm selects reference frames (i.e., I-frames) based on a vehicle having been detected at a specified position within the scene being monitored while compressing a video sequence. A search for a specific vehicle in the compressed video stream is performed across the reference frames only, which does not require decompression of the full video sequence as in traditional search algorithms. Our experimental results on videos captured on a local road show that the proposed algorithm significantly reduces the search space (thus reducing time and computational resources) in vehicle search tasks within compressed video streams, particularly those captured in light traffic conditions.
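A schematic of the detection-triggered reference-frame selection described above, assuming a user-supplied `vehicle_at_trigger(frame)` detector (hypothetical) and an encoder that accepts per-frame I/P decisions; actual codec integration is more involved.

```python
def choose_frame_types(frames, vehicle_at_trigger, max_gop: int = 250):
    """Mark a frame as an I-frame when a vehicle is detected at the
    trigger position (so later searches can decode reference frames
    only); otherwise emit P-frames, capping the inter-I-frame distance."""
    types, since_i = [], 0
    for frame in frames:
        if vehicle_at_trigger(frame) or since_i >= max_gop:
            types.append("I")
            since_i = 0
        else:
            types.append("P")
            since_i += 1
    return types
```

Because every vehicle event lands on an I-frame, a search can decode only those frames instead of the full stream.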
Robust and efficient fiducial tracking for augmented reality in HD-laparoscopic video streams
NASA Astrophysics Data System (ADS)
Mueller, M.; Groch, A.; Baumhauer, M.; Maier-Hein, L.; Teber, D.; Rassweiler, J.; Meinzer, H.-P.; Wegner, In.
2012-02-01
Augmented Reality (AR) is a convenient way of porting information from medical images into the surgical field of view and can deliver valuable assistance to the surgeon, especially in laparoscopic procedures. In addition, high definition (HD) laparoscopic video devices are a great improvement over the previously used low resolution equipment. However, in AR applications that rely on real-time detection of fiducials from video streams, the demand for efficient image processing has increased due to the introduction of HD devices. We present an algorithm based on the well-known Conditional Density Propagation (CONDENSATION) algorithm which can satisfy these new demands. By incorporating a prediction around an already existing and robust segmentation algorithm, we can speed up the whole procedure while leaving the robustness of the fiducial segmentation untouched. For evaluation purposes we tested the algorithm on recordings from real interventions, allowing for a meaningful interpretation of the results. Our results show that we can accelerate the segmentation by a factor of 3.5 on average. Moreover, the prediction information can be used to compensate for fiducials that are temporarily occluded or out of scope, providing greater stability.
Bayesian Modeling of the Assimilative Capacity Component of Stream Nutrient Export
Implementing stream restoration techniques and best management practices to reduce nonpoint source nutrients implies enhancement of the assimilative capacity for the stream system. In this paper, a Bayesian method for evaluating this component of a TMDL load capacity is developed...
Multi-channel distributed coordinated function over single radio in wireless sensor networks.
Campbell, Carlene E-A; Loo, Kok-Keong Jonathan; Gemikonakli, Orhan; Khan, Shafiullah; Singh, Dhananjay
2011-01-01
Multi-channel assignments are becoming the solution of choice to improve performance of single-radio wireless networks. Multi-channel operation allows wireless networks to assign different channels to different nodes in real-time transmission. In this paper, we propose a new approach, Multi-channel Distributed Coordinated Function (MC-DCF), which takes advantage of multi-channel assignment. The backoff algorithm of the IEEE 802.11 distributed coordination function (DCF) was modified to invoke channel switching based on threshold criteria, in order to improve the overall throughput for wireless sensor networks (WSNs) over 802.11 networks. We present simulation experiments that investigate the characteristics of multi-channel communication in wireless sensor networks on an NS2 platform. Nodes use only a single radio and perform channel switching only after a specified threshold is reached; a single radio can work on only one channel at any given time. All nodes initiate constant bit rate streams towards the receiving nodes. In this work, we studied the impact of non-overlapping channels in the 2.4 GHz band on constant bit rate (CBR) streams, node density, source nodes sending data directly to the sink, and signal strength, by varying the distances between the sensor nodes and the operating frequencies of the radios with different data rates. We showed that multi-channel enhancement using our proposed algorithm provides significant improvement in terms of throughput, packet delivery ratio and delay. This technique can be considered for future WSN use over 802.11 networks, especially as IEEE 802.11n becomes popular and may prevent 802.15.4 networks from operating effectively in the 2.4 GHz band.
M.D. Bryant; B.E. Wright; B.J. Davies
1992-01-01
A hierarchical classification system separating stream habitat into habitat units defined by stream morphology and hydrology was used in a pre-enhancement stream survey. The system separates habitat units into macrounits, mesounits, and microunits and includes a separate evaluation of instream cover that also uses the hierarchical scheme. This paper presents an...
Turunen, Jarno; Louhi, Pauliina; Mykrä, Heikki; Aroviita, Jukka; Putkonen, Emmi; Huusko, Ari; Muotka, Timo
2018-06-06
The effects of anthropogenic stressors on community structure and ecosystem functioning can be strongly influenced by local habitat structure and dispersal from source communities. Catchment land uses increase the input of fine sediments into stream channels, clogging the interstitial spaces of benthic habitats. Aquatic macrophytes enhance habitat heterogeneity and mediate important ecosystem functions, and are thus a key component of habitat structure in many streams. Therefore, the recovery of macrophytes following in-stream habitat modification may be a prerequisite for successful stream restoration. Restoration success is also affected by dispersal of organisms from the source community, with potentially the strongest responses in relatively isolated headwater sites that receive a limited number of dispersing individuals. We used a factorial design in a set of stream mesocosms to study the independent and combined effects of an anthropogenic stressor (sand sedimentation), local habitat (macrophytes, i.e. moss transplants) and enhanced dispersal (two levels: high vs. low) on organic matter retention, algal accrual rate, leaf decomposition and macroinvertebrate community structure. Overall, all responses were simple additive effects with no interactions between treatments. Sand reduced algal accumulation, total invertebrate density and the density of a few individual taxa. Mosses reduced algal accrual rate and algae-grazing invertebrates, but enhanced organic matter retention and detritus- and filter-feeders. Mosses also reduced macroinvertebrate diversity by increasing the dominance of a few taxa, and reduced leaf-mass loss, possibly because the organic matter retained by mosses provided an additional food source for leaf-shredding invertebrates and thus reduced shredder aggregation in leaf packs. The effect of mosses on macroinvertebrate communities and ecosystem functioning was distinct irrespective of the level of dispersal, suggesting strong environmental control of community structure. The strong environmental control of macroinvertebrate community composition even under enhanced dispersal suggests that re-establishing key habitat features, such as natural stream vegetation, could aid ecosystem recovery in boreal streams.
Compression of multispectral Landsat imagery using the Embedded Zerotree Wavelet (EZW) algorithm
NASA Technical Reports Server (NTRS)
Shapiro, Jerome M.; Martucci, Stephen A.; Czigler, Martin
1994-01-01
The Embedded Zerotree Wavelet (EZW) algorithm has proven to be an extremely efficient and flexible compression algorithm for low-bit-rate image coding. The embedding algorithm orders the bits in the bit stream by numerical importance, so a given code contains all lower-rate encodings of the same algorithm. Therefore, precise bit rate control is achievable, and a target rate or distortion metric can be met exactly. Furthermore, the technique is fully image adaptive. An algorithm for multispectral image compression is presented that combines the spectral redundancy removal properties of the image-dependent Karhunen-Loeve Transform (KLT) with the efficiency, controllability, and adaptivity of the embedded zerotree wavelet algorithm. Results are shown which illustrate the advantage of jointly encoding spectral components using the KLT and EZW.
HWDA: A coherence recognition and resolution algorithm for hybrid web data aggregation
NASA Astrophysics Data System (ADS)
Guo, Shuhang; Wang, Jian; Wang, Tong
2017-09-01
To address the problem of recognizing and resolving object conflicts in hybrid distributed web data aggregation, an object-coherence technology for distributed data streams is proposed. First, a framework for object-coherence conflict recognition and resolution, named HWDA, is defined. Second, an object-coherence recognition technique is proposed, based on formal description logic and hierarchical dependency relationships between logic rules. Third, a conflict-traversal recognition algorithm is proposed based on the defined dependency graph. Next, a conflict resolution technique based on resolution pattern matching is presented, including the definition of three types of conflict, conflict-resolution matching patterns and an arbitration resolution method. Finally, experiments on two kinds of web test data sets validate the effectiveness of HWDA's conflict recognition and resolution.
Methods of making transportation fuel
Roes, Augustinus Wilhelmus Maria [Houston, TX; Mo, Weijian [Sugar Land, TX; Muylle, Michel Serge Marie [Houston, TX; Mandema, Remco Hugo [Houston, TX; Nair, Vijay [Katy, TX
2012-04-10
A method for producing alkylated hydrocarbons is disclosed. Formation fluid is produced from a subsurface in situ heat treatment process. The formation fluid is separated to produce a liquid stream and a first gas stream. The first gas stream includes olefins. The liquid stream is fractionated to produce at least a second gas stream including hydrocarbons having a carbon number of at least 3. The first gas stream and the second gas stream are introduced into an alkylation unit to produce alkylated hydrocarbons. At least a portion of the olefins in the first gas stream enhance alkylation. The alkylated hydrocarbons may be blended with one or more components to produce transportation fuel.
Recursive Fact-finding: A Streaming Approach to Truth Estimation in Crowdsourcing Applications
2013-07-01
Observations are reported over the course of the campaign, lending themselves to the abstraction of a data stream arriving from the community of sources. (Figure 4: Recursive EM Algorithm Convergence.) Related work includes social sensing, also referred to as human-centric sensing, and review systems in which different sources offer reviews on products (or brands, or companies) they have experienced; customers are affected by those reviews.
An Adaptive Fuzzy-Logic Traffic Control System in Conditions of Saturated Transport Stream
Marakhimov, A. R.; Igamberdiev, H. Z.; Umarov, Sh. X.
2016-01-01
This paper considers the problem of building adaptive fuzzy-logic traffic control systems (AFLTCS) to deal with information fuzziness and uncertainty in the case of heavy traffic streams. Methods for the formal description of traffic control at crossroads based on fuzzy sets and fuzzy logic are proposed. The paper also provides efficient algorithms for implementing AFLTCS and develops appropriate simulation models to test the efficiency of the suggested approach. PMID:27517081
Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE). Volume 1: User's guide
NASA Technical Reports Server (NTRS)
Dupnick, E.; Wiggins, D.
1980-01-01
An interactive computer program for automatically generating traffic models for the Space Transportation System (STS) is presented. Information concerning run stream construction, input data, and output data is provided. The flow of the interactive data stream is described. Error messages are specified, along with suggestions for remedial action. In addition, formats and parameter definitions for the payload data set (payload model), feasible combination file, and traffic model are documented.
Streaming fragment assignment for real-time analysis of sequencing experiments
Roberts, Adam; Pachter, Lior
2013-01-01
We present eXpress, a software package for highly efficient probabilistic assignment of ambiguously mapping sequenced fragments. eXpress uses a streaming algorithm with linear run time and constant memory use. It can determine abundances of sequenced molecules in real time, and can be applied to ChIP-seq, metagenomics and other large-scale sequencing data. We demonstrate its use on RNA-seq data, showing greater efficiency than other quantification methods. PMID:23160280
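A minimal sketch of the streaming idea behind such a method: an online EM-style update that fractionally assigns each ambiguously mapped fragment to candidate targets in proportion to current abundance estimates, using constant memory. This is a simplification for illustration, ignoring eXpress's fragment-length and bias models.

```python
def streaming_abundance(fragments, n_targets: int, lr0: float = 1.0):
    """Online EM sketch: 'fragments' yields, per read, the list of
    candidate target ids it maps to. Abundances are updated fragment by
    fragment with a decaying learning rate, so memory stays constant
    in the number of fragments."""
    abund = [1.0 / n_targets] * n_targets
    for t, candidates in enumerate(fragments, start=1):
        total = sum(abund[i] for i in candidates) or 1.0
        lr = lr0 / t                                # decaying step size
        for i in range(n_targets):
            resp = abund[i] / total if i in candidates else 0.0
            abund[i] = (1 - lr) * abund[i] + lr * resp
    return abund
```

Each fragment is processed once and discarded, which is what gives the linear run time and constant memory use described above.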
Linear growth rates of resistive tearing modes with sub-Alfvénic streaming flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, L. N.; College of Sciences, China Jiliang University, Hangzhou 310018; Ma, Z. W., E-mail: zwma@zju.edu.cn
2014-07-15
The tearing instability with sub-Alfvénic streaming flow along the external magnetic field is investigated using resistive MHD simulation. It is found that the growth rate of the tearing mode instability is larger than that without the streaming flow. With the streaming flow, there exist two Alfvén resonance layers near the central current sheet. The larger perturbation of the magnetic field in two closer Alfvén resonance layers could lead to formation of the observed cone structure and can largely enhance the development of the tearing mode for a narrower streaming flow. For a broader streaming flow, a larger separation of the Alfvén resonance layers reduces the magnetic reconnection. The linear growth rate decreases with increasing streaming flow thickness. The growth rate of the tearing instability also depends on the plasma beta (β). When the streaming flow is embedded in the current sheet, the growth rate increases with β if β < β_s, but decreases if β > β_s. The existence of the specific value β_s can be attributed to competition between the suppressing effect of β and the enhancing effect of the streaming flow on the magnetic reconnection. The critical value β_s increases with increasing streaming flow strength.
Goeller, Brandon C; Febria, Catherine M; Harding, Jon S; McIntosh, Angus R
2016-05-01
Around the world, artificially drained agricultural lands are significant sources of reactive nitrogen to stream ecosystems, creating substantial stream health problems. One management strategy is the deployment of denitrification enhancement tools. Here, we evaluate the factors affecting the potential of denitrifying bioreactors to improve stream health and ecosystem services. The performance of bioreactors and the structure and functioning of stream biotic communities are linked by environmental parameters like dissolved oxygen and nitrate-nitrogen concentrations, dissolved organic carbon availability, flow and temperature regimes, and fine sediment accumulations. However, evidence of bioreactors' ability to improve waterway health and ecosystem services is lacking. To improve the potential of bioreactors to enhance desirable stream ecosystem functioning, future assessments of field-scale bioreactors should evaluate the influences of bioreactor performance on ecological indicators such as primary production, organic matter processing, stream metabolism, and invertebrate and fish assemblage structure and function. These stream health impact assessments should be conducted at ecologically relevant spatial and temporal scales. Bioreactors have great potential to make significant contributions to improving water quality, stream health, and ecosystem services if they are tailored to site-specific conditions and implemented strategically with land-based and stream-based mitigation tools within watersheds. This will involve combining economic, logistical, and ecological information in their implementation.
Constrained independent component analysis approach to nonobtrusive pulse rate measurements
NASA Astrophysics Data System (ADS)
Tsouri, Gill R.; Kyal, Survi; Dianat, Sohail; Mestha, Lalit K.
2012-07-01
Nonobtrusive pulse rate measurement using a webcam is considered. We demonstrate how state-of-the-art algorithms based on independent component analysis suffer from a sorting problem which hinders their performance, and propose a novel algorithm based on constrained independent component analysis to improve performance. We present how the proposed algorithm extracts a photoplethysmography signal and resolves the sorting problem. In addition, we perform a comparative study between the proposed algorithm and state-of-the-art algorithms over 45 video streams using a finger probe oxymeter for reference measurements. The proposed algorithm provides improved accuracy: the root mean square error is decreased from 20.6 and 9.5 beats per minute (bpm) for existing algorithms to 3.5 bpm for the proposed algorithm. An error of 3.5 bpm is within the inaccuracy expected from the reference measurements. This implies that the proposed algorithm provided performance of equal accuracy to the finger probe oximeter.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Physical device safety is typically implemented locally using embedded controllers, while operations safety is primarily performed in control centers. Safe operations can be enhanced by correct design of device-level control algorithms, and protocols, procedures and operator training at the control-room level, but all can fail. Moreover, these elements exchange data and issue commands via vulnerable communication layers. In order to secure these gaps and enhance operational safety, we believe monitoring of command sequences must be combined with an awareness of physical device limitations and automata models that capture safety mechanisms. One way of doing this is by leveraging specification-based intrusion detection to monitor for physical constraint violations. The method can also verify that physical infrastructure state is consistent with monitoring information and control commands exchanged between field devices and control centers. This additional security layer enhances protection from both outsider attacks and insider mistakes. We implemented specification-based SCADA command analyzers using physical constraint algorithms directly in the Bro framework and Broccoli APIs for three separate scenarios: a water heater, an automated distribution system, and an over-current protection scheme. To accomplish this, we added low-level analyzers capable of examining control system-specific protocol packets for both Modbus TCP and DNP3, and also higher-level analyzers able to interpret device command and data streams within the context of each device's physical capabilities and present operational state. Thus the software that we are making available includes the Bro/Broccoli scripts for these three scenarios, as well as simulators, written in C, of those scenarios that generate sample traffic that is monitored by the Bro/Broccoli scripts. In addition, we have also implemented systems to directly pull cyber-physical information from the OSIsoft PI historian system. We have included the Python scripts used to perform that monitoring.
NASA Technical Reports Server (NTRS)
Grunes, Mitchell R.; Choi, Junho
1995-01-01
We are in the preliminary stages of creating an operational system for losslessly compressing packet data streams. The end goal is to reduce costs. Real-world constraints include transmission in the presence of error, tradeoffs between the costs of compression and the costs of transmission and storage, and imperfect knowledge of the data streams to be transmitted. The overall method is to bring together packets of similar type, split the data into bit fields, and test a large number of compression algorithms. Preliminary results are very encouraging, typically offering compression factors substantially higher than those obtained with simpler generic byte-stream compressors, such as Unix Compress and HA 0.98.
Streaming simplification of tetrahedral meshes.
Vo, Huy T; Callahan, Steven P; Lindstrom, Peter; Pascucci, Valerio; Silva, Cláudio T
2007-01-01
Unstructured tetrahedral meshes are commonly used in scientific computing to represent scalar, vector, and tensor fields in three dimensions. Visualization of these meshes can be difficult to perform interactively due to their size and complexity. By reducing the size of the data, we can accomplish real-time visualization necessary for scientific analysis. We propose a two-step approach for streaming simplification of large tetrahedral meshes. Our algorithm arranges the data on disk in a streaming, I/O-efficient format that allows coherent access to the tetrahedral cells. A quadric-based simplification is sequentially performed on small portions of the mesh in-core. Our output is a coherent streaming mesh which facilitates future processing. Our technique is fast, produces high quality approximations, and operates out-of-core to process meshes too large for main memory.
Compact full-motion video hyperspectral cameras: development, image processing, and applications
NASA Astrophysics Data System (ADS)
Kanaev, A. V.
2015-10-01
Emergence of spectral pixel-level color filters has enabled development of hyperspectral Full Motion Video (FMV) sensors operating in visible (EO) and infrared (IR) wavelengths. The new class of hyperspectral cameras opens broad possibilities for military and industrial use. Indeed, such cameras are able to classify materials as well as detect and track spectral signatures continuously in real time, while simultaneously providing an operator the benefit of enhanced-discrimination color video. Supporting these extensive capabilities requires significant computational processing of the collected spectral data. In general, two processing streams are envisioned for mosaic array cameras. The first is spectral computation, which provides essential spectral content analysis, e.g. detection or classification. The second is presentation of the video to an operator, which can offer the best display of the content depending on the task, e.g. providing spatial resolution enhancement or color coding of the spectral analysis. These processing streams can be executed in parallel or can utilize each other's results. Spectral analysis algorithms have been developed extensively; however, demosaicking of more than three equally sampled spectral bands has scarcely been explored. We present a unique approach to demosaicking based on multi-band super-resolution and show the trade-off between spatial resolution and spectral content. Using imagery collected with the developed 9-band SWIR camera, we demonstrate several concepts of operation, including detection and tracking. We also compare the demosaicking results to the results of multi-frame super-resolution, as well as to combined multi-frame and multiband processing.
Online identification of wind model for improving quadcopter trajectory monitoring
NASA Astrophysics Data System (ADS)
Beniak, Ryszard; Gudzenko, Oleksandr
2017-10-01
In this paper, we consider a problem of quadcopter control in severe weather conditions. One type of such weather is a strong, variable wind. We consider deterministic and stochastic models of winds at low altitudes while the quadcopter performs aggressive maneuvers. We choose an adaptive algorithm as our control algorithm. This algorithm might seem a suitable one for the given problem, as it is able to adjust quickly to changing conditions. However, as shown in the paper, the algorithm is not applicable to rapidly changing winds and requires additional filters to smooth the impulse streams, so as not to lose the stability of the object.
Algorithm for calculating turbine cooling flow and the resulting decrease in turbine efficiency
NASA Technical Reports Server (NTRS)
Gauntner, J. W.
1980-01-01
An algorithm is presented for calculating both the quantity of compressor bleed flow required to cool the turbine and the decrease in turbine efficiency caused by the injection of cooling air into the gas stream. The algorithm, which is intended for an axial-flow turbine, is written as an air-cooling routine for use in a properly written thermodynamic cycle code. Ten different cooling configurations are available for each row of cooled airfoils in the turbine. Results from the algorithm are substantiated by comparison with flows predicted by major engine manufacturers for given bulk metal temperatures and given cooling configurations. A list of definitions for the terms in the subroutine is presented.
James D. Hall; Calvin O. Baker
1982-01-01
The literature and many published documents on rehabilitating and enhancing stream habitat for salmonid fishes are reviewed. The historical development and conceptual basis for habitat management are considered, followed by a review of successful and unsuccessful techniques for manipulation of spawning, rearing, and riparian habitat. Insufficient attention to...
RAZOR: A Compression and Classification Solution for the Internet of Things
Danieletto, Matteo; Bui, Nicola; Zorzi, Michele
2014-01-01
The Internet of Things is expected to increase the amount of data produced and exchanged in the network, due to the huge number of smart objects that will interact with one another. The related information management and transmission costs are increasing and becoming an almost unbearable burden, due to the unprecedented number of data sources and the intrinsic vastness and variety of the datasets. In this paper, we propose RAZOR, a novel lightweight algorithm for data compression and classification, which is expected to alleviate both aspects by leveraging the advantages offered by data mining methods for optimizing communications and by enhancing information transmission to simplify data classification. In particular, RAZOR leverages the concept of motifs, recurrent features used for signal categorization, in order to compress data streams: in such a way, it is possible to achieve compression levels of up to an order of magnitude, while maintaining the signal distortion within acceptable bounds and allowing for simple lightweight distributed classification. In addition, RAZOR is designed to keep the computational complexity low, in order to allow its implementation in the most constrained devices. The paper provides results about the algorithm configuration and a performance comparison against state-of-the-art signal processing techniques. PMID:24451454
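A sketch of motif-based stream compression in the spirit of RAZOR: windows that match a motif dictionary within a distortion bound are replaced by a motif index, others are stored raw. Dictionary learning and the classification side are omitted, and all names are illustrative rather than taken from the paper.

```python
import numpy as np

def motif_compress(signal: np.ndarray, motifs: np.ndarray, max_dist: float):
    """Encode fixed-length windows as motif indices when some motif lies
    within max_dist (RMS distance), else keep the raw window. Output is
    a list of ('m', index) or ('raw', samples) tokens; trailing samples
    shorter than one window are omitted for brevity."""
    w = motifs.shape[1]
    tokens = []
    for start in range(0, len(signal) - w + 1, w):
        win = signal[start:start + w]
        dists = np.sqrt(((motifs - win) ** 2).mean(axis=1))
        best = int(np.argmin(dists))
        if dists[best] <= max_dist:
            tokens.append(("m", best))      # one small index vs. w samples
        else:
            tokens.append(("raw", win))
    return tokens
```

Replacing a whole window by a single dictionary index is what yields order-of-magnitude compression while the motif indices themselves double as lightweight classification features.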
NASA Astrophysics Data System (ADS)
Engelhardt, Sandy; Kolb, Silvio; De Simone, Raffaele; Karck, Matthias; Meinzer, Hans-Peter; Wolf, Ivo
2016-03-01
Mitral valve annuloplasty describes a surgical procedure in which an artificial prosthesis is sutured onto the anatomical structure of the mitral annulus to re-establish the valve's functionality. Choosing an appropriate commercially available ring size and shape is a difficult decision the surgeon has to make intraoperatively according to his experience. In our augmented-reality framework, digitized ring models are superimposed onto endoscopic image streams without using any additional hardware. To place the ring model at the proper position within the endoscopic image plane, a pose estimation is performed that depends on the localization of sutures placed by the surgeon around the leaflet origins and punctured through the stiffer structure of the annulus. In this work, the tissue penetration points are tracked by the real-time-capable Lucas-Kanade optical flow algorithm. The accuracy and robustness of this tracking algorithm are investigated, with respect to the question of whether outliers influence the subsequent pose estimation. Our results suggest that optical flow is very stable for a variety of different endoscopic scenes and that tracking errors do not affect the position of the superimposed virtual objects in the scene, making this approach a viable candidate for annuloplasty augmented-reality-enhanced decision support.
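A minimal sketch of pyramidal Lucas-Kanade point tracking with OpenCV, of the kind used to follow tissue penetration points between endoscopic frames; the window size and pyramid depth are illustrative choices, and any prediction wrapper around the tracker is not shown.

```python
import cv2
import numpy as np

def track_points(prev_gray: np.ndarray, gray: np.ndarray,
                 points: np.ndarray):
    """Track points (Nx1x2 float32) from prev_gray to gray with pyramidal
    Lucas-Kanade; returns the tracked points and a per-point success mask."""
    new_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, points, None,
        winSize=(21, 21), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    ok = status.ravel() == 1
    return new_pts[ok], ok
```

Points whose status flag drops to zero are the temporarily occluded or out-of-view fiducials that the paper compensates for via prediction.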
Fragility issues of medical video streaming over 802.11e-WLAN m-health environments.
Tan, Yow-Yiong Edwin; Philip, Nada; Istepanian, Robert H
2006-01-01
This paper presents some of the fragility issues of medical video streaming over 802.11e WLAN in m-health applications. In particular, we present a medical channel-adaptive fair allocation (MCAFA) scheme for enhanced QoS support in IEEE 802.11 (WLAN), proposed as a modification of the standard 802.11e enhanced distributed coordination function (EDCF) for improved medical data performance. The proposed MCAFA extends EDCF by halving the contention window (CW) after ζ consecutive successful transmissions, to reduce the collision probability when the channel is busy. Simulation results show that MCAFA outperforms EDCF in terms of overall performance relevant to the requirements of high-throughput medical data and video streaming traffic in 3G/WLAN wireless environments.
Engineering fluid flow using sequenced microstructures
NASA Astrophysics Data System (ADS)
Amini, Hamed; Sollier, Elodie; Masaeli, Mahdokht; Xie, Yu; Ganapathysubramanian, Baskar; Stone, Howard A.; di Carlo, Dino
2013-05-01
Controlling the shape of fluid streams is important across scales: from industrial processing to control of biomolecular interactions. Previous approaches to control fluid streams have focused mainly on creating chaotic flows to enhance mixing. Here we develop an approach to apply order using sequences of fluid transformations rather than enhancing chaos. We investigate the inertial flow deformations around a library of single cylindrical pillars within a microfluidic channel and assemble these net fluid transformations to engineer fluid streams. As these transformations provide a deterministic mapping of fluid elements from upstream to downstream of a pillar, we can sequentially arrange pillars to apply the associated nested maps and, therefore, create complex fluid structures without additional numerical simulation. To show the range of capabilities, we present sequences that sculpt the cross-sectional shape of a stream into complex geometries, move and split a fluid stream, perform solution exchange and achieve particle separation. A general strategy to engineer fluid streams into a broad class of defined configurations in which the complexity of the nonlinear equations of fluid motion are abstracted from the user is a first step to programming streams of any desired shape, which would be useful for biological, chemical and materials automation.
Retinex enhancement of infrared images.
Li, Ying; He, Renjie; Xu, Guizhi; Hou, Changzhi; Sun, Yunyan; Guo, Lei; Rao, Liyun; Yan, Weili
2008-01-01
With the ability to image the temperature distribution of the body, infrared imaging is promising for the diagnosis and prognosis of diseases. However, the poor quality of raw infrared images has prevented applications, and one of the essential problems is the low-contrast appearance of the imaged object. In this paper, image enhancement techniques based on the Retinex theory are studied; Retinex is a process that automatically restores visual realism to images. The algorithms, including the Frankle-McCann algorithm, the McCann99 algorithm, the single-scale Retinex algorithm, the multi-scale Retinex algorithm and the multi-scale Retinex algorithm with color restoration (MSRCR), were applied to the enhancement of infrared images. Entropy measurements, along with visual inspection, were compared, and the results showed that algorithms based on Retinex theory have the ability to enhance infrared images. Of the algorithms compared, MSRCR demonstrated the best performance.
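For concreteness, a minimal single-scale Retinex (SSR) sketch, one of the compared algorithms: the log of the image minus the log of a Gaussian-blurred surround estimate. The multi-scale variants average this over several surround scales; the sigma value here is an illustrative choice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(img: np.ndarray, sigma: float = 80.0) -> np.ndarray:
    """SSR: R(x, y) = log I(x, y) - log [G_sigma * I](x, y), with the
    result rescaled to the 0..255 display range."""
    img = img.astype(np.float64) + 1.0               # avoid log(0)
    r = np.log(img) - np.log(gaussian_filter(img, sigma))
    r = (r - r.min()) / max(r.max() - r.min(), 1e-12) * 255.0
    return r.astype(np.uint8)
```

Dividing out the smooth surround (subtraction in log space) removes slowly varying background temperature and boosts local contrast, which is why Retinex suits low-contrast infrared imagery.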
NASA Astrophysics Data System (ADS)
Peralta, Richard C.; Forghani, Ali; Fayad, Hala
2014-04-01
Many real water resources optimization problems involve conflicting objectives, for which the main goal is to find a set of optimal solutions on, or near, the Pareto front. The ε-constraint and weighting multiobjective optimization techniques have shortcomings, especially as the number of objectives increases. Multiobjective Genetic Algorithms (MGA) have previously been proposed to overcome these difficulties. Here, an MGA derives a set of optimal solutions for multiobjective, multiuser conjunctive use of reservoir, stream, and (un)confined groundwater resources. The proposed methodology is applied to a hydraulically and economically nonlinear system in which all significant flows, including stream-aquifer-reservoir-diversion-return flow interactions, are simulated and optimized simultaneously for multiple periods. Neural networks represent constrained state variables. The objectives optimized simultaneously in the coupled simulation-optimization model are: (1) maximizing water provided from sources, (2) maximizing hydropower production, and (3) minimizing the operating cost of transporting water from sources to destinations. Results show the efficiency of multiobjective genetic algorithms in generating Pareto-optimal sets for complex nonlinear multiobjective optimization problems.
Online Cross-Validation-Based Ensemble Learning
Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark
2017-01-01
Online estimators update a current estimate with a new incoming batch of data without having to revisit past data, thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as a special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach, including online estimation of the optimal ensemble of candidate online estimators. We illustrate the excellent performance of our methods using simulations and a real data example in which we make streaming predictions of infectious disease incidence using data from a large database. PMID:28474419
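A minimal sketch of the online cross-validation selector: before each new batch is used for training, every candidate online learner is scored on it, and prediction uses the candidate with the smallest cumulative held-out loss. The `predict`/`partial_fit` learner interface is an assumption for illustration.

```python
def online_cv_select(batches, learners, loss):
    """Discrete online selector sketch. For each incoming batch:
    (1) score every candidate on the as-yet-unseen batch (online CV),
    (2) then let every candidate train on it. Returns cumulative
    held-out losses and the index of the best candidate."""
    cum = [0.0] * len(learners)
    for X, y in batches:
        for k, m in enumerate(learners):
            cum[k] += loss(y, m.predict(X))   # evaluated before training
        for m in learners:
            m.partial_fit(X, y)               # online update on the batch
    best = min(range(len(learners)), key=cum.__getitem__)
    return cum, best
```

Because each batch is scored before it is trained on, the cumulative losses are honest out-of-sample estimates, which is what underwrites the oracle-style guarantee described above.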
Dragonfly: an implementation of the expand-maximize-compress algorithm for single-particle imaging.
Ayyer, Kartik; Lan, Ti-Yen; Elser, Veit; Loh, N Duane
2016-08-01
Single-particle imaging (SPI) with X-ray free-electron lasers has the potential to change fundamentally how biomacromolecules are imaged. The structure would be derived from millions of diffraction patterns, each from a different copy of the macromolecule before it is torn apart by radiation damage. The challenges posed by the resultant data stream are staggering: millions of incomplete, noisy and un-oriented patterns have to be computationally assembled into a three-dimensional intensity map and then phase reconstructed. In this paper, the Dragonfly software package is described, based on a parallel implementation of the expand-maximize-compress reconstruction algorithm that is well suited for this task. Auxiliary modules to simulate SPI data streams are also included to assess the feasibility of proposed SPI experiments at the Linac Coherent Light Source, Stanford, California, USA.
Temporally consistent segmentation of point clouds
NASA Astrophysics Data System (ADS)
Owens, Jason L.; Osteen, Philip R.; Daniilidis, Kostas
2014-06-01
We consider the problem of generating temporally consistent point cloud segmentations from streaming RGB-D data, where every incoming frame extends existing labels to new points or contributes new labels while maintaining the labels for pre-existing segments. Our approach generates an over-segmentation based on voxel cloud connectivity, where a modified k-means algorithm selects supervoxel seeds and associates similar neighboring voxels to form segments. Given the data stream from a potentially mobile sensor, we solve for the camera transformation between consecutive frames using a joint optimization over point correspondences and image appearance. The aligned point cloud may then be integrated into a consistent model coordinate frame. Previously labeled points are used to mask incoming points from the new frame, while new and previous boundary points extend the existing segmentation. We evaluate the algorithm on newly-generated RGB-D datasets.
Light-weight reference-based compression of FASTQ data.
Zhang, Yongpeng; Li, Linsen; Yang, Yanli; Yang, Xiao; He, Shan; Zhu, Zexuan
2015-06-09
The exponential growth of next-generation sequencing (NGS) data has posed big challenges to data storage, management, and archiving. Data compression is one of the effective solutions, where reference-based compression strategies can typically achieve superior compression ratios compared to those not relying on any reference. This paper presents a lossless light-weight reference-based compression algorithm, namely LW-FQZip, to compress FASTQ data. The three components of any given input, i.e., metadata, short reads, and quality score strings, are first parsed into three data streams in which redundant information is identified and eliminated independently. In particular, well-designed incremental and run-length-limited encoding schemes are utilized to compress the metadata and quality score streams, respectively. To handle the short reads, LW-FQZip uses a novel light-weight mapping model to rapidly map them against external reference sequence(s) and produce concise alignment results for storage. The three processed data streams are then packed together with some general-purpose compression algorithms such as LZMA. LW-FQZip was evaluated on eight real-world NGS data sets and achieved compression ratios in the range of 0.111-0.201. This is comparable or superior to other state-of-the-art lossless NGS data compression algorithms. LW-FQZip is a program that enables efficient lossless FASTQ data compression. It contributes to the state-of-the-art applications for NGS data storage and transmission. LW-FQZip is freely available online at: http://csse.szu.edu.cn/staff/zhuzx/LWFQZip.
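The three-stream split can be illustrated with a short Python sketch that parses a FASTQ file into metadata, read, and quality streams and compresses each with the general-purpose LZMA coder from the standard library; the specialized incremental, run-length-limited, and reference-mapping stages of LW-FQZip itself are omitted here.

import lzma

def compress_fastq_streams(path):
    """Split a FASTQ file into its three component streams and LZMA each."""
    meta, reads, quals = [], [], []
    with open(path) as fh:
        while True:
            header = fh.readline()
            if not header:
                break
            meta.append(header)
            reads.append(fh.readline())   # short read
            fh.readline()                 # '+' separator line
            quals.append(fh.readline())   # quality score string
    return {name: lzma.compress("".join(stream).encode())
            for name, stream in [("metadata", meta),
                                 ("reads", reads),
                                 ("qualities", quals)]}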
Apparatus for rapid measurement of aerosol bulk chemical composition
Lee, Yin-Nan E.; Weber, Rodney J.; Orsini, Douglas
2006-04-18
An apparatus for continuous on-line measurement of chemical composition of aerosol particles with a fast time resolution is provided. The apparatus includes an enhanced particle size magnifier for producing activated aerosol particles and an enhanced collection device which collects the activated aerosol particles into a liquid stream for quantitative analysis by analytical means. Methods for on-line measurement of chemical composition of aerosol particles are also provided, the method including exposing aerosol carrying sample air to hot saturated steam thereby forming activated aerosol particles; collecting the activated aerosol particles by a collection device for delivery as a jet stream onto an impaction surface; and flushing off the activated aerosol particles from the impaction surface into a liquid stream for delivery of the collected liquid stream to an analytical instrument for quantitative measurement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Rui; Praggastis, Brenda L.; Smith, William P.
While streaming data have become increasingly popular in business and research communities, semantic models and processing software for streaming data have not kept pace. Traditional semantic solutions have not addressed transient data streams. Semantic web languages (e.g., RDF, OWL) have typically addressed static data settings, and linked data approaches have predominantly addressed static or growing data repositories. Streaming data settings have some fundamental differences; in particular, data are consumed on the fly and data may expire. Stream reasoning, a combination of stream processing and semantic reasoning, has emerged with the vision of providing "smart" processing of streaming data. C-SPARQL is a prominent stream reasoning system that handles semantic (RDF) data streams. Many stream reasoning systems, including C-SPARQL, use a sliding window and use data arrival time to evict data. For data streams that include expiration times, a simple arrival time scheme is inadequate if the window size does not match the expiration period. In this paper, we propose a cache-enabled, order-aware, ontology-based stream reasoning framework. This framework consumes RDF streams with expiration timestamps assigned by the streaming source. Our framework utilizes both arrival and expiration timestamps in its cache eviction policies. In addition, we introduce the notion of "semantic importance," which aims to address the relevance of data to the expected reasoning, thus enabling the eviction algorithms to be more context- and reasoning-aware when choosing what data to maintain for question answering. We evaluate this framework by implementing three different prototypes and utilizing five metrics. The trade-offs of deploying the proposed framework are also discussed.
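A minimal Python sketch of a window combining the two eviction policies might look as follows; the class and method names are illustrative and do not reflect C-SPARQL's actual API.

from collections import deque

class ExpiringWindow:
    """Window that evicts by expiration timestamp when one is supplied,
    and by arrival order when over capacity (illustrative names)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = deque()           # (arrival, expires, triple), FIFO
    def insert(self, triple, now, expires=float("inf")):
        self.items.append((now, expires, triple))
        self.evict(now)
    def evict(self, now):
        # 1) expiration-based eviction: drop anything past its lifetime
        self.items = deque(i for i in self.items if i[1] > now)
        # 2) arrival-based eviction: oldest arrivals leave when over capacity
        while len(self.items) > self.capacity:
            self.items.popleft()
    def contents(self):
        return [t for _, _, t in self.items]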
Louhi, Pauliina; Mykrä, Heikki; Paavola, Riku; Huusko, Ari; Vehanen, Teppo; Mäki-Petäys, Aki; Muotka, Timo
2011-09-01
The primary focus of many in-stream restoration projects is to enhance habitat diversity for salmonid fishes, yet the lack of properly designed monitoring studies, particularly ones with pre-restoration data, limits any attempts to assess whether restoration has succeeded in improving salmonid habitat. Even less is known about the impacts of fisheries-related restoration on other, non-target biota. We examined how restoration aiming at the enhancement of juvenile brown trout (Salmo trutta L.) affects benthic macroinvertebrates, using two separate data sets: (1) a before-after-control-impact (BACI) design with three years before and three after restoration in differently restored and control reaches of six streams; and (2) a space-time substitution design including channelized, restored, and near-natural streams with an almost 20-year perspective on the recovery of invertebrate communities. In the BACI design, total macroinvertebrate density differed significantly from before to after restoration. Following restoration, densities decreased in all treatments, but less so in the controls than in restored sections. Taxonomic richness also decreased from before to after restoration, but this happened similarly in all treatments. In the long-term comparative study, macroinvertebrate species richness showed no difference between the channel types. Community composition differed significantly between the restored and natural streams, but not between restored and channelized streams. Overall, the in-stream restoration measures used increased stream habitat diversity but did not enhance benthic biodiversity. While many macroinvertebrates may be dispersal limited, our study sites should not have been too distant to reach within almost two decades. A key explanation for the weak responses by macroinvertebrate communities may have been historical. When Fennoscandian streams were channelized for log floating, the loss of habitat heterogeneity was only partial. Therefore, habitat may not have been limiting the macroinvertebrate communities to begin with. Stream restoration to support trout fisheries has strong public acceptance in Finland and will likely continue to increase in the near future. Therefore, more effort should be placed on assessing restoration success from a biodiversity perspective using multiple organism groups in both stream and riparian ecosystems.
An enhanced TIMESAT algorithm for estimating vegetation phenology metrics from MODIS data
Tan, B.; Morisette, J.T.; Wolfe, R.E.; Gao, F.; Ederer, G.A.; Nightingale, J.; Pedelty, J.A.
2011-01-01
An enhanced TIMESAT algorithm was developed for retrieving vegetation phenology metrics from 250 m and 500 m spatial resolution Moderate Resolution Imaging Spectroradiometer (MODIS) vegetation indexes (VI) over North America. MODIS VI data were pre-processed using snow-cover and land surface temperature data, and temporally smoothed with the enhanced TIMESAT algorithm. An objective third derivative test was applied to define key phenology dates and retrieve a set of phenology metrics. This algorithm has been applied to two MODIS VIs: Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI). In this paper, we describe the algorithm and use EVI as an example to compare three sets of TIMESAT algorithm/MODIS VI combinations: a) original TIMESAT algorithm with original MODIS VI, b) original TIMESAT algorithm with pre-processed MODIS VI, and c) enhanced TIMESAT and pre-processed MODIS VI. All retrievals were compared with ground phenology observations, some made available through the National Phenology Network. Our results show that for MODIS data in middle to high latitude regions, snow and land surface temperature information is critical in retrieving phenology metrics from satellite observations. The results also show that the enhanced TIMESAT algorithm can better accommodate growing season start and end dates that vary significantly from year to year. The TIMESAT algorithm improvements contribute to more spatial coverage and more accurate retrievals of the phenology metrics. Among three sets of TIMESAT/MODIS VI combinations, the start of the growing season metric predicted by the enhanced TIMESAT algorithm using pre-processed MODIS VIs has the best associations with ground observed vegetation greenup dates. © 2010 IEEE.
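The third-derivative test can be sketched as follows in Python, assuming a regularly sampled VI time series in a NumPy array; the moving-average smoother and window size are illustrative stand-ins for the enhanced TIMESAT smoothing, and transition dates are taken at sign changes of the third derivative.

import numpy as np

def phenology_dates(vi, smooth_window=5):
    """Candidate phenology transition dates from a third-derivative test."""
    kernel = np.ones(smooth_window) / smooth_window
    smoothed = np.convolve(vi, kernel, mode="same")       # simple smoother
    d3 = np.gradient(np.gradient(np.gradient(smoothed)))  # third derivative
    sign_change = np.where(np.diff(np.sign(d3)) != 0)[0]  # zero crossings
    return sign_change  # indices of candidate transition dates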
An Enhanced TIMESAT Algorithm for Estimating Vegetation Phenology Metrics from MODIS Data
NASA Technical Reports Server (NTRS)
Tan, Bin; Morisette, Jeffrey T.; Wolfe, Robert E.; Gao, Feng; Ederer, Gregory A.; Nightingale, Joanne; Pedelty, Jeffrey A.
2012-01-01
An enhanced TIMESAT algorithm was developed for retrieving vegetation phenology metrics from 250 m and 500 m spatial resolution Moderate Resolution Imaging Spectroradiometer (MODIS) vegetation indexes (VI) over North America. MODIS VI data were pre-processed using snow-cover and land surface temperature data, and temporally smoothed with the enhanced TIMESAT algorithm. An objective third derivative test was applied to define key phenology dates and retrieve a set of phenology metrics. This algorithm has been applied to two MODIS VIs: Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI). In this paper, we describe the algorithm and use EVI as an example to compare three sets of TIMESAT algorithm/MODIS VI combinations: a) original TIMESAT algorithm with original MODIS VI, b) original TIMESAT algorithm with pre-processed MODIS VI, and c) enhanced TIMESAT and pre-processed MODIS VI. All retrievals were compared with ground phenology observations, some made available through the National Phenology Network. Our results show that for MODIS data in middle to high latitude regions, snow and land surface temperature information is critical in retrieving phenology metrics from satellite observations. The results also show that the enhanced TIMESAT algorithm can better accommodate growing season start and end dates that vary significantly from year to year. The TIMESAT algorithm improvements contribute to more spatial coverage and more accurate retrievals of the phenology metrics. Among three sets of TIMESAT/MODIS VI combinations, the start of the growing season metric predicted by the enhanced TIMESAT algorithm using pre-processed MODIS VIs has the best associations with ground observed vegetation greenup dates.
Computing Cooling Flows in Turbines
NASA Technical Reports Server (NTRS)
Gauntner, J.
1986-01-01
Algorithm developed for calculating both quantity of compressor bleed flow required to cool turbine and resulting decrease in efficiency due to cooling air injected into gas stream. Program intended for use with axial-flow, air-breathing, jet-propulsion engines with variety of airfoil-cooling configurations. Algorithm results compared extremely well with figures given by major engine manufacturers for given bulk-metal temperatures and cooling configurations. Program written in FORTRAN IV for batch execution.
ERIC Educational Resources Information Center
Technology & Learning, 2008
2008-01-01
More than ever, teachers are using digital video to enhance their lessons. In fact, the number of schools using video streaming increased from 30 percent to 45 percent between 2004 and 2006, according to Market Data Retrieval. Why the popularity? For starters, video-streaming products are easy to use. They allow teachers to punctuate lessons with…
NASA Astrophysics Data System (ADS)
Martinez-Gutierrez, Genaro
Baja California Sur (Mexico), as well as mainland Mexico, is affected by tropical cyclone storms that originate in the eastern North Pacific. Historical records show that Baja has been damaged by intense summer storms. An arid to semiarid climate characterizes the study area, where precipitation mainly occurs during the summer and winter seasons. Natural and anthropogenic changes have impacted the landscape of southern Baja. The present research documents the effects of tropical storms over the southern region of Baja California for a period of approximately twenty-six years. The goal of the research is to demonstrate how remote sensing can be used to detect the important effects of tropical storms, including: (a) evaluating change detection algorithms, and (b) delineating changes to the landscape, including coastal modification, fluvial erosion and deposition, vegetation change, and river avulsion. Digital image processing methods with temporal Landsat satellite remotely sensed data from the North America Landscape Characterization (NALC) archive, Thematic Mapper (TM), and Enhanced Thematic Mapper (ETM) images were used to document the landscape change. Two image processing methods were tested: image differencing (ID) and principal component analysis (PCA). Landscape changes identified with the NALC archive and TM images showed that the major changes included a rapid change of land use in the towns of San Jose del Cabo and Cabo San Lucas between 1973 and 1986. The features detected using the algorithms included flood deposits within the channels of active streams, erosion banks, and new channels caused by channel avulsion. Despite the 19-year period covered by the NALC data and the approximately 10-year intervals between acquisition dates, there were changed features that could be identified in the images. The TM images showed that flooding from Hurricane Isis (1998) produced new large deposits within the stream channels. This research has shown that remote sensing based change detection can delineate the effects of flooding on the landscape at scales down to the nominal resolution of the sensor. These findings indicate that many other applications for change detection are both viable and important. These include disaster response, flood hazard planning, geomorphic studies, and water supply management in deserts.
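Image differencing, the simpler of the two methods, can be sketched in a few lines of Python on two co-registered images; the k-standard-deviation threshold is a common illustrative choice, not necessarily the one used in this study.

import numpy as np

def image_difference_change(img_t1, img_t2, k=2.0):
    """Flag pixels whose temporal difference is more than k standard
    deviations from the mean difference (image differencing, ID)."""
    diff = img_t2.astype(np.float64) - img_t1.astype(np.float64)
    upper = diff.mean() + k * diff.std()
    lower = diff.mean() - k * diff.std()
    return (diff > upper) | (diff < lower)   # boolean change mask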
Accelerating DNA analysis applications on GPU clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumeo, Antonino; Villa, Oreste
DNA analysis is an emerging application of high performance bioinformatics. Modern sequencing machinery is able to provide, in a few hours, large input streams of data which need to be matched against exponentially growing databases of known fragments. The ability to recognize these patterns effectively and quickly may allow extending the scale and the reach of the investigations performed by biology scientists. Aho-Corasick is an exact, multiple pattern matching algorithm often at the base of this application. High performance systems are a promising platform to accelerate this algorithm, which is computationally intensive but also inherently parallel. Nowadays, high performance systems also include heterogeneous processing elements, such as Graphics Processing Units (GPUs), to further accelerate parallel algorithms. Unfortunately, the Aho-Corasick algorithm exhibits large performance variability, depending on the size of the input streams, on the number of patterns to search, and on the number of matches, and poses significant challenges for current high performance software and hardware implementations. An adequate mapping of the algorithm onto the target architecture, coping with the limits of the underlying hardware, is required to reach the desired high throughputs. Load balancing also plays a crucial role when considering the limited bandwidth among the nodes of these systems. In this paper we present an efficient implementation of the Aho-Corasick algorithm for high performance clusters accelerated with GPUs. We discuss how we partitioned and adapted the algorithm to fit the Tesla C1060 GPU and then present an MPI-based implementation for a heterogeneous high performance cluster. We compare this implementation to MPI and MPI-with-pthreads implementations for a homogeneous cluster of x86 processors, discussing stability versus performance and the scaling of the solutions, taking into consideration aspects such as the bandwidth among the different nodes.
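For reference, a plain serial Python version of the Aho-Corasick automaton the paper parallelizes might look as follows; the GPU partitioning and MPI load balancing are of course omitted.

from collections import deque

def build_automaton(patterns):
    # goto: per-state char->state maps; fail: failure links; out: matches
    goto, fail, out = [{}], [0], [set()]
    for p in patterns:                        # phase 1: build the keyword trie
        s = 0
        for ch in p:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(p)
    queue = deque(goto[0].values())           # phase 2: BFS failure links
    while queue:
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]
    return goto, fail, out

def search(stream, goto, fail, out):
    """One pass over the input, reporting (end_position, pattern) pairs."""
    s, matches = 0, []
    for i, ch in enumerate(stream):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        matches.extend((i, p) for p in out[s])
    return matches

g, f, o = build_automaton(["he", "she", "his", "hers"])
print(search("ushers", g, f, o))   # matches for "she", "he", "hers"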
Hardware Architectures for Data-Intensive Computing Problems: A Case Study for String Matching
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumeo, Antonino; Villa, Oreste; Chavarría-Miranda, Daniel
DNA analysis is an emerging application of high performance bioinformatics. Modern sequencing machinery is able to provide, in a few hours, large input streams of data, which need to be matched against exponentially growing databases of known fragments. The ability to recognize these patterns effectively and quickly may allow extending the scale and the reach of the investigations performed by biology scientists. Aho-Corasick is an exact, multiple pattern matching algorithm often at the base of this application. High performance systems are a promising platform to accelerate this algorithm, which is computationally intensive but also inherently parallel. Nowadays, high performance systems also include heterogeneous processing elements, such as Graphics Processing Units (GPUs), to further accelerate parallel algorithms. Unfortunately, the Aho-Corasick algorithm exhibits large performance variability, depending on the size of the input streams, on the number of patterns to search, and on the number of matches, and poses significant challenges on current high performance software and hardware implementations. An adequate mapping of the algorithm onto the target architecture, coping with the limits of the underlying hardware, is required to reach the desired high throughputs. In this paper, we discuss the implementation of the Aho-Corasick algorithm for GPU-accelerated high performance systems. We present an optimized implementation of Aho-Corasick for GPUs and discuss its tradeoffs on the Tesla T10 and the new Tesla T20 (codename Fermi) GPUs. We then integrate the optimized GPU code, respectively, in an MPI-based and in a pthreads-based load balancer to enable execution of the algorithm on clusters and large shared-memory multiprocessors (SMPs) accelerated with multiple GPUs.
E3 Value Stream Mapping How-to Guide
Presentations and PDFs on value stream mapping for various sectors, which can reveal substantial opportunities to reduce costs, enhance production flow, save time, reduce inventory, and improve environmental performance.
Effects of turbulent hyporheic mixing on reach-scale solute transport
NASA Astrophysics Data System (ADS)
Roche, K. R.; Li, A.; Packman, A. I.
2017-12-01
Turbulence rapidly mixes solutes and fine particles into coarse-grained streambeds. Both hyporheic exchange rates and spatial variability of hyporheic mixing are known to be controlled by turbulence, but it is unclear how turbulent mixing influences mass transport at the scale of stream reaches. We used a process-based particle-tracking model to simulate local- and reach-scale solute transport for a coarse-bed stream. Two vertical mixing profiles, one with a smooth transition from in-stream to hyporheic transport conditions and a second with enhanced turbulent transport at the sediment-water interface, were fit to steady-state subsurface concentration profiles observed in laboratory experiments. The mixing profile with enhanced interfacial transport better matched the observed concentration profiles and overall mass retention in the streambed. The best-fit mixing profiles were then used to simulate upscaled solute transport in a stream. Enhanced mixing coupled in-stream and hyporheic solute transport, causing solutes exchanged into the shallow subsurface to have travel times similar to the water column. This extended the exponential region of the in-stream solute breakthrough curve, and delayed the onset of the heavy power-law tailing induced by deeper and slower hyporheic porewater velocities. Slopes of observed power-law tails were greater than those predicted from stochastic transport theory, and also changed in time. In addition, rapid hyporheic transport velocities truncated the hyporheic residence time distribution by causing mass to exit the stream reach via subsurface advection, yielding strong exponential tempering in the in-stream breakthrough curves at the timescale of advective hyporheic transport through the reach. These results show that strong turbulent mixing across the sediment-water interface violates the conventional separation of surface and subsurface flows used in current models for solute transport in rivers. Instead, the full distribution of flow and mixing over the surface-subsurface continuum must be explicitly considered to properly interpret solute transport in coarse-bed streams.
Streaming PCA with many missing entries.
DOT National Transportation Integrated Search
2015-12-01
This paper considers the problem of matrix completion when some number of the columns are : completely and arbitrarily corrupted, potentially by a malicious adversary. It is well-known that standard : algorithms for matrix completion can return arbit...
NASA Technical Reports Server (NTRS)
Yang, Qiguang; Liu, Xu; Wu, Wan; Kizer, Susan; Baize, Rosemary R.
2016-01-01
A hybrid stream PCRTM-SOLAR model has been proposed for fast and accurate radiative transfer simulation. It calculates the reflected solar (RS) radiances in a fast, coarse way and then, with the help of a pre-saved matrix, transforms the results to obtain the desired highly accurate RS spectrum. The methodology has been demonstrated with the hybrid stream discrete ordinate (HSDO) radiative transfer (RT) model. The HSDO method calculates the monochromatic radiances using a 4-stream discrete ordinate method, where only a small number of monochromatic radiances are simulated with both the 4-stream and a larger N-stream (N = 16) discrete ordinate RT algorithm. The accuracy of the obtained channel radiance is comparable to the result from the N-stream moderate resolution atmospheric transmission version 5 (MODTRAN5). The root-mean-square errors are usually less than 5x10(exp -4) mW/sq cm/sr/cm. The computational speed is three to four orders of magnitude faster than the medium-speed correlated-k option of MODTRAN5. This method is very efficient for simulating thousands of RS spectra under multi-layer cloud/aerosol and solar radiation conditions for climate change studies and numerical weather prediction applications.
Clustering for Binary Data Sets by Using Genetic Algorithm-Incremental K-means
NASA Astrophysics Data System (ADS)
Saharan, S.; Baragona, R.; Nor, M. E.; Salleh, R. M.; Asrah, N. M.
2018-04-01
This research was initially driven by the lack of clustering algorithms that specifically focus on binary data. To overcome this gap in knowledge, a promising technique for analysing this type of data became the main subject of this research, namely genetic algorithms (GA). For the purposes of this research, GA was combined with the incremental k-means (IKM) algorithm to cluster binary data streams. In GAIKM, the objective function is based on a few sufficient statistics that may be easily and quickly calculated on binary numbers. The implementation of IKM gives an advantage in terms of fast convergence. The results show that GAIKM is an efficient and effective new clustering algorithm compared to existing clustering algorithms and to IKM itself. In conclusion, GAIKM outperformed other clustering algorithms such as GCUK, IKM, scalable k-means (SKM), and k-means clustering, and paves the way for future research involving missing data and outliers.
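A minimal Python sketch of the incremental k-means core, clustering binary vectors in a single pass, is given below; the random centroid initialization stands in for the GA seeding/optimization layer of GAIKM, which is omitted.

import numpy as np

def incremental_kmeans_binary(stream, k, rng=np.random.default_rng(0)):
    """One-pass incremental k-means: each arriving 0/1 vector updates the
    running mean of its nearest centroid (Hamming-like L1 distance)."""
    centroids, counts = None, None
    for x in stream:
        x = np.asarray(x, dtype=float)
        if centroids is None:
            centroids = rng.random((k, x.size))   # illustrative random init
            counts = np.zeros(k)
        j = np.argmin(np.abs(centroids - x).sum(axis=1))  # nearest centroid
        counts[j] += 1
        centroids[j] += (x - centroids[j]) / counts[j]    # incremental mean
    return centroids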
Innovative hyperchaotic encryption algorithm for compressed video
NASA Astrophysics Data System (ADS)
Yuan, Chun; Zhong, Yuzhuo; Yang, Shiqiang
2002-12-01
It is accepted that stream cryptosystems can achieve good real-time performance and flexibility by encrypting only selected parts of the block data and header information of the compressed video stream. A chaotic random number generator, for example the logistic map, is a comparatively promising substitute, but it is easily attacked by nonlinear dynamic forecasting and geometric information extraction. In this paper, we present a hyperchaotic cryptography scheme to encrypt compressed video, which integrates the logistic map with a linear congruential algorithm over the field Z(2^32 - 1) to strengthen the security of mono-chaotic cryptography, while the real-time performance and flexibility of chaotic sequence cryptography are maintained. It also integrates dissymmetrical public-key cryptography and implements encryption and identity authentication of control parameters at the initialization phase. In accordance with the importance of the data in the compressed video stream, encryption is performed in a layered scheme. In the innovative hyperchaotic cryptography, the value and the updating frequency of the control parameters can be changed online to satisfy the requirements of network quality, processor capability, and security. The innovative hyperchaotic cryptography proves robust security under cryptanalysis and shows good real-time performance and flexible implementation capability through arithmetic evaluation and testing.
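The flavor of such a combined generator can be sketched in Python as follows; all constants (logistic-map parameter, LCG multiplier and increment, seed values) are illustrative, not the paper's parameters.

def keystream(x0=0.37, lcg_seed=123456789, n=16):
    """Keystream sketch: logistic-map iterates mixed with a linear
    congruential generator over Z(2^32 - 1). Constants are illustrative."""
    m = 2**32 - 1
    a, c = 1103515245, 12345            # classic LCG multiplier/increment
    x, s = x0, lcg_seed
    out = []
    for _ in range(n):
        x = 3.99 * x * (1.0 - x)        # logistic map, chaotic regime
        s = (a * s + c) % m             # linear congruential step
        out.append((int(x * m) ^ s) % 256)  # mix and reduce to a byte
    return bytes(out)

# XOR the selected parts of the compressed stream with the keystream.
cipher = bytes(p ^ k for p, k in zip(b"compressed bits!", keystream()))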
Infrared image enhancement using H(infinity) bounds for surveillance applications.
Qidwai, Uvais
2008-08-01
In this paper, two algorithms have been presented to enhance the infrared (IR) images. Using the autoregressive moving average model structure and H(infinity) optimal bounds, the image pixels are mapped from the IR pixel space into normal optical image space, thus enhancing the IR image for improved visual quality. Although H(infinity)-based system identification algorithms are very common now, they are not quite suitable for real-time applications owing to their complexity. However, many variants of such algorithms are possible that can overcome this constraint. Two such algorithms have been developed and implemented in this paper. Theoretical and algorithmic results show remarkable enhancement in the acquired images. This will help in enhancing the visual quality of IR images for surveillance applications.
NASA Technical Reports Server (NTRS)
Coles, W. A.; Harmon, J. K.; Lazarus, A. J.; Sullivan, J. D.
1978-01-01
Solar wind velocities measured by earth-orbiting spacecraft are compared with velocities determined from interplanetary scintillation (IPS) observations for 1973, a period when high-velocity streams were prevalent. The spacecraft and IPS velocities agree well in the mean and are highly correlated. No simple model for the distribution of enhanced turbulence within streams is sufficient to explain the velocity comparison results for the entire year. Although a simple proportionality between density fluctuation level and bulk density is consistent with IPS velocities for some periods, some streams appear to have enhanced turbulence in the high-velocity region, where the density is low.
Speech enhancement on smartphone voice recording
NASA Astrophysics Data System (ADS)
Tris Atmaja, Bagus; Nur Farid, Mifta; Arifianto, Dhany
2016-11-01
Speech enhancement is a challenging task in audio signal processing: enhancing the quality of a targeted speech signal while suppressing other noises. Speech enhancement algorithms have grown rapidly, from spectral subtraction, Wiener filtering, and the spectral amplitude MMSE estimator to non-negative matrix factorization (NMF). The smartphone, as a revolutionary device, is now used in all aspects of life, including journalism, both personally and professionally. Although many smartphones have two microphones (main and rear), only the main microphone is widely used for voice recording, which is why the NMF algorithm is widely used for this purpose of speech enhancement. This paper evaluates speech enhancement on smartphone voice recordings using the algorithms mentioned previously. We also extend the NMF algorithm to Kullback-Leibler NMF with supervised separation. The last algorithm shows improved results compared to the others by spectrogram and PESQ score evaluation.
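As a baseline for comparison, classic spectral subtraction can be sketched in Python with SciPy's STFT utilities; the assumption that the first half second of the recording is speech-free noise is illustrative.

import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(noisy, fs, noise_seconds=0.5):
    """Estimate the noise magnitude spectrum from an assumed speech-free
    lead-in, subtract it per frame, and keep the noisy phase."""
    f, t, Z = stft(noisy, fs=fs, nperseg=512)
    mag, phase = np.abs(Z), np.angle(Z)
    noise_frames = int(noise_seconds * fs / 256)        # hop = 256 samples
    noise_mag = mag[:, :noise_frames].mean(axis=1, keepdims=True)
    clean_mag = np.maximum(mag - noise_mag, 0.0)        # half-wave rectify
    _, enhanced = istft(clean_mag * np.exp(1j * phase), fs=fs, nperseg=512)
    return enhanced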
4D Floodplain representation in hydrologic flood forecasting using WRFHydro modeling framework
NASA Astrophysics Data System (ADS)
Gangodagamage, C.; Li, Z.; Adams, T.; Ito, T.; Maitaria, K.; Islam, M.; Dhondia, J.
2015-12-01
Floods claim more lives and damage more property than any other category of natural disaster in the continental U.S. A system that can demarcate local flood boundaries dynamically could help flood-prone communities prepare for, and even protect themselves from, catastrophic flood events. Lateral distances from the centerline of the river to the right and left floodplains for the water levels produced by the models at each grid location have not been properly integrated with the National Hydrography Dataset (NHDPlus). The NHDPlus dataset represents the stream network with feature classes such as rivers, tributaries, canals, lakes, ponds, dams, coastlines, and stream gages. It consists of approximately 2.7 million river reaches defining how surface water drains to the ocean. These river reaches have upstream and downstream nodes and basic parameters such as flow direction, drainage area, reach slope, etc. We modified an existing algorithm (Gangodagamage et al., 2007, 2011) to provide the lateral distance from the centerline of the river to the right and left floodplains for the flows simulated by models. Previous work produced floodplain boundaries for static river stages (i.e., a 3D metric: distance along the main stem, flow depth, lateral distance from the river centerline). Our new approach introduces the floodplain boundary for variable water levels with a fourth dimension, time. We use modeled flows from WRFHydro and demarcate the right and left lateral boundaries of inundation dynamically. This approach integrates dynamically with high resolution models (e.g., hourly and ~1 km spatial resolution) that are developed from recent advancements in high computational power together with ground-based measurements (e.g., Fluxnet), lateral inundation vectors (direction and spatial extent) derived from multi-temporal remote sensing data (e.g., LiDAR, WorldView-2, Landsat, ASTER, MODIS), and improved representations of the physical processes through multi-parameterizations. Our approach enhances the normalized (streams are at zero elevation) DEM-derived upstream flow routing pathways for stream reaches at given water stages as more and more satellite data become available for various flood inundations. Validation of the inundation boundaries is performed using HEC-RAS hydrodynamic model results for selected streams.
NASA Astrophysics Data System (ADS)
Li, Shuo; Jin, Weiqi; Li, Li; Li, Yiyang
2018-05-01
Infrared thermal images can reflect the thermal-radiation distribution of a particular scene. However, the contrast of the infrared images is usually low. Hence, it is generally necessary to enhance the contrast of infrared images in advance to facilitate subsequent recognition and analysis. Based on the adaptive double plateaus histogram equalization, this paper presents an improved contrast enhancement algorithm for infrared thermal images. In the proposed algorithm, the normalized coefficient of variation of the histogram, which characterizes the level of contrast enhancement, is introduced as feedback information to adjust the upper and lower plateau thresholds. The experiments on actual infrared images show that compared to the three typical contrast-enhancement algorithms, the proposed algorithm has better scene adaptability and yields better contrast-enhancement results for infrared images with more dark areas or a higher dynamic range. Hence, it has high application value in contrast enhancement, dynamic range compression, and digital detail enhancement for infrared thermal images.
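The double-plateau idea can be sketched in Python as follows; here the plateau thresholds are passed as fixed arguments, whereas the paper selects and adjusts them adaptively using the normalized coefficient of variation of the histogram.

import numpy as np

def double_plateau_equalization(img, upper, lower):
    """Clip each nonzero histogram bin into [lower, upper] before building
    the CDF mapping, limiting both background over-enhancement and loss of
    small details. Thresholds here are fixed, not adaptively selected."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    clipped = np.clip(hist, lower, upper)        # plateau limits
    clipped[hist == 0] = 0                       # keep empty bins empty
    cdf = np.cumsum(clipped)
    lut = np.round(cdf / cdf[-1] * 255).astype(np.uint8)
    return lut[img]                              # remap gray levels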
Algorithms for Spectral Decomposition with Applications to Optical Plume Anomaly Detection
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.; Matthews, Bryan; Das, Santanu
2008-01-01
The analysis of spectral signals for features that represent physical phenomenon is ubiquitous in the science and engineering communities. There are two main approaches that can be taken to extract relevant features from these high-dimensional data streams. The first set of approaches relies on extracting features using a physics-based paradigm where the underlying physical mechanism that generates the spectra is used to infer the most important features in the data stream. We focus on a complementary methodology that uses a data-driven technique that is informed by the underlying physics but also has the ability to adapt to unmodeled system attributes and dynamics. We discuss the following four algorithms: Spectral Decomposition Algorithm (SDA), Non-Negative Matrix Factorization (NMF), Independent Component Analysis (ICA) and Principal Components Analysis (PCA) and compare their performance on a spectral emulator which we use to generate artificial data with known statistical properties. This spectral emulator mimics the real-world phenomena arising from the plume of the space shuttle main engine and can be used to validate the results that arise from various spectral decomposition algorithms and is very useful for situations where real-world systems have very low probabilities of fault or failure. Our results indicate that methods like SDA and NMF provide a straightforward way of incorporating prior physical knowledge while NMF with a tuning mechanism can give superior performance on some tests. We demonstrate these algorithms to detect potential system-health issues on data from a spectral emulator with tunable health parameters.
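For instance, the NMF step can be exercised with scikit-learn on stand-in data; the random matrix below merely plays the role of the emulator's spectra, and the residual norm is one simple way to flag anomalous frames.

import numpy as np
from sklearn.decomposition import NMF

# Illustrative stand-in data: 200 nonnegative spectra of 1024 channels each
# (in the paper's setting these would come from the spectral emulator).
rng = np.random.default_rng(0)
spectra = rng.random((200, 1024))

# Factor the spectra into nonnegative basis shapes and per-spectrum weights;
# anomalous frames can then be flagged by unusual weights or residuals.
model = NMF(n_components=5, init="nndsvda", max_iter=500)
weights = model.fit_transform(spectra)     # mixing coefficients
basis = model.components_                  # spectral "building blocks"
residual = np.linalg.norm(spectra - weights @ basis, axis=1)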
Efficient traffic grooming in SONET/WDM BLSR Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Awwal, A S; Billah, A B; Wang, B
2004-04-02
In this paper, we study traffic grooming in SONET/WDM BLSR networks under the uniform all-to-all traffic model with the objective of reducing total network costs (wavelength and electronic multiplexing costs), in particular minimizing the number of ADMs while using the optimal number of wavelengths. We derive a new, tighter lower bound for the number of wavelengths when the number of nodes is a multiple of 4. We show that this lower bound is achievable. All previous ADM lower bounds, except perhaps one, were derived under the assumption that the magnitude of the traffic streams (r) is one unit (r = 1) with respect to the wavelength capacity granularity g. We then derive new, more general and tighter lower bounds for the number of ADMs, subject to the use of the optimal number of wavelengths, and propose heuristic algorithms (a circle construction algorithm and a circle grooming algorithm) that try to minimize the number of ADMs while using the optimal number of wavelengths in BLSR networks. Both the bounds and the algorithms are applicable to any value of r and to different wavelength granularities g. Performance evaluation shows that, wherever applicable, our lower bounds are at least as good as existing bounds and are much tighter in many cases. Our proposed heuristic grooming algorithms perform very well with traffic streams of larger magnitude. The resulting number of ADMs required is very close to the corresponding lower bounds derived in this paper.
Parallel Processing of Broad-Band PPM Signals
NASA Technical Reports Server (NTRS)
Gray, Andrew; Kang, Edward; Lay, Norman; Vilnrotter, Victor; Srinivasan, Meera; Lee, Clement
2010-01-01
A parallel-processing algorithm and a hardware architecture to implement the algorithm have been devised for timeslot synchronization in the reception of pulse-position-modulated (PPM) optical or radio signals. As in the cases of some prior algorithms and architectures for parallel, discrete-time, digital processing of signals other than PPM, an incoming broadband signal is divided into multiple parallel narrower-band signals by means of sub-sampling and filtering. The number of parallel streams is chosen so that the frequency content of the narrower-band signals is low enough to enable processing by relatively-low speed complementary metal oxide semiconductor (CMOS) electronic circuitry. The algorithm and architecture are intended to satisfy requirements for time-varying time-slot synchronization and post-detection filtering, with correction of timing errors independent of estimation of timing errors. They are also intended to afford flexibility for dynamic reconfiguration and upgrading. The architecture is implemented in a reconfigurable CMOS processor in the form of a field-programmable gate array. The algorithm and its hardware implementation incorporate three separate time-varying filter banks for three distinct functions: correction of sub-sample timing errors, post-detection filtering, and post-detection estimation of timing errors. The design of the filter bank for correction of timing errors, the method of estimating timing errors, and the design of a feedback-loop filter are governed by a host of parameters, the most critical one, with regard to processing very broadband signals with CMOS hardware, being the number of parallel streams (equivalently, the rate-reduction parameter).
40 CFR 98.440 - Definition of the source category.
Code of Federal Regulations, 2011 CFR
2011-07-01
... comprises any well or group of wells that inject a CO2 stream for long-term containment in subsurface... where a CO2 stream is being injected in subsurface geologic formations to enhance the recovery of oil or natural gas unless one of the following applies: (1) The owner or operator injects the CO2 stream for long...
40 CFR 98.440 - Definition of the source category.
Code of Federal Regulations, 2013 CFR
2013-07-01
... comprises any well or group of wells that inject a CO2 stream for long-term containment in subsurface... where a CO2 stream is being injected in subsurface geologic formations to enhance the recovery of oil or natural gas unless one of the following applies: (1) The owner or operator injects the CO2 stream for long...
40 CFR 98.440 - Definition of the source category.
Code of Federal Regulations, 2014 CFR
2014-07-01
... comprises any well or group of wells that inject a CO2 stream for long-term containment in subsurface... where a CO2 stream is being injected in subsurface geologic formations to enhance the recovery of oil or natural gas unless one of the following applies: (1) The owner or operator injects the CO2 stream for long...
40 CFR 98.440 - Definition of the source category.
Code of Federal Regulations, 2012 CFR
2012-07-01
... comprises any well or group of wells that inject a CO2 stream for long-term containment in subsurface... where a CO2 stream is being injected in subsurface geologic formations to enhance the recovery of oil or natural gas unless one of the following applies: (1) The owner or operator injects the CO2 stream for long...
Recurrent solar wind streams observed by interplanetary scintillation of 3C 48
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watanabe, T.; Kakinuma, T.
1972-10-01
The interplanetary scintillation of 3C 48 was observed by two spaced receivers (69.3 MHz) during February and March 1971. The recurrent property of the observed velocity increases of the solar wind is clearly seen, and their recurrence period is 24 to 25 days. This value is shorter than the synodic period of 27 days, but the deviation may be explained by the displacement of the closest point to the Sun on the line of sight to 3C 48. A comparison with wind velocity data obtained by space probes shows that the observed enhancements are associated with two high-velocity streams corotating around the Sun. The enhancements of the scintillation index precede the velocity enhancements by about two days, and it may be concluded that this enhancement of the scintillation index results from the compressed region of interplanetary plasma formed in front of the high-velocity corotating stream. (auth)
Stream processing health card application.
Polat, Seda; Gündem, Taflan Imre
2012-10-01
In this paper, we propose a data stream management system embedded in a smart card for handling and storing user-specific summaries of streaming data coming from medical sensor measurements and/or other medical measurements. The data stream management system that we propose for a health card can handle the stream data rates of commonly known medical devices and sensors. It incorporates a context awareness feature that acts according to user-specific information. The proposed system is cheap and provides security for private data by enhancing the capabilities of smart health cards. The stream data management system was tested on a real smart card using both synthetic and real data.
Determination of trunk streams via using flow accumulation values
NASA Astrophysics Data System (ADS)
Farek, Vladimir
2013-04-01
There is often a problem with the schematisation of catchments and channel networks in broken relief such as sandstone landscapes (with high vertical segmentation, narrow valley lines, crags, sheer rocks, endorheic hollows, etc.). The usual hydrological parameters (subcatchment area, altitude of the highest point of the subcatchment, water discharge), which are mostly used to determine the trunk stream upstream of a junction, are frequently of little use in this kind of relief. We found that for small, relatively homogeneous catchments (in the sense of land use, geological subsurface, anthropogenic influence, etc.) that are extremely shaped, the value called "flow accumulation" (FA) can be very useful. This value gives the number of cells of the Digital Elevation Model (DEM) grid that drain to each cell of the catchment. We can expect that the stream channel with higher flow accumulation values represents the main stream. There are three crucial issues with this approach. First, it is necessary to find the most suitable algorithm for calculating flow accumulation in broken relief: various algorithms can have complications with correct flow routing (representation of the divergent or convergent character of the flow) or with keeping the flow paths uninterrupted, and relief with high curvature changes (alternating concave/convex shapes, high steepness changes) causes interruption of flow lines in many algorithms used for hydrological computing. Second, the limits of this approach must be set down (e.g., the size and character of a surveyed catchment). Third, the approach must be verified in reality. We tested it on the sandstone landscape of the Czech Switzerland National Park. The main data source was high-resolution LIDAR (Light Detection and Ranging) DEM snapshots of the surveyed area. These data come from the TU Dresden project called Genesis (Geoinformation Networks For The Cross-Border National Park Region Saxon-Bohemian Switzerland). In order to solve these issues, GIS applications (e.g., GRASS GIS and its hydrological modules such as r.terraflow, r.watershed, r.flow, etc.) are very useful. Key words: channel network, flow accumulation, Digital Elevation Model, LIDAR, broken relief, GIS GRASS
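A compact Python sketch of classic D8 flow accumulation (single-direction routing, one of the simplest of the algorithm choices discussed above) is given below; edge and pit handling are deliberately naive.

import numpy as np

def d8_flow_accumulation(dem):
    """Visit cells from highest to lowest and pass each cell's accumulated
    area to its steepest downslope neighbor. Edge cells drain off the grid;
    pits simply retain their accumulation."""
    rows, cols = dem.shape
    acc = np.ones_like(dem, dtype=np.float64)      # each cell drains itself
    order = np.dstack(np.unravel_index(np.argsort(dem, axis=None)[::-1],
                                       dem.shape))[0]
    for r, c in order:
        best, target = 0.0, None
        for dr in (-1, 0, 1):                      # inspect the 8 neighbors
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols:
                    drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                    if drop > best:
                        best, target = drop, (rr, cc)
        if target:                                 # steepest-descent neighbor
            acc[target] += acc[r, c]
    return acc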
Approximate Model for Turbulent Stagnation Point Flow.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dechant, Lawrence
2016-01-01
Here we derive an approximate turbulent self-similar model for a class of favorable-pressure-gradient wedge-like flows, focusing on the stagnation point limit. While the self-similar model provides a useful gross flow field estimate, this approach must be combined with a near-wall model to determine skin friction and, by the Reynolds analogy, the heat transfer coefficient. The combined approach is developed in detail for the stagnation point flow problem, where turbulent skin friction and Nusselt number results are obtained. Comparison to the classical Van Driest (1958) result suggests overall reasonable agreement. Though the model is only valid near the stagnation region of cylinders and spheres, it nonetheless provides a reasonable model for overall cylinder and sphere heat transfer. The enhancement effect of free-stream turbulence upon the laminar flow is used to derive a similar expression which is valid for turbulent flow. Examination of free-stream-enhanced laminar flow suggests that, rather than an enhancement of laminar flow behavior, free-stream disturbance results in early transition to turbulent stagnation point behavior. Excellent agreement is shown between enhanced laminar flow and turbulent flow behavior for high levels, e.g. 5%, of free-stream turbulence. Finally, the blunt body turbulent stagnation results are shown to provide realistic heat transfer results for turbulent jet impingement problems.
Rehabilitating agricultural streams in Australia with wood: a review.
Lester, Rebecca E; Boulton, Andrew J
2008-08-01
Worldwide, the ecological condition of streams and rivers has been impaired by agricultural practices such as broadscale modification of catchments, high nutrient and sediment inputs, loss of riparian vegetation, and altered hydrology. Typical responses include channel incision, excessive sedimentation, declining water quality, and loss of in-stream habitat complexity and biodiversity. We review these impacts, focusing on the potential benefits and limitations of wood reintroduction as a transitional rehabilitation technique in these agricultural landscapes, using Australian examples. In streams, wood plays key roles in shaping velocity and sedimentation profiles, forming pools, and strengthening banks. In the simplified channels typical of many agricultural streams, wood provides habitat for fauna, substrate for biofilms, and refuge from predators and flow extremes, and enhances in-stream diversity of fish and macroinvertebrates. Most previous restoration studies involving wood reintroduction have been in forested landscapes, but some results might be extrapolated to agricultural streams. In these studies, wood enhanced the diversity of fish and macroinvertebrates, increased storage of organic material and sediment, and improved bed and bank stability. Failure to meet restoration objectives appeared most likely where channel incision was severe and in highly degraded environments. Methods for wood reintroduction have logistical advantages over many other restoration techniques, being relatively low cost and low maintenance. Wood reintroduction is a viable transitional restoration technique for agricultural landscapes, likely to rapidly improve stream condition if sources of colonists are viable and water quality is suitable.
Enhanced round robin CPU scheduling with burst time based time quantum
NASA Astrophysics Data System (ADS)
Indusree, J. R.; Prabadevi, B.
2017-11-01
Process scheduling is a very important function of an operating system. The best-known process-scheduling algorithms are the First Come First Serve (FCFS) algorithm, the Round Robin (RR) algorithm, the Priority scheduling algorithm, and the Shortest Job First (SJF) algorithm. Compared to its peers, the Round Robin (RR) algorithm has the advantage that it gives a fair share of the CPU to the processes already in the ready queue. The effectiveness of the RR algorithm greatly depends on the chosen time quantum value. Through this research paper, we propose an enhanced algorithm called Enhanced Round Robin with Burst-time based Time Quantum (ERRBTQ), a process scheduling algorithm which calculates the time quantum from the burst times of the processes already in the ready queue. The experimental results and analysis of the ERRBTQ algorithm clearly indicate improved performance when compared with conventional RR and its variants.
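Since the exact quantum formula is not reproduced here, the sketch below uses the mean of the remaining burst times in the ready queue, recomputed each cycle, as a hedged stand-in for ERRBTQ's burst-time based quantum.

from collections import deque

def round_robin_dynamic_quantum(bursts):
    """Round robin where the quantum is recomputed each cycle as the mean
    of the remaining burst times (a stand-in rule, not ERRBTQ's exact one)."""
    queue = deque(enumerate(bursts))   # (pid, remaining burst)
    remaining = dict(queue)
    time, finish = 0, {}
    while queue:
        quantum = max(1, sum(remaining.values()) // len(remaining))
        for _ in range(len(queue)):    # one full cycle over the ready queue
            pid, left = queue.popleft()
            run = min(quantum, left)
            time += run
            if left - run:
                queue.append((pid, left - run))
                remaining[pid] = left - run
            else:
                finish[pid] = time
                del remaining[pid]
    return finish                      # completion time per process

print(round_robin_dynamic_quantum([24, 3, 3]))   # {1: 13, 2: 16, 0: 30}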
Water cycle algorithm: A detailed standard code
NASA Astrophysics Data System (ADS)
Sadollah, Ali; Eskandar, Hadi; Lee, Ho Min; Yoo, Do Guen; Kim, Joong Hoon
Inspired by the observation of the water cycle process and movements of rivers and streams toward the sea, a population-based metaheuristic algorithm, the water cycle algorithm (WCA) has recently been proposed. Lately, an increasing number of WCA applications have appeared and the WCA has been utilized in different optimization fields. This paper provides detailed open source code for the WCA, of which the performance and efficiency has been demonstrated for solving optimization problems. The WCA has an interesting and simple concept and this paper aims to use its source code to provide a step-by-step explanation of the process it follows.
Alves, Julio Cesar L; Henriques, Claudete B; Poppi, Ronei J
2014-01-03
The use of near infrared (NIR) spectroscopy combined with chemometric methods has been widely adopted in the petroleum and petrochemical industry and provides suitable methods for process control and quality control. The support vector machine (SVM) algorithm has been demonstrated to be a powerful chemometric tool for the development of classification models due to its nonlinear modeling ability and high generalization capability; these characteristics can be especially important for treating near infrared (NIR) spectroscopy data of complex mixtures such as petroleum refinery streams. In this work, a study of the performance of the support vector machine algorithm for classification was carried out, using C-SVC and ν-SVC, applied to near infrared (NIR) spectroscopy data of different types of streams that make up the diesel pool in a petroleum refinery: light gas oil, heavy gas oil, hydrotreated diesel, kerosene, heavy naphtha, and external diesel. In addition to these six streams, the diesel final blend produced in the refinery was added to complete the data set. C-SVC and ν-SVC classification models with 2, 4, 6, and 7 classes were developed for comparison between their results and also for comparison with the results of soft independent modeling of class analogy (SIMCA) models. The superior performance of the SVC models is demonstrated, especially using ν-SVC for the development of classification models for 6 and 7 classes, leading to an improvement of sensitivity on validation sample sets of 24% and 15%, respectively, when compared to SIMCA models, providing better identification of the chemical compositions of different diesel pool refinery streams. Copyright © 2013 Elsevier B.V. All rights reserved.
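With scikit-learn, a ν-SVC model of the kind described can be set up as follows; the random matrix stands in for preprocessed NIR absorbance spectra, and the seven labels for the six streams plus the final blend.

import numpy as np
from sklearn.svm import NuSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative stand-in data: 700 "spectra" of 500 wavelength channels with
# one of 7 stream labels; real preprocessed spectra would replace this.
rng = np.random.default_rng(0)
X = rng.random((700, 500))
y = rng.integers(0, 7, size=700)   # 7 classes: six streams + final blend

# nu bounds the fraction of margin errors/support vectors; the RBF kernel
# provides the nonlinear decision boundaries the paper relies on.
clf = make_pipeline(StandardScaler(), NuSVC(nu=0.1, kernel="rbf", gamma="scale"))
clf.fit(X, y)
print(clf.predict(X[:5]))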
Layer-based buffer aware rate adaptation design for SHVC video streaming
NASA Astrophysics Data System (ADS)
Gudumasu, Srinivas; Hamza, Ahmed; Asbun, Eduardo; He, Yong; Ye, Yan
2016-09-01
This paper proposes a layer-based, buffer-aware rate adaptation design which is able to avoid abrupt video quality fluctuation, reduce re-buffering latency, and improve bandwidth utilization when compared to a conventional simulcast-based adaptive streaming system. The proposed adaptation design schedules DASH segment requests based on the estimated bandwidth, dependencies among video layers, and layer buffer fullness. Scalable HEVC video coding is the latest state-of-the-art video coding technique and can alleviate various issues caused by simulcast-based adaptive video streaming. With scalable coded video streams, the video is encoded once into a number of layers representing different qualities and/or resolutions: a base layer (BL) and one or more enhancement layers (EL), each incrementally enhancing the quality of the lower layers. Such a layer-based coding structure allows fine-granularity rate adaptation for video streaming applications. Two video streaming use cases are presented in this paper. The first use case is to stream HD SHVC video over a wireless network where the available bandwidth varies, and a performance comparison between the proposed layer-based streaming approach and the conventional simulcast streaming approach is provided. The second use case is to stream 4K/UHD SHVC video over a hybrid access network that consists of a 5G millimeter wave high-speed wireless link and a conventional wired or WiFi network. The simulation results verify that the proposed layer-based rate adaptation approach is able to utilize the bandwidth more efficiently. As a result, a more consistent viewing experience with higher quality video content and minimal video quality fluctuations can be presented to the user.
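The scheduling decision can be sketched as a small Python function; the dependency and buffer-target rules below are illustrative simplifications of the proposed design, not its exact logic.

def schedule_next_request(layer_buffers, layer_bitrates, bandwidth_est,
                          target_buffer=20.0):
    """Request the next segment of the lowest layer whose buffer is below
    target, provided the layers it depends on are buffered ahead of it and
    the cumulative bitrate up to that layer fits the bandwidth estimate."""
    cumulative = 0.0
    for layer, (buf, rate) in enumerate(zip(layer_buffers, layer_bitrates)):
        cumulative += rate
        deps_ok = all(layer_buffers[d] > buf for d in range(layer))
        if buf < target_buffer and deps_ok and cumulative <= bandwidth_est:
            return layer           # fetch the next segment of this layer
    return None                    # all buffers healthy or bandwidth too low

# Example: the base layer buffer is lowest, so BL is requested first.
print(schedule_next_request([4.0, 8.0], [1.5, 2.5], bandwidth_est=5.0))  # 0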
A secure transmission scheme of streaming media based on the encrypted control message
NASA Astrophysics Data System (ADS)
Li, Bing; Jin, Zhigang; Shu, Yantai; Yu, Li
2007-09-01
As the use of streaming media applications has increased dramatically in recent years, streaming media security has become an important requirement for protecting privacy. This paper proposes a new encryption scheme in view of the characteristics of streaming media and the disadvantages of existing methods: encrypt the control message in the streaming media with a high security level, and permute and confuse the non-control-message data according to the corresponding control message. Here the so-called control message refers to the key data of the streaming media, including the streaming media header, the header of each video frame, and the seed key. We encrypt the control message using a public-key encryption algorithm that can provide a high security level, such as RSA. At the same time, we use the seed key to generate a key stream, from which the permutation list P corresponding to each GOP (group of pictures) is derived. The plaintext of the non-control message is XORed with the key stream to obtain an intermediate ciphertext, which is then permuted according to P. The decryption process is the inverse of the above. We have set up a testbed for the scheme and found it is six to eight times faster than the conventional method. It can be applied not only between PCs but also between handheld devices.
NASA Astrophysics Data System (ADS)
Plachta, Kamil
2016-04-01
The paper presents a new algorithm that uses a combination of two BRDF models: the Torrance-Sparrow model and the HTSG model. Knowledge of the technical parameters of a surface is especially useful in the construction of a solar concentrator. The concentrator directs the reflected solar radiation onto the surface of photovoltaic panels, increasing the amount of incident radiance. Software applying the algorithm allows the surface parameters of the solar concentrator to be calculated. Simulations were performed showing the share of the diffuse component and the directional component in the reflected stream for surfaces made of particular materials. The impact of the share of each component in the reflected stream on the efficiency of the solar concentrator and the photovoltaic surface is also described. Subsequently, simulations of the changes in voltage, current, and power output of monocrystalline photovoltaic panels installed in a solar concentrator system were made for selected surface materials.
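A minimal evaluation of the Torrance-Sparrow part of such a combination can be sketched in Python as follows; the Beckmann distribution, Schlick Fresnel, and V-cavity geometry terms are standard textbook choices, and all parameter values are illustrative rather than fitted to concentrator materials.

import numpy as np

def torrance_sparrow_brdf(n, l, v, m=0.3, f0=0.9, kd=0.1):
    """Beckmann facet distribution D, Schlick Fresnel F, V-cavity geometric
    attenuation G, plus a Lambertian diffuse term (all values illustrative)."""
    h = (l + v) / np.linalg.norm(l + v)                  # half-angle vector
    nl, nv = max(n @ l, 1e-6), max(n @ v, 1e-6)
    nh, vh = max(n @ h, 1e-6), max(v @ h, 1e-6)
    t2 = (1.0 - nh**2) / nh**2                           # tan^2 facet angle
    D = np.exp(-t2 / m**2) / (np.pi * m**2 * nh**4)      # Beckmann distribution
    F = f0 + (1.0 - f0) * (1.0 - vh)**5                  # Schlick Fresnel
    G = min(1.0, 2.0 * nh * nv / vh, 2.0 * nh * nl / vh) # shadowing/masking
    specular = D * F * G / (4.0 * nl * nv)
    return kd / np.pi + (1.0 - kd) * specular            # diffuse + directional

n = np.array([0.0, 0.0, 1.0])
l = np.array([0.0, 0.5, 1.0]); l /= np.linalg.norm(l)
v = np.array([0.0, -0.5, 1.0]); v /= np.linalg.norm(v)
print(torrance_sparrow_brdf(n, l, v))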
Real-time aerodynamic heating and surface temperature calculations for hypersonic flight simulation
NASA Technical Reports Server (NTRS)
Quinn, Robert D.; Gong, Leslie
1990-01-01
A real-time heating algorithm was derived and installed on the Ames Research Center Dryden Flight Research Facility real-time flight simulator. This program can calculate two- and three-dimensional stagnation point surface heating rates and surface temperatures. The two-dimensional calculations can be made with or without leading-edge sweep. In addition, upper and lower surface heating rates and surface temperatures for flat plates, wedges, and cones can be calculated. Laminar or turbulent heating can be calculated, with boundary-layer transition made a function of free-stream Reynolds number and free-stream Mach number. Real-time heating rates and surface temperatures calculated for a generic hypersonic vehicle are presented and compared with more exact values computed by a batch aeroheating program. As these comparisons show, the heating algorithm used on the flight simulator calculates surface heating rates and temperatures well within the accuracy required to evaluate flight profiles for acceptable heating trajectories.
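As a rough illustration of the kind of calculation involved, the sketch below computes flat-plate convective heating with a laminar/turbulent switch on free-stream Reynolds number, using the classical Blasius and Colburn Stanton-number correlations. The correlations, transition threshold, and property values are textbook assumptions, not the simulator's actual algorithm.

```python
# Hedged sketch: flat-plate convective heating with a laminar/turbulent
# switch on free-stream Reynolds number. Correlations and the transition
# threshold are textbook flat-plate values, not the simulator's code.

RE_TRANSITION = 5.0e5  # assumed transition Reynolds number
PRANDTL = 0.72         # air

def stanton(re_x: float) -> float:
    if re_x < RE_TRANSITION:                             # laminar (Blasius)
        return 0.332 * re_x**-0.5 * PRANDTL**(-2.0 / 3.0)
    return 0.0296 * re_x**-0.2 * PRANDTL**(-2.0 / 3.0)   # turbulent (Colburn)

def heating_rate(rho, u, cp, t_aw, t_wall, x, mu):
    """Surface heat flux q [W/m^2] at distance x from the leading edge."""
    re_x = rho * u * x / mu
    return stanton(re_x) * rho * u * cp * (t_aw - t_wall)

# Example: high-speed flight at ~10 km altitude (illustrative numbers only).
q = heating_rate(rho=0.41, u=900.0, cp=1005.0,
                 t_aw=620.0, t_wall=400.0, x=1.0, mu=1.46e-5)
print(f"q = {q/1000:.1f} kW/m^2")
```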
NASA Technical Reports Server (NTRS)
Doxley, Charles A.
2016-01-01
In the current world of applications that use reconfigurable technology implemented on field programmable gate arrays (FPGAs), there is a need for flexible architectures that can grow as the systems evolve. A project has limited resources and a fixed set of requirements that development efforts are tasked to meet. Designers must develop robust solutions that practically meet the current customer demands and also have the ability to grow for future performance. This paper describes the development of a high speed serial data streaming algorithm that allows for transmission of multiple data channels over a single serial link. The technique has the ability to change to meet new applications developed for future design considerations. This approach uses the Xilinx Serial RapidIO LOGICORE Solution to implement a flexible infrastructure to meet the current project requirements with the ability to adapt future system designs.
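The abstract names Serial RapidIO but gives no frame format; the following sketch shows a generic way to multiplex several logical channels over one serial byte stream with a small per-packet header. The 3-byte header layout is an illustrative assumption and is unrelated to the actual RapidIO packet format.

```python
# Generic channel-multiplexing sketch: several logical data channels
# share one serial byte stream. The header (channel id + payload length)
# is an illustrative assumption, not the RapidIO packet format.
import struct

def mux_packet(channel: int, payload: bytes) -> bytes:
    assert 0 <= channel < 256 and len(payload) < 65536
    return struct.pack(">BH", channel, len(payload)) + payload

def demux_stream(stream: bytes):
    """Yield (channel, payload) tuples from a concatenated packet stream."""
    pos = 0
    while pos < len(stream):
        channel, length = struct.unpack_from(">BH", stream, pos)
        pos += 3
        yield channel, stream[pos:pos + length]
        pos += length

link = mux_packet(0, b"sensor A") + mux_packet(3, b"sensor B")
for ch, data in demux_stream(link):
    print(ch, data)
```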
Algorithm applying a modified BRDF function in Λ-ridge concentrator of solar radiation
NASA Astrophysics Data System (ADS)
Plachta, Kamil
2015-05-01
This paper presents an algorithm that uses a modified BRDF function. It allows the calculation of the parameters of a Λ-ridge concentrator system. The concentrator directs reflected solar radiation onto the photovoltaic surface, increasing its efficiency. The efficiency of the concentrator depends on the surface characteristics of the material it is made of, the angle of the photovoltaic panel, and the resolution of the tracking system. The paper shows a method of modeling the surface using the BRDF function and describes its basic parameters, e.g. roughness and the components of the reflected stream. A cost calculation of the chosen models, incorporating the BRDF function modification presented in this article, has been made. The author's own simulation program makes it possible to choose the appropriate material for the construction of a Λ-ridge concentrator, generate the micro-surface of the material, and simulate the shape and components of the reflected stream.
A review on "A Novel Technique for Image Steganography Based on Block-DCT and Huffman Encoding"
NASA Astrophysics Data System (ADS)
Das, Rig; Tuithung, Themrichon
2013-03-01
This paper reviews the embedding and extraction algorithm proposed by A. Nag, S. Biswas, D. Sarkar and P. P. Sarkar in "A Novel Technique for Image Steganography based on Block-DCT and Huffman Encoding," International Journal of Computer Science and Information Technology, Volume 2, Number 3, June 2010 [3], and shows that extraction of the secret image is not possible for the algorithm proposed in [3]. An 8-bit cover image is divided into non-overlapping blocks, and a two-dimensional Discrete Cosine Transformation (2-D DCT) is performed on each of the blocks. Huffman encoding is performed on an 8-bit secret image, and each bit of the Huffman-encoded bit stream is embedded in the frequency domain by altering the LSB of the DCT coefficients of the cover image blocks. The Huffman-encoded bit stream and Huffman table
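To make the embedding step concrete, here is a minimal sketch of LSB embedding in block-DCT coefficients, assuming 8x8 blocks and one bit per chosen mid-frequency coefficient. The block size, coefficient position, and rounding policy are illustrative assumptions, not the reviewed paper's exact settings.

```python
# Minimal sketch of one-bit-per-block LSB embedding in 2-D DCT
# coefficients. The 8x8 block size and the (4, 3) mid-frequency
# coefficient position are illustrative assumptions.
import numpy as np
from scipy.fft import dctn, idctn

POS = (4, 3)  # assumed mid-frequency coefficient used for embedding

def embed_bit(block: np.ndarray, bit: int) -> np.ndarray:
    coeffs = dctn(block.astype(float), norm="ortho")
    c = int(round(coeffs[POS]))
    coeffs[POS] = (c & ~1) | bit          # overwrite the LSB
    return idctn(coeffs, norm="ortho")

def extract_bit(block: np.ndarray) -> int:
    coeffs = dctn(block.astype(float), norm="ortho")
    return int(round(coeffs[POS])) & 1

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(8, 8))
stego = embed_bit(cover, 1)
print(extract_bit(stego))  # 1 (before the stego block is rounded to ints)
```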
Object-based media and stream-based computing
NASA Astrophysics Data System (ADS)
Bove, V. Michael, Jr.
1998-03-01
Object-based media refers to the representation of audiovisual information as a collection of objects - the result of scene-analysis algorithms - and a script describing how they are to be rendered for display. Such multimedia presentations can adapt to viewing circumstances as well as to viewer preferences and behavior, and can provide a richer link between content creator and consumer. With faster networks and processors, such ideas become applicable to live interpersonal communications as well, creating a more natural and productive alternative to traditional videoconferencing. This paper outlines examples of object-based media algorithms and applications developed by my group, and presents new hardware architectures and software methods that we have developed to meet the computational requirements of object-based and other advanced media representations. In particular, we describe stream-based processing, which enables automatic run-time parallelization of multidimensional signal processing tasks, even given heterogeneous computational resources.
Shot boundary detection and label propagation for spatio-temporal video segmentation
NASA Astrophysics Data System (ADS)
Piramanayagam, Sankaranaryanan; Saber, Eli; Cahill, Nathan D.; Messinger, David
2015-02-01
This paper proposes a two-stage algorithm for streaming video segmentation. In the first stage, shot boundaries are detected within a window of frames by comparing the dissimilarity between 2-D segmentations of each frame. In the second stage, the 2-D segments are propagated across the window of frames in both the spatial and temporal directions. The window is moved across the video to find all shot transitions and obtain spatio-temporal segments simultaneously. As opposed to techniques that operate on the entire video, the proposed approach consumes significantly less memory and enables segmentation of lengthy videos. We tested our segmentation-based shot detection method on the TRECVID 2007 video dataset and compared it with block-based techniques. Cut detection results on the TRECVID 2007 dataset indicate that our algorithm is comparable to the best of the block-based methods. The streaming video segmentation routine also achieves promising results on a challenging video segmentation benchmark database.
Two-stream Convolutional Neural Network for Methane Emissions Quantification
NASA Astrophysics Data System (ADS)
Wang, J.; Ravikumar, A. P.; McGuire, M.; Bell, C.; Tchapmi, L. P.; Brandt, A. R.
2017-12-01
Methane, a key component of natural gas, has a 25x higher global warming potential than carbon dioxide on a 100-year basis. Accurately monitoring and mitigating methane emissions requires cost-effective detection and quantification technologies. Optical gas imaging, one of the most commonly used leak detection technologies, adopted by the Environmental Protection Agency, cannot estimate leak sizes. In this work, we harness advances in computer science to allow for rapid and automatic leak quantification. In particular, we utilize two-stream deep Convolutional Networks (ConvNets) to estimate leak size by capturing complementary spatial information from still plume frames and temporal information from plume motion between frames. We built large leak datasets for training and evaluation purposes by collecting about 20 videos (i.e., 397,400 frames) of leaks. The videos were recorded at six distances from the source, covering 10-60 ft. Leak sources included natural gas well-heads, separators, and tanks. All frames were labeled with a true leak size drawn from eight levels ranging from 0 to 140 MCFH. Preliminary analysis shows that two-stream ConvNets provide a significant accuracy advantage over single-stream ConvNets. The spatial-stream ConvNet achieves an accuracy of 65.2% by extracting important features, including texture, plume area, and pattern. The temporal stream, fed by the results of optical flow analysis, results in an accuracy of 58.3%. The integration of the two streams gives a combined accuracy of 77.6%. For future work, we will split the training and testing datasets in distinct ways in order to test the generalization of the algorithm to different leak sources. Several analytic metrics, including the confusion matrix and visualization of key features, will be used to understand accuracy rates and occurrences of false positives. The quantification algorithm can help to find and fix super-emitters, and improve the cost-effectiveness of leak detection and repair programs.
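A minimal PyTorch sketch of the two-stream idea described: late fusion of a spatial stream (RGB frame) and a temporal stream (stacked optical flow) into the 8 leak-size classes. The layer sizes and fusion-by-concatenation choice are assumptions for illustration, not the authors' exact network.

```python
# Minimal two-stream ConvNet sketch with late fusion by feature
# concatenation. Layer sizes and the fusion strategy are illustrative
# assumptions, not the authors' exact architecture.
import torch
import torch.nn as nn

def small_cnn(in_channels: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class TwoStreamNet(nn.Module):
    def __init__(self, n_classes: int = 8, flow_stack: int = 10):
        super().__init__()
        self.spatial = small_cnn(3)                # still RGB plume frame
        self.temporal = small_cnn(2 * flow_stack)  # stacked x/y optical flow
        self.head = nn.Linear(32 + 32, n_classes)  # late fusion

    def forward(self, frame, flow):
        feats = torch.cat([self.spatial(frame), self.temporal(flow)], dim=1)
        return self.head(feats)

net = TwoStreamNet()
frame = torch.randn(4, 3, 112, 112)    # batch of RGB frames
flow = torch.randn(4, 20, 112, 112)    # batch of stacked flow fields
print(net(frame, flow).shape)          # torch.Size([4, 8])
```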
NASA Astrophysics Data System (ADS)
Townsend, S. L.; Ziegler, S. E.
2005-05-01
The effect of solar radiation on dissolved organic matter (DOM) utilization was studied in two contrasting streams from June 2002 through October 2004. Moores Creek is an agricultural stream with elevated nutrient and dissolved organic carbon (DOC) concentrations. Huey Hollow is a forested stream with low nutrient and DOC concentrations. A series of experiments was conducted seasonally to assess how solar radiation influenced DOM utilization. Exposure of DOM to solar radiation significantly decreased its utilization during most seasons in both streams. Each stream experienced one seasonal period when exposure of DOM significantly increased bacterial production; during these periods, DOM appeared to be the least bioavailable and most photochemically reactive. Interestingly, in spring, when the bioavailability of DOM was lowest in Moores Creek, solar radiation exposure further reduced DOM bioavailability. Elevated ammonium concentrations during this spring experiment suggest that photochemically enhanced humification may have been an important mechanism influencing DOM cycling. Bioassays using 15N-labeled ammonium indicated no significant effect of elevated ammonium on the utilization of DOM in either stream in fall 2004. Detection of elevated 15N in the DOM fractions, however, would reveal light-stimulated humification under elevated ammonium concentrations not detected with the bioassay.
Klunder, Edgar B [Bethel Park, PA]
2011-08-09
The method relates to particle separation from a feed stream. The feed stream is injected directly into the froth zone of a vertical flotation column in the presence of a counter-current reflux stream. A froth breaker generates a reflux stream and a concentrate stream, and the reflux stream is injected into the froth zone to mix with the interstitial liquid between bubbles in the froth zone. Counter-current flow between the plurality of bubbles and the interstitial liquid facilitates the attachment of higher hydrophobicity particles to bubble surfaces as lower hydrophobicity particles detach. The height of the feed stream injection and the reflux ratio may be varied in order to optimize the concentrate or tailing stream recoveries desired based on existing operating conditions.
OLYMPEX Data Workshop: GPM View
NASA Technical Reports Server (NTRS)
Petersen, W.
2017-01-01
OLYMPEX Primary Objectives: Datasets to enable: (1) Direct validation over complex terrain at multiple scales, liquid and frozen precip types, (a) Do we capture terrain and synoptic regime transitions, orographic enhancements/structure, full range of precipitation intensity (e.g., very light to heavy) and types, spatial variability? (b) How well can we estimate space/time-accumulated precipitation over terrain (liquid + frozen)? (2) Physical validation of algorithms in mid-latitude cold season frontal systems over ocean and complex terrain, (a) What are the column properties of frozen, melting, liquid hydrometeors-their relative contributions to estimated surface precipitation, transition under the influence of terrain gradients, and systematic variability as a function of synoptic regime? (3) Integrated hydrologic validation in complex terrain, (a) Can satellite estimates be combined with modeling over complex topography to drive improved products (assimilation, downscaling) [Level IV products] (b) What are capabilities and limitations for use of satellite-based precipitation estimates in stream/river flow forecasting?
Dust-penetrating (DUSPEN) see-through lidar for helicopter situational awareness in DVE
NASA Astrophysics Data System (ADS)
Murray, James T.; Seely, Jason; Plath, Jeff; Gotfredson, Eric; Engel, John; Ryder, Bill; Van Lieu, Neil; Goodwin, Ron; Wagner, Tyler; Fetzer, Greg; Kridler, Nick; Melancon, Chris; Panici, Ken; Mitchell, Anthony
2013-10-01
Areté Associates recently developed and flight tested a next-generation low-latency near real-time dust-penetrating (DUSPEN) imaging lidar system. These tests were accomplished for Naval Air Warfare Center (NAWC) Aircraft Division (AD) 4.5.6 (EO/IR Sensor Division) under the Office of Naval Research (ONR) Future Naval Capability (FNC) Helicopter Low-Level Operations (HELO) Product 2 program. Areté's DUSPEN system captures full lidar waveforms and uses sophisticated real-time detection and filtering algorithms to discriminate hard target returns from dust and other obscurants. Down-stream 3D image processing methods are used to enhance pilot visualization of threat objects and ground features during severe DVE conditions. This paper presents results from these recent flight tests in full brown-out conditions at Yuma Proving Grounds (YPG) from a CH-53E Super Stallion helicopter platform.
Infrared traffic image enhancement algorithm based on dark channel prior and gamma correction
NASA Astrophysics Data System (ADS)
Zheng, Lintao; Shi, Hengliang; Gu, Ming
2017-07-01
The infrared traffic image acquired by intelligent traffic surveillance equipment has low contrast, few hierarchical differences in the perception of the image, and a blurred visual effect. Therefore, infrared traffic image enhancement is an indispensable key step applied in nearly all infrared-imaging-based traffic engineering applications. In this paper, we propose an infrared traffic image enhancement algorithm based on the dark channel prior and gamma correction. The dark channel prior, well known as an image dehazing method, is here used for infrared image enhancement for the first time. In the proposed algorithm, the original degraded infrared traffic image is first transformed with the dark channel prior to give an initial enhanced result. A further adjustment based on the gamma curve is needed because the initial enhanced result has lower brightness. Comprehensive validation experiments reveal that the proposed algorithm outperforms the current state-of-the-art algorithms.
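A compact sketch of the two ingredients named in the abstract: a windowed dark channel and a gamma-curve brightness adjustment. The window size and gamma value are illustrative assumptions, and the full dehazing-style transmission estimate is omitted.

```python
# Sketch of the two ingredients named above: a windowed dark channel
# and gamma correction. Window size and gamma value are assumptions;
# the full transmission-map estimation of dehazing is omitted.
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img: np.ndarray, window: int = 15) -> np.ndarray:
    """Per-pixel minimum over color channels and a local window."""
    per_pixel_min = img.min(axis=2)
    return minimum_filter(per_pixel_min, size=window)

def gamma_correct(img: np.ndarray, gamma: float = 0.6) -> np.ndarray:
    """Brighten a [0, 1] image with a gamma curve (gamma < 1 brightens)."""
    return np.clip(img, 0.0, 1.0) ** gamma

rng = np.random.default_rng(0)
frame = rng.random((64, 64, 3)) * 0.3      # dim synthetic "infrared" frame
dc = dark_channel(frame)
brightened = gamma_correct(frame)
print(dc.shape, brightened.mean() > frame.mean())  # (64, 64) True
```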
NASA Astrophysics Data System (ADS)
Meneghello, Gianluca; Beyhaghi, Pooriya; Bewley, Thomas
2016-11-01
The identification of an optimized hydrofoil shape depends on an accurate characterization of both its geometry and the incoming, turbulent, free-stream flow. We analyze this dependence using the computationally inexpensive vortex lattice model implemented in AVL, coupled with the recently developed global, derivative-free optimization algorithm implemented in Δ-DOGS. Particular attention is given to the effect of the free-stream turbulence level - as modeled by a change in the viscous drag coefficients - on the optimized values of the parameters describing the three-dimensional shape of the foil. Because the simplicity of AVL, when contrasted with more complex and computationally expensive LES or RANS models, may cast doubt on its usefulness, its validity and limitations are discussed by comparison with water tank measurements, again taking into account the effect of uncertainty in the free-stream characterization.
Vatsa, Mayank; Singh, Richa; Noore, Afzel
2008-08-01
This paper proposes algorithms for iris segmentation, quality enhancement, match score fusion, and indexing to improve both the accuracy and the speed of iris recognition. A curve evolution approach is proposed to effectively segment a nonideal iris image using the modified Mumford-Shah functional. Different enhancement algorithms are concurrently applied on the segmented iris image to produce multiple enhanced versions of the iris image. A support-vector-machine-based learning algorithm selects locally enhanced regions from each globally enhanced image and combines these good-quality regions to create a single high-quality iris image. Two distinct features are extracted from the high-quality iris image. The global textural feature is extracted using the 1-D log polar Gabor transform, and the local topological feature is extracted using Euler numbers. An intelligent fusion algorithm combines the textural and topological matching scores to further improve the iris recognition performance and reduce the false rejection rate, whereas an indexing algorithm enables fast and accurate iris identification. The verification and identification performance of the proposed algorithms is validated and compared with other algorithms using the CASIA Version 3, ICE 2005, and UBIRIS iris databases.
Atmospheric responses to sensible and latent heating fluxes over the Gulf Stream
NASA Astrophysics Data System (ADS)
Minobe, S.; Ida, T.; Takatama, K.
2016-12-01
Air-sea interaction over mid-latitude oceanic fronts such as the Gulf Stream has attracted considerable attention in the last decade. Observational analyses and modelling studies have revealed atmospheric responses over the Gulf Stream - including surface wind convergence, enhanced precipitation, and updrafts penetrating into the middle-to-upper troposphere - roughly on the Gulf Stream current axis or on the warmer flank of the sea-surface temperature (SST) front of the Gulf Stream. For these atmospheric responses, oceanic information should be transmitted to the atmosphere via turbulent heat fluxes, and thus the mechanisms for the atmospheric responses can be better understood by examining the latent and sensible air-sea heat fluxes more closely. The roles of the sensible and latent heat fluxes are therefore examined by conducting a series of numerical experiments using the IPRC Regional Atmospheric Model over the Gulf Stream, applying SST smoothing for latent and sensible heating separately. The results indicate that the sensible and latent heat fluxes affect the atmosphere differently. Sensible heat flux intensifies surface wind convergence to produce a sea-level pressure (SLP) anomaly. Latent heat flux supplies moisture and maintains enhanced precipitation. The different heat flux components cause upward wind velocities at different levels.
NASA Astrophysics Data System (ADS)
Budiman, M. A.; Amalia; Chayanie, N. I.
2018-03-01
Cryptography is the art and science of using mathematical methods to preserve message security. There are two types of cryptography, namely classical and modern cryptography. Nowadays, most people would rather use modern cryptography than classical cryptography because it is harder to break. One classical algorithm is the Zig-zag algorithm, which uses the transposition technique: the original message is unreadable unless the person has the key to decrypt the message. To improve security, the Zig-zag Cipher is combined with the RC4+ Cipher, a symmetric key algorithm in the form of a stream cipher. The two algorithms are combined to form a super-encryption, making the message harder to break by a cryptanalyst. The result showed that the complexity of the combined algorithm is θ(n²), while the complexities of the Zig-zag Cipher and RC4+ Cipher are θ(n²) and θ(n), respectively.
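A minimal sketch of the super-encryption idea: a rail-fence (zig-zag) transposition followed by a stream-cipher XOR. Plain RC4 is used here in place of the RC4+ variant, and the rail count is an assumption about the cipher's exact form.

```python
# Super-encryption sketch: zig-zag (rail fence) transposition followed
# by a stream-cipher XOR. Plain RC4 stands in for the RC4+ variant, and
# the rail count is an illustrative parameter.

def zigzag_encrypt(msg: bytes, rails: int = 3) -> bytes:
    rows = [bytearray() for _ in range(rails)]
    row, step = 0, 1
    for b in msg:                      # walk down and up the rails
        rows[row].append(b)
        if row == 0:
            step = 1
        elif row == rails - 1:
            step = -1
        row += step
    return b"".join(bytes(r) for r in rows)

def rc4_keystream(key: bytes, n: int) -> bytes:
    s = list(range(256))               # key-scheduling algorithm (KSA)
    j = 0
    for i in range(256):
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    out = bytearray()                  # pseudo-random generation (PRGA)
    i = j = 0
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        out.append(s[(s[i] + s[j]) % 256])
    return bytes(out)

def super_encrypt(msg: bytes, key: bytes) -> bytes:
    transposed = zigzag_encrypt(msg)
    ks = rc4_keystream(key, len(transposed))
    return bytes(a ^ b for a, b in zip(transposed, ks))

print(super_encrypt(b"ATTACK AT DAWN", b"secret").hex())
```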
The Effectiveness of Neurofeedback Training in Algorithmic Thinking Skills Enhancement.
Plerou, Antonia; Vlamos, Panayiotis; Triantafillidis, Chris
2017-01-01
Although research on learning difficulties is overall at an advanced stage, studies related to algorithmic thinking difficulties are limited, since interest in this field has been raised only recently. In this paper, an interactive evaluation screener enhanced with neurofeedback elements, for the evaluation of algorithmic task solving, is proposed. The effects of HCI, color, narration and neurofeedback elements were evaluated in the case of algorithmic task assessment. Results suggest enhanced performance in the neurofeedback-trained group in terms of total correct and optimal algorithmic task solutions. Furthermore, the findings suggest that skills concerning the way an algorithm is conceived, designed, applied and evaluated are essentially improved.
A new stream function formulation for the Euler equations
NASA Technical Reports Server (NTRS)
Atkins, H. L.; Hassan, H. A.
1983-01-01
A new stream function formulation is developed for the solution of Euler's equations in the transonic flow region. The stream function and the density are the dependent variables in this method, while the governing equations for adiabatic flow are the momentum equations, which are solved in strong conservation law form. The application of this method does not require a knowledge of the vorticity. The algorithm is combined with the automatic grid solver (GRAPE) of Steger and Sorenson (1979) in order to study arbitrary geometries. Results of the application of this method are presented for the NACA 0012 airfoil at various Mach numbers and angles of attack, and for cylinders. In addition, detailed comparisons are made with other solutions of the Euler equations.
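For reference, a 2-D compressible stream function of the kind used in such formulations is defined so that continuity is satisfied identically; this is the standard textbook definition, and the paper's exact formulation may differ:

```latex
\rho u = \frac{\partial \psi}{\partial y}, \qquad
\rho v = -\frac{\partial \psi}{\partial x}
\qquad\Longrightarrow\qquad
\frac{\partial (\rho u)}{\partial x} + \frac{\partial (\rho v)}{\partial y} = 0 .
```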
Ambient occlusion effects for combined volumes and tubular geometry.
Schott, Mathias; Martin, Tobias; Grosset, A V Pascal; Smith, Sean T; Hansen, Charles D
2013-06-01
This paper details a method for interactive direct volume rendering that computes ambient occlusion effects for visualizations that combine both volumetric and geometric primitives, specifically tube-shaped geometric objects representing streamlines, magnetic field lines or DTI fiber tracts. The algorithm extends the recently presented directional occlusion shading model to allow the rendering of those geometric shapes in combination with a context-providing 3D volume, considering mutual occlusion between structures represented by a volume or geometry. Stream tube geometries are computed using an effective spline-based interpolation and approximation scheme that avoids self-intersection and maintains coherent orientation of the stream tube segments to avoid surface-deforming twists. Furthermore, strategies to reduce the geometric and specular aliasing of the stream tubes are discussed.
A decision support system using combined-classifier for high-speed data stream in smart grid
NASA Astrophysics Data System (ADS)
Yang, Hang; Li, Peng; He, Zhian; Guo, Xiaobin; Fong, Simon; Chen, Huajun
2016-11-01
Large volumes of high-speed streaming data are generated continuously by big power grids. In order to detect and avoid power grid failures, decision support systems (DSSs) are commonly adopted in power grid enterprises. Among all the decision-making algorithms, the incremental decision tree is the most widely used one. In this paper, we propose a combined classifier that is a composite of a cache-based classifier (CBC) and a main tree classifier (MTC). We integrate this classifier into a stream processing engine on top of the DSS such that high-speed streaming data can be transformed into operational intelligence efficiently. Experimental results show that our proposed classifier can return more accurate answers than other existing ones.
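The abstract does not detail how the CBC and MTC cooperate; the sketch below shows one plausible composite, where a small cache of recent labeled instances answers by nearest neighbour when a very close match exists and otherwise defers to the main tree (periodically retrained here, for simplicity, rather than incrementally grown). All of this routing logic is an assumption for illustration.

```python
# Hypothetical composite of a cache-based classifier (CBC) and a main
# tree classifier (MTC). The nearest-neighbour cache, confidence rule,
# and periodic retraining are illustrative assumptions only.
from collections import deque
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class CombinedClassifier:
    def __init__(self, cache_size=200, radius=0.5, retrain_every=100):
        self.cache = deque(maxlen=cache_size)   # recent (x, y) pairs
        self.radius = radius
        self.tree = DecisionTreeClassifier(max_depth=5)
        self.retrain_every = retrain_every
        self.seen = 0
        self.fitted = False

    def learn_one(self, x, y):
        self.cache.append((np.asarray(x), y))
        self.seen += 1
        if self.seen % self.retrain_every == 0:  # refresh the main tree
            X = np.array([c[0] for c in self.cache])
            Y = np.array([c[1] for c in self.cache])
            self.tree.fit(X, Y)
            self.fitted = True

    def predict_one(self, x):
        x = np.asarray(x)
        if self.cache:  # CBC path: answer from a very close cached instance
            dists = [np.linalg.norm(x - cx) for cx, _ in self.cache]
            i = int(np.argmin(dists))
            if dists[i] < self.radius:
                return self.cache[i][1]
        if self.fitted:  # MTC path: fall back to the main tree
            return self.tree.predict(x.reshape(1, -1))[0]
        return None     # cold start

clf = CombinedClassifier()
rng = np.random.default_rng(1)
for _ in range(300):  # stream: label = 1 iff first feature positive
    x = rng.normal(size=2)
    clf.learn_one(x, int(x[0] > 0))
print(clf.predict_one([0.8, -0.2]))  # likely 1
```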
Some Practical Universal Noiseless Coding Techniques
NASA Technical Reports Server (NTRS)
Rice, Robert F.
1994-01-01
Report discusses noiseless data-compression-coding algorithms, performance characteristics and practical consideration in implementation of algorithms in coding modules composed of very-large-scale integrated circuits. Report also has value as tutorial document on data-compression-coding concepts. Coding techniques and concepts in question "universal" in sense that, in principle, applicable to streams of data from variety of sources. However, discussion oriented toward compression of high-rate data generated by spaceborne sensors for lower-rate transmission back to earth.
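As background, a minimal sketch of the Golomb-Rice coding idea underlying Rice's techniques: each non-negative sample is split into a unary-coded quotient and a k-bit binary remainder. The parameter selection and bit-string representation here are illustrative, not the specific coding options in the report.

```python
# Minimal Golomb-Rice coding sketch: each non-negative sample n is split
# into a unary quotient (n >> k) and a k-bit binary remainder.

def rice_encode(n: int, k: int) -> str:
    q, r = n >> k, n & ((1 << k) - 1)
    # Unary quotient ("1"*q + "0") followed by the k remainder bits.
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

def rice_decode(bits: str, k: int) -> int:
    q = bits.index("0")                          # count leading 1s
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r

samples = [3, 0, 7, 12, 1]
k = 2
encoded = [rice_encode(s, k) for s in samples]
assert [rice_decode(b, k) for b in encoded] == samples
print(encoded)  # ['011', '000', '1011', '111000', '001']
```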
Algorithm-based arterial blood sampling recognition increasing safety in point-of-care diagnostics.
Peter, Jörg; Klingert, Wilfried; Klingert, Kathrin; Thiel, Karolin; Wulff, Daniel; Königsrainer, Alfred; Rosenstiel, Wolfgang; Schenk, Martin
2017-08-04
To detect blood withdrawal for patients with arterial blood pressure monitoring to increase patient safety and provide better sample dating. Blood pressure information obtained from a patient monitor was fed as a real-time data stream to an experimental medical framework. This framework was connected to an analytical application which observes changes in systolic, diastolic and mean pressure to determine anomalies in the continuous data stream. Detection was based on an increased mean blood pressure caused by the closing of the withdrawal three-way tap and an absence of systolic and diastolic measurements during this manipulation. For evaluation of the proposed algorithm, measured data from animal studies in healthy pigs were used. Using this novel approach for processing real-time measurement data of arterial pressure monitoring, the exact time of blood withdrawal could be successfully detected retrospectively and in real-time. The algorithm was able to detect 422 of 434 (97%) blood withdrawals for blood gas analysis in the retrospective analysis of 7 study trials. Additionally, 64 sampling events for other procedures like laboratory and activated clotting time analyses were detected. The proposed algorithm achieved a sensitivity of 0.97, a precision of 0.96 and an F1 score of 0.97. Arterial blood pressure monitoring data can be used to perform an accurate identification of individual blood samplings in order to reduce sample mix-ups and thereby increase patient safety.
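A toy sketch of the detection rule described - an elevated mean pressure with simultaneously absent systolic/diastolic readings - over a stream of monitor samples. The window length and threshold are invented for illustration and are not the study's tuned values.

```python
# Toy sketch of the described detection rule: flag a blood withdrawal
# when systolic/diastolic values vanish while mean pressure rises above
# a running baseline. Window and threshold values are invented.
from collections import deque

def detect_withdrawals(stream, window=30, rise_mmhg=15.0):
    """stream yields (t, systolic, diastolic, mean); None = absent."""
    baseline = deque(maxlen=window)   # recent mean pressures
    events = []
    for t, sys_p, dia_p, mean_p in stream:
        if sys_p is None and dia_p is None and baseline:
            avg = sum(baseline) / len(baseline)
            if mean_p is not None and mean_p > avg + rise_mmhg:
                events.append(t)      # closed three-way tap signature
                continue
        if mean_p is not None:
            baseline.append(mean_p)
    return events

samples = [(t, 120.0, 80.0, 93.0) for t in range(40)]
samples += [(40, None, None, 115.0), (41, None, None, 116.0)]  # sampling
print(detect_withdrawals(samples))  # [40, 41]
```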
Realization and optimization of AES algorithm on the TMS320DM6446 based on DaVinci technology
NASA Astrophysics Data System (ADS)
Jia, Wen-bin; Xiao, Fu-hai
2013-03-01
The application of the AES algorithm in a digital cinema system prevents video data from being illegally stolen or maliciously tampered with, and solves its security problems. At the same time, in order to meet the requirements for real-time, on-the-fly, and transparent encryption of high-speed audio and video data streams in the information security field, and based on an in-depth analysis of the AES algorithm principle, this paper proposes specific methods for realizing the AES algorithm in a digital video system on the TMS320DM6446 hardware platform with the DaVinci software framework, together with optimization solutions. The test results show that digital movies encrypted by AES-128 cannot play normally, which ensures the security of the digital movies. A comparison of the performance of the AES-128 algorithm before and after optimization verifies the correctness and validity of the improved algorithm.
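For concreteness, a small sketch of stream-style AES-128 encryption using CTR mode from the `cryptography` package. CTR is chosen here because it suits high-speed streaming data; the paper does not state which mode of operation was actually used.

```python
# AES-128 stream-style encryption sketch using CTR mode from the
# `cryptography` package. Note: the paper does not state which mode of
# operation was used; CTR is an assumption suited to streaming data.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)     # 128-bit key
nonce = os.urandom(16)   # CTR initial counter block

def encrypt_chunks(chunks):
    """Encrypt an iterable of byte chunks (e.g., video frames)."""
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    for chunk in chunks:
        yield enc.update(chunk)

frames = [b"frame-0 payload", b"frame-1 payload"]
cipher_frames = list(encrypt_chunks(frames))

dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
assert b"".join(dec.update(c) for c in cipher_frames) == b"".join(frames)
```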
Methanation of gas streams containing carbon monoxide and hydrogen
Frost, Albert C.
1983-01-01
Carbon monoxide-containing gas streams having a relatively high concentration of hydrogen are pretreated so as to remove the hydrogen in a recoverable form for use in the second step of a cyclic, essentially two-step process for the production of methane. The thus-treated streams are then passed over a catalyst to deposit a surface layer of active surface carbon thereon essentially without the formation of inactive coke. This active carbon is reacted with said hydrogen removed from the feed gas stream to form methane. The utilization of the CO in the feed gas stream is appreciably increased, enhancing the overall process for the production of relatively pure, low-cost methane from CO-containing waste gas streams.
Denni Algorithm: An Enhancement of the SMS (Scan, Move and Sort) Algorithm
NASA Astrophysics Data System (ADS)
Aprilsyah Lubis, Denni; Salim Sitompul, Opim; Marwan; Tulus; Andri Budiman, M.
2017-12-01
Sorting has been a profound area for algorithm researchers, and many resources are invested in devising better sorting algorithms. For this purpose, many existing sorting algorithms have been studied in terms of their algorithmic complexity and efficiency. Efficient sorting is important to optimize the use of other algorithms that require sorted lists to work correctly. Sorting has been considered a fundamental problem in the study of algorithms for many reasons: the need to sort information is inherent in many applications; algorithms often use sorting as a key subroutine; many essential algorithm-design techniques are represented in the body of sorting algorithms; and many engineering issues come to the fore when implementing sorting algorithms. Many algorithms are well known for sorting unordered lists, and one well-known algorithm that makes the sorting process more economical and efficient is the SMS (Scan, Move and Sort) algorithm, an enhancement of Quicksort invented by Rami Mansi in 2010. This paper presents a new sorting algorithm called the Denni algorithm. The Denni algorithm is an enhancement of the SMS algorithm in the average and worst cases. The Denni algorithm is compared with the SMS algorithm, and the results are promising.
Online cross-validation-based ensemble learning.
Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark
2018-01-30
Online estimators update a current estimate with a new incoming batch of data without having to revisit past data, thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as a special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach including online estimation of the optimal ensemble of candidate online estimators. We illustrate excellent performance of our methods using simulations and a real data example where we make streaming predictions of infectious disease incidence using data from a large database. Copyright © 2017 John Wiley & Sons, Ltd.
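A small sketch of the online cross-validation selector described: each incoming batch first scores every candidate online learner (on data the learners have not yet trained on), cumulative losses decide the current selection, and only then do all learners update on the batch. The two candidate learners here are simple stand-ins for the paper's large candidate library.

```python
# Sketch of online cross-validation ensemble selection: score each
# candidate on a new batch *before* training on it, keep cumulative
# losses, and select the current best. The two candidate learners are
# simple stand-ins for a large library.
import numpy as np

class RunningMean:
    def __init__(self): self.s, self.n = 0.0, 0
    def predict(self, X): return np.full(len(X), self.s / self.n if self.n else 0.0)
    def update(self, X, y): self.s += y.sum(); self.n += len(y)

class OnlineLeastSquares:
    def __init__(self, d): self.A = np.eye(d) * 1e-3; self.b = np.zeros(d)
    def predict(self, X): return X @ np.linalg.solve(self.A, self.b)
    def update(self, X, y): self.A += X.T @ X; self.b += X.T @ y

rng = np.random.default_rng(0)
learners = [RunningMean(), OnlineLeastSquares(d=2)]
cum_loss = np.zeros(len(learners))

for _ in range(50):                       # stream of batches
    X = rng.normal(size=(16, 2))
    y = X @ np.array([1.5, -2.0]) + 0.1 * rng.normal(size=16)
    for i, m in enumerate(learners):      # online CV: score before update
        cum_loss[i] += np.mean((m.predict(X) - y) ** 2)
    for m in learners:
        m.update(X, y)

best = int(np.argmin(cum_loss))
print("selected learner:", type(learners[best]).__name__)  # OnlineLeastSquares
```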
Formation of the predicted training parameters in the form of a discrete information stream
NASA Astrophysics Data System (ADS)
Smolentseva, T. E.; Sumin, V. I.; Zolnikov, V. K.; Lavlinsky, V. V.
2018-03-01
This paper considers the process of training represented as a discrete information stream. At each stage of the process, the portions of training information and the quality of their assimilation are analysed, and the individual characteristics and reactions of the trainee to every portion of information in the corresponding sections are defined. A training control algorithm with a predicted number of control checks is considered, which determines what controlling influence must be created for the trainee. On the basis of this algorithm, a vector of probabilities that elements of the training information remain unknown is obtained. As a result of this research, an algorithm for forming the predicted training parameters is developed. The paper compares training durations obtained experimentally with those predicted and, from this comparison, draws a conclusion about the effectiveness of forming the predicted training parameters. A program complex is developed on the basis of individual parameter values obtained experimentally for each trainee; it calculates individual characteristics, forms ratings, and monitors changes in the training parameters.
Ecological Responses to Trout Habitat Rehabilitation in a Northern Michigan Stream
NASA Astrophysics Data System (ADS)
Rosi-Marshall, Emma J.; Moerke, Ashley H.; Lamberti, Gary A.
2006-07-01
Monitoring of stream restoration projects is often limited and success often focuses on a single taxon (e.g., salmonids), even though other aspects of stream structure and function may also respond to restoration activities. The Ottawa National Forest (ONF), Michigan, conducted a site-specific trout habitat improvement to enhance the trout fishery in Cook’s Run, a 3rd-order stream that the ONF determined was negatively affected by past logging. Our objectives were to determine if the habitat improvement increased trout abundances and enhanced other ecological variables (overall habitat quality, organic matter retention, seston concentration, periphyton abundance, sediment organic matter content, and macroinvertebrate abundance and diversity) following rehabilitation. The addition of skybooms (underbank cover structures) and k-dams (pool-creating structures) increased the relative abundance of harvestable trout (>25 cm in total length) as intended but not overall trout abundances. Both rehabilitation techniques also increased maximum channel depth and organic matter retention, but only k-dams increased overall habitat quality. Neither approach significantly affected other ecological variables. The modest ecological response to this habitat improvement likely occurred because the system was not severely degraded beforehand, and thus small, local changes in habitat did not measurably affect most physical and ecological variables measured. However, increases in habitat volume and in organic matter retention may enhance stream biota in the long term.
A novel dynamic wavelength bandwidth allocation scheme over OFDMA PONs
NASA Astrophysics Data System (ADS)
Yan, Bo; Guo, Wei; Jin, Yaohui; Hu, Weisheng
2011-12-01
With the rapid growth of Internet applications, supporting differentiated services and enlarging system capacity have become new tasks for next-generation access systems. In recent years, research on OFDMA Passive Optical Networks (PONs) has developed rapidly owing to their large capacity and scheduling flexibility. Although much work has been done to overcome hardware-layer obstacles for OFDMA PON, scheduling algorithms for OFDMA PON systems are still at an early stage of discussion. In order to support QoS on an OFDMA PON system, a novel dynamic wavelength bandwidth allocation (DWBA) algorithm is proposed in this paper. Per-stream QoS is supported in this algorithm. Through simulation, we show that our bandwidth allocation algorithm performs better in bandwidth utilization and differentiated service support.
JXTA: A Technology Facilitating Mobile P2P Health Management System
Rajkumar, Rajasekaran; Nallani Chackravatula Sriman, Narayana Iyengar
2012-01-01
Objectives Mobile JXTA (Juxtapose) is gaining momentum and has attracted the interest of doctors and patients through a P2P service that transmits messages. Audio and video can also be transmitted through JXTA. The use of a mobile streaming mechanism with the support of mobile hospital management and healthcare systems would enable better interaction between doctors, nurses, and the hospital. Experimental results demonstrate good performance in comparison with conventional systems. This study evaluates P2P JXTA/JXME (JXTA functionality for MIDP devices), which facilitates peer-to-peer applications using constrained mobile devices. A proven learning algorithm was also used to automatically send and process sorted patient data to nurses. Methods From December 2010 to December 2011, a total of 500 patients were referred to our hospital due to minor health problems and were monitored. We selected all of the peer groups and the control server, which controlled the BMO (Block Medical Officer) peer groups, and through the doctor peer groups prescriptions were delivered to the patients' mobile phones through the JXTA/JXME network. Results All 500 patients were registered in the JXTA network. Among these, 300 patient histories were referred to the record peer group by the doctors, 100 patients were referred to the external doctor peer group, and 100 patients were registered as new users in the JXTA/JXME network. Conclusion This system was developed for mobile streaming applications and was designed to support the mobile health management system using JXTA/JXME. The simulated results show that this system can carry out streaming audio and video applications. Controlling and monitoring by the doctor peer group makes the system more flexible and structured. Enhanced studies are needed to improve knowledge mining and cloud-based m-health management technology in comparison with the traditional system. PMID:24159509
How Will Climate Change Affect Explosive Cyclones in the Extratropics of the Northern Hemisphere?
NASA Astrophysics Data System (ADS)
Seiler, C.; Zwiers, F. W.
2015-12-01
Explosive cyclones are rapidly intensifying low pressure systems generating severe wind speeds and heavy precipitation primarily in coastal and marine environments, such as the March 2014 nor'easter which developed along the United States coastline, with hurricane force winds in eastern Maine and the Maritimes. This study presents the first analysis on how explosive cyclones respond to climate change in the extratropics of the Northern Hemisphere. An objective-feature tracking algorithm is used to identify and track cyclones from 23 CMIP5 climate models for the recent past (1981-1999) and future (2081-2099). Explosive cyclones are projected to shift northwards by about 2.2° latitude on average in the northern Pacific, with fewer and weaker events south of 45°N, and more frequent and stronger events north of this latitude. This shift is correlated with a poleward shift of the jet stream in the inter-model spread (R = 0.56). In the Atlantic, the total number of explosive cyclones is projected to decrease by about 17% when averaging across models, with the largest changes occurring along North America's East Coast. This reduction is correlated with a decline in the lower-tropospheric Eady growth rate (R = 0.51), and is stronger for models with smaller frequency biases (R = -0.65). The same region is also projected to experience a small intensification of explosive cyclones, with larger vorticity values for models that predict stronger increases in the speed of the jet stream (R = 0.58). This strengthening of the jet stream is correlated with an enhanced sea surface temperature gradient in the North Atlantic (R = -0.63). The inverse relationship between model bias and projection, and the role of model resolution are discussed.
Compensatory stream and wetland mitigation in North Carolina: an evaluation of regulatory success.
Hill, Tammy; Kulz, Eric; Munoz, Breda; Dorney, John R
2013-05-01
Data from a probability sample were used to estimate wetland and stream mitigation success from 2007 to 2009 across North Carolina (NC). "Success" was defined as whether the mitigation site met regulatory requirements in place at the time of construction. Analytical results were weighted by both component counts and mitigation size. Overall mitigation success (including preservation) was estimated at 74 % (SE = 3 %) for wetlands and 75 % (SE = 4 %) for streams in NC. Compared to the results of previous studies, wetland mitigation success rates had increased since the mid-1990s. Differences between mitigation providers (mitigation banks, NC Ecosystem Enhancement Program's design-bid-build and full-delivery programs, NC Department of Transportation and private permittee-responsible mitigation) were generally not significant although permittee-responsible mitigation yielded higher success rates in certain circumstances. Both wetland and stream preservation showed high rates of success and the stream enhancement success rate was significantly higher than that of stream restoration. Additional statistically significant differences when mitigation size was considered included: (1) the Piedmont yielded a lower stream mitigation success rate than other areas of the state, and (2) recently constructed wetland mitigation projects demonstrated a lower success rate than those built prior to 2002. Opportunities for improvement exist in the areas of regulatory record-keeping, understanding the relationship between post-construction establishment and long-term ecological trajectories of stream and wetland restoration projects, incorporation of numeric ecological metrics into mitigation monitoring and success criteria, and adaptation of stream mitigation designs to achieve greater success in the Piedmont.
Roberts, B.J.; Mulholland, P.J.; Houser, J.N.
2007-01-01
Delivery of water, sediments, nutrients, and organic matter to stream ecosystems is strongly influenced by the catchment of the stream and can be altered greatly by upland soil and vegetation disturbance. At the Fort Benning Military Installation (near Columbus, Georgia), spatial variability in intensity of military training results in a wide range of intensities of upland disturbance in stream catchments. A set of 8 streams in catchments spanning this upland disturbance gradient was selected for investigation of the impact of disturbance intensity on hydrodynamics and nutrient uptake. The size of transient storage zones and rates of NH4+ uptake in all study streams were among the lowest reported in the literature. Upland disturbance did not appear to influence stream hydrodynamics strongly, but it caused significant decreases in instream nutrient uptake. In October 2003, coarse woody debris (CWD) was added to 1/2 of the study streams (spanning the disturbance gradient) in an attempt to increase hydrodynamic and structural complexity, with the goals of enhancing biotic habitat and increasing nutrient uptake rates. CWD additions had positive short-term (within 1 mo) effects on hydrodynamic complexity (water velocity decreased and transient storage zone cross-sectional area, relative size of the transient storage zone, fraction of the median travel time attributable to transient storage over a standardized length of 200 m, and the hydraulic retention factor increased) and nutrient uptake (NH4+ uptake rates increased). Our results suggest that water quality in streams with intense upland disturbances can be improved by enhancing instream biotic nutrient uptake capacity through measures such as restoring stream CWD. © 2007 by The North American Benthological Society.
NASA Astrophysics Data System (ADS)
Dogrul, E. C.; Brush, C. F.; Kadir, T. N.
2006-12-01
The Integrated Water Flow Model (IWFM) is a comprehensive input-driven application for simulating groundwater flow, surface water flow and land-surface hydrologic processes, and interactions between these processes, developed by the California Department of Water Resources (DWR). IWFM couples a 3-D finite element groundwater flow process and 1-D land surface, lake, stream flow and vertical unsaturated-zone flow processes which are solved simultaneously at each time step. The groundwater flow system is simulated as a multilayer aquifer system with a mixture of confined and unconfined aquifers separated by semiconfining layers. The groundwater flow process can simulate changing aquifer conditions (confined to unconfined and vice versa), subsidence, tile drains, injection wells and pumping wells. The land surface process calculates elemental water budgets for agricultural, urban, riparian and native vegetation classes. Crop water demands are dynamically calculated using distributed soil properties, land use and crop data, and precipitation and evapotranspiration rates. The crop mix can also be automatically modified as a function of pumping lift using logit functions. Surface water diversions and groundwater pumping can each be specified, or can be automatically adjusted at run time to balance water supply with water demand. The land-surface process also routes runoff to streams and deep percolation to the unsaturated zone. Surface water networks are specified as a series of stream nodes (coincident with groundwater nodes) with specified bed elevation, conductance and stage-flow relationships. Stream nodes are linked to form stream reaches. Stream inflows at the model boundary, surface water diversion locations, and one or more surface water deliveries per location are specified. IWFM routes stream flows through the network, calculating groundwater-surface water interactions, accumulating inflows from runoff, and allocating available stream flows to meet specified or calculated deliveries. IWFM utilizes a very straight-forward input file structure, allowing rapid development of complex simulations. A key feature of IWFM is a new algorithm for computation of groundwater flow across element faces. Enhancements to version 3.0 include automatic time-tracking of input and output data sets, linkage with the HEC-DSS database, and dynamic crop allocation using logit functions. Utilities linking IWFM to the PEST automated calibration suite are also available. All source code, executables and documentation are available for download from the DWR web site. IWFM is currently being used to develop hydrologic simulations of California's Central Valley (C2VSIM); the west side of California's San Joaquin Valley (WESTSIM); Butte County, CA; Solano County, CA; Merced County, CA; and the Oregon side of the Walla Walla River Basin.
Node synchronization schemes for the Big Viterbi Decoder
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Swanson, L.; Arnold, S.
1992-01-01
The Big Viterbi Decoder (BVD), currently under development for the DSN, includes three separate algorithms to acquire and maintain node and frame synchronization. The first measures the number of decoded bits between two consecutive renormalization operations (renorm rate), the second detects the presence of the frame marker in the decoded bit stream (bit correlation), while the third searches for an encoded version of the frame marker in the encoded input stream (symbol correlation). A detailed account of the operation of the three methods is given, as well as a performance comparison.
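A minimal sketch of the bit-correlation idea: slide the known frame marker along the decoded bit stream and count agreements, declaring sync where the agreement count clears a threshold. The marker pattern and threshold here are illustrative, not the DSN's actual values.

```python
# Bit-correlation sketch: slide a known frame marker along the decoded
# bit stream and report offsets whose agreement count clears a
# threshold. Marker pattern and threshold are illustrative values.

def find_marker(bits, marker, min_agree):
    hits = []
    for off in range(len(bits) - len(marker) + 1):
        agree = sum(b == m for b, m in zip(bits[off:off + len(marker)], marker))
        if agree >= min_agree:
            hits.append((off, agree))
    return hits

marker = [1, 1, 0, 1, 0, 0, 0, 1]          # hypothetical frame marker
stream = [0, 1] * 4 + marker + [1, 0] * 4  # marker embedded at offset 8
# Allow one bit error to tolerate residual decoder errors.
print(find_marker(stream, marker, min_agree=len(marker) - 1))  # [(8, 8)]
```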
NASA Astrophysics Data System (ADS)
Fischer, M.; Caprio, M.; Cua, G. B.; Heaton, T. H.; Clinton, J. F.; Wiemer, S.
2009-12-01
The Virtual Seismologist (VS) algorithm is a Bayesian approach to earthquake early warning (EEW) being implemented by the Swiss Seismological Service at ETH Zurich. The application of Bayes' theorem in earthquake early warning states that the most probable source estimate at any given time is a combination of contributions from a likelihood function that evolves in response to incoming data from the on-going earthquake, and selected prior information, which can include factors such as network topology, the Gutenberg-Richter relationship or previously observed seismicity. The VS algorithm was one of three EEW algorithms involved in the California Integrated Seismic Network (CISN) real-time EEW testing and performance evaluation effort. Its compelling real-time performance in California over the last three years has led to its inclusion in the new USGS-funded effort to develop key components of CISN ShakeAlert, a prototype EEW system that could potentially be implemented in California. A significant portion of VS code development was supported by the SAFER EEW project in Europe. We discuss recent enhancements to the VS EEW algorithm. We developed and continue to test a multiple-threshold event detection scheme, which uses different association / location approaches depending on the peak amplitudes associated with an incoming P pick. With this scheme, an event with sufficiently high initial amplitudes can be declared on the basis of a single station, maximizing warning times for damaging events for which EEW is most relevant. Smaller, non-damaging events, which will have lower initial amplitudes, will require more picks to be declared an event to reduce false alarms. This transforms the VS codes from a regional EEW approach reliant on traditional location estimation (and its requirement of at least 4 picks as implemented by the Binder Earthworm phase associator) to a hybrid on-site/regional approach capable of providing a continuously evolving stream of EEW information starting from the first P-detection. Offline analysis on Swiss and California waveform datasets indicates that the multiple-threshold approach is faster and more reliable for larger events than the earlier version of the VS codes. This multiple-threshold approach is well-suited for implementation on a wide range of devices, from embedded processor systems installed at seismic stations, to small autonomous networks for local warnings, to large-scale regional networks such as the CISN. In addition, we quantify the influence of systematic use of prior information and Vs30-based corrections for site amplification on VS magnitude estimation performance, and describe how components of the VS algorithm will be integrated into non-EEW standard network processing procedures at CHNet, the national broadband / strong motion network in Switzerland. These enhancements to the VS codes will be transitioned from off-line to real-time testing at CHNet in Europe in the coming months, and will be incorporated into the development of key components of the CISN ShakeAlert prototype system in California.
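Schematically, the Bayesian combination described above can be written as follows; this is a generic statement of Bayes' rule for the source parameters (magnitude M and location), not the algorithm's exact likelihood and prior terms:

```latex
P(M,\,\mathrm{loc} \mid \mathrm{data}_t)\;\propto\;
\underbrace{P(\mathrm{data}_t \mid M,\,\mathrm{loc})}_{\text{evolving likelihood}}\;\times\;
\underbrace{P(M,\,\mathrm{loc})}_{\text{prior: topology, Gutenberg--Richter, past seismicity}}
```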
NASA Astrophysics Data System (ADS)
Shoupeng, Song; Zhou, Jiang
2017-03-01
Converting an ultrasonic signal to an ultrasonic pulse stream is the key step of finite rate of innovation (FRI) sparse sampling. At present, ultrasonic pulse-stream-forming techniques are mainly based on digital algorithms, and no hardware circuit that can achieve this has been reported. This paper proposes a new quadrature demodulation (QD) based circuit implementation method for forming an ultrasonic pulse stream. After elaborating on FRI sparse sampling theory, the processing of the ultrasonic signal is explained, followed by a discussion and analysis of ultrasonic pulse-stream-forming methods. In contrast to ultrasonic signal envelope extraction techniques, a quadrature demodulation method (QDM) is proposed. Simulation experiments were performed to determine its performance at various signal-to-noise ratios (SNRs). The circuit was then designed, with a mixing module, an oscillator, a low-pass filter (LPF), and a root-of-square-sum module. Finally, application experiments were carried out on ultrasonic flaw testing of a pipeline sample. The experimental results indicate that the QDM can accurately convert the ultrasonic signal to an ultrasonic pulse stream and retrieve the original signal information, such as pulse width, amplitude, and time of arrival. This technique lays the foundation for ultrasonic signal FRI sparse sampling directly with hardware circuitry.
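A software analogue of the described circuit chain (mix with quadrature carriers, low-pass filter, root of square sum) applied to a synthetic ultrasonic burst; the carrier frequency, filter order, and cutoff are illustrative assumptions.

```python
# Software analogue of the quadrature demodulation chain described:
# mix with cos/sin carriers, low-pass filter, then root of square sum.
# Carrier frequency, filter order and cutoff are illustrative values.
import numpy as np
from scipy.signal import butter, filtfilt

fs, fc = 100e6, 5e6                 # sample rate, assumed carrier (Hz)
t = np.arange(0, 20e-6, 1 / fs)

# Synthetic ultrasonic echo: Gaussian-windowed burst at t = 8 us.
env_true = np.exp(-((t - 8e-6) ** 2) / (2 * (0.5e-6) ** 2))
signal = env_true * np.cos(2 * np.pi * fc * t)

# Quadrature mixing.
i_mix = signal * np.cos(2 * np.pi * fc * t)
q_mix = signal * -np.sin(2 * np.pi * fc * t)

# Low-pass filtering (4th-order Butterworth, 2 MHz cutoff).
b, a = butter(4, 2e6 / (fs / 2))
i_bb, q_bb = filtfilt(b, a, i_mix), filtfilt(b, a, q_mix)

# Root of square sum recovers the pulse envelope (x2 for mixer loss).
envelope = 2 * np.sqrt(i_bb**2 + q_bb**2)
print("peak at", t[np.argmax(envelope)], "s")  # ~8e-6
```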
Evaluating some computer enhancement algorithms that improve the visibility of cometary morphology
NASA Technical Reports Server (NTRS)
Larson, Stephen M.; Slaughter, Charles D.
1992-01-01
Digital enhancement of cometary images is a necessary tool in studying cometary morphology. Many image processing algorithms, some developed specifically for comets, have been used to enhance the subtle, low contrast coma and tail features. We compare some of the most commonly used algorithms on two different images to evaluate their strong and weak points, and conclude that there currently exists no single 'ideal' algorithm, although the radial gradient spatial filter gives the best overall result. This comparison should aid users in selecting the best algorithm to enhance particular features of interest.
Viswanathan, P; Krishna, P Venkata
2014-05-01
Teleradiology allows the transmission of medical images for clinical data interpretation to provide improved e-health care access, delivery, and standards. The remote transmission raises various ethical and legal issues, such as image retention, fraud, privacy, and malpractice liability. A joint FED watermarking system, i.e., a joint fingerprint/encryption/dual watermarking system, is proposed for addressing these issues. The system combines a region-based substitution dual watermarking algorithm using spatial fusion, a stream cipher algorithm using a symmetric key, and a fingerprint verification algorithm using invariants. This paper aims to provide access to the outcomes of medical images with confidentiality, availability, integrity, and provenance. The watermarking, encryption, and fingerprint enrollment are conducted jointly in the protection stage such that the extraction, decryption, and verification can be applied independently. The dual watermarking system, introducing two different embedding schemes, one used for patient data and the other for fingerprint features, reduces the difficulty of maintaining multiple documents such as authentication data, personnel and diagnosis data, and medical images. The spatial fusion algorithm, which determines the region of embedding using a threshold from the image to embed the encrypted patient data, follows the exact rules of fusion, resulting in better quality than other fusion techniques. The four-step stream cipher algorithm using a symmetric key for encrypting the patient data, together with a fingerprint verification system using algebraic invariants, improves the robustness of the medical information. The experimental results of the proposed scheme, evaluated for security and quality in DICOM medical images, are good in terms of attacks, quality index, and imperceptibility.
NASA Astrophysics Data System (ADS)
Chai, Xiu-Li; Gan, Zhi-Hua; Lu, Yang; Zhang, Miao-Hui; Chen, Yi-Ran
2016-10-01
Recently, many image encryption algorithms based on chaos have been proposed. Most of the previous algorithms encrypt the components R, G, and B of color images independently and neglect the high correlation between them. In this paper, a novel color image encryption algorithm is introduced. The 24 bit planes of components R, G, and B of the color plain image are obtained and recombined into 4 compound bit planes, which makes the three components affect each other. A four-dimensional (4D) memristive hyperchaotic system generates the pseudorandom key streams, and its initial values come from the SHA-256 hash value of the color plain image. The compound bit planes and key streams are confused according to the principles of genetic recombination; then confusion and diffusion as a union are applied to the bit planes, and the color cipher image is obtained. Experimental results and security analyses demonstrate that the proposed algorithm is secure and effective, so it may be adopted for secure communication. Project supported by the National Natural Science Foundation of China (Grant Nos. 61203094 and 61305042), the Natural Science Foundation of the United States (Grant Nos. CNS-1253424 and ECCS-1202225), the Science and Technology Foundation of Henan Province, China (Grant No. 152102210048), the Foundation and Frontier Project of Henan Province, China (Grant No. 162300410196), the Natural Science Foundation of Educational Committee of Henan Province, China (Grant No. 14A413015), and the Research Foundation of Henan University, China (Grant No. xxjc20140006).
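A small sketch of the plaintext-dependent keying step described: hash the plain image with SHA-256 and map hash bytes to initial values in (0, 1) for the chaotic system. The byte-grouping and scaling scheme is an illustrative assumption.

```python
# Sketch of plaintext-dependent keying: derive four initial values in
# (0, 1) for a chaotic system from the SHA-256 hash of the plain image.
# The byte-grouping and scaling scheme is an illustrative assumption.
import hashlib
import numpy as np

def initial_values(plain_image: np.ndarray, n: int = 4):
    digest = hashlib.sha256(plain_image.tobytes()).digest()  # 32 bytes
    chunks = [digest[i * 8:(i + 1) * 8] for i in range(n)]   # 8 bytes each
    # Map each 64-bit chunk into the open interval (0, 1).
    return [(int.from_bytes(c, "big") + 1) / (2**64 + 2) for c in chunks]

img = np.zeros((4, 4, 3), dtype=np.uint8)
print(initial_values(img))  # four values in (0, 1)
img[0, 0, 0] = 1            # one changed pixel -> entirely new values
print(initial_values(img))
```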
NASA Astrophysics Data System (ADS)
Cua, G. B.; Fischer, M.; Caprio, M.; Heaton, T. H.; Cisn Earthquake Early Warning Project Team
2010-12-01
The Virtual Seismologist (VS) earthquake early warning (EEW) algorithm is one of three EEW approaches being incorporated into the California Integrated Seismic Network (CISN) ShakeAlert system, a prototype EEW system that could potentially be implemented in California. The VS algorithm, implemented by the Swiss Seismological Service at ETH Zurich, is a Bayesian approach to EEW, wherein the most probable source estimate at any given time is a combination of contributions from a likelihood function that evolves in response to incoming data from the ongoing earthquake, and selected prior information, which can include factors such as network topology, the Gutenberg-Richter relationship, or previously observed seismicity. The VS codes have been running in real time at the Southern California Seismic Network since July 2008, and at the Northern California Seismic Network since February 2009. We discuss recent enhancements to the VS EEW algorithm that are being integrated into CISN ShakeAlert. We developed and continue to test a multiple-threshold event detection scheme, which uses different association/location approaches depending on the peak amplitudes associated with an incoming P pick. With this scheme, an event with sufficiently high initial amplitudes can be declared on the basis of a single station, maximizing warning times for damaging events for which EEW is most relevant. Smaller, non-damaging events, which will have lower initial amplitudes, will require more picks to initiate an event declaration, with the goal of reducing false alarms. This transforms the VS codes from a regional EEW approach reliant on traditional location estimation (and the requirement of at least 4 picks as implemented by the Binder Earthworm phase associator) into an on-site/regional approach capable of providing a continuously evolving stream of EEW information starting from the first P detection. Real-time and offline analysis on Swiss and California waveform datasets indicates that the multiple-threshold approach is faster and more reliable for larger events than the earlier version of the VS codes. In addition, we provide evolutionary estimates of the probability of false alarm (PFA), which is an envisioned output stream of the CISN ShakeAlert system. The real-time decision-making approach envisioned for CISN ShakeAlert users, where users specify a threshold PFA in addition to thresholds on peak ground motion estimates, has the potential to increase the available warning time for users with high tolerance to false alarms without compromising the needs of users with lower tolerances to false alarms.
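The multiple-threshold decision rule described here lends itself to a compact sketch: a pick's peak amplitude determines how many corroborating picks are needed before an event is declared. The sketch below is illustrative only; the amplitude thresholds and pick counts are hypothetical placeholders, not the tuned VS values.

```python
def picks_required(peak_amplitude: float,
                   high_amp: float = 1.0e-2,
                   mid_amp: float = 1.0e-3) -> int:
    """Number of associated P picks needed to declare an event.

    Hypothetical thresholds: a very high-amplitude pick declares an event on
    its own, maximizing warning time for damaging events; weaker picks need
    corroboration, which suppresses false alarms for small events.
    """
    if peak_amplitude >= high_amp:
        return 1          # single-station declaration
    if peak_amplitude >= mid_amp:
        return 2
    return 4              # traditional associator-style requirement

def declare_event(picks) -> bool:
    """picks: list of (station, peak_amplitude) for one candidate event."""
    needed = min(picks_required(amp) for _, amp in picks)
    return len(picks) >= needed

print(declare_event([("STA1", 2e-2)]))   # True: one strong pick suffices
print(declare_event([("STA1", 5e-4)]))   # False: a weak pick needs support
```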
An enhanced DWBA algorithm in hybrid WDM/TDM EPON networks with heterogeneous propagation delays
NASA Astrophysics Data System (ADS)
Li, Chengjun; Guo, Wei; Jin, Yaohui; Sun, Weiqiang; Hu, Weisheng
2011-12-01
An enhanced dynamic wavelength and bandwidth allocation (DWBA) algorithm in a hybrid WDM/TDM PON is proposed and experimentally demonstrated. In addition to the fairness of bandwidth allocation, the algorithm also accounts for the varying propagation delays between ONUs and the OLT. MATLAB simulations indicate that the improved algorithm outperforms several existing algorithms.
NASA Astrophysics Data System (ADS)
Bagherzadeh, Seyed Amin; Asadi, Davood
2017-05-01
In search of a precise method for analyzing nonlinear and non-stationary flight data of an aircraft in the icing condition, an Empirical Mode Decomposition (EMD) algorithm enhanced by multi-objective optimization is introduced. In the proposed method, dissimilar IMF definitions are considered by the Genetic Algorithm (GA) in order to find the best decision parameters for the signal trend. To resolve the disadvantages of the classical algorithm caused by the envelope concept, the signal trend is estimated directly in the proposed method. Furthermore, to simplify the performance and understanding of the EMD algorithm, the proposed method obviates the need for a repeated sifting process. The proposed enhanced EMD algorithm is verified on several benchmark signals. Afterwards, the enhanced algorithm is applied to simulated flight data in the icing condition in order to detect ice accretion on the aircraft. The results demonstrate the effectiveness of the proposed EMD algorithm in aircraft ice detection by providing a figure of merit for the icing severity.
GRay: A Massively Parallel GPU-based Code for Ray Tracing in Relativistic Spacetimes
NASA Astrophysics Data System (ADS)
Chan, Chi-kwan; Psaltis, Dimitrios; Özel, Feryal
2013-11-01
We introduce GRay, a massively parallel integrator designed to trace the trajectories of billions of photons in a curved spacetime. This graphics-processing-unit (GPU)-based integrator employs the stream processing paradigm, is implemented in CUDA C/C++, and runs on nVidia graphics cards. The peak performance of GRay using single-precision floating-point arithmetic on a single GPU exceeds 300 GFLOPS (or 1 ns per photon per time step). For a realistic problem, where the peak performance cannot be reached, GRay is two orders of magnitude faster than existing central-processing-unit-based ray-tracing codes. This performance enhancement allows more effective searches of large parameter spaces when comparing theoretical predictions of images, spectra, and light curves from the vicinities of compact objects to observations. GRay can also perform on-the-fly ray tracing within general relativistic magnetohydrodynamic algorithms that simulate accretion flows around compact objects. Making use of this algorithm, we calculate the properties of the shadows of Kerr black holes and the photon rings that surround them. We also provide accurate fitting formulae for their dependence on black hole spin and observer inclination, which can be used to interpret upcoming observations of the black holes at the center of the Milky Way, as well as in M87, with the Event Horizon Telescope.
Clustering and Flow Conservation Monitoring Tool for Software Defined Networks.
Puente Fernández, Jesús Antonio; García Villalba, Luis Javier; Kim, Tai-Hoon
2018-04-03
Prediction systems face challenges on two fronts: the relationship between video quality and observed session features, and the dynamic changes in video quality over time. Software Defined Networking (SDN) is a new concept of network architecture that separates the control plane (controller) from the data plane (switches) in network devices. Through the southbound interface, it is possible to deploy monitoring tools that obtain the network status and retrieve collections of statistics. Obtaining accurate statistics therefore depends on the strategy used for monitoring and for requesting information from network devices. In this paper, we propose an enhanced algorithm for requesting statistics to measure traffic flow in SDN networks. The algorithm groups network switches into clusters according to their number of ports, so that different monitoring techniques can be applied per cluster. By avoiding monitoring queries to switches with common characteristics, redundant information is omitted. In this way, the proposal decreases the number of monitoring queries to switches, improving network traffic and preventing switch overload. We tested our optimization in a video streaming simulation using different types of videos. The experiments and a comparison with traditional monitoring techniques demonstrate the feasibility of our proposal, which maintains comparable measurement accuracy while decreasing the number of queries to the switches.
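The grouping idea can be sketched compactly: switches are keyed by port count, and only one representative per cluster is polled, so statistics for the rest are inferred rather than queried. This is a minimal stand-in for the proposal, with hypothetical names; a real deployment would issue the representative's query through the controller's southbound interface.

```python
from collections import defaultdict

def cluster_switches(switches):
    """Group switches by port count and pick one representative per cluster.

    switches: dict mapping switch_id -> number of ports. Polling only the
    representative avoids redundant monitoring queries to switches with
    common characteristics.
    """
    clusters = defaultdict(list)
    for sw, n_ports in switches.items():
        clusters[n_ports].append(sw)
    representatives = {n: members[0] for n, members in clusters.items()}
    return clusters, representatives

clusters, reps = cluster_switches({"s1": 4, "s2": 4, "s3": 8, "s4": 8, "s5": 48})
print(reps)   # three monitoring queries instead of five
```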
An effective one-dimensional anisotropic fingerprint enhancement algorithm
NASA Astrophysics Data System (ADS)
Ye, Zhendong; Xie, Mei
2012-01-01
Fingerprint identification is one of the most important biometric technologies. The performance of minutiae extraction and the speed of a fingerprint verification system rely heavily on the quality of the input fingerprint images, so enhancement of low-quality fingerprints is a critical and difficult step in a fingerprint verification system. In this paper we propose an effective algorithm for fingerprint enhancement. First, we use a normalization algorithm to reduce the variations in gray-level values along ridges and valleys. Then we utilize the structure tensor approach to estimate the ridge orientation at each pixel of the fingerprint. Finally, we propose a novel algorithm that combines the advantages of the one-dimensional Gabor filtering method and the anisotropic method to enhance the fingerprint in the recoverable region. The proposed algorithm has been evaluated on the database of the Fingerprint Verification Competition 2004, and the results show that our algorithm performs well in less time.
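The first two stages, normalization to a target mean and variance and block-wise orientation estimation from the structure tensor, follow standard formulations and can be sketched directly. The Gabor/anisotropic filtering stage is omitted, and the target moments and block size below are hypothetical choices.

```python
import numpy as np

def normalize(img, m0=100.0, v0=100.0):
    """Map the image to a desired mean/variance to even out gray levels."""
    m, v = img.mean(), img.var() + 1e-8
    dev = np.sqrt(v0 * (img - m) ** 2 / v)
    return np.where(img > m, m0 + dev, m0 - dev)

def orientation_field(img, block=16):
    """Estimate ridge orientation per block from the structure tensor."""
    gy, gx = np.gradient(img.astype(float))        # image gradients
    jxx, jyy, jxy = gx * gx, gy * gy, gx * gy
    h, w = img.shape
    theta = np.zeros((h // block, w // block))
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            sxx = jxx[i:i + block, j:j + block].sum()
            syy = jyy[i:i + block, j:j + block].sum()
            sxy = jxy[i:i + block, j:j + block].sum()
            # Dominant gradient direction; ridges run orthogonal to it.
            theta[i // block, j // block] = 0.5 * np.arctan2(2 * sxy, sxx - syy) + np.pi / 2
    return theta

img = np.random.randint(0, 256, (128, 128)).astype(float)   # stand-in image
field = orientation_field(normalize(img))
```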
Toward an Objective Enhanced-V Detection Algorithm
NASA Technical Reports Server (NTRS)
Moses, John F.; Brunner, Jason C.; Feltz, Wayne F.; Ackerman, Steven A.; Rabin, Robert M.
2007-01-01
The area of coldest cloud tops above thunderstorms sometimes has a distinct V or U shape. This pattern, often referred to as an "enhanced-V" signature, has been observed to occur during and preceding severe weather. This study describes an algorithmic approach to objectively detect overshooting tops, temperature couplets, and enhanced-V features with observations from the Geostationary Operational Environmental Satellite and Low Earth Orbit data. The methodology applies temperature, temperature-difference, and distance thresholds for the overshooting-top and temperature-couplet detection components of the algorithm, and cross-correlation statistics of pixels for the enhanced-V detection component. The effectiveness of the overshooting-top and temperature-couplet detection components is examined using GOES and MODIS image data for case studies from the 2003-2006 seasons. The main goal is for the algorithm to be useful for operations with future sensors, such as GOES-R.
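The thresholding logic for the overshooting-top and couplet components can be sketched on a brightness-temperature grid: flag pixels that are both very cold and locally prominent, then look for a nearby warm anomaly. All threshold values below are hypothetical placeholders, not the algorithm's tuned parameters.

```python
import numpy as np

def detect_overshooting_tops(tb, cold_thresh=215.0, prominence=2.0):
    """Flag IR brightness-temperature minima as candidate overshooting tops.

    tb: 2-D array of brightness temperatures (K). A pixel qualifies if it is
    colder than cold_thresh and colder than its 8 neighbors by at least
    `prominence` K on average (hypothetical thresholds).
    """
    h, w = tb.shape
    tops = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = tb[i-1:i+2, j-1:j+2]
            neighbor_mean = (window.sum() - tb[i, j]) / 8.0
            if tb[i, j] < cold_thresh and neighbor_mean - tb[i, j] >= prominence:
                tops.append((i, j))
    return tops

def with_temperature_couplet(tb, tops, warm_offset=6.0, max_dist=8):
    """Keep tops that have a sufficiently warm pixel within max_dist pixels."""
    return [(i, j) for (i, j) in tops
            if tb[max(0, i - max_dist):i + max_dist + 1,
                  max(0, j - max_dist):j + max_dist + 1].max() - tb[i, j] >= warm_offset]
```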
Um, Ki Sung; Kwak, Yun Sik; Cho, Hune; Kim, Il Kon
2005-11-01
A basic assumption of the Health Level Seven (HL7) protocol is 'no limitation of message length'. However, most existing commercial HL7 interface engines do limit message length because they use the string-array method, in which the HL7 message parsing process runs entirely in main memory. In particular, messages with image and multimedia data create a long string array and can cause the computer system to raise critical and fatal problems. Consequently, such HL7 messages cannot handle the image and multimedia data necessary in modern medical records. This study aims to solve this problem with a 'streaming algorithm' method. This new method for HL7 message parsing applies a character-stream object that processes characters one at a time between main memory and the hard disk, so that the processing load on main memory is alleviated. The main functions of the new engine are generating, parsing, validating, browsing, sending, and receiving HL7 messages. The engine can also parse and generate XML-formatted HL7 messages. This new HL7 engine successfully exchanged HL7 messages containing 10-megabyte images and discharge summary information between two university hospitals.
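The character-stream idea can be sketched in pure Python: the parser reads one character at a time from any stream, so a multi-megabyte message with embedded images never has to exist as a single in-memory string. This is a minimal illustration of the approach, not the engine's actual code; the sample message is hypothetical.

```python
import io

def parse_hl7_stream(stream):
    """Parse HL7 v2 segments character by character from a text stream.

    Segments end with a carriage return; fields are separated by '|'.
    Yields (segment_id, fields) tuples while holding only the current
    field in memory.
    """
    field, fields = [], []
    while True:
        ch = stream.read(1)
        if ch in ("", "\r", "\n"):              # segment or stream boundary
            if field or fields:
                fields.append("".join(field))
                yield fields[0], fields[1:]
                field, fields = [], []
            if ch == "":
                return
        elif ch == "|":
            fields.append("".join(field))
            field = []
        else:
            field.append(ch)

msg = "MSH|^~\\&|HIS|HOSP|LIS|LAB|202401011200||ADT^A01|123|P|2.4\rPID|1||99999^^^HOSP||DOE^JOHN\r"
for seg_id, fields in parse_hl7_stream(io.StringIO(msg)):
    print(seg_id, len(fields), "fields")
```

The same generator works unchanged on a file handle, so the source can be a file on disk rather than an in-memory string, which is the point of the streaming design.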
Improving Data Transfer Throughput with Direct Search Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balaprakash, Prasanna; Morozov, Vitali; Kettimuthu, Rajkumar
2016-01-01
Improving data transfer throughput over high-speed long-distance networks has become increasingly difficult. Numerous factors such as nondeterministic congestion, dynamics of the transfer protocol, and multiuser and multitask source and destination endpoints, as well as interactions among these factors, contribute to this difficulty. A promising approach to improving throughput consists in using parallel streams at the application layer. We formulate and solve the problem of choosing the number of such streams from a mathematical optimization perspective. We propose the use of direct search methods, a class of easy-to-implement and lightweight mathematical optimization algorithms, to improve the performance of data transfers by dynamically adapting the number of parallel streams in a manner that does not require domain expertise, instrumentation, analytical models, or historic data. We apply our method to transfers performed with the GridFTP protocol, and illustrate the effectiveness of the proposed algorithm when used within Globus, a state-of-the-art data transfer tool, on production WAN links and servers. We show that, compared with user default settings, our direct search methods can achieve up to 10x performance improvement under certain conditions. We also show that our method can overcome performance degradation due to external compute and network load on source endpoints, a common scenario at high-performance computing facilities.
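A direct search over the stream count needs nothing beyond a way to probe throughput at a given setting: evaluate the neighbors of the current point, move uphill, and contract the step when no neighbor improves. The sketch below is a minimal compass-style search under that assumption; `measure_throughput` is a hypothetical probe, not a GridFTP or Globus API.

```python
def tune_streams(measure_throughput, lo=1, hi=32, start=4):
    """Choose a parallel-stream count by a simple direct search.

    measure_throughput(n) runs a short probe transfer with n streams and
    returns its rate. No model, instrumentation, or historic data is used.
    """
    n = start
    best = measure_throughput(n)
    step = 2
    while step >= 1:
        improved = False
        for cand in (n - step, n + step):
            if lo <= cand <= hi:
                rate = measure_throughput(cand)
                if rate > best:
                    n, best, improved = cand, rate, True
        if not improved:
            step //= 2            # contract the step, as direct search does
    return n, best

# Hypothetical probe whose throughput peaks at 8 streams, then degrades:
probe = lambda k: 100 * k / (1 + (k / 8.0) ** 2)
print(tune_streams(probe))        # -> (8, 400.0)
```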
Sea lamprey orient toward a source of a synthesized pheromone using odor-conditioned rheotaxis
Johnson, Nicholas S.; Muhammad, Azizah; Thompson, Henry; Choi, Jongeun; Li, Weiming
2012-01-01
Characterization of vertebrate chemo-orientation strategies over long distances is difficult because it is often not feasible to conduct highly controlled hypothesis-based experiments in natural environments. To overcome this challenge, we couple in-stream behavioral observations of female sea lampreys (Petromyzon marinus) orienting to plumes of a synthesized mating pheromone, 7α,12α,24-trihydroxy-5α-cholan-3-one-24-sulfate (3kPZS), with engineering algorithms to systematically test chemo-orientation hypotheses. In-stream field observations and simulated movements of female sea lampreys according to control algorithms support that odor-conditioned rheotaxis is a component of the mechanism used to track plumes of 3kPZS over hundreds of meters in flowing water. Simulated movements of female sea lampreys do not support that rheotaxis or klinotaxis alone is sufficient to enable the movement patterns displayed by females in locating 3kPZS sources in the experimental stream. Odor-conditioned rheotaxis may not only be effective at small spatial scales as previously described in crustaceans, but may also be effectively used by fishes over hundreds of meters. These results may prove useful for developing management strategies for the control of invasive species that exploit the odor-conditioned tracking behavior and for developing biologically inspired navigation strategies for robotic fish.
Robust High-dimensional Bioinformatics Data Streams Mining by ODR-ioVFDT
Wang, Dantong; Fong, Simon; Wong, Raymond K.; Mohammed, Sabah; Fiaidhi, Jinan; Wong, Kelvin K. L.
2017-01-01
Outlier detection in bioinformatics data stream mining has received significant attention from research communities in recent years. Distinguishing noise from a genuine exception, and deciding whether to discard an outlier or to devise an extra decision path for accommodating it, poses a dilemma. In this paper, we propose a novel algorithm called ODR with incrementally Optimized Very Fast Decision Tree (ODR-ioVFDT) for taking care of outliers during continuous data learning. Using an adaptive interquartile-range-based identification method, a tolerance threshold is set and then used to judge whether a datum of exceptional value should be included for training or not. This differs from traditional outlier detection/removal approaches, which are two separate steps in processing the data. The proposed algorithm is tested on datasets from five bioinformatics scenarios, comparing the performance of our model against the same models without ODR. The results show that ODR-ioVFDT performs better in classification accuracy, kappa statistics, and time consumption. Applied to bioinformatics streaming-data processing, ODR-ioVFDT can help detect and quantify information about life phenomena, states, characteristics, variables, and components of an organism, supporting more effective diagnosis and treatment of disease. PMID:28230161
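The adaptive interquartile-range gate can be sketched independently of the decision-tree learner: keep a sliding window of recent values and admit a new value for training only if it falls inside the IQR-based tolerance band. The window length and multiplier k below are hypothetical choices, and the ioVFDT learner itself is not reproduced.

```python
from collections import deque
import numpy as np

class IQROutlierGate:
    """Adaptive IQR gate for streaming outlier handling (sketch)."""

    def __init__(self, window=200, k=1.5):
        self.buf = deque(maxlen=window)
        self.k = k

    def admit(self, x: float) -> bool:
        """Return True if x should be used for training, False if withheld."""
        if len(self.buf) >= 20:                    # need some history first
            q1, q3 = np.percentile(self.buf, [25, 75])
            iqr = q3 - q1
            if not (q1 - self.k * iqr <= x <= q3 + self.k * iqr):
                self.buf.append(x)                 # still remember it, so the
                return False                       # band can adapt to real drift
        self.buf.append(x)
        return True

gate = IQROutlierGate()
values = list(np.random.normal(0, 1, 100)) + [9.0]
print([gate.admit(v) for v in values][-1])         # the spike is rejected
```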
New Algorithm Identifies Tidal Streams Oriented Along our Line-of-Sight
NASA Astrophysics Data System (ADS)
Lin, Ziyi; Newberg, Heidi; Amy, Paul; Martin, Charles Harold; Rockcliffe, Keighley E.
2018-01-01
The known dwarf galaxy tidal streams in the Milky Way are primarily oriented perpendicular to our line-of-sight. That is because they are concentrated into an observable higher-surface-brightness feature at a particular distance, or because they tightly cluster in line-of-sight velocity in a particular direction. Streams that are oriented along our line-of-sight are spread over a large range of distances and velocities. However, these distances and velocities are correlated in predictable ways. We used a set of randomly oriented Milky Way orbits to develop a technique that bins stars in combinations of distance and velocity that are likely for tidal streams. We applied this technique to identify previously unknown tidal streams in a set of blue horizontal branch stars in the first quadrant from Data Release 10 of the Sloan Digital Sky Survey (SDSS). This project was supported by NSF grant AST 16-15688, a Rensselaer Presidential Fellowship, the NASA/NY Space Grant fellowship, and contributions made by The Marvin Clan, Babette Josephs, Manit Limlamai, and the 2015 Crowd Funding Campaign to Support Milky Way Research.
NASA Astrophysics Data System (ADS)
Kinzel, P. J.; Legleiter, C. J.; Nelson, J. M.
2009-12-01
Airborne bathymetric Light Detection and Ranging (LiDAR) systems designed for coastal and marine surveys are increasingly being deployed in fluvial environments. While the adaptation of this technology to rivers and streams would appear to be straightforward, technical challenges currently remain in achieving high levels of vertical accuracy and precision when mapping bathymetry in shallow fluvial settings. Collectively, these mapping errors have a direct bearing on hydraulic model predictions made using these data. We compared channel surveys conducted along the Platte River, Nebraska, and the Trinity River, California, using conventional ground-based methods with those made with the hybrid topographic/bathymetric Experimental Advanced Airborne Research LiDAR (EAARL). In the turbid and braided Platte River, a bathymetric-waveform processing algorithm was shown to enhance the definition of thalweg channels over a more simplified, first-surface waveform processing algorithm. Consequently, flow simulations using data processed with the shallow bathymetric algorithm resulted in improved prediction of wetted area relative to the first-surface algorithm, when compared to the wetted area in concurrent aerial imagery. However, when compared to using conventionally collected data for flow modeling, the inundation extent was overpredicted with the EAARL topography due to higher bed elevations measured by the LiDAR. In the relatively clear, meandering Trinity River, bathymetric processing algorithms were capable of defining a 3-meter-deep pool. However, a similar bias in depth measurement was observed, with the LiDAR measuring the elevation of the river bottom above its actual position, resulting in a predicted water surface higher than that measured by field data. This contribution addresses the challenge of making bathymetric measurements with the EAARL in different environmental conditions encountered in fluvial settings, explores technical issues related to reliably detecting the water surface and river bottom, and illustrates the impact of using LiDAR data and current processing techniques to produce above- and below-water topographic surfaces for hydraulic modeling and habitat applications.
Aspects and applications of patched grid calculations
NASA Technical Reports Server (NTRS)
Walters, R. W.; Switzer, G. F.; Thomas, J. L.
1986-01-01
Patched grid calculations within the framework of an implicit, flux-vector split upwind/relaxation algorithm for the Euler equations are presented. The effect of a metric-discontinuous interface on the convergence rate of the algorithm is discussed along with the spatial accuracy of the solution and the effect of curvature along an interface. Results are presented and discussed for the free-stream problem, shock reflection problem, supersonic inlet with a 5 degree ramp, aerodynamically choked inlet, and three-dimensional analytic forebody.
Multi-Core Programming Design Patterns: Stream Processing Algorithms for Dynamic Scene Perceptions
2014-05-01
processor developed by IBM and other companies, incorporates the POWER5 processor as the Power Processor Element (PPE), one of the early general... deliver a power-efficient single-precision peak performance of more than 256 GFlops. Substantially more raw power became available later, when nVIDIA... algorithms, including IBM's Cell/B.E., GPUs from nVidia and AMD, and many-core CPUs from Intel. The vast growth of digital video content has been a...
Forbes, David; Lockwood, Emma; Elhai, Jon D; Creamer, Mark; O'Donnell, Meaghan; Bryant, Richard; McFarlane, Alexander; Silove, Derrick
2011-07-01
The nature and structure of posttraumatic stress disorder (PTSD) has been the subject of much interest in recent times. This research comprises two streams: the first, a substantive body of work focusing specifically on the factor structure of PTSD; the second, exploring PTSD's relationship with other mood and anxiety disorders. The present study attempted to bring these two streams together by examining structural models of PTSD and their relationship with dimensions underlying other mood and anxiety disorders. PTSD, anxiety, and mood disorder data from 989 injury survivors interviewed 3 months following their injury were analyzed using a series of confirmatory factor analyses (CFA) to identify the optimal structural model. The CFA analyses indicated that the best-fitting model included PTSD's re-experiencing (B1-5), active avoidance (C1-2), and hypervigilance and startle (D4-5) symptoms loading onto a Fear factor (represented by panic disorder, agoraphobia, and social phobia) and the PTSD dysphoria symptoms (numbing symptoms C3-7 and hyperarousal symptoms D1-3) loading onto an Anxious-Misery/Distress factor (represented by depression, generalized anxiety disorder, and obsessive-compulsive disorder). The findings have implications for informing potential revisions to the structure of the diagnosis of PTSD and the diagnostic algorithm to be applied, with the aim of enhancing diagnostic specificity. Copyright © 2011 Elsevier B.V. All rights reserved.
A nematomorph parasite explains variation in terrestrial subsidies to trout streams in Japan
Sato, Takuya; Watanabe, Katsutoshi; Tokuchi, Naoko; Kamauchi, Hiromitsu; Harada, Yasushi; Lafferty, Kevin D.
2011-01-01
Nematomorph parasites alter the behavior of their orthopteran hosts, driving them to water and creating a source of food for stream salmonids. We investigated whether nematomorphs could explain variation in terrestrial subsidies across several streams. In nine study streams, orthopterans comprise much of the stomach contents of trout (46 ± 31% on average). Total mass of ingested prey per trout biomass positively correlated with the mass of orthopterans ingested, suggesting that the orthopterans enhanced absolute mass of prey consumption by the trout population. The orthopterans ingested per trout biomass positively correlated with the abundance of nematomorphs in the stream, but not with the abundance of camel crickets (the dominant hosts) around the streams. Streams in conifer plantations had fewer nematomorphs than streams in natural deciduous forests. These results provide the first quantitative evidence that a manipulative parasite can explain variation in the allochthonous energy flow through and across ecosystems.
Subjective comparison and evaluation of speech enhancement algorithms
Hu, Yi; Loizou, Philipos C.
2007-01-01
Making meaningful comparisons between the performance of the various speech enhancement algorithms proposed over the years has been elusive due to the lack of a common speech database, differences in the types of noise used, and differences in testing methodology. To facilitate such comparisons, we report on the development of a noisy speech corpus suitable for the evaluation of speech enhancement algorithms. This corpus is subsequently used for the subjective evaluation of 13 speech enhancement methods encompassing four classes of algorithms: spectral subtractive, subspace, statistical-model-based, and Wiener-type algorithms. The subjective evaluation was performed by Dynastat, Inc. using the ITU-T P.835 methodology designed to evaluate speech quality along three dimensions: signal distortion, noise distortion, and overall quality. This paper reports the results of the subjective tests. PMID:18046463
Grande Ronde Basin Fish Habitat Enhancement Project : 2007 Annual Report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGowan, Vance R.; Morton, Winston H.
2008-12-30
On July 1, 1984 the Bonneville Power Administration and the Oregon Department of Fish and Wildlife entered into an intergovernmental contract to initiate fish habitat enhancement work in the Joseph Creek subbasin of the Grande Ronde River Basin in northeast Oregon. In 1985 the Upper and Middle Grande Ronde River and Catherine Creek subbasins were included in the contract, and in 1996 the Wallowa River subbasin was added. The primary goal of 'The Grande Ronde Basin Fish Habitat Enhancement Project' is to create, protect, and restore riparian and instream habitat for anadromous salmonids, thereby maximizing opportunities for natural fish production within the basin. This project provided for implementation of Program Measure 703 (C)(1), Action Item 4.2 of the Northwest Power Planning Council's Columbia River Basin Fish and Wildlife Program (NPPC, 1987), and continues to be implemented as offsite mitigation for mainstem fishery losses caused by the Columbia River hydro-electric system. All work conducted by the Oregon Department of Fish and Wildlife and partners is on private lands and therefore requires that considerable time be spent developing rapport with landowners to gain acceptance of, and continued cooperation with, this program throughout 10-15 year lease periods. Both passive and active restoration treatment techniques are used. Passive regeneration of habitat, using riparian exclosure fencing and alternate water sources, is the primary method to restore degraded streams when restoration can be achieved primarily through changes in management. Active restoration techniques using plantings, bioengineering, site-specific instream structures, or whole stream channel alterations are utilized when streams are more severely degraded and not likely to recover in a reasonable timeframe. Individual projects contribute to and complement ecosystem and basin-wide watershed restoration efforts that are underway by state, federal, and tribal agencies, and coordinated by the Grande Ronde Model Watershed Program (Project No. 199202601). Work undertaken during 2007 included: (1) Starting 1 new fencing project in the NFJD subbasin that will protect an additional 1.82 miles of stream and 216.2 acres of habitat; (2) Constructing 0.47 miles of new channel on the Wallowa River to enhance habitat, restore natural channel dimensions, pattern, and profile, and reconnect approximately 18 acres of floodplain and wetland habitat; (3) Planting 22,100 plants along 3 streams totaling 3.6 stream miles; (4) Establishing 34 new photopoints on 5 projects and retaking 295 existing photopoint pictures; (5) Monitoring stream temperatures at 10 locations on 5 streams and conducting other monitoring activities; (6) Completing riparian fence, water gap, and other maintenance on 116.8 miles of project fences; (7) Initiating a comprehensive project summary report presenting conclusions about the benefits to focal species and management recommendations for the future. Since initiation of this program, 56 individual projects have been implemented, monitored, and maintained along 84.8 miles of anadromous fish bearing streams, protecting and enhancing 3,501 acres of riparian and instream habitat.
Byl, Thomas Duane; Smith, George F.
1994-01-01
Biomonitoring is often used to enhance or replace chemical monitoring when evaluating water quality in our streams. This report is intended to introduce the theories behind biomonitoring and some of the techniques used in a biomonitoring study. It also lists some of the advantages and limitations of biomonitoring.
Single Pass Streaming BLAST on FPGAs
Herbordt, Martin C.; Model, Josh; Sukhwani, Bharat; Gu, Yongfeng; VanCourt, Tom
2008-01-01
Approximate string matching is fundamental to bioinformatics and has been the subject of numerous FPGA acceleration studies. We address issues with respect to FPGA implementations of both BLAST- and dynamic-programming- (DP) based methods. Our primary contribution is a new algorithm for emulating the seeding and extension phases of BLAST. This operates in a single pass through a database at streaming rate, and with no preprocessing other than loading the query string. Moreover, it emulates parameters tuned to maximum possible sensitivity with no slowdown. While current DP-based methods also operate at streaming rate, generating results can be cumbersome. We address this with a new structure for data extraction. We present results from several implementations showing order-of-magnitude acceleration over serial reference code. A simple extension assures compatibility with NCBI BLAST. PMID:19081828
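The single-pass seeding idea translates naturally to software: index the query's fixed-length words once, then slide a window over the database stream and report exact word hits as they arrive. This is an illustrative CPU sketch of BLAST-style seeding only; the FPGA pipeline and the extension/scoring phases are not reproduced.

```python
def build_seed_index(query: str, w: int = 11):
    """Index every length-w word of the query (done once, before streaming)."""
    index = {}
    for i in range(len(query) - w + 1):
        index.setdefault(query[i:i + w], []).append(i)
    return index

def stream_hits(db_chars, index, w=11):
    """Scan the database stream once, yielding (db_offset, query_offset) seeds.

    Only a w-character window of the database is held at any time, so the
    scan proceeds at streaming rate with no database preprocessing.
    """
    window = ""
    for pos, ch in enumerate(db_chars):
        window = (window + ch)[-w:]
        if len(window) == w and window in index:
            for qpos in index[window]:
                yield pos - w + 1, qpos

query = "ACGTACGTTAGCACGT"
idx = build_seed_index(query, w=5)
print(list(stream_hits("TTACGTACGTTAGCAA", idx, w=5)))
```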
STREAM2016: Streaming Requirements, Experience, Applications and Middleware Workshop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, Geoffrey; Jha, Shantenu; Ramakrishnan, Lavanya
The Department of Energy (DOE) Office of Science (SC) facilities, including accelerators, light sources, and neutron sources, and sensors that study the environment and the atmosphere, are producing streaming data that need to be analyzed for next-generation scientific discoveries. There has been an explosion of new research and technologies for stream analytics arising from the academic and private sectors. However, there has been no corresponding effort in either documenting the critical research opportunities or building a community that can create and foster productive collaborations. The two-part workshop series STREAM: Streaming Requirements, Experience, Applications and Middleware Workshop (STREAM2015 and STREAM2016) was conducted to bring the community together and identify gaps and future efforts needed by both NSF and DOE. This report describes the discussions, outcomes, and conclusions from STREAM2016: Streaming Requirements, Experience, Applications and Middleware Workshop, the second of these workshops, held on March 22-23, 2016 in Tysons, VA. STREAM2016 focused on Department of Energy (DOE) applications, computational and experimental facilities, and software systems. Thus, the role of "streaming and steering" as a critical mode of connecting the experimental and computing facilities was pervasive throughout the workshop. Given the overlap in interests and challenges with industry, the workshop had significant presence from several innovative companies and major contributors. The requirements that drive the proposed research directions identified in this report show an important opportunity for building a competitive research and development program around streaming data. These findings and recommendations are consistent with the vision outlined in the NRC Frontiers of Data report and the National Strategic Computing Initiative (NSCI) [1, 2]. The discussions from the workshop are captured as topic areas covered in this report's sections. The report discusses four research directions driven by current and future application requirements reflecting the areas identified as important by STREAM2016. These include (i) Algorithms; (ii) Programming Models, Languages, and Runtime Systems; (iii) Human-in-the-loop and Steering in Scientific Workflows; and (iv) Facilities.
Accelerated Adaptive MGS Phase Retrieval
NASA Technical Reports Server (NTRS)
Lam, Raymond K.; Ohara, Catherine M.; Green, Joseph J.; Bikkannavar, Siddarayappa A.; Basinger, Scott A.; Redding, David C.; Shi, Fang
2011-01-01
The Modified Gerchberg-Saxton (MGS) algorithm is an image-based wavefront-sensing method that can turn any science-instrument focal plane into a wavefront sensor. MGS characterizes optical systems by estimating the wavefront errors in the exit pupil using only intensity images of a star or other point source of light. This innovative implementation significantly accelerates the MGS phase retrieval algorithm by using stream-processing hardware on conventional graphics cards. Stream processing is a relatively new, yet powerful, paradigm that allows parallel processing of applications which apply single instructions to multiple data (SIMD). These stream processors are designed specifically to support large-scale parallel computing on a single graphics chip. Computationally intensive algorithms, such as the Fast Fourier Transform (FFT), are particularly well suited to this computing environment. This high-speed version of MGS exploits commercially available hardware to accomplish the same objective in a fraction of the original time, performing the matrix calculations on nVidia graphics cards. The graphics processing unit (GPU) is hardware that is specialized for computationally intensive, highly parallel computation. From the software perspective, a parallel programming model called CUDA is used to transparently scale multicore parallelism in hardware. This technology gives computationally intensive applications access to the processing power of nVidia GPUs through a C/C++ programming interface. The AAMGS (Accelerated Adaptive MGS) software takes advantage of these advanced technologies to accelerate optical phase-error characterization. With a single PC containing four nVidia GTX-280 graphics cards, the new implementation can process four images simultaneously to produce a JWST (James Webb Space Telescope) wavefront measurement 60 times faster than the previous code.
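The core iteration that MGS builds on is the classical Gerchberg-Saxton loop: transform between pupil and focal planes with FFTs, impose the known amplitude in each plane, and keep the evolving phase estimate. The NumPy sketch below shows only that loop on synthetic data; the MGS modifications and the CUDA/GPU streaming are not reproduced, and the sizes are arbitrary.

```python
import numpy as np

def gerchberg_saxton(pupil_amp, focal_amp, n_iter=200, seed=0):
    """Classical Gerchberg-Saxton phase retrieval with NumPy FFTs.

    pupil_amp / focal_amp: known amplitude constraints in the two planes
    (e.g. aperture shape and measured point-source image). Returns the
    estimated pupil-plane phase.
    """
    rng = np.random.default_rng(seed)
    phase = rng.uniform(-np.pi, np.pi, pupil_amp.shape)
    for _ in range(n_iter):
        pupil = pupil_amp * np.exp(1j * phase)
        focal = np.fft.fft2(pupil)
        focal = focal_amp * np.exp(1j * np.angle(focal))  # impose image amplitude
        phase = np.angle(np.fft.ifft2(focal))             # keep phase, impose aperture
    return phase

# Tiny self-consistent test: recover phase from a synthetic measurement.
n = 64
aperture = (np.hypot(*(np.indices((n, n)) - n / 2)) < n / 4).astype(float)
true_phase = 0.3 * np.indices((n, n))[0] / n
measured = np.abs(np.fft.fft2(aperture * np.exp(1j * true_phase)))
estimate = gerchberg_saxton(aperture, measured)
```

Each iteration is dominated by the two FFTs, which is exactly why the algorithm maps so well onto GPU stream processors.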
A PIV Study of Slotted Air Injection for Jet Noise Reduction
NASA Technical Reports Server (NTRS)
Henderson, Brenda S.; Wernet, Mark P.
2012-01-01
Results from acoustic and Particle Image Velocimetry (PIV) measurements are presented for single and dual-stream jets with fluidic injection on the core stream. The fluidic injection nozzles delivered air to the jet through slots on the interior of the nozzle at the nozzle trailing edge. The investigations include subsonic and supersonic jet conditions. Reductions in broadband shock noise and low frequency mixing noise were obtained with the introduction of fluidic injection on single stream jets. Fluidic injection was found to eliminate shock cells, increase jet mixing, and reduce turbulent kinetic energy levels near the end of the potential core. For dual-stream subsonic jets, the introduction of fluidic injection reduced low frequency noise in the peak jet noise direction and enhanced jet mixing. For dual-stream jets with supersonic fan streams and subsonic core streams, the introduction of fluidic injection in the core stream impacted the jet shock cell structure but had little effect on mixing between the core and fan streams.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mackey, Lester; Nachman, Benjamin; Schwartzman, Ariel
Collimated streams of particles produced in high energy physics experiments are organized using clustering algorithms to form jets. To construct jets, the experimental collaborations based at the Large Hadron Collider (LHC) primarily use agglomerative hierarchical clustering schemes known as sequential recombination. We propose a new class of algorithms for clustering jets that use infrared and collinear safe mixture models. These new algorithms, known as fuzzy jets, are clustered using maximum likelihood techniques and can dynamically determine various properties of jets, like their size. We show that the fuzzy jet size adds additional information to conventional jet tagging variables in boosted topologies. Furthermore, we study the impact of pileup and show that with some slight modifications to the algorithm, fuzzy jets can be stable up to high pileup interaction multiplicities.
Structural damage identification using an enhanced thermal exchange optimization algorithm
NASA Astrophysics Data System (ADS)
Kaveh, A.; Dadras, A.
2018-03-01
The recently developed thermal exchange optimization (TEO) algorithm is enhanced and applied to a damage detection problem. An offline parameter-tuning approach is utilized to set the internal parameters of the TEO, resulting in the enhanced thermal exchange optimization (ETEO) algorithm. The damage detection problem is defined as an inverse problem, and ETEO is applied to a wide range of structures. Several scenarios with noisy and noise-free modal data are tested, and the locations and extents of damage are identified with good accuracy.
Grande Ronde Basin Fish Habitat Enhancement Project, Annual Report 2002-2003.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGowan, Vance
On July 1, 1984 the Bonneville Power Administration and the Oregon Department of Fish and Wildlife entered into an agreement to initiate fish habitat enhancement work in the Joseph Creek subbasin of the Grande Ronde River Basin in northeast Oregon. In July of 1985 the Upper and Middle Grande Ronde River, and Catherine Creek subbasins were included in the intergovernmental contract, and on March 1, 1996 the Wallowa River subbasin was added. The primary goal of 'The Grande Ronde Basin Fish Habitat Enhancement Project' is to create, protect, and restore riparian and instream habitat for anadromous salmonids, thereby maximizing opportunities for natural fish production within the basin. This project provided for implementation of Program Measure 703 (C)(1), Action Item 4.2 of the Northwest Power Planning Council's Columbia River Basin Fish and Wildlife Program (NPPC, 1987), and continues to be implemented as offsite mitigation for mainstem fishery losses caused by the Columbia River hydro-electric system. All work conducted by the Oregon Department of Fish and Wildlife is on private lands and therefore requires that considerable time be spent developing rapport with landowners to gain acceptance of, and continued cooperation with this program throughout 10-15 year lease periods. This project calls for passive regeneration of habitat, using riparian exclosure fencing as the primary method to restore degraded streams to a normative condition. Active remediation techniques using plantings, off-site water developments, site-specific instream structures, or whole channel alterations are also utilized where applicable. Individual projects contribute to and complement ecosystem and basin-wide watershed restoration efforts that are underway by state, federal, and tribal agencies, and local watershed councils. Work undertaken during 2002 included: (1) Implementing 1 new fencing project in the Wallowa subbasin that will protect an additional 0.95 miles of stream and 22.9 acres of habitat; (2) Conducting instream work activities in 3 streams to enhance habitat and/or restore natural channel dimensions, patterns or profiles; (3) Planting 31,733 plants along 3.7 stream miles; (4) Establishing 71 new photopoints and retaking 254 existing photopoint pictures; (5) Monitoring stream temperatures at 12 locations on 6 streams; (6) Completing riparian fence, water gap and other maintenance on 100.5 miles of project fences. Since initiation of the project in 1984 over 68.7 miles of anadromous fish bearing streams and 1,933 acres of habitat have been protected, enhanced and maintained.
Online Tracking Algorithms on GPUs for the P̅ANDA Experiment at FAIR
NASA Astrophysics Data System (ADS)
Bianchi, L.; Herten, A.; Ritman, J.; Stockmanns, T.; Adinetz,
2015-12-01
P̅ANDA is a future hadron and nuclear physics experiment at the FAIR facility under construction in Darmstadt, Germany. In contrast to the majority of current experiments, P̅ANDA's strategy for data acquisition is based on event reconstruction from free-streaming data, performed in real time entirely by software algorithms using global detector information. This paper reports the status of the development of algorithms for the reconstruction of charged-particle tracks as optimized online data-processing applications, using general-purpose graphics processing units (GPUs). Two track-finding algorithms, the Triplet Finder and the Circle Hough, are described, and details of their GPU implementations are highlighted. Average track reconstruction times of less than 100 ns are obtained running the Triplet Finder on state-of-the-art GPU cards. In addition, a proof-of-concept system for the dispatch of data to tracking algorithms using message queues is presented.
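The Circle Hough idea is easy to state in code: every hit votes for all circle centers at a fixed radius from it, and genuine tracks emerge as peaks in the accumulator. The sketch below is a plain CPU version with a known radius and hypothetical coordinates; the GPU implementation parallelizes the per-hit voting, and P̅ANDA's actual track parametrization is not reproduced.

```python
import numpy as np

def circle_hough(hits, radius, span=10.0, nbins=100):
    """Vote in (cx, cy) space for circle centers of a known radius."""
    acc = np.zeros((nbins, nbins))
    centers = np.linspace(-span, span, nbins)
    phi = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    for x, y in hits:
        cx = np.digitize(x + radius * np.cos(phi), centers) - 1
        cy = np.digitize(y + radius * np.sin(phi), centers) - 1
        ok = (cx >= 0) & (cx < nbins) & (cy >= 0) & (cy < nbins)
        acc[cx[ok], cy[ok]] += 1                  # each hit votes along a circle
    i, j = np.unravel_index(acc.argmax(), acc.shape)
    return centers[i], centers[j]

# Synthetic track: noisy hits on a circle of radius 4 centered near (3, -2).
rng = np.random.default_rng(2)
t = np.linspace(0, np.pi, 30)
hits = np.c_[3 + 4 * np.cos(t), -2 + 4 * np.sin(t)] + rng.normal(0, 0.05, (30, 2))
print(circle_hough(hits, radius=4.0))             # approximately (3, -2)
```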
A parallel variable metric optimization algorithm
NASA Technical Reports Server (NTRS)
Straeter, T. A.
1973-01-01
An algorithm designed to exploit the parallel computing or vector streaming (pipeline) capabilities of computers is presented. When p is the degree of parallelism, one cycle of the parallel variable metric algorithm is defined as follows: first, the function and its gradient are computed in parallel at p different values of the independent variable; then the metric is modified by p rank-one corrections; and finally, a single univariate minimization is carried out in the Newton-like direction. Several properties of this algorithm are established. The convergence of the iterates to the solution is proved for a quadratic functional on a real separable Hilbert space. For a finite-dimensional space the convergence is in one cycle when p equals the dimension of the space. Results of numerical experiments indicate that the new algorithm will exploit parallel or pipeline computing capabilities to effect faster convergence than serial techniques.
Scheduling logic for Miles-In-Trail traffic management
NASA Technical Reports Server (NTRS)
Synnestvedt, Robert G.; Swenson, Harry; Erzberger, Heinz
1995-01-01
This paper presents an algorithm which can be used for scheduling arrival air traffic in an Air Route Traffic Control Center (ARTCC or Center) entering a Terminal Radar Approach Control (TRACON) facility. The algorithm aids a Traffic Management Coordinator (TMC) in deciding how to restrict traffic while the traffic expected to arrive in the TRACON exceeds the TRACON capacity. The restrictions employed fall under the category of Miles-in-Trail, one of two principal traffic separation techniques used in scheduling arrival traffic. The algorithm calculates aircraft separations for each stream of aircraft destined to the TRACON. The calculations depend upon TRACON characteristics, TMC preferences, and other parameters adapted to the specific needs of scheduling traffic in a Center. Some preliminary results of traffic simulations scheduled by this algorithm are presented, and conclusions are drawn as to the effectiveness of using this algorithm in different traffic scenarios.
Scalable Domain Decomposed Monte Carlo Particle Transport
NASA Astrophysics Data System (ADS)
O'Brien, Matthew Joseph
In this dissertation, we present the parallel algorithms necessary to run domain-decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are: • Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node. • Load balancing: keeps the workload per processor as even as possible so the calculation runs efficiently. • Global particle find: if particles are on the wrong processor, globally resolve their locations to the correct processor based on particle coordinates and the background domain. • Supporting algorithms: visualizing constructive solid geometry, sourcing particles, deciding when particle streaming communication is completed, and spatial redecomposition. These algorithms are some of the most important parallel algorithms required for domain-decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms on up to 2 million MPI processes on the Sequoia supercomputer.
Albouy, Philippe; Weiss, Aurélien; Baillet, Sylvain; Zatorre, Robert J
2017-04-05
The implication of the dorsal stream in manipulating auditory information in working memory has been recently established. However, the oscillatory dynamics within this network and its causal relationship with behavior remain undefined. Using simultaneous MEG/EEG, we show that theta oscillations in the dorsal stream predict participants' manipulation abilities during memory retention in a task requiring the comparison of two patterns differing in temporal order. We investigated the causal relationship between brain oscillations and behavior by applying theta-rhythmic TMS combined with EEG over the MEG-identified target (left intraparietal sulcus) during the silent interval between the two stimuli. Rhythmic TMS entrained theta oscillation and boosted participants' accuracy. TMS-induced oscillatory entrainment scaled with behavioral enhancement, and both gains varied with participants' baseline abilities. These effects were not seen for a melody-comparison control task and were not observed for arrhythmic TMS. These data establish theta activity in the dorsal stream as causally related to memory manipulation. Copyright © 2017 Elsevier Inc. All rights reserved.
A Streaming PCA VLSI Chip for Neural Data Compression.
Wu, Tong; Zhao, Wenfeng; Guo, Hongsun; Lim, Hubert H; Yang, Zhi
2017-12-01
Neural recording system miniaturization and integration with low-power wireless technologies require compressing neural data before transmission. Feature extraction is a procedure to represent data in a low-dimensional space; its integration into a recording chip can be an efficient approach to compress neural data. In this paper, we propose a streaming principal component analysis algorithm and its microchip implementation to compress multichannel local field potential (LFP) and spike data. The circuits have been designed in a 65-nm CMOS technology and occupy a silicon area of 0.06 mm². Throughout the experiments, the chip compresses LFPs by a factor of 10 at the expense of as low as 1% reconstruction error and 144 nW/channel power consumption; for spikes, the achieved compression ratio is 25, with 8% reconstruction error and 3.05 μW/channel power consumption. In addition, the algorithm and its hardware architecture can swiftly adapt to nonstationary spiking activities, which enables efficient hardware sharing among multiple channels to support a high channel count recorder.
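Streaming PCA in software can be sketched with Oja's subspace rule: each incoming sample nudges the component estimates, so no data matrix is ever stored, which is the property that makes on-chip feature extraction feasible. This is a generic sketch of the streaming-PCA idea, not the chip's exact algorithm or its fixed-point implementation.

```python
import numpy as np

def streaming_pca(stream, dim, n_components=2, lr=0.01):
    """Track the principal subspace of a data stream with Oja's rule."""
    rng = np.random.default_rng(0)
    W, _ = np.linalg.qr(rng.normal(size=(dim, n_components)))
    for x in stream:
        x = np.asarray(x, dtype=float)
        y = W.T @ x                            # project sample onto estimate
        W += lr * np.outer(x - W @ y, y)       # Oja subspace update
        W, _ = np.linalg.qr(W)                 # keep the basis orthonormal
    return W

rng = np.random.default_rng(1)
samples = rng.normal(size=(2000, 8)) * np.array([5.0, 3.0, 1, 1, 1, 1, 1, 1])
W = streaming_pca(samples, dim=8)
# Columns of W converge to the span of the two largest-variance axes (0 and 1),
# so projecting onto W compresses 8 channels to 2 coefficients per sample.
```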
Low-complexity transcoding algorithm from H.264/AVC to SVC using data mining
NASA Astrophysics Data System (ADS)
Garrido-Cantos, Rosario; De Cock, Jan; Martínez, Jose Luis; Van Leuven, Sebastian; Cuenca, Pedro; Garrido, Antonio
2013-12-01
Nowadays, networks and terminals with diverse characteristics of bandwidth and capabilities coexist. To ensure a good quality of experience, this diverse environment demands adaptability of the video stream. In general, video contents are compressed to save storage capacity and to reduce the bandwidth required for its transmission. Therefore, if these compressed video streams were compressed using scalable video coding schemes, they would be able to adapt to those heterogeneous networks and a wide range of terminals. Since the majority of the multimedia contents are compressed using H.264/AVC, they cannot benefit from that scalability. This paper proposes a low-complexity algorithm to convert an H.264/AVC bitstream without scalability to scalable bitstreams with temporal scalability in baseline and main profiles by accelerating the mode decision task of the scalable video coding encoding stage using machine learning tools. The results show that when our technique is applied, the complexity is reduced by 87% while maintaining coding efficiency.
A hybrid data compression approach for online backup service
NASA Astrophysics Data System (ADS)
Wang, Hua; Zhou, Ke; Qin, MingKang
2009-08-01
With the popularity of SaaS (Software as a Service), backup services have become a hot topic in storage applications. Given the large number of backup users, reducing the massive data load is a key problem for system designers, and data compression provides a good solution. Traditional data compression applications tend to adopt a single method, which has limitations in some respects: data stream compression can only realize intra-file compression, while de-duplication only eliminates inter-file redundant data, so neither alone meets the efficiency needs of backup service software. This paper proposes a novel hybrid compression approach with two levels: global compression and block compression. The former eliminates redundant inter-file copies across different users; the latter adopts data stream compression technology to realize intra-file de-duplication. Several compression algorithms were evaluated to measure compression ratio and CPU time, and the suitability of different algorithms in particular situations is also analyzed. The performance analysis shows that great improvement is achieved through the hybrid compression policy.
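The two-level policy can be sketched directly: split data into blocks, use a content hash to store each distinct block once across all users (the global level), and zlib-compress the unique blocks (the block level). This is a minimal sketch of the policy with hypothetical class and method names, not the paper's system.

```python
import hashlib
import zlib

class HybridBackupStore:
    """Global de-duplication across users plus per-block stream compression."""

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}                      # digest -> compressed block

    def put(self, data: bytes):
        """Store a file; return the recipe of digests needed to rebuild it."""
        recipe = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            digest = hashlib.sha256(block).hexdigest()
            if digest not in self.blocks:     # global dedup: skip known blocks
                self.blocks[digest] = zlib.compress(block)
            recipe.append(digest)
        return recipe

    def get(self, recipe) -> bytes:
        return b"".join(zlib.decompress(self.blocks[d]) for d in recipe)

store = HybridBackupStore()
r1 = store.put(b"shared header " * 1000 + b"user A body")
r2 = store.put(b"shared header " * 1000 + b"user B body")
assert store.get(r1).endswith(b"user A body")
print(len(store.blocks))   # shared leading blocks are stored only once
```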
NASA Astrophysics Data System (ADS)
Fiorino, Steven T.; Elmore, Brannon; Schmidt, Jaclyn; Matchefts, Elizabeth; Burley, Jarred L.
2016-05-01
Properly accounting for multiple scattering effects can have important implications for remote sensing and possibly directed energy applications. For example, increasing path radiance can affect signal noise. This study describes the implementation of a fast-calculating two-stream-like multiple scattering algorithm that captures azimuthal and elevation variations into the Laser Environmental Effects Definition and Reference (LEEDR) atmospheric characterization and radiative transfer code. The multiple scattering algorithm fully solves for molecular, aerosol, cloud, and precipitation single-scatter layer effects with a Mie algorithm at every calculation point/layer rather than using an interpolated value from a pre-calculated look-up table. This top-down cumulative diffusivity method first considers the incident solar radiance contribution to a given layer accounting for solid angle and elevation, and it then measures the contribution of diffused energy from previous layers, based on the transmission of the current level, to produce a cumulative radiance that is reflected from a surface and measured at the observer's aperture. Then a unique set of asymmetry and backscattering phase function parameter calculations is made, which accounts for the radiance loss due to the molecular and aerosol constituent reflectivity within a level and allows for a more accurate characterization of diffuse layers that contribute to multiply scattered radiances in inhomogeneous atmospheres. The code logic is valid for spectral bands between 200 nm and radio wavelengths, and the accuracy is demonstrated by comparing the results from LEEDR to observed sky radiance data.
NASA Astrophysics Data System (ADS)
Dou, P.
2017-12-01
Guangzhou has experienced a period of rapid urbanization, called "small change in three years and big change in five years," since the reform of China, resulting in significant land use/cover changes (LUC). To overcome the disadvantages of a single classifier for remote sensing image classification, a multiple classifier system (MCS) is proposed to improve the quality of remote sensing image classification. The new method combines the advantages of different learning algorithms and achieves higher accuracy (88.12%) than any single classifier. With the proposed MCS, land use/cover on Landsat images from 1987 to 2015 was obtained, and the LUC maps were used for three watersheds (Shijing River, Chebei Stream, and Shahe Stream) to estimate the impact of urbanization on flooding. The results show that with the high-accuracy LUC, the uncertainty in flood simulations is reduced effectively (by 15.5%, 17.3%, and 19.8% for the Shijing River, Chebei Stream, and Shahe Stream, respectively).
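The combination step of a multiple classifier system can be illustrated with the simplest rule, majority voting over per-pixel labels. This is a minimal sketch under that assumption; the study's actual combination rule and classifiers are not specified here, and the label arrays are hypothetical.

```python
import numpy as np
from collections import Counter

def majority_vote(predictions):
    """Combine label predictions from several classifiers by majority vote.

    predictions: list of 1-D integer arrays, one per classifier, each giving
    a class label per pixel. Returns the winning label per pixel.
    """
    stacked = np.vstack(predictions)
    return np.array([Counter(col).most_common(1)[0][0] for col in stacked.T])

# Three hypothetical classifiers disagreeing on some pixels:
c1 = np.array([0, 1, 2, 2])
c2 = np.array([0, 1, 1, 2])
c3 = np.array([0, 2, 2, 2])
print(majority_vote([c1, c2, c3]))   # [0 1 2 2]
```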
Geostatistical modeling of riparian forest microclimate and its implications for sampling
Eskelson, B.N.I.; Anderson, P.D.; Hagar, J.C.; Temesgen, H.
2011-01-01
Predictive models of microclimate under various site conditions in forested headwater stream-riparian areas are poorly developed, and sampling designs for characterizing underlying riparian microclimate gradients are sparse. We used riparian microclimate data collected at eight headwater streams in the Oregon Coast Range to compare ordinary kriging (OK), universal kriging (UK), and kriging with external drift (KED) for point prediction of mean maximum air temperature (Tair). Several topographic and forest structure characteristics were considered as site-specific parameters. Height above stream and distance to stream were the most important covariates in the KED models, which outperformed OK and UK in terms of root mean square error. Sample patterns were optimized based on the kriging variance and the weighted means of shortest distance criterion using the simulated annealing algorithm. The optimized sample patterns outperformed systematic sample patterns in terms of mean kriging variance, mainly for small sample sizes. These findings suggest methods for increasing the efficiency of microclimate monitoring in riparian areas.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clemmens, W.B.; Koupal, J.W.; Sabourin, M.A.
1993-07-20
Apparatus is described for detecting motor vehicle exhaust gas catalytic converter deterioration comprising a first exhaust gas oxygen sensor adapted for communication with an exhaust stream before passage of the exhaust stream through a catalytic converter and a second exhaust gas oxygen sensor adapted for communication with the exhaust stream after passage of the exhaust stream through the catalytic converter, an on-board vehicle computational means, said computational means adapted to accept oxygen content signals from the before and after catalytic converter oxygen sensors and adapted to generate signal threshold values, said computational means adapted to compare over repeated time intervals the oxygen content signals to the signal threshold values and to store the output of the compared oxygen content signals, and in response after a specified number of time intervals for a specified mode of motor vehicle operation to determine and indicate a level of catalyst deterioration.
Wong, Ling Ai; Shareef, Hussain; Mohamed, Azah; Ibrahim, Ahmad Asrul
2014-01-01
This paper presents the application of an enhanced opposition-based firefly algorithm to obtaining the optimal battery energy storage system (BESS) sizing in a photovoltaic-generation-integrated radial distribution network, in order to mitigate the voltage rise problem. Initially, the performance of the original firefly algorithm is enhanced by utilizing opposition-based learning and introducing an inertia weight. After evaluating the performance of the enhanced opposition-based firefly algorithm (EOFA) with fifteen benchmark functions, it is then adopted to determine the optimal size for the BESS. Two optimization processes are conducted: the first aims to obtain the optimal battery output power on an hourly basis, and the second aims to obtain the optimal BESS capacity by considering the state-of-charge constraint of the BESS. The effectiveness of the proposed method is validated by applying the algorithm to the 69-bus distribution system and by comparing the performance of EOFA with the conventional firefly algorithm and the gravitational search algorithm. Results show that EOFA performs best in mitigating the voltage rise problem.
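The two enhancements named above, opposition-based learning and an inertia weight, can be sketched in a few lines. This is a minimal illustration on a stand-in benchmark objective, not the paper's exact update rule; all parameter names and values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                      # stand-in benchmark objective
    return np.sum(x ** 2)

def eofa_step(pop, lo, hi, w=0.7, beta0=1.0, gamma=1.0, alpha=0.1):
    """One inertia-weighted firefly move plus opposition-based selection."""
    fit = np.array([sphere(x) for x in pop])
    new = pop.copy()
    for i in range(len(pop)):
        for j in range(len(pop)):
            if fit[j] < fit[i]:     # move firefly i toward brighter j
                r2 = np.sum((pop[i] - pop[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                new[i] = (w * new[i] + beta * (pop[j] - new[i])
                          + alpha * rng.uniform(-0.5, 0.5, pop.shape[1]))
    new = np.clip(new, lo, hi)
    opp = lo + hi - new             # opposition-based candidates
    both = np.vstack([new, opp])    # keep the best of originals + opposites
    best = np.argsort([sphere(x) for x in both])[: len(pop)]
    return both[best]

pop = rng.uniform(-5, 5, (10, 2))
for _ in range(50):
    pop = eofa_step(pop, -5.0, 5.0)
print("best objective:", min(sphere(x) for x in pop))
```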
NASA Astrophysics Data System (ADS)
Beskow, Samuel; de Mello, Carlos Rogério; Vargas, Marcelle M.; Corrêa, Leonardo de L.; Caldeira, Tamara L.; Durães, Matheus F.; de Aguiar, Marilton S.
2016-10-01
Information on stream flows is essential for water resources management. The stream flow that is equaled or exceeded 90% of the time (Q90) is one of the most used low-stream-flow indicators in many countries, and its determination is made from the frequency analysis of stream flows over a historical series. However, the stream flow gauging network is generally not spatially dense enough to meet the demands of technicians, so the most plausible alternative is the use of hydrological regionalization. The objective of this study was to couple the artificial intelligence (AI) techniques K-means, Partitioning Around Medoids (PAM), K-harmonic means (KHM), Fuzzy C-means (FCM), and Genetic K-means (GKA) with measures of low-stream-flow seasonality, to verify their potential to delineate hydrologically homogeneous regions for the regionalization of Q90. For the performance analysis of the proposed methodology, location attributes from 108 watersheds situated in southern Brazil, and attributes associated with their seasonality of low stream flows, were considered. It was concluded that: (i) AI techniques have the potential to delineate hydrologically homogeneous regions in the context of Q90 in the study region, especially the FCM method, based on fuzzy logic, and GKA, based on genetic algorithms; (ii) the attributes related to seasonality of low stream flows added important information that increased the accuracy of the grouping; and (iii) the adjusted mathematical models have excellent performance and can be used to estimate Q90 in locations lacking monitoring.
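The grouping step at the heart of this approach can be sketched with standard clustering tools. The sketch below uses scikit-learn's K-means on a hypothetical attribute table (the study also tested PAM, KHM, FCM, and GKA, which are not shown); the attribute values are synthetic placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical attribute table: one row per gauged watershed, with
# location (lat, lon) and a low-flow seasonality descriptor.
rng = np.random.default_rng(1)
attrs = np.column_stack([
    rng.uniform(-34, -28, 108),          # latitude (illustrative)
    rng.uniform(-56, -50, 108),          # longitude (illustrative)
    rng.uniform(0, 1, 108),              # seasonality index (illustrative)
])

X = StandardScaler().fit_transform(attrs)     # put attributes on one scale
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
# Each cluster is a candidate hydrologically homogeneous region; a separate
# regional model for Q90 would then be fit within each region.
print(np.bincount(labels))
```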
Milky Way mass and potential recovery using tidal streams in a realistic halo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonaca, Ana; Geha, Marla; Küpper, Andreas H. W.
2014-11-01
We present a new method for determining the Galactic gravitational potential based on forward modeling of tidal stellar streams. We use this method to test the performance of smooth and static analytic potentials in representing realistic dark matter halos, which have substructure and are continually evolving by accretion. Our FAST-FORWARD method uses a Markov Chain Monte Carlo algorithm to compare, in six-dimensional phase space, an 'observed' stream to models created in trial analytic potentials. We analyze a large sample of streams that evolved in the Via Lactea II (VL2) simulation, which represents a realistic Galactic halo potential. The recovered potential parameters are in agreement with the best fit to the global, present-day VL2 potential. However, merely assuming an analytic potential limits the dark matter halo mass measurement to an accuracy of 5%-20%, depending on the choice of analytic parameterization. Collectively, the mass estimates using streams from our sample reach this fundamental limit, but individually they can be highly biased. Individual streams can both under- and overestimate the mass, and the bias is progressively worse for those with smaller perigalacticons, motivating the search for tidal streams at galactocentric distances larger than 70 kpc. We estimate that the assumption of a static and smooth dark matter potential in modeling of the GD-1- and Pal5-like streams introduces an error of up to 50% in the Milky Way mass estimates.
Stream Tracker: Crowd sourcing and remote sensing to monitor stream flow intermittence
NASA Astrophysics Data System (ADS)
Puntenney, K.; Kampf, S. K.; Newman, G.; Lefsky, M. A.; Weber, R.; Gerlich, J.
2017-12-01
Streams that do not flow continuously in time and space support diverse aquatic life and can be critical contributors to downstream water supply. However, these intermittent streams are rarely monitored and poorly mapped. Stream Tracker is a community-powered stream monitoring project that pairs citizen-contributed observations of streamflow presence or absence with a network of streamflow sensors and remotely sensed data from satellites to track when and where water is flowing in intermittent stream channels. Citizens can visit sites on roads and trails to track flow and contribute their observations to the project site hosted by CitSci.org. Data can be entered using either a mobile application with offline capabilities or an online data entry portal. The sensor network provides a consistent record of streamflow and flow presence/absence across a range of elevations and drainage areas. Capacitance, resistance, and laser sensors have been deployed to determine the most reliable, low-cost sensor that could be mass distributed to track streamflow intermittence over a larger number of sites. Streamflow presence or absence observations from the citizen and sensor networks are then compared to satellite imagery to improve flow detection algorithms using remotely sensed data from Landsat. In the first two months of this project, 1,287 observations have been made at 241 sites by 24 project members across northern and western Colorado.
VMCast: A VM-Assisted Stability Enhancing Solution for Tree-Based Overlay Multicast.
Gu, Weidong; Zhang, Xinchang; Gong, Bin; Zhang, Wei; Wang, Lu
2015-01-01
Tree-based overlay multicast is an effective group communication method for media streaming applications. However, a group member's departure causes all of its descendants to be disconnected from the multicast tree for some time, which results in poor performance. This problem is difficult to address because the overlay multicast tree is intrinsically unstable. In this paper, we propose a novel stability-enhancing solution, VMCast, for tree-based overlay multicast. This solution uses two types of on-demand cloud virtual machines (VMs), i.e., multicast VMs (MVMs) and compensation VMs (CVMs). MVMs are used to disseminate the multicast data, whereas CVMs are used to offer streaming compensation. The VMs used in the same cloud datacenter constitute a VM cluster. Each VM cluster is responsible for a service domain (VMSD), and each group member belongs to a specific VMSD. The data source delivers the multicast data to MVMs through a reliable path, and MVMs further disseminate the data to group members along domain overlay multicast trees. This approach structurally improves the stability of the overlay multicast tree. We further utilized CVM-based streaming compensation to enhance the stability of the data distribution in the VMSDs. VMCast can be used as an extension to existing tree-based overlay multicast solutions, to provide better services for media streaming applications. We applied VMCast to two application instances (i.e., HMTP and HCcast). The results show that it can markedly enhance the stability of the data distribution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huntington, Charles W.
If implemented, the Orofino Creek Passage Project will provide adult fish passage at barrier waterfalls on Orofino Creek, Idaho, and give anadromous salmonids access to upstream habitat. Anadromous fish are currently blocked at Orofino Falls, 8.3 km above the stream's confluence with the Clearwater River. This report summarizes results of a study to determine the potential for increasing natural production of summer steelhead (Salmo gairdneri) and spring chinook salmon (Oncorhynchus tschawytscha) in the Orofino Creek drainage by enhancing adult fish passage. Data on fish habitat, migration barriers, stream temperatures and fish populations in the drainage were collected during 1987 and provided a basis for estimating the potential for self-sustaining anadromous salmonid production above Orofino Falls. Between 84.7 and 103.6 km of currently inaccessible streams would be available to anadromous fish following project implementation, depending on the level of passage enhancement above Orofino Falls. These streams contain habitat of poor to good quality for anadromous salmonids. Low summer flows and high water temperatures reduce habitat quality in lower mainstem Orofino Creek. Several streams in the upper watershed have habitat that is dominated by brook trout and may be poorly utilized by steelhead or salmon. 32 refs., 20 figs., 22 tabs.
Acoustic Streaming and Heat and Mass Transfer Enhancement
NASA Technical Reports Server (NTRS)
Trinh, E. H.; Gopinath, A.
1996-01-01
A second-order effect associated with high-intensity sound fields, acoustic streaming has historically been investigated to gain a fundamental understanding of its controlling mechanisms and to apply it to practical aspects of heat and mass transfer enhancement. The objectives of this new research project are to utilize a unique experimental technique implementing ultrasonic standing waves in closed cavities to study the details of the generation of the steady-state convective streaming flows and of their interaction with the boundary of ultrasonically levitated near-spherical solid objects. The goals are to further extend the existing theoretical studies of streaming flows and sample interactions to higher streaming Reynolds number values, for larger sample size relative to the wavelength, and for a Prandtl and Nusselt number parameter range characteristic of both gaseous and liquid host media. Experimental studies will be conducted in support of the theoretical developments, and the crucial impact of microgravity will be to allow the neglect of natural thermal buoyancy. The direct application to heat and mass transfer in the absence of gravity will be emphasized in order to investigate a space-based experiment, but both existing and novel ground-based scientific and technological relevance will also be pursued.
Real-time detection and classification of anomalous events in streaming data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferragut, Erik M.; Goodall, John R.; Iannacone, Michael D.
2016-04-19
A system is described for receiving a stream of events and scoring the events based on anomalousness and maliciousness (or other classification). The events can be displayed to a user in user-defined groupings in an animated fashion. The system can include a plurality of anomaly detectors that together implement an algorithm to identify low probability events and detect atypical traffic patterns. The atypical traffic patterns can then be classified as being of interest or not. In one particular example, in a network environment, the classification can be whether the network traffic is malicious or not.
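The low-probability-event idea behind such detectors can be shown with a tiny streaming scorer. This is a minimal sketch of only that one component; the patented system combines multiple detectors and a maliciousness classifier, and the event keys and smoothing here are illustrative.

```python
import math
from collections import defaultdict

class StreamAnomalyScorer:
    """Minimal frequency-based scorer: rare event types get high scores."""
    def __init__(self):
        self.counts = defaultdict(int)
        self.total = 0

    def score(self, event_key):
        # Laplace-smoothed probability of this event type so far
        p = (self.counts[event_key] + 1) / (self.total + len(self.counts) + 1)
        self.counts[event_key] += 1
        self.total += 1
        return -math.log(p)          # anomalousness as surprisal

scorer = StreamAnomalyScorer()
for evt in ["http", "http", "dns", "http", "telnet"]:
    print(evt, round(scorer.score(evt), 2))   # rare "telnet" scores highest
```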
Method for processing aqueous wastes
Pickett, John B.; Martin, Hollis L.; Langton, Christine A.; Harley, Willie W.
1993-01-01
A method for treating waste water such as that from an industrial processing facility comprising the separation of the waste water into a dilute waste stream and a concentrated waste stream. The concentrated waste stream is treated chemically to enhance precipitation and then allowed to separate into a sludge and a supernate. The supernate is skimmed or filtered from the sludge and blended with the dilute waste stream to form a second dilute waste stream. The sludge remaining is mixed with cementitious material, rinsed to dissolve soluble components, then pressed to remove excess water and dissolved solids before being allowed to cure. The dilute waste stream is also chemically treated to decompose carbonate complexes and metal ions and then mixed with cationic polymer to cause the precipitated solids to flocculate. Filtration of the flocculant removes sufficient solids to allow the waste water to be discharged to the surface of a stream. The filtered material is added to the sludge of the concentrated waste stream. The method is also applicable to the treatment and removal of soluble uranium from aqueous streams, such that the treated stream may be used as a potable water supply.
NASA Astrophysics Data System (ADS)
Wu, T.; Li, T.; Li, J.; Wang, G.
2017-12-01
Improved drainage network extraction can be achieved by flow enforcement, whereby information from known river maps is imposed on the flow-path modeling process. However, the common elevation-based stream burning method can sometimes cause unintended topological errors and misinterpret the overall drainage pattern. We present an enhanced flow enforcement method to facilitate an accurate and efficient process of drainage network extraction. Both the topology of the mapped hydrography and the initial landscape of the DEM are preserved and fully utilized in the proposed method. An improved stream rasterization is achieved, yielding a continuous, unambiguous, and stream-collision-free raster equivalent of the stream vectors for flow enforcement. By imposing priority-based enforcement with a complementary flow direction enhancement procedure, the drainage patterns of the mapped hydrography are fully represented in the derived results. The proposed method was tested over the Rogue River Basin, using DEMs of various resolutions. As indicated by the visual and statistical analyses, the proposed method has three major advantages: (1) it significantly reduces the occurrence of topological errors, yielding very accurate watershed partition and channel delineation; (2) it ensures scale-consistent performance on DEMs of various resolutions; and (3) the entire extraction process is well designed to achieve great computational efficiency.
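For context, the classic elevation-based stream burning that this abstract improves upon is trivially small: DEM cells falling on the rasterized stream network are lowered by a fixed amount so that computed flow paths follow the mapped streams. The sketch below shows that baseline only (with illustrative values), not the proposed priority-based method.

```python
import numpy as np

def burn_streams(dem, stream_mask, drop=10.0):
    """Baseline stream burning: lower DEM cells on the rasterized stream
    network by a fixed drop. Simple, but can distort drainage topology,
    which is the problem the enhanced method addresses."""
    burned = dem.astype(float).copy()
    burned[stream_mask] -= drop
    return burned

dem = np.array([[105.0, 104.0, 103.0],
                [104.0, 103.5, 102.0],
                [103.0, 102.0, 101.0]])
streams = np.array([[False, False, False],
                    [False, True,  True ],
                    [False, True,  False]])
print(burn_streams(dem, streams))
```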
Sound stream segregation: a neuromorphic approach to solve the “cocktail party problem” in real-time
Thakur, Chetan Singh; Wang, Runchun M; Afshar, Saeed; Hamilton, Tara J; Tapson, Jonathan C; Shamma, Shihab A; van Schaik, André
2015-01-01
The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the "cocktail party effect." It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a frontend sound analyser to extract spatial information of the sound input, which then passes through band pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77, and 55 dB for simple tone, complex tone, and speech, respectively) as compared to the SNR of the mixture waveform (0 dB). This system may be easily extended for the segregation of complex speech signals, and may thus find various applications in electronic devices such as for sound segregation and speech recognition.
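The temporal-coherence grouping principle described above reduces, at its core, to correlating band envelopes and keeping the channels that co-vary with an attended channel. The sketch below illustrates only that grouping step with synthetic envelopes; it is not the FPGA pipeline, and the threshold and signals are illustrative.

```python
import numpy as np

def coherence_mask(envelopes, attend_channel, threshold=0.5):
    """Group frequency channels by temporal coherence: channels whose
    envelopes correlate strongly with the attended channel are assigned
    to the foreground stream. envelopes: (channels, time) array."""
    corr = np.corrcoef(envelopes)            # pairwise envelope correlations
    return corr[attend_channel] > threshold  # boolean mask over channels

t = np.linspace(0, 1, 500)
env = np.vstack([
    np.abs(np.sin(4 * np.pi * t)),           # source A, channel 0
    np.abs(np.sin(4 * np.pi * t)) * 0.8,     # source A, channel 1 (coherent)
    np.abs(np.sin(7 * np.pi * t + 1.0)),     # source B, channel 2
])
print(coherence_mask(env, attend_channel=0))  # likely [True, True, False]
```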
Detection of Abnormal Events via Optical Flow Feature Analysis
Wang, Tian; Snoussi, Hichem
2015-01-01
In this paper, a novel algorithm is proposed to detect abnormal events in video streams. The algorithm is based on a histogram-of-optical-flow-orientation descriptor and a classification method. The details of the histogram of optical flow orientation descriptor are illustrated for describing the movement information of the global video frame or foreground frame. By combining one-class support vector machine and kernel principal component analysis methods, the abnormal events in the current frame can be detected after a learning period characterizing normal behaviors. The differences in abnormal detection results are analyzed and explained. The proposed detection method is tested on benchmark datasets, and the experimental results show the effectiveness of the algorithm.
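The descriptor-plus-one-class-classifier pipeline can be sketched as follows. This is a minimal illustration assuming precomputed optical flow arrays; it shows only the orientation histogram and the one-class SVM stage (the paper's KPCA combination is omitted), and all parameters are illustrative.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def hoof(flow, bins=8):
    """Magnitude-weighted histogram of optical-flow orientations.
    flow: (H, W, 2) array of per-pixel (dx, dy)."""
    ang = np.arctan2(flow[..., 1], flow[..., 0])          # [-pi, pi]
    mag = np.hypot(flow[..., 0], flow[..., 1])
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi),
                           weights=mag)
    return hist / (hist.sum() + 1e-9)

rng = np.random.default_rng(0)
normal = [hoof(rng.normal(0, 1, (32, 32, 2))) for _ in range(100)]
clf = OneClassSVM(nu=0.1, gamma="scale").fit(normal)      # learn "normal"
test = hoof(rng.normal(3, 1, (32, 32, 2)))                # shifted motion
print("abnormal" if clf.predict([test])[0] == -1 else "normal")
```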
A new method of real-time detection of changes in periodic data stream
NASA Astrophysics Data System (ADS)
Lyu, Chen; Lu, Guoliang; Cheng, Bin; Zheng, Xiangwei
2017-07-01
Change point detection in periodic time series is highly desirable in many practical settings. We present a novel algorithm for this task, which includes two phases: (1) anomaly measurement: on the basis of a typical regression model, we propose a new computational method to measure anomalies in a time series that does not require any reference data from other measurements; (2) change detection: we introduce a new martingale test for detection that can operate in an unsupervised and nonparametric way. We have conducted extensive experiments to systematically test our algorithm. The results suggest that our algorithm is directly applicable in many real-world change-point-detection applications.
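A standard martingale test of the kind referenced here is the power martingale over streaming p-values: under a stable distribution the martingale stays small, and a growing value signals change. The sketch below is a generic version under that assumption, not the paper's specific test; the strangeness measure, epsilon, and alarm threshold are illustrative.

```python
import numpy as np

def power_martingale(pvalues, eps=0.92):
    """Power martingale over a stream of p-values; a large value is
    evidence the data-generating distribution has changed."""
    m = 1.0
    for p in pvalues:
        m *= eps * max(p, 1e-12) ** (eps - 1.0)
        yield m

def streaming_pvalue(history, x):
    """Conformal-style p-value: fraction of past values at least as
    strange (large in magnitude) as the new one."""
    ge = sum(1 for h in history if abs(h) >= abs(x))
    return (ge + 1) / (len(history) + 1)

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 200), rng.normal(4, 1, 50)])
hist, pvals = [], []
for x in data:
    pvals.append(streaming_pvalue(hist, x))
    hist.append(x)
for i, m in enumerate(power_martingale(pvals)):
    if m > 20:                      # alarm threshold (illustrative)
        print("change detected near index", i)
        break
```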
A globally convergent MC algorithm with an adaptive learning rate.
Peng, Dezhong; Yi, Zhang; Xiang, Yong; Zhang, Haixian
2012-02-01
This brief deals with the problem of minor component analysis (MCA). Artificial neural networks can be exploited to achieve the task of MCA. Recent research shows that the convergence of neural-network-based MCA algorithms can be guaranteed if the learning rates are less than certain thresholds. However, the computation of these thresholds needs information about the eigenvalues of the autocorrelation matrix of the data set, which is unavailable in online extraction of the minor component from an input data stream. In this correspondence, we introduce an adaptive learning rate into the OJAn MCA algorithm, such that its convergence condition does not depend on any unobtainable information and can be easily satisfied in practical applications.
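For orientation, a generic online minor-component extractor is an anti-Hebbian (sign-flipped Oja) update. The sketch below is that generic rule with a fixed small rate and explicit renormalization standing in for the paper's adaptive-rate mechanism; it is not the OJAn algorithm itself, and all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic zero-mean stream: variance 3 along x, 0.5 along y, so the
# minor component (smallest-eigenvalue direction) is the y axis.
xs = rng.multivariate_normal([0.0, 0.0], [[3.0, 0.0], [0.0, 0.5]], size=5000)

w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 0.005                              # fixed small rate for the sketch
for x in xs:
    y = w @ x
    w = w - eta * (y * x - (y * y) * w)  # anti-Hebbian (sign-flipped Oja)
    w /= np.linalg.norm(w)               # renormalize to keep update stable
print("estimated minor direction:", np.round(w, 3))  # ~ [0, +/-1]
```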
A parallel computing engine for a class of time critical processes.
Nabhan, T M; Zomaya, A Y
1997-01-01
This paper focuses on the efficient parallel implementation of systems of a numerically intensive nature on loosely coupled multiprocessor architectures. These analytical models are of significant importance to many real-time systems that have to meet severe time constraints. A parallel computing engine (PCE) has been developed in this work for the efficient simplification and near-optimal scheduling of numerical models over the different cooperating processors of the parallel computer. First, the analytical system is efficiently coded in its general form. The model is then simplified by using any available information (e.g., constant parameters). A task graph representing the interconnections among the different components (or equations) is generated. The graph can then be compressed to control the computation/communication requirements. The task scheduler employs a graph-based iterative scheme, based on the simulated annealing algorithm, to map the vertices of the task graph onto a Multiple-Instruction-stream Multiple-Data-stream (MIMD) type of architecture. The algorithm uses a nonanalytical cost function that properly considers the computation capability of the processors, the network topology, the communication time, and congestion possibilities. Moreover, the proposed technique is simple, flexible, and computationally viable. The efficiency of the algorithm is demonstrated by two case studies with good results.
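Simulated-annealing task mapping of this kind is compact to sketch. The example below is a minimal version with a hypothetical task graph and a toy cost function (per-processor load plus cross-processor traffic), not the paper's nonanalytical cost function; all sizes and weights are illustrative.

```python
import math
import random

random.seed(0)
# Hypothetical task graph: cost[i] = compute cost of task i,
# comm[(i, j)] = data volume between tasks i and j (a simple chain here).
n_tasks, n_procs = 12, 3
cost = [random.randint(1, 9) for _ in range(n_tasks)]
comm = {(i, i + 1): random.randint(1, 5) for i in range(n_tasks - 1)}

def objective(assign):
    # makespan proxy: max per-processor load + cross-processor traffic
    load = [0] * n_procs
    for t, p in enumerate(assign):
        load[p] += cost[t]
    traffic = sum(v for (i, j), v in comm.items() if assign[i] != assign[j])
    return max(load) + traffic

assign = [random.randrange(n_procs) for _ in range(n_tasks)]
temp = 10.0
while temp > 0.01:
    cand = assign[:]
    cand[random.randrange(n_tasks)] = random.randrange(n_procs)
    delta = objective(cand) - objective(assign)
    # accept improvements always, worsenings with Boltzmann probability
    if delta < 0 or random.random() < math.exp(-delta / temp):
        assign = cand
    temp *= 0.99
print("final cost:", objective(assign), "mapping:", assign)
```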
Optimal erasure protection for scalably compressed video streams with limited retransmission.
Taubman, David; Thie, Johnson
2005-08-01
This paper shows how the priority encoding transmission (PET) framework may be leveraged to exploit both unequal error protection and limited retransmission for RD-optimized delivery of streaming media. Previous work on scalable media protection with PET has largely ignored the possibility of retransmission. Conversely, the PET framework has not been harnessed by the substantial body of previous work on RD-optimized hybrid forward error correction/automatic repeat request schemes. We limit our attention to sources which can be modeled as independently compressed frames (e.g., video frames), where each element in the scalable representation of each frame can be transmitted in one or both of two transmission slots. An optimization algorithm determines the level of protection which should be assigned to each element in each slot, subject to transmission bandwidth constraints. To balance the protection assigned to elements which are being transmitted for the first time with those which are being retransmitted, the proposed algorithm formulates a collection of hypotheses concerning its own behavior in future transmission slots. We show how the PET framework allows for a decoupled optimization algorithm with only modest complexity. Experimental results obtained with Motion JPEG2000 compressed video demonstrate that substantial performance benefits can be obtained using the proposed framework.
A multi-pattern hash-binary hybrid algorithm for URL matching in the HTTP protocol.
Zeng, Ping; Tan, Qingping; Meng, Xiankai; Shao, Zeming; Xie, Qinzheng; Yan, Ying; Cao, Wei; Xu, Jianjun
2017-01-01
In this paper, based on our previous multi-pattern uniform resource locator (URL) binary-matching algorithm called HEM, we propose an improved multi-pattern matching algorithm called MH that is based on hash tables and binary tables. The MH algorithm can be applied to the fields of network security, data analysis, load balancing, cloud robotic communications, and so on, all of which require string matching from a fixed starting position. Our approach effectively solves the performance problems of the classical multi-pattern matching algorithms. This paper explores ways to improve string matching performance under the HTTP protocol by using a hash method combined with a binary method that transforms the symbol-space matching problem into a digital-space numerical-size comparison and hashing problem. The MH approach has a fast matching speed, requires little memory, performs better than both the classical algorithms and HEM for matching fields in an HTTP stream, and it has great promise for use in real-world applications.
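The "matching from a fixed starting position" setting simplifies the problem considerably, which a tiny hash-based sketch makes clear. This is only the general idea under that assumption, not MH's actual table layout or binary-comparison step; the patterns are illustrative.

```python
# Minimal sketch of hash-based multi-pattern matching anchored at a fixed
# starting position (as in URL prefix matching).
def build_tables(patterns):
    by_len = {}                       # one hash set per pattern length
    for p in patterns:
        by_len.setdefault(len(p), set()).add(p)
    return by_len

def match_prefix(text, tables):
    """Return all patterns that match text starting at position 0."""
    return [text[:L] for L, pats in tables.items()
            if len(text) >= L and text[:L] in pats]

tables = build_tables(["/index", "/img/", "/index.html", "/api/v1/"])
print(match_prefix("/index.html?q=1", tables))   # ['/index', '/index.html']
```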
Streams of the agricultural Midwest export large quantities of nitrogen, which impairs downstream water quality, most notably in the Gulf of Mexico. The two-stage ditch is a novel restoration practice, in which floodplains are constructed alongside channelized ditches. During hi...
In traditional pervaporation systems, the permeate vapor is completely condensed to obtain a liquid permeate stream. For example, in the recovery of ethanol from a 5-wt% aqueous stream (such as a biomass fermentation broth), the permeate from a silicone rubber pervaporation membr...
Reconstructing the Dwarf Galaxy Progenitor from Tidal Streams Using MilkyWay@home
NASA Astrophysics Data System (ADS)
Newberg, Heidi; Shelton, Siddhartha
2018-04-01
We attempt to reconstruct the mass and radial profile of stars and dark matter in the dwarf galaxy progenitor of the Orphan Stream, using only information from the stars in the Orphan Stream. We show that given perfect data and perfect knowledge of the dwarf galaxy profile and Milky Way potential, we are able to reconstruct the mass and radial profiles of both the stars and dark matter in the progenitor to high accuracy using only the density of stars along the stream and either the velocity dispersion or width of the stream in the sky. To perform this test, we simulated the tidal disruption of a two component (stars and dark matter) dwarf galaxy along the orbit of the Orphan Stream. We then created a histogram of the density of stars along the stream and a histogram of either the velocity dispersion or width of the stream in the sky as a function of position along the stream. The volunteer supercomputer MilkyWay@home was given these two histograms, the Milky Way potential model, and the orbital parameters for the progenitor. N-body simulations were run, varying dwarf galaxy parameters and the time of disruption. The goodness-of-fit of the model to the data was determined using an Earth-Mover Distance algorithm. The parameters were optimized using Differential Evolution. Future work will explore whether currently available information on the Orphan Stream stars is sufficient to constrain its progenitor, and how sensitive the optimization is to our knowledge of the Milky Way potential and the density model of the dwarf galaxy progenitor, as well as a host of other real-life unknowns.
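The outer optimization loop described here, score trial simulations against observed histograms with an Earth Mover Distance and search the parameter space with Differential Evolution, can be sketched with SciPy. The sketch replaces the N-body simulation with a trivial mock sampler, so only the fitting machinery is real; the parameters and data are illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
observed = rng.normal(2.0, 0.5, 2000)     # stand-in "observed stream" data

def simulate(params):
    """Placeholder for an N-body run: returns mock star positions along
    the stream for trial progenitor parameters (mu, sigma)."""
    mu, sigma = params
    return rng.normal(mu, abs(sigma) + 1e-3, 2000)

def loss(params):
    # Earth Mover (Wasserstein-1) distance between model and observation
    return wasserstein_distance(simulate(params), observed)

result = differential_evolution(loss, bounds=[(0, 5), (0.1, 2)],
                                seed=0, maxiter=30, tol=1e-3)
print("recovered parameters:", result.x)   # should approach (2.0, 0.5)
```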
New metrics for evaluating channel networks extracted in grid digital elevation models
NASA Astrophysics Data System (ADS)
Orlandini, S.; Moretti, G.
2017-12-01
Channel networks are critical components of drainage basins and delta regions. Despite the important role played by these systems in hydrology and geomorphology, there are at present no well-defined methods to evaluate numerically how geometrically far apart two complex channel networks are. The present study introduces new metrics for evaluating numerically channel networks extracted in grid digital elevation models with respect to a reference channel network. Streams of the evaluated network (EN) are delineated as in the Horton ordering system and examined through a priority climbing algorithm based on the triple index (ID1, ID2, ID3), where ID1 is a stream identifier that increases as the elevation of the lower end of the stream increases, ID2 indicates the ID1 of the draining stream, and ID3 is the ID1 of the corresponding stream in the reference network (RN). Streams of the RN are identified by the double index (ID1, ID2). Streams of the EN are processed in order of increasing ID1. For each processed stream of the EN, the closest stream of the RN is sought by considering all the streams of the RN sharing the same ID2. This ID2 in the RN is equal in the EN to the ID3 of the stream draining the processed stream, that is, the stream having ID1 equal to the ID2 of the processed stream. The mean stream planar distance (MSPD) and the mean stream elevation drop (MSED) are computed as the mean distance and drop, respectively, between corresponding streams. The MSPD is shown to be useful for evaluating slope direction methods and thresholds for channel initiation, whereas the MSED indicates the ability of grid coarsening strategies to retain the profiles of observed channels. The developed metrics fill a gap in the existing literature by allowing hydrologists and geomorphologists to compare descriptions of a fixed physical system obtained by using different terrain analysis methods, or different physical systems described by using the same methods.
NASA Astrophysics Data System (ADS)
Martin, C.
2017-12-01
Topography can be used to delineate streams and quantify the topographic control on hydrological processes of a watershed, because geomorphologic processes have shaped the topography and streams of a catchment over time. The Topographic Wetness Index (TWI) is a common index used for delineating stream networks by predicting the location of saturation-excess overland flow, but it is also used for other physical attributes of a watershed such as soil moisture, groundwater level, and vegetation patterns. This study evaluates how well TWI works across an elevation gradient, and the relationships between the active drainage networks of four headwater watersheds at various elevations in the Colorado Front Range and topography, geology, climate, soils, elevation, and vegetation, in an attempt to determine the controls on streamflow location and duration. The results suggest that streams prefer to flow along a path of least resistance, including faults and permeable lithology. Permeable lithologies created more connectivity of stream networks during higher flows but dried up during lower flows. Streams flowing over impermeable lithologies had longer flow duration. Upslope soil hydraulic conductivity played a role in stream location: soils with low hydraulic conductivity sustained longer flow duration than soils with higher hydraulic conductivity. Finally, TWI thresholds ranged from 5.95 to 10.3 due to changes in stream length and to factors such as geology and soil. TWI had low accuracy for the lowest-elevation site due to the greatest change in stream length. In conclusion, structural geology, upslope soil texture, and the permeability of the underlying lithology influenced where the stream was flowing and for how long. Elevation determines climate, which influences the hydrologic processes occurring in the watersheds and therefore affects the duration and timing of streams at different elevations. TWI is an adequate tool for delineating streams because the results suggest topography has a primary control on stream locations, but because intermittent streams change throughout the year, an algorithm needs to be created that accounts for snowmelt and rain events. Geology and soil indices also need to be considered in addition to topography to derive the most accurate stream network.
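The TWI itself is a one-line formula, TWI = ln(a / tan(beta)), where a is the specific catchment area and beta the local slope. A minimal sketch follows; the cell counts, slopes, and cell size are illustrative inputs, not data from this study.

```python
import numpy as np

def twi(upslope_cells, slope_rad, cell_size=10.0):
    """Topographic Wetness Index: TWI = ln(a / tan(beta)), with a the
    specific catchment area (upslope area per unit contour width)."""
    a = (upslope_cells * cell_size ** 2) / cell_size   # m^2 per m contour
    tan_b = np.maximum(np.tan(slope_rad), 1e-6)        # avoid divide-by-zero
    return np.log(a / tan_b)

cells = np.array([50, 500, 5000])                 # upslope cells draining in
slopes = np.radians(np.array([20.0, 8.0, 2.0]))   # local slope angles
print(twi(cells, slopes))   # TWI rises as area grows and slope flattens
```

Cells whose TWI exceeds a chosen threshold (in the range 5.95 to 10.3 in this study) would be flagged as part of the stream network.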
Digital elevation model (DEM) data are essential to hydrological applications and have been widely used to calculate a variety of useful topographic characteristics, e.g., slope, flow direction, flow accumulation area, stream channel network, topographic index, and others. Excep...
Experimental and numerical investigation of Acoustic streaming (Eckart streaming)
NASA Astrophysics Data System (ADS)
Dridi, Walid; Botton, Valery; Henry, Daniel; Ben Hadid, Hamda
The application of sound waves in the bulk of a fluid can generate steady or quasi-steady flows referred to as acoustic streaming flows. We can distinguish two kinds of acoustic streaming. Rayleigh streaming is generated when standing acoustic waves interfere with solid walls to give birth to an acoustic boundary layer. Steady recirculations are then driven out of the boundary layer and can be used in microgravity, where free convection is too weak or absent, to enhance convective heat or mass transfer and to cool electronic devices [1]. The second kind is Eckart streaming, a flow generated far from solid boundaries; it can be used to mix chemical solutions [2] and to drive viscous liquids in channels [3-4] in microgravity environments. Our study focuses on the Eckart streaming configuration, which is investigated by both numerical and experimental means. The experimental configuration is restricted to the case of a cylindrical non-heated cavity full of water or of a water+glycerol mixture. At the middle of one side of the cavity, a plane ultrasonic transducer generates a 2 MHz wave; an absorber is set at the opposite side of the cavity to avoid any reflections. The velocity field is measured with a standard PIV system. [1] P. Vainshtein, M. Fichman and C. Gutfinger, "Acoustic enhancement of heat transfer between two parallel plates", International Journal of Heat and Mass Transfer, 1995, 38(10), 1893. [2] C. Suri, K. Tekenaka, H. Yanagida, Y. Kojima and K. Koyama, "Chaotic mixing generated by acoustic streaming", Ultrasonics, 2002, 40, 393. [3] O.V. Rudenko and A.A. Sukhorukov, "Nonstationary Eckart streaming and pumping of liquid in ultrasonic field", Acoustical Physics, 1998, 44, 653. [4] Kenneth D. Frampton, Shawn E. Martin and Keith Minor, "The scaling of acoustic streaming for application in micro-fluidic devices", Applied Acoustics, 2003, 64, 681.
John Day River Sub-Basin Fish Habitat Enhancement Project; 2008 Annual Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, Russ M.; Alley, Pamela D.; Goin Jr, Lonnie
Work undertaken in 2008 included: (1) Seven new fence projects were completed, thereby protecting approximately 10.97 miles of streams with 16.34 miles of riparian fence; (2) Renewal of one expired lease was completed, thereby continuing to protect 0.75 miles of stream with 1.0 mile of riparian fence; (3) Maintenance of all active project fences (106.54 miles), watergaps (78), and spring developments (33) was performed, with repairs made as needed; (4) Planted 1000 willow/red osier on the Fox Creek/Henslee property; (5) Planted 2000 willow/red osier on the Middle Fork John Day River/Coleman property; (6) Planted 1000 willow/red osier cuttings on the Fox Creek/Johns property; (7) Since the initiation of the Fish Habitat Project in 1984, 126.86 miles of stream have been protected using 211.72 miles of fence, protecting 5658 acres. The purpose of the John Day Fish Habitat Enhancement Program is to enhance production of indigenous wild stocks of spring Chinook and summer steelhead within the sub-basin through habitat protection, enhancement, and fish passage improvement. The John Day River system supports the largest remaining wild runs of spring chinook salmon and summer steelhead in Northeast Oregon.
Guided filtering for solar image/video processing
NASA Astrophysics Data System (ADS)
Xu, Long; Yan, Yihua; Cheng, Jun
2017-06-01
A new image enhancement algorithm employing guided filtering is proposed in this work for the enhancement of solar images and videos, so that users can easily identify important fine structures embedded in the recorded images/movies of solar observations. The proposed algorithm can efficiently remove image noise, including Gaussian and impulse noise. Meanwhile, it can further highlight fibrous structures on/beyond the solar disk. These fibrous structures can clearly demonstrate the progress of solar flares, prominence and coronal mass ejections, magnetic fields, and so on. The experimental results show that the proposed algorithm yields significant enhancement of the visual quality of solar images over the original input and over several classical image enhancement algorithms, thus facilitating easier identification of interesting solar burst activities in recorded images/movies.
Pernice, W H; Payne, F P; Gallagher, D F
2007-09-03
We present a novel numerical scheme for the simulation of the field enhancement by metal nano-particles in the time domain. The algorithm is based on a combination of the finite-difference time-domain (FDTD) method and the pseudo-spectral time-domain method for dispersive materials. The hybrid solver leads to an efficient subgridding algorithm that does not suffer from spurious field spikes as FDTD schemes do. Simulation of the field enhancement by gold particles shows the expected exponential field profile. The enhancement factors are computed for single particles and particle arrays. Owing to the geometry-conforming mesh, the algorithm is stable for long integration times and thus suitable for the simulation of resonance phenomena in coupled nano-particle structures.
Tracking the global jet streams through objective analysis
NASA Astrophysics Data System (ADS)
Gallego, D.; Peña-Ortiz, C.; Ribera, P.
2009-12-01
Although the tropospheric jet streams are probably the most important single dynamical systems in the troposphere, their study at climatic scale has usually been hampered by the difficulty of characterising their structure. During recent years, a great deal of effort has been made to construct long-term objective climatologies of the jet stream, or at least to understand the variability of the westerly flow in the upper troposphere. A main problem in studying the jets is the need to use highly derived fields such as potential vorticity, or even the analysis of chemical tracers. Despite their utility, these approaches make it very difficult to construct an automatic searching algorithm, because of the difficulty of defining criteria for such extremely noisy fields. Some attempts have been made to use only the wind field to find the jet. This direct approach avoids the use of derived variables, but it must include stringent criteria to filter the large number of tropospheric wind maxima not related to the jet currents. This approach has offered interesting results for the relatively simple structure of the Southern Hemisphere tropospheric jets (Gallego et al., Clim. Dyn., 2005). However, the much more complicated structure of its northern counterpart has resisted analysis at the same level of detail using the wind alone. In this work we present a new methodology able to characterise the position, strength and altitude of the jet stream at global scale on a daily basis. The method is based on the analysis of the 3-D wind field alone; at each longitude it searches for relative wind maxima in the upper troposphere between the levels of 400 and 100 hPa. An ad hoc density function (dependent on the season and the longitude) of the detection positions is used as a criterion to filter spurious wind maxima not related to the jet. The algorithm has been applied to the NCEP/NCAR reanalysis, and the results show that the basic problems of a detection algorithm focused on searching for the jets are avoided. Thus, a clear separation between the subtropical and polar jets is found for both hemispheres. The meandering of the Northern Hemisphere polar jet is accurately characterised, while the large annual cycle in the strength of the subtropical jet is clearly found. In addition, the algorithm has proved able to find structures for which it was not originally intended, such as the tropical easterly jet stream above Southeast Asia, India and Africa. The new method opens new possibilities for the study of the upper-level tropospheric circulation: the temporal variability of each jet on a daily basis, single or double jet structures through a seasonal cycle, and trends in multiple jet characteristics (strength, location, height, wavenumber, separation between jets, etc.) can be easily computed to construct a new jet climatology.
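The core search, per-longitude relative wind maxima in the 400-100 hPa band, is straightforward to sketch. The example below omits the paper's seasonal density-function filter and uses a synthetic wind field with two jet-like bands; shapes, thresholds, and values are illustrative.

```python
import numpy as np

def find_jet_cores(u, lats, min_speed=30.0):
    """Per-longitude search for relative wind maxima in the upper
    troposphere. u: (levels, lats, lons) wind speed already restricted
    to the 400-100 hPa layer."""
    cores = []
    for k in range(u.shape[2]):                    # each longitude
        col = u[:, :, k].max(axis=0)               # strongest wind per latitude
        for j in range(1, len(lats) - 1):
            if (col[j] >= min_speed and
                    col[j] > col[j - 1] and col[j] > col[j + 1]):
                cores.append((lats[j], k, col[j])) # candidate jet core
    return cores

rng = np.random.default_rng(0)
lats = np.arange(-90, 91, 5)
u = (25 * np.exp(-((lats[None, :, None] - 35) / 10) ** 2)   # subtropical band
     + 20 * np.exp(-((lats[None, :, None] - 60) / 8) ** 2)  # polar band
     + rng.normal(0, 1, (5, lats.size, 8)))
print(len(find_jet_cores(u, lats, min_speed=15.0)), "candidate cores found")
```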
A Field Guide to Kentucky Rivers and Streams.
ERIC Educational Resources Information Center
Kentucky State Div. of Water, Frankfort. Kentucky Natural Resources and Environmental Protection Cabinet.
This field guide was especially developed for Water Watch--a public participation program in Kentucky that encourages citizens to adopt a stream, lake or wetland, and then gain hands-on experience in protecting and enhancing their adopted water resources. Understanding the relationships between life and the environment helps people to appreciate…
Lidar-based door and stair detection from a mobile robot
NASA Astrophysics Data System (ADS)
Bansal, Mayank; Southall, Ben; Matei, Bogdan; Eledath, Jayan; Sawhney, Harpreet
2010-04-01
We present an on-the-move LIDAR-based object detection system for autonomous and semi-autonomous unmanned vehicle systems. In this paper we make several contributions: (i) we describe an algorithm for real-time detection of objects such as doors and stairs in indoor environments; (ii) we describe efficient data structures and algorithms for processing 3D point clouds acquired by laser scanners in a streaming manner, which minimize memory copying and access. We show qualitative results demonstrating the effectiveness of our approach on runs in an indoor office environment.
NASA Astrophysics Data System (ADS)
Kumar, Alla S.; Clark, Joseph; Beyette, Fred R., Jr.
2009-02-01
Neonatal jaundice is a medical condition which occurs in newborns as a result of an imbalance between the production and elimination of bilirubin. The excess bilirubin in the blood stream diffuses into the surrounding tissue, leading to a yellowing of the skin. As the bilirubin levels rise in the blood stream, there is a continuous exchange between the extravascular bilirubin and the bilirubin in the blood stream. Exposure to phototherapy alters the concentration of bilirubin in the vascular and extravascular regions by causing bilirubin in the skin layers to be broken down. Thus, the relative concentration of extravascular bilirubin is reduced, leading to a diffusion of bilirubin out of the vascular region. Diffuse reflectance spectra from human skin contain physiological and structural information about the skin and nearby tissue. A diffuse reflectance spectrum must be captured before and after blanching in order to isolate the intravascular and extravascular bilirubin. A new mathematical model is proposed with the extravascular bilirubin concentration taken into consideration, along with other optical parameters, in defining the diffuse reflectance spectrum from human skin. A nonlinear optimization algorithm has been adopted to extract the optical properties (including bilirubin concentration) from the skin reflectance spectrum. The new system model and nonlinear algorithm have been combined to enable extraction of bilirubin concentrations within an average error of 10%.
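The extraction step, fitting model parameters to a measured spectrum by nonlinear least squares, can be sketched as follows. The reflectance model here is a deliberately simplified placeholder (a single Gaussian absorption band standing in for bilirubin plus a scattering term), not the paper's skin model; all parameters are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

wavelengths = np.linspace(450e-9, 600e-9, 50)

def reflectance_model(params, wl):
    """Placeholder diffuse-reflectance model: a real skin model combines
    melanin, hemoglobin, and bilirubin absorption with scattering; here
    one Gaussian band stands in for the bilirubin absorption near 460 nm."""
    c_bili, scatter = params
    absorption = c_bili * np.exp(-((wl - 460e-9) / 30e-9) ** 2)
    return scatter / (scatter + absorption)

true = np.array([5.0, 2.0])
measured = (reflectance_model(true, wavelengths)
            + np.random.default_rng(0).normal(0, 0.005, wavelengths.size))

fit = least_squares(
    lambda p: reflectance_model(p, wavelengths) - measured,
    x0=[1.0, 1.0], bounds=([0, 0.1], [50, 10]))
print("recovered (bilirubin, scattering):", fit.x)   # ~ (5.0, 2.0)
```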
Mapping tree and impervious cover using Ikonos imagery: links with water quality and stream health
NASA Astrophysics Data System (ADS)
Wright, R.; Goetz, S. J.; Smith, A.; Zinecker, E.
2002-12-01
Precision-georeferenced Ikonos satellite imagery was used to map tree cover and impervious surface area in Montgomery County, Maryland. The derived maps were used to assess riparian-zone stream buffer tree cover and to predict, with multivariate logistic regression, stream health ratings across 246 small watersheds averaging 472 km2 in size. Stream health was assessed by state and county experts using a combination of physical measurements (e.g., dissolved oxygen) and biological indicators (e.g., benthic macroinvertebrates). We found it possible to create highly accurate (90+ percent) maps of tree and impervious cover using decision tree classifiers, provided extensive field data were available for algorithm training. Impervious surface area was found to be the primary predictor of stream health, followed by tree cover in riparian buffers and total tree cover within entire watersheds. A number of issues associated with mapping using Ikonos imagery were encountered, including differences in phenological and atmospheric conditions, shadowing within canopies and between scene elements, and limited spectral discrimination of cover types. We report on both the capabilities and limitations of Ikonos imagery for these applications, and considerations for extending these analyses to other areas.
North, Frederick; Varkey, Prathiba; Caraballo, Pedro; Vsetecka, Darlene; Bartel, Greg
2007-10-11
Complex decision support software can require significant effort in maintenance and enhancement. A quality improvement tool, the prioritization matrix, was successfully used to guide software enhancement of algorithms in a symptom assessment call center.
Numerical simulation of three-dimensional transonic turbulent projectile aerodynamics by TVD schemes
NASA Technical Reports Server (NTRS)
Shiau, Nae-Haur; Hsu, Chen-Chi; Chyu, Wei-Jao
1989-01-01
The two-dimensional symmetric TVD scheme proposed by Yee has been extended to and investigated for three-dimensional thin-layer Navier-Stokes simulation of complex aerodynamic problems. An existing three-dimensional Navier-Stokes code based on the Beam and Warming algorithm is modified to provide an option of using the TVD algorithm, and the flow problem considered is a transonic turbulent flow past a projectile with sting at a ten-degree angle of attack. Numerical experiments conducted for three flow cases, free-stream Mach numbers of 0.91, 0.96, and 1.20, show that the symmetric TVD algorithm can provide surface pressure distributions in excellent agreement with measured data; moreover, the rate of convergence to a steady-state solution is about two times faster than with the original Beam and Warming algorithm.
Coarse Particulate Organic Matter: Storage, Transport, and Retention
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tiegs, Scott; Lamberti, Gary A.; Entrekin, Sally A.
2017-08-01
Coarse particulate organic matter, or CPOM, is a basal energy and nutrient resource in many stream ecosystems and is provided by inputs from the riparian zone, incoming tributaries, and to a lesser extent from in-stream production. The ability of a stream to retain CPOM or slow its transport is critical to its consumption and assimilation by stream biota. In this chapter, we describe basic exercises to measure (1) the amount of CPOM in the streambed and (2) the retention of CPOM from standardized particle releases. We further describe advanced exercises that (1) experimentally enhance the retentiveness of a stream reach and (2) measure organic carbon transport and turnover (i.e., spiraling) in the channel.
Cannibalization and Rebirth in the NGC 5387 System. I. The Stellar Stream and Star-forming Region
NASA Astrophysics Data System (ADS)
Beaton, Rachael L.; Martínez-Delgado, David; Majewski, Steven R.; D'Onghia, Elena; Zibetti, Stefano; Gabany, R. Jay; Johnson, Kelsey E.; Blanton, Michael; Verbiscer, Anne
2014-08-01
We have identified a low surface brightness stellar stream from visual inspection of Sloan Digital Sky Survey (SDSS) imaging for the edge-on, spiral galaxy NGC 5387. An optically blue overdensity coincident with the stream intersection with the NGC 5387 disk was also identified in SDSS and in the Galaxy Evolution Explorer Deep Imaging Survey, contributing 38% of the total far-UV integrated flux from NGC 5387. Deeper optical imaging was acquired with the Vatican Advanced Technology Telescope that confirmed the presence of both features. The stellar stream is red in color, (B - V) = 0.7, has a stellar mass of 6 × 10^8 M⊙, which implies a 1:50 merger ratio, has a circular radius, R_circ ~ 11.7 kpc, formed in ~240 Myr, and the progenitor had a total mass of ~4 × 10^10 M⊙. Spectroscopy from LBT+MODS1 was used to determine that the blue overdensity is at the same redshift as NGC 5387, consists of young stellar populations (~10 Myr), is metal-poor (12 + log(O/H) = 8.03), and is forming stars at an enhanced rate (~1-3 M⊙ yr^-1). The most likely interpretations are that the blue overdensity is (1) a region of enhanced star formation in the outer disk of NGC 5387 induced by the minor accretion event or (2) the progenitor of the stellar stream experiencing enhanced star formation. Additional exploration of these scenarios is presented in a companion paper. Based on observations with the VATT: the Alice P. Lennon Telescope and the Thomas J. Bannan Astrophysics Facility.
NASA Astrophysics Data System (ADS)
Chung, N.; Suberkopp, K.
2005-05-01
The effect of shredder feeding on aquatic hyphomycete communities associated with submerged leaves was studied in two southern Appalachian headwater streams in North Carolina. Coarse- and fine-mesh litter bags containing red maple (Acer rubrum) leaves were placed in the nutrient-enriched stream and in the reference stream and were retrieved monthly. Both shredder feeding and nutrient enrichment enhanced breakdown rates. The breakdown rates of leaves in coarse-mesh bags in the reference stream (k = 0.0275) and fine-mesh bags in the nutrient-enriched stream (k = 0.0272) were not significantly different, suggesting that the shredding effect on litter breakdown was offset by higher fungal activity resulting from nutrient enrichment. Fungal sporulation rates and biomass (based on ergosterol concentrations) were higher in the nutrient-enriched than in the reference stream, but neither fungal biomass nor sporulation rate was affected by shredder feeding. Species richness was higher in the nutrient-enriched than in the reference stream. Nutrient enrichment altered fungal community composition more than shredder feeding did.
A technique is presented for finding the least squares estimates for the ultimate biochemical oxygen demand (BOD) and rate coefficient for the BOD reaction without resorting to complicated computer algorithms or subjective graphical methods. This may be used in stream water quali...
NASA Technical Reports Server (NTRS)
Ioup, G. E.; Ioup, J. W.
1985-01-01
Appendix 4 of the Study of One- and Two-Dimensional Filtering and Deconvolution Algorithms for a Streaming Array Computer discusses coordinate axes, location of origin, and redundancy for the one- and two-dimensional Fourier transform for complex and real data.
Traffic Analysis for Network Security using Learning Theory and Streaming Algorithms
2008-09-01
Speech enhancement based on modified phase-opponency detectors
NASA Astrophysics Data System (ADS)
Deshmukh, Om D.; Espy-Wilson, Carol Y.
2005-09-01
A speech enhancement algorithm based on a neural model was presented by Deshmukh et al. [149th Meeting of the Acoustical Society of America, 2005]. The algorithm consists of a bank of Modified Phase Opponency (MPO) filter pairs tuned to different center frequencies. This algorithm is able to enhance salient spectral features in speech signals even at low signal-to-noise ratios. However, the algorithm introduces musical noise and sometimes misses a spectral peak that is close in frequency to a stronger spectral peak. Refinements in the design of the MPO filters were recently made that take advantage of the falling spectrum of the speech signal in sonorant regions. The modified set of filters leads to better separation of the noise and speech signals, and more accurate enhancement of spectral peaks. The improvements also lead to a significant reduction in musical noise. Continuity algorithms based on the properties of speech signals are used to further reduce the musical noise effect. The efficiency of the proposed method in enhancing the speech signal when the level of the background noise is fluctuating will be demonstrated. The performance of the improved speech enhancement method will be compared with various spectral subtraction-based methods. [Work supported by NSF BCS0236707.]
Low-dimensional recurrent neural network-based Kalman filter for speech enhancement.
Xia, Youshen; Wang, Jun
2015-07-01
This paper proposes a new recurrent neural network-based Kalman filter for speech enhancement, based on a noise-constrained least squares estimate. The parameters of the speech signal, modeled as an autoregressive process, are first estimated by the proposed recurrent neural network, and the speech signal is then recovered by Kalman filtering. The proposed recurrent neural network is globally asymptotically stable at the noise-constrained estimate. Because the noise-constrained estimate is robust against non-Gaussian noise, the proposed recurrent neural network-based speech enhancement algorithm can minimize the estimation error of the Kalman filter parameters in non-Gaussian noise. Furthermore, owing to its low-dimensional model structure, the proposed neural network-based speech enhancement algorithm is much faster than two existing recurrent neural network-based speech enhancement algorithms. Simulation results show that the proposed recurrent neural network-based speech enhancement algorithm achieves good performance with fast computation and effective noise reduction. Copyright © 2015 Elsevier Ltd. All rights reserved.
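For orientation, the classical back end of such methods is a Kalman filter over an autoregressive model of speech. A minimal sketch with fixed AR(2) coefficients standing in for the network-estimated parameters (all values hypothetical; this is the textbook filter, not the paper's network):

```python
import numpy as np

def kalman_ar2(y, a1, a2, q, r):
    """Kalman filter for an AR(2) signal observed in white noise.

    State x_t = [s_t, s_{t-1}];  s_t = a1*s_{t-1} + a2*s_{t-2} + w_t,
    observation y_t = s_t + v_t, with Var(w) = q and Var(v) = r.
    """
    F = np.array([[a1, a2], [1.0, 0.0]])
    H = np.array([[1.0, 0.0]])
    Q = np.array([[q, 0.0], [0.0, 0.0]])
    x, P = np.zeros((2, 1)), np.eye(2)
    out = np.empty(len(y))
    for i, obs in enumerate(y):
        x = F @ x                          # predict state
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                # innovation variance
        K = P @ H.T / S                    # Kalman gain
        x = x + K * (obs - (H @ x)[0, 0])  # update with observation
        P = (np.eye(2) - K @ H) @ P
        out[i] = x[0, 0]
    return out

# Hypothetical AR(2) "speech" in additive noise, then filtered.
rng = np.random.default_rng(0)
s = np.zeros(500)
for t in range(2, 500):
    s[t] = 1.5 * s[t-1] - 0.7 * s[t-2] + rng.normal(0, 0.1)
y = s + rng.normal(0, 0.3, 500)
s_hat = kalman_ar2(y, 1.5, -0.7, q=0.01, r=0.09)
```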
Umatilla River Basin Anadromous Fish Habitat Enhancement Project : 2001 Annual Report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaw, R. Todd; Sexton, Amy D.
The Umatilla River Basin Anadromous Fish Habitat Enhancement Project continued to identify impacted stream reaches throughout the Umatilla River Basin for habitat improvements during the 2001 project period. Public outreach efforts, biological and physical monitoring, and continued development of a Umatilla Subbasin Watershed Assessment assisted the project in fostering public cooperation, targeting habitat deficiencies and determining habitat recovery measures. Projects continued to be maintained on 49 private properties, one 25-year Non-Exclusive Bureau of Indian Affairs' Easement was secured, six new projects were implemented and two existing project areas were improved to enhance anadromous fish habitat. New project locations included sites on the mid Umatilla River, upper Umatilla River, Mission Creek, Cottonwood Creek and Buckaroo Creek. New enhancements included: (1) construction of 11,264 feet of fencing between River Mile 43.0 and 46.5 on the Umatilla River, (2) a stream bank stabilization project implemented at approximately River Mile 63.5 of the Umatilla River to stabilize 330 feet of eroding stream bank and improve instream habitat diversity, including construction of eight root wad revetments and three boulder J-vanes, (3) drilling a 358-foot well for off-stream livestock watering at approximately River Mile 46.0 of the Umatilla River, (4) installing a 50-foot bottomless arch replacement culvert at approximately River Mile 3.0 of Mission Creek, (5) installing a Geoweb stream ford crossing on Mission Creek, (6) installing a 22-foot bottomless arch culvert at approximately River Mile 0.5 of Cottonwood Creek, and (7) providing fence materials for construction of 21,300 feet of livestock exclusion fencing in the Buckaroo Creek Drainage. An approximate total of 3,800 native willow cuttings and 350 pounds of native grass seed was planted at new upper Umatilla River, Mission Creek and Cottonwood Creek project sites. Habitat improvements implemented at existing project sites included development of a 105-foot well for off-stream livestock watering at approximately River Mile 12.0 of Wildhorse Creek and construction of an engineered stream ford at approximately River Mile 3.0 of Mission Creek. A total of $277,848 in financial cost share assistance was provided by the Confederated Tribes of the Umatilla Indian Reservation, U.S. Bureau of Indian Affairs, U.S. Environmental Protection Agency, U.S. Department of Agriculture, National Oceanic and Atmospheric Administration, U.S. Workforce Investment Act, Oregon Watershed Enhancement Board, Umatilla County and Pheasants Forever for planning efforts and habitat enhancements. Monitoring continued to quantify baseline conditions and the effects of habitat enhancements in the upper basin. Daily stream temperatures were collected from June through September at 22 sites. Suspended sediment samples were obtained at three gage stations to arrive at daily sediment load estimates. Photographs were taken at 96 existing and three newly established photo points to document habitat recovery and pre-project conditions. Transects were measured at three stream channel cross sections to assist with engineering and design and to obtain baseline data regarding channel morphology. Biological inventories were conducted at River Mile 3.0 of Mission Creek to determine pre-project fish utilization above and below the passage barrier. Post-project inventories were also conducted at River Mile 85.0 of the Umatilla River at a project site completed in 1999.
Umatilla Subbasin Watershed Assessment efforts were continued under a subcontract with Eco-Pacific. This watershed assessment document and working databases will be completed in fiscal year 2002 and made available to assist project personnel with sub-watershed prioritization of habitat needs. Water Works Consulting, Duck Creek Associates and Ed Salminen Consulting were subcontracted for watershed assessment and restoration planning in the Meacham Creek Subwatershed. A document detailing current conditions in the Meacham Creek Subwatershed and necessary restoration actions will be available for review in 2003.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graves, Suzy
Ranking criteria were developed to rate 19 tributaries on the Coeur d'Alene Indian Reservation for potential habitat enhancement for westslope cutthroat trout, Oncorhynchus clarki lewisi, and bull trout, Salvelinus malma. Cutthroat and bull trout habitat requirements, derived from an extensive literature review for each species, were compared to the physical and biological parameters of each stream observed during an aerial (helicopter) survey. Ten tributaries were selected for further study, using the ranking criteria that were derived. The most favorable ratings were awarded to streams that were located completely on the reservation, displayed the highest potential for improvement and enhancement, had no barriers to fish migration, good road access, and a gradient acceptable to cutthroat and bull trout habitation. The ten streams selected for study were Bellgrove, Fighting, Lake, Squaw, Plummer, Little Plummer, Benewah, Alder, Hell's Gulch and Evans creeks.
Thermochemical Wastewater Valorization via Enhanced Microbial Toxicity Tolerance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beckham, Gregg T; Thelhawadigedara, Lahiru Niroshan Jayakody; Johnson, Christopher W
Thermochemical (TC) biomass conversion processes such as pyrolysis and liquefaction generate considerable amounts of wastewater, which often contains highly toxic compounds that are incredibly challenging to convert via standard wastewater treatment approaches such as anaerobic digestion. These streams represent a cost for TC biorefineries, and a potential valorization opportunity if effective conversion methods are developed. The primary challenge hindering microbial conversion of TC wastewater is toxicity. In this study, we employ a robust bacterium, Pseudomonas putida, with TC wastewater streams to demonstrate that aldehydes are the most inhibitory compounds in these streams. Proteomics, transcriptomics, and fluorescence-based immunoassays of P. putida grown in a representative wastewater stream indicate that stress results from protein damage, which we hypothesize is a primary toxicity mechanism. Constitutive overexpression of the chaperone genes groEL, groES, and clpB in a genome-reduced P. putida strain improves tolerance towards multiple TC wastewater samples up to 200-fold. Moreover, the concentration ranges of TC wastewater are industrially relevant for further bioprocess development for all wastewater streams examined here, representing different TC process configurations. Furthermore, we demonstrate proof-of-concept polyhydroxyalkanoate production from the usable carbon in an exemplary TC wastewater stream. Overall, this study demonstrates that protein quality control machinery and repair mechanisms can enable substantial gains in microbial tolerance to highly toxic substrates, including heterogeneous waste streams. When coupled to other metabolic engineering advances such as expanded substrate utilization and enhanced product accumulation, this study generally enables new strategies for biological conversion of highly toxic, organic-rich wastewater via engineered aerobic monocultures or designer consortia.
Implications of a quadratic stream definition in radiative transfer theory.
NASA Technical Reports Server (NTRS)
Whitney, C.
1972-01-01
An explicit definition of the radiation-stream concept is stated and applied to approximate the integro-differential equation of radiative transfer with a set of twelve coupled differential equations. Computational efficiency is enhanced by distributing the corresponding streams in three-dimensional space in a totally symmetric way. Polarization is then incorporated in this model. A computer program based on the model is briefly compared with a Monte Carlo program for simulation of horizon scans of the earth's atmosphere. It is found to be considerably faster.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-27
... Methodology is an enhancement to the SPAN for the ICE Margining algorithm employed to calculate Original... Margining algorithm employed to calculate Original Margin and was designed to optimize and improve margin... framework algorithm. The enhancement will be additionally applied to: GOA: Gas Oil 1-Month CSO; BRZ: Brent...
Self-adaptive Solution Strategies
NASA Technical Reports Server (NTRS)
Padovan, J.
1984-01-01
The development of enhancements to current-generation nonlinear finite element algorithms of the incremental Newton-Raphson type is overviewed. Work is introduced on alternative formulations which lead to improved algorithms that avoid the need for global-level updating and inversion. To quantify the enhanced Newton-Raphson scheme and the new alternative algorithm, the results of several benchmarks are presented.
Evidence for atmospheric carbon dioxide variability over the Gulf Stream
NASA Technical Reports Server (NTRS)
Bufton, J. L.
1984-01-01
Two airborne surveys of atmospheric carbon dioxide concentration were conducted over the Gulf Stream off the east coast of Virginia and North Carolina on September 7-8, 1983. In situ CO2 data were acquired at an aircraft altitude of 300 m on trajectories that transected the Gulf Stream near 36 deg N, 73 deg W. The data show evidence of CO2 concentration increases of 4 ppm to 15 ppm above the nominal atmospheric background value of 345 ppm. These enhanced values were associated with the physical location of the Gulf Stream prior to the passage of a weak cold front.
Method for processing aqueous wastes
Pickett, J.B.; Martin, H.L.; Langton, C.A.; Harley, W.W.
1993-12-28
A method is presented for treating waste water, such as that from an industrial processing facility, comprising separation of the waste water into a dilute waste stream and a concentrated waste stream. The concentrated waste stream is treated chemically to enhance precipitation and then allowed to separate into a sludge and a supernate. The supernate is skimmed or filtered from the sludge and blended with the dilute waste stream to form a second dilute waste stream. The remaining sludge is mixed with cementitious material, rinsed to dissolve soluble components, then pressed to remove excess water and dissolved solids before being allowed to cure. The dilute waste stream is also chemically treated to decompose carbonate complexes and metal ions and then mixed with a cationic polymer to cause the precipitated solids to flocculate. Filtration of the flocculated solids removes sufficient material to allow the waste water to be discharged to the surface of a stream. The filtered material is added to the sludge of the concentrated waste stream. The method is also applicable to the treatment and removal of soluble uranium from aqueous streams, such that the treated stream may be used as a potable water supply. 4 figures.
Stream Temperature Estimation From Thermal Infrared Images
NASA Astrophysics Data System (ADS)
Handcock, R. N.; Kay, J. E.; Gillespie, A.; Naveh, N.; Cherkauer, K. A.; Burges, S. J.; Booth, D. B.
2001-12-01
Stream temperature is an important water quality indicator in the Pacific Northwest, where endangered fish populations are sensitive to elevated water temperature. Cold water refugia are essential for the survival of threatened salmon when events such as the removal of riparian vegetation result in elevated stream temperatures. Regional assessment of stream temperatures is limited by sparse sampling of temperatures in both space and time. If critical watersheds are to be properly managed, it is necessary to have spatially extensive temperature measurements of known accuracy. Remotely sensed thermal infrared (TIR) imagery can be used to derive spatially distributed estimates of the skin temperature (top 100 nm) of streams. TIR imagery has long been used to estimate skin temperatures of the ocean, where split-window techniques have been used to compensate for atmospheric effects. Streams are a more complex environment because 1) most are unresolved in typical TIR images, and 2) the near-bank environment of stream corridors may consist of tall trees or hot rocks and soils that irradiate the stream surface. As well as compensating for atmospheric effects, key problems to solve in estimating stream temperatures include both subpixel unmixing and multiple scattering. Additionally, fine-resolution characteristics of the stream surface, such as evaporative cooling due to wind and water surface roughness, will affect measurements of radiant skin temperatures with TIR devices. We apply these corrections across the Green River and Yakima River watersheds in Washington State to assess the accuracy of remotely sensed stream surface temperature estimates made using fine-resolution TIR imagery from a ground-based sensor (FLIR), medium-resolution data from the airborne MASTER sensor, and coarse-resolution data from the Terra-ASTER satellite. We use linear spectral mixture analysis to isolate the fraction of land-leaving radiance originating from unresolved streams. To compensate the data for atmospheric effects we combine radiosonde profiles with a physically based radiative transfer model (MODTRAN) and an in-scene relative correction adapted from the ISAC algorithm. Laboratory values for water emissivities are used as a baseline estimate of stream emissivities. Emitted radiance reflected by trees in the stream near-bank environment is estimated from the tree height and canopy temperature, using a radiosity model.
Enhanced MHT encryption scheme for chosen plaintext attack
NASA Astrophysics Data System (ADS)
Xie, Dahua; Kuo, C. C. J.
2003-11-01
Efficient multimedia encryption algorithms play a key role in multimedia security protection. One multimedia encryption algorithm known as the MHT (Multiple Huffman Tables) method was recently developed by Wu and Kuo. Even though MHT has many desirable properties, it is vulnerable to the chosen-plaintext attack (CPA). An enhanced MHT algorithm is proposed in this work to overcome this drawback. It is proved mathematically that the proposed algorithm is secure against the chosen plaintext attack.
Sulaiman, Noorazliza; Mohamad-Saleh, Junita; Abro, Abdul Ghani
2015-01-01
The standard artificial bee colony (ABC) algorithm involves exploration and exploitation processes which need to be balanced for enhanced performance. This paper proposes a new modified ABC algorithm named JA-ABC5 to enhance convergence speed and improve the ability to reach the global optimum by balancing exploration and exploitation processes. New stages have been proposed at the earlier stages of the algorithm to increase the exploitation process. Besides that, modified mutation equations have also been introduced in the employed and onlooker-bees phases to balance the two processes. The performance of JA-ABC5 has been analyzed on 27 commonly used benchmark functions and tested to optimize the reactive power optimization problem. The performance results have clearly shown that the newly proposed algorithm has outperformed other compared algorithms in terms of convergence speed and global optimum achievement. PMID:25879054
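As context for the modifications described above, a minimal sketch of the standard ABC baseline that JA-ABC5 modifies (not the proposed variant itself; the objective, bounds, and control parameters are illustrative):

```python
import numpy as np

def abc_minimize(f, dim, n_food=20, limit=50, iters=500, bounds=(-5, 5)):
    """Minimal standard artificial bee colony for minimizing f."""
    rng = np.random.default_rng(1)
    lo, hi = bounds
    foods = rng.uniform(lo, hi, (n_food, dim))        # food sources
    fit = np.array([f(x) for x in foods])
    trials = np.zeros(n_food, dtype=int)

    def try_neighbor(i):
        k = rng.integers(n_food - 1)
        k += k >= i                                   # random partner != i
        j = rng.integers(dim)                         # perturb one dimension
        cand = foods[i].copy()
        cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        cand = np.clip(cand, lo, hi)
        fc = f(cand)
        if fc < fit[i]:                               # greedy selection
            foods[i], fit[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                       # employed-bee phase
            try_neighbor(i)
        p = fit.max() - fit + 1e-12                   # better fit -> higher p
        p /= p.sum()
        for i in rng.choice(n_food, n_food, p=p):     # onlooker-bee phase
            try_neighbor(i)
        worn = trials > limit                         # scout-bee phase
        foods[worn] = rng.uniform(lo, hi, (worn.sum(), dim))
        fit[worn] = [f(x) for x in foods[worn]]
        trials[worn] = 0
    return foods[fit.argmin()], fit.min()

best_x, best_f = abc_minimize(lambda x: np.sum(x**2), dim=5)
```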
'Skinny Milky Way please', says Sagittarius
NASA Astrophysics Data System (ADS)
Gibbons, S. L. J.; Belokurov, V.; Evans, N. W.
2014-12-01
Motivated by recent observations of the Sagittarius stream, we devise a rapid algorithm to generate faithful representations of the centroids of stellar tidal streams formed in a disruption of a progenitor of an arbitrary mass in an arbitrary potential. Our method works by releasing swarms of test particles at the Lagrange points around the satellite and subsequently evolving them in a combined potential of the host and the progenitor. We stress that the action of the progenitor's gravity is crucial to making streams that look almost indistinguishable from the N-body realizations, as indeed ours do. The method is tested on mock stream data in three different Milky Way potentials with increasing complexity, and is shown to deliver unbiased inference on the Galactic mass distribution out to large radii. When applied to the observations of the Sagittarius stream, our model gives a natural explanation of the stream's apocentric distances and the differential orbital precession. We therefore provide a new independent measurement of the Galactic mass distribution beyond 50 kpc. The Sagittarius stream model favours a light Milky Way with a mass of 4.1 ± 0.4 × 10^11 M_⊙ at 100 kpc, which can be extrapolated to 5.6 ± 1.2 × 10^11 M_⊙ at 200 kpc. Such a low mass for the Milky Way Galaxy is in good agreement with estimates from the kinematics of halo stars and from the satellite galaxies (once Leo I is removed from the sample). It entirely removes the 'Too Big To Fail' problem.
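A toy 2-D sketch of the Lagrange-point release idea, assuming a point-mass host and a progenitor on a circular orbit. The paper uses realistic Galactic potentials and a tuned release recipe; the local-circular-speed release used here is a simple stand-in, and all units and masses are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

G = 1.0                      # work in units where G = 1
M_host, m_prog = 1.0, 1e-4   # point-mass host; light progenitor
R0 = 1.0                     # progenitor on a circular orbit of radius R0
omega = np.sqrt(G * M_host / R0**3)
r_t = R0 * (m_prog / (3 * M_host))**(1 / 3)   # tidal (Lagrange) radius

def prog_pos(t):
    return R0 * np.array([np.cos(omega * t), np.sin(omega * t)])

def rhs(t, y):
    """Acceleration from the host plus the progenitor's own gravity,
    which the method stresses is crucial for realistic streams."""
    pos, vel = y[:2], y[2:]
    a = -G * M_host * pos / np.linalg.norm(pos)**3
    d = pos - prog_pos(t)
    a += -G * m_prog * d / (np.linalg.norm(d)**3 + 1e-12)
    return np.concatenate([vel, a])

# Release particle swarms at the inner/outer Lagrange points over time.
stream = []
for t_rel in np.linspace(0.0, 20.0, 40):
    for sign in (-1, +1):
        p = prog_pos(t_rel) * (1 + sign * r_t / R0)
        v_circ = np.sqrt(G * M_host / np.linalg.norm(p))
        tang = np.array([-p[1], p[0]]) / np.linalg.norm(p)
        sol = solve_ivp(rhs, (t_rel, 40.0),
                        np.concatenate([p, v_circ * tang]), rtol=1e-8)
        stream.append(sol.y[:2, -1])   # particle position at final time
stream = np.array(stream)
```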
StreamSqueeze: a dynamic stream visualization for monitoring of event data
NASA Astrophysics Data System (ADS)
Mansmann, Florian; Krstajic, Milos; Fischer, Fabian; Bertini, Enrico
2012-01-01
While automated analytical solutions for data streams are already in place for clear-cut situations, only a few visual approaches have been proposed in the literature for exploratory analysis tasks on dynamic information. However, due to the competitive or security-related advantages that real-time information gives in domains such as finance, business or networking, we are convinced that there is a need for exploratory visualization tools for data streams. Under the conditions that new events have higher relevance and that smooth transitions enable traceability of items, we propose a novel dynamic stream visualization called StreamSqueeze. In this technique the degree of interest of recent items is expressed through an increase in size, and thus recent events can be shown with more detail. The technique has two main benefits: First, the layout algorithm arranges items in several lists of various sizes and optimizes the positions within each list so that the transition of an item from one list to the other triggers the fewest visual changes. Second, the animation scheme ensures that for 50 percent of the time an item has a static screen position, where reading is most effective, and then continuously shrinks and moves to its next static position in the subsequent list. To demonstrate the capability of our technique, we apply it to large and high-frequency news and syslog streams and show how it maintains optimal stability of the layout under the conditions given above.
Streaming Video to Enhance Students' Reflection in Dance Education
ERIC Educational Resources Information Center
Leijen, Ali; Lam, Ineke; Wildschut, Liesbeth; Simons, P. Robert-Jan; Admiraal, Wilfried
2009-01-01
This paper presents an evaluation case study that describes the experiences of 15 students and 2 teachers using a video-based learning environment, DiViDU, to facilitate students' daily reflection activities in a composition course and a ballet course. To support dance students' reflection processes streaming video was applied as follows: video…
Live texturing of augmented reality characters from colored drawings.
Magnenat, Stéphane; Ngo, Dat Tien; Zünd, Fabio; Ryffel, Mattia; Noris, Gioacchino; Rothlin, Gerhard; Marra, Alessia; Nitti, Maurizio; Fua, Pascal; Gross, Markus; Sumner, Robert W
2015-11-01
Coloring books capture the imagination of children and provide them with one of their earliest opportunities for creative expression. However, given the proliferation and popularity of digital devices, real-world activities like coloring can seem unexciting, and children become less engaged in them. Augmented reality holds unique potential to impact this situation by providing a bridge between real-world activities and digital enhancements. In this paper, we present an augmented reality coloring book App in which children color characters in a printed coloring book and inspect their work using a mobile device. The drawing is detected and tracked, and the video stream is augmented with an animated 3-D version of the character that is textured according to the child's coloring. This is possible thanks to several novel technical contributions. We present a texturing process that applies the captured texture from a 2-D colored drawing to both the visible and occluded regions of a 3-D character in real time. We develop a deformable surface tracking method designed for colored drawings that uses a new outlier rejection algorithm for real-time tracking and surface deformation recovery. We present a content creation pipeline to efficiently create the 2-D and 3-D content. And, finally, we validate our work with two user studies that examine the quality of our texturing algorithm and the overall App experience.
Clustering and Flow Conservation Monitoring Tool for Software Defined Networks
Puente Fernández, Jesús Antonio
2018-01-01
Prediction systems face challenges on two fronts: the relation between video quality and observed session features, and dynamic changes in video quality. Software Defined Networks (SDN) is a new concept of network architecture that provides the separation of the control plane (controller) and the data plane (switches) in network devices. Due to the existence of the southbound interface, it is possible to deploy monitoring tools to obtain the network status and retrieve a collection of statistics. Therefore, achieving the most accurate statistics depends on a strategy for monitoring and requesting information from network devices. In this paper, we propose an enhanced algorithm for requesting statistics to measure the traffic flow in SDN networks. The algorithm is based on grouping network switches into clusters according to their number of ports, so that different monitoring techniques can be applied per cluster. Such grouping avoids monitoring queries to network switches with common characteristics and thus omits redundant information. In this way, the present proposal decreases the number of monitoring queries to switches, improving network traffic and preventing switch overload. We have tested our optimization in a video streaming simulation using different types of videos. The experiments and comparison with traditional monitoring techniques demonstrate the feasibility of our proposal, which maintains similar measurement quality while decreasing the number of queries to the switches. PMID:29614049
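A toy sketch of the grouping step only, under the assumption (stated in the abstract) that switches are clustered by port count and one representative per cluster is polled; the switch inventory is hypothetical:

```python
from collections import defaultdict

# Hypothetical switch inventory: switch id -> number of ports.
switches = {"s1": 4, "s2": 4, "s3": 8, "s4": 8, "s5": 8, "s6": 24}

# Group switches into clusters by port count so one representative per
# cluster can be polled instead of querying every switch.
clusters = defaultdict(list)
for sw, n_ports in switches.items():
    clusters[n_ports].append(sw)

for n_ports, members in sorted(clusters.items()):
    representative = members[0]          # poll this one; skip the rest
    print(f"{n_ports}-port cluster: poll {representative}, "
          f"infer stats for {members[1:]}")
```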
GRay: A MASSIVELY PARALLEL GPU-BASED CODE FOR RAY TRACING IN RELATIVISTIC SPACETIMES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chan, Chi-kwan; Psaltis, Dimitrios; Özel, Feryal
We introduce GRay, a massively parallel integrator designed to trace the trajectories of billions of photons in a curved spacetime. This graphics-processing-unit (GPU)-based integrator employs the stream processing paradigm, is implemented in CUDA C/C++, and runs on nVidia graphics cards. The peak performance of GRay using single-precision floating-point arithmetic on a single GPU exceeds 300 GFLOPS (or 1 ns per photon per time step). For a realistic problem, where the peak performance cannot be reached, GRay is two orders of magnitude faster than existing central-processing-unit-based ray-tracing codes. This performance enhancement allows more effective searches of large parameter spaces when comparing theoretical predictions of images, spectra, and light curves from the vicinities of compact objects to observations. GRay can also perform on-the-fly ray tracing within general relativistic magnetohydrodynamic algorithms that simulate accretion flows around compact objects. Making use of this algorithm, we calculate the properties of the shadows of Kerr black holes and the photon rings that surround them. We also provide accurate fitting formulae of their dependencies on black hole spin and observer inclination, which can be used to interpret upcoming observations of the black holes at the center of the Milky Way, as well as M87, with the Event Horizon Telescope.
NASA Astrophysics Data System (ADS)
Teaby, A.; Johnson, E. R.; Griffin, M.; Carrillo, C.; Kannan, T.; Shupe, J. W.; Schmidt, C.
2013-12-01
Historic trends reveal extreme precipitation variability within the Yosemite National Park (YNP) geographic region. While California obtains greater than half of its annual water supply from the Sierra Nevada, snowpack, precipitation, and runoff can fluctuate between less than 50% and greater than 200% of climatological averages. Advances in hydrological modeling are crucial to improving water-use efficiency at the local, state, and national levels. The NASA Carnegie Ames Stanford Approach (CASA) is a global simulation model that combines multi-year satellite, climate, and other land surface databases to estimate biosphere-atmosphere exchange of energy, water, and trace gases from plants and soils. By coupling CASA with a Hydrological Routing Algorithm known as HYDRA, it is possible to calculate current water availability and observe hydrological trends within YNP. Satellite-derived inputs such as surface evapotranspiration, temperature, precipitation, land cover, and elevation were included to create a valuable decision support tool for YNP's water resource managers. These results will be of enhanced importance given current efforts to restore 81 miles of the Merced River within the park's boundary. Validations of model results were conducted using in situ stream gage measurements. The model accurately simulated observed streamflow values, achieving a relatively strong Nash-Sutcliffe model efficiency coefficient. This geospatial assessment provides a standardized method which may be repeated in both national and international water-stressed regions.
NASA Astrophysics Data System (ADS)
Mouzon, N. R.; Null, S. E.
2014-12-01
Human impacts from land and water development have degraded water quality and altered the physical, chemical, and biological integrity of Nevada's Walker River. Reduced instream flows and increased nutrient concentrations affect native fish populations through warm daily stream temperatures and low nightly dissolved oxygen concentrations. Water rights purchases are being considered to maintain instream flows, improve water quality, and enhance habitat for native fish species, such as Lahontan cutthroat trout. This study uses the River Modeling System (RMSv4), an hourly, physically-based hydrodynamic and water quality model, to estimate streamflows, temperatures, and dissolved oxygen concentrations in the Walker River. We simulate thermal and dissolved oxygen changes from increased streamflow to prioritize the time periods and locations that water purchases most enhance native trout habitat. Stream temperatures and dissolved oxygen concentrations are proxies for trout habitat. Monitoring results indicate stream temperature and dissolved oxygen limitations generally exist in the 115 kilometers upstream of Walker Lake (about 37% of the study area) from approximately May through September, and this reach currently acts as a water quality barrier for fish passage.
A Method for Calculating the Mean Orbits of Meteor Streams
NASA Astrophysics Data System (ADS)
Voloshchuk, Yu. I.; Kashcheev, B. L.
An examination of the published catalogs of orbits of meteor streams and of a large number of works devoted to the selection of streams, their analysis and interpretation, showed that elements of stream orbits are calculated, as a rule, as arithmetical (sometimes, weighted) sample means. On the basis of these means, a search for parent bodies, a study of the evolution of swarms generating these streams, an analysis of one-dimensional and multidimensional distributions of these elements, etc., are performed. We show that systematic errors in the estimates of elements of the mean orbits are present in each of the catalogs. These errors are caused by the formal averaging of orbital elements over the sample, while ignoring the fact that they represent not only correlated, but dependent quantities, with nonlinear, in most cases, interrelations between them. Numerous examples are given of such inaccuracies, in particular, the cases where the "mean orbit of the stream" recorded by ground-based techniques does not cross the Earth's orbit. We suggest a computation algorithm in which the averaging over the sample is carried out at the initial stage of the calculation of the mean orbit, and only for the variables required for subsequent calculations. After this, the known astrometric formulas are used to sequentially calculate all other parameters of the stream, considered now as a standard orbit. Variance analysis is used to estimate the errors in orbital elements of the streams, in the case that their orbits are obtained by averaging the orbital elements of meteoroids forming the stream, without taking into account their interdependence. The results obtained in this analysis indicate the behavior of systematic errors in the elements of orbits of meteor streams. As an example, the effect of the incorrect computation method on the distribution of elements of the stream orbits close to the orbits of asteroids of the Apollo, Aten, and Amor groups (AAA asteroids) is examined.
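The bias the authors describe is easy to reproduce numerically: because orbital elements are nonlinearly related, a quantity derived from element-wise means differs from the mean of the derived quantity. A minimal demonstration using perihelion distance q = a(1 - e), with an illustrative (hypothetical) correlation between a and e:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
# Correlated orbital elements (illustrative): larger a tends to larger e.
a = rng.normal(2.5, 0.4, n)                                # semimajor axis, AU
e = np.clip(0.25 + 0.15 * a + rng.normal(0, 0.05, n), 0.0, 0.99)

q = a * (1.0 - e)                                          # perihelion distance
print("mean of q          :", q.mean())                   # ~0.91 AU
print("q from mean 'orbit':", a.mean() * (1 - e.mean()))  # ~0.94 AU
# The shortcut of averaging a and e first is biased by cov(a, e),
# which is the systematic error the paper identifies in the catalogs.
```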
The land value impacts of wetland restoration.
Kaza, Nikhil; BenDor, Todd K
2013-09-30
U.S. regulations require offsets for aquatic ecosystems damaged during land development, often through restoration of alternative resources. What effect does large-scale wetland and stream restoration have on surrounding land values? Restoration effects on real estate values have substantial implications for protecting resources, increasing the tax base, and improving environmental policies. Our analysis focuses on the three-county Raleigh-Durham-Chapel Hill, North Carolina region, which has experienced rapid development and extensive aquatic ecological restoration (through the state's Ecosystem Enhancement Program [EEP]). Since restoration sites are not randomly distributed across space, we used a genetic algorithm to match parcels near restoration sites with comparable control parcels. Similar to propensity score analysis, this technique facilitates statistical comparison and isolates the effects of restoration sites on surrounding real estate values. Compared to parcels not proximate to any aquatic resources, we find that 1) natural aquatic systems steadily and significantly increase parcel values up to 0.75 mi away, and 2) parcels <0.5 mi from EEP restoration sites have significantly lower sale prices, while 3) parcels >0.5 mi from EEP sites gain substantial amenity value. When we control for intervening water bodies (e.g., unrestored streams and wetlands), we find a similar inflection point whereby parcels <0.5 mi from EEP sites exhibit lower values, and sites 0.5-0.75 mi away exhibit increased values. Our work points to the need for higher public visibility of aquatic ecosystem restoration programs and increased public information about their value. Copyright © 2013 Elsevier Ltd. All rights reserved.
Data compression using Chebyshev transform
NASA Technical Reports Server (NTRS)
Cheng, Andrew F. (Inventor); Hawkins, III, S. Edward (Inventor); Nguyen, Lillian (Inventor); Monaco, Christopher A. (Inventor); Seagrave, Gordon G. (Inventor)
2007-01-01
The present invention is a method, system, and computer program product for implementation of a capable, general-purpose compression algorithm that can be engaged on the fly. This invention has particular practical application to time-series data, and more particularly, time-series data obtained from a spacecraft, or similar situations where cost, size and/or power limitations are prevalent, although it is not limited to such applications. It is also particularly applicable to the compression of serial data streams and works in one, two, or three dimensions. The original input data is approximated by Chebyshev polynomials, achieving very high compression ratios on serial data streams with minimal loss of scientific information.
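A minimal sketch of the core idea, approximating a block of a serial data stream by a truncated Chebyshev series and storing only the coefficients (block size, coefficient count, and the signal are illustrative, not the patent's parameters):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def compress(block, n_coef):
    """Fit a Chebyshev series to one block of a serial data stream."""
    x = np.linspace(-1, 1, len(block))
    return C.chebfit(x, block, n_coef - 1)   # keep n_coef coefficients

def decompress(coef, n_samples):
    """Reconstruct the block by evaluating the stored series."""
    x = np.linspace(-1, 1, n_samples)
    return C.chebval(x, coef)

# Hypothetical smooth telemetry block: 256 samples -> 12 coefficients.
t = np.linspace(0, 1, 256)
block = np.sin(2 * np.pi * 3 * t) + 0.2 * t
coef = compress(block, 12)
recon = decompress(coef, 256)
print("compression ratio ~", 256 / 12,
      " max error:", np.abs(block - recon).max())
```

Smooth data compresses well because Chebyshev coefficients of smooth functions decay rapidly, which is consistent with the "minimal loss of scientific information" claim above.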
Mixing enhancement of reacting parallel fuel jets in a supersonic combustor
NASA Technical Reports Server (NTRS)
Drummond, J. P.
1991-01-01
Pursuant to a NASA-Langley development program for a scramjet HST propulsion system entailing the optimization of the scramjet combustor's fuel-air mixing and reaction characteristics, a numerical study has been conducted of the candidate parallel fuel injectors. Attention is given to a method for flow mixing-process and combustion-efficiency enhancement in which a supersonic circular hydrogen jet coflows with a supersonic air stream. When enhanced by a planar oblique shock, the injector configuration exhibited a substantial degree of induced vorticity in the fuel stream which increased mixing and chemical reaction rates, relative to the unshocked configuration. The resulting heat release was effective in breaking down the stable hydrogen vortex pair that had inhibited more extensive fuel-air mixing.
Real-time spectrum estimation–based dual-channel speech-enhancement algorithm for cochlear implant
2012-01-01
Background Improvement of the cochlear implant (CI) front-end signal acquisition is needed to increase speech recognition in noisy environments. To suppress directional noise, we introduce a speech-enhancement algorithm based on microphone array beamforming and spectral estimation. The experimental results indicate that this method is robust to directional mobile noise and strongly enhances the desired speech, thereby improving the performance of CI devices in a noisy environment. Methods The spectrum estimation and array beamforming methods were combined to suppress the ambient noise. The directivity coefficient was estimated in the noise-only intervals and updated to accommodate mobile noise. Results The proposed algorithm was realized in the CI speech strategy. For the actual parameters, we use a Maxflat filter to obtain fractional sampling points and a cepstrum method to distinguish desired-speech frames from noise frames. Broadband adjustment coefficients were added to compensate for the energy loss in the low-frequency band. Discussion The approximation of the directivity coefficient is tested and the errors are discussed. We also analyze the algorithm's constraints for noise estimation and distortion in CI processing. The performance of the proposed algorithm is analyzed and further compared with other prevalent methods. Conclusions A hardware platform was constructed for the experiments. The speech-enhancement results showed that our algorithm suppresses non-stationary noise with high SNR improvement. Excellent performance of the proposed algorithm was obtained in the speech enhancement experiments and mobile testing, and the signal distortion results indicate that the algorithm is robust, with high SNR improvement and low speech distortion. PMID:23006896
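A simplified sketch of the two building blocks named above, delay-and-sum beamforming followed by spectral subtraction with a noise spectrum estimated from noise-only intervals. The paper's directivity-coefficient estimation and Maxflat fractional-delay filtering are omitted; a broadside source (zero steering delay) is assumed, and all signals are synthetic:

```python
import numpy as np

def enhance(mics, noise_frames, frame=256):
    """Two-microphone delay-and-sum plus magnitude spectral subtraction.

    Assumes the source is at broadside so the steering delay is zero -- a
    simplification of the paper's estimated, updated directivity coefficient.
    """
    beam = mics.mean(axis=0)                    # delay-and-sum output
    win = np.hanning(frame)
    # Noise magnitude spectrum from noise-only frames, shape (m, frame).
    noise_mag = np.abs(np.fft.rfft(noise_frames * win, axis=-1)).mean(axis=0)
    out = np.zeros_like(beam)
    for start in range(0, len(beam) - frame, frame // 2):
        seg = beam[start:start + frame] * win
        spec = np.fft.rfft(seg)
        # Subtract the noise magnitude, flooring to limit musical noise.
        mag = np.maximum(np.abs(spec) - noise_mag, 0.1 * np.abs(spec))
        out[start:start + frame] += np.fft.irfft(mag * np.exp(1j * np.angle(spec)))
    return out

# Hypothetical two-channel recording: tone plus independent sensor noise.
rng = np.random.default_rng(5)
sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000.0)
mics = np.stack([sig, sig]) + rng.normal(0, 0.3, (2, 16000))
clean = enhance(mics, rng.normal(0, 0.3, (20, 256)))
```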
Andermatt, Simon; Papadopoulou, Athina; Radue, Ernst-Wilhelm; Sprenger, Till; Cattin, Philippe
2017-09-01
Some gadolinium-enhancing multiple sclerosis (MS) lesions remain T1-hypointense over months ("persistent black holes, BHs") and represent areas of pronounced tissue loss. A reduced conversion of enhancing lesions to persistent BHs could suggest a favorable effect of a medication on tissue repair. However, the individual tracking of enhancing lesions can be very time-consuming in large clinical trials. We created a semiautomated workflow for tracking the evolution of individual MS lesions, to calculate the proportion of enhancing lesions becoming persistent BHs at follow-up. Our workflow automatically coregisters, compares, and detects overlaps between lesion masks at different time points. We tested the algorithm in a data set of Magnetic Resonance images (1.5 and 3T; spin-echo T1-sequences) from a phase 3 clinical trial (n = 1,272), in which all enhancing lesions and all BHs had been previously segmented at baseline and year 2. The algorithm analyzed the segmentation masks in a longitudinal fashion to determine which enhancing lesions at baseline turned into BHs at year 2. Images of 50 patients (192 enhancing lesions) were also reviewed by an experienced MRI rater, blinded to the algorithm results. In this MRI data set, there were no cases that could not be processed by the algorithm. At year 2, 417 lesions were classified as persistent BHs (417/1,613 = 25.9%). The agreement between the rater and the algorithm was > 98%. Due to the semiautomated procedure, this algorithm can be of great value in the analysis of large clinical trials, when a rater-based analysis would be time-consuming. Copyright © 2017 by the American Society of Neuroimaging.
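The core of such a workflow, once the masks are coregistered, is an overlap test between labeled baseline lesions and the follow-up mask. A minimal sketch of that step (registration and the trial's segmentation are omitted; function names and the toy masks are illustrative):

```python
import numpy as np
from scipy import ndimage

def persistent_bh_fraction(enh_mask_bl, bh_mask_y2):
    """Fraction of baseline enhancing lesions overlapping a year-2 black
    hole, assuming both binary masks are already coregistered."""
    labels, n_lesions = ndimage.label(enh_mask_bl)   # individual lesions
    persistent = 0
    for lesion_id in range(1, n_lesions + 1):
        # A lesion "converts" if any of its voxels lie in the BH mask.
        if np.any(bh_mask_y2[labels == lesion_id]):
            persistent += 1
    return persistent / max(n_lesions, 1), n_lesions

# Hypothetical 3-D masks for demonstration only.
rng = np.random.default_rng(4)
enh = rng.random((64, 64, 32)) > 0.995     # toy baseline enhancing mask
bh = rng.random((64, 64, 32)) > 0.995      # toy year-2 black-hole mask
frac, n = persistent_bh_fraction(enh, bh)
print(f"{n} lesions, {frac:.1%} persistent")
```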
Streaming potential generated by a pressure-driven flow over a super-hydrophobic surface
NASA Astrophysics Data System (ADS)
Zhao, Hui
2010-11-01
The streaming potential generated by a pressure-driven flow over a weakly charged striped slip-stick surface (the zeta potential of the surface is smaller than the thermal potential, 25 mV) with an arbitrary double-layer thickness is theoretically studied by solving the Poisson-Boltzmann and Stokes equations. A series solution of the streaming potential is derived. Approximate expressions for the streaming potential in the limits of thin double layers and thick double layers are also presented, in excellent agreement with the full solution. The streaming potential is compared against that over a homogeneously charged smooth surface. Our results indicate that the streaming potential over a super-hydrophobic surface can only be enhanced when the liquid-gas interface is charged. In addition, as the double layer thickness increases, the advantage of the super-hydrophobic surface diminishes. The impact of a slip-stick surface on the streaming potential might provide guidance for designing novel and efficient microfluidic energy-conversion devices using a super-hydrophobic surface.
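For reference, the homogeneously charged smooth surface that the paper compares against follows, in the thin-double-layer limit, the classical Helmholtz-Smoluchowski result (this is the textbook baseline, not the paper's series solution):

```latex
% Classical thin-double-layer streaming potential over a smooth,
% homogeneously charged surface:
%   \epsilon = permittivity, \zeta = zeta potential, \mu = viscosity,
%   \sigma_b = bulk conductivity, \Delta P = applied pressure drop.
\Delta V_s = \frac{\epsilon\,\zeta}{\mu\,\sigma_b}\,\Delta P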
Large thermo-erosional tunnel for a river in northeast Greenland
NASA Astrophysics Data System (ADS)
Docherty, Catherine L.; Hannah, David M.; Riis, Tenna; Rosenhøj Leth, Simon; Milner, Alexander M.
2017-12-01
Thermo-erosional river bank undercutting is caused by the combined action of thermal and mechanical erosion of the permafrost by Arctic rivers whilst the overlying sediment withstands collapse temporarily. Here, we report the discovery of a large thermo-erosional tunnel that formed in the banks of a meltwater-fed stream in northeast Greenland in summer 2015. The tunnel was observed over eight days (14-22 July), during which period the tunnel remained open but bank-side slumping increased. Stream solute load increased immediately downstream and remained high 800 m from the tunnel. Whilst this field observation was opportunistic and information somewhat limited, our study provides a rare insight into an extreme event impacting permafrost, local geomorphology and stream habitat. With accelerated climate change in Arctic regions, increased permafrost degradation and warmer stream water temperatures are predicted, thereby enhancing the potential for thermo-erosional niche development and associated stream bank slumping. This change could have significant implications for stream physicochemical habitat and, in turn, stream benthic communities, through changes in aquatic habitat conditions.
Thomaz, Edivaldo L; Peretto, Gustavo T
2016-04-15
Unpaved roads are ubiquitous features that have been transforming the landscape throughout human history. Unpaved roads affect the water and sediment pathways through a catchment and impact the aquatic ecosystem. In this study, we describe the effect of unpaved roads on hydrogeomorphic connectivity at the rural headwater scale. Measurement was based on the stream crossing approach, i.e., roads superimposed on the drainage system. We installed a Parshall flume coupled with a single-stage suspended sediment sampler at each stream crossing. In addition, we designed our monitoring scheme with an upscaling perspective from second-order to third-order streams. We concluded that the road-stream coupling dramatically changed the stream dynamics. The increase in discharge caused by roads at the headwater was 50% larger compared to unaffected streams. Additionally, the enhancement of suspended sediment concentration at stream crossings ranged from 413% at second-order streams to 145% at third-order streams. The landform characteristics associated with the road network produced an important hydrogeomorphic disruption in the landscape. As a result, the sediment filter function of the riparian zone was reduced dramatically. Therefore, we recommend that projects for aquatic system restoration or conservation in rural landscapes consider the role of the road network in stream dynamics. Copyright © 2016 Elsevier B.V. All rights reserved.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-27
... enhancement to the SPAN for the ICE Margining algorithm employed to calculate Original Margin. All capitalized... Allocation Methodology is an enhancement to the SPAN[supreg] \\6\\ for the ICE Margining algorithm employed to... the SPAN margin calculation algorithm itself has not been changed. As of August 30, 2011, Position...
Reduction in time-to-sleep through EEG based brain state detection and audio stimulation.
Zhuo Zhang; Cuntai Guan; Ti Eu Chan; Juanhong Yu; Aung Aung Phyo Wai; Chuanchu Wang; Haihong Zhang
2015-08-01
We developed an EEG- and audio-based sleep sensing and enhancing system, called iSleep (interactive Sleep enhancement apparatus). The system adopts a closed-loop approach which optimizes the audio recording selection based on the user's sleep status, detected through our online EEG computing algorithm. The iSleep prototype comprises two major parts: 1) a sleeping mask integrated with a single-channel EEG electrode and amplifier, a pair of stereo earphones and a microcontroller with wireless circuitry for control and data streaming; 2) a mobile app to receive EEG signals for online sleep monitoring and audio playback control. In this study we attempt to validate our hypothesis that appropriate audio stimulation in relation to brain state can induce faster onset of sleep and improve the quality of a nap. We conducted experiments on 28 healthy subjects, each undergoing two nap sessions - one with a quiet background and one with our audio stimulation. We compared the time-to-sleep in both sessions between two groups of subjects, i.e., fast and slow sleep onset groups. The p-value obtained from the Wilcoxon Signed Rank Test is 1.22e-04 for the slow onset group, which demonstrates that iSleep can significantly reduce the time-to-sleep for people with difficulty falling asleep.
NASA Astrophysics Data System (ADS)
Akbar, Noreen Sher; Tripathi, Dharmendra; Khan, Zafar Hayat; Bég, O. Anwar
2016-09-01
In this paper, a mathematical study is conducted of steady incompressible flow of a temperature-dependent viscous nanofluid from a vertical stretching sheet under applied external magnetic field and gravitational body force effects. The Reynolds exponential viscosity model is deployed. Electrically-conducting nanofluids are considered which comprise a suspension of uniform-dimension nanoparticles in a viscous base fluid. The nanofluid sheet is extended with a linear velocity in the axial direction. The Buongiorno model is utilized, which features Brownian motion and thermophoresis effects. The partial differential equations for mass, momentum, energy and species (nano-particle concentration) are formulated with a magnetic body force term. Viscous and Joule dissipation effects are neglected. The emerging nonlinear, coupled boundary value problem is solved numerically using the Runge-Kutta fourth-order method along with a shooting technique. Graphical solutions for velocity, temperature, concentration field, skin friction and Nusselt number are presented. Furthermore, stream function plots are also included. Validation with Nakamura's finite difference algorithm is included. Increasing nanofluid viscosity is observed to enhance temperatures and concentrations but to reduce velocity magnitudes. The Nusselt number is enhanced with both thermal and species Grashof numbers, whereas it is reduced with increasing thermophoresis parameter and Schmidt number. The model is applicable in nano-material manufacturing processes involving extruding sheets.
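A minimal sketch of the shooting technique used above, demonstrated on the Blasius boundary-layer equation f''' + 0.5 f f'' = 0 as a stand-in for the paper's coupled momentum/energy/concentration system (scipy's adaptive RK45 is used here instead of the paper's fixed-step RK4):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def rhs(eta, y):
    """Blasius similarity equation as a first-order system:
    y = [f, f', f''],  f''' = -0.5 f f''."""
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def residual(s):
    """Shoot with guessed f''(0) = s; return f'(inf) - 1, the error in
    the far-field boundary condition (eta = 10 approximates infinity)."""
    sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0, s], rtol=1e-8)
    return sol.y[1, -1] - 1.0

# Root-find on the missing initial slope; classic value is ~0.332.
s_star = brentq(residual, 0.1, 1.0)
print(f"f''(0) = {s_star:.4f}")
```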
NASA Astrophysics Data System (ADS)
Knapp, Julia L. A.; Cirpka, Olaf A.
2017-06-01
The complexity of hyporheic flow paths requires reach-scale models of solute transport in streams that are flexible in their representation of the hyporheic passage. We use a model that couples advective-dispersive in-stream transport to hyporheic exchange with a shape-free distribution of hyporheic travel times. The model also accounts for two-site sorption and transformation of reactive solutes. The coefficients of the model are determined by fitting concurrent stream-tracer tests of conservative (fluorescein) and reactive (resazurin/resorufin) compounds. The flexibility of the shape-free model gives rise to multiple local minima of the objective function in parameter estimation, thus requiring global-search algorithms, which is hindered by the large number of parameter values to be estimated. We present a local-in-global optimization approach, in which we use a Markov-chain Monte Carlo method as the global-search method to estimate a set of in-stream and hyporheic parameters. Nested therein, we infer the shape-free distribution of hyporheic travel times by a local Gauss-Newton method. The overall approach is independent of the initial guess and provides the joint posterior distribution of all parameters. We apply the described local-in-global optimization method to recorded tracer breakthrough curves of three consecutive stream sections, and infer section-wise hydraulic parameter distributions to analyze how hyporheic exchange processes differ between the stream sections.
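A toy sketch of the local-in-global idea: a Metropolis global search over one nonlinear parameter, with the linear weights of a shape-free travel-time distribution solved inside every proposal. The forward model below is a hypothetical Gaussian-pulse basis with first-order decay, standing in for the paper's coupled transport model, and plain unconstrained least squares stands in for the Gauss-Newton step (the real method would, e.g., enforce nonnegative weights):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical forward model: each travel-time bin contributes a Gaussian
# arrival pulse, damped by a first-order reaction with rate k.
t_obs = np.linspace(1.0, 20.0, 40)
taus = np.linspace(0.5, 15.0, 10)          # shape-free travel-time bins

def basis(k):
    pulse = np.exp(-0.5 * (t_obs[:, None] - taus[None, :]) ** 2)
    return pulse * np.exp(-k * taus)[None, :]

# Synthetic "observed" breakthrough curve with known k = 0.3.
w_true = np.exp(-0.5 * ((taus - 5.0) / 2.0) ** 2)
c_obs = basis(0.3) @ w_true + rng.normal(0, 0.01, t_obs.size)

def misfit(k):
    A = basis(k)
    w, *_ = np.linalg.lstsq(A, c_obs, rcond=None)   # local linear solve
    return np.sum((A @ w - c_obs) ** 2), w

# Global Metropolis walk over the nonlinear rate; weights solved exactly
# inside each proposal ("local-in-global").
k, (m_cur, w) = 0.1, misfit(0.1)
for _ in range(2000):
    k_new = abs(k + rng.normal(0, 0.05))
    m_new, w_new = misfit(k_new)
    if m_new < m_cur or rng.random() < np.exp((m_cur - m_new) / (2 * 0.01**2)):
        k, m_cur, w = k_new, m_new, w_new
print(f"recovered decay rate k ~ {k:.2f}")   # should approach 0.3
```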
Bradfield, A.D.
1986-01-01
Coal-mining impacts on Smoky Creek, eastern Tennessee, were evaluated using water quality and benthic invertebrate data. Data from mined sites were also compared with the water quality and invertebrate fauna found at Crabapple Branch, an undisturbed stream in a nearby basin. Although differences in water quality constituent concentrations and physical habitat conditions at sampling sites were apparent, commonly used measures of benthic invertebrate sample data such as number of taxa, sample diversity, number of organisms, and biomass were inadequate for determining differences in stream environments. Clustering algorithms were more useful in determining differences in benthic invertebrate community structure and composition. Normal (collections) and inverse (species) analyses based on presence-absence data of species of Ephemeroptera, Plecoptera, and Trichoptera were compared using constancy, fidelity, and relative abundance of species found at stations with similar fauna. These analyses identified differences in benthic community composition due to seasonal variations in invertebrate life histories. When data from a single season were examined, sites on tributary streams generally clustered separately from sites on Smoky Creek. These analyses, compared with differences in water quality, stream size, and substrate characteristics between tributary sites and the more degraded main-stem sites, indicated that numerical classification of invertebrate data can provide discharge-independent information useful in rapid evaluations of in-stream environmental conditions. (Author's abstract)
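A minimal sketch of one plausible clustering of presence-absence data like that described above: Jaccard distance with average-linkage hierarchical clustering (the study's exact algorithm is not specified here, and the site-by-taxon matrix is hypothetical):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical presence-absence matrix: rows = sites, cols = EPT taxa.
sites = np.array([[1, 1, 0, 1, 0],    # tributary-like assemblages
                  [1, 1, 0, 1, 1],
                  [1, 0, 0, 1, 0],
                  [0, 0, 1, 0, 1],    # main-stem-like assemblages
                  [0, 1, 1, 0, 1]])

# Jaccard distance suits presence-absence data; average linkage builds
# the dendrogram, and fcluster cuts it into two site groups.
d = pdist(sites.astype(bool), metric="jaccard")
tree = linkage(d, method="average")
groups = fcluster(tree, t=2, criterion="maxclust")
print(groups)    # e.g., tributary vs main-stem site memberships
```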
Synthesis of ordered L10-type FeNi nanoparticles
Pinkerton, Frederick E.
2015-09-22
Particles of iron and nickel are added to a flowing plasma stream which does not chemically alter the iron or nickel. The iron and nickel are heated and vaporized in the stream, and then a cryogenic fluid is added to the stream to rapidly cause the formation of nanometer-size particles of iron and nickel. The particles are separated from the stream. The particles are preferably formed as single crystals in which the iron and nickel atoms are organized in a tetragonal L1_0 crystal structure which displays magnetic anisotropy. A minor portion of an additive, such as titanium, vanadium, aluminum, boron, carbon, phosphorus, or sulfur, may be added to the plasma stream with the iron and nickel to enhance formation of the desired crystal structure.
Anatomically constrained neural network models for the categorization of facial expression
NASA Astrophysics Data System (ADS)
McMenamin, Brenton W.; Assadi, Amir H.
2004-12-01
The ability to recognize facial expression in humans is performed with the amygdala which uses parallel processing streams to identify the expressions quickly and accurately. Additionally, it is possible that a feedback mechanism may play a role in this process as well. Implementing a model with similar parallel structure and feedback mechanisms could be used to improve current facial recognition algorithms for which varied expressions are a source for error. An anatomically constrained artificial neural-network model was created that uses this parallel processing architecture and feedback to categorize facial expressions. The presence of a feedback mechanism was not found to significantly improve performance for models with parallel architecture. However the use of parallel processing streams significantly improved accuracy over a similar network that did not have parallel architecture. Further investigation is necessary to determine the benefits of using parallel streams and feedback mechanisms in more advanced object recognition tasks.
Thermodynamic and economic analysis of heat pumps for energy recovery in industrial processes
NASA Astrophysics Data System (ADS)
Urdaneta-B, A. H.; Schmidt, P. S.
1980-09-01
A computer code has been developed for analyzing the thermodynamic performance, cost and economic return for heat pump applications in industrial heat recovery. Starting with basic defining characteristics of the waste heat stream and the desired heat sink, the algorithm first evaluates the potential for conventional heat recovery with heat exchangers, and if applicable, sizes the exchanger. A heat pump system is then designed to process the residual heating and cooling requirements of the streams. In configuring the heat pump, the program searches a number of parameters, including condenser temperature, evaporator temperature, and condenser and evaporator approaches. All system components are sized for each set of parameters, and economic return is estimated and compared with system economics for conventional processing of the heated and cooled streams (i.e., with process heaters and coolers). Two case studies are evaluated, one in a food processing application and the other in an oil refinery unit.
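A crude sketch of the parameter search described above: a grid search over condenser and evaporator approach temperatures, scoring each configuration with a Carnot-based COP estimate and comparing operating cost to direct process heating. All coefficients, including the exchanger-area penalty, are illustrative assumptions, not the report's data or method details:

```python
import numpy as np

def heat_pump_search(t_source_k, t_sink_k, q_kw, eta=0.5,
                     elec=0.10, gas_heat=0.04):
    """Search approach temperatures (K) for the cheapest heat-pump design.

    eta scales ideal Carnot COP; elec and gas_heat are $/kWh (hypothetical).
    """
    best = None
    for dt_evap in np.arange(2.0, 15.0, 1.0):      # evaporator approach
        for dt_cond in np.arange(2.0, 15.0, 1.0):  # condenser approach
            t_evap = t_source_k - dt_evap
            t_cond = t_sink_k + dt_cond
            cop = eta * t_cond / (t_cond - t_evap)  # Carnot-based estimate
            # Operating cost plus a crude penalty for the larger exchanger
            # area that small approach temperatures require.
            cost = q_kw / cop * elec + 0.5 * (1 / dt_evap + 1 / dt_cond)
            if best is None or cost < best[0]:
                best = (cost, dt_evap, dt_cond, cop)
    direct = q_kw * gas_heat                        # $/h, direct heating
    return best, direct

# Hypothetical waste stream at 330 K serving a 360 K sink at 500 kW.
(best, direct) = heat_pump_search(330.0, 360.0, q_kw=500.0)
print(f"heat pump ${best[0]:.1f}/h (COP {best[3]:.1f}) vs direct ${direct:.1f}/h")
```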
A knowledge-based framework for image enhancement in aviation security.
Singh, Maneesha; Singh, Sameer; Partridge, Derek
2004-12-01
The main aim of this paper is to present a knowledge-based framework for automatically selecting the best image enhancement algorithm from several available on a per image basis in the context of X-ray images of airport luggage. The approach detailed involves a system that learns to map image features that represent its viewability to one or more chosen enhancement algorithms. Viewability measures have been developed to provide an automatic check on the quality of the enhanced image, i.e., is it really enhanced? The choice is based on ground-truth information generated by human X-ray screening experts. Such a system, for a new image, predicts the best-suited enhancement algorithm. Our research details the various characteristics of the knowledge-based system and shows extensive results on real images.
An End-to-End Loss Discrimination Scheme for Multimedia Transmission over Wireless IP Networks
NASA Astrophysics Data System (ADS)
Zhao, Hai-Tao; Dong, Yu-Ning; Li, Yang
With the rapid growth of wireless IP networks, wireless IP access networks have many potential applications in a variety of civilian and military environments. Many of these applications, such as real-time audio/video streaming, require some form of end-to-end QoS assurance. In this paper, an algorithm called WMPLD (Wireless Multimedia Packet Loss Discrimination) is proposed for multimedia transmission control over wired-wireless hybrid IP networks. The relationship between packet length and packet loss rate in the Gilbert wireless error model is investigated. The algorithm can detect the nature of packet losses by sending large and small packets alternately, and it controls the sending rate of nodes. In addition, by means of an updating factor K, the algorithm can adapt quickly to changes in network state. Simulation results show that, compared to previous algorithms, the WMPLD algorithm improves network throughput and reduces the congestion loss rate in various situations.
Development of a 3D Stream Network and Topography for Improved Large-Scale Hydraulic Modeling
NASA Astrophysics Data System (ADS)
Saksena, S.; Dey, S.; Merwade, V.
2016-12-01
Most digital elevation models (DEMs) used for hydraulic modeling do not include channel bed elevations. As a result, the DEMs are complemented with additional bathymetric data for accurate hydraulic simulations. Existing methods to acquire bathymetric information through field surveys or through conceptual models are limited to reach-scale applications. With an increasing focus on large-scale hydraulic modeling of rivers, a framework to estimate and incorporate bathymetry for an entire stream network is needed. This study proposes an interpolation-based algorithm to estimate bathymetry for a stream network by modifying the reach-based empirical River Channel Morphology Model (RCMM). The effect of a 3D stream network that includes river bathymetry is then investigated by creating a 1D hydraulic model (HEC-RAS) and a 2D hydrodynamic model (Integrated Channel and Pond Routing) for the Upper Wabash River Basin in Indiana, USA. Results show improved simulation of flood depths and storage in the floodplain. Moreover, the impact of incorporating river bathymetry is more significant in the 2D model than in the 1D model.
Concentrating small particles in protoplanetary disks through the streaming instability
NASA Astrophysics Data System (ADS)
Yang, C.-C.; Johansen, A.; Carrera, D.
2017-10-01
Laboratory experiments indicate that direct growth of silicate grains via mutual collisions can only produce particles up to roughly millimeters in size. On the other hand, recent simulations of the streaming instability have shown that mm/cm-sized particles require an excessively high metallicity for dense filaments to emerge. Using a numerical algorithm for stiff mutual drag force, we perform simulations of small particles with significantly higher resolutions and longer simulation times than in previous investigations. We find that particles of dimensionless stopping time τ_s = 10^-2 and 10^-3, representing cm- and mm-sized particles interior to the water ice line, concentrate themselves via the streaming instability at a solid abundance of a few percent. We thus revise a previously published critical solid abundance curve for the regime τ_s ≪ 1. The solid density in the concentrated regions reaches values higher than the Roche density, indicating that direct collapse of particles down to mm sizes into planetesimals is possible. Our results hence bridge the gap in particle size between direct dust growth limited by bouncing and the streaming instability.
Audio-video feature correlation: faces and speech
NASA Astrophysics Data System (ADS)
Durand, Gwenael; Montacie, Claude; Caraty, Marie-Jose; Faudemay, Pascal
1999-08-01
This paper presents a study of the correlation of features automatically extracted from the audio stream and the video stream of audiovisual documents. In particular, we were interested in finding out whether speech analysis tools could be combined with face detection methods, and to what extent they should be combined. A generic audio signal partitioning algorithm was first used to detect Silence/Noise/Music/Speech segments in a full-length movie. A generic object detection method was applied to the keyframes extracted from the movie in order to detect the presence or absence of faces. The correlation between the presence of a face in the keyframes and of the corresponding voice in the audio stream was studied. A third stream, the script of the movie, is warped onto the speech channel in order to automatically label faces appearing in the keyframes with the name of the corresponding character. We naturally found that the extracted audio and video features were related in many cases, and that significant benefits can be obtained from the joint use of audio and video analysis methods.
Phellan, Renzo; Forkert, Nils D
2017-11-01
Vessel enhancement algorithms are often used as a preprocessing step for vessel segmentation in medical images to improve the overall segmentation accuracy. Each algorithm uses different characteristics to enhance vessels, such that the most suitable algorithm may vary for different applications. This paper presents a comparative analysis of the accuracy gains in vessel segmentation generated by the use of nine vessel enhancement algorithms: multiscale vesselness using the formulas described by Erdt (MSE), Frangi (MSF), and Sato (MSS), optimally oriented flux (OOF), ranking orientations responses path operator (RORPO), the regularized Perona-Malik approach (RPM), vessel enhanced diffusion (VED), hybrid diffusion with continuous switch (HDCS), and the white top hat algorithm (WTH). The filters were evaluated and compared based on time-of-flight MRA datasets and corresponding manual segmentations from 5 healthy subjects and 10 patients with an arteriovenous malformation (AVM). Additionally, five synthetic angiographic datasets with corresponding ground truth segmentation were generated with three different noise levels (low, medium, and high) and also used for comparison. The parameters for each algorithm and subsequent segmentation were optimized using leave-one-out cross evaluation. The Dice coefficient, Matthews correlation coefficient, area under the ROC curve, number of connected components, and true positives were used for comparison. The results of this study suggest that vessel enhancement algorithms do not always lead to more accurate segmentation results compared to segmenting nonenhanced images directly. Multiscale vesselness algorithms, such as MSE, MSF, and MSS, proved to be robust to noise, while diffusion-based filters, such as RPM, VED, and HDCS, ranked at the top of the list in scenarios with medium or no noise. Filters that assume tubular shapes, such as MSE, MSF, MSS, OOF, RORPO, and VED, show a decrease in accuracy when considering patients with an AVM, because vessels may deviate from a tubular shape in this case. Vessel enhancement algorithms can help to improve the accuracy of the segmentation of the vascular system. However, their contribution to accuracy has to be evaluated, as it depends on the specific application, and in some cases it can lead to a reduction of the overall accuracy. No specific filter was suitable for all tested scenarios. © 2017 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Liu, Tao; Zhang, Wei; Yan, Shaoze
2015-10-01
In this paper, a multi-scale image enhancement algorithm based on low-pass filtering and nonlinear transformation is proposed for infrared testing images of de-bonding defects in solid propellant rocket motors. Infrared testing images with high-level noise and low contrast are the basis for identifying defects and calculating defect size. To improve the quality of the infrared image, according to the distribution properties of the detection image and within the framework of the stationary wavelet transform, the approximation coefficients at a suitable decomposition level are low-pass filtered in the Fourier domain; a nonlinear transformation is then applied to further improve the image contrast. To verify the validity of the algorithm, it is applied to infrared testing images of two specimens with de-bonding defects, one made of a high-strength steel and the other of a carbon fiber composite. In the images processed by the proposed enhancement algorithm, most of the noise is eliminated and the contrast between defect areas and normal areas is greatly improved; in addition, continuous defect edges can be extracted from the binarized processed image, all of which demonstrates the validity of the algorithm. The paper provides a well-performing image enhancement algorithm for infrared thermography.
Fuzzy Logic Based Anomaly Detection for Embedded Network Security Cyber Sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ondrej Linda; Todd Vollmer; Jason Wright
Resiliency and security in critical infrastructure control systems in the modern world of cyber terrorism constitute a relevant concern. Developing a network security system specifically tailored to the requirements of such critical assets is of primary importance. This paper proposes a novel learning algorithm for an anomaly-based network security cyber sensor together with its hardware implementation. The presented learning algorithm constructs a fuzzy logic rule-based model of normal network behavior. Individual fuzzy rules are extracted directly from the stream of incoming packets using an online clustering algorithm. This learning algorithm was specifically developed to comply with the constrained computational requirements of low-cost embedded network security cyber sensors. The performance of the system was evaluated on a set of network data recorded from an experimental test-bed mimicking the environment of a critical infrastructure control system.
Chemodynamical Clustering Applied to APOGEE Data: Rediscovering Globular Clusters
NASA Astrophysics Data System (ADS)
Chen, Boquan; D’Onghia, Elena; Pardy, Stephen A.; Pasquali, Anna; Bertelli Motta, Clio; Hanlon, Bret; Grebel, Eva K.
2018-06-01
We have developed a novel technique based on a clustering algorithm that searches for kinematically and chemically clustered stars in the APOGEE DR12 Cannon data. Compared to classical chemical tagging, the kinematic information included in our methodology allows us to identify stars that are members of known globular clusters with greater confidence. We apply our algorithm to the entire APOGEE catalog of 150,615 stars whose chemical abundances are derived by the Cannon. Our methodology recovered the anticorrelations between the elements Al and Mg, Na and O, and C and N previously identified in optical spectra of globular clusters, even though we omit these elements in our algorithm. Our algorithm identifies globular clusters without a priori knowledge of their locations in the sky. Thus, not only does this technique promise to discover new globular clusters, but it also allows us to identify candidate streams of kinematically and chemically clustered stars in the Milky Way.
Seismic and acoustic signal identification algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
LADD,MARK D.; ALAM,M. KATHLEEN; SLEEFE,GERARD E.
2000-04-03
This paper will describe an algorithm for detecting and classifying seismic and acoustic signals for unattended ground sensors. The algorithm must be computationally efficient and continuously process a data stream in order to establish whether or not a desired signal has changed state (turned on or off). The paper will focus on describing a Fourier-based technique that compares the running power spectral density estimate of the data to a predetermined signature in order to determine if the desired signal has changed state. How to establish the signature and the detection thresholds will be discussed, as well as the theoretical statistics of the algorithm for the Gaussian noise case, with results from simulated data. Actual seismic data results will also be discussed, along with techniques used to reduce false alarms due to the inherent nonstationary noise environments found with actual data.
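As a rough sketch of the running-PSD comparison described above (not the authors' implementation), the fragment below uses Welch's estimate and a correlation threshold against a stored spectral signature; the sampling rate, segment length, and threshold value are assumptions.

```python
# Minimal sketch: compare a running Welch PSD to a reference signature.
import numpy as np
from scipy.signal import welch

def state_from_psd(block, fs, signature, threshold=0.8):
    """Return True if this data block matches the 'on' spectral signature.

    signature: normalized reference PSD of the same length as the Welch
    estimate (129 bins for nperseg=256); it would be established from
    training data, as the abstract discusses.
    """
    f, pxx = welch(block, fs=fs, nperseg=256)
    pxx = pxx / pxx.sum()                      # compare spectral shape only
    corr = np.corrcoef(pxx, signature)[0, 1]   # similarity to the signature
    return corr >= threshold                   # crossing indicates a state change
```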
A novel image encryption algorithm using chaos and reversible cellular automata
NASA Astrophysics Data System (ADS)
Wang, Xingyuan; Luan, Dapeng
2013-11-01
In this paper, a novel image encryption scheme is proposed based on reversible cellular automata (RCA) combined with chaos. The algorithm uses an intertwining logistic map with complex behavior and periodic-boundary reversible cellular automata. We split each pixel of the image into units of 4 bits, then adopt a pseudorandom key stream generated by the intertwining logistic map to permute these units in the confusion stage. In the diffusion stage, two-dimensional reversible cellular automata, which are discrete dynamical systems, are iterated for many rounds to achieve diffusion at the bit level; only the higher 4 bits of each pixel are considered, because they carry almost all of the information of an image. Theoretical analysis and experimental results demonstrate that the proposed algorithm achieves a high security level and exhibits good performance against common attacks like differential attack and statistical attack. This algorithm belongs to the class of symmetric systems.
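A minimal sketch of the confusion stage only, with a plain logistic map standing in for the paper's intertwining map (an assumption) and the reversible-cellular-automata diffusion stage omitted:

```python
# Sketch of the confusion stage: split pixels into 4-bit units and permute
# them with a chaos-derived index. Decryption would apply the inverse
# permutation. The plain logistic map here is a stand-in, not the paper's map.
import numpy as np

def logistic_stream(x0, n, mu=3.99):
    xs = np.empty(n)
    for i in range(n):
        x0 = mu * x0 * (1.0 - x0)
        xs[i] = x0
    return xs

def confuse(img, x0=0.3141):
    flat = img.astype(np.uint8).ravel()
    nibbles = np.concatenate([flat >> 4, flat & 0x0F])    # 4-bit units
    perm = np.argsort(logistic_stream(x0, nibbles.size))  # chaotic permutation
    scrambled = nibbles[perm]
    hi, lo = np.split(scrambled, 2)
    return ((hi << 4) | lo).reshape(img.shape)
```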
Hydraulic containment: analytical and semi-analytical models for capture zone curve delineation
NASA Astrophysics Data System (ADS)
Christ, John A.; Goltz, Mark N.
2002-05-01
We present an efficient semi-analytical algorithm that uses complex potential theory and superposition to delineate the capture zone curves of extraction wells. This algorithm is more flexible than previously published techniques and allows the user to determine the capture zone for a number of arbitrarily positioned extraction wells pumping at different rates. The algorithm is applied to determine the capture zones and optimal well spacing of two wells pumping at different flow rates and positioned at various orientations to the direction of regional groundwater flow. The algorithm is also applied to determine capture zones for non-collinear three-well configurations as well as to determine optimal well spacing for up to six wells pumping at the same rate. We show that the optimal well spacing is found by minimizing the difference in the stream function evaluated at the stagnation points.
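The superposition idea can be sketched as follows (not the authors' algorithm): sum the complex potentials of a uniform regional flow and arbitrarily placed extraction wells, and read the stream function off the imaginary part. Sign conventions, aquifer thickness, and pumping rates below are illustrative assumptions.

```python
# Sketch: complex potential of uniform flow plus superposed extraction wells.
import numpy as np

def potential(z, wells, U=1e-5, b=10.0):
    """Complex potential W(z); wells is a list of (x, y, Q) tuples.

    Uniform regional flow in +x plus a logarithmic term per well; the
    stream function is Im(W), and stagnation points satisfy dW/dz = 0.
    """
    w = -U * z
    for (xw, yw, Q) in wells:
        w += Q / (2.0 * np.pi * b) * np.log(z - (xw + 1j * yw))
    return w

# Stream function on a grid for two wells; the capture zone boundary is the
# streamline through the stagnation points.
xs, ys = np.meshgrid(np.linspace(-200, 200, 400), np.linspace(-150, 150, 300))
psi = potential(xs + 1j * ys, wells=[(0.0, 50.0, 5e-3), (0.0, -50.0, 5e-3)]).imag
```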
NASA Astrophysics Data System (ADS)
Daniels, M. D.; Kerkez, B.; Chandrasekar, V.; Graves, S. J.; Stamps, D. S.; Dye, M. J.; Keiser, K.; Martin, C. L.; Gooch, S. R.
2016-12-01
Cloud-Hosted Real-time Data Services for the Geosciences, or CHORDS, addresses the ever-increasing importance of real-time scientific data, particularly in mission-critical scenarios where informed decisions must be made rapidly. Part of the broader EarthCube initiative, CHORDS seeks to investigate the role of real-time data in the geosciences. Many of the phenomena occurring within the geosciences, ranging from hurricanes and severe weather to earthquakes, volcanoes, and floods, can benefit from better handling of real-time data. The National Science Foundation funds many small teams of researchers at universities whose currently inaccessible measurements could contribute to a better understanding of these phenomena in order to ultimately improve forecasts and predictions. This lack of easy accessibility prohibits advanced algorithm and workflow development that could be initiated or enhanced by these data streams. Often the development of tools for the broad dissemination of valuable real-time data is a large IT overhead from a purely scientific perspective, and could benefit from an easy-to-use, scalable, cloud-based solution to facilitate access. CHORDS proposes to make a very diverse suite of real-time data available to the broader geosciences community in order to allow innovative new science in these areas to thrive. We highlight the recently developed CHORDS portal tools and processing systems aimed at addressing some of the gaps in handling real-time data, particularly in the provisioning of data from the "long-tail" scientific community through a simple interface deployed in the cloud. Examples shown include hydrology, atmosphere, and solid earth sensors. Broad use of the CHORDS framework will expand the role of real-time data within the geosciences and enhance the potential of streaming data sources to enable adaptive experimentation and real-time hypothesis testing. CHORDS enables real-time data to be discovered and accessed using existing standards for straightforward integration into analysis, visualization, and modeling tools.
A hardware architecture for real-time shadow removal in high-contrast video
NASA Astrophysics Data System (ADS)
Verdugo, Pablo; Pezoa, Jorge E.; Figueroa, Miguel
2017-09-01
Broadcasting an outdoor sports event at daytime is a challenging task due to the high contrast that exists between areas in the shadow and light conditions within the same scene. Commercial cameras typically do not handle the high dynamic range of such scenes in a proper manner, resulting in broadcast streams with very little shadow detail. We propose a hardware architecture for real-time shadow removal in high-resolution video, which reduces the shadow effect and simultaneously improves shadow details. The algorithm operates only on the shadow portions of each video frame, thus improving the results and producing more realistic images than algorithms that operate on the entire frame, such as simplified Retinex and histogram shifting. The architecture receives an input in the RGB color space, transforms it into the YIQ space, and uses color information from both spaces to produce a mask of the shadow areas present in the image. The mask is then filtered using a connected components algorithm to eliminate false positives and negatives. The hardware uses pixel information at the edges of the mask to estimate the illumination ratio between light and shadow in the image, which is then used to correct the shadow area. Our prototype implementation simultaneously processes up to 7 video streams of 1920×1080 pixels at 60 frames per second on a Xilinx Kintex-7 XC7K325T FPGA.
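A minimal sketch of the color-space step described above, assuming the standard RGB-to-YIQ matrix; the connected-components filtering and illumination-ratio correction stages of the full architecture are omitted, and the thresholds are illustrative.

```python
# Sketch: RGB -> YIQ conversion and a simple luminance/chromaticity shadow mask.
import numpy as np

RGB2YIQ = np.array([[0.299, 0.587, 0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523, 0.312]])

def shadow_mask(rgb, y_thresh=0.35, sat_thresh=0.15):
    """Mark pixels that are dark (low Y) but not strongly chromatic.

    rgb: (H, W, 3) uint8 image. The thresholds are assumptions; the paper's
    hardware also filters the mask with a connected-components pass.
    """
    yiq = rgb.astype(np.float64) / 255.0 @ RGB2YIQ.T
    y, i, q = yiq[..., 0], yiq[..., 1], yiq[..., 2]
    chroma = np.hypot(i, q)                    # chromatic magnitude in I-Q plane
    return (y < y_thresh) & (chroma < sat_thresh)
```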
Elizabeth A. Eschenbach; Rebecca Teasley; Carlos Diaz; Mary Ann Madej
2007-01-01
Sediment contributions from unpaved forest roads have contributed to the degradation of anadromous fisheries streams in the Pacific Northwest. Efforts to reduce this degradation have included road decommissioning and road upgrading. These expensive activities have usually been implemented on a site-specific basis without considering the sediment...
Under EPA’s Green Infrastructure Initiative, a variety of research activities are underway to evaluate the effectiveness of green infrastructure in mitigating the effects of urbanization and stormwater impacts on stream biota and habitat. One aspect of this is evaluating th...
A Novel Image Encryption Scheme Based on Intertwining Chaotic Maps and RC4 Stream Cipher
NASA Astrophysics Data System (ADS)
Kumari, Manju; Gupta, Shailender
2018-03-01
As modern systems enable us to transmit large chunks of data, both as text and as images, there is a need to explore algorithms which can provide higher security without significantly increasing time complexity. This paper proposes an image encryption scheme which uses intertwining chaotic maps and the RC4 stream cipher to encrypt/decrypt images. The scheme employs a chaotic map for the confusion stage and for generation of the key for the RC4 cipher. The RC4 cipher uses this key to generate random sequences which implement an efficient diffusion process. The algorithm is implemented in MATLAB-2016b and various performance metrics are used to evaluate its efficacy. The proposed scheme provides highly scrambled encrypted images and can resist statistical, differential, and brute-force search attacks. The peak signal-to-noise ratio values are quite similar to those of other schemes, and the entropy values are close to ideal. In addition, the scheme is very practical, since it has the lowest time complexity among its counterparts.
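A minimal sketch under the assumption that a plain logistic map replaces the paper's intertwining maps: the chaotic orbit seeds an RC4 key, and the resulting RC4 keystream is XORed with the pixel bytes for diffusion.

```python
# Sketch: chaos-seeded RC4 keystream XORed with image bytes (diffusion only).
import numpy as np

def chaotic_key(x0=0.7, mu=3.99, nbytes=16):
    """Derive key bytes from a logistic-map orbit (stand-in for the paper's maps)."""
    key, x = [], x0
    for _ in range(nbytes):
        x = mu * x * (1.0 - x)
        key.append(int(x * 256) % 256)
    return key

def rc4_stream(key, n):
    s = list(range(256))                       # KSA: key-scheduling
    j = 0
    for i in range(256):
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    out, i, j = [], 0, 0                       # PRGA: keystream generation
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        out.append(s[(s[i] + s[j]) % 256])
    return np.array(out, dtype=np.uint8)

def encrypt(img):
    """img: uint8 array; XOR keystream, so decryption is the same operation."""
    ks = rc4_stream(chaotic_key(), img.size)
    return (img.ravel() ^ ks).reshape(img.shape)
```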
NASA Astrophysics Data System (ADS)
Riera-Palou, Felip; den Brinker, Albertus C.
2007-12-01
This paper introduces a new audio and speech broadband coding technique based on the combination of a pulse excitation coder and a standardized parametric coder, namely the MPEG-4 high-quality parametric coder. After presenting a series of enhancements to regular pulse excitation (RPE) to make it suitable for modeling broadband signals, it is shown how pulse and parametric coding complement each other and how they can be merged to yield a layered, bit-stream-scalable coder able to operate at different points in the quality/bit-rate plane. The performance of the proposed coder is evaluated in a listening test. The major result is that the extra functionality of bit-stream scalability does not come at the price of reduced performance, since the coder is competitive with standardized coders (MP3, AAC, SSC).
Plasma Emission by Counter-Streaming Electron Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ziebell, L. F.; Petruzzellis, L. T.; Gaelzer, R.
2016-02-10
The radiation emission mechanism responsible for both type-II and type-III solar radio bursts is commonly accepted to be plasma emission. Recently, Ganse et al. suggested that type-II radio bursts may be enhanced when the electron foreshock geometry of a coronal mass ejection contains a double-hump structure. They reasoned that the counter-streaming electron beams that exist between the double shocks may enhance the nonlinear coalescence interaction, thereby giving rise to more efficient generation of radiation. Ganse et al. employed a particle-in-cell simulation to study such a scenario. The present paper revisits the same problem with EM weak turbulence theory, and shows that the fundamental (F) emission is not greatly affected by the presence of counter-streaming beams, but the harmonic (H) emission becomes somewhat more effective when the two beams are present. The present finding is thus complementary to the work by Ganse et al.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koester, Petra; Cecchetti, Carlo A.; Booth, Nicola
2015-02-15
The high-current fast electron beams generated in high-intensity laser-solid interactions require the onset of a balancing return current in order to propagate in the target material. Such a system of counter-streaming electron currents is unstable to a variety of instabilities such as the current-filamentation instability and the two-stream instability. An experimental study aimed at investigating the role of instabilities in a system of symmetrical counter-propagating fast electron beams is presented here for the first time. The fast electron beams are generated by double-sided laser-irradiation of a layered target foil at laser intensities above 10^19 W/cm^2. High-resolution X-ray spectroscopy of the emission from the central Ti layer shows that locally enhanced energy deposition is indeed achieved in the case of counter-propagating fast electron beams.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dechant, Lawrence J.
We examine the role of periodic sinusoidal free-stream disturbances on the inner law-of-the-wall (log-law) for turbulent boundary layers. This model serves as a surrogate for the interaction of flight vehicles with atmospheric disturbances. The approximate skin friction expression that is derived suggests that free-stream disturbances can enhance the mean skin friction. Considering the influence of grid-generated free-stream turbulence in the laminar sublayer/log-law region (small scale/high frequency), the model recovers the well-known shear layer enhancement, suggesting overall validity for the approach. The effect on the wall shear associated with the lower frequencies due to the passage of the vehicle through large (vehicle-scale) atmospheric disturbances is likely small, i.e., an increase on the order of 1% for turbulence intensities on the order of 2%. The increase in wall pressure fluctuation, which is directly proportional to the wall shear stress, is correspondingly small.
Modeling the Impact of Stream Discharge Events on Riparian Solute Dynamics.
Mahmood, Muhammad Nasir; Schmidt, Christian; Fleckenstein, Jan H; Trauth, Nico
2018-03-22
The biogeochemical composition of stream water and the surrounding riparian water is mainly defined by the exchange of water and solutes between the stream and the riparian zone. Short-term fluctuations in near-stream hydraulic head gradients (e.g., during stream flow events) can significantly influence the extent and rate of exchange processes. In this study, we simulate exchanges between streams and their riparian zones driven by stream stage fluctuations during single stream discharge events of varying peak height and duration. Simulation results show that strong stream flow events can trigger solute mobilization in riparian soils and subsequent export to the stream. The timing and amount of solute export are linked to the shape of the discharge event. Higher peaks and increased durations significantly enhance solute export; however, peak height is found to be the dominant control on overall mass export. Mobilized solutes are transported to the stream in two stages: (1) by return flow of stream water that was stored in the riparian zone during the event, and (2) by vertical movement to the groundwater under gravity drainage from the unsaturated parts of the riparian zone, which lasts significantly longer (> 400 days), resulting in long tailing of bank outflows and solute mass outfluxes. We conclude that strong stream discharge events can mobilize and transport solutes from near-stream riparian soils into the stream. The impact of short-term stream discharge variations on solute exchange may last long after the flow event. © 2018, National Ground Water Association.
Large Scale Gaussian Processes for Atmospheric Parameter Retrieval and Cloud Screening
NASA Astrophysics Data System (ADS)
Camps-Valls, G.; Gomez-Chova, L.; Mateo, G.; Laparra, V.; Perez-Suay, A.; Munoz-Mari, J.
2017-12-01
Current Earth-observation (EO) applications for image classification have to deal with an unprecedentedly large amount of heterogeneous and complex data sources. Spatio-temporally explicit classification methods are a requirement in a variety of Earth system data processing applications. Upcoming missions such as the super-spectral Copernicus Sentinels, EnMAP, and FLEX will soon provide unprecedented data streams. Very high resolution (VHR) sensors like Worldview-3 also pose big challenges to data processing. The challenge is not only attached to optical sensors but also to infrared sounders and radar images, which have increased in spectral, spatial, and temporal resolution. Besides, we should not forget the availability of the extremely large remote sensing data archives already collected by several past missions, such as ENVISAT, Cosmo-SkyMED, Landsat, SPOT, or Seviri/MSG. These large-scale data problems require enhanced processing techniques that should be accurate, robust, and fast. Standard parameter retrieval and classification algorithms cannot cope with this new scenario efficiently. In this work, we review the field of large-scale kernel methods for both atmospheric parameter retrieval and cloud detection, using infrared sounding IASI data and optical Seviri/MSG imagery. We propose novel Gaussian Processes (GPs) to train problems with millions of instances and a high number of input features. The algorithms cope with non-linearities efficiently, accommodate multi-output problems, and provide confidence intervals for the predictions. Several strategies to speed up the algorithms are devised: random Fourier features and variational approaches for cloud classification using IASI data and Seviri/MSG, and engineered randomized kernel functions and emulation in temperature, moisture, and ozone atmospheric profile retrieval from IASI as a proxy to the upcoming MTG-IRS sensor. An excellent compromise between accuracy and scalability is obtained in all applications.
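One of the speed-up strategies named above, random Fourier features, can be sketched in a few lines (illustrative, not the authors' code): the randomized cosine map approximates an RBF kernel, so an ordinary linear solver scales to very large sample counts.

```python
# Sketch: random Fourier features (Rahimi-Recht style) plus linear ridge fit.
import numpy as np

def rff(X, n_features=500, gamma=0.1, seed=0):
    """Map X so that inner products approximate exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def ridge_fit(Z, y, lam=1e-3):
    """Closed-form ridge regression in the random feature space."""
    return np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)

# Usage: Z = rff(X_train); w = ridge_fit(Z, y_train); y_hat = rff(X_test) @ w
```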
Passman, Rod S; Rogers, John D; Sarkar, Shantanu; Reiland, Jerry; Reisfeld, Erin; Koehler, Jodi; Mittal, Suneet
2017-07-01
Undersensing of premature ventricular beats and low-amplitude R waves are primary causes of inappropriate bradycardia and pause detections in insertable cardiac monitors (ICMs). The purpose of this study was to develop and validate an enhanced algorithm to reduce inappropriately detected bradycardia and pause episodes. Independent data sets to develop and validate the enhanced algorithm were derived from a database of ICM-detected bradycardia and pause episodes in de-identified patients monitored for unexplained syncope. The original algorithm uses an auto-adjusting sensitivity threshold for R-wave sensing to detect tachycardia and avoid T-wave oversensing. In the enhanced algorithm, a second sensing threshold is used, with a long blanking period and a fixed lower sensitivity threshold, to look for evidence of undersensed signals. Data reported include the percent change in appropriate and inappropriate bradycardia and pause detections, as well as changes in episode detection sensitivity and positive predictive value with the enhanced algorithm. The validation data set, from 663 consecutive patients, consisted of 4904 bradycardia episodes (161 patients) and 2582 pause episodes (133 patients), of which 2976 (61%) and 996 (39%), respectively, were appropriately detected. The enhanced algorithm reduced inappropriate bradycardia and pause episodes by 95% and 47%, respectively, with reductions of only 1.7% and 0.6% in appropriate episodes. The average episode positive predictive value improved by 62% (P < .001) for bradycardia detection and by 26% (P < .001) for pause detection, with an average relative sensitivity of 95% (P < .001) and 99% (P = .5), respectively. The enhanced dual-sense algorithm for bradycardia and pause detection in ICMs substantially reduced inappropriate episode detection with a minimal reduction in true episode detection. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Using "residual depths" to monitor pool depths independently of discharge
Thomas E. Lisle
1987-01-01
As vital components of habitat for stream fishes, pools are often monitored to follow the effects of enhancement projects and natural stream processes. Variations of water depth with discharge, however, can complicate monitoring changes in the depth and volume of pools. To subtract the effect of discharge on depth in pools, residual depths can be measured. Residual...
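A minimal sketch of the residual-depth computation, assuming a thalweg elevation profile ordered from downstream to upstream: track the highest bed elevation encountered downstream (the controlling riffle crest) and report the depth that would remain ponded at zero discharge.

```python
# Sketch: residual pool depths from a thalweg profile (downstream -> upstream).
import numpy as np

def residual_depths(z):
    """z: bed elevations ordered from the downstream end upstream.

    The running maximum gives the highest downstream control elevation;
    where the bed sits below it, water would remain ponded at zero flow.
    """
    control = np.maximum.accumulate(z)          # highest elevation downstream
    return np.clip(control - z, 0.0, None)      # residual (discharge-independent) depth

profile = np.array([100.0, 100.4, 99.8, 99.6, 100.1, 100.9])
print(residual_depths(profile))                 # pools show positive depths
```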
NASA Technical Reports Server (NTRS)
Toon, Owen B.; Mckay, C. P.; Ackerman, T. P.; Santhanam, K.
1989-01-01
The solution of the generalized two-stream approximation for radiative transfer in homogeneous multiple-scattering atmospheres is extended to vertically inhomogeneous atmospheres in a manner which is numerically stable and computationally efficient. It is shown that solar energy deposition rates, photolysis rates, and infrared cooling rates may all be calculated with simple modifications of a single algorithm. The accuracy of the algorithm is generally better than 10 percent, so that other uncertainties, such as in absorption coefficients, may often dominate the error in calculation of the quantities of interest to atmospheric studies.
Standard random number generation for MBASIC
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1976-01-01
A machine-independent algorithm is presented and analyzed for generating pseudorandom numbers suitable for the standard MBASIC system. The algorithm used is the polynomial congruential or linear recurrence modulo 2 method. Numbers, formed as nonoverlapping adjacent 28-bit words taken from the bit stream produced by the recurrence a_(m+532) = a_(m+37) + a_m (mod 2), do not repeat within the projected age of the solar system, show no ensemble correlation, exhibit uniform distribution of adjacent numbers up to 19 dimensions, and do not deviate from random runs-up and runs-down behavior.
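A minimal sketch of the generator described above; the GF(2) recurrence and the 28-bit packing follow the abstract, while the seeding below is an arbitrary assumption rather than the MBASIC initialization.

```python
# Sketch: Tausworthe-style generator a_(m+532) = a_(m+37) XOR a_m over GF(2),
# with the bit stream packed into non-overlapping 28-bit words.
def tausworthe_words(n_words, seed_bits=None):
    p, q, w = 532, 37, 28
    # Arbitrary nonzero seed (assumption; not the MBASIC initialization).
    bits = list(seed_bits) if seed_bits else [(i * 2654435761 >> 7) & 1 for i in range(p)]
    words = []
    for _ in range(n_words):
        word = 0
        for _ in range(w):
            new = bits[q] ^ bits[0]      # a_(m+532) = a_(m+37) XOR a_m
            bits.pop(0)                  # slide the window one step forward
            bits.append(new)
            word = (word << 1) | new     # pack 28 consecutive bits per word
        words.append(word)
    return words

print(tausworthe_words(3))
```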
Kal, Betül Ilhan; Baksi, B Güniz; Dündar, Nesrin; Sen, Bilge Hakan
2007-02-01
The aim of this study was to compare the accuracy of endodontic file lengths after application of various image enhancement modalities. Endodontic files of three different ISO sizes were inserted in 20 single-rooted extracted permanent mandibular premolar teeth and standardized images were obtained. Original digital images were then enhanced using five processing algorithms. Six evaluators measured the length of each file on each image. The measurements from each processing algorithm and each file size were compared using repeated measures ANOVA and Bonferroni tests (P = 0.05). Paired t test was performed to compare the measurements with the true lengths of the files (P = 0.05). All of the processing algorithms provided significantly shorter measurements than the true length of each file size (P < 0.05). The threshold enhancement modality produced significantly higher mean error values (P < 0.05), while there was no significant difference among the other enhancement modalities (P > 0.05). Decrease in mean error value was observed with increasing file size (P < 0.05). Invert, contrast/brightness and edge enhancement algorithms may be recommended for accurate file length measurements when utilizing storage phosphor plates.
Preston, Stephen D.; Alexander, Richard B.; Woodside, Michael D.
2011-01-01
The U.S. Geological Survey (USGS) recently completed assessments of stream nutrients in six major regions extending over much of the conterminous United States. SPARROW (SPAtially Referenced Regressions On Watershed attributes) models were developed for each region to explain spatial patterns in monitored stream nutrient loads in relation to human activities and natural resources and processes. The model information, reported by stream reach and catchment, provides contrasting views of the spatial patterns of nutrient source contributions, including those from urban (wastewater effluent and diffuse runoff from developed land), agricultural (farm fertilizers and animal manure), and specific background sources (atmospheric nitrogen deposition, soil phosphorus, forest nitrogen fixation, and channel erosion).
NASA Astrophysics Data System (ADS)
Golly, Antonius; Turowski, Jens
2017-04-01
The width of fluvial streams and channel beds is an important metric for a large number of hydraulic, geomorphic, and ecologic applications. For example, for a given discharge the local channel width determines the water flow velocity and thus the sediment transport capacity of a reach. Since streams often have irregular shapes with uneven channel banks, the channel width varies strongly along the channel. Although the geometry of streams or their beds can be measured easily in the field (e.g., with a total station or GPS) or from maps or aerial images in a GIS, the width of the stream cannot be identified objectively without further data processing, since the results are more or less irregular polygons with sometimes bent shapes. An objective quantification of the channel width and other metrics requires automated algorithms that are applicable over a range of channel shapes and spatial scales. Here, we present a lightweight software suite with a small number of functions that process 2D or 3D geometrical data of channels or channel beds. The software, written as an R package, accepts various text data formats, can be configured through five parameters, creates interactive overview plots (if desired), and produces three basic channel metrics: the centerline, the channel width along the centerline, and the slope along the centerline. The centerline is an optimized line that minimizes the distances to both channel banks; it also gives a measure of the real length and slope of the channel. From this centerline, perpendicular transects are generated which allow calculation of the channel width where they intersect the channel banks. We briefly present an example and demonstrate the importance of these metrics in a use case of a steep stream, the Erlenbach in Switzerland. We were motivated to develop and publish the algorithm in an open-source framework, since only proprietary solutions were available at the time. The software is developed in R and published under the GNU GPL, meaning it is free to use, edit, and copy. This makes the software available also to users who do not own a MATLAB or ARCMAP license, for which similar products exist.
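The width metric can be sketched as follows, assuming both banks are supplied as ordered (x, y) polylines; the package's optimized centerline is replaced here by simple midpoints of paired, equally resampled bank vertices, and the sketch is in Python although the package itself is written in R.

```python
# Sketch: midpoint centerline and channel width from two bank polylines.
import numpy as np

def resample(line, n):
    """Resample a polyline to n points equally spaced along its arc length."""
    line = np.asarray(line, dtype=float)
    seg = np.hypot(*np.diff(line, axis=0).T)
    d = np.r_[0.0, np.cumsum(seg)]
    s = np.linspace(0.0, d[-1], n)
    return np.c_[np.interp(s, d, line[:, 0]), np.interp(s, d, line[:, 1])]

def centerline_and_width(left_bank, right_bank, n=200):
    l, r = resample(left_bank, n), resample(right_bank, n)
    center = 0.5 * (l + r)              # midpoint centerline (simplification)
    width = np.hypot(*(l - r).T)        # distance between paired bank points
    return center, width
```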
NASA Astrophysics Data System (ADS)
Weiss, Jake; Newberg, Heidi Jo; Arsenault, Matthew; Bechtel, Torrin; Desell, Travis; Newby, Matthew; Thompson, Jeffery M.
2016-01-01
Statistical photometric parallax is a method for using the distribution of absolute magnitudes of stellar tracers to statistically recover the underlying density distribution of these tracers. In previous work, statistical photometric parallax was used to trace the Sagittarius Dwarf tidal stream, the so-called bifurcated piece of the Sagittarius stream, and the Virgo Overdensity through the Milky Way. We use an improved knowledge of this distribution in a new algorithm that accounts for the changes in the stellar population of color-selected stars near the photometric limit of the Sloan Digital Sky Survey (SDSS). Although we select bluer main sequence turnoff (MSTO) stars as tracers, large color errors near the survey limit cause many stars to be scattered out of our selection box and many fainter, redder stars to be scattered into our selection box. We show that we are able to recover parameters for analogues of these streams in simulated data using a maximum likelihood optimization on MilkyWay@home. We also present the preliminary results of fitting the density distribution of major Milky Way tidal streams in SDSS data. This research is supported by generous gifts from the Marvin Clan, Babette Josephs, Manit Limlamai, and the MilkyWay@home volunteers.
Understanding the Milky Way Halo through Large Surveys
NASA Astrophysics Data System (ADS)
Koposov, Sergey
This thesis presents an extensive study of stellar substructure in the outskirts of the Milky Way (MW), combining data mining of SDSS with theoretical modeling. Such substructure, either bound star clusters and satellite galaxies, or tidally disrupted objects forming stellar streams, provides powerful diagnostics of the Milky Way's dynamics and formation history. I have developed an algorithmic technique for searching for stellar overdensities in the MW halo, based on SDSS catalogs. This led to the discovery of unusual ultra-faint (~1000 L_sun) globular clusters with very compact sizes and relaxation times << t_Hubble. The detailed analysis of a known stellar stream (GD-1) allowed me to make the first 6-D phase space map for such an object along 60 degrees on the sky. By modeling the stream's orbit I could place strong constraints on the Galactic potential, e.g., V_circ(R_0) = 224 +/- 13 km/s. The application of the algorithmic search for stellar overdensities to the SDSS dataset and to mock datasets allowed me to quantify SDSS's severe radial incompleteness in its search for ultra-faint dwarf galaxies and to determine the luminosity function of MW satellites down to luminosities of M_V ~ -3. I used a semi-analytical model to compare the CDM predictions for the MW satellite population with the observations; this comparison has shown that the recently increased census of MW satellites, a better understanding of the radial incompleteness, and the suppression of star formation after reionization can fully solve the "missing satellite problem".
Toward an Objective Enhanced-V Detection Algorithm
NASA Technical Reports Server (NTRS)
Brunner, Jason; Feltz, Wayne; Moses, John; Rabin, Robert; Ackerman, Steven
2007-01-01
The area of coldest cloud tops above thunderstorms sometimes has a distinct V or U shape. This pattern, often referred to as an "enhanced-V" signature, has been observed to occur during and preceding severe weather in previous studies. This study describes an algorithmic approach to objectively detect enhanced-V features with observations from the Geostationary Operational Environmental Satellite and Low Earth Orbit data. The methodology consists of cross-correlation statistics of pixels and thresholds of enhanced-V quantitative parameters. The effectiveness of the enhanced-V detection method will be examined using Geostationary Operational Environmental Satellite, MODerate-resolution Imaging Spectroradiometer, and Advanced Very High Resolution Radiometer image data from case studies in the 2003-2006 seasons. The main goal of this study is to develop an objective enhanced-V detection algorithm for future implementation into operations with future sensors, such as GOES-R.
Shuttle radar DEM hydrological correction for erosion modelling in small catchments
NASA Astrophysics Data System (ADS)
Jarihani, Ben; Sidle, Roy; Bartley, Rebecca
2016-04-01
Digital Elevation Models (DEMs) that accurately replicate both landscape form and processes are critical to support modelling of environmental processes. Catchment- and hillslope-scale runoff and sediment processes (i.e., patterns of overland flow, infiltration, subsurface stormflow, and erosion) are all topographically mediated. In remote and data-scarce regions, high-resolution DEMs (LiDAR) are often not available, and moderate- to coarse-resolution digital elevation models (e.g., SRTM) have difficulty replicating detailed hydrological patterns, especially in relatively flat landscapes. Several surface reconditioning algorithms (e.g., smoothing) and "stream burning" techniques (e.g., Agree or ANUDEM), in conjunction with representation of the known stream networks, have been used to improve DEM performance in replicating known hydrology. Detailed stream network data are not available at regional and national scales, but can be derived at local scales from remotely sensed data. This research explores the implications of using high-resolution stream network data derived from Google Earth images for DEM hydrological correction, instead of coarse-resolution stream networks derived from topographic maps. The accuracy of the implemented method in producing hydrologically efficient DEMs was assessed by comparing hydrological parameters derived from the modified DEMs with limited high-resolution airborne LiDAR DEMs. The degree of modification is dominated by the method used and the availability of stream network data. Although stream burning techniques improve DEMs hydrologically, they alter DEM characteristics, which may affect catchment boundaries, stream position and length, as well as secondary terrain derivatives (e.g., slope, aspect). Modification of a DEM to better reflect known hydrology can be useful; however, knowledge of the magnitude and spatial pattern of the changes is required before using the DEM for subsequent analyses.
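The "stream burning" idea itself is simple to illustrate with a fixed-depth burn (the Agree and ANUDEM procedures are more elaborate): lower the DEM cells beneath a rasterized stream network so that derived flow paths follow the mapped channels. The array names and the burn depth below are assumptions.

```python
# Sketch: fixed-depth stream burning of a DEM along a rasterized network.
import numpy as np

def burn_streams(dem, stream_mask, depth=10.0):
    """Trench the DEM along mapped stream cells by a fixed depth (meters)."""
    burned = np.asarray(dem, dtype=float).copy()
    burned[np.asarray(stream_mask, dtype=bool)] -= depth
    return burned

dem = np.array([[5.0, 5.1, 5.2],
                [5.0, 5.0, 5.1],
                [4.9, 5.0, 5.0]])
streams = np.array([[0, 0, 0],
                    [1, 1, 1],          # mapped channel crossing the tile
                    [0, 0, 0]])
print(burn_streams(dem, streams))
```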
Comparison of drinking water treatment process streams for optimal bacteriological water quality.
Ho, Lionel; Braun, Kalan; Fabris, Rolando; Hoefel, Daniel; Morran, Jim; Monis, Paul; Drikas, Mary
2012-08-01
Four pilot-scale treatment process streams (Stream 1 - Conventional treatment (coagulation/flocculation/dual media filtration); Stream 2 - Magnetic ion exchange (MIEX)/Conventional treatment; Stream 3 - MIEX/Conventional treatment/granular activated carbon (GAC) filtration; Stream 4 - Microfiltration/nanofiltration) were commissioned to compare their effectiveness in producing high quality potable water prior to disinfection. Despite receiving highly variable source water quality throughout the investigation, each stream consistently reduced colour and turbidity to below Australian Drinking Water Guideline levels, with the exception of Stream 1, which was difficult to manage due to the reactive nature of coagulation control. Of particular interest was the bacteriological quality of the treated waters, where flow cytometry was shown to be the superior monitoring tool in comparison to the traditional heterotrophic plate count method. Based on removal of total and active bacteria, the treatment process streams were ranked in the order: Stream 4 (average log removal of 2.7) > Stream 2 (average log removal of 2.3) > Stream 3 (average log removal of 1.5) > Stream 1 (average log removal of 1.0). The lower removals in Stream 3 were attributed to bacteria detaching from the GAC filter. Bacterial community analysis revealed that the treatments affected the bacteria present, with the communities in streams incorporating conventional treatment clustering with each other, while the community composition of Stream 4 was very different to those of Streams 1, 2 and 3. MIEX treatment was shown to enhance removal of bacteria due to more efficient flocculation, which was validated through the novel application of the photometric dispersion analyser. Copyright © 2012 Elsevier Ltd. All rights reserved.
An Extended Spectral-Spatial Classification Approach for Hyperspectral Data
NASA Astrophysics Data System (ADS)
Akbari, D.
2017-11-01
In this paper, an extended classification approach for hyperspectral imagery based on both spectral and spatial information is proposed. The spatial information is obtained by an enhanced marker-based minimum spanning forest (MSF) algorithm. Three different methods of dimension reduction are first used to obtain the subspace of hyperspectral data: (1) unsupervised feature extraction, including principal component analysis (PCA), independent component analysis (ICA), and minimum noise fraction (MNF); (2) supervised feature extraction, including decision boundary feature extraction (DBFE), discriminant analysis feature extraction (DAFE), and nonparametric weighted feature extraction (NWFE); and (3) a genetic algorithm (GA). The spectral features obtained are then fed into the enhanced marker-based MSF classification algorithm. In the enhanced MSF algorithm, the markers are extracted from the classification maps obtained by both the SVM and watershed segmentation algorithms. To evaluate the proposed approach, it is tested on the Pavia University hyperspectral data. Experimental results show that the proposed approach using the GA achieves approximately 8% higher overall accuracy than the original MSF-based algorithm.
Hue-preserving and saturation-improved color histogram equalization algorithm.
Song, Ki Sun; Kang, Hee; Kang, Moon Gi
2016-06-01
In this paper, an algorithm is proposed to improve contrast and saturation without color degradation. The local histogram equalization (HE) method offers better performance than the global HE method, but it sometimes produces undesirable results due to its block-based processing. The proposed contrast-enhancement (CE) algorithm reflects the characteristics of the global HE method within the local HE method to avoid these artifacts while enhancing both global and local contrast. There are two conventional ways to apply a CE algorithm to color images: processing the luminance channel only, or processing each channel independently. However, these approaches incur excessive or reduced saturation and color degradation. The proposed algorithm solves these problems by using channel-adaptive equalization and the similarity of ratios between the channels. Experimental results show that the proposed algorithm enhances contrast and saturation while preserving hue, and produces better performance than existing methods in terms of objective evaluation metrics.
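The ratio-based idea can be sketched as follows (a simplification, not the authors' channel-adaptive method): equalize a luminance proxy globally, then apply the same per-pixel gain to R, G, and B so the ratios between channels, and hence the hue, are preserved.

```python
# Sketch: global HE on a luminance proxy, with a common per-pixel gain
# applied to all three channels to preserve their ratios (hue).
import numpy as np

def hue_preserving_he(rgb):
    img = rgb.astype(np.float64)
    y = img.mean(axis=2)                                  # simple luminance proxy
    hist, bins = np.histogram(y, bins=256, range=(0, 255))
    cdf = hist.cumsum() / y.size                          # equalization mapping
    y_eq = np.interp(y, bins[:-1], cdf * 255.0)
    gain = y_eq / np.maximum(y, 1e-6)                     # same scale for R, G, B
    return np.clip(img * gain[..., None], 0, 255).astype(np.uint8)
```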
Application of the EM algorithm to radiographic images.
Brailean, J C; Little, D; Giger, M L; Chen, C T; Sullivan, B J
1992-01-01
The expectation maximization (EM) algorithm has received considerable attention in the area of positron emission tomography (PET) as a restoration and reconstruction technique. In this paper, the restoration capabilities of the EM algorithm when applied to radiographic images are investigated. This application does not involve reconstruction. The performance of the EM algorithm is quantitatively evaluated using a "perceived" signal-to-noise ratio (SNR) as the image quality metric. This perceived SNR is based on statistical decision theory and includes both the observer's visual response function and a noise component internal to the eye-brain system. For a variety of processing parameters, the relative SNR (ratio of the processed SNR to the original SNR) is calculated and used as a metric to quantitatively compare the effects of the EM algorithm with two other image enhancement techniques: global contrast enhancement (windowing) and unsharp mask filtering. The results suggest that the EM algorithm's performance is superior to unsharp mask filtering and global contrast enhancement for radiographic images containing objects smaller than 4 mm.
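For restoration without reconstruction, the EM iteration under a Poisson imaging model takes the familiar Richardson-Lucy multiplicative form. The sketch below is illustrative (the paper's perceived-SNR evaluation is not part of it), and the point spread function is assumed known.

```python
# Sketch: EM (Richardson-Lucy) restoration for a Poisson imaging model.
import numpy as np
from scipy.signal import fftconvolve

def em_restore(observed, psf, n_iter=25):
    """Iteratively deblur 'observed' given a known PSF (assumption)."""
    psf_flip = psf[::-1, ::-1]                    # adjoint of the blur operator
    estimate = np.full_like(observed, observed.mean(), dtype=np.float64)
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_flip, mode="same")  # EM update
    return estimate
```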
NASA Astrophysics Data System (ADS)
Yan, Dan; Bai, Lianfa; Zhang, Yi; Han, Jing
2018-02-01
To address the problems of missing details and limited performance in colorization based on sparse representation, we propose a conceptual model framework for colorizing gray-scale images, and on this basis a multi-sparse-dictionary colorization algorithm based on feature classification and detail enhancement (CEMDC) is proposed. The algorithm achieves a natural colorized effect for a gray-scale image that is consistent with human vision. First, the algorithm establishes a multi-sparse-dictionary classification colorization model. Then, to improve the accuracy of the classification, a corresponding local constraint algorithm is proposed. Finally, we propose a detail enhancement based on the Laplacian pyramid, which is effective in solving the problem of missing details and improving the speed of image colorization. In addition, the algorithm not only realizes the colorization of visual gray-scale images, but can also be applied to other areas, such as color transfer between color images, colorizing gray fusion images, and infrared images.
NASA Astrophysics Data System (ADS)
Unaldi, Numan; Asari, Vijayan K.; Rahman, Zia-ur
2009-05-01
Recently we proposed a wavelet-based dynamic range compression algorithm to improve the visual quality of digital images captured from high dynamic range scenes with non-uniform lighting conditions. The fast image enhancement algorithm, which provides dynamic range compression while preserving the local contrast and tonal rendition, is also a good candidate for real-time video processing applications. Although the colors of the enhanced images produced by the proposed algorithm are consistent with the colors of the original image, the algorithm fails to produce color-constant results for some "pathological" scenes that have very strong spectral characteristics in a single band. The linear color restoration process is the main reason for this drawback; hence, a different approach is required for the final color restoration process. In this paper, the latest version of the proposed algorithm, which deals with this issue, is presented. The results obtained by applying the algorithm to numerous natural images show strong robustness and high image quality.
Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei
2015-01-01
Image enhancement is an important procedure in image processing and analysis. This paper presents a new technique using a modified measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to enhance low-contrast images adaptively. Contrast enhancement is obtained by global transformation of the input intensities; the method employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality that considers three factors: threshold, entropy value, and the gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques, such as linear contrast stretching, histogram equalization, and evolutionary-computing-based image enhancement methods like the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization, in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper.
40 CFR 61.355 - Test methods, procedures, and compliance provisions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... entering an enhanced biodegradation unit, as defined in § 61.348(b)(2)(ii)(B), shall not be included in the... are met: (i) The benzene concentration for each waste stream entering the enhanced biodegradation unit...
40 CFR 61.355 - Test methods, procedures, and compliance provisions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... entering an enhanced biodegradation unit, as defined in § 61.348(b)(2)(ii)(B), shall not be included in the... are met: (i) The benzene concentration for each waste stream entering the enhanced biodegradation unit...
40 CFR 61.355 - Test methods, procedures, and compliance provisions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... entering an enhanced biodegradation unit, as defined in § 61.348(b)(2)(ii)(B), shall not be included in the... are met: (i) The benzene concentration for each waste stream entering the enhanced biodegradation unit...
40 CFR 61.355 - Test methods, procedures, and compliance provisions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... entering an enhanced biodegradation unit, as defined in § 61.348(b)(2)(ii)(B), shall not be included in the... are met: (i) The benzene concentration for each waste stream entering the enhanced biodegradation unit...
40 CFR 61.355 - Test methods, procedures, and compliance provisions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... entering an enhanced biodegradation unit, as defined in § 61.348(b)(2)(ii)(B), shall not be included in the... are met: (i) The benzene concentration for each waste stream entering the enhanced biodegradation unit...
How does modifying a DEM to reflect known hydrology affect subsequent terrain analysis?
NASA Astrophysics Data System (ADS)
Callow, John Nikolaus; Van Niel, Kimberly P.; Boggs, Guy S.
2007-01-01
Many digital elevation models (DEMs) have difficulty replicating hydrological patterns in flat landscapes. Efforts to improve DEM performance in replicating known hydrology have included a variety of soft (i.e., algorithm-based) and hard techniques, such as "stream burning" or "surface reconditioning" (e.g., Agree or ANUDEM). Using a representation of the known stream network, these methods trench or mathematically warp the original DEM to improve how accurately stream position, stream length and catchment boundaries replicate known hydrological conditions. However, these techniques permanently alter the DEM and may affect further analyses (e.g., slope). This paper explores the impact that commonly used hydrological correction methods (stream burning, Agree.aml, ANUDEM v4.6.3 and ANUDEM v5.1) have on the overall nature of a DEM, finding that different methods produce non-convergent outcomes for catchment parameters (such as catchment boundaries, stream position and length) and differentially compromise secondary terrain analysis. All hydrological correction methods improved calculation of catchment area, stream position and length compared with using the unmodified DEM, but they all increased catchment slope, and no single method performed best across all categories. Different hydrological correction methods changed elevation and slope in different spatial patterns and magnitudes, compromising the ability to derive catchment parameters and conduct secondary terrain analysis from a single DEM. Modification of a DEM to better reflect known hydrology can be useful; however, knowledge of the magnitude and spatial pattern of the changes is required before using a DEM for subsequent analyses.
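Stream burning itself is simple to state. Assuming a rasterized stream network aligned to the DEM grid, a minimal sketch is below (the fixed burn depth is an assumption; Agree and ANUDEM warp the surface more elaborately):

```python
import numpy as np

def burn_streams(dem, stream_mask, drop=10.0):
    """Classic 'stream burning': trench the DEM along a known stream network
    so flow routing follows mapped hydrology.
    dem         : 2-D elevation array
    stream_mask : boolean array of the same shape marking stream cells
    drop        : burn depth in the DEM's vertical units (assumed fixed here)
    Note the paper's caveat: the altered cells corrupt any slope or
    curvature later derived from this surface."""
    burned = dem.copy()
    burned[stream_mask] -= drop
    return burned
```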
NASA Astrophysics Data System (ADS)
Guliyev, Ayyub; Nabiyev, Shaig
2017-07-01
This paper presents the results of a statistical analysis of the dynamic parameters of 300 comets that have osculating hyperbolic orbits. It is shown that such comets differ from other comets by their large perihelion distances and by a predominance of retrograde motion. It is shown that the values of i, the inclination of the hyperbolic comets, are in comparative excess over the interval 90-120°. The predominance of large perihelion distances q makes it difficult to attribute the excess hyperbolic velocity of these comets to physical processes taking place in their nuclei. The working hypothesis that the hyperbolic excess of the parameter e might be acquired after comets pass through meteoroid streams is therefore also studied. To evaluate this hypothesis, the distribution of the orbits of hyperbolic comets relative to the planes of motion of 112 established meteoroid streams is analyzed. The number (N) of orbit nodes for hyperbolic comets with respect to the plane of each stream at various distances is calculated. To determine the degree of redundancy of N, a special computing algorithm was applied that provided the expected value nav as well as the standard deviation σ for the number of cometary nodes at the plane of each stream. A comparative analysis of the N and nav values that takes σ into account suggests an excess in 40 stream cases. This implies that the passage of comets through meteoroid streams can lead to an acceleration of the comets' heliocentric velocity.
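The comparison of N with nav in units of σ amounts to a per-stream significance test; a minimal sketch, assuming a simple z-score criterion since the exact cutoff is not stated here:

```python
import numpy as np

def node_excess(n_obs, n_exp, sigma, k=2.0):
    """Flag meteoroid streams where the observed number of hyperbolic-comet
    orbit nodes near the stream plane exceeds expectation.
    n_obs, n_exp, sigma : arrays over the 112 streams (N, nav, and sigma)
    k                   : significance multiple; the paper's exact criterion
                          is not given, so k = 2 is an assumption."""
    n_obs, n_exp, sigma = map(np.asarray, (n_obs, n_exp, sigma))
    z = (n_obs - n_exp) / np.where(sigma > 0, sigma, np.inf)
    return z > k   # boolean mask of streams with a significant excess
```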
Dynamic Floodplain representation in hydrologic flood forecasting using WRF-Hydro modeling framework
NASA Astrophysics Data System (ADS)
Gangodagamage, C.; Li, Z.; Maitaria, K.; Islam, M.; Ito, T.; Dhondia, J.
2016-12-01
Floods claim more lives and damage more property than any other category of natural disaster in the continental United States. A system that can demarcate local flood boundaries dynamically could help flood-prone communities prepare for, and in some cases avert, catastrophic flood events. Lateral distances from the centerline of the river to the right and left floodplains, for the water levels produced by the models at each grid location, have not been properly integrated with the national hydrography dataset (NHDPlus). The NHDPlus dataset represents the stream network with feature classes such as rivers, tributaries, canals, lakes, ponds, dams, coastlines, and stream gages. It consists of approximately 2.7 million river reaches defining how surface water drains to the ocean. These river reaches have upstream and downstream nodes and basic parameters such as flow direction, drainage area, and reach slope. We modified an existing algorithm (Gangodagamage et al., 2007) to provide the lateral distance from the centerline of the river to the right and left floodplains for the flows simulated by models. Previous work produced floodplain boundaries for static river stages (i.e., a 3D metric: distance along the main stem, flow depth, lateral distance from the river centerline). Our new approach introduces the floodplain boundary for variable water levels at each reach with a fourth dimension, time. We use modeled flows from WRF-Hydro and demarcate the right and left lateral boundaries of inundation dynamically by appropriately mapping discharges into hydraulically corrected stages. Backwater effects from the mainstem to tributaries are considered and proper corrections are applied for the tributary inundations. We obtained river stages by optimizing reach-level channel parameters using a newly developed streamflow routing algorithm. Non-uniform inundations are mapped at each NHDPlus reach (upstream and downstream nodes), and spatial interpolation is carried out on a normalized digital elevation model (streams are always at zero elevation) to obtain smooth flood boundaries between adjacent reaches. The validation of the dynamic inundation boundaries is performed using multi-temporal satellite datasets as well as HEC-RAS hydrodynamic model results for selected streams and previous flood events.
Accessing eSDO Solar Image Processing and Visualization through AstroGrid
NASA Astrophysics Data System (ADS)
Auden, E.; Dalla, S.
2008-08-01
The eSDO project is funded by the UK's Science and Technology Facilities Council (STFC) to integrate Solar Dynamics Observatory (SDO) data, algorithms, and visualization tools with the UK's Virtual Observatory project, AstroGrid. In preparation for the SDO launch in January 2009, the eSDO team has developed nine algorithms covering coronal behaviour, feature recognition, and global/local helioseismology. Each of these algorithms has been deployed as an AstroGrid Common Execution Architecture (CEA) application so that they can be included in complex VO workflows. In addition, the PLASTIC-enabled eSDO "Streaming Tool" online movie application allows users to search multi-instrument solar archives through AstroGrid web services and visualise the image data through galleries, an interactive movie viewing applet, and QuickTime movies generated on-the-fly.
Detection of person borne IEDs using multiple cooperative sensors
NASA Astrophysics Data System (ADS)
MacIntosh, Scott; Deming, Ross; Hansen, Thorkild; Kishan, Neel; Tang, Ling; Shea, Jing; Lang, Stephen
2011-06-01
The use of multiple cooperative sensors for the detection of person borne IEDs is investigated. The purpose of the effort is to evaluate the performance benefits of adding multiple sensor data streams into an aided threat detection algorithm, with a quantitative analysis of which sensor data combinations improve overall detection performance. Testing includes both mannequins and human subjects with simulated suicide bomb devices of various configurations, materials, sizes and metal content. Aided threat recognition algorithms are being developed to test the detection performance of individual sensors against combined, fused sensor inputs. Sensors investigated include active and passive millimeter wave imaging systems, passive infrared, 3-D profiling sensors and acoustic imaging. The paper describes the experimental set-up and outlines the methodology behind a decision fusion algorithm based on the concept of a "body model".
Nallasivam, Ulaganathan; Shah, Vishesh H.; Shenvi, Anirudh A.; ...
2016-02-10
We present a general Global Minimization Algorithm (GMA) to identify basic or thermally coupled distillation configurations that require the least vapor duty under minimum reflux conditions for separating any ideal or near-ideal multicomponent mixture into a desired number of product streams. In this algorithm, global optimality is guaranteed by modeling the system using Underwood equations and reformulating the resulting constraints to bilinear inequalities. The speed of convergence to the globally optimal solution is increased by using appropriate feasibility- and optimality-based variable-range reduction techniques and by developing valid inequalities. As a result, the GMA can be coupled with already developed techniques that enumerate basic and thermally coupled distillation configurations to provide, for the first time, a global-optimization-based rank list of distillation configurations.
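For context, the Underwood equations at the core of the model yield common roots θ that lie between adjacent relative volatilities; a minimal root-finding sketch under the usual constant-volatility assumptions (function names are illustrative; the GMA's bilinear reformulation and range reduction are not shown):

```python
import numpy as np
from scipy.optimize import brentq

def underwood_roots(alpha, z, q, eps=1e-6):
    """Find the Underwood roots theta of
        sum_i alpha_i * z_i / (alpha_i - theta) = 1 - q,
    which underpin the minimum-vapor-duty model the GMA optimizes over.
    alpha : relative volatilities, sorted descending
    z     : feed mole fractions
    q     : feed thermal quality
    One root lies between each adjacent pair of volatilities."""
    alpha = np.asarray(alpha, float)
    z = np.asarray(z, float)
    f = lambda th: np.sum(alpha * z / (alpha - th)) - (1.0 - q)
    roots = []
    for hi, lo in zip(alpha[:-1], alpha[1:]):   # bracket between the poles
        roots.append(brentq(f, lo + eps, hi - eps))
    return roots
```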
Learning to forget: continual prediction with LSTM.
Gers, F A; Schmidhuber, J; Cummins, F
2000-10-01
Long short-term memory (LSTM; Hochreiter & Schmidhuber, 1997) can solve numerous tasks not solvable by previous learning algorithms for recurrent neural networks (RNNs). We identify a weakness of LSTM networks processing continual input streams that are not a priori segmented into subsequences with explicitly marked ends at which the network's internal state could be reset. Without resets, the state may grow indefinitely and eventually cause the network to break down. Our remedy is a novel, adaptive "forget gate" that enables an LSTM cell to learn to reset itself at appropriate times, thus releasing internal resources. We review illustrative benchmark problems on which standard LSTM outperforms other RNN algorithms. All algorithms (including LSTM) fail to solve continual versions of these problems. LSTM with forget gates, however, easily solves them, and in an elegant way.
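The forget-gate remedy is compact in code. Below is a minimal NumPy sketch of one LSTM step with the adaptive forget gate; the stacked weight layout and names are conventions assumed for illustration, not the paper's notation. When f is driven toward zero, the cell state c is reset, releasing the internal resources the abstract refers to.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One step of an LSTM cell with a forget gate.
    x : input vector (d,);  h, c : hidden and cell state (n,)
    W : (4n, d), U : (4n, n), b : (4n,) stacked as [input|forget|output|cand]."""
    n = h.size
    zs = W @ x + U @ h + b
    i = sigmoid(zs[0*n:1*n])      # input gate
    f = sigmoid(zs[1*n:2*n])      # forget gate: f -> 0 resets the state
    o = sigmoid(zs[2*n:3*n])      # output gate
    g = np.tanh(zs[3*n:4*n])      # candidate values
    c = f * c + i * g             # without f, c could grow without bound
    h = o * np.tanh(c)
    return h, c
```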
Holtgrieve, Gordon W; Schindler, Daniel E
2011-02-01
In coastal areas of the North Pacific Ocean, annual returns of spawning salmon provide a substantial influx of nutrients and organic matter to streams and are generally believed to enhance the productivity of recipient ecosystems. Loss of this subsidy from areas with diminished salmon runs has been hypothesized to limit ecosystem productivity in juvenile salmon rearing habitats (lakes and streams), thereby reinforcing population declines. Using five to seven years of data from an Alaskan stream supporting moderate salmon densities, we show that salmon predictably increased stream water nutrient concentrations, which averaged 190% (nitrogen) and 390% (phosphorus) of pre-salmon values, and that primary producers incorporated some of these nutrients into tissues. However, benthic algal biomass declined by an order of magnitude despite increased nutrients. We also measured changes in stream ecosystem metabolic properties, including gross primary productivity (GPP) and ecosystem respiration (ER), from three salmon streams by analyzing diel measurements of oxygen concentrations and stable isotopic ratios (δ18O-O2) within a Bayesian statistical model of oxygen dynamics. Our results do not support a shift toward higher primary productivity with the return of salmon, as is expected from a nutrient fertilization mechanism. Rather, net ecosystem metabolism switched from approximately net autotrophic (GPP ≥ ER) to a strongly net heterotrophic state (GPP < ER) in response to bioturbation of benthic habitats by salmon. Following the seasonal arrival of salmon, GPP declined to <12% of pre-salmon rates, while ER increased more than threefold. Metabolism by live salmon could not account for the observed increase in ER early in the salmon run, suggesting salmon nutrients and disturbance enhanced in situ heterotrophic respiration. Salmon also changed the physical properties of the stream, increasing air-water gas exchange by nearly 10-fold during peak spawning. We suggest that management efforts to restore salmon ecosystems should consider effects on ecosystem metabolic properties and how salmon disturbance affects the incorporation of marine-derived nutrients into food webs.
Low-Light Image Enhancement Using Adaptive Digital Pixel Binning
Yoo, Yoonjong; Im, Jaehyun; Paik, Joonki
2015-01-01
This paper presents an image enhancement algorithm for low-light scenes in an environment with insufficient illumination. Simple amplification of intensity exhibits various undesired artifacts: noise amplification, intensity saturation, and loss of resolution. In order to enhance low-light images without undesired artifacts, a novel digital binning algorithm is proposed that considers brightness, context, noise level, and anti-saturation of a local region in the image. The proposed algorithm does not require any modification of the image sensor or additional frame-memory; it needs only two line-memories in the image signal processor (ISP). Since the proposed algorithm does not use an iterative computation, it can be easily embedded in an existing digital camera ISP pipeline containing a high-resolution image sensor. PMID:26121609
Trapping of Embolic Particles in a Vessel Phantom by Cavitation-Enhanced Acoustic Streaming
Maxwell, Adam D.; Park, Simone; Vaughan, Benjamin L.; Cain, Charles A.; Grotberg, James B.; Xu, Zhen
2014-01-01
Cavitation clouds generated by short, high-amplitude, focused ultrasound pulses were previously observed to attract, trap, and erode thrombus fragments in a vessel phantom. This phenomenon may offer a noninvasive method to capture and eliminate embolic fragments flowing through the bloodstream during a cardiovascular intervention. In this article, the mechanism of embolus trapping was explored by particle image velocimetry (PIV). PIV was used to examine the fluid streaming patterns generated by ultrasound in a vessel phantom with and without crossflow of blood-mimicking fluid. Cavitation enhanced streaming, which generated fluid vortices adjacent to the focus. The focal streaming velocity, uf, was as high as 120 cm/s, while mean crossflow velocities, uc, were imposed up to 14 cm/s. When a solid particle 3-4 mm diameter was introduced into crossflow, it was trapped near the focus. Increasing uf promoted particle trapping while increasing uc promoted particle escape. The maximum crossflow Reynolds number at which particles could be trapped, Rec, was approximately linear with focal streaming number, Ref, i.e. Rec = 0.25Ref + 67.44 (R2=0.76) corresponding to dimensional velocities uc=0.084uf + 3.122 for 20 < uf < 120 cm/s. The fluidic pressure map was estimated from PIV and indicated a negative pressure gradient towards the focus, trapping the embolus near this location. PMID:25109407
Practical Meteor Stream Forecasting
NASA Technical Reports Server (NTRS)
Cooke, William J.; Suggs, Robert M.
2003-01-01
Inspired by the recent Leonid meteor storms, researchers have made great strides in our ability to predict enhanced meteor activity. However, the necessary calibration of the meteor stream models with Earth-based ZHRs (Zenith Hourly Rates) has placed emphasis on the terran observer, and meteor activity predictions are published in a manner that reflects this emphasis. As a consequence, many predictions are often unusable by the satellite community, which has the most at stake and the greatest interest in meteor forecasting. This paper suggests that stream modelers need to pay more attention to the needs of this community and publish not just durations and times of maxima for Earth, but everything needed to characterize the meteor stream in and out of the plane of the ecliptic, which, at a minimum, consists of the location of maximum stream density (ZHR) and the functional form of the density decay with distance from this point. It is also suggested that some of the terminology associated with meteor showers may need to be more strictly defined in order to eliminate the perception of crying wolf by meteor scientists. The term "outburst" is especially problematic, as it usually denotes an enhancement by a factor of 2 or more to researchers, but conveys the notion of a sky filled with meteors to satellite operators and the public. Experience has also taught that predicted ZHRs often lead to public disappointment, as these values vastly overestimate what is seen.
NASA Astrophysics Data System (ADS)
Bay, Annick; Mayer, Alexandre
2014-09-01
The efficiency of light-emitting diodes (LED) has increased significantly over the past few years, but the overall efficiency is still limited by total internal reflections due to the high dielectric-constant contrast between the incident and emergent media. The bioluminescent organ of fireflies gave incentive for light-extraction enhancement studies. A specific factory-roof shaped structure was shown, by means of light-propagation simulations and measurements, to enhance light extraction significantly. In order to achieve a similar effect for light-emitting diodes, the structure needs to be adapted to the specific set-up of LEDs. In this context, simulations were carried out to determine the best geometrical parameters. In the present work, the search for a geometry that maximizes the extraction of light has been conducted by using a genetic algorithm. The idealized structure considered previously was generalized to a broader variety of shapes. The genetic algorithm makes it possible to search simultaneously over a wider range of parameters. It is also significantly less time-consuming than the previous approach that was based on a systematic scan over parameters. The results of the genetic algorithm show that (1) the calculations can be performed in a smaller amount of time and (2) the light extraction can be enhanced even more significantly by using optimal parameters determined by the genetic algorithm for the generalized structure. The combination of the genetic algorithm with the Rigorous Coupled Wave Analysis method constitutes a strong simulation tool, which provides us with adapted designs for enhancing light extraction from light-emitting diodes.
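A minimal real-coded genetic algorithm of the kind described, sketched around a black-box light-extraction score (here `fitness`, standing in for the RCWA evaluation; population size, operators, and rates are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def genetic_search(fitness, bounds, pop=40, gens=60, mut=0.1):
    """Maximize `fitness` over geometric parameters within `bounds`,
    a list of (lo, hi) pairs. Tournament selection, uniform crossover,
    Gaussian mutation: a generic sketch, not the authors' exact operators."""
    lo, hi = np.array(bounds, float).T
    X = rng.uniform(lo, hi, size=(pop, len(bounds)))
    for _ in range(gens):
        f = np.array([fitness(x) for x in X])
        # binary tournament: keep the better of two random individuals
        idx = rng.integers(0, pop, size=(pop, 2))
        winners = np.where(f[idx[:, 0]] >= f[idx[:, 1]], idx[:, 0], idx[:, 1])
        parents = X[winners]
        # uniform crossover between consecutive parents
        mask = rng.random(X.shape) < 0.5
        X = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Gaussian mutation, clipped back into bounds
        X += rng.normal(0.0, mut * (hi - lo), size=X.shape)
        X = np.clip(X, lo, hi)
    return X[int(np.argmax([fitness(x) for x in X]))]
```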
Multi-frame knowledge based text enhancement for mobile phone captured videos
NASA Astrophysics Data System (ADS)
Ozarslan, Suleyman; Eren, P. Erhan
2014-02-01
In this study, we explore automated text recognition and enhancement using mobile phone captured videos of store receipts. We propose a method that includes Optical Character Recognition (OCR) enhanced by our proposed Row Based Multiple Frame Integration (RB-MFI) and Knowledge Based Correction (KBC) algorithms. In this method, first, the trained OCR engine is used for recognition; then, the RB-MFI algorithm is applied to the output of the OCR. The RB-MFI algorithm determines and combines the most accurate rows of the text outputs extracted by OCR from multiple frames of the video. After RB-MFI, the KBC algorithm is applied to these rows to correct erroneous characters. Results of the experiments show that the proposed video-based approach, which includes the RB-MFI and KBC algorithms, increases the word recognition rate to 95% and the character recognition rate to 98%.
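The abstract does not spell out RB-MFI's row-scoring rule, so the sketch below substitutes a plurality vote across frames for "most accurate row"; the names and the assumption that frames are already row-aligned are illustrative:

```python
from collections import Counter

def integrate_rows(frames):
    """Row-wise integration of OCR outputs from multiple video frames.
    frames : list of OCR results, one per frame, each a list of row strings
             (assumed aligned to the same receipt rows).
    Lacking the paper's exact scoring, each row position takes the string
    most frequently produced across frames."""
    n_rows = min(len(f) for f in frames)
    merged = []
    for r in range(n_rows):
        candidates = Counter(f[r] for f in frames)
        merged.append(candidates.most_common(1)[0][0])
    return merged
```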
Color enhancement and image defogging in HSI based on Retinex model
NASA Astrophysics Data System (ADS)
Gao, Han; Wei, Ping; Ke, Jun
2015-08-01
Retinex is a luminance perceptual algorithm based on color constancy. It has good performance in color enhancement. But in some cases the traditional Retinex algorithms, both Single-Scale Retinex (SSR) and Multi-Scale Retinex (MSR) in RGB color space, do not work well and cause color deviation. To solve this problem, we present improved SSR and MSR algorithms. Compared to other Retinex algorithms, we implement Retinex algorithms in HSI (Hue, Saturation, Intensity) color space and use a parameter α to improve the quality of the image. Moreover, the algorithms presented in this paper perform well in image defogging. In contrast with traditional Retinex algorithms, we use the intensity channel to obtain the reflection information of an image. The intensity channel is processed using a Gaussian center-surround image filter to get the light information, which should be removed from the intensity channel. After that, we subtract the light information from the intensity channel to obtain the reflection image, which only includes the attributes of the objects in the image. Using the reflection image and a parameter α, an arbitrary scale factor set manually, we improve the intensity channel and complete the color enhancement. Our experiments show that this approach works well compared with existing methods for color enhancement. Besides better performance on the color deviation problem and in image defogging, a visible improvement in image quality for human contrast perception is also observed.
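The pipeline described (estimate light with a Gaussian surround on the intensity channel, subtract in the log domain, rescale with α) can be sketched as follows; the function name, the min-max stretch, and the hue-preserving gain step are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ssr_intensity(rgb, sigma=30.0, alpha=1.0, eps=1e-6):
    """Single-scale Retinex on the intensity channel only: a Gaussian
    surround estimates the illumination, which is removed in the log
    domain; hue and saturation are untouched. alpha is the manually set
    scale factor from the paper."""
    i = rgb.mean(axis=2)                            # HSI intensity = (R+G+B)/3
    light = gaussian_filter(i, sigma)               # center-surround estimate
    refl = np.log(i + eps) - np.log(light + eps)    # reflection image
    refl = (refl - refl.min()) / (refl.max() - refl.min() + eps)
    i_new = np.clip(alpha * refl, 0.0, 1.0)         # improved intensity channel
    gain = i_new / (i + eps)                        # rescale RGB, keeping hue
    return np.clip(rgb * gain[..., None], 0.0, 1.0)
```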
Real-time Social Internet Data to Guide Forecasting Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Del Valle, Sara Y.
Our goal is to improve decision support by monitoring and forecasting events using social media, mathematical models, and quantified model uncertainty. Our approach is real-time, data-driven forecasts with quantified uncertainty: not just for weather anymore. Information flowing from human observations of events through an Internet system and classification algorithms is used to produce quantitatively uncertain forecasts. In summary, we want to develop new tools to extract useful information from Internet data streams, develop new approaches to assimilate real-time information into predictive models, and validate these approaches by forecasting events; our ultimate goal is to develop an event forecasting system using mathematical approaches and heterogeneous data streams.
ERIC Educational Resources Information Center
Cummine, Jacqueline; Gould, Layla; Zhou, Crystal; Hrybouski, Stan; Siddiqi, Zohaib; Chouinard, Brea; Borowsky, Ron
2013-01-01
Neurobiology of reading research has yet to explore whether reliance on the ventral-lexical stream during word reading can be enhanced by the instructed reading strategy, or whether it is impervious to such strategies. We examined Instructions: "name all" vs. "name words" (based on spelling), Word Type: "regular words" vs. "exception words", and…
Daniele Tonina; John M. Buffington
2011-01-01
Hyporheic flow results from the interaction between streamflow and channel morphology and is an important component of stream ecosystems because it enhances water and solute exchange between the river and its bed. Hyporheic flow in pool-riffle channels is particularly complex because of three-dimensional topography that spans a range of partially to fully submerged...
Iterative Strategies for Aftershock Classification in Automatic Seismic Processing Pipelines
NASA Astrophysics Data System (ADS)
Gibbons, Steven J.; Kværna, Tormod; Harris, David B.; Dodge, Douglas A.
2016-04-01
Aftershock sequences following very large earthquakes present enormous challenges to near-real-time generation of seismic bulletins. The increase in analyst resources needed to relocate an inflated number of events is compounded by failures of phase association algorithms and a significant deterioration in the quality of underlying fully automatic event bulletins. Current processing pipelines were designed a generation ago and, due to computational limitations of the time, are usually limited to single passes over the raw data. With current processing capability, multiple passes over the data are feasible. Processing the raw data at each station currently generates parametric data streams which are then scanned by a phase association algorithm to form event hypotheses. We consider the scenario where a large earthquake has occurred and propose to define a region of likely aftershock activity in which events are detected and accurately located using a separate specially targeted semi-automatic process. This effort may focus on so-called pattern detectors, but here we demonstrate a more general grid search algorithm which may cover wider source regions without requiring waveform similarity. Given many well-located aftershocks within our source region, we may remove all associated phases from the original detection lists prior to a new iteration of the phase association algorithm. We provide a proof-of-concept example for the 2015 Gorkha sequence, Nepal, recorded on seismic arrays of the International Monitoring System. Even with very conservative conditions for defining event hypotheses within the aftershock source region, we can automatically remove over half of the original detections which could have been generated by Nepal earthquakes and reduce the likelihood of false associations and spurious event hypotheses. Further reductions in the number of detections in the parametric data streams are likely using correlation and subspace detectors and/or empirical matched field processing.
Banerjee, Arindam; Ghosh, Joydeep
2004-05-01
Competitive learning mechanisms for clustering, in general, suffer from poor performance for very high-dimensional (>1000) data because of "curse of dimensionality" effects. In applications such as document clustering, it is customary to normalize the high-dimensional input vectors to unit length, and it is sometimes also desirable to obtain balanced clusters, i.e., clusters of comparable sizes. The spherical kmeans (spkmeans) algorithm, which normalizes the cluster centers as well as the inputs, has been successfully used to cluster normalized text documents in 2000+ dimensional space. Unfortunately, like regular kmeans and its soft expectation-maximization-based version, spkmeans tends to generate extremely imbalanced clusters in high-dimensional spaces when the desired number of clusters is large (tens or more). This paper first shows that the spkmeans algorithm can be derived from a certain maximum likelihood formulation using a mixture of von Mises-Fisher distributions as the generative model, and in fact, it can be considered as a batch-mode version of (normalized) competitive learning. The proposed generative model is then adapted in a principled way to yield three frequency-sensitive competitive learning variants that are applicable to static data and produce high-quality, well-balanced clusters for high-dimensional data. Like kmeans, each iteration is linear in the number of data points and in the number of clusters for all three algorithms. A frequency-sensitive algorithm to cluster streaming data is also proposed. Experimental results on clustering of high-dimensional text data sets are provided to show the effectiveness and applicability of the proposed techniques. Index terms: balanced clustering, expectation maximization (EM), frequency-sensitive competitive learning (FSCL), high-dimensional clustering, kmeans, normalized data, scalable clustering, streaming data, text clustering.
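The base spkmeans iteration is compact; a minimal NumPy sketch, without the paper's frequency-sensitive weighting, looks like this:

```python
import numpy as np

def spkmeans(X, k, iters=50, seed=0):
    """Spherical k-means: inputs and centers live on the unit sphere, so
    cosine similarity reduces to a dot product. X is (n, d); this is the
    base algorithm the paper extends with frequency-sensitive weights to
    keep clusters balanced."""
    rng = np.random.default_rng(seed)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    C = X[rng.choice(len(X), size=k, replace=False)]     # init from data
    for _ in range(iters):
        labels = np.argmax(X @ C.T, axis=1)              # cosine assignment
        for j in range(k):
            members = X[labels == j]
            if len(members):
                m = members.sum(axis=0)
                C[j] = m / np.linalg.norm(m)             # renormalize center
    return labels, C
```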
Hravnak, Marilyn; Chen, Lujie; Dubrawski, Artur; Bose, Eliezer; Clermont, Gilles; Pinsky, Michael R
2016-12-01
Huge hospital information system databases can be mined for knowledge discovery and decision support, but artifact in stored non-invasive vital sign (VS) high-frequency data streams limits its use. We used machine-learning (ML) algorithms trained on expert-labeled VS data streams to automatically classify VS alerts as real or artifact, thereby "cleaning" such data for future modeling. 634 admissions to a step-down unit had recorded continuous noninvasive VS monitoring data [heart rate (HR), respiratory rate (RR), peripheral arterial oxygen saturation (SpO2) at 1/20 Hz, and noninvasive oscillometric blood pressure (BP)]. Times when data crossed stability thresholds defined VS event epochs. Data were divided into Block 1 as the ML training/cross-validation set and Block 2 as the test set. Expert clinicians annotated Block 1 events as perceived real or artifact. After feature extraction, ML algorithms were trained to create and validate models automatically classifying events as real or artifact. The models were then tested on Block 2. Block 1 yielded 812 VS events, with 214 (26%) judged by experts as artifact (RR 43%, SpO2 40%, BP 15%, HR 2%). ML algorithms applied to the Block 1 training/cross-validation set (tenfold cross-validation) gave area under the curve (AUC) scores of 0.97 RR, 0.91 BP and 0.76 SpO2. Performance when applied to Block 2 test data was AUC 0.94 RR, 0.84 BP and 0.72 SpO2. ML-defined algorithms applied to archived multi-signal continuous VS monitoring data allowed accurate automated classification of VS alerts as real or artifact, and could support data mining for future model building.
A Novel Real-Time Reference Key Frame Scan Matching Method.
Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu
2017-05-07
Unmanned aerial vehicles (UAVs) represent an effective technology for indoor search and rescue operations. Typically, most indoor mission environments are unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by the simultaneous localization and mapping approach using either local or global methods. Both suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outlier association. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan matching technique comprising feature-to-feature and point-to-point approaches. The algorithm aims to mitigate error accumulation using the key frame technique, which is inspired by the video-streaming broadcast process. It falls back on the iterative closest point (ICP) algorithm when linear features are lacking, as is typical of unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, its mapping performance and time consumption are compared with various algorithms in static and dynamic environments. The algorithm exhibits promising navigation and mapping results with very short computation times, indicating its potential for real-time systems.
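When no lines are visible, the method falls back on point-to-point ICP; a compact 2-D ICP sketch follows (the Kabsch/SVD alignment shown is the standard formulation, and all names are illustrative; the RKF switching logic is not reproduced):

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(src, dst, iters=20):
    """Point-to-point ICP in 2-D, the fallback matcher when no linear
    features are detected. src, dst : (n, 2) and (m, 2) scans.
    Returns rotation R and translation t aligning src to dst."""
    R, t = np.eye(2), np.zeros(2)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)                 # nearest-neighbor pairing
        p, q = cur.mean(axis=0), dst[idx].mean(axis=0)
        H = (cur - p).T @ (dst[idx] - q)         # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:                # guard against reflection
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = q - Ri @ p
        cur = cur @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti               # accumulate total transform
    return R, t
```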
Pedersen, Morten Lauge; Kristensen, Klaus Kevin; Friberg, Nikolai
2014-01-01
We evaluated the restoration of physical habitats and its influence on macroinvertebrate community structure in 18 Danish lowland streams comprising six restored streams, six streams with little physical alteration and six channelized streams. We hypothesized that physical habitats and macroinvertebrate communities of restored streams would resemble those of natural streams, while those of the channelized streams would differ from both restored and near-natural streams. Physical habitats were surveyed for substrate composition, depth, width and current velocity. Macroinvertebrates were sampled along 100 m reaches in each stream, in edge habitats and in riffle/run habitats located in the center of the stream. Restoration significantly altered the physical conditions and affected the interactions between stream habitat heterogeneity and macroinvertebrate diversity. The substrate in the restored streams was dominated by pebble, whereas the substrate in the channelized and natural streams was dominated by sand. In the natural streams a relationship was identified between slope and pebble/gravel coverage, indicating a coupling of energy and substrate characteristics. Such a relationship did not occur in the channelized or in the restored streams where placement of large amounts of pebble/gravel distorted the natural relationship. The analyses revealed a direct link between substrate heterogeneity and macroinvertebrate diversity in the natural streams. A similar relationship was not found in either the channelized or the restored streams, which we attribute to a de-coupling of the natural relationship between benthic community diversity and physical habitat diversity. Our study results suggest that restoration schemes should aim at restoring the natural physical structural complexity in the streams and at the same time enhance the possibility of re-generating the natural geomorphological processes sustaining the habitats in streams and rivers. Documentation of restoration efforts should be intensified with continuous monitoring of geomorphological and ecological changes including surveys of reference river systems.
Estimation of potential maximum biomass of trout in Wyoming streams to assist management decisions
Hubert, W.A.; Marwitz, T.D.; Gerow, K.G.; Binns, N.A.; Wiley, R.W.
1996-01-01
Fishery managers can benefit from knowledge of the potential maximum biomass (PMB) of trout in streams when making decisions on the allocation of resources to improve fisheries. Resources are most likely to be expended on streams with high PMB and with large differences between PMB and currently measured biomass. We developed and tested a model that uses four easily measured habitat variables to estimate PMB (upper 90th percentile of predicted mean biomass) of trout (Oncorhynchus spp., Salmo trutta, and Salvelinus fontinalis) in Wyoming streams. The habitat variables were proportion of cover, elevation, wetted width, and channel gradient. The PMB model was constructed from data on 166 stream reaches throughout Wyoming and validated on an independent data set of 50 stream reaches. Prediction of PMB in combination with estimation of current biomass and information on habitat quality can provide managers with insight into the extent to which management actions may enhance trout biomass.
Newton, Michael; Ice, George
2016-01-01
Forested riparian buffers isolate streams from the influence of harvesting operations that can lead to water temperature increases. Only forest cover between the sun and stream limits stream warming, but that cover also reduces in-stream photosynthesis, aquatic insect production, and fish productivity. Water temperature increases that occur as streams flow through canopy openings decrease rapidly downstream, in as little as 150 m. Limiting management options in riparian forests restricts maintenance and optimization of various buffer contributions to beneficial uses, including forest products, fish, and their food supply. Some riparian disturbance, especially along cold streams, appears to benefit fish productivity. Options for enhancing environmental investments in buffers should include flexibility in application of water quality standards to address the general biological needs of fish and the temporary nature of clearing-induced warming. Local prescriptions for optimizing riparian buffers and practices that address long-term habitat needs deserve attention. Options and incentives are needed to entice landowners to actively manage for desirable riparian forest conditions.
Woodward, Emily; Hladik, Michelle; Kolpin, Dana W.
2016-01-01
Nitrapyrin is a bactericide that is co-applied with fertilizer to prevent nitrification and enhance corn yields. While there have been studies of the environmental fate of nitrapyrin, there is no documentation of its off-field transport to streams. In 2016, 59 water samples from 11 streams across Iowa were analyzed for nitrapyrin and its degradate, 6-chloropicolinic acid (6-CPA), along with three widely used herbicides, acetochlor, atrazine, and metolachlor. Nitrapyrin was detected in seven streams (39% of water samples) with concentrations ranging from 12 to 240 ng/L; 6-CPA was never detected. The herbicides were ubiquitously detected (100% of samples, 28–16000 ng/L). Higher nitrapyrin concentrations in streams were associated with rainfall events following spring fertilizer applications. Nitrapyrin persisted in streams for up to 5 weeks. These results highlight the need for more research focused on the environmental fate and transport of nitrapyrin and the potential toxicity this compound could have on nontarget organisms.
Adewumi, Aderemi Oluyinka; Chetty, Sivashan
2017-01-01
The Annual Crop Planning (ACP) problem is a recently introduced problem in the literature. This study further expounds on it by presenting a new mathematical formulation based on market economic factors. To determine solutions, a new local search metaheuristic algorithm, called the enhanced Best Performance Algorithm (eBPA), is investigated. The eBPA's results are compared against two well-known local search metaheuristics, Tabu Search and Simulated Annealing. The results show the potential of the eBPA for continuous optimization problems.
A Control Chart Approach for Representing and Mining Data Streams with Shape Based Similarity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Omitaomu, Olufemi A
The mining of data streams for online condition monitoring is a challenging task in several domains, including the (electric) power grid, intelligent manufacturing, and consumer science. Consider a power grid application in which thousands of sensors, called phasor measurement units, are deployed on the power grid network to continuously collect streams of digital data for real-time situational awareness and system management. Depending on design, each sensor could stream between ten and sixty data samples per second. The myriad of sensory data captured could convey deeper insights about sequences of events in real time and before major damage is done. However, the timely processing and analysis of these high-velocity and high-volume data streams is a challenge. Hence, a new data processing and transformation approach, based on the concept of control charts, for representing sequences of data streams from sensors is proposed. In addition, an application of the proposed approach for enhancing data mining tasks such as clustering, using real-world power grid data streams, is presented. The results indicate that the proposed approach is very efficient for data-stream storage and manipulation.
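As a rough illustration of the control-chart representation (window length, the 3-sigma rule, and per-window limits are all assumptions; the paper's exact transform may differ), each window of a stream is reduced to a summary statistic plus an out-of-control flag:

```python
import numpy as np

def control_chart_encode(stream, window=60):
    """Represent a sensor stream by control-chart statistics: each
    non-overlapping window is reduced to its mean plus a flag for samples
    outside the 3-sigma limits. A 60-sample window (1-6 s of PMU data at
    10-60 samples/s) is an assumed choice."""
    x = np.asarray(stream, dtype=float)
    n = len(x) // window
    x = x[:n * window].reshape(n, window)
    mu = x.mean(axis=1)
    sd = x.std(axis=1)
    ucl, lcl = mu + 3 * sd, mu - 3 * sd           # control limits per window
    out_of_control = ((x > ucl[:, None]) | (x < lcl[:, None])).any(axis=1)
    return mu, out_of_control
```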
NASA Astrophysics Data System (ADS)
Pfeiffer, Andrew; Wohl, Ellen
2018-01-01
We used 48 reach-scale measurements of large wood and wood-associated sediment and coarse particulate organic matter (CPOM) storage within an 80 km2 catchment to examine spatial patterns of storage relative to stream order. Wood, sediment, and CPOM are not distributed uniformly across the drainage basin. Third- and fourth-order streams (23% of total stream length) disproportionately store wood and coarse and fine sediments: 55% of total wood volume, 78% of coarse sediment, and 49% of fine sediment, respectively. Fourth-order streams store 0.8 m3 of coarse sediment and 0.2 m3 of fine sediment per cubic meter of wood. CPOM storage is highest in first-order streams (60% of storage in 47% of total network stream length). First-order streams can store up to 0.3 m3 of CPOM for each cubic meter of wood. Logjams in third- and fourth-order reaches are primary sediment storage agents, whereas roots in small streams may be more important for storage of CPOM. We propose the large wood particulate storage index to quantify average volume of sediment or CPOM stored by a cubic meter of wood.
Habitat associations of age-0 cutthroat trout in a spring stream improved for adult salmonids
Hubert, W.A.; Joyce, M.P.
2005-01-01
Native cutthroat trout (Oncorhynchus clarki) in the Snake River watershed use streams formed by large springs for spawning and nursery habitat. Several spring streams have been modified to enhance abundance of adult salmonids, but the habitat associations of age-0 cutthroat trout in these systems are undescribed. We assessed the frequency of collection of age-0 cutthroat trout in riffles, riffle margins, pool margins, and backwaters from late June to the middle of August 2000 in a spring stream with such modifications. The proportion of sites in which age-0 cutthroat trout were collected increased up to the middle of July and then decreased. We found substantially lower frequencies of collection of age-0 cutthroat trout in riffles compared to the three stream-margin habitat types. Age-0 cutthroat trout appeared to select shallow, low-velocity, stream-margin habitat with cover that provided protection from piscivorous adult salmonids and avian predators. Our observations suggest that modification of spring streams for production of cutthroat trout should include efforts to manage stream margins so they provide cover in the form of aquatic macrophytes or overhanging vegetation for age-0 fish.
Visual analytics of anomaly detection in large data streams
NASA Astrophysics Data System (ADS)
Hao, Ming C.; Dayal, Umeshwar; Keim, Daniel A.; Sharma, Ratnesh K.; Mehta, Abhay
2009-01-01
Most data streams are multi-dimensional, high-speed, and contain massive volumes of continuous information. They are seen in daily applications, such as telephone calls, retail sales, data center performance, and oil production operations. Many analysts want insight into the behavior of this data. They want to catch the exceptions in flight to reveal the causes of the anomalies and to take immediate action. To guide the user in finding the anomalies in a large data stream quickly, we derive a new automated neighborhood threshold marking technique, called AnomalyMarker. This technique is built on cell-based data streams and user-defined thresholds. We extend the scope of the data points around the threshold to include the surrounding areas. The idea is to define a focus area (marked area) which enables users to (1) visually group the interesting data points related to the anomalies (i.e., problems that occur persistently or occasionally) for observing their behavior; (2) discover the factors related to the anomaly by visualizing the correlations between the problem attribute and the attributes of the nearby data items from the entire multi-dimensional data stream. Mining results are quickly presented in graphical representations (e.g., tooltips) for the user to zoom into the problem regions. Different algorithms are introduced which try to optimize the size and extent of the anomaly markers. We have successfully applied this technique to detect data stream anomalies in large real-world enterprise server performance and data center energy management.
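A minimal sketch of the neighborhood-threshold idea, assuming a one-dimensional stream, a single user threshold, and a fixed fractional band (the paper sizes and extends markers adaptively, so the names and the band rule here are illustrative):

```python
import numpy as np

def anomaly_marker(values, threshold, band=0.1):
    """Neighborhood threshold marking: besides points that cross the
    user-defined threshold, mark the surrounding band so persistent
    near-threshold behavior stays visible.
    band : widens the threshold by a fraction of its value (a positive
           threshold is assumed for this toy rule)."""
    v = np.asarray(values, dtype=float)
    crossed = v > threshold
    nearby = (v > threshold * (1 - band)) & ~crossed
    return crossed, nearby   # e.g. paint red vs. amber in the display
```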
NASA Astrophysics Data System (ADS)
Caldwell, R. J.; Gangopadhyay, S.; Bountry, J.; Lai, Y.; Elsner, M. M.
2013-07-01
Management of water temperatures in the Columbia River Basin (Washington) is critical because water projects have substantially altered the habitat of Endangered Species Act listed species, such as salmon, throughout the basin. This is most important in tributaries to the Columbia, such as the Methow River, where the spawning and rearing life stages of these cold-water fishes occur. Climate change projections generally predict increasing air temperatures across the western United States, with less confidence regarding shifts in precipitation. As air temperatures rise, we anticipate a corresponding increase in water temperatures, which may alter the timing and availability of habitat for fish reproduction and growth. To assess the impact of future climate change in the Methow River, we couple historical climate and future climate projections with a statistical modeling framework to predict daily mean stream temperatures. A K-nearest neighbor algorithm is also employed to (i) adjust the climate projections for biases compared to the observed record and (ii) provide a reference for performing spatiotemporal disaggregation in future hydraulic modeling of stream habitat. The statistical models indicate that the primary drivers of stream temperature are maximum and minimum air temperature and stream flow, and they show reasonable predictive skill. When compared to the historical reference period of 1916-2006, we conclude that increases in stream temperature are expected at each subsequent time horizon representative of the years 2020, 2040, and 2080, with an increase of 0.8 ± 1.9°C by the year 2080.
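The driver set named above maps naturally onto a simple regression; the sketch below fits daily mean stream temperature on maximum/minimum air temperature and log flow. A plain least-squares linear model and the log transform on flow are assumptions for illustration, not the paper's actual statistical framework.

```python
import numpy as np

def fit_stream_temp(tmax, tmin, flow, t_stream):
    """Least-squares fit of daily mean stream temperature on the drivers
    identified in the study: max/min air temperature and stream flow
    (all inputs are 1-D NumPy arrays; flow must be positive for the
    assumed log transform)."""
    X = np.column_stack([np.ones_like(tmax), tmax, tmin, np.log(flow)])
    beta, *_ = np.linalg.lstsq(X, t_stream, rcond=None)
    return beta   # intercept plus one coefficient per driver

# Prediction under a climate scenario is then X_future @ beta.
```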
EDMC: An enhanced distributed multi-channel anti-collision algorithm for RFID reader system
NASA Astrophysics Data System (ADS)
Zhang, YuJing; Cui, Yinghua
2017-05-01
In this paper, we propose an enhanced distributed multi-channel reader anti-collision algorithm for RFID environments, based on the existing distributed multi-channel reader anti-collision algorithm DiMCA. We add a monitoring method that decides whether a reader has received the latest control message after it selects a data channel. The simulation results show that the enhancement reduces interrogation delay.