Query-Adaptive Reciprocal Hash Tables for Nearest Neighbor Search.
Liu, Xianglong; Deng, Cheng; Lang, Bo; Tao, Dacheng; Li, Xuelong
2016-02-01
Recent years have witnessed the success of binary hashing techniques in approximate nearest neighbor search. In practice, multiple hash tables are usually built to cover more desired results in the hit buckets of each table. However, little work has studied a unified approach to constructing multiple informative hash tables that works with any type of hashing algorithm. Meanwhile, multiple-table search also lacks a generic, query-adaptive, and fine-grained ranking scheme that can alleviate the binary quantization loss suffered by standard hashing techniques. To solve these problems, we first regard table construction as a selection problem over a set of candidate hash functions. With a graph representation of the function set, we propose an efficient solution that sequentially applies the normalized dominant set to find the most informative and independent hash functions for each table. To further reduce redundancy between tables, we explore reciprocal hash tables in a boosting manner, where the hash function graph is updated with higher weights on the neighbor pairs misclassified by previous hash tables. To refine the ranking of the retrieved buckets within a certain Hamming radius from the query, we propose a query-adaptive bitwise weighting scheme that enables fine-grained bucket ranking in each hash table, exploiting the discriminative power of its hash functions and their complementarity for nearest neighbor search. Moreover, we integrate this scheme into multiple-table search using a fast, yet reciprocal, table lookup algorithm within the adaptive weighted Hamming radius. Both the construction method and the query-adaptive search method are general and compatible with different types of hashing algorithms using different feature spaces and/or parameter settings. Our extensive experiments on several large-scale benchmarks demonstrate that the proposed techniques significantly outperform both naive construction methods and state-of-the-art hashing algorithms.
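A minimal sketch of the query-adaptive bitwise weighting idea described above, assuming per-bit weights derived from the magnitude of the query's real-valued projections (an illustrative scoring rule, not the authors' exact formulation):

import numpy as np

def weighted_hamming_rank(query_code, query_weights, db_codes):
    """Rank database binary codes by query-adaptive weighted Hamming distance.

    query_code   : (b,) array of 0/1 bits for the query
    query_weights: (b,) array of per-bit weights (higher = more reliable bit)
    db_codes     : (n, b) array of 0/1 bits for the database items
    """
    mismatches = db_codes != query_code          # (n, b) boolean mismatch mask
    distances = mismatches @ query_weights       # weighted Hamming distance per item
    return np.argsort(distances), distances

# Toy usage: weights come from the magnitude of hypothetical hash projections of
# the query -- bits produced far from the quantization threshold are treated as
# more trustworthy than bits produced close to it.
rng = np.random.default_rng(0)
projections = rng.normal(size=16)                # hypothetical projection values
query_code = (projections > 0).astype(np.uint8)
query_weights = np.abs(projections)              # illustrative reliability scores
db_codes = rng.integers(0, 2, size=(1000, 16), dtype=np.uint8)

order, dist = weighted_hamming_rank(query_code, query_weights, db_codes)
print("top-5 neighbors:", order[:5])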
Ontology-Based Peer Exchange Network (OPEN)
ERIC Educational Resources Information Center
Dong, Hui
2010-01-01
In current Peer-to-Peer networks, distributed and semantic-free indexing is widely used by systems adopting "Distributed Hash Table" ("DHT") mechanisms. Although such systems typically resolve a user query rather quickly in a deterministic way, they only support a very narrow search scheme, namely the exact hash key match. Furthermore, DHT systems put…
Double hashing technique in closed hashing search process
NASA Astrophysics Data System (ADS)
Rahim, Robbi; Zulkarnain, Iskandar; Jaya, Hendra
2017-09-01
The search process is used in many activities performed both online and offline, and many algorithms can be used to carry it out; one of them is the hash search algorithm. In this study, the hash search is performed with a double hashing technique, where the data are placed into tables of equal length and then searched. The results indicate that searching with double hashing is faster than the usual search techniques. The approach divides the values between a main table and an overflow table, so the search is expected to be faster than when all the data are stacked in a single table, and data collisions can be avoided.
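The double-hashing probe sequence underlying this study can be illustrated with a small closed-hashing (open addressing) table; the two hash functions and the prime capacity below are arbitrary choices for the sketch, and the paper's separate overflow table is not modeled:

class DoubleHashTable:
    """Closed hashing (open addressing) with a double-hashing probe sequence."""

    def __init__(self, capacity=11):            # a prime capacity keeps probes well spread
        self.capacity = capacity
        self.slots = [None] * capacity

    def _h1(self, key):
        return hash(key) % self.capacity

    def _h2(self, key):
        # The second hash must never be 0, so every probe step advances.
        return 1 + (hash(key) % (self.capacity - 1))

    def insert(self, key, value):
        h1, h2 = self._h1(key), self._h2(key)
        for i in range(self.capacity):
            idx = (h1 + i * h2) % self.capacity  # probe sequence h1, h1+h2, h1+2*h2, ...
            if self.slots[idx] is None or self.slots[idx][0] == key:
                self.slots[idx] = (key, value)
                return idx
        raise RuntimeError("table full")

    def search(self, key):
        h1, h2 = self._h1(key), self._h2(key)
        for i in range(self.capacity):
            idx = (h1 + i * h2) % self.capacity
            if self.slots[idx] is None:
                return None                      # an empty slot ends the probe chain
            if self.slots[idx][0] == key:
                return self.slots[idx][1]
        return None

table = DoubleHashTable()
for k in ["alpha", "beta", "gamma"]:
    table.insert(k, len(k))
print(table.search("beta"))   # -> 4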
Fast Exact Search in Hamming Space With Multi-Index Hashing.
Norouzi, Mohammad; Punjani, Ali; Fleet, David J
2014-06-01
There is growing interest in representing image data and feature descriptors using compact binary codes for fast near neighbor search. Although binary codes are motivated by their use as direct indices (addresses) into a hash table, codes longer than 32 bits are not being used as such, as direct indexing was thought to be ineffective for longer codes. We introduce a rigorous way to build multiple hash tables on binary code substrings that enables exact k-nearest neighbor search in Hamming space. The approach is storage efficient and straightforward to implement. Theoretical analysis shows that the algorithm exhibits sub-linear run-time behavior for uniformly distributed codes. Empirical results show dramatic speedups over a linear scan baseline for datasets of up to one billion codes of 64, 128, or 256 bits.
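A simplified sketch of the substring-table idea: each binary code is cut into disjoint substrings and each substring is indexed in its own hash table. By the pigeonhole principle, when the search radius is smaller than the number of substrings, every r-neighbor matches the query exactly in at least one substring, so probing exact substring buckets already gives exact search; the full method additionally enumerates nearby substrings for larger radii. Code lengths and chunk counts here are illustrative:

from collections import defaultdict
import random

def hamming(a, b):
    return bin(a ^ b).count("1")

def build_multi_index(codes, bits=64, chunks=4):
    """Index each disjoint substring of every code in its own hash table."""
    width = bits // chunks
    mask = (1 << width) - 1
    tables = [defaultdict(list) for _ in range(chunks)]
    for i, code in enumerate(codes):
        for c in range(chunks):
            tables[c][(code >> (c * width)) & mask].append(i)
    return tables, width, mask

def query(code, codes, tables, width, mask, radius=2):
    """Probe each table with the query's substring; because radius < number of
    chunks here, any code within the radius shares at least one exact substring
    with the query (pigeonhole), so the candidate set is guaranteed complete."""
    candidates = set()
    for c, table in enumerate(tables):
        candidates.update(table.get((code >> (c * width)) & mask, []))
    return [i for i in candidates if hamming(codes[i], code) <= radius]

random.seed(1)
codes = [random.getrandbits(64) for _ in range(10000)]
tables, width, mask = build_multi_index(codes)
print(query(codes[42] ^ 0b11, codes, tables, width, mask, radius=2))   # finds index 42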
Self-Organized Link State Aware Routing for Multiple Mobile Agents in Wireless Network
NASA Astrophysics Data System (ADS)
Oda, Akihiro; Nishi, Hiroaki
Recently, the importance of data sharing structures in autonomous distributed networks has been increasing. A wireless sensor network is used for managing distributed data. This type of distributed network requires effective information exchange methods for data sharing. To reduce the traffic of broadcast messages, the amount of redundant information must be reduced. QoS-sensitive routing algorithms have frequently been discussed as a way to reduce packet loss in mobile ad-hoc networks. The topology of a wireless network is likely to change frequently according to the movement of mobile nodes, radio disturbance, or fading due to continuous changes in the environment. Therefore, a packet routing algorithm should guarantee QoS by using quality indicators of the wireless network. In this paper, a novel information exchange algorithm based on a hash function and a Boolean operation is proposed. This algorithm achieves efficient information exchange by reducing the overhead of broadcast messages, and it can guarantee QoS in a wireless network environment. It can be applied to a routing algorithm in a mobile ad-hoc network. In the proposed routing algorithm, a routing table is constructed by using the received signal strength indicator (RSSI), and the neighborhood information is periodically broadcast depending on this table. The proposed hash-based routing entry management using an extended MAC address eliminates the overhead of message flooding. An analysis of hash-value collisions determines the minimally required length of the hash values; based on this mathematical analysis, an optimum hash function for that length can be given. Simulations are carried out to evaluate the effectiveness of the proposed algorithm and to validate the theory in a general wireless network routing algorithm.
Intrusion detection using secure signatures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Trent Darnel; Haile, Jedediah
A method and device for intrusion detection using secure signatures, comprising capturing network data. A search hash value is generated from the captured network data using a first hash function employing at least one one-way function. The presence of a matching search hash value is determined in a secure signature table comprising search hash values and encrypted rules. After a search hash value match is found, a decryption key is generated from the captured network data using a second hash function, different from the first hash function. One or more encrypted rules of the secure signature table having a hash value equal to the generated search hash value are then decrypted using the generated decryption key. The decrypted secure signature rules are then processed for a match, and one or more user notifications are deployed if a match is identified.
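A toy illustration of the two-hash-function lookup described above, with hashlib digests standing in for the one-way functions and a trivial XOR stream standing in for the real cipher (both are placeholders, not the patented construction):

import hashlib

def h1(data: bytes) -> bytes:
    """First one-way function: produces the search hash used as the table key."""
    return hashlib.sha256(b"search:" + data).digest()

def h2(data: bytes) -> bytes:
    """Second, different one-way function: derives the rule-decryption key."""
    return hashlib.sha256(b"key:" + data).digest()

def xor_cipher(blob: bytes, key: bytes) -> bytes:
    """Toy stand-in for a real cipher, used only for illustration."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob))

# Building the secure signature table from a known malicious payload.
payload = b"GET /evil HTTP/1.1"
rule = b"ALERT: block source address"
signature_table = {h1(payload): xor_cipher(rule, h2(payload))}

# At detection time, captured traffic reproduces the decryption key only if it
# actually matches the payload that produced the table entry.
captured = b"GET /evil HTTP/1.1"
entry = signature_table.get(h1(captured))
if entry is not None:
    print(xor_cipher(entry, h2(captured)).decode())   # recovers the rule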
Query-Adaptive Hash Code Ranking for Large-Scale Multi-View Visual Search.
Liu, Xianglong; Huang, Lei; Deng, Cheng; Lang, Bo; Tao, Dacheng
2016-10-01
Hash-based nearest neighbor search has become attractive in many applications. However, the quantization involved in hashing usually degrades the discriminative power when Hamming distance ranking is used. Besides, for large-scale visual search, existing hashing methods cannot directly support efficient search over data with multiple sources, while the literature has shown that adaptively incorporating complementary information from diverse sources or views can significantly boost search performance. To address these problems, this paper proposes a novel and generic approach to building multiple hash tables with multiple views and generating fine-grained ranking results at the bitwise and tablewise levels. For each hash table, a query-adaptive bitwise weighting is introduced to alleviate the quantization loss by simultaneously exploiting the quality of hash functions and their complementarity for nearest neighbor search. From the tablewise aspect, multiple hash tables are built for different data views as a joint index, over which a query-specific rank fusion is proposed to rerank all results from the bitwise ranking by diffusing in a graph. Comprehensive experiments on image search over three well-known benchmarks show that the proposed method achieves up to 17.11% and 20.28% performance gains on single- and multiple-table search over state-of-the-art methods.
Efficient hash tables for network applications.
Zink, Thomas; Waldvogel, Marcel
2015-01-01
Hashing has yet to be widely accepted as a component of hard real-time systems and hardware implementations, due to still existing prejudices concerning the unpredictability of space and time requirements resulting from collisions. While in theory perfect hashing can provide optimal mapping, in practice, finding a perfect hash function is too expensive, especially in the context of high-speed applications. The introduction of hashing with multiple choices, d-left hashing and probabilistic table summaries, has caused a shift towards deterministic DRAM access. However, high amounts of rare and expensive high-speed SRAM need to be traded off for predictability, which is infeasible for many applications. In this paper we show that previous suggestions suffer from the false precondition of full generality. Our approach exploits four individual degrees of freedom available in many practical applications, especially hardware and high-speed lookups. This reduces the requirement of on-chip memory up to an order of magnitude and guarantees constant lookup and update time at the cost of only minute amounts of additional hardware. Our design makes efficient hash table implementations cheaper, more predictable, and more practical.
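The "hashing with multiple choices" idea mentioned above can be sketched as follows: each key has d candidate buckets and is inserted into the least loaded one, which bounds lookups to d probes and keeps bucket occupancy predictable. The parameters and the digest-splitting trick below are illustrative; the probabilistic summaries and SRAM/DRAM partitioning of real designs are omitted:

import hashlib

class DChoiceHashTable:
    """Hashing with d choices: each key may live in any of d buckets and is
    inserted into the least loaded one, which keeps bucket occupancy low and
    bounds lookups to d probes."""

    def __init__(self, num_buckets=128, d=2, bucket_capacity=4):
        self.num_buckets = num_buckets
        self.d = d
        self.bucket_capacity = bucket_capacity
        self.buckets = [[] for _ in range(num_buckets)]

    def _candidates(self, key):
        digest = hashlib.blake2b(key.encode(), digest_size=16).digest()
        # Carve d independent bucket indices out of one digest.
        return [int.from_bytes(digest[2 * i:2 * i + 2], "big") % self.num_buckets
                for i in range(self.d)]

    def insert(self, key, value):
        choices = self._candidates(key)
        target = min(choices, key=lambda b: len(self.buckets[b]))
        if len(self.buckets[target]) >= self.bucket_capacity:
            raise RuntimeError("bucket overflow; a real design would add a spillover store")
        self.buckets[target].append((key, value))

    def lookup(self, key):
        for b in self._candidates(key):
            for k, v in self.buckets[b]:
                if k == key:
                    return v
        return None

t = DChoiceHashTable()
for i in range(100):
    t.insert(f"flow-{i}", i)
print(t.lookup("flow-7"), max(len(b) for b in t.buckets))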
Data Recovery of Distributed Hash Table with Distributed-to-Distributed Data Copy
NASA Astrophysics Data System (ADS)
Doi, Yusuke; Wakayama, Shirou; Ozaki, Satoshi
To realize huge-scale information services, many Distributed Hash Table (DHT) based systems have been proposed. For example, there are proposals to manage item-level product traceability information with DHTs. In such an application, each entry of a huge number of item-level IDs needs to be available on a DHT. To ensure data availability, the soft-state approach has been employed in previous works. However, this does not scale well with the number of entries on a DHT. As we expect 10^10 products in the traceability case, the soft-state approach is unacceptable. In this paper, we propose Distributed-to-Distributed Data Copy (D3C). With D3C, users can reconstruct the data as they detect data loss, or even migrate to another DHT system. We show why it scales well with the number of entries on a DHT. We have confirmed our approach with a prototype. Evaluation shows that our approach fits well on a DHT with a low rate of failure and a huge number of data entries.
A new pre-classification method based on associative matching method
NASA Astrophysics Data System (ADS)
Katsuyama, Yutaka; Minagawa, Akihiro; Hotta, Yoshinobu; Omachi, Shinichiro; Kato, Nei
2010-01-01
Reducing the time complexity of character matching is critical to the development of efficient Japanese Optical Character Recognition (OCR) systems. To shorten processing time, recognition is usually split into separate pre-classification and recognition stages. For high overall recognition performance, the pre-classification stage must both have very high classification accuracy and return only a small number of putative character categories for further processing. Furthermore, for any practical system, the speed of the pre-classification stage is also critical. The associative matching (AM) method has often been used for fast pre-classification, because its use of a hash table and reliance solely on logical bit operations to select categories make it highly efficient. However, a certain level of redundancy exists in the hash table because it is constructed using only the minimum and maximum values of the data on each axis and therefore does not take account of the distribution of the data. We propose a modified associative matching method that satisfies the performance criteria described above in a fraction of the time, by modifying the hash table to reflect the underlying distribution of training characters. Furthermore, we show that our approach outperforms pre-classification by clustering, ANN and conventional AM in terms of classification accuracy, discriminative power and speed. Compared to conventional associative matching, the proposed approach results in a 47% reduction in total processing time across an evaluation test set comprising 116,528 Japanese character images.
Performance of hashed cache data migration schemes on multicomputers
NASA Technical Reports Server (NTRS)
Hiranandani, Seema; Saltz, Joel; Mehrotra, Piyush; Berryman, Harry
1991-01-01
After conducting an examination of several data-migration mechanisms which permit an explicit and controlled mapping of data to memory, a set of schemes for storage and retrieval of off-processor array elements is experimentally evaluated and modeled. All schemes considered have their basis in the use of hash tables for efficient access of nonlocal data. The techniques in question are those of hashed cache, partial enumeration, and full enumeration; in these, nonlocal data are stored in hash tables, so that the operative difference lies in the amount of memory used by each scheme and in the retrieval mechanism used for nonlocal data.
Bin-Hash Indexing: A Parallel Method for Fast Query Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bethel, Edward W; Gosink, Luke J.; Wu, Kesheng
2008-06-27
This paper presents a new parallel indexing data structure for answering queries. The index, called Bin-Hash, offers extremely high levels of concurrency, and is therefore well-suited for emerging commodity parallel processors, such as multi-cores, cell processors, and general purpose graphics processing units (GPUs). The Bin-Hash approach first bins the base data, and then partitions and separately stores the values in each bin as a perfect spatial hash table. To answer a query, we first determine whether or not a record satisfies the query conditions based on the bin boundaries. For the bins with records that cannot be resolved, we examine the spatial hash tables. The procedures for examining the bin numbers and the spatial hash tables offer the maximum possible level of concurrency; all records are able to be evaluated by our procedure independently in parallel. Additionally, our Bin-Hash procedures access much smaller amounts of data than similar parallel methods, such as the projection index. This smaller data footprint is critical for certain parallel processors, like GPUs, where memory resources are limited. To demonstrate the effectiveness of Bin-Hash, we implement it on a GPU using the data-parallel programming language CUDA. The concurrency offered by the Bin-Hash index allows us to fully utilize the GPU's massive parallelism in our work; over 12,000 records can be simultaneously evaluated at any one time. We show that our new query processing method is an order of magnitude faster than current state-of-the-art CPU-based indexing technologies. Additionally, we compare our performance to existing GPU-based projection index strategies.
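A serial Python sketch of the two-stage Bin-Hash answer path, with an ordinary dictionary standing in for the perfect spatial hash table and a simple one-dimensional range predicate as the query (the GPU/CUDA parallelization is the paper's contribution and is not reproduced here):

import numpy as np

def build_bin_hash(values, bin_edges):
    """Assign each record to a bin and keep the exact value per bin, keyed by
    record id (a plain dict stands in for the paper's perfect spatial hash)."""
    bin_ids = np.digitize(values, bin_edges) - 1
    per_bin = {b: {} for b in range(len(bin_edges) - 1)}
    for rid, (b, v) in enumerate(zip(bin_ids, values)):
        per_bin[int(b)][rid] = float(v)
    return bin_ids, per_bin

def range_query(lo, hi, bin_edges, bin_ids, per_bin):
    """Answer lo <= value < hi: fully covered bins are accepted from the bin id
    alone; only records in partially covered (boundary) bins consult the hash."""
    hits = []
    for b in range(len(bin_edges) - 1):
        b_lo, b_hi = bin_edges[b], bin_edges[b + 1]
        if b_hi <= lo or b_lo >= hi:
            continue                              # bin entirely outside the range
        ids = np.nonzero(bin_ids == b)[0]
        if lo <= b_lo and b_hi <= hi:
            hits.extend(ids.tolist())             # bin entirely inside: no value check
        else:
            hits.extend(r for r in ids if lo <= per_bin[b][r] < hi)
    return hits

rng = np.random.default_rng(3)
values = rng.uniform(0, 100, size=1000)
edges = np.linspace(0, 100, 11)                   # 10 equal-width bins
bin_ids, per_bin = build_bin_hash(values, edges)
print(len(range_query(12.5, 57.5, edges, bin_ids, per_bin)))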
Method and system for analyzing and classifying electronic information
McGaffey, Robert W.; Bell, Michael Allen; Kortman, Peter J.; Wilson, Charles H.
2003-04-29
A data analysis and classification system that reads electronic information, analyzes it according to a user-defined set of logical rules, and returns a classification result. The data analysis and classification system may accept any form of computer-readable electronic information. The system creates a hash table wherein each entry contains a concept corresponding to a word or phrase that the system has previously encountered. The system creates an object model based on the user-defined logical associations, which is used for reviewing each concept contained in the electronic information in order to determine whether the electronic information is classified. The data analysis and classification system extracts each concept in turn from the electronic information, locates it in the hash table, and propagates it through the object model. In the event that the system cannot find the electronic information token in the hash table, that token is added to a missing-terms list. If any rule is satisfied during propagation of the concept through the object model, the electronic information is classified.
A multi-pattern hash-binary hybrid algorithm for URL matching in the HTTP protocol.
Zeng, Ping; Tan, Qingping; Meng, Xiankai; Shao, Zeming; Xie, Qinzheng; Yan, Ying; Cao, Wei; Xu, Jianjun
2017-01-01
In this paper, based on our previous multi-pattern uniform resource locator (URL) binary-matching algorithm called HEM, we propose an improved multi-pattern matching algorithm called MH that is based on hash tables and binary tables. The MH algorithm can be applied to the fields of network security, data analysis, load balancing, cloud robotic communications, and so on, all of which require string matching from a fixed starting position. Our approach effectively solves the performance problems of the classical multi-pattern matching algorithms. This paper explores ways to improve string matching performance under the HTTP protocol by using a hash method combined with a binary method that transforms the symbol-space matching problem into a digital-space numerical-size comparison and hashing problem. The MH approach has a fast matching speed, requires little memory, performs better than both the classical algorithms and HEM for matching fields in an HTTP stream, and it has great promise for use in real-world applications.
The Design and Emulation of a System Kernel for X-Tree,
1979-03-30
…level of the tree. Many different schemes for these additional interconnections have been proposed; no final selection has yet been made. Pictured… completion, … (b) removes the process name from the hash table, (c) puts the PCB back on the FREEPCB queue for later reuse, (d) goes back to sleep.
2012-09-01
…relative performance of several conventional SQL and NoSQL databases with a set of one billion file block hashes. Keywords: digital forensics, sector hashing, full…
SGFSC: speeding the gene functional similarity calculation based on hash tables.
Tian, Zhen; Wang, Chunyu; Guo, Maozu; Liu, Xiaoyan; Teng, Zhixia
2016-11-04
In recent years, many measures of gene functional similarity have been proposed and widely used in all kinds of essential research. These methods are mainly divided into two categories: pairwise approaches and group-wise approaches. However, a common problem with these methods is their time consumption, especially when measuring the gene functional similarities of a large number of gene pairs. The problem of computational efficiency for pairwise approaches is even more prominent because they depend on combinations of semantic similarities. Therefore, the efficient measurement of gene functional similarity remains a challenging problem. To speed up current gene functional similarity calculation methods, a novel two-step computing strategy is proposed: (1) establish a hash table for each method to store essential information obtained from the Gene Ontology (GO) graph and (2) measure gene functional similarity based on the corresponding hash table. With the help of the hash table, there is no need to traverse the GO graph repeatedly for each method. The analysis of time complexity shows that the computational efficiency of these methods is significantly improved. We also implement a novel Speeding Gene Functional Similarity Calculation tool, namely SGFSC, which is bundled with seven typical measures using our proposed strategy. Further experiments show the great advantage of SGFSC in measuring gene functional similarity on the whole genomic scale. The proposed strategy is successful in speeding up current gene functional similarity calculation methods. SGFSC is an efficient tool that is freely available at http://nclab.hit.edu.cn/SGFSC . The source code of SGFSC can be downloaded from http://pan.baidu.com/s/1dFFmvpZ .
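The two-step strategy can be illustrated with a toy GO-like DAG: ancestor sets are precomputed once into a hash table, and gene-pair queries are then answered purely from table lookups. The DAG and the Jaccard-style term similarity below are stand-ins, not the seven measures bundled in SGFSC:

# A toy GO-like DAG: child term -> parent terms (assumed data, not real GO).
PARENTS = {
    "t1": ["root"], "t2": ["root"],
    "t3": ["t1"], "t4": ["t1", "t2"], "t5": ["t2"],
}

def build_ancestor_table(parents):
    """Step 1: one traversal per term, results stored in a hash table so the
    graph is never walked again at query time."""
    table = {}
    def ancestors(term):
        if term in table:
            return table[term]
        result = {term}
        for p in parents.get(term, []):
            result |= ancestors(p)
        table[term] = result
        return result
    for t in list(parents) + ["root"]:
        ancestors(t)
    return table

def term_similarity(a, b, table):
    """Jaccard overlap of ancestor sets -- an illustrative stand-in for the
    semantic similarity measures bundled in the tool."""
    return len(table[a] & table[b]) / len(table[a] | table[b])

def gene_similarity(terms_a, terms_b, table):
    """Step 2: a group-wise score computed purely from hash-table lookups."""
    return max(term_similarity(a, b, table) for a in terms_a for b in terms_b)

table = build_ancestor_table(PARENTS)
print(gene_similarity({"t3", "t4"}, {"t5"}, table))   # -> 0.4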
NASA Astrophysics Data System (ADS)
Yang, Yu-Guang; Xu, Peng; Yang, Rui; Zhou, Yi-Hua; Shi, Wei-Min
2016-01-01
Quantum information and quantum computation have achieved great success during recent years. In this paper, we investigate the capability of the quantum Hash function, which can be constructed by subtly modifying quantum walks, a famous quantum computation model. It is found that the quantum Hash function can act as a hash function for the privacy amplification process of quantum key distribution systems with higher security. As a byproduct, the quantum Hash function can also be used for pseudo-random number generation due to its inherent chaotic dynamics. Further, we discuss the application of the quantum Hash function to image encryption and propose a novel image encryption algorithm. Numerical simulations and performance comparisons show that the quantum Hash function is eligible for privacy amplification in quantum key distribution, pseudo-random number generation and image encryption in terms of various hash tests and randomness tests. It extends the scope of application of quantum computation and quantum information.
HIA: a genome mapper using hybrid index-based sequence alignment.
Choi, Jongpill; Park, Kiejung; Cho, Seong Beom; Chung, Myungguen
2015-01-01
A number of alignment tools have been developed to align sequencing reads to the human reference genome. However, the scale of information from next-generation sequencing (NGS) experiments is increasing rapidly, and recent studies based on NGS technology have routinely produced exome or whole-genome sequences from several hundreds or thousands of samples. To accommodate the increasing need to analyze very large NGS data sets, it is necessary to develop faster, more sensitive and accurate mapping tools. HIA uses two indices, a hash table index and a suffix array index. The hash table performs direct lookup of a q-gram, and the suffix array performs very fast lookup of variable-length strings by exploiting binary search. We observed that combining the hash table and the suffix array (a hybrid index) is much faster than the suffix array method alone for finding a substring in the reference sequence. Here, we define the matching region (MR) as a longest common substring between a reference and a read, and the candidate alignment regions (CARs) as lists of MRs that are close to each other. The hybrid index is used to find the CARs between a reference and a read. We found that aligning only the unmatched regions in a CAR is much faster than aligning the whole CAR. In benchmark analysis, HIA outperformed the other aligners in mapping speed, without significant loss of mapping accuracy. Our experiments show that the hybrid of hash table and suffix array is useful in terms of speed for mapping NGS sequencing reads to the human reference genome sequence. In conclusion, our tool is appropriate for aligning massive data sets generated by NGS sequencing.
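A hedged sketch of the hybrid lookup: a q-gram hash table narrows the suffix-array interval, and binary search finishes the match. The MR/CAR bookkeeping and the alignment step of the actual mapper are omitted, and the q-gram length is an arbitrary choice:

import bisect

Q = 3  # q-gram length for the hash table (illustrative choice)

def build_hybrid_index(ref):
    """Suffix array plus a hash table mapping each q-gram to its interval of
    suffixes, so lookups skip most of the binary search."""
    sa = sorted(range(len(ref)), key=lambda i: ref[i:])
    qgram_intervals = {}
    for rank, pos in enumerate(sa):
        g = ref[pos:pos + Q]
        if len(g) < Q:
            continue
        lo, hi = qgram_intervals.get(g, (rank, rank))
        qgram_intervals[g] = (min(lo, rank), max(hi, rank))
    return sa, qgram_intervals

def find_occurrences(ref, sa, qgram_intervals, pattern):
    """Direct q-gram lookup narrows the range; binary search finishes the job."""
    if len(pattern) < Q or pattern[:Q] not in qgram_intervals:
        return []
    lo, hi = qgram_intervals[pattern[:Q]]
    suffixes = [ref[sa[i]:] for i in range(lo, hi + 1)]   # small slice of the SA
    left = bisect.bisect_left(suffixes, pattern)
    right = bisect.bisect_right(suffixes, pattern + "\xff")
    return sorted(sa[lo + i] for i in range(left, right))

ref = "ACGTACGGACGT"
sa, qgrams = build_hybrid_index(ref)
print(find_occurrences(ref, sa, qgrams, "ACG"))   # -> [0, 4, 8]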
2013-03-01
…networking schemes for safe maneuvering and data communication. Imagine needing to maintain an operational picture of an overall environment using a… lookup complexity ranging from O(n), where every node is sequentially queried, to O(log n) or O(1). These schemes will be discussed with each individual DHT. Four of the…
Churn-Resilient Replication Strategy for Peer-to-Peer Distributed Hash-Tables
NASA Astrophysics Data System (ADS)
Legtchenko, Sergey; Monnet, Sébastien; Sens, Pierre; Muller, Gilles
DHT-based P2P systems provide a fault-tolerant and scalable means to store data blocks in a fully distributed way. Unfortunately, recent studies have shown that if the connection/disconnection frequency is too high, data blocks may be lost. This is true for most current DHT-based system implementations. To avoid this problem, it is necessary to build highly efficient replication and maintenance mechanisms. In this paper, we study the effect of churn on an existing DHT-based P2P system such as DHash or PAST. We then propose solutions to enhance churn tolerance and evaluate them through discrete event simulations.
A Simple Secure Hash Function Scheme Using Multiple Chaotic Maps
NASA Astrophysics Data System (ADS)
Ahmad, Musheer; Khurana, Shruti; Singh, Sushmita; AlSharari, Hamed D.
2017-06-01
Chaotic maps possess high parameter sensitivity, random-like behavior and one-way computation, which favor the construction of cryptographic hash functions. In this paper, we present a novel hash function scheme which uses multiple chaotic maps to generate efficient variable-sized hashes. The message is divided into four parts; each part is processed by a different 1D chaotic map unit, yielding an intermediate hash code. The four codes are concatenated into two blocks, and each block is then processed through a 2D chaotic map unit separately. The final hash value is generated by combining the two partial hash codes. Simulation analyses such as the distribution of hashes, statistical properties of confusion and diffusion, message and key sensitivity, collision resistance and flexibility are performed. The results reveal that the proposed hash scheme is simple and efficient and holds comparable capabilities when compared with some recent chaos-based hash algorithms.
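A toy illustration of the principle (message bytes perturbing a 1D logistic map, with a 2D map as a second mixing stage); this is not the authors' construction and has not been cryptanalyzed, so it should be read only as a sketch of how chaotic dynamics can drive diffusion:

def logistic(x, r=3.99):
    """1D logistic map in its chaotic regime."""
    return r * x * (1.0 - x)

def henon(x, y, a=1.4, b=0.3):
    """2D Henon map used here as the second-stage mixing unit."""
    return 1.0 - a * x * x + y, b * x

def chaotic_hash(message: bytes, out_bytes=16) -> bytes:
    """Toy chaos-based hash: message bytes perturb the trajectory of the 1D map,
    the 2D map mixes the running state, and the final trajectory is quantized."""
    x, hx, hy = 0.5, 0.1, 0.1
    for byte in message:
        x = logistic((x + byte / 255.0) / 2.0)     # message sensitivity via the 1D unit
        for _ in range(4):                         # extra iterations amplify diffusion
            x = logistic(x)
        hx, hy = henon((hx + x) % 1.0, hy)         # fold the sample into the 2D state
    digest = bytearray()
    for _ in range(out_bytes):                     # quantize the trajectory into bytes
        hx, hy = henon(hx % 1.0, hy)               # folding keeps the orbit bounded
        x = logistic(x)
        digest.append(int((abs(hx) + abs(hy) + x) * 1e6) % 256)
    return bytes(digest)

print(chaotic_hash(b"hello world").hex())
print(chaotic_hash(b"hello worle").hex())          # a one-byte change gives a different digest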
HECLIB. Volume 2: HECDSS Subroutines Programmer’s Manual
1991-05-01
…algorithm and hierarchical design for database accesses. This algorithm provides quick access to data sets and an efficient means of adding new data sets… Description of How DSS Works: DSS version 6 utilizes a modified hash algorithm based upon the pathname to store and retrieve data. This structure allows… balancing disk space and record access times. A variation in this algorithm is for "stable" files. In a stable file, a hash table is not utilized…
Distributed Adaptive Binary Quantization for Fast Nearest Neighbor Search.
Xianglong Liu; Zhujin Li; Cheng Deng; Dacheng Tao
2017-11-01
Hashing has proved to be an attractive technique for fast nearest neighbor search over big data. Compared with projection-based hashing methods, prototype-based ones have stronger power to generate discriminative binary codes for data with complex intrinsic structure. However, existing prototype-based methods, such as spherical hashing and K-means hashing, still suffer from ineffective coding that utilizes the complete binary codes in a hypercube. To address this problem, we propose an adaptive binary quantization (ABQ) method that learns a discriminative hash function with prototypes associated with small unique binary codes. Our alternating optimization adaptively discovers the prototype set and the code set of a varying size in an efficient way, which together robustly approximate the data relations. Our method can be naturally generalized to the product space for long hash codes, and enjoys fast training, linear in the number of training data. We further devise a distributed framework for large-scale learning, which can significantly speed up the training of ABQ in the distributed environments that are now widely deployed in many areas. Extensive experiments on four large-scale (up to 80 million) data sets demonstrate that our method significantly outperforms state-of-the-art hashing methods, with relative performance gains of up to 58.84%.
2009-09-01
…suffer the power and complexity requirements of a public key system. In [18], a simulation of the SHA-1 algorithm is performed on a Xilinx FPGA… 256 bits. Thus, the construction of a hash table would need 2^512 independent comparisons. It is known that hash collisions of the SHA-1 algorithm… SHA-1 algorithm for small-core FPGA design. Small-core FPGA design is the process by which a circuit is adapted to use the minimal amount of logic…
Efficient computation of hashes
NASA Astrophysics Data System (ADS)
Lopes, Raul H. C.; Franqueira, Virginia N. L.; Hobson, Peter R.
2014-06-01
The sequential computation of hashes, which lies at the core of many distributed storage systems and is found, for example, in grid services, can hinder efficiency in service quality and even pose security challenges that can only be addressed by the use of parallel hash tree modes. The main contributions of this paper are, first, the identification of several efficiency and security challenges posed by the use of sequential hash computation based on the Merkle-Damgård engine. In addition, alternatives for the parallel computation of hash trees are discussed, and a prototype for a new parallel implementation of the Keccak function, the SHA-3 winner, is introduced.
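A sketch of a parallel hash tree over fixed-size leaves using a thread pool; SHA3-256 is used here simply as an available primitive, and the tree mode (leaf size, handling of an odd node, domain-separation bytes) is a generic choice rather than the standardized Keccak tree construction:

import hashlib
from concurrent.futures import ThreadPoolExecutor

CHUNK = 1 << 20   # 1 MiB leaves (an arbitrary choice for the sketch)

def leaf_hash(chunk: bytes) -> bytes:
    return hashlib.sha3_256(b"\x00" + chunk).digest()      # domain-separate leaves

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha3_256(b"\x01" + left + right).digest()

def tree_hash(data: bytes, workers=4) -> bytes:
    """Hash the leaves in parallel, then reduce pairwise to a single root.
    Unlike sequential Merkle-Damgard chaining, the leaf level has no data
    dependency, so it can be spread across cores or storage nodes."""
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)] or [b""]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        level = list(pool.map(leaf_hash, chunks))
    while len(level) > 1:
        if len(level) % 2:                                   # duplicate an odd last node
            level.append(level[-1])
        level = [node_hash(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

print(tree_hash(b"x" * (5 * CHUNK)).hex())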
High-performance sparse matrix-matrix products on Intel KNL and multicore architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagasaka, Y; Matsuoka, S; Azad, A
Sparse matrix-matrix multiplication (SpGEMM) is a computational primitive that is widely used in areas ranging from traditional numerical applications to recent big data analysis and machine learning. Although many SpGEMM algorithms have been proposed, hardware-specific optimizations for multi- and many-core processors are lacking, and a detailed analysis of their performance under various use cases and matrices is not available. We first identify and mitigate multiple bottlenecks with memory management and thread scheduling on Intel Xeon Phi (Knights Landing or KNL). Specifically targeting multi- and many-core processors, we develop a hash-table-based algorithm and optimize a heap-based shared-memory SpGEMM algorithm. We examine their performance together with other publicly available codes. Different from the literature, our evaluation also includes use cases that are representative of real graph algorithms, such as multi-source breadth-first search or triangle counting. Our hash-table- and heap-based algorithms show significant speedups over existing libraries in the majority of the cases, while different algorithms dominate the other scenarios depending on matrix size, sparsity, compression factor and operation type. We wrap up the in-depth evaluation results into a recipe that gives the best SpGEMM algorithm for a target scenario. A critical finding is that hash-table-based SpGEMM gets a significant performance boost if the nonzeros are not required to be sorted within each row of the output matrix.
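The core of a hash-table-based SpGEMM can be sketched per output row: partial products are accumulated in a small hash table keyed by column index. A Python dict stands in for the cache-sized open-addressing tables that the paper tunes for KNL and multicore CPUs:

def spgemm_row_hash(A_rows, B_rows):
    """Compute C = A * B for sparse matrices stored as lists of {col: value}
    dicts (CSR-like). Each output row is accumulated in its own hash table."""
    C_rows = []
    for a_row in A_rows:
        acc = {}                                    # hash table: column -> partial sum
        for k, a_val in a_row.items():              # nonzeros of A's row
            for j, b_val in B_rows[k].items():      # nonzeros of B's row k
                acc[j] = acc.get(j, 0.0) + a_val * b_val
        C_rows.append(acc)
    return C_rows

A = [{0: 2.0, 2: 1.0}, {1: 3.0}]                    # 2x3 sparse matrix
B = [{0: 1.0}, {2: 4.0}, {0: 5.0, 1: 1.0}]          # 3x3 sparse matrix
print(spgemm_row_hash(A, B))                        # [{0: 7.0, 1: 1.0}, {2: 12.0}]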
Distributed database kriging for adaptive sampling (D²KAS)
Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; ...
2015-03-18
We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5 to 25, while retaining high accuracy for various choices of the algorithm parameters.
G-Hash: Towards Fast Kernel-based Similarity Search in Large Graph Databases.
Wang, Xiaohong; Smalter, Aaron; Huan, Jun; Lushington, Gerald H
2009-01-01
Structured data including sets, sequences, trees and graphs pose significant challenges to fundamental aspects of data management such as efficient storage, indexing, and similarity search. With the fast accumulation of graph databases, similarity search in graph databases has emerged as an important research topic. Graph similarity search has applications in a wide range of domains including cheminformatics, bioinformatics, sensor network management, social network management, and XML documents, among others. Most of the current graph indexing methods focus on subgraph query processing, i.e. determining the set of database graphs that contains the query graph, and hence do not directly support similarity search. In data mining and machine learning, various graph kernel functions have been designed to capture the intrinsic similarity of graphs. Though successful in constructing accurate predictive and classification models for supervised learning, graph kernel functions have (i) high computational complexity and (ii) non-trivial difficulty in being indexed in a graph database. Our objective is to bridge graph kernel functions and similarity search in graph databases by proposing (i) a novel kernel-based similarity measurement and (ii) an efficient indexing structure for graph data management. Our method of similarity measurement builds upon local features extracted from each node and their neighboring nodes in graphs. A hash table is utilized to support efficient storage and fast search of the extracted local features. Using the hash table, a graph kernel function is defined to capture the intrinsic similarity of graphs and to enable fast similarity query processing. We have implemented our method, which we have named G-hash, and have demonstrated its utility on large chemical graph databases. Our results show that the G-hash method achieves state-of-the-art performance for k-nearest neighbor (k-NN) classification. Most importantly, the new similarity measurement and the index structure are scalable to large databases, with smaller indexing size, faster index construction time, and faster query processing time as compared to state-of-the-art indexing methods such as C-tree, gIndex, and GraphGrep.
Tuple spaces in hardware for accelerated implicit routing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Zachary Kent; Tripp, Justin
2010-12-01
Organizing and optimizing data objects on networks with support for data migration and failing nodes is a complicated problem to handle as systems grow. The goal of this work is to demonstrate that high levels of speedup can be achieved by moving responsibility for finding, fetching, and staging data into an FPGA-based network card. We present a system for implicit routing of data via FPGA-based network cards. In this system, data structures are requested by name, and the network of FPGAs finds the data within the network and relays the structure to the requester. This is achieved through successive examination of hardware hash tables implemented in the FPGA. By avoiding software stacks between nodes, the data is quickly fetched entirely through FPGA-FPGA interaction. The performance of this system is orders of magnitude faster than software implementations due to the improved speed of the hash tables and lowered latency between the network nodes.
GSHR-Tree: a spatial index tree based on dynamic spatial slot and hash table in grid environments
NASA Astrophysics Data System (ADS)
Chen, Zhanlong; Wu, Xin-cai; Wu, Liang
2008-12-01
Computation Grids enable the coordinated sharing of large-scale distributed heterogeneous computing resources that can be used to solve computationally intensive problems in science, engineering, and commerce. Grid spatial applications are made possible by high-speed networks and a new generation of Grid middleware that resides between networks and traditional GIS applications. The integration of multi-source, heterogeneous spatial information, the management of distributed spatial resources, and the sharing of and cooperation on spatial data and Grid services are the key problems to resolve in the development of Grid GIS. The spatial index mechanism is a key technology of Grid GIS and spatial databases, and its performance affects the overall performance of a GIS in Grid environments. In order to improve the efficiency of parallel processing of massive spatial data in a distributed parallel computing grid environment, this paper presents GSHR-Tree, a new grid-slot hash parallel spatial index structure established within the parallel spatial indexing mechanism. Based on a hash table and dynamic spatial slots, it improves the structure of the classical parallel R-tree index and makes full use of the good qualities of the R-tree and of hash data structures, yielding a parallel spatial index that meets the needs of parallel grid computing over massive spatial data in a distributed network. The algorithm splits space into multiple slots by multiplying and reverting, and maps these slots to sites in the distributed, parallel system. Each site organizes the spatial objects in its slots into an R-tree. On the basis of this tree structure, the index data are distributed among multiple nodes in the grid network using the large-node R-tree method. Load imbalance during processing can be quickly corrected by a dynamic adjustment algorithm. The structure takes into account the distribution, duplication, and transfer of the spatial index in the grid environment, and its design ensures load balance in parallel computation, making it well suited to parallel processing of spatial information in distributed network environments. Instead of the recursive comparison of spatial objects used in the original R-tree, the algorithm builds the spatial index with binary code operations, which computers execute more efficiently, and uses an extended dynamic hash code for bit comparison. In GSHR-Tree, a new server is assigned to the network whenever a full node must be split. We describe a flexible allocation protocol that copes with a temporary shortage of storage resources. It uses a distributed balanced binary spatial tree that scales with insertions to potentially any number of storage servers through splits of the overloaded ones. An application manipulates the GSHR-Tree structure from a node in the grid environment and addresses the tree through its local image, which splits can make outdated. This may generate addressing errors, which are resolved by forwarding among the servers. In this paper, a spatial index data distribution algorithm that limits the number of servers is also proposed; it improves storage utilization at the cost of additional messages.
Our proposal constitutes a flexible storage allocation method for a distributed spatial index. The insertion policy can be tuned dynamically to cope with periods of storage shortage: in such cases storage balancing is favored for better space utilization, at the price of extra message exchanges between servers. The structure makes a compromise between updating the duplicated index and transferring the spatial index data. Designed to meet the needs of grid computing, GSHR-Tree has a flexible structure that can satisfy new needs in the future. It provides R-tree capabilities for large spatial datasets stored over interconnected servers. The analysis, including the experiments, confirms the efficiency of our design choices, and the scheme should fit the needs of new applications of spatial data using ever larger datasets. Using the system response time of the parallel spatial range query algorithm as the performance evaluation criterion, the simulation experiments show that the presented indexing structure is reasonably designed and performs well.
Optimal hash arrangement of tentacles in jellyfish
NASA Astrophysics Data System (ADS)
Okabe, Takuya; Yoshimura, Jin
2016-06-01
At first glance, the trailing tentacles of a jellyfish appear to be randomly arranged. However, close examination of medusae has revealed that the arrangement and developmental order of the tentacles obey a mathematical rule. Here, we show that medusa jellyfish adopt the best strategy to achieve the most uniform distribution of a variable number of tentacles. The observed order of tentacles is a real-world example of an optimal hashing algorithm known as Fibonacci hashing in computer science.
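Fibonacci hashing, the computer-science counterpart of this golden-angle arrangement, multiplies the key by 2^64 divided by the golden ratio and keeps the top bits, so consecutive keys are spread as evenly as successively added tentacles; a minimal sketch:

PHI_64 = 11400714819323198485          # floor(2**64 / golden ratio), the usual 64-bit constant

def fibonacci_hash(key: int, bits: int) -> int:
    """Map an integer key to a bits-wide slot index via golden-ratio multiplication.
    Consecutive keys land maximally spread apart, which is the same uniform
    spacing the jellyfish achieve with successively added tentacles."""
    return ((key * PHI_64) & ((1 << 64) - 1)) >> (64 - bits)

# Consecutive keys scatter across the 16 slots instead of clustering.
print([fibonacci_hash(k, 4) for k in range(10)])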
Performance-Oriented Privacy-Preserving Data Integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pon, R K; Critchlow, T
2004-09-15
Current solutions to integrating private data with public data have provided useful privacy metrics, such as relative information gain, that can be used to evaluate alternative approaches. Unfortunately, they have not addressed critical performance issues, especially when the public database is very large. The use of hashes and noise yields better performance than existing techniques while still making it difficult for unauthorized entities to distinguish which data items truly exist in the private database. As we show here, leveraging the uncertainty introduced by collisions caused by hashing and the injection of noise, we present a technique for performing a relational join operation between a massive public table and a relatively smaller private one.
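A hedged sketch of the general idea (not the paper's exact protocol): the private side releases short hashes of its keys mixed with noise hashes, and the public side keeps only rows whose key hash is in the released set, producing a candidate superset to be filtered privately later:

import hashlib, random

def h(key: str, buckets: int) -> int:
    """Short hash: deliberately small so that collisions add plausible deniability."""
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:4], "big") % buckets

def publish_private_hashes(private_keys, buckets=10_000, noise=0.5, seed=7):
    """The private party releases hashed keys plus random noise hashes."""
    rng = random.Random(seed)
    hashes = {h(k, buckets) for k in private_keys}
    hashes |= {rng.randrange(buckets) for _ in range(int(noise * len(private_keys)))}
    return hashes

def join_public(public_rows, private_hashes, buckets=10_000):
    """The public side keeps only rows whose key hash appears in the released set.
    The result is a superset of the true join (collisions plus noise), to be
    filtered later on the private side."""
    return [row for row in public_rows if h(row["key"], buckets) in private_hashes]

public = [{"key": f"patient-{i}", "value": i} for i in range(100_000)]
private = ["patient-42", "patient-4242"]
candidates = join_public(public, publish_private_hashes(private))
print(len(candidates), [r["key"] for r in candidates[:5]])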
Unsupervised Deep Hashing With Pseudo Labels for Scalable Image Retrieval.
Zhang, Haofeng; Liu, Li; Long, Yang; Shao, Ling
2018-04-01
In order to achieve efficient similarity searching, hash functions are designed to encode images into low-dimensional binary codes with the constraint that similar features will have a short distance in the projected Hamming space. Recently, deep learning-based methods have become more popular and outperform traditional non-deep methods. However, without label information, most state-of-the-art unsupervised deep hashing (DH) algorithms suffer from severe performance degradation in unsupervised scenarios. One of the main reasons is that the ad-hoc encoding process cannot properly capture the visual feature distribution. In this paper, we propose a novel unsupervised framework that has two main contributions: 1) we convert the unsupervised DH model into a supervised one by discovering pseudo labels; 2) the framework unifies likelihood maximization, mutual information maximization, and quantization error minimization so that the pseudo labels can maximally preserve the distribution of visual features. Extensive experiments on three popular data sets demonstrate the advantages of the proposed method, which leads to significant performance improvement over the state-of-the-art unsupervised hashing algorithms.
Data Sharing in DHT Based P2P Systems
NASA Astrophysics Data System (ADS)
Roncancio, Claudia; Del Pilar Villamil, María; Labbé, Cyril; Serrano-Alvarado, Patricia
The evolution of peer-to-peer (P2P) systems triggered the building of large-scale distributed applications. The main application domain is data sharing across a very large number of highly autonomous participants. Building such data sharing systems is particularly challenging because of the “extreme” characteristics of P2P infrastructures: massive distribution, high churn rate, no global control, potentially untrusted participants... This article focuses on declarative querying support, query optimization and data privacy in a major class of P2P systems, those based on Distributed Hash Tables (P2P DHT). The usual approaches and the algorithms used by classic distributed systems and databases for providing data privacy and querying services are not well suited to P2P DHT systems. A considerable amount of work was required to adapt them to the new challenges such systems present. This paper describes the most important solutions found. It also identifies important future research trends in data management in P2P DHT systems.
Field Acceptance and Nutritional Intake of the Meal, Ready-to-Eat and Heat and Serve Ration.
1998-05-01
…Scrambled Eggs (Natick) and Scrambled Eggs w/Bacon (Natick); these two mean ratings were received from the control group. Table 4A: Control Group H&S Breakfast Acceptability Ratings, listing mean entree ratings (between 4.6 and 6.4) for items including Creamed Ground Beef, Corned Beef Hash, Western Scrambled Eggs (Natick), and Scrambled Eggs w/Bacon (Natick) (* Natick: NRDEC-developed eggs). Table 4B: Test Group H&S Breakfast Acceptability Ratings (n=39), by food group and food item (entrees)…
Multiview alignment hashing for efficient image search.
Liu, Li; Yu, Mengyang; Shao, Ling
2015-03-01
Hashing is a popular and efficient method for nearest neighbor search in large-scale data spaces, embedding high-dimensional feature descriptors into a similarity-preserving Hamming space with a low dimension. For most hashing methods, the performance of retrieval heavily depends on the choice of the high-dimensional feature descriptor. Furthermore, a single type of feature cannot be descriptive enough for different images when it is used for hashing. Thus, how to combine multiple representations for learning effective hashing functions is an imminent task. In this paper, we present a novel unsupervised multiview alignment hashing approach based on regularized kernel nonnegative matrix factorization, which can find a compact representation uncovering the hidden semantics and simultaneously respecting the joint probability distribution of data. In particular, we aim to seek a matrix factorization that effectively fuses the multiple information sources while discarding the feature redundancy. Since the resulting problem is nonconvex and discrete, our objective function is optimized in an alternating manner with relaxation and converges to a locally optimal solution. After finding the low-dimensional representation, the hashing functions are finally obtained through multivariable logistic regression. The proposed method is systematically evaluated on three data sets: 1) Caltech-256; 2) CIFAR-10; and 3) CIFAR-20, and the results show that our method significantly outperforms the state-of-the-art multiview hashing techniques.
Best-First Heuristic Search for Multicore Machines
2010-01-01
…Otto, 1998) to implement an asynchronous version of PRA* that they call Hash Distributed A* (HDA*). HDA* distributes nodes using a hash function in… nodes which are being communicated between peers are in transit. In contact with the authors of HDA*, we have created an implementation of HDA* for… Also, our implementation of HDA* allows us to make a fair comparison between algorithms by sharing common data structures such as priority queues and…
Compact modalities for forward-error correction
NASA Astrophysics Data System (ADS)
Fang, Dejian
2013-10-01
Hash tables [1] must work. In fact, few leading analysts would disagree with the refinement of thin clients. In our research, we disprove not only that the infamous read-write algorithm for the exploration of object-oriented languages by W. White et al. is NP-complete, but that the same is true for the lookaside buffer.
NASA Astrophysics Data System (ADS)
Cary, J. R.; Shasharina, S.; Bruhwiler, D. L.
1998-04-01
The MAPA code is a fully interactive accelerator modeling and design tool consisting of a GUI and two object-oriented C++ libraries: a general library suitable for treatment of any dynamical system, and an accelerator library including many element types plus an accelerator class. The accelerator library inherits directly from the system library, which uses hash tables to store any relevant parameters or strings. The GUI can access these hash tables in a general way, allowing the user to invoke a window displaying all relevant parameters for a particular element type or for the accelerator class, with the option to change those parameters. The system library can advance an arbitrary number of dynamical variables through an arbitrary mapping. The accelerator class inherits this capability and overloads the relevant functions to advance the phase space variables of a charged particle through a string of elements. Among other things, the GUI makes phase space plots and finds fixed points of the map. We discuss the object hierarchy of the two libraries and use of the code.
Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information.
Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing
2016-01-01
Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive comparison of transitions, O(n^2). Few states need to be refined by the hash table, because most states have already been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms.
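A minimal sketch of the hash-table refinement phase, using Moore-style rounds in which the states of each block are grouped by a hashed signature of their transition targets; the backward-depth coarse partition of the first phase is replaced here by the usual accepting/non-accepting split:

def minimize_dfa(states, alphabet, delta, accepting):
    """Moore-style refinement: each round groups the states of a block by a
    hashed signature (the current block of each transition target)."""
    block_of = {s: (1 if s in accepting else 0) for s in states}   # coarse initial partition
    while True:
        groups = {}
        for s in states:
            sig = (block_of[s],) + tuple(block_of[delta[s][a]] for a in alphabet)
            groups.setdefault(sig, []).append(s)                    # hash-table refinement
        new_block_of = {}
        for block_id, members in enumerate(groups.values()):
            for s in members:
                new_block_of[s] = block_id
        if len(groups) == len(set(block_of.values())):
            return groups                                           # no block split: fixed point
        block_of = new_block_of

# A 4-state DFA over {a, b} in which states 0, 2 and 3 accept the same language.
states = [0, 1, 2, 3]
alphabet = ["a", "b"]
delta = {0: {"a": 1, "b": 2}, 1: {"a": 1, "b": 3}, 2: {"a": 1, "b": 2}, 3: {"a": 1, "b": 3}}
accepting = {1}
print(list(minimize_dfa(states, alphabet, delta, accepting).values()))   # [[0, 2, 3], [1]]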
Nitzlnader, Michael; Schreier, Günter
2014-01-01
Dealing with data from different source domains is of increasing importance in today's large-scale biomedical research endeavours. Within the European Network for Cancer research in Children and Adolescents (ENCCA), a solution to share such data for secondary use will be established. In this paper, the solution arising from the aims of the ENCCA project and from regulatory requirements concerning data protection and privacy is presented. Since the details of secondary biomedical dataset utilisation are often not known in advance, data protection regulations are met with an identity management concept that facilitates context-specific pseudonymisation and allows later data aggregation via a hidden reference table. Phonetic hashing is proposed to prevent duplicated patient registration, and re-identification of patients is possible via a trusted third party only. Finally, the solution architecture allows for implementation in a distributed computing environment, including cloud-based elements.
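Phonetic hashing can be illustrated with a classic Soundex-style code, under which different spellings of the same surname collide and flag a likely duplicate registration; this simplified variant (it ignores the H/W rule) is only an illustration, and the algorithm actually deployed in ENCCA may differ:

SOUNDEX_GROUPS = {
    **dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
    **dict.fromkeys("DT", "3"), "L": "4",
    **dict.fromkeys("MN", "5"), "R": "6",
}

def soundex(name: str) -> str:
    """Classic Soundex: keep the first letter, encode the rest by consonant
    group, drop repeats and vowels, pad/truncate to four characters."""
    name = "".join(c for c in name.upper() if c.isalpha())
    if not name:
        return ""
    code, prev = name[0], SOUNDEX_GROUPS.get(name[0], "")
    for c in name[1:]:
        digit = SOUNDEX_GROUPS.get(c, "")
        if digit and digit != prev:
            code += digit
        prev = digit
    return (code + "000")[:4]

# Different spellings of the same surname collide, flagging a likely duplicate.
print(soundex("Meyer"), soundex("Maier"), soundex("Robert"), soundex("Rupert"))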
NASA Astrophysics Data System (ADS)
Ahmadia, A. J.; Kees, C. E.
2014-12-01
Developing scientific software is a continuous balance between not reinventing the wheel and getting fragile codes to interoperate with one another. Binary software distributions such as Anaconda provide a robust starting point for many scientific software packages, but this solution alone is insufficient for many scientific software developers. HashDist provides a critical component of the development workflow, enabling highly customizable, source-driven, and reproducible builds for scientific software stacks, available from both the IPython Notebook and the command line. To address these issues, the Coastal and Hydraulics Laboratory at the US Army Engineer Research and Development Center has funded the development of HashDist in collaboration with Simula Research Laboratories and the University of Texas at Austin. HashDist is motivated by a functional approach to package build management, and features intelligent caching of sources and builds, parametrized build specifications, and the ability to interoperate with system compilers and packages. HashDist enables the easy specification of "software stacks", which allow both the novice user to install a default environment and the advanced user to configure every aspect of their build in a modular fashion. As an advanced feature, HashDist builds can be made relocatable, allowing the easy redistribution of binaries on all three major operating systems as well as cloud, and supercomputing platforms. As a final benefit, all HashDist builds are reproducible, with a build hash specifying exactly how each component of the software stack was installed. This talk discusses the role of HashDist in the hydrological sciences, including its use by the Coastal and Hydraulics Laboratory in the development and deployment of the Proteus Toolkit as well as the Rapid Operational Access and Maneuver Support project. We demonstrate HashDist in action, and show how it can effectively support development, deployment, teaching, and reproducibility for scientists working in the hydrological sciences. The HashDist documentation is available from: http://hashdist.readthedocs.org/en/latest/ HashDist is currently hosted at: https://github.com/hashdist/hashdist
Application of kernel functions for accurate similarity search in large chemical databases.
Wang, Xiaohong; Huan, Jun; Smalter, Aaron; Lushington, Gerald H
2010-04-29
Similarity search in chemical structure databases is an important problem with many applications in chemical genomics, drug design, and efficient chemical probe screening among others. It is widely believed that structure-based methods provide an efficient way to do the query. Recently various graph kernel functions have been designed to capture the intrinsic similarity of graphs. Though successful in constructing accurate predictive and classification models, graph kernel functions cannot be applied to large chemical compound databases due to the high computational complexity and the difficulties in indexing similarity search for large databases. To bridge graph kernel functions and similarity search in chemical databases, we applied a novel kernel-based similarity measurement, developed in our team, to measure the similarity of graph-represented chemicals. In our method, we utilize a hash table to support the new graph kernel function definition, efficient storage, and fast search. We have applied our method, named G-hash, to large chemical databases. Our results show that the G-hash method achieves state-of-the-art performance for k-nearest neighbor (k-NN) classification. Moreover, the similarity measurement and the index structure are scalable to large chemical databases, with a smaller index size and faster query processing time as compared to state-of-the-art indexing methods such as Daylight fingerprints, C-tree and GraphGrep. Efficient similarity query processing for large chemical databases is challenging since we need to balance running time efficiency and similarity search accuracy. Our previous similarity search method, G-hash, provides a new way to perform similarity search in chemical databases. Experimental study validates the utility of G-hash in chemical databases.
Chen, Huifang; Xie, Lei
2014-01-01
Self-healing group key distribution (SGKD) aims to deal with the key distribution problem over an unreliable wireless network. In this paper, we investigate the SGKD issue in resource-constrained wireless networks. We propose two improved SGKD schemes using the one-way hash chain (OHC) and the revocation polynomial (RP), the OHC&RP-SGKD schemes. In the proposed OHC&RP-SGKD schemes, by introducing the unique session identifier and binding the joining time with the capability of recovering previous session keys, the problem of the collusion attack between revoked users and new joined users in existing hash chain-based SGKD schemes is resolved. Moreover, novel methods for utilizing the one-way hash chain and constructing the personal secret, the revocation polynomial and the key updating broadcast packet are presented. Hence, the proposed OHC&RP-SGKD schemes eliminate the limitation of the maximum allowed number of revoked users on the maximum allowed number of sessions, increase the maximum allowed number of revoked/colluding users, and reduce the redundancy in the key updating broadcast packet. Performance analysis and simulation results show that the proposed OHC&RP-SGKD schemes are practical for resource-constrained wireless networks in bad environments, where a strong collusion attack resistance is required and many users could be revoked. PMID:25529204
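A generic one-way hash chain of the kind referenced above can be sketched as follows, using SHA-256 from Python's hashlib; this illustrates only the chain primitive, not the full OHC&RP-SGKD construction with revocation polynomials.

    import hashlib

    def h(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()

    def build_hash_chain(seed: bytes, length: int) -> list:
        """chain[i] = H^i(seed); keys are released from the end toward the seed."""
        chain = [seed]
        for _ in range(length - 1):
            chain.append(h(chain[-1]))
        return chain

    chain = build_hash_chain(b"secret-seed", 5)
    anchor = chain[-1]                 # published in advance
    released = chain[-2]               # key released for the current session
    assert h(released) == anchor       # any receiver can verify the release
    # but computing earlier (future-session) elements from 'released' would require inverting H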
Automated Handling of Garments for Pressing
1991-09-30
[Table-of-contents fragment from the report: Parallel Algorithms for 2D Kalman Filtering (D.J. Potter and M.P. Cline); Hash Table and Sorted Array: A Case Study of Kalman Filtering on the Connection Machine (M.A. Palis and D.K. Krecker); Parallel Sorting of Large Arrays on the MasPar; Algorithms for Seam Sensing; 6.1 Karel Algorithms; 6.1.1 Image Filtering.]
Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information
Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing
2016-01-01
Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automatons (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity O(n) than a naive comparison of transitions O(n²). Few states need to be refined by the hash table, because most states have been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms. PMID:27806102
A high-speed drug interaction search system for ease of use in the clinical environment.
Takada, Masahiro; Inada, Hiroshi; Nakazawa, Kazuo; Tani, Shoko; Iwata, Michiaki; Sugimoto, Yoshihisa; Nagata, Satoru
2012-12-01
With the advancement of pharmaceutical development, drug interactions have become increasingly complex. As a result, a computer-based drug interaction search system is required to organize the whole of drug interaction data. To overcome problems faced with the existing systems, we developed a drug interaction search system using a hash table, which offers higher processing speeds and easier maintenance operations compared with relational databases (RDB). In order to compare the performance of our system and MySQL RDB in terms of search speed, drug interaction searches were repeated for all 45 possible combinations of two out of a group of 10 drugs for two cases: 5,604 and 56,040 drug interaction data. As the principal result, our system was able to process the search approximately 19 times faster than the system using the MySQL RDB. Our system also has several other merits such as that drug interaction data can be created in comma-separated value (CSV) format, thereby facilitating data maintenance. Although our system uses the well-known method of a hash table, it is expected to resolve problems common to existing systems and to be an effective system that enables the safe management of drugs.
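The abstract does not give the data layout; a plausible sketch of the core idea (loading the CSV records into an in-memory hash table keyed by an order-independent drug pair) is shown below, with a hypothetical file name and column order.

    import csv

    def load_interactions(path: str) -> dict:
        """Hash table keyed by an unordered drug pair -> interaction description."""
        table = {}
        with open(path, newline="", encoding="utf-8") as f:
            for drug_a, drug_b, description in csv.reader(f):
                key = frozenset((drug_a.strip(), drug_b.strip()))
                table[key] = description
        return table

    def lookup(table: dict, drug_a: str, drug_b: str):
        return table.get(frozenset((drug_a, drug_b)))  # O(1) expected, no RDB round trip

    # interactions = load_interactions("interactions.csv")  # hypothetical CSV: drugA,drugB,description
    # print(lookup(interactions, "warfarin", "aspirin"))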
Algorithm That Synthesizes Other Algorithms for Hashing
NASA Technical Reports Server (NTRS)
James, Mark
2010-01-01
An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the minimum amount of time. Given a list of numbers, try to find one or more solutions in which, if each number is compressed by use of the modulo function by some value, then a unique value is generated.
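A much-simplified sketch of the shift-and-mask search described above: for a static key set, try right-shift and mask combinations until every key maps to a distinct value, which then acts as a collision-free index. This only illustrates the idea and is not the synthesized NASA code.

    def find_shift_mask(keys, max_shift=32, max_mask_bits=16):
        """Search for (shift, mask) so that (k >> shift) & mask is unique per key."""
        for shift in range(max_shift):
            for bits in range(1, max_mask_bits + 1):
                mask = (1 << bits) - 1
                mapped = {(k >> shift) & mask for k in keys}
                if len(mapped) == len(keys):          # perfect: no collisions
                    return shift, mask
        return None

    keys = [0x1A40, 0x2B80, 0x3CC0, 0x4D00]
    solution = find_shift_mask(keys)
    if solution:
        shift, mask = solution
        index = {(k >> shift) & mask: k for k in keys}   # constant-time membership test
        print(shift, mask, index)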
A fast exact simulation method for a class of Markov jump processes.
Li, Yao; Hu, Lili
2015-11-14
A new method of the stochastic simulation algorithm (SSA), named the Hashing-Leaping method (HLM), for exact simulations of a class of Markov jump processes, is presented in this paper. The HLM has a conditional constant computational cost per event, which is independent of the number of exponential clocks in the Markov process. The main idea of the HLM is to repeatedly implement a hash-table-like bucket sort algorithm for all times of occurrence covered by a time step with length τ. This paper serves as an introduction to this new SSA method. We introduce the method, demonstrate its implementation, analyze its properties, and compare its performance with three other commonly used SSA methods in four examples. Our performance tests and CPU operation statistics show certain advantages of the HLM for large scale problems.
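The central bucketing step can be sketched roughly as follows: firing times that fall inside a leap interval of length τ are dropped into equal-width buckets (a hash-table-like bucket sort) and then processed in time order; later events wait for later steps. Variable names are hypothetical and this is not the authors' implementation.

    import random

    def bucket_events(firing_times, t0, tau, n_buckets):
        """Hash-table-like bucket sort of event times in [t0, t0 + tau)."""
        buckets = [[] for _ in range(n_buckets)]
        width = tau / n_buckets
        for clock_id, t in firing_times:
            if t0 <= t < t0 + tau:               # events beyond the leap window are handled later
                buckets[int((t - t0) // width)].append((t, clock_id))
        for b in buckets:                        # events come out in time order
            b.sort()
            for t, clock_id in b:
                yield t, clock_id

    # toy usage: exponential clocks with unit rate
    random.seed(0)
    times = [(i, random.expovariate(1.0)) for i in range(10)]
    for t, i in bucket_events(times, 0.0, 1.0, n_buckets=8):
        print(f"clock {i} fires at {t:.3f}")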
Internet traffic load balancing using dynamic hashing with flow volume
NASA Astrophysics Data System (ADS)
Jo, Ju-Yeon; Kim, Yoohwan; Chao, H. Jonathan; Merat, Francis L.
2002-07-01
Sending IP packets over multiple parallel links is in extensive use in today's Internet and its use is growing due to its scalability, reliability and cost-effectiveness. To maximize the efficiency of parallel links, load balancing is necessary among the links, but it may cause the problem of packet reordering. Since packet reordering impairs TCP performance, it is important to reduce the amount of reordering. Hashing offers a simple solution to keep the packet order by sending a flow over a unique link, but static hashing does not guarantee an even distribution of the traffic amount among the links, which could lead to packet loss under heavy load. Dynamic hashing offers some degree of load balancing but suffers from load fluctuations and excessive packet reordering. To overcome these shortcomings, we have enhanced the dynamic hashing algorithm to utilize the flow volume information in order to reassign only the appropriate flows. This new method, called dynamic hashing with flow volume (DHFV), eliminates unnecessary flow reassignments of small flows and achieves load balancing very quickly without load fluctuation by accurately predicting the amount of transferred load between the links. In this paper we provide the general framework of DHFV and address the challenges in implementing DHFV. We then introduce two algorithms of DHFV with different flow selection strategies and show their performances through simulation.
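A bare-bones sketch in the spirit of DHFV: flows are statically hashed to links, and only sufficiently large flows on the most loaded link are reassigned, using flow volume to avoid overshooting the imbalance. The threshold and reassignment rule below are assumptions, not the paper's exact policy.

    import hashlib
    from collections import defaultdict

    N_LINKS = 4
    MIN_FLOW_BYTES = 10_000          # assumed threshold: small flows are never reassigned
    load = defaultdict(int)          # bytes sent per link so far
    flow_volume = defaultdict(int)   # bytes seen per flow so far
    flow_link = {}                   # current flow -> link assignment

    def static_link(flow_id: str) -> int:
        # plain static hashing: keeps all packets of a flow on one link
        return int(hashlib.md5(flow_id.encode()).hexdigest(), 16) % N_LINKS

    def route(flow_id: str, nbytes: int) -> int:
        link = flow_link.setdefault(flow_id, static_link(flow_id))
        flow_volume[flow_id] += nbytes
        load[link] += nbytes
        busiest = max(range(N_LINKS), key=lambda l: load[l])
        idlest = min(range(N_LINKS), key=lambda l: load[l])
        gap = load[busiest] - load[idlest]
        # reassign only a sufficiently large flow from the busiest link, and only
        # if moving its traffic is not predicted to overshoot the imbalance
        if link == busiest and MIN_FLOW_BYTES <= flow_volume[flow_id] <= gap:
            flow_link[flow_id] = idlest
        return flow_link[flow_id]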
Adoption of the Hash algorithm in a conceptual model for the civil registry of Ecuador
NASA Astrophysics Data System (ADS)
Toapanta, Moisés; Mafla, Enrique; Orizaga, Antonio
2018-04-01
The Hash security algorithm was analyzed in order to mitigate information security risks in a distributed architecture. The objective of this research is to develop a prototype for the adoption of the Hash algorithm in a conceptual model for the Civil Registry of Ecuador. The deductive method was used to analyze the published articles that have a direct relation with the research project "Algorithms and Security Protocols for the Civil Registry of Ecuador" and articles related to the Hash security algorithm. This research found that the SHA-1 security algorithm is appropriate for use in Ecuador's civil registry; we adopted the SHA-1 algorithm using the flowchart technique and finally obtained the adoption of the hash algorithm in a conceptual model. From the comparison of the MD5 and SHA-1 algorithms, it is suggested that, in the case of an implementation, SHA-1 be chosen due to the amount of information and data available from the Civil Registry of Ecuador. The SHA-1 procedure that was defined using the flowchart technique can be modified according to the requirements of each institution, and the model for adopting the hash algorithm in a conceptual model is a prototype that can be modified according to all the actors that make up each organization.
A fast exact simulation method for a class of Markov jump processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yao, E-mail: yaoli@math.umass.edu; Hu, Lili, E-mail: lilyhu86@gmail.com
2015-11-14
A new method of the stochastic simulation algorithm (SSA), named the Hashing-Leaping method (HLM), for exact simulations of a class of Markov jump processes, is presented in this paper. The HLM has a conditional constant computational cost per event, which is independent of the number of exponential clocks in the Markov process. The main idea of the HLM is to repeatedly implement a hash-table-like bucket sort algorithm for all times of occurrence covered by a time step with length τ. This paper serves as an introduction to this new SSA method. We introduce the method, demonstrate its implementation, analyze its properties, and compare its performance with three other commonly used SSA methods in four examples. Our performance tests and CPU operation statistics show certain advantages of the HLM for large scale problems.
A dynamic re-partitioning strategy based on the distribution of key in Spark
NASA Astrophysics Data System (ADS)
Zhang, Tianyu; Lian, Xin
2018-05-01
Spark is a memory-based distributed data processing framework that can process massive data and has become a focus in Big Data. However, the performance of Spark Shuffle depends on the distribution of the data. The naive Hash partition function of Spark cannot guarantee load balancing when the data are skewed, and the job completion time is dominated by the node with the most data to process. In order to handle this problem, dynamic sampling is used. During task execution, a histogram is used to count the key frequency distribution on each node, from which the global key frequency distribution is generated. After analyzing the key distribution, balanced data partitioning is achieved. Results show that the Dynamic Re-Partitioning function is better than the default Hash partition, Fine Partition and the Balanced-Schedule strategy; it can reduce the execution time of the task and improve the efficiency of the whole cluster.
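The re-partitioning idea can be illustrated outside Spark with plain Python: sample the key frequency histogram, then greedily assign heavy keys to the currently least-loaded partition instead of relying on a plain hash of the key. This is a hypothetical sketch, not the paper's implementation.

    from collections import Counter

    def build_partitioner(sampled_keys, n_partitions):
        """Assign keys to partitions by descending frequency, least-loaded first."""
        freq = Counter(sampled_keys)
        load = [0] * n_partitions
        assignment = {}
        for key, count in freq.most_common():
            target = min(range(n_partitions), key=lambda p: load[p])
            assignment[key] = target
            load[target] += count
        def partitioner(key):
            # unseen keys fall back to the default hash partitioning
            return assignment.get(key, hash(key) % n_partitions)
        return partitioner

    part = build_partitioner(["a"] * 90 + ["b"] * 5 + ["c"] * 5, n_partitions=3)
    print(part("a"), part("b"), part("c"))   # the heavy key "a" gets a partition to itself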
Architectural design of an Algol interpreter
NASA Technical Reports Server (NTRS)
Jackson, C. K.
1971-01-01
The design of a syntax-directed interpreter for a subset of Algol is described. It is a conceptual design with sufficient details and completeness but as much independence of implementation as possible. The design includes a detailed description of a scanner, an analyzer described in the Floyd-Evans productions, a hash-coded symbol table, and an executor. Interpretation of sample programs is also provided to show how the interpreter functions.
Improving the Rainbow Attack by Reusing Colours
NASA Astrophysics Data System (ADS)
Ågren, Martin; Johansson, Thomas; Hell, Martin
Hashing or encrypting a key or a password is a vital part in most network security protocols. The most practical generic attack on such schemes is a time memory trade-off attack. Such an attack inverts any one-way function using a trade-off between memory and execution time. Existing techniques include the Hellman attack and the rainbow attack, where the latter uses different reduction functions ("colours") within a table.
Git as an Encrypted Distributed Version Control System
2015-03-01
options. The algorithm uses AES-256 counter mode with an IV derived from a SHA-1-HMAC hash (this is nearly identical to the GCM mode discussed earlier ... built into the internal structure of Git. Every file in a Git repository is checksummed with a SHA-1 hash, a one-way function with arbitrarily long ... implementation. Git-encrypt calls OpenSSL cryptography library command line functions. The default cipher used is AES-256 Electronic Code Book (ECB), which is ...
Robust hashing with local models for approximate similarity search.
Song, Jingkuan; Yang, Yi; Li, Xuelong; Huang, Zi; Yang, Yang
2014-07-01
Similarity search plays an important role in many applications involving high-dimensional data. Due to the known dimensionality curse, the performance of most existing indexing structures degrades quickly as the feature dimensionality increases. Hashing methods, such as locality sensitive hashing (LSH) and its variants, have been widely used to achieve fast approximate similarity search by trading search quality for efficiency. However, most existing hashing methods make use of randomized algorithms to generate hash codes without considering the specific structural information in the data. In this paper, we propose a novel hashing method, namely, robust hashing with local models (RHLM), which learns a set of robust hash functions to map the high-dimensional data points into binary hash codes by effectively utilizing local structural information. In RHLM, for each individual data point in the training dataset, a local hashing model is learned and used to predict the hash codes of its neighboring data points. The local models from all the data points are globally aligned so that an optimal hash code can be assigned to each data point. After obtaining the hash codes of all the training data points, we design a robust method by employing l2,1 -norm minimization on the loss function to learn effective hash functions, which are then used to map each database point into its hash code. Given a query data point, the search process first maps it into the query hash code by the hash functions and then explores the buckets, which have similar hash codes to the query hash code. Extensive experimental results conducted on real-life datasets show that the proposed RHLM outperforms the state-of-the-art methods in terms of search quality and efficiency.
Changes in Benthos Associated with Mussel (Mytilus edulis L.) Farms on the West-Coast of Scotland
Wilding, Thomas A.; Nickell, Thomas D.
2013-01-01
Aquaculture, as a means of food production, is growing rapidly in response to an increasing demand for protein and the over-exploitation of wild fisheries. This expansion includes mussels (family Mytilidae) where production currently stands at 1.5 million tonnes per annum. Mussel culture is frequently perceived as having little environmental impact yet mussel biodeposits and shell debris accumulate around the production site and are linked to changes in the benthos. To assess the extent and nature of changes in benthos associated with mussel farming grab and video sampling around seven mussel farms was conducted. Grab samples were analysed for macrofauna and shell-hash content whilst starfish were counted and the shell-hash cover estimated from video imaging. Shell-hash was patchily distributed and occasionally dominated sediments (maximum of 2116 g per 0.1 m2 grab). Mean shell-hash content decreased rapidly at distances >5 m from the line and, over the distance 1–64 m, decreased by three orders of magnitude. The presence of shell-hash and the distance-from-line influenced macrofaunal assemblages but this effect differed between sites. There was no evidence that mussel farming was associated with changes in macrobenthic diversity, species count or feeding strategy. However, total macrofaunal count was estimated to be 2.5 times higher in close proximity to the lines, compared with 64 m distance, and there was evidence that this effect was conditional on the presence of shell-hash. Starfish density varied considerably between sites but, overall, they were approximately 10 times as abundant close to the mussel-lines compared with 64 m distance. There was no evidence that starfish were more abundant in the presence of shell-hash visible on the sediment surface. In terms of farm-scale benthic impacts these data suggest that mussel farming is a relatively benign way of producing food, compared with intensive fish-farming, in similar environments. PMID:23874583
2010-12-01
... yields in 1969 and 1970. American and European smugglers facilitated the movement of Afghan hash throughout the world until U.S. drug enforcement ... significantly affected opium yields. [Table 4: Sampling of Opium Production (metric tons); * denotes approximation from UNDCP records.] ... shorter grow-cycle than food crops like wheat, allowing farmers to double-crop with livestock fodder, such as maize following the opium harvest ...
A Hybrid Spatio-Temporal Data Indexing Method for Trajectory Databases
Ke, Shengnan; Gong, Jun; Li, Songnian; Zhu, Qing; Liu, Xintao; Zhang, Yeting
2014-01-01
In recent years, there has been tremendous growth in the field of indoor and outdoor positioning sensors continuously producing huge volumes of trajectory data that has been used in many fields such as location-based services or location intelligence. Trajectory data is massively increased and semantically complicated, which poses a great challenge on spatio-temporal data indexing. This paper proposes a spatio-temporal data indexing method, named HBSTR-tree, which is a hybrid index structure comprising spatio-temporal R-tree, B*-tree and Hash table. To improve the index generation efficiency, rather than directly inserting trajectory points, we group consecutive trajectory points as nodes according to their spatio-temporal semantics and then insert them into spatio-temporal R-tree as leaf nodes. Hash table is used to manage the latest leaf nodes to reduce the frequency of insertion. A new spatio-temporal interval criterion and a new node-choosing sub-algorithm are also proposed to optimize spatio-temporal R-tree structures. In addition, a B*-tree sub-index of leaf nodes is built to query the trajectories of targeted objects efficiently. Furthermore, a database storage scheme based on a NoSQL-type DBMS is also proposed for the purpose of cloud storage. Experimental results prove that HBSTR-tree outperforms TB*-tree in some aspects such as generation efficiency, query performance and query type. PMID:25051028
A hybrid spatio-temporal data indexing method for trajectory databases.
Ke, Shengnan; Gong, Jun; Li, Songnian; Zhu, Qing; Liu, Xintao; Zhang, Yeting
2014-07-21
In recent years, there has been tremendous growth in the field of indoor and outdoor positioning sensors continuously producing huge volumes of trajectory data that has been used in many fields such as location-based services or location intelligence. Trajectory data is massively increased and semantically complicated, which poses a great challenge on spatio-temporal data indexing. This paper proposes a spatio-temporal data indexing method, named HBSTR-tree, which is a hybrid index structure comprising spatio-temporal R-tree, B*-tree and Hash table. To improve the index generation efficiency, rather than directly inserting trajectory points, we group consecutive trajectory points as nodes according to their spatio-temporal semantics and then insert them into spatio-temporal R-tree as leaf nodes. Hash table is used to manage the latest leaf nodes to reduce the frequency of insertion. A new spatio-temporal interval criterion and a new node-choosing sub-algorithm are also proposed to optimize spatio-temporal R-tree structures. In addition, a B*-tree sub-index of leaf nodes is built to query the trajectories of targeted objects efficiently. Furthermore, a database storage scheme based on a NoSQL-type DBMS is also proposed for the purpose of cloud storage. Experimental results prove that HBSTR-tree outperforms TB*-tree in some aspects such as generation efficiency, query performance and query type.
A novel image retrieval algorithm based on PHOG and LSH
NASA Astrophysics Data System (ADS)
Wu, Hongliang; Wu, Weimin; Peng, Jiajin; Zhang, Junyuan
2017-08-01
PHOG can describe the local shape of an image and its spatial layout. Using the PHOG algorithm to extract image features has achieved good results in image recognition, retrieval, and other applications. In recent years, the locality-sensitive hashing (LSH) algorithm has outperformed traditional algorithms on large-scale data when solving approximate nearest neighbor problems. This paper presents a novel image retrieval algorithm based on PHOG and LSH. First, we use PHOG to extract the feature vector of the image, then use L different LSH hash tables to reduce the PHOG feature to index values and map it into different buckets, and finally extract the corresponding values of the image in the buckets for a second-stage image retrieval using Manhattan distance. This algorithm scales to massive image retrieval, ensuring high retrieval accuracy while reducing the time complexity of retrieval. This algorithm is of great significance.
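A rough sketch of the indexing step described above: random-projection LSH maps a feature vector to a short binary key in each of L hash tables, and the candidates found in the matching buckets are re-ranked with Manhattan distance. The dimensions and parameters below are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    L, BITS, DIM = 4, 12, 680          # e.g. an assumed 680-dim PHOG descriptor
    planes = [rng.normal(size=(BITS, DIM)) for _ in range(L)]
    tables = [dict() for _ in range(L)]

    def bucket_key(x, table_idx):
        bits = (planes[table_idx] @ x) > 0       # sign of random projections
        return bits.tobytes()

    def index_image(image_id, feature):
        for t in range(L):
            tables[t].setdefault(bucket_key(feature, t), []).append((image_id, feature))

    def query(feature, top_k=5):
        candidates = {}
        for t in range(L):
            for image_id, feat in tables[t].get(bucket_key(feature, t), []):
                candidates[image_id] = np.abs(feat - feature).sum()   # Manhattan re-rank
        return sorted(candidates.items(), key=lambda kv: kv[1])[:top_k]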
FBC: a flat binary code scheme for fast Manhattan hash retrieval
NASA Astrophysics Data System (ADS)
Kong, Yan; Wu, Fuzhang; Gao, Lifa; Wu, Yanjun
2018-04-01
Hash coding is a widely used technique in approximate nearest neighbor (ANN) search, especially in document search and multimedia (such as image and video) retrieval. Based on the difference of distance measurement, hash methods are generally classified into two categories: Hamming hashing and Manhattan hashing. Benefiting from better neighborhood structure preservation, Manhattan hashing methods outperform earlier methods in search effectiveness. However, because it uses decimal arithmetic operations instead of bit operations, Manhattan hashing is a more time-consuming process, which significantly decreases the overall search efficiency. To solve this problem, we present an intuitive hash scheme which uses a Flat Binary Code (FBC) to encode the data points. As a result, the decimal arithmetic used in previous Manhattan hashing can be replaced by the more efficient XOR operator. The final experiments show that, with a reasonable growth in memory space, our FBC achieves an average speedup of more than 80% without any loss of search accuracy when compared to the state-of-the-art Manhattan hashing methods.
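The arithmetic trick can be shown directly: if each quantized dimension value q is written as a thermometer-style flat binary code (q ones followed by zeros), the Manhattan distance between two vectors equals the popcount of the XOR of their codes. Whether this matches the exact FBC layout in the paper is an assumption, but the mechanism of replacing decimal subtraction with bit operations is the same.

    def flat_code(values, levels=4):
        """Concatenate thermometer codes: value q -> q ones, (levels-1-q) zeros."""
        bits = 0
        for q in values:
            bits = (bits << (levels - 1)) | ((1 << q) - 1)
        return bits

    a, b = [3, 0, 2], [1, 2, 2]
    manhattan = sum(abs(x - y) for x, y in zip(a, b))           # 2 + 2 + 0 = 4
    xor_popcount = bin(flat_code(a) ^ flat_code(b)).count("1")  # same value via bit operations
    assert manhattan == xor_popcount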
Fast perceptual image hash based on cascade algorithm
NASA Astrophysics Data System (ADS)
Ruchay, Alexey; Kober, Vitaly; Yavtushenko, Evgeniya
2017-09-01
In this paper, we propose a perceptual image hash algorithm based on a cascade algorithm, which can be applied in image authentication, retrieval, and indexing. A perceptual image hash is used for image retrieval in the sense of human perception and must be robust against distortions caused by compression, noise, common signal processing, and geometrical modifications. The main disadvantage of perceptual hashing is its high computational cost. The proposed cascade algorithm initializes image retrieval with short hashes, and then a full hash is applied to the filtered results. Computer simulation results show that the proposed hash algorithm yields good performance in terms of robustness, discriminability, and computational cost.
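The cascade can be sketched generically: filter the database with a cheap short hash first and compute the expensive full hash only for the survivors. The hash functions, thresholds, and names below are placeholders, not the hashes used in the paper.

    def hamming(a: int, b: int) -> int:
        return bin(a ^ b).count("1")

    def cascade_search(query_img, database, short_hash, full_hash,
                       short_thresh=4, full_thresh=10):
        """database: list of (image_id, short_hash_value, full_hash_value) tuples."""
        q_short = short_hash(query_img)
        survivors = [item for item in database if hamming(q_short, item[1]) <= short_thresh]
        q_full = full_hash(query_img)          # expensive hash computed once per query
        return [item[0] for item in survivors if hamming(q_full, item[2]) <= full_thresh]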
Hierarchical Recurrent Neural Hashing for Image Retrieval With Hierarchical Convolutional Features.
Lu, Xiaoqiang; Chen, Yaxiong; Li, Xuelong
Hashing has been an important and effective technology in image retrieval due to its computational efficiency and fast search speed. The traditional hashing methods usually learn hash functions to obtain binary codes by exploiting hand-crafted features, which cannot optimally represent the information of the sample. Recently, deep learning methods can achieve better performance, since deep learning architectures can learn more effective image representation features. However, these methods only use semantic features to generate hash codes by shallow projection but ignore texture details. In this paper, we proposed a novel hashing method, namely hierarchical recurrent neural hashing (HRNH), to exploit hierarchical recurrent neural network to generate effective hash codes. There are three contributions of this paper. First, a deep hashing method is proposed to extensively exploit both spatial details and semantic information, in which, we leverage hierarchical convolutional features to construct image pyramid representation. Second, our proposed deep network can exploit directly convolutional feature maps as input to preserve the spatial structure of convolutional feature maps. Finally, we propose a new loss function that considers the quantization error of binarizing the continuous embeddings into the discrete binary codes, and simultaneously maintains the semantic similarity and balanceable property of hash codes. Experimental results on four widely used data sets demonstrate that the proposed HRNH can achieve superior performance over other state-of-the-art hashing methods.
Hughes, Richard John; Thrasher, James Thomas; Nordholt, Jane Elizabeth
2016-11-29
Innovations for quantum key management harness quantum communications to form a cryptography system within a public key infrastructure framework. In example implementations, the quantum key management innovations combine quantum key distribution and a quantum identification protocol with a Merkle signature scheme (using Winternitz one-time digital signatures or other one-time digital signatures, and Merkle hash trees) to constitute a cryptography system. More generally, the quantum key management innovations combine quantum key distribution and a quantum identification protocol with a hash-based signature scheme. This provides a secure way to identify, authenticate, verify, and exchange secret cryptographic keys. Features of the quantum key management innovations further include secure enrollment of users with a registration authority, as well as credential checking and revocation with a certificate authority, where the registration authority and/or certificate authority can be part of the same system as a trusted authority for quantum key distribution.
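A minimal Merkle-tree sketch (with SHA-256) showing the hash-based building block mentioned above; the Winternitz one-time signatures and authentication paths of the full scheme are omitted, and the leaf values are hypothetical.

    import hashlib

    def h(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()

    def merkle_root(leaves):
        """Root hash of a list of leaf values (duplicating the last node on odd levels)."""
        level = [h(leaf) for leaf in leaves]
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    keys = [b"one-time-public-key-%d" % i for i in range(8)]   # hypothetical leaf values
    print(merkle_root(keys).hex())   # a single value that commits to all one-time keys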
Collision attack against Tav-128 hash function
NASA Astrophysics Data System (ADS)
Hariyanto, Fajar; Hayat Susanti, Bety
2017-10-01
Tav-128 is a hash function designed for Radio Frequency Identification (RFID) authentication protocols. Tav-128 is expected to be a cryptographically secure hash function that meets the collision resistance property. In this research, a collision attack is performed to test whether Tav-128 is a collision-resistant hash function. The results show that collisions can be obtained for the Tav-128 hash function, which means that Tav-128 is not a collision-resistant hash function.
On the concept of cryptographic quantum hashing
NASA Astrophysics Data System (ADS)
Ablayev, F.; Ablayev, M.
2015-12-01
In the letter we define the notion of a quantum resistant ((ε, δ)-resistant) hash function which consists of a combination of pre-image (one-way) resistance (ε-resistance) and collision resistance (δ-resistance) properties. We present examples and discussion that support the idea of quantum hashing. We present an explicit quantum hash function which is ‘balanced’, one-way resistant and collision resistant and demonstrate how to build a large family of quantum hash functions. Balanced quantum hash functions need a high degree of entanglement between the qubits. We use a phase transformation technique to express quantum hashing constructions, which is an effective way of mapping hash states to coherent states in a superposition of time-bin modes. The phase transformation technique is ready to be implemented with current optical technology.
[PREPARATION AND BIOCOMPATIBILITY OF IN SITU CROSSLINKING HYALURONIC ACID HYDROGEL].
Liang, Jiabi; Li, Jun; Wang, Ting; Liang, Yuhong; Zou, Xuenong; Zhou, Guangqian; Zhou, Zhiyu
2016-06-08
To fabricate in situ crosslinking hyaluronic acid hydrogel and evaluate its biocompatibility in vitro. The acrylic acid chloride and polyethylene glycol were added to prepare the crosslinking agent polyethylene glycol acrylate (PEGDA), and the molecular structure of PEGDA was analyzed by Fourier transform infrared spectroscopy and 1H nuclear magnetic resonance spectroscopy. Hyaluronic acid hydrogel was chemically modified to prepare hyaluronic acid thiolation (HA-SH). The degree of HA-SH was analyzed qualitatively and quantitatively by the Ellman method. HA-SH solution in concentrations (W/V) of 0.5%, 1.0%, and 1.5% and PEGDA solution in concentrations (W/V) of 2%, 4%, and 6% were prepared with PBS. The two solutions were mixed in different ratios, and in situ crosslinking hyaluronic acid hydrogel was obtained; the crosslinking time was recorded. The cellular toxicity of in situ crosslinking hyaluronic acid hydrogel (1.5% HA-SH and 4% PEGDA mixed) was tested by L929 cells. Meanwhile, the biocompatibility of the hydrogel was tested by co-culture with human bone mesenchymal stem cells (hBMSCs). Fourier transform infrared spectroscopy showed that most hydroxyl groups were replaced by acrylate groups; 1H nuclear magnetic resonance spectroscopy showed 3 characteristic peaks of hydrogen representing acrylate and olefinic bond at 5-7 ppm. The thiolation yield of HA-SH was 65.4%. In situ crosslinking time of hyaluronic acid hydrogel was 2 to 70 minutes in the PEGDA concentrations of 2%-6% and HA-SH concentrations of 0.5%-1.5%. The hyaluronic acid hydrogel appeared to be transparent. The toxicity grade of the leaching solution of hydrogel was grade 1. hBMSCs grew well and distributed evenly in hydrogel with a very high viability. In situ crosslinking hyaluronic acid hydrogel has low cytotoxicity, good biocompatibility, and controllable crosslinking time, so it could be used as a potential tissue engineered scaffold or repairing material for tissue regeneration.
Neighborhood Discriminant Hashing for Large-Scale Image Retrieval.
Tang, Jinhui; Li, Zechao; Wang, Meng; Zhao, Ruizhen
2015-09-01
With the proliferation of large-scale community-contributed images, hashing-based approximate nearest neighbor search in huge databases has aroused considerable interest from the fields of computer vision and multimedia in recent years because of its computational and memory efficiency. In this paper, we propose a novel hashing method named neighborhood discriminant hashing (NDH) to implement approximate similarity search. Different from previous work, we propose to learn a discriminant hashing function by exploiting local discriminative information, i.e., the labels of a sample can be inherited from the neighbor samples it selects. The hashing function is expected to be orthogonal to avoid redundancy in the learned hashing bits as much as possible, while an information-theoretic regularization is jointly exploited using the maximum entropy principle. As a consequence, the learned hashing function is compact and nonredundant among bits, while each bit is highly informative. Extensive experiments are carried out on four publicly available data sets, and the comparison results demonstrate the superior performance of the proposed NDH method over state-of-the-art hashing techniques.
Perceptual Audio Hashing Functions
NASA Astrophysics Data System (ADS)
Özer, Hamza; Sankur, Bülent; Memon, Nasir; Anarım, Emin
2005-12-01
Perceptual hash functions provide a tool for fast and reliable identification of content. We present new audio hash functions based on summarization of the time-frequency spectral characteristics of an audio document. The proposed hash functions are based on the periodicity series of the fundamental frequency and on singular-value description of the cepstral frequencies. They are found, on one hand, to perform very satisfactorily in identification and verification tests, and on the other hand, to be very resilient to a large variety of attacks. Moreover, we address the issue of security of hashes and propose a keying technique, and thereby a key-dependent hash function.
Implementation of cryptographic hash function SHA256 in C++
NASA Astrophysics Data System (ADS)
Shrivastava, Akash
2012-02-01
This abstract explains an implementation of the SHA-256 secure hash algorithm using C++. SHA-2 is a strong hashing algorithm used in almost all kinds of security applications. The algorithm consists of two phases: preprocessing and hash computation. Preprocessing involves padding a message, parsing the padded message into m-bit blocks, and setting initialization values to be used in the hash computation. The computation generates a message schedule from the padded message and uses that schedule, along with functions, constants, and word operations, to iteratively generate a series of hash values. The final hash value generated by the computation is used to determine the message digest. SHA-2 includes a significant number of changes from its predecessor, SHA-1. SHA-2 consists of a set of four hash functions with digests of 224, 256, 384 or 512 bits. The algorithm outputs a 256-bit message digest, with an internal state of 256 bits and a block size of 512 bits. The maximum message length is 2^64 - 1 bits, and the digest is computed over a series of 64 rounds consisting of several operations such as AND, OR, XOR, SHR, and ROT. The code provides a clear understanding of the hash algorithm and generates hash values to retrieve the message digest.
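The abstract describes the author's C++ implementation; as a quick cross-check of the expected behaviour, a digest with the same parameters (256-bit output, 512-bit blocks) can be obtained from Python's standard library and compared against the published test vector.

    import hashlib

    message = b"abc"
    digest = hashlib.sha256(message).hexdigest()
    print(digest)
    # ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad  (known SHA-256 test vector)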
Kilb, Debi; Hardebeck, J.L.
2006-01-01
We estimate the strike and dip of three California fault segments (Calaveras, Sargent, and a portion of the San Andreas near San Juan Bautista) based on principal component analysis of accurately located microearthquakes. We compare these fault orientations with two different first-motion focal mechanism catalogs: the Northern California Earthquake Data Center (NCEDC) catalog, calculated using the FPFIT algorithm (Reasenberg and Oppenheimer, 1985), and a catalog created using the HASH algorithm that tests mechanism stability relative to seismic velocity model variations and earthquake location (Hardebeck and Shearer, 2002). We assume any disagreement (misfit >30° in strike, dip, or rake) indicates inaccurate focal mechanisms in the catalogs. With this assumption, we can quantify the parameters that identify the most optimally constrained focal mechanisms. For the NCEDC/FPFIT catalogs, we find that the best quantitative discriminator of quality focal mechanisms is the station distribution ratio (STDR) parameter, an indicator of how the stations are distributed about the focal sphere. Requiring STDR > 0.65 increases the acceptable mechanisms from 34%–37% to 63%–68%. This suggests stations should be uniformly distributed surrounding, rather than aligning, known fault traces. For the HASH catalogs, the fault plane uncertainty (FPU) parameter is the best discriminator, increasing the percent of acceptable mechanisms from 63%–78% to 81%–83% when FPU ≤ 35°. The overall higher percentage of acceptable mechanisms and the usefulness of the formal uncertainty in identifying quality mechanisms validate the HASH approach of testing for mechanism stability.
An Elliptic Curve Based Schnorr Cloud Security Model in Distributed Environment
Muthurajan, Vinothkumar; Narayanasamy, Balaji
2016-01-01
Cloud computing requires the security upgrade in data transmission approaches. In general, key-based encryption/decryption (symmetric and asymmetric) mechanisms ensure the secure data transfer between the devices. The symmetric key mechanisms (pseudorandom function) provide minimum protection level compared to asymmetric key (RSA, AES, and ECC) schemes. The presence of expired content and the irrelevant resources cause unauthorized data access adversely. This paper investigates how the integrity and secure data transfer are improved based on the Elliptic Curve based Schnorr scheme. This paper proposes a virtual machine based cloud model with Hybrid Cloud Security Algorithm (HCSA) to remove the expired content. The HCSA-based auditing improves the malicious activity prediction during the data transfer. The duplication in the cloud server degrades the performance of EC-Schnorr based encryption schemes. This paper utilizes the blooming filter concept to avoid the cloud server duplication. The combination of EC-Schnorr and blooming filter efficiently improves the security performance. The comparative analysis between proposed HCSA and the existing Distributed Hash Table (DHT) regarding execution time, computational overhead, and auditing time with auditing requests and servers confirms the effectiveness of HCSA in the cloud security model creation. PMID:26981584
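The duplicate-avoidance component referenced above can be illustrated with a small Bloom-filter sketch (the abstract's "blooming filter"); the parameters and item names are assumed, and this is not the HCSA implementation.

    import hashlib

    class BloomFilter:
        def __init__(self, n_bits=1024, n_hashes=4):
            self.n_bits, self.n_hashes = n_bits, n_hashes
            self.bits = 0

        def _positions(self, item: str):
            for i in range(self.n_hashes):
                digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(digest[:8], "big") % self.n_bits

        def add(self, item: str):
            for pos in self._positions(item):
                self.bits |= 1 << pos

        def might_contain(self, item: str) -> bool:
            return all(self.bits >> pos & 1 for pos in self._positions(item))

    seen = BloomFilter()
    seen.add("block-7f3a")                      # hypothetical stored-block identifier
    print(seen.might_contain("block-7f3a"))     # True (no false negatives)
    print(seen.might_contain("block-0000"))     # almost certainly False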
An Elliptic Curve Based Schnorr Cloud Security Model in Distributed Environment.
Muthurajan, Vinothkumar; Narayanasamy, Balaji
2016-01-01
Cloud computing requires the security upgrade in data transmission approaches. In general, key-based encryption/decryption (symmetric and asymmetric) mechanisms ensure the secure data transfer between the devices. The symmetric key mechanisms (pseudorandom function) provide minimum protection level compared to asymmetric key (RSA, AES, and ECC) schemes. The presence of expired content and the irrelevant resources cause unauthorized data access adversely. This paper investigates how the integrity and secure data transfer are improved based on the Elliptic Curve based Schnorr scheme. This paper proposes a virtual machine based cloud model with Hybrid Cloud Security Algorithm (HCSA) to remove the expired content. The HCSA-based auditing improves the malicious activity prediction during the data transfer. The duplication in the cloud server degrades the performance of EC-Schnorr based encryption schemes. This paper utilizes the blooming filter concept to avoid the cloud server duplication. The combination of EC-Schnorr and blooming filter efficiently improves the security performance. The comparative analysis between proposed HCSA and the existing Distributed Hash Table (DHT) regarding execution time, computational overhead, and auditing time with auditing requests and servers confirms the effectiveness of HCSA in the cloud security model creation.
ProGeRF: Proteome and Genome Repeat Finder Utilizing a Fast Parallel Hash Function
Moraes, Walas Jhony Lopes; Rodrigues, Thiago de Souza; Bartholomeu, Daniella Castanheira
2015-01-01
Repetitive element sequences are adjacent, repeating patterns, also called motifs, and can be of different lengths; repetitions can involve their exact or approximate copies. They have been widely used as molecular markers in population biology. Given the sizes of sequenced genomes, various bioinformatics tools have been developed for the extraction of repetitive elements from DNA sequences. However, currently available tools do not provide options for identifying repetitive elements in the genome or proteome, displaying a user-friendly web interface, and performing exhaustive searches. ProGeRF is a web site for extracting repetitive regions from genome and proteome sequences. It was designed to be an efficient, fast, accurate, and primarily user-friendly web tool allowing many ways to view and analyse the results. ProGeRF (Proteome and Genome Repeat Finder) is freely available as a stand-alone program, from which the users can download the source code, and as a web tool. It was developed using the hash table approach to extract perfect and imperfect repetitive regions in a (multi)FASTA file, while allowing a linear time complexity. PMID:25811026
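A toy version of the hash-table approach for perfect repeats: index every k-mer of the sequence in a dictionary keyed by the substring and report those occurring more than once. ProGeRF additionally handles imperfect repeats and multi-FASTA input, which are not shown here.

    from collections import defaultdict

    def perfect_repeats(seq: str, k: int) -> dict:
        """Map each k-mer occurring at least twice to its start positions (0-based)."""
        positions = defaultdict(list)
        for i in range(len(seq) - k + 1):
            positions[seq[i:i + k]].append(i)
        return {kmer: pos for kmer, pos in positions.items() if len(pos) > 1}

    print(perfect_repeats("ACGTACGTTTACGT", k=4))
    # {'ACGT': [0, 4, 10], 'TACG': [3, 9]}  -- linear-time scan thanks to O(1) expected hash lookups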
Multimodal Discriminative Binary Embedding for Large-Scale Cross-Modal Retrieval.
Wang, Di; Gao, Xinbo; Wang, Xiumei; He, Lihuo; Yuan, Bo
2016-10-01
Multimodal hashing, which conducts effective and efficient nearest neighbor search across heterogeneous data on large-scale multimedia databases, has been attracting increasing interest, given the explosive growth of multimedia content on the Internet. Recent multimodal hashing research mainly aims at learning the compact binary codes to preserve semantic information given by labels. The overwhelming majority of these methods are similarity preserving approaches which approximate the pairwise similarity matrix with Hamming distances between the to-be-learnt binary hash codes. However, these methods ignore the discriminative property in the hash learning process, which leaves hash codes from different classes indistinguishable and therefore reduces the accuracy and robustness of the nearest neighbor search. To this end, we present a novel multimodal hashing method, named multimodal discriminative binary embedding (MDBE), which focuses on learning discriminative hash codes. First, the proposed method formulates the hash function learning in terms of classification, where the binary codes generated by the learned hash functions are expected to be discriminative. And then, it exploits the label information to discover the shared structures inside heterogeneous data. Finally, the learned structures are preserved for hash codes to produce similar binary codes in the same class. Hence, the proposed MDBE can preserve both discriminability and similarity for hash codes, and will enhance retrieval accuracy. Thorough experiments on benchmark data sets demonstrate that the proposed method achieves excellent accuracy and competitive computational efficiency compared with the state-of-the-art methods for the large-scale cross-modal retrieval task.
FSH: fast spaced seed hashing exploiting adjacent hashes.
Girotto, Samuele; Comin, Matteo; Pizzi, Cinzia
2018-01-01
Patterns with wildcards in specified positions, namely spaced seeds, are increasingly used instead of k-mers in many bioinformatics applications that require indexing, querying and rapid similarity search, as they can provide better sensitivity. Many of these applications require computing the hashing of each position in the input sequences with respect to the given spaced seed, or to multiple spaced seeds. While the hashing of k-mers can be rapidly computed by exploiting the large overlap between consecutive k-mers, spaced seed hashing is usually computed from scratch for each position in the input sequence, thus resulting in slower processing. The method proposed in this paper, fast spaced-seed hashing (FSH), exploits the similarity of the hash values of spaced seeds computed at adjacent positions in the input sequence. In our experiments we compute the hash for each position of metagenomics reads from several datasets, with respect to different spaced seeds. We also propose a generalized version of the algorithm for the simultaneous computation of multiple spaced seed hashes. In the experiments, our algorithm can compute the hashing values of spaced seeds with a speedup, with respect to the traditional approach, between 1.6× and 5.3×, depending on the structure of the spaced seed. Spaced seed hashing is a routine task for several bioinformatics applications. FSH allows this task to be performed efficiently and raises the question of whether other hashings can be exploited to further improve the speedup. This has the potential of major impact in the field, making spaced seed applications not only accurate, but also faster and more efficient. The software FSH is freely available for academic use at: https://bitbucket.org/samu661/fsh/overview.
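The task being accelerated can be stated concretely: for a spaced seed such as 1101, the hash at position i is built only from the sequence characters under the seed's 1s. A naive version is shown below; FSH's reuse of bits from hashes at adjacent positions is not replicated in this sketch.

    ENCODE = {"A": 0, "C": 1, "G": 2, "T": 3}   # 2 bits per base

    def spaced_seed_hashes(seq: str, seed: str):
        """Naive hashing: at each position, keep only the bases under the seed's 1s."""
        ones = [j for j, s in enumerate(seed) if s == "1"]
        for i in range(len(seq) - len(seed) + 1):
            h = 0
            for j in ones:
                h = (h << 2) | ENCODE[seq[i + j]]
            yield i, h

    for pos, h in spaced_seed_hashes("ACGTACGT", "1101"):
        print(pos, format(h, "06b"))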
Model-based vision using geometric hashing
NASA Astrophysics Data System (ADS)
Akerman, Alexander, III; Patton, Ronald
1991-04-01
The Geometric Hashing technique developed by the NYU Courant Institute has been applied to various automatic target recognition applications. In particular, I-MATH has extended the hashing algorithm to perform automatic target recognition of synthetic aperture radar (SAR) imagery. For this application, the hashing is performed upon the geometric locations of dominant scatterers. In addition to being a robust model-based matching algorithm -- invariant under translation, scale, and 3D rotations of the target -- hashing is of particular utility because it can still perform effective matching when the target is partially obscured. Moreover, hashing is very amenable to a SIMD parallel processing architecture, and is thus potentially implementable in real time.
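A compressed sketch of classical geometric hashing preprocessing: take each ordered pair of model points as a basis, express the remaining points in that similarity-invariant frame, quantize, and record a (model, basis) entry in a hash table; at recognition time the same transformation of scene points votes into the table. The model points below are hypothetical, and the SAR-specific scatterer handling from the paper is not reproduced.

    from collections import defaultdict
    from itertools import permutations
    import numpy as np

    def frame_coords(p, b0, b1):
        """Coordinates of p in the similarity-invariant frame defined by basis (b0, b1)."""
        e1 = b1 - b0
        e2 = np.array([-e1[1], e1[0]])            # perpendicular axis
        scale = e1 @ e1
        d = p - b0
        return np.array([d @ e1, d @ e2]) / scale

    def build_table(models, grid=0.25):
        table = defaultdict(list)                  # quantized coords -> (model, basis) votes
        for name, pts in models.items():
            for i, j in permutations(range(len(pts)), 2):
                for k, p in enumerate(pts):
                    if k in (i, j):
                        continue
                    key = tuple(np.round(frame_coords(p, pts[i], pts[j]) / grid).astype(int))
                    table[key].append((name, (i, j)))
        return table

    models = {"target": np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0], [0.2, 0.4]])}
    table = build_table(models)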
Digital camera with apparatus for authentication of images produced from an image file
NASA Technical Reports Server (NTRS)
Friedman, Gary L. (Inventor)
1993-01-01
A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely based upon the private key that digital data encrypted with the private key by the processor may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating at any time the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match, since even one bit change in the image hash will cause the image hash to be totally different from the secure hash.
Hash Bit Selection for Nearest Neighbor Search.
Xianglong Liu; Junfeng He; Shih-Fu Chang
2017-11-01
To overcome the barrier of storage and computation when dealing with gigantic-scale data sets, compact hashing has been studied extensively to approximate the nearest neighbor search. Despite the recent advances, critical design issues remain open in how to select the right features, hashing algorithms, and/or parameter settings. In this paper, we address these by posing an optimal hash bit selection problem, in which an optimal subset of hash bits are selected from a pool of candidate bits generated by different features, algorithms, or parameters. Inspired by the optimization criteria used in existing hashing algorithms, we adopt the bit reliability and their complementarity as the selection criteria that can be carefully tailored for hashing performance in different tasks. Then, the bit selection solution is discovered by finding the best tradeoff between search accuracy and time using a modified dynamic programming method. To further reduce the computational complexity, we employ the pairwise relationship among hash bits to approximate the high-order independence property, and formulate it as an efficient quadratic programming method that is theoretically equivalent to the normalized dominant set problem in a vertex- and edge-weighted graph. Extensive large-scale experiments have been conducted under several important application scenarios of hash techniques, where our bit selection framework can achieve superior performance over both the naive selection methods and the state-of-the-art hashing algorithms, with significant accuracy gains ranging from 10% to 50%, relatively.
Hu, Weiming; Fan, Yabo; Xing, Junliang; Sun, Liang; Cai, Zhaoquan; Maybank, Stephen
2018-09-01
We construct a new efficient near duplicate image detection method using a hierarchical hash code learning neural network and load-balanced locality-sensitive hashing (LSH) indexing. We propose a deep constrained siamese hash coding neural network combined with deep feature learning. Our neural network is able to extract effective features for near duplicate image detection. The extracted features are used to construct a LSH-based index. We propose a load-balanced LSH method to produce load-balanced buckets in the hashing process. The load-balanced LSH significantly reduces the query time. Based on the proposed load-balanced LSH, we design an effective and feasible algorithm for near duplicate image detection. Extensive experiments on three benchmark data sets demonstrate the effectiveness of our deep siamese hash encoding network and load-balanced LSH.
Study of the similarity function in Indexing-First-One hashing
NASA Astrophysics Data System (ADS)
Lai, Y.-L.; Jin, Z.; Goi, B.-M.; Chai, T.-Y.
2017-06-01
The recently proposed Indexing-First-One (IFO) hashing is a technique particularly adapted for eye iris template protection, i.e. IrisCode. However, the Jaccard Similarity (JS) measure that IFO employs, which originates from Min-hashing, has not yet been adequately discussed. In this paper, we explore the nature of JS in the binary domain and further propose a mathematical formulation to generalize the usage of JS, which is subsequently verified by using the CASIA v3-Interval iris database. Our study reveals that the JS applied in IFO hashing is a generalized version of the measure of two input objects used in Min-hashing, corresponding to the case where the coefficient of JS is equal to one. With this understanding, IFO hashing can propagate the useful properties of Min-hashing, i.e. similarity preservation, and is thus favorable for similarity searching or recognition in binary space.
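As background on the Min-hashing connection mentioned above, the sketch below estimates the Jaccard similarity of two sets from MinHash signatures. It is a generic illustration, not the IFO hashing scheme itself; the salted built-in hash stands in for random permutations, and all names are hypothetical.

```python
import random

def minhash_signature(items, num_hashes=128, seed=0):
    """MinHash signature of a set: for each of num_hashes salted hash
    functions, keep the minimum hash value over the set's elements."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(32) for _ in range(num_hashes)]
    return [min(hash((salt, x)) for x in items) for salt in salts]

def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching signature slots approximates |A ∩ B| / |A ∪ B|."""
    matches = sum(a == b for a, b in zip(sig_a, sig_b))
    return matches / len(sig_a)

a = {1, 2, 3, 4, 5, 6}
b = {4, 5, 6, 7, 8}
print(estimated_jaccard(minhash_signature(a), minhash_signature(b)))  # close to 3/8
```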
Linear Subspace Ranking Hashing for Cross-Modal Retrieval.
Li, Kai; Qi, Guo-Jun; Ye, Jun; Hua, Kien A
2017-09-01
Hashing has attracted a great deal of research in recent years due to its effectiveness for the retrieval and indexing of large-scale high-dimensional multimedia data. In this paper, we propose a novel ranking-based hashing framework that maps data from different modalities into a common Hamming space where the cross-modal similarity can be measured using Hamming distance. Unlike existing cross-modal hashing algorithms where the learned hash functions are binary space partitioning functions, such as the sign and threshold function, the proposed hashing scheme takes advantage of a new class of hash functions closely related to rank correlation measures, which are known to be scale-invariant, numerically stable, and highly nonlinear. Specifically, we jointly learn two groups of linear subspaces, one for each modality, so that features' ranking orders in different linear subspaces maximally preserve the cross-modal similarities. We show that the ranking-based hash function has a natural probabilistic approximation which transforms the original highly discontinuous optimization problem into one that can be efficiently solved using simple gradient descent algorithms. The proposed hashing framework is also flexible in the sense that the optimization procedures are not tied up to any specific form of loss function, which is typical for existing cross-modal hashing methods; rather, we can flexibly accommodate different loss functions with minimal changes to the learning steps. We demonstrate through extensive experiments on four widely-used real-world multimodal datasets that the proposed cross-modal hashing method can achieve competitive performance against several state-of-the-art methods with only moderate training and testing time.
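A toy illustration of a ranking-based (rank-correlation-style) hash follows: each hash value is the index of the largest projection within a small linear subspace, which is invariant to positive rescaling of the input. The subspaces here are random for simplicity, whereas the paper learns one group of subspaces per modality; this is an illustrative assumption, not the authors' optimization.

```python
import numpy as np

def ranking_hash(features, subspaces):
    """Ranking-based hash: each hash value is the index of the largest
    projection within a small linear subspace (random here, not learned)."""
    codes = []
    for W in subspaces:                 # W: (d, k), one subspace per hash function
        codes.append(int(np.argmax(features @ W)))
    return codes

rng = np.random.default_rng(0)
subspaces = [rng.normal(size=(128, 4)) for _ in range(16)]   # 16 hash functions
x = rng.normal(size=128)
print(ranking_hash(x, subspaces))       # 16 small integers, invariant to scaling of x
```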
Spherical hashing: binary code embedding with hyperspheres.
Heo, Jae-Pil; Lee, Youngwoon; He, Junfeng; Chang, Shih-Fu; Yoon, Sung-Eui
2015-11-01
Many binary code embedding schemes have been actively studied recently, since they can provide efficient similarity search and compact data representations suitable for handling large scale image databases. Existing binary code embedding techniques encode high-dimensional data by using hyperplane-based hashing functions. In this paper we propose a novel hypersphere-based hashing function, spherical hashing, to map more spatially coherent data points into a binary code compared to hyperplane-based hashing functions. We also propose a new binary code distance function, spherical Hamming distance, tailored for our hypersphere-based binary coding scheme, and design an efficient iterative optimization process to achieve both balanced partitioning for each hash function and independence between hashing functions. Furthermore, we generalize spherical hashing to support various similarity measures defined by kernel functions. Our extensive experiments show that our spherical hashing technique significantly outperforms state-of-the-art techniques based on hyperplanes across various benchmarks with sizes ranging from one to 75 million GIST, BoW and VLAD descriptors. The performance gains are consistent and large, up to 100 percent improvements over the second best among the tested methods. These results confirm the unique merits of using hyperspheres to encode proximity regions in high-dimensional spaces. Finally, our method is intuitive and easy to implement.
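A minimal numerical sketch of the hypersphere encoding and a spherical-Hamming-style distance follows. The pivots and radii are assumed given (random here), whereas the paper learns them with an iterative optimization; the +1 in the denominator is an assumption to avoid division by zero and is not from the paper.

```python
import numpy as np

def spherical_encode(X, pivots, radii):
    """Binary code: bit i is 1 if the point lies inside hypersphere i.

    X: (n, d) data, pivots: (c, d) sphere centers, radii: (c,) thresholds."""
    dists = np.linalg.norm(X[:, None, :] - pivots[None, :, :], axis=2)
    return (dists <= radii).astype(np.uint8)

def spherical_hamming_distance(b1, b2):
    """Hamming distance divided by the number of common 1 bits
    (+1 added here only to guard against empty intersections)."""
    common_ones = np.sum(b1 & b2, axis=-1)
    hamming = np.sum(b1 != b2, axis=-1)
    return hamming / (common_ones + 1)

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 16))
pivots = rng.normal(size=(8, 16))
radii = np.full(8, 4.0)
codes = spherical_encode(X, pivots, radii)
print(spherical_hamming_distance(codes[0], codes[1]))
```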
Binary Multidimensional Scaling for Hashing.
Huang, Yameng; Lin, Zhouchen
2017-10-04
Hashing is a useful technique for fast nearest neighbor search due to its low storage cost and fast query speed. Unsupervised hashing aims at learning binary hash codes for the original features so that the pairwise distances can be best preserved. While several works have targeted this task, the results are not satisfactory, mainly due to oversimplified models. In this paper, we propose a unified and concise unsupervised hashing framework, called Binary Multidimensional Scaling (BMDS), which is able to learn the hash code for distance preservation in both batch and online modes. In the batch mode, unlike most existing hashing methods, we do not need to simplify the model by predefining the form of the hash map. Instead, we learn the binary codes directly based on the pairwise distances among the normalized original features by Alternating Minimization. This enables a stronger expressive power of the hash map. In the online mode, we consider the holistic distance relationship between the current query example and those we have already learned, rather than only focusing on the current data chunk. It is useful when the data come in a streaming fashion. Empirical results show that while being efficient for training, our algorithm outperforms state-of-the-art methods by a large margin in terms of distance preservation, which is practical for real-world applications.
Achaete-Scute Homolog 1 Expression Controls Cellular Differentiation of Neuroblastoma
Kasim, Mumtaz; Heß, Vicky; Scholz, Holger; Persson, Pontus B.; Fähling, Michael
2016-01-01
Neuroblastoma, the major cause of infant cancer deaths, results from fast proliferation of undifferentiated neuroblasts. Treatment of high-risk neuroblastoma includes differentiation with retinoic acid (RA); however, the resistance of many of these tumors to RA-induced differentiation poses a considerable challenge. Human achaete-scute homolog 1 (hASH1) is a proneural basic helix-loop-helix transcription factor essential for neurogenesis and is often upregulated in neuroblastoma. Here, we identified a novel function for hASH1 in regulating the differentiation phenotype of neuroblastoma cells. Global analysis of 986 human neuroblastoma datasets revealed a negative correlation between hASH1 and neuron differentiation that was independent of the N-myc (MYCN) oncogene. Using RA to induce neuron differentiation in two neuroblastoma cell lines displaying high and low levels of hASH1 expression, we confirmed the link between hASH1 expression and the differentiation defective phenotype, which was reversed by silencing hASH1 or by hypoxic preconditioning. We further show that hASH1 suppresses neuronal differentiation by inhibiting transcription at the RA receptor element. Collectively, our data indicate hASH1 to be key for understanding neuroblastoma resistance to differentiation therapy and pave the way for hASH1-targeted therapies for augmenting the response of neuroblastoma to differentiation therapy. PMID:28066180
Collision analysis of one kind of chaos-based hash function
NASA Astrophysics Data System (ADS)
Xiao, Di; Peng, Wenbing; Liao, Xiaofeng; Xiang, Tao
2010-02-01
In the last decade, various chaos-based hash functions have been proposed. Nevertheless, the corresponding analyses of them lag far behind. In this Letter, we first take a chaos-based hash function proposed very recently in Amin, Faragallah and Abd El-Latif (2009) [11] as a sample to analyze its computational collision problem, and then generalize the construction method of one kind of chaos-based hash function and summarize some precautions for avoiding the collision problem. This is beneficial to future hash function design based on chaos.
2008-08-19
hash of the page: page%d/sha256 is the segment for the SHA256 hash of the page. Bad Sector Management: badsectors, the number of sectors in the image... When a page is written, AFFLIB can automatically compute the page's MD5, SHA-1, and/or SHA256 hash and write an associated segment containing the hash value. The hashes are written into segments themselves, with the segment name being name/sha256, where name is the original segment name and sha256 is the hash algorithm used.
Computing quantum hashing in the model of quantum branching programs
NASA Astrophysics Data System (ADS)
Ablayev, Farid; Ablayev, Marat; Vasiliev, Alexander
2018-02-01
We investigate the branching program complexity of quantum hashing. We consider a quantum hash function that maps elements of a finite field into quantum states. We require that this function is preimage-resistant and collision-resistant. We consider two complexity measures for Quantum Branching Programs (QBP): a number of qubits and a number of computational steps. We show that the quantum hash function can be computed efficiently. Moreover, we prove that such QBP construction is optimal. That is, we prove lower bounds that match the constructed quantum hash function computation.
NASA Astrophysics Data System (ADS)
Berchtold, Waldemar; Schäfer, Marcel; Rettig, Michael; Steinebach, Martin
2014-02-01
3D models and applications are of utmost interest in both science and industry. As their usage increases, so do their number and thereby the challenge of correctly identifying them. Content identification is commonly done by cryptographic hashes. However, these fail as a solution in application scenarios such as computer aided design (CAD), scientific visualization or video games, because even the smallest alteration of the 3D model, e.g. a conversion or compression operation, massively changes the cryptographic hash as well. Therefore, this work presents a robust hashing algorithm for 3D mesh data. The algorithm applies several different bit extraction methods. They are built to resist desired alterations of the model as well as malicious attacks intending to prevent correct allocation. The different bit extraction methods are tested against each other and, as far as possible, the hashing algorithm is compared to the state of the art. The parameters tested are robustness, security and runtime performance as well as the False Acceptance Rate (FAR) and False Rejection Rate (FRR); a probability calculation of hash collision is also included. The introduced hashing algorithm is kept adaptive, e.g. in hash length, to serve as a proper tool for all applications in practice.
Deep classification hashing for person re-identification
NASA Astrophysics Data System (ADS)
Wang, Jiabao; Li, Yang; Zhang, Xiancai; Miao, Zhuang; Tao, Gang
2018-04-01
With the development of surveillance in public spaces, person re-identification becomes more and more important. The large-scale databases call for efficient computation and storage, and hashing is one of the most important techniques for this purpose. In this paper, we propose a new deep classification hashing network by introducing a new binary appropriation layer into traditional ImageNet pre-trained CNN models. It outputs binary appropriate features, which can be easily quantized into binary hash codes for Hamming similarity comparison. Experiments show that our deep hashing method can outperform the state-of-the-art methods on the public CUHK03 and Market1501 datasets.
Implementation of 4-way Superscalar Hash MIPS Processor Using FPGA
NASA Astrophysics Data System (ADS)
Sahib Omran, Safaa; Fouad Jumma, Laith
2018-05-01
Due to the quick advancements in personal communications systems and wireless communications, providing data security has become a more essential subject. This security concern becomes more complicated when next-generation system requirements and real-time computation speed are considered. Hash functions are among the most essential cryptographic primitives and are utilized in many fields for signature authentication and communication integrity. These functions are used to obtain a fixed-size unique fingerprint, or hash value, of an arbitrary-length message. In this paper, Secure Hash Algorithms (SHA) of types SHA-1, SHA-2 (SHA-224, SHA-256) and SHA-3 (BLAKE) are implemented on a Field-Programmable Gate Array (FPGA) in a processor structure. The design is described and implemented using a hardware description language, namely the VHSIC “Very High Speed Integrated Circuit” Hardware Description Language (VHDL). Since the logical operations of the hash types (SHA-1, SHA-224, SHA-256 and SHA-3) are 32-bit, a superscalar Hash Microprocessor without Interlocked Pipelines (MIPS) processor is designed with only the few instructions required to invoke the desired hash algorithms. When the four types of hash algorithms are executed sequentially on the designed processor, the total time required is approximately 342 us with a throughput of 4.8 Mbps, while the time required to execute the same four hash algorithms on the designed four-way superscalar processor is reduced to 237 us with an improved throughput of 5.1 Mbps.
Speaker Linking and Applications using Non-Parametric Hashing Methods
2016-09-08
clustering method based on hashing—canopy clustering. We apply this method to a large corpus of speaker recordings, demonstrate performance tradeoffs... and compare to other hashing methods. Index Terms: speaker recognition, clustering, hashing, locality sensitive hashing. 1. Introduction We assume... speaker in our corpus. Second, given a QBE method, how can we perform speaker clustering—each clustering should be a single speaker, and a cluster should
A strategy to load balancing for non-connectivity MapReduce job
NASA Astrophysics Data System (ADS)
Zhou, Huaping; Liu, Guangzong; Gui, Haixia
2017-09-01
MapReduce has been widely used on large-scale and complex datasets as a distributed programming model. The original hash partitioning function in MapReduce often results in data skew when the data distribution is uneven. To solve the imbalance of data partitioning, we propose a strategy that changes the remaining partitioning index when data are skewed. In the Map phase, we count the amount of data that will be distributed to each reducer; the JobTracker then monitors the global partitioning information and dynamically modifies the original partitioning function according to the data skew model, so that the Partitioner can redirect the partitions that would cause data skew to other reducers with less load in the next partitioning process, eventually balancing the load of each node. Finally, we experimentally compare our method with existing methods on both synthetic and real datasets. The experimental results show that our strategy can solve the problem of data skew with better stability and efficiency than the Hash method and the Sampling method for non-connectivity MapReduce tasks.
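A minimal, non-distributed sketch of the idea follows: count records per partition during the map phase, then redirect the heaviest partition to a lighter reducer. The monitoring, data-skew model, and JobTracker coordination of the actual system are not modeled here, and all names and the redirect rule are hypothetical.

```python
def default_partition(key, num_reducers):
    """Original hash partitioning: reducer index derived from the key's hash."""
    return hash(key) % num_reducers

def build_redirect(partition_counts):
    """Hypothetical monitoring step: map the most loaded partition index
    to the least loaded one, based on per-partition record counts."""
    heaviest = max(partition_counts, key=partition_counts.get)
    lightest = min(partition_counts, key=partition_counts.get)
    return {heaviest: lightest} if heaviest != lightest else {}

def skew_aware_partition(key, num_reducers, redirect):
    """Skew-aware variant: partitions flagged as overloaded are redirected."""
    p = default_partition(key, num_reducers)
    return redirect.get(p, p)

counts = {0: 900, 1: 120, 2: 80}        # example per-partition record counts
redirect = build_redirect(counts)        # {0: 2}
print(skew_aware_partition("some_key", 3, redirect))
```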
Sparse Partial Equilibrium Tables in Chemically Resolved Reactive Flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vitello, P; Fried, L E; Pudliner, B
2003-07-14
The detonation of an energetic material is the result of a complex interaction between kinetic chemical reactions and hydrodynamics. Unfortunately, little is known concerning the detailed chemical kinetics of detonations in energetic materials. CHEETAH uses rate laws to treat species with the slowest chemical reactions, while assuming other chemical species are in equilibrium. CHEETAH supports a wide range of elements and condensed detonation products and can also be applied to gas detonations. A sparse hash table of equation of state values, called the "cache", is used in CHEETAH to enhance the efficiency of kinetic reaction calculations. For large-scale parallel hydrodynamic calculations, CHEETAH uses MPI communication to update the cache. We present here details of the sparse caching model used in CHEETAH. To demonstrate the efficiency of modeling using a sparse cache model we consider detonations in energetic materials.
Sparse Partial Equilibrium Tables in Chemically Resolved Reactive Flow
NASA Astrophysics Data System (ADS)
Vitello, Peter; Fried, Laurence E.; Pudliner, Brian; McAbee, Tom
2004-07-01
The detonation of an energetic material is the result of a complex interaction between kinetic chemical reactions and hydrodynamics. Unfortunately, little is known concerning the detailed chemical kinetics of detonations in energetic materials. CHEETAH uses rate laws to treat species with the slowest chemical reactions, while assuming other chemical species are in equilibrium. CHEETAH supports a wide range of elements and condensed detonation products and can also be applied to gas detonations. A sparse hash table of equation of state values is used in CHEETAH to enhance the efficiency of kinetic reaction calculations. For large-scale parallel hydrodynamic calculations, CHEETAH uses parallel communication to update the cache. We present here details of the sparse caching model used in CHEETAH coupled to an ALE hydrocode. To demonstrate the efficiency of modeling using a sparse cache model we consider detonations in energetic materials.
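The "sparse cache" idea can be illustrated with a generic memoization sketch: quantize the thermodynamic state and reuse previously computed equation-of-state values stored in a hash table. This is not CHEETAH's implementation; the placeholder EOS and grid spacing are assumptions, and MPI-based cache updates are not shown.

```python
class SparseEOSCache:
    """Sparse hash table ("cache") of equation-of-state evaluations.

    States are quantized onto a coarse grid so that nearby (density,
    temperature) queries reuse the same expensive evaluation."""

    def __init__(self, eos_fn, grid=1e-3):
        self.eos_fn = eos_fn
        self.grid = grid
        self.table = {}

    def __call__(self, density, temperature):
        key = (round(density / self.grid), round(temperature / self.grid))
        if key not in self.table:
            self.table[key] = self.eos_fn(density, temperature)  # expensive call
        return self.table[key]

# Example with a placeholder ideal-gas-like pressure as the "expensive" EOS.
cache = SparseEOSCache(lambda rho, T: rho * T * 8.314)
print(cache(1.2345, 300.0), cache(1.2346, 300.0), len(cache.table))
```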
Zhang, Guo-Qiang; Tao, Shiqiang; Xing, Guangming; Mozes, Jeno; Zonjy, Bilal; Lhatoo, Samden D; Cui, Licong
2015-11-10
A unique study identifier serves as a key for linking research data about a study subject without revealing protected health information in the identifier. While sufficient for single-site and limited-scale studies, the use of common unique study identifiers has several drawbacks for large multicenter studies, where thousands of research participants may be recruited from multiple sites. An important property of study identifiers is error tolerance (or validatability), in that inadvertent editing mistakes during their transmission and use will most likely result in invalid study identifiers. This paper introduces a novel method called "Randomized N-gram Hashing (NHash)," for generating unique study identifiers in a distributed and validatable fashion, in multicenter research. NHash has a unique set of properties: (1) it is a pseudonym serving the purpose of linking research data about a study participant for research purposes; (2) it can be generated automatically in a completely distributed fashion with virtually no risk for identifier collision; (3) it incorporates a set of cryptographic hash functions based on N-grams, with a combination of additional encryption techniques such as a shift cipher; (4) it is validatable (error tolerant) in the sense that inadvertent edit errors will mostly result in invalid identifiers. NHash consists of 2 phases. First, an intermediate string using randomized N-gram hashing is generated. This string consists of a collection of N-gram hashes f1, f2, ..., fk. The input for each function fi has 3 components: a random number r, an integer n, and input data m. The result, fi(r, n, m), is an n-gram of m with a starting position s, which is computed as (r mod |m|), where |m| represents the length of m. The output for Step 1 is the concatenation of the sequence f1(r1, n1, m1), f2(r2, n2, m2), ..., fk(rk, nk, mk). In the second phase, the intermediate string generated in Phase 1 is encrypted using techniques such as a shift cipher. The result of the encryption, concatenated with the random number r, is the final NHash study identifier. We performed experiments using a large synthesized dataset comparing NHash with random strings, and demonstrated negligible probability for collision. We implemented NHash for the Center for SUDEP Research (CSR), a National Institute for Neurological Disorders and Stroke-funded Center Without Walls for Collaborative Research in the Epilepsies. This multicenter collaboration involves 14 institutions across the United States and Europe, bringing together extensive and diverse expertise to understand sudden unexpected death in epilepsy patients (SUDEP). The CSR Data Repository has successfully used NHash to link deidentified multimodal clinical data collected in participating CSR institutions, meeting all desired objectives of NHash.
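A minimal sketch of the two phases described above follows. It takes the starting position s = r mod |m| and the shift cipher directly from the abstract, but the wrap-around n-gram, the alphabet, the shift amount, and the example fields are assumptions for illustration only.

```python
def ngram(r, n, m):
    """Phase 1 building block: an n-gram of m starting at s = r mod |m|.

    Wraps around the end of the string so the n-gram is always full length
    (an assumption; the abstract does not specify boundary handling)."""
    s = r % len(m)
    doubled = m + m
    return doubled[s:s + n]

def shift_cipher(text, shift=7, alphabet="ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"):
    """Phase 2: encrypt the intermediate string with a simple shift cipher."""
    idx = {c: i for i, c in enumerate(alphabet)}
    return "".join(alphabet[(idx[c] + shift) % len(alphabet)] for c in text if c in idx)

def nhash(fields, r, lengths, shift=7):
    """Concatenate per-field n-gram hashes, encrypt, and append r."""
    intermediate = "".join(ngram(r, n, m) for n, m in zip(lengths, fields))
    return shift_cipher(intermediate.upper(), shift) + str(r)

# Hypothetical study-subject fields (illustrative only).
print(nhash(["SMITH", "19700101", "CLEVELAND"], r=12345, lengths=[3, 4, 3]))
```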
Learning Discriminative Binary Codes for Large-scale Cross-modal Retrieval.
Xu, Xing; Shen, Fumin; Yang, Yang; Shen, Heng Tao; Li, Xuelong
2017-05-01
Hashing based methods have attracted considerable attention for efficient cross-modal retrieval on large-scale multimedia data. The core problem of cross-modal hashing is how to learn compact binary codes that construct the underlying correlations between heterogeneous features from different modalities. A majority of recent approaches aim at learning hash functions to preserve the pairwise similarities defined by given class labels. However, these methods fail to explicitly explore the discriminative property of class labels during hash function learning. In addition, they usually discard the discrete constraints imposed on the to-be-learned binary codes, and compromise to solve a relaxed problem with quantization to obtain the approximate binary solution. Therefore, the binary codes generated by these methods are suboptimal and less discriminative to different classes. To overcome these drawbacks, we propose a novel cross-modal hashing method, termed discrete cross-modal hashing (DCH), which directly learns discriminative binary codes while retaining the discrete constraints. Specifically, DCH learns modality-specific hash functions for generating unified binary codes, and these binary codes are viewed as representative features for discriminative classification with class labels. An effective discrete optimization algorithm is developed for DCH to jointly learn the modality-specific hash function and the unified binary codes. Extensive experiments on three benchmark data sets highlight the superiority of DCH under various cross-modal scenarios and show its state-of-the-art performance.
NASA Astrophysics Data System (ADS)
Reato, Thomas; Demir, Begüm; Bruzzone, Lorenzo
2017-10-01
This paper presents a novel class sensitive hashing technique in the framework of large-scale content-based remote sensing (RS) image retrieval. The proposed technique aims at representing each image with multi-hash codes, each of which corresponds to a primitive (i.e., land cover class) present in the image. To this end, the proposed method consists of a three-step algorithm. The first step is devoted to characterizing each image by primitive class descriptors. These descriptors are obtained through a supervised approach, which initially extracts the image regions and their descriptors that are then associated with primitives present in the images. This step requires a set of annotated training regions to define primitive classes. A correspondence between the regions of an image and the primitive classes is built based on the probability of each primitive class to be present at each region. All the regions belonging to the specific primitive class with a probability higher than a given threshold are highly representative of that class. Thus, the average value of the descriptors of these regions is used to characterize that primitive. In the second step, the descriptors of primitive classes are transformed into multi-hash codes to represent each image. This is achieved by adapting the kernel-based supervised locality sensitive hashing method to multi-code hashing problems. The first two steps of the proposed technique, unlike the standard hashing methods, allow one to represent each image by a set of primitive class sensitive descriptors and their hash codes. Then, in the last step, the images in the archive that are very similar to a query image are retrieved based on a multi-hash-code-matching scheme. Experimental results obtained on an archive of aerial images confirm the effectiveness of the proposed technique in terms of retrieval accuracy when compared to the standard hashing methods.
Object-Location-Aware Hashing for Multi-Label Image Retrieval via Automatic Mask Learning.
Huang, Chang-Qin; Yang, Shang-Ming; Pan, Yan; Lai, Han-Jiang
2018-09-01
Learning-based hashing is a leading approach of approximate nearest neighbor search for large-scale image retrieval. In this paper, we develop a deep supervised hashing method for multi-label image retrieval, in which we propose to learn a binary "mask" map that can identify the approximate locations of objects in an image, so that we use this binary "mask" map to obtain length-limited hash codes which mainly focus on an image's objects but ignore the background. The proposed deep architecture consists of four parts: 1) a convolutional sub-network to generate effective image features; 2) a binary "mask" sub-network to identify image objects' approximate locations; 3) a weighted average pooling operation based on the binary "mask" to obtain feature representations and hash codes that pay most attention to foreground objects but ignore the background; and 4) the combination of a triplet ranking loss designed to preserve relative similarities among images and a cross entropy loss defined on image labels. We conduct comprehensive evaluations on four multi-label image data sets. The results indicate that the proposed hashing method achieves superior performance gains over the state-of-the-art supervised or unsupervised hashing baselines.
A cryptographic hash function based on chaotic network automata
NASA Astrophysics Data System (ADS)
Machicao, Jeaneth; Bruno, Odemir M.
2017-12-01
Chaos theory has been used to develop several cryptographic methods relying on the pseudo-random properties extracted from simple nonlinear systems such as cellular automata (CA). Cryptographic hash functions (CHF) are commonly used to check data integrity. A CHF “compresses” an arbitrarily long message (input) into a much smaller representation called a hash value or message digest (output), designed to prevent the ability to reverse the hash value into the original message. This paper proposes a chaos-based CHF inspired by an encryption method based on the chaotic CA rule B1357-S2468. Here, we propose a hybrid model that combines CA and networks, called network automata (CNA), whose chaotic spatio-temporal outputs are used to compute a hash value. Following the Merkle and Damgård model of construction, a portion of the message is entered as the initial condition of the network automata, and the remaining parts of the message are iteratively entered to perturb the system. The chaotic network automata shuffle the message using flexible control parameters, so that the generated hash value is highly sensitive to the message. As demonstrated in our experiments, the proposed model has excellent pseudo-randomness and sensitivity properties with acceptable performance when compared to conventional hash functions.
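The Merkle and Damgård model of construction mentioned above can be sketched as follows: pad the message, split it into blocks, and feed each block into a compression function that updates a chaining state. Here a simple FNV-style mixing function stands in for the chaotic network-automata update; the block size, padding, and initial value are assumptions.

```python
def compress(state, block):
    """Placeholder compression function standing in for the chaotic
    network-automata update; mixes each byte into a 64-bit state."""
    for byte in block:
        state = ((state ^ byte) * 0x100000001B3) & 0xFFFFFFFFFFFFFFFF
    return state

def md_hash(message: bytes, block_size=8, iv=0xCBF29CE484222325):
    """Merkle-Damgård iteration: pad, split into blocks, chain the state."""
    padded = message + b"\x80" + b"\x00" * (-(len(message) + 1) % block_size)
    state = iv
    for i in range(0, len(padded), block_size):
        state = compress(state, padded[i:i + block_size])
    return format(state, "016x")

print(md_hash(b"hello world"))
print(md_hash(b"hello worle"))   # a tiny change yields a very different digest
```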
DOE Office of Scientific and Technical Information (OSTI.GOV)
Draelos, Timothy John; Dautenhahn, Nathan; Schroeppel, Richard Crabtree
The security of the widely-used cryptographic hash function SHA1 has been impugned. We have developed two replacement hash functions. The first, SHA1X, is a drop-in replacement for SHA1. The second, SANDstorm, has been submitted as a candidate to the NIST-sponsored SHA3 Hash Function competition.
Hash function based on chaotic map lattices.
Wang, Shihong; Hu, Gang
2007-06-01
A new hash function system, based on coupled chaotic map dynamics, is suggested. By combining floating point computation of chaos and some simple algebraic operations, the system reaches very high bit confusion and diffusion rates, and this enables the system to have desired statistical properties and strong collision resistance. The chaos-based hash function has its advantages for high security and fast performance, and it serves as one of the most highly competitive candidates for practical applications of hash function for software realization and secure information communications in computer networks.
Digital data storage systems, computers, and data verification methods
Groeneveld, Bennett J.; Austad, Wayne E.; Walsh, Stuart C.; Herring, Catherine A.
2005-12-27
Digital data storage systems, computers, and data verification methods are provided. According to a first aspect of the invention, a computer includes an interface adapted to couple with a dynamic database; and processing circuitry configured to provide a first hash from digital data stored within a portion of the dynamic database at an initial moment in time, to provide a second hash from digital data stored within the portion of the dynamic database at a subsequent moment in time, and to compare the first hash and the second hash.
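A minimal sketch, using Python's hashlib, of the verification flow just described: hash a portion of the dynamic database at one moment, hash it again later, and compare. The serialization and ordering choices are assumptions, not part of the patented apparatus.

```python
import hashlib

def snapshot_hash(records):
    """Hash a portion of a (dynamic) database: here, a list of rows
    serialized in a canonical order so the digest is reproducible."""
    h = hashlib.sha256()
    for row in sorted(records):
        h.update(repr(row).encode())
    return h.hexdigest()

db_portion = [("id1", "value-a"), ("id2", "value-b")]
first_hash = snapshot_hash(db_portion)    # at the initial moment in time
# ... time passes, possibly with unauthorized modifications ...
second_hash = snapshot_hash(db_portion)   # at a subsequent moment in time
print("unchanged" if first_hash == second_hash else "data has been altered")
```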
Hash function based on chaotic map lattices
NASA Astrophysics Data System (ADS)
Wang, Shihong; Hu, Gang
2007-06-01
A new hash function system, based on coupled chaotic map dynamics, is suggested. By combining floating point computation of chaos and some simple algebraic operations, the system reaches very high bit confusion and diffusion rates, and this enables the system to have desired statistical properties and strong collision resistance. The chaos-based hash function has its advantages for high security and fast performance, and it serves as one of the most highly competitive candidates for practical applications of hash function for software realization and secure information communications in computer networks.
On the balanced quantum hashing
NASA Astrophysics Data System (ADS)
Ablayev, F.; Ablayev, M.; Vasiliev, A.
2016-02-01
In the paper we define a notion of a resistant quantum hash function which combines a notion of pre-image (one-way) resistance and the notion of collision resistance. In the quantum setting one-way resistance property and collision resistance property are correlated: the “more” a quantum function is one-way resistant the “less” it is collision resistant and vice versa. We present an explicit quantum hash function which is “balanced” one-way resistant and collision resistant and demonstrate how to build a large family of balanced quantum hash functions.
Fully Integrated Passive UHF RFID Tag for Hash-Based Mutual Authentication Protocol.
Mikami, Shugo; Watanabe, Dai; Li, Yang; Sakiyama, Kazuo
2015-01-01
Passive radio-frequency identification (RFID) tag has been used in many applications. While the RFID market is expected to grow, concerns about security and privacy of the RFID tag should be overcome for the future use. To overcome these issues, privacy-preserving authentication protocols based on cryptographic algorithms have been designed. However, to the best of our knowledge, evaluation of the whole tag, which includes an antenna, an analog front end, and a digital processing block, that runs authentication protocols has not been studied. In this paper, we present an implementation and evaluation of a fully integrated passive UHF RFID tag that runs a privacy-preserving mutual authentication protocol based on a hash function. We design a single chip including the analog front end and the digital processing block. We select a lightweight hash function supporting 80-bit security strength and a standard hash function supporting 128-bit security strength. We show that when the lightweight hash function is used, the tag completes the protocol with a reader-tag distance of 10 cm. Similarly, when the standard hash function is used, the tag completes the protocol with the distance of 8.5 cm. We discuss the impact of the peak power consumption of the tag on the distance of the tag due to the hash function.
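The abstract does not specify the protocol messages, so the following is a generic hash-based mutual authentication exchange (fresh nonces plus a shared secret), shown only to illustrate the kind of computation the tag's digital block performs; the message format, secret handling, and names are assumptions, not the evaluated protocol.

```python
import hashlib
import os

SHARED_SECRET = b"tag-and-reader-secret"    # provisioned on both sides (assumed)

def h(*parts):
    return hashlib.sha256(b"|".join(parts)).digest()

# Reader -> tag: a fresh challenge.
reader_nonce = os.urandom(16)

# Tag -> reader: its own nonce plus a response binding both nonces to the secret.
tag_nonce = os.urandom(16)
tag_response = h(SHARED_SECRET, reader_nonce, tag_nonce)

# Reader verifies the tag, then proves knowledge of the secret back to the tag.
assert tag_response == h(SHARED_SECRET, reader_nonce, tag_nonce)
reader_response = h(SHARED_SECRET, tag_nonce, reader_nonce)

# Tag verifies the reader, completing mutual authentication.
assert reader_response == h(SHARED_SECRET, tag_nonce, reader_nonce)
print("mutual authentication succeeded")
```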
Perl Modules for Constructing Iterators
NASA Technical Reports Server (NTRS)
Tilmes, Curt
2009-01-01
The Iterator Perl Module provides a general-purpose framework for constructing iterator objects within Perl, and a standard API for interacting with those objects. Iterators are an object-oriented design pattern where a description of a series of values is used in a constructor. Subsequent queries can request values in that series. These Perl modules build on the standard Iterator framework and provide iterators for some other types of values. Iterator::DateTime constructs iterators from DateTime objects or Date::Parse descriptions and ICal/RFC 2445 style re-currence descriptions. It supports a variety of input parameters, including a start to the sequence, an end to the sequence, an Ical/RFC 2445 recurrence describing the frequency of the values in the series, and a format description that can refine the presentation manner of the DateTime. Iterator::String constructs iterators from string representations. This module is useful in contexts where the API consists of supplying a string and getting back an iterator where the specific iteration desired is opaque to the caller. It is of particular value to the Iterator::Hash module which provides nested iterations. Iterator::Hash constructs iterators from Perl hashes that can include multiple iterators. The constructed iterators will return all the permutations of the iterations of the hash by nested iteration of embedded iterators. A hash simply includes a set of keys mapped to values. It is a very common data structure used throughout Perl programming. The Iterator:: Hash module allows a hash to include strings defining iterators (parsed and dispatched with Iterator::String) that are used to construct an overall series of hash values.
Zhang, Guo-Qiang; Tao, Shiqiang; Xing, Guangming; Mozes, Jeno; Zonjy, Bilal; Lhatoo, Samden D
2015-01-01
Background A unique study identifier serves as a key for linking research data about a study subject without revealing protected health information in the identifier. While sufficient for single-site and limited-scale studies, the use of common unique study identifiers has several drawbacks for large multicenter studies, where thousands of research participants may be recruited from multiple sites. An important property of study identifiers is error tolerance (or validatability), in that inadvertent editing mistakes during their transmission and use will most likely result in invalid study identifiers. Objective This paper introduces a novel method called "Randomized N-gram Hashing (NHash)," for generating unique study identifiers in a distributed and validatable fashion, in multicenter research. NHash has a unique set of properties: (1) it is a pseudonym serving the purpose of linking research data about a study participant for research purposes; (2) it can be generated automatically in a completely distributed fashion with virtually no risk for identifier collision; (3) it incorporates a set of cryptographic hash functions based on N-grams, with a combination of additional encryption techniques such as a shift cipher; (4) it is validatable (error tolerant) in the sense that inadvertent edit errors will mostly result in invalid identifiers. Methods NHash consists of 2 phases. First, an intermediate string using randomized N-gram hashing is generated. This string consists of a collection of N-gram hashes f1, f2, ..., fk. The input for each function fi has 3 components: a random number r, an integer n, and input data m. The result, fi(r, n, m), is an n-gram of m with a starting position s, which is computed as (r mod |m|), where |m| represents the length of m. The output for Step 1 is the concatenation of the sequence f1(r1, n1, m1), f2(r2, n2, m2), ..., fk(rk, nk, mk). In the second phase, the intermediate string generated in Phase 1 is encrypted using techniques such as a shift cipher. The result of the encryption, concatenated with the random number r, is the final NHash study identifier. Results We performed experiments using a large synthesized dataset comparing NHash with random strings, and demonstrated negligible probability for collision. We implemented NHash for the Center for SUDEP Research (CSR), a National Institute for Neurological Disorders and Stroke-funded Center Without Walls for Collaborative Research in the Epilepsies. This multicenter collaboration involves 14 institutions across the United States and Europe, bringing together extensive and diverse expertise to understand sudden unexpected death in epilepsy patients (SUDEP). Conclusions The CSR Data Repository has successfully used NHash to link deidentified multimodal clinical data collected in participating CSR institutions, meeting all desired objectives of NHash. PMID:26554419
System using data compression and hashing adapted for use for multimedia encryption
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coffland, Douglas R
2011-07-12
A system and method is disclosed for multimedia encryption. Within the system of the present invention, a data compression module receives and compresses a media signal into a compressed data stream. A data acquisition module receives and selects a set of data from the compressed data stream. And, a hashing module receives and hashes the set of data into a keyword. The method of the present invention includes the steps of compressing a media signal into a compressed data stream; selecting a set of data from the compressed data stream; and hashing the set of data into a keyword.
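A toy sketch of the compress-select-hash pipeline described in this patent abstract follows. zlib and SHA-256 stand in for the unspecified compression and hashing modules, and the every-stride-th-byte selection rule is purely an assumption made for illustration.

```python
import hashlib
import zlib

def media_keyword(media_signal: bytes, stride: int = 7) -> str:
    """Compress the signal, select a subset of the compressed stream
    (here: every stride-th byte, an assumed selection rule), and hash
    the selection into a keyword."""
    compressed = zlib.compress(media_signal)
    selected = compressed[::stride]
    return hashlib.sha256(selected).hexdigest()

print(media_keyword(b"\x00\x01\x02" * 1000))
```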
Practical security and privacy attacks against biometric hashing using sparse recovery
NASA Astrophysics Data System (ADS)
Topcu, Berkay; Karabat, Cagatay; Azadmanesh, Matin; Erdogan, Hakan
2016-12-01
Biometric hashing is a cancelable biometric verification method that has received research interest recently. This method can be considered as a two-factor authentication method which combines a personal password (or secret key) with a biometric to obtain a secure binary template which is used for authentication. We present novel practical security and privacy attacks against biometric hashing when the attacker is assumed to know the user's password in order to quantify the additional protection due to biometrics when the password is compromised. We present four methods that can reconstruct a biometric feature and/or the image from a hash and one method which can find the closest biometric data (i.e., face image) from a database. Two of the reconstruction methods are based on 1-bit compressed sensing signal reconstruction for which the data acquisition scenario is very similar to biometric hashing. Previous literature introduced simple attack methods, but we show that we can achieve higher level of security threats using compressed sensing recovery techniques. In addition, we present privacy attacks which reconstruct a biometric image which resembles the original image. We quantify the performance of the attacks using detection error tradeoff curves and equal error rates under advanced attack scenarios. We show that conventional biometric hashing methods suffer from high security and privacy leaks under practical attacks, and we believe more advanced hash generation methods are necessary to avoid these attacks.
Fully Integrated Passive UHF RFID Tag for Hash-Based Mutual Authentication Protocol
Mikami, Shugo; Watanabe, Dai; Li, Yang; Sakiyama, Kazuo
2015-01-01
Passive radio-frequency identification (RFID) tag has been used in many applications. While the RFID market is expected to grow, concerns about security and privacy of the RFID tag should be overcome for the future use. To overcome these issues, privacy-preserving authentication protocols based on cryptographic algorithms have been designed. However, to the best of our knowledge, evaluation of the whole tag, which includes an antenna, an analog front end, and a digital processing block, that runs authentication protocols has not been studied. In this paper, we present an implementation and evaluation of a fully integrated passive UHF RFID tag that runs a privacy-preserving mutual authentication protocol based on a hash function. We design a single chip including the analog front end and the digital processing block. We select a lightweight hash function supporting 80-bit security strength and a standard hash function supporting 128-bit security strength. We show that when the lightweight hash function is used, the tag completes the protocol with a reader-tag distance of 10 cm. Similarly, when the standard hash function is used, the tag completes the protocol with the distance of 8.5 cm. We discuss the impact of the peak power consumption of the tag on the distance of the tag due to the hash function. PMID:26491714
Digital Camera with Apparatus for Authentication of Images Produced from an Image File
NASA Technical Reports Server (NTRS)
Friedman, Gary L. (Inventor)
1996-01-01
A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely related to the private key that digital data encrypted with the private key may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The authenticating apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match. Other techniques to address time-honored methods of deception, such as attaching false captions or inducing forced perspectives, are included.
9 CFR 319.303 - Corned beef hash.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 9 Animals and Animal Products 2 2013-01-01 2013-01-01 false Corned beef hash. 319.303 Section 319.303 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... Products § 319.303 Corned beef hash. (a) “Corned Beef Hash” is the semi-solid food product in the form of a...
9 CFR 319.303 - Corned beef hash.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 9 Animals and Animal Products 2 2014-01-01 2014-01-01 false Corned beef hash. 319.303 Section 319.303 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... Products § 319.303 Corned beef hash. (a) “Corned Beef Hash” is the semi-solid food product in the form of a...
9 CFR 319.303 - Corned beef hash.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 9 Animals and Animal Products 2 2012-01-01 2012-01-01 false Corned beef hash. 319.303 Section 319.303 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... Products § 319.303 Corned beef hash. (a) “Corned Beef Hash” is the semi-solid food product in the form of a...
9 CFR 319.303 - Corned beef hash.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Corned beef hash. 319.303 Section 319.303 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... Products § 319.303 Corned beef hash. (a) “Corned Beef Hash” is the semi-solid food product in the form of a...
Design and implementation of a privacy preserving electronic health record linkage tool in Chicago
Cashy, John P; Jackson, Kathryn L; Pah, Adam R; Goel, Satyender; Boehnke, Jörn; Humphries, John Eric; Kominers, Scott Duke; Hota, Bala N; Sims, Shannon A; Malin, Bradley A; French, Dustin D; Walunas, Theresa L; Meltzer, David O; Kaleba, Erin O; Jones, Roderick C; Galanter, William L
2015-01-01
Objective To design and implement a tool that creates a secure, privacy preserving linkage of electronic health record (EHR) data across multiple sites in a large metropolitan area in the United States (Chicago, IL), for use in clinical research. Methods The authors developed and distributed a software application that performs standardized data cleaning, preprocessing, and hashing of patient identifiers to remove all protected health information. The application creates seeded hash code combinations of patient identifiers using a Health Insurance Portability and Accountability Act compliant SHA-512 algorithm that minimizes re-identification risk. The authors subsequently linked individual records using a central honest broker with an algorithm that assigns weights to hash combinations in order to generate high specificity matches. Results The software application successfully linked and de-duplicated 7 million records across 6 institutions, resulting in a cohort of 5 million unique records. Using a manually reconciled set of 11 292 patients as a gold standard, the software achieved a sensitivity of 96% and a specificity of 100%, with a majority of the missed matches accounted for by patients with both a missing social security number and last name change. Using 3 disease examples, it is demonstrated that the software can reduce duplication of patient records across sites by as much as 28%. Conclusions Software that standardizes the assignment of a unique seeded hash identifier merged through an agreed upon third-party honest broker can enable large-scale secure linkage of EHR data for epidemiologic and public health research. The software algorithm can improve future epidemiologic research by providing more comprehensive data given that patients may make use of multiple healthcare systems. PMID:26104741
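A minimal sketch of seeded hashing of cleaned identifiers with SHA-512, the general idea described above, follows. The specific cleaning rules, identifier combinations, seed management, and match weighting used by the actual tool are not public in this abstract and are assumptions here.

```python
import hashlib

def clean(value: str) -> str:
    """Standardized preprocessing: trim, uppercase, drop non-alphanumerics."""
    return "".join(ch for ch in value.strip().upper() if ch.isalnum())

def seeded_hash(seed: str, *identifiers: str) -> str:
    """Seeded SHA-512 over a canonical concatenation of cleaned identifiers."""
    payload = seed + "|" + "|".join(clean(x) for x in identifiers)
    return hashlib.sha512(payload.encode()).hexdigest()

# Several hash combinations per patient allow matching even when one
# identifier (e.g., last name) differs between sites.
seed = "study-specific-seed"                       # agreed upon across sites (assumed)
combo1 = seeded_hash(seed, "Jane", "Doe", "1980-02-01", "123-45-6789")
combo2 = seeded_hash(seed, "Jane", "1980-02-01", "123-45-6789")   # without last name
print(combo1[:16], combo2[:16])
```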
Design and implementation of a privacy preserving electronic health record linkage tool in Chicago.
Kho, Abel N; Cashy, John P; Jackson, Kathryn L; Pah, Adam R; Goel, Satyender; Boehnke, Jörn; Humphries, John Eric; Kominers, Scott Duke; Hota, Bala N; Sims, Shannon A; Malin, Bradley A; French, Dustin D; Walunas, Theresa L; Meltzer, David O; Kaleba, Erin O; Jones, Roderick C; Galanter, William L
2015-09-01
To design and implement a tool that creates a secure, privacy preserving linkage of electronic health record (EHR) data across multiple sites in a large metropolitan area in the United States (Chicago, IL), for use in clinical research. The authors developed and distributed a software application that performs standardized data cleaning, preprocessing, and hashing of patient identifiers to remove all protected health information. The application creates seeded hash code combinations of patient identifiers using a Health Insurance Portability and Accountability Act compliant SHA-512 algorithm that minimizes re-identification risk. The authors subsequently linked individual records using a central honest broker with an algorithm that assigns weights to hash combinations in order to generate high specificity matches. The software application successfully linked and de-duplicated 7 million records across 6 institutions, resulting in a cohort of 5 million unique records. Using a manually reconciled set of 11 292 patients as a gold standard, the software achieved a sensitivity of 96% and a specificity of 100%, with a majority of the missed matches accounted for by patients with both a missing social security number and last name change. Using 3 disease examples, it is demonstrated that the software can reduce duplication of patient records across sites by as much as 28%. Software that standardizes the assignment of a unique seeded hash identifier merged through an agreed upon third-party honest broker can enable large-scale secure linkage of EHR data for epidemiologic and public health research. The software algorithm can improve future epidemiologic research by providing more comprehensive data given that patients may make use of multiple healthcare systems.
Teoh, Andrew B J; Goh, Alwyn; Ngo, David C L
2006-12-01
Biometric analysis for identity verification is becoming a widespread reality. Such implementations necessitate large-scale capture and storage of biometric data, which raises serious issues in terms of data privacy and (if such data is compromised) identity theft. These problems stem from the essential permanence of biometric data, which (unlike secret passwords or physical tokens) cannot be refreshed or reissued if compromised. Our previously presented biometric-hash framework prescribes the integration of external (password or token-derived) randomness with user-specific biometrics, resulting in bitstring outputs with security characteristics (i.e., noninvertibility) comparable to cryptographic ciphers or hashes. The resultant BioHashes are hence cancellable, i.e., straightforwardly revoked and reissued (via refreshed password or reissued token) if compromised. BioHashing furthermore enhances recognition effectiveness, which is explained in this paper as arising from the Random Multispace Quantization (RMQ) of biometric and external random inputs.
Compact binary hashing for music retrieval
NASA Astrophysics Data System (ADS)
Seo, Jin S.
2014-03-01
With the huge volume of music clips available for protection, browsing, and indexing, there is increased attention to retrieving the information content of music archives. Music-similarity computation is an essential building block for browsing, retrieval, and indexing of digital music archives. In practice, as the number of songs available for searching and indexing increases, the storage cost in retrieval systems becomes a serious problem. This paper deals with the storage problem by extending the supervector concept with binary hashing. We utilize similarity-preserving binary embedding in generating a hash code from the supervector of each music clip. In particular, we compare the performance of various binary hashing methods for music retrieval tasks on the widely-used genre dataset and an in-house singer dataset. Through the evaluation, we find an effective way of generating hash codes for music similarity estimation which improves the retrieval performance.
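One simple way to realize a similarity-preserving binary embedding of supervectors is sign-of-random-projection hashing; the sketch below uses that method for illustration only and is not necessarily the best-performing method among those compared in the paper.

```python
import numpy as np

def binary_embed(supervectors, num_bits=64, seed=0):
    """Similarity-preserving binary embedding: sign of random projections
    of each supervector (one simple choice of binary hashing method)."""
    rng = np.random.default_rng(seed)
    dim = supervectors.shape[1]
    projections = rng.normal(size=(dim, num_bits))
    return (supervectors @ projections > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.sum(a != b))

rng = np.random.default_rng(1)
clips = rng.normal(size=(3, 512))        # supervectors of three music clips (synthetic)
codes = binary_embed(clips)
print(hamming(codes[0], codes[1]), hamming(codes[0], codes[2]))
```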
Cormode, Graham; Dasgupta, Anirban; Goyal, Amit; Lee, Chi Hoon
2018-01-01
Many modern applications of AI such as web search, mobile browsing, image processing, and natural language processing rely on finding similar items from a large database of complex objects. Due to the very large scale of data involved (e.g., users' queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a. LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance, suitable for deployment in very large scale settings. The experimental results demonstrate that our variants of LSH achieve robust performance with better recall compared with "vanilla" LSH, even when using the same amount of space.
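A single-machine sketch of one common LSH variant for cosine similarity (random hyperplanes with banding into buckets) follows. In the setting described above the bucketing would be distributed over Hadoop map/reduce stages, which this sketch does not attempt to show; band and bit counts are assumptions.

```python
import numpy as np
from collections import defaultdict

def sign_hash(vectors, hyperplanes):
    """Random-hyperplane LSH: one bit per hyperplane (cosine-similarity LSH)."""
    return (vectors @ hyperplanes.T > 0).astype(np.uint8)

def band_buckets(codes, bands=4):
    """Group items whose codes agree on at least one band of bits, roughly
    what a map phase would emit as (band_key -> item ids) pairs."""
    buckets = defaultdict(list)
    rows = codes.shape[1] // bands
    for item_id, code in enumerate(codes):
        for b in range(bands):
            key = (b, bytes(code[b * rows:(b + 1) * rows]))
            buckets[key].append(item_id)
    return buckets

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 32))
planes = rng.normal(size=(16, 32))
candidates = band_buckets(sign_hash(data, planes))
print(sum(len(v) > 1 for v in candidates.values()), "buckets with candidate pairs")
```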
Single-pixel non-imaging object recognition by means of Fourier spectrum acquisition
NASA Astrophysics Data System (ADS)
Chen, Huichao; Shi, Jianhong; Liu, Xialin; Niu, Zhouzhou; Zeng, Guihua
2018-04-01
Single-pixel imaging has emerged over recent years as a novel imaging technique with significant application prospects. In this paper, we propose and experimentally demonstrate a scheme that can achieve single-pixel non-imaging object recognition by acquiring the Fourier spectrum. In the experiment, four-step phase-shifting sinusoidal illumination is used to irradiate the object image, the value of the light intensity is measured with a single-pixel detection unit, and the Fourier coefficients of the object image are obtained by a differential measurement. The Fourier coefficients are first cast into binary numbers to obtain the hash value. We propose a new perceptual hashing algorithm, combined with a discrete Fourier transform, to calculate the hash value. The hash distance is obtained by calculating the difference of the hash values between the object image and the contrast images. By setting an appropriate threshold, the object image can be quickly and accurately recognized. The proposed scheme realizes single-pixel non-imaging perceptual hashing object recognition using fewer measurements. Our result might open a new path for realizing object recognition with non-imaging.
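A conceptual sketch of perceptual hashing from low-frequency Fourier coefficients and Hamming-style hash distances follows. It operates on an ordinary image array rather than single-pixel measurements, and the binarization-against-the-median rule and block size are assumptions, not the authors' exact algorithm.

```python
import numpy as np

def fourier_phash(image, k=8):
    """Perceptual hash from the low-frequency Fourier spectrum: keep a k x k
    block of coefficient magnitudes and binarize against their median."""
    spectrum = np.abs(np.fft.fft2(image))[:k, :k].ravel()
    return (spectrum > np.median(spectrum)).astype(np.uint8)

def hash_distance(h1, h2):
    return int(np.sum(h1 != h2))

rng = np.random.default_rng(0)
obj = rng.random((64, 64))
noisy = obj + 0.01 * rng.random((64, 64))   # slightly perturbed version of the object
other = rng.random((64, 64))                # unrelated contrast image
print(hash_distance(fourier_phash(obj), fourier_phash(noisy)),
      hash_distance(fourier_phash(obj), fourier_phash(other)))
```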
ADAPTIVE TETRAHEDRAL GRID REFINEMENT AND COARSENING IN MESSAGE-PASSING ENVIRONMENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hallberg, J.; Stagg, A.
2000-10-01
A grid refinement and coarsening scheme has been developed for tetrahedral and triangular grid-based calculations in message-passing environments. The element adaption scheme is based on an edge bisection of elements marked for refinement by an appropriate error indicator. Hash-table/linked-list data structures are used to store nodal and element information. The grid along inter-processor boundaries is refined and coarsened consistently with the update of these data structures via MPI calls. The parallel adaption scheme has been applied to the solution of a transient, three-dimensional, nonlinear, groundwater flow problem. Timings indicate efficiency of the grid refinement process relative to the flow solver calculations.
Quantin, C; Fassa, M; Coatrieux, G; Riandey, B; Trouessin, G; Allaert, F A
2009-02-01
Compiling individual records which come from different sources remains very important for multicenter epidemiological studies, but at the same time European directives or other national legislation concerning nominal data processing have to be respected. These legal aspects can be satisfied by implementing mechanisms that allow anonymization of patient data (such as hashing techniques). Moreover, for security reasons, official recommendations suggest using different cryptographic keys in combination with a cryptographic hash function for each study. Unfortunately, such an anonymization procedure is in contradiction with the common requirement in public health and biomedical research as it becomes almost impossible to link records from separate data collections where the same entity is not referenced in the same way. Solving this paradox by using methodology based on the combination of hashing and enciphering techniques is the main aim of this article. The method relies on one of the best known hashing functions (the secure hash algorithm) to ensure the anonymity of personal information while providing greater resistance to dictionary attacks, combined with encryption techniques. The originality of the method relies on the way the combination of hashing and enciphering techniques is performed: like in asymmetric encryption, two keys are used but the private key depends on the patient's identity. The combination of hashing and enciphering techniques provides a great improvement in the overall security of the proposed scheme. This methodology makes the stored data available for use in the field of public health for the benefit of patients, while respecting legal security requirements.
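The article's scheme combines hashing with encryption where one of the two keys depends on the patient's identity; the sketch below shows only a simplified keyed-hash reading of that idea (an identity-derived key combined with a study-specific key) and is not the authors' exact construction.

```python
import hashlib
import hmac

def anonymize(identity: str, study_key: bytes) -> str:
    """Two-layer keyed hashing: a key derived from the patient's identity is
    combined with a study-specific key (an illustrative reading of the
    described hashing-plus-enciphering combination, not the exact scheme)."""
    identity_key = hashlib.sha256(identity.encode()).digest()   # depends on the patient
    inner = hmac.new(identity_key, identity.encode(), hashlib.sha256).digest()
    return hmac.new(study_key, inner, hashlib.sha256).hexdigest()

# The same patient yields the same pseudonym within a study, while different
# study keys give unlinkable pseudonyms across studies.
print(anonymize("DOE JANE 1980-02-01", b"study-A-key"))
print(anonymize("DOE JANE 1980-02-01", b"study-B-key"))
```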
Microbiological quality of five potato products obtained at retail markets.
Duran, A P; Swartzentruber, A; Lanier, J M; Wentz, B A; Schwab, A H; Barnard, R J; Read, R B
1982-01-01
The microbiological quality of frozen hash brown potatoes, dried hash brown potatoes with onions, frozen french fried potatoes, dried instant mashed potatoes, and potato salad was determined by a national sampling at the retail level. A wide range of results was obtained, with most sampling units of each product having excellent microbiological quality. Geometric mean aerobic plate counts were as follows: dried hash brown potatoes, 270/g; frozen hash brown potatoes with onions, 580/g; frozen french fried potatoes, 78/g; dried instant mashed potatoes, 1.1 x 10(3)/g; and potato salad, 3.6 x 10(3)/g. Mean values of coliforms, Escherichia coli, and Staphylococcus aureus were less than 10/g. PMID:6758695
Side-information-dependent correlation channel estimation in hash-based distributed video coding.
Deligiannis, Nikos; Barbarien, Joeri; Jacobs, Marc; Munteanu, Adrian; Skodras, Athanassios; Schelkens, Peter
2012-04-01
In the context of low-cost video encoding, distributed video coding (DVC) has recently emerged as a potential candidate for uplink-oriented applications. This paper builds on a concept of correlation channel (CC) modeling, which expresses the correlation noise as being statistically dependent on the side information (SI). Compared with classical side-information-independent (SII) noise modeling adopted in current DVC solutions, it is theoretically proven that side-information-dependent (SID) modeling improves the Wyner-Ziv coding performance. Anchored in this finding, this paper proposes a novel algorithm for online estimation of the SID CC parameters based on already decoded information. The proposed algorithm enables bit-plane-by-bit-plane successive refinement of the channel estimation leading to progressively improved accuracy. Additionally, the proposed algorithm is included in a novel DVC architecture that employs a competitive hash-based motion estimation technique to generate high-quality SI at the decoder. Experimental results corroborate our theoretical gains and validate the accuracy of the channel estimation algorithm. The performance assessment of the proposed architecture shows remarkable and consistent coding gains over a germane group of state-of-the-art distributed and standard video codecs, even under strenuous conditions, i.e., large groups of pictures and highly irregular motion content.
Forensic hash for multimedia information
NASA Astrophysics Data System (ADS)
Lu, Wenjun; Varna, Avinash L.; Wu, Min
2010-01-01
Digital multimedia such as images and videos are prevalent on today's internet and cause significant social impact, which can be evidenced by the proliferation of social networking sites with user generated contents. Due to the ease of generating and modifying images and videos, it is critical to establish trustworthiness for online multimedia information. In this paper, we propose novel approaches to perform multimedia forensics using compact side information to reconstruct the processing history of a document. We refer to this as FASHION, standing for Forensic hASH for informatION assurance. Based on the Radon transform and scale space theory, the proposed forensic hash is compact and can effectively estimate the parameters of geometric transforms and detect local tampering that an image may have undergone. Forensic hash is designed to answer a broader range of questions regarding the processing history of multimedia data than the simple binary decision from traditional robust image hashing, and also offers more efficient and accurate forensic analysis than multimedia forensic techniques that do not use any side information.
A more secure parallel keyed hash function based on chaotic neural network
NASA Astrophysics Data System (ADS)
Huang, Zhongquan
2011-08-01
Although various hash functions based on chaos or chaotic neural networks have been proposed, most of them cannot work efficiently in a parallel computing environment. Recently, an algorithm for parallel keyed hash function construction based on a chaotic neural network was proposed [13]. However, there is a strict limitation in this scheme: its secret keys must be nonce numbers. In other words, if the keys are used more than once in this scheme, there will be potential security flaws. In this paper, we analyze the cause of the vulnerability of the original scheme in detail, and then propose the corresponding enhancement measures, which remove the limitation on the secret keys. Theoretical analysis and computer simulation indicate that the modified hash function is more secure and practical than the original one. At the same time, it keeps the parallel merit and satisfies the other performance requirements of a hash function, such as good statistical properties, high message and key sensitivity, and strong collision resistance.
Network-Aware Mechanisms for Tolerating Byzantine Failures in Distributed Systems
2012-01-01
In Digest, we use SHA-256 as the collision-resistant hash function, using the realization of SHA-256 from the OpenSSL toolkit (http://www.openssl.org/).
A Complete and Accurate Ab Initio Repeat Finding Algorithm.
Lian, Shuaibin; Chen, Xinwu; Wang, Peng; Zhang, Xiaoli; Dai, Xianhua
2016-03-01
It has become clear that repetitive sequences have played multiple roles in eukaryotic genome evolution, including increasing genetic diversity through mutation, changes in gene expression and facilitating generation of novel genes. However, identification of repetitive elements can be difficult in the ab initio manner. Currently, several classical ab initio repeat-finding tools have already been presented and compared, but their completeness and accuracy in detecting repeats are rather poor. To this end, we proposed a new ab initio repeat finding tool, named HashRepeatFinder, which is based on hash indexing and word counting. Furthermore, we assessed the performance of HashRepeatFinder against two other well-known tools, RepeatScout and Repeatfinder, on human genome data hg19. The results indicated the following three conclusions: (1) the completeness of HashRepeatFinder is the best among these three compared tools in almost all chromosomes, especially in chr9 (8 times that of RepeatScout, 10 times that of Repeatfinder); (2) in terms of detecting large repeats, HashRepeatFinder also performed best in all chromosomes, especially in chr3 (24 times that of RepeatScout and 250 times that of Repeatfinder) and chr19 (12 times that of RepeatScout and 60 times that of Repeatfinder); (3) in terms of accuracy, HashRepeatFinder can merge the abundant repeats with high accuracy.
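As a rough illustration of the hash-index-and-word-counting idea (a minimal sketch, not HashRepeatFinder itself; the k-mer length and count threshold are arbitrary assumptions), over-represented k-mers collected with a hash table can serve as seeds for candidate repeats:

    from collections import defaultdict

    def repeat_seed_kmers(genome: str, k: int = 16, min_count: int = 10):
        """Hash-index every k-mer of the sequence and keep the over-represented ones."""
        positions = defaultdict(list)
        for i in range(len(genome) - k + 1):
            positions[genome[i:i + k]].append(i)
        # k-mers occurring at least `min_count` times are candidate repeat seeds.
        return {kmer: pos for kmer, pos in positions.items() if len(pos) >= min_count}

    seeds = repeat_seed_kmers("ACGTACGTTT" * 20, k=8, min_count=5)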
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Randy R.; Bass, Robert B.; Kouzes, Richard T.
2003-01-20
This paper provides a brief overview of the implementation of the Advanced Encryption Standard (AES) as a hash function for confirming the identity of software resident on a computer system. The PNNL Software Authentication team chose to use a hash function to confirm software identity on a system for situations where: (1) there is limited time to perform the confirmation and (2) access to the system is restricted to keyboard or thumbwheel input and output can only be displayed on a monitor. PNNL reviewed three popular algorithms: the Secure Hash Algorithm - 1 (SHA-1), the Message Digest - 5 (MD-5), and the Advanced Encryption Standard (AES), and selected the AES to incorporate in the software confirmation tool we developed. This paper gives a brief overview of the SHA-1, MD-5, and the AES and cites references for further detail. It then explains the overall processing steps of the AES to reduce a large amount of generic data-the plain text, such as is present in memory and other data storage media in a computer system, to a small amount of data-the hash digest, which is a mathematically unique representation or signature of the former that could be displayed on a computer's monitor. This paper starts with a simple definition and example to illustrate the use of a hash function. It concludes with a description of how the software confirmation tool uses the hash function to confirm the identity of software on a computer system.
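For illustration only (the PNNL tool builds its digest from an AES-based construction, whereas this sketch substitutes SHA-256 from Python's standard library, and the file path and variable names are hypothetical), the confirmation step amounts to reducing a large binary image to a short digest and comparing it with a recorded value:

    import hashlib

    def software_digest(path: str, chunk_size: int = 1 << 20) -> str:
        """Stream a large binary image through a hash to obtain a short, displayable digest."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    # Confirmation: recompute on the target system and compare with the recorded value.
    # assert software_digest("/opt/tool/firmware.bin") == recorded_digest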
Paradeisos: A perfect hashing algorithm for many-body eigenvalue problems
NASA Astrophysics Data System (ADS)
Jia, C. J.; Wang, Y.; Mendl, C. B.; Moritz, B.; Devereaux, T. P.
2018-03-01
We describe an essentially perfect hashing algorithm for calculating the position of an element in an ordered list, appropriate for the construction and manipulation of many-body Hamiltonian, sparse matrices. Each element of the list corresponds to an integer value whose binary representation reflects the occupation of single-particle basis states for each element in the many-body Hilbert space. The algorithm replaces conventional methods, such as binary search, for locating the elements of the ordered list, eliminating the need to store the integer representation for each element, without increasing the computational complexity. Combined with the "checkerboard" decomposition of the Hamiltonian matrix for distribution over parallel computing environments, this leads to a substantial savings in aggregate memory. While the algorithm can be applied broadly to many-body, correlated problems, we demonstrate its utility in reducing total memory consumption for a series of fermionic single-band Hubbard model calculations on small clusters with progressively larger Hilbert space dimension.
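One standard way to realize such a position calculation, shown here as a hedged sketch rather than the paper's exact bookkeeping, is combinadic ranking: the index of an occupation bit pattern among all patterns with the same particle number follows from binomial coefficients alone, so the ordered list never has to be stored:

    from math import comb

    def state_index(state: int, n_sites: int) -> int:
        """Index of `state` within the value-ordered list of all bit patterns
        sharing its particle number (combinadic ranking, no lookup table)."""
        index, particles = 0, 0
        for site in range(n_sites):
            if (state >> site) & 1:
                particles += 1
                index += comb(site, particles)
        return index

    # The six two-particle states on four sites map onto indices 0..5 in value order.
    states = sorted(s for s in range(16) if bin(s).count("1") == 2)
    assert [state_index(s, 4) for s in states] == list(range(6))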
Genomics-Based Security Protocols: From Plaintext to Cipherprotein
NASA Technical Reports Server (NTRS)
Shaw, Harry; Hussein, Sayed; Helgert, Hermann
2011-01-01
The evolving nature of the internet will require continual advances in authentication and confidentiality protocols. Nature provides some clues as to how this can be accomplished in a distributed manner through molecular biology. Cryptography and molecular biology share certain aspects and operations that allow for a set of unified principles to be applied to problems in either venue. A concept for developing security protocols that can be instantiated at the genomics level is presented. A DNA (Deoxyribonucleic acid) inspired hash code system is presented that utilizes concepts from molecular biology. It is a keyed-Hash Message Authentication Code (HMAC) capable of being used in secure mobile Ad hoc networks. It is targeted for applications without an available public key infrastructure. Mechanics of creating the HMAC are presented as well as a prototype HMAC protocol architecture. Security concepts related to the implementation differences between electronic domain security and genomics domain security are discussed.
2018-01-01
Many modern applications of AI such as web search, mobile browsing, image processing, and natural language processing rely on finding similar items from a large database of complex objects. Due to the very large scale of data involved (e.g., users' queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a. LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance and are suitable for deployment in very large scale settings. The experimental results demonstrate that our variants of LSH achieve robust performance with better recall compared with “vanilla” LSH, even when using the same amount of space. PMID:29346410
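As background for the evaluation, a minimal single-machine sketch of "vanilla" random-hyperplane LSH follows (the distributed Hadoop variants and the paper's optimizations are not reproduced; dimensions, bit counts, and band counts are arbitrary assumptions):

    from collections import defaultdict

    import numpy as np

    def lsh_tables(vectors: np.ndarray, n_bits: int = 16, n_bands: int = 4, seed: int = 0):
        """Sign patterns of random projections become bucket keys; each band is one table."""
        rng = np.random.default_rng(seed)
        planes = rng.standard_normal((vectors.shape[1], n_bits))
        signatures = (vectors @ planes) >= 0          # (n_items, n_bits) boolean matrix
        band = n_bits // n_bands
        tables = [defaultdict(list) for _ in range(n_bands)]
        for idx, sig in enumerate(signatures):
            for b in range(n_bands):
                tables[b][tuple(sig[b * band:(b + 1) * band])].append(idx)
        return tables

    # Candidate near neighbors of a query are the items sharing at least one band bucket.
    tables = lsh_tables(np.random.default_rng(1).standard_normal((1000, 64)))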
Anatomy of a hash-based long read sequence mapping algorithm for next generation DNA sequencing.
Misra, Sanchit; Agrawal, Ankit; Liao, Wei-keng; Choudhary, Alok
2011-01-15
Recently, a number of programs have been proposed for mapping short reads to a reference genome. Many of them are heavily optimized for short-read mapping and hence are very efficient for shorter queries, but that makes them inefficient or not applicable for reads longer than 200 bp. However, many sequencers are already generating longer reads and more are expected to follow. For long read sequence mapping, there are limited options; BLAT, SSAHA2, FANGS and BWA-SW are among the popular ones. However, resequencing and personalized medicine need much faster software to map these long sequencing reads to a reference genome to identify SNPs or rare transcripts. We present AGILE (AliGnIng Long rEads), a hash table based high-throughput sequence mapping algorithm for longer 454 reads that uses diagonal multiple seed-match criteria, customized q-gram filtering and a dynamic incremental search approach among other heuristics to optimize every step of the mapping process. In our experiments, we observe that AGILE is more accurate than BLAT, and comparable to BWA-SW and SSAHA2. For practical error rates (< 5%) and read lengths (200-1000 bp), AGILE is significantly faster than BLAT, SSAHA2 and BWA-SW. Even for the other cases, AGILE is comparable to BWA-SW and several times faster than BLAT and SSAHA2. http://www.ece.northwestern.edu/~smi539/agile.html.
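The core seed-and-vote step of a hash-table mapper can be sketched as follows (an illustrative simplification, not AGILE's diagonal multiple seed-match criteria or q-gram filtering; the q-gram length and candidate count are assumptions):

    from collections import defaultdict

    def build_index(reference: str, q: int = 11):
        """Hash table mapping every q-gram of the reference to its positions."""
        index = defaultdict(list)
        for i in range(len(reference) - q + 1):
            index[reference[i:i + q]].append(i)
        return index

    def candidate_loci(read: str, index, q: int = 11, top: int = 5):
        """Vote on diagonals (ref_pos - read_pos); well-supported diagonals are candidates."""
        votes = defaultdict(int)
        for j in range(len(read) - q + 1):
            for i in index.get(read[j:j + q], ()):
                votes[i - j] += 1
        return sorted(votes.items(), key=lambda kv: -kv[1])[:top]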
Unified Communications: Simplifying DoD Communication Methods
2013-04-18
private key to encrypt the hash. The encrypted hash, together with some other information, such as the hashing algorithm, is known as a digital ... virtual private network (VPN). The use of a VPN would allow users to access corporate data while encrypting traffic. Another layer of protection would ... sign and encrypt emails as well as controlling access to restricted sites. PKI uses a combination of public and private keys for encryption and
Secure Image Hash Comparison for Warhead Verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruillard, Paul J.; Jarman, Kenneth D.; Robinson, Sean M.
2014-06-06
The effort to inspect and verify warheads in the context of possible future arms control treaties is rife with security and implementation issues. In this paper we review prior work on perceptual image hashing for template-based warhead verification. Furthermore, we formalize the notion of perceptual hashes and demonstrate that large classes of such functions are likely not cryptographically secure. We close with a brief discussion of fully homomorphic encryption as an alternative technique.
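For readers unfamiliar with perceptual hashing, the classic average-hash construction below conveys the idea (it is not the hash analyzed in the cited work, and it is not cryptographically secure, which is exactly the concern the paper raises about large classes of such functions):

    import numpy as np

    def average_hash(gray: np.ndarray, hash_size: int = 8) -> int:
        """aHash: block-average a grayscale image, threshold at the mean, pack the bits."""
        h, w = gray.shape
        cropped = gray[:h - h % hash_size, :w - w % hash_size]
        blocks = cropped.reshape(hash_size, cropped.shape[0] // hash_size,
                                 hash_size, cropped.shape[1] // hash_size).mean(axis=(1, 3))
        bits = (blocks > blocks.mean()).flatten()
        return int("".join("1" if b else "0" for b in bits), 2)

    def hamming(a: int, b: int) -> int:
        return bin(a ^ b).count("1")

    # Two renderings of the same scene should land within a few bits of each other,
    # which makes such hashes useful for matching but weak as cryptographic digests.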
Adaptive Bloom Filter: A Space-Efficient Counting Algorithm for Unpredictable Network Traffic
NASA Astrophysics Data System (ADS)
Matsumoto, Yoshihide; Hazeyama, Hiroaki; Kadobayashi, Youki
The Bloom Filter (BF), a space-and-time-efficient hash-coding method, is used as one of the fundamental modules in several network processing algorithms and applications such as route lookups, cache hits, packet classification, per-flow state management or network monitoring. BF is a simple space-efficient randomized data structure used to represent a data set in order to support membership queries. However, BF generates false positives, and cannot count the number of distinct elements. A counting Bloom Filter (CBF) can count the number of distinct elements, but CBF needs more space than BF. We propose an alternative data structure to CBF, which we call an Adaptive Bloom Filter (ABF). Although ABF uses the same-sized bit-vector used in BF, the number of hash functions employed by ABF is dynamically changed to record the number of appearances of each key element. Considering the hash collisions, the multiplicity of each key element in ABF can be estimated from the number of hash functions used to decode the membership of that key element. Although ABF can realize the same functionality as CBF, ABF requires the same memory size as BF. We describe the construction of ABF and IABF (Improved ABF), and provide a mathematical analysis and simulation using Zipf's distribution. Finally, we show that ABF can be used for an unpredictable data set such as real network traffic.
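For reference, a plain Bloom filter can be sketched as below (the ABF proposed in the paper differs in that it varies the number of hash functions per key to encode multiplicity; the table size, hash count, and key format here are arbitrary assumptions):

    import hashlib

    class BloomFilter:
        """Standard Bloom filter: k hashed bit positions per key, no counting."""
        def __init__(self, m_bits: int = 1 << 16, k: int = 4):
            self.m, self.k = m_bits, k
            self.bits = bytearray(m_bits // 8)

        def _positions(self, key: bytes):
            for i in range(self.k):
                digest = hashlib.sha256(i.to_bytes(2, "big") + key).digest()
                yield int.from_bytes(digest[:8], "big") % self.m

        def add(self, key: bytes):
            for p in self._positions(key):
                self.bits[p // 8] |= 1 << (p % 8)

        def __contains__(self, key: bytes) -> bool:
            # May report false positives, never false negatives.
            return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(key))

    bf = BloomFilter()
    bf.add(b"10.0.0.1->10.0.0.2")
    assert b"10.0.0.1->10.0.0.2" in bf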
Bian, Shaoquan; He, Mengmeng; Sui, Junhui; Cai, Hanxu; Sun, Yong; Liang, Jie; Fan, Yujiang; Zhang, Xingdong
2016-04-01
Although disulfide bond crosslinked hyaluronic acid hydrogels have been reported by many research groups, most research has focused on effectively forming hydrogels. However, few researchers have paid attention to the potential significance of controlling hydrogel formation and degradation, improving biocompatibility, reducing exogenous toxicity, and providing convenience for later clinical operations. In this research, novel controllable self-crosslinking smart hydrogels with in-situ gelation properties were prepared from a single component, the thiolated hyaluronic acid derivative (HA-SH), and applied as a three-dimensional scaffold to mimic the native extracellular matrix (ECM) for the culture of fibroblast cells (L929) and chondrocytes. A series of HA-SH hydrogels were prepared depending on different degrees of thiol substitution (ranging from 10 to 60%) and molecular weights of HA (0.1, 0.3 and 1.0 MDa). The gelation time, swelling property and smart degradation behavior of the HA-SH hydrogel were evaluated. The results showed that the gelation and degradation time of the hydrogels could be controlled by adjusting the composition of the HA-SH polymers. The storage modulus of HA-SH hydrogels obtained by dynamic modulus analysis (DMA) could be up to 44.6 kPa. In addition, HA-SH hydrogels were investigated as a three-dimensional scaffold for the culture of fibroblast cells (L929) and chondrocyte cells in vitro and as an injectable hydrogel for delivering chondrocyte cells in vivo. These results illustrated that HA-SH hydrogels with a controllable gelation process, intelligent degradation behavior, excellent biocompatibility and convenient operational characteristics offer potential clinical application capacity for tissue engineering and regenerative medicine. Copyright © 2016 Elsevier B.V. All rights reserved.
1987-10-01
Meatsauce Rissole Potatoes Turkey Nuggets (I/o) Hash Browned Potatoes (I/o) Mashed Potatoes Buttered Mixed Vegetables Toasted Garlic Bread Brussels...Chicken Curry Baked Ham/P/A Sauce Parsley Buttered Potatoes Brown Gravy Hash Browned Potatoes (I/o) Steamed Rice Steamed Carrots Mashed Potatoes...Steak Mashed Potatoes Mashed Potatoes Rissole Potatoes Steamed Rice Hash Browned Potatoes (I/o) Green Beans Steamed Carrots Broccoli w/Cheese sauce
Large-scale Cross-modality Search via Collective Matrix Factorization Hashing.
Ding, Guiguang; Guo, Yuchen; Zhou, Jile; Gao, Yue
2016-09-08
By transforming data into a binary representation, i.e., hashing, we can perform high-speed search with low storage cost, and thus hashing has attracted increasing research interest in recent years. Recently, how to generate hash codes for multimodal data (e.g., images with textual tags, documents with photos, etc.) for large-scale cross-modality search (e.g., searching semantically related images in a database for a document query) has become an important research issue because of the fast growth of multimodal data on the Web. To address this issue, a novel framework for multimodal hashing is proposed, termed Collective Matrix Factorization Hashing (CMFH). The key idea of CMFH is to learn unified hash codes for the different modalities of one multimodal instance in a shared latent semantic space in which different modalities can be effectively connected. Therefore, accurate cross-modality search is supported. Based on the general framework, we extend it in the unsupervised scenario, where it tries to preserve the Euclidean structure, and in the supervised scenario, where it fully exploits the label information of the data. The corresponding theoretical analysis and optimization algorithms are given. We conducted comprehensive experiments on three benchmark datasets for cross-modality search. The experimental results demonstrate that CMFH can significantly outperform several state-of-the-art cross-modality hashing methods, which validates the effectiveness of the proposed CMFH.
Learning binary code via PCA of angle projection for image retrieval
NASA Astrophysics Data System (ADS)
Yang, Fumeng; Ye, Zhiqiang; Wei, Xueqi; Wu, Congzhong
2018-01-01
With the benefits of low storage cost and high query speed, binary code representation methods are widely researched for efficiently retrieving large-scale data. In image hashing methods, learning a hashing function to embed high-dimensional features into Hamming space is a key step for accurate retrieval. Principal component analysis (PCA) is a technique widely used in compact hashing methods; most of these hashing methods adopt PCA projection functions to project the original data into several dimensions of real values, and then each of these projected dimensions is quantized into one bit by thresholding. The variances of different projected dimensions differ, and the real-valued projection produces more quantization error. To avoid the large quantization error of real-valued projection, in this paper we propose to use a cosine-similarity (angle) projection for each dimension; the angle projection can keep the original structure and is more compact with the cosine values. We combined our method with the ITQ hashing algorithm, and extensive experiments on the public CIFAR-10 and Caltech-256 datasets validate the effectiveness of the proposed method.
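The plain PCA-hashing baseline that such methods build on can be sketched as follows (sign thresholding of real-valued projections; the cosine/angle projection and the ITQ rotation described in the paper are not reproduced here, and the bit count is an assumption):

    import numpy as np

    def pca_binary_codes(X: np.ndarray, n_bits: int = 32) -> np.ndarray:
        """Project zero-centered features onto the top principal directions, then take signs."""
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)   # rows of Vt are principal directions
        projected = Xc @ Vt[:n_bits].T
        return (projected > 0).astype(np.uint8)             # one bit per projected dimension

    codes = pca_binary_codes(np.random.default_rng(0).standard_normal((500, 128)))
    # Retrieval then compares codes by Hamming distance instead of Euclidean distance.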
Security in the CernVM File System and the Frontier Distributed Database Caching System
NASA Astrophysics Data System (ADS)
Dykstra, D.; Blomer, J.
2014-06-01
Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.
Data Collision Prevention with Overflow Hashing Technique in Closed Hash Searching Process
NASA Astrophysics Data System (ADS)
Rahim, Robbi; Nurjamiyah; Rafika Dewi, Arie
2017-12-01
Hash search is a method that can be used for various search processes such as search engines, sorting, machine learning, neural networks and so on. In the search process, data collisions may occur, and collisions can be prevented in several ways; one of them is the overflow technique. This technique works with data of varying lengths and can prevent the occurrence of data collisions.
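A minimal sketch of the main-table-plus-overflow layout follows (illustrative only; the paper does not prescribe this exact structure, and the table size is an arbitrary assumption):

    class OverflowHashTable:
        """Closed hashing with a fixed main table; colliding keys spill into an overflow area."""
        def __init__(self, size: int = 101):
            self.size = size
            self.main = [None] * size      # each slot holds at most one (key, value) pair
            self.overflow = []             # collided pairs, scanned linearly

        def put(self, key, value):
            slot = hash(key) % self.size
            if self.main[slot] is None or self.main[slot][0] == key:
                self.main[slot] = (key, value)
            else:
                self.overflow.append((key, value))   # collision: park the pair in overflow

        def get(self, key):
            slot = hash(key) % self.size
            if self.main[slot] is not None and self.main[slot][0] == key:
                return self.main[slot][1]
            for k, v in self.overflow:               # fall back to the overflow area
                if k == key:
                    return v
            return None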
9 CFR 319.303 - Corned beef hash.
Code of Federal Regulations, 2010 CFR
2010-01-01
... combination, are salt, sugar (sucrose or dextrose), spice, and flavoring, including essential oils, oleoresins, and other spice extractives. (b) Corned beef hash may contain one or more of the following optional...
Toward Optimal Manifold Hashing via Discrete Locally Linear Embedding.
Rongrong Ji; Hong Liu; Liujuan Cao; Di Liu; Yongjian Wu; Feiyue Huang
2017-11-01
Binary code learning, also known as hashing, has received increasing attention in large-scale visual search. By transforming high-dimensional features to binary codes, the original Euclidean distance is approximated via Hamming distance. More recently, it is advocated that it is the manifold distance, rather than the Euclidean distance, that should be preserved in the Hamming space. However, it remains an open problem to directly preserve the manifold structure by hashing. In particular, one first needs to build the local linear embedding in the original feature space, and then quantize such embedding to binary codes. Such a two-step coding is problematic and less optimized. Besides, the off-line learning is extremely time- and memory-consuming, since it needs to calculate the similarity matrix of the original data. In this paper, we propose a novel hashing algorithm, termed discrete locality linear embedding hashing (DLLH), which well addresses the above challenges. DLLH directly reconstructs the manifold structure in the Hamming space, learning optimal hash codes that maintain the local linear relationship of data points. To learn discrete locally linear embedding codes, we further propose a discrete optimization algorithm with an iterative parameter-updating scheme. Moreover, an anchor-based acceleration scheme, termed Anchor-DLLH, is further introduced, which approximates the large similarity matrix by the product of two low-rank matrices. Experimental results on three widely used benchmark data sets, i.e., CIFAR10, NUS-WIDE, and YouTube Face, have shown superior performance of the proposed DLLH over the state-of-the-art approaches.
Gene function prediction based on Gene Ontology Hierarchy Preserving Hashing.
Zhao, Yingwen; Fu, Guangyuan; Wang, Jun; Guo, Maozu; Yu, Guoxian
2018-02-23
Gene Ontology (GO) uses structured vocabularies (or terms) to describe the molecular functions, biological roles, and cellular locations of gene products in a hierarchical ontology. GO annotations associate genes with GO terms and indicate that the given gene products carry out the biological functions described by the relevant terms. However, predicting correct GO annotations for genes from the massive set of GO terms defined by GO is a difficult challenge. To address this challenge, we introduce a Gene Ontology Hierarchy Preserving Hashing (HPHash) based semantic method for gene function prediction. HPHash first measures the taxonomic similarity between GO terms. It then uses a hierarchy preserving hashing technique to keep the hierarchical order between GO terms, and to optimize a series of hashing functions to encode massive GO terms via compact binary codes. After that, HPHash utilizes these hashing functions to project the gene-term association matrix into a low-dimensional one and performs semantic similarity based gene function prediction in the low-dimensional space. Experimental results on three model species (Homo sapiens, Mus musculus and Rattus norvegicus) for interspecies gene function prediction show that HPHash performs better than other related approaches and is robust to the number of hash functions. In addition, we also use HPHash as a plugin for BLAST based gene function prediction. From the experimental results, HPHash again significantly improves the prediction performance. The code for HPHash is available at: http://mlda.swu.edu.cn/codes.php?name=HPHash. Copyright © 2018 Elsevier Inc. All rights reserved.
Improving management performance of P2PSIP for mobile sensing in wireless overlays.
Sendín-Raña, Pablo; González-Castaño, Francisco Javier; Gómez-Cuba, Felipe; Asorey-Cacheda, Rafael; Pousada-Carballo, José María
2013-11-08
Future wireless communications are heading towards an all-Internet Protocol (all-IP) design, and will rely on the Session Initiation Protocol (SIP) to manage services, such as voice over IP (VoIP). The centralized architecture of traditional SIP has numerous disadvantages for mobile ad hoc services that may be possibly overcome by advanced peer-to-peer (P2P) technologies initially developed for the Internet. In the context of mobile sensing, P2PSIP protocols facilitate decentralized and fast communications with sensor-enabled terminals. Nevertheless, in order to make P2PSIP protocols feasible in mobile sensing networks, it is necessary to minimize overhead transmissions for signaling purposes, which reduces the battery lifetime. In this paper, we present a solution to improve the management of wireless overlay networks by defining an adaptive algorithm for the calculation of refresh time. The main advantage of the proposed algorithm is that it takes into account new parameters, such as the delay between nodes, and provides satisfactory performance and reliability levels at a much lower management overhead than previous approaches. The proposed solution can be applied to many structured P2P overlays or P2PSIP protocols. We evaluate it with Kademlia-based distributed hash tables (DHT) and dSIP.
2009-12-01
other services for early UNIX systems at Bell Labs. In many UNIX-based systems, the field added to the '/etc/passwd' file to carry GCOS ID information was ... charset, and external. struct options_main { /* Option flags */ opt_flags flags; /* Password files */ struct list_main *passwd; /* Password file ... object PASSWD. It is part of several other data structures. struct PASSWD { int id; char *login; char *passwd_hash; int UID
Self-Supervised Video Hashing With Hierarchical Binary Auto-Encoder.
Song, Jingkuan; Zhang, Hanwang; Li, Xiangpeng; Gao, Lianli; Wang, Meng; Hong, Richang
2018-07-01
Existing video hash functions are built on three isolated stages: frame pooling, relaxed learning, and binarization, which have not adequately explored the temporal order of video frames in a joint binary optimization model, resulting in severe information loss. In this paper, we propose a novel unsupervised video hashing framework dubbed self-supervised video hashing (SSVH), which is able to capture the temporal nature of videos in an end-to-end learning to hash fashion. We specifically address two central problems: 1) how to design an encoder-decoder architecture to generate binary codes for videos and 2) how to equip the binary codes with the ability of accurate video retrieval. We design a hierarchical binary auto-encoder to model the temporal dependencies in videos with multiple granularities, and embed the videos into binary codes with less computations than the stacked architecture. Then, we encourage the binary codes to simultaneously reconstruct the visual content and neighborhood structure of the videos. Experiments on two real-world data sets show that our SSVH method can significantly outperform the state-of-the-art methods and achieve the current best performance on the task of unsupervised video retrieval.
Self-Supervised Video Hashing With Hierarchical Binary Auto-Encoder
NASA Astrophysics Data System (ADS)
Song, Jingkuan; Zhang, Hanwang; Li, Xiangpeng; Gao, Lianli; Wang, Meng; Hong, Richang
2018-07-01
Existing video hash functions are built on three isolated stages: frame pooling, relaxed learning, and binarization, which have not adequately explored the temporal order of video frames in a joint binary optimization model, resulting in severe information loss. In this paper, we propose a novel unsupervised video hashing framework dubbed Self-Supervised Video Hashing (SSVH), that is able to capture the temporal nature of videos in an end-to-end learning-to-hash fashion. We specifically address two central problems: 1) how to design an encoder-decoder architecture to generate binary codes for videos; and 2) how to equip the binary codes with the ability of accurate video retrieval. We design a hierarchical binary autoencoder to model the temporal dependencies in videos with multiple granularities, and embed the videos into binary codes with less computations than the stacked architecture. Then, we encourage the binary codes to simultaneously reconstruct the visual content and neighborhood structure of the videos. Experiments on two real-world datasets (FCVID and YFCC) show that our SSVH method can significantly outperform the state-of-the-art methods and achieve the currently best performance on the task of unsupervised video retrieval.
Enhanced K-means clustering with encryption on cloud
NASA Astrophysics Data System (ADS)
Singh, Iqjot; Dwivedi, Prerna; Gupta, Taru; Shynu, P. G.
2017-11-01
This paper addresses the problem of storing and managing big files in the cloud by implementing hashing on Hadoop for big data, and ensures security while uploading and downloading files. Cloud computing is a term that emphasizes sharing data and facilitates the sharing of infrastructure and resources.[10] Hadoop is open-source software that allows us to store and manage big files according to our needs in the cloud. The k-means clustering algorithm calculates the distance between the centroid of a cluster and the data points. Hashing is an algorithm in which we store and retrieve data with hash keys; the hashing algorithm, called a hash function, is used to map the original data and later to fetch the data stored at the specific key. [17] Encryption is a process that transforms electronic data into a non-readable form known as cipher text. Decryption is the opposite process of encryption: it transforms the cipher text into plain text that the end user can read and understand. For encryption and decryption we use a symmetric-key cryptographic algorithm; in symmetric-key cryptography we use the DES algorithm for secure storage of the files. [3]
Discriminative Projection Selection Based Face Image Hashing
NASA Astrophysics Data System (ADS)
Karabat, Cagatay; Erdogan, Hakan
Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.
Anonymous authenticated communications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beaver, Cheryl L; Schroeppel, Richard C; Snyder, Lillian A
2007-06-19
A method of performing electronic communications between members of a group wherein the communications are authenticated as being from a member of the group and have not been altered, comprising: generating a plurality of random numbers; distributing in a digital medium the plurality of random numbers to the members of the group; publishing a hash value of contents of the digital medium; distributing to the members of the group public-key-encrypted messages each containing a same token comprising a random number; and encrypting a message with a key generated from the token and the plurality of random numbers.
Paradeisos: A perfect hashing algorithm for many-body eigenvalue problems
Jia, C. J.; Wang, Y.; Mendl, C. B.; ...
2017-12-02
Here, we describe an essentially perfect hashing algorithm for calculating the position of an element in an ordered list, appropriate for the construction and manipulation of many-body Hamiltonian, sparse matrices. Each element of the list corresponds to an integer value whose binary representation reflects the occupation of single-particle basis states for each element in the many-body Hilbert space. The algorithm replaces conventional methods, such as binary search, for locating the elements of the ordered list, eliminating the need to store the integer representation for each element, without increasing the computational complexity. Combined with the “checkerboard” decomposition of the Hamiltonian matrix for distribution over parallel computing environments, this leads to a substantial savings in aggregate memory. While the algorithm can be applied broadly to many-body, correlated problems, we demonstrate its utility in reducing total memory consumption for a series of fermionic single-band Hubbard model calculations on small clusters with progressively larger Hilbert space dimension.
Building Application-Related Patient Identifiers: What Solution for a European Country?
Quantin, Catherine; Allaert, François-André; Avillach, Paul; Fassa, Maniane; Riandey, Benoît; Trouessin, Gilles; Cohen, Olivier
2008-01-01
We propose a method utilizing a derived social security number with the same reliability as the social security number. We show that anonymity techniques classically based on unidirectional hash functions (such as the secure hash algorithm SHA-2) can guarantee the security, quality, and reliability of information if these techniques are applied to the social security number. Hashing produces a strictly anonymous code that is always the same for a given individual, and thus enables patient data to be linked. Different solutions are developed and proposed in this article. Hashing the social security number will make it possible to link the information in the personal medical file to other national health information sources with the aim of completing or validating the personal medical record or conducting epidemiological and clinical research. This data linkage would meet the anonymous data requirements of the European directive on data protection. PMID:18401447
Exploiting the HASH Planetary Nebula Research Platform
NASA Astrophysics Data System (ADS)
Parker, Quentin A.; Bojičić, Ivan; Frew, David J.
2017-10-01
The HASH (Hong Kong/ AAO/ Strasbourg/ Hα) planetary nebula research platform is a unique data repository with a graphical interface and SQL capability that offers the community powerful, new ways to undertake Galactic PN studies. HASH currently contains multi-wavelength images, spectra, positions, sizes, morphologies and other data whenever available for 2401 true, 447 likely, and 692 possible Galactic PNe, for a total of 3540 objects. An additional 620 Galactic post-AGB stars, pre-PNe, and PPN candidates are included. All objects were classified and evaluated following the precepts and procedures established and developed by our group over the last 15 years. The complete database contains over 6,700 Galactic objects including the many mimics and related phenomena previously mistaken or confused with PNe. Curation and updating currently occurs on a weekly basis to keep the repository as up to date as possible until the official release of HASH v1 planned in the near future.
Comparison of Grouping Methods for Template Extraction from VA Medical Record Text.
Redd, Andrew M; Gundlapalli, Adi V; Divita, Guy; Tran, Le-Thuy; Pettey, Warren B P; Samore, Matthew H
2017-01-01
We investigate options for grouping templates for the purpose of template identification and extraction from electronic medical records. We sampled a corpus of 1000 documents originating from Veterans Health Administration (VA) electronic medical records. We grouped documents by hashing and binning tokens (Hashed) as well as by the top 5% of tokens identified as important through the term frequency-inverse document frequency metric (TF-IDF). We then compared the approaches on the number of groups with 3 or more documents and the resulting longest common subsequences (LCSs) common to all documents in the group. We found that the Hashed method had a higher success rate for finding LCSs, and longer LCSs, than the TF-IDF method; however, the TF-IDF approach found more groups than the Hashed method and consequently more long sequences, although the average length of the LCSs was lower. In conclusion, each algorithm appears to have areas where it is superior.
Distributed Kernelized Locality-Sensitive Hashing for Faster Image Based Navigation
2015-03-26
Facebook, Google, and Yahoo!. Current methods for image retrieval become problematic when implemented on image datasets that can easily reach billions of ... correlations. Tech industry leaders like Facebook, Google, and Yahoo! sort and index even larger volumes of "big data" daily. When attempting to process ... open source implementation of Google's MapReduce programming paradigm [13]. Using Apache Hadoop, Yahoo
Text image authenticating algorithm based on MD5-hash function and Henon map
NASA Astrophysics Data System (ADS)
Wei, Jinqiao; Wang, Ying; Ma, Xiaoxue
2017-07-01
In order to cater to the evidentiary requirements of text images, this paper proposes a fragile watermarking algorithm based on a hash function and the Henon map. The algorithm divides a text image into blocks, identifies the flippable and non-flippable pixels of every block according to the PSD, generates a watermark from the non-flippable pixels with MD5-Hash, encrypts the watermark with the Henon map, and selects the embedding blocks. The simulation results show that the algorithm, with a good ability in tampering localization, can be used to authenticate and verify the authenticity and integrity of text images.
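The two building blocks named in the abstract can be sketched as follows (a hedged illustration: the block partitioning, PSD-based flippability analysis, and embedding step are omitted, and the Henon parameters, initial state, and quantization rule are assumptions):

    import hashlib

    def henon_keystream(n_bits: int, x: float = 0.1, y: float = 0.3,
                        a: float = 1.4, b: float = 0.3):
        """Iterate the Henon map and quantize the x orbit into a pseudo-random bit stream."""
        bits = []
        for _ in range(n_bits):
            x, y = 1 - a * x * x + y, b * x   # Henon map update
            bits.append(1 if x > 0 else 0)
        return bits

    def encrypted_watermark(non_flippable_pixels: bytes):
        """MD5 digest of the non-flippable pixels, XORed with the chaotic keystream."""
        digest = hashlib.md5(non_flippable_pixels).digest()
        watermark = [(byte >> i) & 1 for byte in digest for i in range(8)]
        keystream = henon_keystream(len(watermark))
        return [w ^ k for w, k in zip(watermark, keystream)]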
Nijboer, Tanja C W; Gebuis, Titia; te Pas, Susan F; van der Smagt, Maarten J
2011-01-01
We investigated whether simultaneous colour contrast affects the synaesthetic colour experience and normal colour percept in a similar manner. We simultaneously presented a target stimulus (i.e. grapheme) and a reference stimulus (i.e. hash). Either the grapheme or the hash was presented on a saturated background of the same or opposite colour category as the synaesthetic colour and the other stimulus on a grey background. In both conditions, grapheme-colour synaesthetes were asked to colour the hash in a colour similar to the synaesthetic colour of the grapheme. Controls that were pair-matched to the synaesthetes performed the same experiment, but for them, the grapheme was presented in the colour induced by the grapheme in synaesthetes. When graphemes were presented on a grey and the hash on a coloured background, a traditional simultaneous colour-contrast effect was found for controls as well as synaesthetes. When graphemes were presented on colour and the hash on grey, the controls again showed a traditional simultaneous colour-contrast effect, whereas the synaesthetes showed the opposite effect. Our results show that synaesthetic colour experiences differ from normal colour perception; both are susceptible to different surrounding colours, but not in a comparable manner. Copyright © 2010 Elsevier Ltd. All rights reserved.
Code of Federal Regulations, 2011 CFR
2011-01-01
9 CFR 319.302 - Hash. Title 9, Animals and Animal Products; Food Safety and Inspection Service, Department of Agriculture; Definitions and Standards of Identity or Composition; Canned, Frozen, or Dehydrated Meat Food Products.
Code of Federal Regulations, 2010 CFR
2010-01-01
9 CFR 319.302 - Hash. Title 9, Animals and Animal Products; Food Safety and Inspection Service, Department of Agriculture; Definitions and Standards of Identity or Composition; Canned, Frozen, or Dehydrated Meat Food Products.
The Speech multi features fusion perceptual hash algorithm based on tensor decomposition
NASA Astrophysics Data System (ADS)
Huang, Y. B.; Fan, M. H.; Zhang, Q. Y.
2018-03-01
With constant progress in modern speech communication technologies, speech data is prone to being attacked by noise or maliciously tampered with. In order to give the speech perceptual hash algorithm strong robustness and high efficiency, this paper puts forward a speech perceptual hash algorithm based on tensor decomposition and multiple features. The algorithm analyses the speech perceptual features by applying wavelet packet decomposition to each speech component. The LPCC, LSP and ISP features of each speech component are extracted to constitute the speech feature tensor. Speech authentication is done by generating the hash values through feature matrix quantification using the mid-value. Experimental results show that the proposed algorithm is robust to content-preserving operations compared with similar algorithms. It is able to resist the attack of common background noise. Also, the algorithm is computationally efficient and is able to meet the real-time requirements of speech communication and complete the speech authentication quickly.
Feature hashing for fast image retrieval
NASA Astrophysics Data System (ADS)
Yan, Lingyu; Fu, Jiarun; Zhang, Hongxin; Yuan, Lu; Xu, Hui
2018-03-01
Currently, research on content-based image retrieval mainly focuses on robust feature extraction. However, due to the exponential growth of online images, it is necessary to consider searching among large-scale image collections, which is very time-consuming and unscalable. Hence, we need to pay much attention to the efficiency of image retrieval. In this paper, we propose a feature hashing method for image retrieval which not only generates compact fingerprints for image representation, but also prevents large semantic loss during the hashing process. To generate the fingerprint, an objective function of semantic loss is constructed and minimized, which combines the influence of both the neighborhood structure of the feature data and the mapping error. Since machine-learning-based hashing effectively preserves the neighborhood structure of the data, it yields visual words with strong discriminability. Furthermore, the generated binary codes lead to a low-complexity image representation, making it efficient and scalable to large-scale databases. Experimental results show the good performance of our approach.
Pereira de Sousa, Irene; Suchaoin, Wongsakorn; Zupančič, Ožbej; Leichner, Christina; Bernkop-Schnürch, Andreas
2016-11-05
It is the aim of this study to synthesize hyaluronic acid (HA) derivatives bearing mucoadhesive properties and showing prolonged stability at pH 7.4 and under oxidative conditions as a liquid dosage form. HA was modified by thiolation with l-cysteine (HA-SH) and by conjugation with a 2-mercaptonicotinic acid-l-cysteine ligand to obtain an S-protected derivative (HA-MNA). The polymers were characterized by determination of thiol group content and mercaptonicotinic acid content. Cytotoxicity, stability and mucoadhesive properties (rheological evaluation and tensile test) of the polymers were evaluated. HA-SH and HA-MNA could be successfully synthesized with a degree of modification of 5% and 9% of the total moles of carboxylic acid groups, respectively. The MTT assay revealed no toxicity for the polymers. HA-SH proved to be unstable both at pH 7.4 and under oxidative conditions, whereas HA-MNA was stable under both conditions. Rheological assessment showed a 52-fold and a 3-fold increase in viscosity for HA-MNA incubated with mucus compared to unmodified HA and HA-SH, respectively. Tensile evaluation carried out with intestinal and conjunctival mucosa confirmed the higher mucoadhesive properties of HA-MNA compared to HA-SH. According to the presented results, HA-MNA appears to be a potent excipient for the formulation of stable liquid dosage forms showing comparatively high mucoadhesive properties. Copyright © 2016 Elsevier Ltd. All rights reserved.
New security infrastructure model for distributed computing systems
NASA Astrophysics Data System (ADS)
Dubenskaya, J.; Kryukov, A.; Demichev, A.; Prikhodko, N.
2016-02-01
In this paper we propose a new approach to setting up a user-friendly and yet secure authentication and authorization procedure in a distributed computing system. The security concept of most heterogeneous distributed computing systems is based on the public key infrastructure along with proxy certificates which are used for rights delegation. In practice, a contradiction between the limited lifetime of the proxy certificates and the unpredictable time of request processing is a big issue for the end users of the system. We propose to use hashes with unlimited lifetime, individual to each request, instead of proxy certificates. Our approach avoids the use of proxy certificates. Thus the security infrastructure of the distributed computing system becomes easier to develop, support and use.
2015-09-01
Extremely Lightweight Intrusion Detection (ELIDe) algorithm on an Android-based mobile device. Our results show that the hashing and inner product ... approximately 2.5 megabits per second (assuming a normal distribution of packet sizes) with no significant packet loss. Subject terms: ELIDe, Android, pcap ... system (OS). To run ELIDe, the current version was ported for use on Android. ... Mobile Device: After ELIDe was ported to the Android mobile
Secure Minutiae-Based Fingerprint Templates Using Random Triangle Hashing
NASA Astrophysics Data System (ADS)
Jin, Zhe; Jin Teoh, Andrew Beng; Ong, Thian Song; Tee, Connie
Due to privacy concern on the widespread use of biometric authentication systems, biometric template protection has gained great attention in the biometric research recently. It is a challenging task to design a biometric template protection scheme which is anonymous, revocable and noninvertible while maintaining acceptable performance. Many methods have been proposed to resolve this problem, and cancelable biometrics is one of them. In this paper, we propose a scheme coined as Random Triangle Hashing which follows the concept of cancelable biometrics in the fingerprint domain. In this method, re-alignment of fingerprints is not required as all the minutiae are translated into a pre-defined 2 dimensional space based on a reference minutia. After that, the proposed Random Triangle hashing method is used to enforce the one-way property (non-invertibility) of the biometric template. The proposed method is resistant to minor translation error and rotation distortion. Finally, the hash vectors are converted into bit-strings to be stored in the database. The proposed method is evaluated using the public database FVC2004 DB1. An EER of less than 1% is achieved by using the proposed method.
In Vitro and Ex Vivo Evaluation of Novel Curcumin-Loaded Excipient for Buccal Delivery.
Laffleur, Flavia; Schmelzle, Franziska; Ganner, Ariane; Vanicek, Stefan
2017-08-01
This study aimed to develop a mucoadhesive polymeric excipient comprising curcumin for buccal delivery. Curcumin encompasses a broad range of benefits such as antioxidant, anti-inflammatory, and chemotherapeutic activity. Hyaluronic acid (HA) as the polymeric excipient was modified by immobilization of thiol-bearing ligands. L-Cysteine (SH) ethyl ester was covalently attached via amide bond formation between cysteine and the carboxylic moiety of hyaluronic acid. Successful synthesis was confirmed by H-NMR and IR spectra. The obtained thiolated polymer, hyaluronic acid ethyl ester (HA-SH), was evaluated in terms of stability, safety, mucoadhesiveness, drug release, and permeation-enhancing properties. HA-SH showed 2.75-fold higher swelling capacity over time in comparison to the unmodified polymer. Furthermore, mucoadhesion increased 3.4-fold in the case of HA-SH and drug release was increased 1.6-fold versus the HA control, respectively. Curcumin-loaded HA-SH exhibits a 4.4-fold higher permeation compared with the respective HA. Taking these outcomes into consideration, the novel curcumin-loaded excipient, namely thiolated hyaluronic acid ethyl ester, appears to be a promising tool for pharyngeal diseases.
Ghosh, Arup; Qin, Shiming; Lee, Jooyeoun; Wang, Gi-Nam
2016-01-01
Operational faults and behavioural anomalies associated with PLC control processes take place often in a manufacturing system. Real time identification of these operational faults and behavioural anomalies is necessary in the manufacturing industry. In this paper, we present an automated tool, called PLC Log-Data Analysis Tool (PLAT) that can detect them by using log-data records of the PLC signals. PLAT automatically creates a nominal model of the PLC control process and employs a novel hash table based indexing and searching scheme to satisfy those purposes. Our experiments show that PLAT is significantly fast, provides real time identification of operational faults and behavioural anomalies, and can execute within a small memory footprint. In addition, PLAT can easily handle a large manufacturing system with a reasonable computing configuration and can be installed in parallel to the data logging system to identify operational faults and behavioural anomalies effectively.
AmpliVar: mutation detection in high-throughput sequence from amplicon-based libraries.
Hsu, Arthur L; Kondrashova, Olga; Lunke, Sebastian; Love, Clare J; Meldrum, Cliff; Marquis-Nicholson, Renate; Corboy, Greg; Pham, Kym; Wakefield, Matthew; Waring, Paul M; Taylor, Graham R
2015-04-01
Conventional means of identifying variants in high-throughput sequencing align each read against a reference sequence, and then call variants at each position. Here, we demonstrate an orthogonal means of identifying sequence variation by grouping the reads as amplicons prior to any alignment. We used AmpliVar to make key-value hashes of sequence reads and group reads as individual amplicons using a table of flanking sequences. Low-abundance reads were removed according to a selectable threshold, and reads above this threshold were aligned as groups, rather than as individual reads, permitting the use of sensitive alignment tools. We show that this approach is more sensitive, more specific, and more computationally efficient than comparable methods for the analysis of amplicon-based high-throughput sequencing data. The method can be extended to enable alignment-free confirmation of variants seen in hybridization capture target-enrichment data. © 2015 WILEY PERIODICALS, INC.
Simultaneous binary hash and features learning for image retrieval
NASA Astrophysics Data System (ADS)
Frantc, V. A.; Makov, S. V.; Voronin, V. V.; Marchuk, V. I.; Semenishchev, E. A.; Egiazarian, K. O.; Agaian, S.
2016-05-01
Content-based image retrieval systems have plenty of applications in the modern world. The most important one is image search by query image or by semantic description. Approaches to this problem are employed in personal photo-collection management systems, web-scale image search engines, medical systems, etc. Automatic analysis of large unlabeled image datasets is virtually impossible without a satisfactory image-retrieval technique. This is the main reason why this kind of automatic image processing has attracted so much attention in recent years. Despite considerable progress in the field, semantically meaningful image retrieval still remains a challenging task. The main issue is the demand to provide reliable results in a short amount of time. This paper addresses the problem with a novel technique for simultaneous learning of global image features and binary hash codes. Our approach maps a pixel-based image representation to the hash-value space while trying to preserve as much of the semantic image content as possible. We use a deep learning methodology to generate image descriptions with the properties of similarity preservation and statistical independence. The main advantage of our approach over existing ones is the ability to fine-tune the retrieval procedure for a very specific application, which allows us to provide better results than general techniques. The framework presented in the paper for data-dependent image hashing is based on two different kinds of neural networks: convolutional neural networks for image description and an autoencoder for feature-to-hash-space mapping. Experimental results confirm that our approach shows promising results in comparison with other state-of-the-art methods.
A new method of cannabis ingestion: the dangers of dabs?
Loflin, Mallory; Earleywine, Mitch
2014-10-01
A new method for administering cannabinoids, called butane hash oil ("dabs"), is gaining popularity among marijuana users. Despite press reports that suggest that "dabbing" is riskier than smoking flower cannabis, no data address whether dabs users experience more problems from use than those who prefer flower cannabis. The present study aimed to gather preliminary information on dabs users and test whether dabs use is associated with more problems than using flower cannabis. Participants (n=357) reported on their history of cannabis use, their experience with hash oil and the process of "dabbing," reasons for choosing "dabs" over other methods, and any problems related to both flower cannabis and butane hash oil. Analyses revealed that using "dabs" created no more problems or accidents than using flower cannabis. Participants did report that "dabs" led to higher tolerance and withdrawal (as defined by the participants), suggesting that the practice might be more likely to lead to symptoms of addiction or dependence. The use of butane hash oil has spread outside of the medical marijuana community, and users view it as significantly more dangerous than other forms of cannabis use. Published by Elsevier Ltd.
Novel Duplicate Address Detection with Hash Function
Song, GuangJia; Ji, ZhenZhou
2016-01-01
Duplicate address detection (DAD) is an important component of the address resolution protocol (ARP) and the neighbor discovery protocol (NDP). DAD determines whether an IP address is in conflict with other nodes. In traditional DAD, the target address to be detected is broadcast through the network, which makes it convenient for malicious nodes to attack. A malicious node can send a spoofing reply to prevent the address configuration of a normal node, and thus a denial-of-service attack is launched. This study proposes a hash method to hide the target address in DAD, which prevents an attacking node from launching destination attacks. If the address of a normal node is identical to the detection address, then its hash value should be the same as the “Hash_64” field in the neighbor solicitation message. Consequently, DAD can be successfully completed. This process is called DAD-h. Simulation results indicate that address configuration using DAD-h has a considerably higher success rate when under attack compared with traditional DAD. Comparative analysis shows that DAD-h does not require third-party devices or considerable computing resources; it also provides a lightweight security resolution. PMID:26991901
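The core idea of hiding the tentative address behind a digest can be illustrated as follows. This is not the paper's message format: the use of SHA-256 truncated to 64 bits, the nonce, and the function name hash_64 are assumptions made for illustration.

```python
import hashlib
import ipaddress

def hash_64(address: str, nonce: bytes) -> bytes:
    """Return a 64-bit digest of a tentative IPv6 address (illustrative only;
    the real DAD-h field layout is defined in the paper)."""
    packed = ipaddress.IPv6Address(address).packed
    return hashlib.sha256(packed + nonce).digest()[:8]

# The detecting node publishes only the digest, never the target address itself.
nonce = b"example-nonce"
probe = hash_64("fe80::1", nonce)

# A node that actually owns fe80::1 recomputes the digest and replies on a match.
print(hash_64("fe80::1", nonce) == probe)   # True -> conflict, configuration fails
print(hash_64("fe80::2", nonce) == probe)   # False -> no conflict for this node
```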
Image encryption algorithm based on multiple mixed hash functions and cyclic shift
NASA Astrophysics Data System (ADS)
Wang, Xingyuan; Zhu, Xiaoqiang; Wu, Xiangjun; Zhang, Yingqian
2018-08-01
This paper proposes a new one-time pad scheme for chaotic image encryption that is based on the multiple mixed hash functions and the cyclic-shift function. The initial value is generated using both information of the plaintext image and the chaotic sequences, which are calculated from the SHA1 and MD5 hash algorithms. The scrambling sequences are generated by the nonlinear equations and logistic map. This paper aims to improve the deficiencies of traditional Baptista algorithms and its improved algorithms. We employ the cyclic-shift function and piece-wise linear chaotic maps (PWLCM), which give each shift number the characteristics of chaos, to diffuse the image. Experimental results and security analysis show that the new scheme has better security and can resist common attacks.
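The derivation of a chaotic initial value from plaintext digests can be sketched roughly as below. The exact mixing of the SHA1 and MD5 outputs and the normalization into the logistic-map domain are assumptions, not the authors' construction.

```python
import hashlib

def initial_value(image_bytes: bytes) -> float:
    """Derive an initial value in (0, 1) for a chaotic map from SHA1 and MD5
    digests of the plaintext image (illustrative mixing only)."""
    h1 = hashlib.sha1(image_bytes).digest()
    h2 = hashlib.md5(image_bytes).digest()
    mixed = bytes(a ^ b for a, b in zip(h1, h2))      # combine the two digests
    return (int.from_bytes(mixed, "big") % 10**8) / 10**8 or 1e-8

def logistic_sequence(x0: float, n: int, mu: float = 3.99):
    """Generate n values of the logistic map x <- mu * x * (1 - x)."""
    values, x = [], x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        values.append(x)
    return values

# A one-bit change in the plaintext yields a completely different chaotic seed.
print(logistic_sequence(initial_value(b"plaintext image bytes"), 3))
print(logistic_sequence(initial_value(b"plaintext image bytes!"), 3))
```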
HASH: the Hong Kong/AAO/Strasbourg Hα planetary nebula database
NASA Astrophysics Data System (ADS)
Parker, Quentin A.; Bojičić, Ivan S.; Frew, David J.
2016-07-01
By incorporating our major recent discoveries with re-measured and verified contents of existing catalogues we provide, for the first time, an accessible, reliable, on-line SQL database for essential, up-to-date information for all known Galactic planetary nebulae (PNe). We have attempted to: i) reliably remove PN mimics/false IDs that have biased previous studies and ii) provide accurate positions, sizes, morphologies, multi-wavelength imagery and spectroscopy. We also provide a link to CDS/Vizier for the archival history of each object and other valuable links to external data. With the HASH interface, users can sift, select, browse, collate, investigate, download and visualise the entire currently known Galactic PNe diversity. HASH provides the community with the most complete and reliable data with which to undertake new science.
Construction of FuzzyFind Dictionary using Golay Coding Transformation for Searching Applications
NASA Astrophysics Data System (ADS)
Kowsari, Kamram
2015-03-01
Searching through a large volume of data is critical for companies, scientists, and search-engine applications because of time and memory complexity. In this paper, a new technique for generating a FuzzyFind Dictionary for text mining is introduced. We map words over the English alphabet into 23-bit vectors (or into more than 23 bits by using additional FuzzyFind Dictionaries), reflecting the presence or absence of particular letters. This representation preserves closeness of word distortions in terms of closeness of the created binary vectors within a Hamming distance of 2 deviations. The paper describes the Golay Coding Transformation Hash Table and how it can be used with a FuzzyFind Dictionary as a new technique for searching through big data. The method has linear time complexity for generating the dictionary, constant time complexity for accessing the data, and linear time complexity (in the number of new data points) for updating with new data sets. The technique is based on searching over letters of the English alphabet, with each segment using 23 bits, and it can also work with more segments as the reference table.
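A minimal sketch of the letter-presence idea (not the Golay coding transformation itself): encode a word as a 23-bit vector recording which letters occur, and compare words by Hamming distance. Folding the letters x, y, and z onto the final bit position is purely an illustrative assumption.

```python
LETTERS = "abcdefghijklmnopqrstuvwxyz"

def word_to_23bit(word: str) -> int:
    """Encode a word as a 23-bit letter-presence vector; x, y and z share the
    final bit position (an illustrative assumption, not the paper's mapping)."""
    bits = 0
    for ch in word.lower():
        if ch in LETTERS:
            bits |= 1 << min(LETTERS.index(ch), 22)
    return bits

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# Small word distortions stay within a small Hamming distance of the original.
print(hamming(word_to_23bit("hashing"), word_to_23bit("hashign")))  # 0: same letters
print(hamming(word_to_23bit("hashing"), word_to_23bit("hasting")))  # 1: one extra letter
```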
Mercury⊕: An evidential reasoning image classifier
NASA Astrophysics Data System (ADS)
Peddle, Derek R.
1995-12-01
MERCURY⊕ is a multisource evidential reasoning classification software system based on the Dempster-Shafer theory of evidence. The design and implementation of this software package are described for improving the classification and analysis of multisource digital image data necessary for addressing advanced environmental and geoscience applications. In the remote-sensing context, the approach provides a more appropriate framework for classifying modern, multisource, and ancillary data sets which may contain a large number of disparate variables with different statistical properties, scales of measurement, and levels of error which cannot be handled using conventional Bayesian approaches. The software uses a nonparametric, supervised approach to classification, and provides a more objective and flexible interface to the evidential reasoning framework using a frequency-based method for computing support values from training data. The MERCURY⊕ software package has been implemented efficiently in the C programming language, with extensive use made of dynamic memory allocation procedures and compound linked list and hash-table data structures to optimize the storage and retrieval of evidence in a Knowledge Look-up Table. The software is complete with a full user interface and runs under the Unix, Ultrix, VAX/VMS, MS-DOS, and Apple Macintosh operating systems. An example of classifying alpine land cover and permafrost active layer depth in northern Canada is presented to illustrate the use and application of these ideas.
Significance of cannabis use to dental practice.
Maloney, William James
2011-04-01
The illicit use of the three main forms of cannabis (marijuana, hash, and hash oil) poses certain obstacles and challenges to the dental professional. There are a number of systemic as well as oral/head and neck manifestations associated with cannabis use. Dentists need to be aware of these manifestations in order to take any precautions and/or make any modifications to the proposed treatment that might be necessary.
Portable Language-Independent Adaptive Translation from OCR. Phase 1
2009-04-01
Report excerpts: the classifiers investigated include brute-force k-Nearest Neighbors (kNN), fast approximate kNN using hashed k-d trees, classification and regression trees, and locality-sensitive methods; further gains were achieved by refinements in ground-truthing protocols; recent algorithmic improvements to the approximate kNN classifier using hashed k-d trees are reported; and discriminative training is noted to have outperformed phonetic HMMs estimated using standard maximum-likelihood (ML) estimation for speech recognition.
Image Hashes as Templates for Verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Janik, Tadeusz; Jarman, Kenneth D.; Robinson, Sean M.
2012-07-17
Imaging systems can provide measurements that confidently assess characteristics of nuclear weapons and dismantled weapon components, and such assessment will be needed in future verification for arms control. Yet imaging is often viewed as too intrusive, raising concern about the ability to protect sensitive information. In particular, the prospect of using image-based templates for verifying the presence or absence of a warhead, or of the declared configuration of fissile material in storage, may be rejected out-of-hand as being too vulnerable to violation of information barrier (IB) principles. Development of a rigorous approach for generating and comparing reduced-information templates from images, and assessing the security, sensitivity, and robustness of verification using such templates, are needed to address these concerns. We discuss our efforts to develop such a rigorous approach based on a combination of image-feature extraction and encryption, utilizing hash functions to confirm proffered declarations, providing strong classified data security while maintaining high confidence for verification. The proposed work is focused on developing secure, robust, tamper-sensitive and automatic techniques that may enable the comparison of non-sensitive hashed image data outside an IB. It is rooted in research on so-called perceptual hash functions for image comparison, at the interface of signal/image processing, pattern recognition, cryptography, and information theory. Such perceptual or robust image hashing (which, strictly speaking, is not truly cryptographic hashing) has extensive application in content authentication and information retrieval, database search, and security assurance. Applying and extending the principles of perceptual hashing to imaging for arms control, we propose techniques that are sensitive to altering, forging and tampering of the imaged object yet robust and tolerant to content-preserving image distortions and noise. Ensuring that the information contained in the hashed image data (available out-of-IB) cannot be used to extract sensitive information about the imaged object is of primary concern. Thus the techniques are characterized by high unpredictability to guarantee security. We will present an assessment of the performance of our techniques with respect to security, sensitivity and robustness on the basis of a methodical and mathematically precise framework.
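Perceptual hashing of the kind referenced above can be illustrated with a toy average-hash over a small grayscale image. This sketch is far weaker than the secure, tamper-sensitive constructions discussed in the abstract and is given only to show the robust-hash idea of tolerating content-preserving distortions.

```python
def average_hash(pixels):
    """Threshold an 8x8 grayscale block at its mean and pack the result into a
    64-bit integer (a toy perceptual hash, not a secure construction)."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a, b):
    return bin(a ^ b).count("1")

# A slightly brightened copy hashes close to the original; a different image does not.
img = [[(r * 8 + c) % 256 for c in range(8)] for r in range(8)]
brightened = [[min(255, v + 2) for v in row] for row in img]
inverted = [[255 - v for v in row] for row in img]
print(hamming(average_hash(img), average_hash(brightened)))   # small distance
print(hamming(average_hash(img), average_hash(inverted)))     # large distance
```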
Deep Hashing for Scalable Image Search.
Lu, Jiwen; Liong, Venice Erin; Zhou, Jie
2017-05-01
In this paper, we propose a new deep hashing (DH) approach to learn compact binary codes for scalable image search. Unlike most existing binary codes learning methods, which usually seek a single linear projection to map each sample into a binary feature vector, we develop a deep neural network to seek multiple hierarchical non-linear transformations to learn these binary codes, so that the non-linear relationship of samples can be well exploited. Our model is learned under three constraints at the top layer of the developed deep network: 1) the loss between the compact real-valued code and the learned binary vector is minimized, 2) the binary codes distribute evenly on each bit, and 3) different bits are as independent as possible. To further improve the discriminative power of the learned binary codes, we extend DH into supervised DH (SDH) and multi-label SDH by including a discriminative term into the objective function of DH, which simultaneously maximizes the inter-class variations and minimizes the intra-class variations of the learned binary codes with the single-label and multi-label settings, respectively. Extensive experimental results on eight widely used image search data sets show that our proposed methods achieve very competitive results with the state-of-the-arts.
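The three top-layer constraints can be written as simple penalty terms. The sketch below evaluates illustrative versions of them with NumPy for a batch of real-valued codes; the exact formulation and weighting in the paper's objective are not reproduced.

```python
import numpy as np

def dh_penalties(H):
    """H: (n_samples, n_bits) real-valued codes from the top layer.
    Returns illustrative versions of the three penalties described above."""
    B = np.sign(H)                                    # target binary vectors
    quant_loss = np.mean((H - B) ** 2)                # 1) binarization loss
    balance = np.mean(np.mean(H, axis=0) ** 2)        # 2) each bit used evenly
    C = (H.T @ H) / len(H)                            # bit correlation matrix
    independence = np.mean((C - np.diag(np.diag(C))) ** 2)   # 3) off-diagonals -> 0
    return quant_loss, balance, independence

H = np.random.randn(128, 48)                          # a batch of 48-bit codes
print(dh_penalties(H))
```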
Streamlined Genome Sequence Compression using Distributed Source Coding
Wang, Shuang; Jiang, Xiaoqian; Chen, Feng; Cui, Lijuan; Cheng, Samuel
2014-01-01
We aim at developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity need at the client side. Based on the variation between source and reference, our protocol will pick adaptively either syndrome coding or hash coding to compress subsequences of changing code length. Our experimental results showed promising performance of the proposed method when compared with the state-of-the-art algorithm (GRS). PMID:25520552
A DRM based on renewable broadcast encryption
NASA Astrophysics Data System (ADS)
Ramkumar, Mahalingam; Memon, Nasir
2005-07-01
We propose an architecture for digital rights management based on a renewable, random key pre-distribution (KPD) scheme, HARPS (hashed random preloaded subsets). The proposed architecture caters for broadcast encryption by a trusted authority (TA) and by "parent" devices (devices used by vendors who manufacture compliant devices) for periodic revocation of devices. The KPD also facilitates broadcast encryption by peer devices, which permits peers to distribute content, and efficiently control access to the content encryption secret using subscription secrets. The underlying KPD also caters for broadcast authentication and mutual authentication of any two devices, irrespective of the vendors manufacturing the device, and thus provides a comprehensive solution for securing interactions between devices taking part in a DRM system.
Felgueiras, Helena P; Wang, L M; Ren, K F; Querido, M M; Jin, Q; Barbosa, M A; Ji, J; Martins, M C L
2017-03-08
Infection and thrombus formation are still the biggest challenges for the success of blood contact medical devices. This work aims at the development of an antimicrobial and hemocompatible biomaterial coating through which selective binding of albumin (a passivant protein) from the bloodstream is promoted and, thus, adsorption of other proteins responsible for bacterial adhesion and thrombus formation can be prevented. Polyurethane (PU) films were coated with hyaluronic acid, an antifouling agent that was previously modified with thiol groups (HA-SH), using polydopamine as the binding agent. Octadecyl acrylate (C18) was used to attract albumin since it resembles the circulating free fatty acids and albumin is a fatty acid transporter. Thiol-ene "click chemistry" was explored for C18 immobilization on HA-SH through a covalent bond between the thiol groups from the HA and the alkene groups from the C18 chains. Surfaces were prepared with different C18 concentrations (0, 5, 10, and 20%) and successful immobilization was demonstrated by scanning electron microscopy (SEM), water contact angle determinations, X-ray photoelectron spectroscopy (XPS) and Fourier transform infrared spectroscopy (FTIR). The ability of surfaces to bind albumin selectively was determined by quartz crystal microbalance with dissipation (QCM-D). Albumin adsorption increased in response to the hydrophobic nature of the surfaces, which augmented with C18 saturation. HA-SH coating reduced albumin adsorption to PU. C18 immobilized onto HA-SH at 5% promoted selective binding of albumin, decreased Staphylococcus aureus adhesion and prevented platelet adhesion and activation to PU in the presence of human plasma. C18/HA-SH coating was established as an innovative and promising strategy to improve the antimicrobial properties and hemocompatibility of any blood contact medical device.
Li, Chun-Ta; Weng, Chi-Yao; Lee, Cheng-Chi; Wang, Chun-Cheng
2015-11-01
To protect patient privacy and ensure authorized access to remote medical services, many remote user authentication schemes for the integrated electronic patient record (EPR) information system have been proposed in the literature. In a recent paper, Das proposed a hash-based remote user authentication scheme using passwords and smart cards for the integrated EPR information system, and claimed that the proposed scheme could resist various passive and active attacks. However, in this paper, we found that Das's authentication scheme is still vulnerable to modification and user duplication attacks. We therefore propose a secure and efficient authentication scheme for the integrated EPR information system based on a lightweight hash function and bitwise exclusive-or (XOR) operations. The security proof and performance analysis show that our new scheme is well-suited to adoption in remote medical healthcare services.
Range image registration based on hash map and moth-flame optimization
NASA Astrophysics Data System (ADS)
Zou, Li; Ge, Baozhen; Chen, Lei
2018-03-01
Over the past decade, evolutionary algorithms (EAs) have been introduced to solve range image registration problems because of their robustness and high precision. However, EA-based range image registration algorithms are time-consuming. To reduce the computational time, an EA-based range image registration algorithm using a hash map and moth-flame optimization is proposed. In this registration algorithm, a hash map is used to avoid over-exploitation in the registration process. Additionally, we present a search equation that is better at exploration and a restart mechanism to avoid being trapped in local minima. We compare the proposed registration algorithm with the registration algorithms using moth-flame optimization and several state-of-the-art EA-based registration algorithms. The experimental results show that the proposed algorithm has a lower computational cost than other algorithms and achieves similar registration precision.
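The role of the hash map, avoiding repeated fitness evaluations of essentially identical candidate transformations, can be sketched as below. The quantization of the transform parameters into a key and the placeholder fitness function are assumptions for illustration.

```python
evaluated = {}   # hash map: quantized transform -> fitness value

def fitness(params):
    """Placeholder registration error for a 6-DoF transform (illustrative)."""
    return sum(p * p for p in params)

def cached_fitness(params, step=0.01):
    """Quantize the transform so that near-identical candidates share one entry,
    which keeps the optimizer from re-evaluating an already explored region."""
    key = tuple(round(p / step) for p in params)
    if key not in evaluated:
        evaluated[key] = fitness(params)
    return evaluated[key]

print(cached_fitness([0.1, 0.2, 0.0, 0.0, 0.0, 0.3]))
print(cached_fitness([0.101, 0.199, 0.0, 0.0, 0.0, 0.3]))   # served from the hash map
print(len(evaluated))                                        # only one evaluation stored
```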
Matching CCD images to a stellar catalog using locality-sensitive hashing
NASA Astrophysics Data System (ADS)
Liu, Bo; Yu, Jia-Zong; Peng, Qing-Yu
2018-02-01
Using a subset of the observed stars in a CCD image to find their corresponding matched stars in a stellar catalog is an important issue in astronomical research. Subgraph isomorphism-based algorithms are the most widely used methods in star catalog matching. When more subgraph features are provided, the CCD images are recognized better. However, when the navigation feature database is large, the method requires more time to match the observed model. To solve this problem, this study investigates and improves subgraph isomorphism matching algorithms. We present an algorithm based on a locality-sensitive hashing technique, which allocates the quadrilateral models in the navigation feature database into different hash buckets and reduces the search range to the bucket in which the observed quadrilateral model is located. Experimental results indicate the effectiveness of our method.
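A rough sketch of the bucketing idea: hash each quadrilateral model's invariant feature vector into a coarse grid cell so that an observed model is compared only against candidates in the same bucket. The grid-cell hash and toy feature vectors are assumptions; the paper's locality-sensitive hash family is not reproduced.

```python
from collections import defaultdict

def lsh_key(features, cell=0.05):
    """Map a geometric feature vector to a coarse grid cell (the bucket key)."""
    return tuple(int(f // cell) for f in features)

# Toy navigation feature database of quadrilateral models.
catalog = {"quad_A": (0.12, 0.48, 0.33),
           "quad_B": (0.90, 0.10, 0.52),
           "quad_C": (0.13, 0.47, 0.34)}
buckets = defaultdict(list)
for name, features in catalog.items():
    buckets[lsh_key(features)].append(name)

# An observed quadrilateral is compared only with the entries in its own bucket.
observed = (0.125, 0.475, 0.335)
print(buckets[lsh_key(observed)])   # candidate matches: quad_A and quad_C
```

A production scheme would normally use several hash tables with shifted or randomized cells so that neighbors landing near a cell boundary are not missed.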
Effective bandwidth guaranteed routing schemes for MPLS traffic engineering
NASA Astrophysics Data System (ADS)
Wang, Bin; Jain, Nidhi
2001-07-01
In this work, we present online algorithms for dynamically routing bandwidth-guaranteed label switched paths (LSPs), where LSP set-up requests (in terms of a pair of ingress and egress routers as well as a bandwidth requirement) arrive one by one and there is no a priori knowledge of future LSP set-up requests. In addition, we consider rerouting of LSPs in this work. Rerouting of LSPs has not been well studied in previous work on LSP routing. The need for LSP rerouting arises in a number of ways: occurrence of faults (link and/or node failures), re-optimization of existing LSPs' routes to accommodate traffic fluctuation, requests with higher priorities, and so on. We formulate bandwidth-guaranteed LSP routing with rerouting capability as a multi-commodity flow problem. The solution to this problem is used as the benchmark for comparing other, computationally less costly algorithms studied in this paper. Furthermore, to utilize network resources more efficiently, we propose online routing algorithms that route bandwidth demands over multiple paths at the ingress router to satisfy customer requests while providing better service survivability. Traffic splitting and distribution over the multiple paths are carefully handled using table-based hashing schemes while the order of packets within a flow is preserved. Preliminary simulations are conducted to show the performance of different design choices and the effectiveness of the rerouting and multi-path routing algorithms in terms of LSP set-up request rejection probability and bandwidth blocking probability.
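The table-based hashing used for traffic splitting can be sketched as follows: hash the flow identifier into a small table whose entries point to member paths, so every packet of a flow follows the same path (preserving order) while flows spread across paths roughly in proportion to the table allocation. The hash function, table size, and path names below are assumptions.

```python
import hashlib

TABLE_SIZE = 64

def build_split_table(paths_with_weights):
    """Fill a 64-entry table with path labels roughly proportional to their weights."""
    table = []
    total = sum(w for _, w in paths_with_weights)
    for path, w in paths_with_weights:
        table.extend([path] * round(TABLE_SIZE * w / total))
    # Pad or trim to exactly TABLE_SIZE entries.
    return (table + [paths_with_weights[-1][0]] * TABLE_SIZE)[:TABLE_SIZE]

def pick_path(flow_5tuple, table):
    """Hash the flow identifier; every packet of one flow maps to the same path."""
    digest = hashlib.md5(repr(flow_5tuple).encode()).digest()
    return table[int.from_bytes(digest[:4], "big") % TABLE_SIZE]

table = build_split_table([("LSP-1", 0.5), ("LSP-2", 0.3), ("LSP-3", 0.2)])
flow = ("10.0.0.1", "10.0.1.7", 6, 34567, 80)   # src, dst, protocol, sport, dport
print(pick_path(flow, table))                   # same answer for every packet of this flow
```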
Learning Short Binary Codes for Large-scale Image Retrieval.
Liu, Li; Yu, Mengyang; Shao, Ling
2017-03-01
Large-scale visual information retrieval has become an active research area in this big data era. Recently, hashing/binary coding algorithms have proved to be effective for scalable retrieval applications. Most existing hashing methods require relatively long binary codes (i.e., over hundreds of bits, sometimes even thousands of bits) to achieve reasonable retrieval accuracies. However, for some realistic and unique applications, such as on wearable or mobile devices, only short binary codes can be used for efficient image retrieval due to the limitation of computational resources or bandwidth on these devices. In this paper, we propose a novel unsupervised hashing approach called min-cost ranking (MCR) specifically for learning powerful short binary codes (i.e., usually code lengths shorter than 100 bits) for scalable image retrieval tasks. By exploring the discriminative ability of each dimension of the data, MCR can generate a one-bit binary code for each dimension and simultaneously rank the discriminative separability of each bit according to the proposed cost function. Only top-ranked bits with minimum cost values are then selected and grouped together to compose the final salient binary codes. Extensive experimental results on large-scale retrieval demonstrate that MCR can achieve performance comparable to state-of-the-art hashing algorithms but with significantly shorter codes, leading to much faster large-scale retrieval.
Hash Functions and Information Theoretic Security
NASA Astrophysics Data System (ADS)
Bagheri, Nasour; Knudsen, Lars R.; Naderi, Majid; Thomsen, Søren S.
Information theoretic security is an important security notion in cryptography as it provides a true lower bound for attack complexities. However, in practice attacks often have a higher cost than the information theoretic bound. In this paper we study the relationship between information theoretic attack costs and real costs. We show that in the information theoretic model, many well-known and commonly used hash functions such as MD5 and SHA-256 fail to be preimage resistant.
A Tree Locality-Sensitive Hash for Secure Software Testing
2017-09-14
Report excerpts: software is tested to find errors, or to look for vulnerabilities that could allow a nefarious actor to use the software against us; results are compared against an equivalent number of feasible paths discovered by Klee. The document introduces the Tree Locality-Sensitive Hash (TLSH), a locality-sensitive hashing scheme for secure software testing, and describes two groups of tests that verify the accuracy and usefulness of TLSH; a final chapter summarizes the contents of the dissertation and lists avenues for future work.
Chen, Xianlai; Fann, Yang C; McAuliffe, Matthew; Vismer, David; Yang, Rong
2017-02-17
As one of the several effective solutions for personal privacy protection, a global unique identifier (GUID) is linked with hash codes that are generated from combinations of personally identifiable information (PII) by a one-way hash algorithm. On the GUID server, no PII is permitted to be stored, and only GUID and hash codes are allowed. The quality of PII entry is critical to the GUID system. The goal of our study was to explore a method of checking questionable entry of PII in this context without using or sending any portion of PII while registering a subject. According to the principle of the GUID system, all possible combination patterns of PII fields were analyzed and used to generate hash codes, which were stored on the GUID server. Based on the matching rules of the GUID system, an error-checking algorithm was developed using set theory to check PII entry errors. We selected 200,000 simulated individuals with randomly-planted errors to evaluate the proposed algorithm. These errors were placed in the required PII fields or optional PII fields. The performance of the proposed algorithm was also tested in the registering system of study subjects. There are 127,700 error-planted subjects, of which 114,464 (89.64%) can still be identified as the previous subject, and the remaining 13,236 (10.36%, 13,236/127,700) are identified as new subjects. As expected, 100% of nonidentified subjects had errors within the required PII fields. The probability that a subject is identified is related to the count and the type of incorrect PII field. For all identified subjects, their errors can be found by the proposed algorithm. The scope of questionable PII fields is also associated with the count and the type of the incorrect PII field. The best situation is to precisely find the exact incorrect PII fields, and the worst situation is to shrink the questionable scope only to a set of 13 PII fields. In the application, the proposed algorithm can give a hint of questionable PII entry and perform as an effective tool. The GUID system has high error tolerance and may correctly identify and associate a subject even with few PII field errors. Correct data entry, especially of the required PII fields, is critical to avoiding false splits. In the context of one-way hash transformation, the questionable input of PII may be identified by applying set theory operators based on the hash codes. The count and the type of incorrect PII fields play an important role in identifying a subject and locating questionable PII fields. ©Xianlai Chen, Yang C Fann, Matthew McAuliffe, David Vismer, Rong Yang. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 17.02.2017.
Local Anesthetic Microencapsulation.
1983-11-04
Report excerpt (front matter): local effects following i.m. injection of microencapsulated lidocaine and etidocaine compared with solution injections; local toxicity of the microcapsule injections; tables summarizing lidocaine (base) and etidocaine-HCl microencapsulation processing and microcapsule size distributions.
26 CFR 1.401(a)(9)-9 - Life expectancy and distribution period tables.
Code of Federal Regulations, 2014 CFR
2014-04-01
Internal Revenue regulations for qualified plans, § 1.401(a)(9)-9 (2014-04-01), Life expectancy and distribution period tables. Q-1. What is the life expectancy and distribution period table under section 401(a)(9)? A-1. The following table, referred to as the Single Life Table, is used for determining the life expectancy.
NASA Astrophysics Data System (ADS)
Kiktenko, E. O.; Pozhar, N. O.; Anufriev, M. N.; Trushechkin, A. S.; Yunusov, R. R.; Kurochkin, Y. V.; Lvovsky, A. I.; Fedorov, A. K.
2018-07-01
Blockchain is a distributed database which is cryptographically protected against malicious modifications. While promising for a wide range of applications, current blockchain platforms rely on digital signatures, which are vulnerable to attacks by means of quantum computers. The same, albeit to a lesser extent, applies to cryptographic hash functions that are used in preparing new blocks, so parties with access to quantum computation would have unfair advantage in procuring mining rewards. Here we propose a possible solution to the quantum era blockchain challenge and report an experimental realization of a quantum-safe blockchain platform that utilizes quantum key distribution across an urban fiber network for information-theoretically secure authentication. These results address important questions about realizability and scalability of quantum-safe blockchains for commercial and governmental applications.
The Amordad database engine for metagenomics.
Behnam, Ehsan; Smith, Andrew D
2014-10-15
Several technical challenges in metagenomic data analysis, including assembling metagenomic sequence data or identifying operational taxonomic units, are both significant and well known. These forms of analysis are increasingly cited as conceptually flawed, given the extreme variation within traditionally defined species and rampant horizontal gene transfer. Furthermore, computational requirements of such analysis have hindered content-based organization of metagenomic data at large scale. In this article, we introduce the Amordad database engine for alignment-free, content-based indexing of metagenomic datasets. Amordad places the metagenome comparison problem in a geometric context, and uses an indexing strategy that combines random hashing with a regular nearest neighbor graph. This framework allows refinement of the database over time by continual application of random hash functions, with the effect of each hash function encoded in the nearest neighbor graph. This eliminates the need to explicitly maintain the hash functions in order for query efficiency to benefit from the accumulated randomness. Results on real and simulated data show that Amordad can support logarithmic query time for identifying similar metagenomes even as the database size reaches into the millions. Source code, licensed under the GNU General Public License (version 3), is freely available for download from http://smithlabresearch.org/amordad. Contact: andrewds@usc.edu. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Stable carbon and oxygen isotope record of central Lake Erie sediments
Tevesz, M.J.S.; Spongberg, A.L.; Fuller, J.A.
1998-01-01
Stable carbon and oxygen isotope data from mollusc aragonite extracted from sediment cores provide new information on the origin and history of sedimentation in the southwestern area of the central basin of Lake Erie. Sediments infilling the Sandusky subbasin consist of three lithologic units overlying glacial deposits. The lowest of these is a soft gray mud overlain by a shell hash layer containing Sphaerium striatinum fragments. A fluid mud unit caps the shell hash layer and extends upwards to the sediment-water interface. New stable isotope data suggest that the soft gray mud unit is of postglacial, rather than proglacial, origin. These data also suggest that the shell hash layer was derived from erosional winnowing of the underlying soft gray mud layer. This winnowing event may have occurred as a result of the Nipissing flood. The Pelee-Lorain moraine, which forms the eastern boundary of the Sandusky subbasin, is an elevated area of till capped by a sand deposit that originated as a beach. The presence of both the shell hash layer and relict beach deposit strengthens the interpretation that the Nipissing flood was a critical event in the development of the southwestern area of the central basin of Lake Erie. This event, which returned drainage from the upper lakes to the Lake Erie basin, was a dominant influence on regional stratigraphy, bathymetry, and depositional setting.
Generating region proposals for histopathological whole slide image retrieval.
Ma, Yibing; Jiang, Zhiguo; Zhang, Haopeng; Xie, Fengying; Zheng, Yushan; Shi, Huaqiang; Zhao, Yu; Shi, Jun
2018-06-01
Content-based image retrieval is an effective method for histopathological image analysis. However, given a database of huge whole slide images (WSIs), acquiring appropriate regions of interest (ROIs) for training is significant and difficult. Moreover, histopathological images can only be annotated by pathologists, resulting in a lack of labeling information. Therefore, it is an important and challenging task to generate ROIs from WSIs and retrieve images with few labels. This paper presents a novel unsupervised region proposing method for histopathological WSI based on Selective Search. Specifically, the WSI is over-segmented into regions which are hierarchically merged until the WSI becomes a single region. Nucleus-oriented similarity measures for region mergence and a Nucleus-Cytoplasm color space for histopathological images are specially defined to generate accurate region proposals. Additionally, we propose a new semi-supervised hashing method for image retrieval. The semantic features of images are extracted with Latent Dirichlet Allocation and transformed into binary hashing codes with Supervised Hashing. The methods are tested on a large-scale multi-class database of breast histopathological WSIs. The results demonstrate that for one WSI, our region proposing method can generate 7.3 thousand contoured regions which fit well with 95.8% of the ROIs annotated by pathologists. The proposed hashing method can retrieve a query image among 136 thousand images in 0.29 s and reach a precision of 91% with only 10% of images labeled. The unsupervised region proposing method can generate regions as predictions of lesions in histopathological WSI. The region proposals can also serve as the training samples to train machine-learning models for image retrieval. The proposed hashing method can achieve fast and precise image retrieval with a small amount of labels. Furthermore, the proposed methods can be potentially applied in online computer-aided-diagnosis systems. Copyright © 2018 Elsevier B.V. All rights reserved.
Improving Sector Hash Carving with Rule-Based and Entropy-Based Non-Probative Block Filters
2015-03-01
Report excerpts: a run of 0x20 bytes exceeds the histogram rule's threshold of 256 instances of a single 4-byte value; the 0x20 bytes are part of an Extensible Metadata Platform (XMP) block consisting of data separated by NULL bytes of padding, so the histogram rule is triggered because the block contains more than 256 instances of a 4-byte value. sdhash can reduce the rate of false positive matches. After characteristic features have been selected, the features are hashed using SHA-1.
Comparison of Spatiotemporal Mapping Techniques for Enormous Etl and Exploitation Patterns
NASA Astrophysics Data System (ADS)
Deiotte, R.; La Valley, R.
2017-10-01
The need to extract, transform, and exploit enormous volumes of spatiotemporal data has exploded with the rise of social media, advanced military sensors, wearables, automotive tracking, etc. However, current methods of spatiotemporal encoding and exploitation simultaneously limit the use of that information and increase computing complexity. Current spatiotemporal encoding methods from Niemeyer and Usher rely on a Z-order space filling curve, a relative of Peano's 1890 space filling curve, for spatial hashing, interleaving temporal hashes to generate a spatiotemporal encoding. However, there exist other space-filling curves that provide different manifold coverings, which could promote better hashing techniques for spatial data and have the potential to map spatiotemporal data without interleaving. The concatenation of Niemeyer's and Usher's techniques provides a highly efficient space-time index. However, other methods have advantages and disadvantages regarding computational cost, efficiency, and utility. This paper explores these methods using data sets ranging in size from 1K to 10M observations and provides a comparison of the methods.
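The Z-order (Morton) interleaving mentioned above can be sketched in a few lines; the 16-bit quantization of latitude and longitude is an assumption chosen for brevity.

```python
def quantize(value, lo, hi, bits=16):
    """Map a coordinate into an unsigned integer of the given bit width."""
    return int((value - lo) / (hi - lo) * ((1 << bits) - 1))

def morton_interleave(x, y, bits=16):
    """Interleave the bits of x and y to produce a Z-order spatial hash."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return code

lat, lon = 38.89, -77.03
code = morton_interleave(quantize(lon, -180, 180), quantize(lat, -90, 90))
print(hex(code))   # nearby coordinates tend to share long prefixes of this code
```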
A Study on the Secure User Profiling Structure and Procedure for Home Healthcare Systems.
Ko, Hoon; Song, MoonBae
2016-01-01
Despite various benefits such as convenience and efficiency, home healthcare systems have some inherent security risks that may cause a serious leak of personal health information. This work presents a Secure User Profiling Structure that holds patient information, including health information. A patient and a hospital both keep this structure and share the updated data. While they share the data and communicate, the data can be leaked. To solve these security problems, a secure communication channel using a hash function and a One-Time Password (OTP) between a client and a hospital should be established, and a dual hash function is used to generate the input value to the OTP. This work presents a dual hash function-based approach to generating the One-Time Password, ensuring a secure communication channel with the secured key. As a result, attackers are unable to decrypt the leaked information because of the secured key; in addition, the proposed method outperforms existing methods in terms of computation cost.
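One reading of the dual hash-function idea is to chain two different hash functions over a shared secret and a moving factor to derive the one-time password. The construction below is only a sketch under that assumption; it is not the authors' protocol and omits key agreement and profile handling.

```python
import hashlib

def dual_hash_otp(shared_key: bytes, counter: int, digits: int = 6) -> str:
    """Derive a one-time password by feeding a SHA-256 digest into SHA-1;
    the specific pairing of hash functions is an assumption for illustration."""
    inner = hashlib.sha256(shared_key + counter.to_bytes(8, "big")).digest()
    outer = hashlib.sha1(inner).digest()
    return str(int.from_bytes(outer[-4:], "big") % 10**digits).zfill(digits)

# Client and hospital share the key and the counter, so both derive the same OTP.
key = b"pre-shared profile key"
print(dual_hash_otp(key, 41))
print(dual_hash_otp(key, 42))   # the next counter value yields a fresh password
```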
Cryptographic framework for document-objects resulting from multiparty collaborative transactions.
Goh, A
2000-01-01
Multiparty transactional frameworks--i.e. Electronic Data Interchange (EDI) or Health Level (HL) 7--often result in composite documents which can be accurately modelled using hyperlinked document-objects. The structural complexity arising from multiauthor involvement and transaction-specific sequencing would be poorly handled by conventional digital signature schemes based on a single evaluation of a one-way hash function and asymmetric cryptography. In this paper we outline the generation of structure-specific authentication hash-trees for the authentication of transactional document-objects, followed by asymmetric signature generation on the hash-tree value. Server-side multi-client signature verification would probably constitute the single most compute-intensive task, hence the motivation for our usage of the Rabin signature protocol which results in significantly reduced verification workloads compared to the more commonly applied Rivest-Shamir-Adleman (RSA) protocol. Data privacy is handled via symmetric encryption of message traffic using session-specific keys obtained through key-negotiation mechanisms based on discrete-logarithm cryptography. Individual client-to-server channels can be secured using a double key-pair variation of Diffie-Hellman (DH) key negotiation, usage of which also enables bidirectional node authentication. The reciprocal server-to-client multicast channel is secured through Burmester-Desmedt (BD) key-negotiation which enjoys significant advantages over the usual multiparty extensions to the DH protocol. The implementation of hash-tree signatures and bi/multidirectional key negotiation results in a comprehensive cryptographic framework for multiparty document-objects satisfying both authentication and data privacy requirements.
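The structure-specific hash tree can be illustrated with a plain Merkle-style construction over the document-objects: leaf hashes are combined pairwise up to a root, and the root is what the asymmetric signature covers. This sketch assumes SHA-256 and a flat list of objects; it ignores the hyperlink structure and the Rabin signature step described above.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(document_objects):
    """Compute a hash-tree root over an ordered list of document-objects."""
    level = [sha256(obj) for obj in document_objects]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

objects = [b"order section", b"lab results", b"billing record"]
print(merkle_root(objects).hex())   # this single value is what gets signed
```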
An image retrieval framework for real-time endoscopic image retargeting.
Ye, Menglong; Johns, Edward; Walter, Benjamin; Meining, Alexander; Yang, Guang-Zhong
2017-08-01
Serial endoscopic examinations of a patient are important for early diagnosis of malignancies in the gastrointestinal tract. However, retargeting for optical biopsy is challenging due to extensive tissue variations between examinations, requiring the method to be tolerant to these changes whilst enabling real-time retargeting. This work presents an image retrieval framework for inter-examination retargeting. We propose both a novel image descriptor tolerant of long-term tissue changes and a novel descriptor matching method in real time. The descriptor is based on histograms generated from regional intensity comparisons over multiple scales, offering stability over long-term appearance changes at the higher levels, whilst remaining discriminative at the lower levels. The matching method then learns a hashing function using random forests, to compress the string and allow for fast image comparison by a simple Hamming distance metric. A dataset that contains 13 in vivo gastrointestinal videos was collected from six patients, representing serial examinations of each patient, which includes videos captured with significant time intervals. Precision-recall for retargeting shows that our new descriptor outperforms a number of alternative descriptors, whilst our hashing method outperforms a number of alternative hashing approaches. We have proposed a novel framework for optical biopsy in serial endoscopic examinations. A new descriptor, combined with a novel hashing method, achieves state-of-the-art retargeting, with validation on in vivo videos from six patients. Real-time performance also allows for practical integration without disturbing the existing clinical workflow.
NASA Astrophysics Data System (ADS)
Vielhauer, Claus; Croce Ferri, Lucilla
2003-06-01
Our paper addresses two issues of a previously presented biometric authentication algorithm for ID cardholders, namely the security of the embedded reference data and the aging process of the biometric data. We describe a protocol that allows two levels of verification, combining a biometric hash technique based on handwritten signature and hologram watermarks with cryptographic signatures in a verification infrastructure. This infrastructure consists of a Trusted Central Public Authority (TCPA), which serves numerous Enrollment Stations (ES) in a secure environment. Each individual performs an enrollment at an ES, which provides the TCPA with the full biometric reference data and a document hash. The TCPA then calculates the authentication record (AR) with the biometric hash, a validity timestamp, and a document hash provided by the ES. The AR is then signed with a cryptographic signature function, initialized with the TCPA's private key and embedded in the ID card as a watermark. Authentication is performed at Verification Stations (VS), where the ID card is scanned and the signed AR is retrieved from the watermark. Due to the timestamp mechanism and a two-level biometric verification technique based on offline and online features, the AR can deal with the aging process of the biometric feature by forcing a re-enrollment of the user after expiry, making use of the ES infrastructure. We describe some attack scenarios and we illustrate the watermark embedding, retrieval and dispute protocols, analyzing their requisites, advantages and disadvantages in relation to security requirements.
Local Anesthetic Microcapsules.
1981-04-15
Report excerpt (front matter): annual report covering 1 July 1980 to 30 March 1981 (report number 2106-1); tables list the chemical structure of local anesthetics, processing summaries of lidocaine and etidocaine microencapsulation, and lidocaine and etidocaine microcapsule size distributions.
Using global unique identifiers to link autism collections.
Johnson, Stephen B; Whitney, Glen; McAuliffe, Matthew; Wang, Hailong; McCreedy, Evan; Rozenblit, Leon; Evans, Clark C
2010-01-01
To propose a centralized method for generating global unique identifiers to link collections of research data and specimens. The work is a collaboration between the Simons Foundation Autism Research Initiative and the National Database for Autism Research. The system is implemented as a web service: an investigator inputs identifying information about a participant into a client application and sends encrypted information to a server application, which returns a generated global unique identifier. The authors evaluated the system using a volume test of one million simulated individuals and a field test on 2000 families (over 8000 individual participants) in an autism study. Inverse probability of hash codes; rate of false identity of two individuals; rate of false split of single individual; percentage of subjects for which identifying information could be collected; percentage of hash codes generated successfully. Large-volume simulation generated no false splits or false identity. Field testing in the Simons Foundation Autism Research Initiative Simplex Collection produced identifiers for 96% of children in the study and 77% of parents. On average, four out of five hash codes per subject were generated perfectly (only one perfect hash is required for subsequent matching). The system must achieve balance among the competing goals of distinguishing individuals, collecting accurate information for matching, and protecting confidentiality. Considerable effort is required to obtain approval from institutional review boards, obtain consent from participants, and to achieve compliance from sites during a multicenter study. Generic unique identifiers have the potential to link collections of research data, augment the amount and types of data available for individuals, support detection of overlap between collections, and facilitate replication of research findings.
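The hash-code mechanism can be sketched as follows: several fixed combinations of PII fields are each run through a one-way hash on the client, and only the resulting digests are sent to the server. The particular field combinations, normalization, and use of SHA-256 below are assumptions made for illustration, not the implementation used by the collaboration.

```python
import hashlib

# Hypothetical field combinations; the real system defines its own patterns.
COMBINATIONS = [("first_name", "last_name", "dob"),
                ("first_name", "dob", "sex"),
                ("last_name", "dob", "city_of_birth")]

def pii_hash_codes(pii: dict) -> list:
    """Return one one-way hash code per field combination (computed client side)."""
    codes = []
    for fields in COMBINATIONS:
        material = "|".join(pii.get(f, "").strip().lower() for f in fields)
        codes.append(hashlib.sha256(material.encode()).hexdigest())
    return codes

subject = {"first_name": "Ada", "last_name": "Lovelace", "dob": "1815-12-10",
           "sex": "F", "city_of_birth": "London"}
print(pii_hash_codes(subject))   # the server matches on these digests, never on raw PII
```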
NASA Astrophysics Data System (ADS)
Zhan, Zongqian; Wang, Chendong; Wang, Xin; Liu, Yi
2018-01-01
On the basis of today's popular virtual reality and scientific visualization, three-dimensional (3-D) reconstruction is widely used in disaster relief, virtual shopping, reconstruction of cultural relics, etc. In the traditional incremental structure from motion (incremental SFM) method, the time cost of the matching is one of the main factors restricting the popularization of this method. To make the whole matching process more efficient, we propose a preprocessing method before the matching process: (1) we first construct a random k-d forest with the large-scale scale-invariant feature transform features in the images and combine this with the pHash method to obtain a value of relatedness, (2) we then construct a connected weighted graph based on the relatedness value, and (3) we finally obtain a planned sequence of adding images according to the principle of the minimum spanning tree. On this basis, we attempt to thin the minimum spanning tree to reduce the number of matchings and ensure that the images are well distributed. The experimental results show a great reduction in the number of matchings with enough object points, with only a small influence on the inner stability, which proves that this method can quickly and reliably improve the efficiency of the SFM method with unordered multiview images in complex scenes.
Twitter K-H networks in action: Advancing biomedical literature for drug search.
Hamed, Ahmed Abdeen; Wu, Xindong; Erickson, Robert; Fandy, Tamer
2015-08-01
The importance of searching biomedical literature for drug interactions and side-effects is apparent. Current digital libraries (e.g., PubMed) suffer from infrequent tagging and metadata annotation updates. Such limitations prevent the literature from being linked to new scientific evidence. This presents a great many challenges that stand in the way of scientists when searching biomedical repositories. In this paper, we present a network mining approach that provides a bridge for linking and searching drug-related literature. Our contributions here are twofold: (1) an efficient algorithm called HashPairMiner that addresses the run-time complexity issues demonstrated in its predecessor algorithm, HashnetMiner, and (2) a database of discoveries hosted on the web to facilitate literature search using the results produced by HashPairMiner. Though the K-H network model and the HashPairMiner algorithm are fairly young, their outcome is evidence of the considerable promise they offer to the biomedical science community in general and the drug research community in particular. Copyright © 2015 Elsevier Inc. All rights reserved.
Wang, Ya-ping; Ji, Wei-xiao; Zhang, Chang-wen; Li, Ping; Li, Feng; Ren, Miao-juan; Chen, Xin-Lian; Yuan, Min; Wang, Pei-ji
2016-01-01
Discovery of two-dimensional (2D) topological insulator such as group-V films initiates challenges in exploring exotic quantum states in low dimensions. Here, we perform first-principles calculations to study the geometric and electronic properties in 2D arsenene monolayer with hydrogenation (HAsH). We predict a new σ-type Dirac cone related to the px,y orbitals of As atoms in HAsH, dependent on in-plane tensile strain. Noticeably, the spin-orbit coupling (SOC) opens a quantum spin Hall (QSH) gap of 193 meV at the Dirac cone. A single pair of topologically protected helical edge states is established for the edges, and its QSH phase is confirmed with topological invariant Z2 = 1. We also propose a 2D quantum well (QW) encapsulating HAsH with the h-BN sheet on each side, which harbors a nontrivial QSH state with the Dirac cone lying within the band gap of cladding BN substrate. These findings provide a promising innovative platform for QSH device design and fabrication operating at room temperature. PMID:26839209
Supervised graph hashing for histopathology image retrieval and classification.
Shi, Xiaoshuang; Xing, Fuyong; Xu, KaiDi; Xie, Yuanpu; Su, Hai; Yang, Lin
2017-12-01
In pathology image analysis, morphological characteristics of cells are critical to grade many diseases. With the development of cell detection and segmentation techniques, it is possible to extract cell-level information for further analysis in pathology images. However, it is challenging to conduct efficient analysis of cell-level information on a large-scale image dataset because each image usually contains hundreds or thousands of cells. In this paper, we propose a novel image retrieval based framework for large-scale pathology image analysis. For each image, we encode each cell into binary codes to generate image representation using a novel graph based hashing model and then conduct image retrieval by applying a group-to-group matching method to similarity measurement. In order to improve both computational efficiency and memory requirement, we further introduce matrix factorization into the hashing model for scalable image retrieval. The proposed framework is extensively validated with thousands of lung cancer images, and it achieves 97.98% classification accuracy and 97.50% retrieval precision with all cells of each query image used. Copyright © 2017 Elsevier B.V. All rights reserved.
Modeling and Simulation of the Economics of Mining in the Bitcoin Market.
Cocco, Luisanna; Marchesi, Michele
2016-01-01
On January 3, 2009, Satoshi Nakamoto gave rise to the "Bitcoin Blockchain", creating the first block of the chain by hashing on his computer's central processing unit (CPU). Since then, the hash calculations to mine Bitcoin have been getting more and more complex, and consequently the mining hardware has evolved to adapt to this increasing difficulty. Three generations of mining hardware have followed the CPU generation: the GPU, FPGA, and ASIC generations. This work presents an agent-based artificial market model of the Bitcoin mining process and of Bitcoin transactions. The goal of this work is to model the economy of the mining process, starting from the GPU generation, the first with economic significance. The model reproduces some "stylized facts" found in real-time price series and some core aspects of the mining business. In particular, the computational experiments performed can reproduce the unit root property, the fat tail phenomenon and the volatility clustering of Bitcoin price series. In addition, under proper assumptions, they can reproduce the generation of Bitcoins, the hashing capability, the power consumption, and the mining hardware and electrical energy expenditures of the Bitcoin network.
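The hash calculations at the heart of mining are a brute-force search for a block header whose double-SHA-256 digest falls below a difficulty target; the toy loop below illustrates why raw hashing capability dominates the economics being modeled. The header format and difficulty check are simplified.

```python
import hashlib

def mine(header: bytes, difficulty_bits: int = 18):
    """Search for a nonce whose double SHA-256 digest has `difficulty_bits`
    leading zero bits (a simplified stand-in for the Bitcoin target check)."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(hashlib.sha256(
            header + nonce.to_bytes(8, "big")).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
        nonce += 1

nonce, digest = mine(b"toy block header")
print(nonce, digest)   # expected work grows as 2**difficulty_bits hash evaluations
```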
NASA Astrophysics Data System (ADS)
Wang, Ya-Ping; Ji, Wei-Xiao; Zhang, Chang-Wen; Li, Ping; Li, Feng; Ren, Miao-Juan; Chen, Xin-Lian; Yuan, Min; Wang, Pei-Ji
2016-02-01
Discovery of two-dimensional (2D) topological insulators such as group-V films initiates challenges in exploring exotic quantum states in low dimensions. Here, we perform first-principles calculations to study the geometric and electronic properties of a 2D arsenene monolayer with hydrogenation (HAsH). We predict a new σ-type Dirac cone related to the px,y orbitals of the As atoms in HAsH, dependent on in-plane tensile strain. Notably, the spin-orbit coupling (SOC) opens a quantum spin Hall (QSH) gap of 193 meV at the Dirac cone. A single pair of topologically protected helical edge states is established for the edges, and its QSH phase is confirmed with topological invariant Z2 = 1. We also propose a 2D quantum well (QW) encapsulating HAsH with an h-BN sheet on each side, which harbors a nontrivial QSH state with the Dirac cone lying within the band gap of the cladding BN substrate. These findings provide a promising platform for QSH device design and fabrication operating at room temperature.
Three-dimensional model-based object recognition and segmentation in cluttered scenes.
Mian, Ajmal S; Bennamoun, Mohammed; Owens, Robyn
2006-10-01
Viewpoint independent recognition of free-form objects and their segmentation in the presence of clutter and occlusions is a challenging task. We present a novel 3D model-based algorithm which performs this task automatically and efficiently. A 3D model of an object is automatically constructed offline from its multiple unordered range images (views). These views are converted into multidimensional table representations (which we refer to as tensors). Correspondences are automatically established between these views by simultaneously matching the tensors of a view with those of the remaining views using a hash table-based voting scheme. This results in a graph of relative transformations used to register the views before they are integrated into a seamless 3D model. These models and their tensor representations constitute the model library. During online recognition, a tensor from the scene is simultaneously matched with those in the library by casting votes. Similarity measures are calculated for the model tensors which receive the most votes. The model with the highest similarity is transformed to the scene and, if it aligns accurately with an object in the scene, that object is declared as recognized and is segmented. This process is repeated until the scene is completely segmented. Experiments were performed on real and synthetic data comprised of 55 models and 610 scenes and an overall recognition rate of 95 percent was achieved. Comparison with the spin images revealed that our algorithm is superior in terms of recognition rate and efficiency.
2009-03-01
policy, elliptic curve public key cryptography using the 256-bit prime modulus elliptic curve as specified in FIPS-186-2 and SHA-256 are appropriate for...publications/fips/fips186-2/fips186-2-change1.pdf 76 Part I, Chapter 5: Hashing via the Secure Hash Algorithm (using SHA-256 and...lithography and processing techniques. Field programmable gate arrays (FPGAs) are a chip design of interest. These devices are extensively used in
NASA Astrophysics Data System (ADS)
Sutawanir
2015-12-01
Mortality tables play an important role in actuarial studies such as life annuities, premium determination, premium reserves, pension plan valuation, and pension funding. Well-known examples include the CSO mortality table, the Indonesian Mortality Table, the Bowers mortality table, and the Japan Mortality Table. For actuarial applications, tables are constructed under different settings such as single decrement, double decrement, and multiple decrement. There are two approaches to mortality table construction: a mathematical approach and a statistical approach. Distribution models and estimation theory are the statistical concepts used in mortality table construction. This article discusses the statistical approach to mortality table construction. The distributional assumptions are the uniform death distribution (UDD) and constant force (exponential). Moment estimation and maximum likelihood are used to estimate the mortality parameter. Moment estimation methods are easier to manipulate than maximum likelihood estimation (MLE); however, the moment estimation method does not use the complete mortality data. Maximum likelihood exploits all available information in mortality estimation, but some MLE equations are complicated and must be solved numerically. The article focuses on single-decrement estimation using moment and maximum likelihood estimation. An extension to double decrement is also introduced. A simple data set is used to illustrate the mortality estimation and the resulting mortality table.
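For the constant-force assumption mentioned in this abstract, the maximum likelihood estimator has a simple closed form that may help fix ideas: with d observed deaths and total observed exposure E (the summed time under observation during the year), the force of mortality and the one-year death probability are estimated as below. This is the standard actuarial result stated in our own notation, not a formula quoted from the article.

\hat{\mu} = \frac{d}{E}, \qquad \hat{q} = 1 - e^{-\hat{\mu}} = 1 - \exp\!\left(-\frac{d}{E}\right)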
Sparse Unorganized Point Cloud Based Relative Pose Estimation for Uncooperative Space Target.
Yin, Fang; Chou, Wusheng; Wu, Yun; Yang, Guang; Xu, Song
2018-03-28
This paper proposes an autonomous algorithm to determine the relative pose between a chaser spacecraft and an uncooperative space target, which is essential in advanced space applications, e.g., on-orbit servicing missions. The proposed method, named the Congruent Tetrahedron Align (CTA) algorithm, uses the very sparse unorganized 3D point cloud acquired by a LIDAR sensor and does not require any prior pose information. The core of the method is to determine the relative pose by looking for congruent tetrahedra in the scanning point cloud and the model point cloud on the basis of the known target model. A two-level index hash table is built to speed up the search. In addition, the Iterative Closest Point (ICP) algorithm is used for pose tracking after CTA. In order to evaluate the method under arbitrary initial attitudes, a simulation system is presented. Specifically, the performance of the proposed method in providing the initial pose needed for the tracking algorithm is demonstrated, as well as its robustness against noise. Finally, a field experiment is conducted and the results demonstrate the effectiveness of the proposed method.
Sparse Unorganized Point Cloud Based Relative Pose Estimation for Uncooperative Space Target
Chou, Wusheng; Wu, Yun; Yang, Guang; Xu, Song
2018-01-01
This paper proposes an autonomous algorithm to determine the relative pose between a chaser spacecraft and an uncooperative space target, which is essential in advanced space applications, e.g., on-orbit servicing missions. The proposed method, named the Congruent Tetrahedron Align (CTA) algorithm, uses the very sparse unorganized 3D point cloud acquired by a LIDAR sensor and does not require any prior pose information. The core of the method is to determine the relative pose by looking for congruent tetrahedra in the scanning point cloud and the model point cloud on the basis of the known target model. A two-level index hash table is built to speed up the search. In addition, the Iterative Closest Point (ICP) algorithm is used for pose tracking after CTA. In order to evaluate the method under arbitrary initial attitudes, a simulation system is presented. Specifically, the performance of the proposed method in providing the initial pose needed for the tracking algorithm is demonstrated, as well as its robustness against noise. Finally, a field experiment is conducted and the results demonstrate the effectiveness of the proposed method. PMID:29597323
16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources
Code of Federal Regulations, 2014 CFR
2014-01-01
... 16 Commercial Practices 2 2014-01-01 2014-01-01 false Relative Energy Distribution of Sources 4... SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Table 4 Table 4 to Part 1512—Relative Energy Distribution of Sources Wave length (nanometers) Relative energy 380 9.79 390 12.09 400 14.71 410 17.68 420 21...
16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources
Code of Federal Regulations, 2012 CFR
2012-01-01
... 16 Commercial Practices 2 2012-01-01 2012-01-01 false Relative Energy Distribution of Sources 4... SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Table 4 Table 4 to Part 1512—Relative Energy Distribution of Sources Wave length (nanometers) Relative energy 380 9.79 390 12.09 400 14.71 410 17.68 420 21...
16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources
Code of Federal Regulations, 2013 CFR
2013-01-01
... 16 Commercial Practices 2 2013-01-01 2013-01-01 false Relative Energy Distribution of Sources 4... SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Table 4 Table 4 to Part 1512—Relative Energy Distribution of Sources Wave length (nanometers) Relative energy 380 9.79 390 12.09 400 14.71 410 17.68 420 21...
16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources
Code of Federal Regulations, 2011 CFR
2011-01-01
... 16 Commercial Practices 2 2011-01-01 2011-01-01 false Relative Energy Distribution of Sources 4... SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Table 4 Table 4 to Part 1512—Relative Energy Distribution of Sources Wave length (nanometers) Relative energy 380 9.79 390 12.09 400 14.71 410 17.68 420 21...
16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources
Code of Federal Regulations, 2010 CFR
2010-01-01
... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Relative Energy Distribution of Sources 4... SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Table 4 Table 4 to Part 1512—Relative Energy Distribution of Sources Wave length (nanometers) Relative energy 380 9.79 390 12.09 400 14.71 410 17.68 420 21...
Relay discovery and selection for large-scale P2P streaming
Zhang, Chengwei; Wang, Angela Yunxian
2017-01-01
In peer-to-peer networks, application relays have been commonly used to provide various networking services. The service performance often improves significantly if a relay is selected appropriately based on its network location. In this paper, we studied the location-aware relay discovery and selection problem for large-scale P2P streaming networks. In these large-scale and dynamic overlays, it incurs significant communication and computation cost to discover a sufficiently large relay candidate set and further to select one relay with good performance. The network location can be measured directly or indirectly with the tradeoffs between timeliness, overhead and accuracy. Based on a measurement study and the associated error analysis, we demonstrate that indirect measurements, such as King and Internet Coordinate Systems (ICS), can only achieve a coarse estimation of peers’ network location and those methods based on pure indirect measurements cannot lead to a good relay selection. We also demonstrate that there exists significant error amplification of the commonly used “best-out-of-K” selection methodology using three RTT data sets publicly available. We propose a two-phase approach to achieve efficient relay discovery and accurate relay selection. Indirect measurements are used to narrow down a small number of high-quality relay candidates and the final relay selection is refined based on direct probing. This two-phase approach enjoys an efficient implementation using the Distributed-Hash-Table (DHT). When the DHT is constructed, the node keys carry the location information and they are generated scalably using indirect measurements, such as the ICS coordinates. The relay discovery is achieved efficiently utilizing the DHT-based search. We evaluated various aspects of this DHT-based approach, including the DHT indexing procedure, key generation under peer churn and message costs. PMID:28410384
Relay discovery and selection for large-scale P2P streaming.
Zhang, Chengwei; Wang, Angela Yunxian; Hei, Xiaojun
2017-01-01
In peer-to-peer networks, application relays have been commonly used to provide various networking services. The service performance often improves significantly if a relay is selected appropriately based on its network location. In this paper, we studied the location-aware relay discovery and selection problem for large-scale P2P streaming networks. In these large-scale and dynamic overlays, it incurs significant communication and computation cost to discover a sufficiently large relay candidate set and further to select one relay with good performance. The network location can be measured directly or indirectly with the tradeoffs between timeliness, overhead and accuracy. Based on a measurement study and the associated error analysis, we demonstrate that indirect measurements, such as King and Internet Coordinate Systems (ICS), can only achieve a coarse estimation of peers' network location and those methods based on pure indirect measurements cannot lead to a good relay selection. We also demonstrate that there exists significant error amplification of the commonly used "best-out-of-K" selection methodology using three RTT data sets publicly available. We propose a two-phase approach to achieve efficient relay discovery and accurate relay selection. Indirect measurements are used to narrow down a small number of high-quality relay candidates and the final relay selection is refined based on direct probing. This two-phase approach enjoys an efficient implementation using the Distributed-Hash-Table (DHT). When the DHT is constructed, the node keys carry the location information and they are generated scalably using indirect measurements, such as the ICS coordinates. The relay discovery is achieved efficiently utilizing the DHT-based search. We evaluated various aspects of this DHT-based approach, including the DHT indexing procedure, key generation under peer churn and message costs.
A hash based mutual RFID tag authentication protocol in telecare medicine information system.
Srivastava, Keerti; Awasthi, Amit K; Kaul, Sonam D; Mittal, R C
2015-01-01
Radio Frequency Identification (RFID) is a technology with multidimensional applications that reduce the complexity of today's life. RFID is used extensively in areas such as access control, transportation, real-time inventory, asset management, and automated payment systems. Recently, this technology has been expanding into healthcare environments, where potential applications include patient monitoring, object traceability, and drug administration systems. In this paper, we propose a secure RFID-based protocol for the medical sector. This protocol is based on a hash operation with a synchronized secret. The protocol is safe against active and passive attacks such as forgery, traceability, replay, and de-synchronization attacks.
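The abstract does not spell out the message flows, so the following Python sketch shows only a generic hash-based challenge-response with a synchronized secret, in the spirit of such protocols; the function names and update rule are illustrative assumptions, not the authors' exact scheme.

import hashlib, secrets

def h(*parts: bytes) -> bytes:
    # Cryptographic hash shared by the tag and the back-end server
    m = hashlib.sha256()
    for p in parts:
        m.update(p)
    return m.digest()

# Synchronized secret held by both the tag and the server
tag_secret = server_secret = secrets.token_bytes(16)

# Server (via the reader) sends a fresh challenge to the tag
challenge = secrets.token_bytes(16)

# Tag replies with a hash of its secret and the challenge
tag_response = h(tag_secret, challenge)

# Server verifies, then both sides evolve the secret in lockstep,
# which is what resists replay and de-synchronization attacks
assert tag_response == h(server_secret, challenge)
tag_secret = h(tag_secret, b"update")
server_secret = h(server_secret, b"update")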
Graph Coarsening for Path Finding in Cybersecurity Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, Emilie A.; Johnson, John R.; Halappanavar, Mahantesh
2013-01-01
In the pass-the-hash attack, hackers repeatedly steal password hashes and move through a computer network with the goal of reaching a computer with high-level administrative privileges. In this paper we apply graph coarsening in network graphs for the purpose of detecting hackers using this attack or assessing the risk level of the network's current state. We repeatedly take graph minors, which preserve the existence of paths in the graph, and take powers of the adjacency matrix to count the paths. This allows us to detect the existence of paths as well as find paths that have high risk of being used by adversaries.
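The path-counting step rests on a textbook fact: entry (i, j) of the k-th power of an adjacency matrix counts walks of length k from i to j. A minimal Python/NumPy sketch with a made-up graph (not the cybersecurity data used in the report):

import numpy as np

# Adjacency matrix of a small directed graph (hypothetical example)
A = np.array([
    [0, 1, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
])

# (A^k)[i, j] counts walks of length k from node i to node j, so a nonzero
# entry certifies that a path of that length exists between the two nodes
for k in range(1, 4):
    Ak = np.linalg.matrix_power(A, k)
    print(f"walks of length {k} from node 0 to node 3:", Ak[0, 3])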
Deeply learnt hashing forests for content based image retrieval in prostate MR images
NASA Astrophysics Data System (ADS)
Shah, Amit; Conjeti, Sailesh; Navab, Nassir; Katouzian, Amin
2016-03-01
The deluge in the size and heterogeneity of medical image databases necessitates content-based retrieval systems for their efficient organization. In this paper, we propose such a system to retrieve prostate MR images which share similarities in appearance and content with a query image. We introduce deeply learnt hashing forests (DL-HF) for this image retrieval task. DL-HF effectively leverages the semantic descriptiveness of deep learnt Convolutional Neural Networks. This is used in conjunction with hashing forests, which are unsupervised random forests. DL-HF hierarchically parses the deep-learnt feature space to encode subspaces with compact binary code words. We propose a similarity-preserving feature descriptor called the Parts Histogram, which is derived from DL-HF. Correlation defined on this descriptor is used as a similarity metric for retrieval from the database. Validations on a publicly available multi-center prostate MR image database established the validity of the proposed approach. The proposed method is fully automated without any user interaction and is not dependent on any external image standardization such as image normalization and registration. This image retrieval method is generalizable and is well-suited for retrieval in heterogeneous databases, other imaging modalities, and anatomies.
Modeling and Simulation of the Economics of Mining in the Bitcoin Market
Marchesi, Michele
2016-01-01
On January 3, 2009, Satoshi Nakamoto gave rise to the “Bitcoin Blockchain”, creating the first block of the chain by hashing on his computer’s central processing unit (CPU). Since then, the hash calculations needed to mine Bitcoin have been getting more and more complex, and consequently the mining hardware has evolved to adapt to this increasing difficulty. Three generations of mining hardware have followed the CPU generation: the GPU, FPGA, and ASIC generations. This work presents an agent-based artificial market model of the Bitcoin mining process and of Bitcoin transactions. The goal of this work is to model the economy of the mining process, starting from the GPU generation, the first with economic significance. The model reproduces some “stylized facts” found in real-time price series and some core aspects of the mining business. In particular, the computational experiments performed can reproduce the unit root property, the fat tail phenomenon and the volatility clustering of Bitcoin price series. In addition, under proper assumptions, they can reproduce the generation of Bitcoins, the hashing capability, the power consumption, and the mining hardware and electrical energy expenditures of the Bitcoin network. PMID:27768691
NASA Astrophysics Data System (ADS)
Lin, Zhuosheng; Yu, Simin; Lü, Jinhu
2017-06-01
In this paper, a novel approach for constructing a one-way hash function based on an 8D hyperchaotic map is presented. First, two nominal matrices, one with constant and one with variable parameters, are adopted for designing 8D discrete-time hyperchaotic systems. Then each input plaintext message block is transformed into an 8 × 8 matrix following the order of left to right and top to bottom, which is used as a control matrix for switching between the nominal matrix elements with constant parameters and those with variable parameters. Through this switching control, a new nominal matrix mixed with the constant and variable parameters is obtained for the 8D hyperchaotic map. Finally, the hash function is constructed from the multiple low 8-bit hyperchaotic system iterative outputs after being rounded down, and its security analysis results are also given, validating the feasibility and reliability of the proposed approach. Compared with existing schemes, the main feature of the proposed method is that it has a large number of key parameters with an avalanche effect, making it difficult to estimate or predict the key parameters via various attacks.
Fast implementation of length-adaptive privacy amplification in quantum key distribution
NASA Astrophysics Data System (ADS)
Zhang, Chun-Mei; Li, Mo; Huang, Jing-Zheng; Patcharapong, Treeviriyanupab; Li, Hong-Wei; Li, Fang-Yi; Wang, Chuan; Yin, Zhen-Qiang; Chen, Wei; Keattisak, Sripimanwat; Han, Zhen-Fu
2014-09-01
Post-processing is indispensable in quantum key distribution (QKD), which is aimed at sharing secret keys between two distant parties. It mainly consists of key reconciliation and privacy amplification, which are used for sharing identical keys and for distilling unconditionally secret keys, respectively. In this paper, we focus on speeding up the privacy amplification process by choosing a simple multiplicative universal class of hash functions. By constructing an optimal multiplication algorithm based on four basic multiplication algorithms, we give a fast software implementation of length-adaptive privacy amplification. “Length-adaptive” indicates that the implementation of privacy amplification automatically adapts to different lengths of input blocks. When the lengths of the input blocks are 1 Mbit and 10 Mbit, the speed of privacy amplification can be as fast as 14.86 Mbps and 10.88 Mbps, respectively. Thus, it is practical for GHz or even higher repetition frequency QKD systems.
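A multiplicative universal hash family of the general kind referred to above can be illustrated by the classic multiply-shift construction: pick a random odd multiplier, multiply the input word, and keep the top bits. The sketch below is a toy member of such a family for explanation only, not the authors' optimized multiplication algorithm.

import secrets

W = 64  # input word size in bits

def make_multiply_shift(out_bits: int):
    # A random odd multiplier selects one member of the universal family
    a = secrets.randbits(W) | 1
    def h(x: int) -> int:
        # Multiply modulo 2^W, then keep the highest out_bits bits
        return ((a * x) % (1 << W)) >> (W - out_bits)
    return h

h = make_multiply_shift(16)   # compress 64-bit words to 16-bit outputs
print(h(0x0123456789ABCDEF))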
NASA Astrophysics Data System (ADS)
White, Joshua S.; Matthews, Jeanna N.; Stacy, John L.
2012-06-01
Phishing website analysis is largely still a time-consuming manual process of discovering potential phishing sites, verifying whether suspicious sites truly are malicious spoofs and, if so, distributing their URLs to the appropriate blacklisting services. Attackers increasingly use sophisticated systems for bringing phishing sites up and down rapidly at new locations, making automated response essential. In this paper, we present a method for rapid, automated detection and analysis of phishing websites. Our method relies on near real-time gathering and analysis of URLs posted on social media sites. We fetch the pages pointed to by each URL and characterize each page with a set of easily computed values such as the number of images and links. We also capture a screenshot of the rendered page image, compute a hash of the image and use the Hamming distance between these image hashes as a form of visual comparison. We provide initial results demonstrating the feasibility of our techniques by comparing legitimate sites to known fraudulent versions from Phishtank.com, by actively introducing a series of minor changes to a phishing toolkit captured in a local honeypot and by performing some initial analysis on a set of over 2.8 million URLs posted to Twitter over 4 days in August 2011. We discuss the issues encountered during our testing, such as the resolvability and legitimacy of URLs posted on Twitter, the data sets used, the characteristics of the phishing sites we discovered, and our plans for future work.
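The visual comparison described here reduces to counting differing bits between two image hashes. A minimal sketch of that Hamming-distance step (the hash values are hypothetical; the screenshot hashing itself is not reproduced):

def hamming_distance(hash_a: int, hash_b: int) -> int:
    # Number of bit positions in which the two image hashes differ
    return bin(hash_a ^ hash_b).count("1")

# Hypothetical 64-bit hashes of a legitimate page and a suspected spoof
page_hash = 0xF0E1D2C3B4A59687
spoof_hash = 0xF0E1D2C3B4A59787

# A small distance suggests the two rendered pages are visually near-identical
print(hamming_distance(page_hash, spoof_hash))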
Code of Federal Regulations, 2010 CFR
2010-07-01
... Permitted Tolerance for Conducting Radiative Tests E Table E-2 to Subpart E of Part 53 Protection of... Reference Methods and Class I and Class II Equivalent Methods for PM2.5 or PM10–2.5 Pt. 53, Subpt. E, Table E-2 Table E-2 to Subpart E of Part 53—Spectral Energy Distribution and Permitted Tolerance for...
Rotation invariant deep binary hashing for fast image retrieval
NASA Astrophysics Data System (ADS)
Dai, Lai; Liu, Jianming; Jiang, Aiwen
2017-07-01
In this paper, we study how to compactly represent an image's characteristics for fast image retrieval. We propose supervised rotation-invariant compact discriminative binary descriptors obtained by combining a convolutional neural network with hashing. In the proposed network, binary codes are learned by employing a hidden layer for representing latent concepts that dominate the class labels. A loss function is proposed to minimize the difference between the binary descriptors that describe a reference image and its rotated version. Compared with some other supervised methods, the proposed network does not require pair-wise inputs for binary code learning. Experimental results show that our method is effective and achieves state-of-the-art results on the CIFAR-10 and MNIST datasets.
The Hong Kong/AAO/Strasbourg Hα (HASH) Planetary Nebula Database
NASA Astrophysics Data System (ADS)
Bojičić, Ivan S.; Parker, Quentin A.; Frew, David J.
2017-10-01
The Hong Kong/AAO/Strasbourg Hα (HASH) planetary nebula database is an online research platform providing free and easy access to the largest and most comprehensive catalogue of known Galactic PNe and a repository of observational data (imaging and spectroscopy) for these and related astronomical objects. The main motivation for creating this system is to resolve some long-standing problems in the field, e.g., problems with mimics and dubious and/or misidentified objects, errors in observational data, and consolidation of the widely scattered data sets. This facility allows researchers quick and easy access to the archived and new observational data and enables the creation and sharing of non-redundant PN samples and catalogues.
Comparison of Various Similarity Measures for Average Image Hash in Mobile Phone Application
NASA Astrophysics Data System (ADS)
Farisa Chaerul Haviana, Sam; Taufik, Muhammad
2017-04-01
One of the main issues in Content-Based Image Retrieval (CBIR) is the similarity measure for the resulting image hashes. The key challenge is to find the most beneficial distance or similarity measure for calculating similarity in terms of speed and computing cost, especially on devices with limited computing capability such as mobile phones. In this study we implement the twelve most common and popular distance or similarity measures in a mobile phone application and compare them. The results show that all similarity measures implemented in this study performed comparably in the mobile phone application. This gives more possibilities for method combinations to be implemented for image retrieval.
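For readers unfamiliar with the average image hash compared in this study, a minimal sketch follows: shrink the image, threshold each pixel against the mean, and pack the bits. It assumes the Pillow imaging library and is not the authors' mobile implementation.

from PIL import Image  # assumes the Pillow imaging library is available

def average_hash(path: str, hash_size: int = 8) -> int:
    # Shrink to hash_size x hash_size and drop colour information
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    # Pixels above the mean contribute a 1 bit, the rest a 0 bit
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits  # a 64-bit hash for the default 8x8 grid

Two such hashes can then be compared with any of the studied similarity measures, e.g., the Hamming distance between the bit strings.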
Planetary Nebula Candidates Uncovered with the HASH Research Platform
NASA Astrophysics Data System (ADS)
Fragkou, Vasiliki; Bojičić, Ivan; Frew, David; Parker, Quentin
2017-10-01
A detailed examination of new high quality radio catalogues (e.g. Cornish) in combination with available mid-infrared (MIR) satellite imagery (e.g. Glimpse) has allowed us to find 70 new planetary nebula (PN) candidates based on existing knowledge of their typical colors and fluxes. To further examine the nature of these sources, multiple diagnostic tools have been applied to these candidates based on published data and on available imagery in the HASH (Hong Kong/ AAO/ Strasbourg Hα planetary nebula) research platform. Some candidates have previously-missed optical counterparts allowing for spectroscopic follow-up. Indeed, the single object spectroscopically observed so far has turned out to be a bona fide PN.
Some Reflections on the Periodic Table and Its Use.
ERIC Educational Resources Information Center
Fernelius, W. Conard
1986-01-01
Discusses early periodic tables; effect on the periodic table of atomic numbers; the periodic table in relation to electron distribution in the atoms of elements; terms and concepts related to the table; and the modern basis of the periodic table. Additional comments about these and other topics are included. (JN)
NASA Astrophysics Data System (ADS)
Wai Kuan, Yip; Teoh, Andrew B. J.; Ngo, David C. L.
2006-12-01
We introduce a novel method for secure computation of a biometric hash on dynamic hand signatures using BioPhasor mixing and [InlineEquation not available: see fulltext.] discretization. The use of BioPhasor as the mixing process provides a one-way transformation that precludes exact recovery of the biometric vector from compromised hashes and stolen tokens. In addition, our user-specific [InlineEquation not available: see fulltext.] discretization acts both as an error correction step and as a real-to-binary space converter. We also propose a new method of extracting a compressed representation of dynamic hand signatures using the discrete wavelet transform (DWT) and discrete Fourier transform (DFT). Without the conventional use of dynamic time warping, the proposed method avoids storage of the user's hand signature template. This is an important consideration for protecting the privacy of the biometric owner. Our results show that the proposed method could produce stable and distinguishable bit strings with equal error rates (EERs) of [InlineEquation not available: see fulltext.] and [InlineEquation not available: see fulltext.] for random and skilled forgeries in the stolen token (worst case) scenario, and [InlineEquation not available: see fulltext.] for both forgeries in the genuine token (optimal) scenario.
Tian, Guo-Liang; Li, Hui-Qiong
2017-08-01
Some existing confidence interval methods and hypothesis testing methods in the analysis of a contingency table with incomplete observations in both margins entirely depend on an underlying assumption that the sampling distribution of the observed counts is a product of independent multinomial/binomial distributions for complete and incomplete counts. However, it can be shown that this independency assumption is incorrect and can result in unreliable conclusions because of the under-estimation of the uncertainty. Therefore, the first objective of this paper is to derive the valid joint sampling distribution of the observed counts in a contingency table with incomplete observations in both margins. The second objective is to provide a new framework for analyzing incomplete contingency tables based on the derived joint sampling distribution of the observed counts by developing a Fisher scoring algorithm to calculate maximum likelihood estimates of parameters of interest, the bootstrap confidence interval methods, and the bootstrap hypothesis testing methods. We compare the differences between the valid sampling distribution and the sampling distribution under the independency assumption. Simulation studies showed that average/expected confidence-interval widths of parameters based on the sampling distribution under the independency assumption are shorter than those based on the new sampling distribution, yielding unrealistic results. A real data set is analyzed to illustrate the application of the new sampling distribution for incomplete contingency tables and the analysis results again confirm the conclusions obtained from the simulation studies.
Limitations and requirements of content-based multimedia authentication systems
NASA Astrophysics Data System (ADS)
Wu, Chai W.
2001-08-01
Recently, a number of authentication schemes have been proposed for multimedia data such as images and sound data. They include both label based systems and semifragile watermarks. The main requirement for such authentication systems is that minor modifications such as lossy compression which do not alter the content of the data preserve the authenticity of the data, whereas modifications which do modify the content render the data not authentic. These schemes can be classified into two main classes depending on the model of image authentication they are based on. One of the purposes of this paper is to look at some of the advantages and disadvantages of these image authentication schemes and their relationship with fundamental limitations of the underlying model of image authentication. In particular, we study feature-based algorithms which generate an authentication tag based on some inherent features in the image such as the location of edges. The main disadvantage of most proposed feature-based algorithms is that similar images generate similar features, and therefore it is possible for a forger to generate dissimilar images that have the same features. On the other hand, the class of hash-based algorithms utilizes a cryptographic hash function or a digital signature scheme to reduce the data and generate an authentication tag. It inherits the security of digital signatures to thwart forgery attacks. The main disadvantage of hash-based algorithms is that the image needs to be modified in order to be made authenticatable. The amount of modification is on the order of the noise the image can tolerate before it is rendered inauthentic. The other purpose of this paper is to propose a multimedia authentication scheme which combines some of the best features of both classes of algorithms. The proposed scheme utilizes cryptographic hash functions and digital signature schemes and the data does not need to be modified in order to be made authenticatable. Several applications including the authentication of images on CD-ROM and handwritten documents will be discussed.
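To make the hash-based class concrete, the sketch below computes an authentication tag by hashing the exact media bytes and binding the digest with a keyed MAC (HMAC stands in for the digital signature discussed above); it is an illustrative assumption, not the scheme proposed in the paper.

import hashlib, hmac

def make_auth_tag(media: bytes, key: bytes) -> bytes:
    # Hash-based class: the tag commits to the exact bytes of the content
    return hmac.new(key, hashlib.sha256(media).digest(), hashlib.sha256).digest()

def verify(media: bytes, key: bytes, tag: bytes) -> bool:
    # Any single-bit change in the media invalidates the tag, which is why a
    # purely hash-based scheme cannot by itself tolerate lossy re-compression
    return hmac.compare_digest(make_auth_tag(media, key), tag)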
26 CFR 1.1368-0 - Table of contents.
Code of Federal Regulations, 2010 CFR
2010-04-01
...) INCOME TAXES Small Business Corporations and Their Shareholders § 1.1368-0 Table of contents. The... and profits. (d) S corporation with earnings and profits. (1) General treatment of distribution. (2... of distributions. (1) In general. (2) Election to distribute earnings and profits first. (i) In...
Reconfigurable assembly work station
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Yhu-Tin; Abell, Jeffrey A.; Spicer, John Patrick
A reconfigurable autonomous workstation includes a multi-faced superstructure including a horizontally-arranged frame section supported on a plurality of posts. The posts form a plurality of vertical faces arranged between adjacent pairs of the posts, the faces including first and second faces and a power distribution and position reference face. A controllable robotic arm suspends from the rectangular frame section, and a work table fixedly couples to the power distribution and position reference face. A plurality of conveyor tables are fixedly coupled to the work table including a first conveyor table through the first face and a second conveyor table through the second face. A vision system monitors the work table and each of the conveyor tables. A programmable controller monitors signal inputs from the vision system to identify and determine orientation of the component on the first conveyor table and control the robotic arm to execute an assembly task.
Distributed design approach in persistent identifiers systems
NASA Astrophysics Data System (ADS)
Golodoniuc, Pavel; Car, Nicholas; Klump, Jens
2017-04-01
The need to identify both digital and physical objects is ubiquitous in our society. Past and present persistent identifier (PID) systems, of which there is a great variety in terms of technical and social implementations, have evolved with the advent of the Internet, which has allowed for globally unique and globally resolvable identifiers. PID systems have catered for identifier uniqueness, integrity, persistence, and trustworthiness, regardless of the identifier's application domain, the scope of which has expanded significantly in the past two decades. Since many PID systems have been largely conceived and developed by small communities, or even a single organisation, they have faced challenges in gaining widespread adoption and, most importantly, the ability to survive change of technology. This has left a legacy of identifiers that still exist and are being used but which have lost their resolution service. We believe that one of the causes of once successful PID systems fading is their reliance on a centralised technical infrastructure or a governing authority. Golodoniuc et al. (2016) proposed an approach to the development of PID systems that combines the use of (a) the Handle system, as a distributed system for the registration and first-degree resolution of persistent identifiers, and (b) the PID Service (Golodoniuc et al., 2015), to enable fine-grained resolution to different information object representations. The proposed approach solved the problem of guaranteed first-degree resolution of identifiers, but left fine-grained resolution and information delivery under the control of a single authoritative source, posing risk to the long-term availability of information resources. Herein, we develop these approaches further and explore the potential of large-scale decentralisation at all levels: (i) persistent identifiers and information resources registration; (ii) identifier resolution; and (iii) data delivery. To achieve large-scale decentralisation, we propose using Distributed Hash Tables (DHT), Peer Exchange networks (PEX), Magnet Links, and peer-to-peer (P2P) file sharing networks - the technologies that enable applications such as BitTorrent (Wu et al., 2010). The proposed approach introduces reliable information replication and caching mechanisms, eliminating the need for a central PID data store, and increases overall system fault tolerance due to the lack of a single point of failure. The proposed PID system's design aims to ensure trustworthiness of the system and incorporates important aspects of governance, such as the notion of the authoritative source, data integrity, caching, and data replication control.
A short note on the use of the red-black tree in Cartesian adaptive mesh refinement algorithms
NASA Astrophysics Data System (ADS)
Hasbestan, Jaber J.; Senocak, Inanc
2017-12-01
Mesh adaptivity is an indispensable capability to tackle multiphysics problems with large disparity in time and length scales. With the availability of powerful supercomputers, there is a pressing need to extend time-proven computational techniques to extreme-scale problems. Cartesian adaptive mesh refinement (AMR) is one such method that enables simulation of multiscale, multiphysics problems. AMR is based on construction of octrees. Originally, an explicit tree data structure was used to generate and manipulate an adaptive Cartesian mesh. At least eight pointers are required in an explicit approach to construct an octree. Parent-child relationships are then used to traverse the tree. An explicit octree, however, is expensive in terms of memory usage and the time it takes to traverse the tree to access a specific node. For these reasons, implicit pointerless methods have been pioneered within the computer graphics community, motivated by applications requiring interactivity and realistic three dimensional visualization. Lewiner et al. [1] provides a concise review of pointerless approaches to generate an octree. Use of a hash table and Z-order curve are two key concepts in pointerless methods that we briefly discuss next.
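The hash table and Z-order curve mentioned above combine naturally: the Morton (Z-order) code of a cell interleaves its coordinate bits and serves as the key of a pointerless octree node. A minimal sketch, with the 10-bit coordinate width chosen only for illustration:

def morton3d(x: int, y: int, z: int, bits: int = 10) -> int:
    # Interleave the bits of the three coordinates (Z-order / Morton code);
    # the resulting integer is a natural hash key for a pointerless octree node
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

# A plain dictionary plays the role of the hash table from key to node payload
octree = {morton3d(3, 5, 7): {"level": 10, "data": None}}
print(morton3d(3, 5, 7) in octree)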
NASA Technical Reports Server (NTRS)
Warren, Wayne H., Jr.; Adelman, Saul J.
1989-01-01
The machine-readable version of the multiplet table, as it is currently being distributed from the Astronomical Data Center, is described. The computerized version of the table contains data on excitation potentials, J values, multiplet terms, intensities of the transitions, and multiplet numbers. Files ordered by multiplet and by wavelength are included in the distributed version.
DOE Office of Scientific and Technical Information (OSTI.GOV)
AISL-CRYPTO is a library of cryptography functions supporting other AISL software. It provides various crypto functions for Common Lisp, including Digital Signature Algorithm, Data Encryption Standard, Secure Hash Algorithm, and public-key cryptography.
Toofanny, Rudesh D; Simms, Andrew M; Beck, David A C; Daggett, Valerie
2011-08-10
Molecular dynamics (MD) simulations offer the ability to observe the dynamics and interactions of both whole macromolecules and individual atoms as a function of time. Taken in context with experimental data, atomic interactions from simulation provide insight into the mechanics of protein folding, dynamics, and function. The calculation of atomic interactions or contacts from an MD trajectory is computationally demanding and the work required grows exponentially with the size of the simulation system. We describe the implementation of a spatial indexing algorithm in our multi-terabyte MD simulation database that significantly reduces the run-time required for discovery of contacts. The approach is applied to the Dynameomics project data. Spatial indexing, also known as spatial hashing, is a method that divides the simulation space into regular sized bins and attributes an index to each bin. Since the calculation of contacts is widely employed in the simulation field, we also use this as the basis for testing compression of data tables. We investigate the effects of compression of the trajectory coordinate tables with different options of data and index compression within MS SQL SERVER 2008. Our implementation of spatial indexing speeds up the calculation of contacts over a 1 nanosecond (ns) simulation window by between 14% and 90% (i.e., 1.2 and 10.3 times faster). For a 'full' simulation trajectory (51 ns) spatial indexing reduces the calculation run-time between 31 and 81% (between 1.4 and 5.3 times faster). Compression resulted in reduced table sizes but resulted in no significant difference in the total execution time for neighbour discovery. The greatest compression (~36%) was achieved using page level compression on both the data and indexes. The spatial indexing scheme significantly decreases the time taken to calculate atomic contacts and could be applied to other multidimensional neighbor discovery problems. The speed-up enables on-the-fly calculation and visualization of contacts and rapid cross simulation analysis for knowledge discovery. Using page compression for the atomic coordinate tables and indexes saves ~36% of disk space without any significant decrease in calculation time and should be considered for other non-transactional databases in MS SQL SERVER 2008.
2011-01-01
Background Molecular dynamics (MD) simulations offer the ability to observe the dynamics and interactions of both whole macromolecules and individual atoms as a function of time. Taken in context with experimental data, atomic interactions from simulation provide insight into the mechanics of protein folding, dynamics, and function. The calculation of atomic interactions or contacts from an MD trajectory is computationally demanding and the work required grows exponentially with the size of the simulation system. We describe the implementation of a spatial indexing algorithm in our multi-terabyte MD simulation database that significantly reduces the run-time required for discovery of contacts. The approach is applied to the Dynameomics project data. Spatial indexing, also known as spatial hashing, is a method that divides the simulation space into regular sized bins and attributes an index to each bin. Since the calculation of contacts is widely employed in the simulation field, we also use this as the basis for testing compression of data tables. We investigate the effects of compression of the trajectory coordinate tables with different options of data and index compression within MS SQL SERVER 2008. Results Our implementation of spatial indexing speeds up the calculation of contacts over a 1 nanosecond (ns) simulation window by between 14% and 90% (i.e., 1.2 and 10.3 times faster). For a 'full' simulation trajectory (51 ns) spatial indexing reduces the calculation run-time between 31 and 81% (between 1.4 and 5.3 times faster). Compression resulted in reduced table sizes but resulted in no significant difference in the total execution time for neighbour discovery. The greatest compression (~36%) was achieved using page level compression on both the data and indexes. Conclusions The spatial indexing scheme significantly decreases the time taken to calculate atomic contacts and could be applied to other multidimensional neighbor discovery problems. The speed-up enables on-the-fly calculation and visualization of contacts and rapid cross simulation analysis for knowledge discovery. Using page compression for the atomic coordinate tables and indexes saves ~36% of disk space without any significant decrease in calculation time and should be considered for other non-transactional databases in MS SQL SERVER 2008. PMID:21831299
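The binning idea behind spatial hashing can be sketched in a few lines: with cells whose side equals the contact cutoff, only atoms in the same or an adjacent cell need a distance check. The Python below is a toy illustration of that principle, not the database implementation described in the article.

from collections import defaultdict
from itertools import product
from math import dist

def find_contacts(coords, cutoff):
    # Spatial hashing: bin each atom into a cube of side length `cutoff`
    cells = defaultdict(list)
    for idx, p in enumerate(coords):
        cells[tuple(int(c // cutoff) for c in p)].append(idx)

    contacts = set()
    # Only atoms in the same or an adjacent cell can lie within the cutoff
    for (cx, cy, cz), members in cells.items():
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            for j in cells.get((cx + dx, cy + dy, cz + dz), []):
                for i in members:
                    if i < j and dist(coords[i], coords[j]) <= cutoff:
                        contacts.add((i, j))
    return contacts

print(find_contacts([(0.0, 0.0, 0.0), (0.3, 0.4, 0.0), (5.0, 5.0, 5.0)], 1.0))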
Semi-Supervised Geographical Feature Detection
NASA Astrophysics Data System (ADS)
Yu, H.; Yu, L.; Kuo, K. S.
2016-12-01
Extracting and tracking geographical features is a fundamental requirement in many geoscience fields. However, this operation has become an increasingly challenging task for domain scientists when tackling a large amount of geoscience data. Although domain scientists may have a relatively clear definition of features, it is difficult to capture the presence of features in an accurate and efficient fashion. We propose a semi-supervised approach to address large-scale geographical feature detection. Our approach has two main components. First, we represent heterogeneous geoscience data in a unified high-dimensional space, which facilitates evaluating the similarity of data points with respect to geolocation, time, and variable values. We characterize the data with these measures, and use a set of hash functions to parameterize the initial knowledge of the data. Second, for any user query, our approach can automatically extract the initial results based on the hash functions. To improve the accuracy of querying, our approach provides a visualization interface to display the querying results and allow users to interactively explore and refine them. The user feedback is used to enhance our knowledge base in an iterative manner. In our implementation, we use high-performance computing techniques to accelerate the construction of hash functions. Our design facilitates a parallelization scheme for feature detection and extraction, which is a traditionally challenging problem for large-scale data. We evaluate our approach and demonstrate its effectiveness using both synthetic and real-world datasets.
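The abstract does not specify the hash family, so the sketch below uses standard random-hyperplane locality-sensitive hashing as a stand-in: similar points in the unified high-dimensional space agree on most signature bits, which is what lets hash functions encode initial knowledge cheaply.

import numpy as np

rng = np.random.default_rng(0)

def make_lsh(dim: int, n_bits: int):
    # Random hyperplanes: each bit records the side of a plane a point falls on
    planes = rng.standard_normal((n_bits, dim))
    def signature(x):
        return tuple(bool(b) for b in (planes @ x) > 0)
    return signature

sig = make_lsh(dim=8, n_bits=16)
a = rng.standard_normal(8)
b = a + 0.01 * rng.standard_normal(8)   # a near-duplicate data point
# Similar points collide on most bits, so signatures prune the candidate set
print(sum(x == y for x, y in zip(sig(a), sig(b))))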
Code of Federal Regulations, 2010 CFR
2010-07-01
... Ambient Particle Size Distributions F Table F-3 to Subpart F of Part 53 Protection of Environment... Ambient Particle Size Distributions Idealized Distribution Fine Particle Mode MMD (µm) Geo. Std. Dev. Conc. (µg/m3) Coarse Particle Mode MMD (µm) Geo. Std. Dev. Conc. (µg/m3) PM2.5/PM10 Ratio FRM Sampler...
Code of Federal Regulations, 2013 CFR
2013-07-01
... Ambient Particle Size Distributions F Table F-3 to Subpart F of Part 53 Protection of Environment... Ambient Particle Size Distributions Idealized Distribution Fine Particle Mode MMD (µm) Geo. Std. Dev. Conc. (µg/m 3) Coarse Particle Mode MMD (µm) Geo. Std. Dev. Conc. (µg/m 3) PM 2.5/PM 10 Ratio FRM Sampler...
Code of Federal Regulations, 2012 CFR
2012-07-01
... Ambient Particle Size Distributions F Table F-3 to Subpart F of Part 53 Protection of Environment... Ambient Particle Size Distributions Idealized Distribution Fine Particle Mode MMD (µm) Geo. Std. Dev. Conc. (µg/m3) Coarse Particle Mode MMD (µm) Geo. Std. Dev. Conc. (µg/m3) PM2.5/PM10 Ratio FRM Sampler...
Code of Federal Regulations, 2011 CFR
2011-07-01
... Ambient Particle Size Distributions F Table F-3 to Subpart F of Part 53 Protection of Environment... Ambient Particle Size Distributions Idealized Distribution Fine Particle Mode MMD (µm) Geo. Std. Dev. Conc. (µg/m3) Coarse Particle Mode MMD (µm) Geo. Std. Dev. Conc. (µg/m3) PM2.5/PM10 Ratio FRM Sampler...
Code of Federal Regulations, 2014 CFR
2014-07-01
... Ambient Particle Size Distributions F Table F-3 to Subpart F of Part 53 Protection of Environment... Ambient Particle Size Distributions Idealized Distribution Fine Particle Mode MMD (µm) Geo. Std. Dev. Conc. (µg/m3) Coarse Particle Mode MMD (µm) Geo. Std. Dev. Conc. (µg/m3) PM 2.5/PM 10 Ratio FRM Sampler...
Arkansas, 2009 forest inventory and analysis factsheet
James F. Rosson
2011-01-01
The summary includes estimates of forest land area (table 1), ownership (table 2), forest-type groups (table 3), volume (tables 4 and 5), biomass (tables 6 and 7), and pine plantation area (table 8) along with maps of Arkansas' survey units (fig. 1), percent forest by county (fig. 2), and distribution of pine plantations (fig. 3). The estimates are presented by survey...
Gencrypt: one-way cryptographic hashes to detect overlapping individuals across samples
Turchin, Michael C.; Hirschhorn, Joel N.
2012-01-01
Summary: Meta-analysis across genome-wide association studies is a common approach for discovering genetic associations. However, in some meta-analysis efforts, individual-level data cannot be broadly shared by study investigators due to privacy and Institutional Review Board concerns. In such cases, researchers cannot confirm that each study represents a unique group of people, leading to potentially inflated test statistics and false positives. To resolve this problem, we created a software tool, Gencrypt, which utilizes a security protocol known as one-way cryptographic hashes to allow overlapping participants to be identified without sharing individual-level data. Availability: Gencrypt is freely available under the GNU general public license v3 at http://www.broadinstitute.org/software/gencrypt/ Contact: joelh@broadinstitute.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22302573
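The idea can be sketched generically (this is not Gencrypt's actual protocol or file format): each study one-way hashes a per-individual genotype fingerprint with an agreed salt and shares only the digests, so overlapping participants surface as identical digests without exposing individual-level data.

import hashlib

def one_way(fingerprint: str, salt: str) -> str:
    # One-way hash of an individual-level fingerprint; only digests are shared
    return hashlib.sha256((salt + fingerprint).encode()).hexdigest()

SALT = "consortium-agreed-salt"   # hypothetical shared salt

study_a = {one_way(f, SALT) for f in ["AA;AB;BB;AB", "AB;AB;AA;BB"]}
study_b = {one_way(f, SALT) for f in ["AB;AB;AA;BB", "BB;BB;AB;AA"]}

# Overlapping participants appear as identical digests in both sets
print(len(study_a & study_b))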
NASA Astrophysics Data System (ADS)
Moon, Dukjae; Hong, Deukjo; Kwon, Daesung; Hong, Seokhie
We assume that the domain extender is the Merkle-Damgård (MD) scheme and that the message is padded with a ‘1’, a minimum number of ‘0’s, and a fixed-size length field, so that the length of the padded message is a multiple of the block length. Under this assumption, we analyze the security of the hash mode when the compression function follows the Davies-Meyer (DM) scheme and the underlying block cipher is one of the plain Feistel or Misty schemes or the generalized Feistel or Misty schemes with a Substitution-Permutation (SP) round function. We do this work based on Meet-in-the-Middle (MitM) preimage attack techniques, and develop several useful initial structures.
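For orientation, the Davies-Meyer compression function inside a Merkle-Damgård iteration, with the padding rule assumed above, looks roughly like the Python sketch below. The block cipher here is a deliberately insecure placeholder that only fixes the interface E(key, block); the Feistel/Misty ciphers analysed in the paper are not reproduced.

def toy_block_cipher(key: bytes, block: bytes) -> bytes:
    # Placeholder cipher (NOT secure), standing in for the analysed ciphers
    return bytes((b + key[i % len(key)]) % 256 for i, b in enumerate(block))

def davies_meyer(h: bytes, m: bytes) -> bytes:
    # DM compression: h_i = E_{m_i}(h_{i-1}) XOR h_{i-1}
    c = toy_block_cipher(m, h)
    return bytes(a ^ b for a, b in zip(c, h))

def md_hash(message: bytes, block_len: int = 16) -> bytes:
    # Merkle-Damgard padding: a '1' bit (0x80), minimal zeros, then the length
    padded = message + b"\x80"
    padded += b"\x00" * (-(len(padded) + 8) % block_len)
    padded += len(message).to_bytes(8, "big")
    h = bytes(block_len)  # fixed initial chaining value
    for i in range(0, len(padded), block_len):
        h = davies_meyer(h, padded[i:i + block_len])
    return h

print(md_hash(b"example message").hex())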
NASA Astrophysics Data System (ADS)
Siswantyo, Sepha; Susanti, Bety Hayat
2016-02-01
Preneel-Govaerts-Vandewalle (PGV) schemes consist of 64 possible single-block-length schemes that can be used to build a hash function based on block ciphers. Of those 64 schemes, Preneel claimed that 4 are secure. In this paper, we apply a length extension attack to those 4 secure PGV schemes, which use the RC5 algorithm in their basic construction, to test their collision resistance property. The attack results show that collisions occur in those 4 secure PGV schemes. Based on the analysis, we indicate that the Feistel structure and the data-dependent rotation operation in the RC5 algorithm, the XOR operations in the scheme, and the selection of the additional message block value all contribute to the occurrence of collisions.
2009-09-01
DISTRIBUTION/AVAILABILITY STATEMENT: Approved for public release; distribution unlimited...Implementing FAR Ethics Rules 10 Table 4: Contractor Responses on Practices Now Required by the FAR for Code of Business Ethics and Conduct 12 Table 5...Fiscal Year 2006 35 Table 12: Required FAR Components for Contractor Ethics Program Practices 43 Figure 1: DOD Hotline Posters Available
VizieR Online Data Catalog: Statistical test on binary stars non-coevality (Valle+, 2016)
NASA Astrophysics Data System (ADS)
Valle, G.; Dell'Omodarme, M.; Valle, G.; Prada Moroni, P. G.; Degl'Innocenti, S.
2016-01-01
The table contains the W0.95 critical values for the 1087 binary systems considered in the paper. The table also lists the parameters of the beta distributions approximating the empirical W distributions. (1 data file).
Building a Library for Microelectronics Verification with Topological Constraints
2017-03-01
Tables 1d, 3b); 1-bit full adder cell (Fig. 1), respectively. Table 5. Frequency distributions for the genus of logically equivalent circuit...Figure 1 shows that switching signal pairs produces logically-equivalent topologies of the 1-bit full adder cell with three values of the genus (g = 3 [1...case], 4, 5, 6). Figure 1. Frequency distribution for logically equivalent circuit topologies of the 1-bit full adder cell (2048) in Table 1(e
Tag Content Access Control with Identity-based Key Exchange
NASA Astrophysics Data System (ADS)
Yan, Liang; Rong, Chunming
2010-09-01
Radio Frequency Identification (RFID) technology, used to identify objects and users, has recently been applied to many applications such as retail and supply chains. How to prevent tag content from unauthorized readout is a core problem among RFID privacy issues. The hash-lock access control protocol makes a tag release its content only to a reader that knows the secret key shared between them. However, in order to obtain the shared secret key required by this protocol, the reader needs to communicate with a back-end database. In this paper, we propose to use an identity-based secret key exchange approach to generate the secret key required for the hash-lock access control protocol. With this approach, not only is a back-end database connection no longer needed, but the tag cloning problem can also be eliminated at the same time.
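A rough sketch of the basic hash-lock idea (the identity-based key exchange that replaces the database lookup is not modeled here; names and values are illustrative): the tag stores only the hash of its key, its metaID, and releases its content when a reader presents a key that hashes to that value.

import hashlib, secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Locking: the tag stores only the hash of the key (its "metaID")
key = secrets.token_bytes(16)
meta_id = H(key)

def tag_unlock(presented_key, tag_content):
    # The tag releases its content only if the presented key hashes to metaID
    if H(presented_key) == meta_id:
        return tag_content
    return None

print(tag_unlock(key, b"tag payload"))                   # content released
print(tag_unlock(b"wrong-key-0123456", b"tag payload"))  # None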
NASA Astrophysics Data System (ADS)
Yang, YuGuang; Zhang, YuChen; Xu, Gang; Chen, XiuBo; Zhou, Yi-Hua; Shi, WeiMin
2018-03-01
Li et al. first proposed a quantum hash function (QHF) in a quantum-walk architecture. In their scheme, two two-particle interactions, i.e., the I interaction and the π-phase interaction, are introduced, and the choice of the I or π-phase interaction at each iteration depends on a message bit. In this paper, we propose an efficient QHF by dense coding of coin operators in a discrete-time quantum walk. Compared with existing QHFs, our protocol has the following advantages: the efficiency of the QHF can be doubled or even further improved; only one particle is needed and two-particle interactions are unnecessary, so quantum resources are saved. This points toward applying the dense coding technique to quantum cryptographic protocols, especially to applications with restricted quantum resources.
Tangible interactive system for document browsing and visualisation of multimedia data
NASA Astrophysics Data System (ADS)
Rytsar, Yuriy; Voloshynovskiy, Sviatoslav; Koval, Oleksiy; Deguillaume, Frederic; Topak, Emre; Startchik, Sergei; Pun, Thierry
2006-01-01
In this paper we introduce and develop a framework for interactive document navigation in multimodal databases. First, we analyze the main open issues of existing multimodal interfaces and then discuss two applications that include interaction with documents in several human environments, i.e., the so-called smart rooms. Second, we propose a system set-up dedicated to efficient navigation in printed documents. This set-up is based on the fusion of data from several modalities that include images and text. Both modalities can be used as cover data for hidden indexes using data-hiding technologies as well as source data for robust visual hashing. The particularities of the proposed robust visual hashing are described in the paper. Finally, we address two practical applications of smart rooms for tourism and education and demonstrate the advantages of the proposed solution.
A Fast Optimization Method for General Binary Code Learning.
Shen, Fumin; Zhou, Xiang; Yang, Yang; Song, Jingkuan; Shen, Heng; Tao, Dacheng
2016-09-22
Hashing or binary code learning has been recognized to accomplish efficient near neighbor search, and has thus attracted broad interest in recent retrieval, vision and learning studies. One main challenge of learning to hash arises from the involvement of discrete variables in binary code optimization. While the widely-used continuous relaxation may achieve high learning efficiency, the pursued codes are typically less effective due to accumulated quantization error. In this work, we propose a novel binary code optimization method, dubbed Discrete Proximal Linearized Minimization (DPLM), which directly handles the discrete constraints during the learning process. Specifically, the discrete (thus nonsmooth nonconvex) problem is reformulated as minimizing the sum of a smooth loss term with a nonsmooth indicator function. The obtained problem is then efficiently solved by an iterative procedure with each iteration admitting an analytical discrete solution, which is thus shown to converge very fast. In addition, the proposed method supports a large family of empirical loss functions, which is particularly instantiated in this work by both a supervised and an unsupervised hashing loss, together with the bit uncorrelation and balance constraints. In particular, the proposed DPLM with a supervised ℓ2 loss encodes the whole NUS-WIDE database into 64-bit binary codes within 10 seconds on a standard desktop computer. The proposed approach is extensively evaluated on several large-scale datasets and the generated binary codes are shown to achieve very promising results on both retrieval and classification tasks.
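The analytical discrete solution mentioned above hinges on one observation: the proximal map of the indicator of {-1, +1} codes is an element-wise sign, so each iteration is a gradient step on the smooth loss followed by sign(). The toy below illustrates only that generic step on a made-up squared loss; it is not the authors' full DPLM with supervision and bit-balance constraints.

import numpy as np

def discrete_proximal_step(B, grad, step):
    # Gradient step on the smooth loss, then project back onto {-1, +1}
    # (the proximal map of the binary indicator is an element-wise sign)
    return np.where(B - step * grad >= 0, 1.0, -1.0)

rng = np.random.default_rng(0)
Y = rng.standard_normal((5, 8))     # made-up real-valued targets
B = np.sign(Y)                      # initial binary codes
for _ in range(10):
    grad = 2.0 * (B - Y)            # gradient of ||B - Y||^2 w.r.t. B
    B = discrete_proximal_step(B, grad, step=0.5)
print(B)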
The Development of Administrative Measures as Indicators of Soldier Effectiveness
1987-08-01
CONTENTS (Continued) Page LIST OF TABLES (Continued) Table 13. Frequency distributions for selected variables: MPRJ/EMF comparison 28 14...Table 13 Frequency Distributions for Selected Variables: (n = 650 soldiers) MPRJ/EMF Comparison...information were examined: (a) the Enlisted Master File (EMF), a central computer record of selected personnel actions; (b) the Official Military Personnel
26 CFR 1.1368-0 - Table of contents.
Code of Federal Regulations, 2014 CFR
2014-04-01
...) INCOME TAXES (CONTINUED) Small Business Corporations and Their Shareholders § 1.1368-0 Table of contents... with no earnings and profits. (d) S corporation with earnings and profits. (1) General treatment of... relating to source of distributions. (1) In general. (2) Election to distribute earnings and profits first...
26 CFR 1.1368-0 - Table of contents.
Code of Federal Regulations, 2011 CFR
2011-04-01
...) INCOME TAXES (CONTINUED) Small Business Corporations and Their Shareholders § 1.1368-0 Table of contents... with no earnings and profits. (d) S corporation with earnings and profits. (1) General treatment of... relating to source of distributions. (1) In general. (2) Election to distribute earnings and profits first...
26 CFR 1.1368-0 - Table of contents.
Code of Federal Regulations, 2012 CFR
2012-04-01
...) INCOME TAXES (CONTINUED) Small Business Corporations and Their Shareholders § 1.1368-0 Table of contents... with no earnings and profits. (d) S corporation with earnings and profits. (1) General treatment of... relating to source of distributions. (1) In general. (2) Election to distribute earnings and profits first...
26 CFR 1.1368-0 - Table of contents.
Code of Federal Regulations, 2013 CFR
2013-04-01
...) INCOME TAXES (CONTINUED) Small Business Corporations and Their Shareholders § 1.1368-0 Table of contents... with no earnings and profits. (d) S corporation with earnings and profits. (1) General treatment of... relating to source of distributions. (1) In general. (2) Election to distribute earnings and profits first...
Distribution Tables and Private Tests: The Failure of Middle School Reform in Japan.
ERIC Educational Resources Information Center
LeTendre, Gerald K.
1994-01-01
In November 1992, the Japanese Ministry of Education declared that middle school teachers could no longer use distribution tables produced by private testing companies to predetermine high school students' curricula. Failure to implement the reform stems from structural and cultural roots. By presorting students and molding their expectations, traditional…
Aerospace Ground Equipment (AGE) AFSC 454X1
1992-01-01
Excerpt (scanned report AD-A250 280): United States Air Force Occupational Survey Report, Aerospace Ground Equipment (AGE), AFSC 454X1. Approved for public release; distribution unlimited. Tables include the distribution of AFSC 454X1 personnel, the paygrade distribution of the 454X1 survey sample, and relative percent time spent.
Optimizing the inner loop of the gravitational force interaction on modern processors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warren, Michael S
2010-12-08
We have achieved superior performance on multiple generations of the fastest supercomputers in the world with our hashed oct-tree N-body code (HOT), spanning almost two decades and garnering multiple Gordon Bell Prizes for significant achievement in parallel processing. Execution time for our N-body code is largely influenced by the force calculation in the inner loop. Improvements to the inner loop using SSE3 instructions have enabled the calculation of over 200 million gravitational interactions per second per processor on a 2.6 GHz Opteron, for a computational rate of over 7 Gflops in single precision (70% of peak). We obtain optimal performance on some processors (including the Cell) by decomposing the reciprocal square root function required for a gravitational interaction into a table lookup, Chebyshev polynomial interpolation, and Newton-Raphson iteration, using the algorithm of Karp. By unrolling the loop by a factor of six, and using SPU intrinsics to compute on vectors, we obtain performance of over 16 Gflops on a single Cell SPE. Aggregated over the 8 SPEs on a Cell processor, the overall performance is roughly 130 Gflops. In comparison, the ordinary C version of our inner loop only obtains 1.6 Gflops per SPE with the spuxlc compiler.
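A toy sketch of the decomposition described for the reciprocal square root (coarse table lookup followed by Newton-Raphson refinement); the table resolution and iteration count are illustrative, and the Chebyshev interpolation stage is omitted:

```python
import numpy as np

# Coarse lookup table of 1/sqrt(m) for mantissas m in [1, 2), indexed by the top bits.
TABLE_BITS = 6
TABLE = 1.0 / np.sqrt(1.0 + (np.arange(1 << TABLE_BITS) + 0.5) / (1 << TABLE_BITS))

def rsqrt(x):
    """Reciprocal square root via table lookup plus two Newton-Raphson iterations."""
    m, e = np.frexp(x)                     # x = m * 2**e with m in [0.5, 1)
    m, e = m * 2.0, e - 1                  # renormalize so m is in [1, 2)
    y = TABLE[int((m - 1.0) * (1 << TABLE_BITS))] * 2.0 ** (-e / 2.0)
    for _ in range(2):                     # Newton-Raphson: y <- y * (1.5 - 0.5 * x * y * y)
        y = y * (1.5 - 0.5 * x * y * y)
    return y

print(rsqrt(10.0), 1.0 / np.sqrt(10.0))    # the two values should agree closely
```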
High-Speed Device-Independent Quantum Random Number Generation without a Detection Loophole
NASA Astrophysics Data System (ADS)
Liu, Yang; Yuan, Xiao; Li, Ming-Han; Zhang, Weijun; Zhao, Qi; Zhong, Jiaqiang; Cao, Yuan; Li, Yu-Huai; Chen, Luo-Kan; Li, Hao; Peng, Tianyi; Chen, Yu-Ao; Peng, Cheng-Zhi; Shi, Sheng-Cai; Wang, Zhen; You, Lixing; Ma, Xiongfeng; Fan, Jingyun; Zhang, Qiang; Pan, Jian-Wei
2018-01-01
Quantum mechanics provides the means of generating genuine randomness that is impossible with deterministic classical processes. Remarkably, the unpredictability of randomness can be certified in a manner that is independent of implementation devices. Here, we present an experimental study of device-independent quantum random number generation based on a detection-loophole-free Bell test with entangled photons. In the randomness analysis, without the independent identical distribution assumption, we consider the worst case scenario that the adversary launches the most powerful attacks against the quantum adversary. After considering statistical fluctuations and applying an 80 Gb × 45.6 Mb Toeplitz matrix hashing, we achieve a final random bit rate of 114 bits/s, with a failure probability less than 10^-5. This marks a critical step towards realistic applications in cryptography and fundamental physics tests.
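A toy sketch of Toeplitz-matrix hashing as a randomness extractor, at tiny dimensions for illustration (the experiment used an 80 Gb × 45.6 Mb matrix):

```python
import numpy as np

def toeplitz_hash(raw_bits, seed_bits, out_len):
    """Extract out_len nearly uniform bits from raw_bits with a binary Toeplitz matrix.

    T has shape (out_len, n) and constant diagonals: T[i, j] = seed_bits[i - j + n - 1],
    so the whole matrix is specified by out_len + n - 1 seed bits.
    """
    n = len(raw_bits)
    assert len(seed_bits) == out_len + n - 1
    raw = np.asarray(raw_bits, dtype=int)
    T = np.empty((out_len, n), dtype=int)
    for i in range(out_len):
        for j in range(n):
            T[i, j] = seed_bits[i - j + n - 1]
    return (T @ raw) % 2                     # matrix-vector product over GF(2)

rng = np.random.default_rng(1)
raw = rng.integers(0, 2, 64)                 # weakly random measurement bits
seed = rng.integers(0, 2, 16 + 64 - 1)       # uniform seed defining the Toeplitz matrix
print(toeplitz_hash(raw, seed, out_len=16))
```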
Efficient Management of Certificate Revocation Lists in Smart Grid Advanced Metering Infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cebe, Mumin; Akkaya, Kemal
Advanced Metering Infrastructure (AMI) forms a communication network for the collection of power data from smart meters in the Smart Grid. As the communication within an AMI needs to be secure, key management becomes an issue due to overhead and limited resources. While using public keys eliminates some of the overhead of key management, there are still challenges regarding the certificates that store and certify the public keys. In particular, distribution and storage of the certificate revocation list (CRL) is a major challenge due to the cost of distribution and storage in AMI networks, which typically consist of wireless multi-hop networks. Motivated by the need to keep CRL distribution and storage cost effective and scalable, in this paper we present a distributed CRL management model utilizing the idea of distributed hash trees (DHTs) from peer-to-peer (P2P) networks. The basic idea is to share the burden of storing CRLs among all the smart meters by exploiting their ability to mesh with one another. Thus, using DHTs not only reduces the space requirements for CRLs but also makes CRL updates more convenient. We implemented this structure in ns-3 using the IEEE 802.11s mesh standard as a model for AMI and demonstrated its superior performance with respect to traditional methods of CRL management through extensive simulations.
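A minimal consistent-hashing sketch of the idea of spreading CRL entries across meters; the paper's distributed hash trees and 802.11s mesh behavior are not modeled, and the names below are invented for illustration:

```python
import bisect
import hashlib

def h(value: str) -> int:
    """Stable 64-bit hash used to place meters and certificate serials on a ring."""
    return int.from_bytes(hashlib.sha256(value.encode()).digest()[:8], "big")

class CrlRing:
    """Assign revoked-certificate serial numbers to smart meters on a hash ring."""

    def __init__(self, meter_ids):
        self.ring = sorted((h(m), m) for m in meter_ids)
        self.keys = [k for k, _ in self.ring]

    def responsible_meter(self, serial: str) -> str:
        """The meter whose ring position is the first at or after hash(serial)."""
        i = bisect.bisect_left(self.keys, h(serial)) % len(self.ring)
        return self.ring[i][1]

ring = CrlRing([f"meter-{i}" for i in range(8)])
for serial in ["0A1B2C", "0A1B2D", "FFEE01"]:
    print(serial, "->", ring.responsible_meter(serial))
```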
Asymmetric distances for binary embeddings.
Gordo, Albert; Perronnin, Florent; Gong, Yunchao; Lazebnik, Svetlana
2014-01-01
In large-scale query-by-example retrieval, embedding image signatures in a binary space offers two benefits: data compression and search efficiency. While most embedding algorithms binarize both query and database signatures, it has been noted that this is not strictly a requirement. Indeed, asymmetric schemes that binarize the database signatures but not the query still enjoy the same two benefits but may provide superior accuracy. In this work, we propose two general asymmetric distances that are applicable to a wide variety of embedding techniques including locality sensitive hashing (LSH), locality sensitive binary codes (LSBC), spectral hashing (SH), PCA embedding (PCAE), PCAE with random rotations (PCAE-RR), and PCAE with iterative quantization (PCAE-ITQ). We experiment on four public benchmarks containing up to 1M images and show that the proposed asymmetric distances consistently lead to large improvements over the symmetric Hamming distance for all binary embedding techniques.
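A small sketch of one possible asymmetric distance: database vectors are binarized to ±1 codes while the query stays real-valued; the paper's specific expectation-based and lower-bound distances are not reproduced:

```python
import numpy as np

def asymmetric_sqdist(query, codes):
    """Squared Euclidean distance between a real-valued query and {-1,+1} codes.

    query: (d,) real vector already mapped into the embedding space.
    codes: (n, d) matrix with entries in {-1, +1}.
    """
    # ||q - b||^2 = ||q||^2 + d - 2 q.b  since ||b||^2 = d for +/-1 codes.
    d = codes.shape[1]
    return np.dot(query, query) + d - 2.0 * codes @ query

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 32))         # database points in the embedding space
codes = np.sign(X)                          # binarized database signatures
q = rng.standard_normal(32)                 # query is left un-binarized (asymmetric)
nearest = np.argsort(asymmetric_sqdist(q, codes))[:5]
print(nearest)
```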
Handwriting: Feature Correlation Analysis for Biometric Hashes
NASA Astrophysics Data System (ADS)
Vielhauer, Claus; Steinmetz, Ralf
2004-12-01
In the application domain of electronic commerce, biometric authentication can provide one possible solution for the key management problem. Besides server-based approaches, methods of deriving digital keys directly from biometric measures appear to be advantageous. In this paper, we analyze one of our recently published specific algorithms of this category based on behavioral biometrics of handwriting, the biometric hash. Our interest is to investigate to which degree each of the underlying feature parameters contributes to the overall intrapersonal stability and interpersonal value space. We will briefly discuss related work in feature evaluation and introduce a new methodology based on three components: the intrapersonal scatter (deviation), the interpersonal entropy, and the correlation between both measures. Evaluation of the technique is presented based on two data sets of different size. The method presented will allow determination of effects of parameterization of the biometric system, estimation of value space boundaries, and comparison with other feature selection approaches.
A one-time pad color image cryptosystem based on SHA-3 and multiple chaotic systems
NASA Astrophysics Data System (ADS)
Wang, Xingyuan; Wang, Siwei; Zhang, Yingqian; Luo, Chao
2018-04-01
A novel image encryption algorithm is proposed that combines the SHA-3 hash function and two chaotic systems: the hyper-chaotic Lorenz and Chen systems. First, 384 bit keystream hash values are obtained by applying SHA-3 to plaintext. The sensitivity of the SHA-3 algorithm and chaotic systems ensures the effect of a one-time pad. Second, the color image is expanded into three-dimensional space. During permutation, it undergoes plane-plane displacements in the x, y and z dimensions. During diffusion, we use the adjacent pixel dataset and corresponding chaotic value to encrypt each pixel. Finally, the structure of alternating between permutation and diffusion is applied to enhance the level of security. Furthermore, we design techniques to improve the algorithm's encryption speed. Our experimental simulations show that the proposed cryptosystem achieves excellent encryption performance and can resist brute-force, statistical, and chosen-plaintext attacks.
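A minimal sketch of the first step only, deriving a 384-bit value from the plaintext with SHA-3 and splitting it into candidate chaotic initial conditions; the splitting rule is an assumption, and the Lorenz/Chen iteration, permutation, and diffusion stages are not shown:

```python
import hashlib

def sha3_384_keys(plaintext: bytes):
    """Hash the plaintext with SHA3-384 and carve the 384-bit digest into six
    64-bit words, normalized to (0, 1) for use as chaotic initial conditions."""
    digest = hashlib.sha3_384(plaintext).digest()          # 48 bytes = 384 bits
    words = [int.from_bytes(digest[i:i + 8], "big") for i in range(0, 48, 8)]
    return [w / 2**64 for w in words]

keys = sha3_384_keys(b"example plaintext image bytes")
print(keys)  # one-time-pad effect: any plaintext change alters every derived key
```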
A hybrid cloud read aligner based on MinHash and kmer voting that preserves privacy
NASA Astrophysics Data System (ADS)
Popic, Victoria; Batzoglou, Serafim
2017-05-01
Low-cost clouds can alleviate the compute and storage burden of the genome sequencing data explosion. However, moving personal genome data analysis to the cloud can raise serious privacy concerns. Here, we devise a method named Balaur, a privacy preserving read mapper for hybrid clouds based on locality sensitive hashing and kmer voting. Balaur can securely outsource a substantial fraction of the computation to the public cloud, while being highly competitive in accuracy and speed with non-private state-of-the-art read aligners on short read data. We also show that the method is significantly faster than the state of the art in long read mapping. Therefore, Balaur can enable institutions handling massive genomic data sets to shift part of their analysis to the cloud without sacrificing accuracy or exposing sensitive information to an untrusted third party.
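A toy MinHash sketch over a read's k-mer set, the kind of fingerprint an LSH/kmer-voting pipeline could compare; the parameters and hash family are illustrative, not Balaur's:

```python
import hashlib

def kmers(seq: str, k: int = 16):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def minhash_signature(seq: str, num_hashes: int = 32, k: int = 16):
    """MinHash signature: for each of num_hashes salted hash functions, keep the
    minimum hash value over the read's k-mers."""
    sig = []
    for salt in range(num_hashes):
        sig.append(min(
            int.from_bytes(hashlib.blake2b(f"{salt}:{km}".encode(), digest_size=8).digest(), "big")
            for km in kmers(seq, k)))
    return sig

def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching signature slots estimates the k-mer Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

a = "ACGTACGTTTGACCAGTACGGATCCAGTTTACGGA"
b = "ACGTACGTTTGACCAGTACGGATCCAGTTAACGGA"   # one substitution
print(estimated_jaccard(minhash_signature(a), minhash_signature(b)))
```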
Distributed Storage Inverter and Legacy Generator Integration Plus Renewable Solution for Microgrids
2015-07-01
Contents excerpt: Demonstration 6 covers PV plus storage support for managing variable solar generation; tables report the energy generated by solar PV for one month, NG generator energy savings with solar PV, and NG generator fuel savings with solar PV.
Tables of Optimal Allocations of Observations for Comparing Treatments with a Control.
1981-01-01
Excerpt (scanned report): tables of the p-variate |t|-distribution with degrees of freedom ν and equal correlations ρ; values are tabulated for selected degrees of freedom and 1-α by Krishnaiah and Armitage (1965), with entries for ν = 60 from the tables of Hahn and Hendrickson (1971). Cited works include Multivariate Analysis - II (Ed. P. R. Krishnaiah, New York: Academic Press, pp. 463-473) and Bechhofer and Nocturne (1972).
NASA Astrophysics Data System (ADS)
Irving, D. H.; Rasheed, M.; O'Doherty, N.
2010-12-01
The efficient storage, retrieval and interactive use of subsurface data present great challenges in geodata management. Data volumes are typically massive, complex and poorly indexed with inadequate metadata. Derived geomodels and interpretations are often tightly bound in application-centric and proprietary formats; open standards for long-term stewardship are poorly developed. Consequently, current data storage is a combination of: complex Logical Data Models (LDMs) based on file storage formats; 2D GIS tree-based indexing of spatial data; and translations of serialised memory-based storage techniques into disk-based storage. Whilst adequate for working at the mesoscale over short timeframes, these approaches all possess technical and operational shortcomings: data model complexity; anisotropy of access; scalability to large and complex datasets; and weak implementation and integration of metadata. High-performance hardware such as parallelised storage and Relational Database Management Systems (RDBMS) have long been exploited in many solutions, but the underlying data structure must provide commensurate efficiencies to allow multi-user, multi-application and near-realtime data interaction. We present an open Spatially-Registered Data Structure (SRDS) built on a Massively Parallel Processing (MPP) database architecture implemented by an ANSI SQL 2008 compliant RDBMS. We propose an LDM comprising a 3D Earth model that is decomposed such that each increasing Level of Detail (LoD) is achieved by recursively halving the bin size until it is less than the error in each spatial dimension for that data point. The value of an attribute at that point is stored as a property of that point and at that LoD. It is key to the numerical efficiency of the SRDS that it is underpinned by a power-of-two relationship, thus precluding the need for computationally intensive floating point arithmetic. Our approach employed a tightly clustered MPP array with small clusters of storage, processors and memory communicating over a high-speed network interconnect. This is a shared-nothing architecture where resources are managed within each cluster, unlike most other RDBMSs. Data are accessed on this architecture by their primary index values, which utilises the hashing algorithm for point-to-point access. The hashing algorithm's main role is the efficient distribution of data across the clusters based on the primary index. In this study we used 3D seismic volumes, 2D seismic profiles and borehole logs to demonstrate application in both (x,y,TWT) and (x,y,z)-space. In the SRDS the primary index is a composite column index of (x,y) to avoid invoking time-consuming full table scans as is the case in tree-based systems. This means that data access is isotropic. A query for data in a specified spatial range permits retrieval recursively by point-to-point queries within each nested LoD, yielding true linear performance up to the Petabyte scale with hardware scaling presenting the primary limiting factor. Our architecture and LDM promote: realtime interaction with massive data volumes; streaming of result sets and server-rendered 2D/3D imagery; rigorous workflow control and auditing; and in-database algorithms run directly against data as an HPC cloud service.
2013-08-22
Report documentation excerpt (Standard Form 298): performing organization in Olympia, Washington 98501; tables include a summary of survey effort by day, November 2012 - March 2013.
TSP Symposium 2012 Proceedings
2012-11-01
Contents excerpt: sections cover the statistical model, analysis and results, threats to validity and limitations, conclusions, and acknowledgments. Tables report overall statistics of the experiment, results of a pairwise ANOVA analysis highlighting statistically significant differences, and distribution statistics for the percentage of defects injected.
2007-09-26
From left: Data Parallel Line Relaxation (DPLR) software team members Kerry Trumble, Deepak Bose and David Hash analyze and predict the extreme environments NASA's space shuttle experiences during its super high-speed reentry into Earth’s atmosphere.
2017-05-22
Figure and table listing excerpt: angular velocity values; feasibility test; Bellman's Principle and its validation; distributions of a test statistic at test points for simulated and observed ISR traffic; PDFs of observed and ISR traffic; adversary security states and hypothesis testing at test point #10.
Practical comparison of distributed ledger technologies for IoT
NASA Astrophysics Data System (ADS)
Red, Val A.
2017-05-01
Existing distributed ledger implementations - specifically, several blockchain implementations - embody a cacophony of divergent capabilities augmenting innovations of cryptographic hashes, consensus mechanisms, and asymmetric cryptography in a wide variety of applications. Whether specifically designed for cryptocurrency or otherwise, several distributed ledgers rely upon modular mechanisms such as consensus or smart contracts. These components, however, can vary substantially among implementations; differences involving proof-of-work, practical byzantine fault tolerance, and other consensus approaches exemplify distinct distributed ledger variations. Such divergence results in unique combinations of modules, performance, latency, and fault tolerance. As implementations continue to develop rapidly due to the emerging nature of blockchain technologies, this paper encapsulates a snapshot of sensor and internet of things (IoT) specific implementations of blockchain as of the end of 2016. Several technical risks and divergent approaches preclude standardization of a blockchain for sensors and IoT in the foreseeable future; such issues will be assessed alongside the practicality of IoT applications among Hyperledger, Iota, and Ethereum distributed ledger implementations suggested for IoT. This paper contributes a comparison of existing distributed ledger implementations intended for practical sensor and IoT utilization. A baseline for characterizing distributed ledger implementations in the context of IoT and sensors is proposed. Technical approaches and performance are compared considering IoT size, weight, and power limitations. Consensus and smart contracts, if applied, are also analyzed for the respective implementations' practicality and security. Overall, the maturity of distributed ledgers with respect to sensor and IoT applicability will be analyzed for enterprise interoperability.
Real Time Cockpit Resource Management (CRM) Training
2010-10-01
Excerpt (scanned report): tables give learning scores for the five Spiral 1 classes (pilot and sensor-operator pretest/posttest differences) and pretest/posttest gain scores associated with each learning test item. Small Business Innovation Research (SBIR) Phase II report; Distribution A: approved for public release, distribution unlimited.
Parallel file system with metadata distributed across partitioned key-value store c
Bent, John M.; Faibish, Sorin; Grider, Gary; Torres, Aaron
2017-09-19
Improved techniques are provided for storing metadata associated with a plurality of sub-files associated with a single shared file in a parallel file system. The shared file is generated by a plurality of applications executing on a plurality of compute nodes. A compute node implements a Parallel Log Structured File System (PLFS) library to store at least one portion of the shared file generated by an application executing on the compute node and metadata for the at least one portion of the shared file on one or more object storage servers. The compute node is also configured to implement a partitioned data store for storing a partition of the metadata for the shared file, wherein the partitioned data store communicates with partitioned data stores on other compute nodes using a message passing interface. The partitioned data store can be implemented, for example, using Multidimensional Data Hashing Indexing Middleware (MDHIM).
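A toy sketch of the partitioning idea: metadata records for sub-files of one shared file are sharded across per-node key-value partitions by hashing the key; MDHIM itself range-partitions keys and communicates over MPI, which this sketch does not reproduce:

```python
import hashlib

class PartitionedMetadataStore:
    """Metadata for sub-files of a shared file, sharded across compute nodes."""

    def __init__(self, num_nodes: int):
        self.partitions = [dict() for _ in range(num_nodes)]  # one local store per node

    def _owner(self, key: str) -> int:
        digest = hashlib.sha256(key.encode()).digest()
        return int.from_bytes(digest[:4], "big") % len(self.partitions)

    def put(self, key: str, value: dict) -> None:
        self.partitions[self._owner(key)][key] = value

    def get(self, key: str) -> dict:
        return self.partitions[self._owner(key)][key]

store = PartitionedMetadataStore(num_nodes=4)
# Each writer records where its portion of the shared file physically lives.
store.put("shared.out/rank-0017/offset-0", {"object_server": "oss-3", "length": 1 << 20})
print(store.get("shared.out/rank-0017/offset-0"))
```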
NASA Astrophysics Data System (ADS)
Williams, C. R.
2012-12-01
The NASA Global Precipitation Mission (GPM) raindrop size distribution (DSD) Working Group is composed of NASA PMM Science Team Members and is charged to "investigate the correlations between DSD parameters using Ground Validation (GV) data sets that support, or guide, the assumptions used in satellite retrieval algorithms." Correlations between DSD parameters can be used to constrain the unknowns and reduce the degrees-of-freedom in under-constrained satellite algorithms. Over the past two years, the GPM DSD Working Group has analyzed GV data and has found correlations between the mass-weighted mean raindrop diameter (Dm) and the mass distribution standard deviation (Sm) that follows a power-law relationship. This Dm-Sm power-law relationship appears to be robust and has been observed in surface disdrometer and vertically pointing radar observations. One benefit of a Dm-Sm power-law relationship is that a three parameter DSD can be modeled with just two parameters: Dm and Nw that determines the DSD amplitude. In order to incorporate observed DSD correlations into satellite algorithms, the GPM DSD Working Group is developing scattering and integral tables that can be used by satellite algorithms. Scattering tables describe the interaction of electromagnetic waves on individual particles to generate cross sections of backscattering, extinction, and scattering. Scattering tables are independent of the distribution of particles. Integral tables combine scattering table outputs with DSD parameters and DSD correlations to generate integrated normalized reflectivity, attenuation, scattering, emission, and asymmetry coefficients. Integral tables contain both frequency dependent scattering properties and cloud microphysics. The GPM DSD Working Group has developed scattering tables for raindrops at both Dual Precipitation Radar (DPR) frequencies and at all GMI radiometer frequencies less than 100 GHz. Scattering tables include Mie and T-matrix scattering with H- and V-polarization at the instrument view angles of nadir to 17 degrees (for DPR) and 48 & 53 degrees off nadir (for GMI). The GPM DSD Working Group is generating integral tables with GV observed DSD correlations and is performing sensitivity and verification tests. One advantage of keeping scattering tables separate from integral tables is that research can progress on the electromagnetic scattering of particles independent of cloud microphysics research. Another advantage of keeping the tables separate is that multiple scattering tables will be needed for frozen precipitation. Scattering tables are being developed for individual frozen particles based on habit, density and operating frequency. And a third advantage of keeping scattering and integral tables separate is that this framework provides an opportunity to communicate GV findings about DSD correlations into integral tables, and thus, into satellite algorithms.
Statistical analysis tables for truncated or censored samples
NASA Technical Reports Server (NTRS)
Cohen, A. C.; Cooley, C. G.
1971-01-01
Compilation describes characteristics of truncated and censored samples, and presents six illustrations of practical use of tables in computing mean and variance estimates for normal distribution using selected samples.
Running Clubs--A Combinatorial Investigation.
ERIC Educational Resources Information Center
Nissen, Phillip; Taylor, John
1991-01-01
Presented is a combinatorial problem based on the Hash House Harriers rule which states that the route of the run should not have previously been traversed by the club. Discovered is how many weeks the club can meet before the rule has to be broken. (KR)
Load Balancing in Structured P2P Networks
NASA Astrophysics Data System (ADS)
Zhu, Yingwu
In this chapter we start by addressing the importance and necessity of load balancing in structured P2P networks, due to three main reasons. First, structured P2P networks assume uniform peer capacities while peer capacities are heterogeneous in deployed P2P networks. Second, resorting to pseudo-uniformity of the hash function used to generate node IDs and data item keys leads to imbalanced overlay address space and item distribution. Lastly, placement of data items cannot be randomized in some applications (e.g., range searching). We then present an overview of load aggregation and dissemination techniques that are required by many load balancing algorithms. Two techniques are discussed including tree structure-based approach and gossip-based approach. They make different tradeoffs between estimate/aggregate accuracy and failure resilience. To address the issue of load imbalance, three main solutions are described: virtual server-based approach, power of two choices, and address-space and item balancing. While different in their designs, they all aim to improve balance on the address space and data item distribution. As a case study, the chapter discusses a virtual server-based load balancing algorithm that strives to ensure fair load distribution among nodes and minimize load balancing cost in bandwidth. Finally, the chapter concludes with future research and a summary.
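A minimal sketch of the "power of two choices" approach mentioned above: each item is hashed to two candidate nodes and stored on the less loaded one (the hash family and node count are illustrative):

```python
import hashlib
from collections import Counter

def node_for(item: str, salt: int, num_nodes: int) -> int:
    digest = hashlib.sha256(f"{salt}:{item}".encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_nodes

def place_items(items, num_nodes=16):
    """Place each item on the lighter of its two hashed candidate nodes."""
    load = Counter()
    placement = {}
    for item in items:
        a, b = node_for(item, 0, num_nodes), node_for(item, 1, num_nodes)
        chosen = a if load[a] <= load[b] else b
        load[chosen] += 1
        placement[item] = chosen
    return placement, load

placement, load = place_items([f"key-{i}" for i in range(10000)])
print("max load:", max(load.values()), "min load:", min(load.values()))
```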
InChIKey collision resistance: an experimental testing.
Pletnev, Igor; Erin, Andrey; McNaught, Alan; Blinov, Kirill; Tchekhovskoi, Dmitrii; Heller, Steve
2012-12-20
InChIKey is a 27-character compacted (hashed) version of InChI which is intended for Internet and database searching/indexing and is based on an SHA-256 hash of the InChI character string. The first block of InChIKey encodes molecular skeleton while the second block represents various kinds of isomerism (stereo, tautomeric, etc.). InChIKey is designed to be a nearly unique substitute for the parent InChI. However, a single InChIKey may occasionally map to two or more InChI strings (collision). The appearance of collision itself does not compromise the signature as collision-free hashing is impossible; the only viable approach is to set and keep a reasonable level of collision resistance which is sufficient for typical applications. We tested, in computational experiments, how well the real-life InChIKey collision resistance corresponds to the theoretical estimates expected by design. For this purpose, we analyzed the statistical characteristics of InChIKey for datasets of variable size in comparison to the theoretical statistical frequencies. For the relatively short second block, an exhaustive direct testing was performed. We computed and compared to theory the numbers of collisions for the stereoisomers of Spongistatin I (using the whole set of 67,108,864 isomers and its subsets). For the longer first block, we generated, using custom-made software, InChIKeys for more than 3 × 10^10 chemical structures. The statistical behavior of this block was tested by comparison of experimental and theoretical frequencies for the various four-letter sequences which may appear in the first block body. From the results of our computational experiments we conclude that the observed characteristics of InChIKey collision resistance are in good agreement with theoretical expectations.
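A small back-of-the-envelope calculation in the spirit of this testing: a birthday-bound estimate of the expected number of colliding pairs among n items, assuming an effectively uniform b-bit block (the bit widths below are placeholders, not the exact entropy of either InChIKey block):

```python
def expected_collisions(n: int, bits: int) -> float:
    """Expected number of colliding pairs when n items are hashed uniformly
    into a 2**bits space: approximately n*(n-1) / (2 * 2**bits)."""
    return n * (n - 1) / (2.0 * 2 ** bits)

# Illustrative numbers: 67,108,864 stereoisomers against a hypothetical 64-bit block.
print(expected_collisions(67_108_864, 64))      # ~1.2e-4 expected colliding pairs
print(expected_collisions(30_000_000_000, 64))  # ~24 expected colliding pairs for 3e10 items
```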
Privacy-Preserving Patient Similarity Learning in a Federated Environment: Development and Analysis.
Lee, Junghye; Sun, Jimeng; Wang, Fei; Wang, Shuang; Jun, Chi-Hyuck; Jiang, Xiaoqian
2018-04-13
There is an urgent need for the development of global analytic frameworks that can perform analyses in a privacy-preserving federated environment across multiple institutions without privacy leakage. A few studies on the topic of federated medical analysis have been conducted recently with the focus on several algorithms. However, none of them have solved similar patient matching, which is useful for applications such as cohort construction for cross-institution observational studies, disease surveillance, and clinical trials recruitment. The aim of this study was to present a privacy-preserving platform in a federated setting for patient similarity learning across institutions. Without sharing patient-level information, our model can find similar patients from one hospital to another. We proposed a federated patient hashing framework and developed a novel algorithm to learn context-specific hash codes to represent patients across institutions. The similarities between patients can be efficiently computed using the resulting hash codes of corresponding patients. To avoid security attack from reverse engineering on the model, we applied homomorphic encryption to patient similarity search in a federated setting. We used sequential medical events extracted from the Multiparameter Intelligent Monitoring in Intensive Care-III database to evaluate the proposed algorithm in predicting the incidence of five diseases independently. Our algorithm achieved averaged area under the curves of 0.9154 and 0.8012 with balanced and imbalanced data, respectively, in κ-nearest neighbor with κ=3. We also confirmed privacy preservation in similarity search by using homomorphic encryption. The proposed algorithm can help search similar patients across institutions effectively to support federated data analysis in a privacy-preserving manner. ©Junghye Lee, Jimeng Sun, Fei Wang, Shuang Wang, Chi-Hyuck Jun, Xiaoqian Jiang. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 13.04.2018.
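A minimal sketch of the retrieval step only: comparing learned patient hash codes by Hamming distance and taking the k nearest; the federated hash-code learning and the homomorphic-encryption layer are outside this snippet:

```python
import numpy as np

def hamming_knn(query_code, codes, k=3):
    """Return the indices of the k codes closest to query_code in Hamming distance.

    query_code: (b,) array of 0/1 bits for one patient.
    codes:      (n, b) array of 0/1 bits for candidate patients at another site.
    """
    dists = np.count_nonzero(codes != query_code, axis=1)
    return np.argsort(dists)[:k]

rng = np.random.default_rng(7)
codes = rng.integers(0, 2, size=(500, 64))     # hash codes of patients at hospital B
query = codes[42] ^ (rng.random(64) < 0.05)    # a similar patient seen at hospital A
print(hamming_knn(query, codes, k=3))          # index 42 should rank first or nearly so
```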
Adamczak, Rafal; Meller, Jarek
2016-12-28
Advances in computing have enabled current protein and RNA structure prediction and molecular simulation methods to dramatically increase their sampling of conformational spaces. The quickly growing number of experimentally resolved structures, and databases such as the Protein Data Bank, also implies large scale structural similarity analyses to retrieve and classify macromolecular data. Consequently, the computational cost of structure comparison and clustering for large sets of macromolecular structures has become a bottleneck that necessitates further algorithmic improvements and development of efficient software solutions. uQlust is a versatile and easy-to-use tool for ultrafast ranking and clustering of macromolecular structures. uQlust makes use of structural profiles of proteins and nucleic acids, while combining a linear-time algorithm for implicit comparison of all pairs of models with profile hashing to enable efficient clustering of large data sets with a low memory footprint. In addition to ranking and clustering of large sets of models of the same protein or RNA molecule, uQlust can also be used in conjunction with fragment-based profiles in order to cluster structures of arbitrary length. For example, hierarchical clustering of the entire PDB using profile hashing can be performed on a typical laptop, thus opening an avenue for structural explorations previously limited to dedicated resources. The uQlust package is freely available under the GNU General Public License at https://github.com/uQlust . uQlust represents a drastic reduction in the computational complexity and memory requirements with respect to existing clustering and model quality assessment methods for macromolecular structure analysis, while yielding results on par with traditional approaches for both proteins and RNAs.
Matching Aerial Images to 3D Building Models Using Context-Based Geometric Hashing
Jung, Jaewook; Sohn, Gunho; Bang, Kiin; Wichmann, Andreas; Armenakis, Costas; Kada, Martin
2016-01-01
A city is a dynamic entity whose environment is continuously changing over time. Accordingly, its virtual city models also need to be regularly updated to support accurate model-based decisions for various applications, including urban planning, emergency response and autonomous navigation. A concept of continuous city modeling is to progressively reconstruct city models by accommodating their changes recognized in the spatio-temporal domain, while preserving unchanged structures. A first critical step for continuous city modeling is to coherently register remotely sensed data taken at different epochs with existing building models. This paper presents a new model-to-image registration method using a context-based geometric hashing (CGH) method to align a single image with existing 3D building models. This model-to-image registration process consists of three steps: (1) feature extraction; (2) similarity measurement and matching; and (3) estimating exterior orientation parameters (EOPs) of a single image. For feature extraction, we propose two types of matching cues: edged corner features representing the saliency of building corner points with associated edges, and contextual relations among the edged corner features within an individual roof. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing the matching cost encoding contextual similarity between matching candidates. Final matched corners are used for adjusting the EOPs of the single airborne image by the least squares method based on collinearity equations. The result shows that acceptable accuracy of the EOPs of a single image can be achieved using the proposed registration approach as an alternative to a labor-intensive manual registration process. PMID:27338410
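A compact 2D geometric-hashing sketch in the spirit of the matching step: model corners are encoded relative to basis point pairs and stored in a hash table, and query corners then vote for a matching basis; the edged-corner contextual cues and the collinearity-based resection of the paper are not modeled:

```python
from collections import defaultdict
from itertools import permutations
import numpy as np

def basis_coords(points, b0, b1):
    """Express all points in the similarity-invariant frame of basis pair (b0, b1)."""
    origin, ex = points[b0], points[b1] - points[b0]
    scale = np.linalg.norm(ex)
    ex = ex / scale
    ey = np.array([-ex[1], ex[0]])
    rel = (points - origin) / scale
    return np.stack([rel @ ex, rel @ ey], axis=1)

def build_table(model, cell=0.25):
    """Hash table: quantized basis-frame coordinates -> list of model basis pairs."""
    table = defaultdict(list)
    for b0, b1 in permutations(range(len(model)), 2):
        for coord in basis_coords(model, b0, b1):
            table[tuple(np.round(coord / cell).astype(int))].append((b0, b1))
    return table

def match(table, scene, cell=0.25):
    """Vote over bases; the winning basis pair aligns the scene with the model."""
    votes = defaultdict(int)
    for b0, b1 in permutations(range(len(scene)), 2):
        for coord in basis_coords(scene, b0, b1):
            for basis in table.get(tuple(np.round(coord / cell).astype(int)), []):
                votes[(b0, b1, basis)] += 1
    return max(votes, key=votes.get)

model = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0], [0.0, 3.0], [2.0, 5.0]])
theta = 0.4                                   # unknown rotation of the scene
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
scene = model @ R.T * 1.7 + np.array([10.0, -2.0])
print(match(build_table(model), scene))       # a (scene basis, model basis) correspondence
```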
Shark: Fast Data Analysis Using Coarse-grained Distributed Memory
2013-05-01
Contents excerpt: table metadata is kept in a relational store (often MySQL or Derby) with a namespace for tables, table metadata, and partition information, while table data is stored in an HDFS directory. Saving time and space for large data sets is achieved with support for custom SerDe (serialization/deserialization) Java interface implementations.
2015-03-12
Contents excerpt: tables include Optometry Clinic frequency counts and a probability distribution summary covering the Audiology and Optometry Clinics. Methodology overview: the overarching research goal is to identify feasible solutions to...
76 FR 11433 - Federal Transition To Secure Hash Algorithm (SHA)-256
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-02
... generating digital signatures. Current information systems, Web servers, applications and workstation operating systems were designed to process, and use SHA-1 generated signatures. National Institute of... cryptographic keys, and more robust algorithms by December 2013. Government systems may begin to encounter...
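For context, a two-line illustration of the digest sizes involved in the transition (this snippet only computes hashes; the notice concerns signature generation across federal systems):

```python
import hashlib

msg = b"document to be signed"
print("SHA-1   :", hashlib.sha1(msg).hexdigest())    # 160-bit digest being phased out
print("SHA-256 :", hashlib.sha256(msg).hexdigest())  # 256-bit digest required going forward
```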
Semantic Segmentation of Building Elements Using Point Cloud Hashing
NASA Astrophysics Data System (ADS)
Chizhova, M.; Gurianov, A.; Hess, M.; Luhmann, T.; Brunn, A.; Stilla, U.
2018-05-01
For the interpretation of point clouds, the semantic definition of segments extracted from point clouds or images is a common problem. Usually, the semantics of geometrically pre-segmented point cloud elements are determined using probabilistic networks and scene databases. The proposed semantic segmentation method is based on the psychological human interpretation of geometric objects, especially on fundamental rules of primary comprehension. Starting from these rules, buildings can be classified quite well and simply by a human operator (e.g. an architect) into different building types and structural elements (dome, nave, transept, etc.), including particular building parts that are visually detected. The key part of the procedure is a novel method based on hashing, in which point cloud projections are transformed into binary pixel representations. The segmentation approach, demonstrated on the example of classical Orthodox churches, is also suitable for other buildings and objects characterized by a particular constructional typology (e.g. industrial objects in standardized environments with strict component design allowing clear semantic modelling).
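A toy sketch of the core step described: projecting a point cloud onto a plane, rasterizing it into a binary pixel image, and hashing that representation so segments with the same footprint collide; the grid size and projection axis are illustrative assumptions:

```python
import hashlib
import numpy as np

def binary_projection_hash(points, grid=32):
    """Project 3D points onto the XY plane, rasterize to a grid x grid binary
    occupancy image, and return a hash of the packed bits."""
    xy = points[:, :2]
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    idx = np.clip(((xy - mins) / (maxs - mins + 1e-9) * grid).astype(int), 0, grid - 1)
    image = np.zeros((grid, grid), dtype=np.uint8)
    image[idx[:, 1], idx[:, 0]] = 1
    return hashlib.sha1(np.packbits(image).tobytes()).hexdigest()

rng = np.random.default_rng(3)
dome = rng.standard_normal((2000, 3))
dome = dome / np.linalg.norm(dome, axis=1, keepdims=True)   # points on a unit sphere
dome = dome[dome[:, 2] > 0]                                 # keep the upper half ("dome")
print(binary_projection_hash(dome))
```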
LSHSIM: A Locality Sensitive Hashing based method for multiple-point geostatistics
NASA Astrophysics Data System (ADS)
Moura, Pedro; Laber, Eduardo; Lopes, Hélio; Mesejo, Daniel; Pavanelli, Lucas; Jardim, João; Thiesen, Francisco; Pujol, Gabriel
2017-10-01
Reservoir modeling is a very important task that permits the representation of a geological region of interest, so as to generate a considerable number of possible scenarios. Since its inception, many methodologies have been proposed and, in the last two decades, multiple-point geostatistics (MPS) has been the dominant one. This methodology is strongly based on the concept of a training image (TI) and the use of its characteristics, which are called patterns. In this paper, we propose a new MPS method that combines the application of a technique called Locality Sensitive Hashing (LSH), which accelerates the search for patterns similar to a target one, with a Run-Length Encoding (RLE) compression technique that speeds up the calculation of the Hamming similarity. Experiments with both categorical and continuous images show that LSHSIM is computationally efficient and produces good-quality realizations. In particular, for categorical data, the results suggest that LSHSIM is faster than MS-CCSIM, one of the state-of-the-art methods.
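A toy sketch of the retrieval idea: training-image patterns are placed in LSH buckets keyed by the values at a few sampled cells, and candidates from the query's bucket are ranked by Hamming similarity; the RLE-compressed similarity and the MPS simulation loop are not reproduced:

```python
import numpy as np

class PatternLSH:
    """Bucket fixed-size categorical patterns by the values at sampled cell positions."""

    def __init__(self, pattern_size, num_probes=8, seed=0):
        rng = np.random.default_rng(seed)
        self.probes = rng.choice(pattern_size, size=num_probes, replace=False)
        self.buckets = {}

    def add(self, pattern_id, pattern):
        key = tuple(pattern[self.probes])
        self.buckets.setdefault(key, []).append((pattern_id, pattern))

    def query(self, target):
        """Best pattern in the target's bucket by Hamming similarity (fraction equal)."""
        candidates = self.buckets.get(tuple(target[self.probes]), [])
        if not candidates:
            return None
        return max(candidates, key=lambda c: np.mean(c[1] == target))[0]

rng = np.random.default_rng(1)
patterns = rng.integers(0, 2, size=(5000, 81))   # flattened 9x9 binary training-image patterns
index = PatternLSH(pattern_size=81)
for i, p in enumerate(patterns):
    index.add(i, p)
target = patterns[1234].copy()
target[:3] ^= 1                                   # perturb a few cells
print(index.query(target))                        # 1234 when no probe cell was perturbed
```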
Matching Real and Synthetic Panoramic Images Using a Variant of Geometric Hashing
NASA Astrophysics Data System (ADS)
Li-Chee-Ming, J.; Armenakis, C.
2017-05-01
This work demonstrates an approach to automatically initialize a visual model-based tracker, and recover from lost tracking, without prior camera pose information. These approaches are commonly referred to as tracking-by-detection. Previous tracking-by-detection techniques used either fiducials (i.e. landmarks or markers) or the object's texture. The main contribution of this work is the development of a tracking-by-detection algorithm that is based solely on natural geometric features. A variant of geometric hashing, a model-to-image registration algorithm, is proposed that searches for a matching panoramic image from a database of synthetic panoramic images captured in a 3D virtual environment. The approach identifies corresponding features between the matched panoramic images. The corresponding features are to be used in a photogrammetric space resection to estimate the camera pose. The experiments apply this algorithm to initialize a model-based tracker in an indoor environment using the 3D CAD model of the building.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McClanahan, Richard; De Leon, Phillip L.
The majority of state-of-the-art speaker recognition systems (SR) utilize speaker models that are derived from an adapted universal background model (UBM) in the form of a Gaussian mixture model (GMM). This is true for GMM supervector systems, joint factor analysis systems, and most recently i-vector systems. In all of the identified systems, the posterior probabilities and sufficient statistics calculations represent a computational bottleneck in both enrollment and testing. We propose a multi-layered hash system, employing a tree-structured GMM–UBM which uses Runnalls' Gaussian mixture reduction technique, in order to reduce the number of these calculations. Moreover, with this tree-structured hash, we can trade off a reduction in computation against a corresponding degradation of equal error rate (EER). As an example, we also reduce this computation by a factor of 15× while incurring less than 10% relative degradation of EER (or 0.3% absolute EER) when evaluated with NIST 2010 speaker recognition evaluation (SRE) telephone data.
Sand ridges off Sarasota, Florida: A complex facies boundary on a low-energy inner shelf environment
Twichell, D.; Brooks, Gillian L.; Gelfenbaum, G.; Paskevich, V.; Donahue, Brian
2003-01-01
The innermost shelf off Sarasota, Florida was mapped using sidescan-sonar imagery, seismic-reflection profiles, surface sediment samples, and short cores to define the transition between an onshore siliciclastic sand province and an offshore carbonate province and to identify the processes controlling the distribution of these distinctive facies. The transition between these facies is abrupt and closely tied to the morphology of the inner shelf. A series of low-relief nearly shore-normal ridges characterize the inner shelf. Stratigraphically, the ridges are separated from the underlying Pleistocene and Tertiary carbonate strata by the Holocene ravinement surface. While surficial sediment is fine to very-fine siliciclastic sand on the southeastern sides of the ridges and shell hash covers their northwestern sides, the cores of these Holocene deposits are a mixture of both of these facies. Along the southeastern edges of the ridges the facies boundary coincides with the discontinuity that separates the ridge deposits from the underlying strata. The transition from siliciclastic to carbonate sediment on the northwestern sides of the ridges is equally abrupt, but it falls along the crests of the ridges rather than at their edges. Here the facies transition lies within the Holocene deposit, and appears to be the result of sediment reworking by modern processes. This facies distribution primarily appears to result from south-flowing currents generated during winter storms that winnow the fine siliciclastic sediment from the troughs and steeper northwestern sides of the ridges. A coarse shell lag is left armoring the steeper northwestern sides of the ridges, and the fine sediment is deposited on the gentler southeastern sides of the ridges. This pronounced partitioning of the surficial sediment appears to be the result of the siliciclastic sand being winnowed and transported by these currents while the carbonate shell hash falls below the threshold of sediment movement and is left as a lag. The resulting facies boundaries on this low-energy, sediment-starved inner continental shelf are of two origins which both are tied to the remarkably subtle ridge morphology. Along the southeastern sides of the ridges the facies boundary coincides with a stratigraphic discontinuity that separates Holocene from the older deposits while the transition along the northwestern sides of the ridges is within the Holocene deposit and is the result of sediment redistribution by modern processes. © 2003 Elsevier B.V. All rights reserved.
Sediment Facies on a Steep Shoreface, Tairua/Pauanui Embayment, New Zealand
NASA Astrophysics Data System (ADS)
Trembanis, A. C.; Hume, T. M.; Gammisch, R. A.; Wright, L. D.; Green, M. O.
2001-05-01
Tairua/Pauanui embayment is a small headland-bound system on the Coromandel Peninsula on the east coast of the North Island of New Zealand. The shoreface in this area is steep ( ~0.85) and concave; however, where the profile is steepest, between 10-15-m water depth, the profile is slightly convex. A sedimentological study of the shoreface was conducted to provide baseline information for a sediment-dynamics study. Detailed swath mapping of the seabed sediment from the beach out to a water depth of ~50 m was conducted using side-scan sonar. Over 200 km of side-scan sonar data were collected by separate surveys in September 2000 and again in February 2001. Ground-truthing of side-scan sonar data was carried out by SCUBA, grab sampling ( ~100 samples) and drop-camera video. A digital terrain model (DTM) of the area was constructed using newly collected bathymetric data along with data from digitized nautical charts. The DTM exposes changes in bathymetry and variation in slope throughout the study area. The acoustic and sedimentologic data were used to identify and map 8 individual facies units. Shoreface facies distribution was found to be patchy and complex. Large-scale ( ~200-m wide x 1600-m long), slightly depressed, mega-rippled coarse-sand/shell-hash units were abruptly truncated by contacts with fine, featureless, continuous sand-cover units. The repeat survey in February indicated stability of the overall shape and location of large-scale facies units, while diver observations indicated that bedforms within units actively migrate. Bedform roughness is highly variable, including patchy reefs/rubble, sand-dollar fields mega-rippled coarse-gravel/sands, ripple scour depressions, and fields of dense tubeworms. The distribution of coarse shell-hash units is consistent with diabathic sediment transport. Three tripods supporting a range of instruments for measuring waves, currents, boundary-layer flows and sediment resuspension and settling were deployed on the shoreface during February 2001, for up to 3 months. Each tripod was situated on a different facies with a view to resolving spatial variability in sediment dynamics and establishing a link between spatially variable bed roughness, sediment mobility and sedimentation patterns. Our ultimate goal is to understand the interactions between substrate and driving flows in this spatially complex setting and how these interactions sculpt the shoreface and possibly control sediment transfers between the inner shelf and beach.
ARYANA: Aligning Reads by Yet Another Approach.
Gholami, Milad; Arbabi, Aryan; Sharifi-Zarchi, Ali; Chitsaz, Hamidreza; Sadeghi, Mehdi
2014-01-01
Although there are many different algorithms and software tools for aligning sequencing reads, fast gapped sequence search is far from solved. Strong interest in fast alignment is best reflected in the $10^6 prize for the Innocentive competition on aligning a collection of reads to a given database of reference genomes. In addition, de novo assembly of next-generation sequencing long reads requires fast overlap-layout-consensus algorithms which depend on fast and accurate alignment. We introduce ARYANA, a fast gapped read aligner, developed on the basis of the BWA indexing infrastructure with a completely new alignment engine that makes it significantly faster than three other aligners: Bowtie2, BWA and SeqAlto, with comparable generality and accuracy. Instead of the time-consuming backtracking procedures for handling mismatches, ARYANA comes with the seed-and-extend algorithmic framework and a significantly improved efficiency by integrating novel algorithmic techniques including dynamic seed selection, bidirectional seed extension, reset-free hash tables, and gap-filling dynamic programming. As the read length increases, ARYANA's superiority in terms of speed and alignment rate becomes more evident. This is in perfect harmony with the read length trend as the sequencing technologies evolve. The algorithmic platform of ARYANA makes it easy to develop mission-specific aligners for other applications using the ARYANA engine. ARYANA with complete source code can be obtained from http://github.com/aryana-aligner.
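A toy seed-and-extend sketch using an ordinary Python dict as the k-mer seed hash table; ARYANA's BWA-based index, dynamic seed selection, reset-free tables, and gap-filling dynamic programming are not modeled:

```python
from collections import defaultdict

def build_seed_index(reference: str, k: int = 8):
    """Hash table mapping every k-mer of the reference to its start positions."""
    index = defaultdict(list)
    for i in range(len(reference) - k + 1):
        index[reference[i:i + k]].append(i)
    return index

def align(read: str, reference: str, index, k: int = 8):
    """Seed with exact k-mer hits, then extend by counting mismatches over the read."""
    best = (None, len(read) + 1)                      # (start position, mismatches)
    for offset in range(len(read) - k + 1):
        for hit in index.get(read[offset:offset + k], []):
            start = hit - offset
            if start < 0 or start + len(read) > len(reference):
                continue
            mism = sum(a != b for a, b in zip(read, reference[start:start + len(read)]))
            if mism < best[1]:
                best = (start, mism)
    return best

ref = "GATTACAGATTACAGGGTTTACCACGTAGACCATTAGGACCA"
idx = build_seed_index(ref)
print(align("ACCACGTAGACCAT", ref, idx))              # exact hit: (20, 0)
print(align("ACCACGTAGATCAT", ref, idx))              # one substitution: (20, 1)
```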
Rapid extraction of image texture by co-occurrence using a hybrid data structure
NASA Astrophysics Data System (ADS)
Clausi, David A.; Zhao, Yongping
2002-07-01
Calculation of co-occurrence probabilities is a popular method for determining texture features within remotely sensed digital imagery. Typically, the co-occurrence features are calculated by using a grey level co-occurrence matrix (GLCM) to store the co-occurring probabilities. Statistics are applied to the probabilities in the GLCM to generate the texture features. This method is computationally intensive since the matrix is usually sparse, leading to many unnecessary calculations involving zero probabilities when applying the statistics. An improvement on the GLCM method is to utilize a grey level co-occurrence linked list (GLCLL) to store only the non-zero co-occurring probabilities. The GLCLL suffers because, to achieve the preferred computational speed, the list must be kept sorted. A further improvement on the GLCLL is the grey level co-occurrence hybrid structure (GLCHS), based on an integrated hash table and linked list approach. Texture features obtained using this technique are identical to those obtained using the GLCM and GLCLL. The GLCHS method is implemented using the C language in a Unix environment. Based on a Brodatz test image, the GLCHS method is demonstrated to be a superior technique when compared across various window sizes and grey level quantizations. The GLCHS method required, on average, 33.4% (σ = 3.08%) of the computational time required by the GLCLL. Significant computational gains are made using the GLCHS method.
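A minimal sketch of the idea behind the GLCLL/GLCHS improvements: store only the non-zero co-occurring pairs in a hash table (a Python dict here) and compute texture statistics by iterating over those entries alone, so zero probabilities never enter the computation. The window values, displacement vector and contrast statistic are illustrative choices, not the paper's C implementation.

```python
from collections import defaultdict

def cooccurrence_probs(image, dx=1, dy=0):
    """Sparse grey-level co-occurrence probabilities: only non-zero entries are stored,
    keyed by the (grey_i, grey_j) pair in a hash table (Python dict)."""
    counts = defaultdict(int)
    rows, cols = len(image), len(image[0])
    total = 0
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                counts[(image[r][c], image[r2][c2])] += 1
                total += 1
    return {pair: n / total for pair, n in counts.items()}

def contrast(probs):
    """A texture statistic computed only over the non-zero probabilities."""
    return sum(p * (i - j) ** 2 for (i, j), p in probs.items())

window = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 2, 2, 2],
    [2, 2, 3, 3],
]
print(contrast(cooccurrence_probs(window)))
```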
Integrating a local database into the StarView distributed user interface
NASA Technical Reports Server (NTRS)
Silberberg, D. P.
1992-01-01
A distributed user interface to the Space Telescope Data Archive and Distribution Service (DADS) known as StarView is being developed. The DADS architecture consists of the data archive as well as a relational database catalog describing the archive. StarView is a client/server system in which the user interface is the front-end client to the DADS catalog and archive servers. Users query the DADS catalog from the StarView interface. Query commands are transmitted via a network and evaluated by the database. The results are returned via the network and are displayed on StarView forms. Based on the results, users decide which data sets to retrieve from the DADS archive. Archive requests are packaged by StarView and sent to DADS, which returns the requested data sets to the users. The advantages of distributed client/server user interfaces over traditional one-machine systems are well known. Since users run software on machines separate from the database, the overall client response time is much faster. Also, since the server is free to process only database requests, the database response time is much faster. Disadvantages inherent in this architecture are slow overall database access due to network delays, the lack of a 'get previous row' command, and the fact that refinements of a previously issued query must be submitted to the database server even though the domain of values has already been returned by the previous query. This architecture also does not allow users to cross-correlate DADS catalog data with other catalogs. Clearly, a distributed user interface would be more powerful if it overcame these disadvantages. A local database is being integrated into StarView to overcome them. When a query is made through a StarView form, which is often composed of fields from multiple tables, it is translated to an SQL query and issued to the DADS catalog. At the same time, a local database table is created to contain the resulting rows of the query. The returned rows are displayed on the form as well as inserted into the local database table. Identical results are produced by reissuing the query to either the DADS catalog or the local table. Relational databases do not provide a 'get previous row' function because of the inherent complexity of retrieving previous rows of multiple-table joins. However, since this function is easily implemented on a single table, StarView uses the local table to retrieve the previous row. Also, StarView issues subsequent query refinements to the local table instead of the DADS catalog, eliminating the network transmission overhead. Finally, other catalogs can be imported into the local database for cross-correlation with local tables. Overall, it is believed that this is a more powerful architecture for distributed database user interfaces.
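A rough sketch of the local-caching idea, using SQLite as a stand-in for StarView's local database; the table schema, column names and example rows are hypothetical. Rows fetched once from the remote catalog are inserted into a local table, and query refinements are then answered locally without another network round trip.

```python
import sqlite3

def fetch_and_cache(remote_rows, local: sqlite3.Connection):
    """Store the rows returned by the remote catalog in a local table so that
    refinements and 'previous row' requests never touch the network again."""
    local.execute("CREATE TABLE IF NOT EXISTS results (dataset TEXT, exposure REAL)")
    local.execute("DELETE FROM results")
    local.executemany("INSERT INTO results VALUES (?, ?)", remote_rows)
    local.commit()

# Rows that, in StarView, would come back from the remote DADS catalog query (made-up values).
remote_rows = [("u2ab0101t", 120.0), ("u2ab0102t", 300.0), ("u2ab0103t", 60.0)]
conn = sqlite3.connect(":memory:")
fetch_and_cache(remote_rows, conn)

# A query refinement is issued against the local table, not the remote server.
print(conn.execute("SELECT dataset FROM results WHERE exposure > 100").fetchall())
```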
Listing of Available ACE Data Tables
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conlin, Jeremy Lloyd
This document is divided into multiple sections. Section 2 lists some of the more frequently used ENDF/B reaction types that can be used with the FM input card. The remaining sections (described below) contain tables showing the available ACE data tables for various types of data. These ACE data libraries are distributed by the Radiation Safety Information Computational Center (RSICC) with MCNP6.
40 CFR 142.16 - Special primacy requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... enforceable requirements and procedures when the State adds to or changes the requirements under: (i) Table 1... distribution system that is out of compliance; (iii) Table 1 of 40 CFR 141.202(a) (Items (5), (6), and (9))—To... require a different form and manner of delivery for Tier 1, 2 and 3 public notices. (vi) Table 1 to 40 CFR...
Classification of Encrypted Web Traffic Using Machine Learning Algorithms
2013-06-01
[Fragment from a 2013 thesis on classifying encrypted web traffic with machine learning: DPI devices can be used to block certain websites; Yu, Cong, Chen, and Lei [52] suggest hashing the domains of pornographic and illegal websites so ISPs can block them, per "Blocking pornographic, illegal websites by internet host domain using FPGA and Bloom Filter", Network Infrastructure and Digital Content.]
Fault-Tolerant and Elastic Streaming MapReduce with Decentralized Coordination
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumbhare, Alok; Frincu, Marc; Simmhan, Yogesh
2015-06-29
The MapReduce programming model, due to its simplicity and scalability, has become an essential tool for processing large data volumes in distributed environments. Recent Stream Processing Systems (SPS) extend this model to provide low-latency analysis of high-velocity continuous data streams. However, integrating MapReduce with streaming poses challenges: first, runtime variations in data characteristics such as data rates and key distribution cause resource overload, which in turn leads to fluctuations in the Quality of Service (QoS); and second, the stateful reducers, whose state depends on the complete tuple history, necessitate efficient fault-recovery mechanisms to maintain the desired QoS in the presence of resource failures. We propose an integrated streaming MapReduce architecture leveraging the concept of consistent hashing to support runtime elasticity, along with locality-aware data and state replication to provide efficient load-balancing with low-overhead fault-tolerance and parallel fault-recovery from multiple simultaneous failures. Our evaluation on a private cloud shows up to 2.8x improvement in peak throughput compared to the Apache Storm SPS, and a low recovery latency of 700-1500 ms from multiple failures.
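The abstract leans on consistent hashing for runtime elasticity; the sketch below shows the basic mechanism under simple assumptions (SHA-1 ring positions, 64 virtual nodes per reducer) rather than the authors' system. Keys map to the first node clockwise on a hash ring, so adding or removing a reducer remaps only a fraction of the key space.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent hash ring with virtual nodes (a sketch, not the paper's implementation)."""
    def __init__(self, nodes, vnodes=64):
        self.vnodes = vnodes
        self.ring = []            # sorted list of (hash, node)
        for node in nodes:
            self.add(node)

    def _hash(self, key):
        return int(hashlib.sha1(key.encode()).hexdigest(), 16)

    def add(self, node):
        for i in range(self.vnodes):
            bisect.insort(self.ring, (self._hash(f"{node}#{i}"), node))

    def remove(self, node):
        self.ring = [(h, n) for h, n in self.ring if n != node]

    def lookup(self, key):
        h = self._hash(key)
        idx = bisect.bisect(self.ring, (h,))          # first ring position at or after h
        return self.ring[idx % len(self.ring)][1]

ring = ConsistentHashRing(["reducer-0", "reducer-1", "reducer-2"])
print(ring.lookup("user:42"))      # same key maps to the same reducer until the ring changes
ring.add("reducer-3")              # adding a node remaps only a fraction of the key space
print(ring.lookup("user:42"))
```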
1991-09-01
[Extraction fragment: figure and table listings from a 1991 report on an expert-system prototype for processing civil engineering work requests; surviving excerpts mention the Electric Power Research Institute (EPRI), a private organization, and the importance of shop-technician training levels.]
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-11
Written comments may be sent before May 12, 2011, to: Chief, Computer Security Division, National Institute of Standards and Technology. For further information contact: Elaine Barker or Quynh Dang, Computer Security Division, National Institute of Standards and Technology, Gaithersburg, MD.
Implementation and Use of the Reference Analytics Module of LibAnswers
ERIC Educational Resources Information Center
Flatley, Robert; Jensen, Robert Bruce
2012-01-01
Academic libraries have traditionally collected reference statistics using hash marks on paper. Although efficient and simple, this method is not an effective way to capture the complexity of reference transactions. Several electronic tools are now available to assist libraries with collecting often elusive reference data--among them homegrown…
The Impact of Water Table Drawdown and Drying on Subterranean Aquatic Fauna in In-Vitro Experiments
Stumpp, Christine; Hose, Grant C.
2013-01-01
The abstraction of groundwater is a global phenomenon that directly threatens groundwater ecosystems. Despite the global significance of this issue, the impact of groundwater abstraction and the lowering of groundwater tables on biota is poorly known. The aim of this study is to determine the impacts of groundwater drawdown in unconfined aquifers on the distribution of fauna close to the water table, and the tolerance of groundwater fauna to sediment drying once water levels have declined. A series of column experiments were conducted to investigate the depth distribution of different stygofauna (Syncarida and Copepoda) under saturated conditions and after fast and slow water table declines. Further, the survival of stygofauna under conditions of reduced sediment water content was tested. The distribution and response of stygofauna to water drawdown was taxon specific, but with the common response of some fauna being stranded by water level decline. So too, the survival of stygofauna under different levels of sediment saturation was variable. Syncarida were better able to tolerate drying conditions than the Copepoda, but mortality of all groups increased with decreasing sediment water content. The results of this work provide new understanding of the response of fauna to water table drawdown. Such improved understanding is necessary for sustainable use of groundwater, and allows for targeted strategies to better manage groundwater abstraction and maintain groundwater biodiversity. PMID:24278111
Zhaohua Dai; Carl Trettin; Changsheng Li; Devendra M. Amatya; Ge Sun; Harbin Li
2010-01-01
A physically based distributed hydrological model, MIKE SHE, was used to evaluate the effects of altered temperature and precipitation regimes on the streamflow and water table in a forested watershed on the southeastern Atlantic coastal plain. The model calibration and validation against both streamflow and water table depth showed that the MIKE SHE was applicable for...
Buffer zones for commodity and food handling structural applications are distributed across numerous tables. This document provides directions for determining the factors to use to identify the correct table for a given application.
Generating constrained randomized sequences: item frequency matters.
French, Robert M; Perruchet, Pierre
2009-11-01
All experimental psychologists understand the importance of randomizing lists of items. However, randomization is generally constrained, and these constraints (in particular, not allowing immediately repeated items), which are designed to eliminate particular biases, frequently engender others. We describe a simple Monte Carlo randomization technique that solves a number of these problems. However, in many experimental settings, we are concerned not only with the number and distribution of items but also with the number and distribution of transitions between items. The algorithm mentioned above provides no control over this. We therefore introduce a simple technique that uses transition tables for generating correctly randomized sequences. We present an analytic method of producing item-pair frequency tables and item-pair transitional probability tables when immediate repetitions are not allowed. We illustrate these difficulties, and how to overcome them, with reference to a classic article on word segmentation in infants. Finally, we provide free access to an Excel file that allows users to generate transition tables with up to 10 different item types, as well as to generate appropriately distributed randomized sequences of any length without immediately repeated elements. This file is freely available from http://leadserv.u-bourgogne.fr/IMG/xls/TransitionMatrix.xls.
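A small illustration of the kind of constrained randomization discussed here: repeatedly sample items in proportion to their remaining counts, forbid an immediate repetition, and restart when a dead end is reached. The item counts and seed are arbitrary; the authors' Excel tool and analytic transition-table method are not reproduced here.

```python
import random

def randomized_sequence(counts, seed=None):
    """Draw items one at a time, never repeating the previous item, while respecting
    the required number of occurrences of each item (restart on dead ends)."""
    rng = random.Random(seed)
    while True:
        remaining = dict(counts)
        seq, prev, stuck = [], None, False
        for _ in range(sum(counts.values())):
            choices = [x for x, n in remaining.items() if n > 0 and x != prev]
            if not choices:
                stuck = True          # dead end: only the forbidden item is left
                break
            weights = [remaining[x] for x in choices]
            item = rng.choices(choices, weights=weights)[0]
            seq.append(item)
            remaining[item] -= 1
            prev = item
        if not stuck:
            return seq

print(randomized_sequence({"A": 4, "B": 3, "C": 3}, seed=1))
```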
1981-01-14
[Extraction fragment of a 1981 climatic summary: psychrometric tables of dry-bulb versus wet-bulb temperature (including wet-bulb depression), cumulative percentage frequency-of-distribution tables, precipitation, snowfall means and standard deviations, snow depth, dew point and relative humidity, and surface data.]
RAINDROP DISTRIBUTIONS AT MAJURO ATOLL, MARSHALL ISLANDS.
Descriptors: raindrops (Marshall Islands); atmospheric precipitation (tropical regions); particle size; sampling; tables (data); water; attenuation; distribution; volume; radar reflections; rainfall; photographic analysis; computers.
Pilot Study Evaluating Nearshore Sediment Placement Sites, Noyo Harbor, CA
2013-02-01
[Extraction fragment: figure and table listings from a 2013 nearshore sediment placement study (e.g., sediment distribution at the end of the 30-day simulation). Recoverable content: wave data are available from the National Data Buoy Center (NDBC, http://www.ndbc.noaa.gov) Buoy 46022 and the Coastal Data Information Program (CDIP); Table 1 lists the NDBC, CDIP, and NOAA stations of interest and their locations; a recent sediment characterization is referenced.]
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Estimated Mass Concentration... Equivalent Methods for PM2.5 Pt. 53, Subpt. F, Table F-4 Table F-4 to Subpart F of Part 53—Estimated Mass... (µm) Test Sampler Fractional Sampling Effectiveness Interval Mass Concentration (µg/m3) Estimated Mass...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 5 2011-07-01 2011-07-01 false Estimated Mass Concentration... Equivalent Methods for PM2.5 Pt. 53, Subpt. F, Table F-4 Table F-4 to Subpart F of Part 53—Estimated Mass... (µm) Test Sampler Fractional Sampling Effectiveness Interval Mass Concentration (µg/m3) Estimated Mass...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 5 2011-07-01 2011-07-01 false Estimated Mass Concentration... Equivalent Methods for PM2.5 Pt. 53, Subpt. F, Table F-6 Table F-6 to Subpart F of Part 53—Estimated Mass... (µm) Test Sampler Fractional Sampling Effectiveness Interval Mass Concentration (µg/m3) Estimated Mass...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Estimated Mass Concentration... Equivalent Methods for PM2.5 Pt. 53, Subpt. F, Table F-6 Table F-6 to Subpart F of Part 53—Estimated Mass... (µm) Test Sampler Fractional Sampling Effectiveness Interval Mass Concentration (µg/m3) Estimated Mass...
Spectral comparisons of sunlight and different lamps
NASA Technical Reports Server (NTRS)
Deitzer, Gerald
1994-01-01
The tables in this report were compiled to characterize the spectra of available lamp types and provide comparison to the spectra of sunlight. Table 1 reports the spectral distributions for various lamp sources and compares them to those measured for sunlight. Table 2 provides the amount of energy in W m^-2 relative to the number of photons of PAR (photosynthetically active radiation, 400-700 nm) for each light source.
ERIC Educational Resources Information Center
Herrenden-Harker, B. D.
1997-01-01
Presents a modern Periodic Table based on the electron distribution in the outermost shell and the order of filling of the sublevels within the shells. Enables a student to read off directly the electronic configuration of the element and the order in which filling occurs. (JRH)
Resource-Efficient Data-Intensive System Designs for High Performance and Capacity
2015-09-01
[Bibliography fragment from a 2015 thesis on resource-efficient data-intensive system designs; surviving entries cite Badam, Park, Pai and Peterson, "HashCache: cache storage for the next billion", and Dean, Ghemawat, Hsieh, Wallach, Burrows, Chandra, Fikes and Gruber, "Bigtable".]
Two Improved Access Methods on Compact Binary (CB) Trees.
ERIC Educational Resources Information Center
Shishibori, Masami; Koyama, Masafumi; Okada, Makoto; Aoe, Jun-ichi
2000-01-01
Discusses information retrieval and the use of binary trees as a fast access method for search strategies such as hashing. Proposes new methods based on compact binary trees that provide faster access and more compact storage, explains the theoretical basis, and confirms the validity of the methods through empirical observations. (LRW)
2013-10-01
[Fragment from a 2013 memory-forensics document: a filter expression matching process names (virus, Trojan, rootkit, worm, Prolaco, rundll, msiexec, google, wmimngr, jusched, wfmngr, wupmgr, java, wpmgr, nvscpapisvr), followed by plugin descriptions: hashdump - dumps password hashes (LM/NTLM) from memory; hibinfo - dumps hibernation file information; hivedump - prints out a hive; hivelist - prints the list of...]
Code of Federal Regulations, 2011 CFR
2011-07-01
... machines. (3) All occupations involved in tankage or rendering of dead animals, animal offal, animal fats..., and hashing machines; and presses (except belly-rolling machines). Except, the provisions of this.... Rendering plants means establishments engaged in the conversion of dead animals, animal offal, animal fats...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-06
... hash algorithms in many computer network applications. On February 11, 2011, NIST published a notice in... Information Security Management Act (FISMA) of 2002 (Pub. L. 107-347), the Secretary of Commerce is authorized to approve Federal Information Processing Standards (FIPS). NIST activities to develop computer...
ERIC Educational Resources Information Center
Kelsey, Louise; Uytterhoeven, Lise
2017-01-01
This paper reports on a focused collaborative learning and teaching research project between the Dance Department at Middlesex University and partner institution London Studio Centre. Informed by Belinda Allen's research on creative curriculum design, dance students and lecturers shared innovative learning opportunities to enhance the development…
Manticore and CS mode : parallelizable encryption with joint cipher-state authentication.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torgerson, Mark Dolan; Draelos, Timothy John; Schroeppel, Richard Crabtree
2004-10-01
We describe a new mode of encryption with inexpensive authentication, which uses information from the internal state of the cipher to provide the authentication. Our algorithms have a number of benefits: (1) the encryption has properties similar to CBC mode, yet the encipherment and authentication can be parallelized and/or pipelined, (2) the authentication overhead is minimal, and (3) the authentication process remains resistant against some IV reuse. We offer a Manticore class of authenticated encryption algorithms based on cryptographic hash functions, which support variable block sizes up to twice the hash output length and variable key lengths. A proof of security is presented for the MTC4 and Pepper algorithms. We then generalize the construction to create the Cipher-State (CS) mode of encryption that uses the internal state of any round-based block cipher as an authenticator. We provide hardware and software performance estimates for all of our constructions and give a concrete example of the CS mode of encryption that uses AES as the encryption primitive and adds a small speed overhead (10-15%) compared to AES alone.
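For contrast with the single-pass cipher-state idea, a toy two-pass encrypt-then-MAC construction is sketched below: a SHA-256-based counter-mode keystream for confidentiality and a separate HMAC pass over the ciphertext for authentication. This is explicitly not the Manticore/CS construction, and the toy stream cipher is for illustration only, not for real use; it merely shows the conventional baseline whose extra authentication pass CS mode is designed to avoid.

```python
import hashlib
import hmac
import os

def toy_ctr_encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    """Toy counter-mode stream built from SHA-256; each keystream block is independent,
    so the encryption of blocks could be parallelized."""
    out = bytearray()
    for block in range((len(plaintext) + 31) // 32):
        ks = hashlib.sha256(key + nonce + block.to_bytes(4, "big")).digest()
        chunk = plaintext[block * 32:(block + 1) * 32]
        out.extend(c ^ k for c, k in zip(chunk, ks))
    return bytes(out)

def seal(enc_key: bytes, mac_key: bytes, nonce: bytes, plaintext: bytes):
    ct = toy_ctr_encrypt(enc_key, nonce, plaintext)
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()   # separate authentication pass
    return ct, tag

def open_(enc_key: bytes, mac_key: bytes, nonce: bytes, ct: bytes, tag: bytes):
    if not hmac.compare_digest(tag, hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("authentication failed")
    return toy_ctr_encrypt(enc_key, nonce, ct)       # the XOR stream is its own inverse

enc_key, mac_key, nonce = os.urandom(32), os.urandom(32), os.urandom(12)
ct, tag = seal(enc_key, mac_key, nonce, b"telemetry frame 0001")
print(open_(enc_key, mac_key, nonce, ct, tag))
```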
A novel and lightweight system to secure wireless medical sensor networks.
He, Daojing; Chan, Sammy; Tang, Shaohua
2014-01-01
Wireless medical sensor networks (MSNs) are a key enabling technology in e-healthcare that allows the data of a patient's vital body parameters to be collected by the wearable or implantable biosensors. However, the security and privacy protection of the collected data is a major unsolved issue, with challenges coming from the stringent resource constraints of MSN devices, and the high demand for both security/privacy and practicality. In this paper, we propose a lightweight and secure system for MSNs. The system employs hash-chain based key updating mechanism and proxy-protected signature technique to achieve efficient secure transmission and fine-grained data access control. Furthermore, we extend the system to provide backward secrecy and privacy preservation. Our system only requires symmetric-key encryption/decryption and hash operations and is thus suitable for the low-power sensor nodes. This paper also reports the experimental results of the proposed system in a network of resource-limited motes and laptop PCs, which show its efficiency in practice. To the best of our knowledge, this is the first secure data transmission and access control system for MSNs until now.
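The hash-chain key-updating idea mentioned here can be sketched with nothing more than a hash and an HMAC; the seed value, message format and update-per-message policy below are illustrative assumptions, not the paper's protocol. Both sides step the chain in lockstep, so an attacker who learns the current key cannot recover the keys already used.

```python
import hashlib
import hmac

def next_key(key: bytes) -> bytes:
    """Derive the next session key in the chain by hashing the current one."""
    return hashlib.sha256(key).digest()

def mac(key: bytes, message: bytes) -> bytes:
    """Authenticate a sensor reading with the current chain key."""
    return hmac.new(key, message, hashlib.sha256).digest()

# Sensor and server start from the same seed and step the chain after every message.
seed = b"shared-seed-established-at-deployment"   # hypothetical shared secret
k = next_key(seed)
for reading in [b"hr=72", b"hr=75", b"hr=71"]:
    tag = mac(k, reading)
    print(reading, tag.hex()[:16])
    k = next_key(k)                                # both sides update the key
```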
A Key Establishment Protocol for RFID User in IPTV Environment
NASA Astrophysics Data System (ADS)
Jeong, Yoon-Su; Kim, Yong-Tae; Sohn, Jae-Min; Park, Gil-Cheol; Lee, Sang-Ho
In recent years, the usage of IPTV (Internet Protocol Television) has increased. The reason is the technological convergence of broadcasting and telecommunication, delivering interactive applications and multimedia content through high-speed Internet connections. The main critical point of IPTV security requirements is subscriber authentication. That is, IPTV service should have the capability to identify the subscribers to prohibit illegal access. Currently, IPTV service does not provide a sound authentication mechanism to verify the identity of its wireless users (or devices). This paper focuses on a lightweight authentication and key establishment protocol based on the use of hash functions. The proposed approach provides effective authentication for a mobile user with an RFID tag whose authentication information is communicated back and forth with the IPTV authentication server via the IPTV set-top box (STB). That is, the proposed protocol generates the user's authentication information as a bundle of two public keys derived from hashing the user's private keys and the RFID tag's session identifier, and adds 1 bit to this bundled information for subscriber information confidentiality before passing it to the authentication server.
Wu, Zhenyu; Zou, Ming
2014-10-01
An increasing number of users interact, collaborate, and share information through social networks. Unprecedented growth in social networks is generating a significant amount of unstructured social data. From such data, distilling communities where users have common interests and tracking variations of users' interests over time are important research tracks in fields such as opinion mining, trend prediction, and personalized services. However, these tasks are extremely difficult considering the highly dynamic characteristics of the data. Existing community detection methods are time consuming, making it difficult to process data in real time. In this paper, dynamic unstructured data is modeled as a stream. Tag assignments stream clustering (TASC), an incremental scalable community detection method, is proposed based on locality-sensitive hashing. Both tags and latent interactions among users are incorporated in the method. In our experiments, the social dynamic behaviors of users are first analyzed. The proposed TASC method is then compared with state-of-the-art clustering methods such as StreamKmeans and incremental k-clique; results indicate that TASC can detect communities more efficiently and effectively. Copyright © 2014 Elsevier Ltd. All rights reserved.
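Locality-sensitive hashing as used for this kind of stream clustering can be illustrated with sign-random-projection signatures: users whose tag-frequency vectors point in similar directions tend to receive the same bit signature and therefore fall into the same candidate bucket. The dimensions, number of hyperplanes and toy vectors below are assumptions for illustration, not the TASC method itself.

```python
import random
from collections import defaultdict

def lsh_signature(vector, hyperplanes):
    """One bit per hyperplane: the sign of the dot product with each random hyperplane."""
    return tuple(1 if sum(v * h for v, h in zip(vector, plane)) >= 0 else 0
                 for plane in hyperplanes)

random.seed(0)
dim, n_planes = 8, 6
hyperplanes = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_planes)]

# Toy user tag-frequency vectors; similar users should collide in the same bucket.
users = {
    "alice": [3, 0, 1, 0, 2, 0, 0, 1],
    "bob":   [2, 0, 1, 0, 3, 0, 0, 1],
    "carol": [0, 4, 0, 3, 0, 0, 2, 0],
}
buckets = defaultdict(list)
for name, vec in users.items():
    buckets[lsh_signature(vec, hyperplanes)].append(name)
print(dict(buckets))   # candidate communities = users sharing a bucket
```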
SnapDock—template-based docking by Geometric Hashing
Estrin, Michael; Wolfson, Haim J.
2017-01-01
Motivation: A highly efficient template-based protein–protein docking algorithm, nicknamed SnapDock, is presented. It employs a Geometric Hashing-based structural alignment scheme to align the target proteins to the interfaces of non-redundant protein–protein interface libraries. Docking of a pair of proteins utilizing the 22 600 interface PIFACE library is performed in under 2 min on average. A flexible version of the algorithm allowing hinge motion in one of the proteins is presented as well. Results: To evaluate the performance of the algorithm, a blind re-modelling of 3547 PDB complexes uploaded after the PIFACE publication was performed, with a success ratio of about 35%. Interestingly, a similar experiment with the template-free PatchDock docking algorithm yielded a success rate of about 23%, with roughly 1/3 of the solutions different from those of SnapDock. Consequently, the combination of the two methods gave a 42% success ratio. Availability and implementation: A web server of the application is under development. Contact: michaelestrin@gmail.com or wolfson@tau.ac.il PMID:28881968
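Geometric hashing itself is easy to sketch in 2D: for every ordered pair of model points taken as a basis, the remaining points are expressed in that basis (which factors out translation, rotation and scale), quantized, and stored in a hash table; at query time, scene points expressed in a trial basis vote for compatible model bases. The quantization step, point sets and assumed basis correspondence below are illustrative only; SnapDock's protein-interface alignment is far more elaborate.

```python
from collections import defaultdict

def basis_coords(p, origin, axis):
    """Express point p in the frame defined by origin->axis (translation, rotation, scale removed)."""
    ax, ay = axis[0] - origin[0], axis[1] - origin[1]
    px, py = p[0] - origin[0], p[1] - origin[1]
    norm2 = ax * ax + ay * ay
    u = (px * ax + py * ay) / norm2      # component along the basis vector
    v = (py * ax - px * ay) / norm2      # component along its perpendicular
    return u, v

def build_table(model, q=0.25):
    """Hash quantized basis-invariant coordinates -> list of model basis pairs."""
    table = defaultdict(list)
    n = len(model)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            for k in range(n):
                if k in (i, j):
                    continue
                u, v = basis_coords(model[k], model[i], model[j])
                table[(round(u / q), round(v / q))].append((i, j))
    return table

def vote(table, scene, q=0.25):
    """Pick one scene basis (assumed correspondence for the sketch) and vote for model bases."""
    votes = defaultdict(int)
    origin, axis = scene[0], scene[1]
    for p in scene[2:]:
        u, v = basis_coords(p, origin, axis)
        for basis in table.get((round(u / q), round(v / q)), []):
            votes[basis] += 1
    return max(votes.items(), key=lambda kv: kv[1]) if votes else None

model = [(0, 0), (1, 0), (1, 1), (0, 2)]
scene = [(5, 5), (7, 5), (7, 7), (5, 9)]       # the model, translated and scaled by 2
print(vote(build_table(model), scene))          # best-supported model basis and its vote count
```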
Quantum Authencryption with Two-Photon Entangled States for Off-Line Communicants
NASA Astrophysics Data System (ADS)
Ye, Tian-Yu
2016-02-01
In this paper, a quantum authencryption protocol is proposed by using the two-photon entangled states as the quantum resource. Two communicants Alice and Bob share two private keys in advance, which determine the generation of two-photon entangled states. The sender Alice sends the two-photon entangled state sequence encoded with her classical bits to the receiver Bob in the manner of one-step quantum transmission. Upon receiving the encoded quantum state sequence, Bob decodes out Alice's classical bits with the two-photon joint measurements and authenticates the integrity of Alice's secret with the help of one-way hash function. The proposed protocol only uses the one-step quantum transmission and needs neither a public discussion nor a trusted third party. As a result, the proposed protocol can be adapted to the case where the receiver is off-line, such as the quantum E-mail systems. Moreover, the proposed protocol provides the message authentication to one bit level with the help of one-way hash function and has an information-theoretical efficiency equal to 100 %.
Modelling Seasonally Freezing Ground Conditions
1989-05-01
[Extraction fragment from a 1989 report on modelling seasonally freezing ground: scattered text on snow-input index models used in larger hydrological models (e.g., Pangburn 1987; Anderson's 1973 model described as the most advanced), plus a reference to Table 10, percentage areas of Hydrologic Soil Groups, Land Use and Slope Distribution over watershed W3.]
Parallel grid library for rapid and flexible simulation development
NASA Astrophysics Data System (ADS)
Honkonen, I.; von Alfthan, S.; Sandroos, A.; Janhunen, P.; Palmroth, M.
2013-04-01
We present an easy to use and flexible grid library for developing highly scalable parallel simulations. The distributed cartesian cell-refinable grid (dccrg) supports adaptive mesh refinement and allows an arbitrary C++ class to be used as cell data. The amount of data in grid cells can vary both in space and time, allowing dccrg to be used in very different types of simulations, for example in fluid and particle codes. Dccrg transfers the data between neighboring cells on different processes transparently and asynchronously, allowing one to overlap computation and communication. This enables excellent scalability at least up to 32 k cores in magnetohydrodynamic tests, depending on the problem and hardware. In the version of dccrg presented here, part of the mesh metadata is replicated between MPI processes, reducing the scalability of adaptive mesh refinement (AMR) to between 200 and 600 processes. Dccrg is free software that anyone can use, study and modify and is available at https://gitorious.org/dccrg. Users are also kindly requested to cite this work when publishing results obtained with dccrg. Catalogue identifier: AEOM_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOM_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU Lesser General Public License version 3. No. of lines in distributed program, including test data, etc.: 54975. No. of bytes in distributed program, including test data, etc.: 974015. Distribution format: tar.gz. Programming language: C++. Computer: PC, cluster, supercomputer. Operating system: POSIX. The code has been parallelized using MPI and tested with 1-32768 processes. RAM: 10 MB-10 GB per process. Classification: 4.12, 4.14, 6.5, 19.3, 19.10, 20. External routines: MPI-2 [1], boost [2], Zoltan [3], sfc++ [4]. Nature of problem: Grid library supporting arbitrary data in grid cells, parallel adaptive mesh refinement, transparent remote neighbor data updates and load balancing. Solution method: The simulation grid is represented by an adjacency list (graph) with vertices stored in a hash table and edges in contiguous arrays. The Message Passing Interface standard is used for parallelization. Cell data is given as a template parameter when instantiating the grid. Restrictions: Logically cartesian grid. Running time: Running time depends on the hardware, problem and the solution method. Small problems can be solved in under a minute and very large problems can take weeks. The examples and tests provided with the package take less than about one minute using default options. In the version of dccrg presented here the speed of adaptive mesh refinement is at most of the order of 10^6 total created cells per second. [1] http://www.mpi-forum.org/. [2] http://www.boost.org/. [3] K. Devine, E. Boman, R. Heaphy, B. Hendrickson, C. Vaughan, Zoltan data management services for parallel dynamic applications, Comput. Sci. Eng. 4 (2002) 90-97. http://dx.doi.org/10.1109/5992.988653. [4] https://gitorious.org/sfc++.
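The "vertices stored in a hash table" bookkeeping can be pictured with a toy grid whose cells live in a dict keyed by an integer cell id; neighbor lookups then become O(1) hash probes. The 2D indexing scheme and the cell payload below are illustrative assumptions, not dccrg's actual C++ interface.

```python
class Grid:
    """Toy cartesian-grid bookkeeping: cells live in a hash table keyed by an integer
    cell id, so data lookup and neighbour queries are O(1) per cell."""
    def __init__(self, nx, ny):
        self.nx, self.ny = nx, ny
        self.cells = {}                      # cell id -> arbitrary user data
        for j in range(ny):
            for i in range(nx):
                self.cells[self.cell_id(i, j)] = {"rho": 0.0}

    def cell_id(self, i, j):
        return j * self.nx + i

    def neighbors(self, cid):
        i, j = cid % self.nx, cid // self.nx
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < self.nx and 0 <= nj < self.ny:
                yield self.cell_id(ni, nj)

grid = Grid(4, 3)
grid.cells[grid.cell_id(1, 1)]["rho"] = 1.0
# Average a cell's value with its neighbours, fetching neighbour data from the hash table.
cid = grid.cell_id(1, 1)
vals = [grid.cells[n]["rho"] for n in grid.neighbors(cid)] + [grid.cells[cid]["rho"]]
print(sum(vals) / len(vals))
```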
Army Battlefield Distribution Through the Lens of OIF: Logical Failures and the Way Ahead
2005-02-02
[Extraction fragment from a 2005 monograph on Army battlefield distribution during OIF: table-of-contents and figure-list entries (Historical Context of Logistics and Distribution Management Transformation; Theater Distribution Units; Figure 1, Distribution Management Center) and a sentence noting that the unit is both a consumer and a potential provider of logistics.]
NASA Astrophysics Data System (ADS)
Xu, Ling; Zhao, Zhiwen
2017-08-01
A new quantum protocol with the assistance of a semi-honest third party (TP) is proposed, which allows the participants to compare the equality of their private information without disclosing it. Different from previous protocols, this protocol utilizes quantum key distribution against the collective-dephasing noise and the collective-rotation noise, which is more robust and discards fewer samples, to transmit the classical information. In addition, this protocol utilizes the GHZ-like state and the χ+ state to produce the entanglement swapping, and the Bell basis and the dual basis are used to measure the particle pair, so that 3 bits of each participant's private information can be compared in each round, which is more efficient and requires fewer comparison rounds. Meanwhile, neither unitary operations nor hash functions are needed in this protocol. Finally, various kinds of outside and participant attacks are discussed, analyzed and shown to be ineffective, so the protocol can complete the comparison securely.
NASA Astrophysics Data System (ADS)
Baru, Chaitan; Nandigam, Viswanath; Krishnan, Sriram
2010-05-01
Increasingly, the geoscience user community expects modern IT capabilities to be available in service of their research and education activities, including the ability to easily access and process large remote sensing datasets via online portals such as GEON (www.geongrid.org) and OpenTopography (opentopography.org). However, serving such datasets via online data portals presents a number of challenges. In this talk, we will evaluate the pros and cons of alternative storage strategies for management and processing of such datasets using binary large object implementations (BLOBs) in database systems versus implementation in Hadoop files using the Hadoop Distributed File System (HDFS). The storage and I/O requirements for providing online access to large datasets dictate the need for declustering data across multiple disks, for capacity as well as bandwidth and response time performance. This requires partitioning larger files into a set of smaller files, and is accompanied by the concomitant requirement for managing large numbers of files. Storing these sub-files as blobs in a shared-nothing database implemented across a cluster provides the advantage that all the distributed storage management is done by the DBMS. Furthermore, subsetting and processing routines can be implemented as user-defined functions (UDFs) on these blobs and would run in parallel across the set of nodes in the cluster. On the other hand, there are both storage overheads and constraints, and software licensing dependencies created by such an implementation. Another approach is to store the files in an external filesystem with pointers to them from within database tables. The filesystem may be a regular UNIX filesystem, a parallel filesystem, or HDFS. In the HDFS case, HDFS would provide the file management capability, while the subsetting and processing routines would be implemented as Hadoop programs using the MapReduce model. Hadoop and its related software libraries are freely available. Another consideration is the strategy used for partitioning large data collections, and large datasets within collections, using round-robin vs hash partitioning vs range partitioning methods. Each has different characteristics in terms of spatial locality of data and resultant degree of declustering of the computations on the data. Furthermore, we have observed that, in practice, there can be large variations in the frequency of access to different parts of a large data collection and/or dataset, thereby creating "hotspots" in the data. We will evaluate the ability of different approaches to deal effectively with such hotspots, along with alternative mitigation strategies.
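The three partitioning strategies compared in the abstract can be contrasted in a few lines; the tile naming, node count and range boundaries below are hypothetical. Round-robin spreads load evenly but ignores spatial locality, hash partitioning is even in expectation but also destroys locality, and range partitioning preserves ordering (and locality) at the risk of hotspots.

```python
import hashlib

def round_robin(keys, n):
    return {k: i % n for i, k in enumerate(keys)}

def hash_partition(keys, n):
    return {k: int(hashlib.md5(k.encode()).hexdigest(), 16) % n for k in keys}

def range_partition(keys, boundaries):
    """boundaries: sorted upper bounds, one per node except the last (open-ended)."""
    def node(k):
        for i, b in enumerate(boundaries):
            if k <= b:
                return i
        return len(boundaries)
    return {k: node(k) for k in keys}

tiles = [f"tile_{i:04d}" for i in range(12)]
print(round_robin(tiles, 3))                                 # even spread, no spatial locality
print(hash_partition(tiles, 3))                              # even in expectation, locality destroyed
print(range_partition(tiles, ["tile_0003", "tile_0007"]))    # preserves order/locality, risks hotspots
```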
Barlow, P.; Wagner, B.; Belitz, K.
1995-01-01
Continued agricultural productivity in the western San Joaquin Valley, California, is threatened by the presence of shallow, poor-quality groundwater that can cause soil salinization. We evaluate the management alternative of using groundwater pumping to control the altitude of the water table and provide irrigation water requirements. A transient, three-dimensional, groundwater flow model was linked with nonlinear optimization to simulate management alternatives for the groundwater flow system. Optimal pumping strategies have been determined that substantially reduce the area subject to a shallow water table and bare-soil evaporation (that is, areas with a water table within 2.1 m of land surface) and the rate of drainflow to on-farm drainage systems. Optimal pumping strategies are constrained by the existing distribution of wells between the semiconfined and confined zones of the aquifer, by the distribution of sediment types (and associated hydraulic conductivities) in the western valley, and by the historical distribution of pumping throughout the western valley.
Ergonomically Adjustable School Furniture for Male Students
ERIC Educational Resources Information Center
Al-Saleh, Khalid S.; Ramadan, Mohamed Z.; Al-Ashaikh, Riyad A.
2013-01-01
The need for adjustability in school furniture, in order to accommodate the variation in anthropometric measures of different genders, cultures and ages is becoming increasingly important. Four chair-table combinations, different in dimensions, with adjustable chair seating heights and table heights were designed, manufactured and distributed to…
Supply and Characteristics of Selected Health Personnel.
ERIC Educational Resources Information Center
Ake, James N.; Johnson, Donald W.
Detailed statistics on trends in the U.S. supply and geographic distribution of personnel in eight health occupations, along with current data on selected professional characteristics, are presented. Statistical tables include combined data for the eight occupations, and groups of tables for the individual health occupations: physicians (both…
[Hazard function and life table: an introduction to the failure time analysis].
Matsushita, K; Inaba, H
1987-04-01
Failure time analysis has become popular in demographic studies. It can be viewed as a part of regression analysis with limited dependent variables as well as a special case of event history analysis and multistate demography. The idea of hazard function and failure time analysis, however, has not been properly introduced to nor commonly discussed by demographers in Japan. The concept of hazard function in comparison with life tables is briefly described, where the force of mortality is interchangeable with the hazard rate. The basic idea of failure time analysis is summarized for the cases of exponential distribution, normal distribution, and proportional hazard models. The multiple decrement life table is also introduced as an example of lifetime data analysis with cause-specific hazard rates.
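The standard relations the abstract alludes to (survival function, hazard rate as the force of mortality, the exponential special case with constant hazard, and the proportional hazards form) can be written compactly; these are textbook definitions rather than anything specific to this article.

```latex
\begin{align*}
S(t) &= \Pr(T > t) = 1 - F(t), \\
h(t) &= \lim_{\Delta t \to 0}
        \frac{\Pr(t \le T < t + \Delta t \mid T \ge t)}{\Delta t}
      = \frac{f(t)}{S(t)}, \qquad
S(t) = \exp\!\left(-\int_0^t h(u)\,du\right), \\
\text{exponential case: } f(t) &= \lambda e^{-\lambda t}
      \;\Rightarrow\; h(t) = \lambda \quad \text{(constant force of mortality)}, \\
\text{proportional hazards: } h(t \mid x) &= h_0(t)\,\exp(\beta^{\top} x).
\end{align*}
```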
NASA Astrophysics Data System (ADS)
Mclaughlin, D. L.; Kaplan, D. A.; Cohen, M. J.
2013-12-01
Recent rulings by the U.S. Supreme Court have limited federal protection over isolated wetlands, requiring documentation of a 'significant nexus' to a navigable water body to ensure federal jurisdiction. Despite geographic isolation, isolated wetlands influence the surficial aquifer dynamics that regulate baseflow to surface water systems. Due to differences in specific yield (Sy) between upland soils and inundated wetlands, responses of the upland water table to atmospheric fluxes (precipitation, P, and evapotranspiration, ET) are amplified relative to wetland water levels, leading to reversals in the hydraulic gradient between the two systems. As such, wetlands act as a water sink during wet cycles (via wetland exfiltration) and a source (via infiltration) during drier times, regulating both the surficial aquifer and its baseflow to downstream systems. To explore the importance of this wetland function at the landscape scale, we integrated models of soil moisture, upland water table, and wetland stage to simulate the hydrology of a low-relief, depressional landscape. We quantified the hydrologic buffering effect of wetlands by calculating the relative change in the standard deviation (SD) of water table elevation between model runs with and without wetlands. Using this model we explored the effects wetland area and spatial distribution over a range of climatic drivers (P and ET) and soil types. Increasing wetland cumulative area and/or density reduced water table variability relative to landscapes without wetlands, supporting the idea that wetlands stabilize regional hydrologic variation, but also increased mean water table depth because of sustained high ET rates in wetlands during dry periods. Maintaining high cumulative wetland area, but with fewer wetlands, markedly reduced the effect of wetland area, highlighting the importance of small, distributed wetlands on water table regulation. Simulating a range of climate scenarios suggested that the capacity of wetlands to buffer water table variation is most pronounced along a 'sweet spot' where P and ET are relatively balanced. High P and low ET yielded consistently high water tables with wetlands acting predominantly as sinks (i.e., little switching behavior), while low P and high ET scenarios limited wetland inundation. On the other hand, when both P and ET were moderate, the SD of the regional water table was reduced by nearly 50% for landscapes with 30% wetland area distributed over ~1 ha watersheds. Additionally, we found these buffering effects to be stronger in coarser soils compared with finer soils. Considering the strong influence of regional water table on downstream surface water systems, loss of isolated wetland area or mitigation of this loss at the expense of wetland density (i.e., large mitigation banks to replace small distributed systems) has the potential to significantly impact downstream water bodies. Isolated wetlands buffer surficial aquifer dynamics by providing water storage capacitance at the landscape scale and ultimately exert hydraulic regulation of regional surface waters through an indirect, but significant nexus.
Shark: SQL and Analytics with Cost-Based Query Optimization on Coarse-Grained Distributed Memory
2014-01-13
[Fragment from a 2014 report on Shark (SQL and analytics with cost-based query optimization on coarse-grained distributed memory): the metastore is RDBMS-backed and contains a database (often MySQL or Derby) with a namespace for tables, table metadata and partition information; SerDe (serialization/deserialization) Java interface implementations with corresponding object inspectors are used; the Hive driver controls the processing of queries; in the native API, RDD operations are invoked through a functional interface similar to DryadLINQ [32] in Scala, Java or Python.]
Research and Development of Wound Dressing in Maxillofacial Trauma.
1983-03-14
[Extraction fragment from a 1983 report on wound dressings for maxillofacial trauma: table listings (lidocaine; PVP-I (BASF 17/12) microcapsule size distribution; processing summary of PVP-I microencapsulation) plus text noting that benzalkonium chloride (Maquat LC-12S) was incorporated into fabrics and powders, that povidone iodine (BASF 17/12) was microencapsulated, and that at 30% polymer, 70% of the product was between 212-600 microns, giving in vitro release of...]
Using the Hill Cipher to Teach Cryptographic Principles
ERIC Educational Resources Information Center
McAndrew, Alasdair
2008-01-01
The Hill cipher is the simplest example of a "block cipher," which takes a block of plaintext as input, and returns a block of ciphertext as output. Although it is insecure by modern standards, its simplicity means that it is well suited for the teaching of such concepts as encryption modes, and properties of cryptographic hash functions. Although…
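The Hill cipher described in the abstract is simple enough to show in full for a 2x2 key over the 26-letter alphabet; the key matrix below is the classic textbook example (its determinant, 9, is invertible mod 26), and the padding rule is an arbitrary choice for this sketch.

```python
# Hedged sketch of a 2x2 Hill cipher over the 26-letter alphabet.
KEY = [[3, 3], [2, 5]]            # invertible mod 26 (det = 9, gcd(9, 26) = 1)
KEY_INV = [[15, 17], [20, 9]]     # KEY * KEY_INV = I (mod 26)

def _apply(matrix, text):
    nums = [ord(c) - ord('A') for c in text.upper()]
    if len(nums) % 2:
        nums.append(ord('X') - ord('A'))          # pad the final block
    out = []
    for i in range(0, len(nums), 2):
        x, y = nums[i], nums[i + 1]
        out.append((matrix[0][0] * x + matrix[0][1] * y) % 26)
        out.append((matrix[1][0] * x + matrix[1][1] * y) % 26)
    return "".join(chr(n + ord('A')) for n in out)

def encrypt(plaintext):  return _apply(KEY, plaintext)
def decrypt(ciphertext): return _apply(KEY_INV, ciphertext)

c = encrypt("HELP")
print(c, decrypt(c))    # HIAT HELP: the ciphertext round-trips back to the plaintext
```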
1948-06-25
[Heavily garbled OCR fragment from a 1948 report on radio receiver noise: legible phrases state that the noise figure of a receiver is a quotient of available power ratios for fluctuation noise generated in the receiving system, and mention sudden ionospheric disturbances and absorption effects.]
2015-09-01
changing the weight file used without redeploying the application. 2.1 Mobile Device We used the same Sprint-brand Galaxy S3 smart phone. The... Galaxy S3 line of smart phones varied in its technical specifications depending on the carrier. For reference, the Sprint-brand Galaxy S3 has the
A Proposal for Kelly CriterionBased Lossy Network Compression
2016-03-01
[Bibliography fragment from a 2016 proposal on Kelly-criterion-based lossy network compression; surviving entries cite a Springer volume on data warehousing and data mining techniques for cyber security (2007, p. 83-108), Münz, Li and Carle on traffic anomaly detection, and Kim, Park, Park, Jung, Eom and Chung, "A study on effective hash-based load balancing scheme for parallel NIDS".]
New Media Literacy Education (NMLE): A Developmental Approach
ERIC Educational Resources Information Center
Graber, Diana
2012-01-01
The digital world is full of both possibility and peril, with rules of engagement being hashed out as we go. While schools are still "hesitant to embrace new technologies as a backlash from the significant, and largely ineffectual, investment in classroom computers as an instructional panacea during in the mid-1990's" (Collins and Halverson 2009),…
2008-03-01
[Front-matter fragment from a 2008 thesis: an abbreviation entry (WVD, Wigner-Ville Distribution), acknowledgments to David Caliga of SRC Computer, and table-of-contents entries for the Wigner-Ville and Choi-Williams distributions, including a table of C code output for the Wigner-Ville distribution.]
26 CFR 1.707-0 - Table of contents.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Rules Applicable to Guaranteed Payments, Preferred Returns, Operating Cash Flow Distributions, and...) Presumption regarding operating cash flow distributions. (1) In general. (2) Operating cash flow distributions. (i) In general. (ii) Operating cash flow safe harbor. (iii) Tiered partnerships. (c) Accumulation of...
26 CFR 1.707-0 - Table of contents.
Code of Federal Regulations, 2012 CFR
2012-04-01
... Rules Applicable to Guaranteed Payments, Preferred Returns, Operating Cash Flow Distributions, and...) Presumption regarding operating cash flow distributions. (1) In general. (2) Operating cash flow distributions. (i) In general. (ii) Operating cash flow safe harbor. (iii) Tiered partnerships. (c) Accumulation of...
26 CFR 1.707-0 - Table of contents.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Guaranteed Payments, Preferred Returns, Operating Cash Flow Distributions, and Reimbursements of Preformation... operating cash flow distributions. (1) In general. (2) Operating cash flow distributions. (i) In general. (ii) Operating cash flow safe harbor. (iii) Tiered partnerships. (c) Accumulation of guaranteed...
26 CFR 1.707-0 - Table of contents.
Code of Federal Regulations, 2013 CFR
2013-04-01
... Rules Applicable to Guaranteed Payments, Preferred Returns, Operating Cash Flow Distributions, and...) Presumption regarding operating cash flow distributions. (1) In general. (2) Operating cash flow distributions. (i) In general. (ii) Operating cash flow safe harbor. (iii) Tiered partnerships. (c) Accumulation of...
26 CFR 1.707-0 - Table of contents.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Rules Applicable to Guaranteed Payments, Preferred Returns, Operating Cash Flow Distributions, and...) Presumption regarding operating cash flow distributions. (1) In general. (2) Operating cash flow distributions. (i) In general. (ii) Operating cash flow safe harbor. (iii) Tiered partnerships. (c) Accumulation of...
2011-09-01
[Fragment from a 2011 report: a remark that, beyond this project, it will be useful to determine which access structures are meaningful and to take processing cost into account, plus table-of-contents entries for tables of costs incurred by various approaches, including Top-K plans.]
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-24
... Conservation Program: Energy Conservation Standards for Distribution Transformers; Correction AGENCY: Office of... standards for distribution transformers. It was recently discovered that values in certain tables of the...,'' including distribution transformers. The Energy Policy Act of 1992 (EPACT 1992), Public Law 102-486, amended...
Cardiorespiratory fitness of a Brazilian regional sample distributed in different tables.
Belli, Karlyse Claudino; Callegaro, Carine C; Calegaro, Carine; Richter, Cleusa Maria; Klafke, Jonatas Zeni; Stein, Ricardo; Viecili, Paulo Ricardo Nazario
2012-09-01
Most classification tables of cardiorespiratory fitness (CRF) used in clinical practice are international and have not been validated for the Brazilian population. That can result in important discrepancies when that classification is extrapolated to our population. To assess the use of major CRF tables available in a Brazilian population sample of the Central High Plan of the state of Rio Grande do Sul (RS). This study assessed the retrospective data of 2,930 individuals, living in 36 cities of the Central High Plan of the state of RS, and considered the following: presence of risk factors for cardiovascular disease and estimated maximum oxygen consumption (VO2peak) values obtained through exercise test with Bruce protocol. To classify CRF, the individuals were distributed according to sex, inserted in their respective age groups in the Cooper, American Heart Association (AHA) and Universidade Federal de São Paulo (Unifesp) tables, and classified according to their VO2peak. Women had lower VO2peak values as compared with those of men (23.5 ± 8.5 vs. 31.7 ± 10.8 mL.kg-1.min-1, p < 0.001). Considering both sexes, VO2peak showed an inverse and moderate correlation with age (R = -0.48, p < 0.001). An important discrepancy in the CRF classification levels was observed between the tables, ranging from 49% (Cooper x AHA) to 75% (Unifesp x AHA). Our findings indicate important discrepancy in the CRF classification levels of the tables assessed. Future studies could assess whether international tables could be used for the Brazilian population and populations of different regions of Brazil.
NASA Astrophysics Data System (ADS)
Yu, Xu; Shao, Quanqin; Zhu, Yunhai; Deng, Yuejin; Yang, Haijun
2006-10-01
With the development of informationization and the separation between data management departments and application departments, spatial data sharing has become one of the most important objectives in building a spatial information infrastructure, and spatial metadata management, data transmission security and data compression are the key technologies needed to realize it. This paper discusses the key technologies for metadata-based data interoperability; examines data compression algorithms such as adaptive Huffman, LZ77 and LZ78; and studies the application of digital signature techniques to spatial data, which can not only identify the transmitter of the data but also promptly detect whether the data have been tampered with during network transmission. Based on an analysis of symmetric encryption algorithms, including 3DES and AES, and the asymmetric encryption algorithm RSA, combined with a hash algorithm, an improved mixed encryption method for spatial data is presented. Digital signature technology and digital watermarking technology are also discussed. Then, a new solution for spatial data network distribution is put forward, which adopts a three-layer architecture. Based on this framework, we present a spatial data network distribution system that is efficient and safe, and we demonstrate the feasibility and validity of the proposed solution.
Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Secchi, Simone; Tumeo, Antonino; Villa, Oreste
Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems where a large virtually-shared address space is mapped on a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has been proved to be a promising approach to achieve good accuracy in reasonable times. One of the most critical factors in solving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated in a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed, by comparing three network models which operate at different levels of accuracy. The comparison and model validation is performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.
Powder Bed Layer Characteristics: The Overseen First-Order Process Input
NASA Astrophysics Data System (ADS)
Mindt, H. W.; Megahed, M.; Lavery, N. P.; Holmes, M. A.; Brown, S. G. R.
2016-08-01
Powder Bed Additive Manufacturing offers unique advantages in terms of manufacturing cost, lot size, and product complexity compared to traditional processes such as casting, where a minimum lot size is mandatory to achieve economic competitiveness. Many studies—both experimental and numerical—are dedicated to the analysis of how process parameters such as heat source power, scan speed, and scan strategy affect the final material properties. Apart from the general urge to increase the build rate using thicker powder layers, the coating process and how the powder is distributed on the processing table has received very little attention to date. This paper focuses on the first step of every powder bed build process: Coating the process table. A numerical study is performed to investigate how powder is transferred from the source to the processing table. A solid coating blade is modeled to spread commercial Ti-6Al-4V powder. The resulting powder layer is analyzed statistically to determine the packing density and its variation across the processing table. The results are compared with literature reports using the so-called "rain" models. A parameter study is performed to identify the influence of process table displacement and wiper velocity on the powder distribution. The achieved packing density and how that affects subsequent heat source interaction with the powder bed is also investigated numerically.
Statistical Handbook on Aging Americans. 1994 Edition. Statistical Handbook Series Number 5.
ERIC Educational Resources Information Center
Schick, Frank L., Ed.; Schick, Renee, Ed.
This statistical handbook contains 378 tables and charts illustrating the changes in the United States' aging population based on data collected during the 1990 census and several other surveys. The tables and charts are organized by topic as follows: demographics (age and sex distribution, life expectancy, race and ethnicity, geographic…
Federal Expenditures by State for Fiscal Year 1989.
ERIC Educational Resources Information Center
Bureau of the Census (DOC), Washington, DC. Governments Div.
Twelve data tables are presented covering federal grants, salaries and wages, direct payments to individuals, procurement contracts, and other direct federal expenditures, by state and by territory, for fiscal year 1989 (FY89), with some summary trend data. The tables provide the following data for FY89, respectively: (1) summary distribution of…
Tier One Performance Screen Initial Operational Test and Evaluation: 2011 Interim Report
2012-04-01
[Report documentation page and table-of-contents fragments from a 2012 interim report on the Tier One Performance Screen (TOPS) initial operational test and evaluation: sponsoring agency U.S. Army Research Institute, Alexandria, Virginia 22314; tables on the distribution of MOS in the full schoolhouse data file and on background and demographic characteristics of the TOPS samples.]
40 CFR Table 7 to Subpart Eeee of... - Initial Compliance With Work Practice Standards
Code of Federal Regulations, 2011 CFR
2011-07-01
... equivalent control that meets the requirements in Table 4 to this subpart, item 1.a i. After emptying and... out a leak detection and repair program or equivalent control according to one of the subparts listed... CATEGORIES National Emission Standards for Hazardous Air Pollutants: Organic Liquids Distribution (Non...
40 CFR Table 7 to Subpart Eeee of... - Initial Compliance With Work Practice Standards
Code of Federal Regulations, 2010 CFR
2010-07-01
... equivalent control that meets the requirements in Table 4 to this subpart, item 1.a i. After emptying and... out a leak detection and repair program or equivalent control according to one of the subparts listed... CATEGORIES National Emission Standards for Hazardous Air Pollutants: Organic Liquids Distribution (Non...
Bifulco, Paolo; Massa, Rita; Cesarelli, Mario; Romano, Maria; Fratini, Antonio; Gargiulo, Gaetano D; McEwan, Alistair L
2013-08-12
Electrosurgery units are widely employed in modern surgery. Advances in technology have enhanced the safety of these devices; nevertheless, accidental burns are still regularly reported. This study focuses on possible causes of sacral burns as a complication of the use of electrosurgery. Burns are caused by local densifications of the current, but the actual pathway of current within the patient's body is unknown. Numerical electromagnetic analysis can help in understanding the issue. To this aim, an accurate heterogeneous model of the human body (including seventy-seven different tissues), electrosurgery electrodes, operating table and mattress was built to resemble a typical surgery condition. The patient lies supine on the mattress with the active electrode placed onto the thorax and the return electrode on his back. Common operating frequencies of electrosurgery units were considered. Finite Difference Time Domain electromagnetic analysis was carried out to compute the spatial distribution of current density within the patient's body. A differential analysis, changing the electrical properties of the operating table from a conductor to an insulator, was also performed. Results revealed that distributed capacitive coupling between the patient's body and the conductive operating table offers an alternative path to the electrosurgery current. The patient's anatomy, the positioning and the different electromagnetic properties of tissues promote a densification of the current at the head and sacral region. In particular, high values of current density were located behind the sacral bone and beneath the skin. This did not occur in the case of a non-conductive operating table. Results of the simulation highlight the role played by capacitive couplings between the return electrode and the conductive operating table. The concentration of current density may result in an undesired rise in temperature, producing burns in body regions far from the electrodes. This outcome is concordant with the type of surgery-related sacral burns reported in the literature. Such burns cannot be immediately detected after surgery, but appear later and can be confused with bedsores. In addition, the dosimetric analysis suggests that reducing the capacitive coupling between the return electrode and the operating table can decrease or avoid this problem.
Barlow, P.M.; Wagner, B.J.; Belitz, K.
1996-01-01
The simulation-optimization approach is used to identify ground-water pumping strategies for control of the shallow water table in the western San Joaquin Valley, California, where shallow ground water threatens continued agricultural productivity. The approach combines the use of ground-water flow simulation with optimization techniques to build on and refine pumping strategies identified in previous research that used flow simulation alone. Use of the combined simulation-optimization model resulted in a 20 percent reduction in the area subject to a shallow water table over that identified by use of the simulation model alone. The simulation-optimization model identifies increasingly more effective pumping strategies for control of the water table as the complexity of the problem increases; that is, as the number of subareas in which pumping is to be managed increases, the simulation-optimization model is better able to discriminate areally among subareas to determine optimal pumping locations. The simulation-optimization approach provides an improved understanding of controls on the ground-water flow system and management alternatives that can be implemented in the valley. In particular, results of the simulation-optimization model indicate that optimal pumping strategies are constrained by the existing distribution of wells between the semiconfined and confined zones of the aquifer, by the distribution of sediment types (and associated hydraulic conductivities) in the western valley, and by the historical distribution of pumping throughout the western valley.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hymel, Ross
The Public Key (PK) FPGA software performs asymmetric authentication using the 163-bit Elliptic Curve Digital Signature Algorithm (ECDSA) on an embedded FPGA platform. A digital signature is created on user-supplied data, and communication with a host system is performed via a Serial Peripheral Interface (SPI) bus. The software includes all components necessary for signing, including a custom random number generator for key creation and SHA-256 for data hashing.
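As a rough illustration of the signing flow described above, the sketch below uses Python's `cryptography` package; it substitutes the NIST P-256 curve for the 163-bit binary curve used on the FPGA, and the payload and variable names are assumptions, not details of the OSTI software.

```python
# Hedged sketch: ECDSA signing of user-supplied data with SHA-256.
# NOTE: the FPGA software uses a 163-bit elliptic curve; P-256 is
# substituted here only because it is universally available.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes

private_key = ec.generate_private_key(ec.SECP256R1())    # key creation
payload = b"user-supplied data from the SPI bus"         # hypothetical payload

# Sign: the library hashes the payload with SHA-256 internally.
signature = private_key.sign(payload, ec.ECDSA(hashes.SHA256()))

# A host system would verify with the matching public key.
private_key.public_key().verify(signature, payload, ec.ECDSA(hashes.SHA256()))
print("signature verified, length =", len(signature))
```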
Performance Analysis of the Mobile IP Protocol (RFC 3344 and Related RFCS)
2006-12-01
Encapsulation HMAC Keyed-Hash Message Authentication Code ICMP Internet Control Message Protocol IEEE Institute of Electrical and Electronics Engineers IETF...Internet Engineering Task Force IOS Internetwork Operating System IP Internet Protocol ITU International Telecommunication Union LAN Local Area...network computing. Most organizations today have sophisticated networks that are connected to the Internet. The major benefit reaped from such a
An Intelligent Web-Based System for Diagnosing Student Learning Problems Using Concept Maps
ERIC Educational Resources Information Center
Acharya, Anal; Sinha, Devadatta
2017-01-01
The aim of this article is to propose a method for developing a concept map in a web-based environment to identify concepts a student is deficient in after learning by traditional methods. The Direct Hashing and Pruning algorithm was used to construct the concept map. Redundancies within the concept map were removed to generate a learning sequence.…
Certificate Revocation Using Fine Grained Certificate Space Partitioning
NASA Astrophysics Data System (ADS)
Goyal, Vipul
A new certificate revocation system is presented. The basic idea is to divide the certificate space into several partitions, the number of partitions being dependent on the PKI environment. Each partition contains the status of a set of certificates. A partition may either expire or be renewed at the end of a time slot. This is done efficiently using hash chains.
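A hash chain of the kind used to renew a partition can be built and verified in a few lines; the sketch below (plain `hashlib`, illustrative names and chain length only) shows the general idea of releasing successive preimages to prove continued validity.

```python
# Hedged sketch of a hash chain: publish the anchor h_n; at time slot i,
# releasing h_{n-i} proves validity, since hashing it i times yields h_n.
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def build_chain(seed: bytes, n: int):
    chain = [seed]
    for _ in range(n):
        chain.append(sha256(chain[-1]))
    return chain                      # chain[n] is the public anchor

def verify(token: bytes, anchor: bytes, slots_elapsed: int) -> bool:
    h = token
    for _ in range(slots_elapsed):
        h = sha256(h)
    return h == anchor

chain = build_chain(b"partition-secret", 365)
anchor = chain[-1]
print(verify(chain[365 - 30], anchor, 30))   # True: still valid after 30 slots
```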
Cost-Sensitive Local Binary Feature Learning for Facial Age Estimation.
Lu, Jiwen; Liong, Venice Erin; Zhou, Jie
2015-12-01
In this paper, we propose a cost-sensitive local binary feature learning (CS-LBFL) method for facial age estimation. Unlike the conventional facial age estimation methods that employ hand-crafted descriptors or holistically learned descriptors for feature representation, our CS-LBFL method learns discriminative local features directly from raw pixels for face representation. Motivated by the fact that facial age estimation is a cost-sensitive computer vision problem and local binary features are more robust to illumination and expression variations than holistic features, we learn a series of hashing functions to project raw pixel values extracted from face patches into low-dimensional binary codes, where binary codes with similar chronological ages are projected as close as possible, and those with dissimilar chronological ages are projected as far as possible. Then, we pool and encode these local binary codes within each face image as a real-valued histogram feature for face representation. Moreover, we propose a cost-sensitive local binary multi-feature learning method to jointly learn multiple sets of hashing functions using face patches extracted from different scales to exploit complementary information. Our methods achieve competitive performance on four widely used face aging data sets.
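The core operation, projecting raw pixel vectors to binary codes and pooling them into a histogram, can be mimicked with random rather than learned projections; everything below is a simplified stand-in for the learned CS-LBFL hash functions, with toy data.

```python
# Hedged sketch: hash raw pixel patches to K-bit binary codes by sign
# thresholding a linear projection, then pool codes into a histogram.
# The projection here is random; CS-LBFL learns it with a cost-sensitive objective.
import numpy as np

rng = np.random.default_rng(0)
patches = rng.random((500, 64))          # 500 patches of 8x8 raw pixels (toy data)
K = 8                                    # number of hash bits
W = rng.standard_normal((64, K))         # stand-in for learned hashing functions

codes = (patches @ W > 0).astype(int)            # binary codes, shape (500, K)
indices = codes @ (1 << np.arange(K))            # interpret each code as an integer
histogram = np.bincount(indices, minlength=2**K) # pooled real-valued face descriptor
print(histogram.shape, histogram.sum())
```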
Potency trends of delta9-THC and other cannabinoids in confiscated marijuana from 1980-1997.
ElSohly, M A; Ross, S A; Mehmedic, Z; Arafat, R; Yi, B; Banahan, B F
2000-01-01
The analysis of 35,312 cannabis preparations confiscated in the USA over a period of 18 years for delta-9-tetrahydrocannabinol (delta9-THC) and other major cannabinoids is reported. Samples were identified as cannabis, hashish, or hash oil. Cannabis samples were further subdivided into marijuana (loose material, kilobricks and buds), sinsemilla, Thai sticks and ditchweed. The data showed that more than 82% of all confiscated samples were in the marijuana category for every year except 1980 (61%) and 1981 (75%). The potency (concentration of delta9-THC) of marijuana samples rose from less than 1.5% in 1980 to approximately 3.3% in 1983 and 1984, then fluctuated around 3% till 1992. Since 1992, the potency of confiscated marijuana samples has continuously risen, going from 3.1% in 1992 to 4.2% in 1997. The average concentration of delta9-THC in all cannabis samples showed a gradual rise from 3% in 1991 to 4.47% in 1997. Hashish and hash oil, on the other hand, showed no specific potency trends. Other major cannabinoids [cannabidiol (CBD), cannabinol (CBN), and cannabichromene (CBC)] showed no significant change in their concentration over the years.
Li, Chun-Ta; Weng, Chi-Yao; Lee, Cheng-Chi
2015-08-01
Radio Frequency Identification (RFID) based solutions are widely used to provide many healthcare applications, including patient monitoring, object traceability, drug administration systems and the telecare medicine information system (TMIS). In order to reduce malpractices and ensure patient privacy, in 2015, Srivastava et al. proposed a hash-based RFID tag authentication protocol in TMIS. Their protocol uses a lightweight hash operation and a synchronized secret value shared between the back-end server and the tag, which is more secure and efficient than other related RFID authentication protocols. Unfortunately, in this paper, we demonstrate that Srivastava et al.'s tag authentication protocol has a serious security problem: an adversary may use a stolen/lost reader to connect to the medical back-end server that stores information associated with tagged objects, and through this privacy breach the adversary could maliciously reveal medical data obtained from the stolen/lost reader. Therefore, we propose a secure and efficient RFID tag authentication protocol to overcome these security flaws and improve system efficiency. Compared with Srivastava et al.'s protocol, the proposed protocol not only inherits the advantages of Srivastava et al.'s authentication protocol for TMIS but also provides better security with high system efficiency.
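The flavor of a hash-based tag authentication exchange with a synchronized secret can be sketched as below; the message layout, field names, and update rule are illustrative assumptions, not the protocol of Srivastava et al. or the improved protocol.

```python
# Hedged sketch of a hash-based challenge-response between server and tag,
# with a synchronized secret updated after each successful run.
import hashlib, secrets

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

shared = secrets.token_bytes(16)          # secret synchronized by server and tag
server_secret = tag_secret = shared

challenge = secrets.token_bytes(16)                        # server -> tag
response = h(tag_secret, challenge)                        # tag -> server

assert response == h(server_secret, challenge)             # server verifies the tag
server_secret = h(server_secret, b"update")                # both sides re-synchronize
tag_secret = h(tag_secret, b"update")
print("tag authenticated; secrets still synchronized:", server_secret == tag_secret)
```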
10 CFR Appendix C to Subpart T of... - Certification Report for Distribution Transformers
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 3 2011-01-01 2011-01-01 false Certification Report for Distribution Transformers C..., App. C Appendix C to Subpart T of Part 431—Certification Report for Distribution Transformers All...: For Existing, New, or Modified Models: 1 Prepare tables that will list distribution transformer...
NASA Astrophysics Data System (ADS)
Jencso, K. G.; McGlynn, B. L.; Gooseff, M. N.; Wondzell, S. M.; Bencala, K. E.; Payn, R. A.
2007-12-01
Understanding how hillslope and riparian water table dynamics influence catchment scale hydrologic response remains a challenge. In steep headwater catchments with shallow soils, topographic convergence and divergence (upslope accumulated area-UAA) is a hypothesized first-order control on the distribution of soil water and groundwater. To test the relationship between UAA and the longevity of hillslope-riparian-stream shallow groundwater connectivity, we quantified water table continuity based on 80+ recording wells distributed across 24 hillslope-riparian-stream cross-sections. Cross-section upstream catchment areas ranged in size from 0.41 to 17.2 km2, within the Tenderfoot Creek Experimental Forest (U.S. Forest Service), northern Rocky Mountains, Montana, USA. We quantified toe-slope UAA and the topographic index (TI = ln a/tanβ) with a Multiple-D-Infinity (area routing in multiple infinite downslope directions) flow accumulation algorithm analysis of 1, 3, 10, and 30m ALSM derived DEMs. Indices derived from the 10m DEM best characterized subsurface flow accumulation, highlighting the balance between the process of interest, topographic complexity, and optimal grid scale representation. Across the 24 transects, toe-slope UAA ranged from 600-40,000 m2, the TI ranged from 5-16, and riparian widths were between 0-60m. Patterns in shallow groundwater table fluctuations suggest hydrologic dynamics reflective of hillslope-riparian landscape setting. Specifically, correlations were observed between longevity of hillslope-riparian water table continuity and the size of the UAA (r2 = 0.84) and its topographic index (r2 = 0.86). These observations highlight the temporal component of topographic-hydrologic relationships important for understanding threshold mediated hydrologic variables. We are working to quantify the characteristics and spatial distribution of hillslope-riparian sequences and their water table dynamics to temporally link runoff source areas to whole catchment hydrologic response.
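For reference, the topographic index quoted above is simply TI = ln(a / tanβ), where a is the upslope accumulated area per unit contour length and β is the local slope; a one-line computation with made-up values is sketched below.

```python
# Hedged example: topographic index TI = ln(a / tan(beta)) for toy values.
import numpy as np

uaa = np.array([600.0, 5_000.0, 40_000.0])   # upslope accumulated area (toy values)
slope_deg = np.array([25.0, 12.0, 4.0])      # local slope in degrees (toy values)

ti = np.log(uaa / np.tan(np.radians(slope_deg)))
print(ti)   # larger TI -> wetter, more convergent landscape positions
```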
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silosky, M; Marsh, R
Purpose: Localizer projection radiographs acquired prior to CT scans are used to estimate patient size, affecting the function of Automatic Tube Current Modulation (ATCM) and hence CTDIvol and SSDE. Due to geometric effects, the projected patient size varies with scanner table height and with the orientation of the localizer (AP versus PA). This study sought to determine if patient size estimates made from localizer scans are affected by variations in fat distribution, specifically when the widest part of the patient is not at the geometric center of the patient. Methods: Lipid gel bolus material was wrapped around an anthropomorphic phantom to simulate two different body mass distributions. The first represented a patient with fairly rigid fat and had a generally oval shape. The second was bell-shaped, representing corpulent patients more susceptible to gravity's lustful tug. Each phantom configuration was imaged using an AP localizer and then a PA localizer. This was repeated at various scanner table heights. The width of the phantom was measured from the localizer and diagnostic images using in-house software. Results: 1) The projected phantom width varied up to 39% as table height changed. 2) At some table heights, the width of the phantom, designed to represent larger patients, exceeded the localizer field of view, resulting in an underestimation of the phantom width. 3) The oval-shaped phantom approached a normalized phantom width of 1 at a table height several centimeters lower (AP localizer) or higher (PA localizer) than did the bell-shaped phantom. Conclusion: Accurate estimation of patient size from localizer scans is dependent on patient positioning with respect to scanner isocenter and is limited in large patients. Further, patient size is more accurately measured on projection images if the widest part of the patient, rather than the geometric center of the patient, is positioned at scanner isocenter.
Pittsburgh Tuskegee Prostate Training Program
2014-05-01
11 . SPONSOR/MONITOR’S REPORT NUMBER(S) 12. DISTRIBUTION / AVAILABILITY STATEMENT X Approved for public release; distribution...Prostate Specific Membrane Antigen ( Psma ) and Folate Levels on Gene Expression and Proliferation in Prostatic Cancer Cells Denise O’Keefe Table...
Quantiles for Finite Mixtures of Normal Distributions
ERIC Educational Resources Information Center
Rahman, Mezbahur; Rahman, Rumanur; Pearson, Larry M.
2006-01-01
Quantiles for finite mixtures of normal distributions are computed. The difference between a linear combination of independent normal random variables and a linear combination of independent normal densities is emphasized. (Contains 3 tables and 1 figure.)
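One common way to compute such quantiles numerically is to invert the mixture CDF by root finding; a minimal SciPy sketch follows, with arbitrarily chosen weights and component parameters.

```python
# Hedged sketch: quantile of a finite normal mixture by inverting its CDF.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

weights = np.array([0.3, 0.7])        # mixture weights (toy values)
means = np.array([-1.0, 2.0])
sds = np.array([1.0, 0.5])

def mixture_cdf(x: float) -> float:
    return float(np.sum(weights * norm.cdf(x, loc=means, scale=sds)))

def mixture_quantile(p: float) -> float:
    lo, hi = means.min() - 10 * sds.max(), means.max() + 10 * sds.max()
    return brentq(lambda x: mixture_cdf(x) - p, lo, hi)

print(mixture_quantile(0.5))   # median of the mixture
```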
From the Kitchen Table to the Boardroom Table: The Canadian Family and the Work Place.
ERIC Educational Resources Information Center
Vanier Inst. of the Family, Ottawa (Ontario).
This report is intended to increase awareness of the connections between two central spheres of social activity--work and family. Complementing the conventional analyses of the labor force in terms of economic sectors, educational requisites, age and income distribution, the report also presents an analysis of the family circumstances of the…
Local Anesthetic Microencapsulation.
1983-03-18
Polylactide Availability 2 2. Microencapsulation of Etidocaine-HCl 2 3. Porosimetry Measurements 2 B. Stability of Stored Microcapsules 8 C. In Vivo... Microencapsulation 3 Table 2 Etidocaine-HCl Microcapsules Size Distribution 4 Table 3 Porosimetry Data on Etidocaine-HCl Microcapsules (70% Drug, 106...rapid releasing etidocaine microcapsules can provide long term anesthesia (e.g., 4mg of microencapsulated etidocaine-HCl provided anesthesia after five
A Reconceptualization of the Adaptability Rating for Military Aviation
2017-01-01
7 LIST OF TABLES Page Table 1. Flying Adaptability Rating System in 1931...colleagues, and one that satisfies the needs of the system by fostering cooperation with Line leadership. 2 DISTRIBUTION STATEMENT A. Approved for...came the Flying Adaptability Rating. Assigning value to certain variables, it used a grading system in an attempt to predict whether the candidate
Development and Simulation Studies of a Novel Electromagnetics Code
2011-10-20
121 Bibliography 123 LIST OF TABLES xii List of Tables 3.1 The rf photoinjector beam parameters of the BNL 2.856 GHz and the ANL AWA 1.3 GHz guns...examples of field plots. The space-charge fields are numerically computed with the parameters of BNL 2.856 GHz gun. Figure 3.2 shows a 3D plot of Er vs...the BNL 2.856 GHz and the ANL AWA 1.3 GHz guns. The main gun parameters are given in the Table 3.1. The distribution of the bunched beam can be
Determining Normal-Distribution Tolerance Bounds Graphically
NASA Technical Reports Server (NTRS)
Mezzacappa, M. A.
1983-01-01
Graphical method requires calculations and table lookup. Distribution established from only three points: mean upper and lower confidence bounds and lower confidence bound of standard deviation. Method requires only few calculations with simple equations. Graphical procedure establishes best-fit line for measured data and bounds for selected confidence level and any distribution percentile.
Diameter Distributions in Natural Yellow-Poplar Stands
Charles E. McGee; Lino Della-Bianca
1967-01-01
Diameter distributions obtained from 141 pure, natural unthinned yellow-poplar stands in the Appalachian Mountains of Virginia, North Carolina, and Georgia are presented in tables. The distributions are described in relation to stand age, site index, and total number of trees per acre, and are useful for stand management planning.
10 CFR 431.196 - Energy conservation standards and their effective dates.
Code of Federal Regulations, 2014 CFR
2014-01-01
... CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT Distribution Transformers Energy Conservation Standards § 431... Transformers. (1) The efficiency of a low-voltage, dry-type distribution transformer manufactured on or after... rating in the table below. Low-voltage dry-type distribution transformers with kVA ratings not appearing...
Anthropometric and Mass Distribution Characteristics of the Adult Female. Revised
1983-09-01
systems, and development of body prostheses. Key Words: Anthropometry, Anatomical Axis, Body. Document is available to the...COLLECTION ... 3 The Subjects ... 3 Anthropometry ...OF TABLES Table No. Anthropometry and Mass Distribution Data for the Total Body and Its Segments: 1 Head
DOE Office of Scientific and Technical Information (OSTI.GOV)
Penfold, S; Miller, A
2015-06-15
Purpose: Stoichiometric calibration of Hounsfield Units (HUs) for conversion to proton relative stopping powers (RStPs) is vital for accurate dose calculation in proton therapy. However, proton dose distributions are not only dependent on RStP, but also on relative scattering power (RScP) of patient tissues. RScP is approximated from material density but a stoichiometric calibration of HU-density tables is commonly neglected. The purpose of this work was to quantify the difference in calculated dose of a commercial TPS when using HU-density tables based on tissue substitute materials and stoichiometric calibrated ICRU tissues. Methods: Two HU-density calibration tables were generated based on scans of the CIRS electron density phantom. The first table was based directly on measured HU and manufacturer quoted density of tissue substitute materials. The second was based on the same CT scan of the CIRS phantom followed by a stoichiometric calibration of ICRU44 tissue materials. The research version of Pinnacle³ proton therapy was used to compute dose in a patient CT data set utilizing both HU-density tables. Results: The two HU-density tables showed significant differences for bone tissues; the difference increasing with increasing HU. Differences in density calibration table translated to a difference in calculated RScP of −2.5% for ICRU skeletal muscle and 9.2% for ICRU femur. Dose-volume histogram analysis of a parallel opposed proton therapy prostate plan showed that the difference in calculated dose was negligible when using the two different HU-density calibration tables. Conclusion: The impact of HU-density calibration technique on proton therapy dose calculation was assessed. While differences were found in the calculated RScP of bony tissues, the difference in dose distribution for realistic treatment scenarios was found to be insignificant.
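A calibration table of this sort is typically applied by piecewise-linear interpolation from HU to mass density; the sketch below shows that lookup step only, and every number in it is an invented placeholder, not a CIRS or stoichiometric value from the study.

```python
# Hedged sketch: convert CT numbers to mass density with a piecewise-linear
# HU-density calibration table (all table values here are placeholders).
import numpy as np

hu_points      = np.array([-1000.0, 0.0, 60.0, 1000.0, 2000.0])   # calibration HUs
density_points = np.array([0.001, 1.000, 1.060, 1.600, 2.100])    # g/cm^3, placeholders

def hu_to_density(hu):
    return np.interp(hu, hu_points, density_points)

voxels_hu = np.array([-700.0, 40.0, 850.0])
print(hu_to_density(voxels_hu))   # densities used e.g. to estimate scattering power
```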
Sarfaraz, Hasan; Paulose, Anoopa; Shenoy, K. Kamalakanth; Hussain, Akhter
2015-01-01
Aims: The aim of the study was to evaluate the stress distribution pattern in the implant and the surrounding bone for a passive and a friction fit implant abutment interface and to analyze the influence of occlusal table dimension on the stress generated. Materials and Methods: CAD models of two different types of implant abutment connections, the passive fit or slip-fit represented by the Nobel Replace Tri-lobe connection and the friction fit or active fit represented by the Nobel Active conical connection, were made. The stress distribution pattern was studied at different occlusal dimensions. Six models were constructed in PRO-ENGINEER 05 for the two implant abutment connections at three different occlusal dimensions each. The implant and abutment complex was placed in cortical and cancellous bone modeled using a computed tomography scan. This complex was subjected to a force of 100 N in the axial and oblique direction. The amount of stress and the pattern of stress generated were recorded on a color scale using ANSYS 13 software. Results: The results showed that the overall maximum von Mises stress on the bone is significantly less for the friction fit than the passive fit in all loading conditions, whereas stresses on the implant were significantly higher for the friction fit than the passive fit. The narrow occlusal table models generated the least amount of stress on the implant abutment interface. Conclusion: It can thus be concluded that the conical connection distributes more stress to the implant body and dissipates less stress to the surrounding bone. A narrow occlusal table considerably reduces the occlusal overload. PMID:26929518
Relating quantum privacy and quantum coherence: an operational approach.
Devetak, I; Winter, A
2004-08-20
Given many realizations of a state or a channel as a resource, two parties can generate a secret key as well as entanglement. We describe protocols to perform the secret key distillation (as it turns out, with optimal rate). Then we show how to achieve optimal entanglement generation rates by "coherent" implementation of a class of secret key agreement protocols, proving the long-conjectured "hashing inequality."
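For orientation, the hashing inequality referred to here is usually stated as a lower bound on distillable entanglement in terms of the coherent information; a standard form (our notation, not the paper's) is:

```latex
% Hashing inequality (standard statement; notation is ours): the one-way
% distillable entanglement is at least the coherent information.
E_D(\rho_{AB}) \;\ge\; I(A\rangle B)_{\rho} \;=\; S(\rho_B) - S(\rho_{AB}),
\qquad S(\rho) = -\operatorname{Tr}\,\rho\log\rho .
```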
Kernelized Locality-Sensitive Hashing for Fast Image Landmark Association
2011-03-24
based Simultaneous Localization and Mapping (SLAM). The problem, however, is that vision-based navigation techniques can require excessive amounts of...up and optimizing the data association process in vision-based SLAM. Specifically, this work studies the current methods that algorithms use to...required for location identification than that of other methods. This work can then be extended into a vision-SLAM implementation to subsequently
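The basic locality-sensitive hashing step that this line of work builds on can be sketched with random hyperplanes: nearby descriptors collide in the same bucket with high probability. The code below is a generic illustration with toy data, not the kernelized (KLSH) variant studied in the report.

```python
# Hedged sketch: random-hyperplane LSH for fast candidate lookup of
# similar image descriptors (generic, not the kernelized KLSH variant).
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)
descriptors = rng.standard_normal((1000, 128))        # toy image landmark descriptors
hyperplanes = rng.standard_normal((16, 128))          # 16 hash bits

def lsh_key(x: np.ndarray) -> int:
    bits = (hyperplanes @ x > 0).astype(int)
    return int(bits @ (1 << np.arange(16)))

buckets = defaultdict(list)
for i, d in enumerate(descriptors):
    buckets[lsh_key(d)].append(i)

query = descriptors[42] + 0.05 * rng.standard_normal(128)   # slightly perturbed copy
candidates = buckets[lsh_key(query)]
print(42 in candidates, len(candidates))   # usually True, with a short candidate list
```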
Provably secure Rabin-p cryptosystem in hybrid setting
NASA Astrophysics Data System (ADS)
Asbullah, Muhammad Asyraf; Ariffin, Muhammad Rezal Kamel
2016-06-01
In this work, we design an efficient and provably secure hybrid cryptosystem formed by combining the Rabin-p cryptosystem with an appropriate symmetric encryption scheme. We set up a hybrid structure that is proven secure in the sense of indistinguishability against chosen-ciphertext attack. We presume that the integer factorization problem is hard and that the hash function is modeled as a random function.
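The hybrid structure itself, an asymmetric layer that encapsulates a random seed, a hash used as a key-derivation function, and a symmetric layer that encrypts the payload, can be sketched generically as below. This is not the Rabin-p construction, just the KEM/DEM shape it follows, with a toy Rabin-style squaring step, made-up parameters, and decryption omitted.

```python
# Hedged sketch of a hybrid (KEM/DEM) encryption shape: encapsulate a random
# seed with a Rabin-style squaring modulo n, derive a symmetric key by hashing,
# and encrypt the message with a hash-based keystream.  Toy parameters only;
# decryption (which needs the factors of n) is omitted.
import hashlib, secrets

# Toy modulus n = p*q with p, q = 3 (mod 4); real parameters are much larger.
p, q = 10007, 10039
n = p * q

def keystream(key: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

seed = secrets.randbelow(n)                             # random seed to encapsulate
c1 = pow(seed, 2, n)                                    # asymmetric part (Rabin-style square)
sym_key = hashlib.sha256(str(seed).encode()).digest()   # hash used as a KDF

message = b"hybrid encryption sketch"
c2 = bytes(a ^ b for a, b in zip(message, keystream(sym_key, len(message))))
print(c1, c2.hex())
```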
Das, Ashok Kumar; Odelu, Vanga; Goswami, Adrijit
2015-09-01
The telecare medicine information system (TMIS) helps the patients to gain the health monitoring facility at home and access medical services over the Internet of mobile networks. Recently, Amin and Biswas presented a smart card based user authentication and key agreement security protocol usable for the TMIS using the cryptographic one-way hash function and biohashing function, and claimed that their scheme is secure against all possible attacks. Though their scheme is efficient due to usage of one-way hash function, we show that their scheme has several security pitfalls and design flaws, such as (1) it fails to protect against privileged-insider attack, (2) it fails to protect against strong replay attack, (3) it fails to protect against strong man-in-the-middle attack, (4) it has a design flaw in the user registration phase, (5) it has a design flaw in the login phase, (6) it has a design flaw in the password change phase, (7) it lacks support for a biometric update phase, and (8) it has flaws in formal security analysis. In order to withstand these security pitfalls and design flaws, we aim to propose a secure and robust user authenticated key agreement scheme for the hierarchical multi-server environment suitable in TMIS using the cryptographic one-way hash function and fuzzy extractor. Through the rigorous security analysis including the formal security analysis using the widely-accepted Burrows-Abadi-Needham (BAN) logic, the formal security analysis under the random oracle model and the informal security analysis, we show that our scheme is secure against possible known attacks. Furthermore, we simulate our scheme using the most-widely accepted and used Automated Validation of Internet Security Protocols and Applications (AVISPA) tool. The simulation results show that our scheme is also secure. Our scheme is more efficient in computation and communication as compared to Amin-Biswas's scheme and other related schemes. In addition, our scheme supports extra functionality features as compared to other related schemes. As a result, our scheme is very appropriate for practical applications in TMIS.
Depth distribution of microbial production and oxidation of methane in northern boreal peatlands.
Sundh, I; Nilsson, M; Granberg, G; Svensson, B H
1994-05-01
The depth distributions of anaerobic microbial methane production and potential aerobic microbial methane oxidation were assessed at several sites in both Sphagnum- and sedge-dominated boreal peatlands in Sweden, and compared with net methane emissions from the same sites. Production and oxidation of methane were measured in peat slurries, and emissions were measured with the closed-chamber technique. Over all eleven sites sampled, production was, on average, highest 12 cm below the depth of the average water table. On the other hand, highest potential oxidation of methane coincided with the depth of the average water table. The integrated production rate in the 0-60 cm interval ranged between 0.05 and 1.7 g CH4 m(-2) day(-1) and was negatively correlated with the depth of the average water table (linear regression: r2 = 0.50, P = 0.015). The depth-integrated potential CH4-oxidation rate ranged between 3.0 and 22.1 g CH4 m(-2) day(-1) and was unrelated to the depth of the average water table. A larger fraction of the methane was oxidized at sites with low average water tables; hence, our results show that low net emission rates in these environments are caused not only by lower methane production rates, but also by conditions more favorable for the development of CH4-oxidizing bacteria in these environments.
Storing, Browsing, Querying, and Sharing Data: the THREDDS Data Repository (TDR)
NASA Astrophysics Data System (ADS)
Wilson, A.; Lindholm, D.; Baltzer, T.
2005-12-01
The Unidata Internet Data Distribution (IDD) network delivers gigabytes of data per day in near real time to sites across the U.S. and beyond. The THREDDS Data Server (TDS) supports public browsing of metadata and data access via OPeNDAP enabled URLs for datasets such as these. With such large quantities of data, sites generally employ a simple data management policy, keeping the data for a relatively short term on the order of hours to perhaps a week or two. In order to save interesting data in longer term storage and make it available for sharing, a user must move the data herself. In this case the user is responsible for determining where space is available, executing the data movement, generating any desired metadata, and setting access control to enable sharing. This task sequence is generally based on execution of a sequence of low level operating system specific commands with significant user involvement. The LEAD (Linked Environments for Atmospheric Discovery) project is building a cyberinfrastructure to support research and education in mesoscale meteorology. LEAD orchestrations require large, robust, and reliable storage with speedy access to stage data and store both intermediate and final results. These requirements suggest storage solutions that involve distributed storage, replication, and interfacing to archival storage systems such as mass storage systems and tape or removable disks. LEAD requirements also include metadata generation and access in order to support querying. In support of both THREDDS and LEAD requirements, Unidata is designing and prototyping the THREDDS Data Repository (TDR), a framework for a modular data repository to support distributed data storage and retrieval using a variety of back end storage media and interchangeable software components. The TDR interface will provide high level abstractions for long term storage, controlled, fast and reliable access, and data movement capabilities via a variety of technologies such as OPeNDAP and gridftp. The modular structure will allow substitution of software components so that both simple and complex storage media can be integrated into the repository. It will also allow integration of different varieties of supporting software. For example, if replication is desired, replica management could be handled via a simple hash table or a complex solution such as Replica Locater Service (RLS). In order to ensure that metadata is available for all the data in the repository, the TDR will also generate THREDDS metadata when necessary. Users will be able to establish levels of access control to their metadata and data. Coupled with a THREDDS Data Server, both browsing via THREDDS catalogs and querying capabilities will be supported. This presentation will describe the motivating factors, current status, and future plans of the TDR. References: IDD: http://www.unidata.ucar.edu/content/software/idd/index.html THREDDS: http://www.unidata.ucar.edu/content/projects/THREDDS/tech/server/ServerStatus.html LEAD: http://lead.ou.edu/ RLS: http://www.isi.edu/~annc/papers/chervenakRLSjournal05.pdf
Tests of Independence in Contingency Tables with Small Samples: A Comparison of Statistical Power.
ERIC Educational Resources Information Center
Parshall, Cynthia G.; Kromrey, Jeffrey D.
1996-01-01
Power and Type I error rates were estimated for contingency tables with small sample sizes for the following four types of tests: (1) Pearson's chi-square; (2) chi-square with Yates's continuity correction; (3) the likelihood ratio test; and (4) Fisher's Exact Test. Various marginal distributions, sample sizes, and effect sizes were examined. (SLD)
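For a concrete 2x2 example, the four tests compared above can be run directly in SciPy (Yates's correction is the `correction` flag of `chi2_contingency`, and the likelihood ratio test is its `lambda_="log-likelihood"` option); the counts below are arbitrary.

```python
# Hedged example: small-sample tests of independence on an arbitrary 2x2 table.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

table = np.array([[8, 2],
                  [1, 5]])          # arbitrary small-sample counts

chi2, p_pearson, _, _ = chi2_contingency(table, correction=False)   # Pearson chi-square
chi2_y, p_yates, _, _ = chi2_contingency(table, correction=True)    # Yates's continuity correction
g, p_lr, _, _ = chi2_contingency(table, correction=False,
                                 lambda_="log-likelihood")          # likelihood ratio (G) test
_, p_fisher = fisher_exact(table)                                   # Fisher's Exact Test

print(p_pearson, p_yates, p_lr, p_fisher)
```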
ERIC Educational Resources Information Center
Warrens, Matthijs J.
2008-01-01
We discuss properties that association coefficients may have in general, e.g., zero value under statistical independence, and we examine coefficients for 2x2 tables with respect to these properties. Furthermore, we study a family of coefficients that are linear transformations of the observed proportion of agreement given the marginal…
Clone tag detection in distributed RFID systems.
Kamaludin, Hazalila; Mahdin, Hairulnizam; Abawajy, Jemal H
2018-01-01
Although Radio Frequency Identification (RFID) is poised to displace barcodes, security vulnerabilities pose serious challenges for global adoption of the RFID technology. Specifically, RFID tags are prone to basic cloning and counterfeiting security attacks. A successful cloning of the RFID tags in many commercial applications can lead to many serious problems such as financial losses, brand damage, and threats to public safety and health. With many industries, such as pharmaceuticals, deploying RFID technology with a variety of products, it is important to tackle the RFID tag cloning problem and improve the resistance of RFID systems. To this end, we propose an approach for detecting cloned RFID tags in RFID systems with high detection accuracy and minimal overhead, thus overcoming practical challenges in existing approaches. The proposed approach is based on the consistency of dual hash collisions and a modified count-min sketch vector. We evaluated the proposed approach through extensive experiments and compared it with existing baseline approaches in terms of execution time and detection accuracy under varying RFID tag cloning ratios. The results of the experiments show that the proposed approach outperforms the baseline approaches in cloned RFID tag detection accuracy.
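A count-min sketch, the core data structure mentioned above, takes only a few lines to implement; the sketch below uses the standard update and query rules with arbitrary dimensions, not the authors' modified variant.

```python
# Hedged sketch: a plain count-min sketch for counting RFID tag observations.
# The paper uses a modified variant; this shows only the standard structure.
import hashlib

class CountMinSketch:
    def __init__(self, width: int = 64, depth: int = 4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, row: int, item: str) -> int:
        digest = hashlib.sha256(f"{row}:{item}".encode()).hexdigest()
        return int(digest, 16) % self.width

    def add(self, item: str, count: int = 1) -> None:
        for row in range(self.depth):
            self.table[row][self._index(row, item)] += count

    def estimate(self, item: str) -> int:
        return min(self.table[row][self._index(row, item)] for row in range(self.depth))

cms = CountMinSketch()
for _ in range(3):
    cms.add("EPC-0001")          # a tag ID observed three times
print(cms.estimate("EPC-0001"), cms.estimate("EPC-9999"))   # ~3 and ~0
```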
Breast Histopathological Image Retrieval Based on Latent Dirichlet Allocation.
Ma, Yibing; Jiang, Zhiguo; Zhang, Haopeng; Xie, Fengying; Zheng, Yushan; Shi, Huaqiang; Zhao, Yu
2017-07-01
In the field of pathology, whole slide image (WSI) has become the major carrier of visual and diagnostic information. Content-based image retrieval among WSIs can aid the diagnosis of an unknown pathological image by finding its similar regions in WSIs with diagnostic information. However, the huge size and complex content of WSI pose several challenges for retrieval. In this paper, we propose an unsupervised, accurate, and fast retrieval method for a breast histopathological image. Specifically, the method presents a local statistical feature of nuclei for morphology and distribution of nuclei, and employs the Gabor feature to describe the texture information. The latent Dirichlet allocation model is utilized for high-level semantic mining. Locality-sensitive hashing is used to speed up the search. Experiments on a WSI database with more than 8000 images from 15 types of breast histopathology demonstrate that our method achieves about 0.9 retrieval precision as well as promising efficiency. Based on the proposed framework, we are developing a search engine for an online digital slide browsing and retrieval platform, which can be applied in computer-aided diagnosis, pathology education, and WSI archiving and management.
21 CFR 1312.28 - Distribution of special controlled substances invoice.
Code of Federal Regulations, 2010 CFR
2010-04-01
... substances export invoice, DEA (or BND) Form 236, will be distributed as follows: (a) Copy 1 shall accompany... Table of DEA Mailing Addresses in § 1321.01 of this chapter for the current mailing address. (e) Copy 5...
21 CFR 1312.28 - Distribution of special controlled substances invoice.
Code of Federal Regulations, 2011 CFR
2011-04-01
... substances export invoice, DEA (or BND) Form 236, will be distributed as follows: (a) Copy 1 shall accompany... Table of DEA Mailing Addresses in § 1321.01 of this chapter for the current mailing address. (e) Copy 5...
MIPS: a database for protein sequences, homology data and yeast genome information.
Mewes, H W; Albermann, K; Heumann, K; Liebl, S; Pfeiffer, F
1997-01-01
The MIPS group (Martinsried Institute for Protein Sequences) at the Max-Planck-Institute for Biochemistry, Martinsried near Munich, Germany, collects, processes and distributes protein sequence data within the framework of the tripartite association of the PIR-International Protein Sequence Database. MIPS contributes nearly 50% of the data input to the PIR-International Protein Sequence Database. The database is distributed on CD-ROM together with PATCHX, an exhaustive supplement of unique, unverified protein sequences from external sources compiled by MIPS. Through its WWW server (http://www.mips.biochem.mpg.de/) MIPS permits internet access to sequence databases, homology data and to yeast genome information. (i) Sequence similarity results from the FASTA program are stored in the FASTA database for all proteins from PIR-International and PATCHX. The database is dynamically maintained and permits instant access to FASTA results. (ii) Starting with FASTA database queries, proteins have been classified into families and superfamilies (PROT-FAM). (iii) The HPT (hashed position tree) data structure developed at MIPS is a new approach for rapid sequence and pattern searching. (iv) MIPS provides access to the sequence and annotation of the complete yeast genome, the functional classification of yeast genes (FunCat) and its graphical display, the 'Genome Browser'. A CD-ROM based on the JAVA programming language providing dynamic interactive access to the yeast genome and the related protein sequences has been compiled and is available on request. PMID:9016498
2011-08-31
Guess Again (and Again and Again): Measuring Password Strength by Simulating Password-Cracking Algorithms ...large numbers of hashed passwords (Booz Allen Hamilton, HBGary, Gawker, Sony PlayStation, etc.), coupled with the availability of botnets that offer...when evaluating the strength of different password-composition policies. 4. We investigate the effectiveness of entropy as a measure of password
A System for the Preference Evaluation of Cycle Menus
1974-10-01
Potatoes, Okra, Jellied Fruit Salad, Peach Shortcake 27. Swedish Meatballs , Fried Rice, Eggplant, Lettuce Salad, Pumpkin Pie 28. Corned Beef Hash... Meatballs , Rice, Beets, Jellied Fruit Salad, Marble Cake 81. Fish Sandwich, Fritters, Okra, Lettuce Salad, Chocolate Chip Cookies 82. Baked Macaroil...Radishes, Lettuce Salad, Bread Pudding 88 Cheesf j ", -tashea Potatoes, Cauliflower, M-.*©!.! Pm’r Sol ad, Apricot Pie 89. Meatbal ! Submarine Sandwich
Results of SEI Line-Funded Exploratory New Starts Projects
2012-08-01
that are suspected of being plagiarized ; in order to communicate information, the document must un- ambiguously deliver semantic content to the reader...like for fuzzy hashing techniques to give us the same sense of derivation that plagiarism detection or generative text detection techniques do...more detail than the two academic papers about the concepts used in HDFS and provides an architectural diagram, as shown in Figure 9. Figure 9
NASA Astrophysics Data System (ADS)
Al-Mansoori, Saeed; Kunhu, Alavi
2013-10-01
This paper proposes a blind multi-watermarking scheme based on designing two back-to-back encoders. The first encoder is implemented to embed a robust watermark into remote sensing imagery by applying a Discrete Cosine Transform (DCT) approach. Such watermark is used in many applications to protect the copyright of the image. However, the second encoder embeds a fragile watermark using `SHA-1' hash function. The purpose behind embedding a fragile watermark is to prove the authenticity of the image (i.e. tamper-proof). Thus, the proposed technique was developed as a result of new challenges with piracy of remote sensing imagery ownership. This led researchers to look for different means to secure the ownership of satellite imagery and prevent the illegal use of these resources. Therefore, Emirates Institution for Advanced Science and Technology (EIAST) proposed utilizing existing data security concept by embedding a digital signature, "watermark", into DubaiSat-1 satellite imagery. In this study, DubaiSat-1 images with 2.5 meter resolution are used as a cover and a colored EIAST logo is used as a watermark. In order to evaluate the robustness of the proposed technique, a couple of attacks are applied such as JPEG compression, rotation and synchronization attacks. Furthermore, tampering attacks are applied to prove image authenticity.
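The fragile part of such a scheme comes down to binding a hash of the image content into the image (or its metadata) and re-checking it later; the bare hashing and verification step is sketched below with `hashlib`, leaving out the actual embedding into pixel data that the paper performs.

```python
# Hedged sketch: compute and re-check an SHA-1 digest of image content for
# tamper detection.  Real fragile watermarking embeds the digest into the
# image itself; here it is simply stored alongside the data.
import hashlib

def content_digest(pixels: bytes) -> str:
    return hashlib.sha1(pixels).hexdigest()

original = bytes(range(256)) * 100           # stand-in for raw satellite image bytes
stored_digest = content_digest(original)     # embedded/kept at watermarking time

tampered = bytearray(original)
tampered[1234] ^= 0xFF                       # flip one byte of the image

print(content_digest(original) == stored_digest)          # True: authentic
print(content_digest(bytes(tampered)) == stored_digest)   # False: tampering detected
```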
A comparative study of Message Digest 5(MD5) and SHA256 algorithm
NASA Astrophysics Data System (ADS)
Rachmawati, D.; Tarigan, J. T.; Ginting, A. B. C.
2018-03-01
A document is a collection of written or printed data containing information. As technology advances rapidly, the integrity of a document must be maintained. Because an open document can, by its nature, be read and modified by many parties, the integrity of the information it contains is not preserved. To maintain the integrity of the data, a mechanism called a digital signature is needed. A digital signature is a specific code generated by a signature-producing function. One of the algorithms used to create a digital signature is a hash function. There are many hash functions; two of them are Message Digest 5 (MD5) and SHA256. Both algorithms have their own advantages and disadvantages. The purpose of this research is to determine which algorithm is better. The parameters used to compare the two algorithms are running time and complexity. The results show that the complexity of the MD5 and SHA256 algorithms is the same, i.e., Θ(N), but regarding speed, MD5 is better than SHA256.
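A comparison of this kind can be reproduced in a few lines with `hashlib` and `time`; absolute numbers depend on the machine and input size, so the snippet only illustrates the measurement, not the paper's figures.

```python
# Hedged sketch: time MD5 vs SHA-256 over the same document bytes.
# Both algorithms make a single pass over the input, i.e. Theta(N) work;
# measured speed differs by implementation and hardware.
import hashlib, time

document = b"x" * (50 * 1024 * 1024)   # 50 MB of dummy document content

for name in ("md5", "sha256"):
    start = time.perf_counter()
    digest = hashlib.new(name, document).hexdigest()
    elapsed = time.perf_counter() - start
    print(f"{name}: {elapsed:.3f} s, digest={digest[:16]}...")
```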
Wang, Wei; Wang, Chunqiu; Zhao, Min
2014-03-01
To ease the burden on hospitalization capacity, an emerging swallowable-capsule technology has evolved to serve as a remote gastrointestinal (GI) disease examination technique with the aid of the wireless body sensor network (WBSN). Secure multimedia transmission in such a swallowable-capsule-based WBSN faces critical challenges including energy efficiency and content quality guarantee. In this paper, we propose a joint resource allocation and stream authentication scheme to maintain the best possible video quality while ensuring security and energy efficiency in GI-WBSNs. The contribution of this research is twofold. First, we establish a unique signature-hash (S-H) diversity approach in the authentication domain to optimize video authentication robustness and the authentication bit rate overhead over a wireless channel. Based on the full exploration of S-H authentication diversity, we propose a new two-tier signature-hash (TTSH) stream authentication scheme to improve the video quality by reducing authentication dependence overhead while protecting its integrity. Second, we propose to combine this authentication scheme with a unique S-H oriented unequal resource allocation (URA) scheme to improve the energy-distortion-authentication performance of wireless video delivery in GI-WBSN. Our analysis and simulation results demonstrate that the proposed TTSH with URA scheme achieves considerable gain in both authenticated video quality and energy efficiency.
A Robust and Effective Smart-Card-Based Remote User Authentication Mechanism Using Hash Function
Odelu, Vanga; Goswami, Adrijit
2014-01-01
In a remote user authentication scheme, a remote server verifies whether a login user is genuine and trustworthy, and also for mutual authentication purpose a login user validates whether the remote server is genuine and trustworthy. Several remote user authentication schemes using the password, the biometrics, and the smart card have been proposed in the literature. However, most schemes proposed in the literature are either computationally expensive or insecure against several known attacks. In this paper, we aim to propose a new robust and effective password-based remote user authentication scheme using smart card. Our scheme is efficient, because our scheme uses only efficient one-way hash function and bitwise XOR operations. Through the rigorous informal and formal security analysis, we show that our scheme is secure against possible known attacks. We perform the simulation for the formal security analysis using the widely accepted AVISPA (Automated Validation Internet Security Protocols and Applications) tool to ensure that our scheme is secure against passive and active attacks. Furthermore, our scheme supports efficiently the password change phase always locally without contacting the remote server and correctly. In addition, our scheme performs significantly better than other existing schemes in terms of communication, computational overheads, security, and features provided by our scheme. PMID:24892078
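The computational primitives these smart-card schemes rely on, a one-way hash and bitwise XOR, combine roughly as in the toy login-message construction below; the identifiers and message layout are invented for illustration and do not reproduce the proposed scheme.

```python
# Hedged sketch: building and checking a login message from only hash and XOR,
# the two primitives the scheme is built on (the layout here is invented).
import hashlib, secrets, time

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

server_secret = secrets.token_bytes(32)
identity, password = b"alice", b"correct horse"

# Registration: the smart card stores a value masking the server-side secret.
card_value = xor(h(identity + server_secret), h(password))

# Login: the card unmasks it with the password and binds a timestamp.
timestamp = str(int(time.time())).encode()
unmasked = xor(card_value, h(password))             # = h(identity + server_secret)
login_proof = h(unmasked + timestamp)

# Server recomputes the same proof and compares.
expected = h(h(identity + server_secret) + timestamp)
print(login_proof == expected)                      # True for the correct password
```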
A robust and effective smart-card-based remote user authentication mechanism using hash function.
Das, Ashok Kumar; Odelu, Vanga; Goswami, Adrijit
2014-01-01
In a remote user authentication scheme, a remote server verifies whether a login user is genuine and trustworthy, and also for mutual authentication purpose a login user validates whether the remote server is genuine and trustworthy. Several remote user authentication schemes using the password, the biometrics, and the smart card have been proposed in the literature. However, most schemes proposed in the literature are either computationally expensive or insecure against several known attacks. In this paper, we aim to propose a new robust and effective password-based remote user authentication scheme using smart card. Our scheme is efficient, because our scheme uses only efficient one-way hash function and bitwise XOR operations. Through the rigorous informal and formal security analysis, we show that our scheme is secure against possible known attacks. We perform the simulation for the formal security analysis using the widely accepted AVISPA (Automated Validation Internet Security Protocols and Applications) tool to ensure that our scheme is secure against passive and active attacks. Furthermore, our scheme supports efficiently the password change phase always locally without contacting the remote server and correctly. In addition, our scheme performs significantly better than other existing schemes in terms of communication, computational overheads, security, and features provided by our scheme.
Das, Ashok Kumar; Goswami, Adrijit
2014-06-01
Recently, Awasthi and Srivastava proposed a novel biometric remote user authentication scheme for the telecare medicine information system (TMIS) with nonce. Their scheme is very efficient as it is based on an efficient chaotic one-way hash function and bitwise XOR operations. In this paper, we first analyze Awasthi-Srivastava's scheme and then show that their scheme has several drawbacks: (1) incorrect password change phase, (2) fails to preserve user anonymity property, (3) fails to establish a secret session key between a legal user and the server, (4) fails to protect against strong replay attack, and (5) lacks rigorous formal security analysis. We then propose a novel and secure biometric-based remote user authentication scheme in order to withstand the security flaw found in Awasthi-Srivastava's scheme and enhance the features required for an ideal user authentication scheme. Through the rigorous informal and formal security analysis, we show that our scheme is secure against possible known attacks. In addition, we simulate our scheme for the formal security verification using the widely-accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool and show that our scheme is secure against passive and active attacks, including the replay and man-in-the-middle attacks. Our scheme is also efficient as compared to Awasthi-Srivastava's scheme.
Personnel-General: Army Substance Abuse Program Civilian Services
2001-10-15
destroyed. Additional reproduction and distribution of completed records is prohibited. c. SECTION I. IDENTIFICATION. (1) Block I. Date of Report. Enter...AMPHETAMINE B BARBITUATES C COCAINE H HALLUCINOGENS (LSD) M METHAQUALONE, SEDATIVE, HYPNOTIC , OR ANXIOLYTIC O OPIATES P PHENCYCLIDINE (PCP) T CANNABIS...Table 5–6 Codes for TABLE F (T-DIAG-CODE) Code Rejection Reason 30390 ALCOHOL DEPENDENCE 30400 OPIOID DEPENDENCE 30410 SEDATIVE, HYPNOTIC , OR ANXIOLYTIC
ERIC Educational Resources Information Center
National Center for Health Statistics (DHEW/PHS), Hyattsville, MD.
The report is the second describing the findings of a national survey of manpower in the eye care occupations in 1968 and 1969. It contains primarily 27 statistical tables dealing with specific features of the optometric practice: the number and percentage distribution of optometrists by the number of patient visits per week (tables 1-3); the…
VizieR Online Data Catalog: Be star rotational velocities distribution (Zorec+, 2016)
NASA Astrophysics Data System (ADS)
Zorec, J.; Fremat, Y.; Domiciano de Souza, A.; Royer, F.; Cidale, L.; Hubert, A.-M.; Semaan, T.; Martayan, C.; Cochetti, Y. R.; Arias, M. L.; Aidelman, Y.; Stee, P.
2016-06-01
Table 1 contains the apparent fundamental parameters of the 233 Galactic Be stars. For each Be star, the HD number, effective temperature, effective surface gravity and bolometric luminosity are given. They correspond to the parameters of a plane-parallel model stellar atmosphere that fits the energy distribution of the stellar apparent hemisphere deformed by rotation. Table 1 also gives the color excess E(B-V) and the vsini rotation parameter determined with model atmospheres of rigidly rotating stars. For each parameter the 1 sigma uncertainty is given. The notes indicate the authors who reported some of the data or the methods used to obtain them. Table 4 contains the fundamental parameters of the parent non-rotating counterparts of the 233 Be stars: effective temperature, effective surface gravity, bolometric luminosity in solar units, stellar mass in solar units, fractional main-sequence stellar age, pnrc-apparent rotational velocity, critical velocity, ratio of centrifugal force to gravity at the equator, and inclination angle of the rotational axis. (2 data files).
Evidence for the distribution of angular velocity inside the sun and stars
NASA Technical Reports Server (NTRS)
1972-01-01
A round table discussion of problems of solar and stellar spindown and theory is presented. Observational evidence of the angular momentum of the solar wind is included, emphasizing the distribution of angular velocity inside the sun and stars.
Lunga, Dalton D.; Yang, Hsiuhan Lexie; Reith, Andrew E.; ...
2018-02-06
Satellite imagery often exhibits large spatial extent areas that encompass object classes with considerable variability. This often limits large-scale model generalization with machine learning algorithms. Notably, acquisition conditions, including dates, sensor position, lighting condition, and sensor types, often translate into class distribution shifts introducing complex nonlinear factors and hamper the potential impact of machine learning classifiers. Here, this article investigates the challenge of exploiting satellite images using convolutional neural networks (CNN) for settlement classification where the class distribution shifts are significant. We present a large-scale human settlement mapping workflow based-off multiple modules to adapt a pretrained CNN to address the negative impact of distribution shift on classification performance. To extend a locally trained classifier onto large spatial extents areas we introduce several submodules: First, a human-in-the-loop element for relabeling of misclassified target domain samples to generate representative examples for model adaptation; second, an efficient hashing module to minimize redundancy and noisy samples from the mass-selected examples; and third, a novel relevance ranking module to minimize the dominance of source example on the target domain. The workflow presents a novel and practical approach to achieve large-scale domain adaptation with binary classifiers that are based-off CNN features. Experimental evaluations are conducted on areas of interest that encompass various image characteristics, including multisensors, multitemporal, and multiangular conditions. Domain adaptation is assessed on source–target pairs through the transfer loss and transfer ratio metrics to illustrate the utility of the workflow.
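The hashing submodule for pruning redundant samples can be approximated by bucketing quantized feature vectors and keeping one representative per bucket; the snippet below is a generic stand-in for that idea (using exact duplicates and a coarse quantization), not the authors' implementation.

```python
# Hedged sketch: drop duplicate CNN feature vectors by hashing a coarse
# quantization of each vector and keeping one representative per bucket.
import numpy as np

rng = np.random.default_rng(0)
features = rng.standard_normal((5000, 256))     # toy CNN features for selected samples
features[1000:2000] = features[:1000]           # inject exact duplicates to be pruned

seen, keep = set(), []
for i, f in enumerate(features):
    key = hash(np.round(f, 1).tobytes())        # coarse quantization -> hash bucket
    if key not in seen:
        seen.add(key)
        keep.append(i)

print(len(keep), "of", len(features), "samples retained")   # duplicates removed
```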
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lunga, Dalton D.; Yang, Hsiuhan Lexie; Reith, Andrew E.
Satellite imagery often exhibits large spatial extent areas that encompass object classes with considerable variability. This often limits large-scale model generalization with machine learning algorithms. Notably, acquisition conditions, including dates, sensor position, lighting condition, and sensor types, often translate into class distribution shifts introducing complex nonlinear factors and hamper the potential impact of machine learning classifiers. Here, this article investigates the challenge of exploiting satellite images using convolutional neural networks (CNN) for settlement classification where the class distribution shifts are significant. We present a large-scale human settlement mapping workflow based-off multiple modules to adapt a pretrained CNN to address the negative impact of distribution shift on classification performance. To extend a locally trained classifier onto large spatial extents areas we introduce several submodules: First, a human-in-the-loop element for relabeling of misclassified target domain samples to generate representative examples for model adaptation; second, an efficient hashing module to minimize redundancy and noisy samples from the mass-selected examples; and third, a novel relevance ranking module to minimize the dominance of source example on the target domain. The workflow presents a novel and practical approach to achieve large-scale domain adaptation with binary classifiers that are based-off CNN features. Experimental evaluations are conducted on areas of interest that encompass various image characteristics, including multisensors, multitemporal, and multiangular conditions. Domain adaptation is assessed on source–target pairs through the transfer loss and transfer ratio metrics to illustrate the utility of the workflow.
Spectrophotometer-Based Color Measurements
2017-10-24
public release; distribution is unlimited. AD U.S. ARMY ARMAMENT RESEARCH , DEVELOPMENT AND ENGINEERING CENTER Weapons and Software Engineering Center...for public release; distribution is unlimited. UNCLASSIFIED i CONTENTS Page Summary 1 Introduction 1 Methods , Assumptions, and Procedures 1...Values for Federal Color Standards 15 Distribution List 25 TABLES 1 Instrument precision 3 2 Method precision and operator variability 4 3
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-27
... cancellation order granting the requested cancellations. Any distribution, sale, or use of the products subject... December 31, 2012. Any distribution, sale, or use of existing stocks of the products identified in Table 1... sale, distribution and use of existing stocks of manufacturing-use products imported into the United...
ERIC Educational Resources Information Center
Blinn Coll., Brenham, TX.
Blinn College final course grade distributions are summarized for spring 1990 to 1994 in this four-part report. Section I presents tables of final grade distributions by campus and course in accounting; agriculture; anthropology; biology; business; chemistry; child development; communications; computer science; criminal justice; drama; emergency…
NASA Astrophysics Data System (ADS)
Costa, A.; Pioli, L.; Bonadonna, C.
2017-05-01
The authors found a mistake in the formulation of the bi-Weibull distribution reported in equation (A.2) of Appendix A. The error affects equation (4) (which is the same as equation (A.2)) and Table 4 in the original manuscript.
ERIC Educational Resources Information Center
Tenopir, Carol; Barry, Jeff
1997-01-01
Profiles 25 database distribution and production companies, all of which responded to a 1997 survey with information on 54 separate online, Web-based, or CD-ROM systems. Highlights increased competition, distribution formats, Web versions versus local area networks, full-text delivery, and pricing policies. Tables present a sampling of customers…
7 CFR 1753.49 - Closeout documents.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Distribution Borrower Contractor 724 Final Inventory—Certificate of Completion 2 1 1 724a Final Inventory... representative of the borrower. (2) Final inventory documents. (i) The borrower shall obtain certifications from... prepare and distribute the final inventory documents in accordance with the tables contained in this...
Code of Federal Regulations, 2014 CFR
2014-04-01
... interest rate and the Mortality Table provided in Rev. Rul. 2001-62 (2001-2 C.B. 632), illustrate the...-quarters age 79 rate. 6 Five percent discounted 18 months (1.05^(−1.5)). 7 Blended age 79/age 80 mortality... mortality tables applicable to such date); and (4) The end point of the period certain, if any, for any...
Code of Federal Regulations, 2013 CFR
2013-04-01
... interest rate and the Mortality Table provided in Rev. Rul. 2001-62 (2001-2 C.B. 632), illustrate the...-quarters age 79 rate. 6 Five percent discounted 18 months (1.05^(−1.5)). 7 Blended age 79/age 80 mortality... mortality tables applicable to such date); and (4) The end point of the period certain, if any, for any...
Code of Federal Regulations, 2012 CFR
2012-04-01
... interest rate and the Mortality Table provided in Rev. Rul. 2001-62 (2001-2 C.B. 632), illustrate the...-quarters age 79 rate. 6 Five percent discounted 18 months (1.05^(−1.5)). 7 Blended age 79/age 80 mortality... mortality tables applicable to such date); and (4) The end point of the period certain, if any, for any...
Estimating High Tech Army Recruiting Markets
1992-09-01
SCORES: 1963-1988 11 TABLE 4 AVERAGE AMERICAN COLLEGE TESTING ( ACT ) SCORES: 1970-1988 12 TABLE 5 DISTRIBUTION OF THE NLSY SAMPLE BY GENDER AND RACE 21...training, competent leaders, sufficient resources and funds to equip the force 1 . In order to maximize the quality of training, recruiting success is...training policies to current conditions in the ’educational training market 1 . Recruiting success is highly dependent on the nature of the civilian
NASA Astrophysics Data System (ADS)
Hancock, G.; Mattell, N.; Christianson, E.; Wacksman, J.
2004-12-01
Channel incision is a widely observed response to increased flow in urbanized watersheds, but the effects of channel lowering on riparian water tables are not well documented. In a rapidly incising suburban stream in the Virginia Coastal Plain, we hypothesize that incision has lowered floodplain water tables and decreased the overbank flow frequency, and suggest these changes impact vegetation distribution in a diverse, protected riparian habitat. The monitored stream is a tributary to the James River draining 1.3 km², of which 15% is impervious cover. Incision has occurred largely through upstream migration of a 1-m-high knickpoint at a rate of 1-2 m/yr, primarily during high flow events. We installed 33 wells in six floodplain transects to assess water table elevations beneath the floodplain adjacent to the incising stream. To document the impacts of incision, two transects are located 30 and 50 m upstream of the knickpoint in unincised floodplain, and the remainder are 5, 30, 70, and 100 m downstream of the knickpoint in incised floodplain. In one transect above and two below, pressure transducers attached to dataloggers provide a high-resolution record of water table response to storm events. Significant differences have been observed in the water table above and below the knickpoint. Above the knickpoint, the water table is relatively flat and is 0.2-0.4 m below the floodplain surface. Water table response to precipitation events is nearly immediate, with the water table rising to the floodplain surface in significant rainfall events. In the transect immediately downstream of the knickpoint, the water table possesses a steep gradient, rising from ~1 m below the floodplain at the stream to 0.3 m below the surface within 20 m. In the most downstream transects, the water table is relatively flat but lies 1 m below the floodplain surface, equivalent to the depth of incision generated by knickpoint passage. Upstream of the knickpoint, overbank flooding occurs frequently, while below the knickpoint the majority of storm flow is contained within the incised channel and occupation of the floodplain is rare. Plant diversity surveys reveal differences in the total density of herbaceous growth and species distribution between the floodplain above and below the knickpoint. Results from >100 plots show that there is more leaf litter, less exposed ground, and a decrease in floodplain species cover in the incised portion of the floodplain. The changes in flood frequency and water table elevation appear to have allowed one invasive species, Japanese stilt grass (Microstegium vimineum), to become dominant in the floodplain understory, displacing native wetland species.
USAF Food Habits Study. Part 4. Selections, Quantities Selected, and Perceived Portion Sizes
1980-07-01
selected fruit, farina, french toast, toast, breads, and corn bread; whites more frequently than blacks selected some egg dishes and some potato...dishes - hash brown potatoes, french fries, and potato chips. Soups were more frequently selected by blacks. Fried chicken, Mexican foods, pork slices...were mixed for Mexican foods. Further, females more often than males selected mashed potatoes (but not rice or macaroni with cheese), vegetables, and
Field Evaluation of the B Ration in a Hot Weather Environment
1988-07-01
from the B Ration at breakfast (see Figure 7). Reducing the frequency of serving eggs at breakfast would help to lower overall intake of dietary...king, Beef w/BBQ sauce, Beef stew, Frankfurters, Meatballs w/BBQ sauce, Ham/chicken loaf, Beef w/gravy, Beef patties, Pork patties...sauce and peas with mushrooms. The B Ration foods disliked the most were the grilled breakfast meat, scrambled eggs, hash brown potatoes, cottage
SHAMROCK: A Synthesizable High Assurance Cryptography and Key Management Coprocessor
2016-11-01
and excluding devices from a communicating group as they become trusted or untrusted. An example of using rekeying to dynamically adjust group...algorithms, such as the Elliptic Curve Digital Signature Algorithm (ECDSA), work by computing a cryptographic hash of a message using, for example, the...material is based upon work supported by the Assistant Secretary of Defense for Research and Engineering under Air Force Contract No. FA8721-05-C
Non-Black-Box Simulation from One-Way Functions and Applications to Resettable Security
2012-11-05
from 2001, Barak (FOCS’01) introduced a novel non-black-box simulation technique. This technique enabled the construction of new cryptographic...primitives, such as resettably-sound zero-knowledge arguments, that cannot be proven secure using just black-box simulation techniques. The work of Barak...Barak requires the existence of collision-resistant hash functions, and a very recent result by Bitansky and Paneth (FOCS’12) instead requires the
NASA Astrophysics Data System (ADS)
Zhang, Yan; Liu, Hong; Chen, Bin; Zheng, Hongmei; Li, Yating
2014-06-01
Discovering ways in which to increase the sustainability of the metabolic processes involved in urbanization has become an urgent task for urban design and management in China. As cities are analogous to living organisms, the disorders of their metabolic processes can be regarded as the cause of "urban disease". Therefore, identification of these causes through metabolic process analysis and ecological element distribution through the urban ecosystem's compartments will be helpful. By using Beijing as an example, we have compiled monetary input-output tables from 1997, 2000, 2002, 2005, and 2007 and calculated the intensities of the embodied ecological elements to compile the corresponding implied physical input-output tables. We then divided Beijing's economy into 32 compartments and analyzed the direct and indirect ecological intensities embodied in the flows of ecological elements through urban metabolic processes. Based on the combination of input-output tables and ecological network analysis, the description of multiple ecological elements transferred among Beijing's industrial compartments and their distribution has been refined. This hybrid approach can provide a more scientific basis for management of urban resource flows. In addition, the data obtained from distribution characteristics of ecological elements may provide a basic data platform for exploring the metabolic mechanism of Beijing.
Education, Occupation, Hierarchy and Earnings.
ERIC Educational Resources Information Center
Tachibanaki, Toshiaki
1988-01-01
Attempts to estimate a recursive model of earnings distribution with education, occupation, and hierarchy, using individual data for Japanese males. Proves that hierarchical position is very significant in determining earnings level. Compares the influence of education and earnings distribution in Japan and France. Includes 3 tables and 20…
The Missing Middle in Validation Research
ERIC Educational Resources Information Center
Taylor, Erwin K.; Griess, Thomas
1976-01-01
In most selection validation research, only the upper and lower tails of the criterion distribution are used, often yielding misleading or incorrect results. Provides formulas and tables which enable the researcher to account more accurately for the distribution of criterion within the middle range of population. (Author/RW)
Kumar, Deepak; Englesbe, Alexander; Parman, Matthew; ...
2015-11-05
Tabletop reflex discharges in a Penning geometry have many applications including ion sources and eXtreme Ultra-Violet (XUV) sources. The presence of primary electrons accelerated across the cathode sheaths is responsible for the distribution of ion charge states and for the unusually high XUV brightness of these plasmas. Absolutely calibrated, space-resolved XUV spectra from a tabletop reflex discharge operating with Al cathodes and Ne gas are presented. The spectra are analyzed with a new and complete model for ion charge distribution in similar reflex discharges. The plasma in the discharge was found to have a density of ~10^18 m^-3 with a significant fraction (>0.01) of fast primary electrons. As a result, the implications of the new model on the ion states achievable in a tabletop reflex plasma discharge are also discussed.
Large Survey Database: A Distributed Framework for Storage and Analysis of Large Datasets
NASA Astrophysics Data System (ADS)
Juric, Mario
2011-01-01
The Large Survey Database (LSD) is a Python framework and DBMS for distributed storage, cross-matching and querying of large survey catalogs (>10^9 rows, >1 TB). The primary driver behind its development is the analysis of Pan-STARRS PS1 data. It is specifically optimized for fast queries and parallel sweeps of positionally and temporally indexed datasets. It transparently scales to more than 10^2 nodes, and can be made to function in "shared nothing" architectures. An LSD database consists of a set of vertically and horizontally partitioned tables, physically stored as compressed HDF5 files. Vertically, we partition the tables into groups of related columns ('column groups'), storing together logically related data (e.g., astrometry, photometry). Horizontally, the tables are partitioned into partially overlapping "cells" by position in space (lon, lat) and time (t). This organization allows for fast lookups based on spatial and temporal coordinates, as well as data and task distribution. The design was inspired by the success of Google BigTable (Chang et al., 2006). Our programming model is a pipelined extension of MapReduce (Dean and Ghemawat, 2004). An SQL-like query language is used to access data. For complex tasks, map-reduce "kernels" that operate on query results on a per-cell basis can be written, with the framework taking care of scheduling and execution. The combination leverages users' familiarity with SQL, while offering a fully distributed computing environment. LSD adds little overhead compared to direct Python file I/O. In tests, we swept through 1.1 billion rows of Pan-STARRS+SDSS data (220 GB) in less than 15 minutes on a dual-CPU machine. In a cluster environment, we achieved bandwidths of 17 Gbit/s (I/O limited). Based on current experience, we believe LSD should scale to be useful for analysis and storage of LSST-scale datasets. It can be downloaded from http://mwscience.net/lsd.
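To make the spatial-temporal cell partitioning concrete, here is a minimal Python sketch; the cell sizes, the key layout, and the cell_key helper are illustrative assumptions, not LSD's actual API.

```python
import math
from collections import defaultdict

def cell_key(lon_deg, lat_deg, t_mjd, cell_deg=1.0, cell_days=30.0):
    """Map a (lon, lat, t) row to a coarse spatial-temporal cell id.

    Hypothetical helper for illustration; LSD's real scheme (overlap
    margins, neighbor handling) is more involved.
    """
    i = int(math.floor(lon_deg / cell_deg)) % int(360 / cell_deg)
    j = int(math.floor((lat_deg + 90.0) / cell_deg))
    k = int(math.floor(t_mjd / cell_days))
    return (i, j, k)

# Rows sharing a key land in the same partition ("cell"), so positional
# queries and per-cell map kernels touch only the relevant files.
rows = [(310.2, -12.7, 55197.3), (310.4, -12.9, 55197.9), (10.0, 45.0, 55200.0)]
cells = defaultdict(list)
for row in rows:
    cells[cell_key(*row)].append(row)
print({key: len(group) for key, group in cells.items()})
```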
Topics in quantum cryptography, quantum error correction, and channel simulation
NASA Astrophysics Data System (ADS)
Luo, Zhicheng
In this thesis, we mainly investigate four different topics: efficiently implementable codes for quantum key expansion [51], quantum error-correcting codes based on privacy amplification [48], private classical capacity of quantum channels [44], and classical channel simulation with quantum side information [49, 50]. For the first topic, we propose an efficiently implementable quantum key expansion protocol, capable of increasing the size of a pre-shared secret key by a constant factor. Previously, the Shor-Preskill proof [64] of the security of the Bennett-Brassard 1984 (BB84) [6] quantum key distribution protocol relied on the theoretical existence of good classical error-correcting codes with the "dual-containing" property. But the explicit and efficiently decodable construction of such codes is unknown. We show that we can lift the dual-containing constraint by employing the non-dual-containing codes with excellent performance and efficient decoding algorithms. For the second topic, we propose a construction of Calderbank-Shor-Steane (CSS) [19, 68] quantum error-correcting codes, which are originally based on pairs of mutually dual-containing classical codes, by combining a classical code with a two-universal hash function. We show, using the results of Renner and Koenig [57], that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block-length. For the third topic, we prove a regularized formula for the secret key assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak on entanglement assisted quantum communication capacity. This formula provides a new family protocol, the private father protocol, under the resource inequality framework that includes the private classical communication without the assisted secret keys as a child protocol. For the fourth topic, we study and solve the problem of classical channel simulation with quantum side information at the receiver. Our main theorem has two important corollaries: rate-distortion theory with quantum side information and common randomness distillation. Simple proofs of achievability of classical multi-terminal source coding problems can be made via a unified approach using the channel simulation theorem as building blocks. The fully quantum generalization of the problem is also conjectured with outer and inner bounds on the achievable rate pairs.
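The two-universal hash functions mentioned above can be illustrated with the standard affine family over a prime field; this generic textbook construction is shown only as a sketch and is not necessarily the family used in the thesis.

```python
import random

P = (1 << 61) - 1  # a Mersenne prime; inputs are assumed to be smaller than P

def draw_two_universal(m: int):
    """Draw h(x) = ((a*x + b) mod P) mod m from the affine family.

    For any fixed x != y (both < P), Pr[h(x) == h(y)] <= 1/m over the
    random choice of (a, b), which is the two-universal property.
    """
    a = random.randrange(1, P)
    b = random.randrange(0, P)
    return lambda x: ((a * x + b) % P) % m

h = draw_two_universal(m=256)
print(h(12345), h(67890))
```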
Infrared Analysis of Gasoline/Alcohol Blends.
1981-02-01
in storage, routine handling and distribution. As a result, other oxygenates such as methanol, iso-propanol, t-butanol, methyl-t-butyl ether, and...Table 1 lists the alcohol analyte band numbers (analytical frequency, in cm^-1, per component): Gasoline 967, Methanol 1030, Ethanol 882, iso-propanol 952, t-...of varying concentrations of each alcohol in a gasoline were obtained, with Figure 4 showing a low and high standard for methanol. The net peak
Use of Probabilistic Topic Models for Search
2009-09-01
Franchise. The metaphor is as follows: There is a number of restaurants; each has an infinite number of tables. All restaurants serve dishes from a...via moment matching. This gives an approximate distribution for the total number of tables in the Chinese Restaurant Franchise as the sum of oc...described by a stick-breaking construction. Second, it can be defined as a distribution over partitions of a measurable space by the Chinese Restaurant
Shallow Water UXO Technology Demonstration Site Scoring Record No. 7
2007-05-01
categories: ferrous, nonferrous, and mixed metals. The ferrous and nonferrous items are further divided into the three weight zones as presented in...nonferrous component and could reasonably be encountered in a range area. The mixed-metals clutter was placed in the open water area only. TABLE 1-3...Table 1-4, and distributed throughout all test areas. Most of this clutter is composed of ordnance components; however, industrial scrap metal and
NASA Technical Reports Server (NTRS)
Warren, W. H., Jr.
1984-01-01
The machine-readable finding list, as it is currently being distributed from the Astronomical Data Center, is described. This version of the list supersedes an earlier one (1977) containing only Sections 1 through 7 of the NSRDS-NBS 3 multiplet tables publications. Additional sections are to be incorporated into this list as they are published.
Health Impact Assessment Impact Characterization Table
The potential health impacts of the proposed decision should be characterized based on the following criteria: Direction, Likelihood, Magnitude, Distribution, Severity, Permanence, Strength of Evidence.
The European Southern Observatory-MIDAS table file system
NASA Technical Reports Server (NTRS)
Peron, M.; Grosbol, P.
1992-01-01
The new and substantially upgraded version of the Table File System in MIDAS is presented as a scientific database system. MIDAS applications for performing database operations on tables are discussed, for instance, the exchange of the data to and from the TFS, the selection of objects, the uncertainty joins across tables, and the graphical representation of data. This upgraded version of the TFS is a full implementation of the binary table extension of the FITS format; in addition, it also supports arrays of strings. Different storage strategies for optimal access of very large data sets are implemented and are addressed in detail. As a simple relational database, the TFS may be used for the management of personal data files. This opens the way to intelligent pipeline processing of large amounts of data. One of the key features of the Table File System is to provide also an extensive set of tools for the analysis of the final results of a reduction process. Column operations using standard and special mathematical functions as well as statistical distributions can be carried out; commands for linear regression and model fitting using nonlinear least square methods and user-defined functions are available. Finally, statistical tests of hypothesis and multivariate methods can also operate on tables.
Water Table Uncertainties due to Uncertainties in Structure and Properties of an Unconfined Aquifer.
Hauser, Juerg; Wellmann, Florian; Trefry, Mike
2018-03-01
We consider two sources of geology-related uncertainty in making predictions of the steady-state water table elevation for an unconfined aquifer. That is the uncertainty in the depth to base of the aquifer and in the hydraulic conductivity distribution within the aquifer. Stochastic approaches to hydrological modeling commonly use geostatistical techniques to account for hydraulic conductivity uncertainty within the aquifer. In the absence of well data allowing derivation of a relationship between geophysical and hydrological parameters, the use of geophysical data is often limited to constraining the structural boundaries. If we recover the base of an unconfined aquifer from an analysis of geophysical data, then the associated uncertainties are a consequence of the geophysical inversion process. In this study, we illustrate this by quantifying water table uncertainties for the unconfined aquifer formed by the paleochannel network around the Kintyre Uranium deposit in Western Australia. The focus of the Bayesian parametric bootstrap approach employed for the inversion of the available airborne electromagnetic data is the recovery of the base of the paleochannel network and the associated uncertainties. This allows us to then quantify the associated influences on the water table in a conceptualized groundwater usage scenario and compare the resulting uncertainties with uncertainties due to an uncertain hydraulic conductivity distribution within the aquifer. Our modeling shows that neither uncertainties in the depth to the base of the aquifer nor hydraulic conductivity uncertainties alone can capture the patterns of uncertainty in the water table that emerge when the two are combined. © 2017, National Ground Water Association.
Relation Extraction with Weak Supervision and Distributional Semantics
2013-05-01
DATES COVERED: 00-00-2013 to 00-00-2013. TITLE AND SUBTITLE: Relation Extraction with Weak Supervision and Distributional Semantics. Contents include: Introduction; Prior Work (supervised relation extraction; distant supervision for relation extraction)
26 CFR 1.336-0 - Table of contents.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 355(d)(2) or (e)(2). (i) Old target—deemed asset disposition. (A) In general. (B) Gains and losses. (1) Gains. (2) Losses. (i) In general. (ii) Stock distributions. (iii) Amount and allocation of disallowed.... (2) Exception. (B) Gains and losses. (1) Gains. (2) Losses. (i) In general. (ii) Stock distributions...
Use of the negative binomial-truncated Poisson distribution in thunderstorm prediction
NASA Technical Reports Server (NTRS)
Cohen, A. C.
1971-01-01
A probability model is presented for the distribution of thunderstorms over a small area given that thunderstorm events (1 or more thunderstorms) are occurring over a larger area. The model incorporates the negative binomial and truncated Poisson distributions. Probability tables for Cape Kennedy for spring, summer, and fall months and seasons are presented. The computer program used to compute these probabilities is appended.
Kathleen A. Dwire; J. Boone Kauffman; John E. Baham
2006-01-01
The distribution of riparian plant species is largely driven by hydrologic and soil variables, and riparian plant communities frequently occur in relatively distinct zones along streamside elevational and soil textural gradients. In two montane meadows in northeast Oregon, USA, we examined plant species distribution in three riparian plant communities, defined as wet,...
Minimalistic models of the vertical distribution of roots under stochastic hydrological forcing
NASA Astrophysics Data System (ADS)
Laio, Francesco
2014-05-01
The assessment of the vertical root profile can be useful for multiple purposes: the partition of water fluxes between evaporation and transpiration, the evaluation of root soil reinforcement for bioengineering applications, the influence of roots on biogeochemical and microbial processes in the soil, etc. In water-controlled ecosystems the shape of the root profile is mainly determined by the soil moisture availability at different depths. The long term soil water balance in the root zone can be assessed by modeling the stochastic incoming and outgoing water fluxes, influenced by the stochastic rainfall pulses and/or by the water table fluctuations. Through an ecohydrological analysis one obtains that in water-controlled ecosystems the vertical root distribution is a decreasing function with depth, whose parameters depend on pedologic and climatic factors. The model can be extended to suitably account for the influence of the water table fluctuations, when the water table is shallow enough to exert an influence on root development, in which case the vertical root distribution tends to assume a non-monotonic form. In order to evaluate the validity of the ecohydrological estimation of the root profile we have tested it on a case study in the north of Tuscany (Italy). We have analyzed data from 17 landslide-prone sites: in each of these sites we have assessed the pedologic and climatic descriptors necessary to apply the model, and we have measured the mean rooting depth. The results show a quite good matching between observed and modeled mean root depths. The merit of this minimalistic approach to the modeling of the vertical root distribution relies on the fact that it allows a quantitative estimation of the main features of the vertical root distribution without resorting to time- and money-demanding measuring surveys.
Ordered versus Unordered Map for Primitive Data Types
2015-09-01
mapped to some element. C++ provides two types of map containers within the standard template library, the std::map and the std::unordered_map...classes. As the name implies, the containers' main functional difference is that the elements in the std::map are ordered by the key, and the std::unordered_map are not ordered based on their key. The std::unordered_map elements are placed into "buckets" based on a hash value computed for their key
A Study of Gaps in Cyber Defense Automation
2016-10-13
converting all characters to lowercase. Next, the normalized file is tokenized using an n-length window. These n-tokens are then hashed into a Bloom filter...expensive and not readily available to most developers. However, with a complexity of O(N^2), where N is the number of files (or the number of...prioritized by statistics that measure the impact of each feature on the website's chances of becoming compromised, and the top N features are submitted to
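A minimal sketch of the normalize / n-gram / Bloom-filter step described in this snippet; the window length, filter size, and the double-hashing trick for deriving bit positions are illustrative assumptions rather than the report's exact parameters.

```python
import hashlib

class BloomFilter:
    def __init__(self, m_bits: int = 1 << 16, k_hashes: int = 4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, token: bytes):
        # Double hashing: derive k bit positions from two digests of the token.
        h1 = int.from_bytes(hashlib.sha256(token).digest()[:8], "big")
        h2 = int.from_bytes(hashlib.md5(token).digest()[:8], "big") | 1
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, token: bytes):
        for p in self._positions(token):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, token: bytes):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(token))

def file_ngrams(text: str, n: int = 8):
    normalized = text.lower()                 # the normalization step from the abstract
    for i in range(len(normalized) - n + 1):
        yield normalized[i:i + n].encode()    # sliding n-length window

bloom = BloomFilter()
for token in file_ngrams("SELECT * FROM users WHERE id = 1"):
    bloom.add(token)
print(b"from use" in bloom)   # True: this 8-gram was inserted
print(b"password" in bloom)   # almost certainly False (false positives are possible)
```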
Unlinkable Serial Transactions: Protocols and Applications
1999-11-01
4) intends for the blind signature to be secure from “one-more” forgery attacks [Pointcheval and Stern 1996]. Such forgery enables the originator to...protocols we describe. The signature indicated by S in message 2 uses the vendor's signature key for service S and is only used to sign blinded hashes. We...make use of blind signatures; although, not surprisingly, their systems are rather different from the ones given here [Fujioka et al. 1993; Cranor 1996
Secure Hierarchical Multicast Routing and Multicast Internet Anonymity
1998-06-01
Multimedia, Summer 94, pages 76-79, 94. [15] David Chaum. Blind signatures for untraceable payments. In Proc. Crypto, pages 199-203, 1982. [16] David L...use of digital signatures, which consist of a cryptographic hash of the message encrypted with the private key of the signer. Digitally-signed messages...signature on the request and on the certificate it contains. Notice that the location service need not retrieve the initiator's public key as it is contained
NASA Technical Reports Server (NTRS)
Glushko, Vladimir
2004-01-01
The intensity and amplitude of human functional systems and of the most important human organs are wavelike and rhythmic by nature. These waves have constant periodicity, phase and amplitude. These characteristics can vary; however, their variations recur in a pronounced way over time. This indicates a hashing of several wave processes and their interference. Stochastic changes in the wave-process characteristics of a human organism are explained either by 'pulsations' associated with the hashing (superposition) of several wave processes and their interference, or by the singular influence of environmental physical factors on the organism. Human beings accordingly have periods of higher and lower efficiency, better and worse states of health, and so on, depending not only on environmental factors but also on an 'internal' rhythmic factor. Sometimes the periodicity of peaks and troughs in particular characteristics is broken. Disturbance of steady-state biological rhythms is usually accompanied by reduced steadiness in the activity of the most important systems of the human organism. In turn, this affects the organism's adaptation to changing living conditions as well as the general condition and efficiency of a human being. The latter factor is very important for space medicine. Biological rhythmology is a special branch of biology and medicine that studies the mechanisms of rhythmic activity in organs, organ systems, individuals and species. Corresponding research has also been carried out in space medicine.
Securizing data linkage in french public statistics.
Guesdon, Maxence; Benzenine, Eric; Gadouche, Kamel; Quantin, Catherine
2016-10-06
Administrative records in France, especially medical and social records, have huge potential for statistical studies. The NIR (a national identifier) is widely used in medico-social administrations, and this would theoretically provide considerable scope for data matching, on condition that the legislation on such matters was respected. The law, however, forbids the processing of non-anonymized medical data, thus making it difficult to carry out studies that require several sources of social and medical data. We would like to benefit from computer techniques introduced since the 1970s to provide safe linkage of anonymized files and to ease the current constraints on such procedures. We propose an organization and a data workflow, based on hashing and cryptographic techniques, to strongly compartmentalize identifying and non-identifying data. The proposed method offers strong control over who is in possession of which information, using different hashing keys for each linkage. This makes it possible to prevent unauthorized linkage of data and to protect anonymity by preventing the accumulation of non-identifying data, which can become identifying when linked. Our proposal would make it possible to conduct such studies more easily, more regularly and more precisely while preserving a high enough level of anonymity. The main obstacle to setting up such a system, in our opinion, is not technical, but rather organizational, in that it is based on the existence of a Key-Management Authority.
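A minimal sketch of the per-linkage keyed hashing idea described above; HMAC-SHA-256 and the placeholder key values are stand-ins for the unspecified construction, with keys understood to be issued by the Key-Management Authority.

```python
import hmac
import hashlib

def linkage_pseudonym(identifier: str, linkage_key: bytes) -> str:
    """Keyed hash (HMAC-SHA-256) of an identifier such as the NIR.

    Using a different secret key for each authorized linkage means that
    pseudonyms issued for different studies cannot be matched together.
    """
    return hmac.new(linkage_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Illustrative keys; in the proposed organization they would be issued and
# guarded by the Key-Management Authority, never by the data holders.
key_study_a = b"placeholder-key-for-study-A"
key_study_b = b"placeholder-key-for-study-B"

nir = "0000000000000"  # placeholder identifier, not a real NIR
print(linkage_pseudonym(nir, key_study_a))  # links records within study A only
print(linkage_pseudonym(nir, key_study_b))  # different value: no cross-study linkage
```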
Proposed data compression schemes for the Galileo S-band contingency mission
NASA Technical Reports Server (NTRS)
Cheung, Kar-Ming; Tong, Kevin
1993-01-01
The Galileo spacecraft is currently on its way to Jupiter and its moons. In April 1991, the high gain antenna (HGA) failed to deploy as commanded. In case the current efforts to deploy the HGA fail, communications during the Jupiter encounters will be through one of two low gain antennas (LGA) on an S-band (2.3 GHz) carrier. Considerable effort has been and will be devoted to attempts to open the HGA. Also, various options for improving Galileo's telemetry downlink performance are being evaluated in the event that the HGA will not open at Jupiter arrival. Among all viable options, the most promising and powerful one is to perform image and non-image data compression in software onboard the spacecraft. This involves in-flight re-programming of the existing flight software of Galileo's Command and Data Subsystem processors and Attitude and Articulation Control System (AACS) processor, which have very limited computational and memory resources. In this article we describe the proposed data compression algorithms and give their respective compression performance. The planned image compression algorithm is a 4 x 4 or an 8 x 8 multiplication-free integer cosine transform (ICT) scheme, which can be viewed as an integer approximation of the popular discrete cosine transform (DCT) scheme. The implementation complexity of the ICT schemes is much lower than that of the DCT-based schemes, yet the performances of the two algorithms are indistinguishable. The proposed non-image compression algorithm is a Lempel-Ziv-Welch (LZW) variant, which is a lossless universal compression algorithm based on a dynamic dictionary lookup table. We developed a simple and efficient hashing function to perform the string search.
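To illustrate the dictionary-lookup idea behind the LZW variant, here is a minimal sketch in which the lookup is a hash-table search keyed on (prefix code, next byte); Python's built-in dict stands in for the flight software's hand-rolled hashing function, whose details are not reproduced here.

```python
def lzw_encode(data: bytes):
    """LZW compression with a hash-map dictionary keyed on (prefix_code, byte)."""
    table = {(None, b): b for b in range(256)}  # single-byte strings map to codes 0..255
    next_code = 256
    prefix = None
    out = []
    for b in data:
        key = (prefix, b)
        if key in table:              # hash lookup replaces a linear string search
            prefix = table[key]
        else:
            out.append(prefix)        # emit code for the longest known string
            table[key] = next_code    # register the extended string
            next_code += 1
            prefix = table[(None, b)]
    if prefix is not None:
        out.append(prefix)
    return out

codes = lzw_encode(b"TOBEORNOTTOBEORTOBEORNOT")
print(codes)
```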
Fluoride content in table salt distributed in Mexico City, Mexico.
Hernández-Guerrero, Juan Carlos; de la Fuente-Hernández, Javier; Jiménez-Farfán, Maria Dolores; Ledesma-Montes, Constantino; Castañeda-Castaneira, Enrique; Molina-Frechero, Nelly; Jacinto-Alemán, Luís Fernando; Juárez-Lopez, Lilia Adriana; Moreno-Altamirano, Alejandra
2008-01-01
The aim of this study was to analyze table salt available in Mexico City's market to identify the fluoride concentrations and to compare these with the Mexican regulations. We analyzed 44 different brands of table salt. All samples were purchased at random in different stores, supermarkets, and groceries from Mexico City's metropolitan area and analyzed in triplicate in three different laboratories (nine determinations per sample) with an Orion 720 A potentiometer and an Orion 9609 BN ion-specific electrode. Fluoride concentration in the samples varied from 0 ppm to 485 ppm. It was found that fluoride concentration varied widely among the analyzed brands. Also, we found that fluoride concentration in 92 percent of the analyzed samples did not match with that printed on the label. Only 6.8 percent of the analyzed samples contained fluoride concentrations that meet Mexican and WHO regulations. The broad variation in the analyzed samples suggests that Mexican Public Health authorities must implement more stringent regulation guidelines and procedures for controlling the distribution of salt and its fluoride concentration for human consumption.
A computer program for multiple decrement life table analyses.
Poole, W K; Cooley, P C
1977-06-01
Life table analysis has traditionally been the tool of choice in analyzing distribution of "survival" times when a parametric form for the survival curve could not be reasonably assumed. Chiang, in two papers [1,2] formalized the theory of life table analyses in a Markov chain framework and derived maximum likelihood estimates of the relevant parameters for the analyses. He also discussed how the techniques could be generalized to consider competing risks and follow-up studies. Although various computer programs exist for doing different types of life table analysis [3] to date, there has not been a generally available, well documented computer program to carry out multiple decrement analyses, either by Chiang's or any other method. This paper describes such a program developed by Research Triangle Institute. A user's manual is available at printing costs which supplements the contents of this paper with a discussion of the formula used in the program listing.
2016-02-01
NFT), plasmonic materials, scattering-type scanning near-field optical microscopy (s-NSOM). I. INTRODUCTION: The continuous growth in data storage is...recording stack for (a) gold and (b) silver bowtie apertures. The spatial distributions are calculated at 1 ns. TABLE I: COMPARISON BETWEEN GOLD AND SILVER...NFTs. From the calculation results, we can obtain the thermal efficiency defined in (1). A detailed comparison is summarized in Table I, where the
The Use of R and S2 Charts Under Nonstandard Conditions.
1980-03-01
and Krishnaiah (1964), David (1956), Finney (1941), Ghosh (1955), Gupta (1963), Nair (1948), and Ramachandran (1956). Under Gaussian assumptions...the extensive tables of Armitage and Krishnaiah (1964) are tabulations of upper percentiles limited to the range 1 < t < 12 and thus are not suited to...REFERENCES: Armitage, J.V. and Krishnaiah, P.R. (1964). "Tables for the Studentized largest chi-square distribution and their applications." Report ARL 64-188
2016-12-01
the study for the presence or absence of ectopic bone formation at the indicated time points post injury (Table 1). Table 1: Incidence of HO...Award Number: W81XWH-12-2-0119. TITLE: Early Diagnosis and Intervention Strategies for Post-Traumatic Heterotopic Ossification in Severely...2016. TYPE OF REPORT: Final. PREPARED FOR: U.S. Army Medical Research and Materiel Command, Fort Detrick, Maryland 21702-5012. DISTRIBUTION STATEMENT
NASA Astrophysics Data System (ADS)
Shaya, E.; Kargatis, V.; Blackwell, J.; Borne, K.; White, R. A.; Cheung, C.
1998-05-01
Several new web based services have been introduced this year by the Astrophysics Data Facility (ADF) at the NASA Goddard Space Flight Center. IMPReSS is a graphical interface to astrophysics databases that presents the user with the footprints of observations of space-based missions. It also aids astronomers in retrieving these data by sending requests to distributed data archives. The VIEWER is a reader of ADC astronomical catalogs and journal tables that allows subsetting of catalogs by column choices and range selection and provides database-like search capability within each table. With it, the user can easily find the table data most appropriate for their purposes and then download either the subset table or the original table. CATSEYE is a tool that plots output tables from the VIEWER (and soon AMASE), making exploring the datasets fast and easy. Having completed the basic functionality of these systems, we are enhancing the site to provide advanced functionality. These will include: market basket storage of tables and records of VIEWER output for IMPReSS and AstroBrowse queries, non-HTML table responses to AstroBrowse type queries, general column arithmetic, modularity to allow entrance into the sequence of web pages at any point, histogram plots, navigable maps, and overplotting of catalog objects on mission footprint maps. When completed, the ADF/ADC web facilities will provide astronomical tabled data and mission retrieval information in several hyperlinked environments geared for users at any level, from the school student to the typical astronomer to the expert datamining tools at state-of-the-art data centers.
Attacks on quantum key distribution protocols that employ non-ITS authentication
NASA Astrophysics Data System (ADS)
Pacher, C.; Abidin, A.; Lorünser, T.; Peev, M.; Ursin, R.; Zeilinger, A.; Larsson, J.-Å.
2016-01-01
We demonstrate how adversaries with large computing resources can break quantum key distribution (QKD) protocols which employ a particular message authentication code suggested previously. This authentication code, featuring low key consumption, is not information-theoretically secure (ITS), since for each message the eavesdropper has intercepted she is able to send a different message from a set of messages that she can calculate by finding collisions of a cryptographic hash function. However, when this authentication code was introduced, it was shown to prevent straightforward man-in-the-middle (MITM) attacks against QKD protocols. In this paper, we prove that the set of messages that collide with any given message under this authentication code contains, with high probability, a message that has small Hamming distance to any other given message. Based on this fact, we present extended MITM attacks against different versions of BB84 QKD protocols using the addressed authentication code; for three protocols, we describe every single action taken by the adversary. For all protocols, the adversary can obtain complete knowledge of the key, and for most protocols her success probability in doing so approaches unity. Since the attacks work against all authentication methods that allow colliding messages to be calculated, the underlying building blocks of the presented attacks expose the potential pitfalls arising as a consequence of non-ITS authentication in QKD post-processing. We propose countermeasures, increasing the eavesdropper's demand for computational power, and also prove necessary and sufficient conditions for upgrading the discussed authentication code to the ITS level.
Botbol, Joseph Moses; Evenden, Gerald Ian
1989-01-01
Tables, graphs, and maps are used to portray the frequency characteristics and spatial distribution of manganese oxide-rich phase geochemical data, to characterize the northern Pacific in terms of publicly available nodule geochemical data, and to develop data portrayal methods that will facilitate data analysis. Source data are a subset of the Scripps Institution of Oceanography's Sediment Data Bank. The study area is bounded by 0° N., 40° N., 120° E., and 100° W. and is arbitrarily subdivided into fourteen 20°x20° geographic subregions. Frequency distributions of trace metals characterized in the original raw data are graphed as ogives, and salient parameters are tabulated. All variables are transformed to enrichment values relative to median concentration within their host subregions. Scatter plots of all pairs of original variables and their enrichment transforms are provided as an aid to the interpretation of correlations between variables. Gridded spatial distributions of all variables are portrayed as gray-scale maps. The use of tables and graphs to portray frequency statistics and gray-scale maps to portray spatial distributions is an effective way to prepare for and facilitate multivariate data analysis.
Wavelet-based reversible watermarking for authentication
NASA Astrophysics Data System (ADS)
Tian, Jun
2002-04-01
In the digital information age, digital content (audio, image, and video) can be easily copied, manipulated, and distributed. Copyright protection and content authentication of digital content have become an urgent problem for content owners and distributors. Digital watermarking has provided a valuable solution to this problem. Based on the application scenario, most digital watermarking methods can be divided into two categories: robust watermarking and fragile watermarking. As a special subset of fragile watermarking, a reversible watermark (also called a lossless, invertible, or erasable watermark) enables the recovery of the original, unwatermarked content after the watermarked content has been detected to be authentic. Such reversibility to get back unwatermarked content is highly desired in sensitive imagery, such as military data and medical data. In this paper we present a reversible watermarking method based on an integer wavelet transform. We look into the binary representation of each wavelet coefficient and embed an extra bit into each expandable wavelet coefficient. The location map of all expanded coefficients will be coded by JBIG2 compression and these coefficient values will be losslessly compressed by arithmetic coding. Besides these two compressed bit streams, an SHA-256 hash of the original image will also be embedded for authentication purposes.
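A minimal sketch of reversibly embedding one bit into an "expandable" integer coefficient by doubling it and appending the bit; this illustrates only the expansion step, not the paper's full wavelet pipeline (location map, JBIG2 and arithmetic coding, SHA-256 hash), and it assumes the chosen coefficient is expandable, i.e. the doubled value does not overflow its allowed range.

```python
def embed_bit(coeff: int, bit: int) -> int:
    """Reversibly embed one bit: c -> 2*c + bit (assumes c is 'expandable')."""
    return 2 * coeff + bit

def extract_bit(marked: int):
    """Recover the embedded bit and the original coefficient exactly."""
    return marked & 1, marked >> 1

coeff = -7
marked = embed_bit(coeff, 1)
bit, restored = extract_bit(marked)
print(marked, bit, restored)  # -13 1 -7
```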
Study on the security of the authentication scheme with key recycling in QKD
NASA Astrophysics Data System (ADS)
Li, Qiong; Zhao, Qiang; Le, Dan; Niu, Xiamu
2016-09-01
In quantum key distribution (QKD), information-theoretically secure authentication is necessary to guarantee the integrity and authenticity of the exchanged information over the classical channel. In order to reduce the key consumption, the authentication scheme with key recycling (KR), in which a secret but fixed hash function is used for multiple messages while each tag is encrypted with a one-time pad (OTP), is preferred in QKD. Based on the assumption that the OTP key is perfect, the security of the authentication scheme has been proved. However, the OTP key used for authentication in a practical QKD system is not perfect. How the imperfect OTP affects the security of the authentication scheme with KR is analyzed thoroughly in this paper. In a practical QKD system, information about the OTP key resulting from QKD is partially leaked to the adversary. Although the information leakage is usually small enough to be neglected, it leads to increasingly degraded security of the authentication scheme as the system runs continuously. Both our theoretical analysis and simulation results demonstrate that the security level of the authentication scheme with KR, mainly indicated by its substitution probability, degrades exponentially in the number of rounds and gradually diminishes to zero.
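A toy sketch of the key-recycling construction discussed above: a fixed secret hash function (here a polynomial hash over a prime field, an illustrative stand-in for the scheme's universal hash) evaluates each message, and the resulting tag is encrypted with a fresh one-time pad, so only the OTP portion of the key is consumed per message.

```python
import secrets

P = (1 << 61) - 1  # prime modulus for the toy polynomial hash (illustrative choice)

def poly_hash(message: bytes, key: int) -> int:
    """Fixed secret hash function, reused across messages (the 'KR' part)."""
    h = 0
    for byte in message:
        h = (h * key + byte) % P
    return h

def make_tag(message: bytes, hash_key: int, otp: int) -> int:
    """Tag = hash(message) XOR a fresh one-time pad (the OTP is never reused)."""
    return poly_hash(message, hash_key) ^ otp

hash_key = secrets.randbelow(P - 1) + 1   # fixed for the whole session
otp = secrets.randbelow(1 << 61)          # fresh pad drawn from the QKD key per message
msg = b"basis reconciliation data"
tag = make_tag(msg, hash_key, otp)
print(make_tag(msg, hash_key, otp) == tag)                  # verifier recomputes the tag
print(make_tag(b"tampered message", hash_key, otp) == tag)  # almost surely False
```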
Clone tag detection in distributed RFID systems
Kamaludin, Hazalila; Mahdin, Hairulnizam
2018-01-01
Although Radio Frequency Identification (RFID) is poised to displace barcodes, security vulnerabilities pose serious challenges for global adoption of the RFID technology. Specifically, RFID tags are prone to basic cloning and counterfeiting security attacks. A successful cloning of the RFID tags in many commercial applications can lead to many serious problems such as financial losses, brand damage, safety and health of the public. With many industries such as pharmaceutical and businesses deploying RFID technology with a variety of products, it is important to tackle RFID tag cloning problem and improve the resistance of the RFID systems. To this end, we propose an approach for detecting cloned RFID tags in RFID systems with high detection accuracy and minimal overhead thus overcoming practical challenges in existing approaches. The proposed approach is based on consistency of dual hash collisions and modified count-min sketch vector. We evaluated the proposed approach through extensive experiments and compared it with existing baseline approaches in terms of execution time and detection accuracy under varying RFID tag cloning ratio. The results of the experiments show that the proposed approach outperforms the baseline approaches in cloned RFID tag detection accuracy. PMID:29565982
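A minimal count-min sketch with several hash rows, along the lines of the "modified count-min sketch vector" mentioned above; the width/depth parameters and the way tag reads are mapped to counters are illustrative assumptions, not the paper's exact construction.

```python
import hashlib

class CountMinSketch:
    def __init__(self, width: int = 1024, depth: int = 4):
        self.width, self.depth = width, depth
        self.rows = [[0] * width for _ in range(depth)]

    def _index(self, row: int, key: str) -> int:
        digest = hashlib.sha256(f"{row}:{key}".encode()).digest()
        return int.from_bytes(digest[:8], "big") % self.width

    def add(self, key: str, count: int = 1):
        for r in range(self.depth):
            self.rows[r][self._index(r, key)] += count

    def estimate(self, key: str) -> int:
        # Minimum over rows bounds the over-count caused by hash collisions.
        return min(self.rows[r][self._index(r, key)] for r in range(self.depth))

# Tag IDs observed by readers; an ID read far more often than a genuine tag
# plausibly could be is a candidate clone.
sketch = CountMinSketch()
reads = ["EPC-001"] * 2 + ["EPC-002"] * 2 + ["EPC-777"] * 50   # EPC-777 looks cloned
for tag_id in reads:
    sketch.add(tag_id)
for tag_id in ("EPC-001", "EPC-777"):
    print(tag_id, sketch.estimate(tag_id))
```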
Lightweight and confidential data discovery and dissemination for wireless body area networks.
He, Daojing; Chan, Sammy; Zhang, Yan; Yang, Haomiao
2014-03-01
As a special sensor network, a wireless body area network (WBAN) provides an economical solution to real-time monitoring and reporting of patients' physiological data. After a WBAN is deployed, it is sometimes necessary to disseminate data into the network through wireless links to adjust configuration parameters of body sensors or distribute management commands and queries to sensors. A number of such protocols have been proposed recently, but they all focus on how to ensure reliability and overlook security vulnerabilities. Taking into account the unique features and application requirements of a WBAN, this paper presents the design, implementation, and evaluation of a secure, lightweight, confidential, and denial-of-service-resistant data discovery and dissemination protocol for WBANs to ensure the data items disseminated are not altered or tampered. Based on multiple one-way key hash chains, our protocol provides instantaneous authentication and can tolerate node compromise. Besides the theoretical analysis that demonstrates the security and performance of the proposed protocol, this paper also reports the experimental evaluation of our protocol in a network of resource-limited sensor nodes, which shows its efficiency in practice. In particular, extensive security analysis shows that our protocol is provably secure.
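A minimal sketch of the one-way key hash chains the protocol builds on: keys are generated by repeated hashing and then disclosed in reverse order, so a receiver holding an authenticated commitment can verify each later key with a single hash; the chain length and hash choice are illustrative.

```python
import hashlib

def build_chain(seed: bytes, length: int):
    """Return [K_0, K_1, ..., K_n] with K_i = H(K_{i+1}) and K_n = H(seed)."""
    keys = [hashlib.sha256(seed).digest()]
    for _ in range(length):
        keys.append(hashlib.sha256(keys[-1]).digest())
    keys.reverse()               # keys[0] is the commitment, disclosed first
    return keys

def verify(disclosed: bytes, last_accepted: bytes) -> bool:
    """A disclosed key is authentic if hashing it yields the last accepted key."""
    return hashlib.sha256(disclosed).digest() == last_accepted

chain = build_chain(b"sink-secret-seed", length=5)
commitment = chain[0]                      # pre-loaded on the body sensors
print(verify(chain[1], commitment))        # True: the next key in the chain
print(verify(b"forged key", commitment))   # False
```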
Improving GlobalLand30 Artificial Type Extraction Accuracy in Low-Density Residents
NASA Astrophysics Data System (ADS)
Hou, Lili; Zhu, Ling; Peng, Shu; Xie, Zhenlei; Chen, Xu
2016-06-01
GlobalLand 30 is the first 30 m resolution land cover product in the world. It covers the area between 80°N and 80°S. There are ten classes, including artificial cover, water bodies, woodland, lawn, bare land, cultivated land, wetland, sea area, shrub and snow. The TM imagery from Landsat is the main data source of GlobalLand 30. Within the artificial surface class, one source of omission error is low-density residential areas. In TM images, a scattered ('hashed') distribution is one of the typical characteristics of low-density residential areas; another is that they are surrounded by large amounts of cultivated land. This causes low-density residential areas to be confused with cultivated land. To solve this problem, nighttime light remote sensing imagery is used as reference data and, building on NDBI, we add the TM6 band to compute a surface thermal radiation index, TR-NDBI (Thermal Radiation Normalized Difference Building Index), for the purpose of extracting low-density residential areas. The result shows that using TR-NDBI together with nighttime light remote sensing imagery is a feasible and effective method for extracting low-density residential areas.
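The abstract does not give the TR-NDBI formula; purely as a hedged sketch of the idea (combining the standard NDBI computed from TM bands 4 and 5 with the TM6 thermal band), one might filter NDBI-positive pixels by a thermal threshold, as below. The thresholds and the way the thermal term enters are placeholder assumptions, not the authors' definition.

```python
import numpy as np

def ndbi(tm4_nir: np.ndarray, tm5_swir: np.ndarray) -> np.ndarray:
    """Standard NDBI = (SWIR - NIR) / (SWIR + NIR) from Landsat TM bands 5 and 4."""
    return (tm5_swir - tm4_nir) / (tm5_swir + tm4_nir + 1e-9)

def tr_ndbi_mask(tm4, tm5, tm6_thermal, ndbi_thresh=0.0, thermal_thresh=0.5):
    """One plausible reading of the TR-NDBI idea (an assumption, not the paper's
    formula): keep pixels that are both NDBI-positive and warm in TM6, since
    built-up surfaces emit more thermal radiation than surrounding cropland."""
    return (ndbi(tm4, tm5) > ndbi_thresh) & (tm6_thermal > thermal_thresh)

# Tiny synthetic example: one warm built-up pixel, one cool cultivated pixel.
tm4 = np.array([[0.30, 0.45]])
tm5 = np.array([[0.40, 0.35]])
tm6 = np.array([[0.80, 0.30]])
print(tr_ndbi_mask(tm4, tm5, tm6))  # [[ True False]]
```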