Collision analysis of one kind of chaos-based hash function
NASA Astrophysics Data System (ADS)
Xiao, Di; Peng, Wenbing; Liao, Xiaofeng; Xiang, Tao
2010-02-01
In the last decade, various chaos-based hash functions have been proposed, yet the corresponding analyses lag far behind. In this Letter, we first take a chaos-based hash function proposed very recently in Amin, Faragallah and Abd El-Latif (2009) [11] as a sample to analyze its computational collision problem, then generalize the construction method of one kind of chaos-based hash function and summarize precautions for avoiding the collision problem. This is beneficial to future chaos-based hash function design.
Perceptual Audio Hashing Functions
NASA Astrophysics Data System (ADS)
Özer, Hamza; Sankur, Bülent; Memon, Nasir; Anarım, Emin
2005-12-01
Perceptual hash functions provide a tool for fast and reliable identification of content. We present new audio hash functions based on summarization of the time-frequency spectral characteristics of an audio document. The proposed hash functions are based on the periodicity series of the fundamental frequency and on singular-value description of the cepstral frequencies. They are found, on one hand, to perform very satisfactorily in identification and verification tests, and on the other hand, to be very resilient to a large variety of attacks. Moreover, we address the issue of security of hashes and propose a keying technique, and thereby a key-dependent hash function.
Spherical hashing: binary code embedding with hyperspheres.
Heo, Jae-Pil; Lee, Youngwoon; He, Junfeng; Chang, Shih-Fu; Yoon, Sung-Eui
2015-11-01
Many binary code embedding schemes have been actively studied recently, since they can provide efficient similarity search and compact data representations suitable for handling large-scale image databases. Existing binary code embedding techniques encode high-dimensional data by using hyperplane-based hashing functions. In this paper we propose a novel hypersphere-based hashing function, spherical hashing, to map more spatially coherent data points into a binary code compared to hyperplane-based hashing functions. We also propose a new binary code distance function, spherical Hamming distance, tailored for our hypersphere-based binary coding scheme, and design an efficient iterative optimization process to achieve both balanced partitioning for each hash function and independence between hashing functions. Furthermore, we generalize spherical hashing to support various similarity measures defined by kernel functions. Our extensive experiments show that our spherical hashing technique significantly outperforms state-of-the-art techniques based on hyperplanes across various benchmarks with sizes ranging from one to 75 million GIST, BoW and VLAD descriptors. The performance gains are consistent and large, up to 100 percent improvement over the second-best tested method. These results confirm the unique merits of using hyperspheres to encode proximity regions in high-dimensional spaces. Finally, our method is intuitive and easy to implement.
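The hypersphere-based encoding and the spherical Hamming distance described above can be sketched in a few lines. This is an illustrative toy (pivot points chosen at random, radii set to the median distance so each bit is roughly balanced), not the paper's iterative optimization; all function names are our own.

```python
import numpy as np

def train_spherical_hash(data, n_bits, rng):
    """Pick pivot points and radii so each hypersphere encloses about half the
    data (balanced bits). A toy stand-in for the paper's joint optimization."""
    pivots = data[rng.choice(len(data), n_bits, replace=False)]
    radii = np.array([np.median(np.linalg.norm(data - p, axis=1)) for p in pivots])
    return pivots, radii

def spherical_encode(x, pivots, radii):
    """Bit k is 1 iff x falls inside hypersphere k."""
    return (np.linalg.norm(pivots - x, axis=1) <= radii).astype(np.uint8)

def spherical_hamming(a, b):
    """Spherical Hamming distance: disagreeing bits normalized by shared 1-bits."""
    common = np.sum(a & b)
    return np.sum(a ^ b) / common if common else np.inf
```

The normalization by shared 1-bits is what makes the distance "spherical": two codes that agree inside many common spheres are considered closer than codes that merely agree on being outside.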
Hash function based on chaotic map lattices.
Wang, Shihong; Hu, Gang
2007-06-01
A new hash function system, based on coupled chaotic map dynamics, is suggested. By combining floating point computation of chaos and some simple algebraic operations, the system reaches very high bit confusion and diffusion rates, and this enables the system to have desired statistical properties and strong collision resistance. The chaos-based hash function has its advantages for high security and fast performance, and it serves as one of the most highly competitive candidates for practical applications of hash function for software realization and secure information communications in computer networks.
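The general recipe behind such schemes, iterating a chaotic map whose state is perturbed by message bytes and thresholding the orbit to extract digest bits, can be illustrated with a single logistic map. This is a hedged toy sketch of the idea, not the authors' coupled-lattice system, and it is not cryptographically secure.

```python
def chaos_hash(message: bytes, digest_bits: int = 128, r: float = 3.99) -> int:
    """Toy chaos-based hash: each message byte perturbs the state of a
    logistic map x -> r*x*(1-x); extra iterations diffuse the change."""
    x = 0.5
    for b in message:
        x = (x + (b + 1) / 257.0) % 1.0 or 0.5  # inject byte, avoid x == 0
        for _ in range(8):                       # diffusion rounds
            x = r * x * (1.0 - x)
    digest = 0
    for _ in range(digest_bits):                 # squeeze bits from the orbit
        x = r * x * (1.0 - x)
        digest = (digest << 1) | (x > 0.5)
    return digest
```

Because floating-point iteration of the logistic map is deterministic, the same message always yields the same digest, while a one-byte change is amplified by the map's sensitivity to initial conditions.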
A Simple Secure Hash Function Scheme Using Multiple Chaotic Maps
NASA Astrophysics Data System (ADS)
Ahmad, Musheer; Khurana, Shruti; Singh, Sushmita; AlSharari, Hamed D.
2017-06-01
Chaotic maps possess high parameter sensitivity, random-like behavior and one-way computations, which favor the construction of cryptographic hash functions. In this paper, we present a novel hash function scheme which uses multiple chaotic maps to generate efficient variable-sized hash values. The message is divided into four parts; each part is processed by a different 1D chaotic map unit, yielding an intermediate hash code. The four codes are concatenated into two blocks, and each block is then processed through a 2D chaotic map unit separately. The final hash value is generated by combining the two partial hash codes. Simulation analyses such as distribution of hashes, statistical properties of confusion and diffusion, message and key sensitivity, collision resistance and flexibility are performed. The results reveal that the proposed hash scheme is simple, efficient and holds comparable capabilities when compared with some recent chaos-based hash algorithms.
Linear Subspace Ranking Hashing for Cross-Modal Retrieval.
Li, Kai; Qi, Guo-Jun; Ye, Jun; Hua, Kien A
2017-09-01
Hashing has attracted a great deal of research in recent years due to its effectiveness for the retrieval and indexing of large-scale high-dimensional multimedia data. In this paper, we propose a novel ranking-based hashing framework that maps data from different modalities into a common Hamming space where cross-modal similarity can be measured using Hamming distance. Unlike existing cross-modal hashing algorithms, where the learned hash functions are binary space partitioning functions such as the sign and threshold functions, the proposed hashing scheme takes advantage of a new class of hash functions closely related to rank correlation measures, which are known to be scale-invariant, numerically stable, and highly nonlinear. Specifically, we jointly learn two groups of linear subspaces, one for each modality, so that features' ranking orders in different linear subspaces maximally preserve the cross-modal similarities. We show that the ranking-based hash function has a natural probabilistic approximation which transforms the original highly discontinuous optimization problem into one that can be efficiently solved using simple gradient descent algorithms. The proposed hashing framework is also flexible in the sense that the optimization procedures are not tied to any specific form of loss function, as is typical for existing cross-modal hashing methods; rather, we can flexibly accommodate different loss functions with minimal changes to the learning steps. We demonstrate through extensive experiments on four widely used real-world multimodal datasets that the proposed cross-modal hashing method can achieve competitive performance against several state-of-the-art methods with only moderate training and testing time.
Fully Integrated Passive UHF RFID Tag for Hash-Based Mutual Authentication Protocol.
Mikami, Shugo; Watanabe, Dai; Li, Yang; Sakiyama, Kazuo
2015-01-01
Passive radio-frequency identification (RFID) tags have been used in many applications. While the RFID market is expected to grow, concerns about the security and privacy of RFID tags must be overcome for future use. To overcome these issues, privacy-preserving authentication protocols based on cryptographic algorithms have been designed. However, to the best of our knowledge, evaluation of the whole tag, which includes an antenna, an analog front end, and a digital processing block, running such authentication protocols has not been studied. In this paper, we present an implementation and evaluation of a fully integrated passive UHF RFID tag that runs a privacy-preserving mutual authentication protocol based on a hash function. We design a single chip including the analog front end and the digital processing block. We select a lightweight hash function supporting 80-bit security strength and a standard hash function supporting 128-bit security strength. We show that when the lightweight hash function is used, the tag completes the protocol with a reader-tag distance of 10 cm; when the standard hash function is used, the tag completes it at a distance of 8.5 cm. We discuss the impact of the tag's peak power consumption on this distance for each hash function.
A more secure parallel keyed hash function based on chaotic neural network
NASA Astrophysics Data System (ADS)
Huang, Zhongquan
2011-08-01
Although various hash functions based on chaos or chaotic neural networks have been proposed, most of them cannot work efficiently in parallel computing environments. Recently, an algorithm for parallel keyed hash function construction based on a chaotic neural network was proposed [13]. However, there is a strict limitation in this scheme: its secret keys must be nonces. In other words, if the keys are used more than once in this scheme, there will be a potential security flaw. In this paper, we analyze the cause of the vulnerability of the original scheme in detail, and then propose corresponding enhancement measures, which remove the limitation on the secret keys. Theoretical analysis and computer simulation indicate that the modified hash function is more secure and practical than the original one. At the same time, it keeps the parallel merit and satisfies the other performance requirements of a hash function, such as good statistical properties, high message and key sensitivity, and strong collision resistance.
A cryptographic hash function based on chaotic network automata
NASA Astrophysics Data System (ADS)
Machicao, Jeaneth; Bruno, Odemir M.
2017-12-01
Chaos theory has been used to develop several cryptographic methods relying on the pseudo-random properties extracted from simple nonlinear systems such as cellular automata (CA). Cryptographic hash functions (CHF) are commonly used to check data integrity. A CHF “compresses” arbitrarily long messages (input) into much smaller representations called hash values or message digests (output), designed to prevent the recovery of the original message from the hash value. This paper proposes a chaos-based CHF inspired by an encryption method based on the chaotic CA rule B1357-S2468. Here, we propose a hybrid model that combines CA and networks, called chaotic network automata (CNA), whose chaotic spatio-temporal outputs are used to compute a hash value. Following the Merkle-Damgård construction model, a portion of the message is entered as the initial condition of the network automaton, and the remaining parts of the message are iteratively entered to perturb the system. The chaotic network automaton shuffles the message using flexible control parameters, so that the generated hash value is highly sensitive to the message. As demonstrated in our experiments, the proposed model has excellent pseudo-randomness and sensitivity properties with acceptable performance when compared to conventional hash functions.
Gene function prediction based on Gene Ontology Hierarchy Preserving Hashing.
Zhao, Yingwen; Fu, Guangyuan; Wang, Jun; Guo, Maozu; Yu, Guoxian
2018-02-23
Gene Ontology (GO) uses structured vocabularies (or terms) to describe the molecular functions, biological roles, and cellular locations of gene products in a hierarchical ontology. GO annotations associate genes with GO terms and indicate that the given gene products carry out the biological functions described by the relevant terms. However, predicting correct GO annotations for genes from the massive set of GO terms defined by GO is a difficult challenge. To address this challenge, we introduce a Gene Ontology Hierarchy Preserving Hashing (HPHash) based semantic method for gene function prediction. HPHash first measures the taxonomic similarity between GO terms. It then uses a hierarchy preserving hashing technique to keep the hierarchical order between GO terms and to optimize a series of hashing functions to encode massive GO terms via compact binary codes. After that, HPHash utilizes these hashing functions to project the gene-term association matrix into a low-dimensional one and performs semantic similarity based gene function prediction in the low-dimensional space. Experimental results on three model species (Homo sapiens, Mus musculus and Rattus norvegicus) for interspecies gene function prediction show that HPHash performs better than other related approaches and is robust to the number of hash functions. In addition, we also use HPHash as a plugin for BLAST-based gene function prediction; the experimental results show that HPHash again significantly improves prediction performance. The code for HPHash is available at: http://mlda.swu.edu.cn/codes.php?name=HPHash.
Neighborhood Discriminant Hashing for Large-Scale Image Retrieval.
Tang, Jinhui; Li, Zechao; Wang, Meng; Zhao, Ruizhen
2015-09-01
With the proliferation of large-scale community-contributed images, hashing-based approximate nearest neighbor search in huge databases has aroused considerable interest in the computer vision and multimedia fields in recent years because of its computational and memory efficiency. In this paper, we propose a novel hashing method named neighborhood discriminant hashing (NDH) to implement approximate similarity search. Different from previous work, we propose to learn a discriminant hashing function by exploiting local discriminative information, i.e., the labels of a sample can be inherited from the neighbor samples it selects. The hashing function is expected to be orthogonal to avoid redundancy in the learned hashing bits as much as possible, while an information-theoretic regularization is jointly exploited using the maximum entropy principle. As a consequence, the learned hashing function is compact and nonredundant among bits, while each bit is highly informative. Extensive experiments are carried out on four publicly available data sets, and the comparison results demonstrate the superior performance of the proposed NDH method over state-of-the-art hashing techniques.
Collision attack against Tav-128 hash function
NASA Astrophysics Data System (ADS)
Hariyanto, Fajar; Hayat Susanti, Bety
2017-10-01
Tav-128 is a hash function designed for Radio Frequency Identification (RFID) authentication protocols. Tav-128 is expected to be a cryptographically secure hash function meeting the collision resistance property. In this research, a collision attack is carried out to determine whether Tav-128 is a collision-resistant hash function. The results show that collisions can be obtained in the Tav-128 hash function; in other words, Tav-128 is not collision resistant.
Intrusion detection using secure signatures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Trent Darnel; Haile, Jedediah
A method and device for intrusion detection using secure signatures, comprising capturing network data. A search hash value is generated from the captured network data using a first hash function employing at least one one-way function. The presence of a search hash value match in a secure signature table comprising search hash values and encrypted rules is determined. After determining a search hash value match, a decryption key is generated from the captured network data using a second hash function, different from the first hash function. One or more of the encrypted rules of the secure signature table having a hash value equal to the generated search hash value are then decrypted using the generated decryption key. The one or more decrypted secure signature rules are then processed for a match, and one or more user notifications are deployed if a match is identified.
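The two-hash-function scheme described above can be sketched as follows. This toy uses SHA-256 with distinct prefixes as the two different hash functions and a hash-based XOR keystream as a stand-in for the unspecified cipher; all names and parameters are illustrative assumptions, not the patented implementation.

```python
import hashlib

def search_hash(data: bytes) -> bytes:
    return hashlib.sha256(b"search|" + data).digest()   # first hash function

def key_hash(data: bytes) -> bytes:
    return hashlib.sha256(b"key|" + data).digest()      # second, different hash

def xor_stream(key: bytes, blob: bytes) -> bytes:
    """Toy symmetric XOR keystream (stand-in for an unspecified real cipher)."""
    out = bytearray()
    counter = 0
    while len(out) < len(blob):
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(blob, out))

def build_table(signatures):
    """Signature table: search-hash -> rule encrypted under the key hash."""
    return {search_hash(s): xor_stream(key_hash(s), rule) for s, rule in signatures}

def detect(table, captured: bytes):
    """Look up captured data; on a hit, derive the key and decrypt the rule."""
    enc = table.get(search_hash(captured))
    if enc is None:
        return None
    return xor_stream(key_hash(captured), enc)
```

The point of the construction is that the rule plaintext is recoverable only when the matching network data itself is observed, since the decryption key is derived from that data.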
Learning Discriminative Binary Codes for Large-scale Cross-modal Retrieval.
Xu, Xing; Shen, Fumin; Yang, Yang; Shen, Heng Tao; Li, Xuelong
2017-05-01
Hashing based methods have attracted considerable attention for efficient cross-modal retrieval on large-scale multimedia data. The core problem of cross-modal hashing is how to learn compact binary codes that construct the underlying correlations between heterogeneous features from different modalities. A majority of recent approaches aim at learning hash functions to preserve the pairwise similarities defined by given class labels. However, these methods fail to explicitly explore the discriminative property of class labels during hash function learning. In addition, they usually discard the discrete constraints imposed on the to-be-learned binary codes, and compromise to solve a relaxed problem with quantization to obtain the approximate binary solution. Therefore, the binary codes generated by these methods are suboptimal and less discriminative to different classes. To overcome these drawbacks, we propose a novel cross-modal hashing method, termed discrete cross-modal hashing (DCH), which directly learns discriminative binary codes while retaining the discrete constraints. Specifically, DCH learns modality-specific hash functions for generating unified binary codes, and these binary codes are viewed as representative features for discriminative classification with class labels. An effective discrete optimization algorithm is developed for DCH to jointly learn the modality-specific hash function and the unified binary codes. Extensive experiments on three benchmark data sets highlight the superiority of DCH under various cross-modal scenarios and show its state-of-the-art performance.
Efficient computation of hashes
NASA Astrophysics Data System (ADS)
Lopes, Raul H. C.; Franqueira, Virginia N. L.; Hobson, Peter R.
2014-06-01
The sequential computation of hashes at the core of many distributed storage systems, found for example in grid services, can hinder service quality and even pose security challenges that can only be addressed by the use of parallel hash-tree modes. The main contributions of this paper are, first, the identification of several efficiency and security challenges posed by the use of sequential hash computation based on the Merkle-Damgård engine. In addition, alternatives for the parallel computation of hash trees are discussed, and a prototype of a new parallel implementation of the Keccak function, the SHA-3 winner, is introduced.
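A minimal Merkle hash tree, where independent leaf hashes can be computed in parallel before being folded pairwise into a root, might look like this. This is an illustrative sketch using SHA-256 rather than Keccak; the chunk size and domain-separation prefixes are our own choices.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def merkle_root(data: bytes, chunk_size: int = 1024) -> bytes:
    """Hash fixed-size chunks independently (parallelizable), then fold the
    digests pairwise, Merkle-tree style, into a single root digest."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)] or [b""]
    with ThreadPoolExecutor() as pool:   # leaf hashes have no data dependency
        level = list(pool.map(lambda c: hashlib.sha256(b"\x00" + c).digest(), chunks))
    while len(level) > 1:                # combine pairwise up to the root
        if len(level) % 2:
            level.append(level[-1])      # duplicate the last node on odd levels
        level = [hashlib.sha256(b"\x01" + level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]
```

Unlike a sequential Merkle-Damgård pass, the leaf layer here scales with the number of workers, and the `\x00`/`\x01` prefixes separate leaf hashing from interior-node hashing.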
Text image authenticating algorithm based on MD5-hash function and Henon map
NASA Astrophysics Data System (ADS)
Wei, Jinqiao; Wang, Ying; Ma, Xiaoxue
2017-07-01
In order to meet the evidentiary requirements of text images, this paper proposes a fragile watermarking algorithm based on a Hash function and the Henon map. The algorithm divides a text image into blocks, determines the flippable and non-flippable pixels of every block according to PSD, generates a watermark from the non-flippable pixels with MD5-Hash, encrypts the watermark with the Henon map, and selects the embedding blocks. The simulation results show that the algorithm, which has good tampering-localization ability, can be used to authenticate the authenticity and integrity of text images for forensic purposes.
Robust hashing with local models for approximate similarity search.
Song, Jingkuan; Yang, Yi; Li, Xuelong; Huang, Zi; Yang, Yang
2014-07-01
Similarity search plays an important role in many applications involving high-dimensional data. Due to the well-known curse of dimensionality, the performance of most existing indexing structures degrades quickly as the feature dimensionality increases. Hashing methods, such as locality sensitive hashing (LSH) and its variants, have been widely used to achieve fast approximate similarity search by trading search quality for efficiency. However, most existing hashing methods use randomized algorithms to generate hash codes without considering the specific structural information in the data. In this paper, we propose a novel hashing method, namely, robust hashing with local models (RHLM), which learns a set of robust hash functions to map high-dimensional data points into binary hash codes by effectively utilizing local structural information. In RHLM, for each individual data point in the training dataset, a local hashing model is learned and used to predict the hash codes of its neighboring data points. The local models from all the data points are globally aligned so that an optimal hash code can be assigned to each data point. After obtaining the hash codes of all the training data points, we design a robust method that employs l2,1-norm minimization on the loss function to learn effective hash functions, which are then used to map each database point into its hash code. Given a query data point, the search process first maps it into the query hash code using the hash functions and then explores the buckets that have hash codes similar to the query hash code. Extensive experimental results conducted on real-life datasets show that the proposed RHLM outperforms the state-of-the-art methods in terms of search quality and efficiency.
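For contrast with learned methods such as RHLM, the classic randomized baseline the abstract mentions, locality sensitive hashing with random hyperplanes, fits in a few lines (an illustrative sketch; the function names are our own):

```python
import numpy as np

def lsh_planes(dim: int, n_bits: int, seed: int = 0):
    """Random hyperplanes through the origin define the hash bits."""
    return np.random.default_rng(seed).normal(size=(n_bits, dim))

def lsh_code(x, planes):
    """Bit k records which side of hyperplane k the point falls on."""
    return (planes @ x > 0).astype(np.uint8)

def hamming(a, b) -> int:
    return int(np.sum(a != b))
```

Nearby points fall on the same side of most random hyperplanes, so their codes have small Hamming distance with high probability, which is exactly the locality-sensitivity property the learned methods try to improve on.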
Multiview alignment hashing for efficient image search.
Liu, Li; Yu, Mengyang; Shao, Ling
2015-03-01
Hashing is a popular and efficient method for nearest neighbor search in large-scale data spaces; it embeds high-dimensional feature descriptors into a similarity-preserving Hamming space of low dimension. For most hashing methods, retrieval performance heavily depends on the choice of the high-dimensional feature descriptor. Furthermore, a single type of feature cannot be descriptive enough for different images when it is used for hashing. Thus, how to combine multiple representations for learning effective hashing functions is a pressing task. In this paper, we present a novel unsupervised multiview alignment hashing approach based on regularized kernel nonnegative matrix factorization, which can find a compact representation uncovering the hidden semantics and simultaneously respecting the joint probability distribution of the data. In particular, we seek a matrix factorization that effectively fuses the multiple information sources while discarding feature redundancy. Since the raised problem is nonconvex and discrete, our objective function is optimized via an alternating relaxation scheme and converges to a locally optimal solution. After finding the low-dimensional representation, the hashing functions are finally obtained through multivariable logistic regression. The proposed method is systematically evaluated on three data sets: 1) Caltech-256; 2) CIFAR-10; and 3) CIFAR-20, and the results show that our method significantly outperforms the state-of-the-art multiview hashing techniques.
On the concept of cryptographic quantum hashing
NASA Astrophysics Data System (ADS)
Ablayev, F.; Ablayev, M.
2015-12-01
In this Letter we define the notion of a quantum resistant ((ε, δ)-resistant) hash function, which consists of a combination of pre-image (one-way) resistance (ε-resistance) and collision resistance (δ-resistance) properties. We present examples and discussion that support the idea of quantum hashing. We present an explicit quantum hash function which is ‘balanced’, one-way resistant and collision resistant, and demonstrate how to build a large family of quantum hash functions. Balanced quantum hash functions need a high degree of entanglement between the qubits. We use a phase transformation technique to express quantum hashing constructions, which is an effective way of mapping hash states to coherent states in a superposition of time-bin modes. The phase transformation technique is ready to be implemented with current optical technology.
NASA Astrophysics Data System (ADS)
Yang, Yu-Guang; Xu, Peng; Yang, Rui; Zhou, Yi-Hua; Shi, Wei-Min
2016-01-01
Quantum information and quantum computation have achieved huge success in recent years. In this paper, we investigate the capability of the quantum Hash function, which can be constructed by subtly modifying quantum walks, a famous quantum computation model. It is found that the quantum Hash function can act as a hash function for the privacy amplification process of quantum key distribution systems, with higher security. As a byproduct, the quantum Hash function can also be used for pseudo-random number generation due to its inherent chaotic dynamics. Further, we discuss the application of the quantum Hash function to image encryption and propose a novel image encryption algorithm. Numerical simulations and performance comparisons show that the quantum Hash function is eligible for privacy amplification in quantum key distribution, pseudo-random number generation and image encryption in terms of various hash tests and randomness tests. This extends the scope of application of quantum computation and quantum information.
Image encryption algorithm based on multiple mixed hash functions and cyclic shift
NASA Astrophysics Data System (ADS)
Wang, Xingyuan; Zhu, Xiaoqiang; Wu, Xiangjun; Zhang, Yingqian
2018-08-01
This paper proposes a new one-time-pad scheme for chaotic image encryption that is based on multiple mixed hash functions and a cyclic-shift function. The initial value is generated using both information from the plaintext image and the chaotic sequences, which are calculated from the SHA1 and MD5 hash algorithms. The scrambling sequences are generated by nonlinear equations and the logistic map. This paper aims to remedy the deficiencies of the traditional Baptista algorithm and its improved variants. We employ the cyclic-shift function and piecewise linear chaotic maps (PWLCM), which give each shift number the characteristics of chaos, to diffuse the image. Experimental results and security analysis show that the new scheme has better security and can resist common attacks.
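The core mechanics, seeding a logistic map from SHA1/MD5 digests and using its orbit to drive XOR and cyclic-shift diffusion, can be sketched as below. This is a simplified illustration, not the authors' scheme: here the map is seeded from the key alone so that decryption is self-contained, whereas the paper also mixes plaintext information into the initial value.

```python
import hashlib

def rotl8(b: int, n: int) -> int:
    """Cyclic left shift of one byte."""
    n &= 7
    return ((b << n) | (b >> (8 - n))) & 0xFF

def _seed(key: bytes) -> float:
    """Map SHA1+MD5 digests of the key into an initial value in (0, 1)."""
    digest = hashlib.sha1(key).digest() + hashlib.md5(key).digest()
    return (int.from_bytes(digest[:8], "big") % 10**8) / 10**8 * 0.998 + 0.001

def encrypt(plain: bytes, key: bytes) -> bytes:
    x = _seed(key)
    out = bytearray()
    for b in plain:
        x = 3.9999 * x * (1 - x)       # logistic map iteration
        k = int(x * 256) & 0xFF        # chaotic keystream byte
        s = int(x * 8) & 7             # chaotic shift amount
        out.append(rotl8(b ^ k, s))    # XOR diffusion, then cyclic shift
    return bytes(out)

def decrypt(cipher: bytes, key: bytes) -> bytes:
    x = _seed(key)
    out = bytearray()
    for c in cipher:
        x = 3.9999 * x * (1 - x)
        k = int(x * 256) & 0xFF
        s = int(x * 8) & 7
        out.append(rotl8(c, 8 - s) ^ k)  # undo the shift, then undo the XOR
    return bytes(out)
```

Both sides regenerate the same chaotic orbit from the key, so decryption simply inverts each per-byte operation in reverse order.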
Query-Adaptive Reciprocal Hash Tables for Nearest Neighbor Search.
Liu, Xianglong; Deng, Cheng; Lang, Bo; Tao, Dacheng; Li, Xuelong
2016-02-01
Recent years have witnessed the success of binary hashing techniques in approximate nearest neighbor search. In practice, multiple hash tables are usually built using hashing to cover more desired results in the hit buckets of each table. However, rare work studies the unified approach to constructing multiple informative hash tables using any type of hashing algorithms. Meanwhile, for multiple table search, it also lacks of a generic query-adaptive and fine-grained ranking scheme that can alleviate the binary quantization loss suffered in the standard hashing techniques. To solve the above problems, in this paper, we first regard the table construction as a selection problem over a set of candidate hash functions. With the graph representation of the function set, we propose an efficient solution that sequentially applies normalized dominant set to finding the most informative and independent hash functions for each table. To further reduce the redundancy between tables, we explore the reciprocal hash tables in a boosting manner, where the hash function graph is updated with high weights emphasized on the misclassified neighbor pairs of previous hash tables. To refine the ranking of the retrieved buckets within a certain Hamming radius from the query, we propose a query-adaptive bitwise weighting scheme to enable fine-grained bucket ranking in each hash table, exploiting the discriminative power of its hash functions and their complement for nearest neighbor search. Moreover, we integrate such scheme into the multiple table search using a fast, yet reciprocal table lookup algorithm within the adaptive weighted Hamming radius. In this paper, both the construction method and the query-adaptive search method are general and compatible with different types of hashing algorithms using different feature spaces and/or parameter settings. 
Our extensive experiments on several large-scale benchmarks demonstrate that the proposed techniques can significantly outperform both the naive construction methods and the state-of-the-art hashing algorithms.
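The query-adaptive bitwise weighting described above can be sketched minimally as follows; the codes and weights here are toy values invented for illustration, whereas the paper learns the weights per query from the discriminative power of the hash functions:

```python
def weighted_hamming(query_bits, item_bits, weights):
    """Bits where query and item disagree contribute their per-bit weight
    instead of a flat count of 1, enabling fine-grained bucket ranking."""
    return sum(w for q, b, w in zip(query_bits, item_bits, weights) if q != b)

# toy example: 4-bit codes; bit 0 is assumed most discriminative for this query
query   = [1, 0, 1, 1]
item_a  = [1, 0, 0, 1]                 # differs only in the low-weight bit 2
item_b  = [0, 0, 1, 1]                 # differs only in the high-weight bit 0
weights = [2.0, 0.5, 0.5, 0.5]

ranked = sorted([("a", item_a), ("b", item_b)],
                key=lambda kv: weighted_hamming(query, kv[1], weights))
print([name for name, _ in ranked])    # item a ranks first (0.5 < 2.0)
```

Under plain Hamming distance both items would tie at distance 1; the bitwise weighting breaks the tie in favor of the item that disagrees only on a weakly discriminative bit.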
Computing quantum hashing in the model of quantum branching programs
NASA Astrophysics Data System (ADS)
Ablayev, Farid; Ablayev, Marat; Vasiliev, Alexander
2018-02-01
We investigate the branching program complexity of quantum hashing. We consider a quantum hash function that maps elements of a finite field into quantum states. We require that this function be preimage-resistant and collision-resistant. We consider two complexity measures for Quantum Branching Programs (QBP): the number of qubits and the number of computational steps. We show that the quantum hash function can be computed efficiently. Moreover, we prove that such a QBP construction is optimal. That is, we prove lower bounds that match the constructed quantum hash function computation.
Distributed Adaptive Binary Quantization for Fast Nearest Neighbor Search.
Xianglong Liu; Zhujin Li; Cheng Deng; Dacheng Tao
2017-11-01
Hashing has proved to be an attractive technique for fast nearest neighbor search over big data. Compared with projection-based hashing methods, prototype-based ones have stronger power to generate discriminative binary codes for data with complex intrinsic structure. However, existing prototype-based methods, such as spherical hashing and K-means hashing, still suffer from ineffective coding that utilizes the complete set of binary codes in a hypercube. To address this problem, we propose an adaptive binary quantization (ABQ) method that learns a discriminative hash function with prototypes associated with small unique binary codes. Our alternating optimization adaptively discovers the prototype set and the code set of a varying size in an efficient way, which together robustly approximate the data relations. Our method can be naturally generalized to the product space for long hash codes, and enjoys training time linear in the number of training data. We further devise a distributed framework for large-scale learning, which can significantly speed up the training of ABQ in the distributed environments that are now widely deployed in many areas. Extensive experiments on four large-scale (up to 80 million) data sets demonstrate that our method significantly outperforms state-of-the-art hashing methods, with relative performance gains of up to 58.84%.
Secure Image Hash Comparison for Warhead Verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruillard, Paul J.; Jarman, Kenneth D.; Robinson, Sean M.
2014-06-06
The effort to inspect and verify warheads in the context of possible future arms control treaties is rife with security and implementation issues. In this paper we review prior work on perceptual image hashing for template-based warhead verification. Furthermore, we formalize the notion of perceptual hashes and demonstrate that large classes of such functions are likely not cryptographically secure. We close with a brief discussion of fully homomorphic encryption as an alternative technique.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Draelos, Timothy John; Dautenhahn, Nathan; Schroeppel, Richard Crabtree
The security of the widely-used cryptographic hash function SHA1 has been impugned. We have developed two replacement hash functions. The first, SHA1X, is a drop-in replacement for SHA1. The second, SANDstorm, has been submitted as a candidate to the NIST-sponsored SHA3 Hash Function competition.
Implementation of cryptographic hash function SHA256 in C++
NASA Astrophysics Data System (ADS)
Shrivastava, Akash
2012-02-01
This abstract describes an implementation of the Secure Hash Algorithm SHA-256 in C++. SHA-2 is a strong hashing algorithm used in almost all kinds of security applications. The algorithm consists of two phases: preprocessing and hash computation. Preprocessing involves padding a message, parsing the padded message into m-bit blocks, and setting the initialization values to be used in the hash computation. The computation generates a message schedule from the padded message and uses that schedule, along with functions, constants, and word operations, to iteratively generate a series of hash values. The final hash value generated by the computation is the message digest. SHA-2 includes a significant number of changes from its predecessor, SHA-1, and consists of a set of four hash functions with digests of 224, 256, 384, or 512 bits. SHA-256 outputs a 256-bit digest, with an internal state of 256 bits and a block size of 512 bits. The maximum message length is 2^64 - 1 bits, and the digest is computed over a series of 64 rounds consisting of operations such as AND, OR, XOR, SHR, and ROT. The code provides a clear understanding of the hash algorithm and generates hash values yielding the message digest.
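The preprocessing (padding) phase described above can be sketched in a few lines. This follows the FIPS 180-4 block layout, and the final digest is checked against a library implementation rather than re-deriving the 64 compression rounds; the sketch is ours, not taken from the C++ implementation in the abstract:

```python
import hashlib

def sha256_pad(message: bytes) -> bytes:
    """Pad a message to a multiple of 512 bits (64 bytes), per FIPS 180-4."""
    length_bits = len(message) * 8
    padded = message + b"\x80"                          # a single 1 bit, then zeros
    padded += b"\x00" * ((56 - len(padded) % 64) % 64)  # fill up to 56 bytes mod 64
    padded += length_bits.to_bytes(8, "big")            # 64-bit big-endian length
    return padded

msg = b"abc"
padded = sha256_pad(msg)
assert len(padded) % 64 == 0            # parses cleanly into 512-bit blocks
print(hashlib.sha256(msg).hexdigest())  # the 256-bit (32-byte) message digest
```

For the 3-byte message "abc", padding yields exactly one 512-bit block: 3 message bytes, the 0x80 marker, 52 zero bytes, and the 8-byte length field.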
Hierarchical Recurrent Neural Hashing for Image Retrieval With Hierarchical Convolutional Features.
Lu, Xiaoqiang; Chen, Yaxiong; Li, Xuelong
Hashing has been an important and effective technology in image retrieval due to its computational efficiency and fast search speed. Traditional hashing methods usually learn hash functions to obtain binary codes by exploiting hand-crafted features, which cannot optimally represent the information in the sample. Recently, deep learning methods have achieved better performance, since deep architectures can learn more effective image representation features. However, these methods only use semantic features to generate hash codes by shallow projection and ignore texture details. In this paper, we propose a novel hashing method, namely hierarchical recurrent neural hashing (HRNH), which exploits a hierarchical recurrent neural network to generate effective hash codes. There are three contributions in this paper. First, a deep hashing method is proposed to extensively exploit both spatial details and semantic information, in which we leverage hierarchical convolutional features to construct an image pyramid representation. Second, our proposed deep network can directly exploit convolutional feature maps as input to preserve their spatial structure. Finally, we propose a new loss function that considers the quantization error of binarizing the continuous embeddings into discrete binary codes, and simultaneously maintains the semantic similarity and balanceable property of the hash codes. Experimental results on four widely used data sets demonstrate that the proposed HRNH achieves superior performance over other state-of-the-art hashing methods.
Building Application-Related Patient Identifiers: What Solution for a European Country?
Quantin, Catherine; Allaert, François-André; Avillach, Paul; Fassa, Maniane; Riandey, Benoît; Trouessin, Gilles; Cohen, Olivier
2008-01-01
We propose a method utilizing a derived social security number with the same reliability as the social security number. We show that anonymization techniques classically based on one-way hash functions (such as the Secure Hash Algorithm SHA-2) can guarantee the security, quality, and reliability of information when applied to the social security number. Hashing produces a strictly anonymous code that is always the same for a given individual, and thus enables patient data to be linked. Different solutions are developed and proposed in this article. Hashing the social security number will make it possible to link the information in the personal medical file to other national health information sources with the aim of completing or validating the personal medical record or conducting epidemiological and clinical research. This data linkage would meet the anonymous data requirements of the European directive on data protection. PMID:18401447
On the balanced quantum hashing
NASA Astrophysics Data System (ADS)
Ablayev, F.; Ablayev, M.; Vasiliev, A.
2016-02-01
In the paper we define a notion of a resistant quantum hash function which combines a notion of pre-image (one-way) resistance and the notion of collision resistance. In the quantum setting one-way resistance property and collision resistance property are correlated: the “more” a quantum function is one-way resistant the “less” it is collision resistant and vice versa. We present an explicit quantum hash function which is “balanced” one-way resistant and collision resistant and demonstrate how to build a large family of balanced quantum hash functions.
Query-Adaptive Hash Code Ranking for Large-Scale Multi-View Visual Search.
Liu, Xianglong; Huang, Lei; Deng, Cheng; Lang, Bo; Tao, Dacheng
2016-10-01
Hash-based nearest neighbor search has become attractive in many applications. However, the quantization in hashing usually degrades the discriminative power when using Hamming distance ranking. Besides, for large-scale visual search, existing hashing methods cannot directly support efficient search over data with multiple sources, while the literature has shown that adaptively incorporating complementary information from diverse sources or views can significantly boost search performance. To address these problems, this paper proposes a novel and generic approach to building multiple hash tables with multiple views and generating fine-grained ranking results at the bitwise and tablewise levels. For each hash table, a query-adaptive bitwise weighting is introduced to alleviate the quantization loss by simultaneously exploiting the quality of the hash functions and their complement for nearest neighbor search. From the tablewise aspect, multiple hash tables are built for different data views as a joint index, over which a query-specific rank fusion is proposed to rerank all results from the bitwise ranking by diffusing in a graph. Comprehensive experiments on image search over three well-known benchmarks show that the proposed method achieves up to 17.11% and 20.28% performance gains on single- and multiple-table search over the state-of-the-art methods.
Application of kernel functions for accurate similarity search in large chemical databases.
Wang, Xiaohong; Huan, Jun; Smalter, Aaron; Lushington, Gerald H
2010-04-29
Similarity search in chemical structure databases is an important problem with many applications in chemical genomics, drug design, and efficient chemical probe screening, among others. It is widely believed that structure-based methods provide an efficient way to perform such queries. Recently, various graph kernel functions have been designed to capture the intrinsic similarity of graphs. Though successful in constructing accurate predictive and classification models, graph kernel functions cannot be applied to large chemical compound databases due to their high computational complexity and the difficulty of indexing similarity search for large databases. To bridge graph kernel functions and similarity search in chemical databases, we applied a novel kernel-based similarity measurement, developed in our team, to measure the similarity of graph-represented chemicals. In our method, we utilize a hash table to support the new graph kernel function definition, efficient storage, and fast search. We have applied our method, named G-hash, to large chemical databases. Our results show that G-hash achieves state-of-the-art performance for k-nearest neighbor (k-NN) classification. Moreover, the similarity measurement and the index structure are scalable to large chemical databases, with smaller indexing size and faster query processing time compared to state-of-the-art indexing methods such as Daylight fingerprints, C-tree, and GraphGrep. Efficient similarity query processing for large chemical databases is challenging, since we need to balance running-time efficiency against similarity search accuracy. Our previous similarity search method, G-hash, provides a new way to perform similarity search in chemical databases. An experimental study validates the utility of G-hash on chemical databases.
Quantin, C; Fassa, M; Coatrieux, G; Riandey, B; Trouessin, G; Allaert, F A
2009-02-01
Compiling individual records that come from different sources remains very important for multicenter epidemiological studies, but at the same time European directives and other national legislation concerning nominal data processing have to be respected. These legal requirements can be satisfied by implementing mechanisms that allow anonymization of patient data (such as hashing techniques). Moreover, for security reasons, official recommendations suggest using different cryptographic keys in combination with a cryptographic hash function for each study. Unfortunately, such an anonymization procedure contradicts a common requirement in public health and biomedical research, as it becomes almost impossible to link records from separate data collections in which the same entity is not referenced in the same way. Solving this paradox with a methodology based on the combination of hashing and enciphering techniques is the main aim of this article. The method relies on one of the best-known hashing functions (the Secure Hash Algorithm) to ensure the anonymity of personal information while providing greater resistance to dictionary attacks, combined with encryption techniques. The originality of the method lies in the way the hashing and enciphering techniques are combined: as in asymmetric encryption, two keys are used, but the private key depends on the patient's identity. The combination of hashing and enciphering techniques provides a great improvement in the overall security of the proposed scheme. This methodology makes the stored data available for use in the field of public health for the benefit of patients, while respecting legal security requirements.
SGFSC: speeding the gene functional similarity calculation based on hash tables.
Tian, Zhen; Wang, Chunyu; Guo, Maozu; Liu, Xiaoyan; Teng, Zhixia
2016-11-04
In recent years, many measures of gene functional similarity have been proposed and widely used in all kinds of essential research. These methods are mainly divided into two categories: pairwise approaches and group-wise approaches. However, a common problem with these methods is their time consumption, especially when measuring the gene functional similarities of a large number of gene pairs. The problem of computational efficiency for pairwise approaches is even more prominent because they depend on combining semantic similarities. Therefore, the efficient measurement of gene functional similarity remains a challenging problem. To speed up current gene functional similarity calculation methods, a novel two-step computing strategy is proposed: (1) establish a hash table for each method to store essential information obtained from the Gene Ontology (GO) graph and (2) measure gene functional similarity based on the corresponding hash table. With the help of the hash table, there is no need to traverse the GO graph repeatedly for each method. The analysis of time complexity shows that the computational efficiency of these methods is significantly improved. We also implement a novel Speeding Gene Functional Similarity Calculation tool, namely SGFSC, which is bundled with seven typical measures using our proposed strategy. Further experiments show the great advantage of SGFSC in measuring gene functional similarity on the whole genomic scale. The proposed strategy is successful in speeding up current gene functional similarity calculation methods. SGFSC is an efficient tool that is freely available at http://nclab.hit.edu.cn/SGFSC . The source code of SGFSC can be downloaded from http://pan.baidu.com/s/1dFFmvpZ .
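The two-step strategy amounts to memoizing GO-graph traversal results in a hash table before any pairwise similarity is computed. A minimal, hypothetical sketch follows; the real SGFSC tables store method-specific semantic values, not the toy ancestor sets and DAG used here:

```python
# toy GO-like DAG: term -> list of parent terms
parents = {"t4": ["t2", "t3"], "t3": ["t1"], "t2": ["t1"], "t1": []}

def ancestors(term, table={}):
    """Step 1: cache each term's ancestor set in a hash table (dict),
    so the graph is walked at most once per term."""
    if term not in table:
        result = set()
        for p in parents[term]:
            result |= {p} | ancestors(p, table)
        table[term] = result
    return table[term]

def term_similarity(a, b):
    """Step 2: a simple shared-ancestor similarity computed purely
    from table lookups, with no repeated graph traversal."""
    sa, sb = ancestors(a) | {a}, ancestors(b) | {b}
    return len(sa & sb) / len(sa | sb)

print(term_similarity("t2", "t3"))  # shared: {t1}; union: {t1,t2,t3} -> 1/3
```

The shared mutable default argument serves as the per-method hash table: after the first call for a term, every later similarity query is a constant-time lookup.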
Fast perceptual image hash based on cascade algorithm
NASA Astrophysics Data System (ADS)
Ruchay, Alexey; Kober, Vitaly; Yavtushenko, Evgeniya
2017-09-01
In this paper, we propose a perceptual image hash algorithm based on a cascade algorithm, which can be applied to image authentication, retrieval, and indexing. A perceptual image hash is used for image retrieval in the sense of human perception, remaining robust against distortions caused by compression, noise, common signal processing, and geometric modifications. The main disadvantage of perceptual hashing is its high time cost. The proposed cascade algorithm initializes image retrieval with short hashes, and then a full hash is applied to the filtered results. Computer simulation results show that the proposed hash algorithm yields good performance in terms of robustness, discriminability, and time cost.
Self-Organized Link State Aware Routing for Multiple Mobile Agents in Wireless Network
NASA Astrophysics Data System (ADS)
Oda, Akihiro; Nishi, Hiroaki
Recently, the importance of data-sharing structures in autonomous distributed networks has been increasing. A wireless sensor network is used for managing distributed data. This type of distributed network requires effective information exchange methods for data sharing. To reduce the traffic of broadcast messages, reducing the amount of redundant information is indispensable. In order to reduce packet loss in mobile ad-hoc networks, QoS-sensitive routing algorithms have been frequently discussed. The topology of a wireless network is likely to change frequently according to the movement of mobile nodes, radio disturbance, or fading due to continuous changes in the environment. Therefore, a packet routing algorithm should guarantee QoS by using quality indicators of the wireless network. In this paper, a novel information exchange algorithm using a hash function and a Boolean operation is proposed. This algorithm achieves efficient information exchange by reducing the overhead of broadcast messages, and it can guarantee QoS in a wireless network environment. It can be applied to a routing algorithm in a mobile ad-hoc network. In the proposed routing algorithm, a routing table is constructed by using the received signal strength indicator (RSSI), and the neighborhood information is periodically broadcast depending on this table. The proposed hash-based routing entry management using an extended MAC address can eliminate the overhead of message flooding. An analysis of hash-value collisions determines the minimally required length of the hash values. Based on this mathematical analysis, an optimum hash function for the required hash-value length can be given. Simulations are carried out to evaluate the effectiveness of the proposed algorithm and to validate the theory in a general wireless network routing algorithm.
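The hash-and-Boolean-operation exchange described above resembles a Bloom filter: each node ORs hashed neighbor identifiers into a fixed-length bit vector, so neighborhood summaries can be broadcast and merged compactly. A rough sketch under that assumption; the vector length, hash count, and node IDs are illustrative, not taken from the paper:

```python
import hashlib

M = 256  # bit-vector length; the paper derives the minimally required length
K = 3    # hash positions per entry

def positions(node_id: str):
    """Derive K bit positions from a hash of the (extended) node ID."""
    digest = hashlib.sha256(node_id.encode()).digest()
    return [int.from_bytes(digest[2*i:2*i+2], "big") % M for i in range(K)]

def add(vector: int, node_id: str) -> int:
    """Fold an entry into the summary with a Boolean OR."""
    for pos in positions(node_id):
        vector |= 1 << pos
    return vector

def maybe_contains(vector: int, node_id: str) -> bool:
    """No false negatives; false positives with small probability."""
    return all(vector >> pos & 1 for pos in positions(node_id))

summary = 0
for neighbor in ["node-a", "node-b", "node-c"]:
    summary = add(summary, neighbor)

print(maybe_contains(summary, "node-a"))   # True
print(maybe_contains(summary, "node-z"))   # False with high probability
```

Merging two nodes' summaries is a single OR of the integers, which is what makes this style of exchange cheap to broadcast and combine.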
Multimodal Discriminative Binary Embedding for Large-Scale Cross-Modal Retrieval.
Wang, Di; Gao, Xinbo; Wang, Xiumei; He, Lihuo; Yuan, Bo
2016-10-01
Multimodal hashing, which conducts effective and efficient nearest neighbor search across heterogeneous data in large-scale multimedia databases, has been attracting increasing interest, given the explosive growth of multimedia content on the Internet. Recent multimodal hashing research mainly aims at learning compact binary codes that preserve the semantic information given by labels. The overwhelming majority of these methods are similarity-preserving approaches which approximate the pairwise similarity matrix with Hamming distances between the to-be-learnt binary hash codes. However, these methods ignore the discriminative property in the hash learning process, which leaves hash codes from different classes indistinguishable and therefore reduces the accuracy and robustness of nearest neighbor search. To this end, we present a novel multimodal hashing method, named multimodal discriminative binary embedding (MDBE), which focuses on learning discriminative hash codes. First, the proposed method formulates hash function learning in terms of classification, where the binary codes generated by the learned hash functions are expected to be discriminative. It then exploits the label information to discover the shared structures inside the heterogeneous data. Finally, the learned structures are preserved in the hash codes to produce similar binary codes within the same class. Hence, the proposed MDBE can preserve both discriminability and similarity for hash codes, enhancing retrieval accuracy. Thorough experiments on benchmark data sets demonstrate that the proposed method achieves excellent accuracy and competitive computational efficiency compared with state-of-the-art methods for the large-scale cross-modal retrieval task.
Feature hashing for fast image retrieval
NASA Astrophysics Data System (ADS)
Yan, Lingyu; Fu, Jiarun; Zhang, Hongxin; Yuan, Lu; Xu, Hui
2018-03-01
Current research on content-based image retrieval mainly focuses on robust feature extraction. However, due to the exponential growth of online images, it is necessary to consider searching among large-scale image collections, which is very time-consuming and unscalable, so the efficiency of image retrieval deserves much attention. In this paper, we propose a feature hashing method for image retrieval which not only generates compact fingerprints for image representation, but also prevents large semantic loss during the hashing process. To generate the fingerprint, an objective function of semantic loss is constructed and minimized, combining the influence of both the neighborhood structure of the feature data and the mapping error. Since machine-learning-based hashing effectively preserves the neighborhood structure of the data, it yields visual words with strong discriminability. Furthermore, the generated binary codes make image representation low-complexity, rendering it efficient and scalable to large-scale databases. Experimental results show the good performance of our approach.
The Amordad database engine for metagenomics.
Behnam, Ehsan; Smith, Andrew D
2014-10-15
Several technical challenges in metagenomic data analysis, including assembling metagenomic sequence data or identifying operational taxonomic units, are both significant and well known. These forms of analysis are increasingly cited as conceptually flawed, given the extreme variation within traditionally defined species and rampant horizontal gene transfer. Furthermore, the computational requirements of such analysis have hindered content-based organization of metagenomic data at large scale. In this article, we introduce the Amordad database engine for alignment-free, content-based indexing of metagenomic datasets. Amordad places the metagenome comparison problem in a geometric context, and uses an indexing strategy that combines random hashing with a regular nearest neighbor graph. This framework allows refinement of the database over time by continual application of random hash functions, with the effect of each hash function encoded in the nearest neighbor graph. This eliminates the need to explicitly maintain the hash functions in order for query efficiency to benefit from the accumulated randomness. Results on real and simulated data show that Amordad can support logarithmic query time for identifying similar metagenomes even as the database size reaches into the millions. Source code, licensed under the GNU General Public License (version 3), is freely available for download from http://smithlabresearch.org/amordad. Contact: andrewds@usc.edu. Supplementary data are available at Bioinformatics online.
Li, Chun-Ta; Weng, Chi-Yao; Lee, Cheng-Chi; Wang, Chun-Cheng
2015-11-01
To protect patient privacy and ensure authorized access to remote medical services, many remote user authentication schemes for the integrated electronic patient record (EPR) information system have been proposed in the literature. In a recent paper, Das proposed a hash-based remote user authentication scheme using passwords and smart cards for the integrated EPR information system, and claimed that the proposed scheme could resist various passive and active attacks. However, in this paper, we show that Das's authentication scheme is still vulnerable to modification and user duplication attacks. We therefore propose a secure and efficient authentication scheme for the integrated EPR information system based on a lightweight hash function and bitwise exclusive-or (XOR) operations. The security proof and performance analysis show that our new scheme is well suited to adoption in remote medical healthcare services.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Randy R.; Bass, Robert B.; Kouzes, Richard T.
2003-01-20
This paper provides a brief overview of the implementation of the Advanced Encryption Standard (AES) as a hash function for confirming the identity of software resident on a computer system. The PNNL Software Authentication team chose to use a hash function to confirm software identity on a system for situations where: (1) there is limited time to perform the confirmation and (2) access to the system is restricted to keyboard or thumbwheel input, with output displayable only on a monitor. PNNL reviewed three popular algorithms: the Secure Hash Algorithm-1 (SHA-1), the Message Digest-5 (MD-5), and the Advanced Encryption Standard (AES), and selected the AES to incorporate in the software confirmation tool we developed. This paper gives a brief overview of SHA-1, MD-5, and the AES and cites references for further detail. It then explains the overall processing steps of the AES used to reduce a large amount of generic data (the plain text, such as is present in memory and other data storage media in a computer system) to a small amount of data (the hash digest), a mathematically unique representation or signature of the former that can be displayed on a computer's monitor. This paper starts with a simple definition and example to illustrate the use of a hash function. It concludes with a description of how the software confirmation tool uses the hash function to confirm the identity of software on a computer system.
Model-based vision using geometric hashing
NASA Astrophysics Data System (ADS)
Akerman, Alexander, III; Patton, Ronald
1991-04-01
The geometric hashing technique developed by the NYU Courant Institute has been applied to various automatic target recognition applications. In particular, I-MATH has extended the hashing algorithm to perform automatic target recognition of synthetic aperture radar (SAR) imagery. For this application, the hashing is performed on the geometric locations of dominant scatterers. In addition to being a robust model-based matching algorithm, invariant under translation, scale, and 3D rotations of the target, hashing is of particular utility because it can still perform effective matching when the target is partially obscured. Moreover, hashing is very amenable to a SIMD parallel processing architecture, and is thus potentially implementable in real time.
A Study on the Secure User Profiling Structure and Procedure for Home Healthcare Systems.
Ko, Hoon; Song, MoonBae
2016-01-01
Despite various benefits such as convenience and efficiency, home healthcare systems have some inherent security risks that may cause a serious leak of personal health information. This work presents a secure user profiling structure which holds patient information, including health information. A patient and a hospital both keep the profile and share its updated data, and the data can be leaked while they communicate. To solve these security problems, a secure communication channel using a hash function and a One-Time Password (OTP) should be established between a client and a hospital; to generate the input value for the OTP, a dual hash function is used. This work presents a dual hash function-based approach to generating the One-Time Password, ensuring a secure communication channel with a secured key. As a result, attackers are unable to decrypt leaked information because of the secured key; in addition, the proposed method outperforms existing methods in terms of computation cost.
Implementation of 4-way Superscalar Hash MIPS Processor Using FPGA
NASA Astrophysics Data System (ADS)
Sahib Omran, Safaa; Fouad Jumma, Laith
2018-05-01
Due to the quick advancements in personal communications systems and wireless communications, providing data security has become a more essential subject. This security concern becomes more complicated when next-generation system requirements and real-time computation speed are considered. Hash functions are among the most essential cryptographic primitives and are utilized in many fields for signature authentication and communication integrity. These functions are used to acquire a fixed-size unique fingerprint, or hash value, of an arbitrary-length message. In this paper, Secure Hash Algorithms (SHA) of types SHA-1, SHA-2 (SHA-224, SHA-256), and SHA-3 (BLAKE) are implemented on a Field-Programmable Gate Array (FPGA) in a processor structure. The design is described and implemented using a hardware description language, namely VHSIC "Very High Speed Integrated Circuit" Hardware Description Language (VHDL). Since the logical operations of the hash types (SHA-1, SHA-224, SHA-256, and SHA-3) are 32-bit, a superscalar Hash Microprocessor without Interlocked Pipelines (MIPS) processor is designed with only the few instructions required to invoke the desired hash algorithms. When the four hash algorithms are executed sequentially using the designed processor, the total time required is approximately 342 us, with a throughput of 4.8 Mbps, while the time required to execute the same four hash algorithms using the designed four-way superscalar processor is reduced to 237 us, improving the throughput to 5.1 Mbps.
Study of the similarity function in Indexing-First-One hashing
NASA Astrophysics Data System (ADS)
Lai, Y.-L.; Jin, Z.; Goi, B.-M.; Chai, T.-Y.
2017-06-01
The recently proposed Indexing-First-One (IFO) hashing is a technique particularly adopted for eye iris template protection, i.e. IrisCode. However, the measure of Jaccard similarity (JS), which IFO inherits from Min-hashing, has not yet been adequately discussed. In this paper, we explore the nature of JS in the binary domain and further propose a mathematical formulation to generalize the usage of JS, which is subsequently verified using the CASIA v3-Interval iris database. Our study reveals that the JS applied in IFO hashing is a generalized version of the measure between two input objects in Min-hashing, where the coefficient of JS is equal to one. With this understanding, IFO hashing can propagate the useful properties of Min-hashing, i.e. similarity preservation, and is thus favorable for similarity search or recognition in binary space.
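As background for the generalization discussed above, the baseline relationship between Jaccard similarity and Min-hashing (the fraction of agreeing signature components estimates JS) can be sketched as follows. This is plain MinHash with a universal hash family, not the IFO construction itself, and the sets and parameters are illustrative:

```python
import random

def jaccard(a: set, b: set) -> float:
    """Exact Jaccard similarity of two sets."""
    return len(a & b) / len(a | b)

def minhash_signature(s, num_hashes=200, seed=42):
    """Per hash function, keep the minimum of h(x) = (a*x + b) mod p over the set."""
    p = 2**31 - 1
    rng = random.Random(seed)
    params = [(rng.randrange(1, p), rng.randrange(p)) for _ in range(num_hashes)]
    return [min((a * x + b) % p for x in s) for a, b in params]

def estimate(sig_a, sig_b):
    """Fraction of agreeing components is an unbiased estimate of JS."""
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)

A, B = set(range(60)), set(range(30, 90))   # overlap 30, union 90 -> JS = 1/3
print(round(jaccard(A, B), 3))              # 0.333
print(estimate(minhash_signature(A), minhash_signature(B)))  # close to 1/3
```

Each MinHash component agrees exactly when the minimizing element lands in the intersection, which happens with probability equal to the Jaccard similarity; averaging over many components concentrates the estimate around that value.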
Hu, Weiming; Fan, Yabo; Xing, Junliang; Sun, Liang; Cai, Zhaoquan; Maybank, Stephen
2018-09-01
We construct a new efficient near duplicate image detection method using a hierarchical hash code learning neural network and load-balanced locality-sensitive hashing (LSH) indexing. We propose a deep constrained siamese hash coding neural network combined with deep feature learning. Our neural network is able to extract effective features for near duplicate image detection. The extracted features are used to construct a LSH-based index. We propose a load-balanced LSH method to produce load-balanced buckets in the hashing process. The load-balanced LSH significantly reduces the query time. Based on the proposed load-balanced LSH, we design an effective and feasible algorithm for near duplicate image detection. Extensive experiments on three benchmark data sets demonstrate the effectiveness of our deep siamese hash encoding network and load-balanced LSH.
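The LSH indexing step above maps feature vectors to hash buckets so that near-duplicates tend to collide. A minimal random-hyperplane LSH sketch (a generic scheme, not the paper's learned siamese codes or its load-balancing strategy):

```python
import random

def make_hyperplanes(dim: int, n_bits: int, seed: int = 0):
    """n_bits random hyperplanes through the origin, Gaussian-distributed."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

def lsh_code(vec, planes) -> str:
    """One bit per hyperplane: which side does the vector fall on?"""
    return "".join("1" if sum(p * v for p, v in zip(plane, vec)) >= 0 else "0"
                   for plane in planes)

planes = make_hyperplanes(dim=4, n_bits=8)
index = {}
for name, vec in {"a": [1, 0, 0, 0], "a2": [0.9, 0.1, 0, 0], "b": [-1, 0, 0, 0]}.items():
    index.setdefault(lsh_code(vec, planes), []).append(name)
# Nearby vectors tend to land in the same bucket; a query only inspects its own bucket.
print(len(index), "buckets for 3 vectors")
```

Because each bit depends only on the sign of a projection, the code is invariant to positive scaling of the vector, and close directions disagree on few bits.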
NASA Astrophysics Data System (ADS)
Lin, Zhuosheng; Yu, Simin; Lü, Jinhu
2017-06-01
In this paper, a novel approach for constructing a one-way hash function based on an 8D hyperchaotic map is presented. First, two nominal matrices, one with constant and one with variable parameters, are adopted for designing 8D discrete-time hyperchaotic systems. Then each input plaintext message block is transformed into an 8 × 8 matrix following the order of left to right and top to bottom, which is used as a control matrix for switching between the nominal matrix elements with constant parameters and those with variable parameters. Through this switching control, a new nominal matrix mixing the constant and variable parameters is obtained for the 8D hyperchaotic map. Finally, the hash function is constructed from multiple low 8-bit hyperchaotic system iterative outputs after rounding down, and its security analysis results are also given, validating the feasibility and reliability of the proposed approach. Compared with existing schemes, the main feature of the proposed method is that it has a large number of key parameters with avalanche effect, making it difficult to estimate or predict the key parameters via various attacks.
Unsupervised Deep Hashing With Pseudo Labels for Scalable Image Retrieval.
Zhang, Haofeng; Liu, Li; Long, Yang; Shao, Ling
2018-04-01
In order to achieve efficient similarity searching, hash functions are designed to encode images into low-dimensional binary codes with the constraint that similar features will have a short distance in the projected Hamming space. Recently, deep learning-based methods have become more popular and outperform traditional non-deep methods. However, without label information, most state-of-the-art unsupervised deep hashing (DH) algorithms suffer from severe performance degradation. One of the main reasons is that the ad-hoc encoding process cannot properly capture the visual feature distribution. In this paper, we propose a novel unsupervised framework that has two main contributions: 1) we convert the unsupervised DH model into a supervised one by discovering pseudo labels; 2) the framework unifies likelihood maximization, mutual information maximization, and quantization error minimization so that the pseudo labels can maximally preserve the distribution of visual features. Extensive experiments on three popular data sets demonstrate the advantages of the proposed method, which leads to significant performance improvement over state-of-the-art unsupervised hashing algorithms.
FBC: a flat binary code scheme for fast Manhattan hash retrieval
NASA Astrophysics Data System (ADS)
Kong, Yan; Wu, Fuzhang; Gao, Lifa; Wu, Yanjun
2018-04-01
Hash coding is a widely used technique in approximate nearest neighbor (ANN) search, especially in document search and multimedia (such as image and video) retrieval. Based on the difference in distance measurement, hash methods are generally classified into two categories: Hamming hashing and Manhattan hashing. Benefiting from better neighborhood structure preservation, Manhattan hashing methods outperform earlier methods in search effectiveness. However, because it uses decimal arithmetic operations instead of bit operations, Manhattan hashing is a more time-consuming process, which significantly decreases overall search efficiency. To solve this problem, we present an intuitive hash scheme which uses a Flat Binary Code (FBC) to encode the data points. As a result, the decimal arithmetic used in previous Manhattan hashing can be replaced by the more efficient XOR operation. The final experiments show that, with a reasonable growth in memory space, our FBC achieves an average speedup of more than 80% without any loss of search accuracy compared to state-of-the-art Manhattan hashing methods.
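The two distance computations contrasted above can be made concrete (a generic sketch of the cost difference, not the FBC encoding itself). With 2-bit quantization per dimension, Manhattan distance needs decimal arithmetic per dimension, while Hamming distance is a single XOR plus popcount over the whole code, and the two distances genuinely disagree on naively concatenated binary codes:

```python
def hamming(a: int, b: int) -> int:
    """Bit-level distance: XOR then popcount, cheap in hardware and software."""
    return bin(a ^ b).count("1")

def manhattan(a: list, b: list) -> int:
    """Decimal distance over multi-bit-quantized dimensions, costlier per query."""
    return sum(abs(x - y) for x, y in zip(a, b))

# Two points quantized to 4 levels (2 bits) per dimension:
p, q = [3, 0, 2], [1, 0, 3]
print(manhattan(p, q))              # 2 + 0 + 1 = 3
# Same points concatenated into plain binary (3=11, 0=00, 2=10 / 1=01, 0=00, 3=11):
print(hamming(0b110010, 0b010011))  # 2
```

The mismatch (Manhattan 3 vs. Hamming 2 for the same pair) is exactly the neighborhood-distortion problem that motivates designing codes on which bit operations reproduce the intended distance.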
NASA Astrophysics Data System (ADS)
Reato, Thomas; Demir, Begüm; Bruzzone, Lorenzo
2017-10-01
This paper presents a novel class sensitive hashing technique in the framework of large-scale content-based remote sensing (RS) image retrieval. The proposed technique aims at representing each image with multi-hash codes, each of which corresponds to a primitive (i.e., land cover class) present in the image. To this end, the proposed method consists of a three-step algorithm. The first step is devoted to characterizing each image by primitive class descriptors. These descriptors are obtained through a supervised approach, which initially extracts the image regions and their descriptors, which are then associated with primitives present in the images. This step requires a set of annotated training regions to define primitive classes. A correspondence between the regions of an image and the primitive classes is built based on the probability of each primitive class being present in each region. All the regions belonging to a specific primitive class with a probability higher than a given threshold are highly representative of that class. Thus, the average value of the descriptors of these regions is used to characterize that primitive. In the second step, the descriptors of primitive classes are transformed into multi-hash codes to represent each image. This is achieved by adapting the kernel-based supervised locality sensitive hashing method to multi-code hashing problems. The first two steps of the proposed technique, unlike the standard hashing methods, allow one to represent each image by a set of primitive class sensitive descriptors and their hash codes. Then, in the last step, the images in the archive that are very similar to a query image are retrieved based on a multi-hash-code-matching scheme. Experimental results obtained on an archive of aerial images confirm the effectiveness of the proposed technique in terms of retrieval accuracy when compared to the standard hashing methods.
G-Hash: Towards Fast Kernel-based Similarity Search in Large Graph Databases.
Wang, Xiaohong; Smalter, Aaron; Huan, Jun; Lushington, Gerald H
2009-01-01
Structured data, including sets, sequences, trees and graphs, pose significant challenges to fundamental aspects of data management such as efficient storage, indexing, and similarity search. With the fast accumulation of graph databases, similarity search in graph databases has emerged as an important research topic. Graph similarity search has applications in a wide range of domains including cheminformatics, bioinformatics, sensor network management, social network management, and XML documents, among others. Most of the current graph indexing methods focus on subgraph query processing, i.e. determining the set of database graphs that contain the query graph, and hence do not directly support similarity search. In data mining and machine learning, various graph kernel functions have been designed to capture the intrinsic similarity of graphs. Though successful in constructing accurate predictive and classification models for supervised learning, graph kernel functions have (i) high computational complexity and (ii) non-trivial difficulty being indexed in a graph database. Our objective is to bridge graph kernel functions and similarity search in graph databases by proposing (i) a novel kernel-based similarity measurement and (ii) an efficient indexing structure for graph data management. Our method of similarity measurement builds upon local features extracted from each node and its neighboring nodes in graphs. A hash table is utilized to support efficient storage and fast search of the extracted local features. Using the hash table, a graph kernel function is defined to capture the intrinsic similarity of graphs and to enable fast similarity query processing. We have implemented our method, which we have named G-hash, and have demonstrated its utility on large chemical graph databases. Our results show that the G-hash method achieves state-of-the-art performance for k-nearest neighbor (k-NN) classification.
Most importantly, the new similarity measurement and the index structure are scalable to large databases, with smaller indexing size, faster index construction time, and faster query processing time compared to state-of-the-art indexing methods such as C-tree, gIndex, and GraphGrep.
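The node-local features and hash-table index described in the G-hash abstract above can be sketched minimally as follows. This is an assumption-laden toy (feature = own label plus sorted neighbor labels; a plain dict stands in for the hash table), not the published G-hash kernel:

```python
from collections import Counter, defaultdict

def node_features(graph: dict) -> Counter:
    """Local feature per node: (own label, sorted labels of its neighbors).
    `graph` maps node -> (label, [neighbor nodes])."""
    feats = Counter()
    for node, (label, nbrs) in graph.items():
        nbr_labels = tuple(sorted(graph[n][0] for n in nbrs))
        feats[(label, nbr_labels)] += 1
    return feats

def index_graphs(db: dict):
    """Hash table: local feature -> {graph id: count}, so kernel values between
    a query and all database graphs can be accumulated feature by feature."""
    table = defaultdict(dict)
    for gid, g in db.items():
        for feat, cnt in node_features(g).items():
            table[feat][gid] = cnt
    return table

# Toy chemical graphs: a carbon triangle vs. a C-O pair.
g1 = {0: ("C", [1, 2]), 1: ("C", [0, 2]), 2: ("C", [0, 1])}
g2 = {0: ("C", [1]), 1: ("O", [0])}
table = index_graphs({"g1": g1, "g2": g2})
```

A query graph is then answered by extracting its own local features and probing the table, rather than comparing against every database graph directly.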
Achaete-Scute Homolog 1 Expression Controls Cellular Differentiation of Neuroblastoma
Kasim, Mumtaz; Heß, Vicky; Scholz, Holger; Persson, Pontus B.; Fähling, Michael
2016-01-01
Neuroblastoma, the major cause of infant cancer deaths, results from fast proliferation of undifferentiated neuroblasts. Treatment of high-risk neuroblastoma includes differentiation with retinoic acid (RA); however, the resistance of many of these tumors to RA-induced differentiation poses a considerable challenge. Human achaete-scute homolog 1 (hASH1) is a proneural basic helix-loop-helix transcription factor essential for neurogenesis and is often upregulated in neuroblastoma. Here, we identified a novel function for hASH1 in regulating the differentiation phenotype of neuroblastoma cells. Global analysis of 986 human neuroblastoma datasets revealed a negative correlation between hASH1 and neuron differentiation that was independent of the N-myc (MYCN) oncogene. Using RA to induce neuron differentiation in two neuroblastoma cell lines displaying high and low levels of hASH1 expression, we confirmed the link between hASH1 expression and the differentiation defective phenotype, which was reversed by silencing hASH1 or by hypoxic preconditioning. We further show that hASH1 suppresses neuronal differentiation by inhibiting transcription at the RA receptor element. Collectively, our data indicate hASH1 to be key for understanding neuroblastoma resistance to differentiation therapy and pave the way for hASH1-targeted therapies for augmenting the response of neuroblastoma to differentiation therapy. PMID:28066180
Image Hashes as Templates for Verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Janik, Tadeusz; Jarman, Kenneth D.; Robinson, Sean M.
2012-07-17
Imaging systems can provide measurements that confidently assess characteristics of nuclear weapons and dismantled weapon components, and such assessment will be needed in future verification for arms control. Yet imaging is often viewed as too intrusive, raising concern about the ability to protect sensitive information. In particular, the prospect of using image-based templates for verifying the presence or absence of a warhead, or of the declared configuration of fissile material in storage, may be rejected out-of-hand as being too vulnerable to violation of information barrier (IB) principles. Development of a rigorous approach for generating and comparing reduced-information templates from images, and assessing the security, sensitivity, and robustness of verification using such templates, is needed to address these concerns. We discuss our efforts to develop such a rigorous approach based on a combination of image-feature extraction and encryption, utilizing hash functions to confirm proffered declarations, providing strong classified data security while maintaining high confidence for verification. The proposed work is focused on developing secure, robust, tamper-sensitive and automatic techniques that may enable the comparison of non-sensitive hashed image data outside an IB. It is rooted in research on so-called perceptual hash functions for image comparison, at the interface of signal/image processing, pattern recognition, cryptography, and information theory. Such perceptual or robust image hashing (which, strictly speaking, is not truly cryptographic hashing) has extensive application in content authentication and information retrieval, database search, and security assurance. Applying and extending the principles of perceptual hashing to imaging for arms control, we propose techniques that are sensitive to altering, forging and tampering of the imaged object yet robust and tolerant to content-preserving image distortions and noise.
Ensuring that the information contained in the hashed image data (available out-of-IB) cannot be used to extract sensitive information about the imaged object is of primary concern. Thus the techniques are characterized by high unpredictability to guarantee security. We will present an assessment of the performance of our techniques with respect to security, sensitivity and robustness on the basis of a methodical and mathematically precise framework.
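The robust-yet-sensitive behavior perceptual hashing aims for can be illustrated with the simplest member of that family, an average hash over a tiny hypothetical 4x4 grayscale "image" (real perceptual hashes add resizing, DCTs, and keying; this is only the core idea):

```python
def average_hash(pixels) -> int:
    """One bit per pixel: is it above or below the mean intensity?"""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hash_distance(h1: int, h2: int) -> int:
    return bin(h1 ^ h2).count("1")

img = [[200, 200, 10, 10],
       [200, 200, 10, 10],
       [10, 10, 200, 200],
       [10, 10, 200, 200]]
noisy = [[p + 5 for p in row] for row in img]         # content-preserving brightness shift
tampered = [row[:] for row in img]
tampered[0][0] = 10                                   # content change in one region

print(hash_distance(average_hash(img), average_hash(noisy)))     # 0: robust to noise
print(hash_distance(average_hash(img), average_hash(tampered)))  # 1: detects the change
```

Note the contrast with a cryptographic hash, where the +5 brightness shift would change every bit; here content-preserving distortions leave the hash (nearly) unchanged while tampering moves it.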
Bin-Hash Indexing: A Parallel Method for Fast Query Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bethel, Edward W; Gosink, Luke J.; Wu, Kesheng
2008-06-27
This paper presents a new parallel indexing data structure for answering queries. The index, called Bin-Hash, offers extremely high levels of concurrency, and is therefore well-suited for the emerging class of commodity parallel processors, such as multi-cores, cell processors, and general purpose graphics processing units (GPUs). The Bin-Hash approach first bins the base data, and then partitions and separately stores the values in each bin as a perfect spatial hash table. To answer a query, we first determine whether or not a record satisfies the query conditions based on the bin boundaries. For the bins with records that cannot be resolved, we examine the spatial hash tables. The procedures for examining the bin numbers and the spatial hash tables offer the maximum possible level of concurrency; all records are able to be evaluated by our procedure independently in parallel. Additionally, our Bin-Hash procedures access much smaller amounts of data than similar parallel methods, such as the projection index. This smaller data footprint is critical for certain parallel processors, like GPUs, where memory resources are limited. To demonstrate the effectiveness of Bin-Hash, we implement it on a GPU using the data-parallel programming language CUDA. The concurrency offered by the Bin-Hash index allows us to fully utilize the GPU's massive parallelism in our work; over 12,000 records can be simultaneously evaluated at any one time. We show that our new query processing method is an order of magnitude faster than current state-of-the-art CPU-based indexing technologies. Additionally, we compare our performance to existing GPU-based projection index strategies.
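The two-stage query idea above (answer from bin boundaries when possible, consult the per-bin table only for the boundary bin) can be sketched sequentially; a plain dict stands in for the perfect spatial hash table, and the GPU parallelism is of course not modeled:

```python
import bisect

def build_bin_hash(values, boundaries):
    """Partition record values into bins; store exact values per bin (record id -> value)."""
    bins = [dict() for _ in range(len(boundaries) + 1)]
    for rid, v in enumerate(values):
        bins[bisect.bisect_right(boundaries, v)][rid] = v
    return bins

def query_less_than(boundaries, bins, threshold):
    """Record ids with value < threshold, touching exact values only in the boundary bin."""
    t_bin = bisect.bisect_right(boundaries, threshold)
    hits = []
    for b in range(t_bin):               # bins entirely below the threshold: no lookup
        hits.extend(bins[b].keys())
    for rid, v in bins[t_bin].items():   # boundary bin: resolve via the hash table
        if v < threshold:
            hits.append(rid)
    return sorted(hits)

values = [3, 14, 7, 25, 9, 18]
boundaries = [10, 20]                    # bins: (<10), [10, 20), (>= 20)
bins = build_bin_hash(values, boundaries)
print(query_less_than(boundaries, bins, 15))  # ids of values 3, 14, 7, 9
```

Only the candidate bin's values are fetched, which is the small-data-footprint property the abstract highlights for memory-limited GPUs.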
Limitations and requirements of content-based multimedia authentication systems
NASA Astrophysics Data System (ADS)
Wu, Chai W.
2001-08-01
Recently, a number of authentication schemes have been proposed for multimedia data such as images and sound data. They include both label based systems and semifragile watermarks. The main requirement for such authentication systems is that minor modifications such as lossy compression which do not alter the content of the data preserve the authenticity of the data, whereas modifications which do modify the content render the data not authentic. These schemes can be classified into two main classes depending on the model of image authentication they are based on. One of the purposes of this paper is to look at some of the advantages and disadvantages of these image authentication schemes and their relationship with fundamental limitations of the underlying model of image authentication. In particular, we study feature-based algorithms which generate an authentication tag based on some inherent features in the image such as the location of edges. The main disadvantage of most proposed feature-based algorithms is that similar images generate similar features, and therefore it is possible for a forger to generate dissimilar images that have the same features. On the other hand, the class of hash-based algorithms utilizes a cryptographic hash function or a digital signature scheme to reduce the data and generate an authentication tag. It inherits the security of digital signatures to thwart forgery attacks. The main disadvantage of hash-based algorithms is that the image needs to be modified in order to be made authenticatable. The amount of modification is on the order of the noise the image can tolerate before it is rendered inauthentic. The other purpose of this paper is to propose a multimedia authentication scheme which combines some of the best features of both classes of algorithms. The proposed scheme utilizes cryptographic hash functions and digital signature schemes and the data does not need to be modified in order to be made authenticatable. 
Several applications including the authentication of images on CD-ROM and handwritten documents will be discussed.
Efficient hash tables for network applications.
Zink, Thomas; Waldvogel, Marcel
2015-01-01
Hashing has yet to be widely accepted as a component of hard real-time systems and hardware implementations, due to still existing prejudices concerning the unpredictability of space and time requirements resulting from collisions. While in theory perfect hashing can provide optimal mapping, in practice, finding a perfect hash function is too expensive, especially in the context of high-speed applications. The introduction of hashing with multiple choices, d-left hashing and probabilistic table summaries, has caused a shift towards deterministic DRAM access. However, high amounts of rare and expensive high-speed SRAM need to be traded off for predictability, which is infeasible for many applications. In this paper we show that previous suggestions suffer from the false precondition of full generality. Our approach exploits four individual degrees of freedom available in many practical applications, especially hardware and high-speed lookups. This reduces the requirement of on-chip memory up to an order of magnitude and guarantees constant lookup and update time at the cost of only minute amounts of additional hardware. Our design makes efficient hash table implementations cheaper, more predictable, and more practical.
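Hashing with multiple choices, mentioned above as the technique that shifted lookups toward deterministic memory access, can be sketched generically: each key gets d candidate buckets and is inserted into the least-loaded one, which sharply tightens the worst-case bucket occupancy. This is a plain illustration, not the paper's d-left/table-summary design:

```python
import hashlib

def bucket_choices(key: str, n_buckets: int, d: int = 2):
    """d candidate buckets per key, from independently salted hashes."""
    return [int(hashlib.sha256(f"{i}:{key}".encode()).hexdigest(), 16) % n_buckets
            for i in range(d)]

def insert(table, key, n_buckets, d=2):
    """Place the key in whichever of its d candidate buckets is least loaded."""
    choice = min(bucket_choices(key, n_buckets, d), key=lambda b: len(table[b]))
    table[choice].append(key)

def lookup(table, key, n_buckets, d=2):
    """A lookup probes exactly d buckets, a fixed bound regardless of collisions."""
    return any(key in table[b] for b in bucket_choices(key, n_buckets, d))

n = 8
table = [[] for _ in range(n)]
for k in ("alpha", "beta", "gamma", "delta"):
    insert(table, k, n)
print(lookup(table, "gamma", n), lookup(table, "omega", n))  # True False
```

The fixed probe count per lookup is what makes this style of hashing attractive for hard real-time and hardware settings.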
Speaker Linking and Applications using Non-Parametric Hashing Methods
2016-09-08
clustering method based on hashing, canopy clustering. We apply this method to a large corpus of speaker recordings, demonstrate performance tradeoffs... and compare to other hashing methods. Index Terms: speaker recognition, clustering, hashing, locality sensitive hashing. 1. Introduction. We assume... speaker in our corpus. Second, given a QBE method, how can we perform speaker clustering: each clustering should be a single speaker, and a cluster should
Learning binary code via PCA of angle projection for image retrieval
NASA Astrophysics Data System (ADS)
Yang, Fumeng; Ye, Zhiqiang; Wei, Xueqi; Wu, Congzhong
2018-01-01
With the benefits of low storage cost and high query speed, binary code representation methods are widely researched for efficient retrieval of large-scale data. In image hashing methods, learning a hashing function to embed high-dimensional features into Hamming space is a key step for accurate retrieval. The principal component analysis (PCA) technique is widely used in compact hashing methods; most of these methods adopt PCA projection functions to project the original data into several dimensions of real values, and then each of these projected dimensions is quantized into one bit by thresholding. The variances of the projected dimensions differ, and real-valued projection produces large quantization error. To avoid the large quantization error of real-valued projection, in this paper we propose to use cosine similarity projection for each dimension; the angle projection can keep the original structure and is more compact with the cosine values. We combined our method with the ITQ hashing algorithm, and extensive experiments on the public CIFAR-10 and Caltech-256 datasets validate the effectiveness of the proposed method.
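The quantization step discussed above, and the error it introduces, can be made concrete in pure Python. The projections here are hypothetical and assumed already PCA-projected; rotation-based refinements such as ITQ exist precisely to shrink the error this sketch measures:

```python
def binarize(projected):
    """Quantize each real-valued projection to one bit by its sign."""
    return [[1 if v >= 0 else 0 for v in row] for row in projected]

def quantization_error(projected, codes):
    """Mean squared distance between real projections and their {-1, +1} codes."""
    err, count = 0.0, 0
    for row, bits in zip(projected, codes):
        for v, b in zip(row, bits):
            err += (v - (1 if b else -1)) ** 2
            count += 1
    return err / count

# Hypothetical 2-D projections of three items:
proj = [[0.9, -1.1], [0.1, -0.2], [-1.0, 0.8]]
codes = binarize(proj)
print(codes)  # [[1, 0], [1, 0], [0, 1]]
print(round(quantization_error(proj, codes), 3))
```

Values near zero (such as 0.1 and -0.2 above) dominate the error: their sign bit is nearly arbitrary, which is the weakness of thresholding raw projections that motivates angle-based alternatives.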
NASA Astrophysics Data System (ADS)
Moon, Dukjae; Hong, Deukjo; Kwon, Daesung; Hong, Seokhie
We assume that the domain extender is the Merkle-Damgård (MD) scheme and the message is padded by a '1' and a minimum number of '0's, followed by fixed-size length information, so that the length of the padded message is a multiple of the block length. Under this assumption, we analyze the security of the hash mode when the compression function follows the Davies-Meyer (DM) scheme and the underlying block cipher is one of the plain Feistel or Misty schemes or the generalized Feistel or Misty schemes with a Substitution-Permutation (SP) round function. We base this work on Meet-in-the-Middle (MitM) preimage attack techniques, and develop several useful initial structures.
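The padding rule assumed above can be sketched byte-wise, here with 64-byte blocks and a 64-bit big-endian length field as in SHA-256-style MD strengthening (the block size is an assumption for illustration):

```python
import struct

BLOCK = 64  # bytes

def md_pad(message: bytes) -> bytes:
    """Append a '1' bit, the minimum number of '0' bits, then the 64-bit
    message length in bits, making the total a multiple of the block length."""
    length_bits = 8 * len(message)
    padded = message + b"\x80"          # 0x80 = the single '1' bit, then zero bits
    while (len(padded) + 8) % BLOCK:
        padded += b"\x00"
    return padded + struct.pack(">Q", length_bits)

p = md_pad(b"abc")
print(len(p) % BLOCK)  # 0
```

The trailing length field (MD strengthening) is what rules out trivial extension collisions between messages of different lengths, which is why the analysis fixes this padding before studying the compression function.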
Object-Location-Aware Hashing for Multi-Label Image Retrieval via Automatic Mask Learning.
Huang, Chang-Qin; Yang, Shang-Ming; Pan, Yan; Lai, Han-Jiang
2018-09-01
Learning-based hashing is a leading approach of approximate nearest neighbor search for large-scale image retrieval. In this paper, we develop a deep supervised hashing method for multi-label image retrieval, in which we propose to learn a binary "mask" map that can identify the approximate locations of objects in an image, so that we use this binary "mask" map to obtain length-limited hash codes which mainly focus on an image's objects but ignore the background. The proposed deep architecture consists of four parts: 1) a convolutional sub-network to generate effective image features; 2) a binary "mask" sub-network to identify image objects' approximate locations; 3) a weighted average pooling operation based on the binary "mask" to obtain feature representations and hash codes that pay most attention to foreground objects but ignore the background; and 4) the combination of a triplet ranking loss designed to preserve relative similarities among images and a cross entropy loss defined on image labels. We conduct comprehensive evaluations on four multi-label image data sets. The results indicate that the proposed hashing method achieves superior performance gains over the state-of-the-art supervised or unsupervised hashing baselines.
Semi-Supervised Geographical Feature Detection
NASA Astrophysics Data System (ADS)
Yu, H.; Yu, L.; Kuo, K. S.
2016-12-01
Extracting and tracking geographical features is a fundamental requirement in many geoscience fields. However, this operation has become an increasingly challenging task for domain scientists tackling large amounts of geoscience data. Although domain scientists may have a relatively clear definition of features, it is difficult to capture the presence of features in an accurate and efficient fashion. We propose a semi-supervised approach to address large-scale geographical feature detection. Our approach has two main components. First, we represent heterogeneous geoscience data in a unified high-dimensional space, which facilitates evaluating the similarity of data points with respect to geolocation, time, and variable values. We characterize the data using these measures, and use a set of hash functions to parameterize the initial knowledge of the data. Second, for any user query, our approach can automatically extract the initial results based on the hash functions. To improve querying accuracy, our approach provides a visualization interface to display the querying results and allow users to interactively explore and refine them. The user feedback is used to enhance our knowledge base in an iterative manner. In our implementation, we use high-performance computing techniques to accelerate the construction of hash functions. Our design facilitates a parallelization scheme for feature detection and extraction, which is a traditionally challenging problem for large-scale data. We evaluate our approach and demonstrate its effectiveness using both synthetic and real-world datasets.
Digital camera with apparatus for authentication of images produced from an image file
NASA Technical Reports Server (NTRS)
Friedman, Gary L. (Inventor)
1993-01-01
A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely based upon the private key that digital data encrypted with the private key by the processor may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating at any time the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match, since even one bit change in the image hash will cause the image hash to be totally different from the secure hash.
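The verification step described above reduces to recomputing the image hash and comparing it with the hash recovered from the digital signature. A minimal sketch with the public-key step abstracted away (hashlib only; the real scheme decrypts the signature with the camera's public key to obtain the secure hash):

```python
import hashlib

def image_hash(image_bytes: bytes) -> bytes:
    """Camera side: hash of the image file (the value that gets signed)."""
    return hashlib.sha256(image_bytes).digest()

def authenticate(image_bytes: bytes, secure_hash: bytes) -> bool:
    """Verifier side: recompute the hash and compare with the hash recovered
    from the digital signature (signature decryption abstracted here)."""
    return image_hash(image_bytes) == secure_hash

original = b"\x89PNG...pixel data..."   # stand-in for an image file
secure = image_hash(original)           # in the real scheme: decrypt the signature

print(authenticate(original, secure))            # True
print(authenticate(original + b"\x00", secure))  # False: the alteration is detected
```

Because even a one-bit change in the file produces a completely different hash, equality of the two hashes is sufficient evidence that the file is unaltered.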
Binary Multidimensional Scaling for Hashing.
Huang, Yameng; Lin, Zhouchen
2017-10-04
Hashing is a useful technique for fast nearest neighbor search due to its low storage cost and fast query speed. Unsupervised hashing aims at learning binary hash codes for the original features so that the pairwise distances can be best preserved. While several works have targeted on this task, the results are not satisfactory mainly due to the oversimplified model. In this paper, we propose a unified and concise unsupervised hashing framework, called Binary Multidimensional Scaling (BMDS), which is able to learn the hash code for distance preservation in both batch and online mode. In the batch mode, unlike most existing hashing methods, we do not need to simplify the model by predefining the form of hash map. Instead, we learn the binary codes directly based on the pairwise distances among the normalized original features by Alternating Minimization. This enables a stronger expressive power of the hash map. In the online mode, we consider the holistic distance relationship between current query example and those we have already learned, rather than only focusing on current data chunk. It is useful when the data come in a streaming fashion. Empirical results show that while being efficient for training, our algorithm outperforms state-of-the-art methods by a large margin in terms of distance preservation, which is practical for real-world applications.
A dynamic re-partitioning strategy based on the distribution of key in Spark
NASA Astrophysics Data System (ADS)
Zhang, Tianyu; Lian, Xin
2018-05-01
Spark is a memory-based distributed data processing framework that can process massive data and has become a focus in Big Data. But the performance of the Spark Shuffle depends on the distribution of the data. The naive hash partition function of Spark cannot guarantee load balancing when the data are skewed; job completion time is dominated by the node with the most data to process. In order to handle this problem, dynamic sampling is used. During task execution, a histogram counts the key frequency distribution on each node, from which the global key frequency distribution is generated. After analyzing the distribution of keys, load balance across the data partitions is achieved. Results show that the Dynamic Re-Partitioning function is better than the default hash partition, Fine Partition, and the Balanced-Schedule strategy; it can reduce the execution time of the task and improve the efficiency of the whole cluster.
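The frequency-driven assignment described above can be sketched as a greedy balancer: after sampling key frequencies, assign the heaviest keys first to the currently least-loaded partition instead of hashing keys blindly. A toy illustration of the idea (not Spark's actual partitioner API; integer keys are used so the naive hash is deterministic):

```python
def naive_hash_partition(freqs: dict, n_parts: int):
    """Default behavior: partition by key hash, ignoring frequency skew."""
    loads = [0] * n_parts
    for key, f in freqs.items():
        loads[key % n_parts] += f   # int keys hash to themselves, for clarity
    return loads

def balanced_partition(freqs: dict, n_parts: int):
    """Skew-aware: heaviest keys first, each to the least-loaded partition."""
    loads = [0] * n_parts
    assignment = {}
    for key, f in sorted(freqs.items(), key=lambda kv: -kv[1]):
        p = loads.index(min(loads))
        assignment[key] = p
        loads[p] += f
    return loads, assignment

# Skewed keys: key 0 is hot, and keys 0 and 2 collide under key % 2.
freqs = {0: 90, 1: 5, 2: 30, 3: 5}
print(naive_hash_partition(freqs, 2))  # [120, 10]
loads, _ = balanced_partition(freqs, 2)
print(loads)                           # [90, 40]
```

With hash partitioning one node processes 120 of the 130 records, while the frequency-aware assignment caps the maximum load at 90, which is the best possible here since the hot key cannot be split.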
NASA Astrophysics Data System (ADS)
Ahmadia, A. J.; Kees, C. E.
2014-12-01
Developing scientific software is a continuous balance between not reinventing the wheel and getting fragile codes to interoperate with one another. Binary software distributions such as Anaconda provide a robust starting point for many scientific software packages, but this solution alone is insufficient for many scientific software developers. HashDist provides a critical component of the development workflow, enabling highly customizable, source-driven, and reproducible builds for scientific software stacks, available from both the IPython Notebook and the command line. To address these issues, the Coastal and Hydraulics Laboratory at the US Army Engineer Research and Development Center has funded the development of HashDist in collaboration with Simula Research Laboratories and the University of Texas at Austin. HashDist is motivated by a functional approach to package build management, and features intelligent caching of sources and builds, parametrized build specifications, and the ability to interoperate with system compilers and packages. HashDist enables the easy specification of "software stacks", which allow both the novice user to install a default environment and the advanced user to configure every aspect of their build in a modular fashion. As an advanced feature, HashDist builds can be made relocatable, allowing the easy redistribution of binaries on all three major operating systems as well as cloud, and supercomputing platforms. As a final benefit, all HashDist builds are reproducible, with a build hash specifying exactly how each component of the software stack was installed. This talk discusses the role of HashDist in the hydrological sciences, including its use by the Coastal and Hydraulics Laboratory in the development and deployment of the Proteus Toolkit as well as the Rapid Operational Access and Maneuver Support project. 
We demonstrate HashDist in action, and show how it can effectively support development, deployment, teaching, and reproducibility for scientists working in the hydrological sciences. The HashDist documentation is available from: http://hashdist.readthedocs.org/en/latest/ HashDist is currently hosted at: https://github.com/hashdist/hashdist
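The reproducibility mechanism the abstract describes, a build hash that pins down exactly how each component was installed, can be sketched as a content hash over a canonically serialized build specification. This is only an illustration of the idea, not HashDist's actual hash format:

```python
import hashlib
import json

def build_hash(spec: dict) -> str:
    """Hash of a canonical (sorted-key) serialization: identical specs yield
    identical hashes, and any change to a component changes the identifier."""
    canonical = json.dumps(spec, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

spec_a = {"package": "hdf5", "version": "1.8.12", "deps": ["zlib"]}
spec_b = {"version": "1.8.12", "deps": ["zlib"], "package": "hdf5"}  # same spec, reordered
spec_c = {"package": "hdf5", "version": "1.8.13", "deps": ["zlib"]}  # changed version

print(build_hash(spec_a) == build_hash(spec_b))  # True: order-insensitive
print(build_hash(spec_a) == build_hash(spec_c))  # False: different build
```

Keying the install directory by such a hash is what makes builds cacheable and reproducible: two users with the same spec arrive at the same identifier, and hence the same artifact.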
Discriminative Projection Selection Based Face Image Hashing
NASA Astrophysics Data System (ADS)
Karabat, Cagatay; Erdogan, Hakan
Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.
Cryptographic framework for document-objects resulting from multiparty collaborative transactions.
Goh, A
2000-01-01
Multiparty transactional frameworks, i.e. Electronic Data Interchange (EDI) or Health Level 7 (HL7), often result in composite documents which can be accurately modelled using hyperlinked document-objects. The structural complexity arising from multiauthor involvement and transaction-specific sequencing would be poorly handled by conventional digital signature schemes based on a single evaluation of a one-way hash function and asymmetric cryptography. In this paper we outline the generation of structure-specific authentication hash-trees for the authentication of transactional document-objects, followed by asymmetric signature generation on the hash-tree value. Server-side multi-client signature verification would probably constitute the single most compute-intensive task, hence the motivation for our usage of the Rabin signature protocol, which results in significantly reduced verification workloads compared to the more commonly applied Rivest-Shamir-Adleman (RSA) protocol. Data privacy is handled via symmetric encryption of message traffic using session-specific keys obtained through key-negotiation mechanisms based on discrete-logarithm cryptography. Individual client-to-server channels can be secured using a double key-pair variation of Diffie-Hellman (DH) key negotiation, usage of which also enables bidirectional node authentication. The reciprocal server-to-client multicast channel is secured through Burmester-Desmedt (BD) key negotiation, which enjoys significant advantages over the usual multiparty extensions to the DH protocol. The implementation of hash-tree signatures and bi/multidirectional key negotiation results in a comprehensive cryptographic framework for multiparty document-objects satisfying both authentication and data privacy requirements.
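The hash-tree step above can be sketched as a Merkle tree over document-object hashes; the asymmetric signature would then be computed once, over the root (signature generation itself omitted here):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash the leaves, then pairwise-hash each level upward until one root
    remains (an odd node is promoted to the next level unchanged)."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            nxt.append(h(level[i] + level[i + 1]))
        if len(level) % 2:
            nxt.append(level[-1])
        level = nxt
    return level[0]

docs = [b"clause-1", b"clause-2", b"attachment"]
root = merkle_root(docs)
tampered = merkle_root([b"clause-1", b"clause-2 (edited)", b"attachment"])
print(root != tampered)  # True: any edited object changes the signed root
```

One signature over the root then authenticates the whole structured document, and editing any single object invalidates it, which is the property a single flat hash over a concatenation cannot provide per-object.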
NASA Astrophysics Data System (ADS)
Vielhauer, Claus; Croce Ferri, Lucilla
2003-06-01
Our paper addresses two issues of a previously presented biometric authentication algorithm for ID cardholders, namely the security of the embedded reference data and the aging process of the biometric data. We describe a protocol that allows two levels of verification, combining a biometric hash technique based on handwritten signature and hologram watermarks with cryptographic signatures in a verification infrastructure. This infrastructure consists of a Trusted Central Public Authority (TCPA), which serves numerous Enrollment Stations (ES) in a secure environment. Each individual performs an enrollment at an ES, which provides the TCPA with the full biometric reference data and a document hash. The TCPA then calculates the authentication record (AR) with the biometric hash, a validity timestamp, and a document hash provided by the ES. The AR is then signed with a cryptographic signature function, initialized with the TCPA's private key and embedded in the ID card as a watermark. Authentication is performed at Verification Stations (VS), where the ID card will be scanned and the signed AR is retrieved from the watermark. Due to the timestamp mechanism and a two-level biometric verification technique based on offline and online features, the AR can deal with the aging process of the biometric feature by forcing a re-enrollment of the user after expiry, making use of the ES infrastructure. We describe some attack scenarios and we illustrate the watermarking embedding, retrieval and dispute protocols, analyzing their requisites, advantages and disadvantages in relation to security requirements.
NASA Astrophysics Data System (ADS)
Siswantyo, Sepha; Susanti, Bety Hayat
2016-02-01
Preneel-Govaerts-Vandewalle (PGV) schemes consist of 64 possible single-block-length schemes that can be used to build a hash function based on block ciphers. Of those 64 schemes, Preneel claimed that 4 are secure. In this paper, we apply a length extension attack on those 4 secure PGV schemes, instantiated with the RC5 algorithm in their basic construction, to test their collision resistance property. The attack results show that collisions occurred in all 4 secure PGV schemes. Based on the analysis, we indicate that the Feistel structure and data-dependent rotation operation in the RC5 algorithm, the XOR operations in the scheme, and the selection of the additional message block value all contribute to the occurrence of collisions.
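For context, one of the four PGV schemes Preneel deemed secure is the Davies-Meyer mode, H_i = E_{m_i}(H_{i-1}) XOR H_{i-1}. A minimal sketch follows; since RC5 is not in the standard library, a toy 64-bit block "cipher" stands in for it purely to show the scheme's structure (it is NOT secure and is not the paper's construction):

```python
def toy_encrypt(key: int, block: int, rounds: int = 12) -> int:
    """Toy 64-bit block 'cipher' standing in for RC5 (illustrative only)."""
    mask = (1 << 64) - 1
    x = block
    for r in range(rounds):
        x = (x + key + r) & mask
        x = ((x << 7) | (x >> 57)) & mask   # rotate-left by 7 over 64 bits
        x ^= key
    return x & mask

def davies_meyer_hash(message_blocks: list, iv: int = 0x0123456789ABCDEF) -> int:
    """Davies-Meyer PGV mode: H_i = E_{m_i}(H_{i-1}) XOR H_{i-1},
    with each message block used as the cipher key."""
    h = iv
    for m in message_blocks:
        h = toy_encrypt(m, h) ^ h
    return h
```

A length extension attack exploits the fact that the final chaining value is the hash itself, so further blocks can be absorbed without knowing the original message.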
Hash Functions and Information Theoretic Security
NASA Astrophysics Data System (ADS)
Bagheri, Nasour; Knudsen, Lars R.; Naderi, Majid; Thomsen, Søren S.
Information theoretic security is an important security notion in cryptography as it provides a true lower bound for attack complexities. However, in practice attacks often have a higher cost than the information theoretic bound. In this paper we study the relationship between information theoretic attack costs and real costs. We show that in the information theoretic model, many well-known and commonly used hash functions such as MD5 and SHA-256 fail to be preimage resistant.
Git as an Encrypted Distributed Version Control System
2015-03-01
options. The algorithm uses AES-256 counter mode with an IV derived from a SHA-1-HMAC hash (this is nearly identical to the GCM mode discussed earlier...built into the internal structure of Git. Every file in a Git repository is checksummed with a SHA-1 hash, a one-way function with arbitrarily long...implementation. Git-encrypt calls OpenSSL cryptography library command line functions. The default cipher used is AES-256 Electronic Code Book (ECB), which is
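Git's per-file checksum mentioned in the snippet above is well documented: each file is stored as a "blob" object named by the SHA-1 of a small header plus the content. A short sketch:

```python
import hashlib

def git_blob_hash(content: bytes) -> str:
    """Reproduce Git's object checksum: SHA-1 over a 'blob <size>\\0' header
    followed by the file content; this hex digest names the file in the
    repository's object store."""
    header = b"blob %d\0" % len(content)
    return hashlib.sha1(header + content).hexdigest()
```

For example, the empty file always hashes to the well-known object ID e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 in every Git repository.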
Self-Supervised Video Hashing With Hierarchical Binary Auto-Encoder.
Song, Jingkuan; Zhang, Hanwang; Li, Xiangpeng; Gao, Lianli; Wang, Meng; Hong, Richang
2018-07-01
Existing video hash functions are built on three isolated stages: frame pooling, relaxed learning, and binarization, which have not adequately explored the temporal order of video frames in a joint binary optimization model, resulting in severe information loss. In this paper, we propose a novel unsupervised video hashing framework dubbed self-supervised video hashing (SSVH), which is able to capture the temporal nature of videos in an end-to-end learning to hash fashion. We specifically address two central problems: 1) how to design an encoder-decoder architecture to generate binary codes for videos and 2) how to equip the binary codes with the ability of accurate video retrieval. We design a hierarchical binary auto-encoder to model the temporal dependencies in videos with multiple granularities, and embed the videos into binary codes with less computations than the stacked architecture. Then, we encourage the binary codes to simultaneously reconstruct the visual content and neighborhood structure of the videos. Experiments on two real-world data sets show that our SSVH method can significantly outperform the state-of-the-art methods and achieve the current best performance on the task of unsupervised video retrieval.
Self-Supervised Video Hashing With Hierarchical Binary Auto-Encoder
NASA Astrophysics Data System (ADS)
Song, Jingkuan; Zhang, Hanwang; Li, Xiangpeng; Gao, Lianli; Wang, Meng; Hong, Richang
2018-07-01
Existing video hash functions are built on three isolated stages: frame pooling, relaxed learning, and binarization, which have not adequately explored the temporal order of video frames in a joint binary optimization model, resulting in severe information loss. In this paper, we propose a novel unsupervised video hashing framework dubbed Self-Supervised Video Hashing (SSVH), that is able to capture the temporal nature of videos in an end-to-end learning-to-hash fashion. We specifically address two central problems: 1) how to design an encoder-decoder architecture to generate binary codes for videos; and 2) how to equip the binary codes with the ability of accurate video retrieval. We design a hierarchical binary autoencoder to model the temporal dependencies in videos with multiple granularities, and embed the videos into binary codes with less computations than the stacked architecture. Then, we encourage the binary codes to simultaneously reconstruct the visual content and neighborhood structure of the videos. Experiments on two real-world datasets (FCVID and YFCC) show that our SSVH method can significantly outperform the state-of-the-art methods and achieve the currently best performance on the task of unsupervised video retrieval.
Enhanced K-means clustering with encryption on cloud
NASA Astrophysics Data System (ADS)
Singh, Iqjot; Dwivedi, Prerna; Gupta, Taru; Shynu, P. G.
2017-11-01
This paper tries to solve the problem of storing and managing big files over the cloud by implementing hashing on Hadoop for big data, and to ensure security while uploading and downloading files. Cloud computing is a term that emphasizes sharing data and facilitating shared infrastructure and resources. [10] Hadoop is open-source software that gives us access to store and manage big files according to our needs on the cloud. The K-means clustering algorithm calculates the distance between the centroid of a cluster and the data points. Hashing is an algorithm in which we store and retrieve data with hash keys. The hashing algorithm, called a hash function, is used to portray the original data and later to fetch the data stored at the specific key. [17] Encryption is a process that transforms electronic data into a non-readable form known as ciphertext. Decryption is the opposite process: it transforms the ciphertext into plain text that the end user can read and understand. For encryption and decryption we use a symmetric-key cryptographic algorithm; in symmetric-key cryptography we use the DES algorithm for secure storage of the files. [3]
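The store-by-hash-key idea described above can be sketched as follows. DES is not in the Python standard library, so a SHA-256 counter keystream stands in for the symmetric cipher; this is for illustration only and must not be used for real security:

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    """SHA-256 counter keystream: an illustrative stand-in for the DES
    encryption named in the abstract (NOT secure, sketch only)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def _xor(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

store = {}  # hash key -> encrypted file content

def put_file(key: bytes, content: bytes) -> str:
    """Encrypt the file and store it under its content-hash key."""
    hash_key = hashlib.sha256(content).hexdigest()
    store[hash_key] = _xor(key, content)
    return hash_key

def get_file(key: bytes, hash_key: str) -> bytes:
    """Fetch by hash key and decrypt (the XOR stream is its own inverse)."""
    return _xor(key, store[hash_key])
```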
Practical security and privacy attacks against biometric hashing using sparse recovery
NASA Astrophysics Data System (ADS)
Topcu, Berkay; Karabat, Cagatay; Azadmanesh, Matin; Erdogan, Hakan
2016-12-01
Biometric hashing is a cancelable biometric verification method that has received research interest recently. This method can be considered a two-factor authentication method which combines a personal password (or secret key) with a biometric to obtain a secure binary template which is used for authentication. We present novel practical security and privacy attacks against biometric hashing when the attacker is assumed to know the user's password, in order to quantify the additional protection due to biometrics when the password is compromised. We present four methods that can reconstruct a biometric feature and/or the image from a hash and one method which can find the closest biometric data (i.e., face image) from a database. Two of the reconstruction methods are based on 1-bit compressed sensing signal reconstruction, for which the data acquisition scenario is very similar to biometric hashing. Previous literature introduced simple attack methods, but we show that compressed sensing recovery techniques enable considerably stronger attacks. In addition, we present privacy attacks which reconstruct a biometric image that resembles the original image. We quantify the performance of the attacks using detection error tradeoff curves and equal error rates under advanced attack scenarios. We show that conventional biometric hashing methods suffer from high security and privacy leaks under practical attacks, and we believe more advanced hash generation methods are necessary to avoid these attacks.
Forensic hash for multimedia information
NASA Astrophysics Data System (ADS)
Lu, Wenjun; Varna, Avinash L.; Wu, Min
2010-01-01
Digital multimedia such as images and videos are prevalent on today's internet and cause significant social impact, which can be evidenced by the proliferation of social networking sites with user generated contents. Due to the ease of generating and modifying images and videos, it is critical to establish trustworthiness for online multimedia information. In this paper, we propose novel approaches to perform multimedia forensics using compact side information to reconstruct the processing history of a document. We refer to this as FASHION, standing for Forensic hASH for informatION assurance. Based on the Radon transform and scale space theory, the proposed forensic hash is compact and can effectively estimate the parameters of geometric transforms and detect local tampering that an image may have undergone. Forensic hash is designed to answer a broader range of questions regarding the processing history of multimedia data than the simple binary decision from traditional robust image hashing, and also offers more efficient and accurate forensic analysis than multimedia forensic techniques that do not use any side information.
An image retrieval framework for real-time endoscopic image retargeting.
Ye, Menglong; Johns, Edward; Walter, Benjamin; Meining, Alexander; Yang, Guang-Zhong
2017-08-01
Serial endoscopic examinations of a patient are important for early diagnosis of malignancies in the gastrointestinal tract. However, retargeting for optical biopsy is challenging due to extensive tissue variations between examinations, requiring the method to be tolerant to these changes whilst enabling real-time retargeting. This work presents an image retrieval framework for inter-examination retargeting. We propose both a novel image descriptor tolerant of long-term tissue changes and a novel descriptor matching method in real time. The descriptor is based on histograms generated from regional intensity comparisons over multiple scales, offering stability over long-term appearance changes at the higher levels, whilst remaining discriminative at the lower levels. The matching method then learns a hashing function using random forests, to compress the string and allow for fast image comparison by a simple Hamming distance metric. A dataset that contains 13 in vivo gastrointestinal videos was collected from six patients, representing serial examinations of each patient, which includes videos captured with significant time intervals. Precision-recall for retargeting shows that our new descriptor outperforms a number of alternative descriptors, whilst our hashing method outperforms a number of alternative hashing approaches. We have proposed a novel framework for optical biopsy in serial endoscopic examinations. A new descriptor, combined with a novel hashing method, achieves state-of-the-art retargeting, with validation on in vivo videos from six patients. Real-time performance also allows for practical integration without disturbing the existing clinical workflow.
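The two ingredients described above, binary codes built from regional intensity comparisons and a simple Hamming distance metric for fast comparison, can be sketched as follows; the pair list and the packing into an integer are illustrative choices, not the paper's learned random-forest hashing function:

```python
def binary_descriptor(region_means, pairs):
    """Pack regional intensity comparisons into a binary code: bit k is set
    when region i is brighter than region j for the k-th pair (i, j)."""
    code = 0
    for k, (i, j) in enumerate(pairs):
        if region_means[i] > region_means[j]:
            code |= 1 << k
    return code

def hamming(code_a, code_b):
    """Fast image comparison: count differing bits of two packed codes."""
    return bin(code_a ^ code_b).count("1")
```

Packing the bits into integers lets a retargeting loop compare thousands of frames per second with a single XOR and popcount per pair.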
A multi-pattern hash-binary hybrid algorithm for URL matching in the HTTP protocol.
Zeng, Ping; Tan, Qingping; Meng, Xiankai; Shao, Zeming; Xie, Qinzheng; Yan, Ying; Cao, Wei; Xu, Jianjun
2017-01-01
In this paper, based on our previous multi-pattern uniform resource locator (URL) binary-matching algorithm called HEM, we propose an improved multi-pattern matching algorithm called MH that is based on hash tables and binary tables. The MH algorithm can be applied to the fields of network security, data analysis, load balancing, cloud robotic communications, and so on-all of which require string matching from a fixed starting position. Our approach effectively solves the performance problems of the classical multi-pattern matching algorithms. This paper explores ways to improve string matching performance under the HTTP protocol by using a hash method combined with a binary method that transforms the symbol-space matching problem into a digital-space numerical-size comparison and hashing problem. The MH approach has a fast matching speed, requires little memory, performs better than both the classical algorithms and HEM for matching fields in an HTTP stream, and it has great promise for use in real-world applications.
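The core idea of turning fixed-start symbol matching into a hash probe plus a numeric comparison can be sketched as follows; this illustrates the general technique, not the MH algorithm itself, and the helper names are hypothetical:

```python
def build_pattern_index(patterns):
    """Group URL patterns by (length, hash) so matching from a fixed starting
    position needs one dictionary probe per candidate pattern length."""
    index = {}
    for p in patterns:
        index.setdefault((len(p), hash(p)), []).append(p)
    return index

def match_from_start(url, index, lengths):
    """Return the longest indexed pattern that prefixes url, probing
    candidate lengths from longest to shortest."""
    for length in sorted(lengths, reverse=True):
        prefix = url[:length]
        for p in index.get((length, hash(prefix)), []):
            if p == prefix:          # confirm, since hashes can collide
                return p
    return None
```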
The Speech multi features fusion perceptual hash algorithm based on tensor decomposition
NASA Astrophysics Data System (ADS)
Huang, Y. B.; Fan, M. H.; Zhang, Q. Y.
2018-03-01
With constant progress in modern speech communication technologies, speech data is prone to being attacked by noise or maliciously tampered with. In order to give the speech perceptual hash algorithm strong robustness and high efficiency, this paper puts forward a speech perceptual hash algorithm based on tensor decomposition and multiple features. The algorithm analyses the perceptual features of speech, acquiring each speech component by wavelet packet decomposition. The LPCC, LSP and ISP features of each speech component are extracted to constitute the speech feature tensor. Speech authentication is done by generating the hash values through feature-matrix quantification using the mid-value. Experimental results show that the proposed algorithm is robust to content-preserving operations compared with similar algorithms, and is able to resist the attack of common background noise. Also, the algorithm is computationally highly efficient, and is able to meet the real-time requirements of speech communication and complete the speech authentication quickly.
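The mid-value quantification step described above, thresholding features at their median to produce hash bits, can be sketched in a few lines (applied here to a single feature vector; the paper quantizes a full feature matrix):

```python
import statistics

def midvalue_quantize(features):
    """Quantize a feature vector to hash bits by thresholding at the
    median ('mid-value'): values above it become 1, the rest 0."""
    mid = statistics.median(features)
    return [1 if f > mid else 0 for f in features]
```

Thresholding at the median rather than a fixed value keeps the bit string roughly balanced, which preserves discriminability across recordings with different loudness levels.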
A multi-pattern hash-binary hybrid algorithm for URL matching in the HTTP protocol
Tan, Qingping; Meng, Xiankai; Shao, Zeming; Xie, Qinzheng; Yan, Ying; Cao, Wei; Xu, Jianjun
2017-01-01
In this paper, based on our previous multi-pattern uniform resource locator (URL) binary-matching algorithm called HEM, we propose an improved multi-pattern matching algorithm called MH that is based on hash tables and binary tables. The MH algorithm can be applied to the fields of network security, data analysis, load balancing, cloud robotic communications, and so on—all of which require string matching from a fixed starting position. Our approach effectively solves the performance problems of the classical multi-pattern matching algorithms. This paper explores ways to improve string matching performance under the HTTP protocol by using a hash method combined with a binary method that transforms the symbol-space matching problem into a digital-space numerical-size comparison and hashing problem. The MH approach has a fast matching speed, requires little memory, performs better than both the classical algorithms and HEM for matching fields in an HTTP stream, and it has great promise for use in real-world applications. PMID:28399157
Range image registration based on hash map and moth-flame optimization
NASA Astrophysics Data System (ADS)
Zou, Li; Ge, Baozhen; Chen, Lei
2018-03-01
Over the past decade, evolutionary algorithms (EAs) have been introduced to solve range image registration problems because of their robustness and high precision. However, EA-based range image registration algorithms are time-consuming. To reduce the computational time, an EA-based range image registration algorithm using hash map and moth-flame optimization is proposed. In this registration algorithm, a hash map is used to avoid over-exploitation in registration process. Additionally, we present a search equation that is better at exploration and a restart mechanism to avoid being trapped in local minima. We compare the proposed registration algorithm with the registration algorithms using moth-flame optimization and several state-of-the-art EA-based registration algorithms. The experimental results show that the proposed algorithm has a lower computational cost than other algorithms and achieves similar registration precision.
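The hash-map device described above, remembering already-evaluated transforms so the optimizer does not over-exploit them, can be sketched as a cache around the fitness function. The grid-rounded cache key is an illustrative assumption, not the paper's exact scheme:

```python
def make_cached_fitness(fitness, resolution=1e-3):
    """Wrap a registration fitness function with a hash map so the
    evolutionary search never re-evaluates a transform it has already
    visited; parameters are rounded to a grid to form the cache key."""
    cache = {}

    def cached(params):
        key = tuple(round(p / resolution) for p in params)
        if key not in cache:
            cache[key] = fitness(params)   # expensive point-cloud error
        return cache[key]

    cached.cache = cache   # expose for inspection
    return cached
```

Since evaluating a candidate rigid transform against a full range image is the dominant cost, skipping duplicate evaluations directly reduces the registration time the paper targets.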
A Complete and Accurate Ab Initio Repeat Finding Algorithm.
Lian, Shuaibin; Chen, Xinwu; Wang, Peng; Zhang, Xiaoli; Dai, Xianhua
2016-03-01
It has become clear that repetitive sequences have played multiple roles in eukaryotic genome evolution, including increasing genetic diversity through mutation, changing gene expression and facilitating the generation of novel genes. However, identification of repetitive elements can be difficult in the ab initio manner. Currently, some classical ab initio repeat-finding tools have already been presented and compared, but their completeness and accuracy in detecting repeats are rather poor. To this end, we propose a new ab initio repeat finding tool, named HashRepeatFinder, which is based on a hash index and word counting. Furthermore, we assessed the performance of HashRepeatFinder against two well-known tools, RepeatScout and Repeatfinder, on the human genome data hg19. The results indicated the following three conclusions: (1) the completeness of HashRepeatFinder is the best among the three compared tools in almost all chromosomes, especially chr9 (8 times that of RepeatScout, 10 times that of Repeatfinder); (2) in terms of detecting large repeats, HashRepeatFinder also performed best in all chromosomes, especially chr3 (24 times RepeatScout and 250 times Repeatfinder) and chr19 (12 times RepeatScout and 60 times Repeatfinder); (3) in terms of accuracy, HashRepeatFinder can merge the abundant repeats with high accuracy.
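The hash-index-and-word-counting core of such a repeat finder can be sketched in a few lines; this is a simplification, omitting HashRepeatFinder's merging of seeds into full repeat families:

```python
from collections import Counter

def find_repeat_seeds(genome, k=12, min_copies=2):
    """Ab initio repeat candidates via hash-indexed word counting: every
    k-mer occurring at least min_copies times seeds a repeat family."""
    counts = Counter(genome[i:i + k] for i in range(len(genome) - k + 1))
    return {kmer: n for kmer, n in counts.items() if n >= min_copies}
```

Because the dictionary lookup is O(1) per k-mer, the whole scan is linear in the genome length, which is what makes the approach viable on chromosome-scale input.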
Best-First Heuristic Search for Multicore Machines
2010-01-01
Otto, 1998) to implement an asynchronous version of PRA* that they call Hash Distributed A* (HDA*). HDA* distributes nodes using a hash function in...nodes which are being communicated between peers are in transit. In contact with the authors of HDA*, we have created an implementation of HDA* for...Also, our implementation of HDA* allows us to make a fair comparison between algorithms by sharing common data structures such as priority queues and
Ontology-Based Peer Exchange Network (OPEN)
ERIC Educational Resources Information Center
Dong, Hui
2010-01-01
In current Peer-to-Peer networks, distributed and semantic free indexing is widely used by systems adopting "Distributed Hash Table" ("DHT") mechanisms. Although such systems typically solve a user query rather fast in a deterministic way, they only support a very narrow search scheme, namely the exact hash key match. Furthermore, DHT systems put…
Secure Minutiae-Based Fingerprint Templates Using Random Triangle Hashing
NASA Astrophysics Data System (ADS)
Jin, Zhe; Jin Teoh, Andrew Beng; Ong, Thian Song; Tee, Connie
Due to privacy concerns over the widespread use of biometric authentication systems, biometric template protection has gained great attention in biometric research recently. It is a challenging task to design a biometric template protection scheme which is anonymous, revocable and noninvertible while maintaining acceptable performance. Many methods have been proposed to resolve this problem, and cancelable biometrics is one of them. In this paper, we propose a scheme coined Random Triangle Hashing which follows the concept of cancelable biometrics in the fingerprint domain. In this method, re-alignment of fingerprints is not required, as all the minutiae are translated into a pre-defined two-dimensional space based on a reference minutia. After that, the proposed Random Triangle hashing method is used to enforce the one-way property (non-invertibility) of the biometric template. The proposed method is resistant to minor translation error and rotation distortion. Finally, the hash vectors are converted into bit-strings to be stored in the database. The proposed method is evaluated using the public database FVC2004 DB1. An EER of less than 1% is achieved by using the proposed method.
Large-scale Cross-modality Search via Collective Matrix Factorization Hashing.
Ding, Guiguang; Guo, Yuchen; Zhou, Jile; Gao, Yue
2016-09-08
By transforming data into binary representation, i.e., Hashing, we can perform high-speed search with low storage cost, and thus Hashing has attracted increasing research interest in recent years. Recently, how to generate Hashcodes for multimodal data (e.g., images with textual tags, documents with photos, etc.) for large-scale cross-modality search (e.g., searching semantically related images in a database for a document query) has become an important research issue because of the fast growth of multimodal data on the Web. To address this issue, a novel framework for multimodal Hashing is proposed, termed Collective Matrix Factorization Hashing (CMFH). The key idea of CMFH is to learn unified Hashcodes for different modalities of one multimodal instance in the shared latent semantic space in which different modalities can be effectively connected. Therefore, accurate cross-modality search is supported. Based on the general framework, we extend it in the unsupervised scenario where it tries to preserve the Euclidean structure, and in the supervised scenario where it fully exploits the label information of data. The corresponding theoretical analysis and the optimization algorithms are given. We conducted comprehensive experiments on three benchmark datasets for cross-modality search. The experimental results demonstrate that CMFH can significantly outperform several state-of-the-art cross-modality Hashing methods, which validates the effectiveness of the proposed CMFH.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Amrish, E-mail: amrish99@gmail.com; Kaur, Sandeep, E-mail: sipusukhn@gmail.com; Mudahar, Isha, E-mail: isha@pbi.ac.in
We have investigated the structural and electronic properties of small fullerene halves C{sub n} (n ≤ 40) covalently bonded to the side wall of an armchair single wall carbon nanotube (SWCNT), using a first-principles method based on density functional theory. The fullerene size results in weak bonding between the fullerene halves and the carbon nanotube (CNT). Further, it was found that the C-C bond that attaches the fullerene half to the CNT has a length of the order of 1.60 Å. The calculated binding energies indicate the stability of the complexes formed. The HOMO-LUMO gaps and electron density of states plots point towards the metallicity of the complex formed. Our calculations on charge transfer reveal that a very small amount of charge is transferred from the CNT to the fullerene halves.
DOE Office of Scientific and Technical Information (OSTI.GOV)
AISL-CRYPTO is a library of cryptography functions supporting other AISL software. It provides various crypto functions for Common Lisp, including Digital Signature Algorithm, Data Encryption Standard, Secure Hash Algorithm, and public-key cryptography.
Simultaneous binary hash and features learning for image retrieval
NASA Astrophysics Data System (ADS)
Frantc, V. A.; Makov, S. V.; Voronin, V. V.; Marchuk, V. I.; Semenishchev, E. A.; Egiazarian, K. O.; Agaian, S.
2016-05-01
Content-based image retrieval systems have plenty of applications in the modern world. The most important one is image search by query image or by semantic description. Approaches to this problem are employed in personal photo-collection management systems, web-scale image search engines, medical systems, etc. Automatic analysis of large unlabeled image datasets is virtually impossible without a satisfactory image-retrieval technique. This is the main reason why this kind of automatic image processing has attracted so much attention during recent years. Despite rather huge progress in the field, semantically meaningful image retrieval still remains a challenging task. The main issue here is the demand to provide reliable results in a short amount of time. This paper addresses the problem with a novel technique for simultaneous learning of global image features and binary hash codes. Our approach provides a mapping from a pixel-based image representation to the hash-value space while trying to preserve as much of the semantic image content as possible. We use deep learning methodology to generate an image description with the properties of similarity preservation and statistical independence. The main advantage of our approach over existing ones is the ability to fine-tune the retrieval procedure for a very specific application, which allows us to provide better results than general techniques. The framework for data-dependent image hashing presented in the paper is based on the use of two different kinds of neural networks: convolutional neural networks for image description and an autoencoder for the feature-to-hash-space mapping. Experimental results confirm that our approach shows promising results compared to other state-of-the-art methods.
FSH: fast spaced seed hashing exploiting adjacent hashes.
Girotto, Samuele; Comin, Matteo; Pizzi, Cinzia
2018-01-01
Patterns with wildcards in specified positions, namely spaced seeds, are increasingly used instead of k-mers in many bioinformatics applications that require indexing, querying and rapid similarity search, as they can provide better sensitivity. Many of these applications require computing the hash of each position in the input sequences with respect to the given spaced seed, or to multiple spaced seeds. While the hashing of k-mers can be rapidly computed by exploiting the large overlap between consecutive k-mers, spaced seed hashing is usually computed from scratch for each position in the input sequence, thus resulting in slower processing. The method proposed in this paper, fast spaced-seed hashing (FSH), exploits the similarity of the hash values of spaced seeds computed at adjacent positions in the input sequence. In our experiments we compute the hash for each position of metagenomics reads from several datasets, with respect to different spaced seeds. We also propose a generalized version of the algorithm for the simultaneous computation of multiple spaced seed hashes. In the experiments, our algorithm can compute the hash values of spaced seeds with a speedup, with respect to the traditional approach, between 1.6× and 5.3×, depending on the structure of the spaced seed. Spaced seed hashing is a routine task for several bioinformatics applications. FSH allows this task to be performed efficiently and raises the question of whether other hashing schemes can be exploited to further improve the speedup. This has the potential of major impact in the field, making spaced seed applications not only accurate, but also faster and more efficient. The software FSH is freely available for academic use at: https://bitbucket.org/samu661/fsh/overview.
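The basic spaced-seed hashing operation, and the naive per-position recomputation that FSH accelerates, can be sketched as follows. The 2-bit nucleotide encoding is a common convention assumed here; FSH's actual reuse of adjacent hashes is omitted:

```python
def spaced_seed_hash(seq, pos, seed):
    """Hash the symbols of seq selected by the '1' (care) positions of a
    spaced seed such as '1101', packing 2 bits per nucleotide."""
    enc = {"A": 0, "C": 1, "G": 2, "T": 3}
    h = 0
    for offset, care in enumerate(seed):
        if care == "1":
            h = (h << 2) | enc[seq[pos + offset]]
    return h

def all_hashes(seq, seed):
    """Naive baseline: recompute from scratch at every position. This is
    the per-position cost FSH avoids by reusing adjacent hash values."""
    return [spaced_seed_hash(seq, p, seed)
            for p in range(len(seq) - len(seed) + 1)]
```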
Toward Optimal Manifold Hashing via Discrete Locally Linear Embedding.
Rongrong Ji; Hong Liu; Liujuan Cao; Di Liu; Yongjian Wu; Feiyue Huang
2017-11-01
Binary code learning, also known as hashing, has received increasing attention in large-scale visual search. By transforming high-dimensional features into binary codes, the original Euclidean distance is approximated via Hamming distance. More recently, it has been advocated that it is the manifold distance, rather than the Euclidean distance, that should be preserved in the Hamming space. However, it remains an open problem to directly preserve the manifold structure by hashing. In particular, one first needs to build the local linear embedding in the original feature space, and then quantize such embedding to binary codes. Such a two-step coding is problematic and less optimized. Besides, the off-line learning is extremely time- and memory-consuming, since it needs to calculate the similarity matrix of the original data. In this paper, we propose a novel hashing algorithm, termed discrete locality linear embedding hashing (DLLH), which well addresses the above challenges. DLLH directly reconstructs the manifold structure in the Hamming space, learning optimal hash codes that maintain the local linear relationship of data points. To learn discrete locally linear embedding codes, we further propose a discrete optimization algorithm with an iterative parameter-updating scheme. Moreover, an anchor-based acceleration scheme, termed Anchor-DLLH, is further introduced, which approximates the large similarity matrix by the product of two low-rank matrices. Experimental results on three widely used benchmark data sets, i.e., CIFAR10, NUS-WIDE, and YouTube Face, have shown the superior performance of the proposed DLLH over state-of-the-art approaches.
Hash Bit Selection for Nearest Neighbor Search.
Xianglong Liu; Junfeng He; Shih-Fu Chang
2017-11-01
To overcome the barrier of storage and computation when dealing with gigantic-scale data sets, compact hashing has been studied extensively to approximate the nearest neighbor search. Despite the recent advances, critical design issues remain open in how to select the right features, hashing algorithms, and/or parameter settings. In this paper, we address these by posing an optimal hash bit selection problem, in which an optimal subset of hash bits is selected from a pool of candidate bits generated by different features, algorithms, or parameters. Inspired by the optimization criteria used in existing hashing algorithms, we adopt bit reliability and complementarity as the selection criteria, which can be carefully tailored for hashing performance in different tasks. Then, the bit selection solution is discovered by finding the best tradeoff between search accuracy and time using a modified dynamic programming method. To further reduce the computational complexity, we employ the pairwise relationship among hash bits to approximate the high-order independence property, and formulate it as an efficient quadratic programming method that is theoretically equivalent to the normalized dominant set problem in a vertex- and edge-weighted graph. Extensive large-scale experiments have been conducted under several important application scenarios of hash techniques, where our bit selection framework achieves superior performance over both the naive selection methods and the state-of-the-art hashing algorithms, with significant relative accuracy gains ranging from 10% to 50%.
A one-time pad color image cryptosystem based on SHA-3 and multiple chaotic systems
NASA Astrophysics Data System (ADS)
Wang, Xingyuan; Wang, Siwei; Zhang, Yingqian; Luo, Chao
2018-04-01
A novel image encryption algorithm is proposed that combines the SHA-3 hash function and two chaotic systems: the hyper-chaotic Lorenz and Chen systems. First, a 384-bit hash value, used as a keystream, is obtained by applying SHA-3 to the plaintext. The sensitivity of the SHA-3 algorithm and the chaotic systems ensures the effect of a one-time pad. Second, the color image is expanded into three-dimensional space. During permutation, it undergoes plane-plane displacements in the x, y and z dimensions. During diffusion, we use the adjacent pixel dataset and the corresponding chaotic value to encrypt each pixel. Finally, a structure alternating between permutation and diffusion is applied to enhance the level of security. Furthermore, we design techniques to improve the algorithm's encryption speed. Our experimental simulations show that the proposed cryptosystem achieves excellent encryption performance and can resist brute-force, statistical, and chosen-plaintext attacks.
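The 384-bit hash step can be sketched with Python's standard hashlib, which provides SHA3-384 (a 48-byte digest, matching the key length above). The iterated-digest expansion and the direct XOR below are illustrative assumptions; the paper feeds the hash into chaotic systems rather than using it as a keystream directly.

```python
import hashlib

def keystream(plaintext: bytes, nbytes: int) -> bytes:
    """Expand the SHA3-384 digest of the plaintext into nbytes of keystream.

    Hypothetical construction for illustration only: the digest is
    re-hashed repeatedly and the outputs are concatenated.
    """
    out = b""
    block = hashlib.sha3_384(plaintext).digest()  # 48 bytes = 384 bits
    while len(out) < nbytes:
        out += block
        block = hashlib.sha3_384(block).digest()
    return out[:nbytes]

def xor_bytes(data: bytes, ks: bytes) -> bytes:
    """One-time-pad-style XOR of data with a keystream of equal length."""
    return bytes(d ^ k for d, k in zip(data, ks))
```

Because the keystream is derived from the plaintext itself, any change to the image yields a completely different keystream, which is the source of the one-time-pad effect claimed above.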
Matching CCD images to a stellar catalog using locality-sensitive hashing
NASA Astrophysics Data System (ADS)
Liu, Bo; Yu, Jia-Zong; Peng, Qing-Yu
2018-02-01
The use of a subset of observed stars in a CCD image to find their corresponding matched stars in a stellar catalog is an important issue in astronomical research. Subgraph-isomorphism-based algorithms are the most widely used methods in star catalog matching. When more subgraph features are provided, the CCD images are recognized better. However, when the navigation feature database is large, the method requires more time to match the observed model. To solve this problem, this study investigates and improves subgraph-isomorphism matching algorithms. We present an algorithm based on a locality-sensitive hashing technique, which allocates quadrilateral models in the navigation feature database into different hash buckets and reduces the search range to the bucket in which the observed quadrilateral model is located. Experimental results indicate the effectiveness of our method.
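A minimal sketch of the bucketing idea, assuming each quadrilateral model is summarized by a small invariant feature vector: quantizing the features into a coarse grid cell acts as the hash, so a query only searches its own bucket. Real locality-sensitive hashing would use several hash tables and probe neighboring buckets; the single grid hash here is a simplification.

```python
def bucket_key(features, cell=0.1):
    """Quantize a feature vector into a coarse grid cell (the hash bucket)."""
    return tuple(int(f // cell) for f in features)

def build_index(models, cell=0.1):
    """Allocate each named quadrilateral model to its bucket."""
    index = {}
    for name, feats in models.items():
        index.setdefault(bucket_key(feats, cell), []).append(name)
    return index

def candidates(index, feats, cell=0.1):
    """Search only the bucket where the observed model lands."""
    return index.get(bucket_key(feats, cell), [])
```

A query whose features fall in the same cell as a stored model retrieves only that bucket's members, instead of scanning the whole navigation feature database.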
Novel Duplicate Address Detection with Hash Function
Song, GuangJia; Ji, ZhenZhou
2016-01-01
Duplicate address detection (DAD) is an important component of the address resolution protocol (ARP) and the neighbor discovery protocol (NDP). DAD determines whether an IP address is in conflict with other nodes. In traditional DAD, the target address to be detected is broadcast through the network, which provides convenience for malicious nodes to attack. A malicious node can send a spoofed reply to prevent the address configuration of a normal node, and thus a denial-of-service attack is launched. This study proposes a hash method to hide the target address in DAD, which prevents an attacking node from launching destination attacks. If the address of a normal node is identical to the detection address, then its hash value should be the same as the “Hash_64” field in the neighbor solicitation message. Consequently, DAD can be successfully completed. This process is called DAD-h. Simulation results indicate that address configuration using DAD-h has a considerably higher success rate under attack compared with traditional DAD. Comparative analysis shows that DAD-h requires neither third-party devices nor considerable computing resources; it also provides a lightweight security solution. PMID:26991901
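The hidden-target idea can be sketched as follows. The nonce handling and the 64-bit truncation of SHA-256 are assumptions for illustration, not the paper's exact message format: the point is that only a node already holding the probed address can recognize itself in the Hash_64 field.

```python
import hashlib

def hash_64(address: str, nonce: str) -> str:
    """64-bit truncated digest standing in for the Hash_64 field (sketch)."""
    return hashlib.sha256((address + nonce).encode()).hexdigest()[:16]

def dad_h_conflict(own_address: str, hash_64_field: str, nonce: str) -> bool:
    """A node replies only if hashing its own address reproduces the field,
    so the probed target address never travels in cleartext."""
    return hash_64(own_address, nonce) == hash_64_field
```

An attacker observing the solicitation sees only the digest and the nonce, so it cannot tell which address is being probed and therefore cannot craft a targeted spoofed reply.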
Ordered versus Unordered Map for Primitive Data Types
2015-09-01
mapped to some element. C++ provides two types of map containers within the standard template library, the std::map and the std::unordered_map classes. As the name implies, the containers' main functional difference is that the elements in the std::map are ordered by the key, and the std::unordered_map are not ordered based on their key. The std::unordered_map elements are placed into “buckets” based on a hash value computed for their key
Improving Sector Hash Carving with Rule-Based and Entropy-Based Non-Probative Block Filters
2015-03-01
0x20 exceeds the histogram rule's threshold of 256 instances of a single 4-byte value. The 0x20 bytes are part of an Extensible Metadata Platform (XMP...block consists of data separated by NULL bytes of padding. The histogram rule is triggered for the block because the block contains more than 256 4...sdhash can reduce the rate of false positive matches. After characteristic features have been selected, the features are hashed using SHA-1, which creates
2008-08-19
1 hash of the page page%d sha256 The segment for the SHA256 hash of the page. Bad Sector Management: badsectors The number of sectors in the image...written, AFFLIB can automatically compute the page's MD5, SHA-1, and/or SHA256 hash and write an associated segment containing the hash value. The...are written into segments themselves, with the segment name being name/sha256, where name is the original segment name and sha256 is the hash algorithm used
Double hashing technique in closed hashing search process
NASA Astrophysics Data System (ADS)
Rahim, Robbi; Zulkarnain, Iskandar; Jaya, Hendra
2017-09-01
The search process is used in various activities performed both online and offline. Many algorithms can be used to perform the search process, one of which is the hash search algorithm. The search process with the hash search algorithm in this study uses a double hashing technique, where the data are formed into a table of fixed length and then searched. The results of this study indicate that the search process with the double hashing technique allows faster searching than usual search techniques. This approach searches for the solution by dividing the values into a main table and an overflow table, so that the search process is expected to be faster than when the data are stacked in a single table, and data collisions can be avoided.
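The probe sequence of double hashing can be sketched directly. This is the textbook technique, not the paper's specific main-table/overflow-table layout; the table length is chosen prime so the second hash is always coprime to it and every slot is reachable.

```python
def h1(key: int, m: int) -> int:
    """Primary hash: initial slot."""
    return key % m

def h2(key: int, m: int) -> int:
    """Secondary hash: probe step, never zero so all slots are reachable."""
    return 1 + (key % (m - 1))

def insert(table, key):
    """Place key at the first free slot along its probe sequence."""
    m = len(table)
    for i in range(m):
        slot = (h1(key, m) + i * h2(key, m)) % m
        if table[slot] is None:
            table[slot] = key
            return slot
    raise RuntimeError("table full")

def search(table, key):
    """Follow the same probe sequence; an empty slot means 'absent'."""
    m = len(table)
    for i in range(m):
        slot = (h1(key, m) + i * h2(key, m)) % m
        if table[slot] is None:
            return -1
        if table[slot] == key:
            return slot
    return -1
```

With a table of length 7, keys 10 and 17 both hash to slot 3, but the second key steps by h2(17, 7) = 6 and lands in slot 2 instead of clustering.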
NASA Astrophysics Data System (ADS)
Berchtold, Waldemar; Schäfer, Marcel; Rettig, Michael; Steinebach, Martin
2014-02-01
3D models and applications are of utmost interest in both science and industry. As their usage grows, so does their number, and with it the challenge of correctly identifying them. Content identification is commonly done by cryptographic hashes. However, these fail as a solution in application scenarios such as computer aided design (CAD), scientific visualization or video games, because even the smallest alteration of the 3D model, e.g. a conversion or compression operation, massively changes the cryptographic hash as well. Therefore, this work presents a robust hashing algorithm for 3D mesh data. The algorithm applies several different bit extraction methods. They are built to resist desired alterations of the model as well as malicious attacks intended to prevent correct allocation. The different bit extraction methods are tested against each other and, as far as possible, the hashing algorithm is compared to the state of the art. The parameters tested are robustness, security and runtime performance as well as False Acceptance Rate (FAR) and False Rejection Rate (FRR), and the probability of hash collision is also calculated. The introduced hashing algorithm is kept adaptive, e.g. in hash length, to serve as a proper tool for all applications in practice.
Deep classification hashing for person re-identification
NASA Astrophysics Data System (ADS)
Wang, Jiabao; Li, Yang; Zhang, Xiancai; Miao, Zhuang; Tao, Gang
2018-04-01
With the development of public surveillance, person re-identification becomes more and more important. Large-scale databases call for efficient computation and storage, and hashing is one of the most important techniques. In this paper, we propose a new deep classification hashing network by introducing a new binary appropriation layer into traditional ImageNet pre-trained CNN models. It outputs binary-appropriate features, which can easily be quantized into binary hash codes for Hamming similarity comparison. Experiments show that our deep hashing method outperforms state-of-the-art methods on the public CUHK03 and Market1501 datasets.
A Robust and Effective Smart-Card-Based Remote User Authentication Mechanism Using Hash Function
Odelu, Vanga; Goswami, Adrijit
2014-01-01
In a remote user authentication scheme, a remote server verifies whether a login user is genuine and trustworthy, and, for mutual authentication, a login user likewise validates whether the remote server is genuine and trustworthy. Several remote user authentication schemes using the password, biometrics, and the smart card have been proposed in the literature. However, most schemes proposed in the literature are either computationally expensive or insecure against several known attacks. In this paper, we aim to propose a new robust and effective password-based remote user authentication scheme using a smart card. Our scheme is efficient, because it uses only the efficient one-way hash function and bitwise XOR operations. Through rigorous informal and formal security analysis, we show that our scheme is secure against possible known attacks. We perform the simulation for the formal security analysis using the widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool to ensure that our scheme is secure against passive and active attacks. Furthermore, our scheme efficiently supports the password change phase, which is always performed locally and correctly without contacting the remote server. In addition, our scheme performs significantly better than other existing schemes in terms of communication, computational overheads, security, and the features it provides. PMID:24892078
Das, Ashok Kumar; Goswami, Adrijit
2014-06-01
Recently, Awasthi and Srivastava proposed a novel biometric remote user authentication scheme for the telecare medicine information system (TMIS) with nonce. Their scheme is very efficient as it is based on an efficient chaotic one-way hash function and bitwise XOR operations. In this paper, we first analyze Awasthi-Srivastava's scheme and then show that it has several drawbacks: (1) an incorrect password change phase, (2) failure to preserve the user anonymity property, (3) failure to establish a secret session key between a legal user and the server, (4) failure to protect against the strong replay attack, and (5) lack of rigorous formal security analysis. We then propose a novel and secure biometric-based remote user authentication scheme in order to withstand the security flaws found in Awasthi-Srivastava's scheme and enhance the features required for an ideal user authentication scheme. Through rigorous informal and formal security analysis, we show that our scheme is secure against possible known attacks. In addition, we simulate our scheme for formal security verification using the widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool and show that it is secure against passive and active attacks, including replay and man-in-the-middle attacks. Our scheme is also efficient compared to Awasthi-Srivastava's scheme.
Manticore and CS mode : parallelizable encryption with joint cipher-state authentication.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torgerson, Mark Dolan; Draelos, Timothy John; Schroeppel, Richard Crabtree
2004-10-01
We describe a new mode of encryption with inexpensive authentication, which uses information from the internal state of the cipher to provide the authentication. Our algorithms have a number of benefits: (1) the encryption has properties similar to CBC mode, yet the encipherment and authentication can be parallelized and/or pipelined, (2) the authentication overhead is minimal, and (3) the authentication process remains resistant against some IV reuse. We offer a Manticore class of authenticated encryption algorithms based on cryptographic hash functions, which support variable block sizes up to twice the hash output length and variable key lengths. A proof of security is presented for the MTC4 and Pepper algorithms. We then generalize the construction to create the Cipher-State (CS) mode of encryption that uses the internal state of any round-based block cipher as an authenticator. We provide hardware and software performance estimates for all of our constructions and give a concrete example of the CS mode of encryption that uses AES as the encryption primitive and adds a small speed overhead (10-15%) compared to AES alone.
Learning Short Binary Codes for Large-scale Image Retrieval.
Liu, Li; Yu, Mengyang; Shao, Ling
2017-03-01
Large-scale visual information retrieval has become an active research area in this big data era. Recently, hashing/binary coding algorithms have proven to be effective for scalable retrieval applications. Most existing hashing methods require relatively long binary codes (i.e., over hundreds of bits, sometimes even thousands of bits) to achieve reasonable retrieval accuracies. However, for some realistic and unique applications, such as on wearable or mobile devices, only short binary codes can be used for efficient image retrieval due to the limitation of computational resources or bandwidth on these devices. In this paper, we propose a novel unsupervised hashing approach called min-cost ranking (MCR) specifically for learning powerful short binary codes (i.e., usually with a code length shorter than 100 bits) for scalable image retrieval tasks. By exploring the discriminative ability of each dimension of the data, MCR generates a one-bit binary code for each dimension and simultaneously ranks the discriminative separability of each bit according to the proposed cost function. Only top-ranked bits with minimum cost values are then selected and grouped together to compose the final salient binary codes. Extensive experimental results on large-scale retrieval demonstrate that MCR can achieve comparable performance to state-of-the-art hashing algorithms but with significantly shorter codes, leading to much faster large-scale retrieval.
Method and system for analyzing and classifying electronic information
McGaffey, Robert W.; Bell, Michael Allen; Kortman, Peter J.; Wilson, Charles H.
2003-04-29
A data analysis and classification system that reads the electronic information, analyzes the electronic information according to a user-defined set of logical rules, and returns a classification result. The data analysis and classification system may accept any form of computer-readable electronic information. The system creates a hash table wherein each entry of the hash table contains a concept corresponding to a word or phrase which the system has previously encountered. The system creates an object model based on the user-defined logical associations, which is used to review each concept contained in the electronic information in order to determine whether the electronic information is classified. The data analysis and classification system extracts each concept in turn from the electronic information, locates it in the hash table, and propagates it through the object model. In the event that the system cannot find the electronic information token in the hash table, that token is added to a missing-terms list. If any rule is satisfied during propagation of the concept through the object model, the electronic information is classified.
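The hash-table lookup with a missing-terms list might be sketched as below, using a Python dict as the hash table. The rule model here (a label fires when a required set of concepts is all present) is a drastic simplification of the patent's object model, and all names are illustrative.

```python
def classify(text, concept_table, rules):
    """Look up each token in the concept hash table, collect missing terms,
    and fire the first rule whose required concept set is fully present."""
    concepts, missing = set(), []
    for token in text.lower().split():
        if token in concept_table:
            concepts.add(concept_table[token])
        else:
            missing.append(token)  # unknown tokens go to the missing-terms list
    label = next((lab for lab, required in rules.items()
                  if required <= concepts), "unclassified")
    return label, missing
```

For instance, if "rocket" and "fuel" map to a propulsion concept and "launch" to an operations concept, a rule requiring both concepts classifies a sentence containing "rocket" and "launch", while unknown words are reported for later table maintenance.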
Adaptive Bloom Filter: A Space-Efficient Counting Algorithm for Unpredictable Network Traffic
NASA Astrophysics Data System (ADS)
Matsumoto, Yoshihide; Hazeyama, Hiroaki; Kadobayashi, Youki
The Bloom Filter (BF), a space- and time-efficient hash-coding method, is used as one of the fundamental modules in several network processing algorithms and applications such as route lookups, cache hits, packet classification, per-flow state management and network monitoring. BF is a simple space-efficient randomized data structure used to represent a data set in order to support membership queries. However, BF generates false positives and cannot count the number of distinct elements. A counting Bloom Filter (CBF) can count the number of distinct elements, but CBF needs more space than BF. We propose an alternative data structure to CBF, which we call an Adaptive Bloom Filter (ABF). Although ABF uses the same-sized bit-vector as BF, the number of hash functions employed by ABF is dynamically changed to record the number of appearances of each key element. Considering hash collisions, the multiplicity of each key element in ABF can be estimated from the number of hash functions used to decode the membership of that key element. Although ABF realizes the same functionality as CBF, ABF requires only the same memory size as BF. We describe the construction of ABF and IABF (Improved ABF), and provide a mathematical analysis and simulation using Zipf's distribution. Finally, we show that ABF can be used for an unpredictable data set such as real network traffic.
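A plain Bloom filter, the baseline whose memory footprint ABF keeps, can be sketched as follows. The parameters m and k and the way of deriving k bit positions from salted SHA-256 digests are illustrative choices; ABF would additionally vary the number of hash functions per element to encode its multiplicity.

```python
import hashlib

class BloomFilter:
    def __init__(self, m=256, k=3):
        self.m, self.k = m, k
        self.bits = [0] * m  # the fixed-size bit vector

    def _positions(self, item):
        """Derive k bit positions by hashing the item with k different salts."""
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        # All k bits set => "probably present"; any bit clear => definitely absent.
        return all(self.bits[p] for p in self._positions(item))
```

Membership queries never miss an inserted element; the price is a small false-positive probability that grows as the bit vector fills.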
Algorithm That Synthesizes Other Algorithms for Hashing
NASA Technical Reports Server (NTRS)
James, Mark
2010-01-01
An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear, monotonically increasing sequence of integers. The goal in formulating this mapping is to make the length of the generated sequence as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups, which involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. It further guarantees that any algorithm it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms employ a set of techniques, each of which works very effectively for a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: (1) Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until solutions are found.
From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the minimum amount of time. (2) Given a list of numbers, try to find one or more solutions in which, if each number is compressed by use of the modulo function by some value, then a unique value is generated.
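The first type of subalgorithm (shift-then-mask) can be sketched as a brute-force search over shifts and mask widths; the search bounds and the preference for the narrowest mask are illustrative assumptions.

```python
def find_shift_mask(keys, max_shift=32, max_bits=12):
    """Search for (shift, mask) such that (key >> shift) & mask is unique
    for every key, i.e. a collision-free mapping into a small range.

    Trying narrower masks first favors the most compressed representation."""
    for bits in range(1, max_bits + 1):
        mask = (1 << bits) - 1
        for shift in range(max_shift):
            mapped = {(k >> shift) & mask for k in keys}
            if len(mapped) == len(keys):
                return shift, mask
    return None  # no solution within the search bounds
```

For the keys 0x10, 0x20, 0x30, 0x40, shifting right by 4 and masking with 0b11 already yields four distinct values, so membership can later be decided by a constant-time table lookup with no collision handling.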
Das, Ashok Kumar; Odelu, Vanga; Goswami, Adrijit
2015-09-01
The telecare medicine information system (TMIS) helps patients to obtain health monitoring at home and to access medical services over the Internet or mobile networks. Recently, Amin and Biswas presented a smart-card-based user authentication and key agreement security protocol for TMIS using the cryptographic one-way hash function and the biohashing function, and claimed that their scheme is secure against all possible attacks. Though their scheme is efficient due to its usage of the one-way hash function, we show that it has several security pitfalls and design flaws: (1) it fails to protect against the privileged-insider attack, (2) it fails to protect against the strong replay attack, (3) it fails to protect against the strong man-in-the-middle attack, (4) it has a design flaw in the user registration phase, (5) it has a design flaw in the login phase, (6) it has a design flaw in the password change phase, (7) it lacks support for a biometric update phase, and (8) it has flaws in its formal security analysis. In order to withstand these security pitfalls and design flaws, we propose a secure and robust user authenticated key agreement scheme for the hierarchical multi-server environment suitable for TMIS using the cryptographic one-way hash function and a fuzzy extractor. Through rigorous security analysis, including formal security analysis using the widely accepted Burrows-Abadi-Needham (BAN) logic, formal security analysis under the random oracle model, and informal security analysis, we show that our scheme is secure against possible known attacks. Furthermore, we simulate our scheme using the most widely accepted and used Automated Validation of Internet Security Protocols and Applications (AVISPA) tool. The simulation results show that our scheme is also secure. Our scheme is more efficient in computation and communication than Amin-Biswas's scheme and other related schemes.
In addition, our scheme supports extra functionality features compared to other related schemes. As a result, our scheme is very appropriate for practical applications in TMIS.
A hash based mutual RFID tag authentication protocol in telecare medicine information system.
Srivastava, Keerti; Awasthi, Amit K; Kaul, Sonam D; Mittal, R C
2015-01-01
Radio Frequency Identification (RFID) is a technology with multidimensional applications that reduce the complexity of modern life. RFID is used extensively in areas such as access control, transportation, real-time inventory, asset management, and automated payment systems. Recently, this technology has been spreading into healthcare environments, where potential applications include patient monitoring, object traceability, and drug administration systems. In this paper, we propose a secure RFID-based protocol for the medical sector. This protocol is based on hash operations with a synchronized secret. The protocol is safe against active and passive attacks such as forgery, traceability, replay, and de-synchronization attacks.
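A hash-based challenge-response with a synchronized secret might be sketched as below. The message flow, the nonce length, and the way both sides evolve the secret by re-hashing are illustrative assumptions; the actual protocol also handles de-synchronization recovery, which this sketch omits.

```python
import hashlib
import secrets

def H(*parts: str) -> str:
    """One-way hash over a tuple of protocol fields."""
    return hashlib.sha256("|".join(parts).encode()).hexdigest()

class Tag:
    def __init__(self, secret):
        self.secret = secret
    def respond(self, challenge):
        return H(self.secret, challenge)
    def update(self):
        self.secret = H(self.secret)  # synchronized secret evolves one step

class Reader:
    def __init__(self, secret):
        self.secret = secret
    def authenticate(self, tag):
        challenge = secrets.token_hex(8)  # fresh nonce thwarts replay
        if tag.respond(challenge) != H(self.secret, challenge):
            return False                  # mismatch: do not advance the secret
        tag.update()
        self.secret = H(self.secret)      # both sides stay in sync
        return True
```

Because each run uses a fresh nonce and both parties advance the shared secret in lockstep, a captured response cannot be replayed, and the evolving secret limits traceability of the tag across sessions.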
HECLIB. Volume 2: HECDSS Subroutines Programmer’s Manual
1991-05-01
algorithm and hierarchical design for database accesses. This algorithm provides quick access to data sets and an efficient means of adding new data set...Description of How DSS Works: DSS version 6 utilizes a modified hash algorithm based upon the pathname to store and retrieve data. This structure allows...balancing disk space and record access times. A variation of this algorithm is used for "stable" files. In a stable file, a hash table is not utilized
Chen, Huifang; Xie, Lei
2014-01-01
Self-healing group key distribution (SGKD) aims to deal with the key distribution problem over an unreliable wireless network. In this paper, we investigate the SGKD issue in resource-constrained wireless networks. We propose two improved SGKD schemes using the one-way hash chain (OHC) and the revocation polynomial (RP), the OHC&RP-SGKD schemes. In the proposed OHC&RP-SGKD schemes, by introducing a unique session identifier and binding the joining time to the capability of recovering previous session keys, the problem of the collusion attack between revoked users and newly joined users in existing hash chain-based SGKD schemes is resolved. Moreover, novel methods for utilizing the one-way hash chain and constructing the personal secret, the revocation polynomial and the key updating broadcast packet are presented. Hence, the proposed OHC&RP-SGKD schemes eliminate the limitation that the maximum allowed number of revoked users places on the maximum allowed number of sessions, increase the maximum allowed number of revoked/colluding users, and reduce the redundancy in the key updating broadcast packet. Performance analysis and simulation results show that the proposed OHC&RP-SGKD schemes are practical for resource-constrained wireless networks in harsh environments, where strong collusion attack resistance is required and many users could be revoked. PMID:25529204
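The one-way hash chain at the core of such schemes can be sketched as follows: keys are generated by repeated hashing and disclosed in reverse order, so any disclosed key can be verified against a published anchor, while future keys cannot be computed from past ones. The chain length and SHA-256 are illustrative choices.

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, n: int):
    """Build k_n = seed, k_{i-1} = H(k_i); return [k_0, k_1, ..., k_n].

    k_0 (the anchor) is published first; keys are then disclosed
    in increasing index order, i.e. in reverse order of generation."""
    chain = [seed]
    for _ in range(n):
        chain.append(H(chain[-1]))
    return chain[::-1]

def verify(anchor: bytes, key: bytes, i: int) -> bool:
    """A key claimed for session i must hash back to the anchor in i steps."""
    for _ in range(i):
        key = H(key)
    return key == anchor
```

One-wayness of H is what gives the self-healing property its security: a node that missed some sessions can re-derive the missed keys from a later disclosed key, but a revoked user holding only old keys cannot move forward along the chain.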
Brodey, Benjamin B; Gonzalez, Nicole L; Elkin, Kathryn Ann; Sasiela, W Jordan; Brodey, Inger S
2017-09-06
The computerized administration of self-report psychiatric diagnostic and outcomes assessments has risen in popularity. If results are similar enough across different administration modalities, then new administration technologies can be used interchangeably and the choice of technology can be based on other factors, such as convenience in the study design. An assessment based on item response theory (IRT), such as the Patient-Reported Outcomes Measurement Information System (PROMIS) depression item bank, offers new possibilities for assessing the effect of technology choice upon results. Our objective was to create equivalent halves of the PROMIS depression item bank and to use these halves to compare survey responses and user satisfaction among administration modalities (paper, mobile phone, or tablet) with a community mental health care population. The 28 PROMIS depression items were divided into 2 halves based on content and simulations with an established PROMIS response data set. A total of 129 participants were recruited from an outpatient public sector mental health clinic based in Memphis. All participants took both nonoverlapping halves of the PROMIS IRT-based depression items (Part A and Part B): once using paper and pencil, and once using either a mobile phone or tablet. An 8-cell randomization was done on the technology used, the order of technologies used, and the order of PROMIS Parts A and B. Both Parts A and B were administered as fixed-length assessments and both were scored using published PROMIS IRT parameters and algorithms. All 129 participants received either Part A or B via paper assessment. Participants were also administered the opposite assessment, 63 using a mobile phone and 66 using a tablet. There was no significant difference in item response scores for Part A versus B. All 3 of the technologies yielded essentially identical assessment results and equivalent satisfaction levels.
Our findings show that the PROMIS depression assessment can be divided into 2 equivalent halves, with the potential to simplify future experimental methodologies. Among community mental health care recipients, the PROMIS items function similarly whether administered via paper, tablet, or mobile phone. User satisfaction across modalities was also similar. Because paper, tablet, and mobile phone administrations yielded similar results, the choice of technology should be based on factors such as convenience and can even be changed during a study without adversely affecting the comparability of results. ©Benjamin B Brodey, Nicole L Gonzalez, Kathryn Ann Elkin, W Jordan Sasiela, Inger S Brodey. Originally published in JMIR Mental Health (http://mental.jmir.org), 06.09.2017.
Zhang, Guo-Qiang; Tao, Shiqiang; Xing, Guangming; Mozes, Jeno; Zonjy, Bilal; Lhatoo, Samden D; Cui, Licong
2015-11-10
A unique study identifier serves as a key for linking research data about a study subject without revealing protected health information in the identifier. While sufficient for single-site and limited-scale studies, the use of common unique study identifiers has several drawbacks for large multicenter studies, where thousands of research participants may be recruited from multiple sites. An important property of study identifiers is error tolerance (validatability), in that inadvertent editing mistakes during their transmission and use will most likely result in invalid study identifiers. This paper introduces a novel method called "Randomized N-gram Hashing (NHash)" for generating unique study identifiers in a distributed and validatable fashion in multicenter research. NHash has a unique set of properties: (1) it is a pseudonym serving the purpose of linking research data about a study participant for research purposes; (2) it can be generated automatically in a completely distributed fashion with virtually no risk of identifier collision; (3) it incorporates a set of cryptographic hash functions based on N-grams, with a combination of additional encryption techniques such as a shift cipher; (4) it is validatable (error tolerant) in the sense that inadvertent edit errors will mostly result in invalid identifiers. NHash consists of 2 phases. First, an intermediate string using randomized N-gram hashing is generated. This string consists of a collection of N-gram hashes f1, f2, ..., fk. The input for each function fi has 3 components: a random number r, an integer n, and input data m. The result, fi(r, n, m), is an n-gram of m with a starting position s, which is computed as (r mod |m|), where |m| represents the length of m. The output of Phase 1 is the concatenation of the sequence f1(r1, n1, m1), f2(r2, n2, m2), ..., fk(rk, nk, mk). In the second phase, the intermediate string generated in Phase 1 is encrypted using techniques such as a shift cipher.
The result of the encryption, concatenated with the random number r, is the final NHash study identifier. We performed experiments using a large synthesized dataset, comparing NHash with random strings, and demonstrated a negligible probability of collision. We implemented NHash for the Center for SUDEP Research (CSR), a National Institute of Neurological Disorders and Stroke-funded Center Without Walls for Collaborative Research in the Epilepsies. This multicenter collaboration involves 14 institutions across the United States and Europe, bringing together extensive and diverse expertise to understand sudden unexpected death in epilepsy patients (SUDEP). The CSR Data Repository has successfully used NHash to link deidentified multimodal clinical data collected in participating CSR institutions, meeting all desired objectives of NHash.
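The two phases described might be sketched like this. The wrap-around behavior of the n-gram, the printable-ASCII shift cipher, and the fixed parameters are assumptions for illustration; the paper uses a sequence of functions f1, ..., fk with a distinct random number and length per input field.

```python
def ngram(r: int, n: int, m: str) -> str:
    """fi(r, n, m): the n-gram of m starting at s = r mod |m|.

    Wrapping past the end of m is an assumption made here so the
    n-gram is always n characters long."""
    s = r % len(m)
    doubled = m + m
    return doubled[s:s + n]

def shift_cipher(text: str, shift: int) -> str:
    """Phase 2: shift each character within the 95 printable ASCII codes."""
    return "".join(chr((ord(c) - 32 + shift) % 95 + 32) for c in text)

def nhash(fields, r: int, n: int = 3, shift: int = 7) -> str:
    """Phase 1: concatenate n-gram hashes of each input field.
    Phase 2: shift-cipher the intermediate string and append r."""
    intermediate = "".join(ngram(r, n, f) for f in fields)
    return shift_cipher(intermediate, shift) + str(r)
```

Error tolerance comes from the structure: editing a character of a valid identifier almost always breaks the consistency between the enciphered n-grams and the appended random number, so the result fails validation.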
Digital data storage systems, computers, and data verification methods
Groeneveld, Bennett J.; Austad, Wayne E.; Walsh, Stuart C.; Herring, Catherine A.
2005-12-27
Digital data storage systems, computers, and data verification methods are provided. According to a first aspect of the invention, a computer includes an interface adapted to couple with a dynamic database; and processing circuitry configured to provide a first hash from digital data stored within a portion of the dynamic database at an initial moment in time, to provide a second hash from digital data stored within the portion of the dynamic database at a subsequent moment in time, and to compare the first hash and the second hash.
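The two-snapshot comparison can be sketched minimally. Serializing the database portion canonically before hashing (here via sorted JSON, an illustrative choice, not the patent's mechanism) makes the hash deterministic for equal content.

```python
import hashlib
import json

def snapshot_hash(db_rows) -> str:
    """Hash a portion of the dynamic database at one moment in time."""
    canonical = json.dumps(db_rows, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def unchanged(first_hash: str, second_hash: str) -> bool:
    """Compare the initial and subsequent hashes to verify the data."""
    return first_hash == second_hash
```

Taking the first hash at an initial moment and the second at a later moment, equality of the two digests verifies (with overwhelming probability) that the covered portion of the database was not modified in between.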
NASA Astrophysics Data System (ADS)
Yang, YuGuang; Zhang, YuChen; Xu, Gang; Chen, XiuBo; Zhou, Yi-Hua; Shi, WeiMin
2018-03-01
Li et al. first proposed a quantum hash function (QHF) in a quantum-walk architecture. In their scheme, two two-particle interactions, i.e., the I interaction and the π-phase interaction, are introduced, and the choice of the I or π-phase interaction at each iteration depends on a message bit. In this paper, we propose an efficient QHF by dense coding of coin operators in discrete-time quantum walk. Compared with existing QHFs, our protocol has the following advantages: the efficiency of the QHF can be doubled or more; only one particle is needed and two-particle interactions are unnecessary, so that quantum resources are saved. It is a clue to applying the dense coding technique to quantum cryptographic protocols, especially to applications with restricted quantum resources.
Perl Modules for Constructing Iterators
NASA Technical Reports Server (NTRS)
Tilmes, Curt
2009-01-01
The Iterator Perl Module provides a general-purpose framework for constructing iterator objects within Perl, and a standard API for interacting with those objects. Iterators are an object-oriented design pattern where a description of a series of values is used in a constructor. Subsequent queries can request values in that series. These Perl modules build on the standard Iterator framework and provide iterators for some other types of values. Iterator::DateTime constructs iterators from DateTime objects or Date::Parse descriptions and iCal/RFC 2445 style recurrence descriptions. It supports a variety of input parameters, including a start to the sequence, an end to the sequence, an iCal/RFC 2445 recurrence describing the frequency of the values in the series, and a format description that can refine the presentation manner of the DateTime. Iterator::String constructs iterators from string representations. This module is useful in contexts where the API consists of supplying a string and getting back an iterator where the specific iteration desired is opaque to the caller. It is of particular value to the Iterator::Hash module which provides nested iterations. Iterator::Hash constructs iterators from Perl hashes that can include multiple iterators. The constructed iterators will return all the permutations of the iterations of the hash by nested iteration of embedded iterators. A hash simply includes a set of keys mapped to values. It is a very common data structure used throughout Perl programming. The Iterator::Hash module allows a hash to include strings defining iterators (parsed and dispatched with Iterator::String) that are used to construct an overall series of hash values.
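The nested-iteration idea behind Iterator::Hash can be approximated in Python. This is a loose analogy using itertools.product, not the module's API: the real module parses Iterator::String descriptions, which this sketch omits in favor of plain lists.

```python
from itertools import product

def hash_iterator(spec):
    """Sketch of the Iterator::Hash idea: a dict maps keys to either a fixed
    value or an iterable of values; the iterator yields every permutation of
    the embedded iterations as concrete dicts."""
    keys = list(spec)
    value_lists = [v if isinstance(v, (list, tuple)) else [v]
                   for v in spec.values()]
    for combo in product(*value_lists):
        yield dict(zip(keys, combo))

# Two embedded iterators of 2 values each -> 4 permutations of the hash.
combos = list(hash_iterator({"host": ["a", "b"], "port": [80, 443], "tls": True}))
```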
Generating region proposals for histopathological whole slide image retrieval.
Ma, Yibing; Jiang, Zhiguo; Zhang, Haopeng; Xie, Fengying; Zheng, Yushan; Shi, Huaqiang; Zhao, Yu; Shi, Jun
2018-06-01
Content-based image retrieval is an effective method for histopathological image analysis. However, given a database of huge whole slide images (WSIs), acquiring appropriate regions of interest (ROIs) for training is significant and difficult. Moreover, histopathological images can only be annotated by pathologists, resulting in a lack of labeling information. Therefore, it is an important and challenging task to generate ROIs from WSIs and retrieve images with few labels. This paper presents a novel unsupervised region proposing method for histopathological WSI based on Selective Search. Specifically, the WSI is over-segmented into regions which are hierarchically merged until the WSI becomes a single region. Nucleus-oriented similarity measures for region merging and a Nucleus-Cytoplasm color space for histopathological images are specially defined to generate accurate region proposals. Additionally, we propose a new semi-supervised hashing method for image retrieval. The semantic features of images are extracted with Latent Dirichlet Allocation and transformed into binary hashing codes with Supervised Hashing. The methods are tested on a large-scale multi-class database of breast histopathological WSIs. The results demonstrate that for one WSI, our region proposing method can generate 7.3 thousand contoured regions which fit well with 95.8% of the ROIs annotated by pathologists. The proposed hashing method can retrieve a query image among 136 thousand images in 0.29 s and reach a precision of 91% with only 10% of images labeled. The unsupervised region proposing method can generate regions as predictions of lesions in histopathological WSI. The region proposals can also serve as training samples to train machine-learning models for image retrieval. The proposed hashing method can achieve fast and precise image retrieval with a small amount of labels. Furthermore, the proposed methods can potentially be applied in online computer-aided-diagnosis systems.
Copyright © 2018 Elsevier B.V. All rights reserved.
Supervised graph hashing for histopathology image retrieval and classification.
Shi, Xiaoshuang; Xing, Fuyong; Xu, KaiDi; Xie, Yuanpu; Su, Hai; Yang, Lin
2017-12-01
In pathology image analysis, morphological characteristics of cells are critical to grade many diseases. With the development of cell detection and segmentation techniques, it is possible to extract cell-level information for further analysis in pathology images. However, it is challenging to conduct efficient analysis of cell-level information on a large-scale image dataset because each image usually contains hundreds or thousands of cells. In this paper, we propose a novel image retrieval based framework for large-scale pathology image analysis. For each image, we encode each cell into binary codes to generate image representation using a novel graph based hashing model and then conduct image retrieval by applying a group-to-group matching method to similarity measurement. In order to improve both computational efficiency and memory requirement, we further introduce matrix factorization into the hashing model for scalable image retrieval. The proposed framework is extensively validated with thousands of lung cancer images, and it achieves 97.98% classification accuracy and 97.50% retrieval precision with all cells of each query image used. Copyright © 2017 Elsevier B.V. All rights reserved.
Security analysis of boolean algebra based on Zhang-Wang digital signature scheme
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Jinbin, E-mail: jbzheng518@163.com
2014-10-06
In 2005, Zhang and Wang proposed an improved signature scheme without using a one-way hash function or message redundancy. In this paper, we show that this scheme exhibits potential security weaknesses, through an analysis based on Boolean algebra (e.g., bitwise exclusive-or). We also point out, by analyzing the resulting assembly program segment, that the mapping between assembly instructions and machine code is in fact not one-to-one, which may cause security problems unknown to the software.
System using data compression and hashing adapted for use for multimedia encryption
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coffland, Douglas R
2011-07-12
A system and method is disclosed for multimedia encryption. Within the system of the present invention, a data compression module receives and compresses a media signal into a compressed data stream. A data acquisition module receives and selects a set of data from the compressed data stream. And, a hashing module receives and hashes the set of data into a keyword. The method of the present invention includes the steps of compressing a media signal into a compressed data stream; selecting a set of data from the compressed data stream; and hashing the set of data into a keyword.
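The three modules in this patent abstract form a simple compress-select-hash pipeline. A hedged Python sketch, with zlib and SHA-256 as stand-ins for the compression and hashing algorithms, which the patent does not name, and with an arbitrary slice standing in for the data acquisition step:

```python
import hashlib
import zlib

def media_keyword(media_bytes, start=0, length=64):
    """Sketch of the patented pipeline: compress the media signal, select a
    set of data from the compressed stream, and hash that data into a keyword."""
    compressed = zlib.compress(media_bytes)        # data compression module
    selected = compressed[start:start + length]    # data acquisition module
    return hashlib.sha256(selected).hexdigest()    # hashing module

# The resulting keyword can then serve as encryption key material.
keyword = media_keyword(b"\x00\x01" * 4096)
```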
Digital Camera with Apparatus for Authentication of Images Produced from an Image File
NASA Technical Reports Server (NTRS)
Friedman, Gary L. (Inventor)
1996-01-01
A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely related to the private key that digital data encrypted with the private key may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The authenticating apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match. Other techniques to address time-honored methods of deception, such as attaching false captions or inducing forced perspectives, are included.
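The sign-then-verify flow described above can be illustrated with textbook RSA. The tiny key below is purely illustrative (a real camera would embed a full-strength RSA or ECDSA key), and reducing the hash mod N is a simplification forced by the toy modulus, not part of the patented scheme.

```python
import hashlib

# Toy RSA key pair (textbook numbers; far too small to be secure).
P, Q = 61, 53
N = P * Q                  # modulus 3233
E, D = 17, 2753            # public / private exponents, E*D = 1 mod lcm(60, 52)

def image_hash(image_file):
    """Hash of the image file, reduced mod N so the toy key can sign it."""
    return int.from_bytes(hashlib.sha256(image_file).digest(), "big") % N

def sign(image_file):
    """Camera side: encrypt the image hash with the private key,
    producing a digital signature."""
    return pow(image_hash(image_file), D, N)

def authenticate(image_file, signature):
    """Verifier side: decrypt the signature with the public key and compare
    with a freshly computed hash of the received file."""
    return pow(signature, E, N) == image_hash(image_file)

photo = b"raw image bytes"
sig = sign(photo)
```

A matching pair authenticates the file; a modified signature (or file) fails the comparison.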
9 CFR 319.303 - Corned beef hash.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 9 Animals and Animal Products 2 2013-01-01 2013-01-01 false Corned beef hash. 319.303 Section 319.303 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... Products § 319.303 Corned beef hash. (a) “Corned Beef Hash” is the semi-solid food product in the form of a...
9 CFR 319.303 - Corned beef hash.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 9 Animals and Animal Products 2 2014-01-01 2014-01-01 false Corned beef hash. 319.303 Section 319.303 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... Products § 319.303 Corned beef hash. (a) “Corned Beef Hash” is the semi-solid food product in the form of a...
9 CFR 319.303 - Corned beef hash.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 9 Animals and Animal Products 2 2012-01-01 2012-01-01 false Corned beef hash. 319.303 Section 319.303 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... Products § 319.303 Corned beef hash. (a) “Corned Beef Hash” is the semi-solid food product in the form of a...
9 CFR 319.303 - Corned beef hash.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Corned beef hash. 319.303 Section 319.303 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... Products § 319.303 Corned beef hash. (a) “Corned Beef Hash” is the semi-solid food product in the form of a...
NASA Astrophysics Data System (ADS)
Al-Mansoori, Saeed; Kunhu, Alavi
2013-10-01
This paper proposes a blind multi-watermarking scheme based on designing two back-to-back encoders. The first encoder is implemented to embed a robust watermark into remote sensing imagery by applying a Discrete Cosine Transform (DCT) approach. Such a watermark is used in many applications to protect the copyright of the image. The second encoder embeds a fragile watermark using the SHA-1 hash function. The purpose behind embedding a fragile watermark is to prove the authenticity of the image (i.e., tamper-proofing). Thus, the proposed technique was developed in response to new challenges with piracy of remote sensing imagery ownership. This led researchers to look for different means to secure the ownership of satellite imagery and prevent the illegal use of these resources. Therefore, the Emirates Institution for Advanced Science and Technology (EIAST) proposed utilizing an existing data security concept by embedding a digital signature, a "watermark", into DubaiSat-1 satellite imagery. In this study, DubaiSat-1 images with 2.5 meter resolution are used as a cover and a colored EIAST logo is used as a watermark. In order to evaluate the robustness of the proposed technique, several attacks are applied, such as JPEG compression, rotation and synchronization attacks. Furthermore, tampering attacks are applied to prove image authenticity.
Rotation invariant deep binary hashing for fast image retrieval
NASA Astrophysics Data System (ADS)
Dai, Lai; Liu, Jianming; Jiang, Aiwen
2017-07-01
In this paper, we study how to compactly represent an image's characteristics for fast image retrieval. We propose supervised rotation-invariant compact discriminative binary descriptors obtained by combining a convolutional neural network with hashing. In the proposed network, binary codes are learned by employing a hidden layer for representing latent concepts that dominate the class labels. A loss function is proposed to minimize the difference between the binary descriptors of a reference image and its rotated version. Compared with some other supervised methods, the proposed network does not require pair-wise inputs for binary code learning. Experimental results show that our method is effective and achieves state-of-the-art results on the CIFAR-10 and MNIST datasets.
Clamshell microwave cavities having a superconductive coating
Cooke, D. Wayne; Arendt, Paul N.; Piel, Helmut
1994-01-01
A microwave cavity including a pair of opposing clamshell halves, such halves comprised of a metal selected from the group consisting of silver, copper, or a silver-based alloy, wherein the cavity is further characterized as exhibiting a dominant TE011 mode is provided together with an embodiment wherein the interior concave surfaces of the clamshell halves are coated with a superconductive material. In the case of copper clamshell halves, the microwave cavity has a Q-value of about 1.2×10^5 as measured at a temperature of 10 K and a frequency of 10 GHz.
A Key Establishment Protocol for RFID User in IPTV Environment
NASA Astrophysics Data System (ADS)
Jeong, Yoon-Su; Kim, Yong-Tae; Sohn, Jae-Min; Park, Gil-Cheol; Lee, Sang-Ho
In recent years, the usage of IPTV (Internet Protocol Television) has increased. The reason is the technological convergence of broadcasting and telecommunications, delivering interactive applications and multimedia content through high-speed Internet connections. The main critical point of IPTV security requirements is subscriber authentication. That is, IPTV service should have the capability to identify subscribers to prohibit illegal access. Currently, IPTV service does not provide a sound authentication mechanism to verify the identity of its wireless users (or devices). This paper focuses on a lightweight authentication and key establishment protocol based on the use of hash functions. The proposed approach provides effective authentication for a mobile user with an RFID tag whose authentication information is communicated back and forth with the IPTV authentication server via the IPTV set-top box (STB). That is, the proposed protocol generates the user's authentication information as a bundle of two public keys derived from hashing the user's private keys and the RFID tag's session identifier, and adds 1 bit to this bundled information for the subscriber's information confidentiality before passing it to the authentication server.
Teoh, Andrew B J; Goh, Alwyn; Ngo, David C L
2006-12-01
Biometric analysis for identity verification is becoming a widespread reality. Such implementations necessitate large-scale capture and storage of biometric data, which raises serious issues in terms of data privacy and (if such data is compromised) identity theft. These problems stem from the essential permanence of biometric data, which (unlike secret passwords or physical tokens) cannot be refreshed or reissued if compromised. Our previously presented biometric-hash framework prescribes the integration of external (password or token-derived) randomness with user-specific biometrics, resulting in bitstring outputs with security characteristics (i.e., noninvertibility) comparable to cryptographic ciphers or hashes. The resultant BioHashes are hence cancellable, i.e., straightforwardly revoked and reissued (via refreshed password or reissued token) if compromised. BioHashing furthermore enhances recognition effectiveness, which is explained in this paper as arising from the Random Multispace Quantization (RMQ) of biometric and external random inputs.
Compact binary hashing for music retrieval
NASA Astrophysics Data System (ADS)
Seo, Jin S.
2014-03-01
With the huge volume of music clips available for protection, browsing, and indexing, there is increased attention to retrieving the information content of music archives. Music-similarity computation is an essential building block for browsing, retrieval, and indexing of digital music archives. In practice, as the number of songs available for searching and indexing increases, storage cost becomes a serious problem in retrieval systems. This paper deals with the storage problem by extending the supervector concept with binary hashing. We utilize similarity-preserving binary embedding to generate a hash code from the supervector of each music clip. In particular, we compare the performance of various binary hashing methods for music retrieval tasks on the widely-used genre dataset and an in-house singer dataset. Through the evaluation, we find an effective way of generating hash codes for music similarity estimation which improves retrieval performance.
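One widely used similarity-preserving binary embedding is random-hyperplane (sign-of-projection) hashing, sketched below in generic Python. The paper compares several such methods; this sketch is one common family member, not the paper's specific supervector pipeline.

```python
import random

def lsh_hasher(dim, n_bits, seed=0):
    """Similarity-preserving binary embedding via random hyperplanes: each bit
    is the sign of the projection of the vector onto a random direction."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]
    def hash_code(vec):
        return "".join(
            "1" if sum(p * x for p, x in zip(plane, vec)) >= 0 else "0"
            for plane in planes)
    return hash_code

def hamming(a, b):
    """Distance between two hash codes; nearby vectors tend to agree on bits."""
    return sum(x != y for x, y in zip(a, b))
```

Clips with similar supervectors receive codes at a small Hamming distance, so the short codes can replace the full supervectors in storage.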
Chen, Xianlai; Fann, Yang C; McAuliffe, Matthew; Vismer, David
2017-01-01
Background As one of the several effective solutions for personal privacy protection, a global unique identifier (GUID) is linked with hash codes that are generated from combinations of personally identifiable information (PII) by a one-way hash algorithm. On the GUID server, no PII is permitted to be stored, and only GUID and hash codes are allowed. The quality of PII entry is critical to the GUID system. Objective The goal of our study was to explore a method of checking questionable entry of PII in this context without using or sending any portion of PII while registering a subject. Methods According to the principle of the GUID system, all possible combination patterns of PII fields were analyzed and used to generate hash codes, which were stored on the GUID server. Based on the matching rules of the GUID system, an error-checking algorithm was developed using set theory to check PII entry errors. We selected 200,000 simulated individuals with randomly-planted errors to evaluate the proposed algorithm. These errors were placed in the required PII fields or optional PII fields. The performance of the proposed algorithm was also tested in the registering system of study subjects. Results There are 127,700 error-planted subjects, of which 114,464 (89.64%) can still be identified as the previously registered subject, and the remaining 13,236 (10.36%, 13,236/127,700) are discriminated as new subjects. As expected, 100% of nonidentified subjects had errors within the required PII fields. The probability that a subject is identified is related to the count and the type of incorrect PII field. For all identified subjects, their errors can be found by the proposed algorithm. The scope of questionable PII fields is also associated with the count and the type of the incorrect PII field. The best situation is to precisely find the exact incorrect PII fields, and the worst situation is to shrink the questionable scope only to a set of 13 PII fields.
In the application, the proposed algorithm can give a hint of questionable PII entry and perform as an effective tool. Conclusions The GUID system has high error tolerance and may correctly identify and associate a subject even with few PII field errors. Correct data entry, especially required PII fields, is critical to avoiding false splits. In the context of one-way hash transformation, the questionable input of PII may be identified by applying set theory operators based on the hash codes. The count and the type of incorrect PII fields play an important role in identifying a subject and locating questionable PII fields. PMID:28213343
Chen, Xianlai; Fann, Yang C; McAuliffe, Matthew; Vismer, David; Yang, Rong
2017-02-17
As one of the several effective solutions for personal privacy protection, a global unique identifier (GUID) is linked with hash codes that are generated from combinations of personally identifiable information (PII) by a one-way hash algorithm. On the GUID server, no PII is permitted to be stored, and only GUID and hash codes are allowed. The quality of PII entry is critical to the GUID system. The goal of our study was to explore a method of checking questionable entry of PII in this context without using or sending any portion of PII while registering a subject. According to the principle of the GUID system, all possible combination patterns of PII fields were analyzed and used to generate hash codes, which were stored on the GUID server. Based on the matching rules of the GUID system, an error-checking algorithm was developed using set theory to check PII entry errors. We selected 200,000 simulated individuals with randomly-planted errors to evaluate the proposed algorithm. These errors were placed in the required PII fields or optional PII fields. The performance of the proposed algorithm was also tested in the registering system of study subjects. There are 127,700 error-planted subjects, of which 114,464 (89.64%) can still be identified as the previously registered subject, and the remaining 13,236 (10.36%, 13,236/127,700) are discriminated as new subjects. As expected, 100% of nonidentified subjects had errors within the required PII fields. The probability that a subject is identified is related to the count and the type of incorrect PII field. For all identified subjects, their errors can be found by the proposed algorithm. The scope of questionable PII fields is also associated with the count and the type of the incorrect PII field. The best situation is to precisely find the exact incorrect PII fields, and the worst situation is to shrink the questionable scope only to a set of 13 PII fields.
In the application, the proposed algorithm can give a hint of questionable PII entry and perform as an effective tool. The GUID system has high error tolerance and may correctly identify and associate a subject even with few PII field errors. Correct data entry, especially required PII fields, is critical to avoiding false splits. In the context of one-way hash transformation, the questionable input of PII may be identified by applying set theory operators based on the hash codes. The count and the type of incorrect PII fields play an important role in identifying a subject and locating questionable PII fields. ©Xianlai Chen, Yang C Fann, Matthew McAuliffe, David Vismer, Rong Yang. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 17.02.2017.
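The combination-hashing and set-based error check described in this record can be sketched as follows. The three-field PII list, the field delimiter, and SHA-256 are illustrative assumptions; the real GUID system uses more fields and its own matching rules.

```python
import hashlib
from itertools import combinations

PII_FIELDS = ["first_name", "last_name", "birth_date"]  # illustrative subset

def pii_hashes(record):
    """Hash every combination pattern of PII fields; the GUID server stores
    only these one-way codes, never the PII itself."""
    codes = set()
    for k in range(1, len(PII_FIELDS) + 1):
        for combo in combinations(PII_FIELDS, k):
            text = "|".join(f"{f}={record[f]}" for f in combo)
            codes.add(hashlib.sha256(text.encode()).hexdigest())
    return codes

# Set intersection hints at which entered fields match a stored subject:
# only combinations that avoid the mistyped field produce matching codes.
stored = pii_hashes({"first_name": "ANA", "last_name": "LEE",
                     "birth_date": "1980-01-02"})
entered = pii_hashes({"first_name": "ANA", "last_name": "LEA",   # typo
                      "birth_date": "1980-01-02"})
matching = stored & entered
```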
Performance of hashed cache data migration schemes on multicomputers
NASA Technical Reports Server (NTRS)
Hiranandani, Seema; Saltz, Joel; Mehrotra, Piyush; Berryman, Harry
1991-01-01
After conducting an examination of several data-migration mechanisms which permit an explicit and controlled mapping of data to memory, a set of schemes for storage and retrieval of off-processor array elements is experimentally evaluated and modeled. All schemes considered have their basis in the use of hash tables for efficient access of nonlocal data. The techniques in question are those of hashed cache, partial enumeration, and full enumeration; in these, nonlocal data are stored in hash tables, so that the operative difference lies in the amount of memory used by each scheme and in the retrieval mechanism used for nonlocal data.
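The common core of the three schemes, storing nonlocal data in a hash table keyed by global index, can be sketched as below; the `fetch` callback is a hypothetical stand-in for the actual interprocessor communication.

```python
class HashedCache:
    """Sketch of the hashed-cache scheme: off-processor array elements are
    fetched once and then served from a hash table keyed by global index."""

    def __init__(self, fetch):
        self.fetch = fetch          # function: global index -> element value
        self.table = {}             # the hash table of nonlocal data
        self.misses = 0             # fetches that required communication

    def get(self, index):
        if index not in self.table:             # not yet retrieved
            self.table[index] = self.fetch(index)
            self.misses += 1
        return self.table[index]

# A dict stands in for another processor's memory.
remote = {7: 3.5, 42: 1.25}
cache = HashedCache(remote.__getitem__)
```

Repeated accesses to the same nonlocal element hit the hash table instead of triggering another retrieval.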
Single-pixel non-imaging object recognition by means of Fourier spectrum acquisition
NASA Astrophysics Data System (ADS)
Chen, Huichao; Shi, Jianhong; Liu, Xialin; Niu, Zhouzhou; Zeng, Guihua
2018-04-01
Single-pixel imaging has emerged over recent years as a novel imaging technique, which has significant application prospects. In this paper, we propose and experimentally demonstrate a scheme that can achieve single-pixel non-imaging object recognition by acquiring the Fourier spectrum. In the experiment, four-step phase-shifting sinusoidal illumination is used to irradiate the object image, the light intensity is measured with a single-pixel detection unit, and the Fourier coefficients of the object image are obtained by a differential measurement. The Fourier coefficients are first cast into binary numbers to obtain the hash value. We propose a new perceptual hashing algorithm, combined with a discrete Fourier transform, to calculate the hash value. The hash distance is obtained by calculating the difference of the hash value between the object image and the comparison images. By setting an appropriate threshold, the object image can be quickly and accurately recognized. The proposed scheme realizes single-pixel non-imaging perceptual hashing object recognition using fewer measurements. Our result might open a new path for realizing object recognition without imaging.
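A 1-D stand-in for the measure-then-binarize step can be sketched as follows. The actual scheme acquires 2-D Fourier coefficients of an image optically via phase-shifting sinusoidal illumination; binarizing by the sign of the real part is one simple assumed choice of the perceptual-hash rule, not the paper's exact formula.

```python
import cmath

def fourier_hash(signal, n_coeffs=16):
    """Perceptual-hash sketch: compute the first n_coeffs DFT coefficients of
    a 1-D signal and binarize each by the sign of its real part."""
    N = len(signal)
    bits = []
    for k in range(n_coeffs):
        coeff = sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / N)
                    for t in range(N))
        bits.append(1 if coeff.real >= 0 else 0)
    return bits

def hash_distance(h1, h2):
    """Hamming distance between hash values; below a chosen threshold the
    object is declared recognized."""
    return sum(a != b for a, b in zip(h1, h2))

reference = [1.0, 0.2, -0.5, 0.8] * 16   # stand-in object signal
probe = list(reference)                  # identical object under test
```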
Zhang, Guo-Qiang; Tao, Shiqiang; Xing, Guangming; Mozes, Jeno; Zonjy, Bilal; Lhatoo, Samden D
2015-01-01
Background A unique study identifier serves as a key for linking research data about a study subject without revealing protected health information in the identifier. While sufficient for single-site and limited-scale studies, the use of common unique study identifiers has several drawbacks for large multicenter studies, where thousands of research participants may be recruited from multiple sites. An important property of study identifiers is error tolerance (or validatability), in that inadvertent editing mistakes during their transmission and use will most likely result in invalid study identifiers. Objective This paper introduces a novel method called "Randomized N-gram Hashing (NHash)," for generating unique study identifiers in a distributed and validatable fashion, in multicenter research. NHash has a unique set of properties: (1) it is a pseudonym serving the purpose of linking research data about a study participant for research purposes; (2) it can be generated automatically in a completely distributed fashion with virtually no risk for identifier collision; (3) it incorporates a set of cryptographic hash functions based on N-grams, with a combination of additional encryption techniques such as a shift cipher; (4) it is validatable (error tolerant) in the sense that inadvertent edit errors will mostly result in invalid identifiers. Methods NHash consists of two phases. First, an intermediate string using randomized N-gram hashing is generated. This string consists of a collection of N-gram hashes f1, f2, ..., fk. The input for each function fi has 3 components: a random number r, an integer n, and input data m. The result, fi(r, n, m), is an n-gram of m with a starting position s, which is computed as (r mod |m|), where |m| represents the length of m. The output for Step 1 is the concatenation of the sequence f1(r1, n1, m1), f2(r2, n2, m2), ..., fk(rk, nk, mk).
In the second phase, the intermediate string generated in Phase 1 is encrypted using techniques such as a shift cipher. The result of the encryption, concatenated with the random number r, is the final NHash study identifier. Results We performed experiments using a large synthesized dataset comparing NHash with random strings, and demonstrated negligible probability of collision. We implemented NHash for the Center for SUDEP Research (CSR), a National Institute of Neurological Disorders and Stroke-funded Center Without Walls for Collaborative Research in the Epilepsies. This multicenter collaboration involves 14 institutions across the United States and Europe, bringing together extensive and diverse expertise to understand sudden unexpected death in epilepsy patients (SUDEP). Conclusions The CSR Data Repository has successfully used NHash to link deidentified multimodal clinical data collected in participating CSR institutions, meeting all desired objectives of NHash. PMID:26554419
Planetary Nebula Candidates Uncovered with the HASH Research Platform
NASA Astrophysics Data System (ADS)
Fragkou, Vasiliki; Bojičić, Ivan; Frew, David; Parker, Quentin
2017-10-01
A detailed examination of new high quality radio catalogues (e.g. Cornish) in combination with available mid-infrared (MIR) satellite imagery (e.g. Glimpse) has allowed us to find 70 new planetary nebula (PN) candidates based on existing knowledge of their typical colors and fluxes. To further examine the nature of these sources, multiple diagnostic tools have been applied to these candidates based on published data and on available imagery in the HASH (Hong Kong/ AAO/ Strasbourg Hα planetary nebula) research platform. Some candidates have previously-missed optical counterparts allowing for spectroscopic follow-up. Indeed, the single object spectroscopically observed so far has turned out to be a bona fide PN.
Microbiological quality of five potato products obtained at retail markets.
Duran, A P; Swartzentruber, A; Lanier, J M; Wentz, B A; Schwab, A H; Barnard, R J; Read, R B
1982-01-01
The microbiological quality of frozen hash brown potatoes, dried hash brown potatoes with onions, frozen french fried potatoes, dried instant mashed potatoes, and potato salad was determined by a national sampling at the retail level. A wide range of results was obtained, with most sampling units of each product having excellent microbiological quality. Geometric mean aerobic plate counts were as follows: dried hash brown potatoes, 270/g; frozen hash brown potatoes with onions, 580/g; frozen french fried potatoes, 78/g; dried instant mashed potatoes, 1.1 x 10(3)/g; and potato salad, 3.6 x 10(3)/g. Mean values of coliforms, Escherichia coli, and Staphylococcus aureus were less than 10/g. PMID:6758695
Deeply learnt hashing forests for content based image retrieval in prostate MR images
NASA Astrophysics Data System (ADS)
Shah, Amit; Conjeti, Sailesh; Navab, Nassir; Katouzian, Amin
2016-03-01
The deluge in the size and heterogeneity of medical image databases necessitates content-based retrieval systems for their efficient organization. In this paper, we propose such a system to retrieve prostate MR images which share similarities in appearance and content with a query image. We introduce deeply learnt hashing forests (DL-HF) for this image retrieval task. DL-HF effectively leverages the semantic descriptiveness of deeply learnt Convolutional Neural Networks. This is used in conjunction with hashing forests, which are unsupervised random forests. DL-HF hierarchically parses the deep-learnt feature space to encode subspaces with compact binary code words. We propose a similarity-preserving feature descriptor called Parts Histogram, which is derived from DL-HF. Correlation defined on this descriptor is used as a similarity metric for retrieval from the database. Validation on a publicly available multi-center prostate MR image database established the validity of the proposed approach. The proposed method is fully automated without any user interaction and is not dependent on any external image standardization like image normalization and registration. This image retrieval method is generalizable and is well-suited for retrieval in heterogeneous databases, other imaging modalities, and other anatomies.
Multicollision attack on CBC-MAC, EMAC, and XCBC-MAC of AES-128 algorithm
NASA Astrophysics Data System (ADS)
Brolin Sihite, Alfonso; Hayat Susanti, Bety
2017-10-01
A Message Authentication Code (MAC) can be constructed from a block cipher algorithm. CBC-MAC, EMAC, and XCBC-MAC are some of the MAC constructions that can be used as hash functions. In this paper, we perform a multicollision attack on the CBC-MAC, EMAC, and XCBC-MAC constructions, using the AES-128 block cipher algorithm as the underlying primitive. The multicollision attack utilizes the concept of existential forgery on CBC-MAC. The results show that multicollisions can be obtained easily in the CBC-MAC, EMAC, and XCBC-MAC constructions.
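The existential-forgery idea that drives such attacks can be demonstrated on plain CBC-MAC with a toy block cipher standing in for AES-128 (the toy cipher is deliberately insecure; only the CBC chaining structure matters here): for a one-block message m with tag t, the two-block message m || (t XOR m) has the same tag.

```python
def toy_block_encrypt(block, key):
    """Stand-in for AES-128: XOR with the key, then rotate left by one byte.
    NOT secure; it only exposes the CBC-MAC chaining structure."""
    x = bytes(b ^ k for b, k in zip(block, key))
    return x[1:] + x[:1]

def cbc_mac(message, key, block_size=16):
    """Plain CBC-MAC: CBC-encrypt with a zero IV and keep only the last block.
    (EMAC additionally encrypts this tag under a second key.)"""
    if len(message) % block_size:
        message += b"\x00" * (block_size - len(message) % block_size)
    state = bytes(block_size)
    for i in range(0, len(message), block_size):
        block = message[i:i + block_size]
        state = toy_block_encrypt(bytes(a ^ b for a, b in zip(state, block)), key)
    return state

# Existential forgery: with t = E(m) for a one-block m, the second block
# t XOR m makes the chaining input E(m) XOR t XOR m = m again, so the
# two-block message collides with m under the same key.
key = bytes(range(16))
m = b"A" * 16
tag = cbc_mac(m, key)
forged = m + bytes(a ^ b for a, b in zip(tag, m))
```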
Unified Communications: Simplifying DoD Communication Methods
2013-04-18
private key to encrypt the hash. The encrypted hash, together with some other information, such as the hashing algorithm , is known as a digital...virtual private network (VPN). The use of a VPN would allow users to access corporate data while encrypting traffic.35 Another layer of protection would...sign and encrypt emails as well as controlling access to restricted sites. PKI uses a combination of public and private keys for encryption and
A Fast Optimization Method for General Binary Code Learning.
Shen, Fumin; Zhou, Xiang; Yang, Yang; Song, Jingkuan; Shen, Heng; Tao, Dacheng
2016-09-22
Hashing or binary code learning has been recognized to accomplish efficient near neighbor search, and has thus attracted broad interest in recent retrieval, vision and learning studies. One main challenge of learning to hash arises from the involvement of discrete variables in binary code optimization. While the widely used continuous relaxation may achieve high learning efficiency, the pursued codes are typically less effective due to accumulated quantization error. In this work, we propose a novel binary code optimization method, dubbed Discrete Proximal Linearized Minimization (DPLM), which directly handles the discrete constraints during the learning process. Specifically, the discrete (thus nonsmooth nonconvex) problem is reformulated as minimizing the sum of a smooth loss term and a nonsmooth indicator function. The obtained problem is then efficiently solved by an iterative procedure, each iteration of which admits an analytical discrete solution; the procedure is shown to converge very fast. In addition, the proposed method supports a large family of empirical loss functions, which is particularly instantiated in this work by both supervised and unsupervised hashing losses, together with bit uncorrelation and balance constraints. In particular, the proposed DPLM with a supervised ℓ2 loss encodes the whole NUS-WIDE database into 64-bit binary codes within 10 seconds on a standard desktop computer. The proposed approach is extensively evaluated on several large-scale datasets and the generated binary codes are shown to achieve very promising results on both retrieval and classification tasks.
HIA: a genome mapper using hybrid index-based sequence alignment.
Choi, Jongpill; Park, Kiejung; Cho, Seong Beom; Chung, Myungguen
2015-01-01
A number of alignment tools have been developed to align sequencing reads to the human reference genome. The scale of information from next-generation sequencing (NGS) experiments, however, is increasing rapidly. Recent studies based on NGS technology have routinely produced exome or whole-genome sequences from several hundreds or thousands of samples. To accommodate the increasing need for analyzing very large NGS data sets, it is necessary to develop faster, more sensitive and accurate mapping tools. HIA uses two indices, a hash table index and a suffix array index. The hash table performs direct lookup of a q-gram, and the suffix array performs very fast lookup of variable-length strings by exploiting binary search. We observed that combining a hash table and a suffix array (hybrid index) is much faster than the suffix-array-only method for finding a substring in the reference sequence. Here, we define the matching region (MR) as the longest common substring between a reference and a read, and the candidate alignment regions (CARs) as a list of MRs that are close to each other. The hybrid index is used to find CARs between a reference and a read. We found that aligning only the unmatched regions in a CAR is much faster than aligning the whole CAR. In benchmark analysis, HIA outperformed the other aligners in mapping speed, without significant loss of mapping accuracy. Our experiments show that the hybrid of hash table and suffix array is useful in terms of speed for mapping NGS sequencing reads to the human reference genome sequence. In conclusion, our tool is appropriate for aligning massive data sets generated by NGS sequencing.
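The hash-table half of such a hybrid index can be sketched in a few lines. This is an illustrative Python sketch with a hypothetical q-gram length and helper names, not the actual HIA implementation: q-grams of the reference go into a hash table, and seed hits on a common diagonal suggest a candidate alignment region.

```python
from collections import defaultdict

Q = 4  # q-gram length (illustrative; real mappers use larger q)

def build_qgram_index(ref: str) -> dict:
    # Hash-table index: direct lookup from a q-gram to its positions
    # in the reference sequence.
    index = defaultdict(list)
    for i in range(len(ref) - Q + 1):
        index[ref[i:i + Q]].append(i)
    return index

def find_seed_matches(index: dict, read: str) -> list:
    # Look up every q-gram of the read; hits sharing a diagonal
    # (pos - j) cluster into candidate alignment regions (CARs).
    hits = []
    for j in range(len(read) - Q + 1):
        for pos in index.get(read[j:j + Q], []):
            hits.append((pos - j, pos))  # (diagonal, reference position)
    return hits

ref = "ACGTACGTTGCA"
index = build_qgram_index(ref)
hits = find_seed_matches(index, "ACGTT")  # two hits on diagonal 4 -> CAR at ref[4:]
```

Only the regions not already covered by seed matches would then be handed to an exact aligner, which is the speedup the abstract describes.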
Bian, Shaoquan; He, Mengmeng; Sui, Junhui; Cai, Hanxu; Sun, Yong; Liang, Jie; Fan, Yujiang; Zhang, Xingdong
2016-04-01
Although disulfide-bond-crosslinked hyaluronic acid hydrogels have been reported by many research groups, most research has focused on effectively forming hydrogels. Few researchers have paid attention to the potential significance of controlling hydrogel formation and degradation, improving biocompatibility, reducing exogenous toxicity and providing convenience for subsequent clinical operations. In this research, novel controllable self-crosslinking smart hydrogels with an in-situ gelation property were prepared from a single component, the thiolated hyaluronic acid derivative (HA-SH), and applied as a three-dimensional scaffold to mimic the native extracellular matrix (ECM) for the culture of fibroblast cells (L929) and chondrocytes. A series of HA-SH hydrogels were prepared with different degrees of thiol substitution (ranging from 10 to 60%) and molecular weights of HA (0.1, 0.3 and 1.0 MDa). The gelation time, swelling property and smart degradation behavior of the HA-SH hydrogels were evaluated. The results showed that the gelation and degradation times of the hydrogels could be controlled by adjusting the composition of the HA-SH polymers. The storage modulus of HA-SH hydrogels obtained by dynamic modulus analysis (DMA) could be up to 44.6 kPa. In addition, HA-SH hydrogels were investigated as a three-dimensional scaffold for the culture of fibroblast cells (L929) and chondrocyte cells in vitro and as an injectable hydrogel for delivering chondrocyte cells in vivo. These results illustrate that HA-SH hydrogels, with their controllable gelation process, intelligent degradation behavior, excellent biocompatibility and convenient operational characteristics, have potential for clinical application in tissue engineering and regenerative medicine. Copyright © 2016 Elsevier B.V. All rights reserved.
1987-10-01
Meatsauce Rissole Potatoes Turkey Nuggets (I/o) Hash Browned Potatoes (I/o) Mashed Potatoes Buttered Mixed Vegetables Toasted Garlic Bread Brussels...Chicken Curry Baked Ham/P/A Sauce Parsley Buttered Potatoes Brown Gravy Hash Browned Potatoes (I/o) Steamed Rice Steamed Carrots Mashed Potatoes...Steak Mashed Potatoes Mashed Potatoes Rissole Potatoes Steamed Rice Hash Browned Potatoes (I/o) Green Beans Steamed Carrots Broccoli w/Cheese sauce
Optimal hash arrangement of tentacles in jellyfish
NASA Astrophysics Data System (ADS)
Okabe, Takuya; Yoshimura, Jin
2016-06-01
At first glance, the trailing tentacles of a jellyfish appear to be randomly arranged. However, close examination of medusae has revealed that the arrangement and developmental order of the tentacles obey a mathematical rule. Here, we show that medusa jellyfish adopt the best strategy to achieve the most uniform distribution of a variable number of tentacles. The observed order of tentacles is a real-world example of an optimal hashing algorithm known as Fibonacci hashing in computer science.
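Fibonacci hashing, the algorithm the tentacle arrangement realizes, is simple to state in code: multiply the key by 2^32/φ (the golden ratio) and keep the top bits, so consecutive keys land maximally spread apart. A minimal 32-bit sketch; the constant is the standard Knuth value, not taken from the paper:

```python
# round(2**32 / phi), phi = (1 + sqrt(5)) / 2, the golden ratio
GOLDEN = 2654435769

def fib_hash(key: int, bits: int) -> int:
    # Multiplicative (Fibonacci) hashing: take the top `bits` bits of
    # the 32-bit product key * 2^32/phi.
    return ((key * GOLDEN) & 0xFFFFFFFF) >> (32 - bits)

# Successive keys 0..7 spread evenly over 16 slots instead of clustering,
# the same "most uniform" spacing rule the tentacles follow.
slots = [fib_hash(k, 4) for k in range(8)]  # all eight slots distinct
```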
Internet traffic load balancing using dynamic hashing with flow volume
NASA Astrophysics Data System (ADS)
Jo, Ju-Yeon; Kim, Yoohwan; Chao, H. Jonathan; Merat, Francis L.
2002-07-01
Sending IP packets over multiple parallel links is in extensive use in today's Internet and its use is growing due to its scalability, reliability and cost-effectiveness. To maximize the efficiency of parallel links, load balancing is necessary among the links, but it may cause the problem of packet reordering. Since packet reordering impairs TCP performance, it is important to reduce the amount of reordering. Hashing offers a simple solution to keep the packet order by sending a flow over a unique link, but static hashing does not guarantee an even distribution of the traffic amount among the links, which could lead to packet loss under heavy load. Dynamic hashing offers some degree of load balancing but suffers from load fluctuations and excessive packet reordering. To overcome these shortcomings, we have enhanced the dynamic hashing algorithm to utilize the flow volume information in order to reassign only the appropriate flows. This new method, called dynamic hashing with flow volume (DHFV), eliminates unnecessary flow reassignments of small flows and achieves load balancing very quickly without load fluctuation by accurately predicting the amount of transferred load between the links. In this paper we provide the general framework of DHFV and address the challenges in implementing DHFV. We then introduce two algorithms of DHFV with different flow selection strategies and show their performances through simulation.
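The static per-flow hashing that both the static and dynamic schemes start from can be sketched as below. This is a hedged illustration with hypothetical names, not the paper's DHFV code; DHFV additionally monitors per-flow volume and reassigns only the few large flows when a link overloads.

```python
import hashlib

N_LINKS = 4

def link_for_flow(src: str, dst: str) -> int:
    # Hash the flow identifier so that every packet of a flow takes the
    # same link, which preserves packet order within the flow (and thus
    # avoids hurting TCP), at the cost of possibly uneven link loads.
    digest = hashlib.sha256(f"{src}->{dst}".encode()).digest()
    return digest[0] % N_LINKS

# Every packet of this flow maps to the same link.
link = link_for_flow("10.0.0.1", "10.0.0.2")
```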
Adoption of the Hash algorithm in a conceptual model for the civil registry of Ecuador
NASA Astrophysics Data System (ADS)
Toapanta, Moisés; Mafla, Enrique; Orizaga, Antonio
2018-04-01
The Hash security algorithm was analyzed in order to mitigate information security risks in a distributed architecture. The objective of this research is to develop a prototype for the adoption of the Hash algorithm in a conceptual model for the Civil Registry of Ecuador. The deductive method was used to analyze published articles that have a direct relation to the research project "Algorithms and Security Protocols for the Civil Registry of Ecuador" and articles related to the Hash security algorithm. This research found that the SHA-1 security algorithm is appropriate for use in Ecuador's civil registry; we adopted the SHA-1 algorithm using the flowchart technique and obtained the adoption of the hash algorithm in a conceptual model. It is concluded from the comparison of the MD5 and SHA-1 algorithms that, in the case of an implementation, SHA-1 should be taken, due to the amount of information and data available from the Civil Registry of Ecuador. The SHA-1 algorithm defined using the flowchart technique can be modified according to the requirements of each institution, and the model for adopting the hash algorithm in a conceptual model is a prototype that can be modified according to all the actors that make up each organization.
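A minimal illustration of SHA-1 record fingerprinting using Python's standard hashlib; the record format here is hypothetical, and note that SHA-1 is no longer considered collision-resistant against a deliberate adversary (collisions were demonstrated in 2017), which is worth weighing in any implementation.

```python
import hashlib

def record_digest(record: str) -> str:
    # SHA-1 fingerprint of a civil-registry record: 160 bits, rendered
    # as 40 hex characters. Any change to the record changes the digest.
    return hashlib.sha1(record.encode("utf-8")).hexdigest()

d = record_digest("cedula=0102030405;name=Juan Perez")  # hypothetical record
```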
An effective and secure key-management scheme for hierarchical access control in E-medicine system.
Odelu, Vanga; Das, Ashok Kumar; Goswami, Adrijit
2013-04-01
Recently, several hierarchical access control schemes have been proposed in the literature to provide security in e-medicine systems. However, most of them are either insecure against the man-in-the-middle attack or require high storage and computational overheads. Wu and Chen proposed a key management method to solve dynamic access control problems in a user hierarchy based on a hybrid cryptosystem. Though their scheme improves computational efficiency over Nikooghadam et al.'s approach, it suffers from large storage space for public parameters in the public domain and computational inefficiency due to costly elliptic curve point multiplication. Recently, Nikooghadam and Zakerolhosseini showed that Wu-Chen's scheme is vulnerable to the man-in-the-middle attack. In order to remedy this security weakness in Wu-Chen's scheme, they proposed a secure scheme which is again based on ECC (elliptic curve cryptography) and an efficient one-way hash function. However, their scheme incurs huge computational cost for providing verification of public information in the public domain, as it uses ECC digital signatures, which are costly compared to a symmetric-key cryptosystem. In this paper, we propose an effective access control scheme in a user hierarchy which is based only on a symmetric-key cryptosystem and an efficient one-way hash function. We show that our scheme significantly reduces the storage space for both public and private domains, as well as the computational complexity, when compared to Wu-Chen's scheme, Nikooghadam-Zakerolhosseini's scheme, and other related schemes. Through informal and formal security analysis, we further show that our scheme is secure against different attacks, including the man-in-the-middle attack. Moreover, dynamic access control problems in our scheme are also solved efficiently compared to other related schemes, making our scheme much more suitable for practical applications in e-medicine systems.
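The one-way-hash idea underlying such hierarchical schemes can be sketched generically. This is an illustration of the general technique, not the authors' scheme: each class derives its descendants' keys by hashing, so keys flow down the hierarchy but the hash cannot be inverted to climb back up.

```python
import hashlib

def derive_child_key(parent_key: bytes, child_id: str) -> bytes:
    # One-way derivation: a parent security class can compute every
    # descendant's key, but a child cannot recover its parent's key
    # because that would require inverting the hash.
    return hashlib.sha256(parent_key + child_id.encode()).digest()

# Hypothetical hierarchy: hospital -> department -> ward.
root = b"hospital-master-key"
cardiology = derive_child_key(root, "cardiology")
ward = derive_child_key(cardiology, "ward-3")
```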
Handwriting: Feature Correlation Analysis for Biometric Hashes
NASA Astrophysics Data System (ADS)
Vielhauer, Claus; Steinmetz, Ralf
2004-12-01
In the application domain of electronic commerce, biometric authentication can provide one possible solution for the key management problem. Besides server-based approaches, methods of deriving digital keys directly from biometric measures appear to be advantageous. In this paper, we analyze one of our recently published specific algorithms of this category based on behavioral biometrics of handwriting, the biometric hash. Our interest is to investigate to which degree each of the underlying feature parameters contributes to the overall intrapersonal stability and interpersonal value space. We will briefly discuss related work in feature evaluation and introduce a new methodology based on three components: the intrapersonal scatter (deviation), the interpersonal entropy, and the correlation between both measures. Evaluation of the technique is presented based on two data sets of different size. The method presented will allow determination of effects of parameterization of the biometric system, estimation of value space boundaries, and comparison with other feature selection approaches.
Data Collision Prevention with Overflow Hashing Technique in Closed Hash Searching Process
NASA Astrophysics Data System (ADS)
Rahim, Robbi; Nurjamiyah; Rafika Dewi, Arie
2017-12-01
Hash search is a method that can be used in various search processes, such as search engines, sorting, machine learning, and neural networks. During the search process, data collisions may occur. Collisions can be prevented in several ways, one of which is the overflow technique. We apply this technique to data of varying lengths and show that it can prevent data collisions.
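A minimal sketch of closed hashing with a separate overflow area, with hypothetical class and method names since the paper gives no code: keys that collide in the main table are appended to the overflow area instead of being lost or probed into other slots.

```python
class OverflowHashTable:
    # Closed hashing with a separate overflow area: colliding keys are
    # appended to the overflow list instead of overwriting or probing.
    def __init__(self, size: int):
        self.size = size
        self.main = [None] * size   # main table: one (key, value) per slot
        self.overflow = []          # (key, value) pairs that collided

    def put(self, key, value):
        slot = hash(key) % self.size
        if self.main[slot] is None or self.main[slot][0] == key:
            self.main[slot] = (key, value)
        else:
            self.overflow.append((key, value))

    def get(self, key):
        slot = hash(key) % self.size
        if self.main[slot] and self.main[slot][0] == key:
            return self.main[slot][1]
        for k, v in self.overflow:   # linear scan of the overflow area
            if k == key:
                return v
        return None

t = OverflowHashTable(5)
t.put(1, "a")
t.put(6, "b")   # 6 % 5 == 1: collides with key 1, goes to overflow
```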
2012-09-01
relative performance of several conventional SQL and NoSQL databases with a set of one billion file block hashes. Digital Forensics, Sector Hashing, Full...
9 CFR 319.303 - Corned beef hash.
Code of Federal Regulations, 2010 CFR
2010-01-01
... combination, are salt, sugar (sucrose or dextrose), spice, and flavoring, including essential oils, oleoresins, and other spice extractives. (b) Corned beef hash may contain one or more of the following optional...
High-performance sparse matrix-matrix products on Intel KNL and multicore architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagasaka, Y; Matsuoka, S; Azad, A
Sparse matrix-matrix multiplication (SpGEMM) is a computational primitive that is widely used in areas ranging from traditional numerical applications to recent big data analysis and machine learning. Although many SpGEMM algorithms have been proposed, hardware-specific optimizations for multi- and many-core processors are lacking, and a detailed analysis of their performance under various use cases and matrices is not available. We first identify and mitigate multiple bottlenecks with memory management and thread scheduling on Intel Xeon Phi (Knights Landing or KNL). Specifically targeting multi- and many-core processors, we develop a hash-table-based algorithm and optimize a heap-based shared-memory SpGEMM algorithm. We examine their performance together with other publicly available codes. Different from the literature, our evaluation also includes use cases that are representative of real graph algorithms, such as multi-source breadth-first search or triangle counting. Our hash-table- and heap-based algorithms show significant speedups over existing libraries in the majority of the cases, while different algorithms dominate the other scenarios with different matrix size, sparsity, compression factor and operation type. We wrap up in-depth evaluation results and provide a recipe for choosing the best SpGEMM algorithm for a target scenario. A critical finding is that hash-table-based SpGEMM gets a significant performance boost if the nonzeros are not required to be sorted within each row of the output matrix.
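The hash-table accumulator at the core of such SpGEMM algorithms can be sketched in a few lines. This Python illustration uses dicts as the hash tables and a simplified {row: {col: value}} storage format, not the authors' optimized KNL code; note the output columns are naturally unsorted, which is exactly where the abstract's final finding comes from.

```python
def spgemm(A: dict, B: dict) -> dict:
    # C = A * B for sparse matrices stored as {row: {col: value}}.
    # Each output row is accumulated in a hash table keyed by column.
    C = {}
    for i, row in A.items():
        acc = {}                          # column -> partial sum
        for k, a_ik in row.items():       # nonzeros of row i of A
            for j, b_kj in B.get(k, {}).items():  # nonzeros of row k of B
                acc[j] = acc.get(j, 0.0) + a_ik * b_kj
        if acc:
            C[i] = acc
    return C

A = {0: {0: 1.0, 2: 2.0}, 1: {1: 3.0}}
B = {0: {1: 4.0}, 1: {0: 6.0}, 2: {1: 5.0}}
C = spgemm(A, B)   # C[0][1] = 1*4 + 2*5 = 14, C[1][0] = 3*6 = 18
```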
Modeling and Simulation of the Economics of Mining in the Bitcoin Market.
Cocco, Luisanna; Marchesi, Michele
2016-01-01
On January 3, 2009, Satoshi Nakamoto gave rise to the "Bitcoin Blockchain", creating the first block of the chain by hashing on his computer's central processing unit (CPU). Since then, the hash calculations to mine Bitcoin have been getting more and more complex, and consequently the mining hardware has evolved to adapt to this increasing difficulty. Three generations of mining hardware have followed the CPU generation: the GPU, FPGA and ASIC generations. This work presents an agent-based artificial market model of the Bitcoin mining process and of Bitcoin transactions. The goal of this work is to model the economy of the mining process, starting from the GPU generation, the first with economic significance. The model reproduces some "stylized facts" found in real-time price series and some core aspects of the mining business. In particular, the computational experiments performed can reproduce the unit root property, the fat tail phenomenon and the volatility clustering of Bitcoin price series. In addition, under proper assumptions, they can reproduce the generation of Bitcoins, the hashing capability, the power consumption, and the mining hardware and electrical energy expenditures of the Bitcoin network.
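The hash calculation that miners race to solve can be shown as a toy proof-of-work loop. This is illustrative only: real Bitcoin headers, difficulty encoding and nonce handling are more involved, and the difficulty here is kept tiny so the search finishes instantly.

```python
import hashlib

def mine(header: bytes, difficulty_bits: int) -> int:
    # Proof-of-work: search for a nonce whose double-SHA-256 of the
    # block header falls below the target (leading zero bits). Raising
    # difficulty_bits halves the success probability per attempt.
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        data = header + nonce.to_bytes(8, "big")
        h = hashlib.sha256(hashlib.sha256(data).digest()).digest()
        if int.from_bytes(h, "big") < target:
            return nonce
        nonce += 1

nonce = mine(b"block-header", 12)  # ~4096 expected attempts at 12 bits
```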
Provably secure Rabin-p cryptosystem in hybrid setting
NASA Astrophysics Data System (ADS)
Asbullah, Muhammad Asyraf; Ariffin, Muhammad Rezal Kamel
2016-06-01
In this work, we design an efficient and provably secure hybrid cryptosystem formed by combining the Rabin-p cryptosystem with an appropriate symmetric encryption scheme. We set up a hybrid structure which is proven secure in the sense of indistinguishability under chosen-ciphertext attack. We presume that the integer factorization problem is hard and that the hash function is modeled as a random function.
Changes in Benthos Associated with Mussel (Mytilus edulis L.) Farms on the West-Coast of Scotland
Wilding, Thomas A.; Nickell, Thomas D.
2013-01-01
Aquaculture, as a means of food production, is growing rapidly in response to an increasing demand for protein and the over-exploitation of wild fisheries. This expansion includes mussels (family Mytilidae) where production currently stands at 1.5 million tonnes per annum. Mussel culture is frequently perceived as having little environmental impact yet mussel biodeposits and shell debris accumulate around the production site and are linked to changes in the benthos. To assess the extent and nature of changes in benthos associated with mussel farming grab and video sampling around seven mussel farms was conducted. Grab samples were analysed for macrofauna and shell-hash content whilst starfish were counted and the shell-hash cover estimated from video imaging. Shell-hash was patchily distributed and occasionally dominated sediments (maximum of 2116 g per 0.1 m2 grab). Mean shell-hash content decreased rapidly at distances >5 m from the line and, over the distance 1–64 m, decreased by three orders of magnitude. The presence of shell-hash and the distance-from-line influenced macrofaunal assemblages but this effect differed between sites. There was no evidence that mussel farming was associated with changes in macrobenthic diversity, species count or feeding strategy. However, total macrofaunal count was estimated to be 2.5 times higher in close proximity to the lines, compared with 64 m distance, and there was evidence that this effect was conditional on the presence of shell-hash. Starfish density varied considerably between sites but, overall, they were approximately 10 times as abundant close to the mussel-lines compared with 64 m distance. There was no evidence that starfish were more abundant in the presence of shell-hash visible on the sediment surface. In terms of farm-scale benthic impacts these data suggest that mussel farming is a relatively benign way of producing food, compared with intensive fish-farming, in similar environments. PMID:23874583
Exploiting the HASH Planetary Nebula Research Platform
NASA Astrophysics Data System (ADS)
Parker, Quentin A.; Bojičić, Ivan; Frew, David J.
2017-10-01
The HASH (Hong Kong/ AAO/ Strasbourg/ Hα) planetary nebula research platform is a unique data repository with a graphical interface and SQL capability that offers the community powerful, new ways to undertake Galactic PN studies. HASH currently contains multi-wavelength images, spectra, positions, sizes, morphologies and other data whenever available for 2401 true, 447 likely, and 692 possible Galactic PNe, for a total of 3540 objects. An additional 620 Galactic post-AGB stars, pre-PNe, and PPN candidates are included. All objects were classified and evaluated following the precepts and procedures established and developed by our group over the last 15 years. The complete database contains over 6,700 Galactic objects including the many mimics and related phenomena previously mistaken or confused with PNe. Curation and updating currently occurs on a weekly basis to keep the repository as up to date as possible until the official release of HASH v1 planned in the near future.
Comparison of Grouping Methods for Template Extraction from VA Medical Record Text.
Redd, Andrew M; Gundlapalli, Adi V; Divita, Guy; Tran, Le-Thuy; Pettey, Warren B P; Samore, Matthew H
2017-01-01
We investigate options for grouping templates for the purpose of template identification and extraction from electronic medical records. We sampled a corpus of 1000 documents originating from Veterans Health Administration (VA) electronic medical records. We grouped documents by hashing and binning tokens (Hashed) as well as by the top 5% of tokens identified as important through the term frequency-inverse document frequency metric (TF-IDF). We then compared the approaches on the number of groups with 3 or more documents and on the resulting longest common subsequences (LCSs) common to all documents in a group. We found that the Hashed method had a higher success rate for finding LCSs, and found longer LCSs, than the TF-IDF method; the TF-IDF approach found more groups than the Hashed method, and subsequently more long sequences, but the average length of its LCSs was lower. In conclusion, each algorithm appears to have areas where it is superior.
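The "hashing and binning tokens" grouping can be sketched as below. This is an illustrative guess at the approach with hypothetical parameters, since the paper does not give code: each token hashes into a small number of bins, and the set of occupied bins becomes the group signature, so documents built from the same template land in the same group.

```python
import hashlib

def group_key(doc: str, n_bins: int = 256) -> tuple:
    # Hash each whitespace token into one of n_bins bins; the sorted
    # set of occupied bins is the document's group signature. Token
    # order and repetition do not affect the signature.
    bins = {int(hashlib.md5(tok.encode()).hexdigest(), 16) % n_bins
            for tok in doc.split()}
    return tuple(sorted(bins))

# Two notes sharing the same token set fall into the same group.
k1 = group_key("fever cough fever")
k2 = group_key("cough fever")
```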
Comparison of Various Similarity Measures for Average Image Hash in Mobile Phone Application
NASA Astrophysics Data System (ADS)
Farisa Chaerul Haviana, Sam; Taufik, Muhammad
2017-04-01
One of the main issues in Content-Based Image Retrieval (CBIR) is the similarity measure for the resulting image hashes. The key challenge is to find the most beneficial distance or similarity measure for calculating similarity, in terms of speed and computing cost, especially on devices with limited computing capability such as mobile phones. In this study we implement twelve of the most common and popular distance or similarity measures in a mobile phone application and compare them. The results show that all similarity measures implemented in this study performed equally well in the mobile phone application. This opens more possibilities for method combinations to be implemented for image retrieval.
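The average hash itself takes only a few lines; here is a minimal sketch together with Hamming distance, one common choice among the kinds of measures compared (function names are hypothetical, and a real implementation would first resize the image to a small grayscale grid, e.g. 8x8).

```python
def average_hash(pixels):
    # pixels: a small grid of grayscale values (rows of numbers).
    # Each bit is 1 where the pixel is above the grid's mean intensity.
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    # Hamming distance: count of differing bits; small distance means
    # perceptually similar images.
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# Two slightly different exposures of the same scene hash identically.
h1 = average_hash([[10, 200], [220, 30]])   # -> [0, 1, 1, 0]
h2 = average_hash([[12, 198], [210, 40]])
```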
Tag Content Access Control with Identity-based Key Exchange
NASA Astrophysics Data System (ADS)
Yan, Liang; Rong, Chunming
2010-09-01
Radio Frequency Identification (RFID) technology, used to identify objects and users, has recently been applied to many domains such as retail and supply chains. How to prevent tag content from unauthorized readout is a core problem of RFID privacy. The hash-lock access control protocol can make a tag release its content only to a reader who knows the secret key shared between them. However, in order to obtain the shared secret key required by this protocol, the reader needs to communicate with a back-end database. In this paper, we propose to use an identity-based secret key exchange approach to generate the secret key required for the hash-lock access control protocol. With this approach, not only is a back-end database connection no longer needed, but the tag cloning problem can also be eliminated at the same time.
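The hash-lock protocol the paper builds on can be sketched as follows. This is an illustrative Python sketch with hypothetical names, omitting the identity-based key exchange the paper adds: the tag stores only the hash of the key (the "metaID") and releases its ID when a reader presents a key that hashes to it.

```python
import hashlib
import os

def make_tag(tag_id: bytes, key: bytes) -> dict:
    # The tag stores metaID = H(key); the key itself never sits on the tag.
    return {"metaID": hashlib.sha256(key).digest(), "id": tag_id}

def query_tag(tag: dict, key: bytes):
    # A reader that has obtained `key` (in the paper, via identity-based
    # key exchange rather than a back-end database lookup) unlocks the tag.
    if hashlib.sha256(key).digest() == tag["metaID"]:
        return tag["id"]
    return None   # wrong key: the tag stays locked

key = os.urandom(16)
tag = make_tag(b"TAG-42", key)
```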
NASA Astrophysics Data System (ADS)
Irving, D. H.; Rasheed, M.; O'Doherty, N.
2010-12-01
The efficient storage, retrieval and interactive use of subsurface data present great challenges in geodata management. Data volumes are typically massive, complex and poorly indexed, with inadequate metadata. Derived geomodels and interpretations are often tightly bound in application-centric and proprietary formats; open standards for long-term stewardship are poorly developed. Consequently, current data storage is a combination of: complex Logical Data Models (LDMs) based on file storage formats; 2D GIS tree-based indexing of spatial data; and translations of serialised memory-based storage techniques into disk-based storage. Whilst adequate for working at the mesoscale over short timeframes, these approaches all possess technical and operational shortcomings: data model complexity; anisotropy of access; scalability to large and complex datasets; and weak implementation and integration of metadata. High-performance hardware such as parallelised storage and Relational Database Management Systems (RDBMSs) has long been exploited in many solutions, but the underlying data structure must provide commensurate efficiencies to allow multi-user, multi-application and near-realtime data interaction. We present an open Spatially-Registered Data Structure (SRDS) built on a Massively Parallel Processing (MPP) database architecture implemented by an ANSI SQL 2008-compliant RDBMS. We propose an LDM comprising a 3D Earth model that is decomposed such that each increasing Level of Detail (LoD) is achieved by recursively halving the bin size until it is less than the error in each spatial dimension for that data point. The value of an attribute at that point is stored as a property of that point and at that LoD. It is key to the numerical efficiency of the SRDS that it is underpinned by a power-of-two relationship, thus precluding the need for computationally intensive floating point arithmetic.
Our approach employed a tightly clustered MPP array with small clusters of storage, processors and memory communicating over a high-speed network inter-connect. This is a shared-nothing architecture where resources are managed within each cluster unlike most other RDBMSs. Data are accessed on this architecture by their primary index values which utilises the hashing algorithm for point-to-point access. The hashing algorithm’s main role is the efficient distribution of data across the clusters based on the primary index. In this study we used 3D seismic volumes, 2D seismic profiles and borehole logs to demonstrate application in both (x,y,TWT) and (x,y,z)-space. In the SRDS the primary index is a composite column index of (x,y) to avoid invoking time-consuming full table scans as is the case in tree-based systems. This means that data access is isotropic. A query for data in a specified spatial range permits retrieval recursively by point-to-point queries within each nested LoD yielding true linear performance up to the Petabyte scale with hardware scaling presenting the primary limiting factor. Our architecture and LDM promotes: realtime interaction with massive data volumes; streaming of result sets and server-rendered 2D/3D imagery; rigorous workflow control and auditing; and in-database algorithms run directly against data as a HPC cloud service.
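The recursive bin-halving that defines the LoD can be sketched as below. This is a hedged reading of the description above, with hypothetical names and a single 1-D coordinate for brevity; the real SRDS applies it per spatial dimension and stores the (LoD, bin) pair as part of the indexed key.

```python
def lod_bin(coord: float, error: float, extent: float):
    # Recursively halve the bin size, starting from the full extent,
    # until it drops below the measurement error for this data point.
    # The number of halvings is the Level of Detail (LoD); the bin
    # index locates the point at that LoD. Powers of two throughout.
    size = extent
    lod = 0
    while size >= error:
        size /= 2.0
        lod += 1
    return lod, int(coord // size)

# A point at x = 5.0 with +/- 1.0 error in a 16-unit extent:
# 16 -> 8 -> 4 -> 2 -> 1 -> 0.5, i.e. LoD 5, bin 10.
lod, b = lod_bin(5.0, 1.0, 16.0)
```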
7 CFR 51.1437 - Size classifications for halves.
Code of Federal Regulations, 2014 CFR
2014-01-01
... halves per pound shall be based upon the weight of half-kernels after all pieces, particles and dust... specified range. (d) Tolerances for pieces, particles, and dust. In order to allow for variations incident..., particles, and dust: Provided, That not more than one-third of this amount, or 5 percent, shall be allowed...
Nijboer, Tanja C W; Gebuis, Titia; te Pas, Susan F; van der Smagt, Maarten J
2011-01-01
We investigated whether simultaneous colour contrast affects the synaesthetic colour experience and normal colour percept in a similar manner. We simultaneously presented a target stimulus (i.e. grapheme) and a reference stimulus (i.e. hash). Either the grapheme or the hash was presented on a saturated background of the same or opposite colour category as the synaesthetic colour and the other stimulus on a grey background. In both conditions, grapheme-colour synaesthetes were asked to colour the hash in a colour similar to the synaesthetic colour of the grapheme. Controls that were pair-matched to the synaesthetes performed the same experiment, but for them, the grapheme was presented in the colour induced by the grapheme in synaesthetes. When graphemes were presented on a grey and the hash on a coloured background, a traditional simultaneous colour-contrast effect was found for controls as well as synaesthetes. When graphemes were presented on colour and the hash on grey, the controls again showed a traditional simultaneous colour-contrast effect, whereas the synaesthetes showed the opposite effect. Our results show that synaesthetic colour experiences differ from normal colour perception; both are susceptible to different surrounding colours, but not in a comparable manner. Copyright © 2010 Elsevier Ltd. All rights reserved.
Improving the Rainbow Attack by Reusing Colours
NASA Astrophysics Data System (ADS)
Ågren, Martin; Johansson, Thomas; Hell, Martin
Hashing or encrypting a key or a password is a vital part in most network security protocols. The most practical generic attack on such schemes is a time memory trade-off attack. Such an attack inverts any one-way function using a trade-off between memory and execution time. Existing techniques include the Hellman attack and the rainbow attack, where the latter uses different reduction functions ("colours") within a table.
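One rainbow chain with colour-dependent reduction functions can be sketched as follows. This is illustrative only: a toy 4-letter lowercase password space and a hypothetical reduction function, not the paper's parameters. Only the (start, end) pair of each chain is stored, which is the memory side of the trade-off.

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def reduce_colour(digest: bytes, colour: int) -> bytes:
    # Colour-dependent reduction: maps a hash back into the password
    # space, using a different function at each chain position.
    n = (int.from_bytes(digest[:4], "big") + colour) % (26 ** 4)
    # Decode n as four lowercase letters.
    return bytes(97 + (n // 26 ** i) % 26 for i in range(4))

def chain(start: bytes, length: int) -> bytes:
    # One rainbow chain: alternate hashing with the colour-i reduction.
    # Using a different colour per step is what suppresses chain merges
    # compared to the classic Hellman construction.
    p = start
    for colour in range(length):
        p = reduce_colour(H(p), colour)
    return p

end = chain(b"aaaa", 1000)   # only (b"aaaa", end) would be stored
```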
Quantum Authencryption with Two-Photon Entangled States for Off-Line Communicants
NASA Astrophysics Data System (ADS)
Ye, Tian-Yu
2016-02-01
In this paper, a quantum authencryption protocol is proposed using two-photon entangled states as the quantum resource. The two communicants, Alice and Bob, share two private keys in advance, which determine the generation of the two-photon entangled states. The sender Alice sends the two-photon entangled state sequence, encoded with her classical bits, to the receiver Bob in a one-step quantum transmission. Upon receiving the encoded quantum state sequence, Bob decodes Alice's classical bits with two-photon joint measurements and authenticates the integrity of Alice's secret with the help of a one-way hash function. The proposed protocol uses only one-step quantum transmission and needs neither a public discussion nor a trusted third party. As a result, it can be adapted to the case where the receiver is off-line, such as quantum e-mail systems. Moreover, the proposed protocol provides message authentication down to the one-bit level with the help of a one-way hash function and has an information-theoretical efficiency of 100 %.
A strategy to load balancing for non-connectivity MapReduce job
NASA Astrophysics Data System (ADS)
Zhou, Huaping; Liu, Guangzong; Gui, Haixia
2017-09-01
MapReduce is a distributed programming model widely used for large-scale, complex datasets. The original hash partitioning function in MapReduce often causes data skew when the data distribution is uneven. To solve this imbalance in data partitioning, we propose a strategy that changes the remaining partitioning index when data is skewed. In the Map phase, we count the amount of data that will be distributed to each reducer; the JobTracker then monitors the global partitioning information and dynamically modifies the original partitioning function according to the data skew model. The Partitioner can thus redirect the partitions that would cause skew to reducers with less load in the next partitioning round, eventually balancing the load across nodes. Finally, we experimentally compare our method with existing methods on both synthetic and real datasets. The results show that our strategy solves the data skew problem with better stability and efficiency than the Hash and Sampling methods for non-connectivity MapReduce tasks.
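The idea of steering heavy keys away from overloaded reducers can be sketched with a simple greedy repartitioner. This is an illustrative stand-in, not the authors' skew model: the counting step and the least-loaded assignment policy below are assumptions for the example.

```python
from collections import Counter

def make_balanced_partitioner(key_counts: Counter, num_reducers: int):
    """Replace hash(key) % num_reducers with a lookup table built from
    observed key frequencies: heaviest keys are assigned first, always
    to the currently least-loaded reducer."""
    load = [0] * num_reducers
    table = {}
    for key, count in key_counts.most_common():
        r = load.index(min(load))   # least-loaded reducer so far
        table[key] = r
        load[r] += count
    def partition(key):
        # fall back to plain hashing for keys not seen in the Map phase
        return table.get(key, hash(key) % num_reducers)
    return partition
```

With a skewed count such as `{"a": 100, "b": 10, "c": 1}` and two reducers, the hot key "a" gets a reducer to itself while "b" and "c" share the other, instead of all three landing wherever their hashes happen to fall.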
Timsit, Youri; Bombard, Sophie
2007-12-01
Metal ions play a key role in RNA folding and activity. Elucidating the rules that govern the binding of metal ions is therefore an essential step toward a better understanding of RNA functions. High-resolution data are a prerequisite for a detailed structural analysis of ion binding on RNA and, in particular, for the observation of monovalent cations. Here, the high-resolution crystal structures of the tridecamer duplex r(GCGUUUGAAACGC), crystallized under different conditions, provide new structural insights into ion binding on GAAA/UUU sequences, which exhibit unusual structural and functional properties in RNA. The present study extends the repertoire of RNA ion binding sites by showing that the first two bases of UUU triplets constitute a specific site for sodium ions. A striking asymmetric pattern of metal ion binding in the two equivalent halves of the palindromic sequence demonstrates that the sequence and its environment act together to bind metal ions. A highly ionophilic half that binds six metal ions allows, for the first time, the observation of a disodium cluster in RNA. The comparison of the equivalent halves of the duplex provides experimental evidence that ion binding correlates with structural alterations and groove contraction.
Code of Federal Regulations, 2011 CFR
2011-01-01
9 CFR 319.302 (Hash). Animals and Animal Products; Food Safety and Inspection Service, Department of Agriculture; Definitions and Standards of Identity or Composition: Canned, Frozen, or Dehydrated Meat Food Products.
Code of Federal Regulations, 2010 CFR
2010-01-01
9 CFR 319.302 (Hash). Animals and Animal Products; Food Safety and Inspection Service, Department of Agriculture; Definitions and Standards of Identity or Composition: Canned, Frozen, or Dehydrated Meat Food Products.
A hybrid cloud read aligner based on MinHash and kmer voting that preserves privacy
NASA Astrophysics Data System (ADS)
Popic, Victoria; Batzoglou, Serafim
2017-05-01
Low-cost clouds can alleviate the compute and storage burden of the genome sequencing data explosion. However, moving personal genome data analysis to the cloud can raise serious privacy concerns. Here, we devise a method named Balaur, a privacy preserving read mapper for hybrid clouds based on locality sensitive hashing and kmer voting. Balaur can securely outsource a substantial fraction of the computation to the public cloud, while being highly competitive in accuracy and speed with non-private state-of-the-art read aligners on short read data. We also show that the method is significantly faster than the state of the art in long read mapping. Therefore, Balaur can enable institutions handling massive genomic data sets to shift part of their analysis to the cloud without sacrificing accuracy or exposing sensitive information to an untrusted third party.
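One of Balaur's building blocks, locality sensitive hashing over k-mers, can be illustrated with a plain MinHash sketch. This is a generic textbook version, not Balaur's actual implementation; the k-mer length and signature size are arbitrary choices for the example.

```python
import hashlib

def kmers(seq: str, k: int = 4):
    """The set of length-k substrings of a read."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def minhash_signature(seq: str, num_hashes: int = 32, k: int = 4):
    """For each of num_hashes seeded hash functions, keep the minimum
    hash value over the read's k-mers (the MinHash sketch)."""
    sig = []
    for seed in range(num_hashes):
        sig.append(min(
            int.from_bytes(hashlib.sha256(f"{seed}:{km}".encode()).digest()[:8], "big")
            for km in kmers(seq, k)))
    return sig

def sketch_similarity(a, b):
    """Fraction of matching signature slots; an unbiased estimate of
    the Jaccard similarity of the two k-mer sets."""
    return sum(x == y for x, y in zip(a, b)) / len(a)
```

Because similar reads share most k-mers, their signatures agree in most slots, so candidate matches can be found by comparing short signatures instead of full reads.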
Reneker, Jeff; Shyu, Chi-Ren; Zeng, Peiyu; Polacco, Joseph C.; Gassmann, Walter
2004-01-01
We have developed a web server for the life sciences community to search for short DNA sequence repeats of length between 3 and 10,000 bases across multiple species. This search employs a unique and fast hash-function approach. Our system also applies information retrieval algorithms to discover knowledge of cross-species conservation of repeat sequences. Furthermore, we have incorporated part of the Gene Ontology database into our information retrieval algorithms to broaden the coverage of the search. Our web server and tutorial can be found at http://acmes.rnet.missouri.edu. PMID:15215469
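The core of such a hash-based repeat search can be sketched simply: index every fixed-length substring in a hash table, then report the ones that recur. This is a generic illustration, not the server's actual algorithm.

```python
from collections import defaultdict

def find_repeats(sequence: str, length: int, min_count: int = 2):
    """Index every substring of the given length in a hash table and
    return those occurring at least min_count times, with their
    start positions."""
    index = defaultdict(list)
    for i in range(len(sequence) - length + 1):
        index[sequence[i:i + length]].append(i)
    return {kmer: pos for kmer, pos in index.items() if len(pos) >= min_count}
```

On `"ATGATGATG"` with `length=3`, the index finds `"ATG"` at positions 0, 3 and 6 (and the overlapping repeats `"TGA"` and `"GAT"`), all in a single linear pass.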
A Novel Fast and Secure Approach for Voice Encryption Based on DNA Computing
NASA Astrophysics Data System (ADS)
Kakaei Kate, Hamidreza; Razmara, Jafar; Isazadeh, Ayaz
2018-06-01
Today, in the world of information communication, voice data has particular importance. One way to protect voice data from attack is voice encryption. Encryption algorithms use various techniques such as hashing, chaotic maps, mixing, and many others. In this paper, an algorithm is proposed for voice encryption based on three different schemes to increase the flexibility and strength of the algorithm. The proposed algorithm uses an innovative encoding scheme, a DNA encryption technique and a permutation function to provide a secure and fast solution for voice encryption. The algorithm is evaluated on various measures, including signal-to-noise ratio, peak signal-to-noise ratio, correlation coefficient, signal similarity and signal frequency content. The results demonstrate the applicability of the proposed method for secure and fast encryption of voice files.
Non-Black-Box Simulation from One-Way Functions and Applications to Resettable Security
2012-11-05
from 2001, Barak (FOCS’01) introduced a novel non-black-box simulation technique. This technique enabled the construction of new cryptographic... primitives, such as resettably-sound zero-knowledge arguments, that cannot be proven secure using just black-box simulation techniques. The work of Barak... requires the existence of collision-resistant hash functions, and a very recent result by Bitansky and Paneth (FOCS’12) instead requires the
Pereira de Sousa, Irene; Suchaoin, Wongsakorn; Zupančič, Ožbej; Leichner, Christina; Bernkop-Schnürch, Andreas
2016-11-05
The aim of this study is to synthesize hyaluronic acid (HA) derivatives bearing mucoadhesive properties and showing prolonged stability at pH 7.4 and under oxidative conditions as a liquid dosage form. HA was modified by thiolation with l-cysteine (HA-SH) and by conjugation with a 2-mercaptonicotinic acid-l-cysteine ligand to obtain an S-protected derivative (HA-MNA). The polymers were characterized by determination of thiol group content and mercaptonicotinic acid content. Cytotoxicity, stability and mucoadhesive properties (rheological evaluation and tensile test) of the polymers were evaluated. HA-SH and HA-MNA could be successfully synthesized with a degree of modification of 5% and 9% of the total moles of carboxylic acid groups, respectively. The MTT assay revealed no toxicity for the polymers. HA-SH proved unstable both at pH 7.4 and under oxidative conditions, whereas HA-MNA was stable under both conditions. Rheological assessment showed a 52-fold and a 3-fold increase in viscosity for HA-MNA incubated with mucus compared to unmodified HA and HA-SH, respectively. Tensile evaluation carried out with intestinal and conjunctival mucosa confirmed the higher mucoadhesive properties of HA-MNA compared to HA-SH. According to the presented results, HA-MNA appears to be a potent excipient for the formulation of stable liquid dosage forms with comparatively high mucoadhesive properties. Copyright © 2016 Elsevier Ltd. All rights reserved.
Matching Real and Synthetic Panoramic Images Using a Variant of Geometric Hashing
NASA Astrophysics Data System (ADS)
Li-Chee-Ming, J.; Armenakis, C.
2017-05-01
This work demonstrates an approach to automatically initialize a visual model-based tracker, and recover from lost tracking, without prior camera pose information. These approaches are commonly referred to as tracking-by-detection. Previous tracking-by-detection techniques used either fiducials (i.e. landmarks or markers) or the object's texture. The main contribution of this work is the development of a tracking-by-detection algorithm that is based solely on natural geometric features. A variant of geometric hashing, a model-to-image registration algorithm, is proposed that searches for a matching panoramic image from a database of synthetic panoramic images captured in a 3D virtual environment. The approach identifies corresponding features between the matched panoramic images. The corresponding features are to be used in a photogrammetric space resection to estimate the camera pose. The experiments apply this algorithm to initialize a model-based tracker in an indoor environment using the 3D CAD model of the building.
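Classic geometric hashing, the technique this variant builds on, can be sketched for 2D point features: model points are stored in a hash table under coordinates expressed in every ordered basis pair, and a query scene votes for the bases whose entries it hits. This is the textbook scheme, not the panoramic variant proposed here; the quantization cell size is an arbitrary choice.

```python
from collections import defaultdict
from itertools import permutations

def quantize(x, y, cell=0.25):
    """Discretize invariant coordinates into hash-table keys."""
    return (round(x / cell), round(y / cell))

def basis_coords(p, origin, axis):
    """Express point p in the frame defined by the ordered pair
    (origin, axis); invariant to translation, rotation and scale."""
    ux, uy = axis[0] - origin[0], axis[1] - origin[1]
    norm = ux * ux + uy * uy
    dx, dy = p[0] - origin[0], p[1] - origin[1]
    return ((dx * ux + dy * uy) / norm, (-dx * uy + dy * ux) / norm)

def build_table(model):
    """Offline: hash every model point under every ordered basis pair."""
    table = defaultdict(list)
    for o, a in permutations(model, 2):
        for p in model:
            if p not in (o, a):
                table[quantize(*basis_coords(p, o, a))].append((o, a))
    return table

def match(table, scene):
    """Online: vote for the model bases consistent with scene features;
    return the best (basis, votes) pair, or None if nothing matched."""
    votes = defaultdict(int)
    for o, a in permutations(scene, 2):
        for p in scene:
            if p not in (o, a):
                for basis in table.get(quantize(*basis_coords(p, o, a)), ()):
                    votes[basis] += 1
    return max(votes.items(), key=lambda kv: kv[1]) if votes else None
```

A translated copy of the model accumulates votes for the corresponding basis, which is how the tracker can recover a matching panorama without any prior pose estimate.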
Wang, Zhe; Quinn, Paul C; Jin, Haiyang; Sun, Yu-Hao P; Tanaka, James W; Pascalis, Olivier; Lee, Kang
2018-04-25
Using a composite-face paradigm, we examined the holistic processing induced by Asian faces, Caucasian faces, and monkey faces with human Asian participants in two experiments. In Experiment 1, participants were asked to judge whether the upper halves of two faces successively presented were the same or different. A composite-face effect was found for Asian faces and Caucasian faces, but not for monkey faces. In Experiment 2, participants were asked to judge whether the lower halves of the two faces successively presented were the same or different. A composite-face effect was found for monkey faces as well as for Asian faces and Caucasian faces. Collectively, these results reveal that own-species (i.e., own-race and other-race) faces engage holistic processing in both upper and lower halves of the face, but other-species (i.e., monkey) faces engage holistic processing only when participants are asked to match the lower halves of the face. The findings are discussed in the context of a region-based holistic processing account for the species-specific effect in face recognition. Copyright © 2018 Elsevier Ltd. All rights reserved.
Matching Aerial Images to 3D Building Models Using Context-Based Geometric Hashing
Jung, Jaewook; Sohn, Gunho; Bang, Kiin; Wichmann, Andreas; Armenakis, Costas; Kada, Martin
2016-01-01
A city is a dynamic entity whose environment is continuously changing over time. Accordingly, its virtual city models also need to be regularly updated to support accurate model-based decisions for various applications, including urban planning, emergency response and autonomous navigation. The concept of continuous city modeling is to progressively reconstruct city models by accommodating changes recognized in the spatio-temporal domain, while preserving unchanged structures. A first critical step for continuous city modeling is to coherently register remotely sensed data taken at different epochs with existing building models. This paper presents a new model-to-image registration method using context-based geometric hashing (CGH) to align a single image with existing 3D building models. The registration process consists of three steps: (1) feature extraction; (2) similarity measurement and matching; and (3) estimation of the exterior orientation parameters (EOPs) of the single image. For feature extraction, we propose two types of matching cues: edged corner features representing the saliency of building corner points with associated edges, and contextual relations among the edged corner features within an individual roof. A set of matched corners is found under a given proximity measure through geometric hashing, and optimal matches are then determined by maximizing a matching cost that encodes the contextual similarity between matching candidates. The final matched corners are used to adjust the EOPs of the single airborne image by the least squares method based on collinearity equations. The results show that acceptable EOP accuracy can be achieved with the proposed registration approach, offering an alternative to the labor-intensive manual registration process. PMID:27338410
Using the Hill Cipher to Teach Cryptographic Principles
ERIC Educational Resources Information Center
McAndrew, Alasdair
2008-01-01
The Hill cipher is the simplest example of a "block cipher," which takes a block of plaintext as input, and returns a block of ciphertext as output. Although it is insecure by modern standards, its simplicity means that it is well suited for the teaching of such concepts as encryption modes, and properties of cryptographic hash functions. Although…
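The cipher itself fits in a few lines. Using the common textbook 2x2 key [[3, 3], [2, 5]], which is invertible mod 26 because its determinant 9 is coprime to 26, encryption and decryption are the same matrix multiply with the key or its modular inverse:

```python
# 2x2 Hill cipher over the 26-letter alphabet (A=0 ... Z=25).
KEY = [[3, 3], [2, 5]]          # det = 9, gcd(9, 26) = 1, so invertible
KEY_INV = [[15, 17], [20, 9]]   # KEY * KEY_INV = I (mod 26)

def hill(text: str, key) -> str:
    """Apply the key matrix to each 2-letter block of text (assumed to
    contain only letters and to have even length)."""
    nums = [ord(c) - 65 for c in text.upper()]
    out = []
    for i in range(0, len(nums), 2):
        x, y = nums[i], nums[i + 1]
        out.append(chr((key[0][0] * x + key[0][1] * y) % 26 + 65))
        out.append(chr((key[1][0] * x + key[1][1] * y) % 26 + 65))
    return "".join(out)
```

For example, `hill("HELP", KEY)` gives `"HIAT"`, and applying `KEY_INV` to `"HIAT"` recovers `"HELP"`, which makes the roles of the key matrix and its modular inverse concrete for students.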
Network-Aware Mechanisms for Tolerating Byzantine Failures in Distributed Systems
2012-01-01
In Digest, we use SHA-256 as the collision-resistant hash function. We use the realization of SHA-256 from the OpenSSL toolkit [48]. The results... vol. 46, pp. 372–378, 1997. [48] “OpenSSL project.” [Online]. Available: http://www.openssl.org/ [49] M. J. Fischer, N. A. Lynch, and M. Merritt
Das, Ashok Kumar
2015-03-01
An integrated EPR (Electronic Patient Record) information system provides medical institutions and academia with detailed patient information to support corrective and clinical decisions for maintaining and analyzing patients' health. In such a system, illegal access must be restricted and information must be protected from theft during transmission over the insecure Internet. Lee et al. proposed an efficient password-based remote user authentication scheme using smart cards for the integrated EPR information system. Their scheme is very efficient due to its use of a one-way hash function and bitwise exclusive-or (XOR) operations. However, in this paper, we show that although their scheme is very efficient, it has three security weaknesses: (1) it has design flaws in the password change phase, (2) it fails to resist the privileged insider attack and (3) it lacks formal security verification. We also find that Wen's recently proposed scheme has the same security drawbacks as Lee et al.'s scheme. In order to remedy these weaknesses, we propose a secure and efficient password-based remote user authentication scheme using smart cards for the integrated EPR information system. Our scheme remains as efficient as Lee et al.'s and Wen's schemes, since it also uses only a one-way hash function and bitwise XOR operations. Through security analysis, we show that our scheme is secure against possible known attacks. Furthermore, we simulate our scheme for formal security verification using the widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool and show that it is secure against passive and active attacks.
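The hash-plus-XOR style these smart-card schemes rely on can be sketched generically. This is not Das's protocol nor Lee et al.'s; the registration and login steps below are a simplified illustration of how XOR masking keeps both the password and the server secret off the card in the clear.

```python
import hashlib
import secrets

def H(*parts: bytes) -> bytes:
    """One-way hash over concatenated fields (SHA-256 here)."""
    return hashlib.sha256(b"|".join(parts)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Registration: the card stores the password verifier masked by a
# server-derived value, so neither secret appears in the clear.
server_key = secrets.token_bytes(32)
user_id, password = b"alice", b"correct horse"
card_value = xor(H(user_id, password), H(user_id, server_key))

# Login: the card recomputes H(ID, PW) from the typed password and
# unmasks the server-side half for the authentication handshake.
typed = b"correct horse"
recovered = xor(card_value, H(user_id, typed))
assert recovered == H(user_id, server_key)
```

A wrong password yields a different unmasked value, so the handshake fails; the efficiency the abstract highlights comes from using only hashing and XOR, with no modular exponentiation.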
A comparative study of Message Digest 5(MD5) and SHA256 algorithm
NASA Astrophysics Data System (ADS)
Rachmawati, D.; Tarigan, J. T.; Ginting, A. B. C.
2018-03-01
A document is a collection of written or printed data containing information. As technology advances rapidly, document integrity must be preserved. Because an open document can be read and modified by many parties, the integrity of the information it contains is not automatically maintained. To maintain data integrity, a mechanism called a digital signature is needed. A digital signature is a specific code generated by a signature-producing function, and one class of algorithms used to create digital signatures is the hash function. There are many hash functions; two of them are Message Digest 5 (MD5) and SHA256. Both algorithms have their own advantages and disadvantages. The purpose of this research is to determine which algorithm is better. The parameters used to compare the two algorithms are running time and complexity. The results show that the complexity of MD5 and SHA256 is the same, i.e., Θ(N), but in terms of speed, MD5 outperforms SHA256.
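The running-time comparison is easy to reproduce with Python's hashlib. This is an illustration of the measurement, not the paper's exact benchmark; the message size and repetition count are arbitrary, and on CPUs with SHA hardware extensions the gap can narrow or even reverse.

```python
import hashlib
import timeit

data = b"x" * (1 << 20)  # a 1 MiB message

# Both algorithms walk the message once, so both are Theta(N) in the
# input length; only the per-block constant factor differs.
md5_t = timeit.timeit(lambda: hashlib.md5(data).hexdigest(), number=50)
sha_t = timeit.timeit(lambda: hashlib.sha256(data).hexdigest(), number=50)

print(f"MD5:    {md5_t:.3f}s  (128-bit digest)")
print(f"SHA256: {sha_t:.3f}s  (256-bit digest)")
```

The digest lengths (32 vs 64 hex characters) also show the security trade-off behind the speed difference: MD5 is faster but is no longer collision-resistant, so the speed advantage alone should not decide the choice.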
Does Temporal Integration of Face Parts Reflect Holistic Processing?
Cheung, Olivia S.; Richler, Jennifer J.; Phillips, W. Stewart; Gauthier, Isabel
2011-01-01
We examined whether temporal integration of face parts reflects holistic processing or response interference. Participants learned to name two faces “Fred” and two “Bob”. At test, top and bottom halves of different faces formed composites and were presented briefly separated in time. Replicating prior findings (Singer & Sheinberg, 2006), naming of the target halves for aligned composites was slowed when the irrelevant halves were from faces with a different name compared to that from the original face. However, no interference was observed when the irrelevant halves had identical names as the target halves but came from different learned faces, arguing against a true holistic effect. Instead, response interference was obtained when the target halves briefly preceded the irrelevant halves. Experiment 2 confirmed a double-dissociation between holistic processing vs. response interference for intact faces vs. temporally separated face halves, suggesting that simultaneous presentation of facial information is critical for holistic processing. PMID:21327378
In Vitro and Ex Vivo Evaluation of Novel Curcumin-Loaded Excipient for Buccal Delivery.
Laffleur, Flavia; Schmelzle, Franziska; Ganner, Ariane; Vanicek, Stefan
2017-08-01
This study aimed to develop a mucoadhesive polymeric excipient comprising curcumin for buccal delivery. Curcumin offers a broad range of benefits, including antioxidant, anti-inflammatory, and chemotherapeutic activity. Hyaluronic acid (HA) as the polymeric excipient was modified by immobilization of thiol-bearing ligands. l-Cysteine (SH) ethyl ester was covalently attached via amide bond formation between cysteine and the carboxylic moiety of hyaluronic acid. Successful synthesis was confirmed by ¹H-NMR and IR spectra. The obtained thiolated polymer, hyaluronic acid ethyl ester (HA-SH), was evaluated in terms of stability, safety, mucoadhesiveness, drug release, and permeation-enhancing properties. HA-SH showed a 2.75-fold higher swelling capacity over time in comparison to the unmodified polymer. Furthermore, mucoadhesion increased 3.4-fold in the case of HA-SH, and drug release increased 1.6-fold versus the HA control. Curcumin-loaded HA-SH exhibited a 4.4-fold higher permeation compared with the respective HA. Taking these outcomes into consideration, the novel curcumin-loaded excipient, namely thiolated hyaluronic acid ethyl ester, appears to be a promising tool for pharyngeal diseases.
[PREPARATION AND BIOCOMPATIBILITY OF IN SITU CROSSLINKING HYALURONIC ACID HYDROGEL].
Liang, Jiabi; Li, Jun; Wang, Ting; Liang, Yuhong; Zou, Xuenong; Zhou, Guangqian; Zhou, Zhiyu
2016-06-08
To fabricate an in situ crosslinking hyaluronic acid hydrogel and evaluate its biocompatibility in vitro. Acrylic acid chloride and polyethylene glycol were used to prepare the crosslinking agent polyethylene glycol diacrylate (PEGDA), and the molecular structure of PEGDA was analyzed by Fourier transform infrared spectroscopy and 1H nuclear magnetic resonance spectroscopy. Hyaluronic acid was chemically modified to prepare thiolated hyaluronic acid (HA-SH), and the degree of thiolation was analyzed qualitatively and quantitatively by the Ellman method. HA-SH solutions in concentrations (W/V) of 0.5%, 1.0%, and 1.5% and PEGDA solutions in concentrations (W/V) of 2%, 4%, and 6% were prepared with PBS. The two solutions were mixed in different ratios to obtain in situ crosslinking hyaluronic acid hydrogels, and the crosslinking time was recorded. The cytotoxicity of the in situ crosslinking hydrogel (1.5% HA-SH mixed with 4% PEGDA) was tested with L929 cells. Meanwhile, the biocompatibility of the hydrogel was tested by co-culture with human bone mesenchymal stem cells (hBMSCs). Fourier transform infrared spectroscopy showed that most hydroxyl groups were replaced by acrylate groups; 1H nuclear magnetic resonance spectroscopy showed 3 characteristic hydrogen peaks representing acrylate and olefinic bonds at 5-7 ppm. The thiolation yield of HA-SH was 65.4%. The in situ crosslinking time of the hydrogel ranged from 2 to 70 minutes at PEGDA concentrations of 2%-6% and HA-SH concentrations of 0.5%-1.5%. The hyaluronic acid hydrogel appeared transparent. The toxicity grade of the hydrogel leaching solution was grade 1. hBMSCs grew well and distributed evenly in the hydrogel with very high viability. The in situ crosslinking hyaluronic acid hydrogel has low cytotoxicity, good biocompatibility, and a controllable crosslinking time, so it could be used as a potential tissue-engineered scaffold or repair material for tissue regeneration.
Cost-Sensitive Local Binary Feature Learning for Facial Age Estimation.
Lu, Jiwen; Liong, Venice Erin; Zhou, Jie
2015-12-01
In this paper, we propose a cost-sensitive local binary feature learning (CS-LBFL) method for facial age estimation. Unlike the conventional facial age estimation methods that employ hand-crafted descriptors or holistically learned descriptors for feature representation, our CS-LBFL method learns discriminative local features directly from raw pixels for face representation. Motivated by the fact that facial age estimation is a cost-sensitive computer vision problem and local binary features are more robust to illumination and expression variations than holistic features, we learn a series of hashing functions to project raw pixel values extracted from face patches into low-dimensional binary codes, where binary codes with similar chronological ages are projected as close as possible, and those with dissimilar chronological ages are projected as far as possible. Then, we pool and encode these local binary codes within each face image as a real-valued histogram feature for face representation. Moreover, we propose a cost-sensitive local binary multi-feature learning method to jointly learn multiple sets of hashing functions using face patches extracted from different scales to exploit complementary information. Our methods achieve competitive performance on four widely used face aging data sets.
The biogenesis pathway of tRNA-derived piRNAs in Bombyx germ cells
Honda, Shozo; Kawamura, Takuya; Loher, Phillipe; Morichika, Keisuke; Rigoutsos, Isidore
2017-01-01
Abstract Transfer RNAs (tRNAs) function in the translational machinery and further serve as a source of short non-coding RNAs (ncRNAs). tRNA-derived ncRNAs show differential expression profiles and play roles in many biological processes beyond translation. The molecular mechanisms that shape and regulate their expression profiles are largely unknown. Here, we report the biogenesis mechanism of tRNA-derived Piwi-interacting RNAs (td-piRNAs) expressed in Bombyx BmN4 cells. In these cells, two cytoplasmic tRNA species, tRNAAspGUC and tRNAHisGUG, served as the major sources of td-piRNAs, which were derived from the 5′-part of the respective tRNAs. cP-RNA-seq identified the two tRNAs as major substrates for the 5′-tRNA halves as well, suggesting a previously uncharacterized link between 5′-tRNA halves and td-piRNAs. An increase in the levels of the 5′-tRNA halves, induced by BmNSun2 knockdown, enhanced td-piRNA expression levels without quantitative changes in mature tRNAs, indicating that 5′-tRNA halves, not mature tRNAs, are the direct precursors of td-piRNAs. For the generation of tRNAHisGUG-derived piRNAs, BmThg1l-mediated nucleotide addition to the −1 position of tRNAHisGUG was required, revealing an important function of BmThg1l in piRNA biogenesis. Our study advances the understanding of the biogenesis mechanisms and the genesis of specific expression profiles for tRNA-derived ncRNAs. PMID:28645172
Running Clubs--A Combinatorial Investigation.
ERIC Educational Resources Information Center
Nissen, Phillip; Taylor, John
1991-01-01
Presented is a combinatorial problem based on the Hash House Harriers rule which states that the route of the run should not have previously been traversed by the club. Discovered is how many weeks the club can meet before the rule has to be broken. (KR)
Research on target tracking algorithm based on spatio-temporal context
NASA Astrophysics Data System (ADS)
Li, Baiping; Xu, Sanmei; Kang, Hongjuan
2017-07-01
In this paper, a novel target tracking algorithm based on spatio-temporal context is proposed. During tracking, camera shake or occlusion may cause tracking to fail; the proposed algorithm solves this problem effectively. The method takes the spatio-temporal context algorithm as its core. The target region in the first frame is selected manually with the mouse, and the spatio-temporal context algorithm then tracks the target through the sequence of frames. During this process, a similarity measure based on a perceptual hash algorithm is used to judge the tracking results. If tracking fails, the initial value of the Mean Shift algorithm is reset for subsequent target tracking. Experimental results show that the proposed algorithm achieves real-time and stable tracking under camera shake or target occlusion.
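A perceptual-hash similarity check of the kind described can be sketched as an average hash: threshold each pixel of the tracked patch against the patch mean and compare the resulting bit strings by Hamming distance. This is a generic aHash illustration (real trackers would first downscale the patch to, say, 8x8 pixels), not the paper's exact function; the threshold value is an arbitrary choice.

```python
def average_hash(gray):
    """Average hash of a small grayscale patch (a 2D list of pixel
    values): each bit records whether a pixel exceeds the patch mean."""
    flat = [p for row in gray for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

def similar(h1, h2, threshold=5):
    """Small Hamming distance -> patches likely show the same target,
    so tracking is judged successful; otherwise trigger re-detection."""
    return hamming(h1, h2) <= threshold
```

Because the hash depends only on each pixel's relation to the mean, a uniform brightness change leaves the bits unchanged, which makes the check robust to the illumination shifts that accompany camera shake.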
A Study of Gaps in Cyber Defense Automation
2015-10-18
to lowercase. Next, the normalized file is tokenized using an n-length window. These n-tokens are then hashed into a Bloom filter for that file. In the... readily available to most developers. However, with a complexity of O(N²), where N is the number of files (or the number of functions if using function... that measure the impact of each feature on the website's chances of becoming compromised, and the top N features are submitted to the classification
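The normalize-tokenize-hash pipeline described in the snippet can be sketched as follows; the filter size, hash count, and window length are arbitrary choices for the illustration, not the study's parameters.

```python
import hashlib

class BloomFilter:
    """A minimal Bloom filter: k seeded hashes set/check bits in a
    fixed-size bit array; no false negatives, rare false positives."""

    def __init__(self, num_bits: int = 1024, num_hashes: int = 3):
        self.bits = bytearray(num_bits // 8)
        self.num_bits, self.num_hashes = num_bits, num_hashes

    def _positions(self, item: bytes):
        for seed in range(self.num_hashes):
            d = hashlib.sha256(bytes([seed]) + item).digest()
            yield int.from_bytes(d[:8], "big") % self.num_bits

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: bytes):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

def file_fingerprint(text: str, n: int = 4) -> BloomFilter:
    """Lowercase the file, slide an n-character window over it, and
    hash every n-token into the file's Bloom filter."""
    bf = BloomFilter()
    text = text.lower()
    for i in range(len(text) - n + 1):
        bf.add(text[i:i + n].encode())
    return bf
```

Two files can then be compared by intersecting their filters instead of their raw contents, which is what makes the pairwise O(N²) comparison tolerable in practice.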
Santos, Bruna Parapinski; Souza, Fernando Nogueira; Blagitz, Maiara Garcia; Batista, Camila Freitas; Bertagnon, Heloísa Godoi; Diniz, Soraia Araújo; Silva, Marcos Xavier; Haddad, João Paulo Amaral; Della Libera, Alice Maria Melville Paiva
2017-06-01
The exact influence of caprine arthritis encephalitis virus (CAEV) infection on blood and milk polymorphonuclear leukocytes (PMNLs) and monocytes/macrophages of goats remains unclear. Thus, the present study sought to explore blood and milk PMNL and monocyte/macrophage functions in naturally CAEV-infected goats. The study used 18 healthy Saanen goats that were segregated according to serological test outcomes into CAEV-negative (n=8; 14 halves) and CAEV-positive (n=10; 14 halves) groups. All milk samples from mammary halves with bacteriologically positive milk, a somatic cell count ≥2×10⁶ cells mL⁻¹, or abnormal secretions in the strip cup test were excluded. We evaluated the percentage of blood and milk PMNLs and monocytes/macrophages, the viability of PMNLs and monocytes/macrophages, the levels of intracellular reactive oxygen species (ROS) and the nonopsonized phagocytosis of Staphylococcus aureus and Escherichia coli by flow cytometry. A higher percentage of milk macrophages (CD14+) and of milk polymorphonuclear leukocytes undergoing late apoptosis or necrosis (Annexin-V+/Propidium iodide+) was observed in CAEV-infected goats; we did not find any further alterations in blood and milk PMNL and monocyte/macrophage functions. Thus, based on our results, goats naturally infected with CAEV did not reveal pronounced dysfunctions in blood and milk polymorphonuclear leukocytes and monocytes/macrophages. Copyright © 2017 Elsevier B.V. All rights reserved.
Semantic Segmentation of Building Elements Using Point Cloud Hashing
NASA Astrophysics Data System (ADS)
Chizhova, M.; Gurianov, A.; Hess, M.; Luhmann, T.; Brunn, A.; Stilla, U.
2018-05-01
For the interpretation of point clouds, the semantic definition of segments extracted from point clouds or images is a common problem. Usually, the semantics of geometrically pre-segmented point cloud elements are determined using probabilistic networks and scene databases. The proposed semantic segmentation method is based on the psychology of human interpretation of geometric objects, especially on fundamental rules of primary comprehension. Starting from these rules, buildings can be quite well and simply classified by a human operator (e.g. an architect) into different building types and structural elements (dome, nave, transept etc.), including particular building parts which are visually detected. The key part of the procedure is a novel hashing-based method in which point cloud projections are transformed into binary pixel representations. The segmentation approach, demonstrated on the example of classical Orthodox churches, is also suitable for other buildings and objects characterized by a particular constructional typology (e.g. industrial objects in standardized environments with strict component design allowing clear semantic modelling).
LSHSIM: A Locality Sensitive Hashing based method for multiple-point geostatistics
NASA Astrophysics Data System (ADS)
Moura, Pedro; Laber, Eduardo; Lopes, Hélio; Mesejo, Daniel; Pavanelli, Lucas; Jardim, João; Thiesen, Francisco; Pujol, Gabriel
2017-10-01
Reservoir modeling is a very important task that permits the representation of a geological region of interest, so as to generate a considerable number of possible scenarios. Since its inception, many methodologies have been proposed and, in the last two decades, multiple-point geostatistics (MPS) has been the dominant one. This methodology is strongly based on the concept of a training image (TI) and the use of its characteristics, which are called patterns. In this paper, we propose a new MPS method that combines a technique called Locality Sensitive Hashing (LSH), which accelerates the search for patterns similar to a target one, with a Run-Length Encoding (RLE) compression technique that speeds up the calculation of the Hamming similarity. Experiments with both categorical and continuous images show that LSHSIM is computationally efficient and produces good-quality realizations. In particular, for categorical data, the results suggest that LSHSIM is faster than MS-CCSIM, one of the state-of-the-art methods.
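The RLE-based Hamming speedup can be illustrated with a short sketch. This is our own minimal illustration of the idea, not the authors' LSHSIM code (function names and the run-walking scheme are assumptions): Hamming distance is computed run-by-run on the compressed encodings, without decompressing.

```python
def rle(seq):
    """Run-length encode a sequence as [[value, run_length], ...]."""
    runs = []
    for v in seq:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

def hamming_rle(a_runs, b_runs):
    """Hamming distance computed directly on two run-length encodings.

    Walks both run lists in lockstep; each step consumes the overlap of the
    current runs, so cost is proportional to the number of runs, not the
    decompressed length.
    """
    dist, i, j = 0, 0, 0
    ai = bj = 0                       # amount consumed inside the current runs
    while i < len(a_runs) and j < len(b_runs):
        va, la = a_runs[i]
        vb, lb = b_runs[j]
        step = min(la - ai, lb - bj)  # overlap of the two current runs
        if va != vb:
            dist += step              # whole overlap disagrees at once
        ai += step
        bj += step
        if ai == la:
            i, ai = i + 1, 0
        if bj == lb:
            j, bj = j + 1, 0
    return dist
```

For long runs (common in categorical training images), one comparison settles many cells at once, which is where the claimed speedup comes from.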
Tuple spaces in hardware for accelerated implicit routing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Zachary Kent; Tripp, Justin
2010-12-01
Organizing and optimizing data objects on networks with support for data migration and failing nodes is a complicated problem to handle as systems grow. The goal of this work is to demonstrate that high levels of speedup can be achieved by moving responsibility for finding, fetching, and staging data into an FPGA-based network card. We present a system for implicit routing of data via FPGA-based network cards. In this system, data structures are requested by name, and the network of FPGAs finds the data within the network and relays the structure to the requester. This is achieved through successive examination of hardware hash tables implemented in the FPGA. By avoiding software stacks between nodes, the data is quickly fetched entirely through FPGA-FPGA interaction. The performance of this system is orders of magnitude faster than software implementations due to the improved speed of the hash tables and lowered latency between the network nodes.
A novel image retrieval algorithm based on PHOG and LSH
NASA Astrophysics Data System (ADS)
Wu, Hongliang; Wu, Weimin; Peng, Jiajin; Zhang, Junyuan
2017-08-01
PHOG describes the local shape of an image and its spatial relationships. Using the PHOG algorithm to extract image features for image recognition, retrieval, and related tasks has achieved good results. In recent years, the locality sensitive hashing (LSH) algorithm has proved superior to traditional algorithms for near-neighbor problems on large-scale data. This paper presents a novel image retrieval algorithm based on PHOG and LSH. First, we use PHOG to extract the feature vector of the image; then we use L different LSH hash tables to reduce the PHOG descriptor to index values that map to different buckets; finally, we retrieve the candidate images from the corresponding buckets and re-rank them using the Manhattan distance. This algorithm adapts to massive image retrieval, ensuring high retrieval accuracy while reducing the time complexity of retrieval.
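The bucket-then-re-rank pipeline can be sketched as follows. This is a generic quantized random-projection LSH with Manhattan re-ranking, assumed for illustration; the PHOG feature extraction step is omitted, and the parameter values (L, k, bucket width w) are not the paper's.

```python
import random

W = 4.0  # bucket width for projection quantization (illustrative value)

def make_tables(dim, L=4, k=3, seed=7):
    """L independent LSH tables, each keyed by k quantized random projections."""
    rng = random.Random(seed)
    return [[([rng.gauss(0, 1) for _ in range(dim)], rng.uniform(0, W))
             for _ in range(k)] for _ in range(L)]

def lsh_key(vec, table):
    """Hash key: each projection is shifted and floored into a bucket index."""
    return tuple(int((sum(x * a for x, a in zip(vec, proj)) + b) // W)
                 for proj, b in table)

def manhattan(u, v):
    return sum(abs(x - y) for x, y in zip(u, v))

def build_index(vectors, tables):
    index = [{} for _ in tables]
    for vid, vec in enumerate(vectors):
        for t, table in enumerate(tables):
            index[t].setdefault(lsh_key(vec, table), []).append(vid)
    return index

def retrieve(query, vectors, tables, index):
    """Union the candidates from all L buckets, then re-rank exactly."""
    cand = set()
    for t, table in enumerate(tables):
        cand.update(index[t].get(lsh_key(query, table), ()))
    return sorted(cand, key=lambda vid: manhattan(query, vectors[vid]))
```

The second-stage Manhattan re-rank only touches the candidate set, which is what keeps the overall retrieval sub-linear in the database size.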
A Proposal for Kelly Criterion-Based Lossy Network Compression
2016-03-01
warehousing and data mining techniques for cyber security. New York (NY): Springer; 2007. p. 83-108. 34. Münz G, Li S, Carle G. Traffic anomaly...p. 188-196. 48. Kim NU, Park MW, Park SH, Jung SM, Eom JH, Chung TM. A study on effective hash-based load balancing scheme for parallel NIDS. In
Tangible interactive system for document browsing and visualisation of multimedia data
NASA Astrophysics Data System (ADS)
Rytsar, Yuriy; Voloshynovskiy, Sviatoslav; Koval, Oleksiy; Deguillaume, Frederic; Topak, Emre; Startchik, Sergei; Pun, Thierry
2006-01-01
In this paper we introduce and develop a framework for document interactive navigation in multimodal databases. First, we analyze the main open issues of existing multimodal interfaces and then discuss two applications that include interaction with documents in several human environments, i.e., the so-called smart rooms. Second, we propose a system set-up dedicated to the efficient navigation in the printed documents. This set-up is based on the fusion of data from several modalities that include images and text. Both modalities can be used as cover data for hidden indexes using data-hiding technologies as well as source data for robust visual hashing. The particularities of the proposed robust visual hashing are described in the paper. Finally, we address two practical applications of smart rooms for tourism and education and demonstrate the advantages of the proposed solution.
SnapDock—template-based docking by Geometric Hashing
Estrin, Michael; Wolfson, Haim J.
2017-01-01
Motivation: A highly efficient template-based protein-protein docking algorithm, nicknamed SnapDock, is presented. It employs a Geometric Hashing-based structural alignment scheme to align the target proteins to the interfaces of non-redundant protein-protein interface libraries. Docking of a pair of proteins utilizing the 22,600-interface PIFACE library is performed in under 2 min on average. A flexible version of the algorithm allowing hinge motion in one of the proteins is presented as well. Results: To evaluate the performance of the algorithm, a blind re-modelling of 3547 PDB complexes uploaded after the PIFACE publication was performed, with a success ratio of about 35%. Interestingly, a similar experiment with the template-free PatchDock docking algorithm yielded a success rate of about 23%, with roughly 1/3 of its solutions different from those of SnapDock. Consequently, the combination of the two methods gave a 42% success ratio. Availability and implementation: A web server of the application is under development. Contact: michaelestrin@gmail.com or wolfson@tau.ac.il PMID:28881968
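The Geometric Hashing principle underlying such alignment schemes can be shown with a toy 2D sketch; this is not SnapDock (which operates on 3D protein interfaces), only an assumed minimal illustration of the technique: point-pair bases define invariant frames, quantized frame coordinates key a hash table, and recognition proceeds by voting.

```python
from collections import defaultdict

def basis_coords(p1, p2, q):
    """Coordinates of q in the frame of the ordered point pair (p1, p2).
    Invariant to translation, rotation and uniform scaling."""
    d = complex(*p2) - complex(*p1)
    return (complex(*q) - complex(*p1)) / d

def quantize(z, step=0.05):
    return (round(z.real / step), round(z.imag / step))

def build_table(model):
    """Preprocessing: store every (basis pair) under the quantized coordinates
    of every other model point."""
    table = defaultdict(set)
    for i, p1 in enumerate(model):
        for j, p2 in enumerate(model):
            if i == j:
                continue
            for k, q in enumerate(model):
                if k not in (i, j):
                    table[quantize(basis_coords(p1, p2, q))].add((i, j))
    return table

def vote(table, scene, b1, b2):
    """Recognition: express scene points in a chosen scene basis and vote for
    the model bases found in the matching hash bins."""
    votes = defaultdict(int)
    for q in scene:
        if q in (b1, b2):
            continue
        for basis in table.get(quantize(basis_coords(b1, b2, q)), ()):
            votes[basis] += 1
    return votes
```

A strongly voted model basis gives the candidate alignment, which a full system would then refine and score.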
A new method of cannabis ingestion: the dangers of dabs?
Loflin, Mallory; Earleywine, Mitch
2014-10-01
A new method for administering cannabinoids, called butane hash oil ("dabs"), is gaining popularity among marijuana users. Despite press reports that suggest that "dabbing" is riskier than smoking flower cannabis, no data address whether dabs users experience more problems from use than those who prefer flower cannabis. The present study aimed to gather preliminary information on dabs users and test whether dabs use is associated with more problems than using flower cannabis. Participants (n=357) reported on their history of cannabis use, their experience with hash oil and the process of "dabbing," reasons for choosing "dabs" over other methods, and any problems related to both flower cannabis and butane hash oil. Analyses revealed that using "dabs" created no more problems or accidents than using flower cannabis. Participants did report that "dabs" led to higher tolerance and withdrawal (as defined by the participants), suggesting that the practice might be more likely to lead to symptoms of addiction or dependence. The use of butane hash oil has spread outside of the medical marijuana community, and users view it as significantly more dangerous than other forms of cannabis use. Published by Elsevier Ltd.
Fast Exact Search in Hamming Space With Multi-Index Hashing.
Norouzi, Mohammad; Punjani, Ali; Fleet, David J
2014-06-01
There is growing interest in representing image data and feature descriptors using compact binary codes for fast near neighbor search. Although binary codes are motivated by their use as direct indices (addresses) into a hash table, codes longer than 32 bits are not being used as such, as it was thought to be ineffective. We introduce a rigorous way to build multiple hash tables on binary code substrings that enables exact k-nearest neighbor search in Hamming space. The approach is storage efficient and straight-forward to implement. Theoretical analysis shows that the algorithm exhibits sub-linear run-time behavior for uniformly distributed codes. Empirical results show dramatic speedups over a linear scan baseline for datasets of up to one billion codes of 64, 128, or 256 bits.
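The pigeonhole idea behind multi-index hashing can be sketched compactly: if two codes differ in at most r bits and each is split into m substrings, some substring pair differs in at most floor(r/m) bits, so exact search only needs to probe small Hamming balls in each substring table. The sketch below is our simplified reading of that principle, not the authors' optimized implementation.

```python
from itertools import combinations

def substrings(code, m, s):
    """Split an m*s-bit integer code into m s-bit substrings."""
    mask = (1 << s) - 1
    return [(code >> (i * s)) & mask for i in range(m)]

def build_index(codes, m, s):
    """One hash table per substring position."""
    tables = [dict() for _ in range(m)]
    for idx, c in enumerate(codes):
        for i, sub in enumerate(substrings(c, m, s)):
            tables[i].setdefault(sub, []).append(idx)
    return tables

def ball(sub, s, r):
    """All s-bit values within Hamming radius r of sub."""
    out = [sub]
    for d in range(1, r + 1):
        for bits in combinations(range(s), d):
            v = sub
            for b in bits:
                v ^= 1 << b
            out.append(v)
    return out

def query(tables, codes, q, m, s, r):
    """Exact r-neighbor search: gather candidates via the pigeonhole bound,
    then filter with the full Hamming distance."""
    cand = set()
    for i, sub in enumerate(substrings(q, m, s)):
        for v in ball(sub, s, r // m):
            cand.update(tables[i].get(v, ()))
    return sorted(i for i in cand if bin(codes[i] ^ q).count("1") <= r)
```

Because each probe enumerates only a radius-floor(r/m) ball over s bits rather than a radius-r ball over the full code, the candidate set stays small for uniformly distributed codes.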
HASH: the Hong Kong/AAO/Strasbourg Hα planetary nebula database
NASA Astrophysics Data System (ADS)
Parker, Quentin A.; Bojičić, Ivan S.; Frew, David J.
2016-07-01
By incorporating our major recent discoveries with re-measured and verified contents of existing catalogues, we provide, for the first time, an accessible, reliable, on-line SQL database of essential, up-to-date information for all known Galactic planetary nebulae (PNe). We have attempted to: i) reliably remove PN mimics/false IDs that have biased previous studies and ii) provide accurate positions, sizes, morphologies, multi-wavelength imagery and spectroscopy. We also provide a link to CDS/Vizier for the archival history of each object and other valuable links to external data. With the HASH interface, users can sift, select, browse, collate, investigate, download and visualise the entire currently known Galactic PNe diversity. HASH provides the community with the most complete and reliable data with which to undertake new science.
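The kind of SQL selection such a database enables can be sketched against a small in-memory mirror. The table and column names below are purely illustrative assumptions, not the actual HASH schema.

```python
import sqlite3

# Hypothetical local mirror of a PN catalogue table; the schema and the
# example status values are assumptions for illustration only.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE pne (
    name TEXT, glon REAL, glat REAL, status TEXT, major_diam_arcsec REAL)""")
con.executemany("INSERT INTO pne VALUES (?,?,?,?,?)", [
    ("PN G000.1+17.2", 0.1, 17.2, "True", 8.0),
    ("PN G002.0-06.1", 2.0, -6.1, "Likely", 30.0),
    ("PN G010.5+00.1", 10.5, 0.1, "Possible", 120.0),
])

# Select confirmed, compact nebulae: the sift-and-select workflow in SQL form.
rows = con.execute(
    "SELECT name FROM pne WHERE status = 'True' AND major_diam_arcsec < 60"
).fetchall()
```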
Kernelized Locality-Sensitive Hashing for Fast Image Landmark Association
2011-03-24
based Simultaneous Localization and Mapping (SLAM). The problem, however, is that vision-based navigation techniques can require excessive amounts of...up and optimizing the data association process in vision-based SLAM. Specifically, this work studies the current methods that algorithms use to...required for location identification than that of other methods. This work can then be extended into a vision-SLAM implementation to subsequently
Significance of cannabis use to dental practice.
Maloney, William James
2011-04-01
The illicit use of the three main forms of cannabis (marijuana, hash, and hash oil) poses certain obstacles and challenges to the dental professional. There are a number of systemic as well as oral/head and neck manifestations associated with cannabis use. Dentists need to be aware of these manifestations in order to take whatever precautions and make whatever modifications to the proposed treatment might be necessary.
Portable Language-Independent Adaptive Translation from OCR. Phase 1
2009-04-01
including brute-force k-Nearest Neighbors (kNN), fast approximate kNN using hashed k-d trees, classification and regression trees, and locality...achieved by refinements in ground-truthing protocols. Recent algorithmic improvements to our approximate kNN classifier using hashed k-d trees allow...recent years discriminative training has been shown to outperform phonetic HMMs estimated using ML for speech recognition. Standard ML estimation
An Intelligent Web-Based System for Diagnosing Student Learning Problems Using Concept Maps
ERIC Educational Resources Information Center
Acharya, Anal; Sinha, Devadatta
2017-01-01
The aim of this article is to propose a method for development of concept map in web-based environment for identifying concepts a student is deficient in after learning using traditional methods. Direct Hashing and Pruning algorithm was used to construct concept map. Redundancies within the concept map were removed to generate a learning sequence.…
Felgueiras, Helena P; Wang, L M; Ren, K F; Querido, M M; Jin, Q; Barbosa, M A; Ji, J; Martins, M C L
2017-03-08
Infection and thrombus formation are still the biggest challenges for the success of blood contact medical devices. This work aims at the development of an antimicrobial and hemocompatible biomaterial coating through which selective binding of albumin (passivant protein) from the bloodstream is promoted and, thus, adsorption of other proteins responsible for bacterial adhesion and thrombus formation can be prevented. Polyurethane (PU) films were coated with hyaluronic acid, an antifouling agent that was previously modified with thiol groups (HA-SH), using polydopamine as the binding agent. Octadecyl acrylate (C18) was used to attract albumin since it resembles the circulating free fatty acids and albumin is a fatty acid transporter. Thiol-ene "click chemistry" was explored for C18 immobilization on HA-SH through a covalent bond between the thiol groups from the HA and the alkene groups from the C18 chains. Surfaces were prepared with different C18 concentrations (0, 5, 10, and 20%) and successful immobilization was demonstrated by scanning electron microscopy (SEM), water contact angle determinations, X-ray photoelectron spectroscopy (XPS) and Fourier transform infrared spectroscopy (FTIR). The ability of surfaces to bind albumin selectively was determined by quartz crystal microbalance with dissipation (QCM-D). Albumin adsorption increased in response to the hydrophobic nature of the surfaces, which augmented with C18 saturation. HA-SH coating reduced albumin adsorption to PU. C18 immobilized onto HA-SH at 5% promoted selective binding of albumin, decreased Staphylococcus aureus adhesion and prevented platelet adhesion and activation to PU in the presence of human plasma. C18/HA-SH coating was established as an innovative and promising strategy to improve the antimicrobial properties and hemocompatibility of any blood contact medical device.
QKD-Based Secured Burst Integrity Design for Optical Burst Switched Networks
NASA Astrophysics Data System (ADS)
Balamurugan, A. M.; Sivasubramanian, A.; Parvathavarthini, B.
2016-03-01
The field of optical transmission has undergone numerous advancements and is still being researched, mainly because optical data transmission can be done at enormous speeds. It is quite evident that people prefer optical communication when it comes to transmitting large amounts of data. The concept of switching in networks has matured enormously through research on architectures and implementation methods, progressing from optical circuit switching to optical burst switching. Optical burst switching is regarded as a viable solution for switching bursts over networks but has several security vulnerabilities. This work examines the security issues associated with optical burst switching with respect to the integrity of the burst. The proposed Quantum Key based Secure Hash Algorithm (QKBSHA-512), with an enhanced compression function design, provides a better avalanche effect than the conventional integrity algorithms.
Genomics-Based Security Protocols: From Plaintext to Cipherprotein
NASA Technical Reports Server (NTRS)
Shaw, Harry; Hussein, Sayed; Helgert, Hermann
2011-01-01
The evolving nature of the internet will require continual advances in authentication and confidentiality protocols. Nature provides some clues as to how this can be accomplished in a distributed manner through molecular biology. Cryptography and molecular biology share certain aspects and operations that allow for a set of unified principles to be applied to problems in either venue. A concept for developing security protocols that can be instantiated at the genomics level is presented. A DNA (Deoxyribonucleic acid) inspired hash code system is presented that utilizes concepts from molecular biology. It is a keyed-Hash Message Authentication Code (HMAC) capable of being used in secure mobile Ad hoc networks. It is targeted for applications without an available public key infrastructure. Mechanics of creating the HMAC are presented as well as a prototype HMAC protocol architecture. Security concepts related to the implementation differences between electronic domain security and genomics domain security are discussed.
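A conventional keyed-HMAC of the kind the genomics-inspired construction generalizes can be shown in a few lines using Python's standard library; this illustrates the generic HMAC primitive (RFC 2104 style), not the DNA-inspired hash code itself.

```python
import hmac
import hashlib

key = b"shared-secret"            # pre-shared key: no public key infrastructure needed
msg = b"telemetry frame 0042"     # illustrative message

# Sender computes the authentication tag over the message.
tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify(key, msg, tag):
    """Receiver recomputes the tag and compares in constant time."""
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

Only holders of the shared key can produce a valid tag, which is what makes HMAC suitable for ad hoc networks without a certificate authority.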
Hughes, Richard John; Thrasher, James Thomas; Nordholt, Jane Elizabeth
2016-11-29
Innovations for quantum key management harness quantum communications to form a cryptography system within a public key infrastructure framework. In example implementations, the quantum key management innovations combine quantum key distribution and a quantum identification protocol with a Merkle signature scheme (using Winternitz one-time digital signatures or other one-time digital signatures, and Merkle hash trees) to constitute a cryptography system. More generally, the quantum key management innovations combine quantum key distribution and a quantum identification protocol with a hash-based signature scheme. This provides a secure way to identify, authenticate, verify, and exchange secret cryptographic keys. Features of the quantum key management innovations further include secure enrollment of users with a registration authority, as well as credential checking and revocation with a certificate authority, where the registration authority and/or certificate authority can be part of the same system as a trusted authority for quantum key distribution.
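The Merkle hash tree component of such signature schemes can be sketched as follows: a root commits to many leaves, and a short sibling path authenticates any one leaf against that root. This is a generic textbook sketch (SHA-256, last-node duplication on odd levels are our assumptions), not the patented system's exact construction.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])              # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, index):
    """Sibling hashes needed to recompute the root from leaf `index`."""
    level = [h(x) for x in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        path.append((level[sib], index % 2))     # (sibling, our-node-is-right-child?)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(leaf, path, root):
    node = h(leaf)
    for sib, is_right in path:
        node = h(sib + node) if is_right else h(node + sib)
    return node == root
```

In a Merkle signature scheme each leaf commits to a one-time public key, so one root authenticates many one-time signatures.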
Architectural design of an Algol interpreter
NASA Technical Reports Server (NTRS)
Jackson, C. K.
1971-01-01
The design of a syntax-directed interpreter for a subset of Algol is described. It is a conceptual design with sufficient details and completeness but as much independence of implementation as possible. The design includes a detailed description of a scanner, an analyzer described in the Floyd-Evans productions, a hash-coded symbol table, and an executor. Interpretation of sample programs is also provided to show how the interpreter functions.
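A hash-coded symbol table of the kind the interpreter uses can be sketched in Python (the original design obviously predates Python; the polynomial string hash and linear probing here are our illustrative assumptions, and the sketch omits table growth).

```python
class SymbolTable:
    """Minimal open-addressing, hash-coded symbol table with linear probing."""

    def __init__(self, size=64):
        self.slots = [None] * size

    def _hash(self, name):
        code = 0
        for ch in name:                        # simple polynomial string hash
            code = (code * 31 + ord(ch)) % len(self.slots)
        return code

    def insert(self, name, attrs):
        i = self._hash(name)
        while self.slots[i] is not None and self.slots[i][0] != name:
            i = (i + 1) % len(self.slots)      # probe next slot on collision
        self.slots[i] = (name, attrs)

    def lookup(self, name):
        i = self._hash(name)
        while self.slots[i] is not None:
            if self.slots[i][0] == name:
                return self.slots[i][1]
            i = (i + 1) % len(self.slots)
        return None                            # undeclared identifier
```

The analyzer would call insert on each declaration and lookup on each identifier use, getting near-constant-time access regardless of program size.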
Two Improved Access Methods on Compact Binary (CB) Trees.
ERIC Educational Resources Information Center
Shishibori, Masami; Koyama, Masafumi; Okada, Makoto; Aoe, Jun-ichi
2000-01-01
Discusses information retrieval and the use of binary trees as a fast access method for search strategies such as hashing. Proposes new methods based on compact binary trees that provide faster access and more compact storage, explains the theoretical basis, and confirms the validity of the methods through empirical observations. (LRW)
A Tree Locality-Sensitive Hash for Secure Software Testing
2017-09-14
errors, or to look for vulnerabilities that could allow a nefarious actor to use our software against us. Ultimately, all testing is designed to find...and an equivalent number of feasible paths discovered by Klee. 1.5 Summary This document describes the Tree Locality-Sensitive Hash (TLSH), a locality-sensitive...performing two groups of tests that verify the accuracy and usefulness of TLSH. Chapter 5 summarizes the contents of the dissertation and lists avenues
NASA Astrophysics Data System (ADS)
Cary, J. R.; Shasharina, S.; Bruhwiler, D. L.
1998-04-01
The MAPA code is a fully interactive accelerator modeling and design tool consisting of a GUI and two object-oriented C++ libraries: a general library suitable for treatment of any dynamical system, and an accelerator library including many element types plus an accelerator class. The accelerator library inherits directly from the system library, which uses hash tables to store any relevant parameters or strings. The GUI can access these hash tables in a general way, allowing the user to invoke a window displaying all relevant parameters for a particular element type or for the accelerator class, with the option to change those parameters. The system library can advance an arbitrary number of dynamical variables through an arbitrary mapping. The accelerator class inherits this capability and overloads the relevant functions to advance the phase space variables of a charged particle through a string of elements. Among other things, the GUI makes phase space plots and finds fixed points of the map. We discuss the object hierarchy of the two libraries and use of the code.
ProGeRF: Proteome and Genome Repeat Finder Utilizing a Fast Parallel Hash Function
Moraes, Walas Jhony Lopes; Rodrigues, Thiago de Souza; Bartholomeu, Daniella Castanheira
2015-01-01
Repetitive element sequences are adjacent repeating patterns, also called motifs, and can be of different lengths; repetitions can involve exact or approximate copies. They have been widely used as molecular markers in population biology. Given the sizes of sequenced genomes, various bioinformatics tools have been developed for the extraction of repetitive elements from DNA sequences. However, currently available tools do not provide options for identifying repetitive elements in both the genome and the proteome, displaying a user-friendly web interface, and performing exhaustive searches. ProGeRF is a web site for extracting repetitive regions from genome and proteome sequences. It was designed to be an efficient, fast, accurate and, above all, user-friendly web tool, allowing many ways to view and analyse the results. ProGeRF (Proteome and Genome Repeat Finder) is freely available as a stand-alone program, from which users can download the source code, and as a web tool. It was developed using the hash table approach to extract perfect and imperfect repetitive regions in a (multi)FASTA file, while maintaining linear time complexity. PMID:25811026
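The hash table approach to perfect-repeat extraction can be sketched in a few lines; this is our own minimal illustration of the general technique (one pass hashing fixed-length windows), not the ProGeRF source, and it handles only perfect repeats.

```python
from collections import defaultdict

def perfect_repeats(seq, motif_len=3, min_count=2):
    """Find exact repeated motifs of length motif_len via a hash table.

    One linear pass hashes every window to its start positions; motifs
    occurring at least min_count times are reported. Returns
    {motif: [start positions]}.
    """
    table = defaultdict(list)
    for i in range(len(seq) - motif_len + 1):
        table[seq[i:i + motif_len]].append(i)
    return {m: pos for m, pos in table.items() if len(pos) >= min_count}
```

Imperfect (approximate) repeats need an extra tolerance step on top of this, but the hash table still supplies the linear-time candidate enumeration.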
A new pre-classification method based on associative matching method
NASA Astrophysics Data System (ADS)
Katsuyama, Yutaka; Minagawa, Akihiro; Hotta, Yoshinobu; Omachi, Shinichiro; Kato, Nei
2010-01-01
Reducing the time complexity of character matching is critical to the development of efficient Japanese Optical Character Recognition (OCR) systems. To shorten processing time, recognition is usually split into separate pre-classification and recognition stages. For high overall recognition performance, the pre-classification stage must both have very high classification accuracy and return only a small number of putative character categories for further processing. Furthermore, for any practical system, the speed of the pre-classification stage is also critical. The associative matching (AM) method has often been used for fast pre-classification, because its use of a hash table and reliance solely on logical bit operations to select categories makes it highly efficient. However, a certain level of redundancy exists in the hash table because it is constructed using only the minimum and maximum values of the data on each axis and therefore does not take account of the distribution of the data. We propose a modified associative matching method that satisfies the performance criteria described above but in a fraction of the time, by modifying the hash table to reflect the underlying distribution of training characters. Furthermore, we show that our approach outperforms pre-classification by clustering, ANN and conventional AM in terms of classification accuracy, discriminative power and speed. Compared to conventional associative matching, the proposed approach results in a 47% reduction in total processing time across an evaluation test set comprising 116,528 Japanese character images.
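The difference between min/max-only bucketing and distribution-aware bucketing can be sketched on one axis. This is an assumed illustration of the general idea (equal-count quantile cut points), not the paper's exact hash table modification.

```python
def uniform_bucket(value, lo, hi, n_buckets):
    """Conventional AM-style bucketing: uses only the axis min and max, so
    skewed data piles into a few buckets and leaves others nearly empty."""
    if value <= lo:
        return 0
    if value >= hi:
        return n_buckets - 1
    return int((value - lo) / (hi - lo) * n_buckets)

def quantile_boundaries(samples, n_buckets):
    """Distribution-aware alternative: equal-count (quantile) cut points,
    so each bucket receives roughly the same number of training samples."""
    s = sorted(samples)
    return [s[len(s) * i // n_buckets] for i in range(1, n_buckets)]

def quantile_bucket(value, boundaries):
    i = 0
    while i < len(boundaries) and value >= boundaries[i]:
        i += 1
    return i
```

With skewed training data, the quantile boundaries spread categories more evenly over the table, which is what shrinks the candidate set returned per lookup.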
Kilb, Debi; Hardebeck, J.L.
2006-01-01
We estimate the strike and dip of three California fault segments (Calaveras, Sargent, and a portion of the San Andreas near San Juan Bautista) based on principal component analysis of accurately located microearthquakes. We compare these fault orientations with two different first-motion focal mechanism catalogs: the Northern California Earthquake Data Center (NCEDC) catalog, calculated using the FPFIT algorithm (Reasenberg and Oppenheimer, 1985), and a catalog created using the HASH algorithm that tests mechanism stability relative to seismic velocity model variations and earthquake location (Hardebeck and Shearer, 2002). We assume any disagreement (misfit >30° in strike, dip, or rake) indicates inaccurate focal mechanisms in the catalogs. With this assumption, we can quantify the parameters that identify the most optimally constrained focal mechanisms. For the NCEDC/FPFIT catalogs, we find that the best quantitative discriminator of quality focal mechanisms is the station distribution ratio (STDR) parameter, an indicator of how the stations are distributed about the focal sphere. Requiring STDR > 0.65 increases the acceptable mechanisms from 34%-37% to 63%-68%. This suggests stations should be uniformly distributed surrounding, rather than aligning with, known fault traces. For the HASH catalogs, the fault plane uncertainty (FPU) parameter is the best discriminator, increasing the percent of acceptable mechanisms from 63%-78% to 81%-83% when FPU ≤ 35°. The overall higher percentage of acceptable mechanisms and the usefulness of the formal uncertainty in identifying quality mechanisms validate the HASH approach of testing for mechanism stability.
ERIC Educational Resources Information Center
Herrmann, Ned
1982-01-01
Outlines the differences between left-brain and right-brain functioning and between left-brain and right-brain dominant individuals, and concludes that creativity uses both halves of the brain. Discusses how both students and curriculum can become more "whole-brained." (Author/JM)
Quantum realization of the nearest-neighbor interpolation method for FRQI and NEQR
NASA Astrophysics Data System (ADS)
Sang, Jianzhi; Wang, Shen; Niu, Xiamu
2016-01-01
This paper is concerned with the feasibility of the classical nearest-neighbor interpolation based on flexible representation of quantum images (FRQI) and novel enhanced quantum representation (NEQR). Firstly, the feasibility of the classical image nearest-neighbor interpolation for quantum images of FRQI and NEQR is proven. Then, by defining the halving operation and by making use of quantum rotation gates, the concrete quantum circuit of the nearest-neighbor interpolation for FRQI is designed for the first time. Furthermore, quantum circuit of the nearest-neighbor interpolation for NEQR is given. The merit of the proposed NEQR circuit lies in their low complexity, which is achieved by utilizing the halving operation and the quantum oracle operator. Finally, in order to further improve the performance of the former circuits, new interpolation circuits for FRQI and NEQR are presented by using Control-NOT gates instead of a halving operation. Simulation results show the effectiveness of the proposed circuits.
Stable carbon and oxygen isotope record of central Lake Erie sediments
Tevesz, M.J.S.; Spongberg, A.L.; Fuller, J.A.
1998-01-01
Stable carbon and oxygen isotope data from mollusc aragonite extracted from sediment cores provide new information on the origin and history of sedimentation in the southwestern area of the central basin of Lake Erie. Sediments infilling the Sandusky subbasin consist of three lithologic units overlying glacial deposits. The lowest of these is a soft gray mud overlain by a shell hash layer containing Sphaerium striatinum fragments. A fluid mud unit caps the shell hash layer and extends upwards to the sediment-water interface. New stable isotope data suggest that the soft gray mud unit is of postglacial, rather than proglacial, origin. These data also suggest that the shell hash layer was derived from erosional winnowing of the underlying soft gray mud layer. This winnowing event may have occurred as a result of the Nipissing flood. The Pelee-Lorain moraine, which forms the eastern boundary of the Sandusky subbasin, is an elevated area of till capped by a sand deposit that originated as a beach. The presence of both the shell hash layer and relict beach deposit strengthens the interpretation that the Nipissing flood was a critical event in the development of the southwestern area of the central basin of Lake Erie. This event, which returned drainage from the upper lakes to the Lake Erie basin, was a dominant influence on regional stratigraphy, bathymetry, and depositional setting.
Authenticity techniques for PACS images and records
NASA Astrophysics Data System (ADS)
Wong, Stephen T. C.; Abundo, Marco; Huang, H. K.
1995-05-01
Along with the digital radiology environment supported by picture archiving and communication systems (PACS) comes a new problem: How to establish trust in multimedia medical data that exist only in the easily altered memory of a computer. Trust is characterized in terms of integrity and privacy of digital data. Two major self-enforcing techniques can be used to assure the authenticity of electronic images and text -- key-based cryptography and digital time stamping. Key-based cryptography associates the content of an image with the originator using one or two distinct keys and prevents alteration of the document by anyone other than the originator. A digital time stamping algorithm generates a characteristic 'digital fingerprint' for the original document using a mathematical hash function, and checks that it has not been modified. This paper discusses these cryptographic algorithms and their appropriateness for a PACS environment. It also presents experimental results of cryptographic algorithms on several imaging modalities.
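The fingerprinting step can be shown in miniature: any alteration to the document, however small, changes its hash. SHA-256 stands in here as a modern hash (the paper's era would have used an earlier function), and the sample record text is invented for illustration.

```python
import hashlib

def fingerprint(document: bytes) -> str:
    """Characteristic 'digital fingerprint': a cryptographic hash of the content."""
    return hashlib.sha256(document).hexdigest()

# Illustrative record; a real system would hash the image pixels and report text.
report = b"CT series 42: no acute findings"
fp = fingerprint(report)
```

A time-stamping service would then widely publish or chain this fingerprint, so that later verification proves the document existed unmodified at that time.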
Safety Modification of Cam-and-Groove Hose Coupling
NASA Technical Reports Server (NTRS)
Schwindt, Paul; Littlefield, Alan
2008-01-01
A modification has been made in the mating halves of a cam-and-groove hose coupling to prevent rapid separation of the halves in the event that the cam levers are released while the fluid in the hose is pressurized. The need for this modification arises because commercial off-the-shelf cam-and-groove hose-coupling halves do not incorporate safety features to prevent separation in the pressurized state. Especially when the pressurized fluid is compressible (e.g., steam or compressed air), the separated halves can be propelled with considerable energy, causing personal injury and/or property damage. Therefore, one purpose served by the modification is to provide for venting to release compressive energy in a contained and safe manner while preventing personal injury and/or property damage. Another purpose served by the modification, during the process of connecting the coupling halves, is to ensure that the coupling halves are properly aligned before the cam levers can be locked into position.
2009-09-01
suffer the power and complexity requirements of a public key system. 28 In [18], a simulation of the SHA-1 algorithm is performed on a Xilinx FPGA, 256 bits. Thus, the construction of a hash table would need 2^512 independent comparisons. It is known that hash collisions of the SHA-1 algorithm...SHA-1 algorithm for small-core FPGA design. Small-core FPGA design is the process by which a circuit is adapted to use the minimal amount of logic
7 CFR 51.1433 - U.S. Commercial Halves.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false U.S. Commercial Halves. 51.1433 Section 51.1433 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... STANDARDS) United States Standards for Grades of Shelled Pecans Grades § 51.1433 U.S. Commercial Halves. The...
Marginal adaptation of newer root canal sealers to dentin: A SEM study.
Polineni, Swapnika; Bolla, Nagesh; Mandava, Pragna; Vemuri, Sayesh; Mallela, Madhusudana; Gandham, Vijaya Madhuri
2016-01-01
This in vitro study evaluated and compared the marginal adaptation of three newer root canal sealers to root dentin. Thirty freshly extracted human single-rooted teeth with completely formed apices were taken. Teeth were decoronated, and root canals were instrumented. The specimens were randomly divided into three groups (n = 10) based upon the sealer used. Group 1: teeth were obturated with epoxy resin sealer (MM-Seal). Group 2: teeth were obturated with mineral trioxide aggregate (MTA) based sealer (MTA Fillapex). Group 3: teeth were obturated with bioceramic sealer (EndoSequence BC sealer). Later, samples were vertically sectioned using a hard tissue microtome, and the marginal adaptation of sealers to root dentin in the coronal and apical halves was evaluated using scanning electron microscopy (SEM); marginal gap values were recorded. The data were statistically analyzed by two-way ANOVA and Tukey's multiple post hoc test. The highest marginal gap was seen in Group 2 (apical: 16680.00 nm, coronal: 10796 nm) and the lowest marginal gap was observed in Group 1 (apical: 599.42 nm, coronal: 522.72 nm). Coronal halves showed superior adaptation compared to apical halves in all the groups under SEM. Within the limitations of this study, epoxy resin-based MM-Seal showed better marginal adaptation than the other materials tested.
Comparison of Spatiotemporal Mapping Techniques for Enormous ETL and Exploitation Patterns
NASA Astrophysics Data System (ADS)
Deiotte, R.; La Valley, R.
2017-10-01
The need to extract, transform, and exploit enormous volumes of spatiotemporal data has exploded with the rise of social media, advanced military sensors, wearables, automotive tracking, etc. However, current methods of spatiotemporal encoding and exploitation simultaneously limit the use of that information and increase computing complexity. Current spatiotemporal encoding methods from Niemeyer and Usher rely on a Z-order space-filling curve, a relative of Peano's 1890 space-filling curve, for spatial hashing, interleaving temporal hashes to generate a spatiotemporal encoding. However, other space-filling curves exist that provide different manifold coverings; these could promote better hashing techniques for spatial data and have the potential to map spatiotemporal data without interleaving. The concatenation of Niemeyer's and Usher's techniques provides a highly efficient space-time index, but other methods have advantages and disadvantages regarding computational cost, efficiency, and utility. This paper explores several such methods using data sets ranging in size from 1K to 10M observations and provides a comparison of the methods.
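The Z-order spatial hashing described above can be illustrated with a short sketch (not the authors' code; the grid resolution and lat/lon scaling are illustrative assumptions):

```python
def interleave2(x: int, y: int, bits: int = 16) -> int:
    """Z-order (Morton) code: interleave the bits of x and y so that
    points close in 2D tend to be close along the 1D curve."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)      # x bits at even positions
        code |= ((y >> i) & 1) << (2 * i + 1)  # y bits at odd positions
    return code

def spatial_hash(lat: float, lon: float, bits: int = 16) -> int:
    """Quantize lat/lon onto a 2^bits x 2^bits grid, then interleave."""
    x = int((lon + 180.0) / 360.0 * ((1 << bits) - 1))
    y = int((lat + 90.0) / 180.0 * ((1 << bits) - 1))
    return interleave2(x, y, bits)
```

A temporal hash could then be interleaved with (or concatenated to) this code to form the kind of space-time index the paper compares.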
21 CFR 155.200 - Certain other canned vegetables.
Code of Federal Regulations, 2011 CFR
2011-04-01
... artichoke plant Whole; half or halves or halved; whole hearts; halved hearts; quartered hearts. Asparagus Edible portions of sprouts of the asparagus plant, as follows: 3 and 3/4 in or more of upper end Stalks... water. Asparagus may be canned with added water, asparagus juice, or a mixture of both. For the purposes...
Wang, Wei; Wang, Chunqiu; Zhao, Min
2014-03-01
To ease the burden on hospital capacity, an emerging swallowable-capsule technology has evolved to serve as a remote gastrointestinal (GI) disease examination technique with the aid of the wireless body sensor network (WBSN). Secure multimedia transmission in such a swallowable-capsule-based WBSN faces critical challenges, including energy efficiency and content quality guarantees. In this paper, we propose a joint resource allocation and stream authentication scheme to maintain the best possible video quality while ensuring security and energy efficiency in GI-WBSNs. The contribution of this research is twofold. First, we establish a unique signature-hash (S-H) diversity approach in the authentication domain to optimize video authentication robustness and the authentication bit rate overhead over a wireless channel. Based on the full exploration of S-H authentication diversity, we propose a new two-tier signature-hash (TTSH) stream authentication scheme to improve the video quality by reducing authentication dependence overhead while protecting its integrity. Second, we propose to combine this authentication scheme with a unique S-H oriented unequal resource allocation (URA) scheme to improve the energy-distortion-authentication performance of wireless video delivery in GI-WBSN. Our analysis and simulation results demonstrate that the proposed TTSH with URA scheme achieves considerable gain in both authenticated video quality and energy efficiency.
Side-information-dependent correlation channel estimation in hash-based distributed video coding.
Deligiannis, Nikos; Barbarien, Joeri; Jacobs, Marc; Munteanu, Adrian; Skodras, Athanassios; Schelkens, Peter
2012-04-01
In the context of low-cost video encoding, distributed video coding (DVC) has recently emerged as a potential candidate for uplink-oriented applications. This paper builds on a concept of correlation channel (CC) modeling, which expresses the correlation noise as being statistically dependent on the side information (SI). Compared with classical side-information-independent (SII) noise modeling adopted in current DVC solutions, it is theoretically proven that side-information-dependent (SID) modeling improves the Wyner-Ziv coding performance. Anchored in this finding, this paper proposes a novel algorithm for online estimation of the SID CC parameters based on already decoded information. The proposed algorithm enables bit-plane-by-bit-plane successive refinement of the channel estimation leading to progressively improved accuracy. Additionally, the proposed algorithm is included in a novel DVC architecture that employs a competitive hash-based motion estimation technique to generate high-quality SI at the decoder. Experimental results corroborate our theoretical gains and validate the accuracy of the channel estimation algorithm. The performance assessment of the proposed architecture shows remarkable and consistent coding gains over a germane group of state-of-the-art distributed and standard video codecs, even under strenuous conditions, i.e., large groups of pictures and highly irregular motion content.
NASA Astrophysics Data System (ADS)
Anenberg, S. C.; Talgo, K.; Arunachalam, S.; Dolwick, P.; Jang, C.; West, J. J.
2011-07-01
As a component of fine particulate matter (PM2.5), black carbon (BC) is associated with premature human mortality. BC also affects climate by absorbing solar radiation and reducing planetary albedo. Several studies have examined the climate impacts of BC emissions, but the associated health impacts have been studied less extensively. Here, we examine the surface PM2.5 and premature mortality impacts of halving anthropogenic BC emissions globally and individually from eight world regions and three major economic sectors. We use a global chemical transport model, MOZART-4, to simulate PM2.5 concentrations and a health impact function to calculate premature cardiopulmonary and lung cancer deaths. We estimate that halving global anthropogenic BC emissions reduces outdoor population-weighted average PM2.5 by 542 ng m^-3 (1.8 %) and avoids 157 000 (95 % confidence interval, 120 000-194 000) annual premature deaths globally, with the vast majority occurring within the source region. Most of these avoided deaths can be achieved by halving emissions in East Asia (China; 54 %), followed by South Asia (India; 31 %); however, South Asian emissions have 50 % greater mortality impacts per unit BC emitted than East Asian emissions. Globally, halving residential, industrial, and transportation emissions contributes 47 %, 35 %, and 15 % to the avoided deaths from halving all anthropogenic BC emissions. These contributions are 1.2, 1.2, and 0.6 times each sector's portion of global BC emissions, owing to the degree of co-location with population globally. We find that reducing BC emissions increases regional SO4 concentrations by up to 28 % of the magnitude of the regional BC concentration reductions, due to reduced absorption of radiation that drives photochemistry. Impacts of residential BC emissions are likely underestimated since indoor PM2.5 exposure is excluded.
We estimate ∼8 times more avoided deaths when BC and organic carbon (OC) emissions are halved together, suggesting that these results greatly underestimate the full air pollution-related mortality benefits of BC mitigation strategies which generally decrease both BC and OC. The choice of concentration-response factor and health effect thresholds affects estimated global avoided deaths by as much as 56 % but does not strongly affect the regional distribution. Confidence in our results would be strengthened by reducing uncertainties in emissions, model parameterization of aerosol processes, grid resolution, and PM2.5 concentration-mortality relationships globally.
Li, Chun-Ta; Weng, Chi-Yao; Lee, Cheng-Chi
2015-08-01
Radio Frequency Identification (RFID) based solutions are widely used for providing many healthcare applications, including patient monitoring, object traceability, drug administration systems and telecare medicine information systems (TMIS). In order to reduce malpractice and ensure patient privacy, in 2015, Srivastava et al. proposed a hash based RFID tag authentication protocol in TMIS. Their protocol uses lightweight hash operations and a synchronized secret value shared between the back-end server and the tag, which is more secure and efficient than other related RFID authentication protocols. Unfortunately, in this paper, we demonstrate that Srivastava et al.'s tag authentication protocol has a serious security problem: an adversary may use a stolen/lost reader to connect to the medical back-end server that stores information associated with tagged objects, and this privacy breach allows the adversary to maliciously reveal medical data obtained through the stolen/lost reader. Therefore, we propose a secure and efficient RFID tag authentication protocol to overcome these security flaws and improve system efficiency. Compared with Srivastava et al.'s protocol, the proposed protocol not only inherits the advantages of Srivastava et al.'s authentication protocol for TMIS but also provides better security with high system efficiency.
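The general pattern of a hash-based tag protocol with a synchronized secret can be sketched as follows (a toy illustration, not Srivastava et al.'s protocol or the proposed fix; the message layout is invented for the example):

```python
import hashlib
import hmac

def h(*parts: bytes) -> bytes:
    """Lightweight hash primitive shared by tag and server."""
    m = hashlib.sha256()
    for p in parts:
        m.update(p)
    return m.digest()

class Tag:
    def __init__(self, secret: bytes):
        self.secret = secret                 # synchronized with the server
    def respond(self, challenge: bytes) -> bytes:
        return h(self.secret, challenge)     # never reveals the secret itself
    def refresh(self) -> None:
        self.secret = h(self.secret)         # one-way secret update

class Server:
    def __init__(self, secret: bytes):
        self.secret = secret
    def authenticate(self, response: bytes, challenge: bytes) -> bool:
        ok = hmac.compare_digest(response, h(self.secret, challenge))
        if ok:
            self.secret = h(self.secret)     # stay synchronized with the tag
        return ok
```

The one-way refresh is what gives forward security: a compromised current secret does not expose past sessions.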
Community Detection in Sparse Random Networks
2013-08-13
if (i, j) ∈ E, meaning there is an edge between nodes i, j ∈ V. Note that W is symmetric, and we assume that W_ii = 0 for all i. Under the null... W_ii = 0.) Our arguments are parallel to those we used under P0, the only difficulty being that W_i is not binomial anymore. Indeed, W_Si ∼ Bin(n − 1, p1... Berlin: Springer. Alon, N. and S. Gutner (2010). Balanced families of perfect hash functions and their applications. ACM Trans. Algorithms 6(3), Art
An efficient (t,n) threshold quantum secret sharing without entanglement
NASA Astrophysics Data System (ADS)
Qin, Huawang; Dai, Yuewei
2016-04-01
An efficient (t,n) threshold quantum secret sharing (QSS) scheme is proposed. In our scheme, a hash function is used to check for eavesdropping, and no particles need to be published, so the utilization efficiency of the particles is a full 100%. No entanglement is used in our scheme. The dealer uses single particles to encode the secret information, and the participants obtain the secret by measuring the single particles. Compared to the existing schemes, our scheme is simpler and more efficient.
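The classical verification step of such a scheme can be sketched as follows (the quantum encoding and measurement are abstracted away entirely; the hash choice and bit strings are illustrative assumptions, not the paper's construction):

```python
import hashlib

def digest(bits: str) -> str:
    """Hash of a bit string; published in place of any check particles."""
    return hashlib.sha256(bits.encode()).hexdigest()

# Dealer side: encode the secret into particles and publish only its hash.
secret = "101101"
published_check = digest(secret)

def verify(candidate: str, published_hash: str) -> bool:
    """Participant side: a reconstructed secret whose hash does not match
    the published value signals eavesdropping (or channel error)."""
    return digest(candidate) == published_hash
```

Because only the hash is published, every transmitted particle carries secret information, which is the sense in which the particle utilization is 100%.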
Deep Hashing for Scalable Image Search.
Lu, Jiwen; Liong, Venice Erin; Zhou, Jie
2017-05-01
In this paper, we propose a new deep hashing (DH) approach to learn compact binary codes for scalable image search. Unlike most existing binary codes learning methods, which usually seek a single linear projection to map each sample into a binary feature vector, we develop a deep neural network to seek multiple hierarchical non-linear transformations to learn these binary codes, so that the non-linear relationship of samples can be well exploited. Our model is learned under three constraints at the top layer of the developed deep network: 1) the loss between the compact real-valued code and the learned binary vector is minimized, 2) the binary codes distribute evenly on each bit, and 3) different bits are as independent as possible. To further improve the discriminative power of the learned binary codes, we extend DH into supervised DH (SDH) and multi-label SDH by including a discriminative term into the objective function of DH, which simultaneously maximizes the inter-class variations and minimizes the intra-class variations of the learned binary codes with the single-label and multi-label settings, respectively. Extensive experimental results on eight widely used image search data sets show that our proposed methods achieve results that are very competitive with the state of the art.
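The three top-layer constraints can be written down concretely. The following NumPy sketch (our simplified reading, not the paper's exact objective; the averaging and scaling choices are assumptions) computes toy versions of each penalty for a batch of real-valued codes:

```python
import numpy as np

def dh_penalties(H: np.ndarray):
    """H: n samples x k bits of real-valued codes in [-1, 1].
    Returns (quantization, balance, independence) penalties:
      1) distance between real codes and their binarization,
      2) each bit should split the data ~50/50 (zero mean),
      3) different bits should be uncorrelated."""
    B = np.sign(H)                                   # learned binary vectors
    quantization = np.mean((H - B) ** 2)             # constraint 1
    balance = np.mean(np.mean(H, axis=0) ** 2)       # constraint 2
    C = (H.T @ H) / len(H)                           # bit correlation matrix
    independence = np.sum((C - np.diag(np.diag(C))) ** 2)  # constraint 3
    return quantization, balance, independence
```

Perfectly binary, balanced, decorrelated codes drive all three penalties to zero, which is the regime the network's top layer is pushed toward.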
7 CFR 51.1431 - U.S. No. 1 Halves and Pieces.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false U.S. No. 1 Halves and Pieces. 51.1431 Section 51.1431... MARKETING ACT OF 1946 FRESH FRUITS, VEGETABLES AND OTHER PRODUCTS 1,2 (INSPECTION, CERTIFICATION, AND STANDARDS) United States Standards for Grades of Shelled Pecans Grades § 51.1431 U.S. No. 1 Halves and...
7 CFR 51.1431 - U.S. No. 1 Halves and Pieces.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false U.S. No. 1 Halves and Pieces. 51.1431 Section 51.1431... MARKETING ACT OF 1946 FRESH FRUITS, VEGETABLES AND OTHER PRODUCTS 1,2 (INSPECTION, CERTIFICATION, AND STANDARDS) United States Standards for Grades of Shelled Pecans Grades § 51.1431 U.S. No. 1 Halves and...
Enabling search over encrypted multimedia databases
NASA Astrophysics Data System (ADS)
Lu, Wenjun; Swaminathan, Ashwin; Varna, Avinash L.; Wu, Min
2009-02-01
Performing information retrieval tasks while preserving data confidentiality is a desirable capability when a database is stored on a server maintained by a third-party service provider. This paper addresses the problem of enabling content-based retrieval over encrypted multimedia databases. Search indexes, along with multimedia documents, are first encrypted by the content owner and then stored onto the server. Through jointly applying cryptographic techniques, such as order preserving encryption and randomized hash functions, with image processing and information retrieval techniques, secure indexing schemes are designed to provide both privacy protection and rank-ordered search capability. Retrieval results on an encrypted color image database and security analysis of the secure indexing schemes under different attack models show that data confidentiality can be preserved while retaining very good retrieval performance. This work has promising applications in secure multimedia management.
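One of the building blocks mentioned above, a randomized (keyed) hash over index features, can be sketched as follows (a minimal illustration, not the paper's indexing scheme; the feature identifiers are invented for the example):

```python
import hashlib
import hmac

def trapdoor(key: bytes, feature: str) -> str:
    """Keyed hash of a feature id: without the key, index entries
    reveal nothing about the underlying visual feature."""
    return hmac.new(key, feature.encode(), hashlib.sha256).hexdigest()

def build_index(key: bytes, docs: dict) -> dict:
    """docs: {doc_id: iterable of feature ids} -> encrypted inverted index
    mapping keyed feature hashes to the documents containing them."""
    index = {}
    for doc_id, feats in docs.items():
        for f in feats:
            index.setdefault(trapdoor(key, f), set()).add(doc_id)
    return index

def search(key: bytes, index: dict, feature: str) -> set:
    """Only a key holder can form the trapdoor needed to query."""
    return index.get(trapdoor(key, feature), set())
```

The server stores only hashed entries, so it can answer queries without learning which features the documents actually contain.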
Halving Student Loan Interest Rates Is Unaffordable and Ineffective. WebMemo No. 1308
ERIC Educational Resources Information Center
Riedl, Brian M.
2007-01-01
The House of Representatives will likely vote this week on a proposal to halve the 6.8 percent interest rate on subsidized student loans as part of the new congressional majority's 100-Hour agenda. This document presents six problems with halving student loan interest rates and argues that, rather than providing billions in new federal subsidies,…
Holistic processing of human body postures: evidence from the composite effect.
Willems, Sam; Vrancken, Leia; Germeys, Filip; Verfaillie, Karl
2014-01-01
The perception of socially relevant stimuli (e.g., faces and bodies) has received considerable attention in the vision science community. It is now widely accepted that human faces are processed holistically and not only analytically. One observation that has been taken as evidence for holistic face processing is the face composite effect: two identical top halves of a face tend to be perceived as being different when combined with different bottom halves. This supports the hypothesis that face processing proceeds holistically. Indeed, the interference effect disappears when the two face parts are misaligned (blocking holistic perception). In the present study, we investigated whether there is also a composite effect for the perception of body postures: are two identical body halves perceived as being in different poses when the irrelevant body halves differ from each other? Both a horizontal (i.e., top-bottom body halves; Experiment 1) and a vertical composite effect (i.e., left-right body halves; Experiment 2) were examined by means of a delayed matching-to-sample task. Results of both experiments indicate the existence of a body posture composite effect. This provides evidence for the hypothesis that body postures, as faces, are processed holistically.
Usha, Carounanidy; Ramarao, Sathyanarayanan; John, Bindu Meera; Rajesh, Praveen; Swatha, S
2017-01-01
Bonding of composite resin to dentin mandates a wet substrate, whereas enamel should be dry. This may not be easily achievable in intracoronal preparations, where enamel and dentin are closely placed to each other. Therefore, Dentin Bonding Agents (DBA) are recommended for enamel and dentinal bonding, where the enamel is also left moist. A research question was raised as to whether "enamel-only" preparations would also benefit from wet enamel bonding and contemporary DBAs. The aim of this study was to compare the shear bond strengths of composite resin bonded to dry and wet enamel using fifth-generation DBAs (etch and rinse systems) containing various solvents such as ethanol/water, acetone and ethanol. The crowns of 120 maxillary premolars were split into buccal and lingual halves. They were randomly allocated into four groups of DBA: Group 1 - water/ethanol based; Group 2 - acetone based; Group 3 - ethanol based; Group 4 - universal bonding agent (control group). The buccal halves and lingual halves were bonded using the wet bonding and dry bonding techniques, respectively. After application of the DBAs and composite resin build-up, shear bond strength testing was done. Group 1 (ethanol/water-based 3M ESPE Adper Single Bond) showed the highest bond strength (23.15 MPa) on dry enamel. Group 2 (acetone-based Dentsply Prime and Bond NT) showed equal bond strength under wet and dry enamel conditions (18.87 MPa and 18.02 MPa, respectively). Dry enamel bonding and an ethanol/water-based etch and rinse DBA can be recommended for "enamel-only" tooth preparations.
2009-12-01
other services for early UNIX systems at Bell Labs. In many UNIX-based systems, the field added to the '/etc/passwd' file to carry GCOS ID information was...charset, and external. struct options_main { /* Option flags */ opt_flags flags; /* Password files */ struct list_main *passwd; /* Password file...object PASSWD. It is part of several other data structures. struct PASSWD { int id; char *login; char *passwd_hash; int UID
Using global unique identifiers to link autism collections.
Johnson, Stephen B; Whitney, Glen; McAuliffe, Matthew; Wang, Hailong; McCreedy, Evan; Rozenblit, Leon; Evans, Clark C
2010-01-01
To propose a centralized method for generating global unique identifiers to link collections of research data and specimens. The work is a collaboration between the Simons Foundation Autism Research Initiative and the National Database for Autism Research. The system is implemented as a web service: an investigator inputs identifying information about a participant into a client application and sends encrypted information to a server application, which returns a generated global unique identifier. The authors evaluated the system using a volume test of one million simulated individuals and a field test on 2000 families (over 8000 individual participants) in an autism study. Inverse probability of hash codes; rate of false identity of two individuals; rate of false split of single individual; percentage of subjects for which identifying information could be collected; percentage of hash codes generated successfully. Large-volume simulation generated no false splits or false identity. Field testing in the Simons Foundation Autism Research Initiative Simplex Collection produced identifiers for 96% of children in the study and 77% of parents. On average, four out of five hash codes per subject were generated perfectly (only one perfect hash is required for subsequent matching). The system must achieve balance among the competing goals of distinguishing individuals, collecting accurate information for matching, and protecting confidentiality. Considerable effort is required to obtain approval from institutional review boards, obtain consent from participants, and to achieve compliance from sites during a multicenter study. Generic unique identifiers have the potential to link collections of research data, augment the amount and types of data available for individuals, support detection of overlap between collections, and facilitate replication of research findings.
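The core mechanism, hashing normalized identifying information into a one-way code, can be sketched as follows (an illustration of the general approach only, not the actual SFARI/NDAR service; the field normalization and salt are assumptions):

```python
import hashlib
import unicodedata

def normalize(field: str) -> str:
    """Canonicalize an identifying field so trivial variations
    (case, accents, extra spacing) hash to the same value."""
    s = unicodedata.normalize("NFKD", field).encode("ascii", "ignore").decode()
    return " ".join(s.lower().split())

def hash_code(*fields: str, salt: bytes = b"system-wide-salt") -> str:
    """One-way hash over normalized identifying fields. The server can
    match repeat submissions of the same participant without ever
    storing the raw identifiers."""
    payload = "|".join(normalize(f) for f in fields).encode()
    return hashlib.sha256(salt + payload).hexdigest()
```

Generating several such codes from different field combinations, as the abstract describes ("four out of five hash codes per subject"), lets matching succeed even when one field is entered inconsistently.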
Twitter K-H networks in action: Advancing biomedical literature for drug search.
Hamed, Ahmed Abdeen; Wu, Xindong; Erickson, Robert; Fandy, Tamer
2015-08-01
The importance of searching biomedical literature for drug interactions and side-effects is apparent. Current digital libraries (e.g., PubMed) suffer from infrequent tagging and metadata annotation updates. Such limitations prevent literature from being linked to new scientific evidence, posing significant challenges for scientists searching biomedical repositories. In this paper, we present a network mining approach that provides a bridge for linking and searching drug-related literature. Our contributions here are twofold: (1) an efficient algorithm called HashPairMiner to address the run-time complexity issues demonstrated in its predecessor algorithm, HashnetMiner, and (2) a database of discoveries hosted on the web to facilitate literature search using the results produced by HashPairMiner. Though the K-H network model and the HashPairMiner algorithm are fairly young, their outcome is evidence of the considerable promise they offer to the biomedical science community in general and the drug research community in particular.
Wang, Ya-ping; Ji, Wei-xiao; Zhang, Chang-wen; Li, Ping; Li, Feng; Ren, Miao-juan; Chen, Xin-Lian; Yuan, Min; Wang, Pei-ji
2016-01-01
Discovery of two-dimensional (2D) topological insulator such as group-V films initiates challenges in exploring exotic quantum states in low dimensions. Here, we perform first-principles calculations to study the geometric and electronic properties in 2D arsenene monolayer with hydrogenation (HAsH). We predict a new σ-type Dirac cone related to the px,y orbitals of As atoms in HAsH, dependent on in-plane tensile strain. Noticeably, the spin-orbit coupling (SOC) opens a quantum spin Hall (QSH) gap of 193 meV at the Dirac cone. A single pair of topologically protected helical edge states is established for the edges, and its QSH phase is confirmed with topological invariant Z2 = 1. We also propose a 2D quantum well (QW) encapsulating HAsH with the h-BN sheet on each side, which harbors a nontrivial QSH state with the Dirac cone lying within the band gap of cladding BN substrate. These findings provide a promising innovative platform for QSH device design and fabrication operating at room temperature. PMID:26839209
InChIKey collision resistance: an experimental testing.
Pletnev, Igor; Erin, Andrey; McNaught, Alan; Blinov, Kirill; Tchekhovskoi, Dmitrii; Heller, Steve
2012-12-20
InChIKey is a 27-character compacted (hashed) version of InChI which is intended for Internet and database searching/indexing and is based on an SHA-256 hash of the InChI character string. The first block of InChIKey encodes molecular skeleton while the second block represents various kinds of isomerism (stereo, tautomeric, etc.). InChIKey is designed to be a nearly unique substitute for the parent InChI. However, a single InChIKey may occasionally map to two or more InChI strings (collision). The appearance of collision itself does not compromise the signature as collision-free hashing is impossible; the only viable approach is to set and keep a reasonable level of collision resistance which is sufficient for typical applications. We tested, in computational experiments, how well the real-life InChIKey collision resistance corresponds to the theoretical estimates expected by design. For this purpose, we analyzed the statistical characteristics of InChIKey for datasets of variable size in comparison to the theoretical statistical frequencies. For the relatively short second block, an exhaustive direct testing was performed. We computed and compared to theory the numbers of collisions for the stereoisomers of Spongistatin I (using the whole set of 67,108,864 isomers and its subsets). For the longer first block, we generated, using custom-made software, InChIKeys for more than 3 × 10^10 chemical structures. The statistical behavior of this block was tested by comparison of experimental and theoretical frequencies for the various four-letter sequences which may appear in the first block body. From the results of our computational experiments we conclude that the observed characteristics of InChIKey collision resistance are in good agreement with theoretical expectations.
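The theoretical expectation for such experiments comes from the standard birthday bound. A minimal sketch (our back-of-the-envelope aid, not the authors' analysis; treating the 14-letter first block as roughly log2(26^14) ≈ 66 bits is our approximation):

```python
def expected_collisions(n: int, bits: int) -> float:
    """Birthday-bound estimate of the expected number of colliding
    pairs when hashing n distinct inputs uniformly into a space of
    2**bits values (accurate for n much smaller than 2**bits)."""
    m = 2.0 ** bits
    return n * (n - 1) / (2.0 * m)
```

Under these assumptions, hashing 3 × 10^10 structures into a ~66-bit space predicts only a handful of colliding pairs, which is consistent with the reported agreement between the observed and theoretical collision statistics.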
Adamczak, Rafal; Meller, Jarek
2016-12-28
Advances in computing have enabled current protein and RNA structure prediction and molecular simulation methods to dramatically increase their sampling of conformational spaces. The quickly growing number of experimentally resolved structures, and databases such as the Protein Data Bank, also motivates large-scale structural similarity analyses to retrieve and classify macromolecular data. Consequently, the computational cost of structure comparison and clustering for large sets of macromolecular structures has become a bottleneck that necessitates further algorithmic improvements and the development of efficient software solutions. uQlust is a versatile and easy-to-use tool for ultrafast ranking and clustering of macromolecular structures. uQlust makes use of structural profiles of proteins and nucleic acids, while combining a linear-time algorithm for implicit comparison of all pairs of models with profile hashing to enable efficient clustering of large data sets with a low memory footprint. In addition to ranking and clustering of large sets of models of the same protein or RNA molecule, uQlust can also be used in conjunction with fragment-based profiles in order to cluster structures of arbitrary length. For example, hierarchical clustering of the entire PDB using profile hashing can be performed on a typical laptop, thus opening an avenue for structural explorations previously limited to dedicated resources. The uQlust package is freely available under the GNU General Public License at https://github.com/uQlust . uQlust represents a drastic reduction in computational complexity and memory requirements with respect to existing clustering and model quality assessment methods for macromolecular structure analysis, while yielding results on par with traditional approaches for both proteins and RNAs.
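The profile-hashing idea, grouping models by a discrete structural profile in one linear pass instead of comparing all pairs, can be sketched as follows (a toy illustration only, not uQlust's algorithm; the per-residue state strings and k-gram width are invented for the example):

```python
from collections import defaultdict

def profile_key(states: str, width: int = 3) -> tuple:
    """Coarse profile of a model: non-overlapping k-grams over a
    per-residue state string (e.g., secondary-structure letters)."""
    return tuple(states[i:i + width]
                 for i in range(0, len(states) - width + 1, width))

def hash_cluster(models: dict) -> list:
    """models: {name: state string}. Hash each model's profile into a
    bucket; models with identical profiles land together, so grouping
    is linear in the number of models (no all-vs-all comparison)."""
    buckets = defaultdict(list)
    for name, states in models.items():
        buckets[profile_key(states)].append(name)
    return list(buckets.values())
```

Linear-time bucketing like this is what keeps the memory footprint low enough for very large model sets.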
Nonlinear spline wavefront reconstruction through moment-based Shack-Hartmann sensor measurements.
Viegers, M; Brunner, E; Soloviev, O; de Visser, C C; Verhaegen, M
2017-05-15
We propose a spline-based aberration reconstruction method through moment measurements (SABRE-M). The method uses first and second moment information from the focal spots of the SH sensor to reconstruct the wavefront with bivariate simplex B-spline basis functions. Because it provides higher-order local wavefront estimates with quadratic and cubic basis functions, the proposed method can achieve the same accuracy with SH arrays that have a reduced number of subapertures and, correspondingly, larger lenses, which can be beneficial for application in low light conditions. In numerical experiments the performance of SABRE-M is compared to that of the first moment method SABRE for aberrations of different spatial orders and for different sizes of the SH array. The results show that SABRE-M is superior to SABRE, in particular for the higher order aberrations, and that SABRE-M can deliver performance equal to that of SABRE on a SH grid of halved sampling.
NASA Astrophysics Data System (ADS)
Khan, Muazzam A.; Ahmad, Jawad; Javaid, Qaisar; Saqib, Nazar A.
2017-03-01
Wireless Sensor Networks (WSNs) are widely deployed in the monitoring of physical activity and/or environmental conditions. Data gathered from a WSN is transmitted via the network to a central location for further processing. Numerous applications of WSNs can be found in smart homes, intelligent buildings, health care, energy-efficient smart grids and industrial control systems. In recent years, computer scientists have focused on finding more applications of WSNs in multimedia technologies, i.e. audio, video and digital images. Due to the bulky nature of multimedia data, WSNs process a large volume of multimedia data, which significantly increases computational complexity and hence reduces battery time. Given battery life constraints, image compression combined with secure transmission over a wide-ranged sensor network is an emerging and challenging task in Wireless Multimedia Sensor Networks. Due to the open nature of the Internet, transmission of data must be secured through a process known as encryption. As a result, there has been an intensive demand for decades for schemes that are energy efficient as well as highly secure. In this paper, a discrete wavelet-based partial image encryption scheme using a hashing algorithm, chaotic maps and Hussain's S-Box is reported. The plaintext image is compressed via the discrete wavelet transform, and then the image is shuffled column-wise and row-wise via a Piece-wise Linear Chaotic Map (PWLCM) and a Nonlinear Chaotic Algorithm, respectively. To achieve higher security, the initial conditions for the PWLCM are made dependent on a hash function. The permuted image is bitwise XORed with a random matrix generated from an Intertwining Logistic map. To enhance the security further, the final ciphertext is obtained after substituting all elements with Hussain's substitution box. Experimental and statistical results confirm the strength of the proposed scheme.
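The PWLCM-driven shuffling stage can be sketched as follows (a minimal illustration under stated assumptions: the standard piece-wise linear chaotic map, a ranking-based permutation, and example parameter values; the reported scheme additionally derives the initial condition from a hash of the plaintext, which is omitted here):

```python
def pwlcm(x: float, p: float) -> float:
    """Piece-wise Linear Chaotic Map on (0, 1), control parameter p in (0, 0.5).
    The second half of the interval mirrors the first."""
    if x < p:
        return x / p
    if x <= 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)

def permutation(n: int, x0: float, p: float) -> list:
    """Derive a row/column permutation by ranking a chaotic orbit:
    the i-th output is the index of the i-th smallest orbit value."""
    x, orbit = x0, []
    for _ in range(n):
        x = pwlcm(x, p)
        orbit.append(x)
    return sorted(range(n), key=orbit.__getitem__)
```

The same (x0, p) pair always yields the same permutation, so a receiver holding the key material can invert the shuffle exactly.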
Components in Hemispheric Lateralization.
ERIC Educational Resources Information Center
Lynes, Sharon C. S. L.; And Others
The fact that there is an imperfect correlation between the asymmetrical function of the two halves of the brain and handedness has been a source of puzzlement for many investigators. Many theories have been proposed to explain handedness and why handedness does not correlate perfectly with other measures of lateralization. To assess the…
2009-03-01
policy, elliptic curve public key cryptography using the 256-bit prime modulus elliptic curve as specified in FIPS-186-2 and SHA-256 are appropriate for...publications/fips/fips186-2/fips186-2-change1.pdf 76 PART I. CHAPTER 5 Hashing via the Secure Hash Algorithm (using SHA-256 and...lithography and processing techniques. Field programmable gate arrays (FPGAs) are a chip design of interest. These devices are extensively used in
NASA Astrophysics Data System (ADS)
Argyropoulos, Theodoros; Catalan-Lasheras, Nuria; Grudiev, Alexej; Mcmonagle, Gerard; Rodriguez-Castro, Enrique; Syrachev, Igor; Wegner, Rolf; Woolley, Ben; Wuensch, Walter; Zha, Hao; Dolgashev, Valery; Bowden, Gorden; Haase, Andrew; Lucas, Thomas Geoffrey; Volpi, Matteo; Esperante-Pereira, Daniel; Rajamäki, Robin
2018-06-01
A prototype 11.994 GHz, traveling-wave accelerating structure for the Compact Linear Collider has been built, using the novel technique of assembling the structure from milled halves. The use of milled halves has many advantages when compared to a structure made from individual disks. These include the potential for a reduction in cost, because there are fewer parts, as well as a greater freedom in choice of joining technology because there are no rf currents across the halves' joint. Here we present the rf design and fabrication of the prototype structure, followed by the results of the high-power test and post-test surface analysis. During high-power testing the structure reached an unloaded gradient of 100 MV/m at an rf breakdown rate of less than 1.5 × 10⁻⁵ breakdowns/pulse/m with a 200 ns pulse. This structure has been designed for the CLIC testing program but construction from halves can be advantageous in a wide variety of applications.
Data Recovery of Distributed Hash Table with Distributed-to-Distributed Data Copy
NASA Astrophysics Data System (ADS)
Doi, Yusuke; Wakayama, Shirou; Ozaki, Satoshi
To realize huge-scale information services, many Distributed Hash Table (DHT) based systems have been proposed. For example, there are some proposals to manage item-level product traceability information with DHTs. In such an application, each entry of a huge number of item-level IDs needs to be available on a DHT. To ensure data availability, the soft-state approach has been employed in previous works. However, this does not scale well with the number of entries on a DHT. As we expect 10¹⁰ products in the traceability case, the soft-state approach is unacceptable. In this paper, we propose Distributed-to-Distributed Data Copy (D3C). With D3C, users can reconstruct the data as they detect data loss, or even migrate to another DHT system. We show why it scales well with the number of entries on a DHT. We have confirmed our approach with a prototype. Evaluation shows our approach fits well on a DHT with a low rate of failure and a huge number of data entries.
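To make the DHT key-placement idea above concrete, the sketch below implements a minimal consistent-hashing ring, the standard mechanism by which DHTs assign keys to nodes so that a node failure reassigns only that node's keys. This is a generic illustration, not the D3C protocol itself; the class and method names are assumptions.

```python
import hashlib
from bisect import bisect_right

class HashRing:
    """Minimal consistent-hashing ring: each key is owned by the first
    node point clockwise from the key's hash position."""

    def __init__(self, nodes):
        self.ring = sorted((self._h(n), n) for n in nodes)

    @staticmethod
    def _h(key):
        # 64-bit position on the ring derived from SHA-1
        return int.from_bytes(hashlib.sha1(key.encode()).digest()[:8], "big")

    def node_for(self, key):
        points = [p for p, _ in self.ring]
        i = bisect_right(points, self._h(key)) % len(self.ring)
        return self.ring[i][1]
```

Removing one node moves only the keys that node owned, which is why per-entry soft-state refresh traffic, rather than placement itself, becomes the bottleneck at 10¹⁰ entries.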
Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information.
Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing
2016-01-01
Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automatons (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity O(n) than a naive comparison of transitions O(n²). Few states need to be refined by the hash table, because most states have been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms. PMID:27806102
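The hash-table refinement phase described above can be illustrated with a classic Moore-style partition refinement, where states are grouped by a hashable signature of their current block and their successors' blocks. This is a generic sketch of signature-based refinement, not the paper's backward-depth algorithm; all names are assumptions.

```python
def minimize(states, alphabet, delta, accepting):
    """Refine a state partition to a fixpoint. Each round, states are
    hashed by (own block, blocks of transition targets); states sharing
    a signature stay in the same block."""
    block = {s: (s in accepting) for s in states}
    while True:
        sig = {s: (block[s], tuple(block[delta[s][a]] for a in alphabet))
               for s in states}
        # hash table: group states with identical signatures
        groups = {}
        for s in states:
            groups.setdefault(sig[s], []).append(s)
        new_block = {s: i for i, g in enumerate(groups.values()) for s in g}
        if len(set(new_block.values())) == len(set(block.values())):
            return groups  # stable: blocks are the minimal DFA's states
        block = new_block
```

On a 4-state DFA over {a} where states 1 and 2 both jump to the single accepting state 3, the refinement merges 1 and 2 into one block.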
A novel and lightweight system to secure wireless medical sensor networks.
He, Daojing; Chan, Sammy; Tang, Shaohua
2014-01-01
Wireless medical sensor networks (MSNs) are a key enabling technology in e-healthcare that allows the data of a patient's vital body parameters to be collected by wearable or implantable biosensors. However, the security and privacy protection of the collected data is a major unsolved issue, with challenges coming from the stringent resource constraints of MSN devices, and the high demand for both security/privacy and practicality. In this paper, we propose a lightweight and secure system for MSNs. The system employs a hash-chain based key updating mechanism and a proxy-protected signature technique to achieve efficient secure transmission and fine-grained data access control. Furthermore, we extend the system to provide backward secrecy and privacy preservation. Our system only requires symmetric-key encryption/decryption and hash operations and is thus suitable for low-power sensor nodes. This paper also reports the experimental results of the proposed system in a network of resource-limited motes and laptop PCs, which show its efficiency in practice. To the best of our knowledge, this is the first secure data transmission and access control system for MSNs.
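The hash-chain key updating mentioned above has a very small core: each session key is the hash of the previous one, so a node that updates and discards an old key is protected even if a later key leaks. The sketch below shows that core only, assuming SHA-256; the paper's full mechanism also involves proxy-protected signatures.

```python
import hashlib

def next_key(key: bytes) -> bytes:
    """One-way key update: K_{i+1} = H(K_i). Recovering K_i from K_{i+1}
    requires inverting the hash (backward secrecy under preimage resistance)."""
    return hashlib.sha256(key).digest()

def key_at(seed: bytes, i: int) -> bytes:
    """Derive the i-th key in the chain from the initial shared seed."""
    k = seed
    for _ in range(i):
        k = next_key(k)
    return k
```

Only hashing is needed per update, which matches the low-power constraint the abstract emphasizes.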
Wu, Zhenyu; Zou, Ming
2014-10-01
An increasing number of users interact, collaborate, and share information through social networks. Unprecedented growth in social networks is generating a significant amount of unstructured social data. From such data, distilling communities where users have common interests and tracking variations of users' interests over time are important research tracks in fields such as opinion mining, trend prediction, and personalized services. However, these tasks are extremely difficult considering the highly dynamic characteristics of the data. Existing community detection methods are time consuming, making it difficult to process data in real time. In this paper, dynamic unstructured data is modeled as a stream. Tag assignments stream clustering (TASC), an incremental scalable community detection method, is proposed based on locality-sensitive hashing. Both tags and latent interactions among users are incorporated in the method. In our experiments, the social dynamic behaviors of users are first analyzed. The proposed TASC method is then compared with state-of-the-art clustering methods such as StreamKmeans and incremental k-clique; results indicate that TASC can detect communities more efficiently and effectively. Copyright © 2014 Elsevier Ltd. All rights reserved.
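Locality-sensitive hashing, the building block TASC relies on above, can be sketched with the classic random-hyperplane family: each random hyperplane contributes one signature bit, and vectors with a small angle between them tend to share signatures and thus land in the same bucket. This is a generic LSH illustration, not the TASC method; function names are assumptions.

```python
import random

def make_hyperplanes(dim, bits, seed=0):
    """Draw `bits` random Gaussian hyperplanes in `dim` dimensions."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(bits)]

def lsh_signature(vec, planes):
    """One bit per hyperplane: the sign of the dot product. Vectors
    pointing in similar directions get mostly identical bits."""
    return tuple(1 if sum(p * v for p, v in zip(plane, vec)) >= 0 else 0
                 for plane in planes)
```

Streaming clustering then only compares a new item against items in its signature bucket, avoiding the all-pairs comparisons that make batch community detection too slow for real time.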
Distributed Kernelized Locality-Sensitive Hashing for Faster Image Based Navigation
2015-03-26
Facebook, Google, and Yahoo!. Current methods for image retrieval become problematic when implemented on image datasets that can easily reach billions of...correlations. Tech industry leaders like Facebook, Google, and Yahoo! sort and index even larger volumes of "big data" daily. When attempting to process...open source implementation of Google's MapReduce programming paradigm [13] which has been used for many different things. Using Apache Hadoop, Yahoo
An Elliptic Curve Based Schnorr Cloud Security Model in Distributed Environment
Muthurajan, Vinothkumar; Narayanasamy, Balaji
2016-01-01
Cloud computing requires security upgrades in data transmission approaches. In general, key-based encryption/decryption (symmetric and asymmetric) mechanisms ensure secure data transfer between devices. The symmetric key mechanisms (pseudorandom function) provide a lower protection level compared to asymmetric key (RSA, AES, and ECC) schemes. The presence of expired content and irrelevant resources leads to unauthorized data access. This paper investigates how integrity and secure data transfer are improved based on the Elliptic Curve based Schnorr scheme. This paper proposes a virtual machine based cloud model with a Hybrid Cloud Security Algorithm (HCSA) to remove the expired content. The HCSA-based auditing improves the malicious activity prediction during the data transfer. Duplication in the cloud server degrades the performance of EC-Schnorr based encryption schemes. This paper utilizes the Bloom filter concept to avoid cloud server duplication. The combination of EC-Schnorr and the Bloom filter efficiently improves the security performance. The comparative analysis between the proposed HCSA and the existing Distributed Hash Table (DHT) regarding execution time, computational overhead, and auditing time with auditing requests and servers confirms the effectiveness of HCSA in the cloud security model creation. PMID:26981584
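The duplicate-detection role played by the Bloom filter above rests on a simple structure: k hash probes into an m-bit array, giving fast membership tests with no false negatives and a tunable false-positive rate. The sketch below is a generic Bloom filter, not the paper's HCSA integration; the probe construction via salted SHA-256 is an assumption.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k salted-hash probes into an m-bit array.
    Membership tests never miss an added item; rare false positives occur."""

    def __init__(self, m=1024, k=5):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _probes(self, item):
        # k independent positions from k salted hashes of the item
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._probes(item):
            self.bits[p] = 1

    def __contains__(self, item):
        return all(self.bits[p] for p in self._probes(item))
```

Before storing an object, the server can test the filter: a negative answer guarantees the object is new, so a costly exact comparison is only needed on the rare positive.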
Design and implementation of a privacy preserving electronic health record linkage tool in Chicago
Kho, Abel N; Cashy, John P; Jackson, Kathryn L; Pah, Adam R; Goel, Satyender; Boehnke, Jörn; Humphries, John Eric; Kominers, Scott Duke; Hota, Bala N; Sims, Shannon A; Malin, Bradley A; French, Dustin D; Walunas, Theresa L; Meltzer, David O; Kaleba, Erin O; Jones, Roderick C; Galanter, William L
2015-01-01
Objective To design and implement a tool that creates a secure, privacy preserving linkage of electronic health record (EHR) data across multiple sites in a large metropolitan area in the United States (Chicago, IL), for use in clinical research. Methods The authors developed and distributed a software application that performs standardized data cleaning, preprocessing, and hashing of patient identifiers to remove all protected health information. The application creates seeded hash code combinations of patient identifiers using a Health Insurance Portability and Accountability Act compliant SHA-512 algorithm that minimizes re-identification risk. The authors subsequently linked individual records using a central honest broker with an algorithm that assigns weights to hash combinations in order to generate high specificity matches. Results The software application successfully linked and de-duplicated 7 million records across 6 institutions, resulting in a cohort of 5 million unique records. Using a manually reconciled set of 11 292 patients as a gold standard, the software achieved a sensitivity of 96% and a specificity of 100%, with a majority of the missed matches accounted for by patients with both a missing social security number and last name change. Using 3 disease examples, it is demonstrated that the software can reduce duplication of patient records across sites by as much as 28%. Conclusions Software that standardizes the assignment of a unique seeded hash identifier merged through an agreed upon third-party honest broker can enable large-scale secure linkage of EHR data for epidemiologic and public health research. The software algorithm can improve future epidemiologic research by providing more comprehensive data given that patients may make use of multiple healthcare systems. PMID:26104741
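The seeded hashing step described above can be sketched as follows: identifier fields are normalized, combined with a shared secret seed, and passed through SHA-512, so sites exchange only irreversible tokens; hashing several field combinations lets a match survive one changed field. This is a simplified illustration of the approach, not the deployed software; the normalization rule and the specific field combinations are assumptions.

```python
import hashlib

def seeded_hash(seed: str, *fields: str) -> str:
    """Normalize fields, prepend the shared secret seed, hash with SHA-512.
    The seed prevents dictionary attacks on the identifier space."""
    norm = "|".join(f.strip().lower() for f in fields)
    return hashlib.sha512((seed + "|" + norm).encode()).hexdigest()

def hash_combinations(seed, first, last, dob, ssn):
    """Hypothetical field combinations: a record linkage can still match
    on name_dob if the SSN is missing, or on dob_ssn after a name change."""
    return {
        "name_dob": seeded_hash(seed, first, last, dob),
        "name_ssn": seeded_hash(seed, first, last, ssn),
        "dob_ssn": seeded_hash(seed, dob, ssn),
    }
```

Normalization before hashing matters: without it, "Lee" and " lee " would produce unrelated digests and the match would be silently lost.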
Ramarao, Sathyanarayanan; John, Bindu Meera; Rajesh, Praveen; Swatha, S
2017-01-01
Introduction Bonding of composite resin to dentin mandates a wet substrate, whereas enamel should be dry. This may not be easily achievable in intracoronal preparations where enamel and dentin are closely placed to each other. Therefore, Dentin Bonding Agents (DBA) are recommended for enamel and dentinal bonding, where enamel is also left moist. A research question was raised whether "enamel-only" preparations will also benefit from wet enamel bonding and contemporary DBAs. Aim The aim of this study was to compare the shear bond strengths of composite resin, bonded to dry and wet enamel using fifth generation DBAs (etch and rinse systems) containing various solvents such as ethanol/water, acetone and ethanol. Materials and Methods The crowns of 120 maxillary premolars were split into buccal and lingual halves. They were randomly allocated into four groups of DBA: Group 1-water/ethanol based, Group 2-acetone based, Group 3-ethanol based, Group 4-universal bonding agent (control group). The buccal halves and lingual halves were bonded using the wet bonding and dry bonding technique, respectively. After application of the DBAs and composite resin build up, shear bond strength testing was done. Results Group 1 (ethanol/water based 3M ESPE, Adper Single Bond) showed the highest bond strength (23.15 MPa) in dry enamel. Group 2 (acetone based Dentsply, Prime and Bond NT) showed equal bond strength in wet and dry enamel conditions (18.87 MPa and 18.02 MPa, respectively). Conclusion Dry enamel bonding and ethanol/water based etch and rinse DBAs can be recommended for "enamel-only" tooth preparations. PMID:28274042
Fast implementation of length-adaptive privacy amplification in quantum key distribution
NASA Astrophysics Data System (ADS)
Zhang, Chun-Mei; Li, Mo; Huang, Jing-Zheng; Patcharapong, Treeviriyanupab; Li, Hong-Wei; Li, Fang-Yi; Wang, Chuan; Yin, Zhen-Qiang; Chen, Wei; Keattisak, Sripimanwat; Han, Zhen-Fu
2014-09-01
Post-processing is indispensable in quantum key distribution (QKD), which is aimed at sharing secret keys between two distant parties. It mainly consists of key reconciliation and privacy amplification, which are used for sharing identical keys and for distilling unconditionally secure keys, respectively. In this paper, we focus on speeding up the privacy amplification process by choosing a simple multiplicative universal class of hash functions. By constructing an optimal multiplication algorithm based on four basic multiplication algorithms, we give a fast software implementation of length-adaptive privacy amplification. "Length-adaptive" indicates that the implementation of privacy amplification automatically adapts to different lengths of input blocks. When the lengths of the input blocks are 1 Mbit and 10 Mbit, the speed of privacy amplification can be as fast as 14.86 Mbps and 10.88 Mbps, respectively. Thus, it is practical for GHz or even higher repetition frequency QKD systems.
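A multiplicative universal hash family of the kind mentioned above can be sketched with the multiply-shift construction: an n-bit block is multiplied by an odd n-bit constant modulo 2ⁿ and the top r bits are kept. This is a small illustrative instance, assuming the multiply-shift family; the paper's implementation operates on megabit-length blocks with optimized multiplication.

```python
def mult_universal_hash(x: int, a: int, n: int, r: int) -> int:
    """Multiply-shift hash: h_a(x) = (a*x mod 2^n) >> (n - r),
    mapping an n-bit input to r output bits for odd multiplier a."""
    assert a % 2 == 1 and 0 <= x < (1 << n)
    return ((a * x) % (1 << n)) >> (n - r)

def amplify(key_bits: int, n: int, r: int, a: int) -> int:
    """Privacy amplification step: compress an n-bit reconciled key
    (as an integer) down to r final secret bits."""
    return mult_universal_hash(key_bits, a, n, r)
```

For n = 4, r = 2, a = 13, x = 11: a·x = 143, 143 mod 16 = 15, and 15 >> 2 = 3, so the 4-bit input compresses to the 2-bit output 3. Only multiplication and shifts are needed, which is what makes software implementations fast for long blocks.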
NASA Technical Reports Server (NTRS)
Rood, S. B.; Kaufman, P. B.; Abe, H.; Pharis, R. P.
1987-01-01
[3H]Gibberellin A20 (GA20) of high specific radioactivity (49.9 gigabecquerel per millimole) was applied equilaterally in a ring of microdrops to the internodal pulvinus of shoots of 3-week-old gravistimulated and vertical normal maize (Zea mays L.), and to a pleiogravitropic (prostrate) maize mutant, lazy (la). All plants converted the [3H]GA20 to [3H]GA1- and [3H]GA29-like metabolites as well as to several metabolites with the partitioning and chromatographic behavior of glucosyl conjugates of [3H]GA1, [3H]GA29, and [3H]GA8. The tentative identification of these putative [3H]GA glucosyl conjugates was further supported by the release of the free [3H]GA moiety after cleavage with cellulase. Within 12 hours of the [3H]GA20 feed, there was a significantly higher proportion of total radioactivity in lower than in upper halves of internode and leaf sheath pulvini in gravistimulated normal maize. Further, there was a significantly higher proportion of putative free GA metabolites of [3H]GA20, especially [3H]GA1, in the lower halves of normal maize relative to upper halves. The differential localization of the metabolites between upper and lower halves was not apparent in the pleiogravitropic mutant, la. Endogenous GA-like substances were also examined in gravistimulated maize shoots. Forty-eight hours after gravistimulation of 3-week-old maize seedlings, endogenous free GA-like substances in upper and lower leaf sheath and internode pulvini halves were extracted, chromatographed, and bioassayed using the "Tanginbozu" dwarf rice microdrop assay. Lower halves contained consistently higher total levels of GA-like activity. The qualitative elution profile of GA-like substances differed consistently, upper halves containing principally a GA20-like substance and lower halves containing mainly GA1-like and GA19-like substances.
Gibberellins A1 (10 nanograms per gram) and A20 (5 nanograms per gram) were identified from these lower leaf sheath pulvini by capillary gas chromatography-selected ion monitoring. Results from all of these experiments are consistent with a role for GAs in the differential shoot growth that follows gravitropism, although the results do not eliminate the possibility that the redistribution of GAs results from the gravitropic response.
The Backscattering Phase Function for a Sphere with a Two-Scale Relief of Rough Surface
NASA Astrophysics Data System (ADS)
Klass, E. V.
2017-12-01
The backscattering of light from spherical surfaces characterized by one- and two-scale roughness reliefs has been investigated. The analysis is performed using the three-dimensional Monte Carlo program POKS-RG (geometrical-optics approximation), which makes it possible to take into account the roughness of objects under study by introducing local geometries of different levels. The geometric module of the program is aimed at describing objects by equations of second-order surfaces. One-scale roughness is set as an ensemble of geometric figures (convex or concave halves of ellipsoids or cones). The two-scale roughness is modeled by convex halves of ellipsoids, with the surface containing ellipsoidal pores. It is shown that a spherical surface with one-scale convex inhomogeneities has a flatter backscattering phase function than a surface with concave inhomogeneities (pores). For a sphere with two-scale roughness, the dependence of the backscattering intensity is found to be determined mostly by the lower-level inhomogeneities. The influence of roughness on the dependence of the backscattering from different spatial regions of the spherical surface is analyzed.
Password-only authenticated three-party key exchange with provable security in the standard model.
Nam, Junghyun; Choo, Kim-Kwang Raymond; Kim, Junghwan; Kang, Hyun-Kyu; Kim, Jinsoo; Paik, Juryon; Won, Dongho
2014-01-01
Protocols for password-only authenticated key exchange (PAKE) in the three-party setting allow two clients registered with the same authentication server to derive a common secret key from their individual password shared with the server. Existing three-party PAKE protocols were proven secure under the assumption of the existence of random oracles or in a model that does not consider insider attacks. Therefore, these protocols may turn out to be insecure when the random oracle is instantiated with a particular hash function or an insider attack is mounted against the partner client. The contribution of this paper is to present the first three-party PAKE protocol whose security is proven without any idealized assumptions in a model that captures insider attacks. The proof model we use is a variant of the indistinguishability-based model of Bellare, Pointcheval, and Rogaway (2000), which is one of the most widely accepted models for security analysis of password-based key exchange protocols. We demonstrated that our protocol achieves not only the typical indistinguishability-based security of session keys but also the password security against undetectable online dictionary attacks.
The role of holistic processing in judgments of facial attractiveness.
Abbas, Zara-Angela; Duchaine, Bradley
2008-01-01
Previous work has demonstrated that facial identity recognition, expression recognition, gender categorisation, and race categorisation rely on a holistic representation. Here we examine whether a holistic representation is also used for judgments of facial attractiveness. Like past studies, we used the composite paradigm to assess holistic processing (Young et al 1987, Perception 16 747-759). Experiment 1 showed that top halves of upright faces are judged to be more attractive when aligned with an attractive bottom half than when aligned with an unattractive bottom half. To assess whether this effect resulted from holistic processing or more general effects, we examined the impact of the attractive and unattractive bottom halves when upright halves were misaligned and when aligned and misaligned halves were presented upside-down. The bottom halves had no effect in either condition. These results demonstrate that the perceptual processes underlying upright facial-attractiveness judgments represent the face holistically. Our findings with attractiveness judgments and previous demonstrations involving other aspects of face processing suggest that a common holistic representation is used for most types of face processing.
Graph Coarsening for Path Finding in Cybersecurity Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, Emilie A.; Johnson, John R.; Halappanavar, Mahantesh
2013-01-01
In the pass-the-hash attack, hackers repeatedly steal password hashes and move through a computer network with the goal of reaching a computer with high-level administrative privileges. In this paper we apply graph coarsening in network graphs for the purpose of detecting hackers using this attack or assessing the risk level of the network's current state. We repeatedly take graph minors, which preserve the existence of paths in the graph, and take powers of the adjacency matrix to count the paths. This allows us to detect the existence of paths as well as find paths that have high risk of being used by adversaries.
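The adjacency-matrix-power step used above rests on a standard fact: entry (i, j) of Aᵏ counts the walks of exactly k hops from node i to node j, so a nonzero entry reveals that an attack path of that length exists. The sketch below shows this with plain-Python matrix multiplication; it is an illustration of the counting principle, not the paper's coarsening pipeline.

```python
def matmul(A, B):
    """Naive square-matrix product over integers."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def path_counts(adj, length):
    """Entry (i, j) of adj^length counts walks of exactly `length` hops;
    a nonzero count from a compromised host to an admin host flags risk."""
    result = adj
    for _ in range(length - 1):
        result = matmul(result, adj)
    return result
```

In the 3-node chain 0 → 1 → 2, squaring the adjacency matrix yields exactly one 2-hop walk from node 0 to node 2, and taking minors first shrinks the matrix this power is computed on.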
SHAMROCK: A Synthesizable High Assurance Cryptography and Key Management Coprocessor
2016-11-01
and excluding devices from a communicating group as they become trusted, or untrusted. An example of using rekeying to dynamically adjust group...algorithms, such as the Elliptic Curve Digital Signature Algorithm (ECDSA), work by computing a cryptographic hash of a message using, for example, the...material is based upon work supported by the Assistant Secretary of Defense for Research and Engineering under Air Force Contract No. FA8721-05-C
Ahn, Jinhi; Beharry, Seelochan; Molday, Laurie L; Molday, Robert S
2003-10-10
ABCR, also known as ABCA4, is a member of the superfamily of ATP binding cassette transporters that is believed to transport retinal or retinylidene-phosphatidylethanolamine across photoreceptor disk membranes. Mutations in the ABCR gene are responsible for Stargardt macular dystrophy and related retinal dystrophies that cause severe loss in vision. ABCR consists of two tandemly arranged halves each containing a membrane spanning segment followed by a large extracellular/lumen domain, a multi-spanning membrane domain, and a nucleotide binding domain (NBD). To define the role of each NBD, we examined the nucleotide binding and ATPase activities of the N and C halves of ABCR individually and co-expressed in COS-1 cells and derived from trypsin-cleaved ABCR in disk membranes. When disk membranes or membranes from co-transfected cells were photoaffinity labeled with 8-azido-ATP and 8-azido-ADP, only the NBD2 in the C-half bound and trapped the nucleotide. Co-expressed half-molecules displayed basal and retinal-stimulated ATPase activity similar to full-length ABCR. The individually expressed N-half displayed weak 8-azido-ATP labeling and low basal ATPase activity that was not stimulated by retinal, whereas the C-half did not bind ATP and exhibited little if any ATPase activity. Purified ABCR contained one tightly bound ADP, presumably in NBD1. Our results indicate that only NBD2 of ABCR binds and hydrolyzes ATP in the presence or absence of retinal. NBD1, containing a bound ADP, associates with NBD2 to play a crucial, non-catalytic role in ABCR function.
Lu, Yanrong; Li, Lixiang; Peng, Haipeng; Xie, Dong; Yang, Yixian
2015-06-01
Telecare Medicine Information Systems (TMISs) provide an efficient communication platform that supports patients in accessing health-care delivery services via the internet or mobile networks. Authentication becomes an essential need when a remote patient logs into the telecare server. Recently, many extended chaotic-map based authentication schemes using smart cards for TMISs have been proposed. Li et al. proposed a secure smart-card based authentication scheme for TMISs using extended chaotic maps, based on Lee's and Jiang et al.'s schemes. In this study, we show that Li et al.'s scheme still has some weaknesses, such as violation of session-key security, vulnerability to user impersonation attack and lack of local verification. To conquer these flaws, we propose a chaotic-map and smart-card based password authentication scheme applying a biometrics technique and hash function operations. Through informal and formal security analyses, we demonstrate that our scheme is resilient to possible known attacks, including the attacks found in Li et al.'s scheme. Compared with previous authentication schemes, the proposed scheme is more secure and efficient and hence more practical for telemedical environments.
OCO-2 Fairing Bi-Sector Halves Transport
2014-03-24
VANDENBERG AIR FORCE BASE, Calif. – Both halves of the fairing for NASA's Orbiting Carbon Observatory-2 mission, or OCO-2, arrive at Space Launch Complex 2 on Vandenberg Air Force Base in California. Operations have begun to hoist the sections of the fairing into the Delta II launcher's environmental enclosure, or clean room, at the top of the pad's tower. The fairing will protect OCO-2 during launch aboard a United Launch Alliance Delta II rocket from Space Launch Complex 2 in July. The observatory will collect precise global measurements of carbon dioxide in the Earth's atmosphere and provide scientists with a better idea of the chemical compound's impacts on climate change. Scientists will analyze this data to improve our understanding of the natural processes and human activities that regulate the abundance and distribution of this important atmospheric gas. To learn more about OCO-2, visit http://oco.jpl.nasa.gov. Photo credit: NASA/Randy Beaudoin
OCO-2: Hoisting the Fairing Halves up the MST
2014-03-24
VANDENBERG AIR FORCE BASE, Calif. – Both halves of the fairing for NASA's Orbiting Carbon Observatory-2 mission, or OCO-2, are moved into position in the environmental enclosure, or clean room, at the top of the Delta II launcher at Space Launch Complex 2 on Vandenberg Air Force Base in California. The fairing will protect OCO-2 during launch aboard a United Launch Alliance Delta II rocket from Space Launch Complex 2 in July. The observatory will collect precise global measurements of carbon dioxide in the Earth's atmosphere and provide scientists with a better idea of the chemical compound's impacts on climate change. Scientists will analyze this data to improve our understanding of the natural processes and human activities that regulate the abundance and distribution of this important atmospheric gas. To learn more about OCO-2, visit http://oco.jpl.nasa.gov. Photo credit: NASA/Randy Beaudoin
OCO-2: Hoisting the Fairing Halves up the MST
2014-03-24
VANDENBERG AIR FORCE BASE, Calif. – Both halves of the fairing for NASA's Orbiting Carbon Observatory-2 mission, or OCO-2, have arrived in the environmental enclosure, or clean room, at the top of the Delta II launcher at Space Launch Complex 2 on Vandenberg Air Force Base in California. The fairing will protect OCO-2 during launch aboard a United Launch Alliance Delta II rocket from Space Launch Complex 2 in July. The observatory will collect precise global measurements of carbon dioxide in the Earth's atmosphere and provide scientists with a better idea of the chemical compound's impacts on climate change. Scientists will analyze this data to improve our understanding of the natural processes and human activities that regulate the abundance and distribution of this important atmospheric gas. To learn more about OCO-2, visit http://oco.jpl.nasa.gov. Photo credit: NASA/Randy Beaudoin
OCO-2 Fairing Bi-Sector Halves Transport
2014-03-24
VANDENBERG AIR FORCE BASE, Calif. – Both halves of the fairing for NASA's Orbiting Carbon Observatory-2 mission, or OCO-2, are delivered to Space Launch Complex 2 on Vandenberg Air Force Base in California. Operations have begun to hoist the sections of the fairing into the Delta II launcher's environmental enclosure, or clean room, at the top of the pad's tower. The fairing will protect OCO-2 during launch aboard a United Launch Alliance Delta II rocket from Space Launch Complex 2 in July. The observatory will collect precise global measurements of carbon dioxide in the Earth's atmosphere and provide scientists with a better idea of the chemical compound's impacts on climate change. Scientists will analyze this data to improve our understanding of the natural processes and human activities that regulate the abundance and distribution of this important atmospheric gas. To learn more about OCO-2, visit http://oco.jpl.nasa.gov. Photo credit: NASA/Randy Beaudoin
Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information
Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing
2016-01-01
Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive comparison of transitions, O(n²). Few states need to be refined by the hash table, because most states have already been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms. PMID:27806102
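The hash-table refinement phase the abstract describes can be illustrated with a generic partition-refinement sketch (an illustrative reconstruction, not the paper's algorithm: it omits the backward-depth coarse partition, and all names are hypothetical):

```python
# Sketch of hash-based partition refinement for DFA minimization.
# States with identical hashable signatures (own block + blocks reached
# on each symbol) are grouped together; iteration stops at a fixed point.

def minimize(states, alphabet, delta, accepting):
    """delta: dict (state, symbol) -> state. Returns the final blocks."""
    # Initial coarse partition: accepting vs. non-accepting states.
    block_of = {s: (s in accepting) for s in states}
    while True:
        # Signature of a state: its block plus the blocks its transitions reach.
        sig = {s: (block_of[s],
                   tuple(block_of[delta[(s, a)]] for a in alphabet))
               for s in states}
        # Hash the signatures (dict keys) to form the refined blocks.
        groups = {}
        for s in states:
            groups.setdefault(sig[s], []).append(s)
        new_block_of = {}
        for i, members in enumerate(groups.values()):
            for s in members:
                new_block_of[s] = i
        # Refinement never merges blocks, so an unchanged count means stability.
        if len(set(new_block_of.values())) == len(set(block_of.values())):
            return groups
        block_of = new_block_of
```

Equivalent states end up in the same block, so the number of blocks is the size of the minimal DFA.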
A high-speed drug interaction search system for ease of use in the clinical environment.
Takada, Masahiro; Inada, Hiroshi; Nakazawa, Kazuo; Tani, Shoko; Iwata, Michiaki; Sugimoto, Yoshihisa; Nagata, Satoru
2012-12-01
With the advancement of pharmaceutical development, drug interactions have become increasingly complex. As a result, a computer-based drug interaction search system is required to organize the full body of drug interaction data. To overcome problems faced with existing systems, we developed a drug interaction search system using a hash table, which offers higher processing speeds and easier maintenance operations compared with relational databases (RDB). In order to compare the performance of our system and a MySQL RDB in terms of search speed, drug interaction searches were repeated for all 45 possible combinations of two out of a group of 10 drugs for two cases: 5,604 and 56,040 drug interaction data records. As the principal result, our system was able to process the search approximately 19 times faster than the system using the MySQL RDB. Our system also has several other merits, such as allowing drug interaction data to be maintained in comma-separated value (CSV) format, thereby facilitating data maintenance. Although our system uses the well-known method of a hash table, it is expected to resolve problems common to existing systems and to be an effective system that enables the safe management of drugs.
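The core idea of a hash-table interaction lookup can be sketched as follows (a minimal illustration with hypothetical drug names and CSV-style rows, not the paper's actual system or dataset):

```python
# Minimal sketch of a hash-table (dict) based drug interaction search,
# assuming interactions are loaded from CSV rows (drug_a, drug_b, note).
from itertools import combinations

def build_table(rows):
    table = {}
    for a, b, note in rows:
        # Store under an order-independent key so (a, b) matches (b, a).
        table[frozenset((a, b))] = note
    return table

def check_interactions(table, drugs):
    """Check all pairwise combinations of the prescribed drugs in O(1) each."""
    hits = []
    for a, b in combinations(drugs, 2):
        note = table.get(frozenset((a, b)))
        if note is not None:
            hits.append((a, b, note))
    return hits
```

Each pair lookup is a single hash probe, which is what makes the approach fast relative to issuing RDB queries per pair.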
The Hong Kong/AAO/Strasbourg Hα (HASH) Planetary Nebula Database
NASA Astrophysics Data System (ADS)
Bojičić, Ivan S.; Parker, Quentin A.; Frew, David J.
2017-10-01
The Hong Kong/AAO/Strasbourg Hα (HASH) planetary nebula database is an online research platform providing free and easy access to the largest and most comprehensive catalogue of known Galactic PNe and a repository of observational data (imaging and spectroscopy) for these and related astronomical objects. The main motivation for creating this system is to resolve some long-standing problems in the field, e.g. problems with mimics, dubious and/or mistaken identifications, errors in observational data, and the consolidation of widely scattered data-sets. This facility allows researchers quick and easy access to archived and new observational data, and enables the creation and sharing of non-redundant PN samples and catalogues.
Performance-Oriented Privacy-Preserving Data Integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pon, R K; Critchlow, T
2004-09-15
Current solutions to integrating private data with public data have provided useful privacy metrics, such as relative information gain, that can be used to evaluate alternative approaches. Unfortunately, they have not addressed critical performance issues, especially when the public database is very large. The use of hashes and noise yields better performance than existing techniques while still making it difficult for unauthorized entities to distinguish which data items truly exist in the private database. As we show here, leveraging the uncertainty introduced by collisions caused by hashing and the injection of noise, we present a technique for performing a relational join operation between a massive public table and a relatively smaller private one.
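The hash-and-noise idea can be sketched roughly as follows (an illustrative toy, assuming truncated hashes to force collisions and random dummy requests; the parameter choices are hypothetical and this is not the paper's exact technique):

```python
# Toy sketch of a privacy-leaning join: the private side requests rows by
# short (collision-prone) hashes of its join keys, mixed with random noise
# hashes, so the public server cannot tell which keys are really held.
import hashlib
import secrets

def short_hash(key, bits=16):
    # Truncate a SHA-256 digest so that distinct keys can collide.
    h = int.from_bytes(hashlib.sha256(key.encode()).digest()[:4], "big")
    return h & ((1 << bits) - 1)

def private_request(private_keys, noise=8, bits=16):
    real = {short_hash(k, bits) for k in private_keys}
    fake = {secrets.randbelow(1 << bits) for _ in range(noise)}
    return real | fake  # the server cannot distinguish real from noise

def server_join(public_rows, request, bits=16):
    # public_rows: dict key -> row; return all candidate matching rows.
    return {k: v for k, v in public_rows.items()
            if short_hash(k, bits) in request}
```

The private side then filters the returned candidates locally; collisions and noise mean the server's view over-approximates the true private key set.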
Using Purpose-Built Functions and Block Hashes to Enable Small Block and Sub-file Forensics
2010-01-01
JPEGs. We tested precarve using the nps-2009-canon2-gen6 (Garfinkel et al., 2009) disk image. The disk image was created with a 32 MB SD card and a ... analysis of n-grams in the fragment. Fig. 1: usage of a 160 GB iPod as reported by iTunes 8.2.1 (6) (top), as reported by the file system (bottom center), and as computed with random sampling (bottom right); note that the iTunes usage is actually in GiB, even though the program displays the "GB" label.
2015-01-01
..."rush", nitrous oxide ("laughing gas"), amyl or butyl nitrate ("poppers"). Cannabis: marijuana, hashish ("hash"), THC, "pot", "grass", "weed". Questionnaire excerpt: When did you last use marijuana? Have you ever used marijuana at other times in your life? YES / NO. If YES, at what age did you begin smoking marijuana? On approximately how many occasions have you used marijuana? Do you use any other street drugs currently?
2015-09-01
Extremely Lightweight Intrusion Detection (ELIDe) algorithm on an Android-based mobile device. Our results show that the hashing and inner product ... approximately 2.5 megabits per second (assuming a normal distribution of packet sizes) with no significant packet loss. Subject terms: ELIDe, Android, pcap ... system (OS). To run ELIDe, the current version was ported for use on Android. 2.1 Mobile Device: After ELIDe was ported to the Android mobile ...
Anatomy of a hash-based long read sequence mapping algorithm for next generation DNA sequencing.
Misra, Sanchit; Agrawal, Ankit; Liao, Wei-keng; Choudhary, Alok
2011-01-15
Recently, a number of programs have been proposed for mapping short reads to a reference genome. Many of them are heavily optimized for short-read mapping and hence are very efficient for shorter queries, but that makes them inefficient or not applicable for reads longer than 200 bp. However, many sequencers are already generating longer reads and more are expected to follow. For long read sequence mapping, there are limited options; BLAT, SSAHA2, FANGS and BWA-SW are among the popular ones. However, resequencing and personalized medicine need much faster software to map these long sequencing reads to a reference genome to identify SNPs or rare transcripts. We present AGILE (AliGnIng Long rEads), a hash table based high-throughput sequence mapping algorithm for longer 454 reads that uses diagonal multiple seed-match criteria, customized q-gram filtering and a dynamic incremental search approach among other heuristics to optimize every step of the mapping process. In our experiments, we observe that AGILE is more accurate than BLAT, and comparable to BWA-SW and SSAHA2. For practical error rates (< 5%) and read lengths (200-1000 bp), AGILE is significantly faster than BLAT, SSAHA2 and BWA-SW. Even for the other cases, AGILE is comparable to BWA-SW and several times faster than BLAT and SSAHA2. http://www.ece.northwestern.edu/~smi539/agile.html.
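A hash-table seed index with diagonal seed-match voting, loosely in the spirit of the multiple seed-match criterion described above, can be sketched as follows (illustrative only; AGILE's actual q-gram filtering and heuristics are more elaborate):

```python
# Toy seed-and-locate read mapper: index every q-gram of the reference in a
# hash table, then let a read's q-grams vote on diagonals (ref_pos - read_pos).
# Several seeds agreeing on one diagonal suggest a candidate mapping locus.
from collections import defaultdict

def build_index(ref, q=4):
    index = defaultdict(list)
    for i in range(len(ref) - q + 1):
        index[ref[i:i + q]].append(i)
    return index

def map_read(read, index, q=4, min_seeds=2):
    diagonals = defaultdict(int)
    for j in range(len(read) - q + 1):
        for i in index.get(read[j:j + q], ()):
            diagonals[i - j] += 1  # co-linear seeds share a diagonal
    return sorted(d for d, n in diagonals.items() if n >= min_seeds)
```

A real mapper would follow candidate diagonals with banded alignment to tolerate the indel-rich errors of long 454 reads; the hash lookup only narrows the search space.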
A Hybrid Spatio-Temporal Data Indexing Method for Trajectory Databases
Ke, Shengnan; Gong, Jun; Li, Songnian; Zhu, Qing; Liu, Xintao; Zhang, Yeting
2014-01-01
In recent years, there has been tremendous growth in the field of indoor and outdoor positioning sensors continuously producing huge volumes of trajectory data that has been used in many fields such as location-based services or location intelligence. Trajectory data is massively increased and semantically complicated, which poses a great challenge on spatio-temporal data indexing. This paper proposes a spatio-temporal data indexing method, named HBSTR-tree, which is a hybrid index structure comprising spatio-temporal R-tree, B*-tree and Hash table. To improve the index generation efficiency, rather than directly inserting trajectory points, we group consecutive trajectory points as nodes according to their spatio-temporal semantics and then insert them into spatio-temporal R-tree as leaf nodes. Hash table is used to manage the latest leaf nodes to reduce the frequency of insertion. A new spatio-temporal interval criterion and a new node-choosing sub-algorithm are also proposed to optimize spatio-temporal R-tree structures. In addition, a B*-tree sub-index of leaf nodes is built to query the trajectories of targeted objects efficiently. Furthermore, a database storage scheme based on a NoSQL-type DBMS is also proposed for the purpose of cloud storage. Experimental results prove that HBSTR-tree outperforms TB*-tree in some aspects such as generation efficiency, query performance and query type. PMID:25051028
Securizing data linkage in french public statistics.
Guesdon, Maxence; Benzenine, Eric; Gadouche, Kamel; Quantin, Catherine
2016-10-06
Administrative records in France, especially medical and social records, have huge potential for statistical studies. The NIR (a national identifier) is widely used in medico-social administrations, which would theoretically provide considerable scope for data matching, on condition that the legislation on such matters was respected. The law, however, forbids the processing of non-anonymized medical data, thus making it difficult to carry out studies that require several sources of social and medical data. We would like to benefit from computer techniques introduced since the 1970s to provide safe linkage of anonymized files and to ease the current constraints on such procedures. We propose an organization and a data workflow, based on hashing and cryptographic techniques, to strongly compartmentalize identifying and non-identifying data. The proposed method offers strong control over who is in possession of which information, using different hashing keys for each linkage. This makes it possible to prevent unauthorized linkage of data and to protect anonymity, by preventing the accumulation of non-identifying data, which can become identifying when linked. Our proposal would make it possible to conduct such studies more easily, more regularly and more precisely while preserving a sufficiently high level of anonymity. The main obstacle to setting up such a system, in our opinion, is not technical, but rather organizational, in that it relies on the existence of a Key-Management Authority.
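The per-linkage keying idea can be sketched with a keyed hash (HMAC); the key and identifier values below are hypothetical, and this is only a minimal illustration of the principle, not the proposed workflow:

```python
# Sketch: the same identifier hashed with a different secret key per linkage
# project yields pseudonyms that cannot be joined across projects by anyone
# who does not hold both keys (the Key-Management Authority's role).
import hashlib
import hmac

def pseudonym(identifier: str, linkage_key: bytes) -> str:
    # HMAC-SHA256 keyed hash; the key makes dictionary attacks on the
    # identifier space infeasible for outsiders.
    return hmac.new(linkage_key, identifier.encode(), hashlib.sha256).hexdigest()
```

Within one project the pseudonym is stable, so records from two sources hashed with the same key still link; across projects the pseudonyms differ, blocking unauthorized cumulative linkage.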
Tersi, Luca; Barré, Arnaud; Fantozzi, Silvia; Stagni, Rita
2013-03-01
Model-based mono-planar and bi-planar 3D fluoroscopy methods can quantify intact joint kinematics with different performance/cost trade-offs. The aim of this study was to compare the performances of mono- and bi-planar setups against a marker-based gold standard during dynamic phantom knee acquisitions. Absolute pose errors for in-plane parameters were lower than 0.6 mm or 0.6° for both mono- and bi-planar setups. Mono-planar setups proved critical in quantifying the out-of-plane translation (error < 6.5 mm), and bi-planar setups in quantifying the rotation about the bone's longitudinal axis (error < 1.3°). These errors propagated to joint angles and translations differently depending on the alignment of the anatomical axes and the fluoroscopic reference frames. Internal-external rotation was the least accurate angle both with mono-planar (error < 4.4°) and bi-planar (error < 1.7°) setups, due to bone longitudinal symmetries. Results highlighted that accuracy for mono-planar in-plane pose parameters is comparable to bi-planar, but with halved computational costs, halved segmentation time and halved ionizing radiation dose. Bi-planar analysis better compensated for the out-of-plane uncertainty, which propagates to relative kinematics differently depending on the setup. To take full advantage of bi-planar analysis, the motion task to be investigated should be designed to keep the joint inside the visible volume, which introduces constraints with respect to mono-planar analysis.
Comparative genomics meets topology: a novel view on genome median and halving problems.
Alexeev, Nikita; Avdeyev, Pavel; Alekseyev, Max A
2016-11-11
Genome median and genome halving are combinatorial optimization problems that aim at reconstruction of ancestral genomes by minimizing the number of evolutionary events between them and genomes of the extant species. While these problems have been widely studied in past decades, their solutions are often either not efficient or not biologically adequate. These shortcomings have been recently addressed by restricting the problems solution space. We show that the restricted variants of genome median and halving problems are, in fact, closely related. We demonstrate that these problems have a neat topological interpretation in terms of embedded graphs and polygon gluings. We illustrate how such interpretation can lead to solutions to these problems in particular cases. This study provides an unexpected link between comparative genomics and topology, and demonstrates advantages of solving genome median and halving problems within the topological framework.
NASA Astrophysics Data System (ADS)
Anenberg, S. C.; Talgo, K.; Arunachalam, S.; Dolwick, P.; Jang, C.; West, J. J.
2011-04-01
As a component of fine particulate matter (PM2.5), black carbon (BC) is associated with premature human mortality. BC also affects climate by absorbing solar radiation and reducing planetary albedo. Several studies have examined the climate impacts of BC emissions, but the associated health impacts have been studied less extensively. Here, we examine the surface PM2.5 and premature mortality impacts of halving anthropogenic BC emissions globally, from eight world regions, and from three major economic sectors. We use a global chemical transport model, MOZART-4, to simulate PM2.5 concentrations and a health impact function to calculate premature cardiopulmonary and lung cancer deaths. We estimate that halving global anthropogenic BC emissions reduces outdoor population-weighted average PM2.5 by 542 ng m-3 (1.8%) and avoids 157 000 (95% confidence interval, 120 000-194 000) annual premature deaths globally, with the vast majority occurring within the source region. While most of these avoided deaths can be achieved by halving East Asian emissions (54%), followed by South Asian emissions (31%), South Asian emissions have 50% greater mortality impacts per unit BC emitted than East Asian emissions. Globally, the contribution of residential, industrial, and transportation BC emissions to PM2.5-related mortality is 1.3, 1.2, and 0.6 times each sector's contribution to anthropogenic BC emissions, owing to the degree of co-location with population. Impacts of residential BC emissions are underestimated since indoor PM2.5 exposure is excluded. We estimate ~8 times more avoided deaths when BC and organic carbon (OC) emissions are halved together, suggesting that these results greatly underestimate the full air pollution-related mortality benefits of BC mitigation strategies which generally decrease both BC and OC. 
Confidence in our results would be strengthened by reducing uncertainties in emissions, model parameterization of aerosol processes, grid resolution, and PM2.5 concentration-mortality relationships globally.
NASA Astrophysics Data System (ADS)
Wai Kuan, Yip; Teoh, Andrew B. J.; Ngo, David C. L.
2006-12-01
We introduce a novel method for secure computation of biometric hash on dynamic hand signatures using BioPhasor mixing and [InlineEquation not available: see fulltext.] discretization. The use of BioPhasor as the mixing process provides a one-way transformation that precludes exact recovery of the biometric vector from compromised hashes and stolen tokens. In addition, our user-specific [InlineEquation not available: see fulltext.] discretization acts both as an error correction step and as a real-to-binary space converter. We also propose a new method of extracting a compressed representation of dynamic hand signatures using the discrete wavelet transform (DWT) and discrete Fourier transform (DFT). Without the conventional use of dynamic time warping, the proposed method avoids storage of the user's hand signature template. This is an important consideration for protecting the privacy of the biometric owner. Our results show that the proposed method can produce stable and distinguishable bit strings with equal error rates (EERs) of [InlineEquation not available: see fulltext.] and [InlineEquation not available: see fulltext.] for random and skilled forgeries in the stolen-token (worst case) scenario, and [InlineEquation not available: see fulltext.] for both forgeries in the genuine-token (optimal) scenario.
Smeed's law and expected road fatality reduction: An assessment of the Italian case.
Persia, Luca; Gigli, Roberto; Usami, Davide Shingo
2015-12-01
Smeed's law defines the functional relationship between the fatality rate and the motorization rate. Focusing on the Italian case and building on Smeed's law, the study assesses the possibility for Italy of reaching the target of halving the number of road fatalities by 2020, in light of the evolving socioeconomic situation. A Smeed model has been calibrated on the recorded Italian data. The evolution of the two indicators, fatality and motorization rates, has been estimated using predictions of the main parameters (population, fleet size and fatalities). These trends have been compared with the natural decreasing trend derived from Smeed's law. Nine scenarios have been developed showing the relationship between the fatality rate and the motorization rate. In the case of a limited increase (logistic regression) of the vehicle fleet, and according to the estimated evolution of the population, the path defined by the motorization and fatality rates is very steep, diverging from the estimated confidence interval of the Smeed model. In these scenarios the motorization rate is almost constant over the decade. In the current economic context, a limited development of the vehicle fleet is more plausible. Under these conditions, the target of halving the number of fatalities in Italy may be achieved only in the case of a structural break (i.e., the introduction of highly effective road safety policies). Practical application: The proposed tools can be used both to evaluate retrospectively the effectiveness of road safety improvements and to assess whether a greater effort is needed to reach the established road safety targets.
NASA Technical Reports Server (NTRS)
Glushko, Vladimir
2004-01-01
The intensity and amplitude of human functional systems and of the most important human organs are wavelike and rhythmic by nature. These waves have constant periodicity, phase and amplitude. These characteristics can vary, but their variations show a pronounced repetition over time. This indicates a hashing (superposition) of several wave processes and their interference. Stochastic changes in the characteristics of a human organism's wave processes are explained either by 'pulsations' associated with the hashing (superposition) of several wave processes and their interference, or by the isolated influence of environmental physical factors on the organism. Human beings accordingly have periods of higher and lower efficiency, state of health and so on, depending not only on environmental factors but also on an 'internal' rhythmic factor. Sometimes the periodicity of peaks and falls in particular characteristics is broken. Disturbance of steady-state biological rhythms is usually accompanied by reduced stability in the activity of the most important systems of the human organism. In turn, this affects the organism's adaptation to changing living conditions as well as the general condition and efficiency of a human being. The latter factor is very important for space medicine. Biological rhythmology is a special branch of biology and medicine that studies the mechanisms of rhythmic activity of organs, organ systems, individuals and species. Corresponding research has also been carried out in space medicine.
7 CFR 51.2285 - Tolerances for size.
Code of Federal Regulations, 2012 CFR
2012-01-01
... for size Smaller than three-fourths halves Will not pass through 24/64 inch round hole Pass through 24/64 inch round hole Pass through 16/64 inch round hole Pass through 8/64 inch round hole Halves 5 1...
7 CFR 51.2285 - Tolerances for size.
Code of Federal Regulations, 2011 CFR
2011-01-01
... for size Smaller than three-fourths halves Will not pass through 24/64 inch round hole Pass through 24/64 inch round hole Pass through 16/64 inch round hole Pass through 8/64 inch round hole Halves 5 1...
7 CFR 51.2285 - Tolerances for size.
Code of Federal Regulations, 2010 CFR
2010-01-01
... for size Smaller than three-fourths halves Will not pass through 24/64 inch round hole Pass through 24/64 inch round hole Pass through 16/64 inch round hole Pass through 8/64 inch round hole Halves 5 1...
Gibberellins and Gravitropism in Maize Shoots 1
Rood, Stewart B.; Kaufman, Peter B.; Abe, Hiroshi; Pharis, Richard P.
1987-01-01
[3H]Gibberellin A20 (GA20) of high specific radioactivity (49.9 gigabecquerel per millimole) was applied equilaterally in a ring of microdrops to the internodal pulvinus of shoots of 3-week-old gravistimulated and vertical normal maize (Zea mays L.), and to a pleiogravitropic (prostrate) maize mutant, lazy (la). All plants converted the [3H]GA20 to [3H]GA1− and [3H]GA29-like metabolites as well as to several metabolites with the partitioning and chromatographic behavior of glucosyl conjugates of [3H]GA1, [3H]GA29, and [3H]GA8. The tentative identification of these putative [3H]GA glucosyl conjugates was further supported by the release of the free [3H]GA moiety after cleavage with cellulase. Within 12 hours of the [3H]GA20 feed, there was a significantly higher proportion of total radioactivity in lower than in upper halves of internode and leaf sheath pulvini in gravistimulated normal maize. Further, there was a significantly higher proportion of putative free GA metabolites of [3H]GA20, especially [3H]GA1, in the lower halves of normal maize relative to upper halves. The differential localization of the metabolites between upper and lower halves was not apparent in the pleiogravitropic mutant, la. Endogenous GA-like substances were also examined in gravistimulated maize shoots. Forty-eight hours after gravistimulation of 3-week-old maize seedlings, endogenous free GA-like substances in upper and lower leaf sheath and internode pulvini halves were extracted, chromatographed, and bioassayed using the `Tanginbozu' dwarf rice microdrop assay. Lower halves contained consistently higher total levels of GA-like activity. The qualitative elution profile of GA-like substances differed consistently, upper halves containing principally a GA20-like substance and lower halves containing mainly GA1-like and GA19-like substances. 
Gibberellins A1 (10 nanograms per gram) and A20 (5 nanograms per gram) were identified from these lower leaf sheath pulvini by capillary gas chromatography-selected ion monitoring. Results from all of these experiments are consistent with a role for GAs in the differential shoot growth that follows gravitropism, although the results do not eliminate the possibility that the redistribution of GAs results from the gravitropic response. PMID:11539033
Robust efficient video fingerprinting
NASA Astrophysics Data System (ADS)
Puri, Manika; Lubin, Jeffrey
2009-02-01
We have developed a video fingerprinting system with robustness and efficiency as the primary and secondary design criteria. In extensive testing, the system has shown robustness to cropping, letter-boxing, sub-titling, blur, drastic compression, frame rate changes, size changes and color changes, as well as to the geometric distortions often associated with camcorder capture in cinema settings. Efficiency is afforded by a novel two-stage detection process in which a fast matching process first computes a number of likely candidates, which are then passed to a second slower process that computes the overall best match with minimal false alarm probability. One key component of the algorithm is a maximally stable volume computation - a three-dimensional generalization of maximally stable extremal regions - that provides a content-centric coordinate system for subsequent hash function computation, independent of any affine transformation or extensive cropping. Other key features include an efficient bin-based polling strategy for initial candidate selection, and a final SIFT feature-based computation for final verification. We describe the algorithm and its performance, and then discuss additional modifications that can provide further improvement to efficiency and accuracy.
NASA Astrophysics Data System (ADS)
Ye, Tian-Yu
2016-09-01
Recently, Liu et al. proposed a two-party quantum private comparison (QPC) protocol using entanglement swapping of Bell entangled state (Commun. Theor. Phys. 57 (2012) 583). Subsequently Liu et al. pointed out that in Liu et al.'s protocol, the TP can extract the two users' secret inputs without being detected by launching the Bell-basis measurement attack, and suggested the corresponding improvement to mend this loophole (Commun. Theor. Phys. 62 (2014) 210). In this paper, we first point out the information leakage problem toward TP existing in both of the above two protocols, and then suggest the corresponding improvement by using the one-way hash function to encrypt the two users' secret inputs. We further put forward the three-party QPC protocol also based on entanglement swapping of Bell entangled state, and then validate its output correctness and its security in detail. Finally, we generalize the three-party QPC protocol into the multi-party case, which can accomplish arbitrary pair's comparison of equality among K users within one execution. Supported by the National Natural Science Foundation of China under Grant No. 61402407
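A classical analogue of the suggested fix, masking each secret input with a one-way hash over a shared key so the comparer learns only equality, can be sketched as follows (illustrative only; the actual protocol is quantum and relies on entanglement swapping of Bell states):

```python
# Sketch: each user sends H(secret || shared_key) instead of anything from
# which the third party (TP) could read the secret; TP can then test
# equality of inputs without learning their values.
import hashlib

def masked(secret: bytes, shared_key: bytes) -> bytes:
    # One-way function applied to the secret, keyed by material unknown to TP.
    return hashlib.sha256(secret + shared_key).digest()

def tp_compare(m1: bytes, m2: bytes) -> bool:
    # Equal masks imply equal secrets (given the same shared key).
    return m1 == m2
```

This captures why hashing the inputs blocks the information leakage toward TP: the preimage resistance of the hash prevents recovery of the secrets from the comparison values.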
7 CFR 51.2285 - Tolerances for size.
Code of Federal Regulations, 2013 CFR
2013-01-01
... inch round hole Pass through 24/64 inch round hole Pass through 16/64 inch round hole Pass through 8/64 inch round hole Halves 5 1 (included in 5 percent) Pieces and halves 1 18 3 (included in 18 percent) 1...
7 CFR 51.2285 - Tolerances for size.
Code of Federal Regulations, 2014 CFR
2014-01-01
... inch round hole Pass through 24/64 inch round hole Pass through 16/64 inch round hole Pass through 8/64 inch round hole Halves 5 1 (included in 5 percent) Pieces and halves 1 18 3 (included in 18 percent) 1...
7 CFR 51.1430 - U.S. No. 1 Halves.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Standards for Grades of Shelled Pecans Grades § 51.1430 U.S. No. 1 Halves. “U.S. No. 1 Halves” consists of pecan half-kernels which meet the following requirements: (a) For quality: (1) Well dried; (2) Fairly...
Churn-Resilient Replication Strategy for Peer-to-Peer Distributed Hash-Tables
NASA Astrophysics Data System (ADS)
Legtchenko, Sergey; Monnet, Sébastien; Sens, Pierre; Muller, Gilles
DHT-based P2P systems provide a fault-tolerant and scalable means of storing data blocks in a fully distributed way. Unfortunately, recent studies have shown that if the connection/disconnection frequency is too high, data blocks may be lost. This is true for most implementations of current DHT-based systems. To avoid this problem, it is necessary to build highly efficient replication and maintenance mechanisms. In this paper, we study the effect of churn on existing DHT-based P2P systems such as DHash or PAST. We then propose solutions to enhance churn tolerance and evaluate them through discrete-event simulations.
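Successor-list replica placement of the kind used by DHash/PAST-style DHTs can be sketched as follows (a simplified illustration; real systems add proximity awareness, leaf sets and maintenance protocols, and the parameters here are hypothetical):

```python
# Sketch of replica placement on a DHT ring: a block keyed by `key` is
# stored on the k nodes whose IDs follow the key clockwise. After churn,
# recomputing the set over the surviving nodes restores k replicas.
import hashlib
from bisect import bisect_right

def node_id(name, m=32):
    # Hash a node name onto a 2^m identifier ring.
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % (1 << m)

def replica_set(key, nodes, k=3, m=32):
    ids = sorted(node_id(n, m) for n in nodes)
    by_id = {node_id(n, m): n for n in nodes}
    start = bisect_right(ids, key % (1 << m))  # first successor of the key
    return [by_id[ids[(start + i) % len(ids)]] for i in range(min(k, len(ids)))]
```

Because placement is a pure function of the key and the current membership, a node that disconnects is replaced deterministically, which is the basis of the replication maintenance the abstract discusses.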
An Analysis of the Applicability of Federal Law Regarding Hash-Based Searches of Digital Media
2014-06-01
that connect the SD card to the crime. • The second scenario involves a border crossing search by Customs and Border Protection (CBP). In this ... marijuana was being grown in the home of Danny Lee Kyllo due to circumstances involving another investigation. Knowing that the indoor growth of marijuana ... requested and was issued a warrant to search the home for drugs. Upon execution of the warrant, more than 100 marijuana plants were found and Kyllo was
Electron dynamics inside a vacuum tube diode through linear differential equations
NASA Astrophysics Data System (ADS)
González, Gabriel; González Orozco, Fco. Javier
2014-04-01
In this paper we analyze the motion of charged particles in a vacuum tube diode by solving linear differential equations. Our analysis is based on expressing the volume charge density as a function of the current density and coordinates only, i.e. ρ=ρ(J,z), while in the usual scheme the volume charge density is expressed as a function of the current density and electrostatic potential, i.e. ρ=ρ(J,V). We show that, in the case of slow varying charge density, the space-charge-limited current is reduced up to 50%. Our approach gives the well-known behavior of the classical current density proportional to the three-halves power of the bias potential and inversely proportional to the square of the gap distance between electrodes, and does not require the solution of the nonlinear differential equation normally associated with the Child-Langmuir formulation.
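The three-halves scaling mentioned above is the classical Child-Langmuir law, J = (4ε0/9)·sqrt(2e/m)·V^(3/2)/d², for a planar vacuum diode. The snippet below only verifies its scalings numerically; it does not reproduce the paper's 50% current-reduction result:

```python
# Classical Child-Langmuir space-charge-limited current density for a
# planar vacuum diode: J = (4*eps0/9) * sqrt(2*e/m) * V**1.5 / d**2.
from math import sqrt

EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19   # elementary charge, C
M_E = 9.1093837015e-31       # electron mass, kg

def child_langmuir(V, d):
    """Current density (A/m^2) for bias V (volts) and gap d (meters)."""
    return (4 * EPS0 / 9) * sqrt(2 * E_CHARGE / M_E) * V**1.5 / d**2
```

Quadrupling the bias multiplies J by 4^(3/2) = 8, and doubling the gap divides it by 4, which are exactly the proportionalities the abstract cites.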
NASA Technical Reports Server (NTRS)
Spiegel, Kirk W.
1990-01-01
Transducer-mounting fixture holds transducer securely against stud. Projects only slightly beyond stud after installation. Flanged transducer fits into fixture when hinged halves are opened. When halves are reclosed, fixture is tightened onto threaded stud until stud makes contact with transducer. Knurled area on fixture aids in tightening fixture on stud.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rood, S.B.; Kaufman, P.B.; Abe, H.
1987-03-01
[³H]Gibberellin A₂₀ ([³H]GA₂₀) of high specific radioactivity was applied equilaterally in a ring of microdrops to the internodal pulvinus of shoots of 3-week-old vertical normal maize (Zea mays L.), and to a pleiogravitropic (prostrate) maize mutant, lazy (la). All plants converted the [³H]GA₂₀ to [³H]GA₁- and [³H]GA₂₉-like metabolites, as well as to several metabolites with the partitioning and chromatographic behavior of glucosyl conjugates of [³H]GA₁, [³H]GA₂₉, and [³H]GA₈. The tentative identification of these putative [³H]GA glucosyl conjugates was further supported by the release of the free [³H]GA moiety after cleavage with cellulase. Within 12 hours of the [³H]GA₂₀ feed, there was a significantly higher proportion of total radioactivity in lower than in upper halves of internode and leaf sheath pulvini in gravistimulated normal maize. Further, there was a significantly higher proportion of putative free GA metabolites of [³H]GA₂₀, especially [³H]GA₁, in the lower halves of normal maize relative to upper halves. The differential localization of the metabolites between upper and lower halves was not apparent in the pleiogravitropic mutant, la. Endogenous GA-like substances were also examined in gravistimulated maize shoots. Forty-eight hours after gravistimulation of 3-week-old maize seedlings, endogenous free GA-like substances in upper and lower leaf sheath and internode pulvini halves were extracted, chromatographed, and bioassayed using the Tanginbozu dwarf rice microdrop assay. Lower halves contained higher total levels of GA-like activity.
NASA Astrophysics Data System (ADS)
Chen, Wen-Yuan; Liu, Chen-Chung
2006-01-01
The problems with binary watermarking schemes are that they have only a small amount of embeddable space and are not robust enough. We develop a slice-based large-cluster algorithm (SBLCA) to construct a robust watermarking scheme for binary images. In SBLCA, a small-amount cluster selection (SACS) strategy is used to search for a feasible slice in a large-cluster flappable-pixel decision (LCFPD) method, which is used to search for the best location for concealing a secret bit from a selected slice. This method has four major advantages over the others: (a) SBLCA has a simple and effective decision function to select appropriate concealment locations, (b) SBLCA utilizes a blind watermarking scheme without the original image in the watermark extracting process, (c) SBLCA uses slice-based shuffling capability to transfer the regular image into a hash state without remembering the state before shuffling, and finally, (d) SBLCA has enough embeddable space that every 64 pixels could accommodate a secret bit of the binary image. Furthermore, empirical results on test images reveal that our approach is a robust watermarking scheme for binary images.
Password-Only Authenticated Three-Party Key Exchange with Provable Security in the Standard Model
Nam, Junghyun; Kim, Junghwan; Kang, Hyun-Kyu; Kim, Jinsoo; Paik, Juryon
2014-01-01
Protocols for password-only authenticated key exchange (PAKE) in the three-party setting allow two clients registered with the same authentication server to derive a common secret key from their individual password shared with the server. Existing three-party PAKE protocols were proven secure under the assumption of the existence of random oracles or in a model that does not consider insider attacks. Therefore, these protocols may turn out to be insecure when the random oracle is instantiated with a particular hash function or an insider attack is mounted against the partner client. The contribution of this paper is to present the first three-party PAKE protocol whose security is proven without any idealized assumptions in a model that captures insider attacks. The proof model we use is a variant of the indistinguishability-based model of Bellare, Pointcheval, and Rogaway (2000), which is one of the most widely accepted models for security analysis of password-based key exchange protocols. We demonstrate that our protocol achieves not only the typical indistinguishability-based security of session keys but also password security against undetectable online dictionary attacks. PMID:24977229
Performance profile of NCAA Division I men's basketball games and training sessions.
Conte, D; Tessitore, A; Smiley, K; Thomas, C; Favero, T G
2016-06-01
This study aimed to analyse live and stoppage time phases, their ratio, and action played on half and full court in college basketball games. Differences were assessed for the entire games and between halves. Moreover, differences of the live/stoppage time ratio were analysed between games and game-based conditioning drills. Ten games as well as fifteen defensive, fourteen offensive and six scrimmage-type drills of the same division I men's college team (13 players) were analysed using time-motion analysis technique. Live and stoppage time were classified in five classes of duration: 1-20, 21-40, 41-60, 61-80, >80 seconds. Half court actions started and finished in the same half court. Full court actions were classified as transfer (TR) phases when at least 3 teammates crossed the mid-court line. TR phases were then classified in 5 classes of frequency: 1TR, 2TR, 3TR, 4TR, and >4TR. The results revealed no statistically significant differences between games or between halves for the considered parameters. The only significant difference was observed for live/stoppage time ratio between halves (p<0.001). Furthermore, a significant difference of the live/stoppage ratio was found between games and game-based drills (p<0.01). Post-hoc analysis demonstrated significant differences of scrimmage-type drills in comparison to games, and defensive and offensive drills (p<0.05), whereas no differences emerged for the other pairwise comparisons. The absence of differences between games in the analysed parameters might be important to characterize the model of performance in division I men's college games. Furthermore, these results encourage coaches to use game-based conditioning drills to replicate the live/stoppage time (LT/ST) ratio documented during games.
Lin, Guoqing; Okubo, Paul G.
2016-01-01
We present high-quality focal mechanisms based on a refined earthquake location catalog for the Island of Hawai'i, focusing on Mauna Loa and Kīlauea volcanoes. The relocation catalog is based on first-arrival times and waveform data of both compressional and shear waves for about 180,000 events on and near the Island of Hawai'i between 1986 and 2009 recorded by the seismic stations at the Hawaiian Volcano Observatory. We relocate all the earthquakes by applying ray tracing through an existing three-dimensional velocity model, similar event cluster analysis, and a differential-time relocation method. The resulting location catalog represents an expansion of previous relocation studies, covering a longer time period and consisting of more events with well-constrained absolute locations. The focal mechanisms are obtained based on the compressional-wave first-motion polarities and compressional-to-shear wave amplitude ratios by applying the HASH program to the waveform cross correlation relocated earthquakes. Overall, the good-quality (defined by the HASH parameters) focal solutions are dominated by normal faulting in our study area, especially in the active Ka'ōiki and Hīlea seismic zones. Kīlauea caldera is characterized by a mixture of approximately equal numbers of normal, strike-slip, and reverse faults, whereas its south flank has slightly fewer strike-slip events. Our relocation and focal mechanism results will be useful for mapping the seismic stress and strain fields and for understanding the seismic-volcanic-tectonic relationships within the magmatic systems.
Loudness function derives from data on electrical discharge rates in auditory nerve fibers
NASA Technical Reports Server (NTRS)
Howes, W. L.
1973-01-01
Judgments of the loudness of pure-tone sound stimuli yield a loudness function which relates perceived loudness to stimulus amplitude. Here a loudness function is derived from physical evidence alone, without regard to human judgments. The resultant loudness function is L = K(q - q0), where L is loudness, q is effective sound pressure, q0 is the effective sound pressure at the loudness threshold, and K is generally a weak function of the number of stimulated auditory nerve fibers. The predicted function agrees with loudness judgment data reported by Warren, which imply that, in the suprathreshold loudness regime, decreasing the sound-pressure level by 6 dB halves the loudness.
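The closing claim can be checked numerically: with L = K(q − q0) and q well above threshold (q0 negligible), lowering the sound-pressure level by about 6 dB halves q and hence halves L. The values below are illustrative:

```python
def loudness(q, K=1.0, q0=0.0):
    """Loudness function L = K (q - q0); q0 = 0 models the far-suprathreshold regime."""
    return K * (q - q0)

def attenuate_db(q, delta_db):
    """Effective sound pressure scales as 10**(delta_db / 20)."""
    return q * 10 ** (delta_db / 20)

q = 1.0
ratio = loudness(attenuate_db(q, -6.02)) / loudness(q)  # ~0.5: a 6 dB drop halves loudness
```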
NASA Astrophysics Data System (ADS)
Xu, Ling; Zhao, Zhiwen
2017-08-01
A new quantum protocol with the assistance of a semi-honest third party (TP) is proposed, which allows the participants to compare the equality of their private information without disclosing it. Different from previous protocols, this protocol utilizes quantum key distribution against the collective-dephasing noise and the collective-rotation noise, which is more robust and discards fewer samples, to transmit the classical information. In addition, this protocol utilizes the GHZ-like state and the χ+ state to produce the entanglement swapping. The Bell basis and the dual basis are used to measure the particle pairs, so that 3 bits of each participant's private information can be compared in each comparison round, which is more efficient and requires fewer rounds. Meanwhile, neither unitary operations nor hash functions are needed in this protocol. Finally, various kinds of outside and participant attacks are discussed and shown to be ineffective, so the protocol completes the comparison securely.
Hoefer, Dirk; Hammer, Timo R.
2011-01-01
The progressive public use of antimicrobial clothing has raised issues concerning skin health. A placebo-controlled side-to-side study was run with antimicrobial clothes versus fabrics of similar structure but without the antimicrobial activity, to evaluate possible adverse effects on the healthy skin microflora. Sixty volunteers were enrolled. Each participant received a set of form-fitting T-shirts constructed in two halves: an antibacterial half, displaying activities of 3–5 log-step reductions due to silver finishes or silver-loaded fibres, and a nonantibacterial control side. The microflora of the scapular skin was analyzed weekly for opportunistic and pathogenic microorganisms over six weeks. The antibacterial halves did not disturb the microflora in number or composition, whereas a silver-containing deodorant displayed a short-term disturbance. Furthermore, parameters of skin morphology and function (TEWL, pH, moisture) did not show any significant shifts. In summary, antimicrobial clothes did not show adverse effects on the ecological balance of the healthy skin microflora. PMID:22363849
V3 spinal neurons establish a robust and balanced locomotor rhythm during walking.
Zhang, Ying; Narayan, Sujatha; Geiman, Eric; Lanuza, Guillermo M; Velasquez, Tomoko; Shanks, Bayle; Akay, Turgay; Dyck, Jason; Pearson, Keir; Gosgnach, Simon; Fan, Chen-Ming; Goulding, Martyn
2008-10-09
A robust and well-organized rhythm is a key feature of many neuronal networks, including those that regulate essential behaviors such as circadian rhythmogenesis, breathing, and locomotion. Here we show that excitatory V3-derived neurons are necessary for a robust and organized locomotor rhythm during walking. When V3-mediated neurotransmission is selectively blocked by the expression of the tetanus toxin light chain subunit (TeNT), the regularity and robustness of the locomotor rhythm is severely perturbed. A similar degeneration in the locomotor rhythm occurs when the excitability of V3-derived neurons is reduced acutely by ligand-induced activation of the allatostatin receptor. The V3-derived neurons additionally function to balance the locomotor output between both halves of the spinal cord, thereby ensuring a symmetrical pattern of locomotor activity during walking. We propose that the V3 neurons establish a regular and balanced motor rhythm by distributing excitatory drive between both halves of the spinal cord.
2014-01-30
34poppers"). Cannabis: marijuana , hashish ("hash"), THC, "pot", "grass", "weed", "reefer". Tranquilizers: Quaalude, Seconal ("reds"), Valium...How many times in the past year have you used marijuana ? _______________ Have you ever used marijuana at other times in your life? YES NO...If YES, at what age did you begin smoking marijuana ? ______________ On approximately how many occasions have you used marijuana ? __________ Do
OCO-2 Fairing Bi-Sector Halves Transport
2014-03-24
VANDENBERG AIR FORCE BASE, Calif. – Both halves of the fairing for NASA's Orbiting Carbon Observatory-2 mission, or OCO-2, are towed from the Building 836 hangar to Space Launch Complex 2 on Vandenberg Air Force Base in California. Operations have begun to hoist the sections of the fairing into the Delta II launcher's environmental enclosure, or clean room, at the top of the pad's tower. The fairing will protect OCO-2 during launch aboard a United Launch Alliance Delta II rocket from Space Launch Complex 2 in July. The observatory will collect precise global measurements of carbon dioxide in the Earth's atmosphere and provide scientists with a better idea of the chemical compound's impacts on climate change. Scientists will analyze this data to improve our understanding of the natural processes and human activities that regulate the abundance and distribution of this important atmospheric gas. To learn more about OCO-2, visit http://oco.jpl.nasa.gov. Photo credit: NASA/Randy Beaudoin
Sensor-based laser ablation for tissue specific cutting: an experimental study.
Rupprecht, Stephan; Tangermann-Gerk, Katja; Wiltfang, Joerg; Neukam, Friedrich Wilhelm; Schlegel, Andreas
2004-01-01
The interaction of laser light and tissue causes measurable phenomena. These phenomena can be quantified and used to control laser drilling within a feedback system. Ten halves of dissected minipig jaws were treated with an Er:YAG laser system controlled via a feedback system. Sensor outputs were recorded and analyzed during osteotomy. The relative depth of laser ablation was calculated by 3D computed tomography and evaluated histologically. The signals produced by the laser-tissue interaction changed character dramatically after the beam passed the cortical bone layer. The radiological evaluation of 98 laser-ablated holes in the ten halves showed no ablation deeper than the cortical layer (mean value: 97.8%). Histologically, no physical damage to the alveolar nerve bundle was found. The feedback system controlled the laser drilling precisely for cortical ablation of the bone, based on the evaluation of detected and quantified phenomena related to the laser-tissue interaction.
The past and future of food stocks
NASA Astrophysics Data System (ADS)
Laio, Francesco; Ridolfi, Luca; D'Odorico, Paolo
2016-03-01
Human societies rely on food reserves and the importation of agricultural goods as means to cope with crop failures and associated food shortage. While food trade has been the subject of intensive investigations in recent years, food reserves remain poorly quantified. It is unclear how food stocks are changing and whether they are declining. In this study we use food stock records for 92 products to reconstruct 50 years of aggregated food reserves, expressed in caloric equivalent (kcal), at the regional and global scales. A detailed statistical analysis demonstrates that the overall regional and global per-capita food stocks are stationary, challenging a widespread impression that food reserves are shrinking. We develop a statistically-sound stochastic representation of stock dynamics and take the stock-halving probability as a measure of the natural variability of the process. We find that there is a 20% probability that the global per-capita stocks will be halved by 2050. There are, however, some strong regional differences: Western Europe and the region encompassing North Africa and the Middle East have smaller halving probabilities and smaller per-capita stocks, while North America and Oceania have greater halving probabilities and greater per-capita stocks than the global average. Africa exhibits low per-capita stocks and relatively high probability of stock halving by 2050, which reflects a state of higher food insecurity in this continent.
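The stock-halving probability used here as a variability measure can be illustrated with a toy Monte Carlo. The sketch below uses a stationary AR(1) process in log per-capita stocks; the persistence and noise parameters are made up for illustration and are not the authors' fitted model:

```python
import math
import random

def halving_probability(years=35, phi=0.8, sigma=0.25, trials=20000, seed=7):
    """Fraction of sample paths whose per-capita stock falls to half its
    initial value at least once within `years` annual steps (toy model:
    log-stock follows a mean-reverting AR(1) around a constant mean)."""
    rng = random.Random(seed)
    threshold = math.log(0.5)
    hits = 0
    for _ in range(trials):
        x = 0.0  # log of stock relative to its stationary mean
        for _ in range(years):
            x = phi * x + rng.gauss(0.0, sigma)
            if x <= threshold:
                hits += 1
                break
    return hits / trials

p = halving_probability()  # an estimate of Pr(stock halves within 35 years)
```

Larger noise or weaker mean reversion raises the halving probability, which is why the same stationary mean can hide very different regional risks.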
Tcheng, David K.; Nayak, Ashwin K.; Fowlkes, Charless C.; Punyasena, Surangi W.
2016-01-01
Discriminating between black and white spruce (Picea mariana and Picea glauca) is a difficult palynological classification problem that, if solved, would provide valuable data for paleoclimate reconstructions. We developed an open-source visual recognition software (ARLO, Automated Recognition with Layered Optimization) capable of differentiating between these two species at an accuracy on par with human experts. The system applies pattern recognition and machine learning to the analysis of pollen images and discovers general-purpose image features, defined by simple features of lines and grids of pixels taken at different dimensions, size, spacing, and resolution. It adapts to a given problem by searching for the most effective combination of both feature representation and learning strategy. This results in a powerful and flexible framework for image classification. We worked with images acquired using an automated slide scanner. We first applied a hash-based “pollen spotting” model to segment pollen grains from the slide background. We next tested ARLO’s ability to reconstruct black to white spruce pollen ratios using artificially constructed slides of known ratios. We then developed a more scalable hash-based method of image analysis that was able to distinguish between the pollen of black and white spruce with an estimated accuracy of 83.61%, comparable to human expert performance. Our results demonstrate the capability of machine learning systems to automate challenging taxonomic classifications in pollen analysis, and our success with simple image representations suggests that our approach is generalizable to many other object recognition problems. PMID:26867017
MOSAIK: a hash-based algorithm for accurate next-generation sequencing short-read mapping.
Lee, Wan-Ping; Stromberg, Michael P; Ward, Alistair; Stewart, Chip; Garrison, Erik P; Marth, Gabor T
2014-01-01
MOSAIK is a stable, sensitive and open-source program for mapping second and third-generation sequencing reads to a reference genome. Uniquely among current mapping tools, MOSAIK can align reads generated by all the major sequencing technologies, including Illumina, Applied Biosystems SOLiD, Roche 454, Ion Torrent and Pacific BioSciences SMRT. Indeed, MOSAIK was the only aligner to provide consistent mappings for all the generated data (sequencing technologies, low-coverage and exome) in the 1000 Genomes Project. To provide highly accurate alignments, MOSAIK employs a hash clustering strategy coupled with the Smith-Waterman algorithm. This method is well-suited to capture mismatches as well as short insertions and deletions. To support the growing interest in larger structural variant (SV) discovery, MOSAIK provides explicit support for handling known-sequence SVs, e.g. mobile element insertions (MEIs) as well as generating outputs tailored to aid in SV discovery. All variant discovery benefits from an accurate description of the read placement confidence. To this end, MOSAIK uses a neural-network based training scheme to provide well-calibrated mapping quality scores, demonstrated by a correlation coefficient between MOSAIK assigned and actual mapping qualities greater than 0.98. In order to ensure that studies of any genome are supported, a training pipeline is provided to ensure optimal mapping quality scores for the genome under investigation. MOSAIK is multi-threaded, open source, and incorporated into our command and pipeline launcher system GKNO (http://gkno.me).
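The hash-clustering idea MOSAIK describes (hash k-mers of the reference, cluster read seeds that agree on an alignment offset, then refine with Smith-Waterman) can be sketched as follows; k and the sequences are toy values, and the real aligner's data structures differ:

```python
from collections import Counter, defaultdict

def kmer_index(ref, k=4):
    """Hash table mapping each k-mer to its positions in the reference."""
    idx = defaultdict(list)
    for i in range(len(ref) - k + 1):
        idx[ref[i:i + k]].append(i)
    return idx

def best_offset(read, idx, k=4):
    """Cluster seed hits by the alignment offset they imply and return the
    best-supported one. A full aligner would then run banded Smith-Waterman
    around this offset to resolve mismatches and short indels."""
    votes = Counter()
    for j in range(len(read) - k + 1):
        for i in idx.get(read[j:j + k], []):
            votes[i - j] += 1
    return votes.most_common(1)[0][0] if votes else None

ref = "ACGTACGTTTGCA"
idx = kmer_index(ref)
offset = best_offset("GTACGTT", idx)  # -> 2: the read aligns at ref position 2
```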
Niu, Zhiqiang; Zhou, Weiya; Chen, Jun; Feng, Guoxing; Li, Hong; Hu, Yongsheng; Ma, Wenjun; Dong, Haibo; Li, Jinzhu; Xie, Sishen
2013-02-25
Ultrathin transparent, conductive SWCNT films on flexible, transparent substrates are prepared by repeatedly halving directly grown SWCNT films, and flexible, transparent supercapacitors with excellent performance are fabricated from them.
Gencrypt: one-way cryptographic hashes to detect overlapping individuals across samples
Turchin, Michael C.; Hirschhorn, Joel N.
2012-01-01
Summary: Meta-analysis across genome-wide association studies is a common approach for discovering genetic associations. However, in some meta-analysis efforts, individual-level data cannot be broadly shared by study investigators due to privacy and Institutional Review Board concerns. In such cases, researchers cannot confirm that each study represents a unique group of people, leading to potentially inflated test statistics and false positives. To resolve this problem, we created a software tool, Gencrypt, which utilizes a security protocol known as one-way cryptographic hashes to allow overlapping participants to be identified without sharing individual-level data. Availability: Gencrypt is freely available under the GNU general public license v3 at http://www.broadinstitute.org/software/gencrypt/ Contact: joelh@broadinstitute.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22302573
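The one-way-hash idea behind Gencrypt can be sketched as follows; this is a simplified illustration of the general technique, not Gencrypt's actual protocol, and the SNP encoding and salt are hypothetical. Each site hashes its participants' genotypes locally and shares only the digests, so overlapping individuals can be flagged without exchanging individual-level data:

```python
import hashlib

def fingerprint(genotypes, salt="shared-study-salt"):
    """Salted SHA-256 over a canonical genotype string: the digest can be
    shared, but the genotypes cannot be recovered from it."""
    canonical = salt + "|" + ",".join(genotypes)
    return hashlib.sha256(canonical.encode()).hexdigest()

# Two sites, each with private participant genotypes (toy SNP calls).
site_a = {"p1": ["rs1:AA", "rs2:AG"], "p2": ["rs1:AG", "rs2:GG"]}
site_b = {"q1": ["rs1:AA", "rs2:AG"], "q2": ["rs1:GG", "rs2:AA"]}

hashes_a = {fingerprint(g) for g in site_a.values()}
overlap = [pid for pid, g in site_b.items() if fingerprint(g) in hashes_a]
# overlap == ["q1"]: identical genotypes, hence a likely duplicate participant
```

In practice a genotype-based fingerprint must also tolerate missing calls and genotyping error, which is part of what a real tool like Gencrypt addresses.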
2012-02-02
VANDENBERG AIR FORCE BASE, Calif. -- Workers unload the two halves that make up the Pegasus XL rocket's fairing that will protect the NuSTAR spacecraft during launch. Inside Orbital Science's processing facility, the fairing halves will be unwrapped and processed in a clean room environmental enclosure. The Pegasus is set to launch NASA's NuSTAR spacecraft. Once the rocket and spacecraft are processed at Vandenberg, they will be flown on the Orbital Sciences’ L-1011 carrier aircraft to the Ronald Reagan Ballistic Missile Defense Test Site at the Pacific Ocean’s Kwajalein Atoll for launch. The high-energy x-ray telescope will conduct a census for black holes, map radioactive material in young supernovae remnants, and study the origins of cosmic rays and the extreme physics around collapsed stars. For more information, visit science.nasa.gov/missions/nustar/. Photo credit: NASA/Randy Beaudoin, VAFB
Joint image encryption and compression scheme based on IWT and SPIHT
NASA Astrophysics Data System (ADS)
Zhang, Miao; Tong, Xiaojun
2017-03-01
A joint lossless image encryption and compression scheme based on integer wavelet transform (IWT) and set partitioning in hierarchical trees (SPIHT) is proposed to achieve lossless image encryption and compression simultaneously. Making use of the properties of IWT and SPIHT, encryption and compression are combined. Moreover, the proposed secure set partitioning in hierarchical trees (SSPIHT) via the addition of encryption in the SPIHT coding process has no effect on compression performance. A hyper-chaotic system, nonlinear inverse operation, Secure Hash Algorithm-256(SHA-256), and plaintext-based keystream are all used to enhance the security. The test results indicate that the proposed methods have high security and good lossless compression performance.
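A plaintext-dependent keystream of the kind the scheme relies on can be sketched with SHA-256 in counter mode. This is an illustrative construction, not the paper's hyper-chaotic keystream; the point is that binding the stream to a digest of the plaintext makes every ciphertext plaintext-sensitive:

```python
import hashlib

def keystream(key: bytes, plain_digest: bytes, n: int) -> bytes:
    """Derive n keystream bytes from a secret key and a digest of the
    plaintext, so any change to the plaintext changes the whole stream."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + plain_digest + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(out[:n])

def xor_bytes(data: bytes, ks: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, ks))

key = b"secret-key"
plain = b"image coefficients after IWT"
digest = hashlib.sha256(plain).digest()
cipher = xor_bytes(plain, keystream(key, digest, len(plain)))
```

Note that the receiver needs the digest to regenerate the stream, so in a real scheme it must be conveyed or derived securely alongside the ciphertext.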
What Is True Halving in the Payoff Matrix of Game Theory?
Hasegawa, Eisuke; Yoshimura, Jin
2016-01-01
In game theory, there are two social interpretations of rewards (payoffs) for decision-making strategies: (1) the interpretation based on the utility criterion derived from expected utility theory and (2) the interpretation based on the quantitative criterion (amount of gain) derived from validity in the empirical context. A dynamic decision theory has recently been developed in which dynamic utility is a conditional (state) variable that is a function of the current wealth of a decision maker. We applied dynamic utility to the equal division in dove-dove contests in the hawk-dove game. Our results indicate that under the utility criterion, the half-share of utility becomes proportional to a player’s current wealth. Our results are consistent with studies of the sense of fairness in animals, which indicate that the quantitative criterion has greater validity than the utility criterion. We also find that traditional analyses of repeated games must be reevaluated. PMID:27487194
Load Balancing in Structured P2P Networks
NASA Astrophysics Data System (ADS)
Zhu, Yingwu
In this chapter we start by addressing the importance and necessity of load balancing in structured P2P networks, due to three main reasons. First, structured P2P networks assume uniform peer capacities while peer capacities are heterogeneous in deployed P2P networks. Second, resorting to pseudo-uniformity of the hash function used to generate node IDs and data item keys leads to imbalanced overlay address space and item distribution. Lastly, placement of data items cannot be randomized in some applications (e.g., range searching). We then present an overview of load aggregation and dissemination techniques that are required by many load balancing algorithms. Two techniques are discussed including tree structure-based approach and gossip-based approach. They make different tradeoffs between estimate/aggregate accuracy and failure resilience. To address the issue of load imbalance, three main solutions are described: virtual server-based approach, power of two choices, and address-space and item balancing. While different in their designs, they all aim to improve balance on the address space and data item distribution. As a case study, the chapter discusses a virtual server-based load balancing algorithm that strives to ensure fair load distribution among nodes and minimize load balancing cost in bandwidth. Finally, the chapter concludes with future research and a summary.
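Of the three balancing solutions above, the "power of two choices" is the simplest to sketch: probe two random nodes and place each item on the less loaded one. The node and item counts below are arbitrary:

```python
import random

def place_items(n_nodes, n_items, choices=2, seed=42):
    """Return per-node loads after placing items greedily on the least
    loaded of `choices` randomly probed nodes (choices=1 is plain random
    placement, as with a uniform hash)."""
    rng = random.Random(seed)
    loads = [0] * n_nodes
    for _ in range(n_items):
        probes = rng.sample(range(n_nodes), choices)
        target = min(probes, key=lambda i: loads[i])
        loads[target] += 1
    return loads

one = place_items(100, 1000, choices=1)   # plain random hashing
two = place_items(100, 1000, choices=2)   # power of two choices
# max(two) is typically much closer to the mean load of 10 than max(one)
```

The classic result is that the maximum load drops from roughly log n/log log n above the mean with one choice to roughly log log n with two, which is why a single extra probe buys so much balance.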
An Improved Biometrics-Based Remote User Authentication Scheme with User Anonymity
Kumari, Saru
2013-01-01
The authors review the biometrics-based user authentication scheme proposed by An in 2012. The authors show that there exist loopholes in the scheme which are detrimental for its security. Therefore the authors propose an improved scheme eradicating the flaws of An's scheme. Then a detailed security analysis of the proposed scheme is presented followed by its efficiency comparison. The proposed scheme not only withstands security problems found in An's scheme but also provides some extra features with mere addition of only two hash operations. The proposed scheme allows user to freely change his password and also provides user anonymity with untraceability. PMID:24350272
Conjunction Errors in Recognition: Emergent Structure and Metacognitive Control
ERIC Educational Resources Information Center
Verde, Michael F.
2010-01-01
Pictures consisted of component halves from either the same photographic scene (SS) or different scenes (DS). SS pictures possessed the emergent property of forming a continuous scene. In Experiment 1, conjunction lures were created by rearranging halves from studied DS pictures into new DS or SS lures. Despite equally familiar components,…
21 CFR 155.200 - Certain other canned vegetables.
Code of Federal Regulations, 2012 CFR
2012-04-01
... of the sweet pepper plant Whole; halves or halved; pieces; dice or diced; strips; chopped. Red sweet peppers Red-ripe pods of the sweet pepper plant Do. Pimientos or pimentos Red-ripe pods of the pimiento.... (v) Spice. (vi) A vinegar. (vii) Green peppers or red peppers which may be dried. (viii) Mint leaves...
1983-03-01
amounts and or unacceptability by the patrons. Certain food items (Spanish franks, green beans, and lyonnaise potatoes) had large amounts of kitchen... Greens 12.89; Gravy, Cream 22.42; Beans, Green 12.44; Soup, Vegetable 18.15; Potatoes, Lyonnaise 10.62; Potatoes, Oven-Browned 10.16; Grits (hominy) 6.51... for canned pears and 7.92% for coffee cake. Quantitatively, the largest amounts of wastes were milk, orange juice, and hash brown potatoes. Although the
Hou, Zhuang; Lin, Bin; Bao, Yu; Yan, Hai-Ning; Zhang, Miao; Chang, Xiao-Wei; Zhang, Xin-Xin; Wang, Zi-Jie; Wei, Gao-Fei; Cheng, Mao-Sheng; Liu, Yang; Guo, Chun
2017-05-26
A dual-tail approach was employed to design novel carbonic anhydrase (CA) IX inhibitors by simultaneously matching the hydrophobic and hydrophilic halves of the active site, which also contains a zinc ion as part of the catalytic center. The classic sulfanilamide moiety was used as the zinc binding group. An amino glucosamine fragment was chosen as the hydrophilic part and a cinnamamide fragment as the hydrophobic part in order to draw favorable interactions with the corresponding halves of the active site. In comparison with sulfanilamide, which is largely devoid of the hydrophilic and hydrophobic interactions with the two halves of the active site, the compounds designed and synthesized in this study showed a 1000-fold improvement in binding affinity. Most of the compounds inhibited the CA effectively, with IC50 values in the range of 7-152 nM. Compound 14e (IC50: 7 nM) was more effective than the reference drug acetazolamide (IC50: 30 nM). The results proved that the dual-tail approach of simultaneously matching the hydrophobic and hydrophilic halves of the active site by linking hydrophobic and hydrophilic fragments was useful for designing novel CA inhibitors. The effectiveness of these compounds was elucidated by both the experimental data and molecular docking simulations. This work lays a solid foundation for further development of novel CA IX inhibitors for cancer treatment. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Smith, D P; Cason, J A; Berrang, M E
2005-07-01
The effect of prechill fecal contamination on numbers of bacteria on immersion-chilled carcasses was tested in each of three replicate trials. For each trial, 16 eviscerated broiler carcasses were split into 32 halves and assigned to one of two groups. Cecal contents (0.1 g inoculated with Campylobacter and nalidixic acid-resistant Salmonella) were applied to each of eight halves in one group (direct contamination) that were placed into one paddle chiller (contaminated), whereas the other paired halves were placed into another chiller (control). From the second group of eight split birds, one of each paired half was placed in the contaminated chiller (to determine cross-contamination) and the other half was placed in the control chiller. Postchill carcass halves were sampled by a 1-min rinse in sterile water, which was collected and cultured. Bacterial counts were reported as log CFU per milliliter of rinsate. There were no significant statistical differences (paired t test, P < 0.05) from direct contamination for coliforms (mean 3.0 log CFU) and Escherichia coli (mean 2.7 log CFU), although Campylobacter numbers significantly increased from control values because of direct contamination (1.5 versus 2.1 log CFU), and the incidence increased from 79 to 100%. There was no significant effect of cross-contamination on coliform (mean 2.9 log CFU) or E. coli (mean 2.6 log CFU) numbers. Nevertheless, Campylobacter levels were significantly higher after exposure to cross-contamination (1.6 versus 2.0 log CFU), and the incidence of this bacterium increased from 75 to 100%. Salmonella-positive halves increased from 0 to 42% postchill because of direct contamination and from 0 to 25% as a result of cross-contamination after chilling. Water samples and surface swabs taken postchill from the contaminated chiller were higher for Campylobacter than those taken from the control chiller. 
Immersion chilling equilibrated bacterial numbers between contaminated and control halves subjected to either direct contamination or cross-contamination for coliforms and E. coli. Campylobacter numbers, Campylobacter incidence, and Salmonella incidence increased because of both direct contamination and cross-contamination in the chiller. Postchill E. coli numbers did not indicate which carcass halves were contaminated with feces before chilling.
Development of biometric DNA ink for authentication security.
Hashiyada, Masaki
2004-10-01
Among the various types of biometric personal identification systems, DNA provides the most reliable personal identification. It is intrinsically digital and unchangeable while the person is alive, and even after his/her death. Increasing the number of DNA loci examined can enhance the power of discrimination. This report describes the development of DNA ink, which contains synthetic DNA mixed with printing inks. Single-stranded DNA fragments encoding a personalized set of short tandem repeats (STR) were synthesized. The sequence was defined as follows. First, a decimal DNA personal identification (DNA-ID) was established based on the number of STRs in the locus. Next, this DNA-ID was encrypted using a binary, 160-bit algorithm, using a hashing function to protect privacy. Since this function is irreversible, no one can recover the original information from the encrypted code. Finally, the bit series generated above is transformed into base sequences, and double-stranded DNA fragments are amplified by the polymerase chain reaction (PCR) to protect against physical attacks. Synthesized DNA was detected successfully after samples printed in DNA ink were subjected to several resistance tests used to assess the stability of printing inks. Endurance test results showed that this DNA ink would be suitable for practical use as a printing ink and was resistant to 40 hours of ultraviolet exposure, performance commensurate with that of photogravure ink. Copyright 2004 Tohoku University Medical Press
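The encoding pipeline described (decimal DNA-ID, then a 160-bit one-way hash, then a base sequence) can be sketched as follows. This is our own illustration: SHA-1 stands in for the unspecified 160-bit hash, and the 2-bits-per-nucleotide mapping is an assumption.

```python
import hashlib

BASES = "ACGT"  # assumed mapping: each pair of bits selects one nucleotide

def dna_id_to_sequence(dna_id: str) -> str:
    """Hash a decimal DNA-ID to 160 bits, then encode 2 bits per base."""
    digest = hashlib.sha1(dna_id.encode()).digest()  # 160-bit one-way hash
    bits = "".join(f"{byte:08b}" for byte in digest)
    return "".join(BASES[int(bits[i:i + 2], 2)] for i in range(0, len(bits), 2))

seq = dna_id_to_sequence("120981512")  # hypothetical decimal DNA-ID
print(len(seq))  # 80 bases carry the 160-bit digest
```

Because the hash is irreversible, the printed sequence reveals nothing about the original STR profile, which matches the privacy property claimed in the abstract.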
Lightweight Sensor Authentication Scheme for Energy Efficiency in Ubiquitous Computing Environments.
Lee, Jaeseung; Sung, Yunsick; Park, Jong Hyuk
2016-12-01
The Internet of Things (IoT) is the intelligent technologies and services that mutually communicate information between humans and devices or between Internet-based devices. In IoT environments, various device information is collected from the user for intelligent technologies and services that control the devices. Recently, wireless sensor networks based on IoT environments are being used in sectors as diverse as medicine, the military, and commerce. Specifically, sensor techniques that collect relevant area data via mini-sensors after distributing smart dust in inaccessible areas like forests or military zones have been embraced as the future of information technology. IoT environments that utilize smart dust are composed of the sensor nodes that detect data using wireless sensors and transmit the detected data to middle nodes. Currently, since the sensors used in these environments are composed of mini-hardware, they have limited memory, processing power, and energy, and a variety of research that aims to make the best use of these limited resources is progressing. This paper proposes a method to utilize these resources while considering energy efficiency, and suggests lightweight mutual verification and key exchange methods based on a hash function that has no restrictions on operation quantity, velocity, and storage space. This study verifies the security and energy efficiency of this method through security analysis and function evaluation, comparing with existing approaches. The proposed method has great value in its applicability as a lightweight security technology for IoT environments.
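The abstract does not reproduce the protocol itself; the following is a generic sketch of nonce-based mutual verification and key exchange built only from a hash function, in the spirit described (the message layout, names, and key sizes are our assumptions, not the paper's scheme):

```python
import hashlib
import hmac
import os

def h(*parts: bytes) -> bytes:
    """Hash a sequence of byte strings with a delimiter."""
    return hashlib.sha256(b"|".join(parts)).digest()

# Sensor and middle node share a key distributed at deployment time.
shared_key = os.urandom(16)

# Step 1: both sides pick nonces; the node answers with a proof over them.
sensor_nonce = os.urandom(8)
node_nonce = os.urandom(8)
node_proof = h(shared_key, sensor_nonce, node_nonce)

# Step 2: the sensor verifies the node, then returns its own proof.
assert hmac.compare_digest(node_proof, h(shared_key, sensor_nonce, node_nonce))
sensor_proof = h(shared_key, node_nonce, sensor_nonce)

# Step 3: the node verifies the sensor; both ends derive a session key.
assert hmac.compare_digest(sensor_proof, h(shared_key, node_nonce, sensor_nonce))
session_key = h(shared_key, sensor_nonce, node_nonce, b"session")
print(len(session_key))  # 32-byte session key
```

Only hash evaluations and constant-time comparisons are needed, which is why this style of protocol suits memory- and energy-constrained sensor nodes.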
Cormode, Graham; Dasgupta, Anirban; Goyal, Amit; Lee, Chi Hoon
2018-01-01
Many modern applications of AI, such as web search, mobile browsing, image processing, and natural language processing, rely on finding similar items in a large database of complex objects. Due to the very large scale of the data involved (e.g., users' queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a. LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance, suitable for deployment in very large scale settings. The experimental results demonstrate that our variants of LSH achieve robust performance with better recall compared with "vanilla" LSH, even when using the same amount of space.
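As a minimal illustration of the LSH idea (sign-random-projection hashing, one common LSH family for cosine similarity; the paper's four Hadoop variants are not reproduced here): nearby vectors fall on the same side of most random hyperplanes, so their bit codes collide on most bits.

```python
import random

def make_hyperplane_hash(dim: int, n_bits: int, seed: int = 0):
    """Sign-random-projection LSH: one random hyperplane per output bit."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]
    def lsh(vector):
        # Each bit records which side of a hyperplane the vector falls on.
        return tuple(int(sum(p * v for p, v in zip(plane, vector)) >= 0)
                     for plane in planes)
    return lsh

lsh = make_hyperplane_hash(dim=4, n_bits=16)
a = [1.0, 0.9, 0.0, 0.1]
b = [1.0, 1.0, 0.1, 0.0]    # close to a
c = [-1.0, 0.2, -3.0, 5.0]  # far from a
ham = lambda x, y: sum(i != j for i, j in zip(x, y))
print(ham(lsh(a), lsh(b)), ham(lsh(a), lsh(c)))  # nearby pair collides on more bits
```

Bucketing items by such codes lets a search examine only candidates sharing a code with the query, rather than the whole database.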
A fast exact simulation method for a class of Markov jump processes.
Li, Yao; Hu, Lili
2015-11-14
A new method of the stochastic simulation algorithm (SSA), named the Hashing-Leaping method (HLM), for exact simulations of a class of Markov jump processes, is presented in this paper. The HLM has a conditional constant computational cost per event, which is independent of the number of exponential clocks in the Markov process. The main idea of the HLM is to repeatedly implement a hash-table-like bucket sort algorithm for all times of occurrence covered by a time step with length τ. This paper serves as an introduction to this new SSA method. We introduce the method, demonstrate its implementation, analyze its properties, and compare its performance with three other commonly used SSA methods in four examples. Our performance tests and CPU operation statistics show certain advantages of the HLM for large scale problems.
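The core of the HLM as described, a hash-table-like bucket sort of the occurrence times falling inside one leap of length τ, can be sketched as follows (a generic illustration under our own assumptions, not the authors' implementation):

```python
import random

def bucket_sort_events(firing_times, t0, tau, n_buckets):
    """Hash-table-like bucket sort of event times inside one leap [t0, t0+tau)."""
    buckets = [[] for _ in range(n_buckets)]
    width = tau / n_buckets
    for clock_id, t in firing_times:
        if t0 <= t < t0 + tau:
            # The bucket index acts as the hash of the firing time.
            buckets[int((t - t0) / width)].append((t, clock_id))
    ordered = []
    for bucket in buckets:              # each bucket holds O(1) events on average,
        ordered.extend(sorted(bucket))  # so the per-event cost stays near-constant
    return ordered

random.seed(1)
times = [(i, random.uniform(0.0, 2.0)) for i in range(1000)]
events = bucket_sort_events(times, t0=0.0, tau=1.0, n_buckets=256)
print(len(events))  # only clocks firing inside the current leap are processed
```

Because the bucket index is computed in constant time and buckets stay small on average, the cost per processed event does not grow with the number of exponential clocks, which is the property the abstract highlights.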
Method and apparatus for preventing cyclotron breakdown in partially evacuated waveguide
Moeller, Charles P.
1987-01-01
Cyclotron breakdown is prevented in a partially evacuated waveguide by providing a section of waveguide having an axial cut therein in order to apply a potential across the two halves of the waveguide. This section is positioned in the waveguide crossing the area of electron cyclotron resonance. The potential applied across the waveguide halves is used to deflect seed electrons into the wall of the waveguide in order to prevent ionization of gas molecules and creation of more electron ion pairs which would result in cyclotron breakdown. Support means is also disclosed for electrically isolating the waveguide halves and transition means is provided between the section of the waveguide with the axial cut and the solid waveguide at either end thereof.
NASA Technical Reports Server (NTRS)
Friedell, M. V. (Inventor)
1978-01-01
A disconnect composed basically of two halves, each consisting of a poppet valve operable to isolate fluid with essentially zero fluid loss, is presented. The two halves are coupled together by a quickly releasable coupling, which may be either a coupling ring tightened or loosened by a twisting motion, or a clamp operated by a pivoted lever, with an interlock to prevent disconnecting the two halves until both valves are in the closed condition. The positive feature of the device is that it requires a valve-closing step before a disconnect step, and takes structural form in an eccentric lobe mounted on the valve operating stem. If some obstruction prevents the poppet from moving to its seat, the eccentric lobe cannot be rotated to the closed position, and the interlock prevents a disconnect.
THERMALLY SHIELDED MOISTURE REMOVAL DEVICE
Miller, O.E.
1958-08-26
An apparatus is presented for removing moisture from the air within tanks by condensation upon a cartridge containing liquid air. An insulating shell made in two halves covers the cartridge within the evacuated system. The shell halves are hinged together and are operated by a system of levers from outside the tank, with the motion translated through a sylphon bellows to cover and uncover the cartridge. When the condensation of moisture is in process, the insulating shell is moved away from the liquid air cartridge, and during that part of the process when there is no freezing out of moisture, the shell halves are closed on the cartridge so that the accumulated frost is not evaporated. This insulating shell greatly reduces the consumption of liquid air in this condensation process.
Hizay, Arzu; Ozsoy, Umut; Demirel, Bahadir Murat; Ozsoy, Ozlem; Angelova, Srebrina K; Ankerne, Janina; Sarikcioglu, Sureyya Bilmen; Dunlop, Sarah A; Angelov, Doychin N; Sarikcioglu, Levent
2012-06-01
Despite increased understanding of peripheral nerve regeneration, functional recovery after surgical repair remains disappointing. A major contributing factor is the extensive collateral branching at the lesion site, which leads to inaccurate axonal navigation and aberrant reinnervation of targets. Our aim was to determine whether Y-tube reconstruction improved axonal regrowth and whether this was associated with improved function. We used a Y-tube conduit with the aim of improving navigation of regenerating axons after facial nerve transection in rats. Retrograde labeling from the zygomatic and buccal branches showed a halving in the number of double-labeled facial motor neurons (15% vs 8%; P < .05) after Y-tube reconstruction compared with facial-facial anastomosis coaptation. However, in both surgical groups, the proportion of polyinnervated motor endplates was similar (≈ 30%; P > .05), and video-based motion analysis of whisking revealed similarly poor function. Although Y-tube reconstruction decreases axonal branching at the lesion site and improves axonal navigation compared with facial-facial anastomosis coaptation, it fails to promote monoinnervation of motor endplates and confers no functional benefit.
Defining the questions: a research agenda for nontraditional authentication in arms control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hauck, Danielle K; Mac Arthur, Duncan W; Smith, Morag K
Many traditional authentication techniques have been based on hardware solutions. Thus authentication of measurement system hardware has been considered in terms of physical inspection and destructive analysis. Software authentication has implied hash function analysis or authentication tools such as Rose. Continuity of knowledge is maintained through TIDs and cameras. Although there is ongoing progress improving all of these authentication methods, there has been little discussion of the human factors involved in authentication. Issues of non-traditional authentication include sleight-of-hand substitutions, monitor perception vs. reality, and visual diversions. Since monitor confidence in a measurement system depends on the product of their confidences in each authentication element, it is important to investigate all authentication techniques, including the human factors. This paper will present an initial effort to identify the most important problems that traditional authentication approaches in safeguards have not addressed and are especially relevant to arms control verification. This will include a survey of the literature and direct engagement with nontraditional experts in areas like psychology and human factors. Based on the identification of problem areas, potential research areas will be identified and a possible research agenda will be developed.
Mishra, Dheerendra; Mukhopadhyay, Sourav; Kumari, Saru; Khan, Muhammad Khurram; Chaturvedi, Ankita
2014-05-01
Telecare medicine information systems (TMIS) present the platform to deliver clinical service door to door. The technological advances in mobile computing are enhancing the quality of healthcare, and a user can access these services using a mobile device. However, the user and the Telecare system communicate via public channels in these online services, which increases the security risk. Therefore, it is required to ensure that only an authorized user is accessing the system and that the user is interacting with the correct system. Mutual authentication provides the way to achieve this. Existing schemes are either vulnerable to attacks or have high computational cost, while a scalable authentication scheme for mobile devices should be both secure and efficient. Recently, Awasthi and Srivastava presented a biometric-based authentication scheme for TMIS with nonce. Their scheme only requires the computation of hash and XOR functions. Thus, this scheme fits TMIS. However, we observe that Awasthi and Srivastava's scheme does not achieve an efficient password change phase. Moreover, their scheme does not resist off-line password guessing attacks. Further, we propose an improvement of Awasthi and Srivastava's scheme with the aim of removing the drawbacks of their scheme.
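As a generic illustration of what "hash and XOR only" authentication material looks like (our own sketch; this is neither the Awasthi-Srivastava scheme nor the proposed improvement, and every name here is invented):

```python
import hashlib
import os

H = lambda *parts: hashlib.sha256(b"|".join(parts)).digest()
xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))

# Registration: the server masks its per-user secret with a password-derived
# hash, so the smart card stores only hash/XOR material, never a plaintext key.
server_secret, password = os.urandom(32), b"pw-1234"
card_value = xor(H(server_secret, b"user-id"), H(password))

# Login: the card unmasks with the entered password and proves knowledge of
# H(server_secret, user-id) bound to a fresh nonce.
nonce = os.urandom(8)
proof = H(xor(card_value, H(password)), nonce)

# The server recomputes the expected proof and compares.
assert proof == H(H(server_secret, b"user-id"), nonce)
print("ok")
```

Since XOR-ing the same mask twice cancels, the card recovers the server-issued value only when the correct password is entered, all with hash and XOR operations cheap enough for mobile devices.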
NASA Technical Reports Server (NTRS)
Vassout, P.; Parmentier, G.
1978-01-01
The results of the study reveal that with regard to the pulmonary lesions, twice the number of exposures is compensated for by quartering the overpressure of the wave crest. With regard to the mortality rates, it reveals that halving the overpressure of the wave crest is offset by a 20-fold increase in the number of exposures.
Effects of exposure, diet, and thermoregulation on fecal glucocorticoid measures in wild bears.
Stetz, Jeff; Hunt, Kathleen; Kendall, Katherine C; Wasser, Samuel K
2013-01-01
We examined fecal glucocorticoid (fGC) measures of nutrition and thermoregulatory demands on wild bears in Glacier National Park, Montana, and assessed how these measures changed in samples left in the field. Both ambient temperature and exposure can impact thermoregulation and sample degradation. Bear diets vary markedly with season, affecting body condition and thus fGC. We collected fecal samples during September and October, 2001, when ambient temperatures ranged from 30°C to -5°C. We collected half of each sample immediately and left the other half in its original location for 1-28 days. We used generalized linear models (GLM) to first predict fGC concentrations in fresh samples based on proxies of nutrition, ambient temperature, thermal exposure, and precipitation. These same covariates were then used to predict degradation-based differences in fGC concentrations between the paired sample halves. Variation in fGC was predicted by diet, Julian date, aspect, and the interaction between Julian date and aspect in both fresh and exposed samples. Cumulative precipitation was also a significant predictor of fGC concentrations in the exposed samples, independent of time, indicating that precipitation contributes to sample degradation but not enough to mask effects of other environmental factors on fGC concentrations. Differences between sample halves were only predicted by cumulative precipitation and exposure time; cumulative precipitation decreased, whereas exposure time increased, fGC concentrations in the exposed sample halves. Results indicate that fGC can provide reliable indices of nutrition and thermoregulatory demands in bears and that sample degradation impacts on these relations are minimal and can be virtually eliminated by controlling for cumulative precipitation over the estimated exposure times.
Microtomographic studies of subdivision of modified-release tablets.
Wilczyński, Sławomir; Koprowski, Robert; Duda, Piotr; Banyś, Anna; Błońska-Fajfrowska, Barbara
2016-09-25
The uniformity of dosage units within a certain batch is ensured when each unit contains the active pharmaceutical ingredient (API) within a narrow range around the label claim. For tablets containing a score-line authorised for dose reductions, the European Pharmacopoeia (Ph. Eur.) considers that the uniformity of the tablet parts may be based on weight measurements regardless of the tablet type (immediate or modified release). This is because it is up to the regulatory authorities first to assess whether the tablet may contain a score-line for such use. X-ray microtomography was applied to assess the symmetry of 36 modified-release tablets containing 300 mg of theophylline. The sums of the volume and surface area of the pellets in the subdivided tablets were compared. Simulations were carried out to identify the optimal amount of pellets in the tablet mass. The maximum disparity in API content between two subdivided halves was 165.18 mg vs. 133.83 mg. If the amount of pellets in the tablet mass dropped below 13% on the basis of the pellet surface area, the Ph. Eur. requirements would be exceeded. The amount of pellets in the tablet halves resulting in the greatest variability in API content was 38%. The results of this study indicate that the pellets were not distributed uniformly in the tablet mass. Thus, the uniformity of the dose in both halves of a tablet containing pellets cannot be based on weight measurements, i.e., it is necessary to develop further standards for tablet subdivision. Microtomographic methods are a very interesting alternative to expensive and time-consuming pharmacokinetic studies. Copyright © 2016 Elsevier B.V. All rights reserved.
Halving It All: How Equally Shared Parenting Works.
ERIC Educational Resources Information Center
Deutsch, Francine M.
Noting that details of everyday life contribute to parental equality or inequality, this qualitative study focused on how couples transformed parental roles to create truly equal families. Participating in the study were 88 couples in 4 categories, based on division of parental responsibilities: equal sharers, 60-40 couples, 75-25 couples, and…
Shear-Panel Test Fixture Eliminates Corner Stresses
NASA Technical Reports Server (NTRS)
Kiss, J. J.; Farley, G. L.; Baker, D. J.
1984-01-01
New design eliminates corner stresses while maintaining uniform stress across panel. Shear-panel test fixture includes eight frames and eight corner pins. Fixture assembled in two halves with shear panel sandwiched in between. Results generated with this fixture will provide a good database for the design of efficient aircraft structures and other applications.
Diagrammatic expansion for positive density-response spectra: Application to the electron gas
NASA Astrophysics Data System (ADS)
Uimonen, A.-M.; Stefanucci, G.; Pavlyukh, Y.; van Leeuwen, R.
2015-03-01
In a recent paper [Phys. Rev. B 90, 115134 (2014), 10.1103/PhysRevB.90.115134] we put forward a diagrammatic expansion for the self-energy which guarantees the positivity of the spectral function. In this work we extend the theory to the density-response function. We write the generic diagram for the density-response spectrum as the sum of "partitions." In a partition the original diagram is evaluated using time-ordered Green's functions on the left half of the diagram, antitime-ordered Green's functions on the right half of the diagram, and lesser or greater Green's functions gluing the two halves. As there exists more than one way to cut a diagram in two halves, to every diagram corresponds more than one partition. We recognize that the most convenient diagrammatic objects for constructing a theory of positive spectra are the half-diagrams. Diagrammatic approximations obtained by summing the squares of half-diagrams do indeed correspond to a combination of partitions which, by construction, yield a positive spectrum. We develop the theory using bare Green's functions and subsequently extend it to dressed Green's functions. We further prove a connection between the positivity of the spectral function and the analytic properties of the polarizability. The general theory is illustrated with several examples and then applied to solve the long-standing problem of including vertex corrections without altering the positivity of the spectrum. In fact, already the first-order vertex diagram, relevant to the study of gradient expansions, Friedel oscillations, etc., leads to spectra which are negative in certain frequency domains. We find that the simplest approximation to cure this deficiency is given by the sum of the zeroth-order bubble diagram, the first-order vertex diagram, and a partition of the second-order ladder diagram.
We evaluate this approximation in the three-dimensional homogeneous electron gas and show the positivity of the spectrum for all frequencies and densities.
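The positivity construction described in this abstract can be summarized schematically (the notation below is ours, a simplified rendering rather than the paper's formulas): if $h_D^{(s)}(\omega)$ denotes the contribution of half-diagram $D$ for gluing state $s$, then an approximation built from squares of half-diagram sums makes the response spectrum manifestly non-negative,

```latex
% Schematic form of the positivity construction (notation ours):
% h_D^{(s)}(\omega) is the value of half-diagram D with gluing state s.
-i\,\chi^{>}(\omega) \;=\; \sum_{s} \Big| \sum_{D} h_D^{(s)}(\omega) \Big|^{2} \;\ge\; 0 .
```

Any combination of partitions assembled as such a perfect square inherits positivity at every frequency, which is the property verified numerically for the electron gas above.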
Successional change in species composition alters climate sensitivity of grassland productivity.
Shi, Zheng; Lin, Yang; Wilcox, Kevin R; Souza, Lara; Jiang, Lifen; Jiang, Jiang; Jung, Chang Gyo; Xu, Xia; Yuan, Mengting; Guo, Xue; Wu, Liyou; Zhou, Jizhong; Luo, Yiqi
2018-05-31
Succession theory predicts altered sensitivity of ecosystem functions to disturbance (i.e., climate change) due to temporal shifts in plant community composition. However, empirical evidence from global change experiments is lacking to support this prediction. Here, we present findings from an 8-year global change experiment with warming and altered precipitation manipulation (doubled and halved amounts). First, we observed a temporal shift in species composition over 8 years, resulting in a transition from an annual C3-dominant plant community to a perennial C4-dominant plant community. This successional transition was independent of any experimental treatments. During the successional transition, the response of aboveground net primary productivity (ANPP) to precipitation addition magnified from neutral to +45.3%, while the response to halved precipitation attenuated substantially from -17.6% to neutral. However, warming did not affect ANPP in either state. The findings further reveal that the time-dependent climate sensitivity may be regulated by successional change in species composition, highlighting the importance of vegetation dynamics in regulating the response of ecosystem productivity to precipitation change. © 2018 John Wiley & Sons Ltd.
40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 31 2014-07-01 2014-07-01 false Sampling 1 meter square surfaces by...(b)(3) § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as...
NASA Astrophysics Data System (ADS)
Zhan, Zongqian; Wang, Chendong; Wang, Xin; Liu, Yi
2018-01-01
On the basis of today's popular virtual reality and scientific visualization, three-dimensional (3-D) reconstruction is widely used in disaster relief, virtual shopping, reconstruction of cultural relics, etc. In the traditional incremental structure from motion (incremental SFM) method, the time cost of matching is one of the main factors restricting the popularization of this method. To make the whole matching process more efficient, we propose a preprocessing method before the matching process: (1) we first construct a random k-d forest from the large number of scale-invariant feature transform (SIFT) features in the images and combine this with the pHash method to obtain a relatedness value, (2) we then construct a connected weighted graph based on the relatedness value, and (3) we finally obtain a planned sequence of adding images according to the principle of the minimum spanning tree. On this basis, we attempt to thin the minimum spanning tree to reduce the number of matchings and ensure that the images are well distributed. The experimental results show a great reduction in the number of matchings with enough object points, with only a small influence on the inner stability, which proves that this method can quickly and reliably improve the efficiency of the SFM method with unordered multiview images in complex scenes.
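The spanning-tree planning step can be sketched as follows. The relatedness values below are made-up stand-ins for the k-d forest + pHash scores, and the edge weight is taken as 1 − relatedness, so a minimum spanning tree keeps the most related image pairs:

```python
# Sketch: given a pairwise relatedness matrix (hypothetical values here),
# build a spanning tree over edge weight = 1 - relatedness with Prim's
# algorithm, yielding an ordered sequence of image pairs to match so the
# incremental SFM matcher can skip weakly related pairs.
import heapq

def matching_sequence(relatedness, start=0):
    n = len(relatedness)
    visited = {start}
    # heap entries: (1 - relatedness, image_in_tree, candidate_image)
    heap = [(1 - relatedness[start][j], start, j) for j in range(n) if j != start]
    heapq.heapify(heap)
    pairs = []
    while heap and len(visited) < n:
        w, i, j = heapq.heappop(heap)
        if j in visited:
            continue                      # stale entry, already connected
        visited.add(j)
        pairs.append((i, j))
        for k in range(n):
            if k not in visited:
                heapq.heappush(heap, (1 - relatedness[j][k], j, k))
    return pairs

# Four images; entry [i][j] is a hypothetical relatedness in [0, 1].
R = [[1.0, 0.9, 0.2, 0.1],
     [0.9, 1.0, 0.8, 0.3],
     [0.2, 0.8, 1.0, 0.7],
     [0.1, 0.3, 0.7, 1.0]]
print(matching_sequence(R))  # only n - 1 = 3 pairs need full matching
```

Only the n − 1 tree edges are matched instead of all n(n − 1)/2 pairs, which is the source of the reported reduction in matching cost.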
Multichromosomal median and halving problems under different genomic distances
Tannier, Eric; Zheng, Chunfang; Sankoff, David
2009-01-01
Background Genome median and genome halving are combinatorial optimization problems that aim at reconstructing ancestral genomes as well as the evolutionary events leading from the ancestor to extant species. Exploring complexity issues is a first step towards devising efficient algorithms. The complexity of the median problem for unichromosomal genomes (permutations) has been settled for both the breakpoint distance and the reversal distance. Although the multichromosomal case has often been assumed to be a simple generalization of the unichromosomal case, it is also a relaxation so that complexity in this context does not follow from existing results, and is open for all distances. Results We settle here the complexity of several genome median and halving problems, including a surprising polynomial result for the breakpoint median and guided halving problems in genomes with circular and linear chromosomes, showing that the multichromosomal problem is actually easier than the unichromosomal problem. Still other variants of these problems are NP-complete, including the DCJ double distance problem, previously mentioned as an open question. We list the remaining open problems. Conclusion This theoretical study clears up a wide swathe of the algorithmical study of genome rearrangements with multiple multichromosomal genomes. PMID:19386099
Axtell, Stephen P; Russell, Scott M; Berman, Elliot
2006-04-01
This study was conducted to evaluate the effect of immersion chilling of broiler chicken carcasses in tap water (TAP) or TAP containing 50 ppm of monochloramine (MON) with respect to chloroform formation, total chlorine content, 2-thiobarbituric acid (TBA) values, and fatty acid profiles. Ten broiler chicken carcasses were chilled in TAP or MON for 6 h. After exposure, the carcasses were removed and cut in half along the median plane into right and left halves. After roasting the left halves, samples of the breast, thigh, and skin (with fat) were collected, subjected to fatty acid profiling, and assayed for chloroform, total chlorine, and TBA. The uncooked right halves of each carcass were stored at 4 degrees C for 10 days and then roasted. After roasting these right halves, samples of breast, thigh, and skin (with fat) were collected from each carcass half, subjected to fatty acid profiling, and assayed for chloroform, total chlorine, and TBA. There were no statistical differences between TAP- and MON-treated fresh or stored products with regard to chloroform levels, total chlorine content, TBA values, or fatty acid profiles.
Provably Secure Password-based Authentication in TLS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdalla, Michel; Emmanuel, Bresson; Chevassut, Olivier
2005-12-20
In this paper, we show how to design an efficient, provably secure password-based authenticated key exchange mechanism specifically for the TLS (Transport Layer Security) protocol. The goal is to provide a technique that allows users to employ (short) passwords to securely identify themselves to servers. As our main contribution, we describe a new password-based technique for user authentication in TLS, called Simple Open Key Exchange (SOKE). Loosely speaking, the SOKE ciphersuites are unauthenticated Diffie-Hellman ciphersuites in which the client's Diffie-Hellman ephemeral public value is encrypted using a simple mask generation function. The mask is simply a constant value raised to the power of (a hash of) the password. The SOKE ciphersuites improve on previous password-based authentication ciphersuites for TLS by combining the following features. First, SOKE has formal security arguments; the proof of security based on the computational Diffie-Hellman assumption is in the random oracle model, and holds for concurrent executions and for arbitrarily large password dictionaries. Second, SOKE is computationally efficient; in particular, it only needs operations in a sufficiently large prime-order subgroup for its Diffie-Hellman computations (no safe primes). Third, SOKE provides good protocol flexibility because the user identity and password are only required once a SOKE ciphersuite has actually been negotiated, and after the server has sent a server identity.
Turbine blade with spar and shell
Davies, Daniel O [Palm City, FL; Peterson, Ross H [Loxahatchee, FL
2012-04-24
A turbine blade with a spar and shell construction in which the spar and the shell are both secured within two platform halves. The spar and the shell each include outward extending ledges on the bottom ends that fit within grooves formed on the inner sides of the platform halves to secure the spar and the shell against radial movement when the two platform halves are joined. The shell is also secured to the spar by hooks extending from the shell that slide into grooves formed on the outer surface of the spar. The hooks form a serpentine flow cooling passage between the shell and the spar. The spar includes cooling holes on the lower end in the leading edge region to discharge cooling air supplied through the platform root and into the leading edge cooling channel.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamamoto, Seiichi, E-mail: s-yama@met.nagoya-u.ac.jp; Okumura, Satoshi; Komori, Masataka
We developed a prototype positron emission tomography (PET) system based on a new concept called Open-close PET, which has two modes: open and close. In the open-mode, the detector ring is separated into two halved rings, the subject is imaged through the open space, and a projection image is formed. In the close-mode, the detector ring is closed into a regular circular ring, the subject can be imaged without an open space, and reconstructed images can be made without artifacts. The block detector of the Open-close PET system consists of two scintillator blocks that use two types of gadolinium orthosilicate (GSO) scintillators with different decay times, angled optical fiber-based image guides, and a flat panel photomultiplier tube. The GSO pixel size was 1.6 × 2.4 × 7 mm and 8 mm for the fast (35 ns) and slow (60 ns) GSOs, respectively. These GSOs were arranged into an 11 × 15 matrix and optically coupled in the depth direction to form a depth-of-interaction detector. The angled optical fiber-based image guides were used to arrange the two scintillator blocks at 22.5° so that they can be arranged in a hexadecagonal shape with eight block detectors to simplify the reconstruction algorithm. The detector ring was divided into two halves to realize the open-mode and set on a mechanical stand with which the distance between the two parts can be manually changed. The spatial resolution in the close-mode was 2.4-mm FWHM, and the sensitivity was 1.7% at the center of the field-of-view. In both the close- and open-modes, we made sagittal (y-z plane) projection images between the two halved detector rings. We obtained reconstructed and projection images of ¹⁸F-NaF rat studies and proton-irradiated phantom images. These results indicate that our developed Open-close PET is useful for applications such as proton therapy as well as molecular imaging.
Design and implementation of encrypted and decrypted file system based on USBKey and hardware code
NASA Astrophysics Data System (ADS)
Wu, Kehe; Zhang, Yakun; Cui, Wenchao; Jiang, Ting
2017-05-01
To protect the privacy of sensitive data, an encryption/decryption file system based on a USBKey and hardware code is designed and implemented in this paper. The system uses the USBKey and hardware code to authenticate a user. We use a random key to encrypt each file with a symmetric encryption algorithm, and the USBKey to encrypt the random key with an asymmetric encryption algorithm. At the same time, we use the MD5 algorithm to calculate the hash of the file to verify its integrity. Experimental results show that large files can be encrypted and decrypted in a very short time. The system has high efficiency and ensures the security of documents.
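The hybrid scheme can be sketched with the standard library alone. The keystream cipher below (SHA-256 in counter mode) is a toy stand-in for a real symmetric algorithm, and the USBKey/asymmetric wrapping of the per-file key is omitted; both are assumptions for illustration only:

```python
# Minimal stand-in for the described hybrid design: a fresh random key
# encrypts each file, and the MD5 digest of the plaintext is stored as
# the integrity tag. The SHA-256 counter-mode keystream is a toy
# substitute for a real symmetric cipher such as AES.
import hashlib, secrets

def keystream_xor(key, data):
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        pad = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

def encrypt_file(plaintext):
    key = secrets.token_bytes(32)                 # random per-file key
    digest = hashlib.md5(plaintext).hexdigest()   # integrity tag
    return key, keystream_xor(key, plaintext), digest

def decrypt_file(key, ciphertext, digest):
    plaintext = keystream_xor(key, ciphertext)    # XOR is its own inverse
    assert hashlib.md5(plaintext).hexdigest() == digest, "integrity check failed"
    return plaintext

key, ct, tag = encrypt_file(b"sensitive document")
assert decrypt_file(key, ct, tag) == b"sensitive document"
```

In the paper's design the per-file key would additionally be encrypted under the USBKey's asymmetric key before being stored alongside the ciphertext.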
Streamlined Genome Sequence Compression using Distributed Source Coding
Wang, Shuang; Jiang, Xiaoqian; Chen, Feng; Cui, Lijuan; Cheng, Samuel
2014-01-01
We aim at developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require a heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity need at the client side. Based on the variation between source and reference, our protocol adaptively picks either syndrome coding or hash coding to compress subsequences of changing code length. Our experimental results showed promising performance of the proposed method when compared with the state-of-the-art algorithm (GRS). PMID:25520552
A DRM based on renewable broadcast encryption
NASA Astrophysics Data System (ADS)
Ramkumar, Mahalingam; Memon, Nasir
2005-07-01
We propose an architecture for digital rights management based on a renewable, random key pre-distribution (KPD) scheme, HARPS (hashed random preloaded subsets). The proposed architecture caters for broadcast encryption by a trusted authority (TA) and by "parent" devices (devices used by vendors who manufacture compliant devices) for periodic revocation of devices. The KPD also facilitates broadcast encryption by peer devices, which permits peers to distribute content, and efficiently control access to the content encryption secret using subscription secrets. The underlying KPD also caters for broadcast authentication and mutual authentication of any two devices, irrespective of the vendors manufacturing the device, and thus provides a comprehensive solution for securing interactions between devices taking part in a DRM system.
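The hashed-random-preloaded-subsets idea can be sketched as follows. All sizes, depths, and the SHA-256 hash chain are illustrative assumptions, not HARPS's actual parameters:

```python
# Toy sketch of a hashed-random-preloaded-subsets key predistribution
# scheme in the spirit of HARPS: the TA holds a master key pool, each
# device is preloaded with a random subset of keys, each hashed forward
# a secret random number of times (its "hash depth").
import hashlib, random

def h(x, times):
    for _ in range(times):
        x = hashlib.sha256(x).digest()
    return x

MASTER = {i: bytes([i]) * 16 for i in range(20)}   # TA's master key pool

def preload(rng):
    device = {}
    for i in rng.sample(range(20), 8):
        depth = rng.randint(1, 5)
        device[i] = (depth, h(MASTER[i], depth))
    return device

def shared_secret(dev_a, dev_b):
    # for each common key index, hash forward to the larger of the two
    # depths; both sides reach the same value h(MASTER[i], max_depth)
    parts = b""
    for i in sorted(set(dev_a) & set(dev_b)):
        da, ka = dev_a[i]
        db = dev_b[i][0]
        parts += h(ka, max(da, db) - da)
    return hashlib.sha256(parts).hexdigest()

rng = random.Random(7)
alice, bob = preload(rng), preload(rng)
assert shared_secret(alice, bob) == shared_secret(bob, alice)
```

Because hashing forward is one-way, a device at depth d can derive keys at depth ≥ d but never recover shallower ones, which is what lets the TA revoke devices by issuing rekeying material at depths below those preloaded into compromised devices.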
ADAPTIVE TETRAHEDRAL GRID REFINEMENT AND COARSENING IN MESSAGE-PASSING ENVIRONMENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hallberg, J.; Stagg, A.
2000-10-01
A grid refinement and coarsening scheme has been developed for tetrahedral and triangular grid-based calculations in message-passing environments. The element adaption scheme is based on an edge bisection of elements marked for refinement by an appropriate error indicator. Hash-table/linked-list data structures are used to store nodal and element information. The grid along inter-processor boundaries is refined and coarsened consistently with the update of these data structures via MPI calls. The parallel adaption scheme has been applied to the solution of a transient, three-dimensional, nonlinear, groundwater flow problem. Timings indicate efficiency of the grid refinement process relative to the flow solver calculations.
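The role of the edge-keyed hash table can be sketched in two dimensions. The node ids and coordinates below are made up, and the four-way split is one common bisection variant, not necessarily the paper's exact scheme:

```python
# Sketch of the edge-keyed hash table behind edge-bisection refinement:
# neighbouring elements that bisect a shared edge must agree on the new
# midpoint node, so midpoints are memoised under an order-independent key.
def refine(triangles, coords):
    midpoints = {}                        # frozenset{a, b} -> node id
    def midpoint(a, b):
        key = frozenset((a, b))
        if key not in midpoints:
            (xa, ya), (xb, yb) = coords[a], coords[b]
            coords.append(((xa + xb) / 2, (ya + yb) / 2))
            midpoints[key] = len(coords) - 1
        return midpoints[key]
    refined = []
    for a, b, c in triangles:             # split each triangle into four
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        refined += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return refined

coords = [(0, 0), (1, 0), (0, 1), (1, 1)]
fine = refine([(0, 1, 2), (1, 3, 2)], coords)
# the shared edge {1, 2} is bisected exactly once: 5 new nodes, not 6
assert len(fine) == 8 and len(coords) == 9
```

In the parallel setting the same order-independent key (a pair of global node ids) is what lets processors on either side of an inter-processor boundary agree on shared midpoints via message exchange.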
Asymmetric distances for binary embeddings.
Gordo, Albert; Perronnin, Florent; Gong, Yunchao; Lazebnik, Svetlana
2014-01-01
In large-scale query-by-example retrieval, embedding image signatures in a binary space offers two benefits: data compression and search efficiency. While most embedding algorithms binarize both query and database signatures, it has been noted that this is not strictly a requirement. Indeed, asymmetric schemes that binarize the database signatures but not the query still enjoy the same two benefits but may provide superior accuracy. In this work, we propose two general asymmetric distances that are applicable to a wide variety of embedding techniques including locality sensitive hashing (LSH), locality sensitive binary codes (LSBC), spectral hashing (SH), PCA embedding (PCAE), PCAE with random rotations (PCAE-RR), and PCAE with iterative quantization (PCAE-ITQ). We experiment on four public benchmarks containing up to 1M images and show that the proposed asymmetric distances consistently lead to large improvements over the symmetric Hamming distance for all binary embedding techniques.
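The symmetric-versus-asymmetric contrast can be sketched with sign binarization (the function names and toy vectors are illustrative, not the paper's PCAE/LSH pipelines):

```python
# Sketch: database vectors are binarized by sign, the query stays
# real-valued. The asymmetric score sums |q_i| only where the query's
# sign disagrees with the stored bit, retaining the magnitude
# information that plain Hamming distance discards.
def binarize(v):
    return [1 if x >= 0 else 0 for x in v]

def hamming(bq, bx):
    return sum(a != b for a, b in zip(bq, bx))

def asymmetric(q, bx):
    return sum(abs(x) for x, b in zip(q, bx) if (x >= 0) != (b == 1))

q = [0.9, -0.05, 0.4, -0.7]
db = [[1, 0, 1, 0], [0, 0, 1, 0]]       # pre-binarized database codes
print([hamming(binarize(q), x) for x in db])   # [0, 1]
print([asymmetric(q, x) for x in db])          # [0, 0.9]
```

A mismatch on a near-zero query coordinate barely moves the asymmetric score, whereas a mismatch on a confident coordinate is penalized heavily; this is the intuition behind the reported accuracy gains.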
Paradeisos: A perfect hashing algorithm for many-body eigenvalue problems
NASA Astrophysics Data System (ADS)
Jia, C. J.; Wang, Y.; Mendl, C. B.; Moritz, B.; Devereaux, T. P.
2018-03-01
We describe an essentially perfect hashing algorithm for calculating the position of an element in an ordered list, appropriate for the construction and manipulation of many-body Hamiltonian, sparse matrices. Each element of the list corresponds to an integer value whose binary representation reflects the occupation of single-particle basis states for each element in the many-body Hilbert space. The algorithm replaces conventional methods, such as binary search, for locating the elements of the ordered list, eliminating the need to store the integer representation for each element, without increasing the computational complexity. Combined with the "checkerboard" decomposition of the Hamiltonian matrix for distribution over parallel computing environments, this leads to a substantial savings in aggregate memory. While the algorithm can be applied broadly to many-body, correlated problems, we demonstrate its utility in reducing total memory consumption for a series of fermionic single-band Hubbard model calculations on small clusters with progressively larger Hilbert space dimension.
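The core of such a ranking can be sketched with the combinatorial number system, which this block uses as a stand-in illustration of the idea (the paper's algorithm and conventions may differ in detail):

```python
# The position of an N-bit occupation pattern with n set bits within the
# ordered list of all such patterns is a closed-form sum of binomial
# coefficients -- no binary search and no stored list of integers needed.
from math import comb

def rank(occupied):
    """Rank of this set of occupied single-particle states among all
    states with the same particle number, in increasing integer order."""
    return sum(comb(pos, i + 1) for i, pos in enumerate(sorted(occupied)))

# all 3-site, 2-particle states in increasing integer order:
# 0b011 -> 0, 0b101 -> 1, 0b110 -> 2
assert [rank({0, 1}), rank({0, 2}), rank({1, 2})] == [0, 1, 2]
```

Since the rank is computed directly from the occupied positions, the Hamiltonian construction never needs to keep the full ordered list of integer representations in memory, which is where the aggregate memory savings come from.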
A fast exact simulation method for a class of Markov jump processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yao, E-mail: yaoli@math.umass.edu; Hu, Lili, E-mail: lilyhu86@gmail.com
2015-11-14
A new stochastic simulation algorithm (SSA), named the Hashing-Leaping method (HLM), for exact simulations of a class of Markov jump processes is presented in this paper. The HLM has a conditionally constant computational cost per event, which is independent of the number of exponential clocks in the Markov process. The main idea of the HLM is to repeatedly implement a hash-table-like bucket sort algorithm for all times of occurrence covered by a time step with length τ. This paper serves as an introduction to this new SSA method. We introduce the method, demonstrate its implementation, analyze its properties, and compare its performance with three other commonly used SSA methods in four examples. Our performance tests and CPU operation statistics show certain advantages of the HLM for large scale problems.
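The bucket-sort step can be sketched as follows (the clock names and times are made up; a full SSA would also regenerate clocks after each event):

```python
# Sketch of the hashing step: firing times inside the current window
# [t0, t0 + tau) are hashed into equal-width buckets, so the next event
# is found by scanning a short bucket instead of sorting all clocks.
def bucketize(times, t0, tau, n_buckets):
    buckets = [[] for _ in range(n_buckets)]
    for clock, t in times.items():
        if t0 <= t < t0 + tau:
            buckets[int((t - t0) / tau * n_buckets)].append((t, clock))
    return buckets

def next_event(buckets):
    for b in buckets:                   # "leap" over empty buckets
        if b:
            return min(b)               # short local scan, not a global sort
    return None

times = {"A": 0.12, "B": 0.47, "C": 0.09, "D": 1.3}   # clock firing times
buckets = bucketize(times, t0=0.0, tau=1.0, n_buckets=10)
print(next_event(buckets))  # (0.09, 'C')
```

Because only the clocks landing in the window are touched and each bucket is short on average, the per-event cost stays roughly constant regardless of the total number of clocks.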
2018-01-01
Many modern applications of AI such as web search, mobile browsing, image processing, and natural language processing rely on finding similar items from a large database of complex objects. Due to the very large scale of the data involved (e.g., users' queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a. LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance, suitable for deployment in very large scale settings. The experimental results demonstrate that our variants of LSH achieve robust performance with better recall compared with "vanilla" LSH, even when using the same amount of space. PMID:29346410
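A minimal random-hyperplane ("vanilla") LSH sketch conveys the shared core of such variants; the table counts, bit widths, and vectors below are illustrative, and a real deployment would precompute hash buckets rather than rescan the database:

```python
# Each table hashes a vector to a short sign-bit signature; candidate
# near neighbors are the database vectors colliding with the query in
# at least one table.
import random

def make_tables(dim, n_tables, bits, seed=0):
    rng = random.Random(seed)
    return [[[rng.gauss(0, 1) for _ in range(dim)] for _ in range(bits)]
            for _ in range(n_tables)]

def signature(v, planes):
    return tuple(sum(a * b for a, b in zip(v, p)) >= 0 for p in planes)

def candidates(db, query, tables):
    hits = set()
    for planes in tables:
        q = signature(query, planes)
        # for brevity we rescan the database; a real system stores
        # signature -> bucket maps built in a single pass
        hits |= {i for i, v in enumerate(db) if signature(v, planes) == q}
    return hits

db = [[1.0, 0.0], [0.9, 0.1], [-1.0, 0.05]]
tables = make_tables(dim=2, n_tables=4, bits=3)
print(candidates(db, [1.0, 0.05], tables))
```

Using several short tables instead of one long signature trades space for recall, which is exactly the knob the evaluated variants tune.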
Recognition of functional sites in protein structures.
Shulman-Peleg, Alexandra; Nussinov, Ruth; Wolfson, Haim J
2004-06-04
Recognition of regions on the surface of one protein, that are similar to a binding site of another is crucial for the prediction of molecular interactions and for functional classifications. We first describe a novel method, SiteEngine, that assumes no sequence or fold similarities and is able to recognize proteins that have similar binding sites and may perform similar functions. We achieve high efficiency and speed by introducing a low-resolution surface representation via chemically important surface points, by hashing triangles of physico-chemical properties and by application of hierarchical scoring schemes for a thorough exploration of global and local similarities. We proceed to rigorously apply this method to functional site recognition in three possible ways: first, we search a given functional site on a large set of complete protein structures. Second, a potential functional site on a protein of interest is compared with known binding sites, to recognize similar features. Third, a complete protein structure is searched for the presence of an a priori unknown functional site, similar to known sites. Our method is robust and efficient enough to allow computationally demanding applications such as the first and the third. From the biological standpoint, the first application may identify secondary binding sites of drugs that may lead to side-effects. The third application finds new potential sites on the protein that may provide targets for drug design. Each of the three applications may aid in assigning a function and in classification of binding patterns. We highlight the advantages and disadvantages of each type of search, provide examples of large-scale searches of the entire Protein Data Bank and make functional predictions.
Topics in quantum cryptography, quantum error correction, and channel simulation
NASA Astrophysics Data System (ADS)
Luo, Zhicheng
In this thesis, we mainly investigate four different topics: efficiently implementable codes for quantum key expansion [51], quantum error-correcting codes based on privacy amplification [48], private classical capacity of quantum channels [44], and classical channel simulation with quantum side information [49, 50]. For the first topic, we propose an efficiently implementable quantum key expansion protocol, capable of increasing the size of a pre-shared secret key by a constant factor. Previously, the Shor-Preskill proof [64] of the security of the Bennett-Brassard 1984 (BB84) [6] quantum key distribution protocol relied on the theoretical existence of good classical error-correcting codes with the "dual-containing" property. But the explicit and efficiently decodable construction of such codes is unknown. We show that we can lift the dual-containing constraint by employing non-dual-containing codes with excellent performance and efficient decoding algorithms. For the second topic, we propose a construction of Calderbank-Shor-Steane (CSS) [19, 68] quantum error-correcting codes, which are originally based on pairs of mutually dual-containing classical codes, by combining a classical code with a two-universal hash function. We show, using the results of Renner and Koenig [57], that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block-length. For the third topic, we prove a regularized formula for the secret-key-assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak on entanglement-assisted quantum communication capacity. This formula yields a new family of protocols, the private father protocol, under the resource inequality framework, which includes private classical communication without assisting secret keys as a child protocol.
For the fourth topic, we study and solve the problem of classical channel simulation with quantum side information at the receiver. Our main theorem has two important corollaries: rate-distortion theory with quantum side information and common randomness distillation. Simple proofs of achievability of classical multi-terminal source coding problems can be made via a unified approach using the channel simulation theorem as building blocks. The fully quantum generalization of the problem is also conjectured with outer and inner bounds on the achievable rate pairs.
Compressively Characterizing High-Dimensional Entangled States with Complementary, Random Filtering
2016-06-30
halves of the SLM, respectively. The signal and idler fields are routed to separate digital micromirror devices (DMDs) via a 500-mm lens and a 50:50 beam...are a topic of future research. Figure 4(a) shows slices of the joint-position reconstruction along the signal axis, where each curve corresponds to...in Fig. 5 as a function of measurement number. Different curves correspond to increased levels of thresholding, setting values below a percentage of
Delving Deeper: One Cut, Two Halves, Three Questions
ERIC Educational Resources Information Center
Ren, Guanshen
2009-01-01
A square can be divided into two equal parts with any cut through the center. The first question that arises is, Would any cut through the center of a regular polygon divide it into two equal parts? If not, the second question is, What kind of lines through the center of the polygon would cut it into two halves? However, many objects are not…
ERIC Educational Resources Information Center
Taubert, Jessica; Parr, Lisa A.
2009-01-01
Humans are subject to the composite illusion: two identical top halves of a face are perceived as "different" when they are presented with different bottom halves. This observation suggests that when building a mental representation of a face, the underlying system perceives the whole face, and has difficulty decomposing facial features. We…
ERIC Educational Resources Information Center
McGrath, S.; Akoojee, Salim
2007-01-01
In July 2005, President Mbeki announced the launch of the Accelerated and Shared Growth Initiative for South Africa (AsgiSA), a new development strategy designed to help the South African state meet the ANC's 2004 election pledges, namely: (1) halve unemployment; (2) halve poverty; (3) accelerate employment equity; and (4) improve broad-based…
Frantz, C.E.; Cawley, W.E.
1961-05-01
A tool is described for cutting a coolant tube adapted to contain fuel elements to enable the tube to be removed from a graphite moderator mass. The tool splits the tube longitudinally into halves and curls the longitudinal edges of the halves inwardly so that they occupy less space and can be moved radially inwardly away from the walls of the hole in the graphite for easy removal from the graphite.
COPE AND DRAG PATTERNS, EACH IS USED AT SEPARATE TIMES ...
COPE AND DRAG PATTERNS, EACH IS USED AT SEPARATE TIMES TO CREATE INDIVIDUAL MOLD. HALVES FOR AN EXHAUST MANIFOLD CASTING SIT IN FRONT OF MATCHPLATE PATTERNS WITH BOTH COPE AND DRAG SIDES AFFIXED TO A SINGLE PLATE, USED TO CREATE BOTH MOLD HALVES AT THE SAME TIME, IN THE BACKGROUND. - Southern Ductile Casting Company, Mold Making, 2217 Carolina Avenue, Bessemer, Jefferson County, AL
A secure RFID authentication protocol for healthcare environments using elliptic curve cryptosystem.
Zhao, Zhenguo
2014-05-01
With the fast advancement of wireless communication technology and the widespread use of medical systems, radio frequency identification (RFID) technology has been widely used in healthcare environments. As the first important protocols for ensuring secure communication in healthcare environments, RFID authentication protocols have attracted more and more attention. Most RFID authentication protocols are based on a hash function or symmetric cryptography. To obtain more security properties, elliptic curve cryptosystems (ECC) have been used in the design of RFID authentication protocols. Recently, Liao and Hsiao proposed a new RFID authentication protocol using ECC and claimed their protocol could withstand various attacks. In this paper, we will show that their protocol suffers from the key compromise problem, i.e. an adversary could get the private key stored in the tag. To enhance the security, we propose a new RFID authentication protocol using ECC. Detailed analysis shows the proposed protocol not only overcomes the weaknesses in Liao and Hsiao's protocol but also has the same performance. Therefore, it is more suitable for healthcare environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dykstra, D.; Blomer, J.
Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.
2007-09-26
From left: Data Parallel Line Relaxation (DPLR) software team members Kerry Trumble, Deepak Bose and David Hash analyze and predict the extreme environments NASA's space shuttle experiences during its super high-speed reentry into Earth’s atmosphere.
ERIC Educational Resources Information Center
Sims, Margaret
2011-01-01
The National Partnership Agreement for Indigenous Early Childhood Development (COAG 2008a) aims to halve the gap in mortality rates for Indigenous children under five within a decade, halve the gap for Indigenous students in reading, writing and numeracy within a decade, and ensure all Indigenous 4-year-olds have access to quality early childhood…
Jobs, Skills and Incomes in Ghana: How Was Poverty Halved?
ERIC Educational Resources Information Center
Nsowah-Nuamah, Nicholas; Teal, Francis; Awoonor-Williams, Moses
2012-01-01
On the basis of official statistics, poverty has halved in Ghana over the period from 1991 to 2005. Our objective in this paper is to assess how far this fall was linked to the creation of better paying jobs and the increase in education. We find that earnings rose rapidly in the period from 1998 to 2005, by 64% for men and by 55% for women. While…
Behavioral and neural evidence of increased attention to the bottom half of the face in deaf signers
Mitchell, Teresa V.; Letourneau, Susan M.; Maslin, Melissa T.
2013-01-01
Purpose This study examined the effects of deafness and sign language use on the distribution of attention across the top and bottom halves of faces. Methods In a composite face task, congenitally deaf signers and typically hearing controls made same/different judgments of the top or bottom halves of faces presented with the halves aligned or spatially misaligned, while event-related potentials (ERPs) were recorded. Results Both groups were more accurate when judging misaligned than aligned faces, which indicates holistic face processing. Misalignment affected all ERP components examined, with effects on the N170 resembling those of face inversion. Hearing adults were similarly accurate when judging the top and bottom halves of the faces, but deaf signers were more accurate when attending to the bottom than the top. Attending to the top elicited faster P1 and N170 latencies for both groups; within the deaf group, this effect was greatest for individuals who produced the highest accuracies when attending to the top. Conclusions These findings dovetail with previous research by providing behavioral and neural evidence of increased attention to the bottom half of the face in deaf signers, and by documenting that these effects generalize to a speeded task, in the absence of gaze shifts, with neutral facial expressions. PMID:23142816
NASA Astrophysics Data System (ADS)
Kiktenko, E. O.; Pozhar, N. O.; Anufriev, M. N.; Trushechkin, A. S.; Yunusov, R. R.; Kurochkin, Y. V.; Lvovsky, A. I.; Fedorov, A. K.
2018-07-01
Blockchain is a distributed database which is cryptographically protected against malicious modifications. While promising for a wide range of applications, current blockchain platforms rely on digital signatures, which are vulnerable to attacks by means of quantum computers. The same, albeit to a lesser extent, applies to cryptographic hash functions that are used in preparing new blocks, so parties with access to quantum computation would have unfair advantage in procuring mining rewards. Here we propose a possible solution to the quantum era blockchain challenge and report an experimental realization of a quantum-safe blockchain platform that utilizes quantum key distribution across an urban fiber network for information-theoretically secure authentication. These results address important questions about realizability and scalability of quantum-safe blockchains for commercial and governmental applications.
A statistical learning strategy for closed-loop control of fluid flows
NASA Astrophysics Data System (ADS)
Guéniat, Florimond; Mathelin, Lionel; Hussaini, M. Yousuff
2016-12-01
This work discusses a closed-loop control strategy for complex systems utilizing scarce and streaming data. A discrete embedding space is first built using hash functions applied to the sensor measurements, from which a Markov process model is derived, approximating the complex system's dynamics. A control strategy is then learned using reinforcement learning once rewards relevant to the control objective are identified. This method is designed for experimental configurations, requiring neither computations nor prior knowledge of the system, and enjoys intrinsic robustness. It is illustrated on two systems: control of the transitions of a Lorenz '63 dynamical system, and control of the drag of a cylinder flow. The method is shown to perform well.
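The pipeline described above (hash sensor measurements into discrete states, then estimate a Markov model from the state sequence) can be sketched minimally. The binning hash below is a stand-in for the paper's hash functions, and the bin count and measurement range are illustrative assumptions:

```python
import numpy as np

def hash_state(measurement, n_bins=8, lo=-20.0, hi=20.0):
    """Map a real-valued sensor vector to a discrete state index by
    binning each component (a simple stand-in for the paper's hash)."""
    bins = np.clip(((np.asarray(measurement) - lo) / (hi - lo) * n_bins).astype(int),
                   0, n_bins - 1)
    # Combine per-component bins into one integer state (mixed-radix).
    state = 0
    for b in bins:
        state = state * n_bins + int(b)
    return state

def transition_counts(states, n_states):
    """Estimate a Markov transition-count matrix from a state sequence."""
    counts = np.zeros((n_states, n_states))
    for s, t in zip(states[:-1], states[1:]):
        counts[s, t] += 1
    return counts
```

Normalizing each row of the count matrix yields the transition probabilities on which a reinforcement-learning controller could then be trained.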
Complex Envelope Properties, Interpretation, Filtering, and Evaluation
1991-02-01
Only fragments of this report's text survive extraction. They concern the behavior of a time function y(t) with a discontinuity of value D at time t0; note that under a trapezoidal-type integration rule the "alias-free" interval in the time domain is approximately halved; and describe summing trapezoidal-rule approximations to obtain final approximations by sampling the results for causal real y(t).
Tackling the challenges of matching biomedical ontologies.
Faria, Daniel; Pesquita, Catia; Mott, Isabela; Martins, Catarina; Couto, Francisco M; Cruz, Isabel F
2018-01-15
Biomedical ontologies pose several challenges to ontology matching due both to the complexity of the biomedical domain and to the characteristics of the ontologies themselves. The biomedical tracks in the Ontology Matching Evaluation Initiative (OAEI) have spurred the development of matching systems able to tackle these challenges, and benchmarked their general performance. In this study, we dissect the strategies employed by matching systems to tackle the challenges of matching biomedical ontologies and gauge the impact of the challenges themselves on matching performance, using the AgreementMakerLight (AML) system as the platform for this study. We demonstrate that the linear complexity of the hash-based searching strategy implemented by most state-of-the-art ontology matching systems is essential for matching large biomedical ontologies efficiently. We show that accounting for all lexical annotations (e.g., labels and synonyms) in biomedical ontologies leads to a substantial improvement in F-measure over using only the primary name, and that accounting for the reliability of different types of annotations generally also leads to a marked improvement. Finally, we show that cross-references are a reliable source of information and that, when using biomedical ontologies as background knowledge, it is generally more reliable to use them as mediators than to perform lexical expansion. We anticipate that translating traditional matching algorithms to the hash-based searching paradigm will be a critical direction for the future development of the field. Improving the evaluation carried out in the biomedical tracks of the OAEI will also be important, as without proper reference alignments there is only so much that can be ascertained about matching systems or strategies. 
Nevertheless, it is clear that, to tackle the various challenges posed by biomedical ontologies, ontology matching systems must be able to efficiently combine multiple strategies into a mature matching approach.
Gap junction- and hemichannel-independent actions of connexins.
Jiang, Jean X; Gu, Sumin
2005-06-10
Connexins have been known to be the protein building blocks of gap junctions and mediate cell-cell communication. In contrast to the conventional dogma, recent evidence suggests that in addition to forming gap junction channels, connexins possess gap junction-independent functions. One important gap junction-independent function for connexins is to serve as the major functional component for hemichannels, the un-apposed halves of gap junctions. Hemichannels, as independent functional units, play roles that are different from that of gap junctions in the cell. The other functions of connexins appear to be gap junction- and hemichannel-independent. Published studies implicate the latter functions of connexins in cell growth, differentiation, tumorigenicity, injury, and apoptosis, although the mechanistic aspects of these actions remain largely unknown. In this review, gap junction- and hemichannel-independent functions of connexins are summarized, and the molecular mechanisms underlying these connexin functions are speculated and discussed.
Friction Stir Welding of GRCop-84 for Combustion Chamber Liners
NASA Technical Reports Server (NTRS)
Russell, Carolyn K.; Carter, Robert; Ellis, David L.; Goudy, Richard
2004-01-01
GRCop-84 is a copper-chromium-niobium alloy developed by the Glenn Research Center for liquid rocket engine combustion chamber liners. GRCop-84 exhibits superior properties over conventional copper-base alloys in a liquid hydrogen-oxygen operating environment. The Next Generation Launch Technology program has funded a program to demonstrate scale-up production capabilities of GRCop-84 to levels suitable for main combustion chamber production for the prototype rocket engine. This paper describes a novel method of manufacturing the main combustion chamber liner. The process consists of several steps: extrude the GRCop-84 powder into billets, roll the billets into plates, bump form the plates into cylinder halves, and friction stir weld the halves into a cylinder. The cylinder is then metal spun formed to near net liner dimensions, followed by finish machining to the final configuration. This paper describes the friction stir weld process development, including tooling and non-destructive inspection techniques, culminating in the successful production of a liner preform completed through spin forming.
Dual beam optical system for pulsed laser ablation film deposition
Mashburn, D.N.
1996-09-24
A laser ablation apparatus having a laser source outputting a laser ablation beam includes an ablation chamber having a sidewall, a beam divider for dividing the laser ablation beam into two substantially equal halves, and a pair of mirrors for converging the two halves on a surface of the target from complementary angles relative to the target surface normal, thereby generating a plume of ablated material emanating from the target. 3 figs.
Protection of Health Imagery by Region Based Lossless Reversible Watermarking Scheme
Priya, R. Lakshmi; Sadasivam, V.
2015-01-01
Providing authentication and integrity for medical images is a challenge, and this work proposes a new blind fragile region-based lossless reversible watermarking technique to improve the trustworthiness of medical images. The proposed technique embeds the watermark using a reversible least significant bit embedding scheme. The scheme combines hashing, compression, and digital signature techniques to create a content-dependent watermark, making use of the compressed region of interest (ROI) for recovery of the ROI as reported in the literature. Experiments were carried out to prove the performance of the scheme, and their assessment reveals that the ROI is extracted intact and that the PSNR values obtained show the presented scheme offers greater protection for health imagery. PMID:26649328
Privacy-Preserving Authentication of Users with Smart Cards Using One-Time Credentials
NASA Astrophysics Data System (ADS)
Park, Jun-Cheol
User privacy preservation is critical to prevent many sophisticated attacks that are based on the user's server access patterns and ID-related information. We propose a password-based user authentication scheme that provides strong privacy protection using one-time credentials. It eliminates the possibility of tracing a user's authentication history and hides the user's ID and password even from servers. In addition, it is resistant against user impersonation even if both a server's verification database and a user's smart card storage are disclosed. We also provide a revocation scheme for a user to promptly invalidate the user's credentials on a server when the user's smart card is compromised. The schemes use lightweight operations only such as computing hashes and bitwise XORs.
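The abstract does not give the construction, but a classic hash-chain (Lamport/S-Key style) one-time credential illustrates how hashes alone can provide one-time, replay-resistant authentication; the `Server` class and chain length below are illustrative assumptions, not the paper's scheme:

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def make_chain(seed: bytes, n: int):
    """Build a hash chain seed, H(seed), ..., H^n(seed).
    The server registers only the final value H^n(seed)."""
    chain = [seed]
    for _ in range(n):
        chain.append(H(chain[-1]))
    return chain

class Server:
    def __init__(self, anchor: bytes):
        self.current = anchor  # H^n(seed), registered once

    def verify(self, credential: bytes) -> bool:
        # A valid one-time credential hashes to the stored value;
        # on success the stored value moves one step down the chain,
        # so the same credential can never be replayed.
        if H(credential) == self.current:
            self.current = credential
            return True
        return False
```

The user reveals chain values in reverse order; each is accepted exactly once, so an eavesdropped credential is useless afterward.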
NASA Astrophysics Data System (ADS)
Ouargui, Ahmed; Belouaggadia, Naoual; Elbouari, Abdeslam; Ezzine, Mohammed
2018-05-01
Buildings are responsible for 36% of the final energy consumption in Morocco [1-2], and a reduction of the energy consumption of buildings is a priority for the kingdom in order to reach its energy saving goals. One of the most effective actions to reduce energy consumption is the selection and development of innovative and efficient building materials [3]. In this work, we present an experimental study of the effect of adding treated organic waste (paper, cardboard, hash) on the mechanical and thermal properties of cement and clay bricks. Thermal conductivity, specific heat and mechanical resistance were investigated in terms of additive content and size. Soaking time and drying temperature were also taken into account. The results reveal that thermal conductivity decreases for the paper-cement mixture as well as for the paper-clay mixture, and seems to stabilize around 40%. In the case of the paper-cement composite, it is found that, for an additive quantity exceeding 15%, the compressive strength exceeds the standard for hollow non-load-bearing masonry. However, the paper-clay mixture seems to give more interesting results in terms of compressive strength for a mass composition of 15% paper. Given the positive results achieved, it seems possible to use these composites for the construction of walls, ceilings and roofs of housing while minimizing the energy consumption of the building.
2012-01-01
Background Chaos Game Representation (CGR) is an iterated function that bijectively maps discrete sequences into a continuous domain. As a result, discrete sequences can be object of statistical and topological analyses otherwise reserved to numerical systems. Characteristically, CGR coordinates of substrings sharing an L-long suffix will be located within 2-L distance of each other. In the two decades since its original proposal, CGR has been generalized beyond its original focus on genomic sequences and has been successfully applied to a wide range of problems in bioinformatics. This report explores the possibility that it can be further extended to approach algorithms that rely on discrete, graph-based representations. Results The exploratory analysis described here consisted of selecting foundational string problems and refactoring them using CGR-based algorithms. We found that CGR can take the role of suffix trees and emulate sophisticated string algorithms, efficiently solving exact and approximate string matching problems such as finding all palindromes and tandem repeats, and matching with mismatches. The common feature of these problems is that they use longest common extension (LCE) queries as subtasks of their procedures, which we show to have a constant time solution with CGR. Additionally, we show that CGR can be used as a rolling hash function within the Rabin-Karp algorithm. Conclusions The analysis of biological sequences relies on algorithmic foundations facing mounting challenges, both logistic (performance) and analytical (lack of unifying mathematical framework). CGR is found to provide the latter and to promise the former: graph-based data structures for sequence analysis operations are entailed by numerical-based data structures produced by CGR maps, providing a unifying analytical framework for a diversity of pattern matching problems. PMID:22551152
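The use of a rolling hash within the Rabin-Karp algorithm, mentioned in the conclusions, works as follows; this is the standard polynomial rolling hash rather than the CGR-based variant the paper proposes:

```python
def rabin_karp(text: str, pattern: str, base=256, mod=1_000_003):
    """Find all occurrences of pattern in text using a rolling hash."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    h_pat = h_win = 0
    high = pow(base, m - 1, mod)  # weight of the window's leading character
    for i in range(m):
        h_pat = (h_pat * base + ord(pattern[i])) % mod
        h_win = (h_win * base + ord(text[i])) % mod
    hits = []
    for i in range(n - m + 1):
        # Verify on hash match to rule out spurious collisions.
        if h_win == h_pat and text[i:i + m] == pattern:
            hits.append(i)
        if i < n - m:
            # Roll the window: drop text[i], append text[i + m].
            h_win = ((h_win - ord(text[i]) * high) * base + ord(text[i + m])) % mod
    return hits
```

Each window update is O(1), which is what makes the hash "rolling" and the overall scan linear in expectation.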
Privacy-Preserving Patient Similarity Learning in a Federated Environment: Development and Analysis.
Lee, Junghye; Sun, Jimeng; Wang, Fei; Wang, Shuang; Jun, Chi-Hyuck; Jiang, Xiaoqian
2018-04-13
There is an urgent need for the development of global analytic frameworks that can perform analyses in a privacy-preserving federated environment across multiple institutions without privacy leakage. A few studies on the topic of federated medical analysis have been conducted recently with the focus on several algorithms. However, none of them have solved similar patient matching, which is useful for applications such as cohort construction for cross-institution observational studies, disease surveillance, and clinical trials recruitment. The aim of this study was to present a privacy-preserving platform in a federated setting for patient similarity learning across institutions. Without sharing patient-level information, our model can find similar patients from one hospital to another. We proposed a federated patient hashing framework and developed a novel algorithm to learn context-specific hash codes to represent patients across institutions. The similarities between patients can be efficiently computed using the resulting hash codes of corresponding patients. To avoid security attack from reverse engineering on the model, we applied homomorphic encryption to patient similarity search in a federated setting. We used sequential medical events extracted from the Multiparameter Intelligent Monitoring in Intensive Care-III database to evaluate the proposed algorithm in predicting the incidence of five diseases independently. Our algorithm achieved averaged area under the curves of 0.9154 and 0.8012 with balanced and imbalanced data, respectively, in κ-nearest neighbor with κ=3. We also confirmed privacy preservation in similarity search by using homomorphic encryption. The proposed algorithm can help search similar patients across institutions effectively to support federated data analysis in a privacy-preserving manner.
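The learned hash codes and homomorphic encryption in this work go beyond what a short sketch can show, but the core payoff of patient hashing, fast nearest-neighbor search via Hamming distance on compact binary codes, is easy to illustrate. The patient IDs and 4-bit codes below are invented for illustration:

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two equal-length binary codes stored as ints."""
    return bin(a ^ b).count("1")

def k_nearest(query: int, codes: dict, k: int = 3):
    """Return the k patient IDs whose hash codes are closest to the query."""
    return sorted(codes, key=lambda pid: hamming(query, codes[pid]))[:k]

# Hypothetical 4-bit codes for three patients:
codes = {"p1": 0b1010, "p2": 0b0101, "p3": 0b1011}
```

Because XOR and popcount are cheap, similarity search over hash codes scales far better than comparing raw patient event sequences.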
3D animation of facial plastic surgery based on computer graphics
NASA Astrophysics Data System (ADS)
Zhang, Zonghua; Zhao, Yan
2013-12-01
More and more people, especially women, desire to be more beautiful than ever. To some extent this has become possible, because facial plastic surgery has been practiced since the early 20th century and even earlier, when doctors were simply treating facial war injuries. However, the outcome of an operation is not always satisfying, since patients cannot see an animation of the result beforehand. In this paper, by combining facial plastic surgery and computer graphics, a novel method of simulating post-operative appearance is given to demonstrate the modified face from different viewpoints. The 3D human face data are obtained by using 3D fringe-pattern imaging systems and CT imaging systems and then converted into the STL (STereo Lithography) file format. An STL file is made up of small 3D triangular primitives. The triangular mesh can be reconstructed by using a hash function. The front-most triangular meshes in depth are picked out from the set of triangles by a ray-casting technique. Mesh deformation during simulation is based on the front triangular mesh and deforms the area of interest instead of control points. Experiments on a face model show that the proposed 3D animation of facial plastic surgery can effectively demonstrate the simulated post-operative appearance.
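STL files store each triangle as an independent vertex triple, so shared vertices are duplicated and connectivity is lost; a hash map over (rounded) coordinates is the usual way to rebuild the mesh, which is plausibly what the hash function above is doing. The function name and tolerance below are illustrative assumptions:

```python
def build_mesh(triangles, tol=1e-6):
    """Rebuild shared-vertex connectivity from an STL triangle soup.
    Vertices are deduplicated via a hash on coordinates rounded to tol."""
    key = lambda v: tuple(round(c / tol) for c in v)
    index = {}      # hashed position -> vertex id
    vertices = []   # unique vertex coordinates
    faces = []      # triangles as triples of vertex ids
    for tri in triangles:
        face = []
        for v in tri:
            k = key(v)
            if k not in index:
                index[k] = len(vertices)
                vertices.append(v)
            face.append(index[k])
        faces.append(tuple(face))
    return vertices, faces
```

Two triangles sharing an edge then reference the same two vertex ids, which is the connectivity that deformation and ray-casting steps need.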
Protein changes in leaf-sheath pulvini of barley (hordeum) induced by gravistimulation
NASA Technical Reports Server (NTRS)
Kaufman, P. B.; Song, I.
1984-01-01
Sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) patterns of salt-soluble proteins elicited by gravistimulation were examined in the top and bottom halves of the gravistimulated pulvini, with the following results: at least five proteins increased in the tissues derived from the bottom halves of the pulvini, in the approximate molecular weight range of 91, 57, 50, 22 and 17 kilodaltons. SDS densitometric scans indicated that two of them are probably cellulase and invertase.
Mechanical end joint system for structural column elements
NASA Technical Reports Server (NTRS)
Bush, H. G.; Wallsom, R. E. (Inventor)
1982-01-01
A mechanical end joint system, useful for the transverse connection of strut elements to a common node, comprises a node joint half with a semicircular tongue and groove, and a strut joint half with a semicircular tongue and groove. The two joint halves are engaged transversely and the connection is made secure by the inherent physical property characteristics of locking latches and/or by a spring-actioned shaft. A quick release mechanism provides rapid disengagement of the joint halves.
Guided genome halving: hardness, heuristics and the history of the Hemiascomycetes.
Zheng, Chunfang; Zhu, Qian; Adam, Zaky; Sankoff, David
2008-07-01
Some present-day species have incurred a whole genome doubling event in their evolutionary history, and this is reflected today in patterns of duplicated segments scattered throughout their chromosomes. These duplications may be used as data to 'halve' the genome, i.e. to reconstruct the ancestral genome at the moment of doubling, but the solution is often highly nonunique. To resolve this problem, we take account of outgroups, external reference genomes, to guide and narrow down the search. We improve on a previous, computationally costly, 'brute force' method by adapting the genome halving algorithm of El-Mabrouk and Sankoff so that it rapidly and accurately constructs an ancestor close to the outgroups, prior to a local optimization heuristic. We apply this to reconstruct the predoubling ancestor of Saccharomyces cerevisiae and Candida glabrata, guided by the genomes of three other yeasts that diverged before the genome doubling event. We analyze the results in terms of (1) the minimum evolution criterion, (2) how close the genome halving result is to the final (local) minimum and (3) how close the final result is to an ancestor manually constructed by an expert with access to additional information. We also visualize the set of reconstructed ancestors using classic multidimensional scaling to see what aspects of the two doubled and three unduplicated genomes influence the differences among the reconstructions. The experimental software is available on request.
Shyi, Gary C-W; Wang, Chao-Chih
2016-01-01
The composite face task is one of the most popular research paradigms for measuring holistic processing of upright faces. The exact mechanism underlying holistic processing remains elusive and controversial, and some studies have suggested that holistic processing may not be evenly distributed, in that the top-half of a face might induce stronger holistic processing than its bottom-half counterpart. In two experiments, we further examined the possibility of asymmetric holistic processing. Prior to Experiment 1, we confirmed that perceptual discriminability was equated between top and bottom face halves; we found no differences in performance between top and bottom face halves when they were presented individually. Then, in Experiment 1, using the composite face task with the complete design to reduce response bias, we failed to obtain evidence that would support the notion of asymmetric holistic processing between top and bottom face halves. To further reduce performance variability and to remove lingering holistic effects observed in the misaligned condition in Experiment 1, we doubled the number of trials and increased misalignment between top and bottom face halves to make misalignment more salient in Experiment 2. Even with these additional manipulations, we were unable to find evidence indicative of asymmetric holistic processing. Taken together, these findings suggest that holistic processing is distributed homogenously within an upright face.
Security in the CernVM File System and the Frontier Distributed Database Caching System
NASA Astrophysics Data System (ADS)
Dykstra, D.; Blomer, J.
2014-06-01
Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.
Protection of data carriers using secure optical codes
NASA Astrophysics Data System (ADS)
Peters, John A.; Schilling, Andreas; Staub, René; Tompkin, Wayne R.
2006-02-01
Smartcard technologies, combined with biometric-enabled access control systems, are required for many high-security government ID card programs. However, recent field trials with some of the most secure biometric systems have indicated that smartcards are still vulnerable to well equipped and highly motivated counterfeiters. In this paper, we present the Kinegram Secure Memory Technology which not only provides a first-level visual verification procedure, but also reinforces the existing chip-based security measures. This security concept involves the use of securely-coded data (stored in an optically variable device) which communicates with the encoded hashed information stored in the chip memory via a smartcard reader device.
Proof of cipher text ownership based on convergence encryption
NASA Astrophysics Data System (ADS)
Zhong, Weiwei; Liu, Zhusong
2017-08-01
Cloud storage systems save disk space and bandwidth through deduplication technology, but this technology has also become the target of security attacks: an attacker who knows only a file's hash value can deceive the server into granting ownership of the file. To solve this security problem, and to address the differing security requirements of files in cloud storage systems, an efficient, information-theoretically secure proof-of-ownership scheme is proposed. The scheme protects data through convergent encryption and uses an improved block-level proof-of-ownership scheme, enabling block-level client-side deduplication for an efficient and secure cloud storage deduplication system.
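The key idea of convergent encryption is that the encryption key is derived from the plaintext itself, so identical files encrypt to identical ciphertexts and can still be deduplicated. The toy cipher below (a SHA-256 counter keystream) is only an illustration of that property and is not a vetted or secure construction:

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    """Illustrative SHA-256 counter keystream (NOT a real cipher)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def convergent_encrypt(plaintext: bytes):
    """Derive the key from the plaintext's own hash, so identical files
    yield identical ciphertexts -- the property deduplication requires."""
    key = hashlib.sha256(plaintext).digest()
    cipher = bytes(p ^ k for p, k in zip(plaintext, _keystream(key, len(plaintext))))
    return key, cipher

def convergent_decrypt(key: bytes, cipher: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(cipher, _keystream(key, len(cipher))))
```

A proof-of-ownership protocol then challenges the client on blocks of the (deterministic) ciphertext, so knowing only the file's hash is no longer enough to claim ownership.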
Half-unit weighted bilinear algorithm for image contrast enhancement in capsule endoscopy
NASA Astrophysics Data System (ADS)
Rukundo, Olivier
2018-04-01
This paper proposes a novel enhancement method based exclusively on the bilinear interpolation algorithm for capsule endoscopy images. The proposed method does not convert the original RGB image components to HSV or any other color space or model; instead, it processes the RGB components directly. In each component, a group of four adjacent pixels and a half-unit weight in the bilinear weighting function are used to calculate the average pixel value, identical for each pixel in that particular group. After these calculations, groups of identical pixels are overlapped successively in the horizontal and vertical directions to achieve a preliminary-enhanced image. The final-enhanced image is achieved by halving the sum of the original and preliminary-enhanced image pixels. Quantitative and qualitative experiments were conducted, focusing on pairwise comparisons between original and enhanced images. The final-enhanced images generally have the best diagnostic quality and give more detail about the visibility of vessels and structures in capsule endoscopy images.
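The averaging-then-halving structure described above can be sketched per channel. This is a simplified, non-overlapping reading of the abstract (the paper overlaps the pixel groups, which is not reproduced here), so treat it as an assumption-laden illustration rather than the published algorithm:

```python
import numpy as np

def half_unit_enhance(channel):
    """Sketch: average each 2x2 pixel group (the half-unit bilinear weights
    reduce to a plain average), then halve the sum of the original and the
    group-averaged image to get the final-enhanced channel."""
    img = np.asarray(channel, dtype=float)
    h, w = img.shape
    prelim = img.copy()
    for i in range(0, h - 1, 2):
        for j in range(0, w - 1, 2):
            prelim[i:i + 2, j:j + 2] = img[i:i + 2, j:j + 2].mean()
    return (img + prelim) / 2.0
```

Applying this to each of the R, G and B planes independently mirrors the paper's choice to avoid any color-space conversion.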
76 FR 11433 - Federal Transition To Secure Hash Algorithm (SHA)-256
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-02
... generating digital signatures. Current information systems, Web servers, applications and workstation operating systems were designed to process, and use SHA-1 generated signatures. National Institute of... cryptographic keys, and more robust algorithms by December 2013. Government systems may begin to encounter...
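The practical difference behind this transition is digest length and collision resistance; in Python both algorithms are available from the standard `hashlib` module, as this small check shows:

```python
import hashlib

msg = b"federal transition test"
sha1_digest = hashlib.sha1(msg).hexdigest()      # 160-bit digest, deprecated for signatures
sha256_digest = hashlib.sha256(msg).hexdigest()  # 256-bit digest, SHA-2 family

assert len(sha1_digest) == 40    # 160 bits / 4 bits per hex character
assert len(sha256_digest) == 64  # 256 bits / 4 bits per hex character
```

The longer SHA-256 digest is what the mandated "more robust algorithms" refers to: a larger output space makes collision attacks of the kind demonstrated against SHA-1 far harder.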
Zhang, Liping; Zhu, Shaohui; Tang, Shanyu
2017-03-01
Telecare medicine information systems (TMIS) provide flexible and convenient e-health care. However, the medical records transmitted in TMIS are exposed to unsecured public networks, so TMIS are more vulnerable to various types of security threats and attacks. To provide privacy protection for TMIS, a secure and efficient authenticated key agreement scheme is urgently needed to protect the sensitive medical data. Recently, Mishra et al. proposed a biometrics-based authenticated key agreement scheme for TMIS by using hash function and nonce, they claimed that their scheme could eliminate the security weaknesses of Yan et al.'s scheme and provide dynamic identity protection and user anonymity. In this paper, however, we demonstrate that Mishra et al.'s scheme suffers from replay attacks, man-in-the-middle attacks and fails to provide perfect forward secrecy. To overcome the weaknesses of Mishra et al.'s scheme, we then propose a three-factor authenticated key agreement scheme to enable the patient to enjoy the remote healthcare services via TMIS with privacy protection. The chaotic map-based cryptography is employed in the proposed scheme to achieve a delicate balance of security and performance. Security analysis demonstrates that the proposed scheme resists various attacks and provides several attractive security properties. Performance evaluation shows that the proposed scheme increases efficiency in comparison with other related schemes.
Paradeisos: A perfect hashing algorithm for many-body eigenvalue problems
Jia, C. J.; Wang, Y.; Mendl, C. B.; ...
2017-12-02
Here, we describe an essentially perfect hashing algorithm for calculating the position of an element in an ordered list, appropriate for the construction and manipulation of many-body Hamiltonian, sparse matrices. Each element of the list corresponds to an integer value whose binary representation reflects the occupation of single-particle basis states for each element in the many-body Hilbert space. The algorithm replaces conventional methods, such as binary search, for locating the elements of the ordered list, eliminating the need to store the integer representation for each element, without increasing the computational complexity. Combined with the “checkerboard” decomposition of the Hamiltonian matrix for distribution over parallel computing environments, this leads to a substantial savings in aggregate memory. While the algorithm can be applied broadly to many-body, correlated problems, we demonstrate its utility in reducing total memory consumption for a series of fermionic single-band Hubbard model calculations on small clusters with progressively larger Hilbert space dimension.
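The abstract does not spell out the construction, but one standard combinatorial perfect hash for occupation-number basis states, which may differ in detail from the Paradeisos algorithm itself, computes the lexicographic rank of a bit pattern among all patterns with the same particle count:

```python
from math import comb

def state_index(state: int) -> int:
    """Perfect hash: lexicographic rank of a basis state (bitstring of
    occupied sites) among all states with the same particle number.
    The i-th occupied site (0-based), at bit position p, contributes
    C(p, i + 1) to the rank, so no lookup table is needed."""
    idx, i, pos = 0, 0, 0
    while state:
        if state & 1:
            idx += comb(pos, i + 1)
            i += 1
        state >>= 1
        pos += 1
    return idx
```

Because the rank is computed directly from the bit pattern, the ordered list of integer representations never has to be stored or searched, which is exactly the memory saving the paper targets.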
DOE Office of Scientific and Technical Information (OSTI.GOV)
McClanahan, Richard; De Leon, Phillip L.
2014-08-20
The majority of state-of-the-art speaker recognition systems (SR) utilize speaker models that are derived from an adapted universal background model (UBM) in the form of a Gaussian mixture model (GMM). This is true for GMM supervector systems, joint factor analysis systems, and most recently i-vector systems. In all of the identified systems, the posterior probabilities and sufficient statistics calculations represent a computational bottleneck in both enrollment and testing. We propose a multi-layered hash system, employing a tree-structured GMM–UBM which uses Runnalls’ Gaussian mixture reduction technique, in order to reduce the number of these calculations. Moreover, with this tree-structured hash, we can trade off reduction in computation against a corresponding degradation of equal error rate (EER). As an example, we reduce this computation by a factor of 15× while incurring less than 10% relative degradation of EER (or 0.3% absolute EER) when evaluated with NIST 2010 speaker recognition evaluation (SRE) telephone data.
Paradeisos: A perfect hashing algorithm for many-body eigenvalue problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jia, C. J.; Wang, Y.; Mendl, C. B.
Here, we describe an essentially perfect hashing algorithm for calculating the position of an element in an ordered list, appropriate for the construction and manipulation of many-body Hamiltonian sparse matrices. Each element of the list corresponds to an integer value whose binary representation reflects the occupation of single-particle basis states for each element in the many-body Hilbert space. The algorithm replaces conventional methods, such as binary search, for locating the elements of the ordered list, eliminating the need to store the integer representation for each element, without increasing the computational complexity. Combined with the “checkerboard” decomposition of the Hamiltonian matrix for distribution over parallel computing environments, this leads to a substantial savings in aggregate memory. While the algorithm can be applied broadly to many-body, correlated problems, we demonstrate its utility in reducing total memory consumption for a series of fermionic single-band Hubbard model calculations on small clusters with progressively larger Hilbert space dimension.
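One well-known way to realize such a perfect hash is the combinatorial number system: the position of an occupation bitstring among all states with the same particle number is a sum of binomial coefficients, so no table of integer representations is needed. A sketch under that assumed formulation (the paper's exact construction may differ):

```python
from math import comb

def state_index(occ_bits, n_sites):
    """Perfect-hash position of an occupation bitstring among all states
    with the same particle number, ordered by integer value.
    occ_bits: integer whose binary digits encode site occupations."""
    index = 0
    seen = 0  # particles encountered so far, scanning from the low bits up
    for site in range(n_sites):
        if (occ_bits >> site) & 1:
            seen += 1
            # count the smaller states that place these `seen`
            # particles entirely among the lower `site` sites
            index += comb(site, seen)
    return index
```

For example, the six 4-site, 2-particle states 0011, 0101, 0110, 1001, 1010, 1100 map to positions 0 through 5 with no gaps, which is what makes the hash "perfect".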
Development of a prototype Open-close positron emission tomography system
NASA Astrophysics Data System (ADS)
Yamamoto, Seiichi; Okumura, Satoshi; Watabe, Tadashi; Ikeda, Hayato; Kanai, Yasukazu; Toshito, Toshiyuki; Komori, Masataka; Ogata, Yoshimune; Kato, Katsuhiko; Hatazawa, Jun
2015-08-01
We developed a prototype positron emission tomography (PET) system based on a new concept called Open-close PET, which has two modes: open and close. In the open-mode, the detector ring is separated into two halved rings; the subject is imaged through the open space and a projection image is formed. In the close-mode, the detector ring is closed into a regular circular ring, the subject is imaged without an open space, and reconstructed images can be made without artifacts. The block detector of the Open-close PET system consists of two scintillator blocks that use two types of gadolinium orthosilicate (GSO) scintillators with different decay times, angled optical-fiber-based image guides, and a flat-panel photomultiplier tube. The GSO pixel size was 1.6 × 2.4 mm, with thicknesses of 7 mm and 8 mm for the fast (35 ns) and slow (60 ns) GSOs, respectively. These GSOs were arranged into an 11 × 15 matrix and optically coupled in the depth direction to form a depth-of-interaction detector. The angled optical-fiber-based image guides were used to arrange the two scintillator blocks at 22.5° so that eight block detectors could be arranged in a hexadecagonal shape to simplify the reconstruction algorithm. The detector ring was divided into two halves to realize the open-mode and set on a mechanical stand on which the distance between the two parts can be changed manually. The spatial resolution in the close-mode was 2.4-mm FWHM, and the sensitivity was 1.7% at the center of the field-of-view. In both modes, we made sagittal (y-z plane) projection images between the two halved detector rings. We obtained reconstructed and projection images in 18F-NaF rat studies and images of a proton-irradiated phantom. These results indicate that our Open-close PET is useful for applications such as proton therapy as well as molecular imaging.
ERIC Educational Resources Information Center
Bracken, Paula
The OTIS Basic Index Access System (OBIAS) for searching the ERIC data base is described. This system offers two advantages over the previous system. First, search time has been halved, reducing the cost per search to an estimated $10 on a batch basis. Second, the "OTIS ERIC Descripter Catalog" which contains all descriptors used in the…
Classification of Encrypted Web Traffic Using Machine Learning Algorithms
2013-06-01
DPI devices to block certain websites; Yu, Cong, Chen, and Lei [52] suggest hashing the domains of pornographic and illegal websites so ISPs can...Zhenming Lei. “Blocking pornographic, illegal websites by internet host domain using FPGA and Bloom Filter”. Network Infrastructure and Digital Content
Inhibitors of SOD1 Interaction as an Approach to Slow the Progressive Spread of ALS Symptoms
2016-07-01
luciferase enzyme can be split into 2 halves. These 2 halves can be forced to reconstitute an active enzyme if they are brought together by some...force. In our assay, this force is the normal interaction that occurs when 2 individual SOD1 proteins come together to form a normal active enzyme...Using recombinant DNA, we create fusion proteins of SOD1 and each half of the luciferase enzyme. In the past year, we have characterized and optimized
Schuster, Andrew; Skinner, Michael K; Yan, Wei
Exposure to the agricultural fungicide vinclozolin during gestation promotes a higher incidence of various diseases in the subsequent unexposed F3 and F4 generations. This phenomenon is termed epigenetic transgenerational inheritance and has been shown to involve, in part, alterations in DNA methylation, but the role of other epigenetic mechanisms remains unknown. The current study investigated alterations in small noncoding RNA (sncRNA) in sperm from F3-generation control and vinclozolin-lineage rats. Over 200 differentially expressed sncRNAs were identified, and the tRNA-derived sncRNAs, namely 5' halves of mature tRNAs (5' halves), displayed the most dramatic changes. Gene targets of the altered miRNAs and tRNA 5' halves revealed associations between the altered sncRNAs and differentially DNA methylated regions. Dysregulated sncRNAs appear to correlate with mRNA profiles associated with the previously observed vinclozolin-induced disease phenotypes. The data suggest potential connections between sperm-borne RNAs and the vinclozolin-induced epigenetic transgenerational inheritance phenomenon.
Effects of air, ozone, and nitrogen dioxide exposure on the oxidation of corn and soybean lipids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brooks, R.I.; Csallany, A.S.
1978-01-01
Whole, halved, and ground samples of soybean seeds and whole corn kernels were exposed to air, 15 ppm NO2, or 1.5 ppm O3 continuously for 100 h at room temperature. Lipid oxidation was measured by polyunsaturated fatty acid (PUFA) and tocopherol destruction and formation of fluorescent lipofuscin-like pigments. Exposure of whole soybean and corn seeds to air, 15 ppm NO2, or 1.5 ppm O3 was found to induce no PUFA and tocopherol destruction and no formation of lipofuscin-like pigments. Tocopherol and PUFA destruction and lipofuscin-like pigment formation were detected in samples of soybean seed halves exposed to 15 ppm NO2 or 1.5 ppm O3; however, only tocopherol destruction occurred in soybean halves exposed to air. Ground soybean samples exposed to air, 15 ppm NO2, or 1.5 ppm O3 incurred the greatest PUFA and tocopherol destruction and lipofuscin-like pigment formation. 25 references, 3 figures, 4 tables.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-11
... before May 12, 2011. ADDRESSES: Written comments may be sent to: Chief, Computer Security Division... FURTHER INFORMATION CONTACT: Elaine Barker, Computer Security Division, National Institute of Standards... Quynh Dang, Computer Security Division, National Institute of Standards and Technology, Gaithersburg, MD...
Implementation and Use of the Reference Analytics Module of LibAnswers
ERIC Educational Resources Information Center
Flatley, Robert; Jensen, Robert Bruce
2012-01-01
Academic libraries have traditionally collected reference statistics using hash marks on paper. Although efficient and simple, this method is not an effective way to capture the complexity of reference transactions. Several electronic tools are now available to assist libraries with collecting often elusive reference data--among them homegrown…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bergen, Ben; Moss, Nicholas; Charest, Marc Robert Joseph
FleCSI is a compile-time configurable framework designed to support multi-physics application development. As such, FleCSI attempts to provide a very general set of infrastructure design patterns that can be specialized and extended to suit the needs of a broad variety of solver and data requirements. Current support includes multi-dimensional mesh topology, mesh geometry, and mesh adjacency information, n-dimensional hashed-tree data structures, graph partitioning interfaces, and dependency closures. FleCSI also introduces a functional programming model with control, execution, and data abstractions that are consistent with both MPI and state-of-the-art task-based runtimes such as Legion and Charm++. The FleCSI abstraction layer provides the developer with insulation from the underlying runtime, while allowing support for multiple runtime systems, including conventional models like asynchronous MPI. The intent is to give developers a concrete set of user-friendly programming tools that can be used now, while allowing flexibility in choosing runtime implementations and optimizations that can be applied to architectures and runtimes that arise in the future. The control and execution models in FleCSI also provide formal nomenclature for describing poorly understood concepts like kernels and tasks.
Study on the security of the authentication scheme with key recycling in QKD
NASA Astrophysics Data System (ADS)
Li, Qiong; Zhao, Qiang; Le, Dan; Niu, Xiamu
2016-09-01
In quantum key distribution (QKD), information-theoretically secure authentication is necessary to guarantee the integrity and authenticity of the information exchanged over the classical channel. In order to reduce the key consumption, the authentication scheme with key recycling (KR), in which a secret but fixed hash function is used for multiple messages while each tag is encrypted with a one-time pad (OTP), is preferred in QKD. Under the assumption that the OTP key is perfect, the security of this authentication scheme has been proved. However, the OTP key used for authentication in a practical QKD system is not perfect. How an imperfect OTP affects the security of the authentication scheme with KR is analyzed thoroughly in this paper. In a practical QKD system, part of the information about the OTP key produced by QKD is leaked to the adversary. Although this leakage is usually small enough to be neglected in a single round, it progressively degrades the security of the authentication scheme as the system runs continuously. Both our theoretical analysis and simulation results demonstrate that the security level of the authentication scheme with KR, mainly indicated by its substitution probability, degrades exponentially in the number of rounds and gradually diminishes to zero.
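The scheme under analysis is Wegman-Carter style: a secret universal-hash key is recycled across messages, and only the pad that encrypts each tag is consumed. A toy sketch of that structure (an illustrative polynomial hash, not the exact construction analyzed in the paper):

```python
import secrets

P = (1 << 61) - 1  # prime modulus for the polynomial evaluation hash

def poly_hash(blocks, hash_key):
    # Polynomial evaluation hash over GF(P), an almost-universal family:
    # h = m_0*k + m_1*k^2 + ... (mod P), via Horner's rule.
    h = 0
    for block in reversed(blocks):
        h = (h + block) * hash_key % P
    return h

def make_tag(blocks, hash_key, otp):
    # Key recycling: hash_key is secret but FIXED across messages;
    # only the one-time pad encrypting the tag is consumed per message.
    return poly_hash(blocks, hash_key) ^ otp

hash_key = secrets.randbelow(P - 1) + 1   # reused round after round
otp = secrets.randbelow(1 << 61)          # fresh pad for this message only
tag = make_tag([11, 22, 33], hash_key, otp)
```

The paper's point is that when the pads come from QKD and are slightly imperfect, the adversary accumulates information about the fixed `hash_key` over many rounds, so the substitution probability no longer stays at its single-round value.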
Enhancing reproducibility in scientific computing: Metrics and registry for Singularity containers.
Sochat, Vanessa V; Prybol, Cameron J; Kurtzer, Gregory M
2017-01-01
Here we present Singularity Hub, a framework to build and deploy Singularity containers for mobility of compute, and the singularity-python software with novel metrics for assessing reproducibility of such containers. Singularity containers make it possible for scientists and developers to package reproducible software, and Singularity Hub adds automation to this workflow by building, capturing metadata for, visualizing, and serving containers programmatically. Our novel metrics, based on custom filters of content hashes of container contents, allow for comparison of an entire container, including operating system, custom software, and metadata. First we will review Singularity Hub's primary use cases and how the infrastructure has been designed to support modern, common workflows. Next, we conduct three analyses to demonstrate build consistency, reproducibility metric and performance and interpretability, and potential for discovery. This is the first effort to demonstrate a rigorous assessment of measurable similarity between containers and operating systems. We provide these capabilities within Singularity Hub, as well as the source software singularity-python that provides the underlying functionality. Singularity Hub is available at https://singularity-hub.org, and we are excited to provide it as an openly available platform for building, and deploying scientific containers.
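A simplified version of such a content-hash comparison can be sketched as follows (a hypothetical reduction of the idea; Singularity Hub's actual filters and file handling differ):

```python
import hashlib

def content_hashes(files):
    # files: dict mapping path -> file bytes.
    # One SHA-256 digest per file; a container is summarized by the set.
    return {hashlib.sha256(path.encode() + b"\0" + data).hexdigest()
            for path, data in files.items()}

def similarity(a, b):
    # Jaccard similarity over content hashes: 1.0 means identical
    # contents, 0.0 means no file in common.
    ha, hb = content_hashes(a), content_hashes(b)
    return len(ha & hb) / len(ha | hb) if ha | hb else 1.0

base = {"/bin/sh": b"shell", "/etc/os-release": b"ubuntu"}
derived = {"/bin/sh": b"shell", "/etc/os-release": b"ubuntu",
           "/app/run.py": b"print('hi')"}
```

Filtering the path set before hashing (e.g., operating-system files only) yields the per-aspect metrics described above: the same machinery compares whole containers, base images, or custom software layers.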
Live chat alternative security protocol
NASA Astrophysics Data System (ADS)
Rahman, J. P. R.; Nugraha, E.; Febriany, A.
2018-05-01
Indonesia has one of the largest e-commerce markets in Southeast Asia; as many as 5 million people transact in e-commerce, and more and more of them use live chat services to communicate with customer service. In live chat, customer service often asks for customers’ data such as full name, address, e-mail, and transaction id in order to verify the purchase of a product. One of the risks is sniffing, which leads to the theft of confidential information and can cause huge losses to the customer. To anticipate this, we build an alternative security protocol for user interaction in live chat, using cryptographic algorithms to protect confidential messages. Live chat requires confidentiality and data integrity, provided here by encryption and hash functions. The algorithms used are 256-bit Rijndael, RSA, and SHA-256. To increase complexity, the Rijndael algorithm is modified in the S-box and ShiftRow sections based on Shannon’s principles. The results show that all variants pass the randomness tests, but the ShiftRow modification exhibits a better avalanche effect. The message will therefore be difficult to steal or alter.
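The integrity half of such a protocol can be sketched with Python's standard library (HMAC-SHA256 is shown in place of a bare hash for message authentication; the Rijndael/RSA confidentiality layer described in the abstract is omitted here):

```python
import hashlib
import hmac

def protect(message, mac_key):
    # Attach an HMAC-SHA256 tag so the receiver can detect any
    # modification of the chat message in transit.
    tag = hmac.new(mac_key, message, hashlib.sha256).hexdigest()
    return message, tag

def verify(message, tag, mac_key):
    # Recompute the tag and compare in constant time.
    expected = hmac.new(mac_key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

msg, tag = protect(b"order #1234 confirmed", b"shared-session-key")
```

In a full deployment the `mac_key` would itself be established via the RSA key exchange and the message body encrypted with the modified Rijndael cipher before tagging.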
Enhancing reproducibility in scientific computing: Metrics and registry for Singularity containers
Prybol, Cameron J.; Kurtzer, Gregory M.
2017-01-01
Here we present Singularity Hub, a framework to build and deploy Singularity containers for mobility of compute, and the singularity-python software with novel metrics for assessing reproducibility of such containers. Singularity containers make it possible for scientists and developers to package reproducible software, and Singularity Hub adds automation to this workflow by building, capturing metadata for, visualizing, and serving containers programmatically. Our novel metrics, based on custom filters of content hashes of container contents, allow for comparison of an entire container, including operating system, custom software, and metadata. First we will review Singularity Hub’s primary use cases and how the infrastructure has been designed to support modern, common workflows. Next, we conduct three analyses to demonstrate build consistency, reproducibility metric and performance and interpretability, and potential for discovery. This is the first effort to demonstrate a rigorous assessment of measurable similarity between containers and operating systems. We provide these capabilities within Singularity Hub, as well as the source software singularity-python that provides the underlying functionality. Singularity Hub is available at https://singularity-hub.org, and we are excited to provide it as an openly available platform for building, and deploying scientific containers. PMID:29186161
Modulation of the composite face effect by unintended emotion cues.
Gray, Katie L H; Murphy, Jennifer; Marsh, Jade E; Cook, Richard
2017-04-01
When upper and lower regions from different emotionless faces are aligned to form a facial composite, observers 'fuse' the two halves together, perceptually. The illusory distortion induced by task-irrelevant ('distractor') halves hinders participants' judgements about task-relevant ('target') halves. This composite-face effect reveals a tendency to integrate feature information from disparate regions of intact upright faces, consistent with theories of holistic face processing. However, observers frequently perceive emotion in ostensibly neutral faces, contrary to the intentions of experimenters. This study sought to determine whether this 'perceived emotion' influences the composite-face effect. In our first experiment, we confirmed that the composite effect grows stronger as the strength of distractor emotion increases. Critically, effects of distractor emotion were induced by weak emotion intensities, and were incidental insofar as emotion cues hindered image matching, not emotion labelling per se. In Experiment 2, we found a correlation between the presence of perceived emotion in a set of ostensibly neutral distractor regions sourced from commonly used face databases, and the strength of illusory distortion they induced. In Experiment 3, participants completed a sequential matching composite task in which half of the distractor regions were rated high and low for perceived emotion, respectively. Significantly stronger composite effects were induced by the high-emotion distractor halves. These convergent results suggest that perceived emotion increases the strength of the composite-face effect induced by supposedly emotionless faces. These findings have important implications for the study of holistic face processing in typical and atypical populations.
Akita, K; Shimokawa, T; Tsunoda, A; Sato, T
2002-05-01
A nervous branch which passes through a small canal in the sphenozygomatic suture is sometimes observed during dissection. To examine the origin, course and distribution of this nervous branch, 42 head halves of 21 Japanese cadavers (11 males, 10 females) and 142 head halves of 71 human dry skulls were used. The branch was observed in seven sides (16.7%); it originated from the communication between the lacrimal nerve and the zygomaticotemporal branch of the zygomatic nerve or from the trunk of the zygomatic nerve. In two head halves (4.8%), the branch pierced the anterior part of the temporalis muscle during its course to the skin of the anterior part of the temple. The small canal in the suture was observed in 31 head halves (21.8%) of the dry skulls. Although this nervous branch is inconstantly observed, it should be called the temporal branch of the zygomatic nerve according to the constant positional relationship to the sphenoid and zygomatic bones. According to its origin, course and distribution, this nervous branch may be considered to be influential in zygomatic and retro-orbital pain due to entrapment and tension from the temporalis muscle and/or the narrow bony canal. The French version of this article is available in the form of electronic supplementary material and can be obtained by using the Springer LINK server located at http://dx.doi.org/10.1007/s00276-002-0027-4.
Bhardwaj, Anuj; Velmurugan, Natanasabapathy; Ballal, Suma
2013-01-01
The present study evaluated the efficacy of natural-derivative irrigants, Morinda citrifolia juice (MCJ), Aloe vera, and Propolis, in comparison with 1% sodium hypochlorite, each delivered with passive ultrasonic irrigation, for removal of intraradicular E. faecalis biofilms in extracted single-rooted human permanent teeth. Biofilms of E. faecalis were grown on the prepared root-canal walls of 60 standardized, longitudinally sectioned root halves. The root halves were re-approximated and the samples divided into five groups of twelve each: Group A (1% NaOCl), Group B (MCJ), Group C (Aloe vera), Group D (Propolis), and Group E (saline). Each group was treated with passive ultrasonic irrigation (PUI) using the respective irrigant. The root halves were processed for scanning electron microscopy, and three images (×2.5) of the coronal, middle, and apical regions were taken for the twelve root halves in each group. The images were randomized, and biofilm coverage was assessed independently by three calibrated examiners using a four-point scoring system. 1% NaOCl with PUI completely removed the E. faecalis biofilm and was superior to the natural irrigants (MCJ, Aloe vera, and Propolis) tested in this study.
Dentine Tubule Occlusion by Novel Bioactive Glass-Based Toothpastes
Hill, Robert G.; Chen, Xiaojing
2018-01-01
There are numerous over-the-counter (OTC) and professionally applied (in-office) products and techniques currently available for the treatment of dentine hypersensitivity (DH), but more recently, the use of bioactive glasses in toothpaste formulations has been advocated as a possible solution to managing DH. Aim. The aim of the present study, therefore, was to compare several bioactive glass formulations to investigate their effectiveness in an established in vitro model. Materials and Methods. A 45S5 glass was synthesized in the laboratory together with several other glass formulations: (1) a mixed glass (fluoride and chloride), (2) BioMinF, (3) a chloride glass, and (4) an amorphous chloride glass. The glass powders were formulated into five different toothpaste formulations. Dentine discs were sectioned from extracted human teeth and prepared for the investigation by removing the cutting debris (smear layer) with a 6% citric acid solution for 2 minutes. Each disc was halved to provide test and control halves for comparison, and the five toothpaste formulations were brushed onto the test halves of their respective groups. Following the toothpaste application, the test discs were immersed in artificial saliva or exposed to an acid challenge. Results. The dentine samples were analyzed using scanning electron microscopy (SEM), and the SEM images indicated good surface coverage following artificial saliva immersion. Furthermore, although the acid challenge removed the hydroxyapatite layer from the dentine surface for most of the samples, except for the amorphous chloride glass, there was evidence of occlusion in the dentine tubules. Conclusions. The study suggests that the inclusion of bioactive glass in a toothpaste formulation may be an effective approach to treating DH. PMID:29849637
GSHR-Tree: a spatial index tree based on dynamic spatial slot and hash table in grid environments
NASA Astrophysics Data System (ADS)
Chen, Zhanlong; Wu, Xin-cai; Wu, Liang
2008-12-01
Computational Grids enable the coordinated sharing of large-scale, distributed, heterogeneous computing resources to solve computationally intensive problems in science, engineering, and commerce. Grid spatial applications are made possible by high-speed networks and a new generation of Grid middleware that resides between networks and traditional GIS applications. Integrating multi-source, heterogeneous spatial information, managing distributed spatial resources, and sharing spatial data and Grid services cooperatively are the key problems to solve in developing a Grid GIS. The spatial index is a key technology of Grid GIS and spatial databases, and its performance affects the overall performance of a GIS in Grid environments. To improve the efficiency of parallel processing of massive spatial data in a distributed parallel-computing Grid environment, this paper presents GSHR-Tree, a grid-slot-hash parallel spatial index. Based on a hash table and dynamic spatial slots, it improves on the classical parallel R-tree index, combining the strengths of the R-tree and hash data structures to meet the needs of parallel Grid computing over massive spatial data in distributed networks. The algorithm recursively splits space into multiple slots and maps these slots to sites in the distributed parallel system; each site builds the spatial objects in its slots into an R-tree. On the basis of this tree structure, the index data are distributed among multiple nodes in the Grid network using a large-node R-tree method, and load imbalance arising during processing is quickly corrected by a dynamic adjustment algorithm.
This tree structure accounts for the distribution, replication, and transfer of the spatial index in the Grid environment. The design of GSHR-Tree ensures load balance during parallel computation, making the structure well suited to parallel processing of spatial information in distributed network environments. Instead of the recursive comparisons of spatial objects used by the original R-tree, the algorithm builds the spatial index with binary-code operations, which computers execute more efficiently, and uses an extended dynamic hash code for bit comparison. In GSHR-Tree, a new server is assigned to the network whenever a full node must be split. We describe a flexible allocation protocol that copes with temporary shortages of storage resources: it uses a distributed balanced binary spatial tree that scales with insertions to potentially any number of storage servers through splits of the overloaded ones. An application manipulates the GSHR-Tree structure from a node in the Grid environment, addressing the tree through a local image that splits can render outdated; the resulting addressing errors are resolved by forwarding among the servers. We also propose a spatial-index data-distribution algorithm that limits the number of servers, improving storage utilization at the cost of additional messages.
The structure balances the updating of the duplicated index against the transformation of the spatial index data. To meet the needs of Grid computing, GSHR-Tree has a flexible structure that can accommodate future requirements, providing R-tree capabilities for large spatial datasets stored across interconnected servers. The insertion policy can be tuned dynamically to cope with periods of storage shortage; in such cases storage balancing is favored for better space utilization, at the price of extra message exchanges between servers. Using the system response time of the parallel spatial range-query algorithm as the performance-evaluation factor, the analysis and simulation experiments confirm the sound design and high performance of the proposed indexing structure, which should fit the needs of new applications using ever-larger sets of spatial data.
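The slot-to-site mapping can be illustrated with a Morton (Z-order) code, one common way to hash spatial cells to distributed servers (an assumed scheme for illustration; the paper's exact slot-splitting rule may differ):

```python
def interleave_bits(x, y, bits):
    # Z-order (Morton) code: interleave the bits of the cell coordinates
    # so that nearby cells tend to share code prefixes, which keeps
    # spatially coherent objects in nearby slots.
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

def slot_for_point(px, py, extent, bits=4):
    # Quantize a point into a (2^bits x 2^bits) grid of spatial slots.
    n = 1 << bits
    x = min(int(px / extent * n), n - 1)
    y = min(int(py / extent * n), n - 1)
    return interleave_bits(x, y, bits)

def site_for_slot(slot, n_sites):
    # Map each slot to a site; each site then builds a local R-tree
    # over the spatial objects falling into its slots.
    return slot % n_sites
```

Because the slot code is computed with bit operations rather than recursive geometric comparisons, locating the responsible site is a constant-time hash step, with the per-site R-trees handling the fine-grained queries.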
Manufacturing Methods and Engineering for TFT Addressed Display.
1980-02-20
type required for the Army’s DMD (Digital Message Device), based on an active-matrix addressed electroluminescent display previously developed by...electroluminescent phosphor as the light emitter, and finally packaging or encapsulation. Because of size limitations of the pilot manufacturing facility, the DMD...display was designed as two identical halves, which were then to be made individually in the automated machine and later assembled into a single DMD
PCANet: A Simple Deep Learning Baseline for Image Classification?
Chan, Tsung-Han; Jia, Kui; Gao, Shenghua; Lu, Jiwen; Zeng, Zinan; Ma, Yi
2015-12-01
In this paper, we propose a very simple deep learning network for image classification that is based on very basic data processing components: 1) cascaded principal component analysis (PCA); 2) binary hashing; and 3) blockwise histograms. In the proposed architecture, the PCA is employed to learn multistage filter banks. This is followed by simple binary hashing and block histograms for indexing and pooling. This architecture is thus called the PCA network (PCANet) and can be extremely easily and efficiently designed and learned. For comparison and to provide a better understanding, we also introduce and study two simple variations of PCANet: 1) RandNet and 2) LDANet. They share the same topology as PCANet, but their cascaded filters are either randomly selected or learned from linear discriminant analysis. We have extensively tested these basic networks on many benchmark visual data sets for different tasks, including Labeled Faces in the Wild (LFW) for face verification; the MultiPIE, Extended Yale B, AR, Facial Recognition Technology (FERET) data sets for face recognition; and MNIST for hand-written digit recognition. Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with the state-of-the-art features either prefixed, highly hand-crafted, or carefully learned [by deep neural networks (DNNs)]. Even more surprisingly, the model sets new records for many classification tasks on the Extended Yale B, AR, and FERET data sets and on MNIST variations. Additional experiments on other public data sets also demonstrate the potential of PCANet to serve as a simple but highly competitive baseline for texture classification and object recognition.
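The binary hashing and blockwise histogram stages can be sketched in a few lines (a toy pure-Python version under assumed conventions; in PCANet the input maps are the responses of the learned PCA filters):

```python
def binary_hash(maps):
    # maps: list of L same-sized 2-D filter-response maps.
    # Each pixel's L thresholded responses form one integer in [0, 2^L).
    h, w = len(maps[0]), len(maps[0][0])
    codes = [[0] * w for _ in range(h)]
    for bit, fmap in enumerate(maps):
        for i in range(h):
            for j in range(w):
                if fmap[i][j] > 0:          # Heaviside step on the response
                    codes[i][j] |= 1 << bit
    return codes

def block_histogram(codes, n_bits, block):
    # Histogram of the hash codes inside each non-overlapping block,
    # concatenated into the final feature vector.
    h, w = len(codes), len(codes[0])
    feats = []
    for bi in range(0, h, block):
        for bj in range(0, w, block):
            hist = [0] * (1 << n_bits)
            for i in range(bi, min(bi + block, h)):
                for j in range(bj, min(bj + block, w)):
                    hist[codes[i][j]] += 1
            feats.extend(hist)
    return feats

maps = [[[1.0, -0.5], [0.2, -0.1]],   # filter 1 responses
        [[-1.0, 0.5], [0.3, -0.2]]]   # filter 2 responses
codes = binary_hash(maps)
```

The histogram pooling is what makes the representation insensitive to small shifts: two images with slightly displaced features still produce similar per-block code counts.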
Wallace, A. C.; Borkakoti, N.; Thornton, J. M.
1997-01-01
It is well established that sequence templates such as those in the PROSITE and PRINTS databases are powerful tools for predicting the biological function and tertiary structure for newly derived protein sequences. The number of X-ray and NMR protein structures is increasing rapidly and it is apparent that a 3D equivalent of the sequence templates is needed. Here, we describe an algorithm called TESS that automatically derives 3D templates from structures deposited in the Brookhaven Protein Data Bank. While a new sequence can be searched for sequence patterns, a new structure can be scanned against these 3D templates to identify functional sites. As examples, 3D templates are derived for enzymes with an O-His-O "catalytic triad" and for the ribonucleases and lysozymes. When these 3D templates are applied to a large data set of nonidentical proteins, several interesting hits are located. This suggests that the development of a 3D template database may help to identify the function of new protein structures, if unknown, as well as to design proteins with specific functions. PMID:9385633
Pickett, P.T.
A hollow fitting for use in gas spectrometry leak testing of conduit joints is divided into two generally symmetrical halves along the axis of the conduit. A clip may quickly and easily fasten and unfasten the halves around the conduit joint under test. Each end of the fitting is sealable with a yieldable material, such as a piece of foam rubber. An orifice is provided in a wall of the fitting for the insertion or detection of helium during testing. One half of the fitting also may be employed to test joints mounted against a surface.
Pickett, Patrick T.
1981-01-01
A hollow fitting for use in gas spectrometry leak testing of conduit joints is divided into two generally symmetrical halves along the axis of the conduit. A clip may quickly and easily fasten and unfasten the halves around the conduit joint under test. Each end of the fitting is sealable with a yieldable material, such as a piece of foam rubber. An orifice is provided in a wall of the fitting for the insertion or detection of helium during testing. One half of the fitting also may be employed to test joints mounted against a surface.
Long-Range Big Quantum-Data Transmission.
Zwerger, M; Pirker, A; Dunjko, V; Briegel, H J; Dür, W
2018-01-19
We introduce an alternative type of quantum repeater for long-range quantum communication with improved scaling with the distance. We show that by employing hashing, a deterministic entanglement distillation protocol with one-way communication, one obtains a scalable scheme that allows one to reach arbitrary distances, with constant overhead in resources per repeater station, and ultrahigh rates. In practical terms, we show that, also with moderate resources of a few hundred qubits at each repeater station, one can reach intercontinental distances. At the same time, a measurement-based implementation allows one to tolerate high loss but also operational and memory errors of the order of several percent per qubit. This opens the way for long-distance communication of big quantum data.
A fingerprint key binding algorithm based on vector quantization and error correction
NASA Astrophysics Data System (ADS)
Li, Liang; Wang, Qian; Lv, Ke; He, Ning
2012-04-01
In recent years, research on the seamless combination of cryptosystems with biometric technologies, e.g. fingerprint recognition, has been conducted by many researchers. In this paper, we propose an algorithm for binding a fingerprint template to a cryptographic key, so that the key is protected and accessed through fingerprint verification. To accommodate the intrinsic fuzziness of variant fingerprints, vector quantization and error correction techniques are introduced to transform the fingerprint template before binding it with the key, after a process of fingerprint registration and extraction of the global ridge pattern. The key itself is secure because only its hash value is stored, and it is released only when fingerprint verification succeeds. Experimental results demonstrate the effectiveness of our approach.
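A fuzzy-commitment-style binding along these lines can be sketched as follows. This is a simplified illustration, not the paper's algorithm: the vector-quantization and error-correction stages that absorb fingerprint variability are omitted, so the sketch requires an exact match between the enrolled and query templates.

```python
import hashlib

def bind_key(key: bytes, template: bytes) -> tuple[bytes, bytes]:
    """Bind a cryptographic key to a quantized fingerprint template.
    Only the XOR helper data and the hash of the key are stored;
    the key itself is never written to storage."""
    assert len(key) == len(template)
    helper = bytes(k ^ t for k, t in zip(key, template))
    return helper, hashlib.sha256(key).digest()

def release_key(helper: bytes, query: bytes, key_hash: bytes):
    """Recombine helper data with a query template; release the key
    only if its hash matches the stored value, i.e. only when
    fingerprint verification succeeds."""
    candidate = bytes(h ^ q for h, q in zip(helper, query))
    if hashlib.sha256(candidate).digest() == key_hash:
        return candidate
    return None
```

In the paper's scheme, vector quantization plus error correction would first map slightly different impressions of the same finger onto the same template bytes, making this exact-match step tolerant to sensing noise.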
Web-based Electronic Sharing and RE-allocation of Assets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leverett, Dave; Miller, Robert A.; Berlin, Gary J.
2002-09-09
The Electronic Asset Sharing Program is a web-based application that provides the capability for complex-wide sharing and reallocation of assets that are excess, underutilized, or unutilized. Through a web-based front-end and a supporting hash database with a search engine, users can search for assets that they need, search for assets needed by others, enter assets they need, and enter assets they have available for reallocation. In addition, entire listings of available assets and needed assets can be viewed. The application is written in Java; the hash database and search engine are in Object-oriented Java Database Management (OJDBM). The application will be hosted on an SRS-managed server outside the firewall, and access will be controlled via a protected realm. An example of the application can be viewed at the following (temporary) URL: http://idgdev.srs.gov/servlet/srs.weshare.WeShare
Crypto-Watermarking of Transmitted Medical Images.
Al-Haj, Ali; Mohammad, Ahmad; Amer, Alaa'
2017-02-01
Telemedicine is a booming healthcare practice that has facilitated the exchange of medical data and expertise between healthcare entities. However, the widespread use of telemedicine applications requires a secured scheme to guarantee confidentiality and verify authenticity and integrity of exchanged medical data. In this paper, we describe a region-based, crypto-watermarking algorithm capable of providing confidentiality, authenticity, and integrity for medical images of different modalities. The proposed algorithm provides authenticity by embedding robust watermarks in images' region of non-interest using SVD in the DWT domain. Integrity is provided in two levels: strict integrity implemented by a cryptographic hash watermark, and content-based integrity implemented by a symmetric encryption-based tamper localization scheme. Confidentiality is achieved as a byproduct of hiding patient's data in the image. Performance of the algorithm was evaluated with respect to imperceptibility, robustness, capacity, and tamper localization, using different medical images. The results showed the effectiveness of the algorithm in providing security for telemedicine applications.
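The strict-integrity level can be illustrated with a spatial-domain stand-in. This is a hypothetical simplification: the paper embeds in the DWT domain with SVD, whereas here the SHA-256 digest of the region of interest (ROI) is simply hidden in the least significant bits of the region of non-interest (RONI).

```python
import hashlib
import numpy as np

def embed_strict_integrity(img: np.ndarray, roi, roni) -> np.ndarray:
    """Hash the ROI pixels and hide the 256 digest bits in the LSBs of the RONI.
    `roi` and `roni` are tuples of slices into a uint8 image."""
    out = img.copy()
    digest = hashlib.sha256(out[roi].tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))  # 256 bits
    flat = out[roni].reshape(-1).copy()
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite LSBs
    out[roni] = flat.reshape(out[roni].shape)
    return out

def verify_strict_integrity(img: np.ndarray, roi, roni) -> bool:
    """Recompute the ROI hash and compare it with the bits extracted from the RONI."""
    digest = hashlib.sha256(img[roi].tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    extracted = img[roni].reshape(-1)[:bits.size] & 1
    return bool(np.array_equal(extracted, bits))
```

Any single-bit change to the ROI alters the recomputed digest and fails verification; the paper's tamper localization and encryption layers are omitted here.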
Resource-Efficient Data-Intensive System Designs for High Performance and Capacity
2015-09-01
76, 79, 80, and 81.] [9] Anirudh Badam, KyoungSoo Park, Vivek S. Pai, and Larry L. Peterson. HashCache: cache storage for the next billion. In Proc...Jeffrey Dean, Sanjay Ghemawat, Wilson C. Hsieh, Deborah A. Wallach, Mike Burrows, Tushar Chandra, Andrew Fikes, and Robert E. Gruber. Bigtable: A
2013-10-01
1_doc_RCData_612|virus|Trojan|rootkit|worm|Prolaco|rundll|msiexec|google|wmimngr|jusched|wfmngr|wupmgr|java|wpmgr|nvscpapisvr)’ ” results in the...hashdump Dumps password hashes (LM/NTLM) from memory hibinfo Dump hibernation file information hivedump Prints out a hive hivelist Print list of
Code of Federal Regulations, 2011 CFR
2011-07-01
... machines. (3) All occupations involved in tankage or rendering of dead animals, animal offal, animal fats..., and hashing machines; and presses (except belly-rolling machines). Except, the provisions of this.... Rendering plants means establishments engaged in the conversion of dead animals, animal offal, animal fats...
Compact modalities for forward-error correction
NASA Astrophysics Data System (ADS)
Fang, Dejian
2013-10-01
Hash tables [1] must work. In fact, few leading analysts would disagree with the refinement of thin clients. In our research, we disprove not only that the infamous read-write algorithm for the exploration of object-oriented languages by W. White et al. is NP-complete, but that the same is true for the lookaside buffer.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-06
... hash algorithms in many computer network applications. On February 11, 2011, NIST published a notice in... Information Security Management Act (FISMA) of 2002 (Pub. L. 107-347), the Secretary of Commerce is authorized to approve Federal Information Processing Standards (FIPS). NIST activities to develop computer...
ERIC Educational Resources Information Center
Kelsey, Louise; Uytterhoeven, Lise
2017-01-01
This paper reports on a focused collaborative learning and teaching research project between the Dance Department at Middlesex University and partner institution London Studio Centre. Informed by Belinda Allen's research on creative curriculum design, dance students and lecturers shared innovative learning opportunities to enhance the development…
Compagnucci, Claudia; Fish, Jennifer; Depew, Michael J
2014-06-01
Much of the gnathostome (jawed vertebrate) evolutionary radiation was dependent on the ability to sense and interpret the environment and subsequently act upon this information through utilization of a specialized mode of feeding involving the jaws. While the gnathostome skull, reflective of the vertebrate bauplan, typically is bilaterally symmetric with right (dextral) and left (sinistral) halves essentially representing mirror images along the midline, both adaptive and abnormal asymmetries have appeared. Herein we provide a basic primer on studies of the asymmetric development of the gnathostome skull, touching briefly on asymmetry as a field of study, then describing the nature of cranial development and finally underscoring evolutionary and functional aspects of left-right asymmetric cephalic development. © 2014 Wiley Periodicals, Inc.
True randomness from an incoherent source
NASA Astrophysics Data System (ADS)
Qi, Bing
2017-11-01
Quantum random number generators (QRNGs) harness the intrinsic randomness in measurement processes: the measurement outputs are truly random, given the input state is a superposition of the eigenstates of the measurement operators. In the case of trusted devices, true randomness could be generated from a mixed state ρ so long as the system entangled with ρ is well protected. We propose a random number generation scheme based on measuring the quadrature fluctuations of a single mode thermal state using an optical homodyne detector. By mixing the output of a broadband amplified spontaneous emission (ASE) source with a single mode local oscillator (LO) at a beam splitter and performing differential photo-detection, we can selectively detect the quadrature fluctuation of a single mode output of the ASE source, thanks to the filtering function of the LO. Experimentally, a quadrature variance about three orders of magnitude larger than the vacuum noise has been observed, suggesting this scheme can tolerate much higher detector noise in comparison with QRNGs based on measuring the vacuum noise. The high quality of this entropy source is evidenced by the small correlation coefficients of the acquired data. A Toeplitz-hashing extractor is applied to generate unbiased random bits from the Gaussian distributed raw data, achieving an efficiency of 5.12 bits per sample. The output of the Toeplitz extractor successfully passes all the NIST statistical tests for random numbers.
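The Toeplitz-hashing step admits a compact sketch (illustrative only; the experiment's output length, input length, and seed handling are not specified here). An m x n binary Toeplitz matrix is fully defined by m + n - 1 seed bits, and extraction is a matrix-vector multiplication over GF(2):

```python
def toeplitz_extract(raw_bits, seed_bits, m):
    """Distill m output bits from n raw bits via an m x n binary Toeplitz
    matrix T with T[i][j] = seed_bits[i - j + n - 1]. Multiplication is
    over GF(2): AND for products, XOR for sums."""
    n = len(raw_bits)
    assert len(seed_bits) == m + n - 1, "a Toeplitz matrix needs m + n - 1 seed bits"
    out = []
    for i in range(m):
        acc = 0
        for j in range(n):
            acc ^= seed_bits[i - j + n - 1] & raw_bits[j]
        out.append(acc)
    return out
```

By the leftover hash lemma, choosing m slightly below the measured min-entropy of the raw sample makes the output nearly uniform; the reported 5.12 bits per sample reflects such a choice of extraction ratio.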
Duarte, Sofia; Cássio, Fernanda; Ferreira, Verónica; Canhoto, Cristina; Pascoal, Cláudia
2016-08-01
Ongoing climate change is expected to affect the diversity and activity of aquatic microbes, which play a key role in plant litter decomposition in forest streams. We used a before-after control-impact (BACI) design to study the effects of warming on a forest stream reach. The stream reach was divided by a longitudinal barrier, and during 1 year (ambient year) both stream halves were at ambient temperature, while in the second year (warmed year) the temperature in one stream half was increased by ca. 3 °C above ambient temperature (experimental half). Fine-mesh bags containing oak (Quercus robur L.) leaves were immersed in both stream halves for up to 60 days in spring and autumn of the ambient and warmed years. We assessed leaf-associated microbial diversity by denaturing gradient gel electrophoresis and identification of fungal conidial morphotypes and microbial activity by quantifying leaf mass loss and productivity of fungi and bacteria. In the ambient year, no differences were found in leaf decomposition rates and microbial productivities either between seasons or stream halves. In the warmed year, phosphorus concentration in the stream water, leaf decomposition rates, and productivity of bacteria were higher in spring than in autumn. They did not differ between stream halves, except for leaf decomposition, which was higher in the experimental half in spring. Fungal and bacterial communities differed between seasons in both years. Seasonal changes in stream water variables had a greater impact on the activity and diversity of microbial decomposers than a warming regime simulating a predicted global warming scenario.
Physical and Physiological Demands of Elite Rugby Union Officials.
Blair, Matthew R; Elsworthy, Nathan; Rehrer, Nancy J; Button, Chris; Gill, Nicholas D
2018-04-13
To examine the movement and physiological demands of rugby union officiating within elite competition. Movement demands of 9 elite officials across 12 Super Rugby matches were calculated using global positioning system devices. Total distance (m), relative distance (m·min^-1), and percentage of time spent within various speed zones were calculated across a match. Heart rate responses were also recorded throughout each match. Cohen d effect sizes were reported to examine within-match variations. The total distance covered was 8,030 ± 506 m, with a relative distance of 83 ± 5 m·min^-1 and no differences observed between halves. Most game time was spent at lower movement speeds (76 ± 2%; <2.0 m·s^-1), with large effects for time spent >7.0 m·s^-1 between halves (d = 2.85). Mean heart rate was 154 ± 10 b·min^-1 (83.8 ± 2.9% HRmax), with no differences observed between the first and second halves. Most game time was spent between 81-90% HRmax (40.5 ± 7.5%), with no observable differences between halves. Distances covered above 5.1 m·s^-1 were highest during the first 10 minutes of a match, while distance at speeds of 3.7-5.0 m·s^-1 decreased during the final 10 minutes of play. These findings highlight the highly demanding and intermittent nature of rugby union officiating, with only minor variations in physical and physiological demands across a match. These results have implications for the physical preparation of professional rugby union referees.
Branciamore, Sergio; Di Giulio, Massimo
2012-04-01
The secondary structure of the 5S ribosomal RNA (5S rRNA) molecule shows a high degree of symmetry. In order to explain the origin of this symmetry, it has been conjectured that one half of the 5S rRNA molecule was its precursor and that an indirect duplication of this precursor created the other half and thus the current symmetry of the molecule. Here, we have subjected to an empirical test both the indirect duplication model, analysing a total of 684 5S rRNA sequences for complementarity between the two halves of the 5S rRNA, and the direct duplication model, analysing in this case the similarity between the two halves of this molecule. In intra- and inter-molecule and intra- and inter-domain comparisons, we find high statistical support for the hypothesis of a complementarity relationship between the two halves of the 5S rRNA molecule, and conversely reject the hypothesis of similarity between these halves. Therefore, these observations corroborate the indirect duplication model at the expense of the direct duplication model as the explanation for the origin of the 5S rRNA molecule. More generally, we discuss and favour the hypothesis that all RNAs and proteins which present symmetry acquired it through gene duplication, and not by the gradual accumulation of a few monomers or molecule segments in a gradualistic growth process. This would be a consequence of the very high propensity of nucleic acids to undergo duplications.
On-Board Event-Based State Estimation for Trajectory Approaching and Tracking of a Vehicle
Martínez-Rey, Miguel; Espinosa, Felipe; Gardel, Alfredo; Santos, Carlos
2015-01-01
For the problem of pose estimation of an autonomous vehicle using networked external sensors, the processing capacity and battery consumption of these sensors, as well as the communication channel load should be optimized. Here, we report an event-based state estimator (EBSE) consisting of an unscented Kalman filter that uses a triggering mechanism based on the estimation error covariance matrix to request measurements from the external sensors. This EBSE generates the events of the estimator module on-board the vehicle and, thus, allows the sensors to remain in stand-by mode until an event is generated. The proposed algorithm requests a measurement every time the estimation distance root mean squared error (DRMS) value, obtained from the estimator's covariance matrix, exceeds a threshold value. This triggering threshold can be adapted to the vehicle's working conditions rendering the estimator even more efficient. An example of the use of the proposed EBSE is given, where the autonomous vehicle must approach and follow a reference trajectory. By making the threshold a function of the distance to the reference location, the estimator can halve the use of the sensors with a negligible deterioration in the performance of the approaching maneuver. PMID:26102489
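The triggering rule reduces to a scalar test on the position block of the covariance matrix. The following is a minimal sketch under stated assumptions: symbol names are illustrative, the position variances are taken to occupy the leading diagonal entries, and the full EBSE wraps this test around an unscented Kalman filter.

```python
import numpy as np

def drms(P: np.ndarray) -> float:
    """Distance root-mean-squared error from the 2x2 position block of the
    estimation error covariance matrix: sqrt(var_x + var_y)."""
    return float(np.sqrt(P[0, 0] + P[1, 1]))

def should_request_measurement(P: np.ndarray, threshold: float) -> bool:
    """Wake the external sensors only when the position uncertainty
    exceeds the (possibly distance-adaptive) threshold."""
    return drms(P) > threshold
```

Making `threshold` a function of the distance to the reference location, as the abstract describes, lets the estimator relax the trigger far from the target and tighten it during the final approach.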
NASA Technical Reports Server (NTRS)
2010-01-01
Topics covered include: Wirelessly Interrogated Wear or Temperature Sensors; Processing Nanostructured Sensors Using Microfabrication Techniques; Optical Pointing Sensor; Radio-Frequency Tank Eigenmode Sensor for Propellant Quantity Gauging; High-Temperature Optical Sensor; Integral Battery Power Limiting Circuit for Intrinsically Safe Applications; Configurable Multi-Purpose Processor; Squeezing Alters Frequency Tuning of WGM Optical Resonator; Automated Computer Access Request System; Range Safety for an Autonomous Flight Safety System; Fast and Easy Searching of Files in Unisys 2200 Computers; Parachute Drag Model; Evolutionary Scheduler for the Deep Space Network; Modular Habitats Comprising Rigid and Inflatable Modules; More About N2O-Based Propulsion and Breathable-Gas Systems; Ultrasonic/Sonic Rotary-Hammer Drills; Miniature Piezoelectric Shaker for Distribution of Unconsolidated Samples to Instrument Cells; Lunar Soil Particle Separator; Advanced Aerobots for Scientific Exploration; Miniature Bioreactor System for Long-Term Cell Culture; Electrochemical Detection of Multiple Bioprocess Analytes; Fabrication and Modification of Nanoporous Silicon Particles; High-Altitude Hydration System; Photon Counting Using Edge-Detection Algorithm; Holographic Vortex Coronagraph; Optical Structural Health Monitoring Device; Fuel-Cell Power Source Based on Onboard Rocket Propellants; Polar Lunar Regions: Exploiting Natural and Augmented Thermal Environments; Simultaneous Spectral Temporal Adaptive Raman Spectrometer - SSTARS; Improved Speed and Functionality of a 580-GHz Imaging Radar; Bolometric Device Based on Fluxoid Quantization; Algorithms for Learning Preferences for Sets of Objects; Model for Simulating a Spiral Software-Development Process; Algorithm That Synthesizes Other Algorithms for Hashing; Algorithms for High-Speed Noninvasive Eye-Tracking System; and Adapting ASPEN for Orbital Express.
Non-symbolic halving in an Amazonian indigene group
McCrink, Koleen; Spelke, Elizabeth S.; Dehaene, Stanislas; Pica, Pierre
2014-01-01
Much research supports the existence of an Approximate Number System (ANS) that is recruited by infants, children, adults, and non-human animals to generate coarse, non-symbolic representations of number. This system supports simple arithmetic operations such as addition, subtraction, and ordering of amounts. The current study tests whether an intuition of a more complex calculation, division, exists in an indigene group in the Amazon, the Mundurucu, whose language includes no words for large numbers. Mundurucu children were presented with a video event depicting a division transformation of halving, in which pairs of objects turned into single objects, reducing the array's numerical magnitude. Then they were tested on their ability to calculate the outcome of this division transformation with other large-number arrays. The Mundurucu children effected this transformation even when non-numerical variables were controlled, performed above chance levels on the very first set of test trials, and exhibited performance similar to urban children who had access to precise number words and a surrounding symbolic culture. We conclude that a halving calculation is part of the suite of intuitive operations supported by the ANS. PMID:23587042
Children's multiplicative transformations of discrete and continuous quantities.
Barth, Hilary; Baron, Andrew; Spelke, Elizabeth; Carey, Susan
2009-08-01
Recent studies have documented an evolutionarily primitive, early emerging cognitive system for the mental representation of numerical quantity (the analog magnitude system). Studies with nonhuman primates, human infants, and preschoolers have shown this system to support computations of numerical ordering, addition, and subtraction involving whole number concepts prior to arithmetic training. Here we report evidence that this system supports children's predictions about the outcomes of halving and perhaps also doubling transformations. A total of 138 kindergartners and first graders were asked to reason about the quantity resulting from the doubling or halving of an initial numerosity (of a set of dots) or an initial length (of a bar). Controls for dot size, total dot area, and dot density ensured that children were responding to the number of dots in the arrays. Prior to formal instruction in symbolic multiplication, division, or rational number, halving (and perhaps doubling) computations appear to be deployed over discrete and possibly continuous quantities. The ability to apply simple multiplicative transformations to analog magnitude representations of quantity may form a part of the toolkit that children use to construct later concepts of rational number.
Schuster, Andrew; Skinner, Michael K.; Yan, Wei
2016-01-01
Abstract Exposure to the agricultural fungicide vinclozolin during gestation promotes a higher incidence of various diseases in the subsequent unexposed F3 and F4 generations. This phenomenon is termed epigenetic transgenerational inheritance and has been shown to in part involve alterations in DNA methylation, but the role of other epigenetic mechanisms remains unknown. The current study investigated the alterations in small noncoding RNA (sncRNA) in the sperm from F3 generation control and vinclozolin lineage rats. Over 200 differentially expressed sncRNAs were identified and the tRNA-derived sncRNAs, namely 5′ halves of mature tRNAs (5′ halves), displayed the most dramatic changes. Gene targets of the altered miRNAs and tRNA 5′ halves revealed associations between the altered sncRNAs and differentially DNA methylated regions. Dysregulated sncRNAs appear to correlate with mRNA profiles associated with the previously observed vinclozolin-induced disease phenotypes. Data suggest potential connections between sperm-borne RNAs and the vinclozolin-induced epigenetic transgenerational inheritance phenomenon. PMID:27390623
Saura, Domingo; Vegara, Salud; Martí, Nuria; Valero, Manuel; Laencina, José
2017-06-01
Non-enzymatic browning (NEB) in canned peach halves in syrup during storage was investigated. Absorbance at 420 nm (A420), colorimetric parameters (CIE Lab, TCD and La/b), fructose, glucose and sucrose, total sugar, organic acids, ascorbic acid (AA), dehydroascorbic acid, and 2,3-diketogulonic acid were used to estimate the extent of NEB during 1 year of storage at 30 °C, and the relationships between each of these parameters and A420 were established. The investigation was carried out to explore the possibility of replacing the additive E330, commonly used as an acidifier, with turbid or clarified lemon juice (TLJ or CLJ) to obtain a nutritionally sound product with better retention of quality. The a, La/b, glucose and fructose values were positively correlated with A420, and all proved to be good indicators of browning development. Overall results showed that replacing the acidifier E330 with CLJ for controlling pH in canned peach halves in syrup had some advantages.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Charman, H.P.; White, M.H.; Rahman, R.
1976-01-01
The major internal protein, p30, of rat type C virus (RaLV) was purified and utilized to establish intra- and interspecies radioimmunoassays. Three rat viruses were compared in homologous and heterologous intraspecies assays with no evidence of type specificity. The only heterologous viruses to give inhibition in these species assays were the feline (FeLV) and hamster (HaLV) type C viruses; these reactions were incomplete and required high virus concentrations. An interspecies assay using a goat antiserum prepared after sequentially immunizing with FeLV, RD 114, and woolly monkey virus p30's and labeled RaLV p30 was inhibited by all mammalian type C viruses, although preferentially by RaLV, FeLV, and HaLV. Thus, as in a previously reported assay developed with HaLV p30, rat, hamster, and cat p30's seem more closely related to each other than to mouse type C virus p30. High levels of specific antigen were found in all cell lines producing rat virus, whereas embryonic tissues from several rat strains and cell lines considered virus-free based on other tests were negative for p30. Rats bearing tumors containing Moloney murine sarcoma virus (RaLV) did not contain free circulating antibody to RaLV p30. Fifty-one human tumor extracts (including two tumor cell lines) were tested for activity in the RaLV species and 47 in the interspecies assays after Sephadex gel filtration and pooling of material in the 15,000- to 40,000-molecular-weight range. At a sensitivity level of 7 ng/ml (0.7 ng/assay) in the interspecies assay, all human tissues, with one exception, were negative. The one positive result is considered nonspecific based on proteolysis of the labeled antigen. Input tissue protein of the purified tumor extracts averaged 1.9 mg/ml with a range of less than 0.025 to 22 mg/ml. Tissues from NIH Swiss mice processed in the same manner were positive in the interspecies assay but negative in the intraspecies RaLV assay.
Electromagnetic pump stator coil
Fanning, A.W.; Dahl, L.R.
1996-06-25
An electrical stator coil for an electromagnetic pump includes a continuous conductor strip having first and second terminals at opposite ends thereof and an intermediate section disposed therebetween. The strip is configured in first and second coil halves, with the first coil half including a plurality of windings extending from the first terminal to the intermediate section, and the second coil half including a plurality of windings extending from the second terminal to the intermediate section. The first and second coil halves are disposed coaxially, and the first and second terminals are disposed radially inwardly therefrom with the intermediate section being disposed radially outwardly therefrom. 9 figs.
Molecularly Imprinted Polymers with DNA Aptamer Fragments as Macromonomers.
Zhang, Zijie; Liu, Juewen
2016-03-01
Molecularly imprinted polymers (MIPs) are produced in the presence of a template molecule. After removing the template, the cavity can selectively rebind the template. MIPs are attractive functional materials with a low cost and high stability, but traditional MIPs often suffer from low binding affinity. This study employs DNA aptamer fragments as macromonomers to improve MIPs. The DNA aptamer for adenosine was first split into two halves, fluorescently labeled, and copolymerized into MIPs. With a fluorescence quenching assay, the importance of imprinting was confirmed. Further studies were carried out using isothermal titration calorimetry (ITC). Compared to the mixture of the free aptamer fragments, their MIPs doubled the binding affinity. Each free aptamer fragment alone cannot bind adenosine, whereas MIPs containing each fragment are effective binders. We further shortened one of the aptamer fragments, and the DNA length was pushed to as short as six nucleotides, yielding MIPs with a dissociation constant of 27 μM for adenosine. This study provides a new method for preparing functional MIP materials by combining high-affinity biopolymer fragments with low-cost synthetic monomers, achieving higher binding affinity and enabling signaling of binding events based on DNA chemistry.
1948-06-25
[OCR-damaged record: the legible fragments indicate the passage defines the noise figure of a radio receiver as a quotient of ratios of available fluctuation noise, generated either in the receiving system or externally, and discusses sudden disturbances and absorption effects.]
2015-09-01
changing the weight file used without redeploying the application. 2.1 Mobile Device We used the same Sprint-brand Galaxy S3 smart phone. The... Galaxy S3 line of smart phones varied in its technical specifications depending on the carrier. For reference, the Sprint-brand Galaxy S3 has the
New Media Literacy Education (NMLE): A Developmental Approach
ERIC Educational Resources Information Center
Graber, Diana
2012-01-01
The digital world is full of both possibility and peril, with rules of engagement being hashed out as we go. While schools are still "hesitant to embrace new technologies as a backlash from the significant, and largely ineffectual, investment in classroom computers as an instructional panacea during the mid-1990s" (Collins and Halverson 2009),…
Towards routine determination of focal mechanisms obtained from first motion P-wave arrivals
NASA Astrophysics Data System (ADS)
Lentas, K.
2018-03-01
The Bulletin of the International Seismological Centre (ISC) contains information on earthquake mechanisms collected from many different sources, including national and global agencies, resulting in satisfactory coverage over a wide magnitude range (M ˜2-9). Nevertheless, there is still a vast number of earthquakes with no reported source mechanisms, especially for magnitudes up to 5. This study investigates the possibility of calculating earthquake focal mechanisms in a routine and systematic way based on P-wave first motion polarities. Any available parametric data in the ISC database are used, as well as auto-picked polarities from waveform data up to teleseismic epicentral distances (90°) for stations that are not reported to the ISC. The determination of the earthquake mechanisms is carried out with a modified version of the HASH algorithm that is compatible with a wide range of epicentral distances and takes into account the ellipsoids defined by the ISC location errors and the Earth's structure uncertainties. Initially, benchmark tests for a set of ISC-reviewed earthquakes (mb > 4.5) are carried out, and the HASH mechanism classification scheme is used to define mechanism quality. Focal mechanisms of quality A, B and C with an azimuthal gap up to 90° compare well with the benchmark mechanisms. Nevertheless, the majority of the obtained mechanisms fall into class D as a result of limited polarity data from stations at local/regional epicentral distances. Specifically, computation of the minimum rotation angle between the obtained mechanisms and the benchmarks reveals that 41 per cent of the examined earthquakes show rotation angles up to 35°. Finally, the current technique is applied to a small set of earthquakes from the reviewed ISC Bulletin, and source mechanisms for 62 earthquakes with no previously reported mechanisms are successfully obtained.
Theoretical Calculations of Supersonic Wave Drag at Zero Lift for a Particular Store Arrangement
NASA Technical Reports Server (NTRS)
Margolis, Kenneth; Malvestuto, Frank S., Jr.; Maxie, Peter J., Jr.
1958-01-01
An analysis, based on the linearized thin-airfoil theory for supersonic speeds, of the wave drag at zero lift has been carried out for a simple two-body arrangement consisting of two wedgelike surfaces, each with a rhombic lateral cross section and emanating from a common apex. Such an arrangement could be used as two stores, either embedded within or mounted below a wing, or as auxiliary bodies wherein the upper halves could be used as stores and the lower halves for bomb or missile purposes. The complete range of supersonic Mach numbers has been considered and it was found that by orienting the axes of the bodies relative to each other a given volume may be redistributed in a manner which enables the wave drag to be reduced within the lower supersonic speed range (where the leading edge is substantially subsonic). At the higher Mach numbers, the wave drag is always increased. If, in addition to a constant volume, a given maximum thickness-chord ratio is imposed, then canting the two surfaces results in higher wave drag at all Mach numbers. For purposes of comparison, analogous drag calculations for the case of two parallel winglike bodies with the same cross-sectional shapes as the canted configuration have been included. Consideration is also given to the favorable (dragwise) interference pressures acting on the blunt bases of both arrangements.
NASA Astrophysics Data System (ADS)
Castro-Alvaredo, Olalla; Chen, Yixiong; Doyon, Benjamin; Hoogeveen, Marianne
2014-03-01
We evaluate the exact energy current and scaled cumulant generating function (related to the large-deviation function) in non-equilibrium steady states with energy flow, in any integrable model of relativistic quantum field theory (IQFT) with diagonal scattering. Our derivations are based on various recent results of Bernard and Doyon. The steady states are built by connecting homogeneously two infinite halves of the system thermalized at different temperatures Tl, Tr, and waiting for a long time. We evaluate the current J(Tl, Tr) using the exact QFT density matrix describing these non-equilibrium steady states and using Zamolodchikov’s method of the thermodynamic Bethe ansatz (TBA). The scaled cumulant generating function is obtained from the extended fluctuation relations which hold in integrable models. We verify our formula in particular by showing that the conformal field theory (CFT) result is obtained in the high-temperature limit. We analyze numerically our non-equilibrium steady-state TBA equations for three models: the sinh-Gordon model, the roaming trajectories model, and the sine-Gordon model at a particular reflectionless point. Based on the numerics, we conjecture that an infinite family of non-equilibrium c-functions, associated with the scaled cumulants, can be defined, which we interpret physically. We study the full scaled distribution function and find that it can be described by a set of independent Poisson processes. Finally, we show that the ‘additivity’ property of the current, which is known to hold in CFT and was proposed to hold more generally, does not hold in general IQFT—that is, J(Tl, Tr) is not of the form f(Tl) - f(Tr).
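For reference, the high-temperature (CFT) limit verified above has a well-known closed form; with central charge c and units in which ħ = k_B = 1, the mean energy current between reservoirs at temperatures T_l and T_r is

```latex
J_{\mathrm{CFT}}(T_l, T_r) \;=\; \frac{c\pi}{12}\left(T_l^2 - T_r^2\right)
```

which is manifestly of the additive form f(T_l) - f(T_r); the abstract's final point is that this additivity fails away from the CFT limit in general IQFT.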
A Secure and Robust Object-Based Video Authentication System
NASA Astrophysics Data System (ADS)
He, Dajun; Sun, Qibin; Tian, Qi
2004-12-01
An object-based video authentication system, which combines watermarking, error correction coding (ECC), and digital signature techniques, is presented for protecting the authenticity between video objects and their associated backgrounds. In this system, a set of angular radial transformation (ART) coefficients is selected as the feature to represent the video object and the background, respectively. ECC and cryptographic hashing are applied to those selected coefficients to generate the robust authentication watermark. This content-based, semifragile watermark is then embedded into the objects frame by frame before MPEG4 coding. In watermark embedding and extraction, groups of discrete Fourier transform (DFT) coefficients are randomly selected, and their energy relationships are employed to hide and extract the watermark. The experimental results demonstrate that our system is robust to MPEG4 compression, object segmentation errors, and some common object-based video processing such as object translation, rotation, and scaling while securely preventing malicious object modifications. The proposed solution can be further incorporated into public key infrastructure (PKI).
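The energy-relationship embedding idea can be illustrated with a toy sketch (not the authors' exact scheme; the group indices, strength factor, and naive DFT below are illustrative assumptions): a bit is hidden by forcing the energy of one group of DFT coefficients above or below another group, and extracted by comparing the two energies.

```python
import cmath

def dft(x):
    # naive discrete Fourier transform (illustrative; use an FFT in practice)
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def group_energy(coeffs, idx):
    return sum(abs(coeffs[i]) ** 2 for i in idx)

def embed_bit(coeffs, g1, g2, bit, strength=1.5):
    # enforce E(g1) > E(g2) for bit 1, E(g1) < E(g2) for bit 0,
    # by scaling the weaker group's coefficients
    out = list(coeffs)
    e1, e2 = group_energy(out, g1), group_energy(out, g2)
    if bit == 1 and e1 <= e2:
        s = (strength * e2 / e1) ** 0.5
        for i in g1:
            out[i] *= s
    elif bit == 0 and e2 <= e1:
        s = (strength * e1 / e2) ** 0.5
        for i in g2:
            out[i] *= s
    return out

def extract_bit(coeffs, g1, g2):
    # the hidden bit is simply the sign of the energy relationship
    return 1 if group_energy(coeffs, g1) > group_energy(coeffs, g2) else 0
```

Because only a coarse energy relationship carries the bit, moderate distortions (e.g. lossy compression) leave it recoverable, which is the semifragile property the system relies on.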
New security infrastructure model for distributed computing systems
NASA Astrophysics Data System (ADS)
Dubenskaya, J.; Kryukov, A.; Demichev, A.; Prikhodko, N.
2016-02-01
In this paper we propose a new approach to setting up a user-friendly and yet secure authentication and authorization procedure in a distributed computing system. The security concept of most heterogeneous distributed computing systems is based on a public key infrastructure along with proxy certificates, which are used for rights delegation. In practice, the contradiction between the limited lifetime of proxy certificates and the unpredictable processing time of requests is a big issue for end users of the system. We propose to use hashes that are unlimited in time and individual for each request instead of proxy certificates. Our approach avoids the use of proxy certificates altogether, making the security infrastructure of a distributed computing system easier to develop, support and use.
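The abstract does not specify how the per-request hashes are constructed; a minimal sketch, assuming an HMAC over the user identity and a unique request identifier, keyed by a secret held by the authorization service:

```python
import hashlib
import hmac
import json

def issue_request_token(service_key: bytes, user: str, request_id: str) -> str:
    # per-request credential: no expiry time is encoded, so it cannot
    # "outlive" the job the way a short-lived proxy certificate can
    msg = json.dumps({"user": user, "request": request_id}, sort_keys=True).encode()
    return hmac.new(service_key, msg, hashlib.sha256).hexdigest()

def verify_request_token(service_key: bytes, user: str,
                         request_id: str, token: str) -> bool:
    expected = issue_request_token(service_key, user, request_id)
    return hmac.compare_digest(expected, token)
```

Since each token is bound to a single request, leaking one token does not delegate the broad rights that a compromised proxy certificate would.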
A novel sub-shot segmentation method for user-generated video
NASA Astrophysics Data System (ADS)
Lei, Zhuo; Zhang, Qian; Zheng, Chi; Qiu, Guoping
2018-04-01
With the proliferation of user-generated videos, temporal segmentation is becoming a challenging problem. Traditional video temporal segmentation methods, such as shot detection, cannot handle unedited user-generated videos, since these often contain only a single long shot. We propose a novel temporal segmentation framework for user-generated video. It finds similar frames with a tree-partitioning min-Hash technique, constructs sparse temporally constrained affinity sub-graphs, and finally divides the video into sub-shot-level segments with a dense-neighbor-based clustering method. Experimental results show that our approach outperforms all related methods. Furthermore, the results indicate that the proposed approach segments user-generated videos at an average human level.
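The min-Hash step for finding similar frames can be sketched generically (the tree-partitioning variant and the actual frame features are beyond the abstract; the feature strings below are made up):

```python
def minhash_signature(features, seeds):
    # one signature entry per seed: minimum of a seeded hash over the feature set
    return [min(hash((seed, f)) for f in features) for seed in seeds]

def estimated_similarity(sig_a, sig_b):
    # fraction of agreeing entries estimates the Jaccard similarity of the sets
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

seeds = range(200)
frame_a = {"blob3", "edge7", "corner2", "texture9", "blob5"}
frame_b = {"blob3", "edge7", "corner2", "texture9", "blob8"}   # near-duplicate frame
frame_c = {"sky1", "grass4", "cloud2"}                         # unrelated frame

sig_a = minhash_signature(frame_a, seeds)
sim_ab = estimated_similarity(sig_a, minhash_signature(frame_b, seeds))
sim_ac = estimated_similarity(sig_a, minhash_signature(frame_c, seeds))
```

Comparing short signatures instead of full feature sets is what makes pairwise frame comparison cheap enough to build the affinity sub-graphs over a whole video.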
NASA Astrophysics Data System (ADS)
Widesott, L.; Strigari, L.; Pressello, M. C.; Benassi, M.; Landoni, V.
2008-03-01
We investigated the role and weight of the parameters involved in intensity-modulated radiation therapy (IMRT) optimization based on the generalized equivalent uniform dose (gEUD) method, for prostate and head-and-neck plans. We systematically varied the parameters (gEUDmax and weight) involved in the gEUD-based optimization of the rectal wall and parotid glands. We found that a proper value of the weight factor, while still guaranteeing planning target volume coverage, produced similar organ-at-risk dose-volume (DV) histograms for different gEUDmax with fixed a = 1. Most importantly, we formulated a simple relation that links the reference gEUDmax and the associated weight factor. As a secondary objective, we compared plans obtained with gEUD-based optimization against plans based on DV criteria, using normal tissue complication probability (NTCP) models. gEUD criteria seemed to improve sparing of the rectum and parotid glands with respect to DV-based optimization: the mean dose, V40 and V50 values to the rectal wall were decreased by about 10%, and the mean dose to the parotids decreased by about 20-30%. Beyond OAR sparing, we underline the halving of the OAR optimization time achieved with the gEUD-based cost function. Using NTCP models, we found differences between the two optimization criteria for the parotid glands, but not for the rectal wall.
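For context, the gEUD referred to above is conventionally defined over the N dose voxels d_i of a structure (the standard Niemierko form) as

```latex
\mathrm{gEUD} = \left( \frac{1}{N} \sum_{i=1}^{N} d_i^{\,a} \right)^{1/a}
```

With a = 1, as used here, the gEUD reduces to the mean dose, appropriate for parallel organs such as the parotids; large positive a instead approaches the maximum dose, as used for serial structures.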
Effects of intermittent-endurance fitness on match performance in young male soccer players.
Castagna, Carlo; Impellizzeri, Franco; Cecchini, Emilio; Rampinini, Ermanno; Alvarez, José Carlos Barbero
2009-10-01
The purpose of this study was to examine the effect of specific endurance (Yo-Yo Intermittent Recovery test level 1, Yo-Yo IR1) on match performance in male youth soccer. Twenty-one young male soccer players (age 14.1 +/- 0.2 years) were involved in the study. Players were observed during international championship games of the corresponding age categories and completed the Yo-Yo IR1 on a separate occasion. Physical (distance coverage) and physiological match demands were assessed using Global Positioning System technology and heart rate (HR) short-range telemetry, respectively. During the match (two 30-minute halves), players covered 6,204 +/- 731 m, of which 985 +/- 362 m (16%) were performed at high intensities (speed >13 km/h, HIA). A significant decrement (3.8%, p = 0.003) in match coverage was evident during the second half. No significant difference between halves was observed for HIA (p = 0.56) or sprint (speed >18 km/h, SPR) distances (p = 0.07). During the first and second halves, players attained 86 +/- 5.5% and 85 +/- 6.0% of HRmax (p = 0.17), respectively. Peak HR during the first and second halves was 100 +/- 4% and 99.4 +/- 4.7% of HRmax, respectively. Yo-Yo IR1 performance (842 +/- 352 m) was significantly related to match HIA (r = 0.77, p < 0.001) and total distance (r = 0.65, p = 0.002). These results show that specific endurance, as determined by Yo-Yo IR1 performance, positively affects physical match performance in young male soccer players. Consequently, the Yo-Yo IR1 test may be regarded as a valid test to assess game readiness and guide training prescription in male youth soccer.
ARNetMiT R Package: association rules based gene co-expression networks of miRNA targets.
Özgür Cingiz, M; Biricik, G; Diri, B
2017-03-31
miRNAs are key regulators that bind to target genes to suppress their expression. The relations between miRNAs and their target genes enable users to derive co-expressed genes that may be involved in similar biological processes and functions in cells. We hypothesize that target genes of miRNAs are co-expressed when they are regulated by multiple miRNAs. With these co-expressed genes, we can theoretically construct gene co-expression networks (GCNs) related to 152 diseases. In this study, we introduce ARNetMiT, which utilizes a hash-based association rule algorithm in a novel way to infer GCNs from miRNA-target gene data. We also present the R package of ARNetMiT, which infers and visualizes GCNs of diseases selected by users. Our approach treats miRNAs as transactions and target genes as their items. Support and confidence values are used to prune association rules on miRNA-target gene data to construct support-based GCNs (sGCNs) along with support- and confidence-based GCNs (scGCNs). We use overlap analysis and topological features for the performance analysis of the GCNs. We also infer GCNs with popular GNI algorithms for comparison with the GCNs of ARNetMiT. Overlap analysis shows that ARNetMiT outperforms the compared GNI algorithms. We see that using high confidence values in scGCNs increases the ratio of overlapping gene-gene interactions between the compared methods. According to the evaluation of the topological features of ARNetMiT-based GCNs, the node degrees follow a power-law distribution. The hub genes discovered by ARNetMiT-based GCNs are consistent with the literature.
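The rule-mining step can be illustrated with a plain counting sketch under the abstract's framing (miRNAs as transactions, target genes as items); this omits ARNetMiT's hash-based acceleration, and the gene names and thresholds below are made up:

```python
from itertools import combinations

def mine_pair_rules(transactions, min_support, min_confidence):
    # transactions: miRNA -> set of target genes
    n = len(transactions)
    gene_count, pair_count = {}, {}
    for targets in transactions.values():
        for g in targets:
            gene_count[g] = gene_count.get(g, 0) + 1
        for pair in combinations(sorted(targets), 2):
            pair_count[pair] = pair_count.get(pair, 0) + 1
    rules = []
    for (a, b), joint in pair_count.items():
        support = joint / n           # fraction of miRNAs targeting both genes
        if support < min_support:
            continue
        for lhs, rhs in ((a, b), (b, a)):
            confidence = joint / gene_count[lhs]
            if confidence >= min_confidence:
                rules.append((lhs, rhs, support, confidence))
    return rules

targets = {
    "miR-1":   {"GENE_A", "GENE_B", "GENE_C"},
    "miR-21":  {"GENE_A", "GENE_B"},
    "miR-155": {"GENE_A", "GENE_D"},
}
rules = mine_pair_rules(targets, min_support=0.5, min_confidence=0.9)
```

Each surviving rule (lhs -> rhs) contributes an edge between the two genes in the resulting co-expression network; the support-only pruning corresponds to the sGCNs, and adding the confidence cut corresponds to the scGCNs.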
Das, Ashok Kumar; Goswami, Adrijit
2013-06-01
Connected health care has several applications, including telecare medicine information systems, personally controlled health records systems, and patient monitoring. In such applications, user authentication can ensure the legality of patients. In user authentication for such applications, only the legal user/patient himself/herself is allowed to access the remote server, and no one can trace him/her from the transmitted data. Chang et al. proposed a uniqueness-and-anonymity-preserving remote user authentication scheme for connected health care (Chang et al., J Med Syst 37:9902, 2013). Their scheme uses the user's personal biometrics along with his/her password with the help of a smart card, and the user's biometrics is verified using BioHashing. Their scheme is efficient due to its use of only one-way hash functions and exclusive-or (XOR) operations. In this paper, we show that though their scheme is very efficient, it has several security weaknesses: (1) it has design flaws in the login and authentication phases, (2) it has design flaws in the password change phase, (3) it fails to protect against the privileged-insider attack, (4) it fails to protect against the man-in-the-middle attack, and (5) it fails to provide proper authentication. In order to remedy these security weaknesses, we propose an improvement of their scheme while retaining its original merits. We show that our scheme is efficient compared to Chang et al.'s scheme. Through the security analysis, we show that our scheme is secure against possible attacks. Further, we simulate our scheme for formal security verification using the widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool to ensure that it is secure against passive and active attacks. In addition, after successful authentication, the user and the server establish a shared secret session key for future secure communication.
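The hash-and-XOR style of such smart-card schemes can be sketched as follows (an illustrative pattern only, not the scheme of Chang et al. nor the proposed improvement; the identity/secret layout is an assumption):

```python
import hashlib
import hmac

def H(data: bytes) -> bytes:
    # one-way hash function, the only cryptographic primitive used
    return hashlib.sha256(data).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def issue_smart_card(identity: bytes, password: bytes, server_secret: bytes) -> bytes:
    # the card stores a server-derived value masked by the password hash,
    # so neither the password nor the server secret appears on the card
    return xor(H(identity + server_secret), H(password))

def verify_login(identity: bytes, password: bytes, card_value: bytes,
                 server_secret: bytes) -> bool:
    unmasked = xor(card_value, H(password))
    return hmac.compare_digest(unmasked, H(identity + server_secret))
```

Only hashing and XOR are used, which is why such schemes are cheap enough for resource-limited cards; the weaknesses listed in the abstract concern how these values are combined across the protocol phases, not the primitives themselves.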
Protein-protein docking using region-based 3D Zernike descriptors
2009-01-01
Background Protein-protein interactions are a pivotal component of many biological processes and mediate a variety of functions. Knowing the tertiary structure of a protein complex is therefore essential for understanding the interaction mechanism. However, experimental techniques to solve the structure of the complex are often found to be difficult. To this end, computational protein-protein docking approaches can provide a useful alternative to address this issue. Prediction of docking conformations relies on methods that effectively capture shape features of the participating proteins while giving due consideration to conformational changes that may occur. Results We present a novel protein docking algorithm based on the use of 3D Zernike descriptors as regional features of molecular shape. The key motivation of using these descriptors is their invariance to transformation, in addition to a compact representation of local surface shape characteristics. Docking decoys are generated using geometric hashing, which are then ranked by a scoring function that incorporates a buried surface area and a novel geometric complementarity term based on normals associated with the 3D Zernike shape description. Our docking algorithm was tested on both bound and unbound cases in the ZDOCK benchmark 2.0 dataset. In 74% of the bound docking predictions, our method was able to find a near-native solution (interface C-αRMSD ≤ 2.5 Å) within the top 1000 ranks. For unbound docking, among the 60 complexes for which our algorithm returned at least one hit, 60% of the cases were ranked within the top 2000. Comparison with existing shape-based docking algorithms shows that our method has a better performance than the others in unbound docking while remaining competitive for bound docking cases. Conclusion We show for the first time that the 3D Zernike descriptors are adept in capturing shape complementarity at the protein-protein interface and useful for protein docking prediction. 
Rigorous benchmark studies show that our docking approach has a superior performance compared to existing methods. PMID:20003235
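Geometric hashing itself is easy to illustrate in 2D (a simplified sketch; the paper works with 3D surface points and Zernike descriptors, and the bin size and point sets below are made up): every ordered point pair defines a basis, all points are recorded in a hash table keyed by their quantized basis-frame coordinates, and a query votes for the model basis it matches.

```python
import math
from collections import defaultdict

def to_frame(p, origin, u):
    # coordinates of p in the frame with the given origin and unit direction u
    vx, vy = -u[1], u[0]                       # left perpendicular
    dx, dy = p[0] - origin[0], p[1] - origin[1]
    return (dx * u[0] + dy * u[1], dx * vx + dy * vy)

def basis(a, b):
    dx, dy = b[0] - a[0], b[1] - a[1]
    n = math.hypot(dx, dy)
    return a, (dx / n, dy / n)

def quantize(coords, cell=0.5):
    return (round(coords[0] / cell), round(coords[1] / cell))

def build_table(model):
    # record every model point in every basis frame (preprocessing stage)
    table = defaultdict(set)
    for i, a in enumerate(model):
        for j, b in enumerate(model):
            if i == j:
                continue
            origin, u = basis(a, b)
            for p in model:
                table[quantize(to_frame(p, origin, u))].add((i, j))
    return table

def vote(table, query):
    # assume query[0], query[1] correspond to some model basis, then vote
    votes = defaultdict(int)
    origin, u = basis(query[0], query[1])
    for p in query:
        for model_basis in table.get(quantize(to_frame(p, origin, u)), ()):
            votes[model_basis] += 1
    return max(votes.items(), key=lambda kv: kv[1])
```

Because basis-frame coordinates are invariant under rotation and translation, a transformed copy of the model votes overwhelmingly for the corresponding model basis, which is how docking decoys are enumerated before scoring.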
Protein-protein docking using region-based 3D Zernike descriptors.
Venkatraman, Vishwesh; Yang, Yifeng D; Sael, Lee; Kihara, Daisuke
2009-12-09
Townsend, Raymond R; Anderson, Amanda Hyre; Chirinos, Julio A; Feldman, Harold I; Grunwald, Juan E; Nessel, Lisa; Roy, Jason; Weir, Matthew R; Wright, Jackson T; Bansal, Nisha; Hsu, Chi-Yuan
2018-06-01
Patients with chronic kidney disease (CKD) are at risk for further loss of kidney function and death, which occur despite reasonable blood pressure treatment. To determine whether arterial stiffness influences CKD progression and death independent of blood pressure, we conducted a prospective cohort study of CKD patients enrolled in the CRIC study (Chronic Renal Insufficiency Cohort). Using carotid-femoral pulse wave velocity (PWV), we examined the relationship between PWV and end-stage renal disease (ESRD), ESRD or halving of estimated glomerular filtration rate, and death from any cause. The 2795 participants we enrolled had a mean age of 60 years, 56.4% were men, 47.3% had diabetes mellitus, and the average estimated glomerular filtration rate at entry was 44.4 mL/min per 1.73 m². During follow-up, there were 504 ESRD events, 628 ESRD or halving of estimated glomerular filtration rate events, and 394 deaths. Patients in the highest tertile of PWV (>10.3 m/s) were at higher risk for ESRD (hazard ratio [95% confidence interval], 1.37 [1.05-1.80]), ESRD or 50% decline in estimated glomerular filtration rate (hazard ratio [95% confidence interval], 1.25 [0.98-1.58]), and death (hazard ratio [95% confidence interval], 1.72 [1.24-2.38]). PWV is a significant predictor of CKD progression and death in people with impaired kidney function. Incorporating PWV measurements may help better define the risks for these important health outcomes in patients with CKD. Interventions that reduce aortic stiffness deserve study in people with CKD. © 2018 American Heart Association, Inc.
Englert, Markus; Beier, Hildburg
2005-01-01
Pre-tRNA splicing is an essential process in all eukaryotes. It requires the concerted action of an endonuclease to remove the intron and a ligase to join the resulting tRNA halves, as studied best in the yeast Saccharomyces cerevisiae. Here, we report the first characterization of an RNA ligase protein and its gene from a higher eukaryotic organism that is an essential component of the pre-tRNA splicing process. Purification of tRNA ligase from wheat germ by successive column chromatographic steps identified a protein of 125 kDa by its ability to covalently bind AMP and to catalyse the ligation of tRNA halves and the circularization of linear introns. Peptide sequences obtained from the purified protein led to the elucidation of the corresponding proteins and their genes in the Arabidopsis and Oryza databases. The plant tRNA ligases exhibit no overall sequence homology to any known RNA ligases; however, they harbour a number of conserved motifs that indicate the presence of three intrinsic enzyme activities: an adenylyltransferase/ligase domain in the N-terminal region, a polynucleotide kinase in the centre and a cyclic phosphodiesterase domain at the C-terminal end. In vitro expression of the recombinant Arabidopsis tRNA ligase and functional analyses revealed all expected individual activities. Plant RNA ligases are active on a variety of substrates in vitro and are capable of inter- and intramolecular RNA joining. Hence, we conclude that their role in vivo might comprise yet unknown essential functions besides their involvement in pre-tRNA splicing. PMID:15653639
Multi-factor authentication using quantum communication
Hughes, Richard John; Peterson, Charles Glen; Thrasher, James T.; Nordholt, Jane E.; Yard, Jon T.; Newell, Raymond Thorson; Somma, Rolando D.
2018-02-06
Multi-factor authentication using quantum communication ("QC") includes stages for enrollment and identification. For example, a user enrolls for multi-factor authentication that uses QC with a trusted authority. The trusted authority transmits device factor information associated with a user device (such as a hash function) and user factor information associated with the user (such as an encrypted version of a user password). The user device receives and stores the device factor information and user factor information. For multi-factor authentication that uses QC, the user device retrieves its stored device factor information and user factor information, then transmits the user factor information to the trusted authority, which also retrieves its stored device factor information. The user device and trusted authority use the device factor information and user factor information (more specifically, information such as a user password that is the basis of the user factor information) in multi-factor authentication that uses QC.
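Stripping out the quantum channel, the enrollment/identification bookkeeping can be sketched classically (an illustrative reading of the abstract; the masking scheme and key sizes are assumptions, and the QC key establishment itself is not modeled):

```python
import hashlib
import hmac
import secrets

def enroll(password: bytes):
    # trusted authority issues two factors at enrollment:
    #  - device factor: a random salt that parameterizes the device's hash
    #  - user factor: the password hash masked (XOR) with an authority-held pad
    device_salt = secrets.token_bytes(16)
    pad = secrets.token_bytes(32)                      # retained by the authority
    pw_hash = hashlib.sha256(password).digest()
    user_factor = bytes(a ^ b for a, b in zip(pw_hash, pad))
    return device_salt, user_factor, pad

def identify(password: bytes, device_salt: bytes, user_factor: bytes, pad: bytes):
    # the authority unmasks the user factor and checks it against the password,
    # then both factors are combined into a session credential
    recovered = bytes(a ^ b for a, b in zip(user_factor, pad))
    if not hmac.compare_digest(recovered, hashlib.sha256(password).digest()):
        return None
    return hmac.new(device_salt, recovered, hashlib.sha256).hexdigest()
```

The multi-factor property comes from requiring both the stored device factor and knowledge of the password; in the patented system the QC link additionally protects the exchange of this material in transit.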
NASA Astrophysics Data System (ADS)
Fontana, Robert E.; Decad, Gary M.
2018-05-01
This paper describes trends in the storage technologies associated with Linear Tape Open (LTO) tape cartridges, hard disk drives (HDD), and NAND flash based storage devices including solid-state drives (SSD). The technology discussion centers on the relationship between cost/bit and bit density and, specifically, on how the Moore's Law perception that areal density doubles and cost/bit halves every two years is no longer being achieved for storage components. This observation, framed as a Moore's Law discussion, is demonstrated with nine-year storage technology trend data assembled from publicly available industry reporting sources.
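The Moore's Law pace used as the yardstick above is easy to quantify (a back-of-the-envelope sketch; the 2-year doubling period is the perception cited in the abstract, not a measured value):

```python
def annual_factor(doubling_period_years: float) -> float:
    # yearly growth implied by doubling every `doubling_period_years`
    return 2.0 ** (1.0 / doubling_period_years)

density_growth = annual_factor(2.0)   # ~1.414x areal density per year
cost_per_bit = 1.0 / density_growth   # ~0.707x, i.e. ~29% cheaper per year
```

The paper's point is that measured storage trends now fall short of this ~41%/year density growth and ~29%/year cost decline.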
Opportunistic Wireless Charging System Design for an On-Demand Shuttle Service
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meintz, Andrew; Doubleday, Kate; Markel, Tony
System right-sizing is critical to the implementation of in-motion wireless power transfer (WPT) for electric vehicles. This study evaluates potential system designs for an on-demand employee shuttle by determining the required battery size based on the rated power at a variable number of charging locations. Vehicle power and state of charge are simulated over the drive cycle, based on position and velocity data at every second from the existing shuttle. Adding just one WPT location can halve the battery size. Many configurations are capable of self-sustaining with WPT, while others benefit from supplemental stationary charging.
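The battery-sizing simulation described above can be sketched in a few lines (hypothetical numbers; the actual study uses second-by-second position and velocity data from the existing shuttle):

```python
def simulate_soc(demand_kw, positions_m, wpt_zones_m, wpt_kw, battery_kwh, soc0=1.0):
    # step the vehicle energy balance once per second; charge inside WPT zones
    soc = soc0
    trace = []
    for p_kw, x in zip(demand_kw, positions_m):
        in_zone = any(start <= x <= end for start, end in wpt_zones_m)
        net_kw = p_kw - (wpt_kw if in_zone else 0.0)
        soc = min(1.0, soc - net_kw / 3600.0 / battery_kwh)
        trace.append(soc)
    return trace
```

Sweeping the number and placement of zones against the minimum state of charge over the cycle is the right-sizing exercise: a single well-placed zone raises the worst-case SOC enough to permit a smaller battery.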
Successful ovarian autotransplant with no vascular reanastomosis in rats.
Barros, Flávio S V; de Oliveira, Rodrigo M; Alves, Felipe M T; Sampaio, Marcos; Geber, Selmo
2008-12-15
Preservation of ovarian function in women with premature ovarian failure remains an issue in reproductive medicine. Hormone replacement therapy for maintaining endocrine functions, and cryopreservation of embryos or oocytes for those who wish to become pregnant, are some of the choices. However, ovarian transplantation is a more physiological alternative, although problems related to ovarian ischemia have been reported. Herein, we investigated the viability of autologous transplantation of ovarian tissue into the rat peritoneum without vascular reanastomosis. Twenty animals in the study group had both ovaries excised, and each ovary was dissected into two halves. One half of an ovary was autotransplanted to the peritoneal surface, close to the left epigastric vessels. This simple procedure does not require surgical vascular reanastomosis while maintaining appropriate follicular growth, and therefore should be further considered as an alternative for women undergoing oophorectomy, not only to maintain endocrine functions but also for fertility preservation.
Global shape information increases but color information decreases the composite face effect.
Retter, Talia L; Rossion, Bruno
2015-01-01
The separation of visual shape and surface information may be useful for understanding holistic face perception--that is, the perception of a face as a single unit (Jiang, Blanz, & Rossion, 2011, Visual Cognition, 19, 1003-1034). A widely used measure of holistic face perception is the composite face effect (CFE), in which identical top face halves appear different when aligned with bottom face halves from different identities. In the present study the influences of global face shape (ie contour of the face) and color information on the CFE are investigated, with the hypothesis that global face shape supports but color impairs holistic face perception as measured in this paradigm. In experiment 1 the CFE is significantly larger when face stimuli possess natural global shape information than when they are cropped to a generic (ie oval) global shape; this effect is not found when the stimuli are presented inverted. In experiment 2 the CFE is significantly smaller when face stimuli are presented with color information than when they are presented in grayscale. These findings indicate that grayscale stimuli maintaining natural global face shape information provide the most adept measure of holistic face perception in the behavioral composite face paradigm. More generally, they show that reducing different types of information diagnostic for individual face perception can have opposite effects on the CFE, illustrating the functional dissociation between shape and surface information in face perception.
Tsigelny, Igor F.; Greenberg, Jerry; Kouznetsova, Valentina; Nigam, Sanjay K.
2009-01-01
Many major facilitator superfamily (MFS) transporters have similar 12-transmembrane α-helical topologies with two six-helix halves connected by a long loop. In humans, these transporters participate in key physiological processes and are also, as in the case of members of the organic anion transporter (OAT) family, of pharmaceutical interest. Recently, crystal structures of two bacterial representatives of the MFS family — the glycerol-3-phosphate transporter (GlpT) and lac-permease (LacY) — have been solved and, because of assumptions regarding the high structural conservation of this family, there is hope that the results can be applied to mammalian transporters as well. Based on crystallography, it has been suggested that a major conformational “switching” mechanism accounts for ligand transport by MFS proteins. This conformational switch would then allow periodic changes in the overall transporter configuration, resulting in its cyclic opening to the periplasm or cytoplasm. Following this lead, we have modeled a possible “switch” mechanism in GlpT, using the concept of rotation of protein domains as in the DynDom program [17] and membranephilic constraints predicted by the MAPAS program [23]. We found that the minima of the energies of intersubunit interactions support two alternate positions consistent with their transport properties. Thus, for GlpT, a “tilt” of 9°–10° rotation had the most favorable energetics of electrostatic interaction between the two halves of the transporter; moreover, this conformation was sufficient to suggest transport of the ligand across the membrane. We conducted steered molecular dynamics simulations of the GlpT-ligand system to explore how glycerol-3-phosphate would be handled by the “tilted” structure, and obtained results generally consistent with experimental mutagenesis data. 
While biochemical data remain most consistent with a single-site alternating access model, our results raise the possibility that, while the “rocker switch” may apply to certain MFS transporters, intermediate “tilted” states may exist under certain circumstances or as transitional structures. While wet lab experimental confirmation is required, our results suggest that transport mechanisms in this transporter family should probably not be assumed to be conserved simply based on standard structural homology considerations. Furthermore, steered molecular dynamics elucidating energetic interactions of ligands with amino acid residues in an appropriately modeled transporter may have predictive value in understanding the impact of mutations and/or polymorphisms on transporter function. PMID:18942157