Sample records for universal encoding scheme

  1. A New Quantum Gray-Scale Image Encoding Scheme

    NASA Astrophysics Data System (ADS)

    Naseri, Mosayeb; Abdolmaleky, Mona; Parandin, Fariborz; Fatahi, Negin; Farouk, Ahmed; Nazari, Reza

    2018-02-01

    In this paper, a new quantum image encoding scheme is proposed. The proposed scheme consists of four different encoding algorithms. The idea behind the scheme is a binary key generated randomly for each pixel of the original image; the encoding algorithm applied to a pixel is then selected according to the corresponding qubit pair of the randomized binary key. The security analysis shows that the scheme is strengthened both by the randomization of the generated binary image key and by the alteration of the gray-scale values of the image pixels using the qubits of the randomized key. Simulation of the proposed scheme confirms that the final encoded image cannot be recognized visually. Moreover, the histogram of the encoded image is flatter than that of the original, and the Shannon entropies of the final encoded images are significantly higher than the original's, which indicates that an attacker cannot gain any information about the encoded images. Supported by Kermanshah Branch, Islamic Azad University, Kermanshah, IRAN
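
The per-pixel keyed selection described in this record can be illustrated with a classical toy sketch. The four transforms below are illustrative assumptions, not the paper's quantum algorithms; only the structure (a random 2-bit key per pixel choosing among four invertible gray-value maps) mirrors the abstract.

```python
import random

# Classical toy analogue of the per-pixel keyed encoding (NOT the paper's
# quantum circuits): a random 2-bit key per pixel selects one of four
# invertible gray-value transforms. All four transforms are illustrative.
TRANSFORMS = [
    lambda v: v ^ 0xFF,             # bitwise complement (self-inverse)
    lambda v: (v + 0x55) % 256,     # constant offset
    lambda v: v ^ 0xA5,             # XOR with a fixed mask (self-inverse)
    lambda v: (256 - v) % 256,      # modular negation (self-inverse)
]
INVERSES = [
    lambda v: v ^ 0xFF,
    lambda v: (v - 0x55) % 256,
    lambda v: v ^ 0xA5,
    lambda v: (256 - v) % 256,
]

def encode_image(pixels, seed=0):
    """Encode a flat list of 8-bit gray values; return (key, ciphertext)."""
    rng = random.Random(seed)
    key = [rng.randrange(4) for _ in pixels]      # one 2-bit key per pixel
    return key, [TRANSFORMS[k](v) for v, k in zip(pixels, key)]

def decode_image(key, encoded):
    """Invert the per-pixel transforms using the same key."""
    return [INVERSES[k](v) for v, k in zip(encoded, key)]
```

Because each transform is invertible, holding the key allows exact recovery of the original pixels, while the keyed mixing of four maps flattens the value histogram in the spirit the abstract describes.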

  2. Dynamically protected cat-qubits: a new paradigm for universal quantum computation

    NASA Astrophysics Data System (ADS)

    Mirrahimi, Mazyar; Leghtas, Zaki; Albert, Victor V.; Touzard, Steven; Schoelkopf, Robert J.; Jiang, Liang; Devoret, Michel H.

    2014-04-01

    We present a new hardware-efficient paradigm for universal quantum computation which is based on encoding, protecting and manipulating quantum information in a quantum harmonic oscillator. This proposal exploits multi-photon driven dissipative processes to encode quantum information in logical bases composed of Schrödinger cat states. More precisely, we consider two schemes. In a first scheme, a two-photon driven dissipative process is used to stabilize a logical qubit basis of two-component Schrödinger cat states. While such a scheme ensures a protection of the logical qubit against the photon dephasing errors, the prominent error channel of single-photon loss induces bit-flip type errors that cannot be corrected. Therefore, we consider a second scheme based on a four-photon driven dissipative process which leads to the choice of four-component Schrödinger cat states as the logical qubit. Such a logical qubit can be protected against single-photon loss by continuous photon number parity measurements. Next, applying some specific Hamiltonians, we provide a set of universal quantum gates on the encoded qubits of each of the two schemes. In particular, we illustrate how these operations can be rendered fault-tolerant with respect to various decoherence channels of participating quantum systems. Finally, we also propose experimental schemes based on quantum superconducting circuits and inspired by methods used in Josephson parametric amplification, which should allow one to achieve these driven dissipative processes along with the Hamiltonians ensuring the universal operations in an efficient manner.

  3. Universal non-adiabatic holonomic quantum computation in decoherence-free subspaces with quantum dots inside a cavity

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Dong, Ping; Zhou, Jian; Cao, Zhuo-Liang

    2017-05-01

    A scheme for implementing non-adiabatic holonomic quantum computation in decoherence-free subspaces is proposed using the interactions between a microcavity and quantum dots. A universal set of quantum gates can be constructed on the encoded logical qubits with high fidelities. The current scheme can suppress both local and collective noises, which is very important for achieving universal quantum computation. Discussion of the gate fidelities with the experimental parameters shows that our schemes can be implemented with current experimental technology. Therefore, our scenario offers a method for universal and robust solid-state quantum computation.

  4. Universal quantum computation using all-optical hybrid encoding

    NASA Astrophysics Data System (ADS)

    Guo, Qi; Cheng, Liu-Yong; Wang, Hong-Fu; Zhang, Shou

    2015-04-01

    By employing displacement operations, single-photon subtractions, and weak cross-Kerr nonlinearity, we propose an alternative way of implementing several universal quantum logic gates for all-optical hybrid qubits encoded in both single-photon polarization states and coherent states. Since these schemes can be implemented using only local operations, without a teleportation procedure, they require fewer physical resources and simpler operations than existing schemes. With the help of displacement operations, a large phase shift of the coherent state can be obtained via currently available tiny cross-Kerr nonlinearities. Thus, all of these schemes are nearly deterministic and feasible with current technology, which makes them suitable for large-scale quantum computing. Project supported by the National Natural Science Foundation of China (Grant Nos. 61465013, 11465020, and 11264042).

  5. Universal Quantum Computing with Measurement-Induced Continuous-Variable Gate Sequence in a Loop-Based Architecture.

    PubMed

    Takeda, Shuntaro; Furusawa, Akira

    2017-09-22

    We propose a scalable scheme for optical quantum computing using measurement-induced continuous-variable quantum gates in a loop-based architecture. Here, time-bin-encoded quantum information in a single spatial mode is deterministically processed in a nested loop by an electrically programmable gate sequence. This architecture can process any input state and an arbitrary number of modes with almost minimum resources, and offers a universal gate set for both qubits and continuous variables. Furthermore, quantum computing can be performed fault tolerantly by a known scheme for encoding a qubit in an infinite-dimensional Hilbert space of a single light mode.

  6. Universal Quantum Computing with Measurement-Induced Continuous-Variable Gate Sequence in a Loop-Based Architecture

    NASA Astrophysics Data System (ADS)

    Takeda, Shuntaro; Furusawa, Akira

    2017-09-01

    We propose a scalable scheme for optical quantum computing using measurement-induced continuous-variable quantum gates in a loop-based architecture. Here, time-bin-encoded quantum information in a single spatial mode is deterministically processed in a nested loop by an electrically programmable gate sequence. This architecture can process any input state and an arbitrary number of modes with almost minimum resources, and offers a universal gate set for both qubits and continuous variables. Furthermore, quantum computing can be performed fault tolerantly by a known scheme for encoding a qubit in an infinite-dimensional Hilbert space of a single light mode.

  7. nuID: a universal naming scheme of oligonucleotides for Illumina, Affymetrix, and other microarrays

    PubMed Central

    Du, Pan; Kibbe, Warren A; Lin, Simon M

    2007-01-01

    Background Oligonucleotide probes that are sequence-identical may have different identifiers between manufacturers, and even between different versions of the same company's microarray; sometimes the same identifier is reused for a completely different oligonucleotide, resulting in ambiguity and potential mis-identification of the genes hybridizing to that probe. Results We have devised a unique, non-degenerate encoding scheme that can be used as a universal representation to identify an oligonucleotide across manufacturers. We have named the encoded representation 'nuID', for nucleotide universal identifier. Inspired by the fact that the raw sequence of the oligonucleotide is the true definition of identity for a probe, the encoding algorithm uniquely and non-degenerately transforms the sequence itself into a compact identifier (a lossless compression). In addition, we added a redundancy check (checksum) to validate the integrity of the identifier. These two steps, encoding plus checksum, result in an nuID, which is a unique, non-degenerate, permanent, robust, and efficient representation of the probe sequence. For commercial applications that require the sequence identity to be confidential, we also provide an encryption scheme for nuID. We demonstrate the utility of nuIDs for the annotation of Illumina microarrays, and we believe it has universal applicability as a source-independent naming convention for oligomers. Reviewers This article was reviewed by Itai Yanai, Rong Chen (nominated by Mark Gerstein), and Gregory Schuler (nominated by David Lipman). PMID:17540033
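
The encode-plus-checksum idea behind nuID can be sketched in a few lines. The published nuID algorithm fixes a particular base-64 alphabet, padding rule, and checksum computation; the choices below are simplified assumptions for illustration, not the actual nuID specification.

```python
# Sketch of the nuID idea: pack each base into 2 bits (a lossless
# compression of the sequence), render 6-bit groups in a 64-character
# alphabet, and prepend a checksum character. Alphabet, padding rule, and
# checksum here are illustrative assumptions, not the published algorithm.
ALPHABET = ("ABCDEFGHIJKLMNOPQRSTUVWXYZ"
            "abcdefghijklmnopqrstuvwxyz0123456789_.")
BASE_BITS = {"A": 0, "C": 1, "G": 2, "T": 3}

def nuid_sketch(seq):
    bits = 0
    for base in seq:                       # 2 bits per nucleotide
        bits = (bits << 2) | BASE_BITS[base]
    nbits = 2 * len(seq)
    pad = (-nbits) % 6                     # pad to a multiple of 6 bits
    bits <<= pad
    chars = []
    for shift in range(nbits + pad - 6, -1, -6):
        chars.append(ALPHABET[(bits >> shift) & 0x3F])
    body = "".join(chars)
    # Checksum folds in the pad count so identifiers stay non-degenerate.
    checksum = ALPHABET[(sum(ALPHABET.index(c) for c in body) + pad) % 64]
    return checksum + body
```

Since the packing is lossless, the identifier is a deterministic function of the probe sequence: identical sequences always map to the same ID regardless of manufacturer.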

  8. Linear optical quantum computing in a single spatial mode.

    PubMed

    Humphreys, Peter C; Metcalf, Benjamin J; Spring, Justin B; Moore, Merritt; Jin, Xian-Min; Barbieri, Marco; Kolthammer, W Steven; Walmsley, Ian A

    2013-10-11

    We present a scheme for linear optical quantum computing using time-bin-encoded qubits in a single spatial mode. We show methods for single-qubit operations and heralded controlled-phase (cphase) gates, providing a sufficient set of operations for universal quantum computing with the Knill-Laflamme-Milburn [Nature (London) 409, 46 (2001)] scheme. Our protocol is suited to currently available photonic devices and ideally allows arbitrary numbers of qubits to be encoded in the same spatial mode, demonstrating the potential for time-frequency modes to dramatically increase the quantum information capacity of fixed spatial resources. As a test of our scheme, we demonstrate the first entirely single spatial mode implementation of a two-qubit quantum gate and show its operation with an average fidelity of 0.84±0.07.

  9. Hybrid quantum gates between flying photon and diamond nitrogen-vacancy centers assisted by optical microcavities

    PubMed Central

    Wei, Hai-Rui; Long, Gui-Lu

    2015-01-01

    Hybrid quantum gates hold great promise for quantum information processing since they preserve the advantages of different quantum systems. Here we present compact quantum circuits to deterministically implement controlled-NOT, Toffoli, and Fredkin gates between a flying photon qubit and diamond nitrogen-vacancy (NV) centers assisted by microcavities. The target qubits of these universal quantum gates are encoded on the spins of the electrons associated with the diamond NV centers, which have long coherence times for storing information, and the control qubit is encoded on the polarization of the flying photon, which can be easily manipulated. Our quantum circuits are compact, economic, and simple. Moreover, they do not require additional qubits. The complexity of our schemes for universal three-qubit gates is much reduced compared to synthesis with two-qubit entangling gates. These schemes have high fidelities and efficiencies, and they are feasible in experiment. PMID:26271899

  10. Scheme for Quantum Computing Immune to Decoherence

    NASA Technical Reports Server (NTRS)

    Williams, Colin; Vatan, Farrokh

    2008-01-01

    A constructive scheme has been devised to enable mapping of any quantum computation into a spintronic circuit in which the computation is encoded in a basis that is, in principle, immune to quantum decoherence. The scheme is implemented by an algorithm that utilizes multiple physical spins to encode each logical bit in such a way that collective errors affecting all the physical spins do not disturb the logical bit. The scheme is expected to be of use to experimenters working on spintronic implementations of quantum logic. Spintronic computing devices use quantum-mechanical spins (typically, electron spins) to encode logical bits. Bits thus encoded (denoted qubits) are potentially susceptible to errors caused by noise and decoherence. The traditional model of quantum computation is based partly on the assumption that each qubit is implemented by use of a single two-state quantum system, such as an electron or other spin-1/2 particle. It can be surprisingly difficult to achieve certain gate operations (most notably, those of arbitrary 1-qubit gates) in spintronic hardware according to this model. However, ironically, certain 2-qubit interactions (in particular, spin-spin exchange interactions) can be achieved relatively easily in spintronic hardware. Therefore, it would be fortunate if it were possible to implement any 1-qubit gate by use of a spin-spin exchange interaction. While such a direct representation is not possible, it is possible to achieve an arbitrary 1-qubit gate indirectly by means of a sequence of four spin-spin exchange interactions, which could be implemented by use of four exchange gates. Accordingly, the present scheme provides for mapping any 1-qubit gate in the logical basis into an equivalent sequence of at most four spin-spin exchange interactions in the physical (encoded) basis.
The complexity of the mathematical derivation of the scheme from basic quantum principles precludes a description within this article; it must suffice to report that the derivation provides explicit constructions for finding the exchange couplings in the physical basis needed to implement any arbitrary 1-qubit gate. These constructions lead to spintronic encodings of quantum logic that are more efficient than those of a previously published scheme that utilizes a universal but fixed set of gates.

  11. Long-distance quantum communication over noisy networks without long-time quantum memory

    NASA Astrophysics Data System (ADS)

    Mazurek, Paweł; Grudka, Andrzej; Horodecki, Michał; Horodecki, Paweł; Łodyga, Justyna; Pankowski, Łukasz; Przysiężna, Anna

    2014-12-01

    The problem of sharing entanglement over large distances is crucial for implementations of quantum cryptography. A possible scheme for long-distance entanglement sharing and quantum communication exploits networks whose nodes share Einstein-Podolsky-Rosen (EPR) pairs. In Perseguers et al. [Phys. Rev. A 78, 062324 (2008), 10.1103/PhysRevA.78.062324] the authors put forward an important isomorphism between storing quantum information in dimension D and transmitting quantum information in a (D+1)-dimensional network. We show that it is possible to obtain long-distance entanglement in a noisy two-dimensional (2D) network, even when taking into account that encoding and decoding of a state are exposed to error. For 3D networks we propose a simple encoding and decoding scheme based solely on syndrome measurements on the 2D Kitaev topological quantum memory. Our procedure constitutes an alternative scheme of state injection that can be used for universal quantum computation on the 2D Kitaev code. It is shown that the encoding scheme is equivalent to teleporting the state from a specific node into the whole two-dimensional network through a virtual EPR pair existing within the rest of the network qubits. We present an analytic lower bound on the fidelity of the encoding and decoding procedure, using as our main tool a modified metric on the space-time lattice that deviates from the taxicab metric at the first and last time slices.

  12. Dynamical generation of noiseless quantum subsystems

    PubMed

    Viola; Knill; Lloyd

    2000-10-16

    We combine dynamical decoupling and universal control methods for open quantum systems with coding procedures. By exploiting a general algebraic approach, we show how appropriate encodings of quantum states result in obtaining universal control over dynamically generated noise-protected subsystems with limited control resources. In particular, we provide a constructive scheme based on two-body Hamiltonians for performing universal quantum computation over large noiseless spaces which can be engineered in the presence of arbitrary linear quantum noise.

  13. Judges' Agreement and Disagreement Patterns When Encoding Verbal Protocols.

    ERIC Educational Resources Information Center

    Schael, Jocelyne; Dionne, Jean-Paul

    The basis of agreement or disagreement among judges/evaluators when applying a coding scheme to concurrent verbal protocols was studied. The sample included 20 university graduates, from varied backgrounds; 10 subjects had and 10 subjects did not have experience in protocol analysis. The total sample was divided into four balanced groups according…

  14. From Three-Photon Greenberger-Horne-Zeilinger States to Ballistic Universal Quantum Computation.

    PubMed

    Gimeno-Segovia, Mercedes; Shadbolt, Pete; Browne, Dan E; Rudolph, Terry

    2015-07-10

    Single photons, manipulated using integrated linear optics, constitute a promising platform for universal quantum computation. A series of increasingly efficient proposals have shown linear-optical quantum computing to be formally scalable. However, existing schemes typically require extensive adaptive switching, which is experimentally challenging and noisy, thousands of photon sources per renormalized qubit, and/or large quantum memories for repeat-until-success strategies. Our work overcomes all these problems. We present a scheme to construct a cluster state universal for quantum computation which uses no adaptive switching, no large memories, and which is at least an order of magnitude more resource efficient than previous passive schemes. Unlike previous proposals, it is constructed entirely from loss-detecting gates and offers robustness to photon loss. Even without the use of an active loss-tolerant encoding, our scheme naturally tolerates a total loss rate of ∼1.6% in the photons detected in the gates. The scheme uses only three-photon Greenberger-Horne-Zeilinger states as a resource, together with a passive linear-optical network. We fully describe and model the iterative process of cluster generation, including photon loss and gate failure, demonstrating that building a linear-optical quantum computer may be less challenging than previously thought.

  15. Experiments in encoding multilevel images as quadtrees

    NASA Technical Reports Server (NTRS)

    Lansing, Donald L.

    1987-01-01

    Image storage requirements for several encoding methods are investigated, and the use of quadtrees with multi-gray-level or multicolor images is explored. The results of encoding a variety of images having up to 256 gray levels using three schemes (full raster, run-length, and quadtree) are presented. Although there is considerable literature on the use of quadtrees to store and manipulate binary images, their application to multilevel images is relatively undeveloped. The potential advantage of quadtree encoding is that an entire area with a uniform gray level may be encoded as a unit. A pointerless quadtree encoding scheme is described. Data are presented on the size of the quadtree required to encode selected images and on the relative storage requirements of the three encoding schemes. A segmentation scheme based on the statistical variation of gray levels within a quadtree quadrant is described. This parametric scheme may be used to control the storage required by an encoded image and to preprocess a scene for feature identification. Several sets of black-and-white and pseudocolor images obtained by varying the segmentation parameter are shown.
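
The uniform-quadrant idea, including a tolerance parameter in the spirit of the article's statistical segmentation scheme, can be sketched as follows. The function and list-based tree layout are illustrative; the article describes a pointerless encoding, which this sketch does not reproduce.

```python
# Minimal sketch of quadtree encoding for a gray-level image: a quadrant
# whose values all fall within `tol` of their mean is stored as a single
# leaf; otherwise it is split into four sub-quadrants. Raising `tol`
# trades fidelity for a smaller tree, echoing the segmentation parameter.
def quadtree(img, x, y, size, tol=0):
    vals = [img[y + j][x + i] for j in range(size) for i in range(size)]
    mean = sum(vals) / len(vals)
    if size == 1 or max(abs(v - mean) for v in vals) <= tol:
        return ("leaf", round(mean))               # uniform: one node
    h = size // 2
    return ("node",
            quadtree(img, x,     y,     h, tol),   # NW quadrant
            quadtree(img, x + h, y,     h, tol),   # NE quadrant
            quadtree(img, x,     y + h, h, tol),   # SW quadrant
            quadtree(img, x + h, y + h, h, tol))   # SE quadrant

img = [[0, 0, 5, 9],
       [0, 0, 5, 9],
       [7, 7, 7, 7],
       [7, 7, 7, 7]]
tree = quadtree(img, 0, 0, 4)
```

In this 4x4 example the uniform NW, SW, and SE quadrants each collapse to a single leaf, while the non-uniform NE quadrant splits into four pixel leaves, showing how uniform areas are encoded as units.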

  16. The Text Encoding Initiative: Flexible and Extensible Document Encoding.

    ERIC Educational Resources Information Center

    Barnard, David T.; Ide, Nancy M.

    1997-01-01

    The Text Encoding Initiative (TEI), an international collaboration aimed at producing a common encoding scheme for complex texts, examines the requirement for generality versus the requirement to handle specialized text types. Discusses how documents and users tax the limits of fixed schemes requiring flexible extensible encoding to support…

  17. Scalable implementation of boson sampling with trapped ions.

    PubMed

    Shen, C; Zhang, Z; Duan, L-M

    2014-02-07

    Boson sampling solves a classically intractable problem by sampling from a probability distribution given by matrix permanents. We propose a scalable implementation of boson sampling using local transverse phonon modes of trapped ions to encode the bosons. The proposed scheme allows deterministic preparation and high-efficiency readout of the bosons in the Fock states and universal mode mixing. With the state-of-the-art trapped ion technology, it is feasible to realize boson sampling with tens of bosons by this scheme, which would outperform the most powerful classical computers and constitute an effective disproof of the famous extended Church-Turing thesis.

  18. GlycomeDB – integration of open-access carbohydrate structure databases

    PubMed Central

    Ranzinger, René; Herget, Stephan; Wetter, Thomas; von der Lieth, Claus-Wilhelm

    2008-01-01

    Background Although carbohydrates are the third major class of biological macromolecules, after proteins and DNA, there is neither a comprehensive database for carbohydrate structures nor an established universal structure encoding scheme for computational purposes. Funding for further development of the Complex Carbohydrate Structure Database (CCSD or CarbBank) ceased in 1997, and since then several initiatives have developed independent databases with partially overlapping foci. For each database, different encoding schemes for residues and sequence topology were designed. Therefore, it is virtually impossible to obtain an overview of all deposited structures or to compare the contents of the various databases. Results We have implemented procedures which download the structures contained in the seven major databases, e.g. GLYCOSCIENCES.de, the Consortium for Functional Glycomics (CFG), the Kyoto Encyclopedia of Genes and Genomes (KEGG) and the Bacterial Carbohydrate Structure Database (BCSDB). We have created a new database called GlycomeDB, containing all structures, their taxonomic annotations and references (IDs) for the original databases. More than 100000 datasets were imported, resulting in more than 33000 unique sequences now encoded in GlycomeDB using the universal format GlycoCT. Inconsistencies were found in all public databases, which were discussed and corrected in multiple feedback rounds with the responsible curators. Conclusion GlycomeDB is a new, publicly available database for carbohydrate sequences with a unified, all-encompassing structure encoding format and NCBI taxonomic referencing. The database is updated weekly and can be downloaded free of charge. The JAVA application GlycoUpdateDB is also available for establishing and updating a local installation of GlycomeDB. With the advent of GlycomeDB, the distributed islands of knowledge in glycomics are now bridged to form a single resource. PMID:18803830

  19. Universal fault-tolerant quantum computation with only transversal gates and error correction.

    PubMed

    Paetznick, Adam; Reichardt, Ben W

    2013-08-30

    Transversal implementations of encoded unitary gates are highly desirable for fault-tolerant quantum computation. Though transversal gates alone cannot be computationally universal, they can be combined with specially distilled resource states in order to achieve universality. We show that "triorthogonal" stabilizer codes, introduced for state distillation by Bravyi and Haah [Phys. Rev. A 86, 052329 (2012)], admit transversal implementation of the controlled-controlled-Z gate. We then construct a universal set of fault-tolerant gates without state distillation by using only transversal controlled-controlled-Z, transversal Hadamard, and fault-tolerant error correction. We also adapt the distillation procedure of Bravyi and Haah to Toffoli gates, improving on existing Toffoli distillation schemes.

  20. A novel encoding scheme for effective biometric discretization: Linearly Separable Subcode.

    PubMed

    Lim, Meng-Hui; Teoh, Andrew Beng Jin

    2013-02-01

    Separability in a code is crucial in guaranteeing a decent Hamming-distance separation among the codewords. In multibit biometric discretization where a code is used for quantization-intervals labeling, separability is necessary for preserving distance dissimilarity when feature components are mapped from a discrete space to a Hamming space. In this paper, we examine separability of Binary Reflected Gray Code (BRGC) encoding and reveal its inadequacy in tackling interclass variation during the discrete-to-binary mapping, leading to a tradeoff between classification performance and entropy of binary output. To overcome this drawback, we put forward two encoding schemes exhibiting full-ideal and near-ideal separability capabilities, known as Linearly Separable Subcode (LSSC) and Partially Linearly Separable Subcode (PLSSC), respectively. These encoding schemes convert the conventional entropy-performance tradeoff into an entropy-redundancy tradeoff in the increase of code length. Extensive experimental results vindicate the superiority of our schemes over the existing encoding schemes in discretization performance. This opens up possibilities of achieving much greater classification performance with high output entropy.
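
The separability gap the paper identifies in BRGC can be demonstrated in a few lines. Here a thermometer-style code stands in for the ideally separable case; treating LSSC as a thermometer code is an illustrative simplification of the paper's construction, not its definition.

```python
# Contrast Binary Reflected Gray Code (BRGC) with a thermometer-style code
# exhibiting full separability: Hamming distance grows linearly with index
# distance, at the cost of a longer (redundant) code, illustrating the
# entropy-redundancy tradeoff the abstract mentions.
def brgc(i, nbits):
    """Standard BRGC codeword: g = i XOR (i >> 1)."""
    return format(i ^ (i >> 1), f"0{nbits}b")

def thermometer(i, n):
    """Interval i of n encoded as i ones then zeros (n - 1 bits)."""
    return "1" * i + "0" * (n - 1 - i)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# BRGC collapses distances: indices 0 and 3 differ in only one bit.
print(brgc(0, 4), brgc(3, 4), hamming(brgc(0, 4), brgc(3, 4)))
# Thermometer code: Hamming distance always equals index distance.
print(thermometer(0, 16), thermometer(3, 16),
      hamming(thermometer(0, 16), thermometer(3, 16)))
```

The first pair shows the distance-preservation failure (index distance 3, Hamming distance 1) that degrades discrete-to-binary mapping, while the thermometer code preserves distances exactly using 15 bits for 16 intervals instead of 4.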

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sciarrino, Fabio; De Martini, Francesco (Dipartimento di Fisica and Consorzio Nazionale Interuniversitario per le Scienze Fisiche della Materia, Università 'La Sapienza', Rome 00185)

    The optimal phase-covariant quantum cloning machine (PQCM) broadcasts the information associated with an input qubit into a multiqubit system, exploiting partial a priori knowledge of the input state. This additional a priori information leads to a higher fidelity than for universal cloning. The present article first analyzes different innovative schemes to implement the 1 → 3 PQCM. The method is then generalized to any 1 → M machine for an odd value of M by a theoretical approach based on the general angular momentum formalism. Finally, different experimental schemes, based either on linear or nonlinear methods and valid for single-photon polarization encoded qubits, are discussed.

  2. Tag-KEM from Set Partial Domain One-Way Permutations

    NASA Astrophysics Data System (ADS)

    Abe, Masayuki; Cui, Yang; Imai, Hideki; Kurosawa, Kaoru

    Recently a framework called Tag-KEM/DEM was introduced to construct efficient hybrid encryption schemes. Although it is known that the generic encode-then-encrypt construction of chosen-ciphertext-secure public-key encryption also applies to secure Tag-KEM construction, and known encoding methods such as OAEP can be used for this purpose, it is worth pursuing more efficient encoding methods dedicated to Tag-KEM construction. This paper proposes an encoding method that yields efficient Tag-KEM schemes when combined with set partial domain one-way permutations such as RSA and Rabin's encryption scheme. To our knowledge, this leads to the most practical hybrid encryption scheme of this type. We also present an efficient Tag-KEM which is CCA-secure under the general factoring assumption rather than the Blum factoring assumption.

  3. Towards biological plausibility of electronic noses: A spiking neural network based approach for tea odour classification.

    PubMed

    Sarkar, Sankho Turjo; Bhondekar, Amol P; Macaš, Martin; Kumar, Ritesh; Kaur, Rishemjit; Sharma, Anupma; Gulati, Ashu; Kumar, Amod

    2015-11-01

    The paper presents a novel encoding scheme for neuronal code generation for odour recognition using an electronic nose (EN). This scheme is based on channel encoding using multiple Gaussian receptive fields superimposed over the temporal EN responses. The encoded data is further applied to a spiking neural network (SNN) for pattern classification. Two forms of SNN, a back-propagation based SpikeProp and a dynamic evolving SNN are used to learn the encoded responses. The effects of information encoding on the performance of SNNs have been investigated. Statistical tests have been performed to determine the contribution of the SNN and the encoding scheme to overall odour discrimination. The approach has been implemented in odour classification of orthodox black tea (Kangra-Himachal Pradesh Region) thereby demonstrating a biomimetic approach for EN data analysis. Copyright © 2015 Elsevier Ltd. All rights reserved.
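
The Gaussian receptive-field channel encoding described in this record can be sketched as below. The number of fields, their widths, and the linear activation-to-spike-time map are illustrative assumptions in the spirit of population coding for spiking networks, not the paper's exact parameters.

```python
import math

# Sketch of channel encoding with overlapping Gaussian receptive fields:
# each sensor value excites m neurons, and each neuron's activation is
# mapped to a spike time (stronger activation -> earlier spike). Centre
# spacing, width factor, and the 0..t_max window are assumptions.
def gaussian_receptive_fields(x, lo, hi, m=6, t_max=10.0):
    sigma = (hi - lo) / (m - 1) / 1.5            # overlap between fields
    times = []
    for i in range(m):
        mu = lo + i * (hi - lo) / (m - 1)        # evenly spaced centres
        act = math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))  # in (0, 1]
        times.append(round(t_max * (1.0 - act), 3))  # strong -> early
    return times

# One EN sample in [0, 1] becomes a 6-neuron spike-time vector.
spikes = gaussian_receptive_fields(0.4, 0.0, 1.0)
```

The resulting spike-time vectors are what a SpikeProp-style or evolving SNN classifier would consume, one vector per sensor per time step of the electronic-nose response.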

  4. Efficacy of Distortion Correction on Diffusion Imaging: Comparison of FSL Eddy and Eddy_Correct Using 30 and 60 Directions Diffusion Encoding

    PubMed Central

    Yamada, Haruyasu; Abe, Osamu; Shizukuishi, Takashi; Kikuta, Junko; Shinozaki, Takahiro; Dezawa, Ko; Nagano, Akira; Matsuda, Masayuki; Haradome, Hiroki; Imamura, Yoshiki

    2014-01-01

    Diffusion imaging is a unique noninvasive tool to detect brain white matter trajectory and integrity in vivo. However, this technique suffers from spatial distortion and signal pileup or dropout originating from local susceptibility gradients and eddy currents. Although there are several methods to mitigate these problems, most techniques can be applicable either to susceptibility or eddy-current induced distortion alone with a few exceptions. The present study compared the correction efficiency of FSL tools, “eddy_correct” and the combination of “eddy” and “topup” in terms of diffusion-derived fractional anisotropy (FA). The brain diffusion images were acquired from 10 healthy subjects using 30 and 60 directions encoding schemes based on the electrostatic repulsive forces. For the 30 directions encoding, 2 sets of diffusion images were acquired with the same parameters, except for the phase-encode blips which had opposing polarities along the anteroposterior direction. For the 60 directions encoding, non–diffusion-weighted and diffusion-weighted images were obtained with forward phase-encoding blips and non–diffusion-weighted images with the same parameter, except for the phase-encode blips, which had opposing polarities. FA images without and with distortion correction were compared in a voxel-wise manner with tract-based spatial statistics. We showed that images corrected with eddy and topup possessed higher FA values than images uncorrected and corrected with eddy_correct with trilinear (FSL default setting) or spline interpolation in most white matter skeletons, using both encoding schemes. Furthermore, the 60 directions encoding scheme was superior as measured by increased FA values to the 30 directions encoding scheme, despite comparable acquisition time. 
This study supports the combination of eddy and topup as a superior correction tool in diffusion imaging rather than the eddy_correct tool, especially with trilinear interpolation, using 60 directions encoding scheme. PMID:25405472

  5. Efficacy of distortion correction on diffusion imaging: comparison of FSL eddy and eddy_correct using 30 and 60 directions diffusion encoding.

    PubMed

    Yamada, Haruyasu; Abe, Osamu; Shizukuishi, Takashi; Kikuta, Junko; Shinozaki, Takahiro; Dezawa, Ko; Nagano, Akira; Matsuda, Masayuki; Haradome, Hiroki; Imamura, Yoshiki

    2014-01-01

Diffusion imaging is a unique noninvasive tool to detect brain white matter trajectory and integrity in vivo. However, this technique suffers from spatial distortion and signal pileup or dropout originating from local susceptibility gradients and eddy currents. Although several methods exist to mitigate these problems, most techniques are applicable to either susceptibility- or eddy-current-induced distortion alone, with a few exceptions. The present study compared the correction efficiency of the FSL tools "eddy_correct" and the combination of "eddy" and "topup" in terms of diffusion-derived fractional anisotropy (FA). Brain diffusion images were acquired from 10 healthy subjects using 30- and 60-direction encoding schemes based on electrostatic repulsive forces. For the 30-direction encoding, 2 sets of diffusion images were acquired with the same parameters, except for the phase-encode blips, which had opposing polarities along the anteroposterior direction. For the 60-direction encoding, non-diffusion-weighted and diffusion-weighted images were obtained with forward phase-encode blips, and non-diffusion-weighted images with the same parameters, except for the phase-encode blips, which had opposing polarities. FA images without and with distortion correction were compared in a voxel-wise manner with tract-based spatial statistics. We showed that images corrected with eddy and topup possessed higher FA values than images left uncorrected or corrected with eddy_correct using trilinear (the FSL default) or spline interpolation in most white matter skeletons, under both encoding schemes. Furthermore, the 60-direction encoding scheme was superior to the 30-direction scheme, as measured by increased FA values, despite a comparable acquisition time. This study supports the combination of eddy and topup as a superior correction tool in diffusion imaging, rather than the eddy_correct tool, especially with trilinear interpolation, using the 60-direction encoding scheme.

  6. Non-adiabatic holonomic quantum computation in linear system-bath coupling

    PubMed Central

    Sun, Chunfang; Wang, Gangcheng; Wu, Chunfeng; Liu, Haodi; Feng, Xun-Li; Chen, Jing-Ling; Xue, Kang

    2016-01-01

Non-adiabatic holonomic quantum computation in decoherence-free subspaces protects quantum information from control imprecisions and decoherence. For non-collective decoherence, in which each qubit has its own bath, we show the implementations of two non-commutable holonomic single-qubit gates and one holonomic nontrivial two-qubit gate that compose a universal set of non-adiabatic holonomic quantum gates in decoherence-free subspaces of the decoupling group, with an encoding rate of (N - 2)/N. The proposed scheme is robust against control imprecisions and the non-collective decoherence, and its non-adiabatic property ensures a shorter operation time. We demonstrate that our proposed scheme can be realized by utilizing only two-qubit interactions rather than many-qubit interactions. Our results reduce the complexity of practical implementation of holonomic quantum computation in experiments. We also discuss the physical implementation of our scheme in coupled microcavities. PMID:26846444

  7. Non-adiabatic holonomic quantum computation in linear system-bath coupling.

    PubMed

    Sun, Chunfang; Wang, Gangcheng; Wu, Chunfeng; Liu, Haodi; Feng, Xun-Li; Chen, Jing-Ling; Xue, Kang

    2016-02-05

Non-adiabatic holonomic quantum computation in decoherence-free subspaces protects quantum information from control imprecisions and decoherence. For non-collective decoherence, in which each qubit has its own bath, we show the implementations of two non-commutable holonomic single-qubit gates and one holonomic nontrivial two-qubit gate that compose a universal set of non-adiabatic holonomic quantum gates in decoherence-free subspaces of the decoupling group, with an encoding rate of (N - 2)/N. The proposed scheme is robust against control imprecisions and the non-collective decoherence, and its non-adiabatic property ensures a shorter operation time. We demonstrate that our proposed scheme can be realized by utilizing only two-qubit interactions rather than many-qubit interactions. Our results reduce the complexity of practical implementation of holonomic quantum computation in experiments. We also discuss the physical implementation of our scheme in coupled microcavities.

  8. Practical somewhat-secure quantum somewhat-homomorphic encryption with coherent states

    NASA Astrophysics Data System (ADS)

    Tan, Si-Hui; Ouyang, Yingkai; Rohde, Peter P.

    2018-04-01

    We present a scheme for implementing homomorphic encryption on coherent states encoded using phase-shift keys. The encryption operations require only rotations in phase space, which commute with computations in the code space performed via passive linear optics, and with generalized nonlinear phase operations that are polynomials of the photon-number operator in the code space. This encoding scheme can thus be applied to any computation with coherent-state inputs, and the computation proceeds via a combination of passive linear optics and generalized nonlinear phase operations. An example of such a computation is matrix multiplication, whereby a vector representing coherent-state amplitudes is multiplied by a matrix representing a linear optics network, yielding a new vector of coherent-state amplitudes. By finding an orthogonal partitioning of the support of our encoded states, we quantify the security of our scheme via the indistinguishability of the encrypted code words. While we focus on coherent-state encodings, we expect that this phase-key encoding technique could apply to any continuous-variable computation scheme where the phase-shift operator commutes with the computation.
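The commutation property at the heart of this scheme can be sketched numerically: if coherent-state amplitudes are modeled as complex numbers, a phase-shift key commutes with any linear-optics network (a unitary matrix acting on the amplitude vector), so decrypting after the computation yields the same result as computing on the plaintext. This is a minimal illustration of the commutation argument only, not the paper's optical implementation; the matrix, key, and amplitudes below are arbitrary.

```python
import numpy as np

# Coherent-state amplitudes as a complex vector; encryption is a phase
# rotation exp(i*theta); the "computation" is a unitary U (linear optics).
rng = np.random.default_rng(0)
alpha = rng.normal(size=3) + 1j * rng.normal(size=3)   # plaintext amplitudes

# random unitary via QR decomposition, standing in for a linear optics network
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))

theta = 1.234                                # secret phase key
cipher = np.exp(1j * theta) * alpha          # encrypt: rotate in phase space
computed = Q @ cipher                        # compute on encrypted amplitudes
decrypted = np.exp(-1j * theta) * computed   # decrypt with the inverse rotation

# scalar phase rotations commute with matrix multiplication
assert np.allclose(decrypted, Q @ alpha)
```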

  9. Plasmonic Encoding

    DTIC Science & Technology

    2014-10-06

The nanosheets, like many SERS platforms, are ideally suited for encoding schemes based on the SERS signal from a variety of thiolated small... counterfeiting purposes. ...environments (like the surface of human hair). 2. Nanoflares: In 2007, we first introduced the concept of nanoflares. Nanoflares are a new class of

  10. Toward Improving Quality of End-of-Life Care: Encoding Clinical Guidelines and Standing Orders Using the Omaha System.

    PubMed

    Slipka, Allison F; Monsen, Karen A

    2018-02-01

End-of-life care (EOLC) relieves the suffering of millions of people around the globe each year. A growing body of hospice care research has led to the creation of several evidence-based clinical guidelines for EOLC. As evidence for the effectiveness of timely EOLC grows, so does the need for efficient information exchange between disciplines and across the care continuum. The purpose of this study was to investigate the feasibility of using the Omaha System as a framework for encoding interoperable evidence-based EOL interventions with specified temporality for use across disciplines and settings. Four evidence-based clinical guidelines and one current set of hospice standing orders were encoded using the Omaha System Problem Classification Scheme and Intervention Scheme, as well as the Systematized Nomenclature of Medicine-Clinical Terms (SNOMED CT). The resulting encoded guideline was entered in a Microsoft Excel spreadsheet and made available for public use on the Omaha System Guidelines website. The resulting EOLC guideline consisted of 153 interventions that may enable patients and their surrogates, clinicians, and ancillary providers to communicate interventions in a universally comprehensible way. Evidence-based interventions from diverse disciplines involved in EOLC are described within this guideline using the Omaha System. Because the Omaha System and clinical guidelines are maintained in the public domain, encoding interventions is achievable by anyone with access to the Internet and basic Excel skills. Using the guideline as a documentation template customized for unique patient needs, clinicians can quantify and track patient care across the care continuum to ensure timely evidence-based interventions. Clinical guidelines coded in the Omaha System can support the use of multidisciplinary evidence-based interventions to improve quality of EOLC across settings and professions. © 2017 Sigma Theta Tau International.

  11. Trinary signed-digit arithmetic using an efficient encoding scheme

    NASA Astrophysics Data System (ADS)

    Salim, W. Y.; Alam, M. S.; Fyath, R. S.; Ali, S. A.

    2000-09-01

The trinary signed-digit (TSD) number system is of interest for ultrafast optoelectronic computing systems since it permits parallel carry-free addition and borrow-free subtraction of two arbitrary-length numbers in constant time. In this paper, a simple coding scheme is proposed to encode decimal numbers directly into TSD form. The coding scheme enables one to perform parallel one-step TSD arithmetic operations. The proposed coding scheme uses only a 5-combination coding table instead of the 625-combination table recently reported for the recoded TSD arithmetic technique.
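For illustration, a direct decimal-to-signed-trinary conversion can be sketched as below. This is a minimal balanced-ternary encoding over the digit set {-1, 0, 1}; it is not the paper's 5-combination coding table, whose details are not reproduced in the abstract.

```python
def to_balanced_ternary(n):
    """Encode a decimal integer into signed trinary digits {-1, 0, 1},
    least significant digit first (balanced ternary)."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:          # represent remainder 2 as digit -1 plus a carry
            digits.append(-1)
            n = n // 3 + 1
        else:
            digits.append(r)
            n //= 3
    return digits

def from_balanced_ternary(digits):
    """Decode signed trinary digits back to a decimal integer."""
    return sum(d * 3**i for i, d in enumerate(digits))

assert to_balanced_ternary(25) == [1, -1, 0, 1]   # 1 - 3 + 0 + 27 = 25
assert from_balanced_ternary(to_balanced_ternary(25)) == 25
```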

  12. One-step trinary signed-digit arithmetic using an efficient encoding scheme

    NASA Astrophysics Data System (ADS)

    Salim, W. Y.; Fyath, R. S.; Ali, S. A.; Alam, Mohammad S.

    2000-11-01

The trinary signed-digit (TSD) number system is of interest for ultrafast optoelectronic computing systems since it permits parallel carry-free addition and borrow-free subtraction of two arbitrary-length numbers in constant time. In this paper, a simple coding scheme is proposed to encode decimal numbers directly into TSD form. The coding scheme enables one to perform parallel one-step TSD arithmetic operations. The proposed coding scheme uses only a 5-combination coding table instead of the 625-combination table recently reported for the recoded TSD arithmetic technique.

  13. Optical colour image watermarking based on phase-truncated linear canonical transform and image decomposition

    NASA Astrophysics Data System (ADS)

    Su, Yonggang; Tang, Chen; Li, Biyuan; Lei, Zhenkun

    2018-05-01

This paper presents a novel optical colour image watermarking scheme based on phase-truncated linear canonical transform (PT-LCT) and image decomposition (ID). In the proposed scheme, a PT-LCT-based asymmetric cryptography is designed to encode the colour watermark into a noise-like pattern, and an ID-based multilevel embedding method is constructed to embed the encoded colour watermark into a colour host image. The PT-LCT-based asymmetric cryptography, which can be optically implemented by double random phase encoding with a quadratic phase system, provides higher security to resist various common cryptographic attacks. The ID-based multilevel embedding method, which can be digitally implemented by a computer, disperses the information of the colour watermark better in the colour host image. The proposed colour image watermarking scheme possesses high security and achieves higher robustness while preserving the watermark’s invisibility. The good performance of the proposed scheme has been demonstrated by extensive experiments and comparison with other relevant schemes.

  14. Cavity QED implementation of non-adiabatic holonomies for universal quantum gates in decoherence-free subspaces with nitrogen-vacancy centers.

    PubMed

    Zhou, Jian; Yu, Wei-Can; Gao, Yu-Mei; Xue, Zheng-Yuan

    2015-06-01

A cavity QED implementation of non-adiabatic holonomic quantum computation in decoherence-free subspaces is proposed with nitrogen-vacancy centers coupled commonly to the whispering-gallery mode of a microsphere cavity, where a universal set of quantum gates can be realized on the qubits. In our implementation, with the assistance of appropriate driving fields, the quantum evolution is insensitive to the cavity field state, which is only virtually excited. The implemented non-adiabatic holonomies, utilizing optical transitions in the Λ-type three-level configuration of the nitrogen-vacancy centers, can be used to construct a universal set of quantum gates on the encoded logical qubits. Therefore, our scheme opens up the possibility of realizing universal holonomic quantum computation with cavity-assisted interaction on solid-state spins characterized by long coherence times.

  15. A Lossless Multichannel Bio-Signal Compression Based on Low-Complexity Joint Coding Scheme for Portable Medical Devices

    PubMed Central

    Kim, Dong-Sun; Kwon, Jin-San

    2014-01-01

Research on real-time health systems has received great attention in recent years, and the need for high-quality multichannel medical signal compression in personal medical product applications is increasing. The international MPEG-4 audio lossless coding (ALS) standard supports a joint channel-coding scheme for improving the compression performance of multichannel signals, and it is a very efficient compression method for multichannel biosignals. However, the computational complexity of such a multichannel coding scheme is significantly greater than that of other lossless audio encoders. In this paper, we present a multichannel hardware encoder based on a low-complexity joint-coding technique and a shared multiplier scheme for portable devices. A joint-coding decision method and a reference channel selection scheme are modified for the low-complexity joint coder. The proposed joint-coding decision method determines the optimized joint-coding operation based on the relationship between the cross-correlation of residual signals and the compression ratio. The reference channel selection is designed to select a channel for the entropy coding of the joint coding. The hardware encoder operates at a 40 MHz clock frequency and supports two-channel parallel encoding for the multichannel monitoring system. Experimental results show that the compression ratio increases by 0.06%, whereas the computational complexity decreases by 20.72% compared to the MPEG-4 ALS reference software encoder. In addition, the compression ratio increases by about 11.92% compared to a single-channel-based biosignal lossless data compressor. PMID:25237900

  16. Unconditionally secure multi-party quantum commitment scheme

    NASA Astrophysics Data System (ADS)

    Wang, Ming-Qiang; Wang, Xue; Zhan, Tao

    2018-02-01

A new unconditionally secure multi-party quantum commitment scheme is proposed in this paper by encoding the committed message into the phase of a quantum state. Multi-party means that there is more than one recipient in our scheme. We show that our quantum commitment scheme is unconditionally hiding and binding, and the hiding is perfect. Our technique is based on the interference of phase-encoded coherent states of light. Its security proof relies on the no-cloning theorem of quantum theory and the properties of quantum information.

  17. Error rates and resource overheads of encoded three-qubit gates

    NASA Astrophysics Data System (ADS)

    Takagi, Ryuji; Yoder, Theodore J.; Chuang, Isaac L.

    2017-10-01

    A non-Clifford gate is required for universal quantum computation, and, typically, this is the most error-prone and resource-intensive logical operation on an error-correcting code. Small, single-qubit rotations are popular choices for this non-Clifford gate, but certain three-qubit gates, such as Toffoli or controlled-controlled-Z (ccz), are equivalent options that are also more suited for implementing some quantum algorithms, for instance, those with coherent classical subroutines. Here, we calculate error rates and resource overheads for implementing logical ccz with pieceable fault tolerance, a nontransversal method for implementing logical gates. We provide a comparison with a nonlocal magic-state scheme on a concatenated code and a local magic-state scheme on the surface code. We find the pieceable fault-tolerance scheme particularly advantaged over magic states on concatenated codes and in certain regimes over magic states on the surface code. Our results suggest that pieceable fault tolerance is a promising candidate for fault tolerance in a near-future quantum computer.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sciarrino, Fabio; De Martini, Francesco

In several quantum information (QI) phenomena of large technological importance the information is carried by the phase of the quantum superposition states, or qubits. The phase-covariant cloning machine (PQCM) addresses precisely the problem of optimally copying these qubits with the largest attainable 'fidelity'. We present a general scheme which realizes the 1→3 phase-covariant cloning process by a combination of three different QI processes: the universal cloning, the NOT gate, and the projection over the symmetric subspace of the output qubits. The experimental implementation of a PQCM for polarization-encoded qubits, the first ever realized with photons, is reported.

  19. A Computer Oriented Scheme for Coding Chemicals in the Field of Biomedicine.

    ERIC Educational Resources Information Center

    Bobka, Marilyn E.; Subramaniam, J.B.

The chemical coding scheme of the Medical Coding Scheme (MCS), developed for use in the Comparative Systems Laboratory (CSL), is outlined and evaluated in this report. The chemical coding scheme provides a classification scheme and encoding method for drugs and chemical terms. Using the scheme, complicated chemical structures may be expressed…

  20. Fault-Tolerant Coding for State Machines

    NASA Technical Reports Server (NTRS)

    Naegle, Stephanie Taft; Burke, Gary; Newell, Michael

    2008-01-01

    Two reliable fault-tolerant coding schemes have been proposed for state machines that are used in field-programmable gate arrays and application-specific integrated circuits to implement sequential logic functions. The schemes apply to strings of bits in state registers, which are typically implemented in practice as assemblies of flip-flop circuits. If a single-event upset (SEU, a radiation-induced change in the bit in one flip-flop) occurs in a state register, the state machine that contains the register could go into an erroneous state or could hang, by which is meant that the machine could remain in undefined states indefinitely. The proposed fault-tolerant coding schemes are intended to prevent the state machine from going into an erroneous or hang state when an SEU occurs. To ensure reliability of the state machine, the coding scheme for bits in the state register must satisfy the following criteria: 1. All possible states are defined. 2. An SEU brings the state machine to a known state. 3. There is no possibility of a hang state. 4. No false state is entered. 5. An SEU exerts no effect on the state machine. Fault-tolerant coding schemes that have been commonly used include binary encoding and "one-hot" encoding. Binary encoding is the simplest state machine encoding and satisfies criteria 1 through 3 if all possible states are defined. Binary encoding is a binary count of the state machine number in sequence; the table represents an eight-state example. In one-hot encoding, N bits are used to represent N states: All except one of the bits in a string are 0, and the position of the 1 in the string represents the state. With proper circuit design, one-hot encoding can satisfy criteria 1 through 4. Unfortunately, the requirement to use N bits to represent N states makes one-hot coding inefficient.
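The two encodings described above can be contrasted in a short sketch. The helper names and the popcount validity check are illustrative, not from the article; the check shows why a single bit flip in a one-hot register is always detectable, since it leaves either zero or two bits set.

```python
# Binary vs. one-hot encoding for an 8-state machine, as plain bit strings.
N_STATES = 8

def binary_encode(state):
    # binary count of the state number: 3 bits cover 8 states
    return format(state, "03b")

def one_hot_encode(state):
    # N bits for N states: exactly one bit set, its position is the state
    return format(1 << state, "08b")

def one_hot_valid(bits):
    # a legal one-hot word has exactly one '1'; an SEU flips one bit and
    # yields zero or two ones, which this popcount check catches
    return bits.count("1") == 1

assert binary_encode(5) == "101"
assert one_hot_encode(0) == "00000001"
assert one_hot_valid(one_hot_encode(3))
corrupted = "00001010"            # simulate an SEU on a one-hot word
assert not one_hot_valid(corrupted)
```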

  1. The symmetric MSD encoder for one-step adder of ternary optical computer

    NASA Astrophysics Data System (ADS)

    Kai, Song; LiPing, Yan

    2016-08-01

The symmetric Modified Signed-Digit (MSD) encoding is important for achieving the one-step MSD adder of the Ternary Optical Computer (TOC). This paper describes the symmetric MSD encoding algorithm in detail and develops its truth table, which has nine rows and nine columns. According to the truth table, the state table was developed, and the optical-path structure and circuit-implementation scheme of the symmetric MSD encoder (SME) for the one-step adder of TOC were proposed. Finally, a series of experiments was designed and performed. The experimental results showed that the scheme to implement the SME is correct, feasible and efficient.

  2. Encoding and decoding of digital spiral imaging based on bidirectional transformation of light's spatial eigenmodes.

    PubMed

    Zhang, Wuhong; Chen, Lixiang

    2016-06-15

    Digital spiral imaging has been demonstrated as an effective optical tool to encode optical information and retrieve topographic information of an object. Here we develop a conceptually new and concise scheme for optical image encoding and decoding toward free-space digital spiral imaging. We experimentally demonstrate that the optical lattices with ℓ=±50 orbital angular momentum superpositions and a clover image with nearly 200 Laguerre-Gaussian (LG) modes can be well encoded and successfully decoded. It is found that an image encoded/decoded with a two-index LG spectrum (considering both azimuthal and radial indices, ℓ and p) possesses much higher fidelity than that with a one-index LG spectrum (only considering the ℓ index). Our work provides an alternative tool for the image encoding/decoding scheme toward free-space optical communications.

  3. A channel differential EZW coding scheme for EEG data compression.

    PubMed

    Dehkordi, Vahid R; Daou, Hoda; Labeau, Fabrice

    2011-11-01

In this paper, a method is proposed to compress multichannel electroencephalographic (EEG) signals in a scalable fashion. Correlation between EEG channels is exploited through clustering using a k-means method. Representative channels for each of the clusters are encoded individually, while other channels are encoded differentially, i.e., with respect to their respective cluster representatives. The compression is performed using embedded zerotree wavelet (EZW) encoding adapted to 1-D signals. Simulations show that the scalable features of the scheme lead to a flexible quality/rate tradeoff, without requiring detailed EEG signal modeling.
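The differential step can be illustrated as follows: channels in a cluster are stored as one representative plus small residuals. The EZW wavelet stage and the k-means clustering themselves are omitted here; the synthetic channels and the 0.01 noise scale are assumptions for the sketch, chosen so that correlated channels leave low-energy residuals.

```python
import numpy as np

# Four strongly correlated "EEG channels": a shared signal plus small noise.
rng = np.random.default_rng(1)
base = rng.normal(size=256)
channels = np.stack([base + 0.01 * rng.normal(size=256) for _ in range(4)])

representative = channels[0]                # cluster representative, coded as-is
residuals = channels[1:] - representative   # other channels coded differentially

# residual energy is far below raw channel energy, so fewer bits are needed
assert np.abs(residuals).mean() < 0.1 * np.abs(channels[1:]).mean()
```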

  4. The effect of interference on delta modulation encoded video signals

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1979-01-01

The results of a study on the use of the delta modulator as a digital encoder of television signals are presented. Computer simulations of different delta modulators were studied in order to find a satisfactory one. After a suitable delta modulator algorithm was found via computer simulation, the results were analyzed and then implemented in hardware to study the ability to encode real-time motion pictures from an NTSC-format television camera. The effects of channel errors on the delta-modulated video signal were investigated, and several error-correction algorithms were tested via computer simulation. A very high-speed delta modulator was built (out of ECL logic), incorporating the most promising of the correction schemes, so that it could be tested on real-time motion pictures. The final area of investigation concerned finding delta modulators that could achieve significant bandwidth reduction without regard to complexity or speed. The first such scheme to be investigated was a real-time frame-to-frame encoding scheme, which required the assembly of fourteen 131,000-bit shift registers as well as a high-speed delta modulator. The other schemes involved two-dimensional delta modulator algorithms.
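The basic principle can be sketched in a few lines: a linear (fixed-step) delta modulator encodes each sample as a single bit saying whether the staircase approximation should step up or down. The step size and the test signal below are illustrative, not parameters from the study.

```python
import math

def dm_encode(samples, step=0.1):
    """One bit per sample: 1 = step up, 0 = step down."""
    bits, est = [], 0.0
    for x in samples:
        bit = 1 if x >= est else 0
        bits.append(bit)
        est += step if bit else -step
    return bits

def dm_decode(bits, step=0.1):
    """Rebuild the staircase approximation from the bit stream."""
    out, est = [], 0.0
    for bit in bits:
        est += step if bit else -step
        out.append(est)
    return out

# slow sine wave: slope per sample (~0.098) stays below the step size,
# so the staircase tracks it without slope overload
signal = [math.sin(2 * math.pi * t / 64) for t in range(128)]
recon = dm_decode(dm_encode(signal))
assert max(abs(s - r) for s, r in zip(signal, recon)) < 0.3
```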

  5. Double dissociation of value computations in orbitofrontal and anterior cingulate neurons

    PubMed Central

    Kennerley, Steven W.; Behrens, Timothy E. J.; Wallis, Jonathan D.

    2011-01-01

Damage to prefrontal cortex (PFC) impairs decision-making, but the underlying value computations that might cause such impairments remain unclear. Here we report that value computations are doubly dissociable within PFC neurons. While many PFC neurons encoded chosen value, they used opponent encoding schemes such that averaging the neuronal population eliminated value coding. However, a special population of neurons in anterior cingulate cortex (ACC) - but not orbitofrontal cortex (OFC) - multiplexed chosen value across decision parameters using a unified encoding scheme, and encoded reward prediction errors. In contrast, neurons in OFC - but not ACC - encoded chosen value relative to the recent history of choice values. Together, these results suggest complementary valuation processes across PFC areas: OFC neurons dynamically evaluate current choices relative to recent choice values, while ACC neurons encode choice predictions and prediction errors using a common valuation currency reflecting the integration of multiple decision parameters. PMID:22037498

  6. Real-time 3D video compression for tele-immersive environments

    NASA Astrophysics Data System (ADS)

    Yang, Zhenyu; Cui, Yi; Anwar, Zahid; Bocchino, Robert; Kiyanclar, Nadir; Nahrstedt, Klara; Campbell, Roy H.; Yurcik, William

    2006-01-01

    Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments by a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly because the 3D video data contains depth as well as color information. Our goal is to explore from a different angle of the 3D compression space with factors including complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that: (1) the compressed data preserves enough information to communicate the 3D images effectively (min. PSNR > 40) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ~ 13 ms per 3D video frame).
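The run-length step used for the depth channel in the second scheme can be sketched as below (the Huffman coding of the runs is omitted). Depth maps contain large constant regions, which run-length encoding collapses into (value, count) pairs; the sample row is illustrative.

```python
def rle_encode(values):
    """Collapse runs of equal values into [value, count] pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

def rle_decode(runs):
    """Expand [value, count] pairs back to the original sequence."""
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out

# a depth-map row with flat regions compresses from 14 values to 4 runs
depth_row = [0, 0, 0, 7, 7, 7, 7, 3, 3, 0, 0, 0, 0, 0]
runs = rle_encode(depth_row)
assert runs == [[0, 3], [7, 4], [3, 2], [0, 5]]
assert rle_decode(runs) == depth_row
```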

  7. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).

    PubMed

    Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling

    2018-04-17

Aimed at the low energy consumption required by the Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides a compressive encoder and a real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by the Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have low computational complexity, so they consume only a small amount of energy. Experimental results show that the proposed scheme not only has low encoding and decoding complexity when compared with traditional methods, but also provides good objective and subjective reconstruction quality. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for Green IoT.
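The encoder/decoder asymmetry described above can be sketched with a toy block: the encoder is a single random projection, and the decoder is a single precomputed matrix multiply. Here the pseudo-inverse stands in for the paper's MMSE-learned projection matrix, and the block size and sparsity are assumptions; this shows the linear-complexity structure, not the reconstruction quality of the actual method.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 64, 32                    # block length n, measurement count m < n
x = np.zeros(n)
x[rng.choice(n, size=4, replace=False)] = rng.normal(size=4)  # sparse block

Phi = rng.normal(size=(m, n)) / np.sqrt(m)  # random measurement matrix
y = Phi @ x                                 # low-cost encoder: one projection

decoder = np.linalg.pinv(Phi)   # precomputed offline; MMSE-learned in the paper
x_hat = decoder @ y             # real-time decoder: one matrix-vector product
assert x_hat.shape == (n,)
```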

  8. Perfect quantum multiple-unicast network coding protocol

    NASA Astrophysics Data System (ADS)

    Li, Dan-Dan; Gao, Fei; Qin, Su-Juan; Wen, Qiao-Yan

    2018-01-01

In order to realize long-distance and large-scale quantum communication, it is natural to utilize quantum repeaters. For a general quantum multiple-unicast network, it remains puzzling how to complete communication tasks perfectly with fewer resources, such as registers. In this paper, we solve this problem. By applying quantum repeaters to the multiple-unicast communication problem, we give encoding-decoding schemes for source nodes, internal ones, and target ones, respectively. Source-target node pairs share EPR pairs by using our encoding-decoding schemes over the quantum multiple-unicast network. Furthermore, quantum communication can be accomplished perfectly via teleportation. Compared with existing schemes, our schemes reduce resource consumption and realize long-distance transmission of quantum information.

  9. Optical image transformation and encryption by phase-retrieval-based double random-phase encoding and compressive ghost imaging

    NASA Astrophysics Data System (ADS)

    Yuan, Sheng; Yang, Yangrui; Liu, Xuemei; Zhou, Xin; Wei, Zhenzhuo

    2018-01-01

An optical image transformation and encryption scheme is proposed based on double random-phase encoding (DRPE) and compressive ghost imaging (CGI) techniques. In this scheme, a secret image is first transformed into a binary image with the phase-retrieval-based DRPE technique, and then encoded by a series of random amplitude patterns according to the ghost imaging (GI) principle. Compressive sensing, corrosion and expansion operations are implemented to retrieve the secret image in the decryption process. This encryption scheme takes advantage of the complementary capabilities offered by the phase-retrieval-based DRPE and GI-based encryption techniques. That is, the phase-retrieval-based DRPE is used to overcome the blurring defect of the decrypted image in GI-based encryption, while the CGI not only reduces the data amount of the ciphertext but also enhances the security of DRPE. Computer simulation results are presented to verify the performance of the proposed encryption scheme.

  10. Wavelet-based Encoding Scheme for Controlling Size of Compressed ECG Segments in Telecardiology Systems.

    PubMed

    Al-Busaidi, Asiya M; Khriji, Lazhar; Touati, Farid; Rasid, Mohd Fadlee; Mnaouer, Adel Ben

    2017-09-12

One of the major issues in time-critical medical applications using wireless technology is the size of the payload packet, which is generally designed to be very small to improve the transmission process. Using small packets to transmit continuous ECG data is still costly. Thus, data compression is commonly used to reduce the huge amount of ECG data transmitted through telecardiology devices. In this paper, a new ECG compression scheme is introduced to ensure that the compressed ECG segments fit into the available limited payload packets, while maintaining a fixed compression ratio (CR) to preserve the diagnostic information. The scheme automatically divides the ECG block into segments, while keeping the other compression parameters fixed. This scheme adopts the discrete wavelet transform (DWT) method to decompose the ECG data, a bit-field preserving (BFP) method to preserve the quality of the DWT coefficients, and a modified run-length encoding (RLE) scheme to encode the coefficients. The proposed dynamic compression scheme showed promising results, with a percentage packet reduction (PR) of about 85.39% at low percentage root-mean-square difference (PRD) values of less than 1%. ECG records from the MIT-BIH Arrhythmia Database were used to test the proposed method. The simulation results showed promising performance that satisfies the needs of portable telecardiology systems, such as the limited payload size and low power consumption.
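The transform stage can be sketched with a one-level Haar DWT: an ECG segment splits into approximation and detail coefficients, and thresholding the small details produces the long zero runs that the run-length stage exploits. The threshold, the toy samples, and the plain Haar filter are assumptions for illustration; they are not the paper's wavelet or parameters.

```python
def haar_dwt(x):
    """One-level Haar transform: pairwise averages and differences."""
    a = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    d = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return a, d

segment = [4.0, 4.1, 3.9, 4.0, 8.0, 1.0, 4.0, 4.05]  # toy "ECG" samples
approx, detail = haar_dwt(segment)

threshold = 0.2
sparse_detail = [d if abs(d) > threshold else 0.0 for d in detail]

# smooth regions collapse to zero details; only the spike survives
assert sparse_detail.count(0.0) == 3
```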

  11. A Universal 3D Voxel Descriptor for Solid-State Material Informatics with Deep Convolutional Neural Networks.

    PubMed

    Kajita, Seiji; Ohba, Nobuko; Jinnouchi, Ryosuke; Asahi, Ryoji

    2017-12-05

Material informatics (MI) is a promising approach to liberate us from the time-consuming Edisonian (trial-and-error) process of material discovery, driven by machine-learning algorithms. Several descriptors, which encode material features to feed computers, have been proposed in the last few decades. For solid systems in particular, however, their insufficient representation of the three-dimensionality of field quantities such as electron distributions and local potentials has critically hindered broad and practical success of solid-state MI. We develop a simple, generic 3D voxel descriptor that compacts any field quantity in a way suitable for convolutional neural networks (CNNs). We examine the 3D voxel descriptor encoded from the electron distribution in a regression test with 680 oxide data. The present scheme outperforms other existing descriptors in the prediction of Hartree energies, which are significantly relevant to the long-wavelength distribution of the valence electrons. The results indicate that this scheme can forecast any functional of field quantities simply by learning a sufficient amount of data, provided there is an explicit correlation between the target properties and the field quantities. This 3D descriptor opens a way to import prominent CNN-based algorithms of supervised, semi-supervised and reinforcement learning into solid-state MI.
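The voxelization idea can be sketched as follows: a point-sampled field quantity (standing in for an electron distribution) is binned onto a fixed 3D grid, so that any material maps to the same tensor shape a 3D CNN expects. The grid size, the uniform sample points, and the random field values are assumptions for the sketch, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(3)
points = rng.uniform(0, 1, size=(500, 3))  # sample positions in the unit cell
density = rng.uniform(size=500)            # field values at those points

G = 32                                     # 32^3 voxel grid
voxels = np.zeros((G, G, G))
idx = np.minimum((points * G).astype(int), G - 1)   # clamp to valid bins
np.add.at(voxels, (idx[:, 0], idx[:, 1], idx[:, 2]), density)  # accumulate

assert voxels.shape == (G, G, G)           # fixed input shape for a 3D CNN
assert np.isclose(voxels.sum(), density.sum())  # binning conserves the field
```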

  12. A novel fractal image compression scheme with block classification and sorting based on Pearson's correlation coefficient.

    PubMed

    Wang, Jianji; Zheng, Nanning

    2013-09-01

    Fractal image compression (FIC) is an image coding technology based on the local similarity of image structure. It is widely used in many fields such as image retrieval, image denoising, image authentication, and encryption. FIC, however, suffers from high computational complexity in encoding. Although many schemes have been published to speed up encoding, they do not easily satisfy the encoding time or reconstructed image quality requirements. In this paper, a new FIC scheme is proposed based on the fact that the affine similarity between two blocks in FIC is equivalent to the absolute value of Pearson's correlation coefficient (APCC) between them. First, all blocks in the range and domain pools are chosen and classified using an APCC-based block classification method to increase the matching probability. Second, by sorting the domain blocks with respect to the APCCs between these domain blocks and a preset block in each class, the matching domain block for a range block can be searched for within the selected domain set whose APCCs are closer to the APCC between the range block and the preset block. Experimental results show that the proposed scheme can significantly speed up the encoding process in FIC while preserving the reconstructed image quality well.
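    The central quantity of the scheme, the absolute Pearson correlation coefficient between two blocks, can be computed directly from the pixel values; a minimal sketch (flat blocks are treated as uncorrelated here by convention):

```python
from math import sqrt

def apcc(block_a, block_b):
    """Absolute value of Pearson's correlation coefficient between
    two image blocks, given as flat lists of pixel intensities."""
    n = len(block_a)
    mean_a = sum(block_a) / n
    mean_b = sum(block_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(block_a, block_b))
    var_a = sum((a - mean_a) ** 2 for a in block_a)
    var_b = sum((b - mean_b) ** 2 for b in block_b)
    if var_a == 0 or var_b == 0:
        return 0.0  # flat block: correlation undefined, treat as unmatched
    return abs(cov / sqrt(var_a * var_b))
```

    Taking the absolute value is what captures affine similarity: a block and its contrast-inverted counterpart both score an APCC of 1.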

  13. Data-dependent bucketing improves reference-free compression of sequencing reads.

    PubMed

    Patro, Rob; Kingsford, Carl

    2015-09-01

    The storage and transmission of high-throughput sequencing data consumes significant resources. As our capacity to produce such data continues to increase, this burden will only grow. One approach to reduce storage and transmission requirements is to compress this sequencing data. We present a novel technique to boost the compression of sequencing reads that is based on the concept of bucketing similar reads so that they appear nearby in the file. We demonstrate that, by adopting a data-dependent bucketing scheme and employing a number of encoding ideas, we can achieve substantially better compression ratios than existing de novo sequence compression tools, including other bucketing and reordering schemes. Our method, Mince, achieves up to a 45% reduction in file sizes (28% on average) compared with existing state-of-the-art de novo compression schemes. Mince is written in C++11, is open source and has been made available under the GPLv3 license. It is available at http://www.cs.cmu.edu/∼ckingsf/software/mince. carlk@cs.cmu.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
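    Mince's actual signatures and encoding steps are more elaborate, but the bucketing idea can be sketched as grouping reads by a minimizer-like k-mer signature so that a generic compressor can exploit the local redundancy (gzip stands in for the real back end):

```python
import gzip

def bucket_key(read, k=4):
    """Data-dependent bucket label: the lexicographically smallest
    k-mer in the read (a minimizer-like signature)."""
    return min(read[i:i + k] for i in range(len(read) - k + 1))

def bucketed_blob(reads, k=4):
    """Group reads sharing a signature so similar reads sit nearby,
    then let a generic compressor exploit the local redundancy."""
    buckets = {}
    for r in reads:
        buckets.setdefault(bucket_key(r, k), []).append(r)
    ordered = [r for key in sorted(buckets) for r in sorted(buckets[key])]
    return gzip.compress("\n".join(ordered).encode())

reads = ["ACGTACGT", "TTTTACGT", "ACGTAAAA", "GGGGCCCC"]
blob = bucketed_blob(reads)
```

    Because the reordering is reversible bookkeeping, no information is lost; the gain comes purely from placing overlapping reads next to each other before compression.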

  14. Demonstration of an optoelectronic interconnect architecture for a parallel modified signed-digit adder and subtracter

    NASA Astrophysics Data System (ADS)

    Sun, Degui; Wang, Na-Xin; He, Li-Ming; Weng, Zhao-Heng; Wang, Daheng; Chen, Ray T.

    1996-06-01

    A space-position-logic-encoding scheme is proposed and demonstrated. This encoding scheme not only makes the best use of the convenience of binary logic operation, but is also suitable for the trinary property of modified signed-digit (MSD) numbers. Based on the space-position-logic-encoding scheme, a fully parallel modified signed-digit adder and subtractor is built using optoelectronic switch technologies in conjunction with fiber-multistage 3D optoelectronic interconnects. Thus an effective combination of a parallel algorithm and a parallel architecture is implemented. In addition, the performance of the optoelectronic switches used in this system is experimentally studied and verified. Both the 3-bit experimental model and the experimental results of a parallel addition and a parallel subtraction are provided and discussed. Finally, the speed ratio between the MSD adder and binary adders is discussed and the advantage of the MSD in operating speed is demonstrated.

  15. Investigation of television transmission using adaptive delta modulation principles

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1976-01-01

    The results of a study on the use of the delta modulator as a digital encoder of television signals are presented. Different delta modulators were studied in computer simulation in order to find a satisfactory design. After a suitable delta modulator algorithm was found via computer simulation, the results were analyzed and the algorithm was implemented in hardware to study its ability to encode real-time motion pictures from an NTSC-format television camera. The effects of channel errors on the delta-modulated video signal were tested along with several error correction algorithms via computer simulation. A very high speed delta modulator was built (out of ECL logic), incorporating the most promising of the correction schemes, so that it could be tested on real-time motion pictures. Delta modulators were also investigated which could achieve significant bandwidth reduction without regard to complexity or speed. The first scheme investigated was a real-time frame-to-frame encoding scheme which required the assembly of fourteen 131,000-bit shift registers as well as a high speed delta modulator. The other schemes involved the computer simulation of two-dimensional delta modulator algorithms.

  16. Teleportation of a two-mode entangled coherent state encoded with two-qubit information

    NASA Astrophysics Data System (ADS)

    Mishra, Manoj K.; Prakash, Hari

    2010-09-01

    We propose a scheme to teleport a two-mode entangled coherent state encoded with two-qubit information, which is better than the two schemes recently proposed by Liao and Kuang (2007 J. Phys. B: At. Mol. Opt. Phys. 40 1183) and by Phien and Nguyen (2008 Phys. Lett. A 372 2825) in that our scheme gives a higher value of minimum assured fidelity and minimum average fidelity without using any nonlinear interactions. For the involved coherent states |±α⟩, the minimum average fidelity in our case is ≥0.99 for |α| ≥ 1.6 (i.e. |α|² ≥ 2.6), while the previously proposed schemes referred to above report the same for |α| ≥ 5 (i.e. |α|² ≥ 25). Since it is very challenging to produce superposed coherent states of high coherent amplitude (|α|), our teleportation scheme is within the reach of modern technology.

  17. Hybrid architecture for encoded measurement-based quantum computation

    PubMed Central

    Zwerger, M.; Briegel, H. J.; Dür, W.

    2014-01-01

    We present a hybrid scheme for quantum computation that combines the modular structure of elementary building blocks used in the circuit model with the advantages of a measurement-based approach to quantum computation. We show how to construct optimal resource states of minimal size to implement elementary building blocks for encoded quantum computation in a measurement-based way, including states for error correction and encoded gates. The performance of the scheme is determined by the quality of the resource states, where, within the considered error model, we find a threshold of the order of 10% local noise per particle for fault-tolerant quantum computation and quantum communication. PMID:24946906

  18. Continuous-variable quantum computing in optical time-frequency modes using quantum memories.

    PubMed

    Humphreys, Peter C; Kolthammer, W Steven; Nunn, Joshua; Barbieri, Marco; Datta, Animesh; Walmsley, Ian A

    2014-09-26

    We develop a scheme for time-frequency encoded continuous-variable cluster-state quantum computing using quantum memories. In particular, we propose a method to produce, manipulate, and measure two-dimensional cluster states in a single spatial mode by exploiting the intrinsic time-frequency selectivity of Raman quantum memories. Time-frequency encoding enables the scheme to be extremely compact, requiring a number of memories that are a linear function of only the number of different frequencies in which the computational state is encoded, independent of its temporal duration. We therefore show that quantum memories can be a powerful component for scalable photonic quantum information processing architectures.

  19. Semi-counterfactual cryptography

    NASA Astrophysics Data System (ADS)

    Akshata Shenoy, H.; Srikanth, R.; Srinivas, T.

    2013-09-01

    In counterfactual quantum key distribution (QKD), two remote parties can securely share random polarization-encoded bits through the blocking rather than the transmission of particles. We propose a semi-counterfactual QKD, i.e., one where the secret bit is shared, and also encoded, based on the blocking or non-blocking of a particle. The scheme is thus semi-counterfactual and not based on polarization encoding. As with other counterfactual schemes and the Goldenberg-Vaidman protocol, but unlike BB84, the encoding states are orthogonal and security arises ultimately from single-particle non-locality. Unlike any of them, however, the secret bit generated is maximally indeterminate until the joint action of Alice and Bob. We prove the general security of the protocol, and study the most general photon-number-preserving incoherent attack in detail.

  20. All-optical OFDM network coding scheme for all-optical virtual private communication in PON

    NASA Astrophysics Data System (ADS)

    Li, Lijun; Gu, Rentao; Ji, Yuefeng; Bai, Lin; Huang, Zhitong

    2014-03-01

    A novel optical orthogonal frequency division multiplexing (OFDM) network coding scheme is proposed for a passive optical network (PON) system. The proposed scheme for an all-optical virtual private network (VPN) not only improves transmission efficiency, but also realizes a full-duplex communication mode in a single fiber. Compared with traditional all-optical VPN architectures, the all-optical OFDM network coding scheme can support higher speed, more flexible bandwidth allocation, and higher spectrum efficiency. In order to reduce the difficulty of alignment for the encoding operation between inter-communication traffic, the width of the OFDM subcarrier pulse is stretched in our proposed scheme. The feasibility of the all-optical OFDM network coding scheme for VPN is verified, and the relevant simulation results show that the full-duplex inter-communication traffic stream can be transmitted successfully. Furthermore, the tolerance of misalignment existing in inter-ONU traffic is investigated and analyzed for the all-optical encoding operation, and the difficulty of pulse alignment is shown to be lower.

  1. QKD using polarization encoding with active measurement basis selection

    NASA Astrophysics Data System (ADS)

    Duplinskiy, A.; Ustimchik, V.; Kanapin, A.; Kurochkin, Y.

    2017-11-01

    We report a proof-of-principle quantum key distribution experiment using a one-way optical scheme with polarization encoding implementing the BB84 protocol. LiNbO3 phase modulators are used for generating polarization states for Alice and for active basis selection for Bob. This allows the former to use a single laser source, while the latter needs only two single-photon detectors. The presented optical scheme is simple and consists of standard fiber components. A calibration algorithm for the three polarization controllers used in the scheme has been developed. The experiment was carried out with 10 MHz repetition frequency laser pulses over a distance of 50 km of standard telecom optical fiber.

  2. Design of the Detector II: A CMOS Gate Array for the Study of Concurrent Error Detection Techniques.

    DTIC Science & Technology

    1987-07-01

    The circuit consists of six different adders with concurrent error detection (CED) schemes. The error detection schemes are simple duplication, duplication with functional dual implementation, duplication with different implementations, and two-rail encoding.

  3. Nonadiabatic holonomic quantum computation in decoherence-free subspaces.

    PubMed

    Xu, G F; Zhang, J; Tong, D M; Sjöqvist, Erik; Kwek, L C

    2012-10-26

    Quantum computation that combines the coherence stabilization virtues of decoherence-free subspaces and the fault tolerance of geometric holonomic control is of great practical importance. Some schemes of adiabatic holonomic quantum computation in decoherence-free subspaces have been proposed in the past few years. However, nonadiabatic holonomic quantum computation in decoherence-free subspaces, which avoids a long run-time requirement but with all the robust advantages, remains an open problem. Here, we demonstrate how to realize nonadiabatic holonomic quantum computation in decoherence-free subspaces. By using only three neighboring physical qubits undergoing collective dephasing to encode one logical qubit, we realize a universal set of quantum gates.

  4. Threshold multi-secret sharing scheme based on phase-shifting interferometry

    NASA Astrophysics Data System (ADS)

    Deng, Xiaopeng; Wen, Wei; Shi, Zhengang

    2017-03-01

    A threshold multi-secret sharing scheme is proposed based on phase-shifting interferometry. The K secret images to be shared are first encoded using Fourier transformation. Then, these encoded images are shared into many shadow images based on the recording principle of phase-shifting interferometry. In the recovering stage, the secret images can be restored by combining any 2K+1 or more shadow images, while any 2K or fewer shadow images cannot obtain any information about the secret images. As a result, a (2K+1, N) threshold multi-secret sharing scheme can be implemented. Simulation results are presented to demonstrate the feasibility of the proposed method.
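    The scheme itself is interferometric, but the threshold property it implements is the same one provided by classical Shamir sharing; the sketch below shows (t, n) threshold behavior in its standard polynomial form (illustrative only, not the paper's optical construction):

```python
import random

P = 2**61 - 1  # a large prime modulus for the share arithmetic

def make_shares(secret, t, n):
    """Split `secret` into n shares, any t of which reconstruct it
    (Shamir's scheme over GF(P)): shares are points on a random
    degree t-1 polynomial with constant term `secret`."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the constant term."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        # pow(den, P-2, P) is the modular inverse of den (Fermat)
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456, t=3, n=5)
assert reconstruct(shares[:3]) == 123456
```

    With fewer than t shares the remaining polynomial coefficients are uniformly random, so no information about the secret leaks; this is the property the optical shadow images emulate.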

  5. A new detection scheme for ultrafast 2D J-resolved spectroscopy

    NASA Astrophysics Data System (ADS)

    Giraudeau, Patrick; Akoka, Serge

    2007-06-01

    Recent ultrafast techniques enable 2D NMR spectra to be obtained in a single scan. A modification of the detection scheme involved in this technique is proposed, permitting the achievement of 2D 1H J-resolved spectra in 500 ms. The detection gradient echoes are substituted by spin echoes to obtain spectra where the coupling constants are encoded along the direct ν2 domain. The use of this new J-resolved detection block after continuous phase-encoding excitation schemes is discussed in terms of resolution and sensitivity. J-resolved spectra obtained on cinnamic acid and 3-ethyl bromopropionate are presented, revealing the expected 2D J-patterns with coupling constants as small as 2 Hz.

  6. LDPC-PPM Coding Scheme for Optical Communication

    NASA Technical Reports Server (NTRS)

    Barsoum, Maged; Moision, Bruce; Divsalar, Dariush; Fitz, Michael

    2009-01-01

    In a proposed coding-and-modulation/demodulation-and-decoding scheme for a free-space optical communication system, an error-correcting code of the low-density parity-check (LDPC) type would be concatenated with a modulation code that consists of a mapping of bits to pulse-position-modulation (PPM) symbols. Hence, the scheme is denoted LDPC-PPM. This scheme could be considered a competitor of a related prior scheme in which an outer convolutional error-correcting code is concatenated with an interleaving operation, a bit-accumulation operation, and a PPM inner code. Both the prior and present schemes can be characterized as serially concatenated pulse-position modulation (SCPPM) coding schemes. Figure 1 represents a free-space optical communication system based on either the present LDPC-PPM scheme or the prior SCPPM scheme. At the transmitting terminal, the original data (u) are processed by an encoder into blocks of bits (a), and the encoded data are mapped to PPM of an optical signal (c). For the purpose of design and analysis, the optical channel in which the PPM signal propagates is modeled as a Poisson point process. At the receiving terminal, the arriving optical signal (y) is demodulated to obtain an estimate (â) of the coded data, which is then processed by a decoder to obtain an estimate (û) of the original data.
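    The inner modulation step, mapping groups of bits to PPM symbols, can be sketched as follows (the slot count M and the list-of-slots representation are illustrative choices, not the proposal's exact parameters):

```python
import math

def bits_to_ppm(bits, M=16):
    """Map a bit string to PPM symbols: each log2(M)-bit group selects
    which of M time slots carries the single pulse."""
    k = int(math.log2(M))
    assert len(bits) % k == 0, "bit string must divide into k-bit groups"
    symbols = []
    for i in range(0, len(bits), k):
        slot = int(bits[i:i + k], 2)  # group value = pulse position
        symbols.append([1 if s == slot else 0 for s in range(M)])
    return symbols

sym = bits_to_ppm("0011", M=4)  # two 2-bit groups select slots 0 and 3
```

    Exactly one slot per symbol carries energy, which is why PPM suits photon-starved free-space links modeled as Poisson channels.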

  7. New scene change control scheme based on pseudoskipped picture

    NASA Astrophysics Data System (ADS)

    Lee, Youngsun; Lee, Jinwhan; Chang, Hyunsik; Nam, Jae Y.

    1997-01-01

    A new scene change control scheme which improves video coding performance for sequences that contain many scene-changed pictures is proposed in this paper. Scene-changed pictures, except intra-coded pictures, usually need more bits than normal pictures in order to maintain constant picture quality. The major idea of this paper is how to obtain the extra bits needed to encode scene-changed pictures. We encode the B picture located before a scene-changed picture like a skipped picture; we call such a B picture a pseudo-skipped picture. By generating the pseudo-skipped picture, we can save some bits, which are added to the originally allocated target bits to encode the scene-changed picture. The simulation results show that the proposed algorithm improves encoding performance by about 0.5 to 2.0 dB in PSNR compared with the MPEG-2 TM5 rate control scheme. In addition, the suggested algorithm is compatible with MPEG-2 video syntax, and the picture repetition is not recognizable.

  8. Wireless live streaming video of laparoscopic surgery: a bandwidth analysis for handheld computers.

    PubMed

    Gandsas, Alex; McIntire, Katherine; George, Ivan M; Witzke, Wayne; Hoskins, James D; Park, Adrian

    2002-01-01

    Over the last six years, streaming media has emerged as a powerful tool for delivering multimedia content over networks. Concurrently, wireless technology has evolved, freeing users from desktop boundaries and wired infrastructures. At the University of Kentucky Medical Center, we have integrated these technologies to develop a system that can wirelessly transmit live surgery from the operating room to a handheld computer. This study establishes the feasibility of using our system to view surgeries and describes the effect of bandwidth on image quality. A live laparoscopic ventral hernia repair was transmitted to a single handheld computer using five encoding speeds at a constant frame rate, and the quality of the resulting streaming images was evaluated. No video images were rendered when video data were encoded at 28.8 kilobits per second (Kbps), the slowest encoding bitrate studied. The highest quality images were rendered at encoding speeds greater than or equal to 150 Kbps. Of note, a 15-second transmission delay was experienced using all four encoding schemes that rendered video images. We believe that the wireless transmission of streaming video to handheld computers has tremendous potential to enhance surgical education. For medical students and residents, the ability to view live surgeries, lectures, courses and seminars on handheld computers means a larger number of learning opportunities. In addition, we envision that wireless-enabled devices may be used to telementor surgical procedures. However, bandwidth availability and streaming delay are major issues that must be addressed before wireless telementoring becomes a reality.

  9. Simple scheme for encoding and decoding a qubit in unknown state for various topological codes

    PubMed Central

    Łodyga, Justyna; Mazurek, Paweł; Grudka, Andrzej; Horodecki, Michał

    2015-01-01

    We present a scheme for encoding and decoding an unknown state for CSS codes, based on syndrome measurements. We illustrate our method by means of the Kitaev toric code, defected-lattice code, topological subsystem code and 3D Haah code. The protocol is local whenever, in a given code, the crossings between the logical operators consist of next-neighbour pairs, which holds for the above codes. For the subsystem code we also present a scheme for the noisy case, where we allow for bit- and phase-flip errors on qubits as well as state preparation and syndrome measurement errors. A similar scheme can be built for the two other codes. We show that the fidelity of the protected qubit in the noisy scenario, in the large-code-size limit, is a function of p, where p is the probability of error on a single qubit per time step. For the Haah code we provide a noiseless scheme, leaving the noisy case as an open problem. PMID:25754905

  10. Hamming and Accumulator Codes Concatenated with MPSK or QAM

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Samuel

    2009-01-01

    In a proposed coding-and-modulation scheme, a high-rate binary data stream would be processed as follows: 1. The input bit stream would be demultiplexed into multiple bit streams. 2. The multiple bit streams would be processed simultaneously into a high-rate outer Hamming code that would comprise multiple short constituent Hamming codes - a distinct constituent Hamming code for each stream. 3. The streams would be interleaved. The interleaver would have a block structure that would facilitate parallelization for high-speed decoding. 4. The interleaved streams would be further processed simultaneously into an inner two-state, rate-1 accumulator code that would comprise multiple constituent accumulator codes - a distinct accumulator code for each stream. 5. The resulting bit streams would be mapped into symbols to be transmitted by use of a higher-order modulation - for example, M-ary phase-shift keying (MPSK) or quadrature amplitude modulation (QAM). The novelty of the scheme lies in the concatenation of the multiple-constituent Hamming and accumulator codes and the corresponding parallel architectures of the encoder and decoder circuitry (see figure) needed to process the multiple bit streams simultaneously. As in the cases of other parallel-processing schemes, one advantage of this scheme is that the overall data rate could be much greater than the data rate of each encoder and decoder stream and, hence, the encoder and decoder could handle data at an overall rate beyond the capability of the individual encoder and decoder circuits.
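    A single constituent code of the kind the outer Hamming code comprises can be sketched with the classic Hamming(7,4) construction (the scheme's actual constituent codes and parallel layout are not specified in this abstract):

```python
def hamming74_encode(d):
    """Encode 4 data bits into a Hamming(7,4) codeword.
    d: [d1, d2, d3, d4]; returns [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4  # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute the parity checks; the syndrome gives the 1-based
    position of a single flipped bit (0 means no error)."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1
    return c
```

    Running many such short encoders side by side, one per demultiplexed stream, is what lets the aggregate rate exceed the rate of any single encoder circuit.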

  11. An Innovative Infrastructure with a Universal Geo-Spatiotemporal Data Representation Supporting Cost-Effective Integration of Diverse Earth Science Data

    NASA Technical Reports Server (NTRS)

    Rilee, Michael Lee; Kuo, Kwo-Sen

    2017-01-01

    The SpatioTemporal Adaptive Resolution Encoding (STARE) is a unifying scheme encoding geospatial and temporal information for organizing data on scalable computing/storage resources, minimizing expensive data transfers. STARE provides a compact representation that turns set-logic functions into integer operations, e.g. conditional sub-setting, taking into account representative spatiotemporal resolutions of the data in the datasets. STARE geo-spatiotemporally aligns data placements of diverse data on massive parallel resources to maximize performance. Automating important scientific functions (e.g. regridding) and computational functions (e.g. data placement) allows scientists to focus on domain-specific questions instead of expending their efforts and expertise on data processing. With STARE-enabled automation, SciDB (Scientific Database) plus STARE provides a database interface, reducing costly data preparation, increasing the volume and variety of interoperable data, and easing result sharing. Using SciDB plus STARE as part of an integrated analysis infrastructure dramatically eases combining diametrically different datasets.
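    STARE's actual encoding is built on hierarchical triangular and temporal indexing, but its key property, set logic reduced to integer operations on a hierarchical index, can be illustrated with a toy quadtree-style key (the lat/lon bisection and all names here are simplifications, not STARE's scheme):

```python
def quadkey(lat, lon, level):
    """Toy hierarchical spatial index: interleave refinement choices into
    an integer whose bit prefix identifies the enclosing cell at every
    coarser level."""
    key = 0
    lat0, lat1, lon0, lon1 = -90.0, 90.0, -180.0, 180.0
    for _ in range(level):
        mid_lat, mid_lon = (lat0 + lat1) / 2, (lon0 + lon1) / 2
        bit_lat = int(lat >= mid_lat)
        bit_lon = int(lon >= mid_lon)
        key = (key << 2) | (bit_lat << 1) | bit_lon
        lat0, lat1 = (mid_lat, lat1) if bit_lat else (lat0, mid_lat)
        lon0, lon1 = (mid_lon, lon1) if bit_lon else (lon0, mid_lon)
    return key

def contains(coarse_key, coarse_level, fine_key, fine_level):
    """Set-logic 'is this point inside that cell?' as an integer shift."""
    return fine_key >> (2 * (fine_level - coarse_level)) == coarse_key

k_city = quadkey(40.7, -74.0, 8)    # coarse cell around one location
k_point = quadkey(40.7, -74.0, 16)  # same location at finer resolution
assert contains(k_city, 8, k_point, 16)
```

    Because containment and conditional sub-setting become prefix comparisons on integers, data with nearby keys can also be co-located on parallel storage, which is the alignment idea the abstract describes.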

  12. Survey of Header Compression Techniques

    NASA Technical Reports Server (NTRS)

    Ishac, Joseph

    2001-01-01

    This report provides a summary of several different header compression techniques. The different techniques included are: (1) Van Jacobson's header compression (RFC 1144); (2) SCPS (Space Communications Protocol Standards) header compression (SCPS-TP, SCPS-NP); (3) Robust header compression (ROHC); and (4) The header compression techniques in RFC2507 and RFC2508. The methodology for compression and error correction for these schemes are described in the remainder of this document. All of the header compression schemes support compression over simplex links, provided that the end receiver has some means of sending data back to the sender. However, if that return path does not exist, then neither Van Jacobson's nor SCPS can be used, since both rely on TCP (Transmission Control Protocol). In addition, under link conditions of low delay and low error, all of the schemes perform as expected. However, based on the methodology of the schemes, each scheme is likely to behave differently as conditions degrade. Van Jacobson's header compression relies heavily on the TCP retransmission timer and would suffer an increase in loss propagation should the link possess a high delay and/or bit error rate (BER). The SCPS header compression scheme protects against high delay environments by avoiding delta encoding between packets. Thus, loss propagation is avoided. However, SCPS is still affected by an increased BER (bit-error-rate) since the lack of delta encoding results in larger header sizes. Next, the schemes found in RFC2507 and RFC2508 perform well for non-TCP connections in poor conditions. RFC2507 performance with TCP connections is improved by various techniques over Van Jacobson's, but still suffers a performance hit with poor link properties. Also, RFC2507 offers the ability to send TCP data without delta encoding, similar to what SCPS offers. ROHC is similar to the previous two schemes, but adds additional CRCs (cyclic redundancy checks) into headers and improves the compression schemes, providing better tolerance of conditions with a high BER.
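    The delta encoding that several of these schemes rely on (and that SCPS deliberately avoids) can be sketched as sending only the header fields that changed, as differences against the previous packet; the field names below are illustrative, not the full TCP/IP header:

```python
def delta_compress(prev, curr):
    """Send only header fields that changed since the previous packet,
    as {field: delta} pairs -- the core idea of RFC 1144-style schemes.
    prev/curr: dicts mapping header field name -> value."""
    return {f: curr[f] - prev[f] for f in curr if curr[f] != prev[f]}

def delta_restore(prev, deltas):
    """Receiver side: apply the deltas to its copy of the last header."""
    out = dict(prev)
    for f, d in deltas.items():
        out[f] = prev[f] + d
    return out

prev = {"seq": 1000, "ack": 500, "win": 8760, "id": 41}
curr = {"seq": 1460, "ack": 500, "win": 8760, "id": 42}
wire = delta_compress(prev, curr)  # only seq and id changed
assert delta_restore(prev, wire) == curr
```

    The sketch also makes the failure mode visible: if one delta packet is lost, the receiver's `prev` state diverges and every subsequent header decodes wrongly until resynchronization, which is exactly the loss propagation the report discusses.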

  13. A new security solution to JPEG using hyper-chaotic system and modified zigzag scan coding

    NASA Astrophysics Data System (ADS)

    Ji, Xiao-yong; Bai, Sen; Guo, Yu; Guo, Hui

    2015-05-01

    Though JPEG is an excellent compression standard for images, it does not provide any security. Thus, a security solution for JPEG was proposed in Zhang et al. (2014). But there are some flaws in Zhang's scheme, and in this paper we propose a new scheme based on a discrete hyper-chaotic system and modified zigzag scan coding. By shuffling the identifiers of the zigzag-scan-encoded sequence with a hyper-chaotic sequence and selectively encrypting those coefficients which have little relationship with the correlation of the plain image in the zigzag-scan-encoded domain, we achieve high compression performance and robust security simultaneously. Meanwhile, we present and analyze the flaws in Zhang's scheme through theoretical analysis and experimental verification, and give comparisons between our scheme and Zhang's. Simulation results verify that our method has better performance in security and efficiency.
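    The zigzag scan order that the scheme's encoding and encryption operate on can be generated as follows (a standard construction, independent of the paper's chaotic shuffling):

```python
def zigzag_order(n=8):
    """Return the zigzag traversal order of an n x n block as (row, col)
    pairs, as used to serialize JPEG's quantized DCT coefficients."""
    order = []
    for s in range(2 * n - 1):  # s indexes the anti-diagonals
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        # alternate direction on every diagonal to produce the zigzag
        order.extend(diag if s % 2 else diag[::-1])
    return order
```

    Serializing in this order places low-frequency coefficients first and groups the high-frequency zeros into long runs, which is what downstream run-length coding exploits.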

  14. New-Sum: A Novel Online ABFT Scheme For General Iterative Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tao, Dingwen; Song, Shuaiwen; Krishnamoorthy, Sriram

    Emerging high-performance computing platforms, with large component counts and lower power margins, are anticipated to be more susceptible to soft errors in both logic circuits and memory subsystems. We present an online algorithm-based fault tolerance (ABFT) approach to efficiently detect and recover soft errors for general iterative methods. We design a novel checksum-based encoding scheme for matrix-vector multiplication that is resilient to both arithmetic and memory errors. Our design decouples the checksum updating process from the actual computation, and allows adaptive checksum overhead control. Building on this new encoding mechanism, we propose two online ABFT designs that can effectively recover from errors when combined with a checkpoint/rollback scheme.
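    The classic checksum-encoded matrix-vector multiplication that ABFT schemes build on can be sketched as follows (the paper's decoupled, adaptive variant is more involved; the tolerance here is an illustrative choice):

```python
def checksummed_matvec(A, v):
    """Algorithm-based fault tolerance for y = A*v: an extra checksum
    row (the column sums of A) is carried along, and sum(y) must equal
    that checksum row dotted with v if no error occurred."""
    n = len(A)
    checksum_row = [sum(A[i][j] for i in range(n)) for j in range(len(A[0]))]
    y = [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(n)]
    check = sum(c * x for c, x in zip(checksum_row, v))
    passed = abs(sum(y) - check) < 1e-9  # mismatch flags a soft error
    return y, passed

A = [[1.0, 2.0], [3.0, 4.0]]
y, ok = checksummed_matvec(A, [1.0, 1.0])
assert ok and y == [3.0, 7.0]
```

    If a soft error corrupts any entry of y (or of A during the multiply), the sum check fails, at which point a checkpoint/rollback scheme like the one in the paper can restore a clean state.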

  15. Majorana-Based Fermionic Quantum Computation.

    PubMed

    O'Brien, T E; Rożek, P; Akhmerov, A R

    2018-06-01

    Because Majorana zero modes store quantum information nonlocally, they are protected from noise, and have been proposed as a building block for a quantum computer. We show how to use the same protection from noise to implement universal fermionic quantum computation. Our architecture requires only two Majorana modes to encode a fermionic quantum degree of freedom, compared to alternative implementations which require a minimum of four Majorana modes for a spin quantum degree of freedom. The fermionic degrees of freedom support both unitary coupled cluster variational quantum eigensolver and quantum phase estimation algorithms, proposed for quantum chemistry simulations. Because we avoid the Jordan-Wigner transformation, our scheme has a lower overhead for implementing both of these algorithms, allowing for simulation of the Trotterized Hubbard Hamiltonian in O(1) time per unitary step. We finally demonstrate magic state distillation in our fermionic architecture, giving a universal set of topologically protected fermionic quantum gates.

  16. Majorana-Based Fermionic Quantum Computation

    NASA Astrophysics Data System (ADS)

    O'Brien, T. E.; RoŻek, P.; Akhmerov, A. R.

    2018-06-01

    Because Majorana zero modes store quantum information nonlocally, they are protected from noise, and have been proposed as a building block for a quantum computer. We show how to use the same protection from noise to implement universal fermionic quantum computation. Our architecture requires only two Majorana modes to encode a fermionic quantum degree of freedom, compared to alternative implementations which require a minimum of four Majorana modes for a spin quantum degree of freedom. The fermionic degrees of freedom support both unitary coupled cluster variational quantum eigensolver and quantum phase estimation algorithms, proposed for quantum chemistry simulations. Because we avoid the Jordan-Wigner transformation, our scheme has a lower overhead for implementing both of these algorithms, allowing for simulation of the Trotterized Hubbard Hamiltonian in O(1) time per unitary step. We finally demonstrate magic state distillation in our fermionic architecture, giving a universal set of topologically protected fermionic quantum gates.

  17. An efficient (t,n) threshold quantum secret sharing without entanglement

    NASA Astrophysics Data System (ADS)

    Qin, Huawang; Dai, Yuewei

    2016-04-01

    An efficient (t,n) threshold quantum secret sharing (QSS) scheme is proposed. In our scheme, a hash function is used to check for eavesdropping, and no particles need to be published, so the utilization efficiency of the particles is a full 100%. No entanglement is used in our scheme. The dealer uses single particles to encode the secret information, and the participants obtain the secret by measuring the single particles. Compared to the existing schemes, our scheme is simpler and more efficient.

  18. Fast, optically controlled Kerr phase shifter for digital signal processing.

    PubMed

    Li, R B; Deng, L; Hagley, E W; Payne, M G; Bienfang, J C; Levine, Z H

    2013-05-01

    We demonstrate an optically controlled Kerr phase shifter using a room-temperature 85Rb vapor operating in a Raman gain scheme. Phase shifts from zero to π relative to an unshifted reference wave are observed, and gated operations are demonstrated. We further demonstrate the versatile digital manipulation of encoded signal light with an encoded phase-control light field using an unbalanced Mach-Zehnder interferometer. Generalizations of this scheme should be capable of full manipulation of a digitized signal field at high speed, opening the door to future applications.

  19. Comparison of compression efficiency between HEVC/H.265 and VP9 based on subjective assessments

    NASA Astrophysics Data System (ADS)

    Řeřábek, Martin; Ebrahimi, Touradj

    2014-09-01

    The current increasing effort of broadcast providers to transmit UHD (Ultra High Definition) content is likely to increase demand for ultra high definition televisions (UHDTVs). To compress UHDTV content, several alternative encoding mechanisms exist. In addition to internationally recognized standards, open access proprietary options, such as the VP9 video encoding scheme, have recently appeared and are gaining popularity. One of the main goals of these encoders is to efficiently compress video sequences beyond HDTV resolution for various scenarios, such as broadcasting or internet streaming. In this paper, a broadcast-scenario rate-distortion performance analysis and mutual comparison of one of the latest video coding standards, H.265/HEVC, with the recently released proprietary video coding scheme VP9 is presented. In addition, H.264/AVC, currently one of the most popular and widely deployed encoders, has been included in the evaluation to serve as a comparison baseline. The comparison is performed by means of subjective evaluations showing actual differences between encoding algorithms in terms of perceived quality. The results indicate a general dominance of the HEVC-based encoding algorithm over the alternatives, with VP9 and AVC showing similar performance.

  20. Characterization, adaptive traffic shaping, and multiplexing of real-time MPEG II video

    NASA Astrophysics Data System (ADS)

    Agrawal, Sanjay; Barry, Charles F.; Binnai, Vinay; Kazovsky, Leonid G.

    1997-01-01

    We obtain a network traffic model for real-time MPEG-II encoded digital video by analyzing video stream samples from real-time encoders from NUKO Information Systems. The MPEG-II sample streams include a resolution-intensive movie, City of Joy; an action-intensive movie, Aliens; a luminance-intensive (black and white) movie, Road To Utopia; and a chrominance-intensive (color) movie, Dick Tracy. From our analysis we obtain a heuristic model for the encoded video traffic which uses a 15-stage Markov process to model the I, B, P frame sequences within a group of pictures (GOP). A jointly correlated Gaussian process is used to model the individual frame sizes. Scene-change arrivals are modeled according to a gamma process. Simulations show that our MPEG-II traffic model generates I, B, P frame sequences and frame sizes that closely match the sample MPEG-II stream traffic characteristics as they relate to latency and buffer occupancy in network queues. To achieve high multiplexing efficiency we propose a traffic shaping scheme which sets preferred I-frame generation times among a group of encoders so as to minimize the overall variation in total offered traffic while still allowing the individual encoders to react to scene changes. Simulations show that our scheme results in multiplexing gains of up to 10%, enabling us to multiplex twenty 6 Mbps MPEG-II video streams instead of 18 streams over an ATM/SONET OC3 link without latency or cell loss penalty. A patent for this scheme is pending.
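    The GOP frame-type structure is easy to sketch in code. The toy generator below emits I/B/P frame types from a fixed 15-frame GOP pattern, with a scene change restarting the GOP at an I frame; this only illustrates the idea, and the pattern and `p_scene_change` are assumptions, not the paper's fitted 15-stage Markov model.

```python
import random

# Toy I/B/P frame-type generator (illustrative; the paper's model uses a
# 15-stage Markov process with fitted transition probabilities).
GOP_PATTERN = list("IBBPBBPBBPBBPBB")  # one hypothetical 15-frame GOP

def frame_sequence(n_frames, p_scene_change=0.01, seed=0):
    """Emit frame types; a scene change restarts the GOP at an I frame."""
    rng = random.Random(seed)
    seq, pos = [], 0
    for _ in range(n_frames):
        if rng.random() < p_scene_change:
            pos = 0  # scene change forces a fresh I frame
        seq.append(GOP_PATTERN[pos])
        pos = (pos + 1) % len(GOP_PATTERN)
    return seq

seq = frame_sequence(45, p_scene_change=0.0)  # three deterministic GOPs
```

With scene changes enabled, the I-frame rate rises above one per GOP, which is exactly the extra burstiness the traffic-shaping scheme has to absorb.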

  1. Entanglement distribution schemes employing coherent photon-to-spin conversion in semiconductor quantum dot circuits

    NASA Astrophysics Data System (ADS)

    Gaudreau, Louis; Bogan, Alex; Korkusinski, Marek; Studenikin, Sergei; Austing, D. Guy; Sachrajda, Andrew S.

    2017-09-01

    Long distance entanglement distribution is an important problem for quantum information technologies to solve. Current optical schemes are known to have fundamental limitations. A coherent photon-to-spin interface built with quantum dots (QDs) in a direct bandgap semiconductor can provide a solution for efficient entanglement distribution. QD circuits offer integrated spin processing for full Bell state measurement (BSM) analysis and spin quantum memory. Crucially the photo-generated spins can be heralded by non-destructive charge detection techniques. We review current schemes to transfer a polarization-encoded state or a time-bin-encoded state of a photon to the state of a spin in a QD. The spin may be that of an electron or that of a hole. We describe adaptations of the original schemes to employ heavy holes which have a number of attractive properties including a g-factor that is tunable to zero for QDs in an appropriately oriented external magnetic field. We also introduce simple throughput scaling models to demonstrate the potential performance advantage of full BSM capability in a QD scheme, even when the quantum memory is imperfect, over optical schemes relying on linear optical elements and ensemble quantum memories.

  2. Quantum-secret-sharing scheme based on local distinguishability of orthogonal multiqudit entangled states

    NASA Astrophysics Data System (ADS)

    Wang, Jingtao; Li, Lixiang; Peng, Haipeng; Yang, Yixian

    2017-02-01

    In this study, we propose the concept of judgment space to investigate the quantum-secret-sharing scheme based on local distinguishability (called LOCC-QSS). Because of the proposing of this conception, the property of orthogonal mutiqudit entangled states under restricted local operation and classical communication (LOCC) can be described more clearly. According to these properties, we reveal that, in the previous (k ,n )-threshold LOCC-QSS scheme, there are two required conditions for the selected quantum states to resist the unambiguous attack: (i) their k -level judgment spaces are orthogonal, and (ii) their (k -1 )-level judgment spaces are equal. Practically, if k

  3. Quantum Communication without Alignment using Multiple-Qubit Single-Photon States

    NASA Astrophysics Data System (ADS)

    Aolita, L.; Walborn, S. P.

    2007-03-01

    We propose a scheme for encoding logical qubits in a subspace protected against collective rotations around the propagation axis using the polarization and transverse spatial degrees of freedom of single photons. This encoding allows for quantum key distribution without the need of a shared reference frame. We present methods to generate entangled states of two logical qubits using present day down-conversion sources and linear optics, and show that the application of these entangled logical states to quantum information schemes allows for alignment-free tests of Bell’s inequalities, quantum dense coding, and quantum teleportation.

  4. The Dopaminergic Midbrain Encodes the Expected Certainty about Desired Outcomes.

    PubMed

    Schwartenbeck, Philipp; FitzGerald, Thomas H B; Mathys, Christoph; Dolan, Ray; Friston, Karl

    2015-10-01

    Dopamine plays a key role in learning; however, its exact function in decision making and choice remains unclear. Recently, we proposed a generic model based on active (Bayesian) inference wherein dopamine encodes the precision of beliefs about optimal policies. Put simply, dopamine discharges reflect the confidence that a chosen policy will lead to desired outcomes. We designed a novel task to test this hypothesis, where subjects played a "limited offer" game in a functional magnetic resonance imaging experiment. Subjects had to decide how long to wait for a high offer before accepting a low offer, with the risk of losing everything if they waited too long. Bayesian model comparison showed that behavior strongly supported active inference, based on surprise minimization, over classical utility maximization schemes. Furthermore, midbrain activity, encompassing dopamine projection neurons, was accurately predicted by trial-by-trial variations in model-based estimates of precision. Our findings demonstrate that human subjects infer both optimal policies and the precision of those inferences, and thus support the notion that humans perform hierarchical probabilistic Bayesian inference. In other words, subjects have to infer both what they should do as well as how confident they are in their choices, where confidence may be encoded by dopaminergic firing. © The Author 2014. Published by Oxford University Press.

  5. Magnetic resonance image compression using scalar-vector quantization

    NASA Astrophysics Data System (ADS)

    Mohsenian, Nader; Shahri, Homayoun

    1995-12-01

    A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for compression of medical images. SVQ is a fixed-rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity issues of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from the error propagation which is typical of coding schemes that use variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the original when displayed on a monitor. This makes our SVQ-based coder an attractive compression scheme for picture archiving and communication systems (PACS), currently under consideration for an all-digital radiology environment in hospitals, where reliable transmission, storage, and high-fidelity reconstruction of images are desired.
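    The fixed-rate property that avoids error propagation can be seen in a minimal uniform scalar quantizer sketch. Every sample maps to a fixed-width index, so a channel error corrupts at most one sample. The step size and alphabet below are assumptions; real SVQ jointly designs the scalar codebook under a vector rate constraint.

```python
# Illustrative fixed-rate uniform scalar quantizer: each sample becomes a
# fixed-width index, unlike variable-length codes where one bit error can
# desynchronize the whole stream. Parameters are hypothetical.

def quantize(x, step=0.5, levels=8):
    """Index of the nearest uniform level, clamped to a fixed alphabet."""
    idx = int(round(x / step))
    lo, hi = -(levels // 2), levels // 2 - 1
    return max(lo, min(hi, idx))

def dequantize(idx, step=0.5):
    return idx * step

samples = [0.1, -0.7, 1.9, 0.4]
indices = [quantize(x) for x in samples]
recon = [dequantize(i) for i in indices]
```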

  6. Network-based Arbitrated Quantum Signature Scheme with Graph State

    NASA Astrophysics Data System (ADS)

    Ma, Hongling; Li, Fei; Mao, Ningyi; Wang, Yijun; Guo, Ying

    2017-08-01

    Implementing an arbitrated quantum signature (AQS) through complex networks is an interesting cryptographic technology in the literature. In this paper, we propose an arbitrated quantum signature for multi-user-involved networks whose topological structures are established by the encoded graph state. The deterministic transmission of the shared keys is enabled by the appropriate stabilizers performed on the graph state. The implementation of this scheme depends on the deterministic distribution of the multi-user-shared graph state, on which the encoded message can be processed in the signing and verifying phases. There are four parties involved: the signatory Alice, the verifier Bob, the arbitrator Trent, and the Dealer, who assists the legal participants in signature generation and verification. The security is guaranteed by the entanglement of the encoded graph state, which is cooperatively prepared by legal participants in complex quantum networks.

  7. Low-Density Parity-Check Code Design Techniques to Simplify Encoding

    NASA Astrophysics Data System (ADS)

    Perez, J. M.; Andrews, K.

    2007-11-01

    This work describes a method for encoding low-density parity-check (LDPC) codes based on the accumulate-repeat-4-jagged-accumulate (AR4JA) scheme, using the low-density parity-check matrix H instead of the dense generator matrix G. Using the H matrix to encode allows a significant reduction in memory consumption and gives the encoder design great flexibility. Also described are new hardware-efficient codes, based on the same kind of protographs, which require less memory storage and area while also reducing the encoding delay.
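    Encoding directly from a parity-check matrix rather than a dense generator can be illustrated on a toy systematic code. This is a (7,4) Hamming-style example, not an AR4JA code; the H matrix and bit ordering below are assumptions for illustration only.

```python
# Toy (7,4) systematic code: H = [A | I3], so the parity bits follow
# directly from the message and H (not an AR4JA protograph code).
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def encode(msg):
    """Append one parity bit per row of H so that H * c^T = 0 (mod 2)."""
    assert len(msg) == 4
    parity = [sum(row[i] * msg[i] for i in range(4)) % 2 for row in H]
    return msg + parity

def check(codeword):
    """True iff every parity check of H is satisfied."""
    return all(sum(row[i] * codeword[i] for i in range(7)) % 2 == 0
               for row in H)

cw = encode([1, 0, 1, 1])
```

Because H is sparse, only the nonzero positions of each row need to be stored, which is the memory saving the abstract refers to; a dense G for the same code has no such structure to exploit.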

  8. Radio astronomy Explorer B antenna aspect processor

    NASA Technical Reports Server (NTRS)

    Miller, W. H.; Novello, J.; Reeves, C. C.

    1972-01-01

    The antenna aspect system used on the Radio Astronomy Explorer B spacecraft is described. This system consists of two facsimile cameras, a data encoder, and a data processor. Emphasis is placed on the discussion of the data processor, which contains a data compressor and a source encoder. With this compression scheme a compression ratio of 8 is achieved on a typical line of camera data. These compressed data are then convolutionally encoded.
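    The final convolutional-encoding step can be sketched with a standard rate-1/2, constraint-length-3 encoder. The generator polynomials (7 and 5 in octal) are a textbook choice, not the actual RAE-B code parameters, which the abstract does not give.

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 convolutional encoder, constraint length 3 (illustrative
    generators 7 and 5 octal; not the actual RAE-B parameters)."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111  # shift new bit into 3-bit register
        out.append(bin(state & g1).count("1") % 2)  # parity of taps for g1
        out.append(bin(state & g2).count("1") % 2)  # parity of taps for g2
    return out

encoded = conv_encode([1, 0, 1, 1])  # two output bits per input bit
```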

  9. Two-dimensional imaging in a lightweight portable MRI scanner without gradient coils.

    PubMed

    Cooley, Clarissa Zimmerman; Stockmann, Jason P; Armstrong, Brandon D; Sarracanie, Mathieu; Lev, Michael H; Rosen, Matthew S; Wald, Lawrence L

    2015-02-01

    As the premier modality for brain imaging, MRI could find wider applicability if lightweight, portable systems were available for siting in unconventional locations such as intensive care units, physician offices, surgical suites, ambulances, emergency rooms, sports facilities, or rural healthcare sites. We construct and validate a truly portable (<100 kg) and silent proof-of-concept MRI scanner which replaces conventional gradient encoding with a rotating lightweight cryogen-free, low-field magnet. When rotated about the object, the inhomogeneous field pattern is used as a rotating spatial encoding magnetic field (rSEM) to create generalized projections which encode the iteratively reconstructed two-dimensional (2D) image. Multiple receive channels are used to disambiguate the nonbijective encoding field. The system is validated with experimental images of 2D test phantoms. Similar to other nonlinear field encoding schemes, the spatial resolution is position dependent with blurring in the center, but is shown to be likely sufficient for many medical applications. The presented MRI scanner demonstrates the potential for portability by simultaneously relaxing the magnet homogeneity criteria and eliminating the gradient coil. This new architecture and encoding scheme shows convincing proof-of-concept images that are expected to be further improved with refinement of the calibration and methodology. © 2014 Wiley Periodicals, Inc.

  10. 2D Imaging in a Lightweight Portable MRI Scanner without Gradient Coils

    PubMed Central

    Cooley, Clarissa Zimmerman; Stockmann, Jason P.; Armstrong, Brandon D.; Sarracanie, Mathieu; Lev, Michael H.; Rosen, Matthew S.; Wald, Lawrence L.

    2014-01-01

    Purpose As the premier modality for brain imaging, MRI could find wider applicability if lightweight, portable systems were available for siting in unconventional locations such as Intensive Care Units, physician offices, surgical suites, ambulances, emergency rooms, sports facilities, or rural healthcare sites. Methods We construct and validate a truly portable (<100 kg) and silent proof-of-concept MRI scanner which replaces conventional gradient encoding with a rotating lightweight cryogen-free, low-field magnet. When rotated about the object, the inhomogeneous field pattern is used as a rotating Spatial Encoding Magnetic field (rSEM) to create generalized projections which encode the iteratively reconstructed 2D image. Multiple receive channels are used to disambiguate the non-bijective encoding field. Results The system is validated with experimental images of 2D test phantoms. Similar to other non-linear field encoding schemes, the spatial resolution is position dependent with blurring in the center, but is shown to be likely sufficient for many medical applications. Conclusion The presented MRI scanner demonstrates the potential for portability by simultaneously relaxing the magnet homogeneity criteria and eliminating the gradient coil. This new architecture and encoding scheme shows convincing proof-of-concept images that are expected to be further improved with refinement of the calibration and methodology. PMID:24668520

  11. Optical delay encoding for fast timing and detector signal multiplexing in PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grant, Alexander M.; Levin, Craig S., E-mail: cslevin@stanford.edu

    2015-08-15

    Purpose: The large number of detector channels in modern positron emission tomography (PET) scanners poses a challenge in terms of readout electronics complexity. Multiplexing schemes are typically implemented to reduce the number of physical readout channels, but often result in performance degradation. Novel methods of multiplexing in PET must be developed to avoid this data degradation. The preservation of fast timing information is especially important for time-of-flight PET. Methods: A new multiplexing scheme based on encoding detector interaction events with a series of extremely fast overlapping optical pulses with precise delays is demonstrated in this work. Encoding events in this way potentially allows many detector channels to be simultaneously encoded onto a single optical fiber that is then read out by a single digitizer. A two-channel silicon photomultiplier-based prototype utilizing this optical delay encoding technique along with dual-threshold time-over-threshold is demonstrated. Results: The optical encoding and multiplexing prototype achieves a coincidence time resolution of 160 ps full width at half maximum (FWHM) and an energy resolution of 13.1% FWHM at 511 keV with 3 × 3 × 5 mm³ LYSO crystals. All interaction information for both detectors, including timing, energy, and channel identification, is encoded onto a single optical fiber with little degradation. Conclusions: Optical delay encoding and multiplexing technology could lead to time-of-flight PET scanners with fewer readout channels and simplified data acquisition systems.

  12. Efficient quantum transmission in multiple-source networks.

    PubMed

    Luo, Ming-Xing; Xu, Gang; Chen, Xiu-Bo; Yang, Yi-Xian; Wang, Xiaojun

    2014-04-02

    A difficult problem in quantum network communications is how to efficiently transmit quantum information over large-scale networks with common channels. We propose a solution by developing a quantum encoding approach. Different quantum states are encoded into a coherent superposition state using quantum linear optics. The transmission congestion in the common channel may be avoided by transmitting the superposition state. For further decoding and continued transmission, special phase transformations are applied to incoming quantum states using phase shifters such that decoders can distinguish outgoing quantum states. These phase shifters may be precisely controlled using classical chaos synchronization via additional classical channels. Based on this design and the reduction of the multiple-source network under the assumption of restricted maximum flow, an optimal scheme is proposed for the specially quantized multiple-source network. In comparison with previous schemes, our scheme can greatly increase the transmission efficiency.

  13. An Efficient Variable Length Coding Scheme for an IID Source

    NASA Technical Reports Server (NTRS)

    Cheung, K. -M.

    1995-01-01

    A scheme is examined for using two alternating Huffman codes to encode a discrete independent and identically distributed (IID) source with a dominant symbol. This combined strategy, alternating runlength Huffman (ARH) coding, was found to be more efficient than ordinary Huffman coding in certain circumstances.
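    The idea of combining run-length and Huffman coding for a dominant-symbol source can be sketched as follows. This is a simplified stand-in for ARH (the tokenization and the single shared Huffman code are assumptions): runs of the dominant symbol are replaced by run-length tokens, and a Huffman code is built over the resulting token stream.

```python
import heapq
from collections import Counter

def runlengths(symbols, dominant):
    """Replace runs of the dominant symbol with (run-length, literal) tokens."""
    tokens, run = [], 0
    for s in symbols:
        if s == dominant:
            run += 1
        else:
            tokens.append(("run", run))
            tokens.append(("lit", s))
            run = 0
    tokens.append(("run", run))  # trailing run (possibly zero)
    return tokens

def huffman_lengths(tokens):
    """Code lengths of a Huffman code built over token frequencies."""
    freq = Counter(tokens)
    heap = [(w, i, {t: 0}) for i, (t, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    nxt = len(heap)
    while len(heap) > 1:
        w1, _, d1 = heapq.heappop(heap)
        w2, _, d2 = heapq.heappop(heap)
        merged = {t: depth + 1 for t, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (w1 + w2, nxt, merged))
        nxt += 1
    return heap[0][2]  # token -> code length in bits

tokens = runlengths("aaabaaaacaa", "a")
lengths = huffman_lengths(tokens)
```

Long runs of the dominant symbol collapse into single tokens before entropy coding, which is where the efficiency gain over plain symbol-by-symbol Huffman coding comes from.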

  14. Security enhanced BioEncoding for protecting iris codes

    NASA Astrophysics Data System (ADS)

    Ouda, Osama; Tsumura, Norimichi; Nakaguchi, Toshiya

    2011-06-01

    Improving the security of biometric template protection techniques is a key prerequisite for the widespread deployment of biometric technologies. BioEncoding is a recently proposed template protection scheme, based on the concept of cancelable biometrics, for protecting biometric templates represented as binary strings such as iris codes. The main advantage of BioEncoding over other template protection schemes is that it does not require user-specific keys and/or tokens during verification. Besides, it satisfies all the requirements of the cancelable biometrics construct without deteriorating the matching accuracy. However, although it has been shown that BioEncoding is secure enough against simple brute-force search attacks, the security of BioEncoded templates against smarter attacks, such as record multiplicity attacks, has not been sufficiently investigated. In this paper, a rigorous security analysis of BioEncoding is presented. Firstly, the resistance of BioEncoded templates against brute-force attacks is revisited thoroughly. Secondly, we show that although the cancelable transformation employed in BioEncoding might be non-invertible for a single protected template, the original iris code could be inverted by correlating several templates used in different applications but created from the same iris. Accordingly, we propose an important modification to the BioEncoding transformation process in order to hinder attackers from mounting this type of attack. The effectiveness of adopting the suggested modification is validated and its impact on the matching accuracy is investigated empirically using the CASIA-IrisV3-Interval dataset. Experimental results confirm the efficacy of the proposed approach and show that it preserves the matching accuracy of the unprotected iris recognition system.

  15. International Metadata Initiatives: Lessons in Bibliographic Control.

    ERIC Educational Resources Information Center

    Caplan, Priscilla

    This paper looks at a subset of metadata schemes, including the Text Encoding Initiative (TEI) header, the Encoded Archival Description (EAD), the Dublin Core Metadata Element Set (DCMES), and the Visual Resources Association (VRA) Core Categories for visual resources. It examines why they developed as they did, major point of difference from…

  16. A Comprehensive Three-Dimensional Cortical Map of Vowel Space

    ERIC Educational Resources Information Center

    Scharinger, Mathias; Idsardi, William J.; Poe, Samantha

    2011-01-01

    Mammalian cortex is known to contain various kinds of spatial encoding schemes for sensory information including retinotopic, somatosensory, and tonotopic maps. Tonotopic maps are especially interesting for human speech sound processing because they encode linguistically salient acoustic properties. In this study, we mapped the entire vowel space…

  17. SEMG signal compression based on two-dimensional techniques.

    PubMed

    de Melo, Wheidima Carneiro; de Lima Filho, Eddie Batista; da Silva Júnior, Waldir Sabino

    2016-04-18

    Recently, two-dimensional techniques have been successfully employed for compressing surface electromyographic (SEMG) records as images, through the use of image and video encoders. Such schemes usually provide specific compressors, which are tuned for SEMG data, or employ preprocessing techniques, before the two-dimensional encoding procedure, in order to provide a suitable data organization whose correlations can be better exploited by off-the-shelf encoders. Besides preprocessing input matrices, one may also depart from those approaches and employ an adaptive framework, which is able to directly tackle SEMG signals reassembled as images. This paper proposes a new two-dimensional approach for SEMG signal compression, based on a recurrent pattern matching algorithm called the multidimensional multiscale parser (MMP). The mentioned encoder was modified in order to work efficiently with SEMG signals and exploit their inherent redundancies. Moreover, a new preprocessing technique, named segmentation by similarity (SbS), which has the potential to enhance the exploitation of intra- and intersegment correlations, is introduced; the percentage difference sorting (PDS) algorithm is employed with different image compressors; and results with the high efficiency video coding (HEVC), H.264/AVC, and JPEG2000 encoders are presented. Experiments were carried out with real isometric and dynamic records acquired in the laboratory. Dynamic signals compressed with H.264/AVC and HEVC, when combined with preprocessing techniques, resulted in good percent root-mean-square difference [Formula: see text] compression factor figures, for low and high compression factors, respectively. Besides, regarding isometric signals, the modified two-dimensional MMP algorithm outperformed state-of-the-art schemes for low compression factors; the combination of SbS and HEVC proved to be competitive for high compression factors; and JPEG2000, combined with PDS, provided good performance allied to low computational complexity, all in terms of percent root-mean-square difference [Formula: see text] compression factor. The proposed schemes are effective and, specifically, the modified MMP algorithm can be considered an interesting alternative to traditional SEMG encoders for isometric signals. Moreover, the approach based on off-the-shelf image encoders has the potential for fast implementation and dissemination, given that many embedded systems may already provide such encoders in the underlying hardware/software architecture.
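    The basic reassembly step shared by these image-based SEMG compressors, in which consecutive windows of the 1D record become rows of a matrix so that an image codec can exploit inter-window correlation, can be sketched as follows. The window length is a hypothetical parameter, not the paper's setting.

```python
# Illustrative reassembly of a 1D SEMG record into a 2D matrix for an
# image codec (window length is a hypothetical parameter).

def to_matrix(signal, width):
    """Consecutive length-`width` windows become rows of a matrix."""
    return [signal[i:i + width] for i in range(0, len(signal) - width + 1, width)]

def to_signal(rows):
    """Inverse step: concatenate the rows back into the 1D record."""
    return [v for row in rows for v in row]

sig = list(range(12))
m = to_matrix(sig, 4)
```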

  18. Integration of quantum key distribution and private classical communication through continuous variable

    NASA Astrophysics Data System (ADS)

    Wang, Tianyi; Gong, Feng; Lu, Anjiang; Zhang, Damin; Zhang, Zhengping

    2017-12-01

    In this paper, we propose a scheme that integrates quantum key distribution and private classical communication via continuous variables. The integrated scheme employs both quadratures of a weak coherent state, with encrypted bits encoded on the signs and Gaussian random numbers encoded on the values of the quadratures. The integration enables quantum and classical data to share the same physical and logical channel. Simulation results based on practical system parameters demonstrate that both classical communication and quantum communication can be implemented over distances of tens of kilometers, thus providing a potential solution for the simultaneous transmission of quantum and classical communication.
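    The sign/value encoding can be sketched numerically: each quadrature carries an encrypted bit in its sign and a Gaussian random number (usable for key distillation) in its magnitude. All names and parameters below are illustrative assumptions; channel noise and the optical layer are ignored.

```python
import random

# Toy sign/value encoding on one quadrature: the sign carries an encrypted
# classical bit, the magnitude a Gaussian random number for the QKD part.
# Noise-free illustration; names and parameters are assumptions.

def encode_quadrature(bit, rng, sigma=1.0):
    value = abs(rng.gauss(0.0, sigma))    # Gaussian modulation for QKD
    return value if bit == 1 else -value  # sign carries the classical bit

def decode_bit(quadrature):
    return 1 if quadrature > 0 else 0

bits = [1, 0, 1, 1, 0]
rng = random.Random(7)
signal = [encode_quadrature(b, rng) for b in bits]
decoded = [decode_bit(q) for q in signal]
```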

  19. A 256-channel, high throughput and precision time-to-digital converter with a decomposition encoding scheme in a Kintex-7 FPGA

    NASA Astrophysics Data System (ADS)

    Song, Z.; Wang, Y.; Kuang, J.

    2018-05-01

    Field Programmable Gate Arrays (FPGAs) made with 28 nm and more advanced process technologies have great potential for the implementation of high-precision time-to-digital converters (TDCs), because the delay cells in the tapped delay line (TDL) used for time interpolation are getting smaller and smaller. However, the bubble problems in the TDL status are becoming more complicated, which makes it difficult to achieve TDCs on these chips with high time precision. In this paper, we propose a novel decomposition encoding scheme, which not only solves the bubble problem easily but also has a high encoding efficiency. The potential of these chips to realize TDCs can be fully exploited with this scheme. In a Xilinx Kintex-7 FPGA chip, we implemented a TDC system with 256 TDC channels, which doubles the number of TDC channels that our previous technique could achieve. The performance of all these TDC channels is evaluated. The average RMS time precision among them is 10.23 ps over the time-interval measurement range of 0-10 ns, and their measurement throughput reaches 277 million measurements per second.
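    One way to see why a bubble-tolerant encoder is possible at all: summing the asserted taps of the delay line (a population count) gives the same fine time code whether or not a stray 0/1 "bubble" appears near the thermometer-code transition. This toy sketch is our illustration of that intuition, not the paper's exact decomposition encoder.

```python
# An ideal tapped-delay-line status is a thermometer code 1...10...0;
# metastability can produce "bubbles" near the transition. Counting the
# asserted taps is insensitive to a bubble that merely swaps a 1 and a 0.

def popcount_encode(taps):
    """Fine time code = number of asserted taps."""
    return sum(taps)

ideal = [1, 1, 1, 1, 1, 0, 0, 0]
bubbled = [1, 1, 1, 0, 1, 1, 0, 0]  # a 0/1 bubble around the transition
```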

  20. Wavelet-based surrogate time series for multiscale simulation of heterogeneous catalysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savara, Aditya Ashi; Daw, C. Stuart; Xiong, Qingang

    We propose a wavelet-based scheme that encodes the essential dynamics of discrete microscale surface reactions in a form that can be coupled with continuum macroscale flow simulations with high computational efficiency. This makes it possible to simulate the dynamic behavior of reactor-scale heterogeneous catalysis without requiring detailed concurrent simulations at both the surface and continuum scales using different models. Our scheme is based on the application of wavelet-based surrogate time series that encode the essential temporal and/or spatial fine-scale dynamics at the catalyst surface. The encoded dynamics are then used to generate statistically equivalent, randomized surrogate time series, which can be linked to the continuum-scale simulation. Finally, we illustrate an application of this approach using two different kinetic Monte Carlo simulations with different characteristic behaviors typical of heterogeneous chemical reactions.
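    A minimal sketch of the surrogate idea, assuming a single-level Haar transform (the abstract does not specify the wavelet family or shuffling rules): decompose, shuffle the detail coefficients to randomize the fine-scale dynamics, and reconstruct, preserving the coarse trend and the detail-coefficient statistics.

```python
import random

# Single-level Haar sketch of a wavelet-based surrogate series
# (illustrative; the paper's wavelet family and rules are not given here).

def haar_forward(x):
    """Pairwise averages (coarse trend) and differences (fine detail)."""
    avg = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    det = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return avg, det

def haar_inverse(avg, det):
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out

def surrogate(x, seed=0):
    """Shuffle the detail coefficients: randomized fine-scale dynamics,
    statistically equivalent detail set, unchanged coarse trend."""
    rng = random.Random(seed)
    avg, det = haar_forward(x)
    rng.shuffle(det)
    return haar_inverse(avg, det)

x = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
y = surrogate(x)
```

The surrogate `y` has exactly the same pairwise averages as `x` and the same multiset of detail coefficients, only permuted, which is the "statistically equivalent, randomized" property in miniature.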

  1. Distribution of hybrid entanglement and hyperentanglement with time-bin for secure quantum channel under noise via weak cross-Kerr nonlinearity.

    PubMed

    Heo, Jino; Kang, Min-Sung; Hong, Chang-Ho; Yang, Hyung-Jin; Choi, Seong-Gon; Hong, Jong-Phil

    2017-08-31

    We design schemes to generate and distribute hybrid entanglement and hyperentanglement correlated with degrees of freedom (polarization and time-bin) via weak cross-Kerr nonlinearities (XKNLs) and linear optical devices (including time-bin encoders). In our scheme, the multi-photon gates (which consist of XKNLs, quantum bus [qubus] beams, and photon-number-resolving [PNR] measurement) with time-bin encoders can generate hyperentanglement or hybrid entanglement. And we can also purify the entangled state (polarization) of two photons using only linear optical devices and time-bin encoders under a noisy (bit-flip) channel. Subsequently, through local operations (using a multi-photon gate via XKNLs) and classical communications, it is possible to generate a four-qubit hybrid entangled state (polarization and time-bin). Finally, we discuss how the multi-photon gate using XKNLs, qubus beams, and PNR measurement can be reliably performed under the decoherence effect.

  2. A quantum approach to homomorphic encryption

    PubMed Central

    Tan, Si-Hui; Kettlewell, Joshua A.; Ouyang, Yingkai; Chen, Lin; Fitzsimons, Joseph F.

    2016-01-01

    Encryption schemes often derive their power from the properties of the underlying algebra on the symbols used. Inspired by group theoretic tools, we use the centralizer of a subgroup of operations to present a private-key quantum homomorphic encryption scheme that enables a broad class of quantum computation on encrypted data. The quantum data is encoded on bosons of distinct species in distinct spatial modes, and the quantum computations are manipulations of these bosons in a manner independent of their species. A particular instance of our encoding hides up to a constant fraction of the information encrypted. This fraction can be made arbitrarily close to unity with overhead scaling only polynomially in the message length. This highlights the potential of our protocol to hide a non-trivial amount of information, and is suggestive of a large class of encodings that might yield better security. PMID:27658349

  3. Wavelet-based surrogate time series for multiscale simulation of heterogeneous catalysis

    DOE PAGES

    Savara, Aditya Ashi; Daw, C. Stuart; Xiong, Qingang; ...

    2016-01-28

    We propose a wavelet-based scheme that encodes the essential dynamics of discrete microscale surface reactions in a form that can be coupled with continuum macroscale flow simulations with high computational efficiency. This makes it possible to simulate the dynamic behavior of reactor-scale heterogeneous catalysis without requiring detailed concurrent simulations at both the surface and continuum scales using different models. Our scheme is based on the application of wavelet-based surrogate time series that encode the essential temporal and/or spatial fine-scale dynamics at the catalyst surface. The encoded dynamics are then used to generate statistically equivalent, randomized surrogate time series, which can be linked to the continuum-scale simulation. Finally, we illustrate an application of this approach using two different kinetic Monte Carlo simulations with different characteristic behaviors typical of heterogeneous chemical reactions.

  4. Schemes for efficient transmission of encoded video streams on high-speed networks

    NASA Astrophysics Data System (ADS)

    Ramanathan, Srinivas; Vin, Harrick M.; Rangan, P. Venkat

    1994-04-01

    In this paper, we argue that significant performance benefits can accrue if integrated networks implement application-specific mechanisms that account for the diversities in media compression schemes. Towards this end, we propose a simple, yet effective, strategy called Frame Induced Packet Discarding (FIPD), in which, upon detection of the loss of a threshold number (determined by an application's video encoding scheme) of packets belonging to a video frame, the network attempts to discard all the remaining packets of that frame. In order to analytically quantify the performance of FIPD, so as to obtain the fractional frame losses that can be guaranteed to video channels, we develop a finite-state, discrete-time Markov chain model of the FIPD strategy. The fractional frame loss thus computed can serve as the criterion for admission control at the network. Performance evaluations demonstrate the utility of the FIPD strategy.
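    A toy simulation of the FIPD idea (the packet counts, loss model, and threshold are illustrative assumptions, not the paper's model): once more than a threshold number of a frame's packets are lost, the remaining packets of that frame are discarded rather than transmitted, since the frame is undecodable anyway.

```python
import random

# Toy FIPD simulation: after more than `threshold` packets of a frame are
# lost, the network drops the frame's remaining packets instead of sending
# them. All parameters are hypothetical.

def fipd_packets_sent(frames, loss_prob, threshold, seed=1):
    """frames: packets per frame. Returns how many packets actually get sent."""
    rng = random.Random(seed)
    sent = 0
    for n_packets in frames:
        lost = 0
        for _ in range(n_packets):
            sent += 1
            if rng.random() < loss_prob:
                lost += 1
                if lost > threshold:
                    break  # frame is undecodable: drop the rest of it
    return sent

baseline = 100 * 20  # without FIPD every packet is transmitted
with_fipd = fipd_packets_sent([20] * 100, loss_prob=0.3, threshold=2)
```

The bandwidth freed by the discarded packets is what the network can re-offer to other channels, which is the benefit the admission-control analysis quantifies.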

  5. Encoding Schemes For A Digital Optical Multiplier Using The Modified Signed-Digit Number Representation

    NASA Astrophysics Data System (ADS)

    Lasher, Mark E.; Henderson, Thomas B.; Drake, Barry L.; Bocker, Richard P.

    1986-09-01

    The modified signed-digit (MSD) number representation offers fully parallel, carry-free addition. An MSD adder has been described by the authors. This paper describes how the adder can be used in a tree structure to implement an optical multiply algorithm. Three different optical schemes, involving position, polarization, and intensity encoding, are proposed for realizing the trinary logic system. When configured in the generic multiplier architecture, these schemes yield the combinatorial logic necessary to carry out the multiplication algorithm. The optical systems are essentially three-dimensional arrangements composed of modular units. This modularity is important for design considerations, while the parallelism and noninterfering communication channels of optical systems are important from the standpoint of reduced complexity. The authors have also designed electronic hardware to demonstrate and model the combinatorial logic required to carry out the algorithm. The electronic and proposed optical systems will be compared in terms of complexity and speed.
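
    A minimal sketch of the MSD representation itself (digit order least-significant first is an assumption of this sketch), showing the redundancy that makes carry-free addition possible:

```python
def msd_value(digits):
    """Value of a modified signed-digit (MSD) numeral.
    Digits are drawn from {-1, 0, 1}, least-significant first."""
    return sum(d * (2 ** i) for i, d in enumerate(digits))

# The representation is redundant: several digit strings denote the same
# integer, which is the property that allows addition without carry chains.
print(msd_value([1, 1]))        # 1 + 2  -> 3
print(msd_value([-1, 0, 1]))    # -1 + 4 -> 3
print(msd_value([1, -1]))       # 1 - 2  -> -1
```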

  6. Efficient Quantum Transmission in Multiple-Source Networks

    PubMed Central

    Luo, Ming-Xing; Xu, Gang; Chen, Xiu-Bo; Yang, Yi-Xian; Wang, Xiaojun

    2014-01-01

    A difficult problem in quantum network communications is how to efficiently transmit quantum information over large-scale networks with common channels. We propose a solution by developing a quantum encoding approach. Different quantum states are encoded into a coherent superposition state using quantum linear optics. Transmission congestion in the common channel may be avoided by transmitting the superposition state. For further decoding and continued transmission, special phase transformations are applied to incoming quantum states using phase shifters so that decoders can distinguish outgoing quantum states. These phase shifters may be precisely controlled using classical chaos synchronization via additional classical channels. Based on this design and the reduction of the multiple-source network under the assumption of restricted maximum flow, an optimal scheme is proposed for a specially quantized multiple-source network. In comparison with previous schemes, our scheme can greatly increase the transmission efficiency. PMID:24691590

  7. Optical flip-flops in a polarization-encoded optical shadow-casting scheme.

    PubMed

    Rizvi, R A; Zubairy, M S

    1994-06-10

    We propose a novel scheme that optically implements various types of binary sequential logic elements. It is based on a polarization-encoded optical shadow-casting system. The proposed system architecture is capable of implementing synchronous as well as asynchronous sequential circuits owing to the inherent structural flexibility of optical shadow casting. Employing the proposed system, we present the design and implementation schemes of a J-K flip-flop and clocked R-S and D latches. The main feature of these flip-flops is that the propagation of the signal from the input plane to the output (i.e., processing) and from the output plane to the source plane (i.e., feedback) is all-optical. Consequently, the efficiency of these elements in terms of speed is increased. The only electronic parts in the system are the detection of the outputs and the switching of the source plane.
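
    The J-K behavior that the optical system realizes can be summarized by its standard characteristic equation; this is just the logic table, not the shadow-casting implementation:

```python
def jk_next(q, j, k):
    """Next state of a J-K flip-flop on a clock edge:
    J=K=0 hold, J=1 K=0 set, J=0 K=1 reset, J=K=1 toggle.
    Characteristic equation: Q' = J*~Q + ~K*Q."""
    return int((j and not q) or (not k and q))

for j in (0, 1):
    for k in (0, 1):
        for q in (0, 1):
            print(f"J={j} K={k} Q={q} -> Q'={jk_next(q, j, k)}")
```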

  8. Cryptanalysis and improvement of an optical image encryption scheme using a chaotic Baker map and double random phase encoding

    NASA Astrophysics Data System (ADS)

    Chen, Jun-Xin; Zhu, Zhi-Liang; Fu, Chong; Zhang, Li-Bo; Zhang, Yushu

    2014-12-01

    In this paper, we evaluate the security of an enhanced double random phase encoding (DRPE) image encryption scheme (2013 J. Lightwave Technol. 31 2533). The original system employs a chaotic Baker map prior to DRPE to provide more protection to the plain image and hence raise the security level of DRPE, as claimed. However, cryptanalysis shows that this scheme is vulnerable to a chosen-plaintext attack, and the plaintext can be precisely recovered. The corresponding improvement is subsequently reported, on the basic premise that no extra equipment or computational complexity is required. The simulation results and security analyses prove its effectiveness and security. The proposed improvements are applicable to all cryptosystems that employ a permutation followed by the DRPE architecture, and we hope that our work can motivate further research on optical image encryption.
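
    The baseline DRPE transform under attack can be sketched with FFTs; the image size and masks below are toy values, and the chaotic Baker map permutation stage is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((8, 8))                       # toy "plain image"
m1 = np.exp(2j * np.pi * rng.random((8, 8)))   # input-plane random phase mask
m2 = np.exp(2j * np.pi * rng.random((8, 8)))   # Fourier-plane random phase mask

# DRPE encryption: input mask, Fourier transform, second mask, inverse FT.
cipher = np.fft.ifft2(np.fft.fft2(img * m1) * m2)

# Decryption with the correct masks undoes each step in reverse order;
# since |m1| = |m2| = 1, multiplying by the conjugates inverts the masks.
recovered = np.abs(np.fft.ifft2(np.fft.fft2(cipher) * np.conj(m2)) * np.conj(m1))
```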

  9. Improved Iterative Decoding of Network-Channel Codes for Multiple-Access Relay Channel.

    PubMed

    Majumder, Saikat; Verma, Shrish

    2015-01-01

    Cooperative communication using relay nodes is one of the most effective means of exploiting space diversity for low-cost nodes in wireless networks. In cooperative communication, users, besides communicating their own information, also relay the information of other users. In this paper we investigate a scheme where cooperation is achieved using a common relay node which performs network coding to provide space diversity for two information nodes transmitting to a base station. We propose a scheme which uses a Reed-Solomon error-correcting code for encoding the information bits at the user nodes and a convolutional code as the network code, instead of XOR-based network coding. Based on this encoder, we propose iterative soft decoding of the joint network-channel code by treating it as a concatenated Reed-Solomon convolutional code. Simulation results show significant improvement in performance compared to the existing scheme based on compound codes.
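
    The XOR-based network coding that the proposed Reed-Solomon/convolutional design replaces can be sketched as follows (packet sizes and names are illustrative):

```python
import numpy as np

def xor_relay(pkt_a, pkt_b):
    """Baseline XOR network coding at the relay: broadcast a single
    combined packet instead of forwarding both packets separately."""
    return np.bitwise_xor(pkt_a, pkt_b)

rng = np.random.default_rng(2)
a = rng.integers(0, 256, 16, dtype=np.uint8)   # user A's packet
b = rng.integers(0, 256, 16, dtype=np.uint8)   # user B's packet
relayed = xor_relay(a, b)

# The base station hears A's packet directly and recovers B's packet
# from the relay transmission, gaining a diversity branch for free.
b_hat = np.bitwise_xor(relayed, a)
```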

  10. Complementary-encoding holographic associative memory using a photorefractive crystal

    NASA Astrophysics Data System (ADS)

    Yuan, ShiFu; Wu, Minxian; Yan, Yingbai; Jin, Guofan

    1996-06-01

    We present a holographic implementation of accurate associative memory with only one holographic memory system. In the implementation, the stored and test images are coded using a complementary-encoding method. The recalled complete image is also a coded image that can be decoded with a decoding mask to recover the original image or its complement. The experiment shows that complementary encoding can efficiently increase the addressing accuracy in a simple way. As an alternative to the above complementary-encoding method, a scheme using a complementary area-encoding method is also proposed for the holographic implementation of gray-level image associative memory with accurate addressing.

  11. Quantization Distortion in Block Transform-Compressed Data

    NASA Technical Reports Server (NTRS)

    Boden, A. F.

    1995-01-01

    The popular JPEG image compression standard is an example of a block transform-based compression scheme; the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.
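
    A minimal sketch of the block transform/quantization pipeline, using an orthonormal 8×8 DCT-II and a single uniform quantizer step (JPEG's per-frequency quantization table and entropy coder are omitted):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix (the JPEG block transform)."""
    k = np.arange(n)
    d = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    d[0, :] *= 1 / np.sqrt(2)
    return d * np.sqrt(2 / n)

rng = np.random.default_rng(3)
block = rng.integers(0, 256, (8, 8)).astype(float)   # one 8x8 image block
D = dct_matrix()
coeffs = D @ block @ D.T                # 2-D separable transform
step = 16.0
quantized = np.round(coeffs / step)     # lossy step: most entropy reduction
reconstructed = D.T @ (quantized * step) @ D   # dequantize + inverse transform
```

Because the transform is orthonormal, the reconstruction error is exactly the quantization error in the coefficient domain, bounded by `step / 2` per coefficient.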

  12. Efficient biprediction decision scheme for fast high efficiency video coding encoding

    NASA Astrophysics Data System (ADS)

    Park, Sang-hyo; Lee, Seung-ho; Jang, Euee S.; Jun, Dongsan; Kang, Jung-Won

    2016-11-01

    An efficient biprediction decision scheme of high efficiency video coding (HEVC) is proposed for fast-encoding applications. For low-delay video applications, bidirectional prediction can be used to increase compression performance efficiently with previous reference frames. However, at the same time, the computational complexity of the HEVC encoder is significantly increased due to the additional biprediction search. Although some research has attempted to reduce this complexity, whether biprediction is strongly related to both motion complexity and prediction modes in a coding unit has not yet been investigated. A method that avoids most compression-inefficient search points is proposed so that the computational complexity of the motion estimation process can be dramatically decreased. To determine whether biprediction is critical, the proposed method exploits the stochastic correlation of the context of prediction units (PUs): the direction of a PU and the accuracy of a motion vector. Experimental results show that the time complexity of biprediction can be reduced to 30% on average, outperforming existing methods in terms of encoding time, number of function calls, and memory access.
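
    A hypothetical early-decision rule in the spirit of the paper; the exact test on PU direction and motion-vector accuracy is not reproduced here, and both the condition and the names are assumptions of this sketch:

```python
def skip_biprediction(best_uni_mv, pu_is_square):
    """Hypothetical early-termination rule (not the paper's exact test):
    skip the expensive biprediction search when motion looks simple,
    i.e. the best unidirectional motion vector is integer-pel
    (HEVC stores MVs in quarter-pel units) and the PU is square."""
    integer_pel = best_uni_mv[0] % 4 == 0 and best_uni_mv[1] % 4 == 0
    return integer_pel and pu_is_square

print(skip_biprediction((4, -8), True))   # integer-pel, square PU -> True
print(skip_biprediction((5, -8), True))   # fractional-pel motion  -> False
```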

  13. Knowledge and Implementation of Tertiary Institutions' Social Health Insurance Programme (TISHIP) in Nigeria: a case study of Nnamdi Azikiwe University, Awka.

    PubMed

    Anetoh, Maureen Ugonwa; Jibuaku, Chiamaka Henrietta; Nduka, Sunday Odunke; Uzodinma, Samuel Uchenna

    2017-01-01

    Tertiary Institutions' Social Health Insurance Programme (TISHIP) is an arm of the National Health Insurance Scheme (NHIS), which provides quality healthcare to students in Nigerian higher institutions. The success of this scheme depends on the students' knowledge and awareness of its existence as well as the level of its implementation by healthcare providers. This study was therefore designed to assess students' knowledge of and attitude towards TISHIP and its implementation level among health workers in the Nnamdi Azikiwe University Medical Centre. Using a stratified random sampling technique, 420 undergraduate students of Nnamdi Azikiwe University, Awka were assessed on their level of awareness and general assessment of TISHIP through an adapted and validated questionnaire instrument. The level of implementation of the scheme was then assessed among 50 randomly selected staff of the University Medical Centre. Data collected were analyzed using Statistical Package for the Social Sciences (SPSS) version 20 software. Whereas the students, in general, showed a high level of TISHIP awareness, more than half of them (56.3%) had never benefited from the scheme, and 52.8% were dissatisfied with the quality of care offered under the scheme. However, an overwhelming number of the students (87.9%) opined that the scheme should continue. On the other hand, the University Medical Centre staff responses showed satisfactory scheme implementation. The study found satisfactory TISHIP awareness but poor attitude among Nnamdi Azikiwe University students. Furthermore, the University Medical Centre health workers showed a strong commitment to the objectives of the scheme.

  14. A CU-Level Rate and Distortion Estimation Scheme for RDO of Hardware-Friendly HEVC Encoders Using Low-Complexity Integer DCTs.

    PubMed

    Lee, Bumshik; Kim, Munchurl

    2016-08-01

    In this paper, a low-complexity coding unit (CU)-level rate and distortion estimation scheme is proposed for hardware-friendly implementation of High Efficiency Video Coding (HEVC), where a Walsh-Hadamard transform (WHT)-based low-complexity integer discrete cosine transform (DCT) is employed for distortion estimation. Since HEVC adopts quadtree structures of coding blocks with hierarchical coding depths, it becomes more difficult to estimate accurate rate and distortion values without actually performing transform, quantization, inverse transform, de-quantization, and entropy coding. Furthermore, DCT for rate-distortion optimization (RDO) is computationally expensive, because it requires a large number of multiplication and addition operations for the various transform block sizes of orders 4, 8, 16, and 32, and requires recursive computations to decide the optimal depths of the CU or transform unit. Therefore, full RDO-based encoding is highly complex, especially for low-power implementations of HEVC encoders. In this paper, a CU-level rate and distortion estimation scheme is proposed based on a low-complexity integer DCT that can be computed in terms of the WHT, whose coefficients are produced in the prediction stage. For CU-level rate and distortion estimation, two orthogonal matrices of sizes 4×4 and 8×8, which are applied to the WHT, are newly designed in a butterfly structure with only addition and shift operations. By applying the integer DCT based on the WHT and the newly designed transforms in each CU block, the texture rate can be precisely estimated after quantization using the number of non-zero quantized coefficients, and the distortion can also be precisely estimated in the transform domain without the de-quantization and inverse transform otherwise required. In addition, a non-texture rate estimation is proposed using a pseudoentropy code to obtain accurate total rate estimates. The proposed rate and distortion estimation scheme can effectively be used for hardware-friendly implementation of HEVC encoders with a 9.8% loss over HEVC full RDO, which is much less than the 20.3% and 30.2% losses of a conventional approach and a Hadamard-only scheme, respectively.
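
    The add-and-shift-only butterfly structure of a Walsh-Hadamard transform, the property that makes WHT-based estimation hardware-friendly, can be sketched as:

```python
def fwht(x):
    """Fast Walsh-Hadamard transform via a butterfly structure using only
    additions and subtractions (no multipliers), for power-of-two lengths."""
    x = list(x)
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            for j in range(i, i + h):
                x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
        h *= 2
    return x

# An impulse spreads equally over every Walsh basis function.
print(fwht([1, 0, 0, 0]))   # -> [1, 1, 1, 1]
```

Applying the unnormalized transform twice returns `N` times the input, which is a convenient self-check for a hardware implementation.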

  15. Using concatenated quantum codes for universal fault-tolerant quantum gates.

    PubMed

    Jochym-O'Connor, Tomas; Laflamme, Raymond

    2014-01-10

    We propose a method for universal fault-tolerant quantum computation using concatenated quantum error correcting codes. The concatenation scheme exploits the transversal properties of two different codes, combining them to provide a means to protect against low-weight arbitrary errors. We give the required properties of the error correcting codes to ensure universal fault tolerance and discuss a particular example using the 7-qubit Steane and 15-qubit Reed-Muller codes. Namely, other than computational basis state preparation as required by the DiVincenzo criteria, our scheme requires no special ancillary state preparation to achieve universality, as opposed to schemes such as magic state distillation. We believe that optimizing the codes used in such a scheme could provide a useful alternative to state distillation schemes that exhibit high overhead costs.

  16. Quark enables semi-reference-based compression of RNA-seq data.

    PubMed

    Sarkar, Hirak; Patro, Rob

    2017-11-01

    The past decade has seen an exponential increase in biological sequencing capacity, and there has been a simultaneous effort to help organize and archive some of the vast quantities of sequencing data that are being generated. Although these developments are tremendous from the perspective of maximizing the scientific utility of available data, they come with heavy costs. The storage and transmission of such vast amounts of sequencing data is expensive. We present Quark, a semi-reference-based compression tool designed for RNA-seq data. Quark makes use of a reference sequence when encoding reads, but produces a representation that can be decoded independently, without the need for a reference. This allows Quark to achieve markedly better compression rates than existing reference-free schemes, while still relieving the burden of assuming a specific, shared reference sequence between the encoder and decoder. We demonstrate that Quark achieves state-of-the-art compression rates, and that, typically, only a small fraction of the reference sequence must be encoded along with the reads to allow reference-free decompression. Quark is implemented in C++11, and is available under a GPLv3 license at www.github.com/COMBINE-lab/quark. rob.patro@cs.stonybrook.edu. Supplementary data are available at Bioinformatics online.
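
    A toy sketch of the semi-reference idea (Quark's actual format, alignment machinery, and quality handling are far richer): store each read as its diffs against the reference plus the reference segment it needs, so that decoding requires no external reference:

```python
def encode_read(read, reference, pos):
    """Hypothetical semi-reference encoding: keep the mismatching bases
    and the covered reference segment, so the decoder never needs the
    full reference sequence."""
    segment = reference[pos:pos + len(read)]
    diffs = [(i, b) for i, (b, r) in enumerate(zip(read, segment)) if b != r]
    return {"segment": segment, "diffs": diffs}

def decode_read(record):
    """Rebuild the read from the stored segment and its diffs."""
    seq = list(record["segment"])
    for i, b in record["diffs"]:
        seq[i] = b
    return "".join(seq)

ref = "ACGTACGTACGT"
read = "ACGAACGT"          # one mismatch against the reference
rec = encode_read(read, ref, 0)
print(rec["diffs"])        # -> [(3, 'A')]
print(decode_read(rec))    # -> "ACGAACGT"
```

In practice the covered segments overlap heavily across reads, which is why only a small fraction of the reference needs to be stored.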

  17. Programmable Pulse-Position-Modulation Encoder

    NASA Technical Reports Server (NTRS)

    Zhu, David; Farr, William

    2006-01-01

    A programmable pulse-position-modulation (PPM) encoder has been designed for use in testing an optical communication link. The encoder includes a programmable state machine and an electronic code book that can be updated to accommodate different PPM coding schemes. The encoder includes a field-programmable gate array (FPGA) that is programmed to step through the stored state machine and code book and that drives a custom high-speed serializer circuit board that is capable of generating subnanosecond pulses. The stored state machine and code book can be updated by means of a simple text interface through the serial port of a personal computer.
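
    The PPM mapping itself is simple to sketch; the slot layout and function names are illustrative, not the encoder's stored code book:

```python
def ppm_encode(symbol, bits_per_symbol):
    """Pulse-position modulation: a k-bit symbol becomes a single pulse
    placed in one of 2**k time slots."""
    slots = [0] * (2 ** bits_per_symbol)
    slots[symbol] = 1
    return slots

def ppm_decode(slots):
    """Recover the symbol from the pulse position."""
    return slots.index(1)

print(ppm_encode(5, 3))               # -> [0, 0, 0, 0, 0, 1, 0, 0]
print(ppm_decode(ppm_encode(5, 3)))   # -> 5
```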

  18. Efficient depth intraprediction method for H.264/AVC-based three-dimensional video coding

    NASA Astrophysics Data System (ADS)

    Oh, Kwan-Jung; Oh, Byung Tae

    2015-04-01

    We present an intracoding method that is applicable to depth map coding in multiview-plus-depth systems. Our approach combines skip prediction and plane-segmentation-based prediction. The proposed depth intraskip prediction uses the estimated direction at both the encoder and decoder, and does not need to encode residual data. Our plane-segmentation-based intraprediction divides the current block into two regions and applies a different prediction scheme to each segmented region. This method avoids incorrect estimations across different regions, resulting in higher prediction accuracy. Simulation results demonstrate that the proposed scheme is superior to H.264/advanced video coding intraprediction and improves subjective rendering quality.

  19. Experimental circular quantum secret sharing over telecom fiber network.

    PubMed

    Wei, Ke-Jin; Ma, Hai-Qiang; Yang, Jian-Hui

    2013-07-15

    We present a robust single-photon circular quantum secret sharing (QSS) scheme with phase encoding over a 50 km single-mode fiber network using a circular QSS protocol. Our scheme automatically provides perfect compensation of birefringence and remains stable for a long time. A high visibility of 99.3% is obtained. Furthermore, our scheme realizes polarization-insensitive phase modulators. The visibility of the system can be maintained over long periods without any adjustment each time we test it.

  20. A novel design of optical CDMA system based on TCM and FFH

    NASA Astrophysics Data System (ADS)

    Fang, Jun-Bin; Xu, Zhi-Hai; Huang, Hong-bin; Zheng, Liming; Chen, Shun-er; Liu, Wei-ping

    2005-02-01

    For application in Passive Optical Networks (PONs), a novel design of an OCDMA system scheme is proposed in this paper. There are two key components in this scheme: a new kind of OCDMA encoder/decoder system based on TCM and FFH, and an improved Optical Line Terminal (OLT) receiving system with better anti-interference performance through the use of a Long Period Fiber Grating (LPFG). In the encoder/decoder system, a Trellis Coded Modulation (TCM) encoder is applied in front of the FFH modulator. The original signal is first encoded by the TCM encoder, and the redundant code from the TCM encoder is then mapped into one of the FFH modulation signal subsets for transmission. On the receiver (decoder) side, the transmitted signal is demodulated through FFH and decoded by a trellis decoder. Because TCM achieves a high coding gain without increasing the transmission bandwidth or reducing the transmission speed, it is utilized to improve bit-error performance and reduce multi-user interference. In the OLT receiving system, an EDFA and the LPFG are placed in front of the decoder to obtain excellent gain flatness over a large bandwidth, and an Optical Hard Limiter (OHL) is also deployed to improve detection performance, through which the anti-interference performance of the receiving system can be greatly enhanced. Simulation software is used to evaluate the system performance for further analysis and validation. The work in this paper provides a valuable reference for further research.

  1. Optimal attacks on qubit-based Quantum Key Recycling

    NASA Astrophysics Data System (ADS)

    Leermakers, Daan; Škorić, Boris

    2018-03-01

    Quantum Key Recycling (QKR) is a quantum cryptographic primitive that allows one to reuse keys in an unconditionally secure way. By removing the need to repeatedly generate new keys, it improves communication efficiency. Škorić and de Vries recently proposed a QKR scheme based on 8-state encoding (four bases). It does not require quantum computers for encryption/decryption but only single-qubit operations. We provide a missing ingredient in the security analysis of this scheme in the case of noisy channels: accurate upper bounds on the required amount of privacy amplification. We determine optimal attacks against the message and against the key, for 8-state encoding as well as 4-state and 6-state conjugate coding. We provide results in terms of min-entropy loss as well as accessible (Shannon) information. We show that the Shannon entropy analysis for 8-state encoding reduces to the analysis of quantum key distribution, whereas 4-state and 6-state suffer from additional leaks that make them less effective. From the optimal attacks we compute the required amount of privacy amplification and hence the achievable communication rate (useful information per qubit) of qubit-based QKR. Overall, 8-state encoding yields the highest communication rates.

  2. A comprehensive three-dimensional cortical map of vowel space.

    PubMed

    Scharinger, Mathias; Idsardi, William J; Poe, Samantha

    2011-12-01

    Mammalian cortex is known to contain various kinds of spatial encoding schemes for sensory information including retinotopic, somatosensory, and tonotopic maps. Tonotopic maps are especially interesting for human speech sound processing because they encode linguistically salient acoustic properties. In this study, we mapped the entire vowel space of a language (Turkish) onto cortical locations by using the magnetic N1 (M100), an auditory-evoked component that peaks approximately 100 msec after auditory stimulus onset. We found that dipole locations could be structured into two distinct maps, one for vowels produced with the tongue positioned toward the front of the mouth (front vowels) and one for vowels produced in the back of the mouth (back vowels). Furthermore, we found spatial gradients in lateral-medial, anterior-posterior, and inferior-superior dimensions that encoded the phonetic, categorical distinctions between all the vowels of Turkish. Statistical model comparisons of the dipole locations suggest that the spatial encoding scheme is not entirely based on acoustic bottom-up information but crucially involves featural-phonetic top-down modulation. Thus, multiple areas of excitation along the unidimensional basilar membrane are mapped into higher dimensional representations in auditory cortex.

  3. A new encoding scheme for visible light communications with applications to mobile connections

    NASA Astrophysics Data System (ADS)

    Benton, David M.; St. John Brittan, Paul

    2017-10-01

    A new and unconventional encoding scheme, called concurrent coding, has recently been demonstrated and shown to offer interesting features and benefits in comparison to conventional techniques, such as robustness against burst errors and improved efficiency of transmitted power. Free-space optical communications can suffer particularly from issues of alignment, which require stable, fixed links to be established, and from beam wander, which can interrupt communications. Concurrent coding has the potential to ease these difficulties and enable mobile, flexible optical communications to be implemented through the use of a source encoding technique. This concept has been applied for the first time to optical communications, where standard light-emitting diodes (LEDs) have been used to transmit information encoded with concurrent coding. The technique successfully transmits and decodes data despite unpredictable interruptions to the transmission causing significant drop-outs in the detected signal. The technique also shows how it is possible to send a single block of data in isolation with no pre-synchronisation required between transmitter and receiver, and no specific synchronisation sequence appended to the transmission. Such systems are robust against interference, intentional or otherwise, as well as intermittent beam blockage.

  4. Quantum annealing correction with minor embedding

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Albash, Tameem; Paz-Silva, Gerardo; Hen, Itay; Lidar, Daniel A.

    2015-10-01

    Quantum annealing provides a promising route for the development of quantum optimization devices, but the usefulness of such devices will be limited in part by the range of implementable problems as dictated by hardware constraints. To overcome constraints imposed by restricted connectivity between qubits, a larger set of interactions can be approximated using minor embedding techniques whereby several physical qubits are used to represent a single logical qubit. However, minor embedding introduces new types of errors due to its approximate nature. We introduce and study quantum annealing correction schemes designed to improve the performance of quantum annealers in conjunction with minor embedding, thus leading to a hybrid scheme defined over an encoded graph. We argue that this scheme can be efficiently decoded using an energy minimization technique provided the density of errors does not exceed the per-site percolation threshold of the encoded graph. We test the hybrid scheme using a D-Wave Two processor on problems for which the encoded graph is a two-level grid and the Ising model is known to be NP-hard. The problems we consider are frustrated Ising model problem instances with "planted" (a priori known) solutions. Applied in conjunction with optimized energy penalties and decoding techniques, we find that this approach enables the quantum annealer to solve minor embedded instances with significantly higher success probability than it would without error correction. Our work demonstrates that quantum annealing correction can and should be used to improve the robustness of quantum annealing not only for natively embeddable problems but also when minor embedding is used to extend the connectivity of physical devices.
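
    Decoding a minor-embedded logical qubit can be approximated by a majority vote over its chain of physical qubits; the paper's actual decoder is an energy-minimization procedure, so this is only a simplified proxy:

```python
def decode_logical(chain):
    """Decode one logical qubit from its physical chain by majority vote.
    Spins are +1/-1; a 'broken' chain has disagreeing physical spins.
    Odd-length chains avoid ties."""
    return 1 if sum(chain) > 0 else -1

print(decode_logical([+1, +1, -1]))   # one broken physical spin -> +1
print(decode_logical([-1, -1, -1]))   # unanimous chain          -> -1
```

Majority vote succeeds as long as fewer than half of the physical spins in a chain are flipped, which echoes the error-density condition stated in the abstract.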

  5. Improved opponent color local binary patterns: an effective local image descriptor for color texture classification

    NASA Astrophysics Data System (ADS)

    Bianconi, Francesco; Bello-Cerezo, Raquel; Napoletano, Paolo

    2018-01-01

    Texture classification plays a major role in many computer vision applications. Local binary patterns (LBP) encoding schemes have largely been proven to be very effective for this task. Improved LBP (ILBP) are conceptually simple, easy to implement, and highly effective LBP variants based on a point-to-average thresholding scheme instead of a point-to-point one. We propose the use of this encoding scheme for extracting intra- and interchannel features for color texture classification. We experimentally evaluated the resulting improved opponent color LBP alone and in concatenation with the ILBP of the local color contrast map on a set of image classification tasks over 9 datasets of generic color textures and 11 datasets of biomedical textures. The proposed approach outperformed other grayscale and color LBP variants in nearly all the datasets considered and proved competitive even against image features from last generation convolutional neural networks, particularly for the classification of biomedical images.
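
    The point-to-average thresholding that distinguishes ILBP from standard LBP can be sketched on a single 3×3 patch; the bit ordering and the inclusion of all nine pixels are assumptions of this sketch:

```python
import numpy as np

def ilbp_code(patch):
    """Point-to-average LBP variant (sketch): threshold all nine pixels
    of a 3x3 patch against the patch mean, instead of comparing the
    eight neighbors to the center pixel. Yields a 9-bit code in [0, 512)."""
    bits = (patch.ravel() >= patch.mean()).astype(int)
    return int(sum(b << i for i, b in enumerate(bits)))

flat = np.full((3, 3), 7.0)
print(ilbp_code(flat))   # every pixel equals the mean -> all ones -> 511
```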

  6. A novel chaotic image encryption scheme using DNA sequence operations

    NASA Astrophysics Data System (ADS)

    Wang, Xing-Yuan; Zhang, Ying-Qian; Bao, Xue-Mei

    2015-10-01

    In this paper, we propose a novel image encryption scheme based on DNA (Deoxyribonucleic acid) sequence operations and chaotic system. Firstly, we perform bitwise exclusive OR operation on the pixels of the plain image using the pseudorandom sequences produced by the spatiotemporal chaos system, i.e., CML (coupled map lattice). Secondly, a DNA matrix is obtained by encoding the confused image using a kind of DNA encoding rule. Then we generate the new initial conditions of the CML according to this DNA matrix and the previous initial conditions, which can make the encryption result closely depend on every pixel of the plain image. Thirdly, the rows and columns of the DNA matrix are permuted. Then, the permuted DNA matrix is confused once again. At last, after decoding the confused DNA matrix using a kind of DNA decoding rule, we obtain the ciphered image. Experimental results and theoretical analysis show that the scheme is able to resist various attacks, so it has extraordinarily high security.
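
    One fixed DNA encoding rule can be sketched as a 2-bit-to-base mapping (the scheme itself selects among the legal complementary rules; this assignment is just one example):

```python
# One of the eight legal DNA encoding rules: 2 bits -> 1 base.
ENCODE = {"00": "A", "01": "C", "10": "G", "11": "T"}
DECODE = {v: k for k, v in ENCODE.items()}

def pixel_to_dna(pixel):
    """Encode an 8-bit gray value as four DNA bases."""
    bits = format(pixel, "08b")
    return "".join(ENCODE[bits[i:i + 2]] for i in range(0, 8, 2))

def dna_to_pixel(dna):
    """Invert the encoding back to the gray value."""
    return int("".join(DECODE[b] for b in dna), 2)

print(pixel_to_dna(0))       # 00000000 -> "AAAA"
print(pixel_to_dna(228))     # 11100100 -> "TGCA"
print(dna_to_pixel("TGCA"))  # -> 228
```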

  7. Achievable Strength-Based Signal Detection in Quantity-Constrained PAM OOK Concentration-Encoded Molecular Communication.

    PubMed

    Mahfuz, Mohammad Upal

    2016-10-01

    In this paper, expressions for the achievable strength-based detection probabilities of a concentration-encoded molecular communication (CEMC) system are derived based on a finite-pulsewidth (FP) pulse-amplitude-modulated (PAM) on-off keying (OOK) modulation scheme and a strength threshold. An FP-PAM system is characterized by its duty cycle α, which indicates the fraction of the entire symbol duration during which the transmitter remains on and transmits the signal. Results show that the detection performance of an FP-PAM OOK CEMC system significantly depends on the statistical distribution parameters of diffusion-based propagation noise and intersymbol interference (ISI). The analytical detection performance of an FP-PAM OOK CEMC system under an ISI scenario is explained and compared, based on receiver operating characteristics (ROC), for impulse (i.e., spike)-modulated (IM) and FP-PAM CEMC schemes. It is shown that the effects of diffusion noise and ISI on the ROC can be explained separately based on their communication-range-dependent statistics. With a full duty cycle, an FP-PAM scheme performs significantly worse than an IM scheme. The paper also analyzes the performance of the system when the duty cycle, transmission data rate, and quantity of molecules vary.

  8. Unified quantum no-go theorems and transforming of quantum pure states in a restricted set

    NASA Astrophysics Data System (ADS)

    Luo, Ming-Xing; Li, Hui-Ran; Lai, Hong; Wang, Xiaojun

    2017-12-01

    The linear superposition principle in quantum mechanics is essential for several no-go theorems such as the no-cloning theorem, the no-deleting theorem and the no-superposing theorem. In this paper, we investigate general quantum transformations forbidden or permitted by the superposition principle for various goals. First, we prove a no-encoding theorem that forbids linearly superposing an unknown pure state and a fixed pure state in a Hilbert space of finite dimension. The new theorem is further extended to multiple copies of an unknown state as input states. These generalized results of the no-encoding theorem include the no-cloning theorem, the no-deleting theorem and the no-superposing theorem as special cases. Second, we provide a unified scheme for presenting perfect and imperfect quantum tasks (cloning and deleting) in a one-shot manner. This scheme may lead to fruitful results that are completely characterized by the linear independence of the representative vectors of the input pure states. The upper bounds of the efficiency are also proved. Third, we generalize a recent scheme for superposing unknown states with a fixed overlap into new schemes in which multiple copies of an unknown state serve as input states.

  9. AFFINE-CORRECTED PARADISE: FREE-BREATHING PATIENT-ADAPTIVE CARDIAC MRI WITH SENSITIVITY ENCODING

    PubMed Central

    Sharif, Behzad; Bresler, Yoram

    2013-01-01

    We propose a real-time cardiac imaging method with parallel MRI that allows for free breathing during imaging and does not require cardiac or respiratory gating. The method is based on the recently proposed PARADISE (Patient-Adaptive Reconstruction and Acquisition Dynamic Imaging with Sensitivity Encoding) scheme. The new acquisition method adapts the PARADISE k-t space sampling pattern according to an affine model of the respiratory motion. The reconstruction scheme involves multi-channel time-sequential imaging with time-varying channels. All model parameters are adapted to the imaged patient as part of the experiment and drive both data acquisition and cine reconstruction. Simulated cardiac MRI experiments using the realistic NCAT phantom show high quality cine reconstructions and robustness to modeling inaccuracies. PMID:24390159

  10. Efficient Polar Coding of Quantum Information

    NASA Astrophysics Data System (ADS)

    Renes, Joseph M.; Dupuis, Frédéric; Renner, Renato

    2012-08-01

    Polar coding, introduced in 2008 by Arıkan, is the first (very) efficiently encodable and decodable coding scheme whose information transmission rate provably achieves the Shannon bound for classical discrete memoryless channels in the asymptotic limit of large block sizes. Here, we study the use of polar codes for the transmission of quantum information. Focusing on the case of qubit Pauli channels and qubit erasure channels, we use classical polar codes to construct a coding scheme that asymptotically achieves a net transmission rate equal to the coherent information using efficient encoding and decoding operations and code construction. Our codes generally require preshared entanglement between sender and receiver, but for channels with a sufficiently low noise level we demonstrate that the rate of preshared entanglement required is zero.
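    The recursive structure underlying such codes is Arıkan's classical polar transform x = u·F^{⊗n} over GF(2), with kernel F = [[1,0],[1,1]]. A minimal Python sketch (ours, not from the paper) implements it with the usual butterfly recursion:

    ```python
    def polar_transform(u):
        """Apply the Arikan polar transform x = u * F^{(x)n} over GF(2).

        `u` is a bit list whose length must be a power of two; the
        transform is its own inverse, since F squared is the identity mod 2.
        """
        x = list(u)
        n = len(x)
        assert n & (n - 1) == 0, "length must be a power of two"
        step = 1
        while step < n:
            for i in range(0, n, 2 * step):
                for j in range(i, i + step):
                    x[j] ^= x[j + step]  # GF(2) butterfly
            step *= 2
        return x
    ```

    In a polar code, information bits are placed on the reliable positions of u and the unreliable ("frozen") positions carry fixed bits; the quantum construction in the paper reuses this classical transform.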

  11. Implementation of ternary Shor’s algorithm based on vibrational states of an ion in anharmonic potential

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Chen, Shu-Ming; Zhang, Jian; Wu, Chun-Wang; Wu, Wei; Chen, Ping-Xing

    2015-03-01

    It is widely believed that Shor’s factoring algorithm provides a driving force to boost quantum computing research. However, a serious obstacle to its binary implementation is the large number of quantum gates. Non-binary quantum computing is an efficient way to reduce the required number of elemental gates. Here, we propose optimization schemes for the implementation of Shor’s algorithm and take a ternary version for factorizing 21 as an example. The optimized factorization is achieved by a two-qutrit quantum circuit, which consists of only two single-qutrit gates and one ternary controlled-NOT gate. This two-qutrit quantum circuit is then encoded into the nine lower vibrational states of an ion trapped in a weakly anharmonic potential. Optimal control theory (OCT) is employed to derive the manipulation electric field for transferring the encoded states. The ternary Shor’s algorithm can be implemented in one single step. Numerical simulation results show that the accuracy of the state transformations is about 0.9919. Project supported by the National Natural Science Foundation of China (Grant No. 61205108) and the High Performance Computing (HPC) Foundation of National University of Defense Technology, China.
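    The number theory the quantum circuit accelerates can be checked classically; the following Python sketch (ours, not the paper's circuit) finds the multiplicative order r of a base a modulo 21 and recovers the factors via gcd, as in Shor's classical post-processing:

    ```python
    from math import gcd

    def multiplicative_order(a, N):
        """Smallest r > 0 with a**r = 1 (mod N); requires gcd(a, N) == 1."""
        r, t = 1, a % N
        while t != 1:
            t = t * a % N
            r += 1
        return r

    def shor_factor(a, N):
        """Classical post-processing step of Shor's algorithm for one base a."""
        r = multiplicative_order(a, N)
        if r % 2:                      # odd order: try another base
            return None
        x = pow(a, r // 2, N)
        return sorted({gcd(x - 1, N), gcd(x + 1, N)})
    ```

    For a = 2 and N = 21 the order is r = 6, and gcd(2³ ± 1, 21) yields the factors 3 and 7; the quantum part of the algorithm only speeds up the order-finding step.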

  12. Nonlinear inversion of potential-field data using a hybrid-encoding genetic algorithm

    USGS Publications Warehouse

    Chen, C.; Xia, J.; Liu, J.; Feng, G.

    2006-01-01

    Using a genetic algorithm to solve an inverse problem of complex nonlinear geophysical equations is advantageous because it does not require computing gradients of models or "good" initial models. The multi-point search of a genetic algorithm makes it easier to find the globally optimal solution while avoiding falling into a local extremum. As in other optimization approaches, the search efficiency of a genetic algorithm is vital to finding desired solutions successfully in a multi-dimensional model space. A binary-encoding genetic algorithm is hardly ever used to resolve an optimization problem such as a simple geophysical inversion with only three unknowns. The encoding mechanism, genetic operators, and population size of the genetic algorithm greatly affect search processes in the evolution. It is clear that improved operators and a proper population size promote convergence. Nevertheless, not all genetic operations perform perfectly under either a uniform binary or a decimal encoding system. With the binary encoding mechanism, the crossover scheme may produce more new individuals than with the decimal encoding. On the other hand, the mutation scheme in a decimal encoding system will create new genes larger in scope than those in the binary encoding. This paper discusses approaches to exploiting the search potential of genetic operations in the two encoding systems and presents an approach with a hybrid-encoding mechanism, multi-point crossover, and dynamic population size for geophysical inversion. We present a method in which the mutation operation is conducted in the decimal code and the multi-point crossover operation in the binary code. The mixed-encoding algorithm is called the hybrid-encoding genetic algorithm (HEGA). HEGA provides better genes with a higher probability through its mutation operator and improves genetic algorithms in resolving complicated geophysical inverse problems. Another significant result is that the final solution is determined by the average model derived from multiple trials instead of one computation, owing to the randomness in a genetic algorithm procedure. These advantages were demonstrated by synthetic and real-world examples of inversion of potential-field data. © 2005 Elsevier Ltd. All rights reserved.
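    The hybrid idea — crossover performed on the binary code, mutation on the decimal code — can be sketched as follows (a simplified illustration of the principle, not the authors' implementation; the fixed-point width and operator details are our assumptions):

    ```python
    import random

    BITS = 16  # assumed fixed-point resolution per model parameter

    def to_bin(x):
        return format(x, f"0{BITS}b")

    def multipoint_crossover(pa, pb, points=2):
        """Multi-point crossover performed on the binary code."""
        a, b = to_bin(pa), to_bin(pb)
        cuts = sorted(random.sample(range(1, BITS), points)) + [BITS]
        child, src, prev = [], (a, b), 0
        for k, cut in enumerate(cuts):
            child.append(src[k % 2][prev:cut])  # alternate parent segments
            prev = cut
        return int("".join(child), 2)

    def decimal_mutation(x):
        """Mutation performed on the decimal code: redraw one decimal digit."""
        digits = list(str(x))
        i = random.randrange(len(digits))
        digits[i] = str(random.randrange(10))
        return min(int("".join(digits)), 2**BITS - 1)  # clamp to range
    ```

    A decimal-digit mutation can move a parameter by thousands of units in one step, which matches the abstract's point that decimal mutation "creates new genes larger in scope" than single-bit flips.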

  13. Ensemble Manifold Rank Preserving for Acceleration-Based Human Activity Recognition.

    PubMed

    Tao, Dapeng; Jin, Lianwen; Yuan, Yuan; Xue, Yang

    2016-06-01

    With the rapid development of mobile devices and pervasive computing technologies, acceleration-based human activity recognition, a difficult yet essential problem in mobile apps, has received intensive attention recently. Different acceleration signals representing different activities, or even the same activity, have different attributes, which causes trouble in normalizing the signals. We thus cannot directly compare these signals with each other, because they are embedded in a nonmetric space. Therefore, we present a nonmetric scheme that retains discriminative and robust frequency domain information by developing a novel ensemble manifold rank preserving (EMRP) algorithm. EMRP simultaneously considers three aspects: 1) it encodes the local geometry using the ranking order information of intraclass samples distributed on local patches; 2) it keeps the discriminative information by maximizing the margin between samples of different classes; and 3) it finds the optimal linear combination of the alignment matrices to approximate the intrinsic manifold lying in the data. Experiments are conducted on the South China University of Technology naturalistic 3-D acceleration-based activity dataset and the naturalistic mobile-devices based human activity dataset to demonstrate the robustness and effectiveness of the new nonmetric scheme for acceleration-based human activity recognition.

  14. The Emergent Universe scheme and tunneling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Labraña, Pedro

    We present an alternative scheme for an Emergent Universe scenario, developed previously in Phys. Rev. D 86, 083524 (2012), where the universe is initially in a static state supported by a scalar field located in a false vacuum. The universe begins to evolve when, by quantum tunneling, the scalar field decays into a state of true vacuum. The Emergent Universe models are interesting since they provide specific examples of non-singular inflationary universes.

  15. Performance Evaluation of UHF Fading Satellite Channel by Simulation for Different Modulation Schemes

    DTIC Science & Technology

    1992-12-01

    The views expressed in this thesis are those of the author and do not reflect the official policy or position of the Department of Defense or the US... v = cncd(2,1,6,G64,u,zeros(1,12)); % Convolutional encoding mm = bm(2,v); % Binary to M-ary conversion clear v u; mm = inter(50,200,mm); % Interleaving (50... save result err B. CNCD.M (CONVOLUTIONAL ENCODER FUNCTION) function [v,vr] = cncd(n,k,m,Gr,u,r) % CONVOLUTIONAL ENCODER % Paul H. Moose % Naval

  16. An edge preserving differential image coding scheme

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1992-01-01

    Differential encoding techniques are fast and easy to implement. However, a major problem with the use of differential encoding for images is the rapid edge degradation encountered when using such systems. This makes differential encoding techniques of limited utility, especially when coding medical or scientific images, where edge preservation is of utmost importance. A simple, easy to implement differential image coding system with excellent edge preservation properties is presented. The coding system can be used over variable rate channels, which makes it especially attractive for use in the packet network environment.
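    The predictive loop such systems build on can be sketched in a few lines (a generic first-order DPCM illustration, not the paper's edge-preserving variant):

    ```python
    def dpcm_encode(samples, q=4):
        """First-order DPCM with a uniform quantizer of step q.

        The encoder tracks the decoder's reconstruction, so quantization
        error cannot accumulate along the scan line.
        """
        pred, residues = 0, []
        for s in samples:
            e_q = int(round((s - pred) / q)) * q  # quantized prediction error
            residues.append(e_q)
            pred += e_q                           # decoder-matched update
        return residues

    def dpcm_decode(residues):
        pred, out = 0, []
        for e_q in residues:
            pred += e_q
            out.append(pred)
        return out
    ```

    Away from edges the residues are small and cheap to code; at an abrupt jump (the 12 → 40 step in the test below) the residue is large, which is exactly where plain DPCM degrades and where an edge-preserving variant must spend extra effort.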

  17. Decreased Risk of Preeclampsia After the Introduction of Universal Voucher Scheme for Antenatal Care and Birth Services in the Republic of Korea.

    PubMed

    Choe, Seung-Ah; Min, Hye Sook; Cho, Sung-Il

    2017-01-01

    Objectives A number of interventions to reduce disparities in maternal health have been introduced and implemented without concrete evidence to support them. In Korea, a universal voucher scheme for antenatal care and birth services was initiated in December 2008 to improve Korea's fertility rate. This study explores the risk of preeclampsia after the introduction of a universal voucher scheme. Methods Population-based cohort data from the National Health Insurance Service-National Sample Cohort (NHIS-NSC) covering 2002-2013 were analysed. A generalized linear mixed model (GLMM) was used to estimate the relationship between the risk of preeclampsia and voucher scheme introduction. Results The annual age-adjusted incidence of preeclampsia showed no significant unidirectional change during the study period. In the GLMM analysis, the introduction of a voucher scheme was associated with a reduced risk of preeclampsia, controlling for potential confounding factors. The interaction between household income level and voucher scheme was not significant. Conclusions for Practice This finding suggests that the introduction of a voucher scheme for mothers is related to a reduced risk of preeclampsia even under universal health coverage.

  18. Phased array ghost elimination.

    PubMed

    Kellman, Peter; McVeigh, Elliot R

    2006-05-01

    Parallel imaging may be applied to cancel ghosts caused by a variety of distortion mechanisms, including distortions such as off-resonance or local flow, which are space variant. Phased array combining coefficients may be calculated that null ghost artifacts at known locations based on a constrained optimization, which optimizes SNR subject to the nulling constraint. The resultant phased array ghost elimination (PAGE) technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation it is applied to full field-of-view (FOV) images. The phased array method for ghost elimination may result in greater flexibility in designing acquisition strategies. For example, in multi-shot EPI applications ghosts are typically mitigated by the use of an interleaved phase encode acquisition order. An alternative strategy is to use a sequential, non-interleaved phase encode order and cancel the resultant ghosts using PAGE parallel imaging. Cancellation of ghosts by means of phased array processing makes sequential, non-interleaved phase encode acquisition order practical, and permits a reduction in repetition time, TR, by eliminating the need for echo-shifting. Sequential, non-interleaved phase encode order has benefits of reduced distortion due to off-resonance, in-plane flow and EPI delay misalignment. Furthermore, the use of EPI with PAGE has inherent fat-water separation and has been used to provide off-resonance correction using a technique referred to as lipid elimination with an echo-shifting N/2-ghost acquisition (LEENA), and may be further generalized using the multi-point Dixon method. Other applications of PAGE include cancelling ghosts which arise due to amplitude or phase variation during the approach to steady state. Parallel imaging requires estimates of the complex coil sensitivities. 
In vivo estimates may be derived by temporally varying the phase encode ordering to obtain a full k-space dataset in a scheme similar to the autocalibrating TSENSE method. This scheme is a generalization of the UNFOLD method used for removing aliasing in undersampled acquisitions. The more general scheme may be used to modulate each EPI ghost image to a separate temporal frequency as described in this paper. Copyright (c) 2006 John Wiley & Sons, Ltd.
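    The constrained combining step can be sketched with NumPy (a schematic illustration under an assumed identity noise covariance, not the authors' code): given each coil's sensitivity at the true pixel and at its ghost location, solve for weights with unit response at the pixel and a null at the ghost.

    ```python
    import numpy as np

    def page_weights(C):
        """Combining weights w satisfying w^H C = [1, 0].

        C is (n_coils, 2): column 0 holds the coil sensitivities at the
        true pixel, column 1 those at the ghost location.  This is the
        minimum-norm solution, SNR-optimal for identity noise covariance.
        """
        e1 = np.array([1.0, 0.0])
        return C @ np.linalg.solve(C.conj().T @ C, e1)

    # toy example: 4 coils with random complex sensitivities
    rng = np.random.default_rng(0)
    C = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
    w = page_weights(C)
    response = w.conj() @ C  # ~[1, 0]: signal preserved, ghost nulled
    ```

    Applying w across the coil images keeps the true signal at unit gain while cancelling the ghost replica, which is the nulling constraint described above.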

  19. Phased array ghost elimination

    PubMed Central

    Kellman, Peter; McVeigh, Elliot R.

    2007-01-01

    Parallel imaging may be applied to cancel ghosts caused by a variety of distortion mechanisms, including distortions such as off-resonance or local flow, which are space variant. Phased array combining coefficients may be calculated that null ghost artifacts at known locations based on a constrained optimization, which optimizes SNR subject to the nulling constraint. The resultant phased array ghost elimination (PAGE) technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation it is applied to full field-of-view (FOV) images. The phased array method for ghost elimination may result in greater flexibility in designing acquisition strategies. For example, in multi-shot EPI applications ghosts are typically mitigated by the use of an interleaved phase encode acquisition order. An alternative strategy is to use a sequential, non-interleaved phase encode order and cancel the resultant ghosts using PAGE parallel imaging. Cancellation of ghosts by means of phased array processing makes sequential, non-interleaved phase encode acquisition order practical, and permits a reduction in repetition time, TR, by eliminating the need for echo-shifting. Sequential, non-interleaved phase encode order has benefits of reduced distortion due to off-resonance, in-plane flow and EPI delay misalignment. Furthermore, the use of EPI with PAGE has inherent fat-water separation and has been used to provide off-resonance correction using a technique referred to as lipid elimination with an echo-shifting N/2-ghost acquisition (LEENA), and may be further generalized using the multi-point Dixon method. Other applications of PAGE include cancelling ghosts which arise due to amplitude or phase variation during the approach to steady state. Parallel imaging requires estimates of the complex coil sensitivities. 
In vivo estimates may be derived by temporally varying the phase encode ordering to obtain a full k-space dataset in a scheme similar to the autocalibrating TSENSE method. This scheme is a generalization of the UNFOLD method used for removing aliasing in undersampled acquisitions. The more general scheme may be used to modulate each EPI ghost image to a separate temporal frequency as described in this paper. PMID:16705636

  20. Quantum secret sharing using orthogonal multiqudit entangled states

    NASA Astrophysics Data System (ADS)

    Bai, Chen-Ming; Li, Zhi-Hui; Liu, Cheng-Ji; Li, Yong-Ming

    2017-12-01

    In this work, we investigate the distinguishability of orthogonal multiqudit entangled states under restricted local operations and classical communication. Based on these properties, we propose a quantum secret sharing scheme to realize three types of access structures, i.e., the (n, n)-threshold, the restricted (3, n)-threshold and the restricted (4, n)-threshold schemes (called LOCC-QSS schemes). All cooperating players in the restricted threshold schemes are drawn from two disjoint groups. In the proposed protocol, the participants use computational-basis measurement and classical communication to distinguish between the orthogonal states and reconstruct the original secret. Furthermore, we analyze the security of our scheme against four primary quantum attacks and give a simple encoding method to better prevent the participant conspiracy attack.

  1. The Social Effects of the Australian Higher Education Contribution Scheme (HECS)

    ERIC Educational Resources Information Center

    Marks, Gary Neil

    2009-01-01

    Australia's Higher Education Contribution Scheme (HECS), an income contingent loan scheme in which university students pay back part of the costs of their tuition after their post-university income reaches a certain threshold, is an important policy innovation for the financing of higher education. However, its critics claim that HECS increases…

  2. Experimental Studies on a Compact Storage Scheme for Wavelet-based Multiresolution Subregion Retrieval

    NASA Technical Reports Server (NTRS)

    Poulakidas, A.; Srinivasan, A.; Egecioglu, O.; Ibarra, O.; Yang, T.

    1996-01-01

    Wavelet transforms, when combined with quantization and a suitable encoding, can be used to compress images effectively. In order to use them for image library systems, a compact storage scheme for quantized coefficient wavelet data must be developed with a support for fast subregion retrieval. We have designed such a scheme and in this paper we provide experimental studies to demonstrate that it achieves good image compression ratios, while providing a natural indexing mechanism that facilitates fast retrieval of portions of the image at various resolutions.

  3. Implementation issues in source coding

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Yun-Chung; Hadenfeldt, A. C.

    1989-01-01

    An edge preserving image coding scheme which can be operated in both a lossy and a lossless manner was developed. The technique is an extension of the lossless encoding algorithm developed for the Mars observer spectral data. It can also be viewed as a modification of the DPCM algorithm. A packet video simulator was also developed from an existing modified packet network simulator. The coding scheme for this system is a modification of the mixture block coding (MBC) scheme described in the last report. Coding algorithms for packet video were also investigated.

  4. Electromechanical quantum simulators

    NASA Astrophysics Data System (ADS)

    Tacchino, F.; Chiesa, A.; LaHaye, M. D.; Carretta, S.; Gerace, D.

    2018-06-01

    Digital quantum simulators are among the most appealing applications of a quantum computer. Here we propose a universal, scalable, and integrated quantum computing platform based on tunable nonlinear electromechanical nano-oscillators. It is shown that very high operational fidelities for single- and two-qubit gates can be achieved in a minimal architecture, where qubits are encoded in the anharmonic vibrational modes of mechanical nanoresonators, whose effective coupling is mediated by virtual fluctuations of an intermediate superconducting artificial atom. An effective scheme to induce large single-phonon nonlinearities in nanoelectromechanical devices is explicitly discussed, thus opening the route to experimental investigation in this direction. Finally, we explicitly show the very high fidelities that can be reached for the digital quantum simulation of model Hamiltonians, by using realistic experimental parameters in state-of-the-art devices, and considering the transverse field Ising model as a paradigmatic example.

  5. Universal Quantum Computing with Arbitrary Continuous-Variable Encoding.

    PubMed

    Lau, Hoi-Kwan; Plenio, Martin B

    2016-09-02

    Implementing a qubit quantum computer in continuous-variable systems conventionally requires the engineering of specific interactions according to the encoding basis states. In this work, we present a unified formalism to conduct universal quantum computation with a fixed set of operations but arbitrary encoding. By storing a qubit in the parity of two or four qumodes, all computing processes can be implemented by basis state preparations, continuous-variable exponential-swap operations, and swap tests. Our formalism inherits the advantages that the quantum information is decoupled from collective noise, and logical qubits with different encodings can be brought to interact without decoding. We also propose a possible implementation of the required operations by using interactions that are available in a variety of continuous-variable systems. Our work separates the "hardware" problem of engineering quantum-computing-universal interactions, from the "software" problem of designing encodings for specific purposes. The development of quantum computer architecture could hence be simplified.

  6. Universal Quantum Computing with Arbitrary Continuous-Variable Encoding

    NASA Astrophysics Data System (ADS)

    Lau, Hoi-Kwan; Plenio, Martin B.

    2016-09-01

    Implementing a qubit quantum computer in continuous-variable systems conventionally requires the engineering of specific interactions according to the encoding basis states. In this work, we present a unified formalism to conduct universal quantum computation with a fixed set of operations but arbitrary encoding. By storing a qubit in the parity of two or four qumodes, all computing processes can be implemented by basis state preparations, continuous-variable exponential-swap operations, and swap tests. Our formalism inherits the advantages that the quantum information is decoupled from collective noise, and logical qubits with different encodings can be brought to interact without decoding. We also propose a possible implementation of the required operations by using interactions that are available in a variety of continuous-variable systems. Our work separates the "hardware" problem of engineering quantum-computing-universal interactions, from the "software" problem of designing encodings for specific purposes. The development of quantum computer architecture could hence be simplified.

  7. Improving both imaging speed and spatial resolution in MR-guided neurosurgery

    NASA Astrophysics Data System (ADS)

    Liu, Haiying; Hall, Walter A.; Truwit, Charles L.

    2002-05-01

    A robust near real-time MRI-based surgical guidance scheme has been developed and used in neurosurgical procedures performed in our combined 1.5 Tesla MR operating room. Because of the increased susceptibility difference in the area of the surgical site during surgery, the preferred real-time imaging technique is a single-shot imaging sequence based on the concept of half acquisition with turbo spin echoes (HASTE). In order to maintain sufficient spatial resolution for visualizing surgical devices, such as a biopsy needle and catheter, we used a focused field of view (FOV) in the phase-encoding (PE) direction coupled with an outer-volume signal suppression (OVS) technique. The key concept of the method is to minimize the total number of required phase-encoding steps and the effective echo time (TE), as well as the longest TE for the high spatial-encoding steps. The concept was first demonstrated with a phantom experiment, which showed that, when the water was doped with Gd-DTPA to match the relaxation rates of brain tissue, there was significant spatial blurring, primarily along the phase-encoding direction, with the conventional HASTE technique; the new scheme indeed minimized the spatial blur in the resulting image and improved needle visualization as anticipated. Using the new scheme in a typical MR-guided neurobiopsy procedure, the brain biopsy needle was easily seen against the tissue background with minimal blurring due to the inevitable T2 signal decay, even when the PE direction was set parallel to the needle axis. This MR-based guidance technique has allowed neurosurgeons to visualize the biopsy needle and to monitor its insertion with better certainty at a near real-time pace.

  8. Compression of digital images over local area networks. Appendix 1: Item 3. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Gorjala, Bhargavi

    1991-01-01

    Differential Pulse Code Modulation (DPCM) has been used with speech for many years. It has not been as successful for images because of poor edge performance. The only corruption in DPCM is quantizer error, but this corruption becomes quite large in the region of an edge because of the abrupt changes in the statistics of the signal. We introduce two improved DPCM schemes: Edge Correcting DPCM and Edge Preserving Differential Coding. These two coding schemes detect edges and take action to correct them. In the Edge Correcting scheme, the quantizer error for an edge is encoded using a recursive quantizer with entropy coding and sent to the receiver as side information. In the Edge Preserving scheme, when the quantizer input falls in the overload region, the quantizer error is encoded and sent to the receiver repeatedly until the quantizer input falls in the inner levels. These coding schemes therefore increase the bit rate in the region of an edge and require variable-rate channels. We implement the two variable-rate coding schemes on a token ring network. The timed token protocol supports two classes of messages: asynchronous and synchronous. The synchronous class provides a pre-allocated bandwidth and guaranteed response time; the remaining bandwidth is dynamically allocated to the asynchronous class. Edge Correcting DPCM is simulated by sending the edge information under the asynchronous class. For the simulation of the Edge Preserving scheme, the amount of information sent each time is fixed, but the length of the packet, or the bit rate for that packet, is chosen depending on the available capacity. The performance of the network and of the image coding algorithms is studied.

  9. Improving transmission efficiency of large sequence alignment/map (SAM) files.

    PubMed

    Sakib, Muhammad Nazmus; Tang, Jijun; Zheng, W Jim; Huang, Chin-Tser

    2011-01-01

    Research in bioinformatics primarily involves collection and analysis of a large volume of genomic data. Naturally, it demands efficient storage and transfer of this huge amount of data. In recent years, some research has been done to find efficient compression algorithms to reduce the size of various sequencing data. One way to improve the transmission time of large files is to apply maximum lossless compression to them. In this paper, we present SAMZIP, a specialized encoding scheme for sequence alignment data in SAM (Sequence Alignment/Map) format, which improves the compression ratio of existing compression tools. To achieve this, we exploit prior knowledge of the file format and its specifications. Our experimental results show that our encoding scheme improves the compression ratio, thereby reducing overall transmission time significantly.
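    The format-aware idea can be illustrated by compressing each tab-separated SAM column as its own stream, so that similar values (flags, positions, CIGAR strings) sit next to each other and compress better (a toy illustration of the principle, not SAMZIP itself; the sample records below are made up):

    ```python
    import zlib

    records = [
        "r001\t99\tchr1\t7\t30\t8M\t=\t37\t39\tTTAGATAA\t*",
        "r002\t99\tchr1\t9\t30\t8M\t=\t39\t41\tAAAAGATA\t*",
        "r003\t99\tchr1\t9\t30\t5M\t=\t41\t43\tGCCTAAGC\t*",
    ]

    def compress_columnar(lines):
        """Compress each SAM column as a separate zlib stream (lossless)."""
        cols = list(zip(*(l.split("\t") for l in lines)))
        return [zlib.compress("\n".join(c).encode()) for c in cols]

    def decompress_columnar(blobs):
        """Invert compress_columnar, restoring the original records."""
        cols = [zlib.decompress(b).decode().split("\n") for b in blobs]
        return ["\t".join(row) for row in zip(*cols)]
    ```

    On real alignment files, columns such as the reference name or mapping quality are highly repetitive, so per-column streams expose redundancy a generic whole-file compressor would dilute.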

  10. Continuous-variable quantum network coding for coherent states

    NASA Astrophysics Data System (ADS)

    Shang, Tao; Li, Ke; Liu, Jian-wei

    2017-04-01

    As far as the spectral characteristic of quantum information is concerned, the existing quantum network coding schemes can be regarded as discrete-variable quantum network coding schemes. Considering the practical advantages of continuous variables, in this paper we explore two feasible continuous-variable quantum network coding (CVQNC) schemes. Basic operations and CVQNC schemes are both provided. The first scheme is based on Gaussian cloning and ADD/SUB operators and can transmit two coherent states across the network with a fidelity of 1/2, while the second scheme utilizes continuous-variable quantum teleportation and can transmit two coherent states perfectly. By encoding classical information on quantum states, quantum network coding schemes can be utilized to transmit classical information. Scheme analysis shows that, compared with the discrete-variable paradigms, the proposed CVQNC schemes provide better network throughput from the viewpoint of classical information transmission. By modulating the amplitude and phase quadratures of coherent states with classical characters, the first and second schemes can transmit 4 log₂ N and 2 log₂ N bits of information in a single network use, respectively.

  11. Matroids and quantum-secret-sharing schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarvepalli, Pradeep; Raussendorf, Robert

    A secret-sharing scheme is a cryptographic protocol to distribute a secret state in an encoded form among a group of players such that only authorized subsets of the players can reconstruct the secret. Classically, efficient secret-sharing schemes have been shown to be induced by matroids. Furthermore, access structures of such schemes can be characterized by an excluded minor relation. No such relations are known for quantum secret-sharing schemes. In this paper we take the first steps toward a matroidal characterization of quantum-secret-sharing schemes. In addition to providing a new perspective on quantum-secret-sharing schemes, this characterization has important benefits. While previous work has shown how to construct quantum-secret-sharing schemes for general access structures, these schemes are not claimed to be efficient. In this context the present results prove to be useful; they enable us to construct efficient quantum-secret-sharing schemes for many general access structures. More precisely, we show that an identically self-dual matroid that is representable over a finite field induces a pure-state quantum-secret-sharing scheme with information rate 1.

  12. New User Support in the University Network with DACS Scheme

    ERIC Educational Resources Information Center

    Odagiri, Kazuya; Yaegashi, Rihito; Tadauchi, Masaharu; Ishii, Naohiro

    2007-01-01

    Purpose: The purpose of this paper is to propose and examine the new user support in university network. Design/methodology/approach: The new user support is realized by use of DACS (Destination Addressing Control System) Scheme which manages a whole network system through communication control on a client computer. This DACS Scheme has been…

  13. A High-Performance Lossless Compression Scheme for EEG Signals Using Wavelet Transform and Neural Network Predictors

    PubMed Central

    Sriraam, N.

    2012-01-01

    Developments of new classes of efficient compression algorithms, software systems, and hardware for data intensive applications in today's digital health care systems provide timely and meaningful solutions in response to exponentially growing patient information data complexity and associated analysis requirements. Of the different 1D medical signals, electroencephalography (EEG) data is of great importance to the neurologist for detecting brain-related disorders. The volume of digitized EEG data generated and preserved for future reference exceeds the capacity of recent developments in digital storage and communication media and hence there is a need for an efficient compression system. This paper presents a new and efficient high-performance lossless EEG compression scheme using wavelet transform and neural network predictors. The coefficients generated from the EEG signal by an integer wavelet transform are used to train the neural network predictors. The error residues are further encoded using a combinational entropy encoder, the Lempel-Ziv-arithmetic encoder. A new context-based error modeling is also investigated to improve the compression efficiency. A compression ratio of 2.99 (with a compression efficiency of 67%) is achieved with the proposed scheme with less encoding time, thereby providing diagnostic reliability for lossless transmission as well as recovery of EEG signals for telemedicine applications. PMID:22489238

  14. A high-performance lossless compression scheme for EEG signals using wavelet transform and neural network predictors.

    PubMed

    Sriraam, N

    2012-01-01

    Developments of new classes of efficient compression algorithms, software systems, and hardware for data intensive applications in today's digital health care systems provide timely and meaningful solutions in response to exponentially growing patient information data complexity and associated analysis requirements. Of the different 1D medical signals, electroencephalography (EEG) data is of great importance to the neurologist for detecting brain-related disorders. The volume of digitized EEG data generated and preserved for future reference exceeds the capacity of recent developments in digital storage and communication media and hence there is a need for an efficient compression system. This paper presents a new and efficient high-performance lossless EEG compression scheme using wavelet transform and neural network predictors. The coefficients generated from the EEG signal by an integer wavelet transform are used to train the neural network predictors. The error residues are further encoded using a combinational entropy encoder, the Lempel-Ziv-arithmetic encoder. A new context-based error modeling is also investigated to improve the compression efficiency. A compression ratio of 2.99 (with a compression efficiency of 67%) is achieved with the proposed scheme with less encoding time, thereby providing diagnostic reliability for lossless transmission as well as recovery of EEG signals for telemedicine applications.
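    The reversible front end of such a pipeline can be sketched with one level of the integer Haar (S) transform, a lifting step that is exactly invertible in integer arithmetic; the neural network predictors and entropy coder are beyond this illustration (the sketch is ours, not the paper's):

    ```python
    def int_haar(x):
        """One level of the integer Haar (S) transform; exactly invertible."""
        s = [(a + b) // 2 for a, b in zip(x[::2], x[1::2])]  # approximations
        d = [a - b for a, b in zip(x[::2], x[1::2])]          # details
        return s, d

    def inv_int_haar(s, d):
        """Reconstruct the even/odd samples from approximations and details."""
        out = []
        for si, di in zip(s, d):
            a = si + (di + 1) // 2
            out += [a, a - di]
        return out
    ```

    For slowly varying EEG-like samples the detail coefficients are small integers, which is what makes the subsequent predictive entropy coding effective while keeping the whole chain lossless.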

  15. Interactive searching of facial image databases

    NASA Astrophysics Data System (ADS)

    Nicholls, Robert A.; Shepherd, John W.; Shepherd, Jean

    1995-09-01

    A set of psychological facial descriptors has been devised to enable computerized searching of criminal photograph albums. The descriptors have been used to encode image databases of up to twelve thousand images. Using a system called FACES, the databases are searched by translating a witness' verbal description into corresponding facial descriptors. Trials of FACES have shown that this coding scheme is more productive and efficient than searching traditional photograph albums. An alternative method of searching the encoded database using a genetic algorithm is currently being tested. The genetic search method does not require the witness to verbalize a description of the target but merely to indicate a degree of similarity between the target and a limited selection of images from the database. The major drawback of FACES is that it requires manual encoding of images. Research is being undertaken to automate the process; however, it will require an algorithm which can predict human descriptive values. Alternatives to human-derived coding schemes exist using statistical classifications of images. Since databases encoded using statistical classifiers do not have an obvious direct mapping to human-derived descriptors, a search method which does not require the entry of human descriptors is needed. A genetic search algorithm is being tested for this purpose.
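
    The genetic search described above can be sketched as a loop in which the witness's similarity judgments serve as the fitness function, so no verbal description is needed. A hedged Python illustration (function names, the crossover rule, and the mutation parameters are invented for the example, not taken from FACES):

```python
import random

def next_generation(population, similarity, mutation_sd=0.05):
    """Produce the next set of candidate face encodings: keep the encodings
    the witness rated most similar to the target, recombine them by uniform
    crossover, and apply small Gaussian mutations to the descriptor values."""
    ranked = sorted(population, key=similarity, reverse=True)
    parents = ranked[: max(2, len(ranked) // 2)]
    children = []
    while len(children) < len(population):
        a, b = random.sample(parents, 2)
        child = [x if random.random() < 0.5 else y for x, y in zip(a, b)]
        children.append([v + random.gauss(0.0, mutation_sd) for v in child])
    return children
```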

  16. JPEG 2000 Encoding with Perceptual Distortion Control

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Liu, Zhen; Karam, Lina J.

    2008-01-01

    An alternative approach has been devised for encoding image data in compliance with JPEG 2000, the most recent still-image data-compression standard of the Joint Photographic Experts Group. Heretofore, JPEG 2000 encoding has been implemented by several related schemes classified as rate-based distortion-minimization encoding. In each of these schemes, the end user specifies a desired bit rate and the encoding algorithm strives to attain that rate while minimizing a mean squared error (MSE). While rate-based distortion minimization is appropriate for transmitting data over a limited-bandwidth channel, it is not the best approach for applications in which the perceptual quality of reconstructed images is a major consideration. A better approach for such applications is the present alternative one, denoted perceptual distortion control, in which the encoding algorithm strives to compress data to the lowest bit rate that yields at least a specified level of perceptual image quality. Some additional background information on JPEG 2000 is prerequisite to a meaningful summary of JPEG encoding with perceptual distortion control. The JPEG 2000 encoding process includes two subprocesses known as tier-1 and tier-2 coding. In order to minimize the MSE for the desired bit rate, a rate-distortion-optimization subprocess is introduced between the tier-1 and tier-2 subprocesses. In tier-1 coding, each coding block is independently bit-plane coded from the most-significant-bit (MSB) plane to the least-significant-bit (LSB) plane, using three coding passes (except for the MSB plane, which is coded using only one "clean up" coding pass). For M bit planes, this subprocess involves a total number of (3M - 2) coding passes. An embedded bit stream is then generated for each coding block. Information on the reduction in distortion and the increase in the bit rate associated with each coding pass is collected. This information is then used in a rate-control procedure to determine the contribution of each coding block to the output compressed bit stream.
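
    The pass count follows directly from the description above: one clean-up pass for the MSB plane plus three passes (significance propagation, magnitude refinement, clean-up) for each of the remaining M - 1 planes. A small sketch:

```python
def tier1_coding_passes(num_bit_planes):
    """Number of tier-1 coding passes for a code block with M bit planes:
    1 clean-up pass for the MSB plane, then 3 passes per remaining plane."""
    M = num_bit_planes
    return 1 + 3 * (M - 1)  # equals 3M - 2
```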

  17. Fully refocused multi-shot spatiotemporally encoded MRI: robust imaging in the presence of metallic implants.

    PubMed

    Ben-Eliezer, Noam; Solomon, Eddy; Harel, Elad; Nevo, Nava; Frydman, Lucio

    2012-12-01

    An approach has recently been introduced for acquiring arbitrary 2D NMR spectra or images in a single scan, based on the use of frequency-swept RF pulses for the sequential excitation and acquisition of the spins' response. This spatiotemporal-encoding (SPEN) approach enables a unique, voxel-by-voxel refocusing of all frequency shifts in the sample, for all instants throughout the data acquisition. The present study investigates the use of this full-refocusing aspect of SPEN-based imaging in the multi-shot MRI of objects subject to sizable field inhomogeneities that complicate conventional imaging approaches. 2D MRI experiments were performed at 7 T on phantoms and on mice in vivo, focusing on imaging in proximity to metallic objects. Fully refocused SPEN-based spin echo imaging sequences were implemented, using both Cartesian and back-projection trajectories, and compared with k-space encoded spin echo imaging schemes collected on identical samples under equal bandwidths and acquisition timing conditions. In all cases assayed, the fully refocused spatiotemporally encoded experiments evidenced a ca. 50% reduction in signal dephasing in the proximity of the metal, as compared to analogous results stemming from the k-space encoded spin echo counterparts. The results of this study suggest that SPEN-based acquisition schemes carry the potential to overcome strong field inhomogeneities of the kind that currently preclude high-field, high-resolution tissue characterizations in the neighborhood of metallic implants.

  18. SpikeTemp: An Enhanced Rank-Order-Based Learning Approach for Spiking Neural Networks With Adaptive Structure.

    PubMed

    Wang, Jinling; Belatreche, Ammar; Maguire, Liam P; McGinnity, Thomas Martin

    2017-01-01

    This paper presents an enhanced rank-order-based learning algorithm, called SpikeTemp, for spiking neural networks (SNNs) with a dynamically adaptive structure. The trained feed-forward SNN consists of two layers of spiking neurons: 1) an encoding layer which temporally encodes real-valued features into spatio-temporal spike patterns and 2) an output layer of dynamically grown neurons which perform spatio-temporal classification. Both Gaussian receptive fields and square cosine population encoding schemes are employed to encode real-valued features into spatio-temporal spike patterns. Unlike the rank-order-based learning approach, SpikeTemp uses the precise times of the incoming spikes for adjusting the synaptic weights such that early spikes result in a large weight change and late spikes lead to a smaller weight change. This removes the need to rank all the incoming spikes and, thus, reduces the computational cost of SpikeTemp. The proposed SpikeTemp algorithm is demonstrated on several benchmark data sets and on an image recognition task. The results show that SpikeTemp can achieve better classification performance and is much faster than the existing rank-order-based learning approach. In addition, the number of output neurons is much smaller when the square cosine encoding scheme is employed. Furthermore, SpikeTemp is benchmarked against a selection of existing machine learning algorithms, and the results demonstrate the ability of SpikeTemp to classify different data sets after just one presentation of the training samples with comparable classification performance.
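
    The Gaussian-receptive-field population encoding mentioned above can be sketched as follows: each neuron's tuning-curve response to the real-valued input is mapped to a spike time, with strongly responding neurons firing early. The parameter choices here (number of neurons, width factor, time window) are illustrative assumptions, not SpikeTemp's exact values:

```python
import math

def gaussian_rf_spike_times(x, n_neurons=8, x_min=0.0, x_max=1.0, t_max=10.0):
    """Encode a scalar into one spike time per neuron: Gaussian tuning
    curves with evenly spaced centers; a strong response fires early,
    a weak response fires late (near t_max)."""
    spacing = (x_max - x_min) / (n_neurons - 1)
    sigma = spacing / 1.5  # a common width heuristic; an assumption here
    times = []
    for i in range(n_neurons):
        center = x_min + i * spacing
        response = math.exp(-((x - center) ** 2) / (2 * sigma ** 2))
        times.append((1.0 - response) * t_max)  # response in (0,1] -> time
    return times
```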

  19. Hardware-efficient fermionic simulation with a cavity-QED system

    NASA Astrophysics Data System (ADS)

    Zhu, Guanyu; Subaşı, Yiǧit; Whitfield, James D.; Hafezi, Mohammad

    2018-03-01

    In digital quantum simulation of fermionic models with qubits, non-local encoding maps are often encountered. Such maps require linear or logarithmic overhead in circuit depth, which could render the simulation useless for a given decoherence time. Here we show how one can use a cavity-QED system to perform digital quantum simulation of fermionic models. In particular, we show that highly nonlocal Jordan-Wigner or Bravyi-Kitaev transformations can be efficiently implemented through a hardware approach. The key idea is to use ancilla cavity modes, which are dispersively coupled to a qubit string, to collectively manipulate and measure qubit states. Our scheme reduces the circuit depth in each Trotter step of the Jordan-Wigner encoding by a factor of N², compared to a scheme for a device with only local connectivity, where N is the number of orbitals for a generic two-body Hamiltonian. Additional analysis for the Fermi-Hubbard model on an N × N square lattice results in a similar reduction. We also discuss a detailed implementation of our scheme with superconducting qubits and cavities.
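
    The non-locality at issue is the parity string of the Jordan-Wigner transformation: a creation operator on orbital j acts on every qubit below j. A schematic of that support (single-qubit labels only, not a full operator algebra; 'S+' stands for the raising operator (X - iY)/2):

```python
def jordan_wigner_support(j, n):
    """Qubit-by-qubit support of the creation operator a_j^dagger on n
    qubits under Jordan-Wigner: a Z parity string on qubits 0..j-1, the
    raising operator on qubit j, and identity on the rest."""
    return ['Z'] * j + ['S+'] + ['I'] * (n - j - 1)

# jordan_wigner_support(3, 6) -> ['Z', 'Z', 'Z', 'S+', 'I', 'I']
```

    It is this O(N)-long string of Z operators that the dispersively coupled cavity mode manipulates collectively, removing the per-qubit circuit overhead.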

  20. Optimal block cosine transform image coding for noisy channels

    NASA Technical Reports Server (NTRS)

    Vaishampayan, V.; Farvardin, N.

    1986-01-01

    The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is transmitted over a memoryless channel. An algorithm is presented for the design of the quantizers used for encoding the transform coefficients; it produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm based on the steepest-descent method was used, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons were made against a reference system designed for no channel errors.
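
    The bit-assignment step can be illustrated with the standard greedy marginal-return rule, a simplification of the steepest-descent search described above, under the common high-rate approximation that each added bit quarters a coefficient's distortion (sigma^2 * 2^(-2b)):

```python
import heapq

def greedy_bit_allocation(variances, total_bits):
    """Assign total_bits one at a time, always to the transform coefficient
    whose distortion sigma^2 / 4^b would drop the most from one more bit."""
    bits = [0] * len(variances)
    heap = [(-(v - v / 4.0), i) for i, v in enumerate(variances)]
    heapq.heapify(heap)  # max-heap on marginal distortion reduction
    for _ in range(total_bits):
        _, i = heapq.heappop(heap)
        bits[i] += 1
        d = variances[i] / (4.0 ** bits[i])
        heapq.heappush(heap, (-(d - d / 4.0), i))
    return bits
```

    As expected, higher-variance coefficients receive more bits: variances [16, 4, 1] with a budget of 6 bits yield the allocation [3, 2, 1].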

  1. Communication channels secured from eavesdropping via transmission of photonic Bell states

    NASA Astrophysics Data System (ADS)

    Shimizu, Kaoru; Imoto, Nobuyuki

    1999-07-01

    This paper proposes a quantum communication scheme for sending a definite binary sequence while confirming the security of the transmission. The scheme is very suitable for sending a ciphertext in a secret-key cryptosystem, so that we can detect any eavesdropper who attempts to decipher the key. Thus we can continue to use a secret key unless we detect eavesdropping, and the security of a key that is used repeatedly can be enhanced to the level of one-time-pad cryptography. In our scheme, a pair of entangled photon twins is employed as a bit carrier, which is encoded in a two-term superposition of four Bell states. Different bases are employed for encoding the binary sequence of a ciphertext and a random test bit. The photon twins are measured with a Bell-state analyzer, and any bit can be decoded from the resultant Bell state when the receiver is later notified of the coding basis through a classical channel. By revealing the positions and values of the test bits, the ciphertext can be read and eavesdropping is simultaneously detected.

  2. Universal health coverage in Latin American countries: how to improve solidarity-based schemes.

    PubMed

    Titelman, Daniel; Cetrángolo, Oscar; Acosta, Olga Lucía

    2015-04-04

    In this Health Policy we examine the association between the financing structure of health systems and universal health coverage. Latin American health systems encompass a wide range of financial sources, which translate into different solidarity-based schemes that combine contributory (payroll taxes) and non-contributory (general taxes) sources of financing. To move towards universal health coverage, solidarity-based schemes must heavily rely on countries' capacity to increase public expenditure in health. Improvement of solidarity-based schemes will need the expansion of mandatory universal insurance systems and strengthening of the public sector including increased fiscal expenditure. These actions demand a new model to integrate different sources of health-sector financing, including general tax revenue, social security contributions, and private expenditure. The extent of integration achieved among these sources will be the main determinant of solidarity and universal health coverage. The basic challenges for improvement of universal health coverage are not only to spend more on health, but also to reduce the proportion of out-of-pocket spending, which will need increased fiscal resources. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Considerations and techniques for incorporating remotely sensed imagery into the land resource management process.

    NASA Technical Reports Server (NTRS)

    Brooner, W. G.; Nichols, D. A.

    1972-01-01

    Development of a scheme for utilizing remote sensing technology in an operational program for regional land use planning and land resource management program applications. The scheme utilizes remote sensing imagery as one of several potential inputs to derive desired and necessary data, and considers several alternative approaches to the expansion and/or reduction and analysis of data, using automated data handling techniques. Within this scheme is a five-stage program development which includes: (1) preliminary coordination, (2) interpretation and encoding, (3) creation of data base files, (4) data analysis and generation of desired products, and (5) applications.

  4. Scalable quantum computation scheme based on quantum-actuated nuclear-spin decoherence-free qubits

    NASA Astrophysics Data System (ADS)

    Dong, Lihong; Rong, Xing; Geng, Jianpei; Shi, Fazhan; Li, Zhaokai; Duan, Changkui; Du, Jiangfeng

    2017-11-01

    We propose a novel theoretical scheme for quantum computation. Nuclear spin pairs are utilized to encode decoherence-free (DF) qubits. A nitrogen-vacancy center serves as a quantum actuator to initialize, read out, and control the DF qubits. The realization of CNOT gates between two DF qubits is also presented. Numerical simulations show high fidelities for all these processes. Additionally, we discuss the potential for scalability. Our scheme reduces the challenge of classical interfaces for controlling and observing complex quantum systems down to a simple quantum actuator. It also provides a novel way to handle complex quantum systems.

  5. Experimental realization of a CMOS-compatible optical directed priority encoder using cascaded micro-ring resonators

    NASA Astrophysics Data System (ADS)

    Xiao, Huifu; Li, Dezhao; Liu, Zilong; Han, Xu; Chen, Wenping; Zhao, Ting; Tian, Yonghui; Yang, Jianhong

    2018-03-01

    In this paper, we propose and experimentally demonstrate an integrated optical device that can implement the logical function of priority encoding from a 4-bit electrical signal to a 2-bit optical signal. As a proof of concept, a thermo-optic modulation scheme is adopted to tune each micro-ring resonator (MRR). A monochromatic light at the working wavelength is coupled into the input port of the device through a lensed fiber, and the four input electrical logic signals, serving as the signals to be encoded, are applied to the micro-heaters above the four MRRs to control the working states of the optical switches. The encoding results are directed to the output ports in the form of light. Finally, the logical function of priority encoding with an operation speed of 10 kbps is demonstrated successfully.
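
    The logical function realized optically here is the ordinary 4-to-2 priority encoder; for reference, its behavior in software:

```python
def priority_encode(bits):
    """4-to-2 priority encoder: the highest-indexed asserted input wins.
    Returns (2-bit output code, valid flag), mirroring the device's 4-bit
    electrical input and 2-bit optical output."""
    for i in (3, 2, 1, 0):
        if bits[i]:
            return i, True
    return 0, False

# priority_encode([1, 0, 1, 0]) -> (2, True): input 2 outranks input 0
```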

  6. Decoding stimulus features in primate somatosensory cortex during perceptual categorization

    PubMed Central

    Alvarez, Manuel; Zainos, Antonio; Romo, Ranulfo

    2015-01-01

    Neurons of the primary somatosensory cortex (S1) respond as functions of frequency or amplitude of a vibrotactile stimulus. However, whether S1 neurons encode both frequency and amplitude of the vibrotactile stimulus, or whether each sensory feature is encoded by separate populations of S1 neurons, is not known. To address these questions, we recorded S1 neurons while trained monkeys categorized only one sensory feature of the vibrotactile stimulus: frequency, amplitude, or duration. The results suggest a hierarchical encoding scheme in S1: from neurons that encode all sensory features of the vibrotactile stimulus to neurons that encode only one sensory feature. We hypothesize that the dynamic representation of each sensory feature in S1 might serve further downstream processing that leads to the monkey's psychophysical behavior observed in these tasks. PMID:25825711

  7. Experimental demonstration of a flexible time-domain quantum channel.

    PubMed

    Xing, Xingxing; Feizpour, Amir; Hayat, Alex; Steinberg, Aephraim M

    2014-10-20

    We present an experimental realization of a flexible quantum channel where the Hilbert space dimensionality can be controlled electronically. Using electro-optical modulators (EOM) and narrow-band optical filters, quantum information is encoded and decoded in the temporal degrees of freedom of photons from a long-coherence-time single-photon source. Our results demonstrate the feasibility of a generic scheme for encoding and transmitting multidimensional quantum information over the existing fiber-optical telecommunications infrastructure.

  8. Diversity Analysis of Dairy and Nondairy Lactococcus lactis Isolates, Using a Novel Multilocus Sequence Analysis Scheme and (GTG)5-PCR Fingerprinting▿

    PubMed Central

    Rademaker, Jan L. W.; Herbet, Hélène; Starrenburg, Marjo J. C.; Naser, Sabri M.; Gevers, Dirk; Kelly, William J.; Hugenholtz, Jeroen; Swings, Jean; van Hylckama Vlieg, Johan E. T.

    2007-01-01

    The diversity of a collection of 102 lactococcus isolates including 91 Lactococcus lactis isolates of dairy and nondairy origin was explored using partial small subunit rRNA gene sequence analysis and limited phenotypic analyses. A subset of 89 strains of L. lactis subsp. cremoris and L. lactis subsp. lactis isolates was further analyzed by (GTG)5-PCR fingerprinting and a novel multilocus sequence analysis (MLSA) scheme. Two major genomic lineages within L. lactis were found. The L. lactis subsp. cremoris type-strain-like genotype lineage included both L. lactis subsp. cremoris and L. lactis subsp. lactis isolates. The other major lineage, with a L. lactis subsp. lactis type-strain-like genotype, comprised L. lactis subsp. lactis isolates only. A novel third genomic lineage represented two L. lactis subsp. lactis isolates of nondairy origin. The genomic lineages deviate from the subspecific classification of L. lactis that is based on a few phenotypic traits only. MLSA of six partial genes (atpA, encoding ATP synthase alpha subunit; pheS, encoding phenylalanine tRNA synthetase; rpoA, encoding RNA polymerase alpha chain; bcaT, encoding branched chain amino acid aminotransferase; pepN, encoding aminopeptidase N; and pepX, encoding X-prolyl dipeptidyl peptidase) revealed 363 polymorphic sites (total length, 1,970 bases) among 89 L. lactis subsp. cremoris and L. lactis subsp. lactis isolates with unique sequence types for most isolates. This allowed high-resolution cluster analysis in which dairy isolates form subclusters of limited diversity within the genomic lineages. The pheS DNA sequence analysis yielded two genetic groups dissimilar to the other genotyping analysis-based lineages, indicating a disparate acquisition route for this gene. PMID:17890345

  9. Diversity analysis of dairy and nondairy Lactococcus lactis isolates, using a novel multilocus sequence analysis scheme and (GTG)5-PCR fingerprinting.

    PubMed

    Rademaker, Jan L W; Herbet, Hélène; Starrenburg, Marjo J C; Naser, Sabri M; Gevers, Dirk; Kelly, William J; Hugenholtz, Jeroen; Swings, Jean; van Hylckama Vlieg, Johan E T

    2007-11-01

    The diversity of a collection of 102 lactococcus isolates including 91 Lactococcus lactis isolates of dairy and nondairy origin was explored using partial small subunit rRNA gene sequence analysis and limited phenotypic analyses. A subset of 89 strains of L. lactis subsp. cremoris and L. lactis subsp. lactis isolates was further analyzed by (GTG)(5)-PCR fingerprinting and a novel multilocus sequence analysis (MLSA) scheme. Two major genomic lineages within L. lactis were found. The L. lactis subsp. cremoris type-strain-like genotype lineage included both L. lactis subsp. cremoris and L. lactis subsp. lactis isolates. The other major lineage, with a L. lactis subsp. lactis type-strain-like genotype, comprised L. lactis subsp. lactis isolates only. A novel third genomic lineage represented two L. lactis subsp. lactis isolates of nondairy origin. The genomic lineages deviate from the subspecific classification of L. lactis that is based on a few phenotypic traits only. MLSA of six partial genes (atpA, encoding ATP synthase alpha subunit; pheS, encoding phenylalanine tRNA synthetase; rpoA, encoding RNA polymerase alpha chain; bcaT, encoding branched chain amino acid aminotransferase; pepN, encoding aminopeptidase N; and pepX, encoding X-prolyl dipeptidyl peptidase) revealed 363 polymorphic sites (total length, 1,970 bases) among 89 L. lactis subsp. cremoris and L. lactis subsp. lactis isolates with unique sequence types for most isolates. This allowed high-resolution cluster analysis in which dairy isolates form subclusters of limited diversity within the genomic lineages. The pheS DNA sequence analysis yielded two genetic groups dissimilar to the other genotyping analysis-based lineages, indicating a disparate acquisition route for this gene.
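
    Counting polymorphic sites of the kind reported above (363 over 1,970 aligned bases) amounts to counting alignment columns that contain more than one distinct base. A minimal sketch:

```python
def polymorphic_sites(alignment):
    """Count columns with more than one distinct base across a list of
    equal-length aligned gene sequences."""
    return sum(len(set(column)) > 1 for column in zip(*alignment))

# polymorphic_sites(["ATGC", "ATGA", "ATCC"]) -> 2 (positions 3 and 4 vary)
```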

  10. Optical image encryption using multilevel Arnold transform and noninterferometric imaging

    NASA Astrophysics Data System (ADS)

    Chen, Wen; Chen, Xudong

    2011-11-01

    Information security has attracted much attention recently due to the rapid development of modern technologies such as computers and the internet. We propose a novel method for optical image encryption using a multilevel Arnold transform and rotatable-phase-mask noninterferometric imaging. An optical image encryption scheme is developed in the gyrator transform domain, and one phase-only mask (i.e., phase grating) is rotated and updated during image encryption. For the decryption, an iterative retrieval algorithm is proposed to extract high-quality plaintexts. Conventional encoding methods (such as digital holography) have been proven vulnerable to attacks, and the proposed optical encoding scheme can effectively eliminate this security deficiency and significantly enhance cryptosystem security. The proposed strategy based on the rotatable phase-only mask provides a new alternative for data/image encryption in noninterferometric imaging.

  11. An Adaptive Prediction-Based Approach to Lossless Compression of Floating-Point Volume Data.

    PubMed

    Fout, N; Ma, Kwan-Liu

    2012-12-01

    In this work, we address the problem of lossless compression of scientific and medical floating-point volume data. We propose two prediction-based compression methods that share a common framework, which consists of a switched prediction scheme wherein the best predictor out of a preset group of linear predictors is selected. Such a scheme is able to adapt to different datasets as well as to varying statistics within the data. The first method, called APE (Adaptive Polynomial Encoder), uses a family of structured interpolating polynomials for prediction, while the second method, which we refer to as ACE (Adaptive Combined Encoder), combines predictors from previous work with the polynomial predictors to yield a more flexible, powerful encoder that is able to effectively decorrelate a wide range of data. In addition, in order to facilitate efficient visualization of compressed data, our scheme provides an option to partition floating-point values in such a way as to provide a progressive representation. We compare our two compressors to existing state-of-the-art lossless floating-point compressors for scientific data, with our data suite including both computer simulations and observational measurements. The results demonstrate that our polynomial predictor, APE, is comparable to previous approaches in terms of speed but achieves better compression rates on average. ACE, our combined predictor, while somewhat slower, is able to achieve the best compression rate on all datasets, with significantly better rates on most of the datasets.
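
    The switched-prediction framework shared by APE and ACE can be sketched with a deliberately tiny predictor group (order-0 and order-1 extrapolators standing in for the paper's structured polynomial predictors) and the absolute residual sum as the selection cost:

```python
def switched_predict(block, history):
    """For one block, run every predictor in the preset group and keep the
    one whose residuals are cheapest; return its name and residues, which
    a later entropy stage would encode."""
    predictors = {
        'constant': lambda h: h[-1],              # order-0: repeat last sample
        'linear':   lambda h: 2 * h[-1] - h[-2],  # order-1: linear extrapolation
    }
    best = None
    for name, predict in predictors.items():
        h, residuals = list(history), []
        for sample in block:
            residuals.append(sample - predict(h))
            h.append(sample)
        cost = sum(abs(r) for r in residuals)
        if best is None or cost < best[2]:
            best = (name, residuals, cost)
    return best[0], best[1]
```

    On a linear ramp the order-1 predictor wins with all-zero residues, the best case for the entropy coder.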

  12. Applying the Methodology of the Community College Classification Scheme to the Public Master's Colleges and Universities Sector

    ERIC Educational Resources Information Center

    Kinkead, J. Clint.; Katsinas, Stephen G.

    2011-01-01

    This work brings forward the geographically-based classification scheme for the public Master's Colleges and Universities sector. Using the same methodology developed by Katsinas and Hardy (2005) to classify community colleges, this work classifies Master's Colleges and Universities. This work has four major findings and conclusions. First, a…

  13. Self-Encoded Spread Spectrum Modulation for Robust Anti-Jamming Communication

    DTIC Science & Technology

    2009-06-30

    experience in both theoretical and experimental aspects of RF and optical communications, multi-user CDMA systems, transmitter precoding and code...the performance of DS- and FH-SESS modulation in the presence of worst-case jamming, develop innovative SESS schemes that further exploit time and...Determine BER and AJ performance of the feedback and iterative detectors in DS-SESS under pulsed-noise and multi-tone jamming • Task 2: Develop a scheme

  14. Multiple image encryption scheme based on pixel exchange operation and vector decomposition

    NASA Astrophysics Data System (ADS)

    Xiong, Y.; Quan, C.; Tay, C. J.

    2018-02-01

    We propose a new multiple-image encryption scheme based on a pixel exchange operation and a basic vector decomposition in the Fourier domain. In this algorithm, original images are imported via a pixel exchange operator, from which scrambled images and pixel position matrices are obtained. Scrambled images encrypted into phase information are imported using the proposed algorithm, and phase keys are obtained from the difference between scrambled images and synthesized vectors in a charge-coupled device (CCD) plane. The final synthesized vector is used as an input in a double random phase encoding (DRPE) scheme. In the proposed encryption scheme, pixel position matrices and phase keys serve as additional private keys to enhance the security of the cryptosystem, which is based on a 4-f system. Numerical simulations are presented to demonstrate the feasibility and robustness of the proposed encryption scheme.

  15. Proposal for nanoscale cascaded plasmonic majority gates for non-Boolean computation.

    PubMed

    Dutta, Sourav; Zografos, Odysseas; Gurunarayanan, Surya; Radu, Iuliana; Soree, Bart; Catthoor, Francky; Naeemi, Azad

    2017-12-19

    Surface-plasmon-polariton waves propagating at the interface between a metal and a dielectric hold the key to future high-bandwidth, dense on-chip integrated logic circuits that overcome the diffraction limitation of photonics. While recent advances in plasmonic logic have witnessed the demonstration of basic and universal logic gates, these CMOS-oriented digital logic gates cannot fully utilize the expressive power of this novel technology. Here, we aim at unraveling the true potential of plasmonics by exploiting an enhanced native functionality: the majority voter. Contrary to state-of-the-art plasmonic logic devices, we use the phase of the wave instead of the intensity as the state or computational variable. We propose and demonstrate, via numerical simulations, a comprehensive scheme for building a nanoscale cascadable plasmonic majority logic gate, along with a novel referencing scheme that can directly translate the information encoded in the amplitude and phase of the wave into electric field intensity at the output. Our MIM-based 3-input majority gate displays a highly improved overall area of only 0.636 μm² for a single stage, compared with previous works on plasmonic logic. The proposed device demonstrates non-Boolean computational capability and can find direct utility in highly parallel real-time signal processing applications like pattern recognition.

  16. Equity of the premium of the Ghanaian national health insurance scheme and the implications for achieving universal coverage

    PubMed Central

    2013-01-01

    The Ghanaian National Health Insurance Scheme (NHIS) was introduced to provide access to adequate health care regardless of ability to pay. By law the NHIS is mandatory, but because the informal sector has to make premium payments before being enrolled, the authorities are unable to enforce the mandatory nature of the scheme. The ultimate goal of the Scheme is thus to provide all residents with access to adequate health care at affordable cost; in other words, the Scheme intends to achieve universal coverage. An important factor in the achievement of universal coverage is that revenue collection be equitable. The purpose of this study is to examine the vertical and horizontal equity of the premium collection of the Scheme. The Kakwani index method as well as graphical analysis was used to study vertical equity. Horizontal inequity was measured through the effect of the premium on the redistribution of members' ability to pay. The extent to which the premium could cause catastrophic expenditure was also examined. The results showed that revenue collection was both vertically and horizontally inequitable. The horizontal inequity had a greater effect on the redistribution of ability to pay than the vertical inequity. The computation of catastrophic expenditure showed that a small minority of the poor were likely to incur catastrophic expenditure from paying the premium, a situation that could impede the achievement of universal coverage. The study provides recommendations to improve the inequitable system of premium payment to help achieve universal coverage. PMID:23294982
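
    The Kakwani index used in the study is the concentration index of premium payments (with households ranked by ability to pay) minus the Gini coefficient of ability to pay; a negative value indicates a regressive payment structure. A sketch using the standard rank formula (variable names are illustrative):

```python
def concentration_index(values):
    """C = 2*sum(i*y_i)/(n*sum(y)) - (n+1)/n, with units already ranked
    i = 1..n by ability to pay; ranking values by themselves gives the Gini."""
    n, total = len(values), sum(values)
    weighted = sum(i * y for i, y in enumerate(values, start=1))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

def kakwani_index(ability_to_pay, premiums):
    """Kakwani index = concentration index of premiums minus the Gini of
    ability to pay; K < 0 means the premium is regressive."""
    order = sorted(range(len(ability_to_pay)), key=lambda i: ability_to_pay[i])
    premiums_by_atp = [premiums[i] for i in order]
    gini = concentration_index(sorted(ability_to_pay))
    return concentration_index(premiums_by_atp) - gini
```

    For example, a flat premium levied on households with unequal ability to pay yields C = 0 for payments but a positive Gini, hence a negative (regressive) Kakwani index.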

  17. Experimental demonstration of polarization encoding quantum key distribution system based on intrinsically stable polarization-modulated units.

    PubMed

    Wang, Jindong; Qin, Xiaojuan; Jiang, Yinzhu; Wang, Xiaojing; Chen, Liwei; Zhao, Feng; Wei, Zhengjun; Zhang, Zhiming

    2016-04-18

    A proof-of-principle demonstration of a one-way polarization encoding quantum key distribution (QKD) system is presented. This approach can automatically compensate for birefringence and phase drift. This is achieved by constructing intrinsically stable polarization-modulated units (PMUs) to perform the encoding and decoding, which can be used with the four-state protocol, the six-state protocol, and the measurement-device-independent (MDI) scheme. A polarization extinction ratio of about 30 dB was maintained for several hours over a 50 km optical fiber without any adjustments to our setup, which evidences its potential for use in practical applications.

  18. Reducing acquisition times in multidimensional NMR with a time-optimized Fourier encoding algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Zhiyong; Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, Xiamen University, Xiamen, Fujian 361005; Smith, Pieter E. S.

    Speeding up the acquisition of multidimensional nuclear magnetic resonance (NMR) spectra is an important topic in contemporary NMR, with central roles in high-throughput investigations and analyses of marginally stable samples. A variety of fast NMR techniques have been developed, including methods based on non-uniform sampling and Hadamard encoding, that overcome the long sampling times inherent to schemes based on fast-Fourier-transform (FFT) methods. Here, we explore the potential of an alternative fast acquisition method that leverages a priori knowledge to tailor polychromatic pulses and customized time delays for an efficient Fourier encoding of the indirect domain of an NMR experiment. By porting the encoding of the indirect domain to the excitation process, this strategy avoids potential artifacts associated with non-uniform sampling schemes and uses a minimum number of scans equal to the number of resonances present in the indirect dimension. An added convenience is afforded by the fact that a usual 2D FFT can be used to process the generated data. Acquisitions of 2D heteronuclear correlation NMR spectra on quinine and on the anti-inflammatory drug isobutyl propionic phenolic acid illustrate the new method's performance. This method can be readily automated to deal with complex samples such as those occurring in metabolomics, as well as in in-cell and in vivo NMR applications, where speed and temporal stability are often primary concerns.

  19. Complexity reduction in the H.264/AVC using highly adaptive fast mode decision based on macroblock motion activity

    NASA Astrophysics Data System (ADS)

    Abdellah, Skoudarli; Mokhtar, Nibouche; Amina, Serir

    2015-11-01

    The H.264/AVC video coding standard is used in a wide range of applications, from video conferencing to high-definition television, owing to its high compression efficiency. This efficiency is mainly acquired from the newly allowed prediction schemes, including variable block modes. However, these schemes require high computational complexity to select the optimal mode. Consequently, complexity reduction in the H.264/AVC encoder has recently become a very challenging task in the video compression domain, especially when implementing the encoder in real-time applications. Fast mode decision algorithms play an important role in reducing the overall complexity of the encoder. In this paper, we propose an adaptive fast intermode algorithm based on motion activity, temporal stationarity, and spatial homogeneity. This algorithm predicts the motion activity of the current macroblock from its neighboring blocks and identifies temporally stationary and spatially homogeneous regions using adaptive threshold values based on video content features. Extensive experimental work has been done in the high profile, and results show that the proposed source-coding algorithm effectively reduces the computational complexity by 53.18% on average compared with the reference software encoder, while maintaining the high coding efficiency of H.264/AVC, incurring only a 0.097 dB loss in total peak signal-to-noise ratio and a 0.228% increase in total bit rate.

  20. Electron-Nuclear Quantum Information Processing

    DTIC Science & Technology

    2008-11-13

    A scheme for quantum information processing that exploits the anisotropic hyperfine coupling. This scheme enables universal control over a 1-electron, N-nuclear spin system, addressing only a … sample of irradiated malonic acid. Papers published in peer-reviewed journals: "Universal control of nuclear spins via anisotropic …"

  1. Peer Mentoring during the Transition to University: Assessing the Usage of a Formal Scheme within the UK

    ERIC Educational Resources Information Center

    Collings, Rosalyn; Swanson, Vivien; Watkins, Ruth

    2016-01-01

    Although mentoring has become increasingly popular within UK higher education, there is little evaluative research. The current longitudinal study aimed to evaluate the usage of a peer mentoring scheme during a first semester at university amongst 124 students. Results indicate that during the first week at university the majority accessed the…

  2. Characterization of Sensitivity Encoded Silicon Photomultiplier (SeSP) with 1-Dimensional and 2-Dimensional Encoding for High Resolution PET/MR

    NASA Astrophysics Data System (ADS)

    Omidvari, Negar; Schulz, Volkmar

    2015-06-01

    This paper evaluates the performance of a new type of PET detector called the sensitivity-encoded silicon photomultiplier (SeSP), which allows direct coupling of small-pitch crystal arrays to the detector with a reduced number of readout channels. Four SeSP devices with two separate encoding schemes, 1D and 2D, were investigated in this study. Furthermore, both encoding schemes were manufactured in two different sizes, 4×4 mm² and 7.73×7.9 mm², in order to investigate the effect of size on detector parameters. All devices were coupled to LYSO crystal arrays with 1 mm pitch and 10 mm height, with optical isolation between crystals. The characterization was done for the key parameters of crystal identification, energy resolution, and time resolution as a function of triggering threshold and over-voltage (OV). Position information was obtained using the center of gravity (CoG) algorithm and a least squares approach (LSQA) in combination with a mean light matrix around the photo-peak. The positioning results proved the capability of all four SeSP devices to precisely identify all crystals coupled to the sensors. Energy resolution was measured at different bias voltages, varying from 12% to 18% (FWHM), and a paired coincidence time resolution (pCTR) of 384 ps to 1.1 ns was obtained for the different SeSP devices at about 18 °C room temperature. However, the best time resolution was achieved at the highest over-voltage, resulting in a noise ratio of 99.08%.
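
    The center-of-gravity positioning mentioned above can be sketched as a signal-weighted mean of readout-channel coordinates. The channel layout and signal values below are illustrative assumptions, not the SeSP geometry:

```python
# Hypothetical sketch of center-of-gravity (CoG) crystal positioning for a
# light-sharing detector: the interaction position is estimated as the
# signal-weighted mean of the readout-channel positions.

def center_of_gravity(signals, positions):
    """Return the (x, y) signal-weighted mean of channel positions."""
    total = sum(signals)
    if total == 0:
        raise ValueError("no signal collected")
    x = sum(s * p[0] for s, p in zip(signals, positions)) / total
    y = sum(s * p[1] for s, p in zip(signals, positions)) / total
    return x, y

# Four corner channels of a 4 mm x 4 mm sensor (mm), light mostly on one side:
positions = [(0, 0), (4, 0), (0, 4), (4, 4)]
signals = [10, 30, 10, 30]
print(center_of_gravity(signals, positions))  # -> (3.0, 2.0)
```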

  3. Real-time modeling of primitive environments through wavelet sensors and Hebbian learning

    NASA Astrophysics Data System (ADS)

    Vaccaro, James M.; Yaworsky, Paul S.

    1999-06-01

    Modeling the world through sensory input necessarily provides a unique perspective for the observer. Given a limited perspective, objects and events cannot always be encoded precisely but must involve crude, quick approximations to deal with sensory information in a real-time manner. As an example, when avoiding an oncoming car, a pedestrian needs to identify the fact that a car is approaching before ascertaining the model or color of the vehicle. In our methodology, we use wavelet-based sensors with self-organized learning to encode basic sensory information in real time. The wavelet-based sensors provide the necessary transformations while a rank-based Hebbian learning scheme encodes a self-organized environment through translation, scale and orientation invariant sensors. Such a self-organized environment is made possible by combining wavelet sets which are orthonormal, log-scale with linear orientation and have automatically generated membership functions. In earlier work we used Gabor wavelet filters, rank-based Hebbian learning and an exponential modulation function to encode textural information from images. Many different types of modulation are possible, but based on biological findings the exponential modulation function provided a good approximation of first-spike coding of 'integrate and fire' neurons. These types of Hebbian encoding schemes (e.g., exponential modulation, etc.) are useful for quick response and learning, provide several advantages over contemporary neural network learning approaches, and have been found to quantize data nonlinearly. By combining wavelets with Hebbian learning we can provide a real-time front-end for modeling an intelligent process, such as the autonomous control of agents in a simulated environment.

  4. Evaluation of color encodings for high dynamic range pixels

    NASA Astrophysics Data System (ADS)

    Boitard, Ronan; Mantiuk, Rafal K.; Pouli, Tania

    2015-03-01

    Traditional Low Dynamic Range (LDR) color spaces encode a small fraction of the visible color gamut, which does not encompass the range of colors produced on upcoming High Dynamic Range (HDR) displays. Future imaging systems will require encoding a much wider color gamut and luminance range. Such a wide color gamut can be represented using floating point HDR pixel values, but those are inefficient to encode. They also lack the perceptual uniformity of the luminance and color distribution that is provided (in approximation) by most LDR color spaces. Therefore, there is a need to devise an efficient, perceptually uniform and integer-valued representation for high dynamic range pixel values. In this paper we evaluate several methods for encoding color HDR pixel values, in particular for use in image and video compression. Unlike other studies, we test both luminance and color difference encodings in rigorous 4AFC threshold experiments to determine the minimum bit depth required. Results show that the Perceptual Quantizer (PQ) encoding provides the best perceptual uniformity in the considered luminance range; however, the gain in bit depth is rather modest. A more significant difference can be observed between color difference encoding schemes, of which YDuDv encoding seems to be the most efficient.
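
    For reference, the Perceptual Quantizer that the study finds most perceptually uniform is standardized as SMPTE ST 2084; a minimal sketch of its inverse EOTF, mapping absolute luminance to an integer code value, is:

```python
# SMPTE ST 2084 "Perceptual Quantizer" (PQ) inverse EOTF: absolute luminance
# (cd/m^2, up to 10,000) is mapped to a perceptually uniform code value and
# quantized to integers. The 12-bit default below is an illustrative choice.
m1 = 2610 / 16384
m2 = 2523 / 4096 * 128
c1 = 3424 / 4096
c2 = 2413 / 4096 * 32
c3 = 2392 / 4096 * 32

def pq_encode(luminance, bit_depth=12):
    """Map absolute luminance to an integer PQ code value."""
    y = min(max(luminance / 10000.0, 0.0), 1.0)   # normalize to [0, 1]
    e = ((c1 + c2 * y**m1) / (1 + c3 * y**m1)) ** m2
    return round(e * (2**bit_depth - 1))

print(pq_encode(10000))  # -> 4095 (peak luminance hits the top code)
print(pq_encode(0))      # -> 0
```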

  5. Layered compression for high-precision depth data.

    PubMed

    Miao, Dan; Fu, Jingjing; Lu, Yan; Li, Shipeng; Chen, Chang Wen

    2015-12-01

    With the development of depth data acquisition technologies, access to depth data with more than 8-bit precision has become much easier, and determining how to efficiently represent and compress high-precision depth is essential for practical depth storage and transmission systems. In this paper, we propose a layered high-precision depth compression framework based on an 8-bit image/video encoder to achieve efficient compression with low complexity. Within this framework, considering the characteristics of high-precision depth, a depth map is partitioned into two layers: 1) the most significant bits (MSBs) layer and 2) the least significant bits (LSBs) layer. The MSBs layer provides the rough depth value distribution, while the LSBs layer records the details of the depth value variation. For the MSBs layer, an error-controllable pixel-domain encoding scheme is proposed to exploit the data correlation of the general depth information with sharp edges and to guarantee that the LSBs layer remains in 8-bit format after absorbing the quantization error from the MSBs layer. For the LSBs layer, a standard 8-bit image/video codec is leveraged to perform the compression. The experimental results demonstrate that the proposed coding scheme can achieve real-time depth compression with satisfactory reconstruction quality. Moreover, the compressed depth data generated by this scheme achieve better performance in view synthesis and gesture recognition applications compared with conventional coding schemes, because of the error control algorithm.
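
    The MSBs/LSBs partition described above can be sketched as a simple bit split; the 16-bit input precision is an illustrative assumption, and the paper's error-control step is omitted:

```python
# Illustrative two-layer split: a high-precision (here 16-bit) depth value is
# partitioned into an 8-bit MSBs layer (coarse distribution) and an 8-bit
# LSBs layer (fine variation), so a standard 8-bit codec can carry each layer.

def split_depth(depth16):
    msb = depth16 >> 8       # most significant 8 bits: rough depth
    lsb = depth16 & 0xFF     # least significant 8 bits: fine detail
    return msb, lsb

def merge_depth(msb, lsb):
    return (msb << 8) | lsb

d = 0xABCD
msb, lsb = split_depth(d)
assert merge_depth(msb, lsb) == d
print(hex(msb), hex(lsb))  # -> 0xab 0xcd
```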

  6. Optical information authentication using compressed double-random-phase-encoded images and quick-response codes.

    PubMed

    Wang, Xiaogang; Chen, Wen; Chen, Xudong

    2015-03-09

    In this paper, we develop a new optical information authentication system based on compressed double-random-phase-encoded images and quick-response (QR) codes, where the parameters of the optical lightwave are used as keys for optical decryption and the QR code is a key for verification. An input image attached with a QR code is first optically encoded in a simplified double random phase encoding (DRPE) scheme without using an interferometric setup. From the single encoded intensity pattern recorded by a CCD camera, a compressed double-random-phase-encoded image, i.e., the sparse phase distribution used for optical decryption, is generated by using an iterative phase retrieval technique with the QR code. We compare this technique to two other methods proposed in the literature, i.e., Fresnel-domain information authentication based on classical DRPE with a holographic technique, and information authentication based on DRPE and a phase retrieval algorithm. Simulation results show that QR codes are effective in improving the security and data sparsity of optical information encryption and authentication systems.

  7. Lossless quantum data compression with exponential penalization: an operational interpretation of the quantum Rényi entropy.

    PubMed

    Bellomo, Guido; Bosyk, Gustavo M; Holik, Federico; Zozor, Steeve

    2017-11-07

    Based on the problem of quantum data compression in a lossless way, we present here an operational interpretation for the family of quantum Rényi entropies. In order to do this, we appeal to a very general quantum encoding scheme that satisfies a quantum version of the Kraft-McMillan inequality. Then, in the standard situation, where one intends to minimize the usual average length of the quantum codewords, we recover the known results, namely that the von Neumann entropy of the source bounds the average length of the optimal codes. Otherwise, we show that by invoking an exponential average length, related to an exponential penalization of long codewords, the quantum Rényi entropies arise as the natural quantities relating the optimal encoding schemes to the source description, playing a role analogous to that of the von Neumann entropy.
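
    As a classical toy illustration of the quantities involved (not the paper's quantum formalism), the Rényi entropy of order α reduces to the Shannon entropy — the classical analogue of the von Neumann entropy — as α → 1:

```python
# Rényi entropy of order alpha for a classical distribution. The paper shows
# that entropies of this family bound optimal codeword lengths under an
# exponentially penalized average length; the distribution below is synthetic.
from math import log2

def renyi_entropy(p, alpha):
    if alpha == 1:                       # Shannon limit of the family
        return -sum(x * log2(x) for x in p if x > 0)
    return log2(sum(x**alpha for x in p)) / (1 - alpha)

p = [0.5, 0.25, 0.125, 0.125]
print(renyi_entropy(p, 1))        # -> 1.75 (Shannon entropy)
print(renyi_entropy(p, 2))        # collision entropy, <= Shannon
print(renyi_entropy(p, 0.9999))   # approaches the Shannon value
```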

  8. Security proof of continuous-variable quantum key distribution using three coherent states

    NASA Astrophysics Data System (ADS)

    Brádler, Kamil; Weedbrook, Christian

    2018-02-01

    We introduce a ternary quantum key distribution (QKD) protocol and asymptotic security proof based on three coherent states and homodyne detection. Previous work had considered the binary case of two coherent states, and here we nontrivially extend this to three. Our motivation is to leverage the practical benefits of both discrete and continuous (Gaussian) encoding schemes, creating a best-of-both-worlds approach; namely, the postprocessing of discrete encodings and the hardware benefits of continuous ones. We present a thorough and detailed security proof in the limit of infinite signal states, which allows us to lower bound the secret key rate. We calculate this in the context of collective eavesdropping attacks and reverse reconciliation postprocessing. Finally, we compare the ternary coherent state protocol to other well-known QKD schemes (and fundamental repeaterless limits) in terms of secret key rates and loss.

  9. Telidon Videotex presentation level protocol: Augmented picture description instructions

    NASA Astrophysics Data System (ADS)

    Obrien, C. D.; Brown, H. G.; Smirle, J. C.; Lum, Y. F.; Kukulka, J. Z.; Kwan, A.

    1982-02-01

    The Telidon Videotex system is a method by which graphic and textual information and transactional services can be accessed from information sources by the general public. In order to transmit information to a Telidon terminal at minimum bandwidth, and in a manner independent of the type of communications channel, a coding scheme was devised that permits the encoding of a picture into the geometric drawing elements which compose it. These picture description instructions constitute an alphageometric coding model based on the primitives POINT, LINE, ARC, RECTANGLE, POLYGON, and INCREMENT. Text is encoded as ASCII characters along with a supplementary table of accents and special characters. A mosaic shape table is included for compatibility. A detailed specification of the coding scheme and a description of the principles that make it independent of the communications channel and display hardware are provided.

  10. Superconducting quantum circuits theory and application

    NASA Astrophysics Data System (ADS)

    Deng, Xiuhao

    Superconducting quantum circuit models are widely used to understand superconducting devices. This thesis consists of four studies in which the superconducting quantum circuit is used to illustrate challenges related to quantum information encoding and processing, quantum simulation, and quantum signal detection and amplification. The existence of the scalar Aharonov-Bohm (AB) phase has been a controversial topic for decades. The scalar AB phase, defined as the time integral of the electric potential, gives rise to an extra phase factor in the wavefunction. We proposed a superconducting quantum Faraday cage to detect a temporal interference effect as a consequence of the scalar AB phase. Using the superconducting quantum circuit model, the physical system is solved and the resulting AB effect is predicted. Further discussion in this chapter shows that when the experimental apparatus is treated quantum mechanically, the spatial scalar AB effect proposed by Aharonov and Bohm cannot be observed: either a decoherent interference apparatus is used to observe the spatial scalar AB effect, or a quantum Faraday cage is used to observe the temporal scalar AB effect. The second study involves protecting a quantum system from losing coherence, which is crucial to any practical quantum computation scheme. We present a theory to encode any qubit, especially superconducting qubits, into a universal quantum degeneracy point (UQDP) where low-frequency noise is suppressed significantly. Numerical simulations for a superconducting charge qubit using experimental parameters show that its coherence time is prolonged by two orders of magnitude using our universal degeneracy point approach. With this improvement, a set of universal quantum gates can be performed at high fidelity without losing too much quantum coherence. Starting in 2004, the use of circuit QED has enabled the manipulation of superconducting qubits with photons.
We applied a quantum optical approach to model coupled resonators and obtained a four-wave mixing toolbox to operate on photon states. The model and toolbox are engineered with a superconducting quantum circuit in which two superconducting resonators are coupled via the UQDP circuit. Using fourth-order perturbation theory, one can realize a complete set of quantum operations between these two photon modes; this helps open a new field in which photon modes are treated as qubits. Additionally, a three-wave mixing scheme using phase qubits permits one to engineer the coupling Hamiltonian with a phase qubit as a tunable coupler. Following Feynman's idea of using quantum systems to simulate quantum systems, superconducting quantum simulators have recently been studied intensively. Taking advantage of the mesoscopic size and local tunability of superconducting circuits, we proposed simulating quantum phase transitions due to disorder. Our first paper proposed a superconducting quantum simulator of the Bose-Hubbard model that allows site-wise manipulation and observation of the Mott-insulator to superfluid phase transition; the side-band cooling of an array of superconducting resonators was solved after the paper was published. In light of the technology developed for manipulating quantum information with superconducting circuits, one can couple other quantum oscillator systems to superconducting resonators for manipulation of their quantum states or parametric amplification of weak quantum signals. A theory that works for different coupling schemes is studied in chapter 5. This will be a platform for further research.

  11. Universal block diagram based modeling and simulation schemes for fractional-order control systems.

    PubMed

    Bai, Lu; Xue, Dingyü

    2017-05-08

    Universal block-diagram-based schemes are proposed for modeling and simulating fractional-order control systems in this paper. A fractional operator block in Simulink is designed to evaluate the fractional-order derivative and integral. Based on this block, fractional-order control systems with zero initial conditions can be modeled conveniently. For modeling systems with nonzero initial conditions, an auxiliary signal is constructed in the compensation scheme. Since the compensation scheme is very complicated, an integrator chain scheme is further proposed to simplify the modeling procedure. The accuracy and effectiveness of the schemes are assessed through examples; the computational results confirm that the block diagram scheme is efficient for Caputo fractional-order ordinary differential equations (FODEs) of any complexity, including implicit Caputo FODEs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  12. Implementation of trinary logic in a polarization encoded optical shadow-casting scheme.

    PubMed

    Rizvi, R A; Zaheer, K; Zubairy, M S

    1991-03-10

    The design of various multioutput trinary combinational logic units by a polarization encoded optical shadow-casting (POSC) technique is presented. The POSC modified algorithm is employed to design and implement these logic elements in a trinary number system with separate and simultaneous generation of outputs. A detailed solution of the POSC logic equations for a fixed source plane and a fixed decoding mask is given to obtain input pixel coding for a trinary half-adder, full adder, and subtractor.

  13. New adaptive color quantization method based on self-organizing maps.

    PubMed

    Chang, Chip-Hong; Xu, Pengfei; Xiao, Rui; Srikanthan, Thambipillai

    2005-01-01

    Color quantization (CQ) is an image processing task popularly used to convert true-color images to palettized images for limited color display devices. To minimize the contouring artifacts introduced by the reduction of colors, a new competitive learning (CL) based scheme called the frequency-sensitive self-organizing map (FS-SOM) is proposed to optimize the color palette design for CQ. FS-SOM harmonically blends the neighborhood adaptation of the well-known self-organizing maps (SOMs) with a neuron-dependent frequency-sensitive learning model, a global butterfly permutation sequence for input randomization, and the reinitialization of dead neurons to harness effective utilization of neurons. The net effect is an improvement in adaptation, a well-ordered color palette, and the alleviation of the underutilization problem, which is the main cause of visually perceivable artifacts of CQ. Extensive simulations have been performed to analyze and compare the learning behavior and performance of FS-SOM against other vector quantization (VQ) algorithms. The results show that the proposed FS-SOM outperforms the classical CL, Linde-Buzo-Gray (LBG), and SOM algorithms. More importantly, FS-SOM achieves its superiority in reconstruction quality and topological ordering with much greater robustness against variations in network parameters than current SOM algorithms for CQ. A most significant bit (MSB) biased encoding scheme is also introduced to reduce the number of parallel processing units. By mapping the pixel values as sign-magnitude numbers and biasing the magnitudes according to their sign bits, eight lattice points in the color space are condensed into one common point density function. Consequently, the same processing element can be used to map several color clusters, and the entire FS-SOM network can be substantially scaled down without severely sacrificing the quality of the displayed image. The drawback of this encoding scheme is the additional storage overhead, which can be cut down by leveraging the existing encoder in an overall lossy compression scheme.
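
    The frequency-sensitive competitive learning idea at the core of FS-SOM can be sketched as follows. This omits the SOM neighborhood adaptation, the butterfly permutation, and the MSB-biased encoding, and all parameters and data are illustrative:

```python
# Frequency-sensitive competitive learning for palette design: the distance
# used for winner selection is scaled by how often a palette neuron has
# already won, so no neuron stays "dead" while others capture all the data.
import random

def fs_cl_palette(pixels, n_colors, lr=0.1, epochs=5, seed=0):
    rng = random.Random(seed)
    palette = [list(rng.choice(pixels)) for _ in range(n_colors)]
    wins = [1] * n_colors
    for _ in range(epochs):
        for px in pixels:
            # frequency-sensitive distortion: wins[i] * ||px - w_i||^2
            d = [wins[i] * sum((a - b) ** 2 for a, b in zip(px, palette[i]))
                 for i in range(n_colors)]
            j = d.index(min(d))
            wins[j] += 1
            for k in range(3):  # move the winner toward the sample
                palette[j][k] += lr * (px[k] - palette[j][k])
    return palette

# Two well-separated clusters (reds and blues) quantized to a 2-color palette:
pixels = [(255, 0, 0), (250, 5, 5), (0, 0, 255), (5, 5, 250)] * 50
palette = fs_cl_palette(pixels, n_colors=2)
print(palette)  # two centers, one near red and one near blue
```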

  14. Semi-quantum Secure Direct Communication Scheme Based on Bell States

    NASA Astrophysics Data System (ADS)

    Xie, Chen; Li, Lvzhou; Situ, Haozhen; He, Jianhao

    2018-06-01

    Recently, the idea of semi-quantumness has often been used in designing quantum cryptographic schemes, which allows some of the participants of a quantum cryptographic scheme to remain classical. One of the reasons why this idea is popular is that it allows a quantum information processing task to be accomplished using as few quantum resources as possible. In this paper, we extend the idea to quantum secure direct communication (QSDC) by proposing a semi-quantum secure direct communication scheme. In the scheme, the message sender, Alice, encodes each bit into a Bell state |Φ⁺⟩ = (|00⟩ + |11⟩)/√2 or |Ψ⁺⟩ = (|01⟩ + |10⟩)/√2, and the message receiver, Bob, is classical in the sense that he can either let the qubit he receives be reflected undisturbed, or measure the qubit in the computational basis {|0⟩, |1⟩} and then resend it in the state he found. Moreover, the security analysis of our scheme is also given.

  15. Decision algorithm for data center vortex beam receiver

    NASA Astrophysics Data System (ADS)

    Kupferman, Judy; Arnon, Shlomi

    2017-12-01

    We present a new scheme for a vortex beam communication system that exploits the radial component p of Laguerre-Gauss modes in addition to the azimuthal component l generally used. We derive a new encoding algorithm that uses the spatial distribution of intensity to create an alphabet dictionary for communication. We suggest an application of the scheme as part of an optical wireless link for intra-data-center communication, and we investigate the probability of decoding error for several detector options.

  16. Hybrid ququart-encoded quantum cryptography protected by Kochen-Specker contextuality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cabello, Adan; Department of Physics, Stockholm University, S-10691 Stockholm; D'Ambrosio, Vincenzo

    2011-09-15

    Quantum cryptographic protocols based on complementarity are not secure against attacks in which complementarity is imitated with classical resources. The Kochen-Specker (KS) theorem provides protection against these attacks, without requiring entanglement or spatially separated composite systems. We analyze the maximum tolerated noise to guarantee the security of a KS-protected cryptographic scheme against these attacks and describe a photonic realization of this scheme using hybrid ququarts defined by the polarization and orbital angular momentum of single photons.

  17. Encoding methods for B1+ mapping in parallel transmit systems at ultra high field

    NASA Astrophysics Data System (ADS)

    Tse, Desmond H. Y.; Poole, Michael S.; Magill, Arthur W.; Felder, Jörg; Brenner, Daniel; Jon Shah, N.

    2014-08-01

    Parallel radiofrequency (RF) transmission, either in the form of RF shimming or pulse design, has been proposed as a solution to the B1+ inhomogeneity problem in ultra high field magnetic resonance imaging. As a prerequisite, accurate B1+ maps from each of the available transmit channels are required. In this work, four different encoding methods for B1+ mapping, namely 1-channel-on, all-channels-on-except-1, all-channels-on-1-inverted and Fourier phase encoding, were evaluated using dual refocusing acquisition mode (DREAM) at 9.4 T. Fourier phase encoding was demonstrated in both phantom and in vivo to be the least susceptible to artefacts caused by destructive RF interference at 9.4 T. Unlike the other two interferometric encoding schemes, Fourier phase encoding showed negligible dependency on the initial RF phase setting and therefore no prior B1+ knowledge is required. Fourier phase encoding also provides a flexible way to increase the number of measurements to increase SNR, and to allow further reduction of artefacts by weighted decoding. These advantages of Fourier phase encoding suggest that it is a good choice for B1+ mapping in parallel transmit systems at ultra high field.
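
    A toy model of the Fourier phase encoding/decoding cycle described above: in measurement k, channel n transmits with an added phase 2πkn/N, so the recorded signals form a discrete Fourier transform of the per-channel B1+ values and a simple inverse DFT recovers each channel. All values below are synthetic:

```python
# Fourier phase encoding for multi-channel B1+ mapping, modeled as a DFT:
# encoding applies phase increments per channel, decoding inverts the DFT.
import cmath

N = 4
b1 = [1.0 + 0.2j, 0.8 - 0.1j, 0.5 + 0.5j, 0.3 + 0.0j]  # hypothetical channel B1+

# Encoding: N measurements, channel n carries phase 2*pi*k*n/N in scan k
meas = [sum(b1[n] * cmath.exp(2j * cmath.pi * k * n / N) for n in range(N))
        for k in range(N)]

# Decoding: inverse DFT of the measurements recovers each channel
decoded = [sum(meas[k] * cmath.exp(-2j * cmath.pi * k * n / N)
               for k in range(N)) / N for n in range(N)]

for orig, dec in zip(b1, decoded):
    assert abs(orig - dec) < 1e-12
print("all channels recovered")
```

    Unlike one-channel-on encoding, every scan here uses all channels at full power, which is why the scheme is less susceptible to low-signal artifacts from destructive interference.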

  18. Heterodyne detection using spectral line pairing for spectral phase encoding optical code division multiple access and dynamic dispersion compensation.

    PubMed

    Yang, Yi; Foster, Mark; Khurgin, Jacob B; Cooper, A Brinton

    2012-07-30

    A novel coherent optical code-division multiple access (OCDMA) scheme is proposed that uses spectral line pairing to generate signals suitable for heterodyne decoding. Both the signal and the local reference are transmitted over a single optical fiber, and a simple balanced receiver performs sourceless heterodyne detection, canceling speckle noise and multiple-access interference (MAI). To validate the idea, a fully loaded, 16-user phase-encoded system is simulated, and the effects of fiber dispersion on system performance are studied. Both second- and third-order dispersion management are achieved by using a spectral phase encoder to adjust the phase shifts of spectral components at the optical network unit (ONU).

  19. Architecture for VLSI design of Reed-Solomon encoders

    NASA Technical Reports Server (NTRS)

    Liu, K. Y.

    1981-01-01

    The logic structure of a universal VLSI chip called the symbol-slice Reed-Solomon (RS) encoder chip is discussed. An RS encoder can be constructed by cascading and properly interconnecting a group of such VLSI chips. As a design example, it is shown that a (255,223) RS encoder requiring around 40 discrete CMOS ICs may be replaced by an RS encoder consisting of four identical interconnected VLSI RS encoder chips. Besides the size advantage, the VLSI RS encoder also has the potential advantages of requiring less power and having higher reliability.
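
    For illustration, the arithmetic such an RS encoder chip implements in hardware can be sketched in software. The field polynomial, message, and 4-symbol parity below are illustrative assumptions; a (255,223) code uses 32 parity symbols over GF(2^8):

```python
# Systematic Reed-Solomon encoding over GF(2^8) with the common primitive
# polynomial x^8+x^4+x^3+x^2+1 (0x11D): an LFSR-style polynomial division
# appends parity symbols to the message.

# Build GF(2^8) exp/log tables for fast multiplication.
EXP = [0] * 512
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11D
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def generator_poly(nparity):
    """g(x) = (x - a^0)(x - a^1)...(x - a^(nparity-1)), descending powers."""
    g = [1]
    for i in range(nparity):
        ng = [0] * (len(g) + 1)
        for j, c in enumerate(g):
            ng[j] ^= c                       # c * x
            ng[j + 1] ^= gf_mul(c, EXP[i])   # c * a^i
        g = ng
    return g

def rs_encode(msg, nparity):
    """Systematic encoding: remainder of msg(x)*x^nparity modulo g(x)."""
    g = generator_poly(nparity)
    rem = [0] * nparity
    for m in msg:
        feedback = m ^ rem[0]
        rem = rem[1:] + [0]
        for j in range(nparity):
            rem[j] ^= gf_mul(feedback, g[j + 1])
    return list(msg) + rem

def poly_eval(p, x):
    y = 0
    for c in p:          # Horner's rule over GF(2^8)
        y = gf_mul(y, x) ^ c
    return y

codeword = rs_encode([1, 2, 3, 4, 5], nparity=4)
# Every codeword is divisible by g(x), so it vanishes at the roots a^0..a^3:
assert all(poly_eval(codeword, EXP[i]) == 0 for i in range(4))
print(codeword[:5], codeword[5:])  # message symbols, then 4 parity symbols
```

    The symbol-slice chip partitions exactly this kind of shift-register division across cascaded devices, which is what makes the four-chip (255,223) construction possible.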

  20. Classification scheme for phenomenological universalities in growth problems in physics and other sciences.

    PubMed

    Castorina, P; Delsanto, P P; Guiot, C

    2006-05-12

    A classification into universality classes of broad categories of phenomenologies, belonging to physics and other disciplines, may be very useful for cross-fertilization among them and for the purpose of pattern recognition and interpretation of experimental data. We present here a simple scheme for the classification of nonlinear growth problems. The success of the scheme in predicting and characterizing the well-known Gompertz, West, and logistic models suggests the study of a hitherto unexplored class of nonlinear growth problems.

  1. Proof of the Feasibility of Coherent and Incoherent Schemes for Pumping a Gamma-Ray Laser

    DTIC Science & Technology

    1988-07-01

    AD-A 799 638. The University of Texas at Dallas, Center for Quantum Electronics. The Gamma-Ray Laser Project, Quarterly Report, April … PROOF OF THE FEASIBILITY OF COHERENT AND INCOHERENT SCHEMES FOR PUMPING A GAMMA-RAY LASER. Principal Investigator: Carl B. Collins, The University of Texas at Dallas, Center for Quantum Electronics. Quarterly Technical Progress Report.

  2. Architecture for VLSI design of Reed-Solomon encoders

    NASA Technical Reports Server (NTRS)

    Liu, K. Y.

    1982-01-01

    A description is given of the logic structure of the universal VLSI symbol-slice Reed-Solomon (RS) encoder chip, from a group of which an RS encoder may be constructed through cascading and proper interconnection. As a design example, it is shown that an RS encoder presently requiring approximately 40 discrete CMOS ICs may be replaced by an RS encoder consisting of four identical, interconnected VLSI RS encoder chips, offering in addition to greater compactness both a lower power requirement and greater reliability.

  3. Architecture for VLSI design of Reed-Solomon encoders

    NASA Astrophysics Data System (ADS)

    Liu, K. Y.

    1982-02-01

    A description is given of the logic structure of the universal VLSI symbol-slice Reed-Solomon (RS) encoder chip, from a group of which an RS encoder may be constructed through cascading and proper interconnection. As a design example, it is shown that an RS encoder presently requiring approximately 40 discrete CMOS ICs may be replaced by an RS encoder consisting of four identical, interconnected VLSI RS encoder chips, offering in addition to greater compactness both a lower power requirement and greater reliability.

  4. Comparative assessment of H.265/MPEG-HEVC, VP9, and H.264/MPEG-AVC encoders for low-delay video applications

    NASA Astrophysics Data System (ADS)

    Grois, Dan; Marpe, Detlev; Nguyen, Tung; Hadar, Ofer

    2014-09-01

    The popularity of low-delay video applications has increased dramatically in recent years due to a rising demand for real-time video content (such as video conferencing or video surveillance), and also due to the increasing availability of relatively inexpensive heterogeneous devices (such as smartphones and tablets). To this end, this work presents a comparative assessment of the two latest video coding standards, H.265/MPEG-HEVC (High Efficiency Video Coding) and H.264/MPEG-AVC (Advanced Video Coding), and of the VP9 proprietary video coding scheme. For evaluating H.264/MPEG-AVC, the open-source x264 encoder was selected, which has a multi-pass encoding mode similar to VP9. According to experimental results, which were obtained by using similar low-delay configurations for all three representative encoders, H.265/MPEG-HEVC provides significant average bit-rate savings of 32.5% and 40.8% relative to VP9 and x264 for 1-pass encoding, and average bit-rate savings of 32.6% and 42.2% for 2-pass encoding, respectively. On the other hand, compared to the x264 encoder, typical low-delay encoding times of the VP9 encoder are about 2,000 times higher for 1-pass encoding and about 400 times higher for 2-pass encoding.

  5. HERMES: Hadamard Encoding and Reconstruction of MEGA-Edited Spectroscopy

    PubMed Central

    Chan, Kimberly L.; Puts, Nicolaas A. J.; Schär, Michael; Barker, Peter B.; Edden, Richard A. E.

    2017-01-01

    Purpose: To investigate a novel Hadamard-encoded spectral editing scheme and evaluate its performance in simultaneously quantifying N-acetyl aspartate (NAA) and N-acetyl aspartyl glutamate (NAAG) at 3 Tesla. Methods: Editing pulses applied according to a Hadamard encoding scheme allow the simultaneous acquisition of multiple metabolites. The method, called HERMES (Hadamard Encoding and Reconstruction of MEGA-Edited Spectroscopy), was optimized to detect NAA and NAAG simultaneously using density-matrix simulations and validated in phantoms at 3T. In vivo data were acquired in the centrum semiovale of 12 normal subjects. The NAA:NAAG concentration ratio was determined by modeling in vivo data using simulated basis functions. Simulations were also performed for potentially coedited molecules with signals within the detected NAA/NAAG region. Results: Simulations and phantom experiments show excellent segregation of NAA and NAAG signals into the intended spectra, with minimal crosstalk. Multiplet patterns show good agreement between simulations and phantom and in vivo data. In vivo measurements show that the relative peak intensities of the NAA and NAAG spectra are consistent with a NAA:NAAG concentration ratio of 4.22:1, in good agreement with the literature. Simulations indicate some coediting of aspartate and glutathione near the detected region (editing efficiency: 4.5% and 78.2%, respectively, for the NAAG reconstruction; 5.1% and 19.5%, respectively, for the NAA reconstruction). Conclusion: The simultaneous and separable detection of two otherwise overlapping metabolites using HERMES is possible at 3T. PMID:27089868
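    The Hadamard logic behind such editing schemes can be illustrated numerically: if each of four sub-acquisitions carries the two edited signals with signs drawn from rows of a 4x4 Hadamard matrix, signed sums separate them cleanly. The toy sketch below uses synthetic numbers; the sign assignments and values are illustrative, not the actual HERMES pulse scheme.

```python
# Toy Hadamard-encoded editing reconstruction (illustrative, not HERMES itself):
# signed sums along Hadamard rows recover each encoded component free of the
# other and of the unedited baseline.

H = [
    [1,  1,  1,  1],   # row 0: plain co-addition (sum spectrum / baseline)
    [1, -1,  1, -1],   # row 1: assumed encoding for metabolite "A" (e.g. NAA)
    [1,  1, -1, -1],   # row 2: assumed encoding for metabolite "B" (e.g. NAAG)
    [1, -1, -1,  1],
]

def acquisitions(sig_a, sig_b, baseline):
    """Four sub-experiments: baseline plus sign-encoded metabolite signals."""
    return [baseline + H[1][k] * sig_a + H[2][k] * sig_b for k in range(4)]

def reconstruct(acq, row):
    """A signed sum along one Hadamard row isolates one encoded component."""
    return sum(H[row][k] * acq[k] for k in range(4)) / 4.0

acq = acquisitions(sig_a=2.0, sig_b=0.5, baseline=10.0)
# reconstruct(acq, 1) -> 2.0 (A only); reconstruct(acq, 2) -> 0.5 (B only)
```

    Because the Hadamard rows are mutually orthogonal, each reconstruction cancels both the baseline and the other metabolite exactly, which is the "minimal crosstalk" property the abstract reports.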

  6. Active module identification in intracellular networks using a memetic algorithm with a new binary decoding scheme.

    PubMed

    Li, Dong; Pan, Zhisong; Hu, Guyu; Zhu, Zexuan; He, Shan

    2017-03-14

    Active modules are connected regions of a biological network that show significant changes in expression under particular conditions. The identification of such modules is important because it may reveal the regulatory and signaling mechanisms associated with a given cellular response. In this paper, we propose a novel active module identification algorithm based on a memetic algorithm. We propose a novel encoding/decoding scheme to ensure the connectedness of the identified active modules. Based on this scheme, we also design a local search operator and incorporate it into the memetic algorithm to improve its performance. The effectiveness of the proposed algorithm is validated on both small and large protein interaction networks.

  7. Consolidating the social health insurance schemes in China: towards an equitable and efficient health system.

    PubMed

    Meng, Qingyue; Fang, Hai; Liu, Xiaoyun; Yuan, Beibei; Xu, Jin

    2015-10-10

    Fragmentation of social health insurance schemes is an important factor in inequitable access to health care and financial protection for people covered by different health insurance schemes in China. To fulfil its commitment to universal health coverage by 2020, the Chinese Government needs to prioritise addressing this issue. After analysing the current state of fragmentation, this Review summarises efforts to consolidate health insurance schemes both in China and internationally. Rural migrants, elderly people, and those with non-communicable diseases in China will benefit greatly from consolidation of the existing health insurance schemes through extended funding pools, which would narrow the disparities among schemes in funding levels and benefit packages. Political commitment, institutional innovation, and a feasible implementation plan are the major elements needed for successful consolidation. Achievement of universal health coverage in China requires systemic strategies, including consolidation of the social health insurance schemes.

  8. Simple scheme to implement decoy-state reference-frame-independent quantum key distribution

    NASA Astrophysics Data System (ADS)

    Zhang, Chunmei; Zhu, Jianrong; Wang, Qin

    2018-06-01

    We propose a simple scheme to implement decoy-state reference-frame-independent quantum key distribution (RFI-QKD), where signal states are prepared in the Z, X, and Y bases, decoy states are prepared in the X and Y bases, and vacuum states are assigned no basis. Unlike the original decoy-state RFI-QKD scheme, whose decoy states are prepared in the Z, X, and Y bases, our scheme prepares decoy states only in the X and Y bases, which avoids the redundancy of decoy states in the Z basis, saves random number consumption, simplifies the encoding device of practical RFI-QKD systems, and makes the most of the finite pulses available in a short time. Numerical simulations show that, accounting for the finite-size effect with a reasonable number of pulses in practical scenarios, our simple decoy-state RFI-QKD scheme exhibits performance at least comparable to, and sometimes better than, that of the original decoy-state RFI-QKD scheme. In particular, in terms of resistance to relative rotation of the reference frames, our proposed scheme performs much better than the original scheme, giving it great potential for adoption in current QKD systems.

  9. Student and Staff Perceptions of a Vacation Research Assistantship Scheme

    ERIC Educational Resources Information Center

    Penn, Felicity; Stephens, Danielle; Morgan, Jessica; Upton, Penney; Upton, Dominic

    2013-01-01

    There is a push for universities to equip graduates with desirable employability skills and "hands-on" experience. This article explores the perceptions of students and staff experiences of a research assistantship scheme. Nine students from the University of Worcester were given the opportunity to work as a student vacation researcher…

  10. Indigenous Tutorial Assistance Scheme. Tertiary Tuition and Beyond: Transitioning with Strengths and Promoting Opportunities

    ERIC Educational Resources Information Center

    Wilks, Judith; Fleeton, Ellen Radnidge; Wilson, Katie

    2017-01-01

    The Indigenous Tutorial Assistance Scheme-Tertiary Tuition (ITAS-TT) has provided Australian government funding for one-to-one and group tutorial study support for Aboriginal and Torres Strait Islander students attending Australian universities since 1989. It has been a central plank supporting Indigenous university students in their studies.…

  11. Extensive Listening in a Colombian University: Process, Product, and Perceptions

    ERIC Educational Resources Information Center

    Mayora, Carlos A.

    2017-01-01

    The current paper reports an experience implementing a small-scale narrow listening scheme (one of the varieties of extensive listening) with intermediate learners of English as a foreign language in a Colombian university. The paper presents (a) how the scheme was designed and implemented, including materials and procedures (the process); (b) how…

  12. Studies on image compression and image reconstruction

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Nori, Sekhar; Araj, A.

    1994-01-01

    During this six-month period our work concentrated on three somewhat different areas. We examined and developed a number of error concealment schemes for use in a variety of video coding environments. This work is described in an accompanying (draft) Masters thesis, in which we describe the application of these techniques to the MPEG video coding scheme. We felt that the unique frame ordering approach used in the MPEG scheme would be a challenge for any error concealment/error recovery technique. We continued our work in the vector quantization area and developed a new type of vector quantizer, which we call a scan predictive vector quantizer. The scan predictive VQ was tested on data processed at Goddard to approximate Landsat 7 HRMSI resolution and compared favorably with existing VQ techniques. A paper describing this work is included. The third area is concerned more with reconstruction than compression. While there is a variety of efficient lossless image compression schemes, they all share a common property: they use past data to encode future data, whether by taking differences, by context modeling, or by building dictionaries. When encoding large images, this common property becomes a common flaw. When the user wishes to decode just a portion of the image, the requirement that the past history be available forces the decoding of a significantly larger portion of the image than desired. Even with intelligent partitioning of the image dataset, the number of pixels decoded may be four times the number of pixels requested. We have developed an adaptive scanning strategy which can be used with any lossless compression scheme and which lowers the additional number of pixels to be decoded to about 7 percent of the number of pixels requested. A paper describing these results is included.
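    The "past data" flaw described above is easy to demonstrate with the simplest predictive scheme, differential (delta) coding. This toy example is not the authors' method, only an illustration of why partial decoding of predictively coded data is expensive:

```python
# With delta coding, each stored value is the difference from its predecessor,
# so recovering sample k forces decoding of every sample 0..k.

def delta_encode(samples):
    prev = 0
    out = []
    for s in samples:
        out.append(s - prev)
        prev = s
    return out

def decode_upto(deltas, k):
    """Recover sample k: the running sum walks through samples 0..k."""
    value = 0
    for d in deltas[:k + 1]:
        value += d
    return value

row = [10, 12, 11, 15, 15, 14]
enc = delta_encode(row)
assert decode_upto(enc, 3) == 15   # one pixel requested, four decode steps paid
```

    An adaptive scanning strategy like the one the abstract describes aims to shrink exactly this overhead: the fraction of extra samples that must be decoded beyond those the user actually requested.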

  13. The GuideLine Interchange Format

    PubMed Central

    Ohno-Machado, Lucila; Gennari, John H.; Murphy, Shawn N.; Jain, Nilesh L.; Tu, Samson W.; Oliver, Diane E.; Pattison-Gordon, Edward; Greenes, Robert A.; Shortliffe, Edward H.; Barnett, G. Octo

    1998-01-01

    Objective: To allow exchange of clinical practice guidelines among institutions and computer-based applications. Design: The GuideLine Interchange Format (GLIF) specification consists of the GLIF model and the GLIF syntax. The GLIF model is an object-oriented representation that consists of a set of classes for guideline entities, attributes for those classes, and data types for the attribute values. The GLIF syntax specifies the format of the text file that contains the encoding. Methods: Researchers from the InterMed Collaboratory at Columbia University, Harvard University (Brigham and Women's Hospital and Massachusetts General Hospital), and Stanford University analyzed four existing guideline systems to derive a set of requirements for guideline representation. The GLIF specification is a consensus representation developed through a brainstorming process. Four clinical guidelines were encoded in GLIF to assess its expressivity and to study the variability that occurs when two people from different sites encode the same guideline. Results: The encoders reported that GLIF was adequately expressive. A comparison of the encodings revealed substantial variability. Conclusion: GLIF was sufficient to model the guidelines for the four conditions that were examined. GLIF needs improvement in standard representation of medical concepts, criterion logic, temporal information, and uncertainty. PMID:9670133

  14. Polarization-basis tracking scheme for quantum key distribution using revealed sifted key bits.

    PubMed

    Ding, Yu-Yang; Chen, Wei; Chen, Hua; Wang, Chao; Li, Ya-Ping; Wang, Shuang; Yin, Zhen-Qiang; Guo, Guang-Can; Han, Zheng-Fu

    2017-03-15

    The calibration of the polarization basis between the transmitter and receiver is an important task in quantum key distribution. A continuously working polarization-basis tracking scheme (PBTS) will effectively promote the efficiency of the system and reduce the potential security risk of switching between transmission and calibration modes. Here, we propose a single-photon-level continuously working PBTS that uses only the sifted key bits revealed during the error correction procedure, without introducing additional reference light or interrupting the transmission of quantum signals. We applied the scheme to a polarization-encoding BB84 QKD system over a 50 km fiber channel, and obtained an average quantum bit error rate (QBER) of 2.32% with a standard deviation of 0.87% during 24 h of continuous operation. The stable and relatively low QBER validates the effectiveness of the scheme.

  15. Randomized central limit theorems: A unified theory.

    PubMed

    Eliazar, Iddo; Klafter, Joseph

    2010-08-01

    The central limit theorems (CLTs) characterize the macroscopic statistical behavior of large ensembles of independent and identically distributed random variables. The CLTs assert that the universal probability laws governing ensembles' aggregate statistics are either Gaussian or Lévy, and that the universal probability laws governing ensembles' extreme statistics are Fréchet, Weibull, or Gumbel. The scaling schemes underlying the CLTs are deterministic: all ensemble components are scaled by a common deterministic scale. However, there are "random environment" settings in which the underlying scaling schemes are stochastic, with the ensemble components scaled by different random scales. Examples of such settings include Holtsmark's law for gravitational fields and the Stretched Exponential law for relaxation times. In this paper we establish a unified theory of randomized central limit theorems (RCLTs), in which the deterministic CLT scaling schemes are replaced with stochastic scaling schemes, and present "randomized counterparts" to the classic CLTs. The RCLT scaling schemes are shown to be governed by Poisson processes with power-law statistics, and the RCLTs are shown to universally yield the Lévy, Fréchet, and Weibull probability laws.
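    For reference, the deterministic scaling scheme of the classical finite-variance CLT can be written explicitly; the randomized theory the abstract describes replaces the single deterministic factor by component-wise random scales.

```latex
% Classical deterministic scaling: all n components share one
% deterministic scale \sigma\sqrt{n}.
\frac{X_1 + X_2 + \cdots + X_n - n\mu}{\sigma\sqrt{n}}
\;\xrightarrow[n\to\infty]{d}\; \mathcal{N}(0,1),
\qquad \mu = \mathbb{E}[X_i],\quad \sigma^{2} = \mathrm{Var}(X_i).
```

    In the "random environment" settings above, the common normalizer is no longer available; each $X_i$ carries its own random scale, which is what drives the limit laws toward the Lévy, Fréchet, and Weibull families.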

  16. Enumeration and extension of non-equivalent deterministic update schedules in Boolean networks.

    PubMed

    Palma, Eduardo; Salinas, Lilian; Aracena, Julio

    2016-03-01

    Boolean networks (BNs) are commonly used to model genetic regulatory networks (GRNs). Due to the sensitivity of the dynamical behavior to changes in the updating scheme (the order in which the nodes of a network update their state values), it is increasingly common to use different updating rules in the modeling of GRNs to better capture an observed biological phenomenon and thus to obtain more realistic models. In Aracena et al., equivalence classes of deterministic update schedules in BNs that yield exactly the same dynamical behavior of the network were defined according to a certain label function on the arcs of the interaction digraph defined for each scheme. The interaction digraph so labeled (the update digraph) thus encodes the non-equivalent schedules. We address the problem of enumerating all non-equivalent deterministic update schedules of a given BN. First, we show that this is an intractable problem in general. To solve it, we construct an algorithm that determines the set of update digraphs of a BN, using a divide-and-conquer methodology based on the structural characteristics of the interaction digraph. Next, for each update digraph we determine an associated schedule. The algorithm also works when there is only partial knowledge about the relative order in which the states of the nodes are updated. We exhibit examples of how the algorithm works on GRNs published in the literature. An executable file of the UpdateLabel algorithm, made in Java, and files with the outputs of the algorithms used with the GRNs are available at www.inf.udec.cl/~lilian/UDE/. Contact: lilisalinas@udec.cl. Supplementary data are available at Bioinformatics online.
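    The label function behind update digraphs can be sketched in a few lines: for an update schedule s (node to update step), each arc (u, v) is labeled by whether v reads u's old value (u updates at the same step or later) or u's already updated value (u updates earlier). The sign convention below follows one common statement of this construction and may differ from the paper's; it is an illustration, not the paper's algorithm.

```python
# Hypothetical sketch of the arc label function underlying update digraphs.
# Schedules producing identical labelings are dynamically equivalent.

def update_digraph(arcs, schedule):
    """arcs: iterable of (u, v); schedule: dict node -> update step (1-based)."""
    return {
        (u, v): '-' if schedule[u] < schedule[v] else '+'
        for (u, v) in arcs
    }

arcs = [('a', 'b'), ('b', 'a'), ('a', 'a')]
# Parallel schedule: every node sees old values, so every arc is labeled '+'.
parallel = update_digraph(arcs, {'a': 1, 'b': 1})
# Sequential a-then-b: on arc (a, b), b now reads a's updated value.
seq = update_digraph(arcs, {'a': 1, 'b': 2})
```

    Enumerating non-equivalent schedules then amounts to enumerating the distinct labelings realizable by some schedule, which is the hard combinatorial core the abstract refers to.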

  17. An improvement in mass flux convective parameterizations and its impact on seasonal simulations using a coupled model

    NASA Astrophysics Data System (ADS)

    Elsayed Yousef, Ahmed; Ehsan, M. Azhar; Almazroui, Mansour; Assiri, Mazen E.; Al-Khalaf, Abdulrahman K.

    2017-02-01

    A new closure and a modified detrainment formulation for the simplified Arakawa-Schubert (SAS) cumulus parameterization scheme are proposed. In the modified convective scheme, named the King Abdulaziz University (KAU) scheme, the closure depends on both the buoyancy force and the mean relative humidity of the environment. A lateral entrainment rate that varies with environmental relative humidity is proposed, which tends to suppress convection in a dry atmosphere; the detrainment rate also varies with environmental relative humidity. The KAU scheme has been tested in a single-column model (SCM) and implemented in a coupled global climate model (CGCM). Increased coupling between the environment and clouds in the KAU scheme results in improved sensitivity of the depth and strength of convection to environmental humidity compared with the original SAS scheme. The new scheme improves the precipitation simulation, with better representations of moisture and temperature, especially during suppressed convection periods. Implemented in the Seoul National University (SNU) CGCM, the KAU scheme improves simulated precipitation over the tropics, and the simulated precipitation pattern over the Arabian Peninsula and the Northeast African region is also improved.

  18. Eddy current compensated double diffusion encoded (DDE) MRI.

    PubMed

    Mueller, Lars; Wetscherek, Andreas; Kuder, Tristan Anselm; Laun, Frederik Bernd

    2017-01-01

    Eddy currents can lead to image distortions in diffusion-weighted echo planar imaging. A method is proposed to reduce their effects on double diffusion encoding (DDE) MRI experiments and on the microscopic fractional anisotropy (μFA) derived from them. The twice-refocused spin echo scheme was adapted for DDE measurements. To assess the effect of the individual diffusion encodings on image distortions, measurements of a grid of plastic rods in water were performed. The effect of eddy current compensation on μFA measurements was evaluated in the brains of six healthy volunteers. The use of eddy current compensation reduced the signal variation. As expected, the distortions caused by the second encoding were larger than those of the first, entailing a stronger need to compensate for them; for an optimal result, however, both encodings had to be compensated. The artifact reduction strongly improved the measurement of the μFA in the ventricles and gray matter by reducing overestimation. An effect of the compensation on absolute μFA values in white matter was not observed. It is advisable to compensate both encodings in DDE measurements for eddy currents. Magn Reson Med 77:328-335, 2017. © 2015 Wiley Periodicals, Inc.

  19. Node synchronization schemes for the Big Viterbi Decoder

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Swanson, L.; Arnold, S.

    1992-01-01

    The Big Viterbi Decoder (BVD), currently under development for the DSN, includes three separate algorithms to acquire and maintain node and frame synchronization. The first measures the number of decoded bits between two consecutive renormalization operations (renorm rate), the second detects the presence of the frame marker in the decoded bit stream (bit correlation), and the third searches for an encoded version of the frame marker in the encoded input stream (symbol correlation). A detailed account of the operation of the three methods is given, along with a performance comparison.
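    The "bit correlation" idea can be sketched as a sliding agreement count between the decoded bit stream and the known frame marker, flagging the offset where agreement peaks. The 8-bit marker below is illustrative, not the actual DSN frame marker:

```python
# Toy sketch of frame-marker correlation against a decoded bit stream.

def correlate_marker(bits, marker):
    """Return (best_offset, agreement_count) for marker vs. the bit stream."""
    best = (-1, -1)
    for off in range(len(bits) - len(marker) + 1):
        agree = sum(1 for i, m in enumerate(marker) if bits[off + i] == m)
        if agree > best[1]:
            best = (off, agree)
    return best

marker = [1, 0, 1, 1, 0, 0, 1, 0]
stream = [0, 0, 1] + marker + [1, 1, 0]
off, agree = correlate_marker(stream, marker)
# off == 3, agree == 8: a perfect match at the embedded position
```

    In practice the agreement would be compared against a threshold chosen to trade false locks against missed detections, since decoder errors can corrupt marker bits.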

  20. Digital Model of Fourier and Fresnel Quantized Holograms

    NASA Astrophysics Data System (ADS)

    Boriskevich, Anatoly A.; Erokhovets, Valery K.; Tkachenko, Vadim V.

    Some model schemes of Fourier and Fresnel quantized protective holograms with visual effects are suggested. The conditions for an optimum trade-off among the quality of the reconstructed images, the coefficient of data reduction for a hologram, and the number of iterations in the hologram reconstruction process have been estimated through computer modeling. A higher protection level is achieved by means of a greater number of both two-dimensional secret keys (more than 2^128), in the form of pseudorandom amplitude and phase encoding matrices, and one-dimensional encoding key parameters for every image of single-layer or superimposed holograms.

  1. Discrete cosine transform and hash functions toward implementing a (robust-fragile) watermarking scheme

    NASA Astrophysics Data System (ADS)

    Al-Mansoori, Saeed; Kunhu, Alavi

    2013-10-01

    This paper proposes a blind multi-watermarking scheme based on two back-to-back encoders. The first encoder embeds a robust watermark into remote sensing imagery by applying a Discrete Cosine Transform (DCT) approach; such a watermark is used in many applications to protect the copyright of the image. The second encoder embeds a fragile watermark using the SHA-1 hash function; the purpose of the fragile watermark is to prove the authenticity of the image (i.e. tamper detection). The proposed technique was developed in response to new challenges in piracy of remote sensing imagery ownership, which have led researchers to look for different means to secure the ownership of satellite imagery and prevent the illegal use of these resources. Accordingly, the Emirates Institution for Advanced Science and Technology (EIAST) proposed utilizing an existing data security concept by embedding a digital signature, or "watermark", into DubaiSat-1 satellite imagery. In this study, DubaiSat-1 images with 2.5-meter resolution are used as the cover and a colored EIAST logo is used as the watermark. To evaluate the robustness of the proposed technique, several attacks are applied, such as JPEG compression, rotation, and synchronization attacks. Furthermore, tampering attacks are applied to prove image authenticity.

  2. Investigation of Different Constituent Encoders in a Turbo-code Scheme for Reduced Decoder Complexity

    NASA Technical Reports Server (NTRS)

    Kwatra, S. C.

    1998-01-01

    A large number of papers have been published attempting to give an analytical basis for the performance of Turbo-codes. It has been shown that performance improves with increased interleaver length, and procedures have been given to pick the best constituent recursive systematic convolutional codes (RSCCs). However, testing by computer simulation is still required to verify these results. This thesis begins by describing the encoding and decoding schemes used. Next, simulation results on several memory-4 RSCCs are shown. It is found that the best BER performance at low E_b/N_0 is not given by the RSCCs found using the analytic techniques published so far. Results are then given from simulations using a smaller-memory RSCC for one of the constituent encoders; a significant reduction in decoding complexity is obtained with minimal loss in performance. Simulation results are also given for a rate 1/3 Turbo-code, which performed as well as a rate 1/2 Turbo-code as measured by the distance from their respective Shannon limits. Finally, the results of simulations with an inaccurate noise variance measurement are given, from which it is observed that Turbo-decoding is fairly stable with respect to errors in the noise variance measurement.
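    For illustration, a single rate-1/2 RSC constituent encoder can be sketched in a few lines. The memory-2 code with octal generators (7, 5) used below is a common textbook choice, not necessarily one of the memory-4 codes simulated in the thesis:

```python
# Sketch of a recursive systematic convolutional (RSC) encoder, rate 1/2,
# memory 2, feedback polynomial 7 (1 + D + D^2), feedforward 5 (1 + D^2).

def rsc_encode(bits):
    """Return (systematic, parity) bit streams."""
    s1 = s2 = 0                       # shift register: fb one and two steps ago
    systematic, parity = [], []
    for u in bits:
        fb = u ^ s1 ^ s2              # recursive feedback (taps D, D^2)
        p = fb ^ s2                   # feedforward output (taps 1, D^2)
        systematic.append(u)
        parity.append(p)
        s2, s1 = s1, fb               # shift the register
    return systematic, parity
```

    The feedback makes the encoder's impulse response infinite, which is exactly what distinguishes RSC constituent codes from plain feedforward convolutional codes in Turbo-code constructions; a Turbo encoder runs two such encoders, the second on an interleaved copy of the input.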

  3. Minimum variance optimal rate allocation for multiplexed H.264/AVC bitstreams.

    PubMed

    Tagliasacchi, Marco; Valenzise, Giuseppe; Tubaro, Stefano

    2008-07-01

    Consider the problem of transmitting multiple video streams subject to a constant bandwidth constraint. The available bit budget needs to be distributed across the sequences in order to meet some optimality criterion. For example, one might want to minimize the average distortion or, alternatively, minimize the distortion variance, in order to keep almost constant quality among the encoded sequences. Working in the rho-domain, we propose a low-delay rate allocation scheme that, at each time instant, provides a closed-form solution for either of the aforementioned problems. We show that minimizing the distortion variance instead of the average distortion leads, for each of the multiplexed sequences, to a coding penalty of less than 0.5 dB in terms of average PSNR. In addition, our analysis provides an explicit relationship between the model parameters and this loss. In order to smooth the distortion along time as well, we accommodate a shared encoder buffer to compensate for rate fluctuations. Although the proposed scheme is general and can be adopted for any video and image coding standard, we provide experimental evidence by transcoding bitstreams encoded using the state-of-the-art H.264/AVC standard. The results of our simulations reveal that it is possible to achieve distortion smoothing both in time and across the sequences without sacrificing coding efficiency.

  4. A Novel Texture-Quantization-Based Reversible Multiple Watermarking Scheme Applied to Health Information System.

    PubMed

    Turuk, Mousami; Dhande, Ashwin

    2018-04-01

    The recent innovations in information and communication technologies have appreciably changed the panorama of health information systems (HIS). These advances provide new means to process, handle, and share medical images, but also heighten medical image security issues in terms of confidentiality, reliability, and integrity. Digital watermarking has emerged as a new approach offering acceptable solutions to the security issues in HIS. Texture is a significant feature for detecting embedding sites in an image, which can lead to substantial improvement in robustness; however, from the perspective of digital watermarking, this feature has received meager attention in the reported literature. This paper exploits the texture property of an image and presents a novel hybrid texture-quantization-based approach for reversible multiple watermarking. The watermarked image quality has been assessed by the peak signal-to-noise ratio (PSNR), the structural similarity measure (SSIM), and the universal image quality index (UIQI), and the obtained results are superior to the state-of-the-art methods. The algorithm has been evaluated on a variety of medical imaging modalities (CT, MRA, MRI, US), and robustness has been verified considering various image processing attacks, including JPEG compression. The proposed scheme offers additional security using repetitive embedding of BCH-encoded watermarks and an ADM-encrypted ECG signal. Experimental results achieved a maximum hiding capacity of 22,616 bits with a PSNR of 53.64 dB.
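    The PSNR figures quoted here (e.g. 53.64 dB) follow from the standard definition of peak signal-to-noise ratio; a minimal reference computation for 8-bit pixel data:

```python
# Standard PSNR between an original and a watermarked image, treated here as
# flat sequences of 8-bit pixel values (peak = 255).
import math

def psnr(original, watermarked, peak=255.0):
    """PSNR in dB between two equally sized pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(original, watermarked)) / len(original)
    if mse == 0:
        return float('inf')            # identical images
    return 10.0 * math.log10(peak * peak / mse)
```

    A uniform error of one gray level, for instance, gives an MSE of 1 and hence about 48.13 dB; values above roughly 40 dB are conventionally taken as visually transparent embedding, which puts the reported 53.64 dB comfortably in that range.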

  5. The Role of Nonlinear Gradients in Parallel Imaging: A k-Space Based Analysis.

    PubMed

    Galiana, Gigi; Stockmann, Jason P; Tam, Leo; Peters, Dana; Tagare, Hemant; Constable, R Todd

    2012-09-01

    Sequences that encode the spatial information of an object using nonlinear gradient fields are a new frontier in MRI, with potential to provide lower peripheral nerve stimulation, windowed fields of view, tailored spatially varying resolution, curved slices that mirror physiological geometry, and, most importantly, very fast parallel imaging with multichannel coils. The acceleration for multichannel images is generally explained by the fact that curvilinear gradient isocontours better complement the azimuthal spatial encoding provided by typical receiver arrays. However, the details of this complementarity have been difficult to specify. We present a simple and intuitive framework for describing the mechanics of image formation with nonlinear gradients, and we use this framework to review some of the main classes of nonlinear encoding schemes.

  6. Optical encrypted holographic memory using triple random phase-encoded multiplexing in photorefractive LiNbO3:Fe crystal

    NASA Astrophysics Data System (ADS)

    Tang, Li-Chuan; Hu, Guang W.; Russell, Kendra L.; Chang, Chen S.; Chang, Chi Ching

    2000-10-01

    We propose a new holographic memory scheme based on random phase-encoded multiplexing in a photorefractive LiNbO3:Fe crystal. Experimental results show that rotating a diffuser placed as a random phase modulator in the path of the reference beam provides a simple yet effective method of increasing the holographic storage capabilities of the crystal. Combining this rotational multiplexing with angular multiplexing offers further advantages. Storage capabilities can be optimized by using a post-image random phase plate in the path of the object beam. The technique is applied to a triple phase-encoded optical security system that takes advantage of the high angular selectivity of the angular-rotational multiplexing components.

  7. Optical noise-free image encryption based on quick response code and high dimension chaotic system in gyrator transform domain

    NASA Astrophysics Data System (ADS)

    Sui, Liansheng; Xu, Minjie; Tian, Ailing

    2017-04-01

    A novel optical image encryption scheme is proposed based on a quick response (QR) code and a high-dimension chaotic system, in which only the intensity distribution of the encoded information is recorded as the ciphertext. Initially, the QR code is generated from the plain image and placed in the input plane of the double random phase encoding architecture. The code is then encrypted to a ciphertext with a noise-like distribution by using two cascaded gyrator transforms. In the encryption process, parameters such as the rotation angles and random phase masks are generated as interim variables and functions based on the Chen system. A new phase retrieval algorithm is designed to reconstruct the initial QR code in the decryption process, in which a priori information such as the three position detection patterns is used as the support constraint. The original image can be obtained without any energy loss by scanning the decrypted code with a mobile device. The ciphertext image is a real-valued function, which is more convenient to store and transmit. Meanwhile, the security of the proposed scheme is greatly enhanced by the high sensitivity of the Chen system to its initial values. Extensive cryptanalysis and simulation have been performed to demonstrate the feasibility and effectiveness of the proposed scheme.

  8. Hospital Coding Practice, Data Quality, and DRG-Based Reimbursement under the Thai Universal Coverage Scheme

    ERIC Educational Resources Information Center

    Pongpirul, Krit

    2011-01-01

    In the Thai Universal Coverage scheme, hospital providers are paid for their inpatient care using Diagnosis Related Group (DRG) reimbursement. The questionable quality of the submitted DRG codes has been a concern, while knowledge about hospital coding practice has been lacking. The objectives of this thesis are (1) To explore hospital coding…

  9. Supporting the Work Arrangements of Cooperating Teachers and University Supervisors to Better Train Preservice Teachers: A New Theoretical Contribution

    ERIC Educational Resources Information Center

    Escalié, Guillaume; Chaliès, Sébastien

    2016-01-01

    This longitudinal case study examines whether a school-based training scheme that brings together different categories of teacher educators (university supervisors and cooperating teachers) engenders true collective training activity and, if so, whether this collective work contributes to pre-service teacher education. The scheme grew out of a…

  10. The Why, What, and Impact of GPA at Oxford Brookes University

    ERIC Educational Resources Information Center

    Andrews, Matthew

    2016-01-01

    This paper examines the introduction at Oxford Brookes University of a Grade Point Average (GPA) scheme alongside the traditional honours degree classification. It considers the reasons for the introduction of GPA, the way in which the scheme was implemented, and offers an insight into the impact of GPA at Brookes. Finally, the paper considers…

  11. Bandwidth reduction for video-on-demand broadcasting using secondary content insertion

    NASA Astrophysics Data System (ADS)

    Golynski, Alexander; Lopez-Ortiz, Alejandro; Poirier, Guillaume; Quimper, Claude-Guy

    2005-01-01

    An optimal broadcasting scheme under the presence of secondary content (i.e. advertisements) is proposed. The proposed scheme works both for movies encoded in a Constant Bit Rate (CBR) or a Variable Bit Rate (VBR) format. It is shown experimentally that secondary content in movies can make Video-on-Demand (VoD) broadcasting systems more efficient. An efficient algorithm is given to compute the optimal broadcasting schedule with secondary content, which in particular significantly improves over the best previously known algorithm for computing the optimal broadcasting schedule without secondary content.

  12. Experimental investigation of colorless ONU employing superstructured fiber Bragg gratings in WDM/OCDMA-PON

    NASA Astrophysics Data System (ADS)

    Wang, Dawei; Cheng, Liang; Chen, Biao

    2009-11-01

The colorless optical network unit (ONU) is a very important concept for wavelength division multiplexing (WDM) based passive optical networks (PONs). We present a novel scheme to construct non-wavelength-selective ONUs in a WDM/OCDMA-PON by making use of the broad spectrum band of superstructured fiber Bragg gratings (SSFBGs). The experimental results reveal that the spectrum-sliced encoded signals from different wavelength channels can be successfully decoded with the same SSFBGs; the proposed colorless ONU scheme is thus shown to be feasible.

  13. An adaptive vector quantization scheme

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.

    1990-01-01

    Vector quantization is known to be an effective compression scheme to achieve a low bit rate so as to minimize communication channel bandwidth and also to reduce digital memory storage while maintaining the necessary fidelity of the data. However, the large number of computations required in vector quantizers has been a handicap in using vector quantization for low-rate source coding. An adaptive vector quantization algorithm is introduced that is inherently suitable for simple hardware implementation because it has a simple architecture. It allows fast encoding and decoding because it requires only addition and subtraction operations.
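The abstract's point that encoding can run on additions and subtractions alone can be illustrated with a nearest-codeword search under the L1 (sum of absolute differences) metric. This is a minimal sketch of generic VQ encoding, not Cheung's specific adaptive algorithm; the codebook and data values below are made up for illustration.

```python
# Minimal vector-quantization encoding sketch: map each input vector to the
# index of its nearest codeword under the L1 metric, which needs only
# subtractions, sign tests, and additions. Illustrative only -- not the
# adaptive algorithm of the cited work.
def l1_distance(u, v):
    # |a - b| per component, accumulated by addition
    return sum(abs(a - b) for a, b in zip(u, v))

def vq_encode(vectors, codebook):
    """Return one codebook index per input vector."""
    return [min(range(len(codebook)), key=lambda i: l1_distance(v, codebook[i]))
            for v in vectors]

def vq_decode(indices, codebook):
    # Decoding is a plain table lookup
    return [codebook[i] for i in indices]

# Hypothetical 2-D codebook and data
codebook = [(0, 0), (10, 10), (0, 10), (10, 0)]
data = [(1, 2), (9, 9), (2, 8)]
indices = vq_encode(data, codebook)
print(indices)  # -> [0, 1, 2]
```

Transmitting the index list instead of the vectors is what yields the low bit rate; the fidelity loss is the L1 gap between each vector and its chosen codeword.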

  14. An efficient coding theory for a dynamic trajectory predicts non-uniform allocation of entorhinal grid cells to modules.

    PubMed

    Mosheiff, Noga; Agmon, Haggai; Moriel, Avraham; Burak, Yoram

    2017-06-01

Grid cells in the entorhinal cortex encode the position of an animal in its environment with spatially periodic tuning curves with different periodicities. Recent experiments established that these cells are functionally organized in discrete modules with uniform grid spacing. Here we develop a theory for efficient coding of position, which takes into account the temporal statistics of the animal's motion. The theory predicts a sharp decrease of module population sizes with grid spacing, in agreement with the trend seen in the experimental data. We identify a simple scheme for readout of the grid cell code by neural circuitry that can match in accuracy the optimal Bayesian decoder. This readout scheme requires persistence over different timescales, depending on the grid cell module. Thus, we propose that the brain may employ an efficient representation of position which takes advantage of the spatiotemporal statistics of the encoded variable, in keeping with the principles that govern early sensory processing.

  15. Improved field free line magnetic particle imaging using saddle coils.

    PubMed

    Erbe, Marlitt; Sattel, Timo F; Buzug, Thorsten M

    2013-12-01

Magnetic particle imaging (MPI) is a novel tracer-based imaging method that detects the distribution of superparamagnetic iron oxide (SPIO) nanoparticles in vivo, in three dimensions and in real time. Conventionally, MPI uses the signal emitted by SPIO tracer material located at a field free point (FFP). To increase the sensitivity of MPI, however, an alternative encoding scheme collecting the particle signal along a field free line (FFL) was proposed. To provide the magnetic fields needed for line imaging in MPI, a scanner setup that is very efficient with respect to electrical power consumption is needed. At the same time, the scanner needs to provide high magnetic field homogeneity along the FFL, as well as parallel to its alignment, to prevent the appearance of artifacts when using the efficient Radon-based reconstruction methods that arise for a line encoding scheme. This work presents a dynamic FFL scanner setup for MPI that outperforms all previously presented setups in electrical power consumption as well as magnetic field quality.

  16. An efficient coding theory for a dynamic trajectory predicts non-uniform allocation of entorhinal grid cells to modules

    PubMed Central

    Mosheiff, Noga; Agmon, Haggai; Moriel, Avraham

    2017-01-01

Grid cells in the entorhinal cortex encode the position of an animal in its environment with spatially periodic tuning curves with different periodicities. Recent experiments established that these cells are functionally organized in discrete modules with uniform grid spacing. Here we develop a theory for efficient coding of position, which takes into account the temporal statistics of the animal’s motion. The theory predicts a sharp decrease of module population sizes with grid spacing, in agreement with the trend seen in the experimental data. We identify a simple scheme for readout of the grid cell code by neural circuitry that can match in accuracy the optimal Bayesian decoder. This readout scheme requires persistence over different timescales, depending on the grid cell module. Thus, we propose that the brain may employ an efficient representation of position which takes advantage of the spatiotemporal statistics of the encoded variable, in keeping with the principles that govern early sensory processing. PMID:28628647

  17. Real-time motion-based H.263+ frame rate control

    NASA Astrophysics Data System (ADS)

    Song, Hwangjun; Kim, JongWon; Kuo, C.-C. Jay

    1998-12-01

Most existing H.263+ rate control algorithms, e.g. the one adopted in the test model near-term (TMN8), focus on macroblock-layer rate control and low latency under the assumptions of a constant frame rate and a constant bit rate (CBR) channel. These algorithms do not accommodate transmission bandwidth fluctuation efficiently, and the resulting video quality can be degraded. In this work, we propose a new H.263+ rate control scheme that supports variable bit rate (VBR) channels through the adjustment of the encoding frame rate and the quantization parameter. A fast algorithm for encoding frame rate control, based on the inherent motion information within a sliding window of the underlying video, is developed to efficiently pursue a good tradeoff between spatial and temporal quality. The proposed rate control algorithm also takes the time-varying bandwidth characteristic of the Internet into account and is able to accommodate the change accordingly. Experimental results are provided to demonstrate the superior performance of the proposed scheme.
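The idea of driving the encoding frame rate from motion activity inside a sliding window can be sketched as follows. The linear mapping and every constant below are my own illustrative assumptions, not the algorithm of the cited paper.

```python
# Toy sketch of motion-based frame-rate selection: average the motion-activity
# values inside a sliding window and map the result linearly onto a frame-rate
# range. The mapping and all constants are illustrative assumptions, not the
# fast algorithm of the cited paper.
def choose_frame_rate(motion, window=5, fr_min=7.5, fr_max=30.0, max_motion=16.0):
    recent = motion[-window:]                 # sliding window of motion values
    activity = sum(recent) / len(recent)      # average motion activity
    frac = min(activity / max_motion, 1.0)    # normalize to [0, 1]
    return fr_min + frac * (fr_max - fr_min)  # high motion -> high frame rate

print(choose_frame_rate([0, 0, 0, 0, 0]))  # -> 7.5 (static scene: spend bits spatially)
print(choose_frame_rate([16] * 5))         # -> 30.0 (fast motion: favor temporal quality)
```

The spatial/temporal tradeoff appears in the endpoints: a low frame rate leaves more bits per frame (better spatial quality), while a high frame rate tracks fast motion (better temporal quality).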

  18. Low-Complexity, Digital Encoder/Modulator Developed for High-Data-Rate Satellite B-ISDN Applications

    NASA Technical Reports Server (NTRS)

    1996-01-01

    The Space Electronics Division at the NASA Lewis Research Center is developing advanced electronic technologies for the space communications and remote sensing systems of tomorrow. As part of the continuing effort to advance the state-of-the-art in satellite communications and remote sensing systems, Lewis developed a low-cost, modular, programmable, and reconfigurable all-digital encoder-modulator (DEM) for medium- to high-data-rate radiofrequency communication links. The DEM is particularly well suited to high-data-rate downlinks to ground terminals or direct data downlinks from near-Earth science platforms. It can support data rates up to 250 megabits per second (Mbps) and several modulation schemes, including the traditional binary phase-shift keying (BPSK) and quadrature phase-shift keying (QPSK) modes, as well as higher order schemes such as 8 phase-shift keying (8PSK) and 16 quadrature amplitude modulation (16QAM). The DEM architecture also can precompensate for channel disturbances and alleviate amplitude degradations caused by nonlinear transponder characteristics.

  19. An attack aimed at active phase compensation in one-way phase-encoded QKD systems

    NASA Astrophysics Data System (ADS)

    Dong, Zhao-Yue; Yu, Ning-Na; Wei, Zheng-Jun; Wang, Jin-Dong; Zhang, Zhi-Ming

    2014-08-01

Phase drift is an inherent problem in one-way phase-encoded quantum key distribution (QKD) systems. Although combining passive with active phase compensation (APC) processes can effectively compensate for the phase drift, the security problems brought about by these processes are rarely considered. In this paper, we point out a security hole in the APC process and put forward a corresponding attack scheme. Under our proposed attack, the quantum bit error rate (QBER) of the QKD can be close to zero under some conditions. However, under the same conditions the ratio r of key bits "0" to key bits "1" that Bob (the legitimate receiver) obtains is no longer 1:1 but 2:1, which may expose Eve (the eavesdropper). In order to solve this problem, we modify the resend strategy of the attack scheme, which can force r back to 1:1 and the QBER below the tolerable QBER.

  20. A multispectral photon-counting double random phase encoding scheme for image authentication.

    PubMed

    Yi, Faliu; Moon, Inkyu; Lee, Yeon H

    2014-05-20

    In this paper, we propose a new method for color image-based authentication that combines multispectral photon-counting imaging (MPCI) and double random phase encoding (DRPE) schemes. The sparsely distributed information from MPCI and the stationary white noise signal from DRPE make intruder attacks difficult. In this authentication method, the original multispectral RGB color image is down-sampled into a Bayer image. The three types of color samples (red, green and blue color) in the Bayer image are encrypted with DRPE and the amplitude part of the resulting image is photon counted. The corresponding phase information that has nonzero amplitude after photon counting is then kept for decryption. Experimental results show that the retrieved images from the proposed method do not visually resemble their original counterparts. Nevertheless, the original color image can be efficiently verified with statistical nonlinear correlations. Our experimental results also show that different interpolation algorithms applied to Bayer images result in different verification effects for multispectral RGB color images.

  1. Securing image information using double random phase encoding and parallel compressive sensing with updated sampling processes

    NASA Astrophysics Data System (ADS)

    Hu, Guiqiang; Xiao, Di; Wang, Yong; Xiang, Tao; Zhou, Qing

    2017-11-01

Recently, a new kind of image encryption approach using compressive sensing (CS) and double random phase encoding has received much attention due to advantages such as compressibility and robustness. However, this approach is found to be vulnerable to chosen plaintext attack (CPA) if the CS measurement matrix is re-used. Therefore, designing an efficient measurement matrix updating mechanism that ensures resistance to CPA is of practical significance. In this paper, we provide a novel solution that updates the CS measurement matrix by altering the secret sparse basis with the help of counter mode operation. In particular, the secret sparse basis is implemented by a reality-preserving fractional cosine transform matrix. Compared with a conventional CS-based cryptosystem that generates all the random entries of the measurement matrix from scratch, our scheme offers superior efficiency while guaranteeing resistance to CPA. Experimental and analysis results show that the proposed scheme has good security performance and is robust against noise and occlusion.

  2. Five-wave-packet quantum error correction based on continuous-variable cluster entanglement

    PubMed Central

    Hao, Shuhong; Su, Xiaolong; Tian, Caixing; Xie, Changde; Peng, Kunchi

    2015-01-01

Quantum error correction protects the quantum state against noise and decoherence in quantum communication and quantum computation, enabling fault-tolerant quantum information processing. We experimentally demonstrate a quantum error correction scheme with a five-wave-packet code against a single stochastic error, the original theoretical model of which was first proposed by S. L. Braunstein and T. A. Walker. Five submodes of a continuous-variable cluster entangled state of light are used as five encoding channels. In particular, in our encoding scheme the information of the input state is distributed over only three of the five channels, so any error appearing in the remaining two channels never affects the output state, i.e. the output quantum state is immune to errors in those two channels. The stochastic error on a single channel is corrected for both vacuum and squeezed input states, and the achieved fidelities of the output states are beyond the corresponding classical limit. PMID:26498395

  3. Ontology aided modeling of organic reaction mechanisms with flexible and fragment based XML markup procedures.

    PubMed

    Sankar, Punnaivanam; Aghila, Gnanasekaran

    2007-01-01

Mechanism models are developed for primary organic reactions, encoding the structural fragments that undergo substitution, addition, elimination, and rearrangement. In the proposed models, every structural component of the mechanistic pathways is represented with a flexible, fragment-based markup technique in XML syntax. A significant feature of the system is the encoding of electron movements along with the other components, such as charges, partial charges, half-bonded species, lone-pair electrons, free radicals, and reaction arrows, needed for a complete representation of a reaction mechanism. The rendering of reaction schemes described with the proposed methodology is achieved with a concise XML extension language interoperating with the structure markup. The reaction scheme is visualized as 2D graphics in a browser by converting it into SVG documents, enabling the layouts conventionally perceived by chemists. An automatic representation of complex reaction mechanism patterns is achieved by reusing the knowledge in chemical ontologies and developing artificial intelligence components in terms of axioms.

  4. Pre-coding method and apparatus for multiple source or time-shifted single source data and corresponding inverse post-decoding method and apparatus

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu (Inventor)

    1997-01-01

    A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
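The double-difference operation described in the abstract can be sketched as a cross-delta between two correlated data sets followed by an adjacent-delta along the result. This is a minimal illustration of the idea only: the patent's bit-level format is not reproduced, and passing the first element through unchanged is my assumption so the transform stays invertible.

```python
# Sketch of the double-difference pre-coding idea: an adjacent-delta calculation
# performed on the cross-delta of two correlated data sets. Keeping the first
# element of the adjacent-delta unchanged is an assumption made here so that the
# transform remains invertible; the patent's exact format may differ.
def cross_delta(first, second):
    # element-wise difference between two correlated data sets (e.g. two
    # adjacent spectral bands, or two time-shifted captures of one source)
    return [b - a for a, b in zip(first, second)]

def adjacent_delta(data):
    # difference between consecutive members; first member passed through
    return [data[0]] + [data[i] - data[i - 1] for i in range(1, len(data))]

def double_difference(first, second):
    return adjacent_delta(cross_delta(first, second))

band_a = [100, 102, 105, 103]   # hypothetical sample values
band_b = [101, 104, 108, 107]
print(double_difference(band_a, band_b))  # -> [1, 1, 1, 1]
```

The point of the transform is visible in the output: two correlated bands with a slowly drifting offset collapse to a near-constant sequence, which an entropy coder then compresses far better than either band alone.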

  5. Pre-coding method and apparatus for multiple source or time-shifted single source data and corresponding inverse post-decoding method and apparatus

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu (Inventor)

    1998-01-01

    A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.

  6. Tensor network method for reversible classical computation

    NASA Astrophysics Data System (ADS)

    Yang, Zhi-Cheng; Kourtis, Stefanos; Chamon, Claudio; Mucciolo, Eduardo R.; Ruckenstein, Andrei E.

    2018-03-01

    We develop a tensor network technique that can solve universal reversible classical computational problems, formulated as vertex models on a square lattice [Nat. Commun. 8, 15303 (2017), 10.1038/ncomms15303]. By encoding the truth table of each vertex constraint in a tensor, the total number of solutions compatible with partial inputs and outputs at the boundary can be represented as the full contraction of a tensor network. We introduce an iterative compression-decimation (ICD) scheme that performs this contraction efficiently. The ICD algorithm first propagates local constraints to longer ranges via repeated contraction-decomposition sweeps over all lattice bonds, thus achieving compression on a given length scale. It then decimates the lattice via coarse-graining tensor contractions. Repeated iterations of these two steps gradually collapse the tensor network and ultimately yield the exact tensor trace for large systems, without the need for manual control of tensor dimensions. Our protocol allows us to obtain the exact number of solutions for computations where a naive enumeration would take astronomically long times.
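The core idea, encoding a gate's truth table as a 0/1 tensor so that contraction counts the compatible solutions, can be shown on a single XOR vertex. This is a toy example of mine, not the paper's ICD code, which contracts a whole lattice of such tensors.

```python
# Toy illustration of the counting idea: encode the truth table of a vertex
# constraint (here XOR with inputs a, b and output c) as a 0/1 tensor, then
# "contract" (sum) over the unconstrained indices to count solutions.
def xor_tensor():
    # T[a][b][c] = 1 iff the assignment satisfies c = a XOR b
    return [[[1 if c == (a ^ b) else 0 for c in (0, 1)]
             for b in (0, 1)] for a in (0, 1)]

T = xor_tensor()

# Number of solutions with no boundary condition: sum over all indices.
total = sum(T[a][b][c] for a in (0, 1) for b in (0, 1) for c in (0, 1))
print(total)  # -> 4 (every input pair has exactly one valid output)

# Number of solutions compatible with the partial output c = 1.
fixed_output = sum(T[a][b][1] for a in (0, 1) for b in (0, 1))
print(fixed_output)  # -> 2 (input pairs 01 and 10)
```

Fixing a boundary index before summing is the one-vertex analogue of imposing partial inputs/outputs at the lattice boundary; the full contraction of a network of such tensors yields the total solution count.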

  7. Phase-Image Encryption Based on 3D-Lorenz Chaotic System and Double Random Phase Encoding

    NASA Astrophysics Data System (ADS)

    Sharma, Neha; Saini, Indu; Yadav, AK; Singh, Phool

    2017-12-01

In this paper, an encryption scheme for phase-images based on a 3D-Lorenz chaotic system in the Fourier domain under the 4f optical system is presented. The encryption scheme uses a random amplitude mask in the spatial domain and a random phase mask in the frequency domain. Its inputs are phase-images, which are relatively more secure than intensity images because of their non-linearity. The proposed scheme further derives its strength from the use of the 3D-Lorenz transform in the frequency domain. Although an experimental setup for optical realization of the proposed scheme has been provided, the results presented here are based on simulations in MATLAB. The scheme has been validated for grayscale images and is found to be sensitive to the encryption parameters of the Lorenz system. The attack analysis shows that the key-space is large enough to resist brute-force attack, and the scheme is also resistant to noise and occlusion attacks. Statistical analysis and an analysis based on the correlation distribution of adjacent pixels have been performed to test the efficacy of the encryption scheme. The results indicate that the proposed encryption scheme possesses a high level of security.

  8. Optical authentication based on moiré effect of nonlinear gratings in phase space

    NASA Astrophysics Data System (ADS)

    Liao, Meihua; He, Wenqi; Wu, Jiachen; Lu, Dajiang; Liu, Xiaoli; Peng, Xiang

    2015-12-01

    An optical authentication scheme based on the moiré effect of nonlinear gratings in phase space is proposed. According to the phase function relationship of the moiré effect in phase space, an arbitrary authentication image can be encoded into two nonlinear gratings which serve as the authentication lock (AL) and the authentication key (AK). The AL is stored in the authentication system while the AK is assigned to the authorized user. The authentication procedure can be performed using an optoelectronic approach, while the design process is accomplished by a digital approach. Furthermore, this optical authentication scheme can be extended for multiple users with different security levels. The proposed scheme can not only verify the legality of a user identity, but can also discriminate and control the security levels of legal users. Theoretical analysis and simulation experiments are provided to verify the feasibility and effectiveness of the proposed scheme.

  9. A family of chaotic pure analog coding schemes based on baker's map function

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Li, Jing; Lu, Xuanxuan; Yuen, Chau; Wu, Jun

    2015-12-01

This paper considers a family of pure analog coding schemes constructed from dynamic systems governed by chaotic functions: the baker's map function and its variants. Various decoding methods, including maximum likelihood (ML), minimum mean square error (MMSE), and mixed ML-MMSE decoding algorithms, are developed for these novel encoding schemes. The proposed mirrored baker's and single-input baker's analog codes provide balanced protection against fold errors (large distortion) and weak distortion, and outperform the classical chaotic analog coding and analog joint source-channel coding schemes in the literature. Compared to a conventional digital communication system, where quantization and digital error correction codes are used, the proposed analog coding system has graceful performance evolution, low decoding latency, and no quantization noise. Numerical results show that under the same bandwidth expansion, the proposed analog system outperforms the digital ones over a wide signal-to-noise ratio (SNR) range.
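The bandwidth-expansion principle behind such codes can be sketched with the 1-D core of a baker's-type map, the dyadic shift x ↦ 2x mod 1: a source value in [0, 1) is stretched into a longer analog codeword by iteration. This is a generic illustration of mine; the mirrored and single-input variants in the cited paper use different map constructions.

```python
# Generic sketch of chaotic analog encoding with a baker's-type (dyadic) map:
# the source value x0 in [0, 1) is expanded into a length-n analog codeword by
# iterating x -> 2x mod 1. This shows only the bandwidth-expansion principle;
# the mirrored and single-input baker's variants of the cited paper differ.
def dyadic_map(x):
    return (2.0 * x) % 1.0

def encode(x0, n):
    """Return the analog codeword (x0, x1, ..., x_{n-1})."""
    codeword = [x0]
    for _ in range(n - 1):
        codeword.append(dyadic_map(codeword[-1]))
    return codeword

print(encode(0.3, 4))
```

Because the map doubles small perturbations at every step, later codeword samples carry the fine (low-significance) detail of x0, which is what gives these codes their graceful degradation: channel noise corrupts fine detail before coarse structure.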

  10. Health financing for universal coverage and health system performance: concepts and implications for policy

    PubMed Central

    2013-01-01

Unless the concept is clearly understood, “universal coverage” (or universal health coverage, UHC) can be used to justify practically any health financing reform or scheme. This paper unpacks the definition of health financing for universal coverage as used in the World Health Organization’s World health report 2010 to show how UHC embodies specific health system goals and intermediate objectives and, broadly, how health financing reforms can influence these. All countries seek to improve equity in the use of health services, service quality and financial protection for their populations. Hence, the pursuit of UHC is relevant to every country. Health financing policy is an integral part of efforts to move towards UHC, but for health financing policy to be aligned with the pursuit of UHC, health system reforms need to be aimed explicitly at improving coverage and the intermediate objectives linked to it, namely, efficiency, equity in health resource distribution and transparency and accountability. The unit of analysis for goals and objectives must be the population and health system as a whole. What matters is not how a particular financing scheme affects its individual members, but rather, how it influences progress towards UHC at the population level. Concern only with specific schemes is incompatible with a universal coverage approach and may even undermine UHC, particularly in terms of equity. Conversely, if a scheme is fully oriented towards system-level goals and objectives, it can further progress towards UHC. Policy and policy analysis need to shift from the scheme to the system level. PMID:23940408

  11. Health financing for universal coverage and health system performance: concepts and implications for policy.

    PubMed

    Kutzin, Joseph

    2013-08-01

    Unless the concept is clearly understood, "universal coverage" (or universal health coverage, UHC) can be used to justify practically any health financing reform or scheme. This paper unpacks the definition of health financing for universal coverage as used in the World Health Organization's World health report 2010 to show how UHC embodies specific health system goals and intermediate objectives and, broadly, how health financing reforms can influence these. All countries seek to improve equity in the use of health services, service quality and financial protection for their populations. Hence, the pursuit of UHC is relevant to every country. Health financing policy is an integral part of efforts to move towards UHC, but for health financing policy to be aligned with the pursuit of UHC, health system reforms need to be aimed explicitly at improving coverage and the intermediate objectives linked to it, namely, efficiency, equity in health resource distribution and transparency and accountability. The unit of analysis for goals and objectives must be the population and health system as a whole. What matters is not how a particular financing scheme affects its individual members, but rather, how it influences progress towards UHC at the population level. Concern only with specific schemes is incompatible with a universal coverage approach and may even undermine UHC, particularly in terms of equity. Conversely, if a scheme is fully oriented towards system-level goals and objectives, it can further progress towards UHC. Policy and policy analysis need to shift from the scheme to the system level.

  12. Quantum Key Recycling with 8-state encoding (The Quantum One-Time Pad is more interesting than we thought)

    NASA Astrophysics Data System (ADS)

    Škorić, Boris; de Vries, Manon

Perfect encryption of quantum states using the Quantum One-Time Pad (QOTP) requires two classical key bits per qubit. Almost-perfect encryption, with information-theoretic security, requires only slightly more than one. We slightly improve lower bounds on the key length. We show that a key length of n + 2 log(1/ε) suffices to encrypt n qubits in such a way that the cipherstate’s L1-distance from uniformity is upper-bounded by ε. For a stricter security definition involving the ∞-norm, we prove that a key length of n + log n + 2 log(1/ε) + 1 + (1/n) log(1/δ) + log[ln 2/(1 − ε)] suffices, where δ is a small probability of failure. Our proof uses Pauli operators, whereas previous results on the ∞-norm needed Haar-measure sampling. We show how to QOTP-encrypt classical plaintext in a nontrivial way: we encode a plaintext bit as the vector ±(1, 1, 1)/√3 on the Bloch sphere. Applying the Pauli encryption operators results in eight possible cipherstates which are equally spread out on the Bloch sphere. This encoding, especially when combined with the half-key-length option of QOTP, has advantages over 4-state and 6-state encoding in applications such as Quantum Key Recycling (QKR) and Unclonable Encryption (UE). We propose a key recycling scheme that is more efficient and can tolerate more noise than a recent scheme by Fehr and Salvail. For 8-state QOTP encryption with pseudorandom keys, we perform a statistical analysis of the cipherstate eigenvalues. We present numerics up to nine qubits.
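The geometry of the eight cipherstates can be checked directly: conjugation by a Pauli operator flips the two Bloch-vector components orthogonal to that Pauli's axis, so applying the four Pauli encryption operators to the plaintext Bloch vector ±(1, 1, 1)/√3 (the unit-normalized form of the vector in the abstract) yields the eight corners of a cube. This is a sketch of mine illustrating the stated geometry, not the authors' code.

```python
# Check that Pauli-encrypting the plaintext Bloch vectors +-(1,1,1)/sqrt(3)
# produces eight cipherstates equally spread on the Bloch sphere (cube corners).
# Illustrative sketch only, not the cited paper's code.
import math

# Conjugation by a Pauli operator flips the two Bloch components orthogonal
# to that Pauli's axis; the identity leaves the vector fixed.
PAULI_ACTION = {
    "I": lambda v: ( v[0],  v[1],  v[2]),
    "X": lambda v: ( v[0], -v[1], -v[2]),
    "Y": lambda v: (-v[0],  v[1], -v[2]),
    "Z": lambda v: (-v[0], -v[1],  v[2]),
}

def cipherstates(bit):
    """Bloch vectors of the four possible cipherstates for one plaintext bit."""
    s = 1.0 / math.sqrt(3.0)
    plain = (s, s, s) if bit == 0 else (-s, -s, -s)   # +-(1,1,1)/sqrt(3)
    return [act(plain) for act in PAULI_ACTION.values()]

all_eight = cipherstates(0) + cipherstates(1)
# Scaling by sqrt(3) and rounding recovers the sign pattern of each corner.
corners = sorted(tuple(round(c * math.sqrt(3.0)) for c in v) for v in all_eight)
print(len(set(corners)))  # -> 8: every sign pattern (+-1, +-1, +-1) appears once
```

The even/odd count of minus signs separates the two plaintext values: bit 0 maps to the four corners with an even number of negative components, bit 1 to the other four.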

  13. Universal scheme for finite-probability perfect transfer of arbitrary multispin states through spin chains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Zhong-Xiao, E-mail: zxman@mail.qfnu.edu.cn; An, Nguyen Ba, E-mail: nban@iop.vast.ac.vn; Xia, Yun-Jie, E-mail: yjxia@mail.qfnu.edu.cn

In combination with the theories of open systems and quantum recovering measurement, we propose a quantum state transfer scheme using spin chains by performing two sequential operations: a projective measurement on the spins of the ‘environment’, followed by suitably designed quantum recovering measurements on the spins of interest. The scheme allows perfect transfer of arbitrary multispin states through multiple parallel spin chains with finite probability. Our scheme is universal in the sense that it is state-independent and applicable to any model possessing spin–spin interactions. We also present possible methods to implement the required measurements, taking into account current experimental technologies. As applications, we consider two typical models for which the probabilities of perfect state transfer are found to be reasonably high at optimally chosen moments during the time evolution. Highlights: • Scheme that can achieve perfect quantum state transfer is devised. • The scheme is state-independent and applicable to any spin-interaction models. • The scheme allows perfect transfer of arbitrary multispin states. • Applications to two typical models are considered in detail.

  14. What the "Overall" Doesn't Tell about World University Rankings: Examples from ARWU, QSWUR, and THEWUR in 2013

    ERIC Educational Resources Information Center

    Soh, Kaycheng

    2015-01-01

    In the various world university ranking schemes, the "Overall" is a sum of the weighted indicator scores. As the indicators are of a different nature from each other, "Overall" conceals important differences. Factor analysis of the data from three prominent ranking schemes reveals that there are two factors in each of the…

  15. Developing Assessment Policy and Evaluating Practice: A Case Study of the Introduction of A New Marking Scheme

    ERIC Educational Resources Information Center

    Handley, Fiona J. L.; Read, Ann

    2017-01-01

    In 2011, Southampton Solent University, a post-1992 university in southern England, introduced a new marking scheme with the aims of changing marking practice to achieve greater transparency and consistency in marking, and to ensure that the full range of marks was being awarded to students. This paper discusses the strategic background to the…

  16. Quantum Stabilizer Codes Can Realize Access Structures Impossible by Classical Secret Sharing

    NASA Astrophysics Data System (ADS)

    Matsumoto, Ryutaroh

We show a simple example of a secret sharing scheme, encoding a classical secret into quantum shares, that can realize an access structure impossible for classical information processing when the size of each share is limited. The example is based on quantum stabilizer codes.

  17. A Peer-Assisted Teaching Scheme to Improve Units with Critically Low Student Satisfaction: Opportunities and Challenges

    ERIC Educational Resources Information Center

    Carbone, Angela

    2014-01-01

    This paper outlines a peer-assisted teaching scheme (PATS) which was piloted in the Faculty of Information Technology at Monash University, Australia to address the low student satisfaction with the quality of information and communication technology units. Positive results from the pilot scheme led to a trial of the scheme in other disciplines.…

  18. Enriching User-Oriented Class Associations for Library Classification Schemes.

    ERIC Educational Resources Information Center

    Pu, Hsiao-Tieh; Yang, Chyan

    2003-01-01

    Explores the possibility of adding user-oriented class associations to hierarchical library classification schemes. Analyses a log of book circulation records from a university library in Taiwan and shows that classification schemes can be made more adaptable by analyzing circulation patterns of similar users. (Author/LRW)

  19. Compression of surface myoelectric signals using MP3 encoding.

    PubMed

    Chan, Adrian D C

    2011-01-01

    The potential of MP3 compression of surface myoelectric signals is explored in this paper. MP3 compression is a perceptual-based encoder scheme, used traditionally to compress audio signals. The ubiquity of MP3 compression (e.g., portable consumer electronics and internet applications) makes it an attractive option for remote monitoring and telemedicine applications. The effects of muscle site and contraction type are examined at different MP3 encoding bitrates. Results demonstrate that MP3 compression is sensitive to the myoelectric signal bandwidth, with larger signal distortion associated with myoelectric signals that have higher bandwidths. Compared to other myoelectric signal compression techniques reported previously (embedded zero-tree wavelet compression and adaptive differential pulse code modulation), MP3 compression demonstrates superior performance (i.e., lower percent residual differences for the same compression ratios).

  20. Method for compression of data using single pass LZSS and run-length encoding

    DOEpatents

    Berlin, G.J.

    1994-01-01

    A method used preferably with LZSS-based compression methods for compressing a stream of digital data. The method uses a run-length encoding scheme especially suited for data strings of identical data bytes having large run-lengths, such as data representing scanned images. The method reads an input data stream to determine the length of the data strings. Longer data strings are then encoded in one of two ways depending on the length of the string. For data strings having run-lengths less than 18 bytes, a cleared offset and the actual run-length are written to an output buffer and then a run byte is written to the output buffer. For data strings of 18 bytes or longer, a set offset and an encoded run-length are written to the output buffer and then a run byte is written to the output buffer. The encoded run-length is written in two parts obtained by dividing the run length by a factor of 255. The first of two parts of the encoded run-length is the quotient; the second part is the remainder. Data bytes that are not part of data strings of sufficient length are written directly to the output buffer.
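    A minimal sketch of the run-length tokenization described above. The token names and the minimum-run threshold are illustrative assumptions; the patent defines the exact byte-level format (offset flags, output-buffer layout).

```python
def encode_runs(data: bytes, min_run: int = 3) -> list:
    """Scan the input for runs of identical bytes and emit tokens
    following the short-run/long-run split described in the abstract."""
    out, i, n = [], 0, len(data)
    while i < n:
        j = i
        while j < n and data[j] == data[i]:
            j += 1
        run_len = j - i
        if run_len < min_run:
            # Not worth run-encoding: pass bytes through as literals.
            out.extend(('LIT', b) for b in data[i:j])
        elif run_len < 18:
            # Cleared offset flag, actual run length, then the run byte.
            out.append(('RUN_SHORT', run_len, data[i]))
        else:
            # Set offset flag; run length split into the quotient and
            # remainder of division by 255, then the run byte.
            q, r = divmod(run_len, 255)
            out.append(('RUN_LONG', q, r, data[i]))
        i = j
    return out
```

    For example, a run of 300 zero bytes encodes as quotient 1 and remainder 45, since 300 = 1 * 255 + 45.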

  1. Method for compression of data using single pass LZSS and run-length encoding

    DOEpatents

    Berlin, Gary J.

    1997-01-01

    A method used preferably with LZSS-based compression methods for compressing a stream of digital data. The method uses a run-length encoding scheme especially suited for data strings of identical data bytes having large run-lengths, such as data representing scanned images. The method reads an input data stream to determine the length of the data strings. Longer data strings are then encoded in one of two ways depending on the length of the string. For data strings having run-lengths less than 18 bytes, a cleared offset and the actual run-length are written to an output buffer and then a run byte is written to the output buffer. For data strings of 18 bytes or longer, a set offset and an encoded run-length are written to the output buffer and then a run byte is written to the output buffer. The encoded run-length is written in two parts obtained by dividing the run length by a factor of 255. The first of two parts of the encoded run-length is the quotient; the second part is the remainder. Data bytes that are not part of data strings of sufficient length are written directly to the output buffer.

  2. Parallel Processable Cryptographic Methods with Unbounded Practical Security.

    ERIC Educational Resources Information Center

    Rothstein, Jerome

    Addressing the problem of protecting confidential information and data stored in computer databases from access by unauthorized parties, this paper details coding schemes which present such astronomical work factors to potential code breakers that security breaches are hopeless in any practical sense. Two procedures which can be used to encode for…

  3. One-shot profile inspection for surfaces with depth, color and reflectivity discontinuities.

    PubMed

    Su, Wei-Hung; Chen, Sih-Yue

    2017-05-01

    A one-shot technique for surfaces with depth, color, and reflectivity discontinuities is presented. It uses windowed Fourier transform to extract the fringe phases and a binary-encoded scheme to unwrap the phases. Experiments show that absolute phases could be obtained with high reliability.

  4. Rate-distortion optimized tree-structured compression algorithms for piecewise polynomial images.

    PubMed

    Shukla, Rahul; Dragotti, Pier Luigi; Do, Minh N; Vetterli, Martin

    2005-03-01

    This paper presents novel coding algorithms based on tree-structured segmentation, which achieve the correct asymptotic rate-distortion (R-D) behavior for a simple class of signals, known as piecewise polynomials, by using an R-D based prune and join scheme. For the one-dimensional case, our scheme is based on binary-tree segmentation of the signal. This scheme approximates the signal segments using polynomial models and utilizes an R-D optimal bit allocation strategy among the different signal segments. The scheme further encodes similar neighbors jointly to achieve the correct exponentially decaying R-D behavior (D(R) ~ c0 2^(-c1 R)), thus improving over classic wavelet schemes. We also prove that the computational complexity of the scheme is O(N log N). We then show the extension of this scheme to the two-dimensional case using a quadtree. This quadtree-coding scheme also achieves an exponentially decaying R-D behavior, for the polygonal image model composed of a white polygon-shaped object against a uniform black background, with low computational cost of O(N log N). Again, the key is an R-D optimized prune and join strategy. Finally, we conclude with numerical results, which show that the proposed quadtree-coding scheme outperforms JPEG2000 by about 1 dB for real images, like cameraman, at low rates of around 0.15 bpp.

  5. A Classification Methodology and Retrieval Model to Support Software Reuse

    DTIC Science & Technology

    1988-01-01

    Dewey Decimal Classification (DDC 18), an enumerative scheme, occupies 40 pages [Buchanan 1979]. Langridge [1973] states that the facets listed in the…sense of historical importance or widespread use. The schemes are: Dewey Decimal Classification (DDC), Universal Decimal Classification (UDC)…

  6. On the security of a novel probabilistic signature based on bilinear square Diffie-Hellman problem and its extension.

    PubMed

    Zhao, Zhenguo; Shi, Wenbo

    2014-01-01

    Probabilistic signature schemes have been widely used in modern electronic commerce since they provide integrity, authenticity, and nonrepudiation. Recently, Wu and Lin proposed a novel probabilistic signature (PS) scheme using the bilinear square Diffie-Hellman (BSDH) problem. They also extended it to a universal designated verifier signature (UDVS) scheme. In this paper, we analyze the security of Wu et al.'s PS scheme and UDVS scheme. Through concrete attacks, we demonstrate that neither of their schemes is unforgeable. The security analysis shows that their schemes are not suitable for practical applications.

  7. Finding numbers in the brain.

    PubMed

    Gallistel, C R

    2017-02-19

    After listing functional constraints on what numbers in the brain must do, I sketch the two's complement fixed-point representation of numbers because it has stood the test of time and because it illustrates the non-obvious ways in which an effective coding scheme may operate. I briefly consider its neurobiological implementation. It is easier to imagine its implementation at the cell-intrinsic molecular level, with thermodynamically stable, volumetrically minimal polynucleotides encoding the remembered numbers, than at the circuit level, with plastic synapses encoding them. This article is part of a discussion meeting issue 'The origins of numerical abilities'. © 2017 The Author(s).
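    As a hedged aside, the two's-complement fixed-point representation sketched above can be illustrated in a few lines; the word width and number of fraction bits here are arbitrary choices, not values from the article.

```python
def to_fixed(x: float, total_bits: int = 16, frac_bits: int = 8) -> int:
    """Encode x on an integer grid scaled by 2**-frac_bits, wrapped
    into an unsigned two's-complement word."""
    mask = (1 << total_bits) - 1
    return round(x * (1 << frac_bits)) & mask

def from_fixed(word: int, total_bits: int = 16, frac_bits: int = 8) -> float:
    """Decode a two's-complement fixed-point word back to a float."""
    if word >= 1 << (total_bits - 1):  # sign bit set: negative value
        word -= 1 << total_bits
    return word / (1 << frac_bits)
```

    With 8 fraction bits, -1.5 is stored as the 16-bit word 65152 and decodes back exactly, illustrating how a single integer word carries sign, magnitude, and fractional resolution.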

  8. Correspondence between quantization schemes for two-player nonzero-sum games and CNOT complexity

    NASA Astrophysics Data System (ADS)

    Vijayakrishnan, V.; Balakrishnan, S.

    2018-05-01

    The well-known quantization schemes for two-player nonzero-sum games are the Eisert-Wilkens-Lewenstein scheme and the Marinatto-Weber scheme. In this work, we establish the connection between the two schemes from the perspective of quantum circuits. Further, we provide the correspondence between any game quantization scheme and the CNOT complexity, where CNOT complexity is defined up to local unitary operations. While CNOT complexity is known to be useful in the analysis of universal quantum circuits, in this work we demonstrate its applicability to quantum game theory.

  9. Quality of experience enhancement of high efficiency video coding video streaming in wireless packet networks using multiple description coding

    NASA Astrophysics Data System (ADS)

    Boumehrez, Farouk; Brai, Radhia; Doghmane, Noureddine; Mansouri, Khaled

    2018-01-01

    Recently, video streaming has attracted much attention and interest due to its capability to process and transmit large data. We propose a quality of experience (QoE) model relying on a high efficiency video coding (HEVC) encoder adaptation scheme, in turn based on multiple description coding (MDC) for video streaming. The main contributions of the paper are: (1) a performance evaluation of the new and emerging video coding standard HEVC/H.265, based on varying quantization parameter (QP) values for different video contents to deduce their influence on the sequence to be transmitted; (2) an investigation of QoE support for multimedia applications in wireless networks, inspecting the impact of packet loss on the QoE of transmitted video sequences; (3) an HEVC encoder parameter adaptation scheme based on MDC, modeled with the encoder parameter and an objective QoE model. A comparative study revealed that the proposed MDC approach is effective for improving the transmission, with a peak signal-to-noise ratio (PSNR) gain of about 2 to 3 dB. Results show that a good choice of QP value can compensate for transmission channel effects and improve received video quality, although HEVC/H.265 is also sensitive to packet loss. The obtained results show the efficiency of our proposed method in terms of PSNR and mean opinion score.

  10. Teleportation-based quantum information processing with Majorana zero modes

    DOE PAGES

    Vijay, Sagar; Fu, Liang

    2016-12-29

    In this work, we present a measurement-based scheme for performing braiding operations on Majorana zero modes in mesoscopic superconductor islands and for detecting their non-Abelian statistics without moving or hybridizing them. In our scheme for “braiding without braiding”, the topological qubit encoded in any pair of well-separated Majorana zero modes is read out from the transmission phase shift in electron teleportation through the island in the Coulomb-blockade regime. Finally, we propose experimental setups to measure the teleportation phase shift via conductance in an electron interferometer or persistent current in a closed loop.

  11. Teleportation-based quantum information processing with Majorana zero modes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vijay, Sagar; Fu, Liang

    In this work, we present a measurement-based scheme for performing braiding operations on Majorana zero modes in mesoscopic superconductor islands and for detecting their non-Abelian statistics without moving or hybridizing them. In our scheme for “braiding without braiding”, the topological qubit encoded in any pair of well-separated Majorana zero modes is read out from the transmission phase shift in electron teleportation through the island in the Coulomb-blockade regime. Finally, we propose experimental setups to measure the teleportation phase shift via conductance in an electron interferometer or persistent current in a closed loop.

  12. Adaptive quantization-parameter clip scheme for smooth quality in H.264/AVC.

    PubMed

    Hu, Sudeng; Wang, Hanli; Kwong, Sam

    2012-04-01

    In this paper, we investigate the issues of smooth quality and smooth bit rate during rate control (RC) in H.264/AVC. An adaptive quantization-parameter (Qp) clip scheme is proposed to optimize the quality smoothness while keeping the bit-rate fluctuation at an acceptable level. First, the frame complexity variation is studied by defining a complexity ratio between two nearby frames. Second, the range of the generated bits is analyzed to prevent the encoder buffer from overflow and underflow. Third, based on the safe range of the generated bits, an optimal Qp clip range is developed to reduce the quality fluctuation. Experimental results demonstrate that the proposed Qp clip scheme can achieve excellent performance in quality smoothness and buffer regulation.
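    The core idea of a QP clip can be sketched as follows. The fixed window `delta` is a stand-in assumption: the paper derives an adaptive clip range from the frame complexity ratio and buffer state, which this minimal version does not model.

```python
def clip_qp(qp_target, qp_prev, delta=2, qp_min=0, qp_max=51):
    """Clamp the rate-control QP to a window around the previous
    frame's QP so per-frame quality changes stay bounded."""
    lo = max(qp_min, qp_prev - delta)
    hi = min(qp_max, qp_prev + delta)
    return min(max(qp_target, lo), hi)
```

    For instance, if rate control requests QP 30 right after a frame coded at QP 26, the clip limits the jump to QP 28, trading a small bit-rate deviation for smoother quality.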

  13. Comparative Genomic Analysis of Haemophilus haemolyticus and Nontypeable Haemophilus influenzae and a New Testing Scheme for Their Discrimination.

    PubMed

    Hu, Fang; Rishishwar, Lavanya; Sivadas, Ambily; Mitchell, Gabriel J; Jordan, I King; Murphy, Timothy F; Gilsdorf, Janet R; Mayer, Leonard W; Wang, Xin

    2016-12-01

    Haemophilus haemolyticus has been recently discovered to have the potential to cause invasive disease. It is closely related to nontypeable Haemophilus influenzae (NT H. influenzae). NT H. influenzae and H. haemolyticus are often misidentified because none of the existing tests targeting the known phenotypes of H. haemolyticus are able to specifically identify H. haemolyticus. Through comparative genomic analysis of H. haemolyticus and NT H. influenzae, we identified genes unique to H. haemolyticus that can be used as targets for the identification of H. haemolyticus. A real-time PCR targeting purT (encoding phosphoribosylglycinamide formyltransferase 2 in the purine synthesis pathway) was developed and evaluated. The lower limit of detection was 40 genomes/PCR; the sensitivity and specificity in detecting H. haemolyticus were 98.9% and 97%, respectively. To improve the discrimination of H. haemolyticus and NT H. influenzae, a testing scheme combining two targets (H. haemolyticus purT and H. influenzae hpd, encoding protein D lipoprotein) was also evaluated and showed 96.7% sensitivity and 98.2% specificity for the identification of H. haemolyticus and 92.8% sensitivity and 100% specificity for the identification of H. influenzae, respectively. The dual-target testing scheme can be used for the diagnosis and surveillance of infection and disease caused by H. haemolyticus and NT H. influenzae. Copyright © 2016, American Society for Microbiology. All Rights Reserved.

  14. Compressive Sampling based Image Coding for Resource-deficient Visual Communication.

    PubMed

    Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen

    2016-04-14

    In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; but the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that are otherwise discarded by low-pass filtering; 2) they remain a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered as multiple descriptions of the original image, and the proposed scheme therefore has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from the received measurements in a compressive sensing framework. Experimental results demonstrate that the proposed scheme is competitive with existing methods, with a unique strength in recovering fine details and sharp edges at low bit-rates.
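    A toy sketch of the measurement step described above: each retained polyphase sample is a local random binary combination of its neighborhood, in place of low-pass filtering. The 3x3 kernel size, the {0,1} alphabet, and the single shared kernel are illustrative assumptions rather than the paper's exact design.

```python
import random

def local_random_measurements(img, factor=2, k=3, seed=0):
    """Down-sample `img` (a 2-D list) by `factor`, replacing each kept
    pixel with a random binary combination of its k-by-k neighborhood."""
    rng = random.Random(seed)
    kernel = [[rng.choice((0, 1)) for _ in range(k)] for _ in range(k)]
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            acc = 0
            for dy in range(k):
                for dx in range(k):
                    if y + dy < h and x + dx < w:
                        acc += kernel[dy][dx] * img[y + dy][x + dx]
            row.append(acc)
        out.append(row)
    return out
```

    The output is still an ordinary (smaller) image of integer samples, which is what lets a standard codec compress it downstream.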

  15. Comparative Genomic Analysis of Haemophilus haemolyticus and Nontypeable Haemophilus influenzae and a New Testing Scheme for Their Discrimination

    PubMed Central

    Hu, Fang; Rishishwar, Lavanya; Sivadas, Ambily; Mitchell, Gabriel J.; Jordan, I. King; Murphy, Timothy F.; Gilsdorf, Janet R.; Mayer, Leonard W.

    2016-01-01

    Haemophilus haemolyticus has been recently discovered to have the potential to cause invasive disease. It is closely related to nontypeable Haemophilus influenzae (NT H. influenzae). NT H. influenzae and H. haemolyticus are often misidentified because none of the existing tests targeting the known phenotypes of H. haemolyticus are able to specifically identify H. haemolyticus. Through comparative genomic analysis of H. haemolyticus and NT H. influenzae, we identified genes unique to H. haemolyticus that can be used as targets for the identification of H. haemolyticus. A real-time PCR targeting purT (encoding phosphoribosylglycinamide formyltransferase 2 in the purine synthesis pathway) was developed and evaluated. The lower limit of detection was 40 genomes/PCR; the sensitivity and specificity in detecting H. haemolyticus were 98.9% and 97%, respectively. To improve the discrimination of H. haemolyticus and NT H. influenzae, a testing scheme combining two targets (H. haemolyticus purT and H. influenzae hpd, encoding protein D lipoprotein) was also evaluated and showed 96.7% sensitivity and 98.2% specificity for the identification of H. haemolyticus and 92.8% sensitivity and 100% specificity for the identification of H. influenzae, respectively. The dual-target testing scheme can be used for the diagnosis and surveillance of infection and disease caused by H. haemolyticus and NT H. influenzae. PMID:27707939

  16. Typing of Panton-Valentine Leukocidin-Encoding Phages and lukSF-PV Gene Sequence Variation in Staphylococcus aureus from China.

    PubMed

    Zhao, Huanqiang; Hu, Fupin; Jin, Shu; Xu, Xiaogang; Zou, Yuhan; Ding, Baixing; He, Chunyan; Gong, Fang; Liu, Qingzhong

    2016-01-01

    Panton-Valentine leukocidin (PVL, encoded by lukSF-PV genes), a bi-component and pore-forming toxin, is carried by different staphylococcal bacteriophages. The prevalence of PVL in Staphylococcus aureus has been reported around the globe. However, the data on PVL-encoding phage types, lukSF-PV gene variation and chromosomal phage insertion sites for PVL-positive S. aureus are limited, especially in China. In order to obtain a more complete understanding of the molecular epidemiology of PVL-positive S. aureus, an integrated and modified PCR-based scheme was applied to detect the PVL-encoding phage types. Phage insertion locus and the lukSF-PV variant were determined by PCR and sequencing. Meanwhile, the genetic background was characterized by staphylococcal cassette chromosome mec (SCCmec) typing, staphylococcal protein A (spa) gene polymorphism typing, pulsed-field gel electrophoresis (PFGE) typing, accessory gene regulator (agr) locus typing and multilocus sequence typing (MLST). Seventy-eight (78/1175, 6.6%) isolates possessed the lukSF-PV genes, and 59.0% (46/78) of PVL-positive strains belonged to the CC59 lineage. Eight different known PVL-encoding phage types were detected, and Φ7247PVL/ΦST5967PVL (n = 13) and ΦPVL (n = 12) were the most prevalent among them. However, 25 isolates (25/78, 32.1%), belonging to the ST30 and ST59 clones, could not be typed by the modified PCR-based scheme. Single nucleotide polymorphisms (SNPs) were identified at five locations in the lukSF-PV genes, two of which were non-synonymous. Maximum-likelihood tree analysis of attachment site sequences detected six SNP profiles for attR and eight for attL, respectively. In conclusion, the PVL-positive S. aureus mainly harbored Φ7247PVL/ΦST5967PVL and ΦPVL in the regions studied. lukSF-PV gene sequences, PVL-encoding phages, and phage insertion loci generally varied with lineages. Moreover, PVL-positive clones that have emerged worldwide likely carry distinct phages.

  17. Prioritized LT Codes

    NASA Technical Reports Server (NTRS)

    Woo, Simon S.; Cheng, Michael K.

    2011-01-01

    The original Luby Transform (LT) coding scheme is extended to account for data transmissions where some information symbols in a message block are more important than others. Prioritized LT codes provide unequal error protection (UEP) of data on an erasure channel by modifying the original LT encoder. The prioritized algorithm improves high-priority data protection without penalizing low-priority data recovery. Moreover, low-latency decoding is also obtained for high-priority data due to fast encoding. Prioritized LT codes only require a slight change in the original encoding algorithm, and no changes at all at the decoder. Hence, with a small complexity increase in the LT encoder, an improved UEP and low-decoding latency performance for high-priority data can be achieved. LT encoding partitions a data stream into fixed-size message blocks, each with a constant number of information symbols. To generate a code symbol from the information symbols in a message, the Robust-Soliton probability distribution is first applied in order to determine the number of information symbols to be used to compute the code symbol. Then, the specific information symbols are chosen uniformly at random from the message block. Finally, the selected information symbols are XORed to form the code symbol. The Prioritized LT code construction includes an additional restriction that code symbols formed by a relatively small number of XORed information symbols select some of these information symbols from the pool of high-priority data. Once high-priority data are fully covered, encoding continues with the conventional LT approach where code symbols are generated by selecting information symbols from the entire message block including all different priorities. Therefore, if code symbols derived from high-priority data experience an unusually high number of erasures, Prioritized LT codes can still reliably recover both high- and low-priority data.
This hybrid approach decides not only "how to encode" but also "what to encode" to achieve UEP. Another advantage of the priority encoding process is that the majority of high-priority data can be decoded sooner since only a small number of code symbols are required to reconstruct high-priority data. This approach increases the likelihood that high-priority data is decoded before low-priority data. The Prioritized LT code scheme achieves an improvement in high-priority data decoding performance as well as overall information recovery without penalizing the decoding of low-priority data, assuming high-priority data is no more than half of a message block. The cost is in the additional complexity required in the encoder. If extra computational resources are available at the transmitter, image, voice, and video transmission quality in terrestrial and space communications can benefit from accurate use of redundancy in protecting data with varying priorities.
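    The prioritized restriction can be sketched as below. The degree threshold of 3 for drawing from the high-priority prefix is an illustrative assumption; the actual scheme samples degrees from the Robust-Soliton distribution and covers all high-priority data before reverting to conventional LT encoding over the full block.

```python
import random
from functools import reduce
from operator import xor

def lt_encode_symbol(message, high_priority_count, degree, rng):
    """Form one LT code symbol by XORing `degree` information symbols.
    Low-degree code symbols are restricted to the high-priority prefix
    of the message block (the prioritized-LT modification)."""
    if degree <= 3:
        pool = list(range(high_priority_count))  # high-priority prefix only
    else:
        pool = list(range(len(message)))         # entire message block
    picks = rng.sample(pool, min(degree, len(pool)))
    return reduce(xor, (message[i] for i in picks))
```

    Because low-degree symbols both cover high-priority data and decode earliest under belief propagation, this restriction is what yields the low-latency recovery of high-priority data.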

  18. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding.

    PubMed

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A

    2016-08-12

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for any number of classical participants no smaller than the threshold value, with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications.
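    The classical Lagrange-interpolation component can be illustrated with a plain Shamir-style sketch. The quantum m-bonacci/OAM layer and the reverse Huffman-Fibonacci-tree coding are not modeled here, and the prime modulus is an arbitrary choice.

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime; arbitrary field choice

def make_shares(secret, threshold, n, rng=None):
    """Split `secret` into n shares; any `threshold` of them suffice."""
    rng = rng or random.Random(42)
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation of the polynomial
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

    Any threshold-sized subset of shares reconstructs the secret, which is the "no smaller than threshold-value number of participants" property the scheme builds on.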

  19. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding

    PubMed Central

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A.

    2016-01-01

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for any number of classical participants no smaller than the threshold value, with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications. PMID:27515908

  20. Universal strategies for the DNA-encoding of libraries of small molecules using the chemical ligation of oligonucleotide tags

    PubMed Central

    Litovchick, Alexander; Clark, Matthew A; Keefe, Anthony D

    2014-01-01

    The affinity-mediated selection of large libraries of DNA-encoded small molecules is increasingly being used to initiate drug discovery programs. We present universal methods for the encoding of such libraries using the chemical ligation of oligonucleotides. These methods may be used to record the chemical history of individual library members during combinatorial synthesis processes. We demonstrate three different chemical ligation methods as examples of information recording processes (writing) for such libraries and two different cDNA-generation methods as examples of information retrieval processes (reading) from such libraries. The example writing methods include uncatalyzed and Cu(I)-catalyzed alkyne-azide cycloadditions and a novel photochemical thymidine-psoralen cycloaddition. The first reading method “relay primer-dependent bypass” utilizes a relay primer that hybridizes across a chemical ligation junction embedded in a fixed-sequence and is extended at its 3′-terminus prior to ligation to adjacent oligonucleotides. The second reading method “repeat-dependent bypass” utilizes chemical ligation junctions that are flanked by repeated sequences. The upstream repeat is copied prior to a rearrangement event during which the 3′-terminus of the cDNA hybridizes to the downstream repeat and polymerization continues. In principle these reading methods may be used with any ligation chemistry and offer universal strategies for the encoding (writing) and interpretation (reading) of DNA-encoded chemical libraries. PMID:25483841

  1. Can Student Populations in Developing Countries Be Reached by Online Surveys? The Case of the National Service Scheme Survey (N3S) in Ghana

    ERIC Educational Resources Information Center

    Langer, Arnim; Meuleman, Bart; Oshodi, Abdul-Gafar Tobi; Schroyens, Maarten

    2017-01-01

    This article tackles the question whether it is a viable strategy to conduct online surveys among university students in developing countries. By documenting the methodology of the National Service Scheme Survey conducted in Ghana, we set out to answer three questions: (1) How can a sample of university students be obtained? (2) How can students…

  2. 75 FR 39940 - Notice of Debarment; Schools and Libraries Universal Service Support Mechanism

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-13

    ... months of supervised release for federal crimes in connection with your participation in a scheme to... that you and others devised schemes to defraud school districts and the E-Rate program by having your... in the schemes.\\8\\ Such conduct constitutes the basis for your debarment, and your conviction falls...

  3. Randomized central limit theorems: A unified theory

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Klafter, Joseph

    2010-08-01

    The central limit theorems (CLTs) characterize the macroscopic statistical behavior of large ensembles of independent and identically distributed random variables. The CLTs assert that the universal probability laws governing ensembles’ aggregate statistics are either Gaussian or Lévy, and that the universal probability laws governing ensembles’ extreme statistics are Fréchet, Weibull, or Gumbel. The scaling schemes underlying the CLTs are deterministic—scaling all ensemble components by a common deterministic scale. However, there are “random environment” settings in which the underlying scaling schemes are stochastic—scaling the ensemble components by different random scales. Examples of such settings include Holtsmark’s law for gravitational fields and the Stretched Exponential law for relaxation times. In this paper we establish a unified theory of randomized central limit theorems (RCLTs)—in which the deterministic CLT scaling schemes are replaced with stochastic scaling schemes—and present “randomized counterparts” to the classic CLTs. The RCLT scaling schemes are shown to be governed by Poisson processes with power-law statistics, and the RCLTs are shown to universally yield the Lévy, Fréchet, and Weibull probability laws.

  4. Binary image encryption in a joint transform correlator scheme by aid of run-length encoding and QR code

    NASA Astrophysics Data System (ADS)

    Qin, Yi; Wang, Zhipeng; Wang, Hongjuan; Gong, Qiong

    2018-07-01

    We propose a binary image encryption method in a joint transform correlator (JTC) with the aid of run-length encoding (RLE) and Quick Response (QR) codes, which enables lossless retrieval of the primary image. The binary image is encoded with RLE to obtain highly compressed data, and the compressed binary image is then further scrambled using a chaos-based method. The compressed and scrambled binary image is transformed into one QR code that is finally encrypted in the JTC. The proposed method successfully, for the first time to the best of our knowledge, encodes a binary image into a QR code of identical size, and may therefore open a new way of extending the application of QR codes in optical security. Moreover, the preprocessing operations, including RLE, chaos scrambling and the QR code translation, append an additional security level to the JTC. We present digital results that confirm our approach.
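    The chaos-based scrambling step can be sketched with a logistic-map permutation. The map parameter r = 3.99 and the key x0 are illustrative assumptions, not the paper's values; the paper's specific chaotic map may differ.

```python
def logistic_permutation(n, x0=0.61, r=3.99):
    """Derive a permutation of range(n) by sorting indices according
    to a logistic-map chaotic sequence keyed by x0."""
    x, vals = x0, []
    for _ in range(n):
        x = r * x * (1 - x)  # logistic map iteration
        vals.append(x)
    return sorted(range(n), key=lambda i: vals[i])

def scramble(bits, perm):
    """Reorder the bit list according to the key-derived permutation."""
    return [bits[p] for p in perm]

def unscramble(bits, perm):
    """Invert the permutation; requires the same key (x0, r)."""
    out = [0] * len(bits)
    for i, p in enumerate(perm):
        out[p] = bits[i]
    return out
```

    Since the permutation is fully determined by the key (x0, r), a receiver holding the key inverts the scrambling exactly, preserving the lossless property.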

  5. High-resolution, low-delay, and error-resilient medical ultrasound video communication using H.264/AVC over mobile WiMAX networks.

    PubMed

    Panayides, Andreas; Antoniou, Zinonas C; Mylonas, Yiannos; Pattichis, Marios S; Pitsillides, Andreas; Pattichis, Constantinos S

    2013-05-01

    In this study, we describe an effective video communication framework for the wireless transmission of H.264/AVC medical ultrasound video over mobile WiMAX networks. Medical ultrasound video is encoded using diagnostically-driven, error-resilient encoding, where quantization levels are varied as a function of the diagnostic significance of each image region. We demonstrate how our proposed system allows for the transmission of high-resolution clinical video that is encoded at the clinical acquisition resolution and can then be decoded with low delay. To validate performance, we perform OPNET simulations of mobile WiMAX Medium Access Control (MAC) and Physical (PHY) layer characteristics that include service prioritization classes, different modulation and coding schemes, fading channel conditions, and mobility. We encode the medical ultrasound videos at the 4CIF (704 × 576) resolution, which can accommodate clinical acquisition that is typically performed at lower resolutions. Video quality assessment is based on both clinical (subjective) and objective evaluations.

  6. Towards Universal Health Coverage via Social Health Insurance in China: Systemic Fragmentation, Reform Imperatives, and Policy Alternatives.

    PubMed

    He, Alex Jingwei; Wu, Shaolong

    2017-12-01

    China's remarkable progress in building a comprehensive social health insurance (SHI) system has been swift and impressive. Yet the country's decentralized and incremental approach towards universal coverage has created a fragmented SHI system in which a series of structural deficiencies have emerged, with negative impacts. First, contingent on local conditions and financing capacity, benefit packages vary considerably across schemes, leading to systematic inequity. Second, the existence of multiple schemes, complicated by massive migration, has resulted in weak portability of SHI, creating further barriers to access. Third, many individuals are enrolled in multiple schemes, which causes inefficient use of government subsidies. Moral hazard and adverse selection are not effectively managed. The Chinese government announced its blueprint for integrating the urban and rural resident schemes in early 2016, paving the way for the ultimate consolidation of all SHI schemes and equal benefits for all. This article proposes three policy alternatives to inform the consolidation: (1) a single-pool system at the prefectural level with significant government subsidies, (2) a dual-pool system at the prefectural level with risk-equalization mechanisms, and (3) a household approach without merging existing pools. Vertical integration to the provincial level is unlikely to happen in the near future. Two caveats are raised to inform this transition towards universal health coverage.

  7. On the Security of a Novel Probabilistic Signature Based on Bilinear Square Diffie-Hellman Problem and Its Extension

    PubMed Central

    Zhao, Zhenguo; Shi, Wenbo

    2014-01-01

    Probabilistic signature schemes have been widely used in modern electronic commerce since they provide integrity, authenticity, and nonrepudiation. Recently, Wu and Lin proposed a novel probabilistic signature (PS) scheme using the bilinear square Diffie-Hellman (BSDH) problem and extended it to a universal designated verifier signature (UDVS) scheme. In this paper, we analyze the security of Wu et al.'s PS and UDVS schemes. Through concrete attacks, we demonstrate that neither scheme is unforgeable. The security analysis shows that these schemes are not suitable for practical applications. PMID:25025083

  8. Optimal Weight Assignment for a Chinese Signature File.

    ERIC Educational Resources Information Center

    Liang, Tyne; And Others

    1996-01-01

    Investigates the performance of a character-based Chinese text retrieval scheme in which monogram keys and bigram keys are encoded into document signatures. Tests and verifies the theoretical predictions of the optimal weight assignments and the minimal false hit rate in experiments using a real Chinese corpus for disyllabic queries of different…

  9. CCC and the Fermi paradox

    NASA Astrophysics Data System (ADS)

    Gurzadyan, V. G.; Penrose, R.

    2016-01-01

    Within the scheme of conformal cyclic cosmology (CCC), information can be transmitted from aeon to aeon. Accordingly, the "Fermi paradox" and the SETI programme --of communication by remote civilizations-- may be examined from a novel perspective: such information could, in principle, be encoded in the cosmic microwave background. The current empirical status of CCC is also discussed.

  10. Three-dimensional image authentication scheme using sparse phase information in double random phase encoded integral imaging.

    PubMed

    Yi, Faliu; Jeoung, Yousun; Moon, Inkyu

    2017-05-20

    In recent years, many studies have focused on the authentication of two-dimensional (2D) images using double random phase encryption techniques. However, there has been little research on three-dimensional (3D) imaging systems, such as integral imaging, for 3D image authentication. We propose a 3D image authentication scheme based on a double random phase integral imaging method. All of the 2D elemental images captured through integral imaging are encrypted with a double random phase encoding algorithm, and only partial phase information is retained. All the amplitude and other miscellaneous phase information in the encrypted elemental images is discarded. Nevertheless, we demonstrate that 3D images from integral imaging can be authenticated at different depths using a nonlinear correlation method. The proposed 3D image authentication algorithm can provide enhanced information security because the decrypted 2D elemental images from the sparse phase cannot be easily observed by the naked eye. Additionally, using sparse phase images without any amplitude information can greatly reduce data storage costs and aid in image compression and data transmission.
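    The classical double random phase encoding (DRPE) operation underlying the scheme can be sketched with NumPy FFTs. This is the textbook two-mask DRPE on a toy array, not the paper's integral-imaging pipeline, and the sparse-phase step (keeping only partial phase of the ciphertext) is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((8, 8))                     # toy "elemental image"

# two independent random phase masks: input plane (p1) and Fourier plane (p2)
p1 = np.exp(2j * np.pi * rng.random(img.shape))
p2 = np.exp(2j * np.pi * rng.random(img.shape))

# encryption: mask the input, go to the Fourier plane, mask again, come back
enc = np.fft.ifft2(np.fft.fft2(img * p1) * p2)

# decryption with the correct keys: undo the Fourier-plane mask, then the input mask
dec = np.fft.ifft2(np.fft.fft2(enc) * np.conj(p2)) * np.conj(p1)

assert np.allclose(dec.real, img, atol=1e-10)   # exact recovery with both keys
```

    Keeping only the phase of `enc`, and only a sparse subset of it, is what makes the decrypted elemental images unrecognizable to the naked eye while still supporting nonlinear-correlation authentication.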

  11. Improved integral images compression based on multi-view extraction

    NASA Astrophysics Data System (ADS)

    Dricot, Antoine; Jung, Joel; Cagnazzo, Marco; Pesquet, Béatrice; Dufaux, Frédéric

    2016-09-01

    Integral imaging is a technology based on plenoptic photography that captures and samples the light-field of a scene through a micro-lens array. It provides views of the scene from several angles and therefore is foreseen as a key technology for future immersive video applications. However, integral images have a large resolution and a structure based on micro-images which is challenging to encode. A compression scheme for integral images based on view extraction has previously been proposed, with average BD-rate gains of 15.7% (up to 31.3%) reported over HEVC when using one single extracted view. As the efficiency of the scheme depends on a tradeoff between the bitrate required to encode the view and the quality of the image reconstructed from the view, it is proposed to increase the number of extracted views. Several configurations are tested with different positions and different number of extracted views. Compression efficiency is increased with average BD-rate gains of 22.2% (up to 31.1%) reported over the HEVC anchor, with a realistic runtime increase.

  12. Distinct neuronal coding schemes in memory revealed by selective erasure of fast synchronous synaptic transmission.

    PubMed

    Xu, Wei; Morishita, Wade; Buckmaster, Paul S; Pang, Zhiping P; Malenka, Robert C; Südhof, Thomas C

    2012-03-08

    Neurons encode information by firing spikes in isolation or bursts and propagate information by spike-triggered neurotransmitter release that initiates synaptic transmission. Isolated spikes trigger neurotransmitter release unreliably but with high temporal precision. In contrast, bursts of spikes trigger neurotransmission reliably (i.e., boost transmission fidelity), but the resulting synaptic responses are temporally imprecise. However, the relative physiological importance of different spike-firing modes remains unclear. Here, we show that knockdown of synaptotagmin-1, the major Ca(2+) sensor for neurotransmitter release, abrogated neurotransmission evoked by isolated spikes but only delayed, without abolishing, neurotransmission evoked by bursts of spikes. Nevertheless, knockdown of synaptotagmin-1 in the hippocampal CA1 region did not impede acquisition of recent contextual fear memories, although it did impair the precision of such memories. In contrast, knockdown of synaptotagmin-1 in the prefrontal cortex impaired all remote fear memories. These results indicate that different brain circuits and types of memory employ distinct spike-coding schemes to encode and transmit information. Copyright © 2012 Elsevier Inc. All rights reserved.

  13. A disclosure scheme for protecting the victims of domestic violence.

    PubMed

    Griffith, Richard

    2017-06-08

    Richard Griffith, Senior Lecturer in Health Law at Swansea University, explains how the Domestic Violence Disclosure Scheme aims to protect potential victims by allowing disclosure of a partner's previous crimes.

  14. Pattern recognition of electronic bit-sequences using a semiconductor mode-locked laser and spatial light modulators

    NASA Astrophysics Data System (ADS)

    Bhooplapur, Sharad; Akbulut, Mehmetkan; Quinlan, Franklyn; Delfyett, Peter J.

    2010-04-01

    A novel scheme for the recognition of electronic bit-sequences is demonstrated. Two electronic bit-sequences that are to be compared are each mapped to a unique code from a set of Walsh-Hadamard codes. The codes are then encoded in parallel on the spectral phase of the frequency comb lines from a frequency-stabilized mode-locked semiconductor laser. Phase encoding is achieved by using two independent spatial light modulators based on liquid crystal arrays. Encoded pulses are compared using interferometric pulse detection and differential balanced photodetection. Orthogonal codes eight bits long are compared, and matched codes are successfully distinguished from mismatched codes with very low error rates, of around 10^-18. This technique has potential for high-speed, high-accuracy recognition of bit-sequences, with applications in keyword searches and internet protocol packet routing.
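    The code layer of this scheme, independent of the optical hardware, can be illustrated with a Sylvester-constructed Walsh-Hadamard set: matched codes give a full-scale correlation peak while mismatched codes cancel exactly. A small NumPy sketch, with `hadamard` an illustrative helper:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix, n a power of two."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H = hadamard(8)            # rows are 8-chip Walsh-Hadamard codes (+1/-1 chips)
# identical rows correlate maximally; distinct rows are orthogonal
assert H[1] @ H[1] == 8    # matched: full-scale peak
assert H[1] @ H[5] == 0    # mismatched: exact cancellation
```

    The interferometric detection in the paper effectively computes this inner product optically, which is why matched and mismatched sequences separate with such low error rates.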

  15. High-dimensional free-space optical communications based on orbital angular momentum coding

    NASA Astrophysics Data System (ADS)

    Zou, Li; Gu, Xiaofan; Wang, Le

    2018-03-01

    In this paper, we propose a high-dimensional free-space optical communication scheme using orbital angular momentum (OAM) coding. In the scheme, the transmitter encodes N bits of information by using a spatial light modulator to convert a Gaussian beam into a superposition of N OAM modes and a Gaussian mode; the receiver decodes the information with an OAM mode analyser consisting of a Mach-Zehnder interferometer with a rotating Dove prism, a photoelectric detector, and a computer carrying out the fast Fourier transform. The scheme realizes high-dimensional free-space optical communication and decodes the information quickly and accurately. We have verified the feasibility of the scheme by exploiting 8 (4) OAM modes and a Gaussian mode to implement a 256-ary (16-ary) coded free-space optical communication link to transmit a 256-gray-scale (16-gray-scale) picture. The results show that zero bit-error-rate performance was achieved.

  16. Low-peak-to-average power ratio and low-complexity asymmetrically clipped optical orthogonal frequency-division multiplexing uplink transmission scheme for long-reach passive optical network.

    PubMed

    Zhou, Ji; Qiao, Yaojun

    2015-09-01

    In this Letter, we propose a discrete Hartley transform (DHT)-spread asymmetrically clipped optical orthogonal frequency-division multiplexing (DHT-S-ACO-OFDM) uplink transmission scheme in which the multiplexing/demultiplexing process also uses the DHT algorithm. By designing a simple encoding structure, the computational complexity of the transmitter can be reduced from O(N log2 N) to O(N). At a probability of 10^-3, the peak-to-average power ratio (PAPR) of 2-ary pulse amplitude modulation (2-PAM)-modulated DHT-S-ACO-OFDM is approximately 9.7 dB lower than that of 2-PAM-modulated conventional ACO-OFDM. To verify the feasibility of the proposed scheme, a 4-Gbit/s DHT-S-ACO-OFDM uplink transmission scheme with a 1:64 way split has been experimentally implemented using 100-km standard single-mode fiber (SSMF) for a long-reach passive optical network (LR-PON).
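    The DHT at the core of the scheme is the real "cas" (cos + sin) transform. It can be computed from an FFT, and up to a factor of N it is its own inverse, which is what lets the same block serve both multiplexing and demultiplexing. A sketch (the paper's O(N) transmitter structure is not reproduced here):

```python
import numpy as np

def dht(x):
    """Discrete Hartley transform via the FFT: cas kernel = cos + sin."""
    X = np.fft.fft(x)
    return X.real - X.imag   # Re(FFT) - Im(FFT) = sum x[n] * cas(2*pi*k*n/N)

rng = np.random.default_rng(1)
x = rng.random(16)
# the DHT is real-valued and, unlike the DFT, is (up to scaling) its own inverse
assert np.allclose(dht(dht(x)) / len(x), x)
```

    Working entirely in real arithmetic is also what keeps the optical OFDM signal real-valued without the Hermitian-symmetry overhead of a complex FFT.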

  17. Accreditation the Education Development Centers of Medical-Sciences Universities: Another Step toward Quality Improvement in Education

    PubMed Central

    Haghdoost, AA; Momtazmanesh, N; Shoghi, F; Mohagheghi, M; Mehrolhassani, MH

    2013-01-01

    Background: In order to improve the quality of education in universities of medical sciences (UMS), and because of the key role of education development centers (EDCs), an accreditation scheme was developed to evaluate their performance. Method: A group of experts in the medical education field was selected, based on pre-defined criteria, by the EDC of the Ministry of Health and Medical Education. The team worked intensively for 6 months to develop a list of essential standards to assess the performance of EDCs. Having checked the content validity of the standards, clear and measurable indicators were created via consensus. Then, the required information was collected from UMS EDCs; the first round of accreditation was carried out just to check the acceptability of the scheme and to push universities to prepare themselves for the next, actual round of accreditation. Results: Five standards domains were developed as the conceptual framework for defining the main categories of indicators: governance and leadership, educational planning, faculty development, assessment and examination, and research in education. Nearly all of the UMS filled in all required data forms precisely and with minimal confusion, which shows the practicality of this accreditation scheme. Conclusion: It seems that the UMS have enough interest to provide the required information for this accreditation scheme. However, in order to achieve promising results, most universities will have to work intensively to reach the minimum levels in all required standards. In the long term, implementation of a valid accreditation scheme should play an important role in improving the quality of medical education around the country. PMID:23865031

  18. Accreditation the Education Development Centers of Medical-Sciences Universities: Another Step toward Quality Improvement in Education.

    PubMed

    Haghdoost, Aa; Momtazmanesh, N; Shoghi, F; Mohagheghi, M; Mehrolhassani, Mh

    2013-01-01

    In order to improve the quality of education in universities of medical sciences (UMS), and because of the key role of education development centers (EDCs), an accreditation scheme was developed to evaluate their performance. A group of experts in the medical education field was selected, based on pre-defined criteria, by the EDC of the Ministry of Health and Medical Education. The team worked intensively for 6 months to develop a list of essential standards to assess the performance of EDCs. Having checked the content validity of the standards, clear and measurable indicators were created via consensus. Then, the required information was collected from UMS EDCs; the first round of accreditation was carried out just to check the acceptability of the scheme and to push universities to prepare themselves for the next, actual round of accreditation. Five standards domains were developed as the conceptual framework for defining the main categories of indicators: governance and leadership, educational planning, faculty development, assessment and examination, and research in education. Nearly all of the UMS filled in all required data forms precisely and with minimal confusion, which shows the practicality of this accreditation scheme. It seems that the UMS have enough interest to provide the required information for this accreditation scheme. However, in order to achieve promising results, most universities will have to work intensively to reach the minimum levels in all required standards. In the long term, implementation of a valid accreditation scheme should play an important role in improving the quality of medical education around the country.

  19. 75 FR 3732 - Notice of Suspension and Initiation of Debarment Proceedings; Schools and Libraries Universal...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-22

    ... defrauding the E-Rate Program.\\6\\ You admitted that you and others devised a scheme to defraud school... Technologies, Inc.\\7\\ In furtherance of the scheme, you submitted fraudulent and false documents to the... providers in violation of E-Rate Program rules.\\8\\ Ultimately, your scheme induced at least ten schools, in...

  20. The Effect of a Monitoring Scheme on Tutorial Attendance and Assignment Submission

    ERIC Educational Resources Information Center

    Burke, Grainne; Mac an Bhaird, Ciaran; O'Shea, Ann

    2013-01-01

    We report on the implementation of a monitoring scheme by the Department of Mathematics and Statistics at the National University of Ireland Maynooth. The scheme was introduced in an attempt to increase the level and quality of students' engagement with certain aspects of their undergraduate course. It is well documented that students with higher…

  1. Peer Learning Strategies: Acknowledging Lecturers' Concerns of the Student Learning Assistant Scheme on a New Higher Education Campus

    ERIC Educational Resources Information Center

    Kodabux, Adeelah; Hoolash, Bheshaj Kumar Ashley

    2015-01-01

    The Student Learning Assistant (SLA) scheme was introduced in 2010 at Middlesex University Mauritius Branch Campus (MUMBC). The scheme is similar to traditional peer learning strategies, such as Peer Assisted Learning (PAL) and Peer Assisted Study Sessions (PASS), which are widely operated in higher education environments to motivate student…

  2. 75 FR 20846 - Notice of Suspension and Initiation of Debarment Proceedings; Schools and Libraries Universal...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-21

    ... admitted that you and others devised schemes to defraud school districts and the E-Rate program by having..., your conspirators, and your company, primarily DeltaNet, Inc.\\6\\ In furtherance of the schemes, you...\\ Ultimately, your conspiracy was comprised of two closely related schemes that affected at least thirteen...

  3. 75 FR 20844 - Notice of Suspension and Initiation of Debarment Proceedings; Schools and Libraries Universal...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-21

    ... schemes to defraud school districts and the E-Rate program by having your co-conspirators steer E-rate..., primarily DeltaNet, Inc.\\6\\ In furtherance of the schemes, you submitted misleading, fraudulent and false... comprised of two closely related schemes that affected at least thirteen different schools in eight...

  4. Scalable quantum information processing with atomic ensembles and flying photons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mei Feng; Yu Yafei; Feng Mang

    2009-10-15

    We present a scheme for scalable quantum information processing with atomic ensembles and flying photons. Using the Rydberg blockade, we encode the qubits in collective atomic states, which can be manipulated quickly and easily owing to the enhanced interaction compared with the single-atom case. We demonstrate that our proposed gating can be applied to the generation of two-dimensional cluster states for measurement-based quantum computation. Moreover, the atomic ensembles also function as quantum repeaters useful for long-distance quantum state transfer. We show that our scheme can work in the bad-cavity or weak-coupling regime, which relaxes the experimental requirements considerably. The efficient coherent operations on the ensemble qubits enable our scheme to switch between quantum computation and quantum communication using atomic ensembles.

  5. Toward functional classification of neuronal types.

    PubMed

    Sharpee, Tatyana O

    2014-09-17

    How many types of neurons are there in the brain? This basic neuroscience question remains unsettled despite many decades of research. Classification schemes have been proposed based on anatomical, electrophysiological, or molecular properties. However, different schemes do not always agree with each other. This raises the question of whether one can classify neurons based on their function directly. For example, among sensory neurons, can a classification scheme be devised that is based on their role in encoding sensory stimuli? Here, theoretical arguments are outlined for how this can be achieved using information theory by looking at optimal numbers of cell types and paying attention to two key properties: correlations between inputs and noise in neural responses. This theoretical framework could help to map the hierarchical tree relating different neuronal classes within and across species. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Graph State-Based Quantum Group Authentication Scheme

    NASA Astrophysics Data System (ADS)

    Liao, Longxia; Peng, Xiaoqi; Shi, Jinjing; Guo, Ying

    2017-02-01

    Motivated by the elegant structure of the graph state, we design an ingenious quantum group authentication scheme, which is implemented by performing appropriate operations on the graph state and can solve the problem of multi-user authentication. Three entities are included: the group authentication server (GAS) as verifier, multiple users as provers, and the trusted third party Trent. GAS and Trent assist the multiple users in completing the authentication process, i.e., GAS is responsible for registering all the users while Trent prepares graph states. All the users who request authentication encode their authentication keys onto the graph state by performing Pauli operators. This demonstrates that a novel authentication scheme can be achieved through flexible use of the graph state, one that can authenticate a large number of users synchronously while provable security is guaranteed.

  7. A chaotic secure communication scheme using fractional chaotic systems based on an extended fractional Kalman filter

    NASA Astrophysics Data System (ADS)

    Kiani-B, Arman; Fallahi, Kia; Pariz, Naser; Leung, Henry

    2009-03-01

    In recent years, chaotic secure communication and chaos synchronization have received ever-increasing attention. In this paper, for the first time, a fractional chaotic communication method using an extended fractional Kalman filter (EFKF) is presented. The chaotic synchronization is implemented by the EFKF design in the presence of channel additive noise and processing noise. Chaotic encoding yields a satisfactory, typical secure communication scheme. In the proposed system, security is enhanced by spreading the signal in frequency and encrypting it in the time domain. The main advantages of using fractional-order systems, namely increased nonlinearity and a spread power spectrum, are highlighted. To illustrate the effectiveness of the proposed scheme, a numerical example based on the fractional Lorenz dynamical system is presented and the results are compared to the integer-order Lorenz system.
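    The basic idea of chaotic masking can be sketched with an integer-order Lorenz system. This is only the principle: the paper's fractional-order dynamics and EFKF-based synchronization are beyond this sketch, so the receiver here is an idealized exact replica that shares the transmitter's initial state:

```python
import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the (integer-order) Lorenz system."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

# transmitter: mask a small message onto the chaotic x-component
state = (1.0, 1.0, 1.0)
message = np.sin(np.linspace(0, 2 * np.pi, 500))
carrier, tx = [], []
for m in message:
    state = lorenz_step(state)
    carrier.append(state[0])
    tx.append(state[0] + 0.01 * m)      # broadband carrier hides the message

# idealized receiver: a synchronized replica regenerates the carrier and subtracts it
recovered = (np.array(tx) - np.array(carrier)) / 0.01
assert np.allclose(recovered, message)
```

    In the actual scheme the receiver cannot share the state directly; the EFKF estimates it from the noisy received signal, which is where the filter design matters.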

  8. Hiding Techniques for Dynamic Encryption Text based on Corner Point

    NASA Astrophysics Data System (ADS)

    Abdullatif, Firas A.; Abdullatif, Alaa A.; al-Saffar, Amna

    2018-05-01

    A hiding technique for dynamic text encryption using an encoding table and a symmetric encryption method (the AES algorithm) is presented in this paper. The encoding table is generated dynamically from the MSBs of the cover-image points and is used as the first phase of encryption. The Harris corner-point algorithm is applied to the cover image to generate the corner points, which are used to generate a dynamic AES key for the second phase of text encryption. The embedding process places the data in the LSBs of the image pixels, excluding the Harris corner points, for greater robustness. Experimental results demonstrate that the proposed scheme achieves good embedding quality, error-free text recovery, and high PSNR values.
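    The embedding layer can be sketched as plain LSB substitution that skips a protected set of pixel indices (a stand-in for the Harris corner points; the AES phases and the MSB-derived encoding table are omitted, and the function names are illustrative):

```python
import numpy as np

def embed_lsb(cover, bits, skip):
    """Hide bits in the LSBs of cover pixels, skipping protected (corner) indices."""
    stego = cover.copy()
    flat = stego.ravel()
    positions = [i for i in range(cover.size) if i not in skip][:len(bits)]
    for pos, b in zip(positions, bits):
        flat[pos] = (flat[pos] & 0xFE) | b   # clear the LSB, then set it to the data bit
    return stego

def extract_lsb(stego, nbits, skip):
    """Read the hidden bits back from the same (non-corner) positions."""
    flat = stego.ravel()
    positions = [i for i in range(stego.size) if i not in skip][:nbits]
    return [int(flat[pos] & 1) for pos in positions]

cover = np.arange(64, dtype=np.uint8).reshape(8, 8)
skip = {0, 9, 18}                            # stand-ins for Harris corner locations
bits = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(cover, bits, skip)
assert extract_lsb(stego, len(bits), skip) == bits   # error-free recovery
```

    Each pixel changes by at most 1 gray level, which is why LSB embedding preserves PSNR; keeping the corner pixels untouched preserves the key-derivation features.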

  9. Secure multiparty computation of a comparison problem.

    PubMed

    Liu, Xin; Li, Shundong; Liu, Jian; Chen, Xiubo; Xu, Gang

    2016-01-01

    Private comparison is fundamental to secure multiparty computation. In this study, we propose novel protocols to privately determine [Formula: see text], or [Formula: see text] in one execution. First, a 0-1-vector encoding method is introduced to encode a number into a vector, and the Goldwasser-Micali encryption scheme is used to compare integers privately. Then, we propose a protocol that uses a geometric method to compare rational numbers privately; this protocol is information-theoretically secure. Using the simulation paradigm, we prove the privacy-preserving property of our protocols in the semi-honest model. The complexity analysis shows that our protocols are more efficient than previous solutions.
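    One common form of 0-1 vector encoding for comparison can be sketched as follows. This is an illustrative variant, not necessarily the paper's exact encoding, and the Goldwasser-Micali layer that makes the lookup private is omitted:

```python
# Encode x in {0..n} as a vector v with v[i] = 1 iff i <= x.
# Then "x >= y" reduces to reading one position of x's vector, an operation
# that a homomorphic encryption layer can perform over ciphertexts.

def encode(x, n):
    return [1 if i <= x else 0 for i in range(n + 1)]

def ge(vx, y):
    """x >= y holds exactly when position y of x's vector is 1."""
    return vx[y] == 1

n = 10
assert ge(encode(7, n), 3)        # 7 >= 3
assert not ge(encode(2, n), 5)    # 2 < 5
```

    The privacy comes entirely from the encryption: the holder of y probes position y of the encrypted vector without learning the other entries, and the holder of x never sees y.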

  10. Investigations of interpolation errors of angle encoders for high precision angle metrology

    NASA Astrophysics Data System (ADS)

    Yandayan, Tanfer; Geckeler, Ralf D.; Just, Andreas; Krause, Michael; Asli Akgoz, S.; Aksulu, Murat; Grubert, Bernd; Watanabe, Tsukasa

    2018-06-01

    Interpolation errors at small angular scales are caused by the subdivision of the angular interval between adjacent grating lines into smaller intervals when radial gratings are used in angle encoders. They are often a major error source in precision angle metrology, and better approaches for determining them at low levels of uncertainty are needed. Extensive investigations of the interpolation errors of different angle encoders with various interpolators and interpolation schemes were carried out by adapting the shearing method to the calibration of autocollimators with angle encoders. Results from laboratories with advanced angle-metrology capabilities are presented, acquired using four different high-precision angle encoder/interpolator/rotary-table combinations. State-of-the-art uncertainties down to 1 milliarcsec (5 nrad) were achieved for the determination of the interpolation errors using the shearing method, which provides simultaneous access to the angle deviations of the autocollimator and of the angle encoder. Compared to the calibration and measurement capabilities (CMCs) of the participants for autocollimators, the use of the shearing technique represents a substantial improvement in uncertainty, by a factor of up to 5, in addition to the precise determination of interpolation errors or their residuals (when compensated). The results are discussed in conjunction with the equipment used.

  11. Method for compression of data using single pass LZSS and run-length encoding

    DOEpatents

    Berlin, G.J.

    1997-12-23

    A method used preferably with LZSS-based compression methods for compressing a stream of digital data is disclosed. The method uses a run-length encoding scheme especially suited for data strings of identical data bytes having large run-lengths, such as data representing scanned images. The method reads an input data stream to determine the length of the data strings. Longer data strings are then encoded in one of two ways depending on the length of the string. For data strings having run-lengths less than 18 bytes, a cleared offset and the actual run-length are written to an output buffer and then a run byte is written to the output buffer. For data strings of 18 bytes or longer, a set offset and an encoded run-length are written to the output buffer and then a run byte is written to the output buffer. The encoded run-length is written in two parts obtained by dividing the run length by a factor of 255. The first of two parts of the encoded run-length is the quotient; the second part is the remainder. Data bytes that are not part of data strings of sufficient length are written directly to the output buffer. 3 figs.
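    The run-length layer described above can be sketched as follows. The container bytes here (a 0x00/0x01 flag standing in for the cleared/set offset, followed by the length field or fields and the run byte) are assumptions for illustration; the patent's actual bit layout, and the surrounding LZSS token stream that disambiguates literals from markers, are not reproduced:

```python
def encode_runs(data, min_run=4):
    """Run-length encode long strings of identical bytes per the described format."""
    out = bytearray()
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        run = j - i
        if run < min_run:
            out.extend(data[i:j])                 # short strings pass through as literals
        elif run < 18:
            out += bytes([0x00, run, data[i]])    # cleared offset, raw run-length, run byte
        else:
            q, r = divmod(run, 255)               # encoded run-length: quotient + remainder
            out += bytes([0x01, q, r, data[i]])   # set offset, two-part length, run byte
        i = j
    return bytes(out)

assert encode_runs(b"\x07" * 300) == bytes([0x01, 1, 45, 0x07])  # 300 = 255*1 + 45
assert encode_runs(b"\x07" * 10) == bytes([0x00, 10, 0x07])
assert encode_runs(b"abc") == b"abc"
```

    Splitting long run-lengths into a quotient and remainder of 255 keeps each length field within a single byte, which suits scanned-image data with very long runs.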

  12. Detecting weak position fluctuations from encoder signal using singular spectrum analysis.

    PubMed

    Xu, Xiaoqiang; Zhao, Ming; Lin, Jing

    2017-11-01

    A mechanical fault or defect will cause weak fluctuations in the position signal. Detection of such fluctuations via encoders can help determine the health condition and performance of the machine, and offers a promising alternative to vibration-based monitoring schemes. However, besides the fluctuations of interest, the encoder signal also contains a large trend and some measurement noise. In applications, the trend is normally several orders of magnitude larger than the fluctuations concerned, which makes it difficult to detect the weak fluctuations without signal distortion. In addition, the fluctuations can be complicated and amplitude-modulated under non-stationary working conditions. To overcome this issue, singular spectrum analysis (SSA) is proposed in this paper for detecting weak position fluctuations from the encoder signal. It enables a complicated encoder signal to be reduced into several interpretable components: a trend, a set of periodic fluctuations, and noise. A numerical simulation is given to demonstrate the performance of the method; it shows that SSA outperforms empirical mode decomposition (EMD) in terms of capability and accuracy. Moreover, linear encoder signals from a CNC machine tool are analyzed to determine the magnitudes and sources of fluctuations during feed motion. The proposed method is shown to be feasible and reliable for machinery condition monitoring. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
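    A minimal SSA decomposition, the heart of the method, is sketched below: embed the series into a trajectory matrix, take an SVD, and map each rank-1 term back to a series by diagonal averaging. The window length `L=40` and the toy trend-plus-ripple signal are arbitrary choices for illustration:

```python
import numpy as np

def ssa(x, L):
    """Basic SSA: embed, SVD, and reconstruct one series per singular component."""
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])   # trajectory matrix (L x K)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for k in range(len(s)):
        Xk = s[k] * np.outer(U[:, k], Vt[k])              # rank-1 piece
        # diagonal averaging (Hankelization) maps the matrix back to a series
        comp = np.array([np.mean(Xk[::-1].diagonal(i - (L - 1))) for i in range(N)])
        comps.append(comp)
    return comps

t = np.linspace(0, 1, 200)
signal = 3.0 * t + 0.05 * np.sin(2 * np.pi * 25 * t)   # large trend + weak fluctuation
comps = ssa(signal, L=40)
# the dominant component typically tracks the trend here; the ripple lands in later ones
assert np.allclose(sum(comps), signal, atol=1e-8)      # components sum back to the signal
```

    Because the decomposition is exact, grouping the trend components and discarding them leaves the weak fluctuation intact and undistorted, which is the property the paper exploits.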

  13. Multi-photon self-error-correction hyperentanglement distribution over arbitrary collective-noise channels

    NASA Astrophysics Data System (ADS)

    Gao, Cheng-Yan; Wang, Guan-Yu; Zhang, Hao; Deng, Fu-Guo

    2017-01-01

    We present a self-error-correction spatial-polarization hyperentanglement distribution scheme for N-photon systems in a hyperentangled Greenberger-Horne-Zeilinger state over arbitrary collective-noise channels. In our scheme, the errors of spatial entanglement can be first averted by encoding the spatial-polarization hyperentanglement into the time-bin entanglement with identical polarization and defined spatial modes before it is transmitted over the fiber channels. After transmission over the noisy channels, the polarization errors introduced by the depolarizing noise can be corrected resorting to the time-bin entanglement. Finally, the parties in quantum communication can in principle share maximally hyperentangled states with a success probability of 100%.

  14. Engineering high-order nonlinear dissipation for quantum superconducting circuits

    NASA Astrophysics Data System (ADS)

    Mundhada, S. O.; Grimm, A.; Touzard, S.; Shankar, S.; Minev, Z. K.; Vool, U.; Mirrahimi, M.; Devoret, M. H.

    Engineering nonlinear driven-dissipative processes is essential for quantum control. In the case of a harmonic oscillator, nonlinear dissipation can stabilize a decoherence-free manifold, leading to protected quantum information encoding. One possible approach to implement such nonlinear interactions is to combine the nonlinearities provided by Josephson circuits with parametric pump drives. However, it is usually hard to achieve strong nonlinearities while avoiding undesired couplings. Here we propose a scheme to engineer a four-photon drive and dissipation in a harmonic oscillator by cascading experimentally demonstrated two-photon processes. We also report experimental progress towards realization of such a scheme. Work supported by: ARO, ONR, AFOSR and YINQE.

  15. Alternative Line Coding Scheme with Fixed Dimming for Visible Light Communication

    NASA Astrophysics Data System (ADS)

    Niaz, M. T.; Imdad, F.; Kim, H. S.

    2017-01-01

    An alternative line coding scheme called fixed-dimming on/off keying (FD-OOK) is proposed for visible-light communication (VLC). FD-OOK reduces the flickering caused by a VLC transmitter and can maintain a 50% dimming level. A simple encoder and decoder are proposed that generate codes in which the number of bits representing one equals the number of bits representing zero. By keeping the numbers of ones and zeros equal, the variation in the brightness of the lighting is minimized and held constant at 50%, thereby reducing flicker in VLC. The performance of FD-OOK is analysed in terms of two parameters: spectral efficiency and power requirement.
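    A balanced block code in the spirit of FD-OOK can be sketched as follows. The parameters are assumptions for illustration (4 data bits mapped to 6-chip codewords with exactly three ones, so every codeword is at a 50% duty cycle); the paper's actual code table may differ:

```python
from itertools import combinations

def balanced_codebook(n=6):
    """All length-n words with exactly n/2 ones (equal ones and zeros)."""
    words = []
    for ones in combinations(range(n), n // 2):
        words.append(tuple(1 if i in ones else 0 for i in range(n)))
    return words

book = balanced_codebook()     # C(6,3) = 20 candidates; keep 16 for 4-bit symbols
table = book[:16]
decode = {w: i for i, w in enumerate(table)}

def encode(nibble):
    return table[nibble]

assert all(sum(w) == 3 for w in table)   # every codeword is half ones: fixed 50% dimming
assert decode[encode(9)] == 9            # round trip
```

    Because every transmitted codeword carries the same number of on-chips, the average optical power is constant regardless of the data, which is exactly what suppresses flicker.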

  16. Precision spectral manipulation of optical pulses using a coherent photon echo memory.

    PubMed

    Buchler, B C; Hosseini, M; Hétet, G; Sparkes, B M; Lam, P K

    2010-04-01

    Photon echo schemes are excellent candidates for high efficiency coherent optical memory. They are capable of high-bandwidth multipulse storage, pulse resequencing and have been shown theoretically to be compatible with quantum information applications. One particular photon echo scheme is the gradient echo memory (GEM). In this system, an atomic frequency gradient is induced in the direction of light propagation leading to a Fourier decomposition of the optical spectrum along the length of the storage medium. This Fourier encoding allows precision spectral manipulation of the stored light. In this Letter, we show frequency shifting, spectral compression, spectral splitting, and fine dispersion control of optical pulses using GEM.
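    The Fourier-encoding idea, in which the stored pulse is laid out along the medium as its spectrum so that spectral manipulations become simple operations on a Fourier transform, can be illustrated digitally. This is a signal-processing analogy only, not a simulation of the atomic memory; all values are illustrative.

```python
import numpy as np

# A stored Gaussian pulse envelope on a time grid.
n = 1024
t = np.linspace(-5.0, 5.0, n)
pulse = np.exp(-t**2)

# In GEM the pulse is held as its spectrum along the medium; digitally,
# that role is played by the FFT of the pulse.
spectrum = np.fft.fftshift(np.fft.fft(pulse))

# "Frequency shifting" the stored light = translating the spectrum.
shift_bins = 50
shifted = np.roll(spectrum, shift_bins)

recalled = np.fft.ifft(np.fft.ifftshift(shifted))
# A pure spectral translation leaves the pulse energy unchanged.
assert np.isclose(np.sum(np.abs(recalled)**2), np.sum(np.abs(pulse)**2))
```

    Spectral compression and splitting correspond, in the same analogy, to rescaling or partitioning the stored spectrum before recall.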

  17. Biometrics based key management of double random phase encoding scheme using error control codes

    NASA Astrophysics Data System (ADS)

    Saini, Nirmala; Sinha, Aloka

    2013-08-01

    In this paper, an optical security system has been proposed in which the key of the double random phase encoding technique is linked to the biometrics of the user to make it user specific. The error in recognition due to biometric variation is corrected by encoding the key using a BCH code. A user-specific shuffling key is used to increase the separation between the genuine and impostor Hamming distance distributions. This shuffling key is then further secured using RSA public key encryption to enhance the security of the system. An XOR operation is performed between the encoded key and the feature vector obtained from the biometrics. The RSA-encoded shuffling key and the data obtained from the XOR operation are stored in a token. The main advantage of the present technique is that key retrieval is possible only in the simultaneous presence of the token and the biometrics of the user, which not only authenticates the presence of the original input but also secures the key of the system. Computational experiments showed the effectiveness of the proposed technique for key retrieval in the decryption process by using the live biometrics of the user.
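    The key-binding step (error-correction-encoded key XORed with the biometric feature vector) follows the fuzzy-commitment pattern. A minimal sketch, with a 3x repetition code standing in for the paper's BCH code and random vectors standing in for real biometric features:

```python
import secrets

def repeat_encode(bits, r=3):
    # Stand-in for the paper's BCH encoder: a 3x repetition code that
    # corrects one flipped chip per group of three.
    return [b for b in bits for _ in range(r)]

def repeat_decode(chips, r=3):
    # Majority vote inside each group of r chips.
    return [1 if sum(chips[i:i + r]) > r // 2 else 0
            for i in range(0, len(chips), r)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

key = [secrets.randbelow(2) for _ in range(8)]        # secret encoding key bits
enrolled = [secrets.randbelow(2) for _ in range(24)]  # enrollment feature vector
token = xor(repeat_encode(key), enrolled)             # stored on the token

# A fresh biometric reading differs slightly from the enrolled one;
# the repetition code absorbs the single-chip discrepancy.
fresh = list(enrolled)
fresh[5] ^= 1
recovered = repeat_decode(xor(token, fresh))
assert recovered == key
```

    The token alone reveals nothing usable (it is the key masked by the features), and the biometric alone lacks the token, which is the "simultaneous presence" property described in the abstract.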

  18. Simultaneous multiplexing and encoding of multiple images based on a double random phase encryption system

    NASA Astrophysics Data System (ADS)

    Alfalou, Ayman; Mansour, Ali

    2009-09-01

    Nowadays, protecting information is a major issue in any transmission system, as shown by an increasing number of research papers related to this topic. Optical encoding methods, such as the double random phase (DRP) encryption system, are widely used and cited in the literature. DRP systems have a very simple principle and are easily applicable to most images (black-and-white, gray-level or color). Moreover, some applications require an enhanced encoding level based on a multi-encryption scheme that includes biometric keys (such as digital fingerprints). The enhancement should be achieved without increasing the amount of transmitted or stored information. To achieve that goal, a new approach for simultaneous multiplexing and encoding of several target images is developed in this manuscript. By introducing two additional security levels, our approach enhances the security of a classic DRP system. Our first security level consists in using several independent image-keys (randomly and structurally independent) along with a new multiplexing algorithm; at this level, several target images are encrypted simultaneously, which reduces the amount of information to be encoded. At the second level, a standard DRP system is included. Finally, our approach can detect whether any attempt at vandalism has been made on the transmitted encrypted images.

  19. Optical image encryption method based on incoherent imaging and polarized light encoding

    NASA Astrophysics Data System (ADS)

    Wang, Q.; Xiong, D.; Alfalou, A.; Brosseau, C.

    2018-05-01

    We propose an incoherent encoding system for image encryption based on a polarized encoding method combined with incoherent imaging. Incoherent imaging is the core component of this proposal: the incoherent point-spread function (PSF) of the imaging system serves as the main key, encoding the input intensity distribution through a convolution operation. An array of retarders and polarizers placed in the input plane of the imaging structure encrypts the polarization state of light based on Mueller polarization calculus. The proposal makes full use of the randomness of the polarization parameters and the incoherent PSF, so that a multidimensional key space is generated to resist illegal attacks. Mueller polarization calculus and incoherent illumination of the imaging structure ensure that only intensity information is manipulated. Another key advantage is that the complicated processing and recording associated with a complex-valued signal are avoided. The encoded information is just an intensity distribution, which is advantageous for data storage and transmission because the information expansion that accompanies conventional encryption methods is also avoided. The decryption procedure can be performed digitally or using optoelectronic devices. Numerical simulation tests demonstrate the validity of the proposed scheme.
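    The core operation, incoherent imaging as an intensity convolution with a random PSF key that only a holder of the PSF can invert, can be sketched numerically. This is a toy illustration with random data; the paper's polarization layer is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Input intensity image and a random incoherent PSF acting as the key.
img = rng.random((32, 32))
psf = rng.random((32, 32))
psf /= psf.sum()                 # incoherent PSFs are non-negative, unit sum

# Incoherent imaging = convolution of intensities; implemented here as a
# circular convolution via the FFT (the OTF is the PSF's transform).
otf = np.fft.fft2(psf)
encoded = np.real(np.fft.ifft2(np.fft.fft2(img) * otf))

# Digital decryption: inverse filtering with the known PSF key.
decoded = np.real(np.fft.ifft2(np.fft.fft2(encoded) / otf))
assert np.allclose(decoded, img)
assert encoded.min() > -1e-9     # intensity only: no complex-valued record
```

    The last assertion reflects the stated advantage: the ciphertext is a plain non-negative intensity map, so nothing complex-valued has to be recorded or transmitted.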

  20. Information Literacy Advocates: developing student skills through a peer support approach.

    PubMed

    Curtis, Ruth

    2016-12-01

    Since 2013/2014, an Information Literacy Advocates (ILA) scheme has been running at the University of Nottingham as an extracurricular module on the Nottingham Advantage Award programme. The Information Literacy Advocates scheme, which recruits medicine and health sciences students in their second year or above, aims to facilitate development of information literacy skills and confidence, as well as communication, organisation and teamwork, through the provision of peer support. Previous research indicates peer assistance effectively enhances such skills and is valued by fellow students who welcome the opportunity to approach more experienced students for help. This article, written by guest writer Ruth Curtis from the University of Nottingham, provides an overview of administering the ILA scheme and explores its impact on the Information Literacy Advocates, peers and librarians, and discusses future developments for taking the scheme forward. H. S. © 2016 Health Libraries Group.

  1. An evaluation of a Books on Prescription scheme in a UK public library authority.

    PubMed

    Furness, Rebecca; Casselden, Biddy

    2012-12-01

    This article discusses an evaluation of a Books on Prescription (BOP) scheme in a UK public library authority. The research was carried out by Rebecca Furness and submitted as a dissertation for the MSc Information and Library Management to Northumbria University. The dissertation was supervised by Biddy Casselden at Northumbria University and was awarded a distinction. The dissertation identified areas for development for BOP schemes and made specific recommendations that could make the schemes more accessible, enabling significant numbers of people to lead more fulfilling lives. Because this study focuses on mental health and the role that UK public libraries have in supporting well-being, it is a good illustration of the wide-ranging nature of subjects welcomed for the Dissertations into practice feature. © 2012 The authors. Health Information and Libraries Journal © 2012 Health Libraries Group.

  2. 75 FR 39943 - Notice of Debarment; Schools and Libraries Universal Service Support Mechanism

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-13

    ... release for federal crimes in connection with your participation in a scheme to defraud the E-Rate program... devised schemes to defraud school districts and the E-Rate program by having your co-conspirators steer E... company.\\7\\ You were also ordered to pay $271,716 in restitution to USAC for your role in the schemes.\\8...

  3. Low-power priority Address-Encoder and Reset-Decoder data-driven readout for Monolithic Active Pixel Sensors for tracker system

    NASA Astrophysics Data System (ADS)

    Yang, P.; Aglieri, G.; Cavicchioli, C.; Chalmet, P. L.; Chanlek, N.; Collu, A.; Gao, C.; Hillemanns, H.; Junique, A.; Kofarago, M.; Keil, M.; Kugathasan, T.; Kim, D.; Kim, J.; Lattuca, A.; Marin Tobon, C. A.; Marras, D.; Mager, M.; Martinengo, P.; Mazza, G.; Mugnier, H.; Musa, L.; Puggioni, C.; Rousset, J.; Reidt, F.; Riedler, P.; Snoeys, W.; Siddhanta, S.; Usai, G.; van Hoorne, J. W.; Yi, J.

    2015-06-01

    Active Pixel Sensors used in high-energy particle physics require low power consumption to reduce the detector material budget, a short integration time to reduce the possibility of pile-up, and fast readout to improve the detector data capability. To satisfy these requirements, a novel Address-Encoder and Reset-Decoder (AERD) asynchronous circuit for fast readout of a pixel matrix has been developed. The AERD data-driven readout architecture performs the address encoding and reset decoding with an arbitration tree, and allows us to read out only the hit pixels. Compared to the traditional rolling-shutter readout structure in Monolithic Active Pixel Sensors (MAPS), AERD achieves a short readout time and low power consumption, especially at low hit occupancies. The readout is controlled at the chip periphery with a signal synchronous with the clock, which allows a good separation between digital and analogue signals in the matrix and a reduction of the power consumption. The AERD circuit has been implemented in the TowerJazz 180 nm CMOS Imaging Sensor (CIS) process with full complementary CMOS logic in the pixel. It works at 10 MHz with a matrix height of 15 mm. The energy consumed to read out one pixel is around 72 pJ. A scheme to boost the readout speed to 40 MHz is also discussed. The sensor chip equipped with AERD has been produced and characterised. Test results including electrical beam measurements are presented.
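    A behavioral sketch of the data-driven idea (not the actual asynchronous circuit): an arbitration tree repeatedly grants the highest-priority hit pixel, whose address is encoded and whose state is then reset, so readout work scales with the number of hits rather than the matrix size.

```python
def aerd_readout(hits):
    """Return the addresses of hit pixels only, in priority order.

    Simplified software model of the AERD encode-then-reset cycle:
    an arbitration tree grants the lowest-address hit first, then
    that pixel is reset and arbitration repeats.
    """
    def arbitrate(lo, hi):
        # Winning (lowest) hit address in [lo, hi), or None if no hit.
        if hi - lo == 1:
            return lo if hits[lo] else None
        mid = (lo + hi) // 2
        left = arbitrate(lo, mid)          # left branch has priority
        return left if left is not None else arbitrate(mid, hi)

    addresses = []
    while True:
        addr = arbitrate(0, len(hits))
        if addr is None:
            break
        addresses.append(addr)
        hits[addr] = False                 # reset-decode: clear the pixel read
    return addresses

column = [False] * 16
for a in (3, 9, 14):
    column[a] = True
assert aerd_readout(column) == [3, 9, 14]
```

    In the rolling-shutter alternative, all 16 positions would be scanned regardless of occupancy; here only the three hits cost readout cycles.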

  4. A Lactuca universal hybridizer, and its use in creation of fertile interspecific somatic hybrids.

    PubMed

    Chupeau, M C; Maisonneuve, B; Bellec, Y; Chupeau, Y

    1994-10-28

    A Lactuca sativa cv. Ardente line heterozygous for a gene encoding resistance to kanamycin, a positive and dominant trait, was crossed with cv. Girelle, which is heterozygous for a recessive albinism marker. The resulting seeds yielded 25% albino seedlings, of which 50% were also resistant to kanamycin. Such plantlets (KR, a) grown in vitro were used for preparation of universal hybridizer protoplasts, since green buds that can develop on kanamycin containing-medium should result from fusion with any wild-type protoplast. To test the practicability of this selection scheme, we fused L. sativa KR, a protoplasts with protoplasts derived from various wild Lactuca as well as various other related species. Protoplast-derived cell colonies were selected for resistance to kanamycin at the regeneration stage. Green buds were regenerated after fusion with protoplasts of L. tatarica and of L. perennis. So far, 9 interspecific hybrid plants have been characterized morphologically. In addition, random amplified polymorphic DNA (RAPD) analysis with selected primers confirmed that these plants are indeed interspecific hybrids. Some plants are female-fertile and production of backcross progenies with L. sativa is in progress. Since many desirable traits such as resistances to viruses, bacteria and fungi (Bremia lactucae) have been characterized in wild Lactuca species, the use of somatic hybridization in breeding programmes now appears a practical possibility.

  5. Moving towards universal coverage in South Africa? Lessons from a voluntary government insurance scheme.

    PubMed

    Govender, Veloshnee; Chersich, Matthew F; Harris, Bronwyn; Alaba, Olufunke; Ataguba, John E; Nxumalo, Nonhlanhla; Goudge, Jane

    2013-01-24

    In 2005, the South African government introduced a voluntary, subsidised health insurance scheme for civil servants. In light of the global emphasis on universal coverage, empirical evidence is needed to understand the relationship between new health financing strategies and health care access thereby improving global understanding of these issues. This study analysed coverage of the South African government health insurance scheme, the population groups with low uptake, and the individual-level factors, as well as characteristics of the scheme, that influenced enrolment. Multi-stage random sampling was used to select 1,329 civil servants from the health and education sectors in four of South Africa's nine provinces. They were interviewed to determine factors associated with enrolment in the scheme. The analysis included both descriptive statistics and multivariate logistic regression. Notwithstanding the availability of a non-contributory option within the insurance scheme and access to privately-provided primary care, a considerable portion of socio-economically vulnerable groups remained uninsured (57.7% of the lowest salary category). Non-insurance was highest among men, black African or coloured ethnic groups, less educated and lower-income employees, and those living in informal-housing. The relatively poor uptake of the contributory and non-contributory insurance options was mostly attributed to insufficient information, perceived administrative challenges of taking up membership, and payment costs. Barriers to enrolment include insufficient information, unaffordability of payments and perceived administrative complexity. Achieving universal coverage requires good physical access to service providers and appropriate benefit options within pre-payment health financing mechanisms.

  6. Moving towards universal coverage in South Africa? Lessons from a voluntary government insurance scheme

    PubMed Central

    Govender, Veloshnee; Chersich, Matthew F.; Harris, Bronwyn; Alaba, Olufunke; Ataguba, John E.; Nxumalo, Nonhlanhla; Goudge, Jane

    2013-01-01

    Background In 2005, the South African government introduced a voluntary, subsidised health insurance scheme for civil servants. In light of the global emphasis on universal coverage, empirical evidence is needed to understand the relationship between new health financing strategies and health care access thereby improving global understanding of these issues. Objectives This study analysed coverage of the South African government health insurance scheme, the population groups with low uptake, and the individual-level factors, as well as characteristics of the scheme, that influenced enrolment. Methods Multi-stage random sampling was used to select 1,329 civil servants from the health and education sectors in four of South Africa's nine provinces. They were interviewed to determine factors associated with enrolment in the scheme. The analysis included both descriptive statistics and multivariate logistic regression. Results Notwithstanding the availability of a non-contributory option within the insurance scheme and access to privately-provided primary care, a considerable portion of socio-economically vulnerable groups remained uninsured (57.7% of the lowest salary category). Non-insurance was highest among men, black African or coloured ethnic groups, less educated and lower-income employees, and those living in informal-housing. The relatively poor uptake of the contributory and non-contributory insurance options was mostly attributed to insufficient information, perceived administrative challenges of taking up membership, and payment costs. Conclusion Barriers to enrolment include insufficient information, unaffordability of payments and perceived administrative complexity. Achieving universal coverage requires good physical access to service providers and appropriate benefit options within pre-payment health financing mechanisms. PMID:23364093

  7. Performance Analysis of a New Coded TH-CDMA Scheme in Dispersive Infrared Channel with Additive Gaussian Noise

    NASA Astrophysics Data System (ADS)

    Hamdi, Mazda; Kenari, Masoumeh Nasiri

    2013-06-01

    We consider a time-hopping based multiple access scheme introduced in [1] for communication over dispersive infrared links, and evaluate its performance for correlator and matched-filter receivers. In the investigated time-hopping code division multiple access (TH-CDMA) method, the transmitter employs a low-rate convolutional encoder. In this method, the bit interval is divided into Nc chips, and the output of the encoder, along with a PN sequence assigned to the user, determines the position of the chip in which the optical pulse is transmitted. We evaluate the multiple-access performance of the system for the correlation receiver considering background noise, which is modeled as white Gaussian noise due to its large intensity. For the correlation receiver, the results show that for a fixed processing gain, at high transmit power, where multiple-access interference is the dominant effect, the performance improves with the coding gain. But at low transmit power, where an increase in coding gain shortens the chip time and consequently causes more corruption due to channel dispersion, there exists an optimum value of the coding gain. For the matched filter, however, the performance always improves with the coding gain. The results show that the matched-filter receiver outperforms the correlation receiver in the considered cases. Our results also show that, for the same bandwidth and bit rate, the proposed system outperforms other multiple-access techniques, such as conventional CDMA and time-hopping schemes.
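    The chip-selection rule can be sketched as follows; the rate-1/2 encoder and the mapping from encoder output plus PN value to chip position are illustrative assumptions, since the abstract does not specify them.

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 convolutional encoder, constraint length 3 (an
    illustrative stand-in for the scheme's low-rate encoder)."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111
        out.append((bin(state & g1).count("1") & 1,
                    bin(state & g2).count("1") & 1))
    return out

def chip_positions(bits, pn, nc=16):
    """Each bit interval has nc chips; the 2-bit encoder output plus the
    user's PN value selects the chip that carries the optical pulse
    (hypothetical mapping)."""
    positions = []
    for (c1, c2), p in zip(conv_encode(bits), pn):
        symbol = (c1 << 1) | c2                  # encoder output, 0..3
        positions.append((symbol * (nc // 4) + p) % nc)
    return positions

pn = [5, 2, 11, 7]                               # user-specific PN sequence
pos = chip_positions([1, 0, 1, 1], pn)
assert len(pos) == 4 and all(0 <= p < 16 for p in pos)
```

    The sketch makes the paper's trade-off concrete: a stronger code (lower rate) means more coded symbols per bit interval and hence shorter chips, which is what exposes the pulses to channel dispersion at low transmit power.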

  8. Key management and encryption under the bounded storage model.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Draelos, Timothy John; Neumann, William Douglas; Lanzone, Andrew J.

    2005-11-01

    There are several engineering obstacles that need to be solved before key management and encryption under the bounded storage model can be realized. One of the critical obstacles hindering its adoption is the construction of a scheme that achieves reliable communication in the event that timing synchronization errors occur. One of the main accomplishments of this project was the development of a new scheme that solves this problem. We show in general that there exist message encoding techniques under the bounded storage model that provide an arbitrarily small probability of transmission error. We compute the maximum capacity of this channel using the unsynchronized key-expansion as side-channel information at the decoder and provide tight lower bounds for a particular class of key-expansion functions that are pseudo-invariant to timing errors. Using our results in combination with the encryption scheme of Dziembowski et al. [11], we can construct a scheme that solves the timing synchronization error problem. In addition to this work we conducted a detailed case study of current and future storage technologies. We analyzed the cost, capacity, and storage data rate of various technologies, so that precise security parameters can be developed for bounded storage encryption schemes. This will provide an invaluable tool for developing these schemes in practice.

  9. Amplitude-Phase Modulation, Topological Horseshoe and Scaling Attractor of a Dynamical System

    NASA Astrophysics Data System (ADS)

    Li, Chun-Lai; Li, Wen; Zhang, Jing; Xie, Yuan-Xi; Zhao, Yi-Bo

    2016-09-01

    A three-dimensional autonomous chaotic system is discussed in this paper. Some basic dynamical properties of the system, including the phase portrait, Poincaré map, power spectrum, Kaplan-Yorke dimension, Lyapunov exponent spectra, signal amplitude and topological horseshoe, are studied theoretically and numerically. The main finding of the analysis is that the signal amplitude can be modulated by controlling the coefficients of the linear term, cross-product term and squared term, either simultaneously or separately, and that the phase of x3 can be modulated by the product of the coefficients of the linear term and cross-product term. Furthermore, scaled chaotic attractors of this system are achieved by modified projective synchronization with an optimization-based linear coupling method, which is safer for secure communications than existing synchronization schemes since the scaling factors can be regarded as the security encoding key. Supported by Hunan Provincial Natural Science Foundation of China under Grant No. 2016JJ4036, University Natural Science Foundation of Jiangsu Province under Grant No. 14KJB120007 and the National Natural Science Foundation of China under Grant Nos. 11504176 and 11602084

  10. Quantum information processing by weaving quantum Talbot carpets

    NASA Astrophysics Data System (ADS)

    Farías, Osvaldo Jiménez; de Melo, Fernando; Milman, Pérola; Walborn, Stephen P.

    2015-06-01

    Single-photon interference due to passage through a periodic grating is considered in a novel proposal for processing D -dimensional quantum systems (quDits) encoded in the spatial degrees of freedom of light. We show that free-space propagation naturally implements basic single-quDit gates by means of the Talbot effect: an intricate time-space carpet of light in the near-field diffraction regime. By adding a diagonal phase gate, we show that a complete set of single-quDit gates can be implemented. We then introduce a spatially dependent beam splitter that allows for projective measurements in the computational basis and can be used for the implementation of controlled operations between two quDits. Universal quantum information processing can then be implemented with linear optics and ancilla photons via postselection and feed-forward following the original proposal of Knill-Laflamme and Milburn. Although we consider photons, our scheme should be directly applicable to a number of other physical systems. Interpretation of the Talbot effect as a quantum logic operation provides a beautiful and interesting way to visualize quantum computation through wave propagation and interference.

  11. Quantum key distribution using basis encoding of Gaussian-modulated coherent states

    NASA Astrophysics Data System (ADS)

    Huang, Peng; Huang, Jingzheng; Zhang, Zheshen; Zeng, Guihua

    2018-04-01

    Continuous-variable quantum key distribution (CVQKD) has been demonstrated to be viable for practical secure quantum cryptography. However, its performance is strongly restricted by the channel excess noise and the reconciliation efficiency. In this paper, we present a quantum key distribution (QKD) protocol that encodes the secret keys in the random choices between two measurement bases: the conjugate quadratures X and P. The employed encoding method can dramatically weaken the effects of channel excess noise and reconciliation efficiency on the performance of the QKD protocol. Consequently, the proposed scheme can tolerate much higher excess noise and enables a much longer secure transmission distance even at lower reconciliation efficiency. The proposal can significantly strengthen the performance of the known Gaussian-modulated CVQKD protocol and serve as a multiplier for practical secure quantum cryptography with continuous variables.

  12. Storing data encoded DNA in living organisms

    DOEpatents

    Wong, Pak C.; Wong, Kwong K.; Foote, Harlan P. [Richland, WA

    2006-06-06

    Current technologies allow the generation of artificial DNA molecules and/or the alteration of the DNA sequences of existing DNA molecules. With a careful coding scheme and arrangement, it is possible to encode important information as an artificial DNA strand and store it in a living host safely and permanently. This inventive technology can be used to identify origins and protect R&D investments. It can also be used in environmental research to track generations of organisms and observe the ecological impact of pollutants. Today, there are microorganisms that can survive under extreme conditions, and it is also advantageous to consider multicellular organisms as hosts for stored information. These living organisms provide memory housing and protection for stored data or information. The present invention provides for data storage in a living organism wherein at least one DNA sequence is encoded to represent data and incorporated into the living organism.
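    The basic coding step, representing binary data as a nucleotide sequence, can be sketched with a simple 2-bits-per-base mapping (an illustrative scheme, not the patent's actual coding arrangement):

```python
# Illustrative 2-bits-per-nucleotide code; the actual mapping used by
# the invention is not specified in the abstract.
TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
FROM_BASE = {v: k for k, v in TO_BASE.items()}

def encode_dna(message: str) -> str:
    """ASCII text -> bit string -> nucleotide sequence."""
    bits = "".join(f"{byte:08b}" for byte in message.encode("ascii"))
    return "".join(TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode_dna(strand: str) -> str:
    """Nucleotide sequence -> bit string -> ASCII text."""
    bits = "".join(FROM_BASE[b] for b in strand)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("ascii")

strand = encode_dna("OK")
assert decode_dna(strand) == "OK"
assert len(strand) == 8      # 2 chars x 8 bits / 2 bits per base
```

    A practical scheme would additionally need error-correcting redundancy and markers distinguishing the payload from the host's own DNA, which is what the patent's "careful coding scheme and arrangement" refers to.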

  13. Coherent-state constellations and polar codes for thermal Gaussian channels

    NASA Astrophysics Data System (ADS)

    Lacerda, Felipe; Renes, Joseph M.; Scholz, Volkher B.

    2017-06-01

    Optical communication channels are ultimately quantum mechanical in nature, and we must therefore look beyond classical information theory to determine their communication capacity as well as to find efficient encoding and decoding schemes of the highest rates. Thermal channels, which arise from linear coupling of the field to a thermal environment, are of particular practical relevance; their classical capacity has been recently established, but their quantum capacity remains unknown. While the capacity sets the ultimate limit on reliable communication rates, it does not promise that such rates are achievable by practical means. Here we construct efficiently encodable codes for thermal channels which achieve the classical capacity and the so-called Gaussian coherent information for transmission of classical and quantum information, respectively. Our codes are based on combining polar codes with a discretization of the channel input into a finite "constellation" of coherent states. Encoding of classical information can be done using linear optics.

  14. Research to Assembly Scheme for Satellite Deck Based on Robot Flexibility Control Principle

    NASA Astrophysics Data System (ADS)

    Guo, Tao; Hu, Ruiqin; Xiao, Zhengyi; Zhao, Jingjing; Fang, Zhikai

    2018-03-01

    Deck assembly is a critical quality control point in the final satellite assembly process; cable extrusion and structural collision problems during assembly directly affect the quality and schedule of satellite development. To address the problems in the deck assembly process, an assembly scheme for the satellite deck based on the robot flexibility (compliance) control principle is proposed in this paper. The scheme is first introduced; key technologies for end-force perception and flexible docking control within the scheme are then studied; the implementation process of the assembly scheme is described in detail; finally, an actual application case of the scheme is given. The results show that, compared with the traditional assembly scheme, the proposed scheme has clear advantages in work efficiency, reliability, universality and other aspects.

  15. Knowledge-based changes to health systems: the Thai experience in policy development.

    PubMed

    Tangcharoensathien, Viroj; Wibulpholprasert, Suwit; Nitayaramphong, Sanguan

    2004-10-01

    Over the past two decades the government in Thailand has adopted an incremental approach to extending health-care coverage to the population. It first offered coverage to government employees and their dependents, and then introduced a scheme under which low-income people were exempt from charges for health care. This scheme was later extended to include elderly people, children younger than 12 years of age and disabled people. A voluntary public insurance scheme was implemented to cover those who could afford to pay for their own care. Private sector employees were covered by the Social Health Insurance scheme, which was implemented in 1991. Despite these efforts, 30% of the population remained uninsured in 2001. In October of that year, the new government decided to embark on a programme to provide universal health-care coverage. This paper describes how research into health systems and health policy contributed to the move towards universal coverage. Data on health systems financing and functioning had been gathered before and after the founding of the Health Systems Research Institute in early 1990. In 1991, a contract capitation model had been used to launch the Social Health Insurance scheme. The advantages of using a capitation model are that it contains costs and provides an acceptable quality of service as opposed to the cost escalation and inefficiency that occur under fee-for-service reimbursement models, such as the one used to provide medical benefits to civil servants. An analysis of the implementation of universal coverage found that politics moved universal coverage onto the policy agenda during the general election campaign in January 2001. The capacity for research on health systems and policy to generate evidence guided the development of the policy and the design of the system at a later stage. 
Because the reformists who sought to bring about universal coverage (who were mostly civil servants in the Ministry of Public Health and members of nongovernmental organizations) were able to bridge the gap between researchers and politicians, an evidence-based political decision was made. Additionally, the media played a part in shaping the societal consensus on universal coverage.

  16. Knowledge-based changes to health systems: the Thai experience in policy development.

    PubMed Central

    Tangcharoensathien, Viroj; Wibulpholprasert, Suwit; Nitayaramphong, Sanguan

    2004-01-01

    Over the past two decades the government in Thailand has adopted an incremental approach to extending health-care coverage to the population. It first offered coverage to government employees and their dependents, and then introduced a scheme under which low-income people were exempt from charges for health care. This scheme was later extended to include elderly people, children younger than 12 years of age and disabled people. A voluntary public insurance scheme was implemented to cover those who could afford to pay for their own care. Private sector employees were covered by the Social Health Insurance scheme, which was implemented in 1991. Despite these efforts, 30% of the population remained uninsured in 2001. In October of that year, the new government decided to embark on a programme to provide universal health-care coverage. This paper describes how research into health systems and health policy contributed to the move towards universal coverage. Data on health systems financing and functioning had been gathered before and after the founding of the Health Systems Research Institute in early 1990. In 1991, a contract capitation model had been used to launch the Social Health Insurance scheme. The advantages of using a capitation model are that it contains costs and provides an acceptable quality of service as opposed to the cost escalation and inefficiency that occur under fee-for-service reimbursement models, such as the one used to provide medical benefits to civil servants. An analysis of the implementation of universal coverage found that politics moved universal coverage onto the policy agenda during the general election campaign in January 2001. The capacity for research on health systems and policy to generate evidence guided the development of the policy and the design of the system at a later stage. 
Because the reformists who sought to bring about universal coverage (who were mostly civil servants in the Ministry of Public Health and members of nongovernmental organizations) were able to bridge the gap between researchers and politicians, an evidence-based political decision was made. Additionally, the media played a part in shaping the societal consensus on universal coverage. PMID:15643796

  17. Mathematical model of blasting schemes management in mining operations in presence of random disturbances

    NASA Astrophysics Data System (ADS)

    Kazakova, E. I.; Medvedev, A. N.; Kolomytseva, A. O.; Demina, M. I.

    2017-11-01

    The paper presents a mathematical model for managing blasting schemes in the presence of random disturbances. Based on the proved lemmas and theorems, a stable control functional is formulated. A universal classification of blasting schemes is developed, with the following main classification attributes: the in-plan orientation of the rows of charging wells relative to the rock block; the presence of cuts in the blasting scheme; the separation of the well series into elements; and the blasting sequence. The periodic regularity of the transition from one short-delay blasting scheme to another is proved.

  18. University role in astronaut life support systems: Space power supply systems

    NASA Technical Reports Server (NTRS)

    Chin, L. Y.

    1972-01-01

    A brief description is given of the schemes suggested for providing electrical power aboard spacecraft. An attempt was made to list the major advantages and disadvantages of each scheme and to identify the particular areas in which further research appears likely to prove fruitful. Special attention was given to those research areas that appear suitable for college and university work. Chemical, solar, and nuclear power sources are considered together with the appropriate thermoelectric and turboelectric conversion systems.

  19. Sensor-less pseudo-sinusoidal drive for a permanent-magnet brushless ac motor

    NASA Astrophysics Data System (ADS)

    Liu, Li-Hsiang; Chern, Tzuen-Lih; Pan, Ping-Lung; Huang, Tsung-Mou; Tsay, Der-Min; Kuang, Jao-Hwa

    2012-04-01

    Precise rotor-position information is required for a permanent-magnet brushless ac motor (BLACM) drive. In the conventional sinusoidal drive method, either an encoder or a resolver is usually employed. For position sensor-less vector control schemes, the rotor flux estimation and torque components are obtained by complicated coordinate transformations. These computationally intensive methods are susceptible to current distortions and parameter variations. To reduce this complexity, this work presents a sensor-less pseudo-sinusoidal drive scheme with speed control for a three-phase BLACM. Based on the sinusoidal drive scheme, a floating period of each phase current is inserted for back electromotive force detection. The zero-crossing point is determined directly by the proposed scheme, and the rotor magnetic position and rotor speed can be estimated simultaneously. Several experiments for various active angle periods are undertaken. Furthermore, a current feedback control is included to minimize and compensate for torque fluctuations. The experimental results show that the proposed method has competitive performance compared with conventional drive methods for the BLACM. The proposed scheme is straightforward, bringing the benefits of sensor-less drive and negating the need for coordinate transformations in the operating process.
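    As a hedged illustration of the back-EMF zero-crossing step during the floating period (not the authors' implementation; the sampled waveform and times are hypothetical):

```python
# Sketch: locate the back-EMF zero-crossing during the floating period
# of one phase. The waveform and sample times are hypothetical.

def find_zero_crossing(times, bemf):
    """Return the interpolated time at which the sampled back-EMF
    changes sign, or None if no sign change is observed."""
    for i in range(1, len(bemf)):
        if bemf[i - 1] == 0:
            return times[i - 1]
        if bemf[i - 1] * bemf[i] < 0:
            # Linear interpolation between the two bracketing samples
            frac = bemf[i - 1] / (bemf[i - 1] - bemf[i])
            return times[i - 1] + frac * (times[i] - times[i - 1])
    return None

# Hypothetical samples: back-EMF falling through zero at t = 2.5
times = [0.0, 1.0, 2.0, 3.0, 4.0]
bemf = [0.5, 0.3, 0.1, -0.1, -0.3]
print(find_zero_crossing(times, bemf))  # -> 2.5
```

    In a real drive the samples would come from the ADC during the inserted floating interval, and the estimated crossing time would feed the position and speed estimators.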

  20. A Secret 3D Model Sharing Scheme with Reversible Data Hiding Based on Space Subdivision

    NASA Astrophysics Data System (ADS)

    Tsai, Yuan-Yu

    2016-03-01

    Secret sharing is a highly relevant research field, and its application to 2D images has been thoroughly studied. However, secret sharing schemes have not kept pace with the advances of 3D models. With the rapid development of 3D multimedia techniques, extending the application of secret sharing schemes to 3D models has become necessary. In this study, an innovative secret 3D model sharing scheme for point geometries based on space subdivision is proposed. Each point in the secret point geometry is first encoded into a series of integer values that fall within [0, p - 1], where p is a predefined prime number. The share values are derived by substituting the specified integer values for all coefficients of the sharing polynomial. The surface reconstruction and the sampling concepts are then integrated to derive a cover model with sufficient model complexity for each participant. Finally, each participant has a separate 3D stego model with embedded share values. Experimental results show that the proposed technique supports reversible data hiding and the share values have higher levels of privacy and improved robustness. This technique is simple and has proven to be a feasible secret 3D model sharing scheme.
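    The abstract derives share values by substituting the encoded integer values for the coefficients of a sharing polynomial modulo a prime p. As a hedged illustration of the underlying threshold mechanism only, here is a standard Shamir-style (k, n) sketch with a random polynomial (the prime, threshold and values are hypothetical, not the paper's construction):

```python
# Sketch of Shamir-style (k, n) sharing of one encoded integer
# modulo a prime p. All parameters here are hypothetical.
import random

P = 257          # small illustrative prime
K, N = 3, 5      # threshold k, number of participants n

def make_shares(secret, k=K, n=N, p=P):
    """Share one integer in [0, p-1] as n points on a random
    degree-(k-1) polynomial whose constant term is the secret."""
    coeffs = [secret] + [random.randrange(p) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p)
            for x in range(1, n + 1)]

def reconstruct(shares, p=P):
    """Lagrange interpolation at x = 0 over GF(p)."""
    secret = 0
    for xj, yj in shares:
        num, den = 1, 1
        for xm, _ in shares:
            if xm != xj:
                num = num * (-xm) % p
                den = den * (xj - xm) % p
        secret = (secret + yj * num * pow(den, -1, p)) % p
    return secret

shares = make_shares(123)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares recover 123
```

    The paper's scheme additionally embeds each participant's share values in a separate 3D stego model; that embedding step is not modelled here.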

  1. Tachyon field in loop quantum cosmology: An example of traversable singularity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Lifang; Zhu Jianyang

    2009-06-15

    Loop quantum cosmology (LQC) predicts a nonsingular evolution of the universe through a bounce in the high energy region. But LQC has an ambiguity about the quantization scheme. Recently, the authors in [Phys. Rev. D 77, 124008 (2008)] proposed a new quantization scheme. Similar to others, this new quantization scheme also replaces the big bang singularity with the quantum bounce. More interestingly, it introduces a quantum singularity, which is traversable. We investigate this novel dynamics quantitatively with a tachyon scalar field, which gives us a concrete example. Our result shows that our universe can evolve through the quantum singularity regularly, which is different from the classical big bang singularity. So this singularity is only a weak singularity.

  2. Optical Implementation of the Optimal Universal and Phase-Covariant Quantum Cloning Machines

    NASA Astrophysics Data System (ADS)

    Ye, Liu; Song, Xue-Ke; Yang, Jie; Yang, Qun; Ma, Yang-Cheng

    Quantum cloning relates to the security of quantum computation and quantum communication. In this paper, we first propose a feasible unified scheme to implement optimal 1 → 2 universal, 1 → 2 asymmetric and symmetric phase-covariant, and 1 → 2 economical phase-covariant quantum cloning machines using only a beam splitter. A 1 → 3 economical phase-covariant quantum cloning machine can then be realized by adding another beam splitter, in the context of linear optics. The scheme is based on the interference of two photons on a beam splitter with different splitting ratios for the vertical and horizontal polarization components. It is shown that, under certain conditions, the scheme is feasible with current experimental technology.

  3. The Perception of Generic Capabilities and Learning Environment among Undergraduate Nursing Students after the Implementation of a Senior Intake Scheme

    ERIC Educational Resources Information Center

    Chan, Carmen W. H.; Leung, Doris Y. P.; Lee, Diana T. F.; Chair, Sek Ying; Ip, Wan Yim; Sit, Janet W. H.

    2018-01-01

    Hong Kong has introduced a senior intake admission scheme which is similar to the US model of credit transfer from community college programmes to university bachelor programmes. The study aimed to assess the outcomes, in terms of generic capabilities, of introducing a senior intake articulation scheme to a bachelor of nursing curriculum in Hong…

  4. 75 FR 20596 - Notice of Debarment; Schools and Libraries Universal Service Support Mechanism

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-20

    ... participation in a scheme to defraud the E-Rate program.\\6\\ You held yourself out as an E-Rate consultant and salesperson and admitted that you and others devised a scheme to defraud school districts and the E-Rate... your companies.\\7\\ You were also ordered to pay $238,609 in restitution for your role in the scheme.\\8...

  5. Binary counting with chemical reactions.

    PubMed

    Kharam, Aleksandra; Jiang, Hua; Riedel, Marc; Parhi, Keshab

    2011-01-01

    This paper describes a scheme for implementing a binary counter with chemical reactions. The value of the counter is encoded by logical values of "0" and "1" that correspond to the absence and presence of specific molecular types, respectively. It is incremented when molecules of a trigger type are injected. Synchronization is achieved with reactions that produce a sustained three-phase oscillation. This oscillation plays a role analogous to a clock signal in digital electronics. Quantities are transferred between molecular types in different phases of the oscillation. Unlike all previous schemes for chemical computation, this scheme is dependent only on coarse rate categories for the reactions ("fast" and "slow"). Given such categories, the computation is exact and independent of the specific reaction rates. Although conceptual for the time being, the methodology has potential applications in domains of synthetic biology such as biochemical sensing and drug delivery. We are exploring DNA-based computation via strand displacement as a possible experimental chassis.
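    Setting the chemical kinetics aside, the logical behaviour being implemented is ordinary binary incrementing with carry propagation, which can be sketched as follows (a toy model of the counter logic, not the reaction network itself):

```python
# Sketch: the logical behaviour the chemical counter implements.
# Each bit is "present" (1) or "absent" (0), mirroring the presence
# or absence of a molecular type; injecting a trigger increments.

def increment(bits):
    """Increment a little-endian list of bits with carry propagation."""
    carry = 1
    out = []
    for b in bits:
        out.append(b ^ carry)   # bit toggles while the carry ripples
        carry = b & carry       # carry continues only past a 1 bit
    if carry:
        out.append(1)           # counter grows by one bit on overflow
    return out

state = [0, 0, 0]               # counter value 0
for _ in range(5):              # inject five trigger pulses
    state = increment(state)
print(state)                    # -> [1, 0, 1] (binary 5, little-endian)
```

    In the chemical scheme each toggle-and-carry step is realized by reactions gated on the phases of the three-phase oscillation, so only the coarse "fast"/"slow" rate categories matter.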

  6. Correlation between diffusion kurtosis and NODDI metrics in neonates and young children

    NASA Astrophysics Data System (ADS)

    Ahmed, Shaheen; Wang, Zhiyue J.; Chia, Jonathan M.; Rollins, Nancy K.

    2016-03-01

    Diffusion Tensor Imaging (DTI) uses a single-shell gradient encoding scheme for studying brain tissue diffusion. NODDI (Neurite Orientation Dispersion and Density Imaging) incorporates a gradient scheme with multiple b-values, which is used to characterize neurite density and the coherence of neuron fiber orientations. Similarly, diffusion kurtosis imaging (DKI) also uses a multiple-shell scheme to quantify non-Gaussian diffusion, but does not assume a tissue model as NODDI does. In this study we investigate the connection between metrics derived by NODDI and DKI in children aged 46 weeks to 6 years. We correlate the NODDI metrics and kurtosis measures from the same ROIs in multiple brain regions. We compare the range of these metrics between neonates (46-47 weeks), infants (2-10 months) and young children (2-6 years). We find strong correlations between neurite density and mean kurtosis, and between orientation dispersion and kurtosis fractional anisotropy (FA), in pediatric brain imaging.

  7. Virtual optical network mapping and core allocation in elastic optical networks using multi-core fibers

    NASA Astrophysics Data System (ADS)

    Xuan, Hejun; Wang, Yuping; Xu, Zhanqi; Hao, Shanshan; Wang, Xiaoli

    2017-11-01

    Virtualization technology can greatly improve the efficiency of networks by allowing virtual optical networks to share the resources of the physical networks. However, it faces several challenges, such as finding efficient strategies for virtual node mapping, virtual link mapping and spectrum assignment. The problem is even more complex and challenging when the physical elastic optical networks use multi-core fibers. To tackle these challenges, we establish a constrained optimization model to determine the optimal schemes of optical network mapping, core allocation and spectrum assignment. To solve the model efficiently, a tailor-made encoding scheme and crossover and mutation operators are designed. Based on these, an efficient genetic algorithm is proposed to obtain the optimal schemes for virtual node mapping, virtual link mapping and core allocation. The simulation experiments are conducted on three widely used networks, and the experimental results show the effectiveness of the proposed model and algorithm.
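    A minimal genetic-algorithm skeleton of the kind described might look like the following. The encoding, fitness function and parameters here are hypothetical stand-ins, not the paper's tailor-made operators:

```python
# Sketch of a genetic algorithm over integer chromosomes, e.g. one
# gene per virtual node giving its physical-node assignment.
# Encoding, fitness and parameters here are hypothetical.
import random

random.seed(0)
GENES, VALUES, POP, GENERATIONS = 8, 4, 20, 60

def fitness(chrom):
    # Hypothetical objective: prefer assignments spread across nodes
    return len(set(chrom))

def crossover(a, b):
    cut = random.randrange(1, GENES)      # single-point crossover
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.1):
    return [random.randrange(VALUES) if random.random() < rate else g
            for g in chrom]

pop = [[random.randrange(VALUES) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)   # elitist selection
    elite = pop[:POP // 2]
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(POP - len(elite))]
    pop = elite + children

best = max(pop, key=fitness)
print(best, fitness(best))
```

    The paper's real fitness would score spectrum usage and core-allocation constraints rather than this toy spread objective, and its operators are designed to keep chromosomes feasible.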

  8. Threshold secret sharing scheme based on phase-shifting interferometry.

    PubMed

    Deng, Xiaopeng; Shi, Zhengang; Wen, Wei

    2016-11-01

    We propose a new method for secret image sharing with the (3,N) threshold scheme based on phase-shifting interferometry. The secret image, which is multiplied with an encryption key in advance, is first encrypted by using Fourier transformation. Then, the encoded image is shared into N shadow images based on the recording principle of phase-shifting interferometry. Based on the reconstruction principle of phase-shifting interferometry, any three or more shadow images can retrieve the secret image, while any two or fewer shadow images cannot obtain any information of the secret image. Thus, a (3,N) threshold secret sharing scheme can be implemented. Compared with our previously reported method, the algorithm of this paper is suited for not only a binary image but also a gray-scale image. Moreover, the proposed algorithm can obtain a larger threshold value t. Simulation results are presented to demonstrate the feasibility of the proposed method.

  9. A comparative study of SAR data compression schemes

    NASA Technical Reports Server (NTRS)

    Lambert-Nebout, C.; Besson, O.; Massonnet, D.; Rogron, B.

    1994-01-01

    The amount of data collected from spaceborne remote sensing has increased substantially in recent years. During the same period, the ability to store or transmit data has not increased as quickly. There is therefore a growing interest in developing compression schemes that provide both higher compression ratios and lower encoding/decoding errors. In the case of the spaceborne Synthetic Aperture Radar (SAR) earth observation system developed by the French Space Agency (CNES), the volume of data to be processed will exceed both the on-board storage capacity and the telecommunication link. The objective of this paper is twofold: to present various compression schemes adapted to SAR data, and to define a set of evaluation criteria and compare the algorithms on SAR data. In this paper, we review two classical methods of SAR data compression and propose novel approaches based on Fourier transforms and spectrum coding.

  10. Priority-based methods for reducing the impact of packet loss on HEVC encoded video streams

    NASA Astrophysics Data System (ADS)

    Nightingale, James; Wang, Qi; Grecos, Christos

    2013-02-01

    The rapid growth in the use of video streaming over IP networks has outstripped the rate at which new network infrastructure has been deployed. These bandwidth-hungry applications now comprise a significant part of all Internet traffic and present major challenges for network service providers. The situation is more acute in mobile networks, where the available bandwidth is often limited. Work towards the standardisation of High Efficiency Video Coding (HEVC), the next generation video coding scheme, is currently on track for completion in 2013. HEVC offers the prospect of a 50% improvement in compression over the current H.264 Advanced Video Coding standard (H.264/AVC) at the same quality. However, there has been very little published research on HEVC streaming or the challenges of delivering HEVC streams in resource-constrained network environments. In this paper we consider the problem of adapting an HEVC encoded video stream to meet the bandwidth limitations of a mobile network environment. Video sequences were encoded using the Test Model under Consideration (TMuC HM6) for HEVC. Network abstraction layer (NAL) units were packetized, on a one NAL unit per RTP packet basis, and transmitted over a realistic hybrid wired/wireless testbed configured with dynamically changing network path conditions and multiple independent network paths from the streamer to the client. Two different schemes for the prioritisation of RTP packets, based on the NAL units they contain, have been implemented and empirically compared using a range of video sequences, encoder configurations, bandwidths and network topologies. In the first prioritisation method the importance of an RTP packet was determined by the type of picture and the temporal switching point information carried in the NAL unit header. Packets containing parameter set NAL units and video coding layer (VCL) NAL units of the instantaneous decoder refresh (IDR) and clean random access (CRA) pictures were given the highest priority, followed by NAL units containing pictures used as reference pictures from which others can be predicted. The second method assigned a priority to each NAL unit based on the rate-distortion cost of the VCL coding units contained in the NAL unit; the sum of the rate-distortion costs of each coding unit contained in a NAL unit was used as the priority weighting. The preliminary results of extensive experiments have shown that both prioritisation schemes offered an improvement in PSNR, when comparing original and decoded received streams, over uncontrolled packet loss. The first method consistently delivered a significant average improvement of 0.97 dB over the uncontrolled scenario, while the second method provided a measurable, but less consistent, improvement across the range of testing conditions and encoder configurations.
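    The first, type-based prioritisation rule can be illustrated with a small sketch. The category names and priority values below are illustrative of the rule, not the authors' exact table:

```python
# Sketch of type-based RTP packet prioritisation: parameter sets and
# random-access pictures first, then reference pictures, then the rest.
# Categories and priority values are illustrative, not the paper's table.

PRIORITY = {
    "parameter_set": 0,   # parameter set NAL units: highest priority
    "IDR": 0,             # instantaneous decoder refresh pictures
    "CRA": 0,             # clean random access pictures
    "reference": 1,       # pictures others are predicted from
    "non_reference": 2,   # lowest priority, dropped first
}

def schedule(packets):
    """Order RTP packets so low-priority ones are dropped first under
    bandwidth constraints (stable sort preserves stream order)."""
    return sorted(packets, key=lambda p: PRIORITY[p["nal_type"]])

stream = [
    {"seq": 1, "nal_type": "parameter_set"},
    {"seq": 2, "nal_type": "IDR"},
    {"seq": 3, "nal_type": "non_reference"},
    {"seq": 4, "nal_type": "reference"},
]
print([p["seq"] for p in schedule(stream)])  # -> [1, 2, 4, 3]
```

    The second, rate-distortion-based method would replace the fixed table with a per-packet numeric weight (the summed rate-distortion cost of the contained coding units) as the sort key.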

  11. Multichannel Compressive Sensing MRI Using Noiselet Encoding

    PubMed Central

    Pawar, Kamlesh; Egan, Gary; Zhang, Jingxin

    2015-01-01

    The incoherence between measurement and sparsifying transform matrices and the restricted isometry property (RIP) of the measurement matrix are two of the key factors in determining the performance of compressive sensing (CS). In CS-MRI, the randomly under-sampled Fourier matrix is used as the measurement matrix and the wavelet transform is usually used as the sparsifying transform matrix. However, the incoherence between the randomly under-sampled Fourier matrix and the wavelet matrix is not optimal, which can deteriorate the performance of CS-MRI. Using the mathematical result that noiselets are maximally incoherent with wavelets, this paper introduces noiselet unitary bases as the measurement matrix to improve the incoherence and RIP in CS-MRI. Based on an empirical RIP analysis that compares the multichannel noiselet and multichannel Fourier measurement matrices in CS-MRI, we propose a multichannel compressive sensing (MCS) framework to take advantage of the multichannel data acquisition used in MRI scanners. Simulations are presented in the MCS framework to compare the performance of noiselet encoding reconstructions and Fourier encoding reconstructions at different acceleration factors. The comparisons indicate that the multichannel noiselet measurement matrix has better RIP than its Fourier counterpart, and that noiselet-encoded MCS-MRI outperforms Fourier-encoded MCS-MRI in preserving image resolution and can achieve higher acceleration factors. To demonstrate the feasibility of the proposed noiselet encoding scheme, a pulse sequence with tailored spatially selective RF excitation pulses was designed and implemented on a 3T scanner to acquire data in the noiselet domain from a phantom and a human brain. The results indicate that noiselet encoding preserves image resolution better than Fourier encoding. PMID:25965548

  12. Advances in Optical Fiber-Based Faraday Rotation Diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, A D; McHale, G B; Goerz, D A

    2009-07-27

    In the past two years, we have used optical fiber-based Faraday Rotation Diagnostics (FRDs) to measure pulsed currents on several dozen capacitively driven and explosively driven pulsed power experiments. We have made simplifications to the necessary hardware for quadrature-encoded polarization analysis, including development of an all-fiber analysis scheme. We have developed a numerical model that is useful for predicting and quantifying deviations from the ideal diagnostic response. We have developed a method of analyzing quadrature-encoded FRD data that is simple to perform and offers numerous advantages over several existing methods. When comparison has been possible, we have seen good agreement with our FRDs and other current sensors.

  13. Error Suppression for Hamiltonian-Based Quantum Computation Using Subsystem Codes

    NASA Astrophysics Data System (ADS)

    Marvian, Milad; Lidar, Daniel A.

    2017-01-01

    We present general conditions for quantum error suppression for Hamiltonian-based quantum computation using subsystem codes. This involves encoding the Hamiltonian performing the computation using an error detecting subsystem code and the addition of a penalty term that commutes with the encoded Hamiltonian. The scheme is general and includes the stabilizer formalism of both subspace and subsystem codes as special cases. We derive performance bounds and show that complete error suppression results in the large penalty limit. To illustrate the power of subsystem-based error suppression, we introduce fully two-local constructions for protection against local errors of the swap gate of adiabatic gate teleportation and the Ising chain in a transverse field.

  14. Error Suppression for Hamiltonian-Based Quantum Computation Using Subsystem Codes.

    PubMed

    Marvian, Milad; Lidar, Daniel A

    2017-01-20

    We present general conditions for quantum error suppression for Hamiltonian-based quantum computation using subsystem codes. This involves encoding the Hamiltonian performing the computation using an error detecting subsystem code and the addition of a penalty term that commutes with the encoded Hamiltonian. The scheme is general and includes the stabilizer formalism of both subspace and subsystem codes as special cases. We derive performance bounds and show that complete error suppression results in the large penalty limit. To illustrate the power of subsystem-based error suppression, we introduce fully two-local constructions for protection against local errors of the swap gate of adiabatic gate teleportation and the Ising chain in a transverse field.

  15. Single-photon continuous-variable quantum key distribution based on the energy-time uncertainty relation.

    PubMed

    Qi, Bing

    2006-09-15

    We propose a new quantum key distribution protocol in which information is encoded on continuous variables of a single photon. In this protocol, Alice randomly encodes her information on either the central frequency of a narrowband single-photon pulse or the time delay of a broadband single-photon pulse, while Bob randomly chooses to do either frequency measurement or time measurement. The security of this protocol rests on the energy-time uncertainty relation, which prevents Eve from simultaneously determining both frequency and time information with arbitrarily high resolution. Since no interferometer is employed in this scheme, it is more robust against various channel noises, such as polarization and phase fluctuations.

  16. Scalable boson sampling with time-bin encoding using a loop-based architecture.

    PubMed

    Motes, Keith R; Gilchrist, Alexei; Dowling, Jonathan P; Rohde, Peter P

    2014-09-19

    We present an architecture for arbitrarily scalable boson sampling using two nested fiber loops. The architecture has fixed experimental complexity, irrespective of the size of the desired interferometer, whose scale is limited only by fiber and switch loss rates. The architecture employs time-bin encoding, whereby the incident photons form a pulse train, which enters the loops. Dynamically controlled loop coupling ratios allow the construction of the arbitrary linear optics interferometers required for boson sampling. The architecture employs only a single point of interference and may thus be easier to stabilize than other approaches. The scheme has polynomial complexity and could be realized using demonstrated present-day technologies.

  17. [The virtual university in medicine. Context, concepts, specifications, users' manual].

    PubMed

    Duvauferrier, R; Séka, L P; Rolland, Y; Rambeau, M; Le Beux, P; Morcet, N

    1998-09-01

    The widespread use of Web servers, the emergence of interactive functions, the possibility of credit card payment via the Internet, the requirement for continuing education, and the consequent need for a computer link into the health-care network have prompted the development of a virtual university scheme on the Internet. The Virtual University of Radiology is not only a computer-assisted teaching tool with a set of attractive features, but also a powerful engine for organizing, distributing and controlling the medical knowledge available on the Web server. The scheme provides access to general information, a secretary's office for enrollment, and the Virtual University itself, with its library, image database, forums for subspecialties and clinical case reports, an evaluation module, and various guides and help tools for diagnosis, prescription and indexing. Currently the Virtual University of Radiology offers diagnostic imaging, but the scheme can also be used by other specialties and in general practice.

  18. A Scalable Multimedia Streaming Scheme with CBR-Transmission of VBR-Encoded Videos over the Internet

    ERIC Educational Resources Information Center

    Kabir, Md. H.; Shoja, Gholamali C.; Manning, Eric G.

    2006-01-01

    Streaming audio/video contents over the Internet requires large network bandwidth and timely delivery of media data. A streaming session is generally long and also needs a large I/O bandwidth at the streaming server. A streaming server, however, has limited network and I/O bandwidth. For this reason, a streaming server alone cannot scale a…

  19. Teleportation with insurance of an entangled atomic state via cavity decay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chimczak, Grzegorz; Tanas, Ryszard; Miranowicz, Adam

    2005-03-01

    We propose a scheme to teleport an entangled state of two {lambda}-type three-level atoms via photons. The teleportation protocol involves the local redundant encoding protecting the initial entangled state and allowing for repeating the detection until quantum information transfer is successful. We also show how to manipulate a state of many {lambda}-type atoms trapped in a cavity.

  20. Higher-Order Motion-Compensation for In Vivo Cardiac Diffusion Tensor Imaging in Rats

    PubMed Central

    Welsh, Christopher L.; DiBella, Edward V. R.; Hsu, Edward W.

    2015-01-01

    Motion of the heart has complicated in vivo applications of cardiac diffusion MRI and diffusion tensor imaging (DTI), especially in small animals such as rats where ultra-high-performance gradient sets are currently not available. Even with velocity compensation via, for example, bipolar encoding pulses, the variable shot-to-shot residual motion-induced spin phase can still give rise to pronounced artifacts. This study presents diffusion-encoding schemes that are designed to compensate for higher-order motion components, including acceleration and jerk, which also have the desirable practical features of minimal TEs and high achievable b-values. The effectiveness of these schemes was verified numerically on a realistic beating heart phantom, and demonstrated empirically with in vivo cardiac diffusion MRI in rats. Compensation for acceleration, and lower motion components, was found to be both necessary and sufficient for obtaining diffusion-weighted images of acceptable quality and SNR, which yielded the first in vivo cardiac DTI demonstrated in the rat. These findings suggest that compensation for higher order motion, particularly acceleration, can be an effective alternative solution to high-performance gradient hardware for improving in vivo cardiac DTI. PMID:25775486
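    The moment-nulling idea behind such higher-order compensation can be checked numerically: an idealised bipolar lobe pair nulls the zeroth gradient moment (position term) but leaves the first moment (velocity term) nonzero, while a 1:-2:1 three-lobe waveform nulls both. A sketch under these idealised assumptions (rectangular lobes, arbitrary units; not an actual pulse design):

```python
# Numerical check of gradient-moment nulling with idealised
# rectangular lobes (arbitrary units, not an actual pulse design).
import numpy as np

t = np.linspace(0, 3, 3000, endpoint=False)
dt = t[1] - t[0]

def moments(g):
    """Zeroth (M0) and first (M1) gradient moments of waveform g(t)."""
    return np.sum(g) * dt, np.sum(g * t) * dt

bipolar = np.where(t < 1.5, 1.0, -1.0)                   # + then - lobe
threelobe = np.select([t < 1, t < 2], [1.0, -2.0], 1.0)  # 1 : -2 : 1

m0_b, m1_b = moments(bipolar)      # M0 ~ 0, but M1 is nonzero
m0_t, m1_t = moments(threelobe)    # M0 ~ 0 and M1 ~ 0
print(m0_b, m1_b, m0_t, m1_t)
```

    Extending the same idea, nulling the second moment (acceleration) requires additional lobes, which is why acceleration- and jerk-compensated diffusion encoding lengthens the waveform and motivates the minimal-TE designs discussed in the abstract.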

  1. Design of frequency-encoded data-based optical master-slave-JK flip-flop using polarization switch

    NASA Astrophysics Data System (ADS)

    Mandal, Sumana; Mandal, Dhoumendra; Mandal, Mrinal Kanti; Garai, Sisir Kumar

    2017-06-01

    An optical data processing and communication system provides enormous potential bandwidth and very high processing speed, and it can fulfill the demands of the present generation. For an optical computing system, several data processing units that work in the optical domain are essential. Memory elements are undoubtedly essential for storing any information. Optical flip-flops can store one bit of optical information, and from flip-flops, registers and counters can be developed. Here, the authors propose an optical master-slave (MS)-JK flip-flop built from two-input and three-input optical NAND gates. The optical NAND gates have been developed using semiconductor optical amplifiers (SOAs). The nonlinear polarization switching property of an SOA is exploited here, so that it acts as a polarization switch in the proposed scheme. A frequency-encoding technique is adopted for representing data: a specific frequency of the optical signal represents a binary data bit. This representation is helpful because frequency is a fundamental property of a signal and remains unaltered during reflection, refraction, absorption, etc., as the data propagate. The simulation results support the feasibility of the scheme.

  2. Impact of MPEG-4 3D mesh coding on watermarking algorithms for polygonal 3D meshes

    NASA Astrophysics Data System (ADS)

    Funk, Wolfgang

    2004-06-01

    The MPEG-4 multimedia standard addresses the scene-based composition of audiovisual objects. Natural and synthetic multimedia content can be mixed and transmitted over narrow and broadband communication channels. Synthetic natural hybrid coding (SNHC) within MPEG-4 provides tools for 3D mesh coding (3DMC). We investigate the robustness of two different 3D watermarking algorithms for polygonal meshes with respect to 3DMC. The first algorithm is a blind detection scheme designed for labelling applications that require high bandwidth and low robustness. The second algorithm is a robust non-blind one-bit watermarking scheme intended for copyright protection applications. Both algorithms have been proposed by Benedens. We expect 3DMC to have an impact on the watermarked 3D meshes, as the algorithms used for our simulations work on vertex coordinates to encode the watermark. We use the 3DMC implementation provided with the MPEG-4 reference software and the Princeton Shape Benchmark model database for our simulations. The watermarked models are sent through the 3DMC encoder and decoder, and the watermark decoding process is performed. For each algorithm under consideration we examine the detection properties as a function of the quantization of the vertex coordinates.

  3. Multi-dimensional quantum state sharing based on quantum Fourier transform

    NASA Astrophysics Data System (ADS)

    Qin, Huawang; Tso, Raylin; Dai, Yuewei

    2018-03-01

    A scheme of multi-dimensional quantum state sharing is proposed. The dealer performs the quantum SUM gate and the quantum Fourier transform to encode a multi-dimensional quantum state into an entangled state. The dealer then distributes one particle of the entangled state to each participant, sharing the quantum state among n participants. In the recovery, n-1 participants measure their particles and announce their measurement results; the last participant performs a unitary operation on his particle according to these results and can reconstruct the initial quantum state. The proposed scheme has two merits: it can share a multi-dimensional quantum state, and it does not require entanglement measurement.
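    For a d-dimensional system, the quantum Fourier transform used in such encodings is the unitary with entries ω^(jk)/√d, where ω = e^(2πi/d). A quick numerical sketch (the dimension is chosen arbitrarily) verifying unitarity:

```python
# Sketch: the d-dimensional quantum Fourier transform matrix used in
# such encodings, checked numerically for unitarity.
import numpy as np

def qft(d):
    """Return the d x d QFT matrix F[j, k] = omega**(j*k) / sqrt(d)."""
    omega = np.exp(2j * np.pi / d)
    j, k = np.meshgrid(np.arange(d), np.arange(d), indexing="ij")
    return omega ** (j * k) / np.sqrt(d)

F = qft(5)                          # dimension d = 5 as an example
identity = F @ F.conj().T           # F F-dagger should be the identity
print(np.allclose(identity, np.eye(5)))  # -> True
```

    Because the transform is unitary, the dealer's encoding is reversible, which is what lets the last participant undo it with a conditional unitary once the other measurement results are announced.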

  4. A QR Code Based Zero-Watermarking Scheme for Authentication of Medical Images in Teleradiology Cloud

    PubMed Central

    Seenivasagam, V.; Velumani, R.

    2013-01-01

    Healthcare institutions adopt cloud-based archiving of medical images and patient records to share them efficiently. Controlled access to these records and authentication of images must be enforced to mitigate fraudulent activities and medical errors. This paper presents a zero-watermarking scheme implemented in the composite Contourlet Transform (CT)-Singular Value Decomposition (SVD) domain for unambiguous authentication of medical images. Further, a framework is proposed for accessing patient records based on the watermarking scheme. The patient identification details and a link to the patient data, encoded into a Quick Response (QR) code, serve as the watermark. In the proposed scheme, the medical image is not subjected to degradations due to watermarking. Patient authentication and authorized access to patient data are realized by combining a Secret Share with the Master Share constructed from invariant features of the medical image. Hu's invariant image moments are exploited in creating the Master Share. The proposed system is evaluated with Checkmark software and is found to be robust to both geometric and non-geometric attacks. PMID:23970943
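    As a hedged illustration of the invariant-moment idea, the sketch below computes only the first Hu moment (phi1 = eta20 + eta02) of a hypothetical toy binary image and checks its rotation invariance; it is not the paper's Master Share construction:

```python
# Sketch: first Hu invariant moment of a binary image, of the kind
# used to build invariant features. The toy image is hypothetical.
import numpy as np

def hu_phi1(img):
    """phi1 = eta20 + eta02 from normalised central moments."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    mu20 = ((x - xc) ** 2 * img).sum()   # central moments
    mu02 = ((y - yc) ** 2 * img).sum()
    # Normalisation eta_pq = mu_pq / m00**2 for p + q = 2
    return (mu20 + mu02) / m00 ** 2

img = np.zeros((8, 8))
img[2:5, 1:7] = 1.0                      # a hypothetical 3 x 6 block
print(np.isclose(hu_phi1(img), hu_phi1(np.rot90(img))))  # -> True
```

    Rotation, translation and scale invariance are what make such moments suitable for a Master Share: the same features can be re-derived from the stored image when the Secret Share is presented.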

  5. A QR code based zero-watermarking scheme for authentication of medical images in teleradiology cloud.

    PubMed

    Seenivasagam, V; Velumani, R

    2013-01-01

    Healthcare institutions adapt cloud based archiving of medical images and patient records to share them efficiently. Controlled access to these records and authentication of images must be enforced to mitigate fraudulent activities and medical errors. This paper presents a zero-watermarking scheme implemented in the composite Contourlet Transform (CT)-Singular Value Decomposition (SVD) domain for unambiguous authentication of medical images. Further, a framework is proposed for accessing patient records based on the watermarking scheme. The patient identification details and a link to patient data encoded into a Quick Response (QR) code serves as the watermark. In the proposed scheme, the medical image is not subjected to degradations due to watermarking. Patient authentication and authorized access to patient data are realized on combining a Secret Share with the Master Share constructed from invariant features of the medical image. The Hu's invariant image moments are exploited in creating the Master Share. The proposed system is evaluated with Checkmark software and is found to be robust to both geometric and non geometric attacks.

  6. On the Impact of School Teacher Fellows in Chemistry Departments within UK Higher Education Institutes, from 2005-2013

    ERIC Educational Resources Information Center

    Shallcross, Dudley E.; Harrison, Timothy G.; Read, David; Barker, Nicholas

    2014-01-01

    Two UK programmes to place school teachers in a university setting are compared; the Excellence Fellowship Awards Pilot Scheme and the School Teacher Fellows Scheme. In this paper we compare the School Teacher Fellow Scheme supported by Bristol ChemLabS (Shallcross et al., 2013a, 2013b) and subsequently by the Royal Society of Chemistry with the…

  7. Institutional design and organizational practice for universal coverage in lesser-developed countries: challenges facing the Lao PDR.

    PubMed

    Ahmed, Shakil; Annear, Peter Leslie; Phonvisay, Bouaphat; Phommavong, Chansaly; Cruz, Valeria de Oliveira; Hammerich, Asmus; Jacobs, Bart

    2013-11-01

    There is now widespread acceptance of the universal coverage approach, presented in the 2010 World Health Report. There are more and more voices for the benefit of creating a single national risk pool. Now, a body of literature is emerging on institutional design and organizational practice for universal coverage, related to management of the three health-financing functions: collection, pooling and purchasing. While all countries can move towards universal coverage, lower-income countries face particular challenges, including scarce resources and limited capacity. Recently, the Lao PDR has been preparing options for moving to a single national health insurance scheme. The aim is to combine four different social health protection schemes into a national health insurance authority (NHIA) with a single national fund- and risk-pool. This paper investigates the main institutional and organizational challenges related to the creation of the NHIA. The paper uses a qualitative approach, drawing on the World Health Organization's institutional and Organizational Assessment for Improving and Strengthening health financing (OASIS) conceptual framework for data analysis. Data were collected from a review of key health financing policy documents and from 17 semi-structured key informant interviews. Policy makers and advisors are confronting issues related to institutional arrangements, funding sources for the authority and government support for subsidies to the demand-side health financing schemes. Compulsory membership is proposed, but the means for covering the informal sector have not been resolved. While unification of existing schemes may be the basis for creating a single risk pool, challenges related to administrative capacity and cross-subsidies remain. The example of Lao PDR illustrates the need to include consideration of national context, the sequencing of reforms and the time-scale appropriate for achieving universal coverage. Copyright © 2013 Elsevier Ltd. 
All rights reserved.

  8. Introducing a buddying scheme for first year pre-registration students.

    PubMed

    Campbell, Anne

    Student buddying schemes have been found to be helpful for a variety of different university students. This article describes a scheme where first year pre-registration child nursing students are buddied with second-year students, which was first initiated in the academic year 2012/2013. The first year students were aware that peer support was available but contact was only maintained by a minority of students. At present it is uncertain what impact the scheme has had on attrition figures, particularly in the first year. Initial evaluation indicates that students found the scheme helpful and would like it to continue to be available to first-year students.

  9. Event representations constrain the structure of language: Sign language as a window into universally accessible linguistic biases.

    PubMed

    Strickland, Brent; Geraci, Carlo; Chemla, Emmanuel; Schlenker, Philippe; Kelepir, Meltem; Pfau, Roland

    2015-05-12

    According to a theoretical tradition dating back to Aristotle, verbs can be classified into two broad categories. Telic verbs (e.g., "decide," "sell," "die") encode a logical endpoint, whereas atelic verbs (e.g., "think," "negotiate," "run") do not, and the denoted event could therefore logically continue indefinitely. Here we show that sign languages encode telicity in a seemingly universal way and moreover that even nonsigners lacking any prior experience with sign language understand these encodings. In experiments 1-5, nonsigning English speakers accurately distinguished between telic (e.g., "decide") and atelic (e.g., "think") signs from (the historically unrelated) Italian Sign Language, Sign Language of the Netherlands, and Turkish Sign Language. These results were not due to participants' inferring that the sign merely imitated the action in question. In experiment 6, we used pseudosigns to show that the presence of a salient visual boundary at the end of a gesture was sufficient to elicit telic interpretations, whereas repeated movement without salient boundaries elicited atelic interpretations. Experiments 7-10 confirmed that these visual cues were used by all of the sign languages studied here. Together, these results suggest that signers and nonsigners share universally accessible notions of telicity as well as universally accessible "mapping biases" between telicity and visual form.

  10. Fast and memory efficient text image compression with JBIG2.

    PubMed

    Ye, Yan; Cosman, Pamela

    2003-01-01

    In this paper, we investigate ways to reduce encoding time, memory consumption and substitution errors for text image compression with JBIG2. We first look at page striping where the encoder splits the input image into horizontal stripes and processes one stripe at a time. We propose dynamic dictionary updating procedures for page striping to reduce the bit rate penalty it incurs. Experiments show that splitting the image into two stripes can save 30% of encoding time and 40% of physical memory with a small coding loss of about 1.5%. Using more stripes brings further savings in time and memory but the return diminishes. We also propose an adaptive way to update the dictionary only when it has become out-of-date. The adaptive updating scheme can resolve the time versus bit rate tradeoff and the memory versus bit rate tradeoff well simultaneously. We then propose three speedup techniques for pattern matching, the most time-consuming encoding activity in JBIG2. When combined together, these speedup techniques can save up to 75% of the total encoding time with at most 1.7% of bit rate penalty. Finally, we look at improving reconstructed image quality for lossy compression. We propose enhanced prescreening and feature monitored shape unifying to significantly reduce substitution errors in the reconstructed images.

  11. A New Approach for Fingerprint Image Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, it means 2000 terabytes of information. Also, without any compression, transmitting a 10 Mb card over a 9600 baud connection will need 3 hours. Hence we need a compression and a compression as close to lossless as possible: all fingerprint details must be kept. A lossless compression usually do not give a better compression ratio than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefactsmore » which appear even at low compression rates. Therefore the FBI has chosen in 1993 a scheme of compression based on a wavelet transform, followed by a scalar quantization and an entropy coding : the so-called WSQ. This scheme allows to achieve compression ratios of 20:1 without any perceptible loss of quality. The publication of the FBI specifies a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate bit assumption. Since the transform is made into 64 subbands, quite a lot of bands receive only a few bits even at an archival quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we will discuss a new approach for the bit-allocation that seems to make more sense where theory is concerned. Then we will talk about some implementation aspects, particularly for the new entropy coder and the features that allow other applications than fingerprint image compression. Finally, we will compare the performances of the new encoder to those of the first encoder.« less

  12. JPEG XS-based frame buffer compression inside HEVC for power-aware video compression

    NASA Astrophysics Data System (ADS)

    Willème, Alexandre; Descampe, Antonin; Rouvroy, Gaël.; Pellegrin, Pascal; Macq, Benoit

    2017-09-01

    With the emergence of Ultra-High Definition video, reference frame buffers (FBs) inside HEVC-like encoders and decoders have to sustain huge bandwidth. The power consumed by these external memory accesses accounts for a significant share of the codec's total consumption. This paper describes a solution to significantly decrease the FB's bandwidth, making HEVC encoder more suitable for use in power-aware applications. The proposed prototype consists in integrating an embedded lightweight, low-latency and visually lossless codec at the FB interface inside HEVC in order to store each reference frame as several compressed bitstreams. As opposed to previous works, our solution compresses large picture areas (ranging from a CTU to a frame stripe) independently in order to better exploit the spatial redundancy found in the reference frame. This work investigates two data reuse schemes namely Level-C and Level-D. Our approach is made possible thanks to simplified motion estimation mechanisms further reducing the FB's bandwidth and inducing very low quality degradation. In this work, we integrated JPEG XS, the upcoming standard for lightweight low-latency video compression, inside HEVC. In practice, the proposed implementation is based on HM 16.8 and on XSM 1.1.2 (JPEG XS Test Model). Through this paper, the architecture of our HEVC with JPEG XS-based frame buffer compression is described. Then its performance is compared to HM encoder. Compared to previous works, our prototype provides significant external memory bandwidth reduction. Depending on the reuse scheme, one can expect bandwidth and FB size reduction ranging from 50% to 83.3% without significant quality degradation.

  13. New architecture for dynamic frame-skipping transcoder.

    PubMed

    Fung, Kai-Tat; Chan, Yui-Lam; Siu, Wan-Chi

    2002-01-01

    Transcoding is a key technique for reducing the bit rate of a previously compressed video signal. A high transcoding ratio may result in an unacceptable picture quality when the full frame rate of the incoming video bitstream is used. Frame skipping is often used as an efficient scheme to allocate more bits to the representative frames, so that an acceptable quality for each frame can be maintained. However, the skipped frame must be decompressed completely, which might act as a reference frame to nonskipped frames for reconstruction. The newly quantized discrete cosine transform (DCT) coefficients of the prediction errors need to be re-computed for the nonskipped frame with reference to the previous nonskipped frame; this can create undesirable complexity as well as introduce re-encoding errors. In this paper, we propose new algorithms and a novel architecture for frame-rate reduction to improve picture quality and to reduce complexity. The proposed architecture is mainly performed on the DCT domain to achieve a transcoder with low complexity. With the direct addition of DCT coefficients and an error compensation feedback loop, re-encoding errors are reduced significantly. Furthermore, we propose a frame-rate control scheme which can dynamically adjust the number of skipped frames according to the incoming motion vectors and re-encoding errors due to transcoding such that the decoded sequence can have a smooth motion as well as better transcoded pictures. Experimental results show that, as compared to the conventional transcoder, the new architecture for frame-skipping transcoder is more robust, produces fewer requantization errors, and has reduced computational complexity.

  14. An Electronic Finding Aid Using Extensible Markup Language (XML) and Encoded Archival Description (EAD).

    ERIC Educational Resources Information Center

    Chang, May

    2000-01-01

    Describes the development of electronic finding aids for archives at the University of Illinois, Urbana-Champaign that used XML (extensible markup language) and EAD (encoded archival description) to enable more flexible information management and retrieval than using MARC or a relational database management system. EAD template is appended.…

  15. News Competition: Physics Olympiad hits Thailand Report: Institute carries out survey into maths in physics at university Event: A day for everyone teaching physics Conference: Welsh conference celebrates birthday Schools: Researchers in Residence scheme set to close Teachers: A day for new physics teachers Social: Network combines fun and physics Forthcoming events

    NASA Astrophysics Data System (ADS)

    2011-09-01

    Competition: Physics Olympiad hits Thailand Report: Institute carries out survey into maths in physics at university Event: A day for everyone teaching physics Conference: Welsh conference celebrates birthday Schools: Researchers in Residence scheme set to close Teachers: A day for new physics teachers Social: Network combines fun and physics Forthcoming events

  16. Scheme dependence and transverse momentum distribution interpretation of Collins-Soper-Sterman resummation

    DOE PAGES

    Prokudin, Alexei; Sun, Peng; Yuan, Feng

    2015-10-01

    Following an earlier derivation by Catani-de Florian-Grazzini (2000) on the scheme dependence in the Collins-Soper- Sterman (CSS) resummation formalism in hard scattering processes, we investigate the scheme dependence of the Transverse Momentum Distributions (TMDs) and their applications. By adopting a universal C-coefficient function associated with the integrated parton distributions, the difference between various TMD schemes can be attributed to a perturbative calculable function depending on the hard momentum scale. Thus, we further apply several TMD schemes to the Drell-Yan process of lepton pair production in hadronic collisions, and find that the constrained non-perturbative form factors in different schemes are remarkablymore » consistent with each other and with that of the standard CSS formalism.« less

  17. Scheme dependence and transverse momentum distribution interpretation of Collins-Soper-Sterman resummation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prokudin, Alexei; Sun, Peng; Yuan, Feng

    Following an earlier derivation by Catani-de Florian-Grazzini (2000) on the scheme dependence in the Collins-Soper- Sterman (CSS) resummation formalism in hard scattering processes, we investigate the scheme dependence of the Transverse Momentum Distributions (TMDs) and their applications. By adopting a universal C-coefficient function associated with the integrated parton distributions, the difference between various TMD schemes can be attributed to a perturbative calculable function depending on the hard momentum scale. Thus, we further apply several TMD schemes to the Drell-Yan process of lepton pair production in hadronic collisions, and find that the constrained non-perturbative form factors in different schemes are remarkablymore » consistent with each other and with that of the standard CSS formalism.« less

  18. Scheme dependence and transverse momentum distribution interpretation of Collins-Soper-Sterman resummation

    NASA Astrophysics Data System (ADS)

    Prokudin, Alexei; Sun, Peng; Yuan, Feng

    2015-11-01

    Following an earlier derivation by Catani, de Florian and Grazzini (2000) on the scheme dependence in the Collins-Soper-Sterman (CSS) resummation formalism in hard scattering processes, we investigate the scheme dependence of the Transverse Momentum Distributions (TMDs) and their applications. By adopting a universal C-coefficient function associated with the integrated parton distributions, the difference between various TMD schemes can be attributed to a perturbative calculable function depending on the hard momentum scale. We further apply several TMD schemes to the Drell-Yan process of lepton pair production in hadronic collisions, and find that the constrained non-perturbative form factors in different schemes are consistent with each other and with that of the standard CSS formalism.

  19. Wavelength-multiplexed fiber optic position encoder for aircraft control systems

    NASA Astrophysics Data System (ADS)

    Beheim, Glenn; Krasowski, Michael J.; Sotomayor, Jorge L.; Fritsch, Klaus; Flatico, Joseph M.; Bathurst, Richard L.; Eustace, John G.; Anthan, Donald J.

    1991-02-01

    NASA Lewis together with John Carroll University has worked for the last several years to develop wavelength-multiplexed digital position transducers for use in aircraft control systems. A prototype rotary encoder is being built for a demonstration program involving the control of a commercial transport''s turbofan engine. This encoder has eight bits of resolution a 90 degree range and is powered by a single LED. A compact electro-optics module is being developed to withstand the extremely hostile gas turbine environment.

  20. Implementation of single-photon quantum routing and decoupling using a nitrogen-vacancy center and a whispering-gallery-mode resonator-waveguide system.

    PubMed

    Cao, Cong; Duan, Yu-Wen; Chen, Xi; Zhang, Ru; Wang, Tie-Jun; Wang, Chuan

    2017-07-24

    Quantum router is a key element needed for the construction of future complex quantum networks. However, quantum routing with photons, and its inverse, quantum decoupling, are difficult to implement as photons do not interact, or interact very weakly in nonlinear media. In this paper, we investigate the possibility of implementing photonic quantum routing based on effects in cavity quantum electrodynamics, and present a scheme for single-photon quantum routing controlled by the other photon using a hybrid system consisting of a single nitrogen-vacancy (NV) center coupled with a whispering-gallery-mode resonator-waveguide structure. Different from the cases in which classical information is used to control the path of quantum signals, both the control and signal photons are quantum in our implementation. Compared with the probabilistic quantum routing protocols based on linear optics, our scheme is deterministic and also scalable to multiple photons. We also present a scheme for single-photon quantum decoupling from an initial state with polarization and spatial-mode encoding, which can implement an inverse operation to the quantum routing. We discuss the feasibility of our schemes by considering current or near-future techniques, and show that both the schemes can operate effectively in the bad-cavity regime. We believe that the schemes could be key building blocks for future complex quantum networks and large-scale quantum information processing.

  1. Principles of metadata organization at the ENCODE data coordination center.

    PubMed

    Hong, Eurie L; Sloan, Cricket A; Chan, Esther T; Davidson, Jean M; Malladi, Venkat S; Strattan, J Seth; Hitz, Benjamin C; Gabdank, Idan; Narayanan, Aditi K; Ho, Marcus; Lee, Brian T; Rowe, Laurence D; Dreszer, Timothy R; Roe, Greg R; Podduturi, Nikhil R; Tanaka, Forrest; Hilton, Jason A; Cherry, J Michael

    2016-01-01

    The Encyclopedia of DNA Elements (ENCODE) Data Coordinating Center (DCC) is responsible for organizing, describing and providing access to the diverse data generated by the ENCODE project. The description of these data, known as metadata, includes the biological sample used as input, the protocols and assays performed on these samples, the data files generated from the results and the computational methods used to analyze the data. Here, we outline the principles and philosophy used to define the ENCODE metadata in order to create a metadata standard that can be applied to diverse assays and multiple genomic projects. In addition, we present how the data are validated and used by the ENCODE DCC in creating the ENCODE Portal (https://www.encodeproject.org/). Database URL: www.encodeproject.org. © The Author(s) 2016. Published by Oxford University Press.

  2. No-Fault Compensation for Adverse Events Following Immunization: A Review of Chinese Law And Practice.

    PubMed

    Fei, Lanfang; Peng, Zhou

    2017-02-01

    In 2005, China introduced an administrative no-fault one-time compensation scheme for adverse events following immunization (AEFI). The scheme aims to ensure fair compensation for those injured by adverse reactions following immunization. These individuals bear a significant burden for the benefits of widespread immunization. However, there is little empirical evidence of how the scheme has been implemented and how it functions in practice. The article aims to fill this gap. Based on an analysis of the legal basis of the scheme and of practical compensation cases, this article examines the structuring, function, and effects of the scheme; evaluates loopholes in the scheme; evaluates the extent to which the scheme has achieved its intended objectives; and discusses further development of the scheme. © The Author 2017. Published by Oxford University Press; all rights reserved. For Permissions, please email: journals.permissions@oup.com.

  3. Higher Education Quality Assessment in China: An Impact Study

    ERIC Educational Resources Information Center

    Liu, Shuiyun

    2015-01-01

    This research analyses an external higher education quality assessment scheme in China, namely, the Quality Assessment of Undergraduate Education (QAUE) scheme. Case studies were conducted in three Chinese universities with different statuses. Analysis shows that the evaluated institutions responded to the external requirements of the QAUE…

  4. New Course Design: Classification Schemes and Information Architecture.

    ERIC Educational Resources Information Center

    Weinberg, Bella Hass

    2002-01-01

    Describes a course developed at St. John's University (New York) in the Division of Library and Information Science that relates traditional classification schemes to information architecture and Web sites. Highlights include functional aspects of information architecture, that is, the way content is structured; assignments; student reactions; and…

  5. Quantum Walk Schemes for Universal Quantum Computation

    NASA Astrophysics Data System (ADS)

    Underwood, Michael S.

    Random walks are a powerful tool for the efficient implementation of algorithms in classical computation. Their quantum-mechanical analogues, called quantum walks, hold similar promise. Quantum walks provide a model of quantum computation that has recently been shown to be equivalent in power to the standard circuit model. As in the classical case, quantum walks take place on graphs and can undergo discrete or continuous evolution, though quantum evolution is unitary and therefore deterministic until a measurement is made. This thesis considers the usefulness of continuous-time quantum walks to quantum computation from the perspectives of both their fundamental power under various formulations, and their applicability in practical experiments. In one extant scheme, logical gates are effected by scattering processes. The results of an exhaustive search for single-qubit operations in this model are presented. It is shown that the number of distinct operations increases exponentially with the number of vertices in the scattering graph. A catalogue of all graphs on up to nine vertices that implement single-qubit unitaries at a specific set of momenta is included in an appendix. I develop a novel scheme for universal quantum computation called the discontinuous quantum walk, in which a continuous-time quantum walker takes discrete steps of evolution via perfect quantum state transfer through small 'widget' graphs. The discontinuous quantum-walk scheme requires an exponentially sized graph, as do prior discrete and continuous schemes. To eliminate the inefficient vertex resource requirement, a computation scheme based on multiple discontinuous walkers is presented. In this model, n interacting walkers inhabiting a graph with 2n vertices can implement an arbitrary quantum computation on an input of length n, an exponential savings over previous universal quantum walk schemes. 
This is the first quantum walk scheme that allows for the application of quantum error correction. The many-particle quantum walk can be viewed as a single quantum walk undergoing perfect state transfer on a larger weighted graph, obtained via equitable partitioning. I extend this formalism to non-simple graphs. Examples of the application of equitable partitioning to the analysis of quantum walks and many-particle quantum systems are discussed.

  6. Performance study of large area encoding readout MRPC

    NASA Astrophysics Data System (ADS)

    Chen, X. L.; Wang, Y.; Chen, G.; Han, D.; Wang, X.; Zeng, M.; Zeng, Z.; Zhao, Z.; Guo, B.

    2018-02-01

    Muon tomography system built by the 2-D readout high spatial resolution Multi-gap Resistive Plate Chamber (MRPC) detector is a project of Tsinghua University. An encoding readout method based on the fine-fine configuration has been used to minimize the number of the readout electronic channels resulting in reducing the complexity and the cost of the system. In this paper, we provide a systematic comparison of the MRPC detector performance with and without fine-fine encoding readout. Our results suggest that the application of the fine-fine encoding readout leads us to achieve a detecting system with slightly worse spatial resolution but dramatically reduce the number of electronic channels.

  7. Integrated Optoelectronic Networks for Application-Driven Multicore Computing

    DTIC Science & Technology

    2017-05-08

    hybrid photonic torus, the all-optical Corona crossbar, and the hybrid hierarchical Firefly crossbar. • The key challenges for waveguide photonics...improves SXR but with relatively higher EDP overhead. Our evaluation results indicate that the encoding schemes improve worst-case-SXR in Corona and...photonic crossbar architectures ( Corona and Firefly) indicate that our approach improves worst-case signal-to-noise ratio (SNR) by up to 51.7

  8. University Admissions. Policy Note. Number 3

    ERIC Educational Resources Information Center

    Group of Eight (NJ1), 2012

    2012-01-01

    University admissions, like many other aspects of the higher education sector, are going through a time of significant change. From 2012, universities will receive full funding under the Commonwealth Grants Scheme (CGS) for as many places as they offer. Previously, the Government limited the number of funded places, with a tolerance band for…

  9. High Order Schemes in BATS-R-US: Is it OK to Simplify Them?

    NASA Astrophysics Data System (ADS)

    Tóth, G.; Chen, Y.; van der Holst, B.; Daldorff, L. K. S.

    2014-09-01

    We describe a number of high order schemes and their simplified variants that have been implemented into the University of Michigan global magnetohydrodynamics code BATS-R-US. We compare the various schemes with each other and the legacy 2nd order TVD scheme for various test problems and two space physics applications. We find that the simplified schemes are often quite competitive with the more complex and expensive full versions, despite the fact that the simplified versions are only high order accurate for linear systems of equations. We find that all the high order schemes require some fixes to ensure positivity in the space physics applications. On the other hand, they produce superior results as compared with the second order scheme and/or produce the same quality of solution at a much reduced computational cost.

  10. The theta/gamma discrete phase code occuring during the hippocampal phase precession may be a more general brain coding scheme.

    PubMed

    Lisman, John

    2005-01-01

    In the hippocampus, oscillations in the theta and gamma frequency range occur together and interact in several ways, indicating that they are part of a common functional system. It is argued that these oscillations form a coding scheme that is used in the hippocampus to organize the readout from long-term memory of the discrete sequence of upcoming places, as cued by current position. This readout of place cells has been analyzed in several ways. First, plots of the theta phase of spikes vs. position on a track show a systematic progression of phase as rats run through a place field. This is termed the phase precession. Second, two cells with nearby place fields have a systematic difference in phase, as indicated by a cross-correlation having a peak with a temporal offset that is a significant fraction of a theta cycle. Third, several different decoding algorithms demonstrate the information content of theta phase in predicting the animal's position. It appears that small phase differences corresponding to jitter within a gamma cycle do not carry information. This evidence, together with the finding that principle cells fire preferentially at a given gamma phase, supports the concept of theta/gamma coding: a given place is encoded by the spatial pattern of neurons that fire in a given gamma cycle (the exact timing within a gamma cycle being unimportant); sequential places are encoded in sequential gamma subcycles of the theta cycle (i.e., with different discrete theta phase). It appears that this general form of coding is not restricted to readout of information from long-term memory in the hippocampus because similar patterns of theta/gamma oscillations have been observed in multiple brain regions, including regions involved in working memory and sensory integration. It is suggested that dual oscillations serve a general function: the encoding of multiple units of information (items) in a way that preserves their serial order. 
The relationship of such coding to that proposed by Singer and von der Malsburg is discussed; in their scheme, theta is not considered. It is argued that what theta provides is the absolute phase reference needed for encoding order. Theta/gamma coding therefore bears some relationship to the concept of "word" in digital computers, with word length corresponding to the number of gamma cycles within a theta cycle, and discrete phase corresponding to the ordered "place" within a word. Copyright 2005 Wiley-Liss, Inc.
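    The "word" analogy above lends itself to a small sketch. The word length, item labels, and dictionary representation below are illustrative assumptions, not taken from the article:

    ```python
    # Illustrative sketch of theta/gamma coding: one theta cycle is a "word"
    # whose gamma subcycles are ordered "places". The word length (7) and
    # the item labels are assumptions for illustration only.
    GAMMA_PER_THETA = 7

    def encode(items):
        # Each item fires its cell assembly in one gamma subcycle; the discrete
        # theta phase (slot index) encodes the item's serial position.
        if len(items) > GAMMA_PER_THETA:
            raise ValueError("sequence longer than one theta word")
        return {slot: item for slot, item in enumerate(items)}

    def decode(word):
        # Serial order is recovered from the ordered slots; exact spike timing
        # within a gamma cycle carries no information in this scheme.
        return [word[slot] for slot in sorted(word)]

    places = ["A", "B", "C", "D"]
    assert decode(encode(places)) == places
    ```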

  11. Passive state preparation in the Gaussian-modulated coherent-states quantum key distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, Bing; Evans, Philip G.; Grice, Warren P.

    In the Gaussian-modulated coherent-states (GMCS) quantum key distribution (QKD) protocol, Alice prepares quantum states actively: for each transmission, Alice generates a pair of Gaussian-distributed random numbers, encodes them on a weak coherent pulse using optical amplitude and phase modulators, and then transmits the Gaussian-modulated weak coherent pulse to Bob. Here we propose a passive state preparation scheme using a thermal source. In our scheme, Alice splits the output of a thermal source into two spatial modes using a beam splitter. She measures one mode locally using conjugate optical homodyne detectors, and transmits the other mode to Bob after applying appropriate optical attenuation. Under normal conditions, Alice's measurement results are correlated to Bob's, and they can work out a secure key, as in the active state preparation scheme. Given that the initial thermal state generated by the source is strong enough, this scheme can tolerate high detector noise at Alice's side. Furthermore, the output of the source does not need to be single mode, since an optical homodyne detector can selectively measure a single mode determined by the local oscillator. Preliminary experimental results suggest that the proposed scheme could be implemented using an off-the-shelf amplified spontaneous emission source.
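    A toy numerical sketch of why Alice's local measurement correlates with Bob's: model one quadrature of the thermal state as a zero-mean Gaussian, split it on a 50/50 beam splitter, attenuate Bob's arm, and add unit-variance detection noise to each arm. All numbers (source strength, transmittance, noise levels, seed) are illustrative assumptions, not the experiment's parameters:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    mean_photons = 20.0    # assumed thermal source strength (illustrative)

    # One quadrature of a strong thermal state: zero-mean Gaussian,
    # variance set by the source power.
    x = rng.normal(0, np.sqrt(mean_photons), n)

    # 50/50 beam splitter: Alice keeps one output, Bob gets the other, attenuated.
    t = np.sqrt(0.5)
    eta = 0.1              # assumed channel + attenuator transmittance
    x_alice = t * x + rng.normal(0, 1.0, n)                  # local detection noise
    x_bob = np.sqrt(eta) * t * x + rng.normal(0, 1.0, n)     # channel-end noise

    # Under normal conditions Alice's local result is correlated with Bob's,
    # which is what lets them distill a key without active modulation.
    corr = np.corrcoef(x_alice, x_bob)[0, 1]
    print(f"correlation = {corr:.2f}")
    ```

    Making the source stronger (larger `mean_photons`) pushes the correlation higher, mirroring the abstract's point that a strong thermal state tolerates high detector noise on Alice's side.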

  12. Passive state preparation in the Gaussian-modulated coherent-states quantum key distribution

    DOE PAGES

    Qi, Bing; Evans, Philip G.; Grice, Warren P.

    2018-01-01

    In the Gaussian-modulated coherent-states (GMCS) quantum key distribution (QKD) protocol, Alice prepares quantum states actively: for each transmission, Alice generates a pair of Gaussian-distributed random numbers, encodes them on a weak coherent pulse using optical amplitude and phase modulators, and then transmits the Gaussian-modulated weak coherent pulse to Bob. Here we propose a passive state preparation scheme using a thermal source. In our scheme, Alice splits the output of a thermal source into two spatial modes using a beam splitter. She measures one mode locally using conjugate optical homodyne detectors, and transmits the other mode to Bob after applying appropriate optical attenuation. Under normal conditions, Alice's measurement results are correlated to Bob's, and they can work out a secure key, as in the active state preparation scheme. Given that the initial thermal state generated by the source is strong enough, this scheme can tolerate high detector noise at Alice's side. Furthermore, the output of the source does not need to be single mode, since an optical homodyne detector can selectively measure a single mode determined by the local oscillator. Preliminary experimental results suggest that the proposed scheme could be implemented using an off-the-shelf amplified spontaneous emission source.

  13. Source encoding in multi-parameter full waveform inversion

    NASA Astrophysics Data System (ADS)

    Matharu, Gian; Sacchi, Mauricio D.

    2018-04-01

    Source encoding techniques alleviate the computational burden of sequential-source full waveform inversion (FWI) by considering multiple sources simultaneously rather than independently. The reduced data volume requires fewer forward/adjoint simulations per non-linear iteration. Applications of source-encoded full waveform inversion (SEFWI) have thus far focused on mono-parameter acoustic inversion. We extend SEFWI to the multi-parameter case, with applications presented for elastic isotropic inversion. Estimating multiple parameters can be challenging, as perturbations in different parameters can prompt similar responses in the data. We investigate the relationship between source encoding and parameter trade-off by examining the multi-parameter source-encoded Hessian. Probing of the Hessian demonstrates the convergence of the expected source-encoded Hessian to that of conventional FWI. The convergence implies that the parameter trade-off in SEFWI is comparable to that observed in FWI. A series of synthetic inversions is conducted to establish the feasibility of source-encoded multi-parameter FWI. We demonstrate that SEFWI requires fewer overall simulations than FWI to achieve a target model error for a range of first-order optimization methods. An inversion for spatially inconsistent P-wave (α) and S-wave (β) velocity models corroborates the expectation of comparable parameter trade-off in SEFWI and FWI. The final example demonstrates a shortcoming of SEFWI when confronted with time-windowing in data-driven inversion schemes. The limitation is a consequence of the implicit fixed-spread acquisition assumption in SEFWI. Alternative objective functions, namely the normalized cross-correlation and L1 waveform misfit, do not enable SEFWI to overcome this limitation.
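    The intuition behind the expected source-encoded Hessian converging to the sequential one can be seen with random ±1 encoding codes: cross-source (crosstalk) terms are weighted by E[c_i c_j], which vanishes for i ≠ j while the diagonal terms survive. A minimal numerical check, where the source count, trial count, and ±1 code choice are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_src, n_trials = 8, 5000

    # Random +/-1 encoding codes, redrawn each realization (as codes are
    # redrawn across iterations in source-encoded FWI).
    codes = rng.choice([-1.0, 1.0], size=(n_trials, n_src))

    # Empirical E[c_i c_j]: diagonal -> 1 (signal terms), off-diagonal -> 0
    # (crosstalk terms), so the expected encoded Hessian matches the
    # sequential-source Hessian.
    second_moment = codes.T @ codes / n_trials
    print(np.round(second_moment, 2))

    off_diag = second_moment - np.diag(np.diag(second_moment))
    assert np.allclose(np.diag(second_moment), 1.0)
    assert np.max(np.abs(off_diag)) < 0.1   # crosstalk averages toward zero
    ```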

  14. A Coding Scheme to Analyse the Online Asynchronous Discussion Forums of University Students

    ERIC Educational Resources Information Center

    Biasutti, Michele

    2017-01-01

    The current study describes the development of a content analysis coding scheme to examine transcripts of online asynchronous discussion groups in higher education. The theoretical framework comprises the theories regarding knowledge construction in computer-supported collaborative learning (CSCL) based on a sociocultural perspective. The coding…

  15. Experimental evaluation of fingerprint verification system based on double random phase encoding

    NASA Astrophysics Data System (ADS)

    Suzuki, Hiroyuki; Yamaguchi, Masahiro; Yachida, Masuyoshi; Ohyama, Nagaaki; Tashima, Hideaki; Obi, Takashi

    2006-03-01

    We proposed a smart card holder authentication system that combines fingerprint verification with PIN verification by applying a double random phase encoding scheme. In this system, the probability of accurate verification of an authorized individual decreases when the fingerprint is shifted significantly. In this paper, a review of the proposed system is presented, and preprocessing for improving the false rejection rate is proposed. In the proposed method, the position difference between two fingerprint images is estimated by using an optimized template for core detection. When the estimated difference exceeds the permissible level, the user inputs the fingerprint again. The effectiveness of the proposed method is confirmed by a computational experiment; the results show that the false rejection rate is improved.

  16. New encoded single-indicator sequences based on physico-chemical parameters for efficient exon identification.

    PubMed

    Meher, J K; Meher, P K; Dash, G N; Raval, M K

    2012-01-01

    The first step in the gene identification problem based on genomic signal processing is to convert character strings into numerical sequences. These numerical sequences are then analysed spectrally, or using digital filtering techniques, for the period-3 peaks that are present in exons (coding regions) and absent in introns (non-coding regions). In this paper, we show that single-indicator sequences can be generated by encoding schemes based on physico-chemical properties. Two new methods are proposed for generating single-indicator sequences based on hydration energy and dipole moments. The proposed methods produce high peaks at exon locations and effectively suppress false exons (intron regions having greater peaks than exon regions), resulting in a high discriminating factor, sensitivity and specificity.
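    The pipeline the abstract describes — map each base to a property value, then inspect the DFT at the period-3 frequency — can be sketched as follows. The property table values are placeholders, not the paper's hydration-energy or dipole-moment values:

    ```python
    import numpy as np

    # Illustrative single-indicator encoding: one numeric value per base,
    # as if drawn from a physico-chemical property table (placeholder values).
    PROPERTY = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}

    def period3_power(seq):
        """Power of the DFT at the period-3 frequency bin k = N/3."""
        x = np.array([PROPERTY[b] for b in seq], dtype=float)
        x -= x.mean()                  # remove the DC component
        N = len(x)
        X = np.fft.fft(x)
        return abs(X[N // 3]) ** 2     # period-3 peak strength

    # A strongly 3-periodic toy "exon" scores far higher than a random "intron".
    exon = "ATG" * 60
    rng = np.random.default_rng(2)
    intron = "".join(rng.choice(list("ACGT"), 180))
    assert period3_power(exon) > period3_power(intron)
    ```

    In practice this statistic is computed in a sliding window along the genome, and exon candidates are flagged where the period-3 power peaks.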

  17. Resilience of hybrid optical angular momentum qubits to turbulence

    PubMed Central

    Farías, Osvaldo Jiménez; D'Ambrosio, Vincenzo; Taballione, Caterina; Bisesto, Fabrizio; Slussarenko, Sergei; Aolita, Leandro; Marrucci, Lorenzo; Walborn, Stephen P.; Sciarrino, Fabio

    2015-01-01

    Recent schemes to encode quantum information into the total angular momentum of light, defining rotation-invariant hybrid qubits composed of the polarization and orbital angular momentum degrees of freedom, present interesting applications for quantum information technology. However, there remains the question as to how detrimental effects such as random spatial perturbations affect these encodings. Here, we demonstrate that alignment-free quantum communication through a turbulent channel based on hybrid qubits can be achieved with unit transmission fidelity. In our experiment, alignment-free qubits are produced with q-plates and sent through a homemade turbulence chamber. The decoding procedure, also realized with q-plates, relies on both degrees of freedom and renders an intrinsic error-filtering mechanism that maps errors into losses. PMID:25672667

  18. One-sided measurement-device-independent quantum key distribution

    NASA Astrophysics Data System (ADS)

    Cao, Wen-Fei; Zhen, Yi-Zheng; Zheng, Yu-Lin; Li, Li; Chen, Zeng-Bing; Liu, Nai-Le; Chen, Kai

    2018-01-01

    The measurement-device-independent quantum key distribution (MDI-QKD) protocol was proposed to remove all detector side-channel attacks, while its security relies on trusted encoding systems. Here we propose a one-sided MDI-QKD (1SMDI-QKD) protocol, which enjoys the detection-loophole-free advantage and at the same time weakens the state preparation assumption in MDI-QKD. The 1SMDI-QKD can be regarded as a modified MDI-QKD, in which Bob's encoding system is trusted while Alice's is uncharacterized. For the practical implementation, we also provide a scheme utilizing a coherent light source with an analytical two-decoy-state estimation method. Simulation with realistic experimental parameters shows that the protocol has a promising performance, and thus can be applied to practical QKD applications.

  19. Generalized optical angular momentum sorter and its application to high-dimensional quantum cryptography.

    PubMed

    Larocque, Hugo; Gagnon-Bischoff, Jérémie; Mortimer, Dominic; Zhang, Yingwen; Bouchard, Frédéric; Upham, Jeremy; Grillo, Vincenzo; Boyd, Robert W; Karimi, Ebrahim

    2017-08-21

    The orbital angular momentum (OAM) carried by optical beams is a useful quantity for encoding information. This form of encoding has been incorporated into various works ranging from telecommunications to quantum cryptography, most of which require methods that can rapidly process the OAM content of a beam. Among current state-of-the-art schemes that can readily acquire this information are so-called OAM sorters, which consist of devices that spatially separate the OAM components of a beam. Such devices have found numerous applications in optical communications, a field that is in constant demand for additional degrees of freedom, such as polarization and wavelength, into which information can also be encoded. Here, we report the implementation of a device capable of sorting a beam based on its OAM and polarization content, which could be of use in works employing both of these degrees of freedom as information channels. After characterizing our fabricated device, we demonstrate how it can be used for quantum communications via a quantum key distribution protocol.

  20. Combining Fourier phase encoding and broadband inversion toward J-edited spectra

    NASA Astrophysics Data System (ADS)

    Lin, Yulan; Guan, Quanshuai; Su, Jianwei; Chen, Zhong

    2018-06-01

    Nuclear magnetic resonance (NMR) spectra are often utilized for gathering accurate information relevant to molecular structures and composition assignments. In this study, we develop a homonuclear encoding approach based on imparting a discrete phase modulation of the targeted cross peaks, and combine it with a PSYCHE (pure shift yielded by chirp excitation) based J-modulated scheme, providing simple 2D J-edited spectra for accurate measurement of scalar coupling networks. Chemical shifts and J coupling constants of protons coupled to the specific protons are demonstrated along the F2 and F1 dimensions, respectively. Polychromatic pulses constructed by Fourier phase encoding were employed to simultaneously detect several coupling networks. Proton-proton scalar couplings are chosen by a polychromatic pulse and a PSYCHE element. Axial peaks and unwanted couplings are completely eradicated by incorporating a selective COSY block as a preparation period. The theoretical principles and the signal processing procedure are laid out, and experimental observations are rationalized on the basis of theoretical analyses.

  1. Quantum information to the home

    NASA Astrophysics Data System (ADS)

    Choi, Iris; Young, Robert J.; Townsend, Paul D.

    2011-06-01

    Information encoded on individual quanta will play an important role in our future lives, much as classically encoded digital information does today. Combining quantum information carried by single photons with classical signals encoded on strong laser pulses in modern fibre-to-the-home (FTTH) networks is a significant challenge, the solution to which will facilitate the global distribution of quantum information to the home and with it a quantum internet [1]. In real-world networks, spontaneous Raman scattering in the optical fibre would induce crosstalk between the high-power classical channels and a single-photon quantum channel, such that the latter is unable to operate. Here, we show that the integration of quantum and classical information on an FTTH network is possible by performing quantum key distribution (QKD) on a network while simultaneously transferring realistic levels of classical data. Our novel scheme involves synchronously interleaving a channel of quantum data with the Raman scattered photons from a classical channel, exploiting the periodic minima in the instantaneous crosstalk and thereby enabling secure QKD to be performed.

  2. Graph State-Based Quantum Secret Sharing with the Chinese Remainder Theorem

    NASA Astrophysics Data System (ADS)

    Guo, Ying; Luo, Peng; Wang, Yijun

    2016-11-01

    Quantum secret sharing (QSS) is a significant quantum cryptography technology in the literature. An initial secret is divided into several sub-secrets, which are then transferred to other legal participants so that it can be securely recovered in a collaborative fashion. In this paper, we develop a quantum route selection based on the encoded quantum graph state, thus enabling a practical QSS scheme in a small-scale complex quantum network. Legal participants are conveniently designated with the quantum route selection using the entanglement of the encoded graph states. Each participant holds a vertex of the graph state, so that legal participants are selected through performing operations on specific vertices. The Chinese remainder theorem (CRT) strengthens the security of the recovery process of the initial secret among the legal participants. The security is ensured by the entanglement of the encoded graph states that are cooperatively prepared and shared by the legal users beforehand, with the sub-secrets embedded in the CRT over finite fields.
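    The classical CRT recovery step underlying the scheme can be sketched in a few lines. The moduli and secret below are illustrative; the actual protocol embeds the sub-secrets in the CRT over finite fields and protects them with graph-state entanglement:

    ```python
    from math import prod

    def crt_recover(residues, moduli):
        """Recover the secret s (0 <= s < prod(moduli)) from its residues
        modulo pairwise-coprime moduli, via the Chinese remainder theorem."""
        M = prod(moduli)
        s = 0
        for r, m in zip(residues, moduli):
            Mi = M // m
            s += r * Mi * pow(Mi, -1, m)   # pow(x, -1, m): modular inverse
        return s % M

    # Illustrative sub-secrets: each participant holds s mod m_i; only the
    # full collection of residues determines s uniquely.
    moduli = [11, 13, 17, 19]
    secret = 31415
    shares = [secret % m for m in moduli]
    assert crt_recover(shares, moduli) == secret
    ```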

  3. Decoding of quantum dots encoded microbeads using a hyperspectral fluorescence imaging method.

    PubMed

    Liu, Yixi; Liu, Le; He, Yonghong; Zhu, Liang; Ma, Hui

    2015-05-19

    We present a decoding method for quantum-dot-encoded microbeads based on their fluorescence spectra, using a line-scan hyperspectral fluorescence imaging (HFI) method. The HFI method was developed to attain both the spectra of the fluorescence signal and the spatial information of the encoded microbeads. A decoding scheme was adopted to decode the spectra of multicolor microbeads acquired by the HFI system. Comparison experiments between the HFI system and a flow cytometer were conducted. The results showed that the HFI system has higher spectral resolution; thus, more channels in the spectral dimension can be used. A detection and decoding experiment with single-stranded DNA (ssDNA)-immobilized multicolor beads demonstrated the efficiency of the HFI system. Surface modification of the microbeads with polydopamine was characterized by scanning electron microscopy, and ssDNA immobilization was characterized by laser confocal microscopy. These results indicate that the designed HFI system can be applied to practical biological and medical applications.

  4. Combination Base64 Algorithm and EOF Technique for Steganography

    NASA Astrophysics Data System (ADS)

    Rahim, Robbi; Nurdiyanto, Heri; Hidayat, Rahmat; Saleh Ahmar, Ansari; Siregar, Dodi; Putera Utama Siahaan, Andysah; Faisal, Ilham; Rahman, Sayuti; Suita, Diana; Zamsuri, Ahmad; Abdullah, Dahlan; Napitupulu, Darmawan; Ikhsan Setiawan, Muhammad; Sriadhi, S.

    2018-04-01

    The steganography process combines mathematics and computer science. Steganography consists of a set of methods and techniques for embedding data into another medium so that the contents are unreadable to anyone who does not have the authority to read them. The main objective of using the Base64 method is to convert any file in order to achieve privacy. This paper discusses a steganography and encoding method using Base64, a set of encoding schemes that convert binary data into a series of ASCII codes. The EOF (end-of-file) technique is then used to embed the Base64-encoded text. As an example of the mechanism, a file is used to represent the text, and using the two methods together increases the security level for protecting the data. This research aims to secure many types of files in a particular medium with good security, without damaging the stored files or the cover media used.
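    A minimal sketch of the combined Base64 + EOF idea: Base64-encode the message, then append it after a marker at the end of the cover file, where most viewers ignore trailing bytes. The marker string and function names are illustrative, not the paper's implementation:

    ```python
    import base64

    def embed(cover: bytes, message: str, marker: bytes = b"##EOF##") -> bytes:
        """Append the Base64-encoded message after an end-of-file marker.
        The marker is an assumption; any byte string absent from the cover
        file would serve."""
        return cover + marker + base64.b64encode(message.encode("utf-8"))

    def extract(stego: bytes, marker: bytes = b"##EOF##") -> str:
        payload = stego.split(marker, 1)[1]
        return base64.b64decode(payload).decode("utf-8")

    cover = b"\x89PNG...image bytes..."   # image viewers ignore trailing bytes
    stego = embed(cover, "hidden text")
    assert stego.startswith(cover)        # the cover content is untouched
    assert extract(stego) == "hidden text"
    ```

    Note that Base64 alone is encoding, not encryption; the privacy here comes from the hidden placement, so a real deployment would encrypt the message before encoding it.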

  5. Determinants of enrollment of informal sector workers in cooperative based health scheme in Bangladesh.

    PubMed

    Sarker, Abdur Razzaque; Sultana, Marufa; Mahumud, Rashidul Alam; Ahmed, Sayem; Islam, Ziaul; Morton, Alec; Khan, Jahangir A M

    2017-01-01

    Providing access to affordable health care for the informal sector remains a considerable challenge for low-income countries striving to make progress towards universal health coverage. The objective of the study is to identify the factors shaping the decision to enroll in a cooperative-based health scheme for informal workers in Bangladesh, and also to identify the features of informal workers without health schemes and their likelihood of being insured. Data were derived from a cross-sectional in-house survey within the catchment area of a cooperative-based health scheme in Bangladesh during April-June 2014, covering a total of 784 households (458 members and 326 non-members). A multivariate logistic regression model was used to identify the association between health scheme participation and explanatory variables. This study found that a number of factors were significant determinants of health scheme participation, including sex of the household head, household composition, and occupational category, as well as involvement in social financial safety net programs. Findings from this study can inform policy-makers interested in scaling up health insurance for informal workers in Bangladesh. Shared funding from this large informal sector can generate new resources for healthcare, which is in line with the healthcare financing strategy of Bangladesh as well as the recommendation of the World Health Organization for developing social health insurance as part of the path to universal health coverage.

  6. A 2D MTF approach to evaluate and guide dynamic imaging developments.

    PubMed

    Chao, Tzu-Cheng; Chung, Hsiao-Wen; Hoge, W Scott; Madore, Bruno

    2010-02-01

    As the number and complexity of partially sampled dynamic imaging methods continue to increase, reliable strategies to evaluate performance may prove most useful. In the present work, an analytical framework to evaluate given reconstruction methods is presented. A perturbation algorithm allows the proposed evaluation scheme to perform robustly without requiring knowledge about the inner workings of the method being evaluated. A main output of the evaluation process consists of a two-dimensional modulation transfer function, an easy-to-interpret visual rendering of a method's ability to capture all combinations of spatial and temporal frequencies. Approaches to evaluate noise properties and artifact content at all spatial and temporal frequencies are also proposed. One fully sampled phantom and three fully sampled cardiac cine datasets were subsampled (R = 4 and 8) and reconstructed with the different methods tested here. A hybrid method, which combines the main advantageous features observed in our assessments, was proposed and tested in a cardiac cine application, with acceleration factors of 3.5 and 6.3 (skip factors of 4 and 8, respectively). This approach combines features from methods such as k-t sensitivity encoding, unaliasing by Fourier encoding the overlaps in the temporal dimension-sensitivity encoding, generalized autocalibrating partially parallel acquisition, sensitivity profiles from an array of coils for encoding and reconstruction in parallel, self, hybrid referencing with unaliasing by Fourier encoding the overlaps in the temporal dimension and generalized autocalibrating partially parallel acquisition, and generalized autocalibrating partially parallel acquisition-enhanced sensitivity maps for sensitivity encoding reconstructions.

  7. Texture Mixing via Universal Simulation

    DTIC Science & Technology

    2005-08-01

    classes and universal simulation. Based on the well-known Lempel-Ziv (LZ) universal compression scheme, the universal type class of a one... length that produce the same tree (dictionary) under the Lempel-Ziv (LZ) incremental parsing defined in the well-known LZ78 universal compression... the well-known Lempel-Ziv parsing algorithm. The goal is not just to synthesize mixed textures, but to understand what texture is. We are currently
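    The LZ78 incremental parsing referred to above can be sketched generically; sequences that produce the same parse tree (dictionary) belong to the same universal type class. This is a textbook implementation, not the report's code:

    ```python
    def lz78_parse(s):
        """LZ78 incremental parsing: each new phrase extends a previously
        seen phrase by one symbol, growing the parse tree (dictionary)."""
        dictionary = {"": 0}
        phrases = []
        w = ""
        for ch in s:
            if w + ch in dictionary:
                w += ch                              # keep extending a known phrase
            else:
                phrases.append((dictionary[w], ch))  # (index of prefix, new symbol)
                dictionary[w + ch] = len(dictionary)
                w = ""
        if w:                                        # emit any leftover phrase
            phrases.append((dictionary[w[:-1]], w[-1]))
        return phrases

    print(lz78_parse("aababbaabab"))
    ```

    Concatenating each phrase's prefix plus its new symbol reconstructs the input exactly, which is what makes the parse both a compressor and a definition of type classes.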

  8. Perceptual video quality comparison of 3DTV broadcasting using multimode service systems

    NASA Astrophysics Data System (ADS)

    Ok, Jiheon; Lee, Chulhee

    2015-05-01

    Multimode service (MMS) systems allow broadcasters to provide multichannel services using a single HD channel. Using these systems, it is possible to provide 3DTV programs that can be watched either in three-dimensional (3-D) or two-dimensional (2-D) modes with backward compatibility. In the MMS system for 3DTV broadcasting using the Advanced Television Systems Committee standards, the left and right views are encoded using MPEG-2 and H.264, respectively, and then transmitted using a dual HD streaming format. The left view, encoded using MPEG-2, assures 2-D backward compatibility, while the right view, encoded using H.264, can be optionally combined with the left view to generate stereoscopic 3-D views. We analyze 2-D and 3-D perceptual quality when using the MMS system by comparing it with the frame-compatible (top-bottom) format, a conventional transmission scheme for 3-D broadcasting. We performed perceptual 2-D and 3-D video quality evaluations assuming 3DTV programs are encoded using the MMS system and the top-bottom format. The results show that MMS systems can be preferable with regard to perceptual 2-D and 3-D quality and backward compatibility.

  9. A novel frame-level constant-distortion bit allocation for smooth H.264/AVC video quality

    NASA Astrophysics Data System (ADS)

    Liu, Li; Zhuang, Xinhua

    2009-01-01

    It is known that quality fluctuation has a major negative effect on visual perception. In previous work, we introduced a constant-distortion bit allocation method [1] for the H.263+ encoder. However, the method in [1] cannot be adapted to the newest H.264/AVC encoder directly because of the well-known chicken-and-egg dilemma resulting from the rate-distortion optimization (RDO) decision process. To solve this problem, we propose a new two-stage constant-distortion bit allocation (CDBA) algorithm with enhanced rate control for the H.264/AVC encoder. In stage 1, the algorithm performs the RD optimization process with a constant quantization parameter QP. Based on the prediction residual signals from stage 1 and the target distortion for smooth video quality, the frame-level bit target is allocated using a closed-form approximation of the rate-distortion relationship similar to [1], and a fast stage-2 encoding process is performed with enhanced basic-unit rate control. Experimental results show that, compared with the original rate control algorithm provided by the H.264/AVC reference software JM12.1, the proposed constant-distortion frame-level bit allocation scheme reduces quality fluctuation and delivers much smoother PSNR on all test sequences.

  10. High reliability outdoor sonar prototype based on efficient signal coding.

    PubMed

    Alvarez, Fernando J; Ureña, Jesús; Mazo, Manuel; Hernández, Alvaro; García, Juan J; de Marziani, Carlos

    2006-10-01

    Many mobile robots and autonomous vehicles designed for outdoor operation have incorporated ultrasonic sensors in their navigation systems, whose function is mainly to avoid possible collisions with very close obstacles. The use of these systems in more precise tasks requires signal encoding and the incorporation of pulse compression techniques that have already been used with success in the design of high-performance indoor sonars. However, the transmission of ultrasonic encoded signals outdoors entails a new challenge because of the effects of atmospheric turbulence. This phenomenon causes random fluctuations in the phase and amplitude of traveling acoustic waves, a fact that can make the encoded signal completely unrecognizable by its matched receiver. Atmospheric turbulence is investigated in this work, with the aim of determining the conditions under which it is possible to assure the reliable outdoor operation of an ultrasonic pulse compression system. As a result of this analysis, a novel sonar prototype based on complementary sequences coding is developed and experimentally tested. This encoding scheme provides the system with very useful additional features, namely, high robustness to noise, multi-mode operation capability (simultaneous emissions with minimum cross talk interference), and the possibility of applying an efficient detection algorithm that notably decreases the hardware resource requirements.
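    The key property of complementary (Golay) sequence pairs, the encoding family named above, can be checked in a few lines: the sum of the two aperiodic autocorrelations is an ideal delta, so a matched filter driven by both sequences has zero range sidelobes. The doubling construction below is the standard one; the sequence length is an illustrative choice:

    ```python
    import numpy as np

    def golay_pair(n_doublings):
        """Build a Golay complementary pair by the standard doubling
        construction: (a, b) -> (a|b, a|-b)."""
        a, b = np.array([1.0]), np.array([1.0])
        for _ in range(n_doublings):
            a, b = np.concatenate([a, b]), np.concatenate([a, -b])
        return a, b

    a, b = golay_pair(5)                  # length-32 sequences
    ra = np.correlate(a, a, mode="full")  # aperiodic autocorrelations
    rb = np.correlate(b, b, mode="full")
    s = ra + rb

    # Complementarity: sidelobes cancel exactly, leaving a single 2N peak,
    # which is what makes pulse compression with this coding so clean.
    N = len(a)
    assert s[N - 1] == 2 * N
    assert np.allclose(np.delete(s, N - 1), 0)
    ```

    In the sonar setting, each of the pair's two sequences modulates an emission and their correlator outputs are summed, so the robustness to noise comes directly from this sidelobe cancellation.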

  11. Efficient transfer of an arbitrary qutrit state in circuit quantum electrodynamics.

    PubMed

    Liu, Tong; Xiong, Shao-Jie; Cao, Xiao-Zhi; Su, Qi-Ping; Yang, Chui-Ping

    2015-12-01

    Compared with a qubit, a qutrit (i.e., three-level quantum system) has a larger Hilbert space and thus can be used to encode more information in quantum information processing and communication. Here, we propose a method to transfer an arbitrary quantum state between two flux qutrits coupled to two resonators. This scheme is simple because it only requires two basic operations. The state-transfer operation can be performed fast because only resonant interactions are used. Numerical simulations show that the high-fidelity transfer of quantum states between the two qutrits is feasible with current circuit-QED technology. This scheme is quite general and can be applied to accomplish the same task for other solid-state qutrits coupled to resonators.

  12. Proof of the Feasibility of Coherent and Incoherent Schemes for Pumping a Gamma-Ray Laser

    DTIC Science & Technology

    1987-10-01

    The University of Texas at Dallas, Center for Quantum Electronics. The Gamma-Ray Laser Project Quarterly Report, July-September 1987. PROOF OF THE FEASIBILITY OF COHERENT AND INCOHERENT SCHEMES FOR PUMPING A GAMMA-RAY LASER. Principal Investigator: Carl B. Collins, The University of Texas at Dallas. Contract N00014-86-C-2488.

  13. Multiparty Quantum Direct Secret Sharing of Classical Information with Bell States and Bell Measurements

    NASA Astrophysics Data System (ADS)

    Song, Yun; Li, Yongming; Wang, Wenhua

    2018-02-01

    This paper proposes a new and efficient multiparty quantum direct secret sharing (QDSS) scheme using entanglement swapping of Bell states. In the proposed scheme, the quantum correlation between the possible measurement results of the members (except the dealer) and the original local unitary operation encoded by the dealer is presented. All agents only need to perform Bell measurements to share the dealer's secret by recovering the dealer's operation, without performing any unitary operation themselves. Our scheme has several advantages. The dealer is not required to retain any photons, and can further share a predetermined key instead of a random key with the agents. It has high capacity, as two bits of secret messages can be transmitted by one EPR pair, and the intrinsic efficiency approaches 100%, because no classical bit needs to be transmitted except those for detection. Without inserting any checking sets for detecting eavesdropping, the scheme can resist not only the existing attacks but also the cheating attack from a dishonest agent.

  14. "Interactive Classification Technology"

    NASA Technical Reports Server (NTRS)

    deBessonet, Cary

    1999-01-01

    The investigators are upgrading a knowledge representation language called SL (Symbolic Language) and an automated reasoning system called SMS (Symbolic Manipulation System) to enable the technologies to be used in automated reasoning and interactive classification systems. The overall goals of the project are: a) the enhancement of the representation language SL to accommodate multiple perspectives and a wider range of meaning; b) the development of a sufficient set of operators to enable the interpreter of SL to handle representations of basic cognitive acts; and c) the development of a default inference scheme to operate over SL notation as it is encoded. As to particular goals, the first-year work plan focused on inferencing and representation issues, including: 1) the development of higher-level cognitive/classification functions and conceptual models for use in inferencing and decision making; 2) the specification of a more detailed scheme of defaults and the enrichment of SL notation to accommodate the scheme; and 3) the adoption of additional perspectives for inferencing.

  15. Cooperative MIMO communication at wireless sensor network: an error correcting code approach.

    PubMed

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in wireless sensor networks (WSN) explores energy-efficient wireless communication schemes between multiple sensors and a data gathering node (DGN) by exploiting multiple-input multiple-output (MIMO) and multiple-input single-output (MISO) configurations. In this paper, an energy-efficient cooperative MIMO (C-MIMO) technique is proposed in which a low-density parity-check (LDPC) code is used as the error correcting code. The rate of the LDPC code is varied by varying the length of the message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of LDPC coding. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different targeted probabilities of bit error p(b). It is observed that C-MIMO performs more efficiently when the targeted p(b) is smaller. Also, a lower encoding rate for the LDPC code offers better error characteristics.

  16. Cooperative MIMO Communication at Wireless Sensor Network: An Error Correcting Code Approach

    PubMed Central

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in a wireless sensor network (WSN) explores energy efficient wireless communication schemes between multiple sensors and a data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed in which a low density parity check (LDPC) code is used as the error correcting code. The rate of the LDPC code is varied by varying the lengths of the message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of LDPC coding. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different targeted probabilities of bit error pb. It is observed that C-MIMO performs more efficiently when the targeted pb is smaller. Also, a lower LDPC encoding rate offers better error characteristics. PMID:22163732

  17. The Hawke's Bay Condom Card Scheme: a qualitative study of the views of service providers on increased, discreet access for youth to free condoms.

    PubMed

    Ryder, Hollie; Aspden, Trudi; Sheridan, Janie

    2015-12-01

    The incidence of sexually transmitted infections and unplanned pregnancies in adolescence is of concern. The Hawke's Bay District Health Board, New Zealand, set up a pilot condom card scheme ('the Scheme') to allow 13- to 24-year-olds, deemed suitable for the Scheme, to access free condoms from pharmacies on presentation of a Condom Card. Our study explored the views of service providers of a pilot Condom Card Scheme. Qualitative interviews were conducted with 17 service providers (nurses, pharmacists, pharmacy staff) between February and April 2013. Our findings showed that the Scheme was viewed positively by service providers, who indicated almost universal support for the Scheme to continue. However, participants noted a perceived lack of advertising, low number of sites for collection of condoms, lack of flexibility of the Scheme's criteria relating to who could access the scheme and issues with some pharmacy service providers, all of which led to a number of recommendations for improving the Scheme. The views of service providers indicate broad support for the continuation of the Scheme. Canvassing young people's suggestions for improving the Scheme is also essential. © 2015 Royal Pharmaceutical Society.

  18. A Guide to Systematic Planning for Vocational and Technical Schools. Research 22.

    ERIC Educational Resources Information Center

    Meckley, Richard F.; And Others

    A school planning scheme involving 46 principal activities which occur over a 38-month period is presented. This scheme was developed for individuals responsible for the planning of vocational and technical schools, i.e., supervisors, state staff, university school plant planners, architects, and local school administrators. The activities…

  19. Discovery of User-Oriented Class Associations for Enriching Library Classification Schemes.

    ERIC Educational Resources Information Center

    Pu, Hsiao-Tieh

    2002-01-01

    Presents a user-based approach to exploring the possibility of adding user-oriented class associations to hierarchical library classification schemes. Classes not grouped in the same subject hierarchies yet relevant to users' knowledge are obtained by analyzing a log book of a university library's circulation records, using collaborative filtering…

  20. Argumentation and Participation Patterns in General Chemistry Peer-Led Sessions

    ERIC Educational Resources Information Center

    Kulatunga, Ushiri; Moog, Richard S.; Lewis, Jennifer E.

    2013-01-01

    This article focuses on the use of Toulmin's argumentation scheme to investigate the characteristics of student group argumentation in Peer-Led Guided Inquiry sessions for a General Chemistry I course. A coding scheme based on Toulmin's [Toulmin [1958] "The uses of argument." Cambridge: Cambridge University Press] argumentation…

  1. Build Your Own Particle Smasher: The Royal Society Partnership Grants Scheme

    ERIC Educational Resources Information Center

    Education in Science, 2012

    2012-01-01

    This article features the project, "Build Your Own Particle Smasher" and shares how to build a particle smasher project. A-level and AS-level students from Trinity Catholic School have built their own particle smashers, in collaboration with Nottingham Trent University, as part of The Royal Society's Partnership Grants Scheme. The…

  2. Evaluation of Future Blogs. Final Report

    ERIC Educational Resources Information Center

    Haines, Ben; Straw, Suzanne

    2008-01-01

    Future Blogs was developed through collaboration between the Royal Society of Chemistry and The Brightside Trust. It is an innovative e-mentoring scheme that links pupils studying chemistry with mentors from universities or industry and is based on the Bright Journals e-mentoring programme (an e-mentoring scheme targeted at 14-18 year olds…

  3. The impact of the National Treatment Purchase Fund on numbers of core urology training cases at University Hospital Galway.

    PubMed

    Harney, T J; Dowling, C M; Brady, C M

    2011-06-01

    Since the National Treatment Purchase Fund (NTPF) scheme was introduced in 2002, public patients waiting longer than three months for investigations and treatment are offered care in the private medical sector. Our aim was to assess the impact of the NTPF scheme on the number of training cases performed at University Hospital Galway (UHG). The number and type of urological procedures performed in the private medical sector under the NTPF scheme in 2008 were obtained from the UHG waiting list office. The number of these procedures performed on public patients by trainees at UHG in 2008 was determined retrospectively by reviewing theatre records. A significant number of core urology procedures were performed in the private sector via the NTPF scheme. Cancer centre designation and implementation of the EWTD will also place further pressures on urological training opportunities in Ireland. Copyright © 2010 Royal College of Surgeons of Edinburgh (Scottish charity number SC005317) and Royal College of Surgeons in Ireland. Published by Elsevier Ltd. All rights reserved.

  4. NW Lab for Integrated Systems

    DTIC Science & Technology

    1991-05-24

    hardware data compressors. [BuBo89, BuBo90, BuBo91] The data compression scheme of Ziv and Lempel repeatedly matches the input stream to words contained...most significantly reduce dictionary size requirements in practical Ziv-Lempel encoders, without compromising compression. However, the additional...achieve a fixed 20MB/sec data rate. Thus, our Ziv-Lempel implementation realizes a speed improvement of 10 to 20 times that of the fastest recent
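
The snippet above concerns hardware Ziv-Lempel compressors; as a software illustration of the underlying dictionary idea (hedged: this is LZW, one common Ziv-Lempel variant, not the cited hardware design):

```python
# Minimal LZW (a Ziv-Lempel variant): the encoder repeatedly matches the
# longest dictionary word against the input, emits its code, and adds the
# one-symbol extension of that match as a new dictionary entry.
def lzw_encode(data: str):
    dictionary = {chr(i): i for i in range(256)}
    w, out = "", []
    for ch in data:
        if w + ch in dictionary:
            w += ch
        else:
            out.append(dictionary[w])
            dictionary[w + ch] = len(dictionary)
            w = ch
    if w:
        out.append(dictionary[w])
    return out

def lzw_decode(codes):
    dictionary = {i: chr(i) for i in range(256)}
    w = dictionary[codes[0]]
    out = [w]
    for code in codes[1:]:
        # The "code just added" corner case: entry is w + its own first char
        entry = dictionary[code] if code in dictionary else w + w[0]
        out.append(entry)
        dictionary[len(dictionary)] = w + entry[0]
        w = entry
    return "".join(out)

codes = lzw_encode("abababababab")
assert lzw_decode(codes) == "abababababab"
assert len(codes) < len("abababababab")  # repetitive input compresses
```

The hardware designs referenced above implement essentially this matching loop in parallel to sustain fixed data rates.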

  5. Classical and quantum communication without a shared reference frame.

    PubMed

    Bartlett, Stephen D; Rudolph, Terry; Spekkens, Robert W

    2003-07-11

    We show that communication without a shared reference frame is possible using entangled states. Both classical and quantum information can be communicated with perfect fidelity without a shared reference frame at a rate that asymptotically approaches one classical bit or one encoded qubit per transmitted qubit. We present an optical scheme to communicate classical bits without a shared reference frame using entangled photon pairs and linear optical Bell state measurements.

  6. Spherical hashing: binary code embedding with hyperspheres.

    PubMed

    Heo, Jae-Pil; Lee, Youngwoon; He, Junfeng; Chang, Shih-Fu; Yoon, Sung-Eui

    2015-11-01

    Many binary code embedding schemes have been actively studied recently, since they can provide efficient similarity search and compact data representations suitable for handling large-scale image databases. Existing binary code embedding techniques encode high-dimensional data by using hyperplane-based hashing functions. In this paper we propose a novel hypersphere-based hashing function, spherical hashing, to map spatially coherent data points into a binary code more effectively than hyperplane-based hashing functions. We also propose a new binary code distance function, spherical Hamming distance, tailored for our hypersphere-based binary coding scheme, and design an efficient iterative optimization process to achieve both balanced partitioning for each hash function and independence between hashing functions. Furthermore, we generalize spherical hashing to support various similarity measures defined by kernel functions. Our extensive experiments show that our spherical hashing technique significantly outperforms state-of-the-art techniques based on hyperplanes across various benchmarks with sizes ranging from one million to 75 million GIST, BoW and VLAD descriptors. The performance gains are consistent and large, up to 100 percent improvements over the second-best tested method. These results confirm the unique merits of using hyperspheres to encode proximity regions in high-dimensional spaces. Finally, our method is intuitive and easy to implement.
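
A minimal sketch of the idea, with random (unoptimized) pivots and radii; the spherical Hamming distance is taken here as (number of differing bits)/(number of common 1-bits), and all constants are illustrative assumptions rather than the paper's optimized values:

```python
import random, math

# Toy spherical hashing sketch: each hash bit tests whether a point lies
# inside a hypersphere (pivot p_i, radius r_i), instead of on one side of
# a hyperplane. Pivots/radii here are random; the paper optimizes them for
# balanced partitioning and pairwise independence.
def make_spheres(num_bits, dim, rng):
    return [([rng.uniform(-1, 1) for _ in range(dim)], rng.uniform(0.5, 1.5))
            for _ in range(num_bits)]

def encode(x, spheres):
    return [1 if math.dist(x, center) <= radius else 0
            for center, radius in spheres]

def spherical_hamming_distance(a, b):
    # Differing bits normalized by common 1-bits: shared "inside both
    # spheres" bits tighten the bound on true proximity.
    common_ones = sum(1 for u, v in zip(a, b) if u == 1 and v == 1)
    differing = sum(1 for u, v in zip(a, b) if u != v)
    return differing / common_ones if common_ones else float("inf")

rng = random.Random(0)
spheres = make_spheres(16, 4, rng)
x = [0.1, 0.2, 0.0, -0.1]
near = [v + 0.01 for v in x]
far = [v + 2.0 for v in x]
assert spherical_hamming_distance(encode(x, spheres), encode(near, spheres)) <= \
       spherical_hamming_distance(encode(x, spheres), encode(far, spheres))
```

Points inside many common spheres get a small distance; points sharing no sphere are maximally far apart, which is the proximity-region intuition the abstract describes.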

  7. Going beyond 4 Gbps data rate by employing RGB laser diodes for visible light communication.

    PubMed

    Janjua, Bilal; Oubei, Hassan M; Durán Retamal, Jose R; Ng, Tien Khee; Tsai, Cheng-Ting; Wang, Huai-Yung; Chi, Yu-Chieh; Kuo, Hao-Chung; Lin, Gong-Ru; He, Jr-Hau; Ooi, Boon S

    2015-07-13

    With increasing interest in visible light communication (VLC), the laser diode (LD) provides an attractive alternative to the LED, with higher efficiency, shorter linewidth and larger bandwidth for high-speed VLC. Previously, a data rate of more than 3 Gbps was demonstrated using LEDs. By using LDs and a spectrally efficient orthogonal frequency division multiplexing encoding scheme, significantly higher data rates have been achieved in this work. Using a 16-QAM modulation scheme, in conjunction with red, blue and green LDs, data rates of 4.4 Gbps, 4 Gbps and 4 Gbps, with corresponding BER/SNR/EVM of 3.3 × 10⁻³/15.3/17.9, 1.4 × 10⁻³/16.3/15.4 and 2.8 × 10⁻³/15.5/16.7, were obtained over a transmission distance of ~20 cm. We also demonstrated white light emission using the red, blue and green LDs simultaneously, after passing their beams through a commercially available diffuser element. Our work highlights that a tradeoff exists in operating the blue LD at the optimum bias condition while maintaining a good color temperature. The best results were obtained when encoding the red LD, which gave both the strongest received signal amplitude and white light with a CCT value of 5835 K.
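
For readers unfamiliar with 16-QAM, the bit-to-symbol mapping step can be sketched as follows (a Gray-mapped toy, omitting the paper's OFDM framing, pulse shaping and channel effects):

```python
# Minimal 16-QAM Gray-mapped constellation sketch: 4 bits -> one complex
# symbol, so each modulated sample carries 4x the data of on-off keying.
# Illustrative only; not the paper's transmitter chain.
GRAY2 = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}  # Gray-coded PAM-4 levels

def qam16_modulate(bits):
    assert len(bits) % 4 == 0
    symbols = []
    for i in range(0, len(bits), 4):
        I = GRAY2[(bits[i], bits[i + 1])]
        Q = GRAY2[(bits[i + 2], bits[i + 3])]
        symbols.append(complex(I, Q))
    return symbols

def qam16_demodulate(symbols):
    inv = {v: k for k, v in GRAY2.items()}
    levels = sorted(inv)  # [-3, -1, 1, 3]
    bits = []
    for s in symbols:
        for comp in (s.real, s.imag):
            nearest = min(levels, key=lambda L: abs(comp - L))  # hard decision
            bits.extend(inv[nearest])
    return bits

data = [1, 0, 0, 1, 1, 1, 0, 0]
noisy = [s + complex(0.3, -0.2) for s in qam16_modulate(data)]
assert qam16_demodulate(noisy) == data  # small noise stays within decision regions
```

Gray mapping means adjacent constellation points differ in a single bit, so the most likely symbol errors cost only one bit error — part of why 16-QAM achieves the spectral efficiency the abstract relies on.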

  8. Parallel protein secondary structure prediction based on neural networks.

    PubMed

    Zhong, Wei; Altun, Gulsah; Tian, Xinmin; Harrison, Robert; Tai, Phang C; Pan, Yi

    2004-01-01

    Protein secondary structure prediction has a fundamental influence on today's bioinformatics research. In this work, binary and tertiary classifiers for protein secondary structure prediction are implemented on the Denoeux belief neural network (DBNN) architecture. A hydrophobicity matrix, an orthogonal matrix, BLOSUM62 and PSSM (position-specific scoring matrix) profiles are tested separately as encoding schemes for the DBNN. The experimental results contribute to the design of new encoding schemes. A new binary classifier for Helix versus not-Helix (~H) on the DBNN produces a prediction accuracy of 87% when the PSSM is used for the input profile. The performance of the DBNN binary classifier is comparable to that of the best prediction methods. The good test results for binary classifiers open a new approach to protein structure prediction with neural networks. Due to the time-consuming task of training the neural networks, Pthreads and OpenMP are employed to parallelize the DBNN on a hyperthreading-enabled Intel architecture. The speedup for 16 Pthreads is 4.9 and the speedup for 16 OpenMP threads is 4 on the 4-processor shared-memory architecture. The speedup performance of both OpenMP and Pthreads is superior to that reported in other research. With the new parallel training algorithm, thousands of amino acids can be processed in a reasonable amount of time. Our research also shows that hyperthreading technology on the Intel architecture is efficient for parallel biological algorithms.
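
The "orthogonal matrix" encoding mentioned above is essentially one-hot encoding of residues; a minimal sketch (the window size and zero-padding convention are illustrative assumptions, not the paper's exact setup):

```python
# Sketch of orthogonal (one-hot) input encoding for secondary structure
# prediction: each amino acid maps to a one-hot vector, so a window of w
# residues becomes a 20*w-dimensional network input. A PSSM encoding would
# replace the 0/1 entries with position-specific substitution scores.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def one_hot(residue):
    vec = [0] * len(AMINO_ACIDS)
    vec[AMINO_ACIDS.index(residue)] = 1
    return vec

def encode_window(sequence, center, w=3):
    # w residues centered on `center`, zero-padded past the sequence ends
    half = w // 2
    vecs = []
    for i in range(center - half, center + half + 1):
        if 0 <= i < len(sequence):
            vecs.append(one_hot(sequence[i]))
        else:
            vecs.append([0] * len(AMINO_ACIDS))
    return [x for v in vecs for x in v]

features = encode_window("MKVLA", center=0, w=3)
assert len(features) == 60   # 3 residues x 20-dim one-hot
assert sum(features) == 2    # one padded position is all zeros
```

Each encoding scheme in the abstract is, in effect, a different choice of the per-residue vector fed into this windowing step.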

  9. A novel color image compression algorithm using the human visual contrast sensitivity characteristics

    NASA Astrophysics Data System (ADS)

    Yao, Juncai; Liu, Guizhong

    2017-03-01

    In order to achieve a higher image compression ratio and improve the visual perception of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. Afterwards, the discrete cosine transform is carried out for each sub-block, and three quantization matrices, built by combining the contrast sensitivity characteristics of the HVS, are used to quantize the frequency spectrum coefficients of the images. The Huffman algorithm is used to encode the quantized data. The inverse process involves decompression and matching to reconstruct the decompressed color image. Simulations are carried out for two color images. The results show that the average structural similarity index measurement (SSIM) and peak signal-to-noise ratio (PSNR) at approximately the same compression ratio are increased by 2.78% and 5.48%, respectively, compared with Joint Photographic Experts Group (JPEG) compression. The results indicate that the proposed compression algorithm is feasible and effective, achieving a higher compression ratio while preserving encoding and image quality, and can fully meet the needs of storage and transmission of color images in daily life.
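
The transform-quantize-encode pipeline can be sketched on a single 1-D block (the step sizes below are made-up stand-ins for the paper's HVS-derived quantization matrices, and the Huffman stage is omitted):

```python
import math

# Sketch of the abstract's pipeline on one 8-sample block: DCT -> quantize
# high frequencies more coarsely (mimicking HVS-weighted quantization) ->
# dequantize -> inverse DCT. Step sizes are illustrative assumptions only.
N = 8
STEPS = [1, 1, 2, 4, 8, 16, 16, 16]  # coarser for higher spatial frequency

def dct(x):  # orthonormal DCT-II
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        out.append(s * math.sqrt((1 if k == 0 else 2) / N))
    return out

def idct(X):  # inverse transform (DCT-III)
    out = []
    for n in range(N):
        s = X[0] / math.sqrt(N)
        s += sum(X[k] * math.sqrt(2 / N) * math.cos(math.pi * (n + 0.5) * k / N)
                 for k in range(1, N))
        out.append(s)
    return out

block = [52, 55, 61, 66, 70, 61, 64, 73]
quantized = [round(c / q) for c, q in zip(dct(block), STEPS)]
restored = idct([v * q for v, q in zip(quantized, STEPS)])
# Reconstruction error is bounded by half the step sizes times the basis
# amplitudes (< 16 for these steps); smooth blocks fare much better.
assert max(abs(a - b) for a, b in zip(block, restored)) < 16
```

The perceptual trick is that coarse quantization is applied where the HVS is least sensitive, so the bit savings cost little visible quality.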

  10. Low-cost and high-resolution interrogation scheme for LPG-based temperature sensor

    NASA Astrophysics Data System (ADS)

    Venkata Reddy, M.; Srimannarayana, K.; Venkatappa Rao, T.; Vengal Rao, P.

    2015-09-01

    A low-cost, high-resolution interrogation scheme for a long-period fiber grating (LPG) temperature sensor with an adjustable temperature range has been designed, developed and tested. LPGs are widely used as optical sensors and can serve as optical edge filters to interrogate the wavelength-encoded signal from sensors such as fiber Bragg gratings (FBGs) by converting it into an intensity-modulated signal. The interrogation of an LPG sensor using an FBG, however, is novel and is studied here experimentally. The sensor works by measuring the shift of the LPG attenuation band corresponding to the applied temperature. The wavelength shift of the LPG attenuation band is first monitored using an optical spectrum analyser (OSA). The bulky and expensive OSA is then replaced with a low-cost interrogation system that employs an FBG, a photodiode and a transimpedance amplifier (TIA). The designed interrogation scheme makes the system low-cost and fast in response, and achieves a resolution of 0.1 °C. The measurable temperature range using the proposed scheme is limited to 120 °C; however, this range can be shifted within 15-450 °C by adjusting the Bragg wavelength of the FBG.
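
The edge-filter principle — converting a wavelength shift into an intensity change via the filter's linear slope — can be modeled in a few lines. All coefficients below are illustrative assumptions, not the paper's calibration values:

```python
# Toy model of edge-filter interrogation: the LPG attenuation band shifts
# with temperature, and a filter edge overlapping that band converts the
# wavelength shift into a photodiode intensity change. Every constant here
# is an assumed, illustrative value.
LPG_SENSITIVITY_NM_PER_C = 0.05   # assumed LPG wavelength shift per deg C
EDGE_SLOPE_PER_NM = 0.8           # assumed filter transmission slope
T_REF_C = 25.0
BASE_TRANSMISSION = 0.5           # operating point on the linear edge

def photodiode_reading(temp_c):
    shift_nm = LPG_SENSITIVITY_NM_PER_C * (temp_c - T_REF_C)
    return BASE_TRANSMISSION + EDGE_SLOPE_PER_NM * shift_nm

def temperature_from_reading(t):
    # Invert the linear edge model to recover temperature from intensity
    shift_nm = (t - BASE_TRANSMISSION) / EDGE_SLOPE_PER_NM
    return T_REF_C + shift_nm / LPG_SENSITIVITY_NM_PER_C

reading = photodiode_reading(30.1)
assert abs(temperature_from_reading(reading) - 30.1) < 1e-9
```

Shifting the filter's center wavelength (the FBG's Bragg wavelength, in the paper) moves the linear operating window without changing this conversion logic, which is how the adjustable range works.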

  11. Holographic memory system based on projection recording of computer-generated 1D Fourier holograms.

    PubMed

    Betin, A Yu; Bobrinev, V I; Donchenko, S S; Odinokov, S B; Evtikhiev, N N; Starikov, R S; Starikov, S N; Zlokazov, E Yu

    2014-10-01

    Utilization of computer generation of holographic structures significantly simplifies the optical scheme used to record microholograms in a holographic memory system. Digital holographic synthesis also allows the nonlinear errors of the recording system to be taken into account to improve microhologram quality. Multiplexed recording of holograms is a widespread technique for increasing data record density. In this article we present a holographic memory system based on digital synthesis of amplitude one-dimensional (1D) Fourier transform holograms and the multiplexed recording of these holograms onto the holographic carrier using an optical projection scheme. 1D Fourier transform holograms are very sensitive to the orientation of the anamorphic optical element (cylindrical lens) required for encoded data object reconstruction. The multiplexed recording of several holograms with different orientations in an optical projection scheme allowed reconstruction of the data object from each hologram by rotating the cylindrical lens to the corresponding angle. We also discuss two optical schemes for reading out the recorded holograms: a full-page readout system and a line-by-line readout system. We consider the benefits of both systems and present the results of experimental modeling of nonmultiplexed and multiplexed recording and reconstruction of 1D Fourier holograms.

  12. Spin dynamics and Kondo physics in optical tweezers

    NASA Astrophysics Data System (ADS)

    Lin, Yiheng; Lester, Brian J.; Brown, Mark O.; Kaufman, Adam M.; Long, Junling; Ball, Randall J.; Isaev, Leonid; Wall, Michael L.; Rey, Ana Maria; Regal, Cindy A.

    2016-05-01

    We propose to use optical tweezers as a toolset for direct observation of the interplay between quantum statistics, kinetic energy and interactions, and thus implement minimum instances of the Kondo lattice model in systems with few bosonic rubidium atoms. By taking advantage of strong local exchange interactions, our ability to tune the spin-dependent potential shifts between the two wells and complete control over spin and motional degrees of freedom, we design an adiabatic tunneling scheme that efficiently creates a spin-singlet state in one well starting from two initially separated atoms (one atom per tweezer) in opposite spin state. For three atoms in a double-well, two localized in the lowest vibrational mode of each tweezer and one atom in an excited delocalized state, we plan to use similar techniques and observe resonant transfer of two-atom singlet-triplet states between the wells in the regime when the exchange coupling exceeds the mobile atom hopping. Moreover, we argue that such three-atom double-tweezers could potentially be used for quantum computation by encoding logical qubits in collective spin and motional degrees of freedom. Current address: Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shue, Craig A; Gupta, Prof. Minaxi

    Users are being tracked on the Internet more than ever before as Web sites and search engines gather pieces of information sufficient to identify and study their behavior. While many existing schemes provide strong anonymity, they are inappropriate when high bandwidth and low latency are required. In this work, we explore an anonymity scheme for end hosts whose performance makes it possible to have it always on. The scheme leverages the natural grouping of hosts in the same subnet and the universally available broadcast primitive to provide anonymity at line speeds. Our scheme is strongly resistant against all active or passive adversaries as long as they are outside the subnet. Even within the subnet, our scheme provides reasonable resistance against adversaries, providing anonymity that is suitable for common Internet applications.

  14. ESSG-based global spatial reference frame for datasets interrelation

    NASA Astrophysics Data System (ADS)

    Yu, J. Q.; Wu, L. X.; Jia, Y. J.

    2013-10-01

    To understand the highly complex Earth system, a large volume, as well as a large variety, of datasets on the planet Earth are being obtained, distributed, and shared worldwide every day. However, few existing systems concentrate on the distribution and interrelation of different datasets in a common Global Spatial Reference Frame (GSRF), which poses an invisible obstacle to data sharing and scientific collaboration. The Group on Earth Observations (GEO) has recently established a new GSRF, named the Earth System Spatial Grid (ESSG), for global dataset distribution, sharing and interrelation in its 2012-2015 work plan. The ESSG may bridge the gap among different spatial datasets and hence overcome these obstacles. This paper presents the implementation of the ESSG-based GSRF. A reference spheroid, a grid subdivision scheme, and a suitable encoding system are required to implement it. The radius of the ESSG reference spheroid was set to double the approximate Earth radius so that datasets from different areas of Earth system science are covered. The same positioning and orientation parameters as Earth Centred Earth Fixed (ECEF) were adopted for the ESSG reference spheroid so that any other GSRF can be freely transformed into the ESSG-based GSRF. The spheroid degenerated octree grid with radius refinement (SDOG-R) and its encoding method were taken as the grid subdivision and encoding scheme for their good performance in many aspects. A triple (C, T, A) model is introduced to represent and link different datasets based on the ESSG-based GSRF. Finally, methods of coordinate transformation between the ESSG-based GSRF and other GSRFs are presented to make the ESSG-based GSRF operable and propagable.
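
A hedged sketch of the grid-encoding principle (uniform bisection only; real SDOG-R degenerates cells near the poles and the sphere's center, which is not reproduced here):

```python
# Illustrative octree-style encoding of a point in a spherical shell
# (latitude, longitude, radius): each level bisects all three dimensions
# and records a 3-bit child index. The radial bound of 12742 km reflects
# the abstract's "double the Earth radius" spheroid; everything else is a
# simplifying assumption, not the actual SDOG-R subdivision.
def octree_code(lat, lon, r, levels=5,
                bounds=((-90.0, 90.0), (-180.0, 180.0), (0.0, 12742.0))):
    (lat0, lat1), (lon0, lon1), (r0, r1) = bounds
    code = []
    for _ in range(levels):
        idx = 0
        mid = (lat0 + lat1) / 2
        if lat >= mid: idx |= 1; lat0 = mid
        else: lat1 = mid
        mid = (lon0 + lon1) / 2
        if lon >= mid: idx |= 2; lon0 = mid
        else: lon1 = mid
        mid = (r0 + r1) / 2
        if r >= mid: idx |= 4; r0 = mid
        else: r1 = mid
        code.append(idx)
    return code

# Nearby points share a code prefix; distant points diverge at level 1.
a = octree_code(48.85, 2.35, 6371.0)
b = octree_code(48.86, 2.36, 6371.0)
c = octree_code(-33.87, 151.21, 6371.0)
assert a[:3] == b[:3]
assert a[0] != c[0]
```

The shared-prefix property is what lets a grid code act as a spatial key for interrelating datasets: truncating the code coarsens the cell.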

  15. Improved classical and quantum random access codes

    NASA Astrophysics Data System (ADS)

    Liabøtrø, O.

    2017-05-01

    A (quantum) random access code ((Q)RAC) is a scheme that encodes n bits into m (qu)bits such that any of the n bits can be recovered with a worst-case probability p > 1/2. We generalize (Q)RACs to a scheme encoding n d-levels into m (quantum) d-levels such that any d-level can be recovered with the probability for every wrong outcome value being less than 1/d. We construct explicit solutions for all n ≤ (d^(2m) - 1)/(d - 1). For d = 2, the constructions coincide with those previously known. We show that the (Q)RACs are d-parity oblivious, generalizing ordinary parity obliviousness. We further investigate optimization of the success probabilities. For d = 2, we use the measurement operators of the previously best-known solutions, but improve the encoding states to give a higher success probability. We conjecture that for maximal (n = 4^m - 1, m, p) QRACs, p = (1/2){1 + [(√3 + 1)m - 1]^(-1)} is possible, and show that it is an upper bound for the measurement operators that we use. We then compare (n, m, p_q) QRACs with classical (n, 2m, p_c) RACs. We can always find p_q ≥ p_c, but the classical code gives information about every input bit simultaneously, while the QRAC only gives information about a subset. For several different (n, 2, p) QRACs, we see the same trade-off, as the best p values are obtained when the number of bits that can be obtained simultaneously is as small as possible. The trade-off is connected to parity obliviousness, since high-certainty information about several bits can be used to calculate probabilities for parities of subsets.
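
As a quick consistency check — assuming the conjectured probability is read as p = (1/2){1 + [(√3 + 1)m - 1]^(-1)} — the m = 1 case should reduce to the known optimal (3, 1, p) QRAC value (1/2)(1 + 1/√3):

```python
import math

# Sanity check of the conjectured maximal-QRAC success probability
# p(m) = (1/2) * (1 + 1/((sqrt(3)+1)*m - 1)), as reconstructed from the
# abstract. At m = 1 it should equal the known optimal (3,1,p) QRAC value.
def conjectured_p(m):
    return 0.5 * (1 + 1 / ((math.sqrt(3) + 1) * m - 1))

known_31 = 0.5 * (1 + 1 / math.sqrt(3))  # optimal (3,1,p) QRAC
assert abs(conjectured_p(1) - known_31) < 1e-12
assert conjectured_p(2) < conjectured_p(1)  # success probability decays with m
```

This matches the expected behavior: packing exponentially many (4^m - 1) bits into m qubits drives the worst-case recovery probability down toward 1/2.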

  16. Post-translational modification of Rauscher leukemia virus precursor polyproteins encoded by the gag gene.

    PubMed Central

    Schultz, A M; Rabin, E H; Oroszlan, S

    1979-01-01

    Post-translational modifications of retrovirus gag gene-encoded polyproteins include proteolytic cleavage, phosphorylation, and glycosylation. To study the sequence of these events, we labeled JLS-V9 cells chronically infected with Rauscher murine leukemia virus in pulse-chase experiments with the radioactive precursors [35S]methionine, [14C]mannose, [3H]glucosamine, and [32P]phosphate. Newly synthesized gag polyproteins which incorporated label, and the modified products derived from them, were identified by immunoprecipitation of cell lysates with anti-p30 rabbit serum, followed by sodium dodecyl sulfate-polyacrylamide gel electrophoresis and autoradiography. Pulse-chase experiments were carried out in the presence as well as in the absence of tunicamycin, an inhibitor of glycosylation. Among the three major polyproteins synthesized in the absence of tunicamycin, two were found to be glycosylated but not phosphorylated. These were designated gPr80gag and gP94gag. Both shared identical [35S]methionine peptides with Pr65gag and p30. Of the two nonglycosylated precursors, Pr65gag and Pr75gag, only Pr65gag was found to be detectably phosphorylated, and Pr75gag could be readily identified only when glycosylation was inhibited. On the basis of these results, a scheme for the post-translational modification of gag polyproteins is proposed. According to this scheme the gag gene-encoded polyproteins are processed from a common precursor, Pr75gag, by two divergent pathways: one leading through the intermediate Pr65gag to internal virion components via cleavage and phosphorylation and the other via tunicamycin-sensitive mannosylation to the intermediate gPr80gag, which is further glycosylated to yield cell surface polyprotein gP94gag. PMID:480454

  17. ULTRA-SHARP nonoscillatory convection schemes for high-speed steady multidimensional flow

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.; Mokhtari, Simin

    1990-01-01

    For convection-dominated flows, classical second-order methods are notoriously oscillatory and often unstable. For this reason, many computational fluid dynamicists have adopted various forms of (inherently stable) first-order upwinding over the past few decades. Although it is now well known that first-order convection schemes suffer from serious inaccuracies attributable to artificial viscosity or numerical diffusion under high convection conditions, these methods continue to enjoy widespread popularity for numerical heat transfer calculations, apparently due to a perceived lack of viable high accuracy alternatives. But alternatives are available. For example, nonoscillatory methods used in gasdynamics, including currently popular TVD schemes, can be easily adapted to multidimensional incompressible flow and convective transport. This, in itself, would be a major advance for numerical convective heat transfer, for example. But, as is shown, second-order TVD schemes form only a small, overly restrictive, subclass of a much more universal, and extremely simple, nonoscillatory flux-limiting strategy which can be applied to convection schemes of arbitrarily high order accuracy, while requiring only a simple tridiagonal ADI line-solver, as used in the majority of general purpose iterative codes for incompressible flow and numerical heat transfer. The new universal limiter and associated solution procedures form the so-called ULTRA-SHARP alternative for high resolution nonoscillatory multidimensional steady state high speed convective modelling.
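
A standard flux-limited advection demo (minmod limiter; hedged: this is a generic TVD scheme, not the ULTRA-SHARP universal limiter itself) shows the nonoscillatory behavior being described:

```python
# Generic flux-limited upwind advection sketch (minmod limiter) for 1D
# linear advection on a periodic grid, illustrating the kind of
# nonoscillatory high-resolution scheme the abstract discusses.
def minmod(a, b):
    if a * b <= 0:
        return 0.0
    return min(abs(a), abs(b)) * (1 if a > 0 else -1)

def advect(u, courant, steps):
    n = len(u)
    for _ in range(steps):
        # Limited slopes restore second-order accuracy without new extrema
        slopes = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i])
                  for i in range(n)]
        # Upwind face value at i+1/2 for positive advection velocity
        flux = [u[i] + 0.5 * (1 - courant) * slopes[i] for i in range(n)]
        u = [u[i] - courant * (flux[i] - flux[i - 1]) for i in range(n)]
    return u

# Advect a step profile: a limited scheme must not overshoot input bounds.
u0 = [1.0] * 10 + [0.0] * 30
u = advect(u0, courant=0.5, steps=40)
assert max(u) <= 1.0 + 1e-9 and min(u) >= -1e-9  # no spurious oscillations
```

An unlimited second-order scheme run on the same step profile would produce over- and undershoots near the discontinuity; the limiter suppresses them while keeping sharper fronts than first-order upwinding.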

  18. Quantum Monte Carlo calculations of NiO

    NASA Astrophysics Data System (ADS)

    Maezono, Ryo; Towler, Mike D.; Needs, Richard J.

    2008-03-01

    We describe variational and diffusion quantum Monte Carlo (VMC and DMC) calculations [1] of NiO using a 1024-electron simulation cell. We have used a smooth, norm-conserving, Dirac-Fock pseudopotential [2] in our work. Our trial wave functions were of Slater-Jastrow form, containing orbitals generated in Gaussian-basis UHF periodic calculations. [3] The Jastrow factor is optimized using variance minimization with optimized cutoff lengths, using the same scheme as in our previous work. [4] We apply the lattice-regularized scheme [5] to evaluate nonlocal pseudopotentials in DMC and find that the scheme improves the smoothness of the energy-volume curve. [1] CASINO ver. 2.1 User Manual, University of Cambridge (2007). [2] J. R. Trail et al., J. Chem. Phys. 122, 014112 (2005). [3] CRYSTAL98 User's Manual, University of Torino (1998). [4] Ryo Maezono et al., Phys. Rev. Lett. 98, 025701 (2007). [5] Michele Casula, Phys. Rev. B 74, 161102(R) (2006).

  19. [Standardization in laboratory hematology by participating in external quality assurance programs].

    PubMed

    Nazor, Aida; Siftar, Zoran; Flegar-Mestrić, Zlata

    2011-09-01

    Since 1985, the Department of Clinical Chemistry and Laboratory Medicine, Merkur University Hospital, Zagreb, has been participating in the International External Quality Assessment Scheme for Hematology (IEQAS-H) organized by the World Health Organization (WHO). Owing to its very good results, in 1987 the Department received a certificate of participation in this control scheme. The Department has also been cooperating in the external quality assessment program in laboratory hematology which has been performed continuously in Croatia since 1986 by the Committee for External Quality Assessment Schemes under the auspices of the Croatian Society of Medical Biochemists and the School of Pharmacy and Biochemistry, University of Zagreb. Nowadays, 186 medical biochemical laboratories are included in the National External Quality Assessment program, which is performed three times per year. Our Department has also participated in the international projects of the European Committee for External Quality Assurance Programs in Laboratory Medicine (EQALM).

  20. The "Universal" in UHC and Ghana's National Health Insurance Scheme: policy and implementation challenges and dilemmas of a lower middle income country.

    PubMed

    Agyepong, Irene Akua; Abankwah, Daniel Nana Yaw; Abroso, Angela; Chun, ChangBae; Dodoo, Joseph Nii Otoe; Lee, Shinye; Mensah, Sylvester A; Musah, Mariam; Twum, Adwoa; Oh, Juwhan; Park, Jinha; Yang, DoogHoon; Yoon, Kijong; Otoo, Nathaniel; Asenso-Boadi, Francis

    2016-09-21

    Despite universal population coverage and equity being a stated policy goal of its NHIS, over a decade since passage of the first law in 2003, Ghana continues to struggle with how to attain it. The predominantly (about 70%) tax-funded NHIS currently has active enrolment hovering around 40% of the population. This study explored in depth the enablers of and barriers to enrolment in the NHIS to provide lessons and insights, for Ghana and other low- and middle-income countries (LMIC), into attaining the goal of universality in Universal Health Coverage (UHC). We conducted a cross-sectional mixed-methods study of an urban and a rural district in one region of Southern Ghana. Data came from document review, analysis of routine data on enrolment, key informant in-depth interviews with local government, regional and district insurance scheme and provider staff, and community member in-depth interviews and focus group discussions. Population coverage in the NHIS in the study districts was not growing towards near-universal levels because many of those who had ever enrolled failed to renew annually as required by the NHIS policy. Factors facilitating and enabling enrolment were driven by the design details of the scheme that emanate from national-level policy and program formulation, frontline purchaser and provider staff implementation arrangements, and contextual factors. The factors interrelated and worked together to affect client experience of the scheme, which was not always the same as the declared policy intent. This in turn affected the decision to enrol and stay enrolled. UHC policy and program design needs to be such that enrolment is effectively compulsory in practice. It also requires careful attention and responsiveness to actual and potential subscriber, purchaser and provider (stakeholder) incentives and related behaviour generated at implementation levels.

  1. Typing of Panton-Valentine leukocidin-encoding phages carried by methicillin-susceptible and methicillin-resistant Staphylococcus aureus from Italy.

    PubMed

    Sanchini, A; Del Grosso, M; Villa, L; Ammendolia, M G; Superti, F; Monaco, M; Pantosti, A

    2014-11-01

    Panton-Valentine leukocidin (PVL) is the hallmark of community-acquired methicillin-resistant Staphylococcus aureus (CA-MRSA) but can also be found in methicillin-susceptible S. aureus (MSSA) sharing pathogenic and epidemiological characteristics of CA-MRSA. PVL is encoded by two co-transcribed genes that are carried by different staphylococcal bacteriophages. We applied an extended PCR-based typing scheme for the identification of two morphological groups (elongated-head group and icosahedral-head group I phages) and specific PVL phage types in S. aureus isolates recovered in Italy. We examined 48 PVL-positive isolates (25 MSSA and 23 MRSA) collected from different hospital laboratories from April 2005 to May 2011. spa typing, multilocus sequence typing and staphylococcal cassette chromosome mec typing were applied to categorize the isolates. Phage typeability was 48.0% in MSSA and 91.3% in MRSA, highlighting the limitation of the PCR typing scheme when applied to PVL-positive MSSA. Five different PVL phages and two variants of a known phage were detected, the most prevalent being ΦSa2usa, recovered in 15 out of 48 (31.2%) isolates, and carried by both MSSA and MRSA belonging to CC8 and CC5. The recently described ΦTCH60 was recovered in four isolates. A PVL phage (ΦSa119) from an ST772 MRSA, that was not detected using the previous typing scheme, was sequenced, and new primers were designed for the identification of the icosahedral-head group II PVL phages present in ST772 and ST59 MRSA. A comprehensive PVL-phage typing can contribute to the understanding of the epidemiology and evolution of PVL-positive MSSA and MRSA. © 2014 The Authors Clinical Microbiology and Infection © 2014 European Society of Clinical Microbiology and Infectious Diseases.

  2. Structural Bridges through Fold Space.

    PubMed

    Edwards, Hannah; Deane, Charlotte M

    2015-09-01

    Several protein structure classification schemes exist that partition the protein universe into structural units called folds. Yet these schemes do not discuss how these units sit relative to each other in a global structure space. In this paper we construct networks that describe such global relationships between folds in the form of structural bridges. We generate these networks using four different structural alignment methods across multiple score thresholds. The networks constructed using the different methods remain a similar distance apart regardless of the probability threshold defining a structural bridge. This suggests that at least some structural bridges are method specific and that any attempt to build a picture of structural space should not be reliant on a single structural superposition method. Despite these differences all representations agree on an organisation of fold space into five principal community structures: all-α, all-β sandwiches, all-β barrels, α/β and α + β. We project estimated fold ages onto the networks and find that not only are the pairings of unconnected folds associated with higher age differences than bridged folds, but this difference increases with the number of networks displaying an edge. We also examine different centrality measures for folds within the networks and how these relate to fold age. While these measures interpret the central core of fold space in varied ways they all identify the disposition of ancestral folds to fall within this core and that of the more recently evolved structures to provide the peripheral landscape. These findings suggest that evolutionary information is encoded along these structural bridges. Finally, we identify four highly central pivotal folds representing dominant topological features which act as key attractors within our landscapes.

  3. A neural coding scheme reproducing foraging trajectories

    NASA Astrophysics Data System (ADS)

    Gutiérrez, Esther D.; Cabrera, Juan Luis

    2015-12-01

The movement of many animals may follow Lévy patterns. The neuronal dynamics generating such behavior is unknown. In this paper we show that a novel discovery of multifractality in winnerless competition (WLC) systems reveals a potential encoding mechanism that is translatable into two-dimensional superdiffusive Lévy movements. The validity of our approach is tested on a conductance-based neuronal model showing WLC and through the extraction of Lévy-flight-inducing fractals from recordings of rat hippocampus during open-field foraging. Further insights are gained by analyzing mouse motor cortex neurons and non-motor cell signals. The proposed mechanism provides a plausible explanation for the neuro-dynamical fundamentals of spatial searching patterns observed in animals (including humans) and illustrates a previously unknown way to encode information in neuronal time series.
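Superdiffusive Lévy movement of the kind discussed above can be illustrated with a short sketch: step lengths are drawn from a power-law (Pareto) distribution and directions uniformly. This is only a generic illustration of a 2D Lévy trajectory, not the paper's WLC-based encoding mechanism; the exponent `mu` and minimum step `l_min` are illustrative assumptions.

```python
import math
import random

def levy_flight(n_steps, mu=2.0, l_min=1.0, seed=0):
    """Generate a 2D trajectory whose step lengths follow a power-law
    (Pareto) tail p(l) ~ l**(-mu), the statistical signature of Levy
    movement. Directions are drawn uniformly; positions are (x, y) tuples."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        u = 1.0 - rng.random()                   # u in (0, 1], avoids division by zero
        step = l_min * u ** (-1.0 / (mu - 1.0))  # inverse-transform Pareto sample
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += step * math.cos(theta)
        y += step * math.sin(theta)
        path.append((x, y))
    return path
```

Because the step-length distribution is heavy-tailed, a few very long jumps dominate the trajectory's spread, which is what distinguishes Lévy from Brownian foraging paths.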

  4. Improve load balancing and coding efficiency of tiles in high efficiency video coding by adaptive tile boundary

    NASA Astrophysics Data System (ADS)

    Chan, Chia-Hsin; Tu, Chun-Chuan; Tsai, Wen-Jiin

    2017-01-01

    High efficiency video coding (HEVC) not only improves the coding efficiency drastically compared to the well-known H.264/AVC but also introduces coding tools for parallel processing, one of which is tiles. Tile partitioning is allowed to be arbitrary in HEVC, but how to decide tile boundaries remains an open issue. An adaptive tile boundary (ATB) method is proposed to select a better tile partitioning to improve load balancing (ATB-LoadB) and coding efficiency (ATB-Gain) with a unified scheme. Experimental results show that, compared to ordinary uniform-space partitioning, the proposed ATB can save up to 17.65% of encoding times in parallel encoding scenarios and can reduce up to 0.8% of total bit rates for coding efficiency.

  5. High-Fidelity Preservation of Quantum Information During Trapped-Ion Transport

    NASA Astrophysics Data System (ADS)

    Kaufmann, Peter; Gloger, Timm F.; Kaufmann, Delia; Johanning, Michael; Wunderlich, Christof

    2018-01-01

A promising scheme for building scalable quantum simulators and computers is the synthesis of a scalable system using interconnected subsystems. A prerequisite for this approach is the ability to faithfully transfer quantum information between subsystems. With trapped atomic ions, this can be realized by transporting ions with quantum information encoded into their internal states. Here, we measure with high precision the fidelity of quantum information encoded into hyperfine states of a 171Yb+ ion during ion transport in a microstructured Paul trap. Ramsey spectroscopy of the ion's internal state is interleaved with up to 4000 transport operations over a distance of 280 μm, each taking 12.8 μs. We obtain a state fidelity of 99.9994(+6/-7)% per ion transport.
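A quick sanity check on the reported numbers: if transport errors compound independently and multiplicatively (an illustrative assumption, not the paper's analysis), the per-transport fidelity implies the aggregate fidelity after 4000 transports.

```python
# Back-of-the-envelope check, assuming independent, multiplicatively
# compounding transport errors (an illustrative assumption).
f_per_transport = 0.999994   # central value of the reported per-transport fidelity
n_transports = 4000          # transports interleaved in one Ramsey sequence

f_total = f_per_transport ** n_transports
print(f"state fidelity after {n_transports} transports: {f_total:.4f}")
```

Under this assumption the aggregate state fidelity is about 0.976, i.e. the interleaved-Ramsey signal still retains most of its contrast after thousands of transports.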

  6. One-shot and aberration-tolerable homodyne detection for holographic storage readout through double-frequency grating-based lateral shearing interferometry.

    PubMed

    Yu, Yeh-Wei; Xiao, Shuai; Cheng, Chih-Yuan; Sun, Ching-Cherng

    2016-05-16

    A simple method to decode the stored phase signal of volume holographic data storage with adequate wave aberration tolerance is highly demanded. We proposed and demonstrated a one-shot scheme to decode a binary-phase encoding signal through double-frequency-grating based shearing interferometry (DFGSI). The lateral shearing amount is dependent on the focal length of the collimated lens and the frequency difference between the gratings. Diffracted waves with phase encoding were successfully decoded through experimentation. An optical model for the DFGSI was built to analyze phase-error induction and phase-difference control by shifting the double-frequency grating longitudinally and laterally, respectively. The optical model was demonstrated experimentally. Finally, a high aberration tolerance of the DFGSI was demonstrated using the optical model.

  7. Video compression of coronary angiograms based on discrete wavelet transform with block classification.

    PubMed

    Ho, B T; Tsai, M J; Wei, J; Ma, M; Saipetch, P

    1996-01-01

A new method of video compression for angiographic images has been developed to achieve a high compression ratio (~20:1) while eliminating the block artifacts that lead to loss of diagnostic accuracy. This method adopts the Motion Picture Experts Group's (MPEG's) motion-compensated prediction to take advantage of frame-to-frame correlation. However, in contrast to MPEG, the error images arising from mismatches in the motion estimation are encoded by the discrete wavelet transform (DWT) rather than the block discrete cosine transform (DCT). Furthermore, the authors developed a classification scheme which labels each block in an image as intra, error, or background type and encodes it accordingly. This hybrid coding can significantly improve the compression efficiency in certain cases. This method can be generalized for any dynamic image sequence application sensitive to block artifacts.
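The per-block labeling step can be sketched as an energy test on each block of the motion-compensated error image. The thresholds and the mean-squared-value criterion below are illustrative assumptions, not the paper's actual classifier.

```python
def classify_block(block, bg_thresh=4.0, err_thresh=64.0):
    """Toy version of intra/error/background block labeling: classify a 2D
    block of residual samples by its mean squared value. Thresholds are
    illustrative assumptions, not values from the paper."""
    energy = sum(v * v for row in block for v in row) / (len(block) * len(block[0]))
    if energy < bg_thresh:
        return "background"   # near-zero residual: cheapest coding path
    elif energy < err_thresh:
        return "error"        # moderate residual: DWT-code the error block
    else:
        return "intra"        # motion mismatch too large: code the block directly
```

Routing each block to the cheapest adequate coder is what lets the hybrid scheme improve compression efficiency without reintroducing block artifacts.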

  8. Twisted Acoustics: Metasurface-Enabled Multiplexing and Demultiplexing.

    PubMed

    Jiang, Xue; Liang, Bin; Cheng, Jian-Chun; Qiu, Cheng-Wei

    2018-05-01

Metasurfaces are used to enable acoustic orbital angular momentum (a-OAM)-based multiplexing in real-time, postprocess-free, and sensor-scanning-free fashions to improve the bandwidth of acoustic communication, with intrinsic compatibility and expandability to cooperate with other multiplexing schemes. The metasurface-based communication relying on encoding information onto twisted beams is numerically and experimentally demonstrated by realizing real-time picture transfer, which differs from existing static data transfer by encoding data onto OAM states. With the advantages of real-time transmission, passive and instantaneous data decoding, vanishingly low loss, compact size, and high transmitting accuracy, the study of a-OAM-based information transfer with metasurfaces offers a new route to boost the capacity of acoustic communication and great potential to profoundly advance relevant fields. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Determinants of enrollment of informal sector workers in cooperative based health scheme in Bangladesh

    PubMed Central

    Sarker, Abdur Razzaque; Sultana, Marufa; Mahumud, Rashidul Alam; Ahmed, Sayem; Islam, Ziaul; Morton, Alec; Khan, Jahangir A. M.

    2017-01-01

Background Providing access to affordable health care for the informal sector remains a considerable challenge for low-income countries striving to make progress towards universal health coverage. The objective of the study is to identify the factors shaping the decision to enroll in a cooperative-based health scheme for informal workers in Bangladesh, and also to identify the features of informal workers without health schemes and their likelihood of being insured. Methods Data were derived from a cross-sectional in-house survey within the catchment area of a cooperative-based health scheme in Bangladesh during April–June 2014, covering a total of 784 households (458 members and 326 non-members). A multivariate logistic regression model was used to identify associations between enrollment in the cooperative-based health scheme and explanatory variables. Findings This study found that a number of factors were significant determinants of health scheme participation, including sex of household head, household composition, occupational category, as well as involvement in social financial safety net programs. Conclusion Findings from this study can inform policy-makers interested in scaling up health insurance for informal workers in Bangladesh. Shared funding from this large informal sector can generate new resources for healthcare, which is in line with the healthcare financing strategy of Bangladesh as well as the recommendation of the World Health Organization for developing social health insurance as part of the path to Universal Health Coverage. PMID:28750052

  10. From the Campus to the Cloud: The Online Peer Assisted Learning Scheme

    ERIC Educational Resources Information Center

    Beaumont, Tim J.; Mannion, Aaron P.; Shen, Brice O.

    2012-01-01

    This paper reports on an online version of Peer Assisted Study Sessions (PASS), also known as Supplemental Instruction (SI), which was trialled in two subjects in the University of Melbourne in 2011. The program, named the Online Peer Assisted Learning (OPAL) scheme, was implemented with the aims of extending the benefits of a successful peer…

  11. Developing and Integrating a Web-Based Quiz into the Curriculum.

    ERIC Educational Resources Information Center

    Carbone, Angela; Schendzielorz, Peter

    In 1996, the Department of Computer Science at Monash University (Australia) implemented a First Year Advanced Students' Project Scheme aimed at extending and stimulating its best first year students. The goal of the scheme was to give students the opportunity to work on a project that best suited their needs and captured their interests. One of…

  12. Web-Based Course Delivery and Administration Using Scheme.

    ERIC Educational Resources Information Center

    Salustri, Filippo A.

    This paper discusses the use at the University of Windsor (Ontario) of a small World Wide Web-based tool for course delivery and administration called HAL (HTML-based Administrative Lackey), written in the Scheme programming language. This tool was developed by the author to provide Web-based services for a large first-year undergraduate course in…

  13. Success without "A" Pass: An Educational Experiment Revisited

    ERIC Educational Resources Information Center

    Nash, Mary

    2011-01-01

    In 1963, the University of Sussex inaugurated an innovative Early Leavers Scheme in response to two government reports which confirmed that it was still the norm for talented working-class children to leave school aged 15 or 16 and indicated that the hopes of the 1944 Education Act were as yet unfulfilled. This article explores what the scheme has…

  14. An Analysis of the Factors That Affect Engagement of Higher Education Teachers with an Institutional Professional Development Scheme

    ERIC Educational Resources Information Center

    Botham, Kathryn Ann

    2018-01-01

    An evaluation project was carried out to consider the factors that influence university teachers engagement with an institutional professional development scheme. Data was collected via an online questionnaire followed up by semi-structured interviews. This paper will consider those factors that encourage and act as barriers to engagement. The…

  15. Mobility as a Continuum: European Commission Mobility Policies for Schools and Higher Education

    ERIC Educational Resources Information Center

    Dvir, Yuval; Yemini, Miri

    2017-01-01

    This study explores the rationale and aims of European Commission (EC) mobility programmes for schools and higher education systems, namely the Comenius and the European Action Scheme for the Mobility of University Students (ERASMUS) funding schemes. Our findings indicate that the aims, rationales and means of mobility programmes for the school…

  16. Cross-Layer Design for Video Transmission over Wireless Rician Slow-Fading Channels Using an Adaptive Multiresolution Modulation and Coding Scheme

    NASA Astrophysics Data System (ADS)

    Pei, Yong; Modestino, James W.

    2007-12-01

We describe a multilayered video transport scheme for wireless channels capable of adapting to channel conditions in order to maximize end-to-end quality of service (QoS). This scheme combines a scalable H.263+ video source coder with unequal error protection (UEP) across layers. The UEP is achieved by employing different channel codes together with a multiresolution modulation approach to transport the different priority layers. Adaptivity to channel conditions is provided through a joint source-channel coding (JSCC) approach which attempts to jointly optimize the source and channel coding rates together with the modulation parameters to obtain the maximum achievable end-to-end QoS for the prevailing channel conditions. In this work, we model the wireless links as slow-fading Rician channels where the channel conditions can be described in terms of the channel signal-to-noise ratio (SNR) and the ratio of specular-to-diffuse energy. The multiresolution modulation/coding scheme consists of binary rate-compatible punctured convolutional (RCPC) codes used together with nonuniform phase-shift keyed (PSK) signaling constellations. Results indicate that this adaptive JSCC scheme employing scalable video encoding together with a multiresolution modulation/coding approach leads to significant improvements in delivered video quality for specified channel conditions. In particular, the approach results in considerably improved graceful degradation properties for decreasing channel SNR.

  17. A hybrid quantum eraser scheme for characterization of free-space and fiber communication channels

    NASA Astrophysics Data System (ADS)

    Nape, Isaac; Kyeremah, Charlotte; Vallés, Adam; Rosales-Guzmán, Carmelo; Buah-Bassuah, Paul K.; Forbes, Andrew

    2018-02-01

    We demonstrate a simple projective measurement based on the quantum eraser concept that can be used to characterize the disturbances of any communication channel. Quantum erasers are commonly implemented as spatially separated path interferometric schemes. Here we exploit the advantages of redefining the which-path information in terms of spatial modes, replacing physical paths with abstract paths of orbital angular momentum (OAM). Remarkably, vector modes (natural modes of free-space and fiber) have a non-separable feature of spin-orbit coupled states, equivalent to the description of two independently marked paths. We explore the effects of fiber perturbations by probing a step-index optical fiber channel with a vector mode, relevant to high-order spatial mode encoding of information for ultra-fast fiber communications.

  18. An additional study and implementation of tone calibrated technique of modulation

    NASA Technical Reports Server (NTRS)

    Rafferty, W.; Bechtel, L. K.; Lay, N. E.

    1985-01-01

The Tone Calibrated Technique (TCT) was shown to be theoretically free from an error floor, and is only limited, in practice, by implementation constraints. The concept of the TCT transmission scheme along with a baseband implementation of a suitable demodulator is introduced. Two techniques for the generation of the TCT signal are considered: a Manchester source encoding scheme (MTCT) and a subcarrier-based technique (STCT). The results are summarized for the TCT link computer simulation. The hardware implementation of the MTCT system is addressed and the digital signal processing design considerations involved in satisfying the modulator/demodulator requirements are outlined. The program findings are discussed and future directions are suggested based on conclusions made regarding the suitability of the TCT system for the transmission channel presently under consideration.

  19. Is QR code an optimal data container in optical encryption systems from an error-correction coding perspective?

    PubMed

    Jiao, Shuming; Jin, Zhi; Zhou, Changyuan; Zou, Wenbin; Li, Xia

    2018-01-01

    Quick response (QR) code has been employed as a data carrier for optical cryptosystems in many recent research works, and the error-correction coding mechanism allows the decrypted result to be noise free. However, in this paper, we point out for the first time that the Reed-Solomon coding algorithm in QR code is not a very suitable option for the nonlocally distributed speckle noise in optical cryptosystems from an information coding perspective. The average channel capacity is proposed to measure the data storage capacity and noise-resistant capability of different encoding schemes. We design an alternative 2D barcode scheme based on Bose-Chaudhuri-Hocquenghem (BCH) coding, which demonstrates substantially better average channel capacity than QR code in numerical simulated optical cryptosystems.
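The paper's figure of merit is an average channel capacity. As a generic point of reference (not the paper's own definition, which accounts for the nonlocal speckle-noise distribution), the classical capacity of a binary symmetric channel with crossover probability p is C = 1 - H2(p):

```python
import math

def bsc_capacity(p):
    """Capacity in bits per channel use of a binary symmetric channel with
    crossover probability p: C = 1 - H2(p). A textbook stand-in for the
    'average channel capacity' figure of merit discussed above."""
    if p in (0.0, 1.0):
        return 1.0            # deterministic channel: one full bit per use
    h2 = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return 1.0 - h2
```

The capacity falls to zero at p = 0.5, where the channel output is independent of the input; a code family (Reed-Solomon, BCH, ...) is better matched to a channel the closer its usable rate gets to this bound.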

  20. A Novel Fast and Secure Approach for Voice Encryption Based on DNA Computing

    NASA Astrophysics Data System (ADS)

    Kakaei Kate, Hamidreza; Razmara, Jafar; Isazadeh, Ayaz

    2018-06-01

Today, in the world of information communication, voice information has a particular importance. One way to preserve voice data from attacks is voice encryption. Encryption algorithms use various techniques such as hashing, chaotic maps, mixing, and many others. In this paper, an algorithm is proposed for voice encryption based on three different schemes to increase the flexibility and strength of the algorithm. The proposed algorithm uses an innovative encoding scheme, the DNA encryption technique and a permutation function to provide a secure and fast solution for voice encryption. The algorithm is evaluated based on various measures including signal-to-noise ratio, peak signal-to-noise ratio, correlation coefficient, signal similarity and signal frequency content. The results demonstrate the applicability of the proposed method in secure and fast encryption of voice files.
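DNA-computing approaches typically begin by mapping binary data onto the four nucleotides. A common convention assigns one base per 2-bit pair (A=00, C=01, G=10, T=11); the paper's own encoding scheme is its stated innovation, so the mapping below is only a sketch of the general byte-to-DNA idea.

```python
# Illustrative 2-bit-per-base DNA encoding (A=00, C=01, G=10, T=11).
# This is a generic convention, not the paper's proposed scheme.
BASES = "ACGT"

def bytes_to_dna(data: bytes) -> str:
    out = []
    for b in data:
        for shift in (6, 4, 2, 0):          # four 2-bit pairs per byte, MSB first
            out.append(BASES[(b >> shift) & 0b11])
    return "".join(out)

def dna_to_bytes(seq: str) -> bytes:
    out = bytearray()
    for i in range(0, len(seq), 4):         # four bases reassemble one byte
        b = 0
        for ch in seq[i:i + 4]:
            b = (b << 2) | BASES.index(ch)
        out.append(b)
    return bytes(out)
```

On top of such a reversible mapping, a DNA-based cipher then applies biologically inspired operations (complementation, permutation of bases) keyed by secret material.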

  1. A Spatiotemporal-Chaos-Based Cryptosystem Taking Advantage of Both Synchronous and Self-Synchronizing Schemes

    NASA Astrophysics Data System (ADS)

    Lü, Hua-Ping; Wang, Shi-Hong; Li, Xiao-Wen; Tang, Guo-Ning; Kuang, Jin-Yu; Ye, Wei-Ping; Hu, Gang

    2004-06-01

    Two-dimensional one-way coupled map lattices are used for cryptography where multiple space units produce chaotic outputs in parallel. One of the outputs plays the role of driving for synchronization of the decryption system while the others perform the function of information encoding. With this separation of functions the receiver can establish a self-checking and self-correction mechanism, and enjoys the advantages of both synchronous and self-synchronizing schemes. A comparison between the present system with the system of advanced encryption standard (AES) is presented in the aspect of channel noise influence. Numerical investigations show that our system is much stronger than AES against channel noise perturbations, and thus can be better used for secure communications with large channel noise.
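A one-dimensional analogue of the lattice dynamics above can be sketched with one-way coupled logistic maps: each site is driven by its own chaotic map plus its upstream neighbor. The 2D lattice, key scheme, and parameter values of the actual cryptosystem are not reproduced here; coupling strength and map parameter below are illustrative assumptions.

```python
def ocml_step(state, eps=0.9, a=3.99):
    """One iteration of a one-way coupled logistic map lattice on a ring:
    x_i(t+1) = (1 - eps) * f(x_i) + eps * f(x_{i-1}), with f the logistic
    map f(x) = a * x * (1 - x). Illustrative sketch only."""
    f = lambda x: a * x * (1.0 - x)
    # Python's state[i - 1] wraps at i = 0, giving ring (periodic) coupling.
    return [(1.0 - eps) * f(state[i]) + eps * f(state[i - 1])
            for i in range(len(state))]
```

In the cryptographic setting, one lattice output drives synchronization of the receiver's copy while the remaining chaotic outputs mask the plaintext in parallel.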

  2. Measurement-based quantum computation on two-body interacting qubits with adiabatic evolution.

    PubMed

    Kyaw, Thi Ha; Li, Ying; Kwek, Leong-Chuan

    2014-10-31

    A cluster state cannot be a unique ground state of a two-body interacting Hamiltonian. Here, we propose the creation of a cluster state of logical qubits encoded in spin-1/2 particles by adiabatically weakening two-body interactions. The proposal is valid for any spatial dimensional cluster states. Errors induced by thermal fluctuations and adiabatic evolution within finite time can be eliminated ensuring fault-tolerant quantum computing schemes.

  3. Design and implementation of the one-step MSD adder of optical computer.

    PubMed

    Song, Kai; Yan, Liping

    2012-03-01

On the basis of the symmetric encoding algorithm for the modified signed-digit (MSD) representation, a 7×7 truth table that can be realized with optical methods was developed. Based on this truth table, the optical path structures and circuit implementations of the one-step MSD adder of the ternary optical computer (TOC) were designed. Experiments show that the scheme is correct, feasible, and efficient. © 2012 Optical Society of America
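The MSD representation uses digits {-1, 0, 1} with radix 2, and its redundancy (multiple digit strings for one value) is what makes carry-free, one-step addition possible. A minimal sketch of evaluating an MSD digit string, with the paper's optical adder logic not reproduced here:

```python
def msd_value(digits):
    """Value of a modified signed-digit (MSD) number: digits drawn from
    {-1, 0, 1}, radix 2, most significant digit first. The redundancy of
    this representation underlies carry-free one-step addition."""
    value = 0
    for d in digits:
        assert d in (-1, 0, 1), "MSD digits must be -1, 0, or 1"
        value = 2 * value + d   # Horner evaluation in base 2
    return value
```

For example, [1, 0, -1] and [1, 1] both denote 3, illustrating the redundancy that lets an adder absorb carries locally instead of propagating them.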

  4. Advanced Military Pay System Concepts. Evaluation of Opportunities through Information Technology.

    DTIC Science & Technology

    1980-07-01

transmitter (UART) to interface with a modem. The main processor was then responsible for input and output between main memory and the UART... digital, "run-length" encoding scheme which is very effective in reducing the amount of data to be transmitted. Machines of this type include a modem... Output control as well as data compression will be combined with appropriate modems or interfaces to digital transmission channels and microprocessor
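The run-length encoding mentioned in the excerpt replaces each run of identical bytes with a (count, value) pair. A minimal byte-oriented sketch (real facsimile and terminal schemes are more elaborate):

```python
def rle_encode(data: bytes):
    """Minimal run-length encoding: each run of identical bytes becomes a
    (count, value) pair, with runs capped at 255 so counts fit in a byte."""
    runs = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i] and j - i < 255:
            j += 1
        runs.append((j - i, data[i]))
        i = j
    return runs

def rle_decode(runs) -> bytes:
    """Expand (count, value) pairs back into the original byte string."""
    return b"".join(bytes([value]) * count for count, value in runs)
```

RLE pays off exactly on the kind of data described: long repeated stretches (blank fields, padding) shrink to two bytes per run, which is why it was attractive for low-rate modem links.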

  5. POPE: Partial Order Preserving Encoding

    DTIC Science & Technology

    2016-09-09

Alex X. Liu, Ann L. Wang, and Bezawada Bruhadeshwar. Fast range query processing with strong privacy protection for cloud computing. Proc. VLDB... States government. As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so... of these schemes the directory in the persistent client storage depends on the full dataset. Thus... We abuse notation and use OPE to refer to both

  6. Could the employment-based targeting approach serve Egypt in moving towards a social health insurance model?

    PubMed

    Shawky, S

    2010-06-01

    The current health insurance system in Egypt targets the productive population through an employment-based scheme bounded by a cost ceiling and focusing on curative care. Egypt Social Contract Survey data from 2005 were used to evaluate the impact of the employment-based scheme on health system accessibility and financing. Only 22.8% of the population in the productive age range (19-59 years) benefited from any health insurance scheme. The employment-based scheme covered 39.3% of the working population and was skewed towards urban areas, older people, females and the wealthier. It did not increase service utilization, but reduced out-of-pocket expenditure. Egypt should blend all health insurance schemes and adopt an innovative approach to reach universal coverage.

  7. Evaluation of Microphysics and Cumulus Schemes of WRF for Forecasting of Heavy Monsoon Rainfall over the Southeastern Hilly Region of Bangladesh

    NASA Astrophysics Data System (ADS)

    Hasan, Md Alfi; Islam, A. K. M. Saiful

    2018-05-01

    Accurate forecasting of heavy rainfall is crucial for the improvement of flood warning to prevent loss of life and property damage due to flash-flood-related landslides in the hilly region of Bangladesh. Forecasting heavy rainfall events is challenging where microphysics and cumulus parameterization schemes of Weather Research and Forecast (WRF) model play an important role. In this study, a comparison was made between observed and simulated rainfall using 19 different combinations of microphysics and cumulus schemes available in WRF over Bangladesh. Two severe rainfall events during 11th June 2007 and 24-27th June 2012, over the eastern hilly region of Bangladesh, were selected for performance evaluation using a number of indicators. A combination of the Stony Brook University microphysics scheme with Tiedtke cumulus scheme is found as the most suitable scheme for reproducing those events. Another combination of the single-moment 6-class microphysics scheme with New Grell 3D cumulus schemes also showed reasonable performance in forecasting heavy rainfall over this region. The sensitivity analysis confirms that cumulus schemes play a greater role than microphysics schemes for reproducing the heavy rainfall events using WRF.

  8. Resource Allocation and Pricing Principles for a University Computer Centre. Working Paper Series Number 6819.

    ERIC Educational Resources Information Center

    Possen, Uri M.; And Others

    As an introduction, this paper presents a statement of the objectives of the university computing center (UCC) from the viewpoint of the university, the government, the typical user, and the UCC itself. The operating and financial structure of a UCC are described. Three main types of budgeting schemes are discussed: time allocation, pseudo-dollar,…

  9. Women Managers in Higher Education: Summary Report of the ACU-CHESS Steering Committee Meeting (London, England, United Kingdom, May 25-27, 1993).

    ERIC Educational Resources Information Center

    Commonwealth Secretariat, London (England).

    This publication describes a meeting of the Association of Commonwealth Universities (ACU) and the Commonwealth Higher Education Support Scheme (CHESS) to design an agenda to facilitate the advancement of women administrators in Commonwealth universities, to further use of their skills in contributing to university development, and to increasing…

  10. REVIEWS OF TOPICAL PROBLEMS: Elementary particles and cosmology (Metagalaxy and Universe)

    NASA Astrophysics Data System (ADS)

    Rozental', I. L.

    1997-08-01

    The close relation between cosmology and the theory of elementary particles is analyzed in the light of prospects of a unified field theory. The unity of their respective problems and solution methodologies is indicated. The difference between the concepts of 'Metagalaxy' and 'Universe' is emphasized and some possible schemes for estimating the size of the Universe are pointed out.

  11. [Communication subsystem design of tele-screening system for diabetic retinopathy].

    PubMed

    Chen, Jian; Pan, Lin; Zheng, Shaohua; Yu, Lun

    2013-12-01

A design scheme for a tele-screening system for diabetic retinopathy (DR) is proposed, with emphasis on the communication subsystem. The scheme uses a serial communication module, consisting of an ARM 7 microcontroller and relays, to connect the remote computer and the fundus camera, and uses the C++ programming language with MFC to implement the communication software, which comprises a therapy and diagnostic information module, a video/audio surveillance module and a fundus camera control module. The scheme can be applied generally to similar remote medical treatment systems.

  12. Finite-dimensional linear approximations of solutions to general irregular nonlinear operator equations and equations with quadratic operators

    NASA Astrophysics Data System (ADS)

    Kokurin, M. Yu.

    2010-11-01

    A general scheme for improving approximate solutions to irregular nonlinear operator equations in Hilbert spaces is proposed and analyzed in the presence of errors. A modification of this scheme designed for equations with quadratic operators is also examined. The technique of universal linear approximations of irregular equations is combined with the projection onto finite-dimensional subspaces of a special form. It is shown that, for finite-dimensional quadratic problems, the proposed scheme provides information about the global geometric properties of the intersections of quadrics.

  13. Complexity control algorithm based on adaptive mode selection for interframe coding in high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong

    2017-07-01

The latest high efficiency video coding (HEVC) standard significantly increases the encoding complexity for improving its coding efficiency. Due to the limited computational capability of handheld devices, complexity-constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Considering the direct proportionality between encoding time and computational complexity, the computational complexity is measured in terms of encoding time. First, complexity is mapped to a target in terms of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, the optimal mode combination scheme, chosen through offline statistics, is applied at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (as low as 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, an average gain of 0.63 and 0.17 dB in BD-PSNR is observed for 18 sequences when the target complexity is around 40%.

  14. A Universal Ts-VI Triangle Method for the Continuous Retrieval of Evaporative Fraction From MODIS Products

    NASA Astrophysics Data System (ADS)

    Zhu, Wenbin; Jia, Shaofeng; Lv, Aifeng

    2017-10-01

    The triangle method, based on the spatial relationship between remotely sensed land surface temperature (Ts) and vegetation index (VI), has been widely used to estimate evaporative fraction (EF). In the present study, a universal triangle method is proposed by transforming the Ts-VI feature space from the regional scale to the pixel scale. The retrieval of EF depends only on the boundary conditions at the pixel scale, regardless of the Ts-VI configuration over the spatial domain. The boundary conditions of each pixel are composed of the theoretical dry edge, determined by the surface energy balance principle, and the wet edge, determined by the average air temperature of open water. The universal triangle method was validated using EF observations collected by the Energy Balance Bowen Ratio systems in the Southern Great Plains of the United States of America (USA). Two parameterization schemes of EF were used to demonstrate their applicability with Terra Moderate Resolution Imaging Spectroradiometer (MODIS) products over the whole of 2004. The results show that the accuracy produced by both parameterization schemes is comparable to that produced by the traditional triangle method, although the universal triangle method seems specifically suited to the parameterization scheme proposed in our previous research. The independence of the universal triangle method from the regional Ts-VI feature space makes it possible to conduct continuous monitoring of evapotranspiration and soil moisture, an ability the traditional triangle method does not possess.
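
    The pixel-scale boundary-condition idea can be sketched with a simple linear interpolation: EF is taken to scale with where the observed Ts falls between the dry-edge temperature (EF = 0) and the wet-edge temperature (maximum EF). The function below is an illustration of that geometry under a linear parameterization assumption, not the paper's actual parameterization schemes.

```python
# Illustrative pixel-scale triangle interpolation. ts_dry is the theoretical
# dry-edge temperature (EF = 0), ts_wet the wet-edge temperature (EF = ef_max);
# the linear form and ef_max default are assumptions for illustration.

def evaporative_fraction(ts, ts_dry, ts_wet, ef_max=1.0):
    """Linearly interpolate EF from where Ts sits between the two edges."""
    if ts_dry <= ts_wet:
        raise ValueError("dry-edge temperature must exceed wet-edge temperature")
    ts = min(max(ts, ts_wet), ts_dry)  # clamp Ts to the physical bounds
    return ef_max * (ts_dry - ts) / (ts_dry - ts_wet)
```

    For example, a pixel at 310 K between a 320 K dry edge and a 290 K wet edge sits one third of the way from the dry edge, giving EF = 1/3 under this linear assumption.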

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang Baolong; Department of Mathematics and Physics, Hefei University, Hefei 230022; Yang Zhen

    We propose a scheme for implementing a partial general quantum cloning machine with superconducting quantum-interference devices coupled to a nonresonant cavity. By regulating the time parameters, our system can perform optimal symmetric (asymmetric) universal quantum cloning, optimal symmetric (asymmetric) phase-covariant cloning, and optimal symmetric economical phase-covariant cloning. In the scheme, the cavity is only virtually excited; thus, cavity decay is suppressed during the cloning operations.

  16. A Simulation of an Income Contingent Tuition Scheme in a Transition Economy

    ERIC Educational Resources Information Center

    Vodopivec, Milan

    2009-01-01

    The paper takes advantage of exceptionally rich longitudinal data on the universe of labor force participants in Slovenia and simulates the working of an income contingent loan scheme that seeks to recover part of schooling costs. The simulations show that under the base variant (where the target cost recovery rate is 20% and the contribution rate…
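
    The mechanics of an income-contingent loan can be sketched as follows: a fixed fraction of each year's income is collected until a target share of schooling costs is recovered. The abstract's own contribution-rate figure is cut off, so the default rate below is purely an assumed placeholder; the 20% cost-recovery target is from the text, and the whole function is a stylized illustration, not the paper's simulation.

```python
# Stylized income-contingent loan repayment. contribution_rate here is an
# assumed placeholder (the abstract's figure is truncated); target_rate=0.20
# mirrors the stated base-variant cost recovery target.

def simulate_icl(incomes, cost, target_rate=0.20, contribution_rate=0.03):
    """Collect contribution_rate of income each period until the target
    share of schooling cost is recovered; return (recovered, payments)."""
    target = target_rate * cost
    recovered, payments = 0.0, []
    for income in incomes:
        pay = min(contribution_rate * max(income, 0.0), target - recovered)
        recovered += pay
        payments.append(pay)
        if recovered >= target:
            break
    return recovered, payments
```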

  17. On the possible roles of microsaccades and drifts in visual perception.

    PubMed

    Ahissar, Ehud; Arieli, Amos; Fried, Moshe; Bonneh, Yoram

    2016-01-01

    During natural viewing, large saccades shift the visual gaze from one target to another every few hundred milliseconds. The role of microsaccades (MSs), small saccades that show up during long fixations, is still debated. A major debate is whether MSs are used to redirect the visual gaze to a new location or to encode visual information through their movement. We argue that these two functions cannot be optimized simultaneously and present several pieces of evidence suggesting that MSs redirect the visual gaze and that visual details are sampled and encoded by ocular drifts. We show that drift movements are indeed suitable for visual encoding. Yet, it is not clear to what extent drift movements are controlled by the visual system, and to what extent they interact with saccadic movements. We analyze several possible control schemes for saccadic and drift movements and propose experiments that can discriminate between them. We present the results of preliminary analyses of existing data as a sanity check of the testability of our predictions.

  18. Study of the OCDMA Transmission Characteristics in FSO-FTTH at Various Distances, Outdoor

    NASA Astrophysics Data System (ADS)

    Aldouri, Muthana Y.; Aljunid, S. A.; Fadhil, Hilal A.

    2013-06-01

    Field-Programmable Gate Array (FPGA) and optical-switch technology are applied as the encoder and decoder in a Spectral Amplitude Coding Optical Code Division Multiple Access (SAC-OCDMA) Free Space Optics Fiber-to-the-Home (FSO-FTTH) transmitter and receiver design. The FPGA serves as the code generator, while the optical switch encodes and decodes the optical source. The module was tested with the Modified Double Weight (MDW) code, selected as an excellent candidate because it has shown superior performance in reducing total noise. It is also easy to construct and, combined with a newly proposed detection scheme known as the AND subtraction technique, reduces the number of filters required at the receiver. The MDW code is presented here to support Fiber-to-the-Home (FTTH) access networks in point-to-multipoint (P2MP) applications. The conversion uses a Mach-Zehnder interferometer (MZI) wavelength converter. Performance is characterized through BER and bit rate (BR), as well as the received power at a variety of bit rates.
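
    The AND subtraction detection mentioned above can be illustrated with a toy correlator: the receiver correlates the received chip vector with the desired user's code and subtracts the correlation with the AND (overlap) of the desired and interfering codes, which cancels multiple-access interference. The code words below are invented for the example and are not actual MDW codes.

```python
# Toy AND-subtraction detector for spectral-amplitude codes. The two code
# vectors are illustrative 0/1 chip patterns, not real MDW code words.

def correlate(received, code):
    """Inner product of the received chip powers with a 0/1 code vector."""
    return sum(r * c for r, c in zip(received, code))

def and_subtraction(received, desired, interferer):
    """Correlation with the desired code minus correlation with the
    chip-wise AND of desired and interfering codes (cancels MAI)."""
    overlap = [a & b for a, b in zip(desired, interferer)]
    return correlate(received, desired) - correlate(received, overlap)
```

    With these toy codes, the decision statistic is the same whether or not the interferer is active, and zero when only the interferer transmits — the interference term is subtracted out.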

  19. Development of schemas revealed by prior experience and NMDA receptor knock-out

    PubMed Central

    Dragoi, George; Tonegawa, Susumu

    2013-01-01

    Prior experience accelerates acquisition of novel, related information through processes like assimilation into mental schemas, but the underlying neuronal mechanisms are poorly understood. We investigated the roles that prior experience and hippocampal CA3 N-Methyl-D-aspartate receptor (NMDAR)-dependent synaptic plasticity play in CA1 place cell sequence encoding and learning during novel spatial experiences. We found that specific representations of de novo experiences on linear environments were formed on a framework of preconfigured network activity expressed in the preceding sleep and were rapidly, flexibly adjusted via NMDAR-dependent activity. This prior experience accelerated encoding of subsequent experiences on contiguous or isolated novel tracks, significantly decreasing their NMDAR dependence. Similarly, de novo learning of an alternation task was facilitated by CA3 NMDARs; this experience accelerated subsequent learning of related tasks, independent of CA3 NMDARs, consistent with schema-based learning. These results reveal the existence of distinct neuronal encoding schemes that could explain why hippocampal dysfunction results in anterograde amnesia while sparing recollection of old, schema-based memories. DOI: http://dx.doi.org/10.7554/eLife.01326.001 PMID:24327561

  20. High-speed time-reversed ultrasonically encoded (TRUE) optical focusing inside dynamic scattering media at 793 nm

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Lai, Puxiang; Ma, Cheng; Xu, Xiao; Suzuki, Yuta; Grabar, Alexander A.; Wang, Lihong V.

    2014-03-01

    Time-reversed ultrasonically encoded (TRUE) optical focusing is an emerging technique that focuses light deep into scattering media by phase-conjugating ultrasonically encoded diffuse light. In previous work, the speed of TRUE focusing was limited to no faster than 1 Hz by the response time of the photorefractive phase conjugate mirror, or by the data acquisition and streaming speed of the digital camera; photorefractive-crystal-based TRUE focusing was also limited to the visible spectral range. These time-consuming schemes prevent the technique from being applied in vivo, since living biological tissue has a speckle decorrelation time on the order of a millisecond. In this work, using a Te-doped Sn2P2S6 photorefractive crystal at a near-infrared wavelength of 793 nm, we achieved TRUE focusing inside dynamic scattering media having a speckle decorrelation time as short as 7.7 ms. As the achieved speed approaches the tissue decorrelation rate, this work is an important step toward in vivo applications of TRUE focusing in deep tissue imaging, photodynamic therapy, and optical manipulation.

  1. Evaluation of WRF-Predicted Near-Hub-Height Winds and Ramp Events over a Pacific Northwest Site with Complex Terrain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Qing; Berg, Larry K.; Pekour, Mikhail

    The WRF model version 3.3 is used to simulate near-hub-height winds and power ramps utilizing three commonly used planetary boundary-layer (PBL) schemes: Mellor-Yamada-Janjic (MYJ), University of Washington (UW), and Yonsei University (YSU). The predicted winds have small mean biases compared with observations. Power ramps and step changes (changes within an hour) consistently show that the UW scheme performed better in predicting up-ramps under stable conditions, with higher prediction accuracy and capture rates. Both the YSU and UW schemes show good performance predicting up- and down-ramps under unstable conditions, with YSU being slightly better for ramp durations longer than an hour. MYJ is the most successful at simulating down-ramps under stable conditions. The high wind speeds and large shear associated with low-level jets are frequently associated with power ramps, and the biases in the predicted low-level jets explain some of the differences in ramp predictions among the PBL schemes. Low-level jets were observed as low as ~200 m in altitude over the Columbia Basin Wind Energy Study (CBWES) site, located in an area of complex terrain. The shear, low-level peak wind speeds, and height of maximum wind speed are not well predicted. Model simulations with the three PBL schemes show the largest variability among them under stable conditions.
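
    A step change "within an hour" as evaluated above can be detected from a wind-power time series with a simple sliding window. The window length, the threshold as a fraction of plant capacity, and all names below are illustrative assumptions, not the study's ramp definition.

```python
# Minimal ramp/step-change detector: flag any one-hour window whose net
# power change exceeds a fraction of plant capacity. Parameters are
# illustrative, not the CBWES study's criteria.

def find_ramps(power, capacity, window=6, threshold=0.3):
    """power: samples at fixed intervals (e.g., 10-min data -> window=6
    spans one hour). Returns (start_index, 'up'|'down') for each window
    whose net change exceeds threshold * capacity."""
    events = []
    for i in range(len(power) - window):
        delta = power[i + window] - power[i]
        if abs(delta) >= threshold * capacity:
            events.append((i, "up" if delta > 0 else "down"))
    return events
```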

  2. Address Forms among University Students in Ghana: A Case of Gendered Identities?

    ERIC Educational Resources Information Center

    Afful, Joseph Benjamin Archibald

    2010-01-01

    In the last two decades, scholars in discourse studies and sociolinguistics have shown considerable interest in how identity is encoded in discourses across various facets of life such as academia, home, politics and workplace. By adopting an ethnographic-style approach, this study shows how students in a Ghanaian university construct their…

  3. An encoding readout method used for Multi-gap Resistive Plate Chambers (MRPCs) for muon tomography

    NASA Astrophysics Data System (ADS)

    Yue, X.; Zeng, M.; Wang, Y.; Wang, X.; Zeng, Z.; Zhao, Z.; Cheng, J.

    2014-09-01

    A muon tomography facility has been built at Tsinghua University. Because of the low flux of cosmic muons, an encoding readout method based on the fine-fine configuration was implemented for the 2880 channels of induced signals from the Multi-gap Resistive Plate Chamber (MRPC) detectors. With the encoding method, the number of readout electronics channels was dramatically reduced, and thus the complexity and cost of the facility were reduced as well. In this paper, the details of the encoding method and the overall readout system setup in the muon tomography facility are described. With the commissioning of the facility, the readout method works well. The spatial resolutions of all MRPC detectors were measured with cosmic muons, and preliminary imaging results are also given.
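
    The channel-reduction benefit of an encoding readout can be illustrated with a toy pair-encoding model: if each detector strip is wired to a unique (row, column) pair of readout lines, n_rows * n_cols strips need only n_rows + n_cols electronics channels. This is a generic illustration of the idea, not the actual Tsinghua fine-fine wiring.

```python
# Toy pair-encoding readout: 54 x 54 = 2916 strips (enough for 2880
# channels) read out with only 54 + 54 = 108 electronics channels.
# The mapping is illustrative, not the facility's actual wiring.
import itertools

def build_mapping(n_rows, n_cols):
    """Assign each strip a unique (row, col) pair of readout lines."""
    return {strip: pair for strip, pair in
            enumerate(itertools.product(range(n_rows), range(n_cols)))}

def decode(fired_row, fired_col, mapping):
    """Recover which strip fired from the pair of readout lines that fired."""
    for strip, (r, c) in mapping.items():
        if (r, c) == (fired_row, fired_col):
            return strip
    return None
```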

  4. An optical color image watermarking scheme by using compressive sensing with human visual characteristics in gyrator domain

    NASA Astrophysics Data System (ADS)

    Liansheng, Sui; Bei, Zhou; Zhanmin, Wang; Ailing, Tian

    2017-05-01

    A novel optical color image watermarking scheme considering human visual characteristics is presented in the gyrator transform domain. Initially, an appropriate reference image is constructed from significant blocks chosen from the grayscale host image by evaluating visual characteristics such as visual entropy and edge entropy. The three components of the color watermark image are compressed based on compressive sensing, and the corresponding results are combined to form the grayscale watermark. Then, the frequency coefficients of the watermark image are fused into the frequency data of the gyrator-transformed reference image. The fused result is inversely transformed and partitioned, and eventually the watermarked image is obtained by mapping the resultant blocks into their original positions. The scheme can reconstruct the watermark with high perceptual quality and has enhanced security due to the high sensitivity of the secret keys. Importantly, the scheme can be implemented easily under the framework of double random phase encoding with the 4f optical system. To the best of our knowledge, this is the first report on embedding a color watermark into a grayscale host image, which an attacker would not expect. Simulation results are given to verify the feasibility of the scheme and its superior performance in terms of noise and occlusion robustness.
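
    The block-selection step can be sketched as scoring each host-image block by entropy and keeping the highest-scoring blocks as the reference image. The single Shannon-entropy score below is a simplification of the paper's combined visual/edge-entropy measure, and the block representation is illustrative.

```python
# Sketch of entropy-based block selection: score each block of pixel values
# by Shannon entropy and keep the k most "visually busy" blocks. A plain
# entropy score stands in for the paper's visual/edge-entropy combination.
import math
from collections import Counter

def shannon_entropy(block):
    """Shannon entropy (bits) of the pixel-value histogram of one block."""
    counts = Counter(block)
    n = len(block)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def select_blocks(blocks, k):
    """Return the (sorted) indices of the k highest-entropy blocks."""
    ranked = sorted(range(len(blocks)),
                    key=lambda i: shannon_entropy(blocks[i]), reverse=True)
    return sorted(ranked[:k])
```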

  5. Empirical Analysis of Using Erasure Coding in Outsourcing Data Storage With Provable Security

    DTIC Science & Technology

    2016-06-01

    …the fastest encoding performance among the four tested schemes. We expected to observe that Cauchy Reed-Solomon would be faster than Reed-Solomon for all… providing recoverability for POR. We survey MDS codes and select Reed-Solomon and Cauchy Reed-Solomon MDS codes to be implemented into a prototype POR…
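
    The Reed-Solomon and Cauchy Reed-Solomon codes surveyed above are maximum distance separable (MDS): any k of the k + m encoded shares suffice to recover the data. Full GF(2^8) arithmetic is beyond a short sketch, so the toy below demonstrates the same erasure-recovery property with the simplest MDS member — a single XOR parity share (k data shares, m = 1). It is an illustration of the principle, not the report's implementation.

```python
# Single-parity XOR erasure code: k data shares plus one parity share;
# any one lost share is rebuilt by XOR-ing the survivors. A toy stand-in
# for Reed-Solomon-style MDS codes.
from functools import reduce

def _xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(shares):
    """Append one XOR parity share to equal-length data shares."""
    return shares + [reduce(_xor, shares)]

def recover(shares, lost_index):
    """Rebuild the share at lost_index from all the surviving shares."""
    survivors = [s for i, s in enumerate(shares) if i != lost_index]
    return reduce(_xor, survivors)
```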

  6. System Detects Vibrational Instabilities

    NASA Technical Reports Server (NTRS)

    Bozeman, Richard J., Jr.

    1990-01-01

    Sustained vibrations at two critical frequencies trigger diagnostic response or shutdown. Vibration-analyzing electronic system detects instabilities of combustion in rocket engine. Controls pulse-mode firing of engine and identifies vibrations above threshold amplitude at 5.9 and/or 12 kHz. Adapted to other detection and/or control schemes involving simultaneous real-time detection of signals above or below preset amplitudes at two or more specified frequencies. Potential applications include rotating machinery and encoders and decoders in security systems.
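
    The two-frequency monitoring idea can be sketched in software with the standard Goertzel recursion, which measures signal power at a single target frequency; an instability flag is raised when both 5.9 kHz and 12 kHz exceed a threshold. The sample rate, threshold, and trigger logic below are illustrative assumptions, not the NASA system's actual analog design.

```python
# Sketch of two-frequency instability detection via the Goertzel algorithm.
# Sample rate, threshold, and the AND trigger are illustrative assumptions.
import math

def goertzel_power(samples, freq, rate):
    """Squared magnitude of the DFT bin nearest `freq` (Goertzel recursion)."""
    k = round(freq * len(samples) / rate)
    w = 2.0 * math.cos(2.0 * math.pi * k / len(samples))
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + w * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - w * s_prev * s_prev2

def unstable(samples, rate, threshold):
    """Flag sustained vibration when both target tones exceed threshold."""
    return all(goertzel_power(samples, f, rate) > threshold
               for f in (5900.0, 12000.0))
```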

  7. Practical implementation of multilevel quantum cryptography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kulik, S. P.; Maslennikov, G. A.; Moreva, E. V.

    2006-05-15

    The physical principles of a quantum key distribution protocol using four-level optical systems are discussed. Quantum information is encoded into polarization states created by frequency-nondegenerate spontaneous parametric down-conversion in collinear geometry. In the scheme under analysis, the required nonorthogonal states are generated in a single nonlinear crystal. All states in the selected basis are measured deterministically. The results of initial experiments on transformation of the basis polarization states of a four-level optical system are discussed.

  8. A neural coding scheme reproducing foraging trajectories

    PubMed Central

    Gutiérrez, Esther D.; Cabrera, Juan Luis

    2015-01-01

    The movement of many animals may follow Lévy patterns. The underlying neuronal dynamics generating such behavior is unknown. In this paper we show that the discovery of multifractality in winnerless competition (WLC) systems reveals a potential encoding mechanism that is translatable into two-dimensional superdiffusive Lévy movements. The validity of our approach is tested on a conductance-based neuronal model showing WLC and through the extraction of Lévy-flight-inducing fractals from recordings of rat hippocampus during open-field foraging. Further insights are gained by analyzing mouse motor cortex neurons and non-motor cell signals. The proposed mechanism provides a plausible explanation for the neuro-dynamical fundamentals of spatial searching patterns observed in animals (including humans) and illustrates a previously unknown way to encode information in neuronal temporal series. PMID:26648311
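
    The kind of two-dimensional superdiffusive Lévy movement referred to above can be generated by drawing step lengths from a heavy-tailed power law, P(l) ∝ l^(-mu) with 1 < mu ≤ 3, and uniformly random headings. This sketch illustrates the target trajectory statistics only; it has no relation to the WLC neuronal dynamics themselves, and all parameters are illustrative.

```python
# Generate a Lévy-like 2-D foraging trajectory: Pareto-distributed step
# lengths (inverse-transform sampling) with uniform random headings.
import math
import random

def levy_steps(n, mu=2.0, l_min=1.0, rng=None):
    """Sample n step lengths from P(l) ∝ l^(-mu) for l >= l_min."""
    rng = rng or random.Random(0)
    return [l_min * (1.0 - rng.random()) ** (-1.0 / (mu - 1.0))
            for _ in range(n)]

def trajectory(n, mu=2.0, rng=None):
    """Chain n Lévy steps with uniform headings, starting at the origin."""
    rng = rng or random.Random(0)
    x = y = 0.0
    points = [(x, y)]
    for step in levy_steps(n, mu, rng=rng):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += step * math.cos(theta)
        y += step * math.sin(theta)
        points.append((x, y))
    return points
```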

  9. Scheme for Entering Binary Data Into a Quantum Computer

    NASA Technical Reports Server (NTRS)

    Williams, Colin

    2005-01-01

    A quantum algorithm provides for the encoding of an exponentially large number of classical data bits by use of a smaller (polynomially large) number of quantum bits (qubits). The development of this algorithm was prompted by the need, heretofore not satisfied, for a means of entering real-world binary data into a quantum computer. The data format provided by this algorithm is suitable for subsequent ultrafast quantum processing of the entered data. Potential applications lie in disciplines (e.g., genomics) in which one needs to search for matches between parts of very long sequences of data. For example, the algorithm could be used to encode the N-bit-long human genome in only log2N qubits. The resulting log2N-qubit state could then be used for subsequent quantum data processing - for example, to perform rapid comparisons of sequences.
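
    The resource counting above can be made concrete with amplitude encoding, one standard way of storing an N-entry classical bit string in the 2^n amplitudes of an n = log2(N) qubit state. The sketch below illustrates only this scaling claim; it is not the paper's (unpublished here) algorithm.

```python
# Amplitude encoding sketch: an N-bit string (N a power of two) becomes a
# normalized state vector over n = log2(N) qubits. Illustrates the qubit
# count claimed above, not the NASA algorithm itself.
import math

def amplitude_encode(bits):
    """Return (n_qubits, normalized amplitude vector) for a bit string."""
    n = int(math.log2(len(bits)))
    if 2 ** n != len(bits):
        raise ValueError("length must be a power of two")
    norm = math.sqrt(sum(b * b for b in bits))
    if norm == 0:
        raise ValueError("cannot encode the all-zero string")
    return n, [b / norm for b in bits]
```

    For instance, 8 classical bits map onto 3 qubits, and the resulting amplitude vector is normalized as a valid quantum state must be.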

  10. Dynamic Multiple-Threshold Call Admission Control Based on Optimized Genetic Algorithm in Wireless/Mobile Networks

    NASA Astrophysics Data System (ADS)

    Wang, Shengling; Cui, Yong; Koodli, Rajeev; Hou, Yibin; Huang, Zhangqin

    Due to the dynamics of topology and resources, Call Admission Control (CAC) plays a significant role in increasing the resource utilization ratio and guaranteeing users' QoS requirements in wireless/mobile networks. In this paper, a dynamic multi-threshold CAC scheme is proposed to serve multi-class services in a wireless/mobile network. The thresholds are renewed at the beginning of each time interval to react to the changing mobility rate and network load. To find suitable thresholds, a reward-penalty model is designed, which assigns different priorities to different service classes and call types through different reward/penalty policies according to network load and average call arrival rate. To speed up the running time of CAC, an Optimized Genetic Algorithm (OGA) is presented, whose components (encoding, population initialization, fitness function, mutation, etc.) are all optimized in terms of the traits of the CAC problem. The simulation demonstrates that the proposed CAC scheme outperforms similar schemes, which means the optimization is realized. Finally, the simulation shows the efficiency of OGA.
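
    The genetic-algorithm component can be sketched generically: chromosomes encode per-class admission thresholds, and selection, crossover, and mutation search for thresholds that maximize a reward-minus-penalty fitness. Everything below (population size, operators, fitness) is a plain textbook GA used as a stand-in; it does not reproduce the paper's OGA optimizations.

```python
# Toy GA maximizing a caller-supplied fitness over real-valued chromosomes
# (e.g., admission thresholds in [0, 1]). A generic stand-in for OGA.
import random

def evolve(fitness, n_genes, pop_size=20, gens=40, rng=None):
    rng = rng or random.Random(1)
    pop = [[rng.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_genes) if n_genes > 1 else 0
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < 0.2:             # mutation: resample one gene
                child[rng.randrange(n_genes)] = rng.random()
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```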

  11. Collusion-aware privacy-preserving range query in tiered wireless sensor networks.

    PubMed

    Zhang, Xiaoying; Dong, Lei; Peng, Hui; Chen, Hong; Zhao, Suyun; Li, Cuiping

    2014-12-11

    Wireless sensor networks (WSNs) are indispensable building blocks for the Internet of Things (IoT). With the development of WSNs, privacy issues have drawn more attention. Existing work on the privacy-preserving range query mainly focuses on privacy preservation and integrity verification in two-tiered WSNs in the case of compromised master nodes, but neglects the damage of node collusion. In this paper, we propose a series of collusion-aware privacy-preserving range query protocols in two-tiered WSNs. To the best of our knowledge, this paper is the first to consider collusion attacks for a range query in tiered WSNs while fulfilling the preservation of privacy and integrity. To preserve the privacy of data and queries, we propose a novel encoding scheme to conceal sensitive information. To preserve the integrity of the results, we present a verification scheme using the correlation among data. In addition, two schemes are further presented to improve result accuracy and reduce communication cost. Finally, theoretical analysis and experimental results confirm the efficiency, accuracy and privacy of our proposals.
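
    A common way to conceal values while still answering range queries is bucketing: sensors tag hidden readings with only a coarse bucket id, the query is translated into bucket ids, and the storage node matches buckets without learning the values. The sketch below shows this generic technique for intuition; it is not the paper's specific collusion-aware encoding, and the "ciphertext" is only simulated.

```python
# Generic bucketing sketch for privacy-preserving range queries. Encryption
# is simulated; only bucket ids are visible to the storage node.

def bucket_id(value, width=10):
    return value // width

def store(readings, width=10):
    """Sensor side: pair each (simulated) ciphertext with its bucket id."""
    return [(bucket_id(v, width), ("ciphertext", v)) for v in readings]

def range_query(cell, lo, hi, width=10):
    """Storage node matches bucket ids; the sink decrypts and refines."""
    wanted = set(range(bucket_id(lo, width), bucket_id(hi, width) + 1))
    candidates = [ct for b, ct in cell if b in wanted]
    return sorted(v for _, v in candidates if lo <= v <= hi)
```

    The bucket width trades privacy against communication cost: wider buckets leak less about individual values but force the storage node to return more false-positive candidates.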

  12. Collusion-Aware Privacy-Preserving Range Query in Tiered Wireless Sensor Networks†

    PubMed Central

    Zhang, Xiaoying; Dong, Lei; Peng, Hui; Chen, Hong; Zhao, Suyun; Li, Cuiping

    2014-01-01

    Wireless sensor networks (WSNs) are indispensable building blocks for the Internet of Things (IoT). With the development of WSNs, privacy issues have drawn more attention. Existing work on the privacy-preserving range query mainly focuses on privacy preservation and integrity verification in two-tiered WSNs in the case of compromised master nodes, but neglects the damage of node collusion. In this paper, we propose a series of collusion-aware privacy-preserving range query protocols in two-tiered WSNs. To the best of our knowledge, this paper is the first to consider collusion attacks for a range query in tiered WSNs while fulfilling the preservation of privacy and integrity. To preserve the privacy of data and queries, we propose a novel encoding scheme to conceal sensitive information. To preserve the integrity of the results, we present a verification scheme using the correlation among data. In addition, two schemes are further presented to improve result accuracy and reduce communication cost. Finally, theoretical analysis and experimental results confirm the efficiency, accuracy and privacy of our proposals. PMID:25615731

  13. When and Why Do University Managers Use Publication Incentive Payments?

    ERIC Educational Resources Information Center

    Opstrup, Niels

    2017-01-01

    Pay-for-performance schemes have become a widespread management strategy in the public sector. However, not much is known about the rationales that trigger the adoption of performance-related pay provisions. This article examines managerial and organisational features of university departments in Denmark that use publication incentive payments.…

  14. Horse cDNA clones encoding two MHC class I genes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbis, D.P.; Maher, J.K.; Stanek, J.

    1994-12-31

    Two full-length clones encoding MHC class I genes were isolated by screening a horse cDNA library, using a probe encoding the human HLA-A2.2Y allele. The library was made in the pcDNA1 vector (Invitrogen, San Diego, CA), using mRNA from peripheral blood lymphocytes obtained from a Thoroughbred stallion (No. 0834) homozygous for a common horse MHC haplotype (ELA-A2, -B2, -D2; Antczak et al. 1984; Donaldson et al. 1988). The clones were sequenced using SP6 and T7 universal primers and horse-specific oligonucleotides designed to extend previously determined sequences.

  15. Demultiplexing of photonic temporal modes by a linear system

    NASA Astrophysics Data System (ADS)

    Xu, Shuang; Shen, H. Z.; Yi, X. X.

    2018-03-01

    Temporally and spatially overlapping but field-orthogonal photonic temporal modes (TMs) that intrinsically span a high-dimensional Hilbert space are recently suggested as a promising means of encoding information on photons. Presently, the realization of photonic TM technology, particularly to retrieve the information it carries, i.e., demultiplexing of photonic TMs, is mostly dependent on nonlinear medium and frequency conversion. Meanwhile, its miniaturization, simplification, and optimization remain the focus of research. In this paper, we propose a scheme of TM demultiplexing using linear systems consisting of resonators with linear couplings. Specifically, we examine a unidirectional array of identical resonators with short environment correlations. For both situations with and without tunable couplers, propagation formulas are derived to demonstrate photonic TM demultiplexing capabilities. The proposed scheme, being entirely feasible with current technologies, might find potential applications in quantum information processing.

  16. Coding/modulation trade-offs for Shuttle wideband data links

    NASA Technical Reports Server (NTRS)

    Batson, B. H.; Huth, G. K.; Trumpis, B. D.

    1974-01-01

    This paper describes various modulation and coding schemes which are potentially applicable to the Shuttle wideband data relay communications link. This link will be capable of accommodating up to 50 Mbps of scientific data and will be subject to a power constraint which forces the use of channel coding. Although convolutionally encoded coherent binary PSK is the tentative signal design choice for the wideband data relay link, FM techniques are of interest because of the associated hardware simplicity and because an FM system is already planned to be available for transmission of television via relay satellite to the ground. Binary and M-ary FSK are considered as candidate modulation techniques, and both coherent and noncoherent ground station detection schemes are examined. The potential use of convolutional coding is considered in conjunction with each of the candidate modulation techniques.
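
    As an illustration of the channel coding under consideration, the sketch below implements a standard rate-1/2 convolutional encoder with constraint length 3 and generator polynomials 7 and 5 (octal) — a textbook example of the class of codes discussed, not the specific Shuttle link design.

```python
# Rate-1/2 convolutional encoder, constraint length 3, generators (7, 5)
# octal: each input bit produces two output bits from the shift register.

def conv_encode(bits, g1=0b111, g2=0b101):
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111          # shift in the new bit
        out.append(bin(state & g1).count("1") % 2)  # parity under generator 1
        out.append(bin(state & g2).count("1") % 2)  # parity under generator 2
    return out
```

    Encoding the input 1011 yields 11 10 00 01, doubling the bit count in exchange for error-correction capability at the decoder (typically Viterbi).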

  17. Digital communication with Rydberg atoms and amplitude-modulated microwave fields

    NASA Astrophysics Data System (ADS)

    Meyer, David H.; Cox, Kevin C.; Fatemi, Fredrik K.; Kunz, Paul D.

    2018-05-01

    Rydberg atoms, with one highly excited, nearly ionized electron, have extreme sensitivity to electric fields, including microwave fields ranging from 100 MHz to over 1 THz. Here, we show that room-temperature Rydberg atoms can be used as sensitive, high-bandwidth microwave communication antennas. We demonstrate near photon-shot-noise-limited readout of data encoded in amplitude-modulated 17 GHz microwaves, using an electromagnetically induced transparency (EIT) probing scheme. We measure a photon-shot-noise-limited channel capacity of up to 8.2 Mbit/s and implement an 8-state phase-shift-keying digital communication protocol. The bandwidth of the EIT probing scheme is found to be limited by the available coupling laser power and the natural linewidth of the rubidium D2 transition. We discuss how atomic communication receivers offer several opportunities to surpass the capabilities of classical antennas.
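
    Two back-of-envelope relations connect the quoted figures: an 8-state phase-shift-keying symbol carries log2(8) = 3 bits, and the Shannon-Hartley law bounds channel capacity by bandwidth and SNR. The helper functions below state these textbook relations only; the numeric arguments are illustrative, not measurements from the experiment.

```python
# Textbook relations behind the quoted figures: bits per PSK symbol and
# the Shannon-Hartley capacity bound. Inputs are illustrative.
import math

def psk_bits_per_symbol(states):
    """Bits carried by one symbol of an M-state PSK constellation."""
    return math.log2(states)

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley bound: C = B * log2(1 + SNR) in bits per second."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)
```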

  18. Deterministic error correction for nonlocal spatial-polarization hyperentanglement

    PubMed Central

    Li, Tao; Wang, Guan-Yu; Deng, Fu-Guo; Long, Gui-Lu

    2016-01-01

    Hyperentanglement is an effective quantum source for quantum communication networks due to its high capacity, low loss rate, and its ability to teleport the complete state of a quantum particle. Here we present a deterministic error-correction scheme for nonlocal spatial-polarization hyperentangled photon pairs over collective-noise channels. In our scheme, the spatial-polarization hyperentanglement is first encoded into a spatially defined time-bin entanglement with identical polarization before it is transmitted over collective-noise channels, which leads to the rejection of errors in the spatial entanglement during transmission. The polarization noise affecting the polarization entanglement can be corrected with a proper one-step decoding procedure. The two parties in quantum communication can, in principle, obtain a nonlocal maximally entangled spatial-polarization hyperentanglement in a deterministic way, which makes our protocol more convenient than others in long-distance quantum communication. PMID:26861681

  19. Deterministic error correction for nonlocal spatial-polarization hyperentanglement.

    PubMed

    Li, Tao; Wang, Guan-Yu; Deng, Fu-Guo; Long, Gui-Lu

    2016-02-10

    Hyperentanglement is an effective quantum source for quantum communication networks due to its high capacity, low loss rate, and its ability to teleport the complete state of a quantum particle. Here we present a deterministic error-correction scheme for nonlocal spatial-polarization hyperentangled photon pairs over collective-noise channels. In our scheme, the spatial-polarization hyperentanglement is first encoded into a spatially defined time-bin entanglement with identical polarization before it is transmitted over collective-noise channels, which leads to the rejection of errors in the spatial entanglement during transmission. The polarization noise affecting the polarization entanglement can be corrected with a proper one-step decoding procedure. The two parties in quantum communication can, in principle, obtain a nonlocal maximally entangled spatial-polarization hyperentanglement in a deterministic way, which makes our protocol more convenient than others in long-distance quantum communication.

  20. How health economic evaluation (HEE) contributes to decision-making in public health care: the case of Brazil.

    PubMed

    Elias, Flávia Tavares Silva; Araújo, Denizar Vianna

    2014-01-01

    Universal access to a health care system for the Brazilian population was established in 1990. Brazil is a country with no tradition of producing and using health economic evaluation (HEE) to guide decision making in the public health system. Only within the last two decades have HEEs using a microeconomic approach appeared in the academic field. On a national level, HEE and Health Technology Assessment (HTA), in a wider sense, were first taken into account in 2003. Two policies deserve mention: (i) the regulation of medicines in the Brazilian market, and (ii) science, technology and innovation policy. The latter required the fostering of applied research to encourage the application of methods which employ systematic reviews and economic analyses of cost-effectiveness to guide the incorporation of technologies into the Brazilian health care system. The Ministry of Health has initiated the process of incorporating these new technologies at the federal level during the last ten years. In spite of the improvement of HEE methods at Brazilian universities and research institutes, these technologies have not yet reached the governmental bodies. In Brazil, the main challenge lies in the production, interpretation and application of HEE to all technologies within the access scheme(s), and capacity building is limited. Setting priorities can be the solution for Brazil to be able to perform HEE for relevant technologies within the access scheme(s) while the universal coverage system struggles with a triple burden of disease.
