Science.gov

Sample records for quantum computer architectures

  1. Layered Architectures for Quantum Computers and Quantum Repeaters

    NASA Astrophysics Data System (ADS)

    Jones, Nathan C.

    This chapter examines how to organize quantum computers and repeaters using a systematic framework known as layered architecture, where machine control is organized in layers associated with specialized tasks. The framework is flexible and could be used for analysis and comparison of quantum information systems. To demonstrate the design principles in practice, we develop architectures for quantum computers and quantum repeaters based on optically controlled quantum dots, showing how a myriad of technologies must operate synchronously to achieve fault-tolerance. Optical control makes information processing in this system very fast, scalable to large problem sizes, and extendable to quantum communication.

  2. Topological Code Architectures for Quantum Computation

    NASA Astrophysics Data System (ADS)

    Cesare, Christopher Anthony

    This dissertation is concerned with quantum computation using many-body quantum systems encoded in topological codes. The interest in these topological systems has increased in recent years as devices in the lab begin to reach the fidelities required for performing arbitrarily long quantum algorithms. The most well-studied system, Kitaev's toric code, provides both a physical substrate for performing universal fault-tolerant quantum computations and a useful pedagogical tool for explaining the way other topological codes work. In this dissertation, I first review the necessary formalism for quantum information and quantum stabilizer codes, and then I introduce two families of topological codes: Kitaev's toric code and Bombin's color codes. I then present three chapters of original work. First, I explore the distinctness of encoding schemes in the color codes. Second, I introduce a model of quantum computation based on the toric code that uses adiabatic interpolations between static Hamiltonians with gaps constant in the system size. Lastly, I describe novel state distillation protocols that are naturally suited for topological architectures and show that they provide resource savings in terms of the number of required ancilla states when compared to more traditional approaches to quantum gate approximation.
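
    To make the stabilizer formalism reviewed here concrete, the sketch below builds the star and plaquette operators of a small toric code and verifies that they commute; the lattice size and edge indexing are illustrative choices, not taken from the dissertation.

```python
# Minimal sketch: star (X-type) and plaquette (Z-type) stabilizers of an L x L
# toric code, with qubits on edges. Lattice size and indexing are illustrative.
from itertools import product

L = 3  # linear size of the torus (assumption for the example)

def h(x, y):  # horizontal edge leaving vertex (x, y) in the +x direction
    return 2 * ((y % L) * L + (x % L))

def v(x, y):  # vertical edge leaving vertex (x, y) in the +y direction
    return 2 * ((y % L) * L + (x % L)) + 1

# X-type star operators: the four edges meeting at each vertex.
stars = [{h(x, y), h(x - 1, y), v(x, y), v(x, y - 1)} for x, y in product(range(L), repeat=2)]
# Z-type plaquette operators: the four edges bounding each face.
plaquettes = [{h(x, y), h(x, y + 1), v(x, y), v(x + 1, y)} for x, y in product(range(L), repeat=2)]

# X- and Z-type stabilizers commute iff they overlap on an even number of edges.
assert all(len(s & p) % 2 == 0 for s in stars for p in plaquettes)

n_qubits = 2 * L * L
n_independent = 2 * L * L - 2      # one star and one plaquette are redundant
print(f"{n_qubits} physical qubits, {n_qubits - n_independent} logical qubits encoded")
```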

  3. Analysis of an Atom-Optical Architecture for Quantum Computation

    NASA Astrophysics Data System (ADS)

    Devitt, Simon J.; Stephens, Ashley M.; Munro, William J.; Nemoto, Kae

    Quantum technology based on photons has emerged as one of the most promising platforms for quantum information processing, having already been used in proof-of-principle demonstrations of quantum communication and quantum computation. However, the scalability of this technology depends on the successful integration of experimentally feasible devices in an architecture that tolerates realistic errors and imperfections. Here, we analyse an atom-optical architecture for quantum computation designed to meet the requirements of scalability. The architecture is based on a modular atom-cavity device that provides an effective photon-photon interaction, allowing for the rapid, deterministic preparation of a large class of entangled states. We begin our analysis at the physical level, where we outline the experimental cavity quantum electrodynamics requirements of the basic device. Then, we describe how a scalable network of these devices can be used to prepare a three-dimensional topological cluster state, sufficient for universal fault-tolerant quantum computation. We conclude at the application level, where we estimate the system-level requirements of the architecture executing an algorithm compiled for compatibility with the topological cluster state.

  4. Scalable quantum computer architecture with coupled donor-quantum dot qubits

    DOEpatents

    Schenkel, Thomas; Lo, Cheuk Chi; Weis, Christoph; Lyon, Stephen; Tyryshkin, Alexei; Bokor, Jeffrey

    2014-08-26

    A quantum bit computing architecture includes a plurality of single spin memory donor atoms embedded in a semiconductor layer, a plurality of quantum dots arranged with the semiconductor layer and aligned with the donor atoms, wherein a first voltage applied across at least one pair of the aligned quantum dot and donor atom controls a donor-quantum dot coupling. A method of performing quantum computing in a scalable architecture quantum computing apparatus includes arranging a pattern of single spin memory donor atoms in a semiconductor layer, forming a plurality of quantum dots arranged with the semiconductor layer and aligned with the donor atoms, applying a first voltage across at least one aligned pair of a quantum dot and donor atom to control a donor-quantum dot coupling, and applying a second voltage between one or more quantum dots to control a Heisenberg exchange J coupling between quantum dots and to cause transport of a single spin polarized electron between quantum dots.
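
    As a toy illustration of the claimed control scheme (not the patent's implementation), the sketch below models an array in which a first voltage V1 tunes each donor-dot coupling and a second voltage V2 tunes the dot-dot Heisenberg exchange J; the voltage-to-coupling maps are placeholders.

```python
# Toy model of the claimed control scheme: V1 tunes the donor-dot coupling,
# V2 tunes the dot-dot Heisenberg exchange J. The linear voltage-to-energy
# maps below are placeholders, not values from the patent.
from dataclasses import dataclass, field

@dataclass
class DonorDotArray:
    n_sites: int
    donor_dot_coupling: dict = field(default_factory=dict)  # site -> coupling (arb. units)
    exchange_J: dict = field(default_factory=dict)           # (site_i, site_j) -> J (arb. units)

    def apply_v1(self, site: int, v1: float) -> None:
        """First voltage: controls coupling of the aligned donor-dot pair."""
        self.donor_dot_coupling[site] = 0.1 * v1             # placeholder transfer function

    def apply_v2(self, site_i: int, site_j: int, v2: float) -> None:
        """Second voltage: controls Heisenberg exchange J between neighboring dots."""
        self.exchange_J[(site_i, site_j)] = 0.05 * v2        # placeholder transfer function

array = DonorDotArray(n_sites=4)
array.apply_v1(0, 1.2)        # store/retrieve a spin at donor 0
array.apply_v2(0, 1, 0.8)     # exchange-couple dots 0 and 1 (e.g. for a swap step)
print(array.donor_dot_coupling, array.exchange_J)
```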

  5. QNIX: A Linear Optical Architecture for Quantum Computing

    NASA Astrophysics Data System (ADS)

    Gimeno-Segovia, Mercedes; Shadbolt, Peter J.; Rudolph, Terry G.; Browne, Dan E.; Mendoza, Gabriel; Russell, Nicholas J.; Silverstone, Joshua W.; Santamato, Alberto; Carolan, Jacques; O'Brien, Jeremy

    2015-03-01

    There is currently a great deal of effort to develop a large-scale quantum computer, and one of the most promising platforms to do so is integrated linear optics. We present a proposal for a dynamical scheme for an integrated linear optics implementation of a one-way quantum computer. We go beyond the purely theoretical work and address practical issues in order to create a physically realistic design. We describe every step of cluster state construction and processing, showing the outstanding issues left to be addressed and our contributions to the different stages of the dynamical process. These include optimised interferometers for the generation of GHZ states, a universal and scalable architecture which requires entangled sources of no more than 3 photons with no active feed-forward, and loss-tolerant and fault-tolerant strategies specifically tailored to our proposed architecture. Our work demonstrates that building a linear optical quantum computer need not be as challenging as previously thought, and brings large-scale switch-free linear optical architectures for quantum computing much closer to experimental realisation.
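
    The basic entangled resource mentioned above is a three-photon GHZ state. A generic statevector construction of such a state, independent of the photonic encoding used in the paper, is sketched below.

```python
# Build a 3-qubit GHZ state (|000> + |111>)/sqrt(2) with a Hadamard and two CNOTs.
# This is a generic statevector exercise, not the photonic circuit of the paper.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

psi = np.zeros(8); psi[0] = 1.0                       # |000>
psi = np.kron(np.kron(H, I2), I2) @ psi               # H on qubit 0
psi = np.kron(CNOT, I2) @ psi                         # CNOT 0 -> 1
psi = np.kron(I2, CNOT) @ psi                         # CNOT 1 -> 2
print(np.round(psi, 3))  # amplitudes 1/sqrt(2) on |000> and |111>
```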

  6. Quantum perceptron over a field and neural network architecture selection in a quantum computer.

    PubMed

    da Silva, Adenilton José; Ludermir, Teresa Bernarda; de Oliveira, Wilson Rosa

    2016-04-01

    In this work, we propose a quantum neural network named quantum perceptron over a field (QPF). Quantum computers are not yet a reality, and the models and algorithms proposed in this work cannot be efficiently simulated on current (classical) computers. QPF is a direct generalization of a classical perceptron and solves some drawbacks found in previous models of quantum perceptrons. We also present a learning algorithm named Superposition based Architecture Learning algorithm (SAL) that optimizes the neural network weights and architectures. SAL searches for the best architecture in a finite set of neural network architectures in time linear in the number of patterns in the training set. SAL is the first learning algorithm to determine neural network architectures in polynomial time. This speedup is obtained by the use of quantum parallelism and a non-linear quantum operator. PMID:26878722
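
    To make the search space SAL operates over concrete, the sketch below performs the classical analogue: an exhaustive scan over a finite set of candidate architectures. It does not reproduce the quantum speedup, and all network and data choices are illustrative.

```python
# Classical analogue of architecture selection over a finite candidate set.
# The quantum algorithm (SAL) evaluates such candidates in superposition;
# here we simply scan them, which is the baseline its speedup is measured against.
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4)); y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

def train_eval(hidden, lr, X, y, epochs=200):
    """Tiny one-hidden-layer network trained by gradient descent; returns accuracy."""
    w1 = rng.normal(scale=0.5, size=(X.shape[1], hidden)); w2 = rng.normal(scale=0.5, size=hidden)
    for _ in range(epochs):
        h = np.tanh(X @ w1)
        p = 1 / (1 + np.exp(-(h @ w2)))
        grad_out = p - y                                   # logistic-loss gradient
        w2 -= lr * h.T @ grad_out / len(y)
        w1 -= lr * X.T @ ((grad_out[:, None] * w2) * (1 - h**2)) / len(y)
    return ((p > 0.5) == y).mean()

candidates = list(product([1, 2, 4, 8], [0.1, 0.5]))       # (hidden units, learning rate)
best = max(candidates, key=lambda c: train_eval(*c, X, y))
print("selected architecture:", best)
```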

  7. Cryogenic Control Architecture for Large-Scale Quantum Computing

    NASA Astrophysics Data System (ADS)

    Hornibrook, J. M.; Colless, J. I.; Conway Lamb, I. D.; Pauka, S. J.; Lu, H.; Gossard, A. C.; Watson, J. D.; Gardner, G. C.; Fallahi, S.; Manfra, M. J.; Reilly, D. J.

    2015-02-01

    Solid-state qubits have recently advanced to the level that enables them, in principle, to be scaled up into fault-tolerant quantum computers. As these physical qubits continue to advance, meeting the challenge of realizing a quantum machine will also require the development of new supporting devices and control architectures with complexity far beyond the systems used in today's few-qubit experiments. Here, we report a microarchitecture for controlling and reading out qubits during the execution of a quantum algorithm such as an error-correcting code. We demonstrate the basic principles of this architecture using a cryogenic switch matrix implemented via high-electron-mobility transistors and a new kind of semiconductor device based on gate-switchable capacitance. The switch matrix is used to route microwave waveforms to qubits under the control of a field-programmable gate array, also operating at cryogenic temperatures. Taken together, these results suggest a viable approach for controlling large-scale quantum systems using semiconductor technology.
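
    The key idea is a switch matrix that routes a few shared waveform lines to many qubits under FPGA control. A software caricature of that multiplexing step is sketched below; the packed control-word format is invented for illustration.

```python
# Caricature of the multiplexed control idea: an FPGA control word selects which
# qubit each shared waveform line is routed to. The addressing scheme is invented
# here for illustration; it is not the device's actual register map.
class SwitchMatrix:
    def __init__(self, n_lines: int, n_qubits: int):
        self.n_lines, self.n_qubits = n_lines, n_qubits
        self.routes = {}                       # waveform line -> qubit index

    def load_control_word(self, word: int) -> None:
        """Decode a packed control word: 8 bits of qubit address per line (assumed format)."""
        self.routes = {line: (word >> (8 * line)) & 0xFF for line in range(self.n_lines)}

    def route(self, line: int) -> int:
        return self.routes[line]

sm = SwitchMatrix(n_lines=2, n_qubits=64)
sm.load_control_word(0x2305)                  # line 0 -> qubit 0x05, line 1 -> qubit 0x23
print(sm.route(0), sm.route(1))               # 5 35
```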

  8. An architecture for quantum computation with magnetically trapped Holmium atoms

    NASA Astrophysics Data System (ADS)

    Saffman, Mark; Hostetter, James; Booth, Donald; Collett, Jeffrey

    2016-05-01

    Outstanding challenges for scalable neutral atom quantum computation include correction of atom loss due to collisions with untrapped background gas, reduction of crosstalk during state preparation and measurement due to scattering of near-resonant light, and the need to improve quantum gate fidelity. We present a scalable architecture based on loading single holmium atoms into an array of Ioffe-Pritchard traps. The traps are formed by grids of superconducting wires giving a trap array with 40 μm period, suitable for entanglement via long-range Rydberg gates. The states |F = 5, M = 5⟩ and |F = 7, M = 7⟩ provide a magic trapping condition at a low field of 3.5 G for long-coherence-time qubit encoding. The F = 11 level will be used for state preparation and measurement. The availability of different states for encoding, gate operations, and measurement spectroscopically isolates the different operations and will prevent crosstalk to neighboring qubits. Operation in a cryogenic environment with ultralow pressure will increase atom lifetime and Rydberg gate fidelity by reducing blackbody-induced Rydberg decay. We will present a complete description of the architecture, including estimates of achievable performance metrics. Work supported by NSF award PHY-1404357.

  9. Spin-qubit inspired architectures for superconducting quantum computing

    NASA Astrophysics Data System (ADS)

    Shim, Yun-Pil; Tahan, Charles

    2015-03-01

    In recent years, the superconducting qubit community has achieved single and two-qubit benchmarked gate fidelities approaching 99.9%, fast readout with novel superconducting amplifiers, distributed entanglement, and other milestones on the road to fault-tolerant quantum information processing. Obviously, this is a field that could use some help from the semiconductor qubit community! Here we present theoretical work on superconducting qubit systems inspired by our experience with semiconductor qubits. We discuss initialization, single- and two-qubit gate operations, and measurement schemes for an encoded qubit in a two-dimensional architecture. Our results motivate new ways of designing or operating superconducting quantum information processors.

  10. Quantum Computing

    NASA Astrophysics Data System (ADS)

    Steffen, Matthias

    2013-03-01

    Quantum mechanics plays a crucial role in many day-to-day products, and has been successfully used to explain a wide variety of observations in Physics. While some quantum effects such as tunneling limit the degree to which modern CMOS devices can be scaled to ever reducing dimensions, others may potentially be exploited to build an entirely new computing architecture: The quantum computer. In this talk I will review several basic concepts of a quantum computer. Why quantum computing and how do we do it? What is the status of several (but not all) approaches towards building a quantum computer, including IBM's approach using superconducting qubits? And what will it take to build a functional machine? The promise is that a quantum computer could solve certain interesting computational problems such as factoring using exponentially fewer computational steps than classical systems. Although the most sophisticated modern quantum computing experiments to date do not outperform simple classical computations, it is increasingly becoming clear that small scale demonstrations with as many as 100 qubits are beginning to be within reach over the next several years. Such a demonstration would undoubtedly be a thrilling feat, and usher in a new era of controllably testing quantum mechanics or quantum computing aspects. At the minimum, future demonstrations will shed much light on what lies ahead.

  11. Architecture Framework for Trapped-Ion Quantum Computer based on Performance Simulation Tool

    NASA Astrophysics Data System (ADS)

    Ahsan, Muhammad

    The challenge of building a scalable quantum computer lies in striking an appropriate balance between designing a reliable system architecture from a large number of faulty computational resources and improving the physical quality of system components. Detailed investigation of how performance varies with the physics of the components and with the system architecture requires an adequate performance simulation tool. In this thesis we demonstrate a software tool capable of (1) mapping and scheduling a quantum circuit onto a realistic quantum hardware architecture with physical resource constraints, (2) evaluating performance metrics such as the execution time and the success probability of the algorithm execution, and (3) analyzing the constituents of these metrics and visualizing resource utilization to identify the system components that crucially define the overall performance. Using this versatile tool, we explore the vast design space for a modular quantum computer architecture based on trapped ions. We find that while the success probability is uniformly determined by the fidelity of the physical quantum operations, the execution time is a function of the system resources invested at various layers of the design hierarchy. At the physical level, the number of lasers performing quantum gates impacts the latency of fault-tolerant circuit-block execution. When these blocks are used to construct meaningful arithmetic circuits such as quantum adders, the number of ancilla qubits for complicated non-Clifford gates and the entanglement resources needed to establish long-distance communication channels become the major performance-limiting factors. Next, in order to factorize large integers, these adders are assembled into the modular exponentiation circuit that comprises the bulk of Shor's algorithm. At this stage, the overall scaling of resource-constrained performance with the size of the problem describes the effectiveness of the chosen design. By matching the resource investment with the pace of advancement in hardware technology
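
    The two reported metrics can be sketched from first principles: success probability as the product of the fidelities of all scheduled operations, and execution time as the critical path of the schedule. The gate parameters below are placeholders, not trapped-ion data.

```python
# Toy version of the two reported metrics: success probability as a product of
# operation fidelities, execution time as the critical path of the schedule.
# Gate durations and fidelities below are placeholders, not trapped-ion data.
import math

# (name, qubits, duration_us, fidelity) for a scheduled circuit fragment
schedule = [
    ("prep", [0],    10.0, 0.999),
    ("prep", [1],    10.0, 0.999),
    ("2q",   [0, 1], 100.0, 0.995),
    ("1q",   [0],     5.0, 0.9999),
    ("meas", [1],   200.0, 0.998),
]

success_probability = math.prod(f for *_, f in schedule)

# Critical path: each qubit accumulates time; a multi-qubit gate synchronises its qubits.
finish = {}
for _, qubits, dt, _ in schedule:
    start = max(finish.get(q, 0.0) for q in qubits)
    for q in qubits:
        finish[q] = start + dt
execution_time = max(finish.values())

print(f"P_success ~ {success_probability:.4f}, T ~ {execution_time:.0f} us")
```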

  12. Simulation of Si:P spin-based quantum computer architecture

    SciTech Connect

    Chang Yiachung; Fang Angbo

    2008-11-07

    We present realistic simulations of single and double phosphorus donors in a silicon-based quantum computer design by solving a valley-orbit coupled effective-mass equation describing phosphorus donors in a strained silicon quantum well (QW). Using a generalized unrestricted Hartree-Fock method, we solve the two-electron effective-mass equation with quantum well confinement and realistic gate potentials. The effects of QW width, gate voltages, donor separation, and donor position shift on the lowest singlet and triplet energies and their charge distributions for a neighboring donor pair in the quantum computer (QC) architecture are analyzed. The gate tunability is defined and evaluated for a typical QC design. Estimates are obtained for the duration of the spin half-swap gate operation.
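
    For orientation, a half-swap driven by a Heisenberg exchange coupling H = J S₁·S₂ takes a time t = πħ/(2J), since the singlet-triplet splitting is J. The sketch below evaluates this for an arbitrarily chosen J, not a value from the simulations.

```python
# Half-swap (sqrt(SWAP)) duration for a Heisenberg exchange coupling H = J S1.S2:
# the singlet-triplet splitting is J, so a full SWAP takes pi*hbar/J and a half-swap
# takes half of that. J = 1 micro-eV is an arbitrary illustrative value.
import math

HBAR_EV_S = 6.582119569e-16            # hbar in eV*s

def half_swap_time(J_eV: float) -> float:
    return math.pi * HBAR_EV_S / (2.0 * J_eV)

J = 1e-6                                # assumed exchange splitting of 1 micro-eV
print(f"t_half_swap ~ {half_swap_time(J) * 1e9:.2f} ns")   # about 1 ns
```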

  13. Toward a scalable quantum computing architecture with mixed species ion chains

    NASA Astrophysics Data System (ADS)

    Wright, John; Auchter, Carolyn; Chou, Chen-Kuan; Graham, Richard D.; Noel, Thomas W.; Sakrejda, Tomasz; Zhou, Zichao; Blinov, Boris B.

    2016-01-01

    We report on progress toward implementing mixed ion species quantum information processing for a scalable ion-trap architecture. Mixed species chains may help solve several problems with scaling ion-trap quantum computation to large numbers of qubits. Initial temperature measurements of linear Coulomb crystals containing barium and ytterbium ions indicate that the mass difference does not significantly impede cooling at low ion numbers. Average motional occupation numbers are estimated to be n̄ ≈ 130 quanta per mode for chains with small numbers of ions, which is within a factor of three of the Doppler limit for barium ions in our trap. We also discuss generation of ion-photon entanglement with barium ions with a fidelity of F ≥ 0.84, which is an initial step towards remote ion-ion coupling in a more scalable quantum information architecture. Further, we are working to implement these techniques in surface traps in order to exercise greater control over ion chain ordering and positioning.
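
    For comparison with the quoted n̄ ≈ 130, the Doppler-limit occupation of a mode of frequency ω cooled on a transition of linewidth Γ is roughly n̄ ≈ Γ/(2ω). The linewidth and trap frequency below are illustrative round numbers, not this experiment's parameters.

```python
# Rough Doppler-limit phonon occupation n_D ~ Gamma / (2 * omega) for a trapped ion.
# Linewidth and trap frequency are illustrative round numbers, not this experiment's values.
import math

gamma = 2 * math.pi * 15e6        # cooling-transition linewidth (rad/s), assumed
omega = 2 * math.pi * 200e3       # secular trap frequency (rad/s), assumed

n_doppler = gamma / (2 * omega)
print(f"n_Doppler ~ {n_doppler:.0f} quanta")   # a few tens of quanta for these numbers
```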

  14. Robust quantum gates and a bus architecture for quantum computing with rare-earth-ion-doped crystals

    SciTech Connect

    Wesenberg, Janus; Moelmer, Klaus

    2003-07-01

    We present a composite pulse controlled phase gate which, together with a bus architecture, improves the feasibility of a recent quantum computing proposal based on rare-earth-ion-doped crystals. The proposed gate operation is tolerant to variations between ions of coupling strengths, pulse lengths, and frequency shifts. In the absence of decoherence effects, it achieves worst case fidelities above 0.999 with relative variations in coupling strength as high as 10% and frequency shifts up to several percent of the resonant Rabi frequency of the laser used to implement the gate. We outline an experiment to demonstrate the creation and detection of maximally entangled states in the system.
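
    To see what the composite gate has to fight, the sketch below computes how the fidelity of a plain (non-composite) π rotation falls with a relative error ε in the coupling strength; the paper's composite sequence, not reproduced here, is designed to flatten exactly this curve.

```python
# Sensitivity of a plain pi rotation to a relative coupling-strength error eps:
# U(eps) = exp(-i * pi * (1 + eps) * X / 2). Composite pulses (as in the paper)
# are designed to suppress this first-order sensitivity; only the naive case is shown.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)

def rotation(angle):
    return np.cos(angle / 2) * np.eye(2) - 1j * np.sin(angle / 2) * X

U_ideal = rotation(np.pi)
for eps in (0.01, 0.03, 0.10):
    U = rotation(np.pi * (1 + eps))
    fidelity = abs(np.trace(U_ideal.conj().T @ U)) / 2
    print(f"eps = {eps:.2f}: gate fidelity = {fidelity:.5f}")
# A 10% error already drops the plain pulse well below the 0.999 the composite gate keeps.
```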

  15. New computer architectures

    SciTech Connect

    Tiberghien, J.

    1984-01-01

    This book presents papers on supercomputers. Topics considered include decentralized computer architecture, new programming languages, data flow computers, reduction computers, parallel prefix calculations, structural and behavioral descriptions of digital systems, instruction sets, software generation, personal computing, and computer architecture education.

  16. Quantum Computation and Quantum Information

    NASA Astrophysics Data System (ADS)

    Nielsen, Michael A.; Chuang, Isaac L.

    2010-12-01

    Part I. Fundamental Concepts: 1. Introduction and overview; 2. Introduction to quantum mechanics; 3. Introduction to computer science; Part II. Quantum Computation: 4. Quantum circuits; 5. The quantum Fourier transform and its application; 6. Quantum search algorithms; 7. Quantum computers: physical realization; Part III. Quantum Information: 8. Quantum noise and quantum operations; 9. Distance measures for quantum information; 10. Quantum error-correction; 11. Entropy and information; 12. Quantum information theory; Appendices; References; Index.

  17. Physical Architecture for a Universal Topological Quantum Computer based on a Network of Majorana Nanowires

    NASA Astrophysics Data System (ADS)

    Sau, Jay; Barkeshli, Maissam

    The idea of topological quantum computation (TQC) is to encode and manipulate quantum information in an intrinsically fault-tolerant manner by utilizing the physics of topologically ordered phases of matter. Currently, the most promising platforms for a topological qubit are either in terms of Majorana fermion zero modes (MZMs) in spin-orbit coupled superconducting nanowires or in terms of the Kitaev Z2 surface code. However, the topologically robust operations that are possible in these systems are not sufficient for realizing a universal gate set for topological quantum computation. Here, we show that an array of coupled semiconductor/superconductor nanowires with MZM edge states can be used to realize a more sophisticated type of non-Abelian defect, a genon in an Ising × Ising topological state. This leads to a possible implementation of the missing topologically protected π/8 phase gate and thus paves a path for universal topological quantum computation based on semiconductor-superconductor nanowire technology. We provide detailed numerical estimates of the relevant energy scales, which we show to lie within accessible ranges. J. S. was supported by Microsoft Station Q, startup funds from the University of Maryland and NSF-JQI-PFC.

  18. Architecture Adaptive Computing Environment

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    2006-01-01

    Architecture Adaptive Computing Environment (aCe) is a software system that includes a language, compiler, and run-time library for parallel computing. aCe was developed to enable programmers to write programs, more easily than was previously possible, for a variety of parallel computing architectures. Heretofore, it has been perceived to be difficult to write parallel programs for parallel computers and more difficult to port the programs to different parallel computing architectures. In contrast, aCe is supportable on all high-performance computing architectures. Currently, it is supported on LINUX clusters. aCe uses parallel programming constructs that facilitate writing of parallel programs. Such constructs were used in single-instruction/multiple-data (SIMD) programming languages of the 1980s, including Parallel Pascal, Parallel Forth, C*, *LISP, and MasPar MPL. In aCe, these constructs are extended and implemented for both SIMD and multiple- instruction/multiple-data (MIMD) architectures. Two new constructs incorporated in aCe are those of (1) scalar and virtual variables and (2) pre-computed paths. The scalar-and-virtual-variables construct increases flexibility in optimizing memory utilization in various architectures. The pre-computed-paths construct enables the compiler to pre-compute part of a communication operation once, rather than computing it every time the communication operation is performed.

  19. Highly Parallel Computing Architectures by using Arrays of Quantum-dot Cellular Automata (QCA): Opportunities, Challenges, and Recent Results

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Toomarian, Benny N.

    2000-01-01

    There has been significant improvement in the performance of VLSI devices, in terms of size, power consumption, and speed, in recent years, and this trend may continue for the near future. However, it is a well-known fact that there are major obstacles, i.e., the physical limitation of feature size reduction and the ever increasing cost of foundries, that would prevent the long-term continuation of this trend. This has motivated the exploration of some fundamentally new technologies that are not dependent on the conventional feature size approach. Such technologies are expected to enable scaling to continue to the ultimate level, i.e., molecular and atomistic size. Quantum computing, quantum dot-based computing, DNA based computing, biologically inspired computing, etc., are examples of such new technologies. In particular, quantum dot-based computing using Quantum-dot Cellular Automata (QCA) has recently been intensely investigated as a promising new technology capable of offering significant improvement over conventional VLSI in terms of reduction of feature size (and hence increase in integration level), reduction of power consumption, and increase of switching speed. Quantum dot-based computing and memory in general, and QCA specifically, are intriguing to NASA due to their high packing density (10^11-10^12 per square cm), low power consumption (no transfer of current), and potentially higher radiation tolerance. Under the Revolutionary Computing Technology (RTC) Program at the NASA/JPL Center for Integrated Space Microelectronics (CISM), we have been investigating the potential applications of QCA for the space program. To this end, exploiting the intrinsic features of QCA, we have designed novel QCA-based circuits for co-planar (i.e., single-layer) and compact implementation of a class of data permutation matrices, a class of interconnection networks, and a bit-serial processor. Building upon these circuits, we have developed novel algorithms and QCA
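
    The majority gate mentioned above is three-input majority voting, from which AND and OR follow by pinning one input; a minimal model is sketched below.

```python
# Three-input majority gate, the native QCA logic primitive. Fixing one input to 0
# gives AND, fixing it to 1 gives OR, which is how QCA circuits build Boolean logic.
def majority(a: int, b: int, c: int) -> int:
    return (a & b) | (b & c) | (a & c)

AND = lambda a, b: majority(a, b, 0)
OR  = lambda a, b: majority(a, b, 1)

assert all(AND(a, b) == (a & b) and OR(a, b) == (a | b)
           for a in (0, 1) for b in (0, 1))
print("majority(1, 0, 1) =", majority(1, 0, 1))   # 1
```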

  20. Quantum Computation

    NASA Astrophysics Data System (ADS)

    Ekert, Artur

    1994-08-01

    As computers become faster they must become smaller because of the finiteness of the speed of light. The history of computer technology has involved a sequence of changes from one type of physical realisation to another - from gears to relays to valves to transistors to integrated circuits and so on. Quantum mechanics is already important in the design of microelectronic components. Soon it will be necessary to harness quantum mechanics rather than simply take it into account, and at that point it will be possible to give data processing devices new functionality.

  1. The Physics of Quantum Computation

    NASA Astrophysics Data System (ADS)

    Falci, Giuseppe; Paladino, Elisabette

    2015-10-01

    Quantum Computation has emerged in the past decades as a consequence of down-scaling of electronic devices to the mesoscopic regime and of advances in the ability of controlling and measuring microscopic quantum systems. QC has many interdisciplinary aspects, ranging from physics and chemistry to mathematics and computer science. In these lecture notes we focus on physical hardware, present day challenges and future directions for design of quantum architectures.

  2. Quantum walk computation

    SciTech Connect

    Kendon, Viv

    2014-12-04

    Quantum versions of random walks have diverse applications that are motivating experimental implementations as well as theoretical studies. Recent results showing quantum walks are “universal for quantum computation” relate to algorithms, to be run on quantum computers. We consider whether an experimental implementation of a quantum walk could provide useful computation before we have a universal quantum computer.
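
    As a minimal concrete example of the walks under discussion, a continuous-time quantum walk on a graph with adjacency matrix A evolves as |ψ(t)⟩ = e^{-iAt}|ψ(0)⟩; the graph and evolution time below are arbitrary choices.

```python
# Continuous-time quantum walk on an N-site cycle: psi(t) = exp(-i A t) psi(0),
# with A the adjacency matrix. Graph size and evolution time are arbitrary choices.
import numpy as np
from scipy.linalg import expm

N, t = 8, 2.0
A = np.zeros((N, N))
for j in range(N):
    A[j, (j + 1) % N] = A[(j + 1) % N, j] = 1.0   # cycle graph

psi0 = np.zeros(N, dtype=complex); psi0[0] = 1.0   # walker starts on site 0
psi_t = expm(-1j * A * t) @ psi0
print(np.round(np.abs(psi_t) ** 2, 3))             # site occupation probabilities, sum to 1
```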

  3. Adiabatic Quantum Computing

    NASA Astrophysics Data System (ADS)

    Landahl, Andrew

    2012-10-01

    Quantum computers promise to exploit counterintuitive quantum physics principles like superposition, entanglement, and uncertainty to solve problems using fundamentally fewer steps than any conventional computer ever could. The mere possibility of such a device has sharpened our understanding of quantum coherent information, just as lasers did for our understanding of coherent light. The chief obstacle to developing quantum computer technology is decoherence--one of the fastest phenomena in all of physics. In principle, decoherence can be overcome by using clever entangled redundancies in a process called fault-tolerant quantum error correction. However, the quality and scale of technology required to realize this solution appears distant. An exciting alternative is a proposal called ``adiabatic'' quantum computing (AQC), in which adiabatic quantum physics keeps the computer in its lowest-energy configuration throughout its operation, rendering it immune to many decoherence sources. The Adiabatic Quantum Architectures In Ultracold Systems (AQUARIUS) Grand Challenge Project at Sandia seeks to demonstrate this robustness in the laboratory and point a path forward for future hardware development. We are building devices in AQUARIUS that realize the AQC architecture on up to three quantum bits (``qubits'') in two platforms: Cs atoms laser-cooled to below 5 microkelvin and Si quantum dots cryo-cooled to below 100 millikelvin. We are also expanding theoretical frontiers by developing methods for scalable universal AQC in these platforms. We have successfully demonstrated operational qubits in both platforms and have even run modest one-qubit calculations using our Cs device. In the course of reaching our primary proof-of-principle demonstrations, we have developed multiple spinoff technologies including nanofabricated diffractive optical elements that define optical-tweezer trap arrays and atomic-scale Si lithography commensurate with placing individual donor atoms with
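
    The quantity that controls an adiabatic computation is the spectral gap of the interpolating Hamiltonian H(s) = (1−s)H_B + sH_P. A numerical gap scan for a toy three-qubit instance, unrelated to the AQUARIUS hardware, is sketched below.

```python
# Minimum spectral gap of H(s) = (1 - s) * H_B + s * H_P for a toy 3-qubit instance.
# H_B is the standard transverse-field driver; H_P marks one "solution" bitstring.
# Both choices are illustrative, not taken from the AQUARIUS project.
import numpy as np

n = 3
I2 = np.eye(2); X = np.array([[0, 1], [1, 0]])

def on_qubit(op, k):
    mats = [op if j == k else I2 for j in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

H_B = -sum(on_qubit(X, k) for k in range(n))        # driver: ground state is the uniform superposition
H_P = np.diag([0.0 if b == 5 else 1.0 for b in range(2 ** n)])   # cost: bitstring 101 is the solution

gaps = []
for s in np.linspace(0, 1, 101):
    evals = np.linalg.eigvalsh((1 - s) * H_B + s * H_P)
    gaps.append(evals[1] - evals[0])
print(f"minimum gap ~ {min(gaps):.3f} at s ~ {np.argmin(gaps) / 100:.2f}")
```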

  4. Quantum Computer Games: Quantum Minesweeper

    ERIC Educational Resources Information Center

    Gordon, Michal; Gordon, Goren

    2010-01-01

    The computer game of quantum minesweeper is introduced as a quantum extension of the well-known classical minesweeper. Its main objective is to teach the unique concepts of quantum mechanics in a fun way. Quantum minesweeper demonstrates the effects of superposition, entanglement and their non-local characteristics. While in the classical…

  5. Quantum robots and quantum computers

    SciTech Connect

    Benioff, P.

    1998-07-01

    Validation of a presumably universal theory, such as quantum mechanics, requires a quantum mechanical description of systems that carry out theoretical calculations and systems that carry out experiments. The description of quantum computers is under active development. No description of systems to carry out experiments has been given. A small step in this direction is taken here by giving a description of quantum robots as mobile systems with on board quantum computers that interact with different environments. Some properties of these systems are discussed. A specific model based on the literature descriptions of quantum Turing machines is presented.

  6. Recursive computer architecture for VLSI

    SciTech Connect

    Treleaven, P.C.; Hopkins, R.P.

    1982-01-01

    A general-purpose computer architecture based on the concept of recursion and suitable for VLSI computer systems built from replicated (lego-like) computing elements is presented. The recursive computer architecture is defined by presenting a program organisation, a machine organisation and an experimental machine implementation oriented to VLSI. The experimental implementation is being restricted to simple, identical microcomputers each containing a memory, a processor and a communications capability. This future generation of lego-like computer systems is termed fifth generation computers by the Japanese. 30 references.

  7. Introduction to Quantum Computation

    NASA Astrophysics Data System (ADS)

    Ekert, A.

    A computation is a physical process. It may be performed by a piece of electronics or on an abacus, or in your brain, but it is a process that takes place in nature and as such it is subject to the laws of physics. Quantum computers are machines that rely on characteristically quantum phenomena, such as quantum interference and quantum entanglement in order to perform computation. In this series of lectures I want to elaborate on the computational power of such machines.

  8. Computing architecture for autonomous microgrids

    SciTech Connect

    Goldsmith, Steven Y.

    2015-09-29

    A computing architecture that facilitates autonomously controlling operations of a microgrid is described herein. A microgrid network includes numerous computing devices that execute intelligent agents, each of which is assigned to a particular entity (load, source, storage device, or switch) in the microgrid. The intelligent agents can execute in accordance with predefined protocols to collectively perform computations that facilitate uninterrupted control of the microgrid.

  9. Scalable optical quantum computer

    SciTech Connect

    Manykin, E A; Mel'nichenko, E V

    2014-12-31

    A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare-earth ions Pr³⁺, regularly located in the lattice of the orthosilicate (Y₂SiO₅) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications.

  10. Universal computation by multiparticle quantum walk.

    PubMed

    Childs, Andrew M; Gosset, David; Webb, Zak

    2013-02-15

    A quantum walk is a time-homogeneous quantum-mechanical process on a graph defined by analogy to classical random walk. The quantum walker is a particle that moves from a given vertex to adjacent vertices in quantum superposition. We consider a generalization to interacting systems with more than one walker, such as the Bose-Hubbard model and systems of fermions or distinguishable particles with nearest-neighbor interactions, and show that multiparticle quantum walk is capable of universal quantum computation. Our construction could, in principle, be used as an architecture for building a scalable quantum computer with no need for time-dependent control. PMID:23413349

  11. Massively parallel quantum computer simulator

    NASA Astrophysics Data System (ADS)

    De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.

    2007-01-01

    We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray X1E, an SGI Altix 3700 and clusters of PCs running Windows XP. We study the performance of the software by simulating quantum computers containing up to 36 qubits, using up to 4096 processors and up to 1 TB of memory. Our results demonstrate that the simulator exhibits nearly ideal scaling as a function of the number of processors and suggest that the simulation software described in this paper may also serve as a benchmark for testing high-end parallel computers.
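
    The quoted figures are consistent with storing the full statevector, i.e. 2^n complex amplitudes at 16 bytes each; a quick check:

```python
# Memory needed to store a full n-qubit statevector at 16 bytes per complex amplitude.
# 36 qubits -> 2**36 * 16 B = 1 TiB, matching the largest simulation quoted above.
for n in (30, 36, 40):
    bytes_needed = (2 ** n) * 16
    print(f"{n} qubits: {bytes_needed / 2**40:.2f} TiB")
```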

  12. Quantum computer games: quantum minesweeper

    NASA Astrophysics Data System (ADS)

    Gordon, Michal; Gordon, Goren

    2010-07-01

    The computer game of quantum minesweeper is introduced as a quantum extension of the well-known classical minesweeper. Its main objective is to teach the unique concepts of quantum mechanics in a fun way. Quantum minesweeper demonstrates the effects of superposition, entanglement and their non-local characteristics. While in the classical minesweeper the goal of the game is to discover all the mines laid out on a board without triggering them, in the quantum version there are several classical boards in superposition. The goal is to know the exact quantum state, i.e. the precise layout of all the mines in all the superposed classical boards. The player can perform three types of measurement: a classical measurement that probabilistically collapses the superposition; a quantum interaction-free measurement that can detect a mine without triggering it; and an entanglement measurement that provides non-local information. The application of the concepts taught by quantum minesweeper to one-way quantum computing are also presented.

  13. Algorithms versus architectures for computational chemistry

    NASA Technical Reports Server (NTRS)

    Partridge, H.; Bauschlicher, C. W., Jr.

    1986-01-01

    The algorithms employed are computationally intensive and, as a result, increased performance (both algorithmic and architectural) is required to improve accuracy and to treat larger molecular systems. Several benchmark quantum chemistry codes are examined on a variety of architectures. While these codes are only a small portion of a typical quantum chemistry library, they illustrate many of the computationally intensive kernels and data manipulation requirements of some applications. Furthermore, understanding the performance of the existing algorithms on present and proposed supercomputers serves as a guide for future programs and algorithm development. The algorithms investigated are: (1) a sparse symmetric matrix-vector product; (2) a four-index integral transformation; and (3) the calculation of diatomic two-electron Slater integrals. The vectorization strategies are examined for these algorithms for both the Cyber 205 and Cray XMP. In addition, multiprocessor implementations of the algorithms are examined on the Cray XMP and on the MIT static data flow machine proposed by Dennis.
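
    The second kernel, the four-index integral transformation, is the classic case of trading operation count for intermediates: four sequential quarter-transformations cost O(N^5) instead of the O(N^8) of a single contraction. A dense sketch with random arrays standing in for real integrals:

```python
# Four-index integral transformation (pq|rs) -> MO basis, done as four O(N^5)
# quarter-transformations rather than one O(N^8) contraction. Random arrays stand
# in for the AO integrals and MO coefficients.
import numpy as np

N = 8
rng = np.random.default_rng(1)
ao = rng.normal(size=(N, N, N, N))       # AO-basis two-electron integrals (placeholder)
C = rng.normal(size=(N, N))              # MO coefficient matrix (placeholder)

# Naive single contraction, O(N^8):
mo_naive = np.einsum("pi,qj,rk,sl,pqrs->ijkl", C, C, C, C, ao, optimize=False)

# Four quarter-transformations, O(N^5) each:
t = np.einsum("pi,pqrs->iqrs", C, ao)
t = np.einsum("qj,iqrs->ijrs", C, t)
t = np.einsum("rk,ijrs->ijks", C, t)
mo = np.einsum("sl,ijks->ijkl", C, t)

print(np.allclose(mo, mo_naive))         # True
```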

  14. Adiabatic topological quantum computing

    NASA Astrophysics Data System (ADS)

    Cesare, Chris; Landahl, Andrew J.; Bacon, Dave; Flammia, Steven T.; Neels, Alice

    2015-07-01

    Topological quantum computing promises error-resistant quantum computation without active error correction. However, there is a worry that during the process of executing quantum gates by braiding anyons around each other, extra anyonic excitations will be created that will disorder the encoded quantum information. Here, we explore this question in detail by studying adiabatic code deformations on Hamiltonians based on topological codes, notably Kitaev's surface codes and the more recently discovered color codes. We develop protocols that enable universal quantum computing by adiabatic evolution in a way that keeps the energy gap of the system constant with respect to the computation size and introduces only simple local Hamiltonian interactions. This allows one to perform holonomic quantum computing with these topological quantum computing systems. The tools we develop allow one to go beyond numerical simulations and understand these processes analytically.

  15. Quantum information and computation

    SciTech Connect

    Bennett, C.H.

    1995-10-01

    A new quantum theory of communication and computation is emerging, in which the stuff transmitted or processed is not classical information, but arbitrary superpositions of quantum states. © 1995 American Institute of Physics.

  16. Quantum Computing since Democritus

    NASA Astrophysics Data System (ADS)

    Aaronson, Scott

    2013-03-01

    1. Atoms and the void; 2. Sets; 3. Gödel, Turing, and friends; 4. Minds and machines; 5. Paleocomplexity; 6. P, NP, and friends; 7. Randomness; 8. Crypto; 9. Quantum; 10. Quantum computing; 11. Penrose; 12. Decoherence and hidden variables; 13. Proofs; 14. How big are quantum states?; 15. Skepticism of quantum computing; 16. Learning; 17. Interactive proofs and more; 18. Fun with the Anthropic Principle; 19. Free will; 20. Time travel; 21. Cosmology and complexity; 22. Ask me anything.

  17. Photonic Quantum Computing

    NASA Astrophysics Data System (ADS)

    Barz, Stefanie

    2013-05-01

    Quantum physics has revolutionized our understanding of information processing and enables computational speed-ups that are unattainable using classical computers. In this talk I will present a series of experiments in the field of photonic quantum computing. The first experiment is in the field of photonic state engineering and realizes the generation of heralded polarization-entangled photon pairs. It overcomes the limited applicability of photon-based schemes for quantum information processing tasks, which arises from the probabilistic nature of photon generation. The second experiment uses polarization-entangled photonic qubits to implement ``blind quantum computing,'' a new concept in quantum computing. Blind quantum computing enables a nearly-classical client to access the resources of a more computationally-powerful quantum server without divulging the content of the requested computation. Finally, the concept of blind quantum computing is applied to the field of verification. A new method is developed and experimentally demonstrated, which verifies the entangling capabilities of a quantum computer based on a blind Bell test.

  18. Savannah River Site computing architecture

    SciTech Connect

    Not Available

    1991-03-29

    A computing architecture is a framework for making decisions about the implementation of computer technology and the supporting infrastructure. Because of the size, diversity, and amount of resources dedicated to computing at the Savannah River Site (SRS), there must be an overall strategic plan that can be followed by the thousands of site personnel who make decisions daily that directly affect the SRS computing environment and impact the site's production and business systems. This plan must address the following requirements: There must be SRS-wide standards for procurement or development of computing systems (hardware and software). The site computing organizations must develop systems that end users find easy to use. Systems must be put in place to support the primary function of site information workers. The developers of computer systems must be given tools that automate and speed up the development of information systems and applications based on computer technology. This document describes a proposal for a site-wide computing architecture that addresses the above requirements. In summary, this architecture is standards-based, data-driven, and workstation-oriented with larger systems being utilized for the delivery of needed information to users in a client-server relationship.

  19. Dissipative quantum computing with open quantum walks

    SciTech Connect

    Sinayskiy, Ilya; Petruccione, Francesco

    2014-12-04

    An open quantum walk approach to the implementation of a dissipative quantum computing scheme is presented. The formalism is demonstrated for the example of an open quantum walk implementation of a 3 qubit quantum circuit consisting of 10 gates.

  1. Probabilistic Cloning and Quantum Computation

    NASA Astrophysics Data System (ADS)

    Gao, Ting; Yan, Feng-Li; Wang, Zhi-Xi

    2004-06-01

    We discuss the usefulness of quantum cloning and present examples of quantum computation tasks for which the cloning offers an advantage which cannot be matched by any approach that does not resort to quantum cloning. In these quantum computations, we need to distribute quantum information contained in the states about which we have some partial information. To perform quantum computations, we use a state-dependent probabilistic quantum cloning procedure to distribute quantum information in the middle of a quantum computation.

  2. Reliability/redundancy trade-off evaluation for multiplexed architectures used to implement quantum dot based computing

    SciTech Connect

    Bhaduri, D.; Shukla, S. K.; Graham, P. S.; Gokhale, M.

    2004-01-01

    With the advent of nanocomputing, researchers have proposed Quantum Dot Cellular Automata (QCA) as one of the implementation technologies. The majority gate is one of the fundamental gates implementable with QCAs. Moreover, majority gates play an important role in defect-tolerant circuit implementations for nanotechnologies due to their use in redundancy mechanisms such as TMR, CTMR etc. Therefore, providing reliable implementation of majority logic using some redundancy mechanism is extremely important. This problem was addressed by von Neumann in 1956, in the form of 'majority multiplexing' and since then several analytical probabilistic models have been proposed to analyze majority multiplexing circuits. However, such analytical approaches are extremely challenging combinatorially and error prone. Also the previous analyses did not distinguish between permanent faults at the gates and transient faults due to noisy interconnects or noise effects on gates. In this paper, we provide explicit fault models for transient and permanent errors at the gates and noise effects at the interconnects. We model majority multiplexing in a probabilistic system description language, and use probabilistic model checking to analyze the effects of our fault models on the different reliability/redundancy trade-offs for majority multiplexing configurations. We also draw parallels with another fundamental logic gate multiplexing technique, namely NAND multiplexing. Tools and methodologies for analyzing redundant architectures that use majority gates will help logic designers to quickly evaluate the amount of redundancy needed to achieve a given level of reliability. VLSI designs at the nanoscale will utilize implementation fabrics prone to faults of permanent and transient nature, and the interconnects will be extensively affected by noise, hence the need for tools that can capture probabilistically quantified fault models and provide quick evaluation of the trade-offs. A comparative
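
    The flavour of trade-off analysed here can already be seen in recursive triple modular redundancy with an imperfect majority gate, sketched below; the module and gate failure probabilities are illustrative, and the probabilistic-model-checking machinery of the paper is not reproduced.

```python
# One flavour of the reliability/redundancy trade-off: recursive triple modular
# redundancy with an imperfect majority gate. p is the per-module failure probability,
# eps the majority-gate failure probability; both values below are illustrative.
def tmr_failure(p_fail: float, eps: float) -> float:
    """Failure probability of a triplicated stage voted by an imperfect majority gate."""
    p_ok = 1.0 - p_fail
    vote_ok = p_ok ** 3 + 3 * p_ok ** 2 * p_fail      # at least 2 of 3 modules correct
    return 1.0 - (1.0 - eps) * vote_ok

p, eps = 0.05, 0.005
for level in range(4):
    print(f"level {level}: failure probability = {p:.4f}")
    p = tmr_failure(p, eps)
# Redundancy only pays off while the gate error eps stays well below the module error.
```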

  3. Parallel Architecture For Robotics Computation

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1990-01-01

    Universal Real-Time Robotic Controller and Simulator (URRCS) is highly parallel computing architecture for control and simulation of robot motion. Result of extensive algorithmic study of different kinematic and dynamic computational problems arising in control and simulation of robot motion. Study led to development of class of efficient parallel algorithms for these problems. Represents algorithmically specialized architecture, in sense capable of exploiting common properties of this class of parallel algorithms. System with both MIMD and SIMD capabilities. Regarded as processor attached to bus of external host processor, as part of bus memory.

  4. Quantum Analog Computing

    NASA Technical Reports Server (NTRS)

    Zak, M.

    1998-01-01

    Quantum analog computing is based upon the similarity between the mathematical formalism of quantum mechanics and the phenomena to be computed. It exploits a dynamical convergence of several competing phenomena to an attractor which can represent an extremum of a function, an image, a solution to a system of ODEs, or a stochastic process.

  5. VLSI Architectures for Computing DFT's

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Chang, J. J.; Hsu, I. S.; Reed, I. S.; Pei, D. Y.

    1986-01-01

    Simplifications result from use of residue Fermat number systems. System of finite arithmetic over residue Fermat number systems enables calculation of discrete Fourier transform (DFT) of series of complex numbers with reduced number of multiplications. Computer architectures based on approach suitable for design of very-large-scale integrated (VLSI) circuits for computing DFT's. General approach not limited to DFT's; Applicable to decoding of error-correcting codes and other transform calculations. System readily implemented in VLSI.

  6. Disciplines, models, and computers: the path to computational quantum chemistry.

    PubMed

    Lenhard, Johannes

    2014-12-01

    Many disciplines and scientific fields have undergone a computational turn in the past several decades. This paper analyzes this sort of turn by investigating the case of computational quantum chemistry. The main claim is that the transformation from quantum to computational quantum chemistry involved changes in three dimensions. First, on the side of instrumentation, small computers and a networked infrastructure took over the lead from centralized mainframe architecture. Second, a new conception of computational modeling became feasible and assumed a crucial role. And third, the field of computational quantum chemistry became organized in a market-like fashion and this market is much bigger than the number of quantum theory experts. These claims will be substantiated by an investigation of the so-called density functional theory (DFT), the arguably pivotal theory in the turn to computational quantum chemistry around 1990. PMID:25571750

  7. Quantum computing with trapped ions

    SciTech Connect

    Hughes, R.J.

    1998-01-01

    The significance of quantum computation for cryptography is discussed. Following a brief survey of the requirements for quantum computational hardware, an overview of the ion trap quantum computation project at Los Alamos is presented. The physical limitations to quantum computation with trapped ions are analyzed and an assessment of the computational potential of the technology is made.

  8. Quantum computation: Honesty test

    NASA Astrophysics Data System (ADS)

    Morimae, Tomoyuki

    2013-11-01

    Alice does not have a quantum computer so she delegates a computation to Bob, who does own one. But how can Alice check whether the computation that Bob performs for her is correct? An experiment with photonic qubits demonstrates such a verification protocol.

  9. Entanglement and adiabatic quantum computation

    NASA Astrophysics Data System (ADS)

    Ahrensmeier, D.

    2006-06-01

    Adiabatic quantum computation provides an alternative approach to quantum computation using a time-dependent Hamiltonian. The time evolution of entanglement during the adiabatic quantum search algorithm is studied, and its relevance as a resource is discussed.

  10. Optimal architectures for long distance quantum communication

    PubMed Central

    Muralidharan, Sreraman; Li, Linshu; Kim, Jungsang; Lütkenhaus, Norbert; Lukin, Mikhail D.; Jiang, Liang

    2016-01-01

    Despite the tremendous progress of quantum cryptography, efficient quantum communication over long distances (≥1000 km) remains an outstanding challenge due to fiber attenuation and operation errors accumulated over the entire communication distance. Quantum repeaters (QRs), as a promising approach, can overcome both photon loss and operation errors, and hence significantly speed up the communication rate. Depending on the methods used to correct loss and operation errors, all the proposed QR schemes can be classified into three categories (generations). Here we present the first systematic comparison of three generations of quantum repeaters by evaluating the cost of both temporal and physical resources, and identify the optimized quantum repeater architecture for a given set of experimental parameters for use in quantum key distribution. Our work provides a roadmap for the experimental realizations of highly efficient quantum networks over transcontinental distances. PMID:26876670

  11. Optimal architectures for long distance quantum communication.

    PubMed

    Muralidharan, Sreraman; Li, Linshu; Kim, Jungsang; Lütkenhaus, Norbert; Lukin, Mikhail D; Jiang, Liang

    2016-01-01

    Despite the tremendous progress of quantum cryptography, efficient quantum communication over long distances (≥ 1000 km) remains an outstanding challenge due to fiber attenuation and operation errors accumulated over the entire communication distance. Quantum repeaters (QRs), as a promising approach, can overcome both photon loss and operation errors, and hence significantly speed up the communication rate. Depending on the methods used to correct loss and operation errors, all the proposed QR schemes can be classified into three categories (generations). Here we present the first systematic comparison of three generations of quantum repeaters by evaluating the cost of both temporal and physical resources, and identify the optimized quantum repeater architecture for a given set of experimental parameters for use in quantum key distribution. Our work provides a roadmap for the experimental realizations of highly efficient quantum networks over transcontinental distances. PMID:26876670

  12. Optimal architectures for long distance quantum communication

    NASA Astrophysics Data System (ADS)

    Muralidharan, Sreraman; Li, Linshu; Kim, Jungsang; Lütkenhaus, Norbert; Lukin, Mikhail D.; Jiang, Liang

    2016-02-01

    Despite the tremendous progress of quantum cryptography, efficient quantum communication over long distances (≥1000 km) remains an outstanding challenge due to fiber attenuation and operation errors accumulated over the entire communication distance. Quantum repeaters (QRs), as a promising approach, can overcome both photon loss and operation errors, and hence significantly speed up the communication rate. Depending on the methods used to correct loss and operation errors, all the proposed QR schemes can be classified into three categories (generations). Here we present the first systematic comparison of three generations of quantum repeaters by evaluating the cost of both temporal and physical resources, and identify the optimized quantum repeater architecture for a given set of experimental parameters for use in quantum key distribution. Our work provides a roadmap for the experimental realizations of highly efficient quantum networks over transcontinental distances.
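
    The loss problem motivating all three repeater generations is easy to quantify: fiber transmission falls as η = 10^(−αL/10). The sketch below uses the standard telecom attenuation α ≈ 0.2 dB/km, which is a textbook figure rather than a parameter from the paper.

```python
# Direct fiber transmission eta = 10**(-alpha * L / 10) with alpha ~ 0.2 dB/km,
# versus the per-segment transmission when the same span is divided among repeaters.
# alpha is the standard telecom figure; segment counts are arbitrary examples.
alpha_db_per_km = 0.2
L_km = 1000.0

direct = 10 ** (-alpha_db_per_km * L_km / 10)          # ~1e-20: direct transmission is hopeless
for segments in (10, 100):
    per_segment = 10 ** (-alpha_db_per_km * (L_km / segments) / 10)
    print(f"{segments} segments: per-segment transmission = {per_segment:.3f}")
print(f"direct 1000 km transmission = {direct:.1e}")
```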

  13. Verifiable Quantum Computing

    NASA Astrophysics Data System (ADS)

    Kashefi, Elham

    Over the next five to ten years we will see a state of flux as quantum devices become part of the mainstream computing landscape. However adopting and applying such a highly variable and novel technology is both costly and risky as this quantum approach has an acute verification and validation problem: On the one hand, since classical computations cannot scale up to the computational power of quantum mechanics, verifying the correctness of a quantum-mediated computation is challenging; on the other hand, the underlying quantum structure resists classical certification analysis. Our grand aim is to settle these key milestones to make the translation from theory to practice possible. Currently the most efficient ways to verify a quantum computation is to employ cryptographic methods. I will present the current state of the art of various existing protocols where generally there exists a trade-off between the practicality of the scheme versus their generality, trust assumptions and security level. EK gratefully acknowledges funding through EPSRC Grants EP/N003829/1 and EP/M013243/1.

  14. One-way quantum computation with circuit quantum electrodynamics

    SciTech Connect

    Wu Chunwang; Han Yang; Chen Pingxing; Li Chengzu; Zhong Xiaojun

    2010-03-15

    In this Brief Report, we propose a potential scheme to implement one-way quantum computation with circuit quantum electrodynamics (QED). Large cluster states of charge qubits can be generated in just one step with a superconducting transmission line resonator (TLR) playing the role of a dispersive coupler. A single-qubit measurement in the arbitrary basis can be implemented using a single electron transistor with the help of one-qubit gates. By examining the main decoherence sources, we show that circuit QED is a promising architecture for one-way quantum computation.
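
    The resource for one-way computation is a cluster (graph) state: qubits prepared in |+⟩ and entangled pairwise by controlled-Z. A generic statevector construction of a small linear cluster, independent of the circuit-QED scheme above, is sketched below.

```python
# Build a small linear cluster state: all qubits in |+>, then CZ on each edge.
# Generic statevector construction; the circuit-QED scheme above prepares the
# analogous state in a single step via the shared resonator.
import numpy as np

n = 4
edges = [(0, 1), (1, 2), (2, 3)]                     # linear cluster

plus = np.ones(2) / np.sqrt(2)
psi = plus
for _ in range(n - 1):
    psi = np.kron(psi, plus)                         # |+>^n

for (a, b) in edges:                                 # apply CZ on each edge
    for idx in range(2 ** n):
        if (idx >> (n - 1 - a)) & 1 and (idx >> (n - 1 - b)) & 1:
            psi[idx] *= -1.0

# Check one stabilizer, X_0 Z_1, which should leave the cluster state unchanged.
X = np.array([[0, 1], [1, 0]]); Z = np.diag([1, -1]); I2 = np.eye(2)
S = np.kron(np.kron(X, Z), np.kron(I2, I2))
print(np.allclose(S @ psi, psi))                     # True
```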

  15. Quantum Computing using Photons

    NASA Astrophysics Data System (ADS)

    Elhalawany, Ahmed; Leuenberger, Michael

    2013-03-01

    In this work, we propose a theoretical model of two-quantum bit gates for quantum computation using the polarization states of two photons in a microcavity. By letting the two photons interact non-resonantly with four quantum dots inside the cavity, we obtain an effective photon-photon interaction which we exploit for the implementation of a universal XOR gate. The two-photon Hamiltonian is written in terms of the photons' total angular momentum operators and their states are written using the Schwinger representation of the total angular momentum.

  16. Computational quantum chemistry website

    SciTech Connect

    1997-08-22

    This report contains the contents of a web page related to research on the development of quantum chemistry methods for computational thermochemistry and the application of quantum chemistry methods to problems in material chemistry and chemical sciences. Research programs highlighted include: Gaussian-2 theory; Density functional theory; Molecular sieve materials; Diamond thin-film growth from buckyball precursors; Electronic structure calculations on lithium polymer electrolytes; Long-distance electronic coupling in donor/acceptor molecules; and Computational studies of NOx reactions in radioactive waste storage.

  17. The flight telerobotic servicer: From functional architecture to computer architecture

    NASA Technical Reports Server (NTRS)

    Lumia, Ronald; Fiala, John

    1989-01-01

    After a brief tutorial on the NASA/National Bureau of Standards Standard Reference Model for Telerobot Control System Architecture (NASREM) functional architecture, the approach to its implementation is shown. First, interfaces must be defined which are capable of supporting the known algorithms. This is illustrated by considering the interfaces required for the SERVO level of the NASREM functional architecture. After interface definition, the specific computer architecture for the implementation must be determined. This choice is obviously technology dependent. An example illustrating one possible mapping of the NASREM functional architecture to a particular set of computers which implements it is shown. The result of choosing the NASREM functional architecture is that it provides a technology independent paradigm which can be mapped into a technology dependent implementation capable of evolving with technology in the laboratory and in space.

  18. Cavity-based architecture to preserve quantum coherence and entanglement

    PubMed Central

    Man, Zhong-Xiao; Xia, Yun-Jie; Lo Franco, Rosario

    2015-01-01

    Quantum technology relies on the utilization of resources, like quantum coherence and entanglement, which allow quantum information and computation processing. This achievement is, however, jeopardized by the detrimental effects of the environment surrounding any quantum system, so that finding strategies to protect quantum resources is essential. Non-Markovian and structured environments are useful tools to this aim. Here we show how a simple environmental architecture made of two coupled lossy cavities enables a switch between Markovian and non-Markovian regimes for the dynamics of a qubit embedded in one of the cavities. Furthermore, qubit coherence can be indefinitely preserved if the cavity without the qubit is perfect. We then focus on entanglement control of two independent qubits locally subject to such an engineered environment and discuss its feasibility in the framework of circuit quantum electrodynamics. With up-to-date experimental parameters, we show that our architecture allows entanglement lifetimes orders of magnitude longer than the spontaneous lifetime without local cavity couplings. This cavity-based architecture is straightforwardly extendable to many qubits for scalability. PMID:26351004

  19. Cavity-based architecture to preserve quantum coherence and entanglement.

    PubMed

    Man, Zhong-Xiao; Xia, Yun-Jie; Lo Franco, Rosario

    2015-01-01

    Quantum technology relies on the utilization of resources, like quantum coherence and entanglement, which allow quantum information and computation processing. This achievement is, however, jeopardized by the detrimental effects of the environment surrounding any quantum system, so that finding strategies to protect quantum resources is essential. Non-Markovian and structured environments are useful tools to this aim. Here we show how a simple environmental architecture made of two coupled lossy cavities enables a switch between Markovian and non-Markovian regimes for the dynamics of a qubit embedded in one of the cavities. Furthermore, qubit coherence can be indefinitely preserved if the cavity without the qubit is perfect. We then focus on entanglement control of two independent qubits locally subject to such an engineered environment and discuss its feasibility in the framework of circuit quantum electrodynamics. With up-to-date experimental parameters, we show that our architecture allows entanglement lifetimes orders of magnitude longer than the spontaneous lifetime without local cavity couplings. This cavity-based architecture is straightforwardly extendable to many qubits for scalability. PMID:26351004

  20. Undergraduate computational physics projects on quantum computing

    NASA Astrophysics Data System (ADS)

    Candela, D.

    2015-08-01

    Computational projects on quantum computing suitable for students in a junior-level quantum mechanics course are described. In these projects students write their own programs to simulate quantum computers. Knowledge is assumed of introductory quantum mechanics through the properties of spin 1/2. Initial, more easily programmed projects treat the basics of quantum computation, quantum gates, and Grover's quantum search algorithm. These are followed by more advanced projects to increase the number of qubits and implement Shor's quantum factoring algorithm. The projects can be run on a typical laptop or desktop computer, using most programming languages. Supplementing resources available elsewhere, the projects are presented here in a self-contained format especially suitable for a short computational module for physics students.
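
    In the spirit of the student projects described in this record, a state-vector simulation of Grover's search fits in a few lines of Python/NumPy; the qubit count and marked index below are arbitrary choices for illustration.

      # Grover's search on n = 3 qubits for a single marked item, simulated with a state vector
      import numpy as np

      n = 3
      N = 2 ** n
      marked = 5                                   # index of the "needle"

      state = np.ones(N) / np.sqrt(N)              # uniform superposition
      oracle = np.eye(N); oracle[marked, marked] = -1
      diffuser = 2 * np.full((N, N), 1 / N) - np.eye(N)

      steps = int(round(np.pi / 4 * np.sqrt(N)))   # ~ (pi/4) sqrt(N) iterations
      for _ in range(steps):
          state = diffuser @ (oracle @ state)

      probs = state ** 2
      print("most likely outcome:", int(np.argmax(probs)),
            "with probability", round(float(probs[marked]), 3))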

  1. Hybrid quantum computing: semicloning for general database retrieval

    NASA Astrophysics Data System (ADS)

    Lanzagorta, Marco; Uhlmann, Jeffrey K.

    2005-05-01

    Quantum computing (QC) has become an important area of research in computer science because of its potential to provide more efficient algorithmic solutions to certain problems than are possible with classical computing (CC). In particular, QC is able to exploit the special properties of quantum superposition to achieve computational parallelism beyond what can be achieved with parallel CC computers. However, these special properties are not applicable for general computation. Therefore, we propose the use of "hybrid quantum computers" (HQCs) that combine both classical and quantum computing architectures in order to leverage the benefits of both. We demonstrate how an HQC can exploit quantum search to support general database operations more efficiently than is possible with CC. Our solution is based on new quantum results that are of independent significance to the field of quantum computing. More specifically, we demonstrate that the most restrictive implications of the quantum No-Cloning Theorem can be avoided through the use of semiclones.

  2. Quantum architecture of novel solids

    NASA Astrophysics Data System (ADS)

    Zunger, A.

    2001-01-01

    The current status of our understanding of Quantum Mechanics is that if one specifies the chemical formula of a compound (e.g., CuAu, or GaAs, or NiPt) it is still impossible to predict if this material is a superconductor or not, but it is now possible to predict its crystal structure. This is a nontrivial accomplishment for there are as many as 2^N possible structures for a binary compound. This article reviews this classic question of structural chemistry and condensed matter physics: How can one figure out which of the astronomic number of possible crystal structures is selected by Nature?

  3. Quantum computers: Definition and implementations

    SciTech Connect

    Perez-Delgado, Carlos A.; Kok, Pieter

    2011-01-15

    The DiVincenzo criteria for implementing a quantum computer have been seminal in focusing both experimental and theoretical research in quantum-information processing. These criteria were formulated specifically for the circuit model of quantum computing. However, several new models for quantum computing (paradigms) have been proposed that do not seem to fit the criteria well. Therefore, the question is what are the general criteria for implementing quantum computers. To this end, a formal operational definition of a quantum computer is introduced. It is then shown that, according to this definition, a device is a quantum computer if it obeys the following criteria: Any quantum computer must consist of a quantum memory, with an additional structure that (1) facilitates a controlled quantum evolution of the quantum memory; (2) includes a method for information theoretic cooling of the memory; and (3) provides a readout mechanism for subsets of the quantum memory. The criteria are met when the device is scalable and operates fault tolerantly. We discuss various existing quantum computing paradigms and how they fit within this framework. Finally, we present a decision tree for selecting an avenue toward building a quantum computer. This is intended to help experimentalists determine the most natural paradigm given a particular physical implementation.

  4. Quantum computers: Definition and implementations

    NASA Astrophysics Data System (ADS)

    Pérez-Delgado, Carlos A.; Kok, Pieter

    2011-01-01

    The DiVincenzo criteria for implementing a quantum computer have been seminal in focusing both experimental and theoretical research in quantum-information processing. These criteria were formulated specifically for the circuit model of quantum computing. However, several new models for quantum computing (paradigms) have been proposed that do not seem to fit the criteria well. Therefore, the question is what are the general criteria for implementing quantum computers. To this end, a formal operational definition of a quantum computer is introduced. It is then shown that, according to this definition, a device is a quantum computer if it obeys the following criteria: Any quantum computer must consist of a quantum memory, with an additional structure that (1) facilitates a controlled quantum evolution of the quantum memory; (2) includes a method for information theoretic cooling of the memory; and (3) provides a readout mechanism for subsets of the quantum memory. The criteria are met when the device is scalable and operates fault tolerantly. We discuss various existing quantum computing paradigms and how they fit within this framework. Finally, we present a decision tree for selecting an avenue toward building a quantum computer. This is intended to help experimentalists determine the most natural paradigm given a particular physical implementation.

  5. Brain architecture: a design for natural computation.

    PubMed

    Kaiser, Marcus

    2007-12-15

    Fifty years ago, John von Neumann compared the architecture of the brain with that of the computers he invented and which are still in use today. In those days, the organization of computers was based on concepts of brain organization. Here, we give an update on current results on the global organization of neural systems. For neural systems, we outline how the spatial and topological architecture of neuronal and cortical networks facilitates robustness against failures, fast processing and balanced network activation. Finally, we discuss mechanisms of self-organization for such architectures. After all, the organization of the brain might again inspire computer architecture. PMID:17855223

  6. Quantum-cellular-automata quantum computing with endohedral fullerenes

    NASA Astrophysics Data System (ADS)

    Twamley, J.

    2003-05-01

    We present a scheme to perform universal quantum computation using global addressing techniques as applied to a physical system of endohedrally doped fullerenes. The system consists of an ABAB linear array of group-V endohedrally doped fullerenes. Each molecule spin site consists of a nuclear spin coupled via a hyperfine interaction to an electron spin. The electron spin of each molecule is in a quartet ground state S=3/2. Neighboring molecular electron spins are coupled via a magnetic dipole interaction. We find that an all-electron construction of a quantum cellular automaton is frustrated due to the degeneracy of the electronic transitions. However, we can construct a quantum-cellular-automata quantum computing architecture using these molecules by encoding the quantum information on the nuclear spins while using the electron spins as a local bus. We deduce the NMR and ESR pulses required to execute the basic cellular automaton operation and obtain a rough figure of merit for the number of gate operations per decoherence time. We find that this figure of merit compares well with other physical quantum computer proposals. We argue that the proposed architecture meets well the first four DiVincenzo criteria and we outline various routes toward meeting the fifth criterion: qubit readout.

  7. A surface code quantum computer in silicon.

    PubMed

    Hill, Charles D; Peretz, Eldad; Hile, Samuel J; House, Matthew G; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y; Hollenberg, Lloyd C L

    2015-10-01

    The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel, posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited. PMID:26601310

  8. A surface code quantum computer in silicon

    PubMed Central

    Hill, Charles D.; Peretz, Eldad; Hile, Samuel J.; House, Matthew G.; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y.; Hollenberg, Lloyd C. L.

    2015-01-01

    The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel—posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited. PMID:26601310

  9. Computational Biology, Advanced Scientific Computing, and Emerging Computational Architectures

    SciTech Connect

    2007-06-27

    This CRADA was established at the start of FY02 with $200 K from IBM and matching funds from DOE to support post-doctoral fellows in collaborative research between International Business Machines and Oak Ridge National Laboratory to explore effective use of emerging petascale computational architectures for the solution of computational biology problems. 'No cost' extensions of the CRADA were negotiated with IBM for FY03 and FY04.

  10. Quantum computing on encrypted data

    NASA Astrophysics Data System (ADS)

    Fisher, K. A. G.; Broadbent, A.; Shalm, L. K.; Yan, Z.; Lavoie, J.; Prevedel, R.; Jennewein, T.; Resch, K. J.

    2014-01-01

    The ability to perform computations on encrypted data is a powerful tool for protecting privacy. Recently, protocols to achieve this on classical computing systems have been found. Here, we present an efficient solution to the quantum analogue of this problem that enables arbitrary quantum computations to be carried out on encrypted quantum data. We prove that an untrusted server can implement a universal set of quantum gates on encrypted quantum bits (qubits) without learning any information about the inputs, while the client, knowing the decryption key, can easily decrypt the results of the computation. We experimentally demonstrate, using single photons and linear optics, the encryption and decryption scheme on a set of gates sufficient for arbitrary quantum computations. As our protocol requires few extra resources compared with other schemes it can be easily incorporated into the design of future quantum servers. These results will play a key role in enabling the development of secure distributed quantum systems.

  11. Quantum computing on encrypted data.

    PubMed

    Fisher, K A G; Broadbent, A; Shalm, L K; Yan, Z; Lavoie, J; Prevedel, R; Jennewein, T; Resch, K J

    2014-01-01

    The ability to perform computations on encrypted data is a powerful tool for protecting privacy. Recently, protocols to achieve this on classical computing systems have been found. Here, we present an efficient solution to the quantum analogue of this problem that enables arbitrary quantum computations to be carried out on encrypted quantum data. We prove that an untrusted server can implement a universal set of quantum gates on encrypted quantum bits (qubits) without learning any information about the inputs, while the client, knowing the decryption key, can easily decrypt the results of the computation. We experimentally demonstrate, using single photons and linear optics, the encryption and decryption scheme on a set of gates sufficient for arbitrary quantum computations. As our protocol requires few extra resources compared with other schemes it can be easily incorporated into the design of future quantum servers. These results will play a key role in enabling the development of secure distributed quantum systems. PMID:24445949
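
    A small Python/NumPy sketch of the quantum one-time pad that underlies encrypted quantum computing schemes of this kind (it is not the full gate-by-gate protocol of the paper): a qubit encrypted with random Pauli keys looks maximally mixed to the server on average, while the client who holds the keys decrypts exactly. The test state is arbitrary.

      # Quantum one-time pad: encrypt a qubit with random X^a Z^b Pauli keys
      import numpy as np

      I2 = np.eye(2); X = np.array([[0, 1], [1, 0]]); Z = np.diag([1.0, -1.0])
      rng = np.random.default_rng(0)

      psi = np.array([np.cos(0.3), np.sin(0.3) * np.exp(1j * 0.7)])   # client's qubit
      rho = np.outer(psi, psi.conj())

      avg = np.zeros((2, 2), dtype=complex)
      for a in (0, 1):
          for b in (0, 1):
              K = np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
              avg += 0.25 * K @ rho @ K.conj().T    # average over the unknown keys
      print(np.round(avg, 3))                       # I/2: the server learns nothing

      a, b = rng.integers(0, 2, size=2)             # the actual secret keys
      K = np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
      decrypted = K.conj().T @ (K @ psi)            # client undoes the pad
      print(np.allclose(decrypted, psi))            # True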

  12. Quantum Computing's Classical Problem, Classical Computing's Quantum Problem

    NASA Astrophysics Data System (ADS)

    Van Meter, Rodney

    2014-08-01

    Tasked with the challenge to build better and better computers, quantum computing and classical computing face the same conundrum: the success of classical computing systems. Small quantum computing systems have been demonstrated, and intermediate-scale systems are on the horizon, capable of calculating numeric results or simulating physical systems far beyond what humans can do by hand. However, to be commercially viable, they must surpass what our wildly successful, highly advanced classical computers can already do. At the same time, those classical computers continue to advance, but those advances are now constrained by thermodynamics, and will soon be limited by the discrete nature of atomic matter and ultimately quantum effects. Technological advances benefit both quantum and classical machinery, altering the competitive landscape. Can we build quantum computing systems that out-compute classical systems capable of some logic gates per month? This article will discuss the interplay in these competing and cooperating technological trends.

  13. Quantum computing with defects.

    PubMed

    Weber, J R; Koehl, W F; Varley, J B; Janotti, A; Buckley, B B; Van de Walle, C G; Awschalom, D D

    2010-05-11

    Identifying and designing physical systems for use as qubits, the basic units of quantum information, are critical steps in the development of a quantum computer. Among the possibilities in the solid state, a defect in diamond known as the nitrogen-vacancy (NV(-1)) center stands out for its robustness--its quantum state can be initialized, manipulated, and measured with high fidelity at room temperature. Here we describe how to systematically identify other deep center defects with similar quantum-mechanical properties. We present a list of physical criteria that these centers and their hosts should meet and explain how these requirements can be used in conjunction with electronic structure theory to intelligently sort through candidate defect systems. To illustrate these points in detail, we compare electronic structure calculations of the NV(-1) center in diamond with those of several deep centers in 4H silicon carbide (SiC). We then discuss the proposed criteria for similar defects in other tetrahedrally coordinated semiconductors. PMID:20404195

  14. Silicon enhancement mode nanostructures for quantum computing.

    SciTech Connect

    Carroll, Malcolm S.

    2010-03-01

    Development of silicon enhancement-mode nanostructures for solid-state quantum computing will be described. A primary motivation of this research is the recent unprecedented manipulation of single electron spins in GaAs quantum dots, which has been used to demonstrate a quantum bit. Long spin decoherence times are predicted to be possible in silicon qubits. This talk will focus on silicon enhancement mode quantum dot structures that emulate the GaAs lateral quantum dot qubit but use an enhancement mode field effect transistor (FET) structure. One critical concern for silicon quantum dots that use oxides as insulators in the FET structure is that defects in the metal oxide semiconductor (MOS) stack can produce both detrimental electrostatic and paramagnetic effects on the qubit. Understanding the implications of defects in the Si MOS system is also relevant for other qubit architectures that have nearby dielectric passivated surfaces. Stable, lithographically defined, single-period Coulomb-blockade and single-electron charge sensing in a quantum dot nanostructure using a MOS stack will be presented. A combination of characterization of defects, modeling and consideration of modified approaches that incorporate SiGe or donors provides guidance about the enhancement mode MOS approach for future qubits and quantum circuit micro-architecture.

  15. Teaching Computer Aided Architectural Design at UCLA.

    ERIC Educational Resources Information Center

    Mitchell, William J.

    This brief overview includes a rationale for the program and describes course goals and objectives, curriculum content, teaching methods and materials, staffing, and problems of integrating computer aided design with traditional architectural curricula at the School of Architecture and Urban Planning at UCLA. A list of texts for use in teaching…

  16. Semiconductor-inspired superconducting quantum computing

    NASA Astrophysics Data System (ADS)

    Shim, Yun-Pil

    Superconducting circuits offer tremendous design flexibility in the quantum regime culminating most recently in the demonstration of few qubit systems supposedly approaching the threshold for fault-tolerant quantum information processing. Competition in the solid-state comes from semiconductor qubits, where nature has bestowed some very useful properties which can be utilized for spin qubit based quantum computing. Here we present an architecture for superconducting quantum computing based on selective design principles deduced from spin-based systems. We propose an encoded qubit approach realizable with state-of-the-art tunable Josephson junction qubits. Our results show that this design philosophy holds promise, enables microwave-free control, and offers a pathway to future qubit designs with new capabilities such as with higher fidelity or, perhaps, operation at higher temperature. The approach is especially suited to qubits based on variable super-semi junctions.

  17. Universal quantum computation by discontinuous quantum walk

    SciTech Connect

    Underwood, Michael S.; Feder, David L.

    2010-10-15

    Quantum walks are the quantum-mechanical analog of random walks, in which a quantum "walker" evolves between initial and final states by traversing the edges of a graph, either in discrete steps from node to node or via continuous evolution under the Hamiltonian furnished by the adjacency matrix of the graph. We present a hybrid scheme for universal quantum computation in which a quantum walker takes discrete steps of continuous evolution. This "discontinuous" quantum walk employs perfect quantum-state transfer between two nodes of specific subgraphs chosen to implement a universal gate set, thereby ensuring unitary evolution without requiring the introduction of an ancillary coin space. The run time is linear in the number of simulated qubits and gates. The scheme allows multiple runs of the algorithm to be executed almost simultaneously by starting walkers one time step apart.
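
    The basic ingredient referred to above, continuous evolution under a graph adjacency matrix producing perfect quantum-state transfer, can be checked directly; the two-node example below is a minimal illustration and does not reproduce the paper's universal-gate constructions.

      # Continuous-time quantum walk: perfect state transfer across one edge at t = pi/2
      import numpy as np
      from scipy.linalg import expm

      A = np.array([[0, 1],
                    [1, 0]])                 # adjacency matrix of a single edge

      start = np.array([1, 0])               # walker localized on node 0
      t = np.pi / 2
      final = expm(-1j * A * t) @ start

      print(np.round(np.abs(final) ** 2, 6)) # [0, 1]: the walker is now on node 1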

  18. A scalable quantum architecture using efficient non-local gates

    NASA Astrophysics Data System (ADS)

    Brennen, Gavin

    2003-03-01

    Many protocols for quantum information processing use a control sequence or circuit of interactions between qubits and control fields wherein arbitrary qubits can be made to interact with one another. The primary problem with many "physically scalable" architectures is that the qubits are restricted to nearest neighbor interactions and quantum wires between distant qubits do not exist. Because of errors, nearest neighbor interactions often present difficulty with scalability. We describe a protocol that efficiently performs non-local gates between elements of separated static logical qubits using a bus of dynamic qubits as a refreshable entanglement resource. Imperfect resource preparation due to error propagation from noisy gates and measurement errors can be purified within the bus channel. Because of the inherent parallelism of entanglement swapping, communication latency within the quantum computer can be significantly reduced.
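
    The primitive behind such a refreshable entanglement bus is entanglement swapping. The following Python/NumPy sketch (idealized, with no noise or purification step) shows two Bell pairs A-B1 and B2-C being converted into an A-C Bell pair by projecting the bus qubits B1 and B2 onto a Bell state.

      # Entanglement swapping: Bell pairs (A,B1) and (B2,C) -> Bell pair (A,C)
      import numpy as np

      bell = np.array([1, 0, 0, 1]) / np.sqrt(2)         # (|00> + |11>)/sqrt(2)
      state = np.kron(bell, bell)                        # qubit order: A, B1, B2, C

      # Project the bus qubits B1, B2 onto the Bell state (|00> + |11>)/sqrt(2)
      P = np.kron(np.kron(np.eye(2), np.outer(bell, bell)), np.eye(2))
      post = P @ state
      post /= np.linalg.norm(post)

      # Trace out the bus and inspect the reduced state of A and C
      psi = post.reshape(2, 2, 2, 2)                     # indices: A, B1, B2, C
      rho_ac = np.einsum('abcd,ebcf->adef', psi, psi.conj()).reshape(4, 4)
      print(np.round(rho_ac.real, 3))  # 0.5 on the |00>,|11> corners: an A-C Bell pair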

  19. Communication Capacity of Quantum Computation

    NASA Astrophysics Data System (ADS)

    Bose, S.; Rallan, L.; Vedral, V.

    2000-12-01

    By considering quantum computation as a communication process, we relate its efficiency to its classical communication capacity. This formalism allows us to derive lower bounds on the complexity of search algorithms in the most general context. It enables us to link the mixedness of a quantum computer to its efficiency and also allows us to derive the critical level of mixedness beyond which there is no quantum advantage in computation.

  20. Algorithms Bridging Quantum Computation and Chemistry

    NASA Astrophysics Data System (ADS)

    McClean, Jarrod Ryan

    The design of new materials and chemicals derived entirely from computation has long been a goal of computational chemistry, and the governing equation whose solution would permit this dream is known. Unfortunately, the exact solution to this equation has been far too expensive and clever approximations fail in critical situations. Quantum computers offer a novel solution to this problem. In this work, we develop not only new algorithms to use quantum computers to study hard problems in chemistry, but also explore how such algorithms can help us to better understand and improve our traditional approaches. In particular, we first introduce a new method, the variational quantum eigensolver, which is designed to maximally utilize the quantum resources available in a device to solve chemical problems. We apply this method in a real quantum photonic device in the lab to study the dissociation of the helium hydride (HeH+) molecule. We also enhance this methodology with architecture specific optimizations on ion trap computers and show how linear-scaling techniques from traditional quantum chemistry can be used to improve the outlook of similar algorithms on quantum computers. We then show how studying quantum algorithms such as these can be used to understand and enhance the development of classical algorithms. In particular we use a tool from adiabatic quantum computation, Feynman's Clock, to develop a new discrete time variational principle and further establish a connection between real-time quantum dynamics and ground state eigenvalue problems. We use these tools to develop two novel parallel-in-time quantum algorithms that outperform competitive algorithms as well as offer new insights into the connection between the fermion sign problem of ground states and the dynamical sign problem of quantum dynamics. Finally we use insights gained in the study of quantum circuits to explore a general notion of sparsity in many-body quantum systems. In particular we use
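
    As a purely classical toy of the variational quantum eigensolver idea mentioned in this record (the 2x2 Hamiltonian and one-parameter ansatz below are invented for illustration and are unrelated to HeH+), the outer optimization loop can be written as follows; on real hardware the energy evaluation inside the loop would instead come from measured expectation values.

      # Toy variational loop: minimize <psi(theta)|H|psi(theta)> over a one-parameter ansatz
      import numpy as np
      from scipy.optimize import minimize_scalar

      H = np.array([[ 0.5,  0.2],
                    [ 0.2, -1.0]])           # toy 2x2 Hamiltonian (hypothetical numbers)

      def energy(theta):
          psi = np.array([np.cos(theta), np.sin(theta)])   # |psi> = cos(t)|0> + sin(t)|1>
          return psi @ H @ psi                              # expectation value <H>

      res = minimize_scalar(energy, bounds=(0, np.pi), method='bounded')
      exact = np.linalg.eigvalsh(H)[0]
      print("variational estimate:", round(float(res.fun), 6),
            "exact ground energy:", round(float(exact), 6))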

  1. A computer architecture for intelligent machines

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Saridis, G. N.

    1991-01-01

    The Theory of Intelligent Machines proposes a hierarchical organization for the functions of an autonomous robot based on the Principle of Increasing Precision With Decreasing Intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed in recent years. A computer architecture that implements the lower two levels of the intelligent machine is presented. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Details of Execution Level controllers for motion and vision systems are addressed, as well as the Petri net transducer software used to implement Coordination Level functions. Extensions to UNIX and VxWorks operating systems which enable the development of a heterogeneous, distributed application are described. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.

  2. A computer architecture for intelligent machines

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Saridis, G. N.

    1992-01-01

    The theory of intelligent machines proposes a hierarchical organization for the functions of an autonomous robot based on the principle of increasing precision with decreasing intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed. The authors present a computer architecture that implements the lower two levels of the intelligent machine. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Execution-level controllers for motion and vision systems are briefly addressed, as well as the Petri net transducer software used to implement coordination-level functions. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.

  3. Towards Quantum Computing With Light

    NASA Astrophysics Data System (ADS)

    Pysher, Matthew

    This thesis presents experimental progress towards the realization of an optical quantum computer. Quantum computers replace the bits used in classical computing with quantum systems and promise an exponential speedup over their classical counterparts for certain tasks such as integer factoring and the simulation of quantum systems. A recently proposed quantum computing protocol known as one-way quantum computing has paved the way for the use of light in a functional quantum computer. One-way quantum computing calls for the generation of a large (consisting of many subsystems) entangled state known as a cluster state to serve as a quantum register. Entangled states are composed of subsystems linked in such a way that the state cannot be separated into individual components. A recent proposal has shown that it is possible to make arbitrarily large cluster states by linking the resonant frequency modes of a single optical parametric oscillator (OPO). In this thesis, we present two major steps towards the creation of such a cluster state. Namely, we successfully design and test the exotic nonlinear crystal needed in this proposal and use a slight variation on this proposal to simultaneously create over 15 four-mode cluster states in a single OPO. We also explore the possibility of scaling down the physical size of an optical quantum computer by generating squeezed states of light in a compact optical waveguide. Additionally, we investigate photon-number-resolving measurements on continuous quantum light sources, which will be necessary to obtain the desired speedups for a quantum computer over a classical computer.

  4. Quantum Nash Equilibria and Quantum Computing

    NASA Astrophysics Data System (ADS)

    Fellman, Philip Vos; Post, Jonathan Vos

    In 2004, at the Fifth International Conference on Complex Systems, we drew attention to some remarkable findings by researchers at the Santa Fe Institute (Sato, Farmer and Akiyama, 2001) about hitherto unsuspected complexity in the Nash Equilibrium. As we progressed from these findings about heteroclinic Hamiltonians and chaotic transients hidden within the learning patterns of the simple rock-paper-scissors game to some related findings on the theory of quantum computing, one of the arguments we put forward was that, just as a number of new Nash equilibria were discovered in simple bi-matrix games in the late 1990s (Shubik and Quint, 1996; Von Stengel, 1997, 2000; McLennan and Park, 1999), we would begin to see new Nash equilibria discovered as the result of quantum computation. While actual quantum computers remain rather primitive (Toibman, 2004), and the theory of quantum computation seems to be advancing perhaps a bit more slowly than originally expected, there have, nonetheless, been a number of advances in computation and some more radical advances in an allied field, quantum game theory (Huberman and Hogg, 2004), which are quite significant. In the course of this paper we will review a few of these discoveries and illustrate some of the characteristics of these new "Quantum Nash Equilibria". The full text of this research can be found at http://necsi.org/events/iccs6/viewpaper.php?id-234

  5. Quantum computing with defects

    NASA Astrophysics Data System (ADS)

    Varley, Joel

    2011-03-01

    The development of a quantum computer is contingent upon the identification and design of systems for use as qubits, the basic units of quantum information. One of the most promising candidates consists of a defect in diamond known as the nitrogen-vacancy (NV^(-1)) center, since it is an individually-addressable quantum system that can be initialized, manipulated, and measured with high fidelity at room temperature. While the success of the NV^(-1) stems from its nature as a localized "deep-center" point defect, no systematic effort has been made to identify other defects that might behave in a similar way. We provide guidelines for identifying other defect centers with similar properties. We present a list of physical criteria that these centers and their hosts should meet and explain how these requirements can be used in conjunction with electronic structure theory to intelligently sort through candidate systems. To elucidate these points, we compare electronic structure calculations of the NV^(-1) center in diamond with those of several deep centers in 4H silicon carbide (SiC). Using hybrid functionals, we report formation energies, configuration-coordinate diagrams, and defect-level diagrams to compare and contrast the properties of these defects. We find that the N_C V_Si^(-1) center in SiC, a structural analog of the NV^(-1) center in diamond, may be a suitable center with very different optical transition energies. We also discuss how the proposed criteria can be translated into guidelines to discover NV analogs in other tetrahedrally coordinated materials. This work was performed in collaboration with J. R. Weber, W. F. Koehl, B. B. Buckley, A. Janotti, C. G. Van de Walle, and D. D. Awschalom. This work was supported by ARO, AFOSR, and NSF.

  6. Quantum Computing: Solving Complex Problems

    ScienceCinema

    DiVincenzo, David [IBM Watson Research Center

    2009-09-01

    One of the motivating ideas of quantum computation was that there could be a new kind of machine that would solve hard problems in quantum mechanics. There has been significant progress towards the experimental realization of these machines (which I will review), but there are still many questions about how such a machine could solve computational problems of interest in quantum physics. New categorizations of the complexity of computational problems have now been invented to describe quantum simulation. The bad news is that some of these problems are believed to be intractable even on a quantum computer, falling into a quantum analog of the NP class. The good news is that there are many other new classifications of tractability that may apply to several situations of physical interest.

  7. Quantum Computing: Solving Complex Problems

    SciTech Connect

    DiVincenzo, David

    2007-04-12

    One of the motivating ideas of quantum computation was that there could be a new kind of machine that would solve hard problems in quantum mechanics. There has been significant progress towards the experimental realization of these machines (which I will review), but there are still many questions about how such a machine could solve computational problems of interest in quantum physics. New categorizations of the complexity of computational problems have now been invented to describe quantum simulation. The bad news is that some of these problems are believed to be intractable even on a quantum computer, falling into a quantum analog of the NP class. The good news is that there are many other new classifications of tractability that may apply to several situations of physical interest.

  8. Quantum Computing: Solving Complex Problems

    SciTech Connect

    DiVincenzo, David

    2007-04-11

    One of the motivating ideas of quantum computation was that there could be a new kind of machine that would solve hard problems in quantum mechanics. There has been significant progress towards the experimental realization of these machines (which I will review), but there are still many questions about how such a machine could solve computational problems of interest in quantum physics. New categorizations of the complexity of computational problems have now been invented to describe quantum simulation. The bad news is that some of these problems are believed to be intractable even on a quantum computer, falling into a quantum analog of the NP class. The good news is that there are many other new classifications of tractability that may apply to several situations of physical interest.

  9. THE COMPUTER AND THE ARCHITECTURAL PROFESSION.

    ERIC Educational Resources Information Center

    HAVILAND, DAVID S.

    THE ROLE OF ADVANCING TECHNOLOGY IN THE FIELD OF ARCHITECTURE IS DISCUSSED IN THIS REPORT. PROBLEMS IN COMMUNICATION AND THE DESIGN PROCESS ARE IDENTIFIED. ADVANTAGES AND DISADVANTAGES OF COMPUTERS ARE MENTIONED IN RELATION TO MAN AND MACHINE INTERACTION. PRESENT AND FUTURE IMPLICATIONS OF COMPUTER USAGE ARE IDENTIFIED AND DISCUSSED WITH RESPECT…

  10. Switching from Computer to Microcomputer Architecture Education

    ERIC Educational Resources Information Center

    Bolanakis, Dimosthenis E.; Kotsis, Konstantinos T.; Laopoulos, Theodore

    2010-01-01

    In the last decades, the technological and scientific evolution of the computing discipline has been widely affecting research in software engineering education, which nowadays advocates more enlightened and liberal ideas. This article reviews cross-disciplinary research on a computer architecture class in consideration of its switching to…

  11. Efficient universal blind quantum computation.

    PubMed

    Giovannetti, Vittorio; Maccone, Lorenzo; Morimae, Tomoyuki; Rudolph, Terry G

    2013-12-01

    We give a cheat sensitive protocol for blind universal quantum computation that is efficient in terms of computational and communication resources: it allows one party to perform an arbitrary computation on a second party's quantum computer without revealing either which computation is performed, or its input and output. The first party's computational capabilities can be extremely limited: she must only be able to create and measure single-qubit superposition states. The second party is not required to use measurement-based quantum computation. The protocol requires the (optimal) exchange of O(J log2 N) single-qubit states, where J is the computational depth and N is the number of qubits needed for the computation. PMID:24476238

  12. Efficient Universal Blind Quantum Computation

    NASA Astrophysics Data System (ADS)

    Giovannetti, Vittorio; Maccone, Lorenzo; Morimae, Tomoyuki; Rudolph, Terry G.

    2013-12-01

    We give a cheat sensitive protocol for blind universal quantum computation that is efficient in terms of computational and communication resources: it allows one party to perform an arbitrary computation on a second party’s quantum computer without revealing either which computation is performed, or its input and output. The first party’s computational capabilities can be extremely limited: she must only be able to create and measure single-qubit superposition states. The second party is not required to use measurement-based quantum computation. The protocol requires the (optimal) exchange of O(J log2 N) single-qubit states, where J is the computational depth and N is the number of qubits needed for the computation.

  13. Duality quantum computer and the efficient quantum simulations

    NASA Astrophysics Data System (ADS)

    Wei, Shi-Jie; Long, Gui-Lu

    2016-03-01

    Duality quantum computing is a new mode of quantum computation in which one simulates a moving quantum computer passing through a multi-slit. It exploits the particle-wave duality property for computing. A quantum computer with n qubits and a qudit simulates a moving quantum computer with n qubits passing through a d-slit. Duality quantum computing can realize an arbitrary sum of unitaries and therefore a general quantum operator, which is called a generalized quantum gate. All linear bounded operators can be realized by the generalized quantum gates, and unitary operators are just the extreme points of the set of generalized quantum gates. Duality quantum computing provides flexibility and a clear physical picture in designing quantum algorithms, and serves as a powerful bridge between quantum and classical algorithms. In this paper, after a brief review of the theory of duality quantum computing, we will concentrate on the applications of duality quantum computing in simulations of Hamiltonian systems. We will show that duality quantum computing can efficiently simulate quantum systems by providing descriptions of the recent efficient quantum simulation algorithm of Childs and Wiebe (Quantum Inf Comput 12(11-12):901-924, 2012) for the fast simulation of quantum systems with a sparse Hamiltonian, and the quantum simulation algorithm by Berry et al. (Phys Rev Lett 114:090502, 2015), which provides exponential improvement in precision for simulating systems with a sparse Hamiltonian.
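
    The notion of a generalized quantum gate as a sum of unitaries can be illustrated numerically without any of the qudit or multi-slit machinery described above; in the sketch below the coefficients and unitaries are chosen arbitrarily, and the non-unitary combination A = 0.5(H + Z) is applied to a state with the post-selected branch renormalized.

      # A "generalized quantum gate" as a sum of unitaries, applied with post-selection
      import numpy as np

      Hd = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard
      Z = np.diag([1.0, -1.0])
      A = 0.5 * (Hd + Z)                               # sum of unitaries: not itself unitary

      psi = np.array([1.0, 0.0])
      out = A @ psi
      p_success = np.linalg.norm(out) ** 2             # probability of the wanted branch
      out /= np.linalg.norm(out)

      print("A unitary?", np.allclose(A.conj().T @ A, np.eye(2)))   # False
      print("success probability:", round(float(p_success), 3))
      print("post-selected state:", np.round(out, 3))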

  14. Quantum computing accelerator I/O : LDRD 52750 final report.

    SciTech Connect

    Schroeppel, Richard Crabtree; Modine, Normand Arthur; Ganti, Anand; Pierson, Lyndon George; Tigges, Christopher P.

    2003-12-01

    In a superposition of quantum states, a bit can be in both the states '0' and '1' at the same time. This feature of the quantum bit or qubit has no parallel in classical systems. Currently, quantum computers consisting of 4 to 7 qubits in a 'quantum computing register' have been built. Innovative algorithms suited to quantum computing are now beginning to emerge, applicable to sorting and cryptanalysis, and other applications. A framework for overcoming slightly inaccurate quantum gate interactions and for causing quantum states to survive interactions with surrounding environment is emerging, called quantum error correction. Thus there is the potential for rapid advances in this field. Although quantum information processing can be applied to secure communication links (quantum cryptography) and to crack conventional cryptosystems, the first few computing applications will likely involve a 'quantum computing accelerator' similar to a 'floating point arithmetic accelerator' interfaced to a conventional Von Neumann computer architecture. This research is to develop a roadmap for applying Sandia's capabilities to the solution of some of the problems associated with maintaining quantum information, and with getting data into and out of such a 'quantum computing accelerator'. We propose to focus this work on 'quantum I/O technologies' by applying quantum optics on semiconductor nanostructures to leverage Sandia's expertise in semiconductor microelectronic/photonic fabrication techniques, as well as its expertise in information theory, processing, and algorithms. The work will be guided by understanding of practical requirements of computing and communication architectures. This effort will incorporate ongoing collaboration between 9000, 6000 and 1000 and between junior and senior personnel. Follow-on work to fabricate and evaluate appropriate experimental nano/microstructures will be proposed as a result of this work.

  15. Toward a superconducting quantum computer

    PubMed Central

    Tsai, Jaw-Shen

    2010-01-01

    Intensive research on the construction of superconducting quantum computers has produced numerous important achievements. The quantum bit (qubit), based on the Josephson junction, is at the heart of this research. This macroscopic system has the ability to control quantum coherence. This article reviews the current state of quantum computing as well as its history, and discusses its future. Although progress has been rapid, the field remains beset with unsolved issues, and there are still many new research opportunities open to physicists and engineers. PMID:20431256

  16. Quantum Information and Computing

    NASA Astrophysics Data System (ADS)

    Accardi, L.; Ohya, Masanori; Watanabe, N.

    2006-03-01

    Preface -- Coherent quantum control of [symbol]-atoms through the stochastic limit / L. Accardi, S. V. Kozyrev and A. N. Pechen -- Recent advances in quantum white noise calculus / L. Accardi and A. Boukas -- Control of quantum states by decoherence / L. Accardi and K. Imafuku -- Logical operations realized on the Ising chain of N qubits / M. Asano, N. Tateda and C. Ishii -- Joint extension of states of fermion subsystems / H. Araki -- Quantum filtering and optimal feedback control of a Gaussian quantum free particle / S. C. Edwards and V. P. Belavkin -- On existence of quantum zeno dynamics / P. Exner and T. Ichinose -- Invariant subspaces and control of decoherence / P. Facchi, V. L. Lepore and S. Pascazio -- Clauser-Horner inequality for electron counting statistics in multiterminal mesoscopic conductors / L. Faoro, F. Taddei and R. Fazio -- Fidelity of quantum teleportation model using beam splittings / K.-H. Fichtner, T. Miyadera and M. Ohya -- Quantum logical gates realized by beam splittings / W. Freudenberg ... [et al.] -- Information divergence for quantum channels / S. J. Hammersley and V. P. Belavkin -- On the uniqueness theorem in quantum information geometry / H. Hasegawa -- Noncanonical representations of a multi-dimensional Brownian motion / Y. Hibino -- Some of future directions of white noise theory / T. Hida -- Information, innovation and elemental random field / T. Hida -- Generalized quantum turing machine and its application to the SAT chaos algorithm / S. Iriyama, M. Ohya and I. Volovich -- A Stroboscopic approach to quantum tomography / A. Jamiolkowski -- Positive maps and separable states in matrix algebras / A. Kossakowski -- Simulating open quantum systems with trapped ions / S. Maniscalco -- A purification scheme and entanglement distillations / H. Nakazato, M. Unoki and K. Yuasa -- Generalized sectors and adjunctions to control micro-macro transitions / I. Ojima -- Saturation of an entropy bound and quantum Markov states / D. Petz -- An

  17. The new landscape of parallel computer architecture

    NASA Astrophysics Data System (ADS)

    Shalf, John

    2007-07-01

    The past few years have seen a sea change in computer architecture that will impact every facet of our society as every electronic device from cell phone to supercomputer will need to confront parallelism of unprecedented scale. Whereas the conventional multicore approach (2, 4, and even 8 cores) adopted by the computing industry will eventually hit a performance plateau, the highest performance per watt and per chip area is achieved using manycore technology (hundreds or even thousands of cores). However, fully unleashing the potential of the manycore approach to ensure future advances in sustained computational performance will require fundamental advances in computer architecture and programming models that are nothing short of reinventing computing. In this paper we examine the reasons behind the movement to exponentially increasing parallelism, and its ramifications for system design, applications and programming models.

  18. Quantum computation and hidden variables

    NASA Astrophysics Data System (ADS)

    Aristov, V. V.; Nikulov, A. V.

    2008-03-01

    Many physicists limit themselves to an instrumentalist description of quantum phenomena and ignore the problems of the foundation and interpretation of quantum mechanics. This instrumentalist approach leads to "specialization barbarism" and to mass delusion about how a quantum computer could be made. The idea of quantum computation can be described within the limits of the quantum formalism, but in order to understand how this idea can be put into practice one must confront the question "What does the quantum formalism describe?", despite the absence of a universally recognized answer. Only an appreciation of this question and of the unresolved problems of quantum foundations makes it possible to see in which quantum systems superposition and EPR correlations can be expected. Because of this "specialization barbarism", many authors are sure that Bell proved the complete impossibility of any hidden-variables interpretation. It is therefore important to emphasize that Bell in fact delimited the validity of the no-hidden-variables proofs and showed that a two-state quantum system can be described by hidden variables. The latter means that no experimental result obtained on a two-state quantum system can prove the existence of superposition or a violation of realism. One should not assume, before unambiguous experimental evidence, that any two-state quantum system is a quantum bit. No experimental evidence of superposition of macroscopically distinct quantum states, or of a quantum bit based on a superconductor structure, has been obtained so far. Moreover, the same experimental results cannot be described within the limits of the quantum formalism.

  19. Quantum computation using geometric algebra

    NASA Astrophysics Data System (ADS)

    Matzke, Douglas James

    This dissertation reports that arbitrary Boolean logic equations and operators can be represented in geometric algebra as linear equations composed entirely of orthonormal vectors using only addition and multiplication. Geometric algebra is a topologically based algebraic system that naturally incorporates the inner and anticommutative outer products into a real-valued geometric product, yet does not rely on complex numbers or matrices. A series of custom tools was designed and built to simplify geometric algebra expressions into a standard sum-of-products form, and automate the anticommutative geometric product and operations. Using this infrastructure, quantum bits (qubits), quantum registers and EPR-bits (ebits) are expressed symmetrically as geometric algebra expressions. Many known quantum computing gates, measurement operators, and especially the Bell/magic operators are also expressed as geometric products. These results demonstrate that geometric algebra can naturally and faithfully represent the central concepts, objects, and operators necessary for quantum computing, and can facilitate the design and construction of quantum computing tools.

  20. Cryptography, quantum computation and trapped ions

    SciTech Connect

    Hughes, Richard J.

    1998-03-01

    The significance of quantum computation for cryptography is discussed. Following a brief survey of the requirements for quantum computational hardware, an overview of the ion trap quantum computation project at Los Alamos is presented. The physical limitations to quantum computation with trapped ions are analyzed and an assessment of the computational potential of the technology is made.

  1. Using computer algebra in quantum computation and quantum games

    NASA Astrophysics Data System (ADS)

    Bolívar, David A.

    2011-05-01

    Research in contemporary physics increasingly emphasizes the development and evolution of computer systems to facilitate calculations. Quantum computing is a branch of modern physics that is believed to promise important results for the future, thanks to the ability of qubits to store more information than a bit. The work in this paper focuses on the simulation of certain quantum algorithms: the prisoner's dilemma in its quantum version, implemented with the MATHEMATICA® software and, in a stochastic version, with the MAPLE® software, and the Grover search algorithm, which simulates finding a needle in a haystack.
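
    The same kind of simulation can be reproduced outside of computer-algebra systems. The sketch below re-implements the standard Eisert-Wilkens-Lewenstein quantum prisoner's dilemma at maximal entanglement in Python/NumPy with the usual payoff values (3, 0, 5, 1), confirming that the quantum strategy pair (Q, Q) recovers the cooperative payoff that mutual defection loses; it is an illustration of the game itself, not of the MATHEMATICA or MAPLE code of the paper.

      # Eisert-Wilkens-Lewenstein quantum prisoner's dilemma at maximal entanglement
      import numpy as np

      C = np.eye(2)                               # "cooperate"
      D = np.array([[0, 1], [-1, 0]])             # "defect"
      Q = np.array([[1j, 0], [0, -1j]])           # quantum strategy
      J = (np.eye(4) + 1j * np.kron(D, D)) / np.sqrt(2)   # entangling gate, gamma = pi/2

      payoff_A = np.array([3, 0, 5, 1])           # outcomes |CC>, |CD>, |DC>, |DD>
      payoff_B = np.array([3, 5, 0, 1])

      def payoffs(UA, UB):
          psi = J.conj().T @ np.kron(UA, UB) @ J @ np.array([1, 0, 0, 0])
          p = np.abs(psi) ** 2
          return round(float(p @ payoff_A), 3), round(float(p @ payoff_B), 3)

      print("(D, D):", payoffs(D, D))             # (1.0, 1.0): classical mutual defection
      print("(Q, Q):", payoffs(Q, Q))             # (3.0, 3.0): quantum cooperation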

  2. The Fermilab Central Computing Facility architectural model

    SciTech Connect

    Nicholls, J.

    1989-05-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS Cluster interactive front end, an Amdahl VM computing engine, ACP farms, and (primarily) VMS workstations. This presentation will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. 2 figs.

  3. Computing on Knights and Kepler Architectures

    NASA Astrophysics Data System (ADS)

    Bortolotti, G.; Caberletti, M.; Crimi, G.; Ferraro, A.; Giacomini, F.; Manzali, M.; Maron, G.; Pivanti, M.; Salomoni, D.; Schifano, S. F.; Tripiccione, R.; Zanella, M.

    2014-06-01

    A recent trend in scientific computing is the increasingly important role of co-processors, originally built to accelerate graphics rendering, and now used for general high-performance computing. The INFN Computing On Knights and Kepler Architectures (COKA) project focuses on assessing the suitability of co-processor boards for scientific computing in a wide range of physics applications, and on studying the best programming methodologies for these systems. Here we present in a comparative way our results in porting a Lattice Boltzmann code on two state-of-the-art accelerators: the NVIDIA K20X, and the Intel Xeon-Phi. We describe our implementations, analyze results and compare with a baseline architecture adopting Intel Sandy Bridge CPUs.

  4. Quantum chromodynamics with advanced computing

    SciTech Connect

    Kronfeld, Andreas S.; /Fermilab

    2008-07-01

    We survey results in lattice quantum chromodynamics from groups in the USQCD Collaboration. The main focus is on physics, but many aspects of the discussion are aimed at an audience of computational physicists.

  5. The quantum computer game: citizen science

    NASA Astrophysics Data System (ADS)

    Damgaard, Sidse; Mølmer, Klaus; Sherson, Jacob

    2013-05-01

    Progress in the field of quantum computation is hampered by daunting technical challenges. Here we present an alternative approach to solving these by enlisting the aid of computer players around the world. We have previously examined a quantum computation architecture involving ultracold atoms in optical lattices and strongly focused tweezers of light. In The Quantum Computer Game (see http://www.scienceathome.org/), we have encapsulated the time-dependent Schrödinger equation for the problem in a graphical user interface allowing for easy user input. Players can then search the parameter space with real-time graphical feedback in a game context, with a global high-score that rewards short gate times and robustness to experimental errors. The game, which is still in a demo version, has so far been tried by several hundred players. Extensions of the approach to other models, such as Gross-Pitaevskii and Bose-Hubbard, are currently under development. The game has also been incorporated into science education at high-school and university level as an alternative method for teaching quantum mechanics. Initial quantitative evaluation results are very positive. AU Ideas Center for Community Driven Research, CODER.

  6. Evaluation of Visual Computer Simulator for Computer Architecture Education

    ERIC Educational Resources Information Center

    Imai, Yoshiro; Imai, Masatoshi; Moritoh, Yoshio

    2013-01-01

    This paper presents a trial evaluation of a visual computer simulator in 2009-2011, which has been developed to serve simultaneously as both an instruction facility and a learning tool. It also illustrates an example of Computer Architecture education for University students and the usage of an e-Learning tool for Assembly Programming in order to…

  7. Highly parallel computer architecture for robotic computation

    NASA Technical Reports Server (NTRS)

    Fijany, Amir (Inventor); Bejczy, Anta K. (Inventor)

    1991-01-01

    In a computer having a large number of single instruction multiple data (SIMD) processors, each of the SIMD processors has two sets of three individual processor elements controlled by a master control unit and interconnected among a plurality of register file units where data is stored. The register files input and output data in synchronism with a minor cycle clock under control of two slave control units controlling the register file units connected to respective ones of the two sets of processor elements. Depending upon which ones of the register file units are enabled to store or transmit data during a particular minor clock cycle, the processor elements within an SIMD processor are connected in rings or in pipeline arrays, and may exchange data with the internal bus or with neighboring SIMD processors through interface units controlled by respective ones of the two slave control units.

  8. ATCA for Machines-- Advanced Telecommunications Computing Architecture

    SciTech Connect

    Larsen, R.S.; /SLAC

    2008-04-22

    The Advanced Telecommunications Computing Architecture is a new industry open standard for electronics instrument modules and shelves being evaluated for the International Linear Collider (ILC). It is the first industrial standard designed for High Availability (HA). ILC availability simulations have shown clearly that the capabilities of ATCA are needed in order to achieve acceptable integrated luminosity. The ATCA architecture looks attractive for beam instruments and detector applications as well. This paper provides an overview of ongoing R&D including application of HA principles to power electronics systems.

  9. Implementing a computing architecture with WISDOM

    SciTech Connect

    Zebrowski, J.R.

    1991-01-01

    Over the past two years, the Savannah River Site (SRS) work force has expanded by more than 6000 employees. This large influx of personnel, in conjunction with the limited office space, has resulted in an overcrowding problem on site. To alleviate some of the overcrowding, Westinghouse Savannah River Company (WSRC) has been in the process of leasing space from several office buildings within Aiken, SC. Brookhaven, the latest off-site office building to be leased, is the starting point for a new direction in office automation which will eventually spread throughout SRS. The computing architecture in place at Brookhaven was designed to adhere to the SRS computer architecture guidelines as published by the WSRC Computer Architecture Standards Team (CAST). At the heart of the Brookhaven implementation is a Workstation Integration System for DOS, OS/2 and Macintosh (WISDOM). The key features of the WISDOM system include its utilization of a Local Area Network (LAN), its Graphical User Interface (GUI), its cross-platform capability, its portable user interface, and the installation program. To begin, I will give an overview of the network architecture, then discuss WISDOM in detail, mention some platform integration problems that need to be addressed and conclude with a summary of the user benefits that WISDOM provides.

  11. Computer graphics in architecture and engineering

    NASA Technical Reports Server (NTRS)

    Greenberg, D. P.

    1975-01-01

    The present status of the application of computer graphics to the building profession or architecture and its relationship to other scientific and technical areas were discussed. It was explained that, due to the fragmented nature of architecture and building activities (in contrast to the aerospace industry), a comprehensive, economic utilization of computer graphics in this area is not practical and its true potential cannot now be realized due to the present inability of architects and structural, mechanical, and site engineers to rely on a common data base. Future emphasis will therefore have to be placed on a vertical integration of the construction process and effective use of a three-dimensional data base, rather than on waiting for any technological breakthrough in interactive computing.

  12. Efficient tree codes on SIMD computer architectures

    NASA Astrophysics Data System (ADS)

    Olson, Kevin M.

    1996-11-01

    This paper describes changes made to a previous implementation of an N -body tree code developed for a fine-grained, SIMD computer architecture. These changes include (1) switching from a balanced binary tree to a balanced oct tree, (2) addition of quadrupole corrections, and (3) having the particles search the tree in groups rather than individually. An algorithm for limiting errors is also discussed. In aggregate, these changes have led to a performance increase of over a factor of 10 compared to the previous code. For problems several times larger than the processor array, the code now achieves performance levels of ~ 1 Gflop on the Maspar MP-2 or roughly 20% of the quoted peak performance of this machine. This percentage is competitive with other parallel implementations of tree codes on MIMD architectures. This is significant, considering the low relative cost of SIMD architectures.

  13. Towards optoelectronic architectures for integrated neuromorphic computers

    NASA Astrophysics Data System (ADS)

    Martinenghi, Romain; Baylon Fuentes, Antonio; Jacquot, Maxime; Chembo, Yanne K.; Larger, Laurent

    2014-03-01

    We investigate theoretically and experimentally the computational properties of an optoelectronic neuromorphic processor based on a complex nonlinear dynamics. This neuromorphic approach is based on a new paradigm known as reservoir computing, which is intrinsically different from the concept of Turing machines. It essentially consists in expanding the input information to be processed into a higher-dimensional phase space, through the nonlinear transient response of a complex dynamics excited by the input information. The computed output is then extracted via a linear separation of the transient trajectory in the complex phase space, performed through a learning phase consisting of the resolution of a regression problem. We here investigate an architecture for photonic neuromorphic computing via these complex nonlinear dynamical transients. A versatile photonic nonlinear transient computer based on multiple delays is reported. Its hybrid analogue and digital architecture allows for an easy reconfiguration and for direct implementation of in-line processing. Its computational efficiency in parameter space is also analyzed, and the computational performance of this system is successfully evaluated on a standard spoken digit recognition task. We then discuss the pathways that can lead to its effective integration.
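
    A rough software analogue of the reservoir-computing idea described above can help fix the concept (the paper realizes it in optoelectronic delay hardware; this echo-state-style sketch is only illustrative and its parameters are arbitrary assumptions): a fixed random nonlinear dynamical system expands the input into a high-dimensional transient, and only a linear readout is trained by ridge regression.

    ```python
    # Echo-state-style software analogue of reservoir computing: a fixed random
    # nonlinear dynamical system expands the input; only a linear readout is
    # trained. Purely illustrative; the paper uses optoelectronic delay dynamics.
    import numpy as np

    rng = np.random.default_rng(0)
    n_res, n_in, n_steps = 200, 1, 2000

    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.normal(0, 1, (n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # spectral radius < 1

    u = rng.uniform(-1, 1, (n_steps, n_in))            # input sequence
    target = np.roll(u[:, 0], 3)                       # toy task: recall u(t-3)

    # Drive the reservoir and collect its transient states.
    x = np.zeros(n_res)
    states = np.zeros((n_steps, n_res))
    for t in range(n_steps):
        x = np.tanh(W @ x + W_in @ u[t])
        states[t] = x

    # Linear readout trained by ridge regression on the second half of the run.
    X, y = states[1000:], target[1000:]
    ridge = 1e-6
    W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

    pred = X @ W_out
    print("NRMSE:", np.sqrt(np.mean((pred - y) ** 2)) / np.std(y))
    ```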

  14. Pfaffian States: Quantum Computation

    SciTech Connect

    Shrivastava, Keshav N.

    2009-09-14

    The Pfaffian determinant is sometimes used to multiply Laughlin's wave function at the half-filled Landau level. The square of the Pfaffian gives the ordinary determinant. We find that the Pfaffian wave function leads to four times larger energies and two times faster time. By the same logic, the Pfaffian breaks the supersymmetry of the Dirac equation. By using the spin properties and the Landau levels, we correctly interpret the state with 5/2 filling. The quantum numbers which represent the state vectors are now products of n (Landau level quantum number), l (orbital angular momentum quantum number) and the spin s, |n, l, s>. In a circuit, the noise measures the resistivity and hence the charge. The Pfaffian velocity is different from that of the single-particle states and hence it has important consequences in the measurement of the charge of the quasiparticles.

  15. Roadmap to the SRS computing architecture

    SciTech Connect

    Johnson, A.

    1994-07-05

    This document outlines the major steps that must be taken by the Savannah River Site (SRS) to migrate the SRS information technology (IT) environment to the new architecture described in the Savannah River Site Computing Architecture. This document proposes an IT environment that is "...standards-based, data-driven, and workstation-oriented, with larger systems being utilized for the delivery of needed information to users in a client-server relationship." Achieving this vision will require many substantial changes in the computing applications, systems, and supporting infrastructure at the site. This document consists of a set of roadmaps which provide explanations of the necessary changes for IT at the site and describes the milestones that must be completed to finish the migration.

  16. Scalable computer architecture for digital vascular systems

    NASA Astrophysics Data System (ADS)

    Goddard, Iain; Chao, Hui; Skalabrin, Mark

    1998-06-01

    Digital vascular computer systems are used for radiology and fluoroscopy (R/F), angiography, and cardiac applications. In the United States alone, about 26 million procedures of these types are performed annually: about 81% R/F, 11% cardiac, and 8% angiography. Digital vascular systems have a very wide range of performance requirements, especially in terms of data rates. In addition, new features are added over time as they are shown to be clinically efficacious. Application-specific processing modes such as roadmapping, peak opacification, and bolus chasing are particular to some vascular systems. New algorithms continue to be developed and proven, such as Cox and deJager's precise registration methods for masks and live images in digital subtraction angiography. A computer architecture must have high scalability and reconfigurability to meet the needs of this modality. Ideally, the architecture could also serve as the basis for a nonvascular R/F system.

  17. Quantumness, Randomness and Computability

    NASA Astrophysics Data System (ADS)

    Solis, Aldo; Hirsch, Jorge G.

    2015-06-01

    Randomness plays a central role in the quantum mechanical description of our interactions. We review the relationship between the violation of Bell inequalities, non-signaling and randomness. We discuss the challenge in defining a random string, and show that algorithmic information theory provides a necessary condition for randomness using Borel normality. We close with a view on incomputability and its implications in physics.
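
    As a hedged illustration of the kind of frequency condition involved, the sketch below implements a simplified block-frequency check in the spirit of Borel normality; it is not Calude's exact finite-string criterion, and the tolerance used is an illustrative assumption.

    ```python
    # Simplified block-frequency check in the spirit of Borel normality
    # (illustrative only -- not Calude's exact finite-string criterion).
    # A string is flagged as plausibly normal when, for each small block length
    # k, every k-bit pattern appears with frequency close to 2**-k.
    import math
    import random
    from collections import Counter

    def looks_borel_normal(bits, max_k=4):
        n = len(bits)
        tol = math.sqrt(math.log2(n) / n)      # allowed deviation (assumed form)
        for k in range(1, max_k + 1):
            blocks = [bits[i:i + k] for i in range(0, n - k + 1, k)]  # disjoint blocks
            counts = Counter(blocks)
            for pattern in range(2 ** k):
                freq = counts[format(pattern, f"0{k}b")] / len(blocks)
                if abs(freq - 2 ** -k) > tol:
                    return False
        return True

    random.seed(1)
    random_bits = "".join(random.choice("01") for _ in range(1 << 16))
    periodic_bits = "01" * (1 << 15)
    print(looks_borel_normal(random_bits))    # expected: True
    print(looks_borel_normal(periodic_bits))  # expected: False ('11' never occurs)
    ```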

  18. Nonlinear hierarchical substructural parallelism and computer architecture

    NASA Technical Reports Server (NTRS)

    Padovan, Joe

    1989-01-01

    Computer architecture is investigated in conjunction with the algorithmic structures of nonlinear finite-element analysis. To help set the stage for this goal, the development is undertaken by considering the wide-ranging needs associated with the analysis of rolling tires which possess the full range of kinematic, material and boundary condition induced nonlinearity in addition to gross and local cord-matrix material properties.

  19. Maximum density of quantum information in a scalable CMOS implementation of the hybrid qubit architecture

    NASA Astrophysics Data System (ADS)

    Rotta, Davide; De Michielis, Marco; Ferraro, Elena; Fanciulli, Marco; Prati, Enrico

    2016-03-01

    Scalability from single-qubit operations to multi-qubit circuits for quantum information processing requires architecture-specific implementations. Semiconductor hybrid qubit architecture is a suitable candidate to realize large-scale quantum information processing, as it combines a universal set of logic gates with fast and all-electrical manipulation of qubits. We propose an implementation of hybrid qubits, based on Si metal-oxide-semiconductor (MOS) quantum dots, compatible with the CMOS industrial technological standards. We discuss the realization of multi-qubit circuits capable of fault-tolerant computation and quantum error correction, by evaluating the time and space resources needed for their implementation. As a result, the maximum density of quantum information is extracted from a circuit including eight logical qubits encoded by the [[7, 1, 3]] quantum error correction code.

  1. Fast semivariogram computation using FPGA architectures

    NASA Astrophysics Data System (ADS)

    Lagadapati, Yamuna; Shirvaikar, Mukul; Dong, Xuanliang

    2015-02-01

    The semivariogram is a statistical measure of the spatial distribution of data and is based on Markov Random Fields (MRFs). Semivariogram analysis is a computationally intensive algorithm that has typically seen applications in the geosciences and remote sensing areas. Recently, applications in the area of medical imaging have been investigated, resulting in the need for efficient real-time implementation of the algorithm. The semivariogram is a plot of semivariances for different lag distances between pixels. A semivariance, γ(h), is defined as half of the expected squared difference of pixel values between any two data locations separated by a lag distance h. Due to the need to examine each pair of pixels in the image or sub-image being processed, the base algorithm complexity for an image window with n pixels is O(n^2). Field Programmable Gate Arrays (FPGAs) are an attractive solution for such demanding applications due to their parallel processing capability. FPGAs also tend to operate at relatively modest clock rates, measured in a few hundreds of megahertz, but they can perform tens of thousands of calculations per clock cycle while operating in the low range of power. This paper presents a technique for the fast computation of the semivariogram using two custom FPGA architectures. The design consists of several modules dedicated to the constituent computational tasks. A modular architecture approach is chosen to allow for replication of processing units. This allows for high throughput due to concurrent processing of pixel pairs. The current implementation is focused on isotropic semivariogram computations only. An anisotropic semivariogram implementation is anticipated to be an extension of the current architecture, ostensibly based on refinements to the current modules. The algorithm is benchmarked using VHDL on a Xilinx XUPV5-LX110T development kit, which utilizes the Virtex5 FPGA. Medical image data from MRI scans are utilized for the experiments.
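
    For reference, a plain NumPy implementation of the isotropic empirical semivariogram that the FPGA design accelerates might look as follows (an illustrative O(n^2) sketch, not the paper's VHDL; the lag-binning convention is an assumption).

    ```python
    # Reference (software) implementation of the isotropic empirical semivariogram:
    # gamma(h) = 0.5 * E[(Z(p) - Z(q))^2] over pixel pairs whose separation falls
    # in the lag bin around h. Illustrative O(n^2) sketch only.
    import numpy as np

    def semivariogram(image, max_lag, bin_width=1.0):
        rows, cols = np.indices(image.shape)
        coords = np.column_stack([rows.ravel(), cols.ravel()]).astype(float)
        values = image.ravel().astype(float)

        # All pairwise distances and half squared differences (semivariances).
        diff_xy = coords[:, None, :] - coords[None, :, :]
        dist = np.sqrt((diff_xy ** 2).sum(axis=-1))
        semivar = 0.5 * (values[:, None] - values[None, :]) ** 2

        lags = np.arange(bin_width, max_lag + bin_width, bin_width)
        gamma = []
        for h in lags:
            mask = (dist > h - bin_width / 2) & (dist <= h + bin_width / 2)
            gamma.append(semivar[mask].mean() if mask.any() else np.nan)
        return lags, np.array(gamma)

    # Toy example on a small synthetic "image" (the paper uses MRI data).
    rng = np.random.default_rng(0)
    img = np.cumsum(rng.normal(size=(16, 16)), axis=0)   # spatially correlated field
    lags, gamma = semivariogram(img, max_lag=8)
    for h, g in zip(lags, gamma):
        print(f"lag {h:.0f}: gamma = {g:.3f}")
    ```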

  2. Atomic physics: A milestone in quantum computing

    NASA Astrophysics Data System (ADS)

    Bartlett, Stephen D.

    2016-08-01

    Quantum computers require many quantum bits to perform complex calculations, but devices with more than a few bits are difficult to program. A device based on five atomic quantum bits shows a way forward. See Letter p.63

  3. Quantum Computing and Number Theory

    NASA Astrophysics Data System (ADS)

    Sasaki, Yoshitaka

    2013-09-01

    The prime factorization problem can be efficiently solved on a quantum computer. This result was given by Shor in 1994. In the first half of this article, a review of Shor's algorithm with its mathematical setup is given. In the second half of this article, the prime number theorem, which is an essential tool for understanding the distribution of prime numbers, is presented.
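
    The division of labour in Shor's algorithm can be made concrete with a short sketch: the quantum computer's only job is to find the order r of a modulo N, after which elementary number theory extracts the factors. In the illustrative Python below, a brute-force order finder stands in for the quantum period-finding subroutine.

    ```python
    # Classical skeleton of Shor's algorithm: once the order r of a modulo N is
    # known, gcd(a^(r/2) +/- 1, N) usually reveals a factor. Brute-force order
    # finding stands in here for the quantum period-finding subroutine.
    from math import gcd
    from random import randrange

    def order(a, N):
        """Smallest r > 0 with a^r = 1 (mod N) -- the quantum computer's job."""
        r, x = 1, a % N
        while x != 1:
            x = (x * a) % N
            r += 1
        return r

    def shor_factor(N, attempts=20):
        for _ in range(attempts):
            a = randrange(2, N)
            g = gcd(a, N)
            if g > 1:                      # lucky guess already shares a factor
                return g
            r = order(a, N)
            if r % 2 == 1:                 # need an even order
                continue
            y = pow(a, r // 2, N)
            if y == N - 1:                 # trivial case a^(r/2) = -1 (mod N)
                continue
            return gcd(y - 1, N)           # guaranteed nontrivial factor
        return None

    print(shor_factor(15))    # 3 or 5
    print(shor_factor(21))    # 3 or 7
    ```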

  4. Continuous-Variable Blind Quantum Computation

    NASA Astrophysics Data System (ADS)

    Morimae, Tomoyuki

    2012-12-01

    Blind quantum computation is a secure delegated quantum computing protocol where Alice, who does not have sufficient quantum technology at her disposal, delegates her computation to Bob, who has a fully fledged quantum computer, in such a way that Bob cannot learn anything about Alice’s input, output, and algorithm. Protocols of blind quantum computation have been proposed for several qudit measurement-based computation models, such as the graph state model, the Affleck-Kennedy-Lieb-Tasaki model, and the Raussendorf-Harrington-Goyal topological model. Here, we consider blind quantum computation for the continuous-variable measurement-based model. We show that blind quantum computation is possible for the infinite squeezing case. We also show that the finite squeezing causes no additional problem in the blind setup apart from the one inherent to the continuous-variable measurement-based quantum computation.

  5. ASCR Workshop on Quantum Computing for Science

    SciTech Connect

    Aspuru-Guzik, Alan; Van Dam, Wim; Farhi, Edward; Gaitan, Frank; Humble, Travis; Jordan, Stephen; Landahl, Andrew J; Love, Peter; Lucas, Robert; Preskill, John; Muller, Richard P.; Svore, Krysta; Wiebe, Nathan; Williams, Carl

    2015-06-01

    This report details the findings of the DOE ASCR Workshop on Quantum Computing for Science that was organized to assess the viability of quantum computing technologies to meet the computational requirements of the DOE’s science and energy mission, and to identify the potential impact of quantum technologies. The workshop was held on February 17-18, 2015, in Bethesda, MD, to solicit input from members of the quantum computing community. The workshop considered models of quantum computation and programming environments, physical science applications relevant to DOE's science mission as well as quantum simulation, and applied mathematics topics including potential quantum algorithms for linear algebra, graph theory, and machine learning. This report summarizes these perspectives into an outlook on the opportunities for quantum computing to impact problems relevant to the DOE’s mission as well as the additional research required to bring quantum computing to the point where it can have such impact.

  6. Developing a Distributed Computing Architecture at Arizona State University.

    ERIC Educational Resources Information Center

    Armann, Neil; And Others

    1994-01-01

    Development of Arizona State University's computing architecture, designed to ensure that all new distributed computing pieces will work together, is described. Aspects discussed include the business rationale, the general architectural approach, characteristics and objectives of the architecture, specific services, and impact on the university…

  7. Hybrid VLSI/QCA Architecture for Computing FFTs

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Toomarian, Nikzad; Modarres, Katayoon; Spotnitz, Matthew

    2003-01-01

    A data-processor architecture that would incorporate elements of both conventional very-large-scale integrated (VLSI) circuitry and quantum-dot cellular automata (QCA) has been proposed to enable the highly parallel and systolic computation of fast Fourier transforms (FFTs). The proposed circuit would complement the QCA-based circuits described in several prior NASA Tech Briefs articles, namely Implementing Permutation Matrices by Use of Quantum Dots (NPO-20801), Vol. 25, No. 10 (October 2001), page 42; Compact Interconnection Networks Based on Quantum Dots (NPO-20855) Vol. 27, No. 1 (January 2003), page 32; and Bit-Serial Adder Based on Quantum Dots (NPO-20869), Vol. 27, No. 1 (January 2003), page 35. The cited prior articles described the limitations of very-large-scale integrated (VLSI) circuitry and the major potential advantage afforded by QCA. To recapitulate: In a VLSI circuit, signal paths that are required not to interact with each other must not cross in the same plane. In contrast, for reasons too complex to describe in the limited space available for this article, suitably designed and operated QCA-based signal paths that are required not to interact with each other can nevertheless be allowed to cross each other in the same plane without adverse effect. In principle, this characteristic could be exploited to design compact, coplanar, simple (relative to VLSI) QCA-based networks to implement complex, advanced interconnection schemes.

  8. Frances: A Tool for Understanding Computer Architecture and Assembly Language

    ERIC Educational Resources Information Center

    Sondag, Tyler; Pokorny, Kian L.; Rajan, Hridesh

    2012-01-01

    Students in all areas of computing require knowledge of the computing device including software implementation at the machine level. Several courses in computer science curricula address these low-level details such as computer architecture and assembly languages. For such courses, there are advantages to studying real architectures instead of…

  9. General Quantum Interference Principle and Duality Computer

    NASA Astrophysics Data System (ADS)

    Long, Gui-Lu

    2006-05-01

    In this article, we propose a general principle of quantum interference for quantum systems, and based on this we propose a new type of computing machine, the duality computer, that may outperform in principle both the classical computer and the quantum computer. According to the general principle of quantum interference, the very essence of quantum interference is the interference of the sub-waves of the quantum system itself. A quantum system considered here can be any quantum system: a single microscopic particle, a composite quantum system such as an atom or a molecule, or a loose collection of a few quantum objects such as two independent photons. In the duality computer, the wave of the duality computer is split into several sub-waves and they pass through different routes, where different computing gate operations are performed. These sub-waves are then re-combined to interfere to give the computational results. The quantum computer, however, has only used the particle nature of the quantum object. In a duality computer, it may be possible to find a marked item from an unsorted database using only a single query, and all NP-complete problems may have polynomial algorithms. Two proof-of-the-principle designs of the duality computer are presented: the giant molecule scheme and the nonlinear quantum optics scheme. We also propose a thought experiment to check a related fundamental issue, the measurement efficiency of a partial wave function.

  10. Computer architecture evaluation for structural dynamics computations: Project summary

    NASA Technical Reports Server (NTRS)

    Standley, Hilda M.

    1989-01-01

    The intent of the proposed effort is the examination of the impact of the elements of parallel architectures on the performance realized in a parallel computation. To this end, three major projects are developed: a language for the expression of high level parallelism, a statistical technique for the synthesis of multicomputer interconnection networks based upon performance prediction, and a queueing model for the analysis of shared memory hierarchies.

  11. Hierarchical Poly Tree computer architectures defined by computational multidisciplinary mechanics

    NASA Technical Reports Server (NTRS)

    Padovan, Joe; Gute, Doug; Johnson, Keith

    1989-01-01

    This paper will develop an alternative computer architecture called the Poly Tree. Based on the requirements of computational mechanics and the concept of hierarchical substructuring, the paper will explore the development of problem-dependent parallel networks of processors which will enable significant, often superlinear, speed enhancements; provide a logical/efficient framework for linear/nonlinear and transient structural mechanics problems; and provide a logical framework from which to apply model reduction procedures. In addition, the paper will explore optimal processor arrangements which define the overall system granularity. Consideration will also be given to system I/O requirements.

  12. Quantum computing with parafermions

    NASA Astrophysics Data System (ADS)

    Hutter, Adrian; Loss, Daniel

    2016-03-01

    Z_d parafermions are exotic non-Abelian quasiparticles generalizing Majorana fermions, which correspond to the case d = 2. In contrast to Majorana fermions, braiding of parafermions with d > 2 allows one to perform an entangling gate. This has spurred interest in parafermions, and a variety of condensed matter systems have been proposed as potential hosts for them. In this work, we study the computational power of braiding parafermions more systematically. We make no assumptions on the underlying physical model but derive all our results from the algebraical relations that define parafermions. We find a family of 2d representations of the braid group that are compatible with these relations. The braiding operators derived this way reproduce those derived previously from physical grounds as special cases. We show that if a d-level qudit is encoded in the fusion space of four parafermions, braiding of these four parafermions allows one to generate the entire single-qudit Clifford group (up to phases), for any d. If d is odd, then we show that in fact the entire many-qudit Clifford group can be generated.

  13. Brain Neurons as Quantum Computers:

    NASA Astrophysics Data System (ADS)

    Bershadskii, A.; Dremencov, E.; Bershadskii, J.; Yadid, G.

    The question of whether quantum coherent states can survive decoherence, heating and dissipation over time scales comparable to the dynamical timescales of brain neurons has been actively discussed in recent years. A positive answer to this question is crucial, in particular, for consideration of brain neurons as quantum computers. This discussion was mainly based on theoretical arguments. In the present paper nonlinear statistical properties of the Ventral Tegmental Area (VTA) of genetically depressive limbic brain are studied in vivo on the Flinders Sensitive Line of rats (FSL). VTA plays a key role in the generation of pleasure and in the development of psychological drug addiction. We found that the FSL VTA (dopaminergic) neuron signals exhibit multifractal properties for interspike frequencies on the scales where healthy VTA dopaminergic neurons exhibit bursting activity. For high moments the observed multifractal (generalized dimensions) spectrum coincides with the generalized dimensions spectrum calculated for a spectral measure of a quantum system (the so-called kicked Harper model, actively used as a model of quantum chaos). This observation can be considered as a first experimental (in vivo) indication in favor of the (at least partially) quantum nature of brain neuron activity.

  14. Quantum dissonance and deterministic quantum computation with a single qubit

    NASA Astrophysics Data System (ADS)

    Ali, Mazhar

    2014-11-01

    Mixed state quantum computation can perform certain tasks which are believed to be efficiently intractable on a classical computer. For a specific model of mixed state quantum computation, namely, deterministic quantum computation with a single qubit (DQC1), recent investigations suggest that quantum correlations other than entanglement might be responsible for the power of the DQC1 model. However, strictly speaking, the role of entanglement in this model of computation was not entirely clear. We provide conclusive evidence that there are instances where quantum entanglement is not present in any part of this model, yet we nevertheless have an advantage over classical computation. This establishes the fact that quantum dissonance (a kind of quantum correlation) present in fully separable (FS) states provides power to the DQC1 model.
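
    The DQC1 circuit itself is easy to simulate for a few qubits. The sketch below (illustrative only, not tied to the paper's examples; conventions such as the sign of the <Y> term depend on the chosen gate ordering) prepares one pure control qubit and a maximally mixed register, applies a Hadamard and a controlled-U, and recovers Tr(U)/2^n from the control qubit's <X> and <Y>.

    ```python
    # Density-matrix sketch of the DQC1 ("power of one qubit") circuit: a single
    # pure control qubit plus n maximally mixed qubits and one controlled-U.
    # With this gate ordering and sign convention, <X> + i<Y> on the control
    # equals Tr(U)/2^n.
    import numpy as np

    n = 3                                           # qubits in the mixed register
    dim = 2 ** n
    rng = np.random.default_rng(0)

    # Random unitary U on the mixed register (QR of a complex Gaussian matrix).
    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    U, _ = np.linalg.qr(A)

    # Initial state: control |0><0|  tensor  maximally mixed register I/2^n.
    rho = np.kron(np.array([[1, 0], [0, 0]], dtype=complex), np.eye(dim) / dim)

    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    H_full = np.kron(H, np.eye(dim))                # Hadamard on the control
    CU = np.block([[np.eye(dim), np.zeros((dim, dim))],
                   [np.zeros((dim, dim)), U]])      # controlled-U (control first)

    rho = CU @ (H_full @ rho @ H_full.conj().T) @ CU.conj().T

    # Pauli expectation values on the control qubit.
    X = np.kron(np.array([[0, 1], [1, 0]]), np.eye(dim))
    Y = np.kron(np.array([[0, -1j], [1j, 0]]), np.eye(dim))
    estimate = np.trace(X @ rho) + 1j * np.trace(Y @ rho)

    print("DQC1 estimate:", np.round(estimate, 6))
    print("Tr(U) / 2^n  :", np.round(np.trace(U) / dim, 6))
    ```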

  15. Scalable digital hardware for a trapped ion quantum computer

    NASA Astrophysics Data System (ADS)

    Mount, Emily; Gaultney, Daniel; Vrijsen, Geert; Adams, Michael; Baek, So-Young; Hudek, Kai; Isabella, Louis; Crain, Stephen; van Rynbach, Andre; Maunz, Peter; Kim, Jungsang

    2015-09-01

    Many of the challenges of scaling quantum computer hardware lie at the interface between the qubits and the classical control signals used to manipulate them. Modular ion trap quantum computer architectures address scalability by constructing individual quantum processors interconnected via a network of quantum communication channels. Successful operation of such quantum hardware requires a fully programmable classical control system capable of frequency stabilizing the continuous wave lasers necessary for loading, cooling, initialization, and detection of the ion qubits, stabilizing the optical frequency combs used to drive logic gate operations on the ion qubits, providing a large number of analog voltage sources to drive the trap electrodes, and a scheme for maintaining phase coherence among all the controllers that manipulate the qubits. In this work, we describe scalable solutions to these hardware development challenges.

  16. Geometry of discrete quantum computing

    NASA Astrophysics Data System (ADS)

    Hanson, Andrew J.; Ortiz, Gerardo; Sabry, Amr; Tai, Yu-Tsung

    2013-05-01

    Conventional quantum computing entails a geometry based on the description of an n-qubit state using 2^n infinite precision complex numbers denoting a vector in a Hilbert space. Such numbers are in general uncomputable using any real-world resources, and, if we have the idea of physical law as some kind of computational algorithm of the universe, we would be compelled to alter our descriptions of physics to be consistent with computable numbers. Our purpose here is to examine the geometric implications of using finite fields $\mathbf{F}_p$ and finite complexified fields $\mathbf{F}_{p^2}$ (based on primes $p$ congruent to 3 (mod 4)) as the basis for computations in a theory of discrete quantum computing, which would therefore become a computable theory. Because the states of a discrete n-qubit system are in principle enumerable, we are able to determine the proportions of entangled and unentangled states. In particular, we extend the Hopf fibration that defines the irreducible state space of conventional continuous n-qubit theories (which is the complex projective space $\mathbf{CP}^{2^n-1}$) to an analogous discrete geometry in which the Hopf circle for any n is found to be a discrete set of p + 1 points. The tally of unit-length n-qubit states is given, and reduced via the generalized Hopf fibration to $\mathbf{DCP}^{2^n-1}$, the discrete analogue of the complex projective space, which has $p^{2^n-1}(p-1)\prod_{k=1}^{n-1}(p^{2^k}+1)$ irreducible states. Using a measure of entanglement, the purity, we explore the entanglement features of discrete quantum states and find that the n-qubit states based on the complexified field $\mathbf{F}_{p^2}$ have $p^n(p-1)^n$ unentangled states (the product of the tally for a single qubit) with purity 1, and they have $p^{n+1}(p-1)(p+1)^{n-1}$ maximally entangled states with purity zero.

  17. A High Performance COTS Based Computer Architecture

    NASA Astrophysics Data System (ADS)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so important that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the COTS components' behavior. In the frame of the ESA funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS based architecture for high performance processing. The rest of the paper is organized as follows: in a first section we will start by recapitulating the interests and constraints of using COTS components for space applications; then we will briefly describe existing fault mitigation architectures and present our solution for fault mitigation based on a component called the SmartIO; in the last part of the paper we will describe the prototyping activities executed during the HiP CBC project.

  18. Computational multiqubit tunnelling in programmable quantum annealers.

    PubMed

    Boixo, Sergio; Smelyanskiy, Vadim N; Shabani, Alireza; Isakov, Sergei V; Dykman, Mark; Denchev, Vasil S; Amin, Mohammad H; Smirnov, Anatoly Yu; Mohseni, Masoud; Neven, Hartmut

    2016-01-01

    Quantum tunnelling is a phenomenon in which a quantum state traverses energy barriers higher than the energy of the state itself. Quantum tunnelling has been hypothesized as an advantageous physical resource for optimization in quantum annealing. However, computational multiqubit tunnelling has not yet been observed, and a theory of co-tunnelling under high- and low-frequency noises is lacking. Here we show that 8-qubit tunnelling plays a computational role in a currently available programmable quantum annealer. We devise a probe for tunnelling, a computational primitive where classical paths are trapped in a false minimum. In support of the design of quantum annealers we develop a nonperturbative theory of open quantum dynamics under realistic noise characteristics. This theory accurately predicts the rate of many-body dissipative quantum tunnelling subject to the polaron effect. Furthermore, we experimentally demonstrate that quantum tunnelling outperforms thermal hopping along classical paths for problems with up to 200 qubits containing the computational primitive. PMID:26739797

  20. Universal quantum computation using the discrete-time quantum walk

    SciTech Connect

    Lovett, Neil B.; Cooper, Sally; Everitt, Matthew; Trevers, Matthew; Kendon, Viv

    2010-04-15

    A proof that continuous-time quantum walks are universal for quantum computation, using unweighted graphs of low degree, has recently been presented by A. M. Childs [Phys. Rev. Lett. 102, 180501 (2009)]. We present a version based instead on the discrete-time quantum walk. We show that the discrete-time quantum walk is able to implement the same universal gate set and thus both discrete and continuous-time quantum walks are computational primitives. Additionally, we give a set of components on which the discrete-time quantum walk provides perfect state transfer.
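
    A minimal simulation of the underlying primitive, a coined discrete-time quantum walk on a cycle, is sketched below (illustrative only; the paper's universality construction uses specific graphs and coins beyond this).

    ```python
    # Coined discrete-time quantum walk on an N-site cycle: the primitive from
    # which the universal gate constructions are built. Hadamard coin plus a
    # coin-conditioned shift; purely illustrative parameters.
    import numpy as np

    N, steps = 64, 30
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # coin operator

    # State indexed as psi[coin, position]; the walker starts at the centre with
    # the balanced coin state (|0> + i|1>)/sqrt(2), giving a symmetric spread.
    psi = np.zeros((2, N), dtype=complex)
    psi[0, N // 2] = 1 / np.sqrt(2)
    psi[1, N // 2] = 1j / np.sqrt(2)

    for _ in range(steps):
        psi = np.tensordot(H, psi, axes=([1], [0]))   # apply the coin
        psi[0] = np.roll(psi[0], +1)                  # coin |0>: step right
        psi[1] = np.roll(psi[1], -1)                  # coin |1>: step left

    prob = (np.abs(psi) ** 2).sum(axis=0)
    print("total probability :", round(float(prob.sum()), 6))   # stays 1 (unitary)
    print("most likely sites :", np.argsort(prob)[-2:])         # two ballistic peaks
    ```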

  1. Nanophotonic quantum computer based on atomic quantum transistor

    NASA Astrophysics Data System (ADS)

    Andrianov, S. N.; Moiseev, S. A.

    2015-10-01

    We propose a scheme of a quantum computer based on nanophotonic elements: two buses in the form of nanowaveguide resonators, two nanosized units of multiatom multiqubit quantum memory and a set of nanoprocessors in the form of photonic quantum transistors, each containing a pair of nanowaveguide ring resonators coupled via a quantum dot. The operation modes of nanoprocessor photonic quantum transistors are theoretically studied and the execution of main logical operations by means of them is demonstrated. We also discuss the prospects of the proposed nanophotonic quantum computer for operating in high-speed optical fibre networks.

  2. The Quantum Human Computer (QHC) Hypothesis

    ERIC Educational Resources Information Center

    Salmani-Nodoushan, Mohammad Ali

    2008-01-01

    This article attempts to suggest the existence of a human computer called Quantum Human Computer (QHC) on the basis of an analogy between human beings and computers. To date, there are two types of computers: Binary and Quantum. The former operates on the basis of binary logic where an object is said to exist in either of the two states of 1 and…

  3. Advanced computer architecture specification for automated weld systems

    NASA Technical Reports Server (NTRS)

    Katsinis, Constantine

    1994-01-01

    This report describes the requirements for an advanced automated weld system and the associated computer architecture, and defines the overall system specification from a broad perspective. According to the requirements of welding procedures as they relate to an integrated multiaxis motion control and sensor architecture, the computer system requirements are developed based on a proven multiple-processor architecture with an expandable, distributed-memory, single global bus architecture, containing individual processors which are assigned to specific tasks that support sensor or control processes. The specified architecture is sufficiently flexible to integrate previously developed equipment, be upgradable and allow on-site modifications.

  4. Fast two-qubit gates for quantum computing in semiconductor quantum dots using a photonic microcavity

    NASA Astrophysics Data System (ADS)

    Solenov, Dmitry; Economou, Sophia E.; Reinecke, T. L.

    2013-01-01

    Implementations for quantum computing require fast single- and multiqubit quantum gate operations. In the case of optically controlled quantum dot qubits, theoretical designs for long-range two- or multiqubit operations satisfying all the requirements in quantum computing are not yet available. We have developed a design for a fast, long-range two-qubit gate mediated by a photonic microcavity mode using excited states of the quantum-dot-cavity system that addresses these needs. This design does not require identical qubits, it is compatible with available optically induced single-qubit operations, and it advances opportunities for scalable architectures. We show that the gate fidelity can exceed 90% in experimentally accessible systems.

  5. Exploring quantum computing application to satellite data assimilation

    NASA Astrophysics Data System (ADS)

    Cheung, S.; Zhang, S. Q.

    2015-12-01

    This is an exploratory study of the potential application of quantum computing to a scientific data optimization problem. On classical computational platforms, the physical domain of a satellite data assimilation problem is represented by a discrete variable transform, and classical minimization algorithms are employed to find the optimal solution of the analysis cost function. The computation becomes intensive and time-consuming when the problem involves a large number of variables and a large volume of data. The new quantum computer opens a very different approach, both in conceptual programming and in hardware architecture, for solving optimization problems. In order to explore whether we can utilize the quantum computing machine architecture, we formulate a satellite data assimilation experimental case in the form of a quadratic programming optimization problem. We find a transformation of the problem that maps it into the Quadratic Unconstrained Binary Optimization (QUBO) framework. A Binary Wavelet Transform (BWT) will be applied to the data assimilation variables for its invertible decomposition, and all calculations in BWT are performed by Boolean operations. The transformed problem will then be solved experimentally as QUBO instances defined on the Chimera graphs of the quantum computer.
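
    The target form of such a mapping is the minimization of x^T Q x over binary vectors x. The sketch below (with a made-up toy Q matrix, and brute-force enumeration standing in for the quantum annealer) only illustrates the QUBO format; the actual assimilation problem, its BWT encoding, and the Chimera embedding are far larger.

    ```python
    # Illustration of the QUBO target form: minimize x^T Q x over binary x.
    # The Q matrix is a made-up toy example; brute force stands in for the annealer.
    import itertools
    import numpy as np

    # Upper-triangular convention: diagonal entries are linear terms, and
    # off-diagonal entries are pairwise couplings between binary variables.
    Q = np.array([[-3.0,  2.0,  0.0,  1.0],
                  [ 0.0, -2.0,  2.0,  0.0],
                  [ 0.0,  0.0, -4.0,  3.0],
                  [ 0.0,  0.0,  0.0, -1.0]])

    def qubo_energy(x, Q):
        x = np.asarray(x, dtype=float)
        return float(x @ Q @ x)

    best = min(itertools.product([0, 1], repeat=Q.shape[0]),
               key=lambda x: qubo_energy(x, Q))
    print("optimal assignment:", best)
    print("optimal energy    :", qubo_energy(best, Q))
    ```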

  6. Non-unitary probabilistic quantum computing

    NASA Technical Reports Server (NTRS)

    Gingrich, Robert M.; Williams, Colin P.

    2004-01-01

    We present a method for designing quantum circuits that perform non-unitary quantum computations on n-qubit states probabilistically, and give analytic expressions for the success probability and fidelity.
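
    One standard way to realize such probabilistic non-unitary maps (not necessarily the construction of this paper) is to embed a contraction A in a unitary dilation acting on the system plus an ancilla and postselect the ancilla; the success probability is the squared norm of A|psi>. A small sketch, assuming ||A|| <= 1 and that SciPy's sqrtm is available:

    ```python
    # Probabilistic non-unitary map via unitary dilation and postselection
    # (a standard construction, not necessarily the paper's): embed A in
    # U = [[A, sqrt(I - A A+)], [sqrt(I - A+ A), -A+]] and keep runs where the
    # ancilla is measured in |0>.
    import numpy as np
    from scipy.linalg import sqrtm

    A = np.array([[0.6, 0.2],
                  [0.1, 0.5]], dtype=complex)          # non-unitary, ||A|| <= 1
    I = np.eye(2)

    U = np.block([[A,                           sqrtm(I - A @ A.conj().T)],
                  [sqrtm(I - A.conj().T @ A),  -A.conj().T]])
    assert np.allclose(U @ U.conj().T, np.eye(4))      # U really is unitary

    psi = np.array([1, 1], dtype=complex) / np.sqrt(2) # input system state
    full = np.kron(np.array([1, 0]), psi)              # ancilla |0> tensor |psi>

    out = U @ full
    success_amp = out[:2]                              # ancilla measured in |0>
    p_success = np.linalg.norm(success_amp) ** 2
    result = success_amp / np.sqrt(p_success)

    print("success probability:", p_success.round(4))
    print("postselected state :", result.round(4))
    print("target A|psi>/norm :", (A @ psi / np.linalg.norm(A @ psi)).round(4))
    ```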

  7. Zeno effect for quantum computation and control.

    PubMed

    Paz-Silva, Gerardo A; Rezakhani, A T; Dominy, Jason M; Lidar, D A

    2012-02-24

    It is well known that the quantum Zeno effect can protect specific quantum states from decoherence by using projective measurements. Here we combine the theory of weak measurements with stabilizer quantum error correction and detection codes. We derive rigorous performance bounds which demonstrate that the Zeno effect can be used to protect appropriately encoded arbitrary states to arbitrary accuracy while at the same time allowing for universal quantum computation or quantum control. PMID:22463507

  8. Architecture and applications of the HEP multiprocessor computer system

    SciTech Connect

    Smith, B.J.

    1981-01-01

    The HEP computer system is a large scale scientific parallel computer employing shared-resource MIMD architecture. The hardware and software facilities provided by the system are described, and techniques found useful in programming the system are discussed.

  9. Blind topological measurement-based quantum computation

    NASA Astrophysics Data System (ADS)

    Morimae, Tomoyuki; Fujii, Keisuke

    2012-09-01

    Blind quantum computation is a novel secure quantum-computing protocol that enables Alice, who does not have sufficient quantum technology at her disposal, to delegate her quantum computation to Bob, who has a fully fledged quantum computer, in such a way that Bob cannot learn anything about Alice's input, output and algorithm. A recent proof-of-principle experiment demonstrating blind quantum computation in an optical system has raised new challenges regarding the scalability of blind quantum computation in realistic noisy conditions. Here we show that fault-tolerant blind quantum computation is possible in a topologically protected manner using the Raussendorf-Harrington-Goyal scheme. The error threshold of our scheme is 4.3×10^-3, which is comparable to that (7.5×10^-3) of non-blind topological quantum computation. As an error per gate of the order of 10^-3 has already been achieved in some experimental systems, our result implies that secure cloud quantum computation is within reach.

  10. Contextuality supplies the `magic' for quantum computation

    NASA Astrophysics Data System (ADS)

    Howard, Mark; Wallman, Joel; Veitch, Victor; Emerson, Joseph

    2014-06-01

    Quantum computers promise dramatic advantages over their classical counterparts, but the source of the power in quantum computing has remained elusive. Here we prove a remarkable equivalence between the onset of contextuality and the possibility of universal quantum computation via `magic state' distillation, which is the leading model for experimentally realizing a fault-tolerant quantum computer. This is a conceptually satisfying link, because contextuality, which precludes a simple `hidden variable' model of quantum mechanics, provides one of the fundamental characterizations of uniquely quantum phenomena. Furthermore, this connection suggests a unifying paradigm for the resources of quantum information: the non-locality of quantum theory is a particular kind of contextuality, and non-locality is already known to be a critical resource for achieving advantages with quantum communication. In addition to clarifying these fundamental issues, this work advances the resource framework for quantum computation, which has a number of practical applications, such as characterizing the efficiency and trade-offs between distinct theoretical and experimental schemes for achieving robust quantum computation, and putting bounds on the overhead cost for the classical simulation of quantum algorithms.

  12. Quantum Computation: Theory, Practice, and Future Prospects

    NASA Astrophysics Data System (ADS)

    Chuang, Isaac

    2000-03-01

    Information is physical, and computation obeys physical laws. Ones and zeros -- elementary classical bits of information -- must be represented in physical media to be stored and processed. Traditionally, these objects are well described by classical physics, but increasingly, as we edge towards the limits of semiconductor technology, we reach a new regime where the laws of quantum physics become dominant. Strange new phenomena, like entanglement and quantum coherence, become available as new resources. How can such resources be utilized for computation? What physical systems allow construction and control of quantum phenomena? How is this relevant to future directions in information technology? The theoretical promise of quantum computation is a polynomial speedup for searches, and exponential speedups for certain other problems such as factoring. But the experimental challenge to realize such algorithms in practice is enormous: to date, quantum computers with only a handful of quantum bits have been realized in the laboratory, using electromagnetically trapped ions, and with magnetic resonance techniques. On the other hand, quantum information has been communicated over long distances using single photons. The future of quantum computation is currently subject to intense scrutiny. It may well be that these machines will not be practical. More quantum algorithms must be discovered, and new physical implementations must be realized. Quantum computation and quantum information are young fields with major issues to be overcome, but already, they have forever changed the way we think of the physical world and what can be computed with it.

  13. The Photon Shell Game and the Quantum von Neumann Architecture with Superconducting Circuits

    NASA Astrophysics Data System (ADS)

    Mariantoni, Matteo

    2012-02-01

    Superconducting quantum circuits have made significant advances over the past decade, allowing more complex and integrated circuits that perform with good fidelity. We have recently implemented a machine comprising seven quantum channels, with three superconducting resonators, two phase qubits, and two zeroing registers. I will explain the design and operation of this machine, first showing how a single microwave photon |1> can be prepared in one resonator and coherently transferred between the three resonators. I will also show how more exotic states such as the double photon state |2> and the superposition state |0> + |1> can be shuffled among the resonators as well [1]. I will then demonstrate how this machine can be used as the quantum-mechanical analog of the von Neumann computer architecture, which for a classical computer comprises a central processing unit and a memory holding both instructions and data. The quantum version comprises a quantum central processing unit (quCPU) that exchanges data with a quantum random-access memory (quRAM) integrated on one chip, with instructions stored on a classical computer. I will also present a proof-of-concept demonstration of a code that involves all seven quantum elements: (1) preparing an entangled state in the quCPU, (2) writing it to the quRAM, (3) preparing a second state in the quCPU, (4) zeroing it, and (5) reading out the first state stored in the quRAM [2]. Finally, I will demonstrate that the quantum von Neumann machine provides one unit cell of a two-dimensional qubit-resonator array that can be used for surface code quantum computing. This will allow the realization of a scalable, fault-tolerant quantum processor with the most forgiving error rates to date. [1] M. Mariantoni et al., Nature Physics 7, 287-293 (2011). [2] M. Mariantoni et al., Science 334, 61-65 (2011).

  14. Quantum computing. Defining and detecting quantum speedup.

    PubMed

    Rønnow, Troels F; Wang, Zhihui; Job, Joshua; Boixo, Sergio; Isakov, Sergei V; Wecker, David; Martinis, John M; Lidar, Daniel A; Troyer, Matthias

    2014-07-25

    The development of small-scale quantum devices raises the question of how to fairly assess and detect quantum speedup. Here, we show how to define and measure quantum speedup and how to avoid pitfalls that might mask or fake such a speedup. We illustrate our discussion with data from tests run on a D-Wave Two device with up to 503 qubits. By using random spin glass instances as a benchmark, we found no evidence of quantum speedup when the entire data set is considered and obtained inconclusive results when comparing subsets of instances on an instance-by-instance basis. Our results do not rule out the possibility of speedup for other classes of problems and illustrate the subtle nature of the quantum speedup question. PMID:25061205

  15. Prospects for quantum computation with trapped ions

    SciTech Connect

    Hughes, R.J.; James, D.F.V.

    1997-12-31

    Over the past decade information theory has been generalized to allow binary data to be represented by two-state quantum mechanical systems. (A single two-level system has come to be known as a qubit in this context.) The additional freedom introduced into information physics with quantum systems has opened up a variety of capabilities that go well beyond those of conventional information. For example, quantum cryptography allows two parties to generate a secret key even in the presence of eavesdropping. But perhaps the most remarkable capabilities have been predicted in the field of quantum computation. Here, a brief survey of the requirements for quantum computational hardware, and an overview of the ion trap quantum computation project at Los Alamos, are presented. The physical limitations to quantum computation with trapped ions are discussed.

  16. Molecular Realizations of Quantum Computing 2007

    NASA Astrophysics Data System (ADS)

    Nakahara, Mikio; Ota, Yukihiro; Rahimi, Robabeh; Kondo, Yasushi; Tada-Umezaki, Masahito

    2009-06-01

    Liquid-state NMR quantum computer: working principle and some examples / Y. Kondo -- Flux qubits, tunable coupling and beyond / A. O. Niskanen -- Josephson phase qubits, and quantum communication via a resonant cavity / M. A. Sillanpää -- Quantum computing using pulse-based electron-nuclear double resonance (ENDOR): molecular spin-qubits / K. Sato ... [et al.] -- Fullerene C60: a possible molecular quantum computer / T. Wakabayashi -- Molecular magnets for quantum computation / T. Kuroda -- Errors in a plausible scheme of quantum gates in Kane's model / Y. Ota -- Yet another formulation for quantum simultaneous noncooperative bimatrix games / A. SaiToh, R. Rahimi, M. Nakahara -- Continuous-variable teleportation of single-photon states and an accidental cloning of a photonic qubit in two-channel teleportation / T. Ide.

  17. Quantum Computational Logics and Possible Applications

    NASA Astrophysics Data System (ADS)

    Chiara, Maria Luisa Dalla; Giuntini, Roberto; Leporini, Roberto; di Francia, Giuliano Toraldo

    2008-01-01

    In quantum computational logics meanings of formulas are identified with quantum information quantities: systems of qubits or, more generally, mixtures of systems of qubits. We consider two kinds of quantum computational semantics: (1) a compositional semantics, where the meaning of a compound formula is determined by the meanings of its parts; (2) a holistic semantics, which makes essential use of the characteristic “holistic” features of the quantum-theoretic formalism. The compositional and the holistic semantics turn out to characterize the same logic. In this framework, one can introduce the notion of quantum-classical truth table, which corresponds to the most natural way for a quantum computer to calculate classical tautologies. Quantum computational logics can be applied to investigate different kinds of semantic phenomena where holistic, contextual and gestaltic patterns play an essential role (from natural languages to musical compositions).

  18. Some Thoughts Regarding Practical Quantum Computing

    NASA Astrophysics Data System (ADS)

    Ghoshal, Debabrata; Gomez, Richard; Lanzagorta, Marco; Uhlmann, Jeffrey

    2006-03-01

    Quantum computing has become an important area of research in computer science because of its potential to provide more efficient algorithmic solutions to certain problems than are possible with classical computing. The ability to perform parallel operations over an exponentially large computational space has proved to be the main advantage of the quantum computing model. In this regard, we are particularly interested in the potential applications of quantum computers to enhance real software systems of interest to the defense, industrial, scientific and financial communities. However, while much has been written in the popular and scientific literature about the benefits of the quantum computational model, several of the problems associated with the practical implementation of real-life complex software systems on quantum computers are often ignored. In this presentation we will argue that practical quantum computation is not as straightforward as commonly advertised, even if the technological problems associated with the manufacturing and engineering of large-scale quantum registers were solved overnight. We will discuss some of the frequently overlooked difficulties that plague quantum computing in the areas of memories, I/O, addressing schemes, compilers, oracles, approximate information copying, logical debugging, error correction and fault-tolerant computing protocols.

  19. The Heisenberg representation of quantum computers

    SciTech Connect

    Gottesman, D.

    1998-06-24

    Since Shor's discovery of an algorithm to factor numbers on a quantum computer in polynomial time, quantum computation has become a subject of immense interest. Unfortunately, one of the key features of quantum computers--the difficulty of describing them on classical computers--also makes it difficult to describe and understand precisely what can be done with them. A formalism describing the evolution of operators rather than states has proven extremely fruitful in understanding an important class of quantum operations. States used in error correction and certain communication protocols can be described by their stabilizer, a group of tensor products of Pauli matrices. Even this simple group structure is sufficient to allow a rich range of quantum effects, although it falls short of the full power of quantum computation.
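
    The defining property of the stabilizer (Heisenberg) picture described above is that Clifford gates map Pauli operators to Pauli operators under conjugation, so a state can be tracked by a small set of Pauli stabilizers instead of an exponentially large state vector. A minimal numerical check of that property (illustrative only, not the formalism itself):

        import numpy as np

        I = np.eye(2)
        X = np.array([[0, 1], [1, 0]], dtype=complex)
        Z = np.diag([1.0, -1.0]).astype(complex)
        H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
        CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

        def conjugate(U, P):
            """Heisenberg-picture update of an operator P under a gate U."""
            return U @ P @ U.conj().T

        # Single-qubit example: the Hadamard exchanges X and Z.
        assert np.allclose(conjugate(H, X), Z)
        assert np.allclose(conjugate(H, Z), X)

        # Two-qubit example: CNOT copies X from the control onto the target.
        assert np.allclose(conjugate(CNOT, np.kron(X, I)), np.kron(X, X))
        print("Clifford conjugation maps Paulis to Paulis, as the stabilizer picture requires.")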

  20. Surface code quantum computing by lattice surgery

    NASA Astrophysics Data System (ADS)

    Horsman, Clare; Fowler, Austin G.; Devitt, Simon; Van Meter, Rodney

    2012-12-01

    In recent years, surface codes have become a leading method for quantum error correction in theoretical large-scale computational and communications architecture designs. Their comparatively high fault-tolerant thresholds and their natural two-dimensional nearest-neighbour (2DNN) structure make them an obvious choice for large-scale designs in experimentally realistic systems. While fundamentally based on the toric code of Kitaev, there are many variants, two of which are the planar- and defect-based codes. Planar codes require fewer qubits to implement (for the same strength of error correction), but are restricted to encoding a single qubit of information. Interactions between encoded qubits are achieved via transversal operations, thus destroying the inherent 2DNN nature of the code. In this paper we introduce a new technique enabling the coupling of two planar codes without transversal operations, maintaining the 2DNN nature of the encoded computer. Our lattice surgery technique comprises splitting and merging planar code surfaces, and enables us to perform universal quantum computation (including magic state injection) while removing the need for braided logic in a strictly 2DNN design, and hence reduces the overall qubit resources for logic operations. Those resources are further reduced by the use of a rotated lattice for the planar encoding. We show how lattice surgery allows us to distribute encoded GHZ states in a more direct (and overhead-friendly) manner, and how a demonstration of an encoded CNOT between two distance-3 logical states is possible with 53 physical qubits, half of that required in any other known construction in 2D.
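
    For context, the joint-measurement identity that lattice-surgery merges and splits realize can be stated compactly (a hedged sketch; the paper specifies which merges are rough and which are smooth, and the required Pauli corrections): with an ancilla logical qubit A prepared in |+>, a CNOT from control C to target T is obtained from two joint parity measurements followed by a single-qubit measurement,

        % Up to Pauli corrections conditioned on the three measurement outcomes:
        \[
          \mathrm{CNOT}_{C \to T} \;\simeq\;
          M^{Z}_{A} \;\circ\; M^{X_{A} X_{T}} \;\circ\; M^{Z_{C} Z_{A}} \;\circ\;
          \bigl(\text{prepare } |+\rangle_{A}\bigr),
        \]
        % where each joint parity measurement is exactly what merging and then
        % splitting two planar-code patches performs on their logical operators.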

  1. Architecture dependence of photon antibunching in cavity quantum electrodynamics

    NASA Astrophysics Data System (ADS)

    Bradford, Matthew; Shen, Jung-Tsung

    2015-08-01

    We investigate numerically the architecture dependence of the characteristics of antibunched photons generated in cavity quantum electrodynamic systems. We show that the quality of antibunching [the smallness of the second-order intensity correlation function at zero time, g^(2)(0)] and the generation efficiency depend significantly on the configuration: the arrangement of single-mode optical cavities and waveguides. We find that for a certain class of architectures, where the Jaynes-Cummings system (the atom-cavity system) couples to two terminated waveguides, there exists a fundamental tradeoff between high transmission and low g^(2)(0), and the performance is sensitive to dissipation. We further show that optimal antibunching can be achieved in two alternative cavity quantum electrodynamic configurations operating in the dissipatively weak coupling regime, such that the two-photon transmission can be two orders of magnitude higher for the same g^(2)(0).
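
    The figure of merit referred to above is the second-order intensity correlation at zero delay; for a single output mode with annihilation operator a it reads

        \[
          g^{(2)}(0) \;=\; \frac{\langle a^{\dagger} a^{\dagger} a\, a \rangle}{\langle a^{\dagger} a \rangle^{2}},
        \]
        % with g^{(2)}(0) < 1 indicating antibunching, g^{(2)}(0) = 0 for an ideal
        % single-photon source, and g^{(2)}(0) = 1 for coherent light.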

  2. Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation

    NASA Technical Reports Server (NTRS)

    Stocker, John C.; Golomb, Andrew M.

    2011-01-01

    Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is disadvantaged when compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two-part model framework characterizes both the demand, using a probability distribution for each type of service request, and the enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
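
    A minimal discrete-event simulation in the spirit described above (not the authors' model): service requests drawn from a probability distribution compete for a fixed pool of compute resources, and the mean queueing delay is measured. All parameters are illustrative assumptions.

        import heapq
        import random

        def simulate(n_servers=4, n_requests=10_000, arrival_rate=3.0, service_rate=1.0, seed=1):
            """Event-driven M/M/c-style queue: track when each server next becomes free."""
            random.seed(seed)
            free_at = [0.0] * n_servers
            heapq.heapify(free_at)
            now, total_wait = 0.0, 0.0
            for _ in range(n_requests):
                now += random.expovariate(arrival_rate)       # next request arrives
                earliest = heapq.heappop(free_at)             # server that frees up first
                start = max(now, earliest)                    # wait if all servers are busy
                total_wait += start - now
                heapq.heappush(free_at, start + random.expovariate(service_rate))
            return total_wait / n_requests

        print(f"mean wait per request: {simulate():.3f}")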

  3. Quantum Computer Games: Schrodinger Cat and Hounds

    ERIC Educational Resources Information Center

    Gordon, Michal; Gordon, Goren

    2012-01-01

    The quantum computer game "Schrodinger cat and hounds" is the quantum extension of the well-known classical game fox and hounds. Its main objective is to teach the unique concepts of quantum mechanics in a fun way. "Schrodinger cat and hounds" demonstrates the effects of superposition, destructive and constructive interference, measurements and…

  4. Innovative architectures for dense multi-microprocessor computers

    NASA Technical Reports Server (NTRS)

    Donaldson, Thomas; Doty, Karl; Engle, Steven W.; Larson, Robert E.; O'Reilly, John G.

    1988-01-01

    The results of a Phase I Small Business Innovative Research (SBIR) project performed for the NASA Langley Computational Structural Mechanics Group are described. The project resulted in the identification of a family of chordal-ring interconnection architectures with excellent potential to serve as the basis for new multimicroprocessor (MMP) computers. The paper presents examples of how computational algorithms from structural mechanics can be efficiently implemented on the chordal-ring architecture.
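
    An illustrative sketch of the chordal-ring interconnection idea: N nodes on a ring, each additionally linked to the node w hops away. The specific values N = 12 and w = 4 are arbitrary choices for the example, not parameters from the SBIR study.

        def chordal_ring(n, w):
            """Adjacency sets of a chordal ring with n nodes and chord length w."""
            adj = {i: set() for i in range(n)}
            for i in range(n):
                for j in (i + 1, i + w):          # ring edge and chord edge
                    adj[i].add(j % n)
                    adj[j % n].add(i)
            return adj

        graph = chordal_ring(12, 4)
        print(sorted(graph[0]))   # neighbours of node 0: [1, 4, 8, 11]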

  5. Computational quantum-classical boundary of noisy commuting quantum circuits.

    PubMed

    Fujii, Keisuke; Tamate, Shuhei

    2016-01-01

    It is often said that the transition from the quantum to the classical world is caused by decoherence originating from an interaction between a system of interest and its surrounding environment. Here we establish a computational quantum-classical boundary from the viewpoint of the classical simulatability of a quantum system under decoherence. Specifically, we consider commuting quantum circuits subject to decoherence; equivalently, we can regard them as measurement-based quantum computation on decohered weighted graph states. To show the intractability of classical simulation on the quantum side, we utilize the postselection argument and crucially strengthen it by taking the noise effect into account. Classical simulatability on the classical side is also shown constructively by using both separability criteria in a projected-entangled-pair-state picture and the Gottesman-Knill theorem for mixed-state Clifford circuits. We found that when each qubit is subject to a single-qubit completely positive trace-preserving noise channel, the computational quantum-classical boundary is sharply given by the noise rate required for the distillability of a magic state. The obtained quantum-classical boundary of noisy quantum dynamics reveals a complexity landscape of controlled quantum systems. This paves the way to an experimentally feasible verification of quantum mechanics in a high-complexity limit beyond the classically simulatable region. PMID:27189039

  6. Computational quantum-classical boundary of noisy commuting quantum circuits

    PubMed Central

    Fujii, Keisuke; Tamate, Shuhei

    2016-01-01

    It is often said that the transition from the quantum to the classical world is caused by decoherence originating from an interaction between a system of interest and its surrounding environment. Here we establish a computational quantum-classical boundary from the viewpoint of the classical simulatability of a quantum system under decoherence. Specifically, we consider commuting quantum circuits subject to decoherence; equivalently, we can regard them as measurement-based quantum computation on decohered weighted graph states. To show the intractability of classical simulation on the quantum side, we utilize the postselection argument and crucially strengthen it by taking the noise effect into account. Classical simulatability on the classical side is also shown constructively by using both separability criteria in a projected-entangled-pair-state picture and the Gottesman-Knill theorem for mixed-state Clifford circuits. We found that when each qubit is subject to a single-qubit completely positive trace-preserving noise channel, the computational quantum-classical boundary is sharply given by the noise rate required for the distillability of a magic state. The obtained quantum-classical boundary of noisy quantum dynamics reveals a complexity landscape of controlled quantum systems. This paves the way to an experimentally feasible verification of quantum mechanics in a high-complexity limit beyond the classically simulatable region. PMID:27189039

  7. Computational quantum-classical boundary of noisy commuting quantum circuits

    NASA Astrophysics Data System (ADS)

    Fujii, Keisuke; Tamate, Shuhei

    2016-05-01

    It is often said that the transition from the quantum to the classical world is caused by decoherence originating from an interaction between a system of interest and its surrounding environment. Here we establish a computational quantum-classical boundary from the viewpoint of the classical simulatability of a quantum system under decoherence. Specifically, we consider commuting quantum circuits subject to decoherence; equivalently, we can regard them as measurement-based quantum computation on decohered weighted graph states. To show the intractability of classical simulation on the quantum side, we utilize the postselection argument and crucially strengthen it by taking the noise effect into account. Classical simulatability on the classical side is also shown constructively by using both separability criteria in a projected-entangled-pair-state picture and the Gottesman-Knill theorem for mixed-state Clifford circuits. We found that when each qubit is subject to a single-qubit completely positive trace-preserving noise channel, the computational quantum-classical boundary is sharply given by the noise rate required for the distillability of a magic state. The obtained quantum-classical boundary of noisy quantum dynamics reveals a complexity landscape of controlled quantum systems. This paves the way to an experimentally feasible verification of quantum mechanics in a high-complexity limit beyond the classically simulatable region.

  8. Task scheduling in dataflow computer architectures

    NASA Technical Reports Server (NTRS)

    Katsinis, Constantine

    1994-01-01

    Dataflow computers provide a platform for the solution of a large class of computational problems, which includes digital signal processing and image processing. Many typical applications are represented by a set of tasks which can be repetitively executed in parallel as specified by an associated dataflow graph. Research in this area aims to model these architectures, develop scheduling procedures, and predict the transient and steady state performance. Researchers at NASA have created a model and developed associated software tools which are capable of analyzing a dataflow graph and predicting its runtime performance under various resource and timing constraints. These models and tools were extended and used in this work. Experiments using these tools revealed certain properties of such graphs that require further study. Specifically, the transient behavior at the beginning of the execution of a graph can have a significant effect on the steady state performance. Transformation and retiming of the application algorithm and its initial conditions can produce a different transient behavior and consequently different steady state performance. The effect of such transformations on the resource requirements or under resource constraints requires extensive study. Task scheduling to obtain maximum performance (based on user-defined criteria), or to satisfy a set of resource constraints, can also be significantly affected by a transformation of the application algorithm. Since task scheduling is performed by heuristic algorithms, further research is needed to determine if new scheduling heuristics can be developed that can exploit such transformations. This work has provided the initial development for further long-term research efforts. A simulation tool was completed to provide insight into the transient and steady state execution of a dataflow graph. A set of scheduling algorithms was completed which can operate in conjunction with the modeling and performance tools

  9. Digital quantum simulators in a scalable architecture of hybrid spin-photon qubits.

    PubMed

    Chiesa, Alessandro; Santini, Paolo; Gerace, Dario; Raftery, James; Houck, Andrew A; Carretta, Stefano

    2015-01-01

    Resolving quantum many-body problems represents one of the greatest challenges in physics and physical chemistry, due to the prohibitively large computational resources that would be required by using classical computers. A solution has been foreseen by directly simulating the time evolution through sequences of quantum gates applied to arrays of qubits, i.e. by implementing a digital quantum simulator. Superconducting circuits and resonators are emerging as an extremely promising platform for quantum computation architectures, but a digital quantum simulator proposal that is straightforwardly scalable, universal, and realizable with state-of-the-art technology is presently lacking. Here we propose a viable scheme to implement a universal quantum simulator with hybrid spin-photon qubits in an array of superconducting resonators, which is intrinsically scalable and allows for local control. As representative examples we consider the transverse-field Ising model, a spin-1 Hamiltonian, and the two-dimensional Hubbard model and we numerically simulate the scheme by including the main sources of decoherence. PMID:26563516
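
    A small numerical illustration of what a digital quantum simulator does, independent of the hybrid spin-photon hardware proposed above: first-order Trotter steps for a three-spin transverse-field Ising chain, compared with the exact evolution. The couplings, time and step count are illustrative.

        import numpy as np
        from scipy.linalg import expm

        I = np.eye(2)
        X = np.array([[0.0, 1.0], [1.0, 0.0]])
        Z = np.diag([1.0, -1.0])

        def op(single, site, n=3):
            """Embed a single-qubit operator on `site` of an n-qubit register."""
            out = np.array([[1.0]])
            for k in range(n):
                out = np.kron(out, single if k == site else I)
            return out

        J, h, t, steps = 1.0, 0.5, 1.0, 50
        H_zz = -J * sum(op(Z, k) @ op(Z, k + 1) for k in range(2))   # nearest-neighbour ZZ terms
        H_x = -h * sum(op(X, k) for k in range(3))                   # transverse field

        U_exact = expm(-1j * (H_zz + H_x) * t)
        U_step = expm(-1j * H_zz * t / steps) @ expm(-1j * H_x * t / steps)
        U_trotter = np.linalg.matrix_power(U_step, steps)
        print("Trotter error (spectral norm of difference):", np.linalg.norm(U_exact - U_trotter, 2))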

  10. Digital quantum simulators in a scalable architecture of hybrid spin-photon qubits

    PubMed Central

    Chiesa, Alessandro; Santini, Paolo; Gerace, Dario; Raftery, James; Houck, Andrew A.; Carretta, Stefano

    2015-01-01

    Resolving quantum many-body problems represents one of the greatest challenges in physics and physical chemistry, due to the prohibitively large computational resources that would be required by using classical computers. A solution has been foreseen by directly simulating the time evolution through sequences of quantum gates applied to arrays of qubits, i.e. by implementing a digital quantum simulator. Superconducting circuits and resonators are emerging as an extremely promising platform for quantum computation architectures, but a digital quantum simulator proposal that is straightforwardly scalable, universal, and realizable with state-of-the-art technology is presently lacking. Here we propose a viable scheme to implement a universal quantum simulator with hybrid spin-photon qubits in an array of superconducting resonators, which is intrinsically scalable and allows for local control. As representative examples we consider the transverse-field Ising model, a spin-1 Hamiltonian, and the two-dimensional Hubbard model and we numerically simulate the scheme by including the main sources of decoherence. PMID:26563516

  11. Digital quantum simulators in a scalable architecture of hybrid spin-photon qubits

    NASA Astrophysics Data System (ADS)

    Chiesa, Alessandro; Santini, Paolo; Gerace, Dario; Raftery, James; Houck, Andrew A.; Carretta, Stefano

    2015-11-01

    Resolving quantum many-body problems represents one of the greatest challenges in physics and physical chemistry, due to the prohibitively large computational resources that would be required by using classical computers. A solution has been foreseen by directly simulating the time evolution through sequences of quantum gates applied to arrays of qubits, i.e. by implementing a digital quantum simulator. Superconducting circuits and resonators are emerging as an extremely promising platform for quantum computation architectures, but a digital quantum simulator proposal that is straightforwardly scalable, universal, and realizable with state-of-the-art technology is presently lacking. Here we propose a viable scheme to implement a universal quantum simulator with hybrid spin-photon qubits in an array of superconducting resonators, which is intrinsically scalable and allows for local control. As representative examples we consider the transverse-field Ising model, a spin-1 Hamiltonian, and the two-dimensional Hubbard model and we numerically simulate the scheme by including the main sources of decoherence.

  12. Geometry of Quantum Computation with Qudits

    PubMed Central

    Luo, Ming-Xing; Chen, Xiu-Bo; Yang, Yi-Xian; Wang, Xiaojun

    2014-01-01

    The circuit complexity of quantum qubit-system evolution, as a primitive problem in quantum computation, has been discussed widely. We investigate this problem for qudit systems. Using Riemannian geometry, the optimal quantum circuits are equivalent to geodesic evolutions in a specially curved parametrization of SU(d^n), and the quantum circuit complexity depends explicitly on a controllable approximation error bound. PMID:24509710

  13. Determining Ramsey numbers on a quantum computer

    NASA Astrophysics Data System (ADS)

    Wang, Hefeng

    2016-03-01

    We present a quantum algorithm for computing the Ramsey numbers whose computational complexity grows superexponentially with the number of vertices of a graph on a classical computer. The problem is mapped to a decision problem on a quantum computer, and a probe qubit is coupled to a register that represents the problem and detects the energy levels of the problem Hamiltonian. The decision problem is solved by detecting the decay dynamics of the probe qubit.

  14. Multithreaded processor architecture for parallel symbolic computation. Technical report

    SciTech Connect

    Fujita, T.

    1987-09-01

    This paper describes the Multilisp Architecture for Symbolic Applications (MASA), which is a multithreaded processor architecture for parallel symbolic computation with various features intended for effective Multilisp program execution. The principal mechanisms exploited for this processor are multiple contexts, interleaved pipeline execution from separate instruction streams, and synchronization based on a bit in each memory cell. The tagged architecture approach is taken for Lisp program execution, and trap conditions are provided for future object manipulation and garbage collection.

  15. Pipeline and parallel architectures for computer communication systems

    SciTech Connect

    Reddi, A.V.

    1983-01-01

    Various existing communication processor systems (CPSS) at different nodes in computer communication systems (CCSS) are reviewed for distributed processing systems. To meet the increasing load of messages, pipeline and parallel architectures are suggested for CPSS. Finally, pipeline, array, multi- and multiple-processor architectures and their advantages in CPSS for CCSS are presented and analysed, and their performances are compared with the performance of a uniprocessor architecture. 19 references.

  16. Quantum computation mediated by ancillary qudits and spin coherent states

    NASA Astrophysics Data System (ADS)

    Proctor, Timothy J.; Dooley, Shane; Kendon, Viv

    2015-01-01

    Models of universal quantum computation in which the required interactions between register (computational) qubits are mediated by some ancillary system are highly relevant to experimental realizations of a quantum computer. We introduce such a universal model that employs a d-dimensional ancillary qudit. The ancilla-register interactions take the form of controlled displacement operators, with a displacement operator defined on the periodic and discrete lattice phase space of a qudit. We show that these interactions can implement controlled phase gates on the register by utilizing geometric phases that are created when closed loops are traversed in this phase space. The extra degrees of freedom of the ancilla can be harnessed to reduce the number of operations required for certain gate sequences. In particular, we see that the computational advantages of the quantum bus (qubus) architecture, which employs a field-mode ancilla, are also applicable to this model. We then explore an alternative ancilla-mediated model which employs a spin ensemble as the ancillary system; again the interactions with the register qubits are via controlled displacement operators, with a displacement operator defined on the Bloch-sphere phase space of the spin coherent states of the ensemble. We discuss the computational advantages of this model and its relationship with the qubus architecture.
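
    The mechanism described above rests on the composition law of displacement operators: a closed loop of displacements returns the ancilla to its initial state while imprinting a phase that depends only on the area enclosed in phase space. For the field-mode (qubus) case,

        \[
          D(\alpha) = e^{\alpha a^{\dagger} - \alpha^{*} a}, \qquad
          D(\alpha)\, D(\beta) = e^{\, i\, \mathrm{Im}(\alpha \beta^{*})}\, D(\alpha + \beta),
        \]
        % so controlled displacements that traverse a closed loop produce a purely
        % geometric, register-dependent phase. The qudit and spin-coherent-state
        % versions discussed above use the analogous discrete and Bloch-sphere
        % phase spaces.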

  17. Experimental demonstration of deterministic one-way quantum computation on a NMR quantum computer

    SciTech Connect

    Ju, Chenyong; Zhu Jing; Peng Xinhua; Chong Bo; Zhou Xianyi; Du Jiangfeng

    2010-01-15

    One-way quantum computing is an important and novel approach to quantum computation. By exploiting the existing particle-particle interactions, we report an experimental realization of the complete process of the deterministic one-way quantum Deutsch-Jozsa algorithm in NMR, including graph state preparation, single-qubit measurements, and feed-forward corrections. The findings of our experiment may shed light on future scalable one-way quantum computation.

  18. Universal Matchgate Quantum Computing With Cold Polar Molecules

    NASA Astrophysics Data System (ADS)

    Herrera, Felipe

    2015-03-01

    Polar molecules in optical lattices are attractive for quantum simulation and computation due to the ability to implement a variety of spin-lattice models using static, microwave and optical fields to engineer the long-range dipolar interaction between molecular qubits. Quantum simulation of spin models requires global control over the molecular ensemble, while quantum computation requires control of individual molecules with sub-wavelength resolution. In this talk, we describe the implementation of a matchgate quantum processor with an ensemble of polar molecules in an optical lattice. The scheme uses few-body qubit encoding and sequential control of two-body dipolar interactions over small plaquettes on a square lattice to perform universal quantum computing without single-site addressing. Effective spin-spin interactions with matchgate symmetry between open-shell polar molecules (e.g., SrF, OH) are driven by two infrared control pulses in the absence of static electric fields. The resulting matchgates are robust with respect to realistic imperfections in the driving fields and lattice trapping. Applications of the architecture to the simulation of interacting fermions in quantum chemistry are discussed, considering an imperfect lattice filling.
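
    For context, the standard definition of the matchgate symmetry that these engineered interactions are meant to respect: a two-qubit gate acting as A on the even-parity subspace spanned by |00> and |11>, and as B on the odd-parity subspace spanned by |01> and |10>, with equal determinants,

        \[
          G(A, B) =
          \begin{pmatrix}
            A_{11} & 0 & 0 & A_{12} \\
            0 & B_{11} & B_{12} & 0 \\
            0 & B_{21} & B_{22} & 0 \\
            A_{21} & 0 & 0 & A_{22}
          \end{pmatrix},
          \qquad \det A = \det B .
        \]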

  19. A heterogeneous hierarchical architecture for real-time computing

    SciTech Connect

    Skroch, D.A.; Fornaro, R.J.

    1988-12-01

    The need for high-speed data acquisition and control algorithms has prompted continued research in the area of multiprocessor systems and related programming techniques. The result presented here is a unique hardware and software architecture for high-speed real-time computer systems. The implementation of a prototype of this architecture has required the integration of architecture, operating systems and programming languages into a cohesive unit. This report describes a Heterogeneous Hierarchical Architecture for Real-Time (H²ART) and system software for program loading and interprocessor communication.

  20. A Simple Physical Optics Algorithm Perfect for Parallel Computing Architecture

    NASA Technical Reports Server (NTRS)

    Imbriale, W. A.; Cwik, T.

    1994-01-01

    A reflector antenna computer program based upon a simple discrete approximation of the radiation integral has proven to be extremely easy to adapt to the parallel computing architecture of the modest number of large-grain computing elements such as are used in the Intel iPSC and Touchstone Delta parallel machines.

  1. Graph isomorphism and adiabatic quantum computing

    NASA Astrophysics Data System (ADS)

    Gaitan, Frank; Clark, Lane

    2014-02-01

    In the graph isomorphism (GI) problem two N-vertex graphs G and G' are given and the task is to determine whether there exists a permutation of the vertices of G that preserves adjacency and transforms G →G'. If yes, then G and G' are said to be isomorphic; otherwise they are nonisomorphic. The GI problem is an important problem in computer science and is thought to be of comparable difficulty to integer factorization. In this paper we present a quantum algorithm that solves arbitrary instances of GI and which also provides an approach to determining all automorphisms of a given graph. We show how the GI problem can be converted to a combinatorial optimization problem that can be solved using adiabatic quantum evolution. We numerically simulate the algorithm's quantum dynamics and show that it correctly (i) distinguishes nonisomorphic graphs; (ii) recognizes isomorphic graphs and determines the permutation(s) that connect them; and (iii) finds the automorphism group of a given graph G. We then discuss the GI quantum algorithm's experimental implementation, and close by showing how it can be leveraged to give a quantum algorithm that solves arbitrary instances of the NP-complete subgraph isomorphism problem. The computational complexity of an adiabatic quantum algorithm is largely determined by the minimum energy gap Δ (N) separating the ground and first-excited states in the limit of large problem size N ≫1. Calculating Δ (N) in this limit is a fundamental open problem in adiabatic quantum computing, and so it is not possible to determine the computational complexity of adiabatic quantum algorithms in general, nor consequently, of the specific adiabatic quantum algorithms presented here. Adiabatic quantum computing has been shown to be equivalent to the circuit model of quantum computing, and so development of adiabatic quantum algorithms continues to be of great interest.
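
    The definition used above, made concrete: G and G' are isomorphic if and only if some vertex permutation maps the edge set of G onto that of G'. The brute-force check below is exponential in N, which is the cost the adiabatic algorithm aims to improve on (the example graphs are arbitrary).

        from itertools import permutations

        def isomorphism(edges_g, edges_h, n):
            """Return an adjacency-preserving permutation of {0,...,n-1}, or None."""
            g = {frozenset(e) for e in edges_g}
            h = {frozenset(e) for e in edges_h}
            for perm in permutations(range(n)):
                if {frozenset((perm[u], perm[v])) for u, v in g} == h:
                    return perm
            return None

        # A 4-cycle and a relabelled 4-cycle are isomorphic:
        print(isomorphism([(0, 1), (1, 2), (2, 3), (3, 0)],
                          [(0, 2), (2, 1), (1, 3), (3, 0)], 4))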

  2. Universal quantum computation with weakly integral anyons

    NASA Astrophysics Data System (ADS)

    Cui, Shawn X.; Hong, Seung-Moon; Wang, Zhenghan

    2015-08-01

    Harnessing the non-abelian statistics of anyons to perform quantum computational tasks is getting closer to reality. While the existence of anyons that are universal for quantum computation by braiding alone, such as the Fibonacci anyon, is theoretically a possibility, the anyons accessible with current technology all belong to a class that is called weakly integral: anyons whose squared quantum dimensions are integers. We analyze the computational power of the first non-abelian anyon system with only integral quantum dimensions, the quantum double of a non-abelian finite group. Since all anyons in this theory have finite images of their braid group representations, they cannot be universal for quantum computation by braiding alone. Based on our knowledge of the images of the braid group representations, we set up three qutrit computational models. Supplementing braidings with some measurements and ancillary states, we find a universal gate set for each model.

  3. Video Encryption and Decryption on Quantum Computers

    NASA Astrophysics Data System (ADS)

    Yan, Fei; Iliyasu, Abdullah M.; Venegas-Andraca, Salvador E.; Yang, Huamin

    2015-08-01

    A method for video encryption and decryption on quantum computers is proposed, based on color information transformations on each frame encoding the content of the video. The proposed method provides a flexible operation to encrypt quantum video by means of quantum measurement in order to enhance the security of the video. To validate the proposed approach, a Tetris tile-matching puzzle game video is utilized in the experimental simulations. The results obtained suggest that the proposed method enhances the security and speed of quantum video encryption and decryption, both properties required for the secure transmission and sharing of video content in quantum communication.

  4. Numerical computation for teaching quantum statistics

    NASA Astrophysics Data System (ADS)

    Price, Tyson; Swendsen, Robert H.

    2013-11-01

    The study of ideal quantum gases reveals surprising quantum effects that can be observed in macroscopic systems. The properties of bosons are particularly unusual because a macroscopic number of particles can occupy a single quantum state. We describe a computational approach that supplements the usual analytic derivations applicable in the thermodynamic limit. The approach involves directly summing over the quantum states for finite systems and avoids the need for doing difficult integrals. The results display the unusual behavior of quantum gases even for relatively small systems.
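
    A small example in the spirit of the approach described above: the mean occupation of the lowest level for N non-interacting bosons in M equally spaced levels, obtained by summing directly over every many-body occupation state with no thermodynamic-limit approximation. The particle number, level count and temperatures are illustrative.

        import math
        from collections import Counter
        from itertools import combinations_with_replacement

        def ground_occupation(n_bosons=5, n_levels=8, beta=1.0, spacing=1.0):
            """Canonical-ensemble <n_0> by direct enumeration of boson configurations."""
            z = 0.0
            n0_avg = 0.0
            # Each multiset of level indices is one bosonic many-body state.
            for state in combinations_with_replacement(range(n_levels), n_bosons):
                weight = math.exp(-beta * spacing * sum(state))
                z += weight
                n0_avg += Counter(state)[0] * weight
            return n0_avg / z

        for beta in (0.2, 1.0, 5.0):
            print(f"beta = {beta}: <n_0> = {ground_occupation(beta=beta):.3f}")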

  5. Quantum Monte Carlo Endstation for Petascale Computing

    SciTech Connect

    Lubos Mitas

    2011-01-26

    The NCSU research group has been focused on accomplishing the key goals of this initiative: establishing a new generation of quantum Monte Carlo (QMC) computational tools as a part of the Endstation petaflop initiative for use at the DOE ORNL computational facilities and by the computational electronic structure community at large; carrying out high-accuracy quantum Monte Carlo demonstration projects applying these tools to forefront electronic structure problems in molecular and solid systems; expanding the impact of QMC methods and approaches; and explaining and enhancing the impact of these advanced computational approaches. In particular, we have developed the quantum Monte Carlo code QWalk (www.qwalk.org), which was significantly expanded and optimized using funds from this support and at present has become an actively used tool in the petascale regime by ORNL researchers and beyond. These developments have been built upon efforts undertaken by the PI's group and collaborators over the last decade. The code was optimized and tested extensively on a number of parallel architectures, including the petaflop ORNL Jaguar machine. We have developed and redesigned a number of code modules, such as the evaluation of wave functions and orbitals, calculation of pfaffians, and introduction of backflow coordinates, together with the overall organization of the code and random-walker distribution over multicore architectures. We have addressed several bottlenecks such as load balancing, and verified the efficiency and accuracy of the calculations with the other groups of the Endstation team. The QWalk package contains about 50,000 lines of high-quality object-oriented C++ and also includes interfaces to data files from other conventional electronic structure codes such as Gamess, Gaussian, Crystal and others. This grant supported the PI for one month during summers, a full-time postdoc, and partially three graduate students over the period of the grant duration; it has resulted in 13

  6. Integrated computer control system architectural overview

    SciTech Connect

    Van Arsdall, P.

    1997-06-18

    This overview introduces the NIF Integrated Control System (ICCS) architecture. The design is abstract to allow the construction of many similar applications from a common framework. This summary lays the essential foundation for understanding the model-based engineering approach used to execute the design.

  7. Quantum Computation Using Optically Coupled Quantum Dot Arrays

    NASA Technical Reports Server (NTRS)

    Pradhan, Prabhakar; Anantram, M. P.; Wang, K. L.; Roychowhury, V. P.; Saini, Subhash (Technical Monitor)

    1998-01-01

    A solid state model for quantum computation has potential advantages in terms of the ease of fabrication, characterization, and integration. The fundamental requirements for a quantum computer involve the realization of basic processing units (qubits), and a scheme for controlled switching and coupling among the qubits, which enables one to perform controlled operations on qubits. We propose a model for quantum computation based on optically coupled quantum dot arrays, which is computationally similar to the atomic model proposed by Cirac and Zoller. In this model, individual qubits are comprised of two coupled quantum dots, and an array of these basic units is placed in an optical cavity. Switching among the states of the individual units is done by controlled laser pulses via near field interaction using the NSOM technology. Controlled rotations involving two or more qubits are performed via common cavity mode photon. We have calculated critical times, including the spontaneous emission and switching times, and show that they are comparable to the best times projected for other proposed models of quantum computation. We have also shown the feasibility of accessing individual quantum dots using the NSOM technology by calculating the photon density at the tip, and estimating the power necessary to perform the basic controlled operations. We are currently in the process of estimating the decoherence times for this system; however, we have formulated initial arguments which seem to indicate that the decoherence times will be comparable, if not longer, than many other proposed models.

  8. Rate-loss analysis of an efficient quantum repeater architecture

    NASA Astrophysics Data System (ADS)

    Guha, Saikat; Krovi, Hari; Fuchs, Christopher A.; Dutton, Zachary; Slater, Joshua A.; Simon, Christoph; Tittel, Wolfgang

    2015-08-01

    We analyze an entanglement-based quantum key distribution (QKD) architecture that uses a linear chain of quantum repeaters employing photon-pair sources, spectral multiplexing, linear-optic Bell-state measurements, multimode quantum memories, and classical-only error correction. Assuming perfect sources, we find an exact expression for the secret-key rate, and an analytical description of how errors propagate through the repeater chain, as a function of various loss-and-noise parameters of the devices. We show via an explicit analytical calculation, which separately addresses the effects of the principal nonidealities, that this scheme achieves a secret-key rate that surpasses the Takeoka-Guha-Wilde bound—a recently found fundamental limit to the rate-vs-loss scaling achievable by any QKD protocol over a direct optical link—thereby providing one of the first rigorous proofs of the efficacy of a repeater protocol. We explicitly calculate the end-to-end shared noisy quantum state generated by the repeater chain, which could be useful for analyzing the performance of other non-QKD quantum protocols that require establishing long-distance entanglement. We evaluate that shared state's fidelity and the achievable entanglement-distillation rate, as a function of the number of repeater nodes, total range, and various loss-and-noise parameters of the system. We extend our theoretical analysis to encompass sources with nonzero two-pair-emission probability, using an efficient exact numerical evaluation of the quantum state propagation and measurements. We expect our results to spur formal rate-loss analysis of other repeater protocols and also to provide useful abstractions to seed analyses of quantum networks of complex topologies.
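
    The repeaterless benchmark referred to above is the Takeoka-Guha-Wilde bound: over a pure-loss channel of transmissivity eta, any direct (repeaterless) QKD protocol obeys

        \[
          R \;\le\; \log_{2}\!\left( \frac{1 + \eta}{1 - \eta} \right)
          \;\approx\; \frac{2\eta}{\ln 2} \quad (\eta \ll 1),
        \]
        % so the key rate of a direct link falls off essentially linearly with the
        % channel transmissivity; a repeater chain whose rate beats this scaling,
        % as analyzed above, demonstrates a genuine repeater advantage.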

  9. A scalable quantum computer with ions in an array of microtraps

    PubMed

    Cirac; Zoller

    2000-04-01

    Quantum computers require the storage of quantum information in a set of two-level systems (called qubits), the processing of this information using quantum gates and a means of final readout. So far, only a few systems have been identified as potentially viable quantum computer models--accurate quantum control of the coherent evolution is required in order to realize gate operations, while at the same time decoherence must be avoided. Examples include quantum optical systems (such as those utilizing trapped ions or neutral atoms, cavity quantum electrodynamics and nuclear magnetic resonance) and solid state systems (using nuclear spins, quantum dots and Josephson junctions). The most advanced candidates are the quantum optical and nuclear magnetic resonance systems, and we expect that they will allow quantum computing with about ten qubits within the next few years. This is still far from the numbers required for useful applications: for example, the factorization of a 200-digit number requires about 3,500 qubits, rising to 100,000 if error correction is implemented. Scalability of proposed quantum computer architectures to many qubits is thus of central importance. Here we propose a model for an ion trap quantum computer that combines scalability (a feature usually associated with solid state proposals) with the advantages of quantum optical systems (in particular, quantum control and long decoherence times). PMID:10766235

  10. Differential geometric treewidth estimation in adiabatic quantum computation

    NASA Astrophysics Data System (ADS)

    Wang, Chi; Jonckheere, Edmond; Brun, Todd

    2016-07-01

    The D-Wave adiabatic quantum computing platform is designed to solve a particular class of problems—the Quadratic Unconstrained Binary Optimization (QUBO) problems. Due to the particular "Chimera" physical architecture of the D-Wave chip, the logical problem graph at hand needs an extra process called minor embedding in order to be solvable on the D-Wave architecture. The latter problem is itself NP-hard. In this paper, we propose a novel polynomial-time approximation to the closely related treewidth based on the differential geometric concept of Ollivier-Ricci curvature. The latter runs in polynomial time and thus could significantly reduce the overall complexity of determining whether a QUBO problem is minor embeddable, and thus solvable on the D-Wave architecture.
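
    For comparison, an existing greedy heuristic already provides a polynomial-time treewidth upper bound; the sketch below (illustrative only, with an arbitrary random graph) shows one standard way of obtaining such a bound, which the curvature-based estimate proposed above is intended to complement.

        import networkx as nx
        from networkx.algorithms.approximation import treewidth_min_degree

        problem_graph = nx.erdos_renyi_graph(n=40, p=0.1, seed=7)
        width, decomposition = treewidth_min_degree(problem_graph)
        print(f"treewidth upper bound: {width} "
              f"({decomposition.number_of_nodes()} bags in the tree decomposition)")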

  11. Quantum computer simulation using the CUDA programming model

    NASA Astrophysics Data System (ADS)

    Gutiérrez, Eladio; Romero, Sergio; Trenas, María A.; Zapata, Emilio L.

    2010-02-01

    Quantum computing emerges as a field that captures great theoretical interest. Its simulation represents a problem with high memory and computational requirements, which makes the use of parallel platforms advisable. In this work we deal with the simulation of an ideal quantum computer on the Compute Unified Device Architecture (CUDA), as such a problem can benefit from the high computational capacity of Graphics Processing Units (GPUs). Modern GPUs are becoming very powerful computational architectures, which is causing a growing interest in their application to general-purpose problems. CUDA provides an execution model oriented towards a more general exploitation of the GPU, allowing it to be used as a massively parallel SIMT (Single-Instruction Multiple-Thread) multiprocessor. A simulator that takes memory reference locality issues into account is proposed, showing that the challenge of achieving high performance depends strongly on the explicit exploitation of the memory hierarchy. Several strategies have been experimentally evaluated, obtaining good performance results in comparison with conventional platforms.
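
    The core update that such simulators parallelize on the GPU, shown here in plain NumPy for clarity: applying a single-qubit gate to qubit k of an n-qubit state vector only couples pairs of amplitudes whose indices differ in bit k, so every pair can be processed by an independent thread. The reshape/tensordot formulation below is an illustrative CPU stand-in, not the CUDA kernel itself.

        import numpy as np

        def apply_single_qubit_gate(state, gate, k):
            """Apply a 2x2 `gate` to qubit k of a state vector of length 2**n."""
            n = state.size.bit_length() - 1
            psi = state.reshape([2] * n)                  # one axis per qubit
            psi = np.moveaxis(psi, k, 0)                  # bring the target qubit forward
            psi = np.tensordot(gate, psi, axes=([1], [0]))
            return np.moveaxis(psi, 0, k).reshape(-1)

        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        state = np.zeros(2 ** 3, dtype=complex)
        state[0] = 1.0                                    # |000>
        print(np.round(apply_single_qubit_gate(state, H, 0), 3))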

  12. Conceptual aspects of geometric quantum computation

    NASA Astrophysics Data System (ADS)

    Sjöqvist, Erik; Azimi Mousolou, Vahid; Canali, Carlo M.

    2016-07-01

    Geometric quantum computation is the idea that geometric phases can be used to implement quantum gates, i.e., the basic elements of the Boolean network that forms a quantum computer. Although originally thought to be limited to adiabatic evolution, controlled by slowly changing parameters, this form of quantum computation can as well be realized at high speed by using nonadiabatic schemes. Recent advances in quantum gate technology have allowed for experimental demonstrations of different types of geometric gates in adiabatic and nonadiabatic evolution. Here, we address some conceptual issues that arise in the realizations of geometric gates. We examine the appearance of dynamical phases in quantum evolution and point out that not all dynamical phases need to be compensated for in geometric quantum computation. We delineate the relation between Abelian and non-Abelian geometric gates and find an explicit physical example where the two types of gates coincide. We identify differences and similarities between adiabatic and nonadiabatic realizations of quantum computation based on non-Abelian geometric phases.
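
    The quantity at the heart of these gates is the adiabatic geometric (Berry) phase: for a cyclic change of control parameters R along a closed path C, an instantaneous eigenstate |n(R)> acquires, in addition to its dynamical phase, the path-dependent phase

        \[
          \gamma_{n}(C) \;=\; i \oint_{C} \langle n(\mathbf{R}) \,|\, \nabla_{\mathbf{R}}\, n(\mathbf{R}) \rangle \cdot d\mathbf{R},
        \]
        % which depends only on the geometry of C, not on how fast it is traversed;
        % nonadiabatic schemes use the Aharonov-Anandan generalization of this phase.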

  13. Effective pure states for bulk quantum computation

    SciTech Connect

    Knill, E.; Chuang, I.; Laflamme, R.

    1997-11-01

    In bulk quantum computation one can manipulate a large number of indistinguishable quantum computers by parallel unitary operations and measure expectation values of certain observables with limited sensitivity. The initial state of each computer in the ensemble is known but not pure. Methods for obtaining effective pure input states by a series of manipulations have been described by Gershenfeld and Chuang (logical labeling) and Cory et al. (spatial averaging) for the case of quantum computation with nuclear magnetic resonance. We give a different technique called temporal averaging. This method is based on classical randomization, requires no ancilla qubits and can be implemented in nuclear magnetic resonance without using gradient fields. We introduce several temporal averaging algorithms suitable for both high-temperature and low-temperature bulk quantum computing and analyze the signal-to-noise behavior of each.

  14. Magnetic resonance force microscopy and a solid state quantum computer.

    SciTech Connect

    Pelekhov, D. V.; Martin, I.; Suter, A.; Reagor, D. W.; Hammel, P. C.

    2001-01-01

    A Quantum Computer (QC) is a device that utilizes the principles of Quantum Mechanics to perform computations. Such a machine would be capable of accomplishing tasks not achievable by means of any conventional digital computer, for instance factoring large numbers. Currently it appears that the QC architecture based on an array of spin quantum bits (qubits) embedded in a solid-state matrix is one of the most promising approaches to fabrication of a scalable QC. However, the fabrication and operation of a Solid State Quantum Computer (SSQC) presents very formidable challenges; primary amongst these are: (1) the characterization and control of the fabrication process of the device during its construction and (2) the readout of the computational result. Magnetic Resonance Force Microscopy (MRFM)--a novel scanning probe technique based on mechanical detection of magnetic resonance--provides an attractive means of addressing these requirements. The sensitivity of the MRFM significantly exceeds that of conventional magnetic resonance measurement methods, and it has the potential for single electron spin detection. Moreover, the MRFM is capable of true 3D subsurface imaging. These features will make MRFM an invaluable tool for the implementation of a spin-based QC. Here we present the general principles of MRFM operation, the current status of its development and indicate future directions for its improvement.

  15. Expandable computed-tomography architecture for nondestructive inspection

    NASA Astrophysics Data System (ADS)

    Agi, Iskender; Hurst, Paul J.; Current, K. W.

    1993-04-01

    The Radon transform and its inverse, commonly used for computed tomography (CT), are computationally burdensome for single-processor computers. Since projection-based computations are easily executed in parallel, multiprocessor architectures have been proposed for high-speed operation. In this paper, we describe an architecture for a high-speed (30 MHz raster-scan image data rate), high-accuracy (12 bits per pixel) computed-tomography system for use in nondestructive inspection. This architecture reconstructs images from fan- or parallel-beam data using either single-pass or iterative reconstruction techniques. Our architecture uses a number of identical processor modules in a pipeline. Each processor module consists of memory for data storage, a commercially available digital signal processing (DSP) chip for filtering, and our custom IC which performs 450 million mathematical operations per second (MOPS). This architecture can reconstruct CT images as large as 1024 × 1024 pixels using a variety of image reconstruction algorithms. The details of the implementation and performance of our expandable architecture are discussed.

  16. Concatenated codes for fault tolerant quantum computing

    SciTech Connect

    Knill, E.; Laflamme, R.; Zurek, W.

    1995-05-01

    The application of concatenated codes to fault-tolerant quantum computing is discussed. We have previously shown that for quantum memories and quantum communication, a state can be transmitted with error ε provided each gate has error at most cε. We show how this can be used with Shor's fault-tolerant operations to reduce the accuracy requirements when maintaining states not currently participating in the computation. Viewing Shor's fault-tolerant operations as a method for reducing the error of operations, we give a concatenated implementation which promises to propagate the reduction hierarchically. This has the potential of reducing the accuracy requirements in long computations.
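
    The error suppression behind concatenation follows the standard threshold scaling: if a level-0 gate fails with probability p below the threshold p_th, then k levels of concatenation give an effective failure probability of roughly

        \[
          p_{k} \;\lesssim\; p_{\mathrm{th}} \left( \frac{p}{p_{\mathrm{th}}} \right)^{2^{k}},
        \]
        % so each added level squares the ratio p / p_th, while the circuit overhead
        % grows only exponentially in k (i.e., polylogarithmically in the target
        % logical error rate).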

  17. Hyper-parallel photonic quantum computation with coupled quantum dots

    NASA Astrophysics Data System (ADS)

    Ren, Bao-Cang; Deng, Fu-Guo

    2014-04-01

    It is well known that a parallel quantum computer is more powerful than a classical one. So far, there are some important works about the construction of universal quantum logic gates, the key elements in quantum computation. However, they are focused on operating on one degree of freedom (DOF) of quantum systems. Here, we investigate the possibility of achieving scalable hyper-parallel quantum computation based on two DOFs of photon systems. We construct a deterministic hyper-controlled-not (hyper-CNOT) gate operating on both the spatial-mode and the polarization DOFs of a two-photon system simultaneously, by exploiting the giant optical circular birefringence induced by quantum-dot spins in double-sided optical microcavities as a result of cavity quantum electrodynamics (QED). This hyper-CNOT gate is implemented by manipulating the four qubits in the two DOFs of a two-photon system without auxiliary spatial modes or polarization modes. It reduces the operation time and the resources consumed in quantum information processing, and it is more robust against the photonic dissipation noise, compared with the integration of several cascaded CNOT gates in one DOF.

  18. A computational architecture for social agents

    SciTech Connect

    Bond, A.H.

    1996-12-31

    This article describes a new class of information-processing models for social agents. They are derived from primate brain architecture, the processing in brain regions, the interactions among brain regions, and the social behavior of primates. In another paper, we have reviewed the neuroanatomical connections and functional involvements of cortical regions. We reviewed the evidence for a hierarchical architecture in the primate brain. By examining neuroanatomical evidence for connections among neural areas, we were able to establish anatomical regions and connections. We then examined evidence for specific functional involvements of the different neural areas and found some support for hierarchical functioning, not only for the perception hierarchies but also for the planning and action hierarchy in the frontal lobes.

  19. Faster quantum chemistry simulation on fault-tolerant quantum computers

    NASA Astrophysics Data System (ADS)

    Cody Jones, N.; Whitfield, James D.; McMahon, Peter L.; Yung, Man-Hong; Van Meter, Rodney; Aspuru-Guzik, Alán; Yamamoto, Yoshihisa

    2012-11-01

    Quantum computers can in principle simulate quantum physics exponentially faster than their classical counterparts, but some technical hurdles remain. We propose methods which substantially improve the performance of a particular form of simulation, ab initio quantum chemistry, on fault-tolerant quantum computers; these methods generalize readily to other quantum simulation problems. Quantum teleportation plays a key role in these improvements and is used extensively as a computing resource. To improve execution time, we examine techniques for constructing arbitrary gates which perform substantially faster than circuits based on the conventional Solovay-Kitaev algorithm (Dawson and Nielsen 2006 Quantum Inform. Comput. 6 81). For a given approximation error ɛ, arbitrary single-qubit gates can be produced fault-tolerantly, using a restricted set of gates, in time which is O(log(1/ɛ)) or O(log log(1/ɛ)); with sufficient parallel preparation of ancillas, constant average depth is possible using a method we call programmable ancilla rotations. Moreover, we construct and analyze efficient implementations of first- and second-quantized simulation algorithms using the fault-tolerant arbitrary gates and other techniques, such as implementing various subroutines in constant time. A specific example we analyze is the ground-state energy calculation for lithium hydride.

  20. Materials Frontiers to Empower Quantum Computing

    SciTech Connect

    Taylor, Antoinette Jane; Sarrao, John Louis; Richardson, Christopher

    2015-06-11

    This is an exciting time at the nexus of quantum computing and materials research. The materials frontiers described in this report represent a significant advance in electronic materials and our understanding of the interactions between the local material and a manufactured quantum state. Simultaneously, directed efforts to solve materials issues related to quantum computing provide an opportunity to control and probe the fundamental arrangement of matter that will impact all electronic materials. An opportunity exists to extend our understanding of materials functionality from electronic-grade to quantum-grade by achieving a predictive understanding of noise and decoherence in qubits and their origins in materials defects and environmental coupling. Realizing this vision systematically and predictively will be transformative for quantum computing and will represent a qualitative step forward in materials prediction and control.

  1. Superadiabatic Controlled Evolutions and Universal Quantum Computation

    PubMed Central

    Santos, Alan C.; Sarandy, Marcelo S.

    2015-01-01

    Adiabatic state engineering is a powerful technique in quantum information and quantum control. However, its performance is limited by the adiabatic theorem of quantum mechanics. In this scenario, shortcuts to adiabaticity, such as provided by the superadiabatic theory, constitute a valuable tool to speed up the adiabatic quantum behavior. Here, we propose a superadiabatic route to implement universal quantum computation. Our method is based on the realization of piecewise controlled superadiabatic evolutions. Remarkably, they can be obtained by simple time-independent counter-diabatic Hamiltonians. In particular, we discuss the implementation of fast rotation gates and arbitrary n-qubit controlled gates, which can be used to design different sets of universal quantum gates. Concerning the energy cost of the superadiabatic implementation, we show that it is dictated by the quantum speed limit, providing an upper bound for the corresponding adiabatic counterparts. PMID:26511064
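
    The counter-diabatic (transitionless-driving) term referred to above is, in its standard form, the correction that makes the system follow the instantaneous eigenstates |n(t)> of a reference Hamiltonian exactly at any speed:

        \[
          H_{\mathrm{CD}}(t) \;=\; i\hbar \sum_{n} \Bigl( |\partial_{t} n\rangle\langle n|
          \;-\; \langle n|\partial_{t} n\rangle\, |n\rangle\langle n| \Bigr),
        \]
        % the article's observation is that, for the piecewise controlled evolutions
        % used in its gate constructions, this term can be chosen time independent.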

  2. Superadiabatic Controlled Evolutions and Universal Quantum Computation

    NASA Astrophysics Data System (ADS)

    Santos, Alan C.; Sarandy, Marcelo S.

    2015-10-01

    Adiabatic state engineering is a powerful technique in quantum information and quantum control. However, its performance is limited by the adiabatic theorem of quantum mechanics. In this scenario, shortcuts to adiabaticity, such as provided by the superadiabatic theory, constitute a valuable tool to speed up the adiabatic quantum behavior. Here, we propose a superadiabatic route to implement universal quantum computation. Our method is based on the realization of piecewise controlled superadiabatic evolutions. Remarkably, they can be obtained by simple time-independent counter-diabatic Hamiltonians. In particular, we discuss the implementation of fast rotation gates and arbitrary n-qubit controlled gates, which can be used to design different sets of universal quantum gates. Concerning the energy cost of the superadiabatic implementation, we show that it is dictated by the quantum speed limit, providing an upper bound for the corresponding adiabatic counterparts.

  3. Reducing computational complexity of quantum correlations

    NASA Astrophysics Data System (ADS)

    Chanda, Titas; Das, Tamoghna; Sadhukhan, Debasis; Pal, Amit Kumar; SenDe, Aditi; Sen, Ujjwal

    2015-12-01

    We address the issue of reducing the resource required to compute information-theoretic quantum correlation measures such as quantum discord and quantum work deficit in two qubits and higher-dimensional systems. We show that determination of the quantum correlation measure is possible even if we utilize a restricted set of local measurements. We find that the determination allows us to obtain a closed form of quantum discord and quantum work deficit for several classes of states, with a low error. We show that the computational error caused by the constraint over the complete set of local measurements reduces fast with an increase in the size of the restricted set, implying usefulness of constrained optimization, especially with the increase of dimensions. We perform quantitative analysis to investigate how the error scales with the system size, taking into account a set of plausible constructions of the constrained set. Carrying out a comparative study, we show that the resource required to optimize quantum work deficit is usually higher than that required for quantum discord. We also demonstrate that minimization of quantum discord and quantum work deficit is easier in the case of two-qubit mixed states of fixed ranks and with positive partial transpose in comparison to the corresponding states having nonpositive partial transpose. Applying the methodology to quantum spin models, we show that the constrained optimization can be used with advantage in analyzing such systems in quantum information-theoretic language. For bound entangled states, we show that the error is significantly low when the measurements correspond to the spin observables along the three Cartesian coordinates, and thereby we obtain expressions of quantum discord and quantum work deficit for these bound entangled states.
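
    A minimal sketch of the idea of optimizing over a restricted set of local measurements (not the authors' code): for a two-qubit state, the conditional entropy entering quantum discord is minimized here over a small grid of projective measurement directions on qubit B, and the minimum is reported as the grid is refined. The test state and grid sizes are arbitrary.

        import numpy as np

        def entropy(rho):
            w = np.linalg.eigvalsh(rho)
            w = w[w > 1e-12]
            return float(-(w * np.log2(w)).sum())

        def conditional_entropy(rho, theta, phi):
            """Average entropy of A after projectively measuring B along (theta, phi)."""
            ket = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
            proj_up = np.outer(ket, ket.conj())
            total = 0.0
            for proj in (proj_up, np.eye(2) - proj_up):
                M = np.kron(np.eye(2), proj)                 # measure the second qubit (B)
                p = float(np.real(np.trace(M @ rho)))
                if p < 1e-12:
                    continue
                post = (M @ rho @ M) / p
                rho_a = post.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)   # trace out B
                total += p * entropy(rho_a)
            return total

        # Arbitrary test state: a mixture of a Bell state and |00><00|.
        bell = np.zeros(4); bell[0] = bell[3] = 1 / np.sqrt(2)
        rho = 0.6 * np.outer(bell, bell) + 0.4 * np.diag([1.0, 0, 0, 0])

        for grid in (3, 6, 12):
            angles = [(t, f) for t in np.linspace(0, np.pi, grid)
                             for f in np.linspace(0, 2 * np.pi, grid, endpoint=False)]
            best = min(conditional_entropy(rho, t, f) for t, f in angles)
            print(f"{grid * grid:4d} measurement settings: min conditional entropy = {best:.4f}")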

  4. Is the Brain a Quantum Computer?

    ERIC Educational Resources Information Center

    Litt, Abninder; Eliasmith, Chris; Kroon, Frederick W.; Weinstein, Steven; Thagard, Paul

    2006-01-01

    We argue that computation via quantum mechanical processes is irrelevant to explaining how brains produce thought, contrary to the ongoing speculations of many theorists. First, quantum effects do not have the temporal properties required for neural information processing. Second, there are substantial physical obstacles to any organic…

  5. Iterated Gate Teleportation and Blind Quantum Computation

    NASA Astrophysics Data System (ADS)

    Pérez-Delgado, Carlos A.; Fitzsimons, Joseph F.

    2015-06-01

    Blind quantum computation allows a user to delegate a computation to an untrusted server while keeping the computation hidden. A number of recent works have sought to establish bounds on the communication requirements necessary to implement blind computation, and a bound based on the no-programming theorem of Nielsen and Chuang has emerged as a natural limiting factor. Here we show that this constraint only holds in limited scenarios, and show how to overcome it using a novel method of iterated gate teleportations. This technique enables drastic reductions in the communication required for distributed quantum protocols, extending beyond the blind computation setting. Applied to blind quantum computation, this technique offers significant efficiency improvements, and in some scenarios offers an exponential reduction in communication requirements.

  6. Iterated Gate Teleportation and Blind Quantum Computation.

    PubMed

    Pérez-Delgado, Carlos A; Fitzsimons, Joseph F

    2015-06-01

    Blind quantum computation allows a user to delegate a computation to an untrusted server while keeping the computation hidden. A number of recent works have sought to establish bounds on the communication requirements necessary to implement blind computation, and a bound based on the no-programming theorem of Nielsen and Chuang has emerged as a natural limiting factor. Here we show that this constraint only holds in limited scenarios, and show how to overcome it using a novel method of iterated gate teleportations. This technique enables drastic reductions in the communication required for distributed quantum protocols, extending beyond the blind computation setting. Applied to blind quantum computation, this technique offers significant efficiency improvements, and in some scenarios offers an exponential reduction in communication requirements. PMID:26196609

  7. A high performance parallel computing architecture for robust image features

    NASA Astrophysics Data System (ADS)

    Zhou, Renyan; Liu, Leibo; Wei, Shaojun

    2014-03-01

    A design of a parallel architecture for image feature detection and description is proposed in this article. The major component of this architecture is a 2D cellular network composed of simple reprogrammable processors, enabling the Hessian blob detection and Haar response calculation, which are the most computation-intensive stages of the Speeded Up Robust Features (SURF) algorithm. Combining this 2D cellular network with dedicated hardware for SURF descriptors, the architecture achieves real-time image feature detection with minimal software in the host processor. A prototype FPGA implementation of the proposed architecture delivers 1318.9 GOPS of general pixel processing at a 100 MHz clock and up to 118 fps of feature detection on VGA (640 × 480) images. The proposed architecture is stand-alone and scalable, so it can readily be migrated to a VLSI implementation.

  8. Middleware in Modern High Performance Computing System Architectures

    SciTech Connect

    Engelmann, Christian; Ong, Hong Hoe; Scott, Stephen L

    2007-01-01

    A recent trend in modern high performance computing (HPC) system architectures employs "lean" compute nodes running a lightweight operating system (OS). Certain parts of the OS as well as other system software services are moved to service nodes in order to increase performance and scalability. This paper examines the impact of this HPC system architecture trend on HPC "middleware" software solutions, which traditionally equip HPC systems with advanced features, such as parallel and distributed programming models, appropriate system resource management mechanisms, remote application steering and user interaction techniques. Since the approach of keeping the compute node software stack small and simple is orthogonal to the middleware concept of adding missing OS features between OS and application, the role and architecture of middleware in modern HPC systems needs to be revisited. The result is a paradigm shift in HPC middleware design, where single middleware services are moved to service nodes, while runtime environments (RTEs) continue to reside on compute nodes.

  9. Computer Architecture. (Latest Citations from the Aerospace Database)

    NASA Technical Reports Server (NTRS)

    1996-01-01

    The bibliography contains citations concerning research and development in the field of computer architecture. Design of computer systems, microcomputer components, and digital networks are among the topics discussed. Multimicroprocessor system performance, software development, and aerospace avionics applications are also included. (Contains 50-250 citations and includes a subject term index and title list.)

  10. The Contribution of Visualization to Learning Computer Architecture

    ERIC Educational Resources Information Center

    Yehezkel, Cecile; Ben-Ari, Mordechai; Dreyfus, Tommy

    2007-01-01

    This paper describes a visualization environment and associated learning activities designed to improve learning of computer architecture. The environment, EasyCPU, displays a model of the components of a computer and the dynamic processes involved in program execution. We present the results of a research program that analysed the contribution of…

  11. Architecture and applications of the HEP multiprocessor computer system

    SciTech Connect

    Smith, B.J.; Fink, D.J.

    1982-01-01

    The HEP computer system is a large scale scientific parallel computer employing shared resource MIMD architecture. The hardware and software facilities provided by the system are described, and techniques found to be useful in programming the system are also discussed. 3 references.

  12. Fault tolerant hypercube computer system architecture

    NASA Technical Reports Server (NTRS)

    Madan, Herb S. (Inventor); Chow, Edward (Inventor)

    1989-01-01

    A fault-tolerant multiprocessor computer system of the hypercube type comprising a hierarchy of computers of like kind which can be functionally substituted for one another as necessary is disclosed. Communication between the working nodes is via one communications network while communications between the working nodes and watch dog nodes and load balancing nodes higher in the structure is via another communications network separate from the first. A typical branch of the hierarchy reporting to a master node or host computer comprises, a plurality of first computing nodes; a first network of message conducting paths for interconnecting the first computing nodes as a hypercube. The first network provides a path for message transfer between the first computing nodes; a first watch dog node; and a second network of message connecting paths for connecting the first computing nodes to the first watch dog node independent from the first network, the second network provides an independent path for test message and reconfiguration affecting transfers between the first computing nodes and the first switch watch dog node. There is additionally, a plurality of second computing nodes; a third network of message conducting paths for interconnecting the second computing nodes as a hypercube. The third network provides a path for message transfer between the second computing nodes; a fourth network of message conducting paths for connecting the second computing nodes to the first watch dog node independent from the third network. The fourth network provides an independent path for test message and reconfiguration affecting transfers between the second computing nodes and the first watch dog node; and a first multiplexer disposed between the first watch dog node and the second and fourth networks for allowing the first watch dog node to selectively communicate with individual ones of the computing nodes through the second and fourth networks; as well as, a second watch dog node

  13. Braid group representation on quantum computation

    SciTech Connect

    Aziz, Ryan Kasyfil; Muchtadi-Alamsyah, Intan

    2015-09-30

    There have been many recent studies of topological representations of quantum computation. One diagrammatic representation of quantum computation uses the ZX-calculus. In this paper we construct a diagrammatic scheme for dense coding. We also prove that the ZX-calculus diagram of a maximally entangled state satisfies the Yang-Baxter equation, and therefore we can construct a braid group representation on the set of maximally entangled states.
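
    As a quick numerical illustration of the kind of statement made above (not code from the paper; it assumes the standard 4 × 4 Bell change-of-basis matrix R discussed in the braiding-gate literature), one can check directly that R is unitary and satisfies the braid form of the Yang-Baxter equation (R ⊗ I)(I ⊗ R)(R ⊗ I) = (I ⊗ R)(R ⊗ I)(I ⊗ R):

      import numpy as np

      # Bell matrix: maps the computational basis to the four Bell states.
      R = (1 / np.sqrt(2)) * np.array([[1, 0, 0, 1],
                                       [0, 1, -1, 0],
                                       [0, 1, 1, 0],
                                       [-1, 0, 0, 1]], dtype=complex)
      I2 = np.eye(2)

      lhs = np.kron(R, I2) @ np.kron(I2, R) @ np.kron(R, I2)
      rhs = np.kron(I2, R) @ np.kron(R, I2) @ np.kron(I2, R)

      print("unitary:", np.allclose(R.conj().T @ R, np.eye(4)))
      print("max deviation from braid relation:", np.max(np.abs(lhs - rhs)))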

  14. Delayed commutation in quantum computer networks.

    PubMed

    García-Escartín, Juan Carlos; Chamorro-Posada, Pedro

    2006-09-15

    In the same way that classical computer networks connect and enhance the capabilities of classical computers, quantum networks can combine the advantages of quantum information and communication. We propose a nonclassical network element, a delayed commutation switch, that can solve the problem of switching time in packet switching networks. With the help of some local ancillary qubits and superdense codes, we can route a qubit packet after part of it has left the network node. PMID:17025870

  15. A memory-array architecture for computer vision

    SciTech Connect

    Balsara, P.T.

    1989-01-01

    With the fast advances in the area of computer vision and robotics there is a growing need for machines that can understand images at a very high speed. A conventional von Neumann computer is not suited for this purpose because it takes a tremendous amount of time to solve most typical image processing problems. Exploiting the inherent parallelism present in various vision tasks can significantly reduce the processing time. Fortunately, parallelism is increasingly affordable as hardware gets cheaper. Thus it is now imperative to study computer vision in a parallel processing framework. The author's approach is first to design a computational structure which is well suited for a wide range of vision tasks, and then to develop parallel algorithms which can run efficiently on this structure. Recent advances in VLSI technology have led to several proposals for parallel architectures for computer vision. In this thesis he demonstrates that a memory array architecture with efficient local and global communication capabilities can be used for high speed execution of a wide range of computer vision tasks. This architecture, called the Access Constrained Memory Array Architecture (ACMAA), is efficient for VLSI implementation because of its modular structure, simple interconnect and limited global control. Several parallel vision algorithms have been designed for this architecture. The choice of vision problems demonstrates the versatility of ACMAA for a wide range of vision tasks. These algorithms were simulated on a high level ACMAA simulator running on the Intel iPSC/2 hypercube, a parallel architecture. The results of this simulation are compared with those of sequential algorithms running on a single hypercube node. Details of the ACMAA processor architecture are also presented.

  16. Acausal measurement-based quantum computing

    NASA Astrophysics Data System (ADS)

    Morimae, Tomoyuki

    2014-07-01

    In measurement-based quantum computing, there is a natural "causal cone" among qubits of the resource state, since the measurement angle on a qubit has to depend on previous measurement results in order to correct the effect of by-product operators. If we respect the no-signaling principle, by-product operators cannot be avoided. Here we study the possibility of acausal measurement-based quantum computing by using the process matrix framework [Oreshkov, Costa, and Brukner, Nat. Commun. 3, 1092 (2012), 10.1038/ncomms2076]. We construct a resource process matrix for acausal measurement-based quantum computing restricting local operations to projective measurements. The resource process matrix is an analog of the resource state of the standard causal measurement-based quantum computing. We find that if we restrict local operations to projective measurements the resource process matrix is (up to a normalization factor and trivial ancilla qubits) equivalent to the decorated graph state created from the graph state of the corresponding causal measurement-based quantum computing. We also show that it is possible to consider a causal game whose causal inequality is violated by acausal measurement-based quantum computing.

  17. Architecture independent environment for developing engineering software on MIMD computers

    NASA Technical Reports Server (NTRS)

    Valimohamed, Karim A.; Lopez, L. A.

    1990-01-01

    Engineers are constantly faced with solving problems of increasing complexity and detail. Multiple Instruction stream Multiple Data stream (MIMD) computers have been developed to overcome the performance limitations of serial computers. The hardware architectures of MIMD computers vary considerably and are much more sophisticated than serial computers. Developing large scale software for a variety of MIMD computers is difficult and expensive. There is a need to provide tools that facilitate programming these machines. First, the issues that must be considered to develop those tools are examined. The two main areas of concern were architecture independence and data management. Architecture independent software facilitates software portability and improves the longevity and utility of the software product. It provides some form of insurance for the investment of time and effort that goes into developing the software. The management of data is a crucial aspect of solving large engineering problems. It must be considered in light of the new hardware organizations that are available. Second, the functional design and implementation of a software environment that facilitates developing architecture independent software for large engineering applications are described. The topics of discussion include: a description of the model that supports the development of architecture independent software; identifying and exploiting concurrency within the application program; data coherence; engineering data base and memory management.

  18. Thrifty: An Exascale Architecture for Energy Proportional Computing

    SciTech Connect

    Torrellas, Josep

    2014-12-23

    The objective of this project is to design key aspects of an exascale architecture called Thrifty that addresses the challenges of power/energy efficiency, resiliency, and performance in exascale systems. The project includes work on computer architecture (Josep Torrellas from University of Illinois), compilation (Daniel Quinlan from Lawrence Livermore National Laboratory), runtime and applications (Laura Carrington from University of California San Diego), and circuits (Wilfred Pinfold from Intel Corporation).

  19. Heavy Lift Vehicle (HLV) Avionics Flight Computing Architecture Study

    NASA Technical Reports Server (NTRS)

    Hodson, Robert F.; Chen, Yuan; Morgan, Dwayne R.; Butler, A. Marc; Sdhuh, Joseph M.; Petelle, Jennifer K.; Gwaltney, David A.; Coe, Lisa D.; Koelbl, Terry G.; Nguyen, Hai D.

    2011-01-01

    A NASA multi-Center study team was assembled from LaRC, MSFC, KSC, JSC and WFF to examine potential flight computing architectures for a Heavy Lift Vehicle (HLV) to better understand avionics drivers. The study examined Design Reference Missions (DRMs) and vehicle requirements that could impact the vehicle's avionics. The study considered multiple self-checking and voting architectural variants and examined reliability, fault-tolerance, mass, power, and redundancy management impacts. Furthermore, a goal of the study was to develop the skills and tools needed to rapidly assess additional architectures should requirements or assumptions change.

  20. Generating Efficient Quantum Chemistry Codes for Novel Architectures.

    PubMed

    Titov, Alexey V; Ufimtsev, Ivan S; Luehr, Nathan; Martinez, Todd J

    2013-01-01

    We describe an extension of our graphics processing unit (GPU) electronic structure program TeraChem to include atom-centered Gaussian basis sets with d angular momentum functions. This was made possible by a "meta-programming" strategy that leverages computer algebra systems for the derivation of equations and their transformation to correct code. We generate a multitude of code fragments that are formally mathematically equivalent, but differ in their memory and floating-point operation footprints. We then select between different code fragments using empirical testing to find the highest performing code variant. This leads to an optimal balance of floating-point operations and memory bandwidth for a given target architecture without laborious manual tuning. We show that this approach is capable of similar performance compared to our hand-tuned GPU kernels for basis sets with s and p angular momenta. We also demonstrate that mixed precision schemes (using both single and double precision) remain stable and accurate for molecules with d functions. We provide benchmarks of the execution time of entire self-consistent field (SCF) calculations using our GPU code and compare to mature CPU based codes, showing the benefits of the GPU architecture for electronic structure theory with appropriately redesigned algorithms. We suggest that the meta-programming and empirical performance optimization approach may be important in future computational chemistry applications, especially in the face of quickly evolving computer architectures. PMID:26589024

  1. Waveguide-QED-Based Photonic Quantum Computation

    NASA Astrophysics Data System (ADS)

    Zheng, Huaixiu; Gauthier, Daniel J.; Baranger, Harold U.

    2013-08-01

    We propose a new scheme for quantum computation using flying qubits—propagating photons in a one-dimensional waveguide interacting with matter qubits. Photon-photon interactions are mediated by the coupling to a four-level system, based on which photon-photon π-phase gates (controlled-not) can be implemented for universal quantum computation. We show that high gate fidelity is possible, given recent dramatic experimental progress in superconducting circuits and photonic-crystal waveguides. The proposed system can be an important building block for future on-chip quantum networks.

  2. Hamiltonian quantum computer in one dimension

    NASA Astrophysics Data System (ADS)

    Wei, Tzu-Chieh; Liang, John C.

    2015-12-01

    Quantum computation can be achieved by preparing an appropriate initial product state of qudits and then letting it evolve under a fixed Hamiltonian. The readout is made by measurement on individual qudits at some later time. This approach is called the Hamiltonian quantum computation and it includes, for example, the continuous-time quantum cellular automata and the universal quantum walk. We consider one spatial dimension and study the compromise between the locality k and the local Hilbert space dimension d. For geometrically 2-local (i.e., k = 2), it is known that d = 8 is already sufficient for universal quantum computation but the Hamiltonian is not translationally invariant. As the locality k increases, it is expected that the minimum required d should decrease. We provide a construction of a Hamiltonian quantum computer for k = 3 with d = 5. One implication is that simulating one-dimensional chains of spin-2 particles is BQP-complete (BQP denotes "bounded error, quantum polynomial time"). Imposing translation invariance will increase the required d. For this we also construct another 3-local (k = 3) Hamiltonian that is invariant under translation of a unit cell of two sites but that requires d to be 8.

  3. Aerodynamic optimization studies on advanced architecture computers

    NASA Technical Reports Server (NTRS)

    Chawla, Kalpana

    1995-01-01

    The approach to carrying out multi-discipline aerospace design studies in the future, especially in massively parallel computing environments, comprises choosing (1) suitable solvers to compute solutions to equations characterizing a discipline, and (2) efficient optimization methods. In addition, for aerodynamic optimization problems, (3) smart methodologies must be selected to modify the surface shape. In this research effort, a 'direct' optimization method is implemented on the Cray C-90 to improve aerodynamic design. It is coupled with an existing implicit Navier-Stokes solver, OVERFLOW, to compute flow solutions. The optimization method is chosen such that it can accommodate multi-discipline optimization in future computations. In this work, however, only single discipline aerodynamic optimization will be included.

  4. Simulating physical phenomena with a quantum computer

    NASA Astrophysics Data System (ADS)

    Ortiz, Gerardo

    2003-03-01

    In a keynote speech at MIT in 1981 Richard Feynman raised some provocative questions in connection with the exact simulation of physical systems using a special device named a "quantum computer" (QC). At the time it was known that deterministic simulations of quantum phenomena in classical computers required a number of resources that scaled exponentially with the number of degrees of freedom, and also that the probabilistic simulation of certain quantum problems was limited by the so-called sign or phase problem, a problem believed to be of exponential complexity. Such a QC was intended to mimic physical processes exactly as Nature does. Certainly, remarks coming from such an influential figure generated widespread interest in these ideas, and today after 21 years there are still some open questions. What kind of physical phenomena can be simulated with a QC? How? And what are its limitations? Addressing and attempting to answer these questions is what this talk is about. Definitively, the goal of physics simulation using controllable quantum systems ("physics imitation") is to exploit quantum laws to advantage, and thus accomplish efficient imitation. Fundamental is the connection between a quantum computational model and a physical system by transformations of operator algebras. This concept is a necessary one because in Quantum Mechanics each physical system is naturally associated with a language of operators and thus can be considered as a possible model of quantum computation. The remarkable result is that an arbitrary physical system is naturally simulatable by another physical system (or QC) whenever a "dictionary" between the two operator algebras exists. I will explain these concepts and address some of Feynman's concerns regarding the simulation of fermionic systems. Finally, I will illustrate the main ideas by imitating simple physical phenomena borrowed from condensed matter physics using quantum algorithms, and present experimental

  5. CALL FOR PAPERS: Optical implementation of quantum computers

    NASA Astrophysics Data System (ADS)

    Rarity, John; Weinfurter, Harald

    2004-09-01

    A topical issue of Journal of Optics B: Quantum and Semiclassical Optics will be devoted to recent advances in optical implementation of quantum computers. The topics to be covered will include, but are not limited to: linear optics quantum gates; progress towards nonlinear optics quantum gates; interface between optical qubits and atomic/solid state qubits; novel architectures; single-photon sources and detectors; photonic quantum networks; and few-qubit applications. The DEADLINE for submission of contributions is 15 January 2005 to allow the topical issue to be published in about October 2005. All contributions will be peer-reviewed in accordance with the normal refereeing procedures and standards of Journal of Optics B: Quantum and Semiclassical Optics. Submissions should preferably be in either standard LaTeX form or Microsoft Word. Advice on publishing your work in the journal may be found at www.iop.org/journals/authors/jopb. There are no page charges for publication. The corresponding author of each paper published will receive a complimentary copy of the topical issue. Contributions to the topical issue should preferably be submitted electronically at www.iop.org/journals/authors/jopb or by e-mail to jopb@iop.org. Authors unable to submit online or by e-mail may send hard copy contributions (enclosing the electronic code) to: Dr Claire Bedrock (Publisher), Journal of Optics B: Quantum and Semiclassical Optics, Institute of Physics Publishing, Dirac House, Temple Back, Bristol BS1 6BE, UK. All contributions should be accompanied by a readme file or covering letter, quoting 'JOPB Topical Issue - Optical implementation of quantum computers', giving the postal and e-mail addresses for correspondence. Any subsequent change of address should be notified to the publishing office. We look forward to receiving your contribution to this topical issue.

  6. Prospects for quantum computing: Extremely doubtful

    NASA Astrophysics Data System (ADS)

    Dyakonov, M. I.

    2014-09-01

    The quantum computer is supposed to process information by applying unitary transformations to 2^N complex amplitudes defining the state of N qubits. Since a useful machine would need N ~ 10^3 or more, the number of continuous parameters describing the state of a quantum computer at any given moment is at least 2^1000 ~ 10^300, which is much greater than the number of protons in the Universe. However, the theorists believe that the feasibility of large-scale quantum computing has been proved via the “threshold theorem”. Like for any theorem, the proof is based on a number of assumptions considered as axioms. However, in the physical world none of these assumptions can be fulfilled exactly. Any assumption can only be approached with some limited precision. So, the rather meaningless “error per qubit per gate” threshold must be supplemented by a list of the precisions with which all assumptions behind the threshold theorem should hold. Such a list still does not exist. The theory also seems to ignore the undesired free evolution of the quantum computer caused by the energy differences of quantum states entering any given superposition. Another important point is that the hypothetical quantum computer will be a system of 10^3-10^6 qubits PLUS an extremely complex and monstrously sophisticated classical apparatus. This huge and strongly nonlinear system will generally exhibit instabilities and chaotic behavior.
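
    The counting argument is easy to reproduce (a back-of-the-envelope check in Python, assuming N = 1000 qubits):

      import math

      # A pure state of N qubits is described by 2^N complex amplitudes.
      N = 1000
      digits = N * math.log10(2)              # log10(2^N)
      print(f"2^{N} is about 10^{digits:.0f}")  # ~10^301, versus roughly 10^80 protons in the observable Universe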

  7. Irreconcilable difference between quantum walks and adiabatic quantum computing

    NASA Astrophysics Data System (ADS)

    Wong, Thomas G.; Meyer, David A.

    2016-06-01

    Continuous-time quantum walks and adiabatic quantum evolution are two general techniques for quantum computing, both of which are described by Hamiltonians that govern their evolutions by Schrödinger's equation. In the former, the Hamiltonian is fixed, while in the latter, the Hamiltonian varies with time. As a result, their formulations of Grover's algorithm evolve differently through Hilbert space. We show that this difference is fundamental; they cannot be made to evolve along each other's path without introducing structure more powerful than the standard oracle for unstructured search. For an adiabatic quantum evolution to evolve like the quantum walk search algorithm, it must interpolate between three fixed Hamiltonians, one of which is complex and introduces structure that is stronger than the oracle for unstructured search. Conversely, for a quantum walk to evolve along the path of the adiabatic search algorithm, it must be a chiral quantum walk on a weighted, directed star graph with structure that is also stronger than the oracle for unstructured search. Thus, the two techniques, although similar in being described by Hamiltonians that govern their evolution, compute by fundamentally irreconcilable means.

  8. LINCS: Livermore's network architecture. [Octopus computing network

    SciTech Connect

    Fletcher, J.G.

    1982-01-01

    Octopus, a local computing network that has been evolving at the Lawrence Livermore National Laboratory for over fifteen years, is currently undergoing a major revision. The primary purpose of the revision is to consolidate and redefine the variety of conventions and formats, which have grown up over the years, into a single standard family of protocols, the Livermore Interactive Network Communication Standard (LINCS). This standard treats the entire network as a single distributed operating system such that access to a computing resource is obtained in a single way, whether that resource is local (on the same computer as the accessing process) or remote (on another computer). LINCS encompasses not only communication but also such issues as the relationship of customer to server processes and the structure, naming, and protection of resources. The discussion includes: an overview of the Livermore user community and computing hardware, the functions and structure of each of the seven layers of LINCS protocol, the reasons why we have designed our own protocols and why we are dissatisfied by the directions that current protocol standards are taking.

  9. Quantum Computing in Silicon with Donor Electron Spins

    NASA Astrophysics Data System (ADS)

    Simmons, Michelle

    2014-03-01

    Extremely long electron and nuclear spin coherence times have recently been demonstrated in isotopically pure Si-28 making silicon one of the most promising semiconductor materials for spin based quantum information. The two level spin state of single electrons bound to shallow phosphorus donors in silicon in particular provide well defined, reproducible qubits and represent a promising system for a scalable quantum computer in silicon. An important challenge in these systems is the realisation of an architecture, where we can position donors within a crystalline environment with approx. 20-50nm separation, individually address each donor, manipulate the electron spins using ESR techniques and read-out their spin states. We have developed a unique fabrication strategy for a scalable quantum computer in silicon using scanning tunneling microscope hydrogen lithography to precisely position individual P donors in a Si crystal aligned with nanoscale precision to local control gates necessary to initialize, manipulate, and read-out the spin states. During this talk I will focus on demonstrating electronic transport characteristics and single-shot spin read-out of precisely-positioned P donors in Si. Additionally I will report on our recent progress in performing single spin rotations by locally applying oscillating magnetic fields and initial characterization of transport devices with two and three single donors. The challenges of scaling up to practical 2D architectures will also be discussed.

  10. Classical versus quantum errors in quantum computation of dynamical systems.

    PubMed

    Rossini, Davide; Benenti, Giuliano; Casati, Giulio

    2004-11-01

    We analyze the stability of a quantum algorithm simulating the quantum dynamics of a system with different regimes, ranging from global chaos to integrability. We compare, in these different regimes, the behavior of the fidelity of quantum motion when the system's parameters are perturbed or when there are unitary errors in the quantum gates implementing the quantum algorithm. While the first kind of errors has a classical limit, the second one has no classical analog. It is shown that, whereas in the first case ("classical errors") the decay of fidelity is very sensitive to the dynamical regime, in the second case ("quantum errors") it is almost independent of the dynamical behavior of the simulated system. Therefore, the rich variety of behaviors found in the study of the stability of quantum motion under "classical" perturbations has no correspondence in the fidelity of quantum computation under its natural perturbations. In particular, in this latter case it is not possible to recover the semiclassical regime in which the fidelity decays with a rate given by the classical Lyapunov exponent. PMID:15600737
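
    For readers who want to see the "quantum errors" scenario concretely, the following toy Python sketch (written for this summary; it is not the authors' sawtooth-map simulation, and all sizes and error strengths are arbitrary) propagates a small random circuit exactly and with a small random unitary error attached to every gate, then prints the fidelity between the two states as the circuit deepens:

      import numpy as np

      rng = np.random.default_rng(0)
      n = 4                                   # qubits
      dim = 2 ** n

      def random_unitary(d):
          """Haar-style random unitary via QR with phase correction."""
          q, r = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
          return q * (np.diag(r) / np.abs(np.diag(r)))

      def embed_two_qubit(gate, q0):
          """Embed a 4x4 gate acting on adjacent qubits (q0, q0+1) into the n-qubit space."""
          return np.kron(np.kron(np.eye(2 ** q0), gate), np.eye(2 ** (n - q0 - 2)))

      def perturb(gate, eps):
          """Compose the gate with exp(-i*eps*H) for a random Hermitian H (a 'quantum error')."""
          h = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
          h = (h + h.conj().T) / 2
          w, v = np.linalg.eigh(h)
          return gate @ (v @ np.diag(np.exp(-1j * eps * w)) @ v.conj().T)

      psi_exact = np.zeros(dim, complex); psi_exact[0] = 1.0
      psi_noisy = psi_exact.copy()
      for step in range(1, 41):
          q0 = step % (n - 1)
          g = random_unitary(4)
          psi_exact = embed_two_qubit(g, q0) @ psi_exact
          psi_noisy = embed_two_qubit(perturb(g, eps=0.05), q0) @ psi_noisy
          if step % 10 == 0:
              print(step, "fidelity =", abs(np.vdot(psi_exact, psi_noisy)) ** 2)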

  11. Computational architecture for integrated controls and structures design

    NASA Technical Reports Server (NTRS)

    Belvin, W. Keith; Park, K. C.

    1989-01-01

    To facilitate the development of control structure interaction (CSI) design methodology, a computational architecture for interdisciplinary design of active structures is presented. The emphasis of the computational procedure is to exploit existing sparse matrix structural analysis techniques, in-core data transfer with control synthesis programs, and versatility in the optimization methodology to avoid unnecessary structural or control calculations. The architecture is designed such that all required structure, control and optimization analyses are performed within one program. Hence, the optimization strategy is not unduly constrained by cold starts of existing structural analysis and control synthesis packages.

  12. Panel on future directions in parallel computer architecture

    SciTech Connect

    VanTilborg, A.M. )

    1989-06-01

    One of the program highlights of the 15th Annual International Symposium on Computer Architecture, held May 30 - June 2, 1988 in Honolulu, was a panel session on future directions in parallel computer architecture. The panel was organized and chaired by the author, and was comprised of Prof. Jack Dennis (NASA Ames Research Institute for Advanced Computer Science), Prof. H.T. Kung (Carnegie Mellon), and Dr. Burton Smith (Tera Computer Company). The objective of the panel was to identify the likely trajectory of future parallel computer system progress, particularly from the standpoint of marketplace acceptance. Approximately 250 attendees participated in the session, in which each panelist began with a ten minute viewgraph explanation of his views, followed by an open and sometimes lively exchange with the audience and fellow panelists. The session ran for ninety minutes.

  13. Qubus ancilla-driven quantum computation

    SciTech Connect

    Brown, Katherine Louise; De, Suvabrata; Kendon, Viv; Munro, Bill

    2014-12-04

    Hybrid matter-optical systems offer a robust, scalable path to quantum computation. Such systems have an ancilla which acts as a bus connecting the qubits. We demonstrate how using a continuous variable qubus as the ancilla provides savings in the total number of operations required when computing with many qubits.

  14. Architecture and grid application of cluster computing system

    NASA Astrophysics Data System (ADS)

    Lv, Yi; Yu, Shuiqin; Mao, Youju

    2004-11-01

    Recently, people have paid more attention to grid technology. It can not only connect all kinds of resources in the network, but also put them into a super transparent computing environment in which customers can realize meta-computing and share computing resources. Traditional parallel computing systems, such as SMP (symmetric multiprocessor) and MPP (massively parallel processor) machines, use multiple processors to raise computing speed in a tightly coupled way, so the flexibility and scalability of such systems are limited; as a result, they cannot meet the requirements of grid technology. In this paper, the architecture of a cluster computing system applied in grid nodes is introduced. It mainly covers the following aspects. First, the network architecture of the cluster computing system in grid nodes is analyzed and designed. Second, how to realize distributed computing (including coordinated computing and shared computing) in the cluster computing system in grid nodes to construct virtual node computers is discussed. Last, communication among grid nodes is analyzed; in other words, how to present a single system image so that all service requests from customers can be met by forwarding them to the grid nodes.

  15. Neuromorphic Computing – From Materials Research to Systems Architecture Roundtable

    SciTech Connect

    Schuller, Ivan K.; Stevens, Rick; Pino, Robinson; Pechan, Michael

    2015-10-29

    Computation in its many forms is the engine that fuels our modern civilization. Modern computation—based on the von Neumann architecture—has allowed, until now, the development of continuous improvements, as predicted by Moore’s law. However, computation using current architectures and materials will inevitably—within the next 10 years—reach a limit because of fundamental scientific reasons. DOE convened a roundtable of experts in neuromorphic computing systems, materials science, and computer science in Washington on October 29-30, 2015 to address the following basic questions: Can brain-like (“neuromorphic”) computing devices based on new material concepts and systems be developed to dramatically outperform conventional CMOS based technology? If so, what are the basic research challenges for materials science and computing? The overarching answer that emerged was: The development of novel functional materials and devices incorporated into unique architectures will allow a revolutionary technological leap toward the implementation of a fully “neuromorphic” computer. To address this challenge, the following issues were considered: the main differences between neuromorphic and conventional computing as related to signaling models, timing/clock, non-volatile memory, architecture, fault tolerance, integrated memory and compute, noise tolerance, analog vs. digital, and in situ learning; new neuromorphic architectures needed to produce lower energy consumption, potential novel nanostructured materials, and enhanced computation; device and materials properties needed to implement functions such as hysteresis, stability, and fault tolerance; and comparisons of different implementations (spin torque, memristors, resistive switching, phase change, and optical schemes) for enhanced breakthroughs in performance, cost, fault tolerance, and/or manufacturability.

  16. Biologically inspired path to quantum computer

    NASA Astrophysics Data System (ADS)

    Ogryzko, Vasily; Ozhigov, Yuri

    2014-12-01

    We describe an approach to quantum computer inspired by the information processing at the molecular level in living cells. It is based on the separation of a small ensemble of qubits inside the living system (e.g., a bacterial cell), such that coherent quantum states of this ensemble remain practically unchanged for a long time. We use the notion of a quantum kernel to describe such an ensemble. Quantum kernel is not strictly connected with certain particles; it permanently exchanges atoms and molecules with the environment, which makes quantum kernel a virtual notion. There are many reasons to expect that the state of quantum kernel of a living system can be treated as the stationary state of some Hamiltonian. While the quantum kernel is responsible for the stability of dynamics at the time scale of cellular life, at the longer inter-generation time scale it can change, varying smoothly in the course of biological evolution. To the first level of approximation, quantum kernel can be described in the framework of qubit modification of Jaynes-Cummings-Hubbard model, in which the relaxation corresponds to the exchange of matter between quantum kernel and the rest of the cell and is represented as Lindblad super-operators.

  17. Effective pure states for bulk quantum computation

    SciTech Connect

    Knill, E.; Chuang, I.; Laflamme, R.

    1998-05-01

    In bulk quantum computation one can manipulate a large number of indistinguishable quantum computers by parallel unitary operations and measure expectation values of certain observables with limited sensitivity. The initial state of each computer in the ensemble is known but not pure. Methods for obtaining effective pure input states by a series of manipulations have been described by Gershenfeld and Chuang (logical labeling) [Science 275, 350 (1997)] and Cory et al. (spatial averaging) [Proc. Natl. Acad. Sci. USA 94, 1634 (1997)] for the case of quantum computation with nuclear magnetic resonance. We give a different technique called temporal averaging. This method is based on classical randomization, requires no ancilla quantum bits, and can be implemented in nuclear magnetic resonance without using gradient fields. We introduce several temporal averaging algorithms suitable for both high-temperature and low-temperature bulk quantum computing and analyze the signal-to-noise behavior of each. Most of these algorithms require only a constant multiple of the number of experiments needed by the other methods for creating effective pure states. © 1998 The American Physical Society
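
    The temporal-averaging idea can be illustrated in a few lines (an illustrative numpy sketch for two qubits, not the paper's NMR pulse sequences; the populations below are made up): averaging experiments in which the populations of the three basis states other than |00> are cyclically permuted yields a density matrix that is a multiple of the identity plus an effective pure |00> component.

      import numpy as np

      p = np.array([0.28, 0.26, 0.24, 0.22])          # thermal-like populations, sum to 1
      rho = np.diag(p)

      # Permutation that cycles basis states 1 -> 2 -> 3 -> 1 and leaves |00> fixed.
      P = np.zeros((4, 4)); P[0, 0] = P[2, 1] = P[3, 2] = P[1, 3] = 1.0
      perms = [np.eye(4), P, P @ P]

      rho_avg = sum(U @ rho @ U.T for U in perms) / 3
      m = (p[1] + p[2] + p[3]) / 3
      expected = m * np.eye(4) + (p[0] - m) * np.diag([1, 0, 0, 0])

      print(np.allclose(rho_avg, expected))           # True: identity + effective pure |00><00|
      print(np.round(np.diag(rho_avg), 4))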

  18. Quantum computations: algorithms and error correction

    NASA Astrophysics Data System (ADS)

    Kitaev, A. Yu

    1997-12-01

    Contents: §0. Introduction; §1. Abelian problem on the stabilizer; §2. Classical models of computations (2.1. Boolean schemes and sequences of operations; 2.2. Reversible computations); §3. Quantum formalism (3.1. Basic notions and notation; 3.2. Transformations of mixed states; 3.3. Accuracy); §4. Quantum models of computations (4.1. Definitions and basic properties; 4.2. Construction of various operators from the elements of a basis; 4.3. Generalized quantum control and universal schemes); §5. Measurement operators; §6. Polynomial quantum algorithm for the stabilizer problem; §7. Computations with perturbations: the choice of a model; §8. Quantum codes (definitions and general properties) (8.1. Basic notions and ideas; 8.2. One-to-one codes; 8.3. Many-to-one codes); §9. Symplectic (additive) codes (9.1. Algebraic preparation; 9.2. The basic construction; 9.3. Error correction procedure; 9.4. Torus codes); §10. Error correction in the computation process: general principles (10.1. Definitions and results; 10.2. Proofs); §11. Error correction: concrete procedures (11.1. The symplecto-classical case; 11.2. The case of a complete basis); Bibliography.

  19. Power of one qumode for quantum computation

    NASA Astrophysics Data System (ADS)

    Liu, Nana; Thompson, Jayne; Weedbrook, Christian; Lloyd, Seth; Vedral, Vlatko; Gu, Mile; Modi, Kavan

    2016-05-01

    Although quantum computers are capable of solving problems like factoring exponentially faster than the best-known classical algorithms, determining the resources responsible for their computational power remains unclear. An important class of problems where quantum computers possess an advantage is phase estimation, which includes applications like factoring. We introduce a computational model based on a single squeezed state resource that can perform phase estimation, which we call the power of one qumode. This model is inspired by an interesting computational model known as deterministic quantum computing with one quantum bit (DQC1). Using the power of one qumode, we identify that the amount of squeezing is sufficient to quantify the resource requirements of different computational problems based on phase estimation. In particular, we can use the amount of squeezing to quantitatively relate the resource requirements of DQC1 and factoring. Furthermore, we can connect the squeezing to other known resources like precision, energy, qudit dimensionality, and qubit number. We show the circumstances under which they can likewise be considered good resources.
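
    Since the model is motivated by DQC1, a small density-matrix sketch of the DQC1 trace-estimation primitive may help fix ideas (illustrative Python, not from the paper; the random unitary U and register size are arbitrary): a single pure control qubit coupled to a maximally mixed n-qubit register picks up Tr(U)/2^n in its coherence.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 3
      d = 2 ** n

      # Random unitary U on the register (QR-based construction).
      q, r = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
      U = q * (np.diag(r) / np.abs(np.diag(r)))

      plus = np.array([[0.5, 0.5], [0.5, 0.5]])          # control in |+><+| (Hadamard on |0>)
      rho = np.kron(plus, np.eye(d) / d)                 # control (x) maximally mixed register

      CU = np.kron(np.diag([1, 0]), np.eye(d)) + np.kron(np.diag([0, 1]), U)
      rho = CU @ rho @ CU.conj().T

      # Reduced control state; its off-diagonal element equals Tr(U) / 2^(n+1).
      rho_c = np.trace(rho.reshape(2, d, 2, d), axis1=1, axis2=3)
      trace_estimate = 2 ** (n + 1) * rho_c[1, 0]

      print("Tr(U)      =", np.trace(U))
      print("DQC1 value =", trace_estimate)

    In a laboratory DQC1 run one would instead estimate the real and imaginary parts of Tr(U)/2^n from repeated measurements on the control qubit; the sketch simply reads the same quantity off the simulated reduced density matrix.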

  20. Universal quantum computation with unlabelled qubits

    NASA Astrophysics Data System (ADS)

    Severini, Simone

    2006-06-01

    We show that an nth root of the Walsh-Hadamard transform (obtained from the Hadamard gate and a cyclic permutation of the qubits), together with two diagonal matrices, namely a local qubit-flip (for a fixed but arbitrary qubit) and a non-local phase-flip (for a fixed but arbitrary coefficient), can do universal quantum computation on n qubits. A quantum computation, making use of n qubits and based on these operations, is then a word of variable length, but whose letters are always taken from an alphabet of cardinality three. Therefore, in contrast with other universal sets, no choice of qubit lines is needed for the application of the operations described here. A quantum algorithm based on this set can be interpreted as a discrete diffusion of a quantum particle on a de Bruijn graph, corrected on-the-fly by auxiliary modifications of the phases associated with the arcs.

  1. Accelerating commutation circuits in quantum computer networks

    NASA Astrophysics Data System (ADS)

    Jiang, Min; Huang, Xu; Chen, Xiaoping; Zhang, Zeng-ke

    2012-12-01

    In a high speed and packet-switched quantum computer network, a packet routing delay often leads to traffic jams, becoming a severe bottleneck for speeding up the transmission rate. Based on the delayed commutation circuit proposed in Phys. Rev. Lett. 97, 110502 (2006), we present an improved scheme for accelerating network transmission. For two more realistic scenarios, we utilize the characteristic of a quantum state to simultaneously implement a data switch and transmission that makes it possible to reduce the packet delay and route a qubit packet even before its address is determined. This circuit is further extended to the quantum network for the transmission of the unknown quantum information. The analysis demonstrates that quantum communication technology can considerably reduce the processing delay time and build faster and more efficient packet-switched networks.

  2. Quantum learning in a quantum lattice gas computer

    NASA Astrophysics Data System (ADS)

    Behrman, Elizabeth; Steck, James

    2015-04-01

    Quantum lattice gas is the logical generalization of quantum cellular automata. At low energy the dynamics are well described by the Gross-Pitaevskii equation in the mean field limit, which is an effective nonlinear interaction model of a Bose-Einstein condensate. In previous work, we have shown in simulation that both spatial and temporal models of quantum learning computers can be used to ``design'' non-trivial quantum algorithms. The advantages of quantum learning over the usual practice of using quantum gate building blocks are, first, the rapidity with which the problem can be solved, without having to decompose the problem; second, the fact that our technique can be used readily even when the problem, or the operator, is not well understood; and, third, that because the interactions are a natural part of the physical system, connectivity is automatic. The advantage to quantum learning obviously grows with the size and the complexity of the problem. We develop and present our learning algorithm as applied to the mean field lattice gas equation, and present a few preliminary results.

  3. Quantum learning for a quantum lattice gas computer

    NASA Astrophysics Data System (ADS)

    Behrman, Elizabeth; Steck, James

    2015-03-01

    Quantum lattice gas is the logical generalization of quantum cellular automata. At low energy the dynamics are well described by the Gross-Pitaevskii equation in the mean field limit, which is an effective nonlinear interaction model of a Bose-Einstein condensate. In previous work, we have shown in simulation that both spatial and temporal models of quantum learning computers can be used to ``design'' non-trivial quantum algorithms. The advantages of quantum learning over the usual practice of using quantum gate building blocks are, first, the rapidity with which the problem can be solved, without having to decompose the problem; second, the fact that our technique can be used readily even when the problem, or the operator, is not well understood; and, third, that because the interactions are a natural part of the physical system, connectivity is automatic. The advantage to quantum learning obviously grows with the size and the complexity of the problem. We develop and present our learning algorithm as applied to the mean field lattice gas equation, and present a few preliminary results.

  4. Entanglement and Quantum Computation: An Overview

    SciTech Connect

    Perez, R.B.

    2000-06-27

    This report presents a selective compilation of basic facts from the fields of particle entanglement and quantum information processing, prepared for those non-experts in these fields who may have an interest in an area of physics showing counterintuitive, "spooky" (Einstein's words) behavior. In fact, quantum information processing could, in the near future, provide a new technology to sustain the benefits to the U.S. economy due to advanced computer technology.

  5. Computations in quantum mechanics made easy

    NASA Astrophysics Data System (ADS)

    Korsch, H. J.; Rapedius, K.

    2016-09-01

    Convenient and simple numerical techniques for performing quantum computations based on matrix representations of Hilbert space operators are presented and illustrated by various examples. The applications include the calculations of spectral and dynamical properties for one-dimensional and two-dimensional single-particle systems as well as bosonic many-particle and open quantum systems. Due to their technical simplicity these methods are well suited as a tool for teaching quantum mechanics to undergraduates and graduates. Explicit implementations of the presented numerical methods in Matlab are given.
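
    The paper's examples are provided in Matlab; as an independent illustration of the same matrix-representation approach (a sketch written for this summary, not the authors' code), one can build the harmonic-oscillator Hamiltonian from truncated ladder-operator matrices and read off the low-lying spectrum E_n = n + 1/2 (units with hbar = omega = 1):

      import numpy as np

      N = 40                                        # basis truncation
      a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # annihilation operator in the number basis
      x = (a + a.T) / np.sqrt(2)                    # position operator
      p = 1j * (a.T - a) / np.sqrt(2)               # momentum operator
      H = 0.5 * (p @ p + x @ x)                     # H = p^2/2 + x^2/2

      evals = np.sort(np.linalg.eigvalsh(H))
      print(np.round(evals[:5].real, 6))            # ~ [0.5, 1.5, 2.5, 3.5, 4.5]; truncation only affects the top levels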

  6. Information-theoretic temporal Bell inequality and quantum computation

    SciTech Connect

    Morikoshi, Fumiaki

    2006-05-15

    An information-theoretic temporal Bell inequality is formulated to contrast classical and quantum computations. Any classical algorithm satisfies the inequality, while quantum ones can violate it. Therefore, the violation of the inequality is an immediate consequence of the quantumness in the computation. Furthermore, this approach suggests a notion of temporal nonlocality in quantum computation.

  7. Pipelined CPU Design with FPGA in Teaching Computer Architecture

    ERIC Educational Resources Information Center

    Lee, Jong Hyuk; Lee, Seung Eun; Yu, Heon Chang; Suh, Taeweon

    2012-01-01

    This paper presents a pipelined CPU design project with a field programmable gate array (FPGA) system in a computer architecture course. The class project is a five-stage pipelined 32-bit MIPS design with experiments on the Altera DE2 board. For proper scheduling, milestones were set every one or two weeks to help students complete the project on…

  8. In-Memory Computing Architectures for Sparse Distributed Memory.

    PubMed

    Kang, Mingu; Shanbhag, Naresh R

    2016-08-01

    This paper presents an energy-efficient and high-throughput architecture for Sparse Distributed Memory (SDM)-a computational model of the human brain [1]. The proposed SDM architecture is based on the recently proposed in-memory computing kernel for machine learning applications called Compute Memory (CM) [2], [3]. CM achieves energy and throughput efficiencies by deeply embedding computation into the memory array. SDM-specific techniques such as hierarchical binary decision (HBD) are employed to reduce the delay and energy further. The CM-based SDM (CM-SDM) is a mixed-signal circuit, and hence circuit-aware behavioral, energy, and delay models in a 65 nm CMOS process are developed in order to predict system performance of SDM architectures in the auto- and hetero-associative modes. The delay and energy models indicate that CM-SDM, in general, can achieve up to 25 × and 12 × delay and energy reduction, respectively, over conventional SDM. When classifying 16 × 16 binary images with high noise levels (input bad pixel ratios: 15%-25%) into nine classes, all SDM architectures are able to generate output bad pixel ratios (Bo) ≤ 2%. The CM-SDM exhibits negligible loss in accuracy, i.e., its Bo degradation is within 0.4% as compared to that of the conventional SDM. PMID:27305686
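
    For readers unfamiliar with SDM itself, a toy software model of Kanerva-style sparse distributed memory in the auto-associative mode is sketched below (illustrative Python, unrelated to the paper's mixed-signal Compute Memory circuits; all parameters are arbitrary): patterns are written to every hard location within a Hamming radius of the pattern address and recalled by a majority vote over the locations activated by a noisy cue.

      import numpy as np

      rng = np.random.default_rng(2)
      N, M, RADIUS = 256, 2000, 112                    # word length, hard locations, activation radius

      addresses = rng.integers(0, 2, size=(M, N))      # random hard-location addresses
      counters = np.zeros((M, N), dtype=int)

      def active(x):
          return np.count_nonzero(addresses != x, axis=1) <= RADIUS   # Hamming-ball activation

      def write(x):
          counters[active(x)] += 2 * x - 1             # store a bipolar (+1/-1) copy of x

      def read(x):
          return (counters[active(x)].sum(axis=0) >= 0).astype(int)   # majority vote

      # Auto-associative use: store patterns, then recall from a noisy cue.
      patterns = rng.integers(0, 2, size=(5, N))
      for pat in patterns:
          write(pat)

      cue = patterns[0].copy()
      flip = rng.choice(N, size=40, replace=False)     # ~15% "bad pixels"
      cue[flip] ^= 1
      recalled = read(cue)
      print("bad bits in cue     :", np.count_nonzero(cue != patterns[0]))
      print("bad bits after read :", np.count_nonzero(recalled != patterns[0]))

    The distributed write is the key design choice: each pattern is smeared over tens of hard locations, so a cue with many corrupted bits still activates mostly the right counters and the majority vote cleans up the recall.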

  9. Phonon-based scalable quantum computing and sensing (Presentation Video)

    NASA Astrophysics Data System (ADS)

    El-Kady, Ihab

    2015-04-01

    Quantum computing fundamentally depends on the ability to concurrently entangle and individually address/control a large number of qubits. In general, the primary inhibitors of large scale entanglement are qubit dependent; for example inhomogeneity in quantum dots, spectral crowding brought about by proximity-based entanglement in ions, weak interactions of neutral atoms, and the fabrication tolerances in the case of Si-vacancies or SQUIDs. We propose an inherently scalable solid-state qubit system with individually addressable qubits based on the coupling of a phonon with an acceptor impurity in a high-Q Phononic Crystal resonant cavity. Due to their unique nonlinear properties, phonons enable new opportunities for quantum devices and physics. We present a phononic crystal-based platform for observing the phonon analogy of cavity quantum electrodynamics, called phonodynamics, in a solid-state system. Practical schemes involve selective placement of a single acceptor atom in the peak of the strain field in a high-Q phononic crystal cavity that enables strong coupling of the phonon modes to the energy levels of the atom. A qubit is then created by entangling a phonon at the resonance frequency of the cavity with the atomic acceptor states. We show theoretical optimization of the cavity design and excitation waveguides, along with estimated performance figures of the phoniton system. Qubits based on this half-sound, half-matter quasi-particle, may outcompete other quantum architectures in terms of combined emission rate, coherence lifetime, and fabrication demands.
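
    The strong-coupling physics invoked above is, in its simplest form, Jaynes-Cummings-like; the following sketch (illustrative Python, not the authors' design calculation; the frequencies and coupling are arbitrary) diagonalizes a single bosonic mode, here standing in for the cavity phonon, coupled to a two-level acceptor and exhibits the vacuum Rabi doublet split by 2g:

      import numpy as np

      nmax, wc, wa, g = 10, 1.0, 1.0, 0.05               # Fock cutoff, mode freq, atom freq, coupling

      a = np.diag(np.sqrt(np.arange(1, nmax)), k=1)      # phonon annihilation operator
      sm = np.array([[0, 1], [0, 0]])                    # atomic lowering operator
      I_c, I_a = np.eye(nmax), np.eye(2)

      H = (wc * np.kron(a.T @ a, I_a)                    # phonon energy
           + wa * np.kron(I_c, sm.T @ sm)                # two-level acceptor energy
           + g * (np.kron(a, sm.T) + np.kron(a.T, sm)))  # resonant excitation exchange

      evals = np.sort(np.linalg.eigvalsh(H))
      print(np.round(evals[:3], 4))                      # ground state at 0, then the doublet split by ~2g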

  10. Trading Classical and Quantum Computational Resources

    NASA Astrophysics Data System (ADS)

    Bravyi, Sergey; Smith, Graeme; Smolin, John A.

    2016-04-01

    We propose examples of a hybrid quantum-classical simulation where a classical computer assisted by a small quantum processor can efficiently simulate a larger quantum system. First, we consider sparse quantum circuits such that each qubit participates in O(1) two-qubit gates. It is shown that any sparse circuit on n + k qubits can be simulated by sparse circuits on n qubits and a classical processing that takes time 2^O(k) poly(n). Second, we study Pauli-based computation (PBC), where allowed operations are nondestructive eigenvalue measurements of n-qubit Pauli operators. The computation begins by initializing each qubit in the so-called magic state. This model is known to be equivalent to the universal quantum computer. We show that any PBC on n + k qubits can be simulated by PBCs on n qubits and a classical processing that takes time 2^O(k) poly(n). Finally, we propose a purely classical algorithm that can simulate a PBC on n qubits in a time 2^(αn) poly(n), where α ≈ 0.94. This improves upon the brute-force simulation method, which takes time 2^n poly(n). Our algorithm exploits the fact that n-fold tensor products of magic states admit a low-rank decomposition into n-qubit stabilizer states.

  11. Quantum Computing Without Wavefunctions: Time-Dependent Density Functional Theory for Universal Quantum Computation

    PubMed Central

    Tempel, David G.; Aspuru-Guzik, Alán

    2012-01-01

    We prove that the theorems of TDDFT can be extended to a class of qubit Hamiltonians that are universal for quantum computation. The theorems of TDDFT applied to universal Hamiltonians imply that single-qubit expectation values can be used as the basic variables in quantum computation and information theory, rather than wavefunctions. From a practical standpoint this opens the possibility of approximating observables of interest in quantum computations directly in terms of single-qubit quantities (i.e. as density functionals). Additionally, we also demonstrate that TDDFT provides an exact prescription for simulating universal Hamiltonians with other universal Hamiltonians that have different, and possibly easier-to-realize two-qubit interactions. This establishes the foundations of TDDFT for quantum computation and opens the possibility of developing density functionals for use in quantum algorithms. PMID:22553483

  12. Quantum computing without wavefunctions: time-dependent density functional theory for universal quantum computation.

    PubMed

    Tempel, David G; Aspuru-Guzik, Alán

    2012-01-01

    We prove that the theorems of TDDFT can be extended to a class of qubit Hamiltonians that are universal for quantum computation. The theorems of TDDFT applied to universal Hamiltonians imply that single-qubit expectation values can be used as the basic variables in quantum computation and information theory, rather than wavefunctions. From a practical standpoint, this opens the possibility of approximating observables of interest in quantum computations directly in terms of single-qubit quantities (i.e. as density functionals). Additionally, we demonstrate that TDDFT provides an exact prescription for simulating universal Hamiltonians with other universal Hamiltonians that have different, and possibly easier-to-realize, two-qubit interactions. This establishes the foundations of TDDFT for quantum computation and opens the possibility of developing density functionals for use in quantum algorithms. PMID:22553483

  13. Random Numbers and Quantum Computers

    ERIC Educational Resources Information Center

    McCartney, Mark; Glass, David

    2002-01-01

    The topic of random numbers is investigated in such a way as to illustrate links between mathematics, physics and computer science. First, the generation of random numbers by a classical computer using the linear congruential generator and logistic map is considered. It is noted that these procedures yield only pseudo-random numbers since…
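
    For readers who want to experiment with the two classical generators named in the abstract, the sketch below implements a textbook linear congruential generator and the logistic map. The particular constants are common illustrative choices, not ones prescribed by the article.

        # Two classical pseudo-random number generators mentioned above.

        def lcg(seed, a=1664525, c=1013904223, m=2**32):
            """Linear congruential generator: x_{k+1} = (a*x_k + c) mod m."""
            x = seed
            while True:
                x = (a * x + c) % m
                yield x / m          # rescale to [0, 1)

        def logistic(x0, r=4.0):
            """Logistic map x_{k+1} = r*x_k*(1 - x_k); chaotic for r = 4."""
            x = x0
            while True:
                x = r * x * (1.0 - x)
                yield x

        gen1, gen2 = lcg(seed=12345), logistic(x0=0.123)
        print([round(next(gen1), 4) for _ in range(5)])
        print([round(next(gen2), 4) for _ in range(5)])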

  14. Towards universal quantum computation through relativistic motion

    PubMed Central

    Bruschi, David Edward; Sabín, Carlos; Kok, Pieter; Johansson, Göran; Delsing, Per; Fuentes, Ivette

    2016-01-01

    We show how to use relativistic motion to generate continuous variable Gaussian cluster states within cavity modes. Our results can be demonstrated experimentally using superconducting circuits where tuneable boundary conditions correspond to mirrors moving with velocities close to the speed of light. In particular, we propose the generation of a quadripartite square cluster state as a first example that can be readily implemented in the laboratory. Since cluster states are universal resources for one-way quantum computation, our results pave the way for relativistic quantum computation schemes. PMID:26860584

  15. Percolation, renormalization, and quantum computing with nondeterministic gates.

    PubMed

    Kieling, K; Rudolph, T; Eisert, J

    2007-09-28

    We apply a notion of static renormalization to the preparation of entangled states for quantum computing, exploiting ideas from percolation theory. Such a strategy yields a novel way to cope with the randomness of nondeterministic quantum gates. This is most relevant in the context of optical architectures, where probabilistic gates are common, and cold atoms in optical lattices, where hole defects occur. We demonstrate how to efficiently construct cluster states without the need for rerouting, thereby avoiding a massive amount of conditional dynamics; we furthermore show that except for a single layer of gates during the preparation, all subsequent operations can be shifted to the final adapted single-qubit measurements. Remarkably, cluster state preparation is achieved using essentially the same scaling in resources as if deterministic gates were available. PMID:17930565

  16. Mimicking time evolution within a quantum ground state: Ground-state quantum computation, cloning, and teleportation

    SciTech Connect

    Mizel, Ari

    2004-07-01

    Ground-state quantum computers mimic quantum-mechanical time evolution within the amplitudes of a time-independent quantum state. We explore the principles that constrain this mimicking. A no-cloning argument is found to impose strong restrictions. It is shown, however, that there is flexibility that can be exploited using quantum teleportation methods to improve ground-state quantum computer design.

  17. Quantum game simulator, using the circuit model of quantum computation

    NASA Astrophysics Data System (ADS)

    Vlachos, Panagiotis; Karafyllidis, Ioannis G.

    2009-10-01

    We present a general two-player quantum game simulator that can simulate any two-player quantum game described by a 2×2 payoff matrix (two-strategy games). The user can determine the payoff matrices for both players, their strategies and the amount of entanglement between their initial strategies. The outputs of the simulator are the expected payoffs of each player as a function of the other player's strategy parameters and the amount of entanglement. The simulator also produces contour plots that divide the strategy spaces of the game into regions in which players can get larger payoffs if they choose to use a quantum strategy against any classical one. We also apply the simulator to two well-known quantum games, the Battle of the Sexes and the Chicken game. Program summary: Program title: Quantum Game Simulator (QGS) Catalogue identifier: AEED_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEED_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3416 No. of bytes in distributed program, including test data, etc.: 583 553 Distribution format: tar.gz Programming language: Matlab R2008a (C) Computer: Any computer that can sufficiently run Matlab R2008a Operating system: Any system that can sufficiently run Matlab R2008a Classification: 4.15 Nature of problem: Simulation of two-player quantum games described by a payoff matrix. Solution method: The program calculates the matrices that comprise the Eisert setup for quantum games based on the quantum circuit model. There are 5 parameters that can be altered. We define 3 of them as constant. We play the quantum game for all possible values of the other 2 parameters and store the results in a matrix. Unusual features: The software provides an easy way of simulating any two-player quantum games. Running time: Approximately
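
    The Eisert-style circuit such a simulator is built around can be reproduced in a few lines for a single strategy pair. The sketch below uses NumPy rather than the program's Matlab, and the two-parameter SU(2) strategy and the payoff ordering CC, CD, DC, DD are common conventions that may differ in detail from the program's definitions; it computes both players' expected payoffs for a given entanglement parameter gamma.

        import numpy as np

        def strategy(theta, phi):
            """Two-parameter quantum strategy U(theta, phi)."""
            return np.array([[np.exp(1j*phi)*np.cos(theta/2), np.sin(theta/2)],
                             [-np.sin(theta/2), np.exp(-1j*phi)*np.cos(theta/2)]])

        def expected_payoffs(Ua, Ub, gamma, pay_a, pay_b):
            """Eisert scheme: |psi> = J^dagger (Ua x Ub) J |CC>."""
            D = np.array([[0, 1], [-1, 0]], dtype=complex)      # 'defect' flip operator
            J = np.cos(gamma/2)*np.eye(4) + 1j*np.sin(gamma/2)*np.kron(D, D)
            psi = J.conj().T @ np.kron(Ua, Ub) @ J @ np.array([1, 0, 0, 0], dtype=complex)
            probs = np.abs(psi)**2                              # outcome order: CC, CD, DC, DD
            return probs @ pay_a, probs @ pay_b

        # Prisoner's dilemma payoffs, maximal entanglement, both play the 'quantum' move Q.
        pay_a, pay_b = np.array([3, 0, 5, 1]), np.array([3, 5, 0, 1])
        Q = strategy(0.0, np.pi/2)
        print(expected_payoffs(Q, Q, np.pi/2, pay_a, pay_b))    # ~ (3.0, 3.0)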

  18. Portable computer system architecture for the Space Station Freedom program

    NASA Technical Reports Server (NTRS)

    Alena, Richard; Liu, Yuan-Kwei; Fernquist, Alan R.

    1993-01-01

    This paper outlines various mission requirements and technical approaches that support the potential use of portable computers in several defined activities within the Space Station Freedom (SSF) program. Specifically, the use of portable computers as consoles for both spacecraft control and payload applications is presented. Various issues and proposed solutions regarding the incorporation of portable computers within the program are presented. The primary issues presented regard architecture (standard interface for expansion, advanced processors and displays), integration (methods of high-speed data communication, peripheral interfaces, and interconnectivity within various support networks), and evolution (wireless communications and multimedia data interface methods).

  19. Reviews of computing technology: A review of compound document architectures

    SciTech Connect

    Hudson, B.J.

    1991-10-01

    This review of computing technology will define, describe, and give examples of various approaches to document management through the use of compound document architectures. Experts agree that only 10% of business information exists in machine-readable form, but much of what is stored is not in useful form. As a result, the average business document is copied over a dozen times during its life and duplicate copies are stored in numerous locations. The goal of compound document architectures is to provide an information support environment where rapid access to the correct information in the proper format is simplified. A compound document architecture provides structure to seemingly unstructured electronic documents, and standardizes the methods for interchange and access of entire or partial documents by authors and users.

  20. Universality of computation in real quantum theory

    NASA Astrophysics Data System (ADS)

    Belenchia, A.; D'Ariano, G. M.; Perinotti, P.

    2013-10-01

    Recently de la Torre et al. (Phys. Rev. Lett., 109 (2012) 090403) reconstructed Quantum Theory from its local structure on the basis of local discriminability and the existence of a one-parameter group of bipartite transformations containing an entangling gate. This result relies on universality of any entangling gate for quantum computation. Here we prove universality of C-NOT with local gates for Real Quantum Theory (RQT), showing that the universality requirement would not be sufficient for the result, whereas local discriminability and the local qubit structure play a crucial role. For reversible computation, generally an extra rebit is needed for RQT. As a by-product we also provide a short proof of universality of C-NOT for standard complex quantum theory (CQT).

  1. Ion photon networks for quantum computing and quantum repeaters

    NASA Astrophysics Data System (ADS)

    Clark, Susan; Hayes, David; Hucul, David; Inlek, I. Volkan; Monroe, Christopher

    2013-03-01

    Quantum information based on ion-trap technology is well regarded for its stability, high detection fidelity, and ease of manipulation. Here we demonstrate a proof-of-principle experiment for scaling this technology to large numbers of ions in separate traps by linking the ions via photons. We present results for entanglement generated between distant ions via probabilistic photonic gates, which is then swapped to ions in the same trap via deterministic Coulombic gates. We report fidelities above 65% and show encouraging preliminary results for the next stage of experimental improvement. Such a system could be used for quantum computing requiring large numbers of qubits or for quantum repeaters requiring the qubits to be separated by large distances.

  2. Simulations of Probabilities for Quantum Computing

    NASA Technical Reports Server (NTRS)

    Zak, M.

    1996-01-01

    It has been demonstrated that classical probabilities, and in particular a probabilistic Turing machine, can be simulated by combining chaos and non-Lipschitz dynamics, without the use of any man-made devices (such as random number generators). Self-organizing properties of systems coupling simulated and calculated probabilities and their link to quantum computations are discussed.

  3. Blind Quantum Computing with Weak Coherent Pulses

    NASA Astrophysics Data System (ADS)

    Dunjko, Vedran; Kashefi, Elham; Leverrier, Anthony

    2012-05-01

    The universal blind quantum computation (UBQC) protocol [A. Broadbent, J. Fitzsimons, and E. Kashefi, in Proceedings of the 50th Annual IEEE Symposium on Foundations of Computer Science (IEEE Computer Society, Los Alamitos, CA, USA, 2009), pp. 517-526.] allows a client to perform quantum computation on a remote server. In an ideal setting, perfect privacy is guaranteed if the client is capable of producing specific, randomly chosen single qubit states. While from a theoretical point of view, this may constitute the lowest possible quantum requirement, from a pragmatic point of view, generation of such states to be sent over long distances can never be achieved perfectly. We introduce the concept of ɛ-blindness for UBQC, in analogy to the concept of ɛ-security developed for other cryptographic protocols, allowing us to characterize the robustness and security properties of the protocol under possible imperfections. We also present a remote blind single qubit preparation protocol with weak coherent pulses for the client to prepare, in a delegated fashion, quantum states arbitrarily close to perfect random single qubit states. This allows us to efficiently achieve ɛ-blind UBQC for any ɛ>0, even if the channel between the client and the server is arbitrarily lossy.

  4. Hybrid parallel computing architecture for multiview phase shifting

    NASA Astrophysics Data System (ADS)

    Zhong, Kai; Li, Zhongwei; Zhou, Xiaohui; Shi, Yusheng; Wang, Congjun

    2014-11-01

    The multiview phase-shifting method shows its powerful capability in achieving high resolution three-dimensional (3-D) shape measurement. Unfortunately, this ability results in very high computation costs and 3-D computations have to be processed offline. To realize real-time 3-D shape measurement, a hybrid parallel computing architecture is proposed for multiview phase shifting. In this architecture, the central processing unit can co-operate with the graphic processing unit (GPU) to achieve hybrid parallel computing. The high computation cost procedures, including lens distortion rectification, phase computation, correspondence, and 3-D reconstruction, are implemented in the GPU, and a three-layer kernel function model is designed to simultaneously realize coarse-grained and fine-grained parallel computing. Experimental results verify that the developed system can perform 50 fps (frames per second) real-time 3-D measurement with 260 K 3-D points per frame. A speedup of up to 180 times is obtained for the performance of the proposed technique using an NVIDIA GT560Ti graphics card rather than a sequential C implementation on a 3.4 GHz Intel Core i7-3770.
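
    For orientation, the core per-pixel workload in phase-shifting profilometry is the wrapped-phase computation. The vectorized NumPy sketch below shows the standard four-step formula; it is a generic illustration of that one stage, not the paper's full multiview CPU/GPU pipeline.

        import numpy as np

        def four_step_phase(I0, I1, I2, I3):
            """Wrapped phase from four fringe images I_k = A + B*cos(phi + k*pi/2)."""
            return np.arctan2(I3 - I1, I0 - I2)      # result in (-pi, pi]

        # Synthetic test: recover a known phase map from generated fringe images.
        phi = np.linspace(-np.pi + 1e-3, np.pi - 1e-3, 256)[None, :] * np.ones((256, 1))
        frames = [100 + 50*np.cos(phi + k*np.pi/2) for k in range(4)]
        print(np.allclose(four_step_phase(*frames), phi, atol=1e-6))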

  5. OS friendly microprocessor architecture: Hardware level computer security

    NASA Astrophysics Data System (ADS)

    Jungwirth, Patrick; La Fratta, Patrick

    2016-05-01

    We present an introduction to the patented OS Friendly Microprocessor Architecture (OSFA) and hardware-level computer security. Conventional microprocessors have not tried to balance hardware performance and OS performance at the same time, and they have depended on the operating system for computer security and information assurance. The goal of the OS Friendly Architecture is to provide a high-performance, secure microprocessor and OS system. We are interested in having cyber security, information technology (IT), and SCADA control professionals review the hardware-level security features. The OS Friendly Architecture is a switched set of cache memory banks in a pipeline configuration. For light-weight threads, the memory pipeline configuration provides near-instantaneous context switching times. The pipelining and parallelism provided by the cache memory pipeline allow for background cache read and write operations while the microprocessor's execution pipeline is running instructions. The cache bank selection controllers provide arbitration to prevent the memory pipeline and the microprocessor's execution pipeline from accessing the same cache bank at the same time. This separation allows the cache memory pages to transfer to and from level 1 (L1) caching while the microprocessor pipeline is executing instructions. Computer security operations are implemented in hardware. By extending Unix file permission bits to each cache memory bank and memory address, the OSFA provides hardware-level computer security.

  6. Nanotube devices based crossbar architecture: toward neuromorphic computing.

    PubMed

    Zhao, W S; Agnus, G; Derycke, V; Filoramo, A; Bourgoin, J-P; Gamrat, C

    2010-04-30

    Nanoscale devices such as carbon nanotube- and nanowire-based transistors, memristors and molecular devices are expected to play an important role in the development of new computing architectures. While their size represents a decisive advantage in terms of integration density, it also raises the critical question of how to efficiently address large numbers of densely integrated nanodevices without the need for complex multi-layer interconnection topologies similar to those used in CMOS technology. Two-terminal programmable devices in crossbar geometry seem particularly attractive, but suffer from severe addressing difficulties due to cross-talk, which implies complex programming procedures. Three-terminal devices can be easily addressed individually, but with limited gain in terms of interconnect integration. We show how optically gated carbon nanotube devices enable efficient individual addressing when arranged in a crossbar geometry with shared gate electrodes. This topology is particularly well suited for parallel programming or learning in the context of neuromorphic computing architectures. PMID:20368686

  7. FFT Computation with Systolic Arrays, A New Architecture

    NASA Technical Reports Server (NTRS)

    Boriakoff, Valentin

    1994-01-01

    The use of the Cooley-Tukey algorithm for computing the 1-D FFT lends itself to a particular matrix factorization which suggests direct implementation by linearly-connected systolic arrays. Here we present a new systolic architecture that embodies this algorithm. This implementation requires a smaller number of processors and a smaller number of memory cells than other recent implementations, as well as having all the advantages of systolic arrays. For the implementation of the decimation-in-frequency case, word-serial data input allows continuous real-time operation without the need of a serial-to-parallel conversion device. No control or data stream switching is necessary. Computer simulation of this architecture was done in the context of a 1024-point DFT with a fixed-point processor, and CMOS processor implementation has started.
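
    The matrix factorization referred to above is the standard radix-2 Cooley-Tukey decomposition. As a software reference point (not a model of the systolic hardware itself), a decimation-in-frequency radix-2 FFT can be written compactly as follows; the recursion mirrors the butterfly stages that the systolic array pipelines.

        import numpy as np

        def fft_dif(x):
            """Recursive radix-2 decimation-in-frequency FFT (length must be a power of 2)."""
            x = np.asarray(x, dtype=complex)
            N = len(x)
            if N == 1:
                return x
            half = N // 2
            a = x[:half] + x[half:]                                       # feeds even-index outputs
            b = (x[:half] - x[half:]) * np.exp(-2j*np.pi*np.arange(half)/N)  # feeds odd-index outputs
            X = np.empty(N, dtype=complex)
            X[0::2] = fft_dif(a)
            X[1::2] = fft_dif(b)
            return X

        x = np.random.randn(1024)
        print(np.allclose(fft_dif(x), np.fft.fft(x)))   # sanity check against NumPy's FFT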

  8. A fully programmable computing architecture for medical ultrasound machines.

    PubMed

    Schneider, Fabio Kurt; Agarwal, Anup; Yoo, Yang Mo; Fukuoka, Tetsuya; Kim, Yongmin

    2010-03-01

    Application-specific ICs have been traditionally used to support the high computational and data rate requirements in medical ultrasound systems, particularly in receive beamforming. Utilizing the previously developed efficient front-end algorithms, in this paper, we present a simple programmable computing architecture, consisting of a field-programmable gate array (FPGA) and a digital signal processor (DSP), to support core ultrasound signal processing. It was found that 97.3% and 51.8% of the FPGA and DSP resources are, respectively, needed to support all the front-end and back-end processing for B-mode imaging with 64 channels and 120 scanlines per frame at 30 frames/s. These results indicate that this programmable architecture can meet the requirements of low- and medium-level ultrasound machines while providing a flexible platform for supporting the development and deployment of new algorithms and emerging clinical applications. PMID:19546045

  9. Multi-level Hierarchical Poly Tree computer architectures

    NASA Technical Reports Server (NTRS)

    Padovan, Joe; Gute, Doug

    1990-01-01

    Based on the concept of hierarchical substructuring, this paper develops an optimal multi-level Hierarchical Poly Tree (HPT) parallel computer architecture scheme which is applicable to the solution of finite element and difference simulations. Emphasis is given to minimizing computational effort, in-core/out-of-core memory requirements, and the data transfer between processors. In addition, a simplified communications network that reduces the number of I/O channels between processors is presented. HPT configurations that yield optimal superlinearities are also demonstrated. Moreover, to generalize the scope of applicability, special attention is given to developing: (1) multi-level reduction trees which provide an orderly/optimal procedure by which model densification/simplification can be achieved, as well as (2) methodologies enabling processor grading that yields architectures with varying types of multi-level granularity.

  10. Integration of nanoscale memristor synapses in neuromorphic computing architectures

    NASA Astrophysics Data System (ADS)

    Indiveri, Giacomo; Linares-Barranco, Bernabé; Legenstein, Robert; Deligeorgis, George; Prodromakis, Themistoklis

    2013-09-01

    Conventional neuro-computing architectures and artificial neural networks have often been developed with no or loose connections to neuroscience. As a consequence, they have largely ignored key features of biological neural processing systems, such as their extremely low-power consumption features or their ability to carry out robust and efficient computation using massively parallel arrays of limited precision, highly variable, and unreliable components. Recent developments in nano-technologies are making available extremely compact and low power, but also variable and unreliable solid-state devices that can potentially extend the offerings of prevailing CMOS technologies. In particular, memristors are regarded as a promising solution for modeling key features of biological synapses due to their nanoscale dimensions, their capacity to store multiple bits of information per element and the low energy required to write distinct states. In this paper, we first review the neuro- and neuromorphic computing approaches that can best exploit the properties of memristor and nanoscale devices, and then propose a novel hybrid memristor-CMOS neuromorphic circuit which represents a radical departure from conventional neuro-computing approaches, as it uses memristors to directly emulate the biophysics and temporal dynamics of real synapses. We point out the differences between the use of memristors in conventional neuro-computing architectures and the hybrid memristor-CMOS circuit proposed, and argue how this circuit represents an ideal building block for implementing brain-inspired probabilistic computing paradigms that are robust to variability and fault tolerant by design.

  11. Efficient Universal Computing Architectures for Decoding Neural Activity

    PubMed Central

    Rapoport, Benjamin I.; Turicchia, Lorenzo; Wattanapanitch, Woradorn; Davidson, Thomas J.; Sarpeshkar, Rahul

    2012-01-01

    The ability to decode neural activity into meaningful control signals for prosthetic devices is critical to the development of clinically useful brain–machine interfaces (BMIs). Such systems require input from tens to hundreds of brain-implanted recording electrodes in order to deliver robust and accurate performance; in serving that primary function they should also minimize power dissipation in order to avoid damaging neural tissue; and they should transmit data wirelessly in order to minimize the risk of infection associated with chronic, transcutaneous implants. Electronic architectures for brain–machine interfaces must therefore minimize size and power consumption, while maximizing the ability to compress data to be transmitted over limited-bandwidth wireless channels. Here we present a system of extremely low computational complexity, designed for real-time decoding of neural signals, and suited for highly scalable implantable systems. Our programmable architecture is an explicit implementation of a universal computing machine emulating the dynamics of a network of integrate-and-fire neurons; it requires no arithmetic operations except for counting, and decodes neural signals using only computationally inexpensive logic operations. The simplicity of this architecture does not compromise its ability to compress raw neural data by factors greater than . We describe a set of decoding algorithms based on this computational architecture, one designed to operate within an implanted system, minimizing its power consumption and data transmission bandwidth; and a complementary set of algorithms for learning, programming the decoder, and postprocessing the decoded output, designed to operate in an external, nonimplanted unit. The implementation of the implantable portion is estimated to require fewer than 5000 operations per second. A proof-of-concept, 32-channel field-programmable gate array (FPGA) implementation of this portion is consequently energy efficient

  12. Deterministic quantum computation with one photonic qubit

    NASA Astrophysics Data System (ADS)

    Hor-Meyll, M.; Tasca, D. S.; Walborn, S. P.; Ribeiro, P. H. Souto; Santos, M. M.; Duzzioni, E. I.

    2015-07-01

    We show that deterministic quantum computing with one qubit (DQC1) can be experimentally implemented with a spatial light modulator, using the polarization and the transverse spatial degrees of freedom of light. The scheme allows the computation of the trace of a high-dimensional matrix, limited by the resolution of the modulator panel and by technical imperfections. In order to illustrate the method, we compute the normalized trace of unitary matrices and implement the Deutsch-Jozsa algorithm. The largest matrix that can be manipulated with our setup is 1080×1920, which is able to represent a system with approximately 21 qubits.
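
    The quantity the experiment estimates, the normalized trace Tr(U)/2^n, can be checked against a direct density-matrix simulation of the DQC1 circuit for small n. The sketch below is a minimal noiseless simulation, assuming a pure |+> control and a maximally mixed register (sign conventions for reading off the imaginary part vary between references); it is not a model of the spatial-light-modulator implementation.

        import numpy as np

        def dqc1_normalized_trace(U):
            """Read Tr(U)/2^n off the control qubit's <X>, <Y> after a controlled-U."""
            d = U.shape[0]
            plus = 0.5 * np.ones((2, 2))                     # |+><+| control qubit
            rho = np.kron(plus, np.eye(d) / d)               # maximally mixed register
            CU = np.block([[np.eye(d), np.zeros((d, d))],    # |0><0| x I + |1><1| x U
                           [np.zeros((d, d)), U]])
            rho = CU @ rho @ CU.conj().T
            X = np.array([[0, 1], [1, 0]])
            Y = np.array([[0, -1j], [1j, 0]])
            ex = np.trace(rho @ np.kron(X, np.eye(d))).real  # = Re Tr(U)/d
            ey = np.trace(rho @ np.kron(Y, np.eye(d))).real  # = Im Tr(U)/d in this convention
            return ex + 1j * ey

        # Check on a random 3-qubit unitary.
        U, _ = np.linalg.qr(np.random.randn(8, 8) + 1j*np.random.randn(8, 8))
        print(dqc1_normalized_trace(U), np.trace(U) / 8)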

  13. Hardware architecture for full analytical Fraunhofer computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Pang, Zhi-Yong; Xu, Zong-Xi; Xiong, Yi; Chen, Biao; Dai, Hui-Min; Jiang, Shao-Ji; Dong, Jian-Wen

    2015-09-01

    A hardware architecture for parallel computation is proposed for generating Fraunhofer computer-generated holograms (CGHs). A pipeline-based integrated circuit architecture is realized by employing the modified Fraunhofer analytical formalism, which is large scale and enables all components to be concurrently operated. The architecture of the CGH contains five modules to calculate initial parameters of amplitude, amplitude compensation, phases, and phase compensation, respectively. The precalculator of amplitude is fully adopted considering the "reusable design" concept. Each complex operation type (such as square arithmetic) is reused only once by means of a multichannel selector. The implemented hardware calculates an 800×600-pixel hologram in parallel using 39,319 logic elements, 21,074 registers, and 12,651 memory bits in an Altera field-programmable gate array environment with stable operation at 50 MHz. Experimental results demonstrate that the quality of the images reconstructed from the hardware-generated hologram can be comparable to that of a software implementation. Moreover, the calculation speed is approximately 100 times faster than that of a personal computer with an Intel i5-3230M 2.6 GHz CPU for a triangular object.
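
    For context, the Fraunhofer (far-field) relationship that such hardware evaluates analytically can be prototyped in software with a plain FFT. The sketch below generates a phase-only hologram for a target intensity pattern and reconstructs it; it is a baseline illustration of the Fraunhofer transform pair, not the paper's analytical triangle-based formalism, and the target pattern is an arbitrary example.

        import numpy as np

        def phase_only_cgh(target_amplitude):
            """Phase-only Fraunhofer CGH: keep the phase of the inverse FFT of the target."""
            field = np.fft.ifft2(np.fft.ifftshift(target_amplitude.astype(complex)))
            return np.angle(field)

        def reconstruct(hologram_phase):
            """Far-field reconstruction: intensity of the FFT of the unit-amplitude hologram."""
            far_field = np.fft.fftshift(np.fft.fft2(np.exp(1j * hologram_phase)))
            return np.abs(far_field) ** 2

        target = np.zeros((256, 256)); target[96:160, 120:136] = 1.0   # a simple bar target
        img = reconstruct(phase_only_cgh(target))
        print(img.shape, img.max() > img.mean())                        # crude sanity check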

  14. Scheme for Quantum Computing Immune to Decoherence

    NASA Technical Reports Server (NTRS)

    Williams, Colin; Vatan, Farrokh

    2008-01-01

    A constructive scheme has been devised to enable mapping of any quantum computation into a spintronic circuit in which the computation is encoded in a basis that is, in principle, immune to quantum decoherence. The scheme is implemented by an algorithm that utilizes multiple physical spins to encode each logical bit in such a way that collective errors affecting all the physical spins do not disturb the logical bit. The scheme is expected to be of use to experimenters working on spintronic implementations of quantum logic. Spintronic computing devices use quantum-mechanical spins (typically, electron spins) to encode logical bits. Bits thus encoded (denoted qubits) are potentially susceptible to errors caused by noise and decoherence. The traditional model of quantum computation is based partly on the assumption that each qubit is implemented by use of a single two-state quantum system, such as an electron or other spin-1/2 particle. It can be surprisingly difficult to achieve certain gate operations (most notably, arbitrary 1-qubit gates) in spintronic hardware according to this model. However, ironically, certain 2-qubit interactions (in particular, spin-spin exchange interactions) can be achieved relatively easily in spintronic hardware. Therefore, it would be fortunate if it were possible to implement any 1-qubit gate by use of a spin-spin exchange interaction. While such a direct representation is not possible, it is possible to achieve an arbitrary 1-qubit gate indirectly by means of a sequence of four spin-spin exchange interactions, which could be implemented by use of four exchange gates. Accordingly, the present scheme provides for mapping any 1-qubit gate in the logical basis into an equivalent sequence of at most four spin-spin exchange interactions in the physical (encoded) basis. The complexity of the mathematical derivation of the scheme from basic quantum principles precludes a description within this article; it must suffice to report

  15. Quantum Computation with Phase Drift Errors

    NASA Astrophysics Data System (ADS)

    Miquel, César; Paz, Juan Pablo; Zurek, Wojciech Hubert

    1997-05-01

    We numerically simulate the evolution of an ion trap quantum computer made out of 18 ions subject to a sequence of nearly 15 000 laser pulses in order to find the prime factors of N = 15. We analyze the effect of random and systematic phase drift errors arising from inaccuracies in the laser pulses which induce over- (or under-) rotation of the quantum state. Simple analytic estimates of the tolerance for the quality of driving pulses are presented. We examine the use of watchdog stabilization to partially correct phase drift errors, concluding that, in the regime investigated, it is rather inefficient.

  16. Discrete Wigner functions and quantum computational speedup

    SciTech Connect

    Galvao, Ernesto F.

    2005-04-01

    Gibbons et al. [Phys. Rev. A 70, 062101 (2004)] have recently defined a class of discrete Wigner functions W to represent quantum states in a finite Hilbert space dimension d. I characterize the set C_d of states having non-negative W simultaneously in all definitions of W in this class. For d ≤ 5 I show C_d is the convex hull of stabilizer states. This supports the conjecture that negativity of W is necessary for exponential speedup in pure-state quantum computation.

  17. Nanoscale phosphorus atom arrays created using STM for the fabrication of a silicon based quantum computer.

    SciTech Connect

    O'Brien, J. L.; Schofield, S. R.; Simmons, M. Y.; Clark, R. G.; Dzurak, A. S.; Curson, N. J.; Kane, B. E.; McAlpine, N. S.; Hawley, M. E.; Brown, G. W.

    2001-01-01

    Quantum computers offer the promise of formidable computational power for certain tasks. Of the various possible physical implementations of such a device, silicon-based architectures are attractive for their scalability and ease of integration with existing silicon technology. These designs use either the electron or nuclear spin state of single donor atoms to store quantum information. Here we describe a strategy to fabricate an array of single phosphorus atoms in silicon for the construction of such a silicon-based quantum computer. We demonstrate the controlled placement of single phosphorus-bearing molecules on a silicon surface. This has been achieved by patterning a hydrogen mono-layer 'resist' with a scanning tunneling microscope (STM) tip and exposing the patterned surface to phosphine (PH3) molecules. We also describe preliminary studies into a process to incorporate these surface phosphorus atoms into the silicon crystal at the array sites. Keywords: quantum computing, nanotechnology, scanning tunneling microscopy, hydrogen lithography

  18. Architectural requirements for the Red Storm computing system.

    SciTech Connect

    Camp, William J.; Tomkins, James Lee

    2003-10-01

    This report is based on the Statement of Work (SOW) describing the various requirements for delivering a new supercomputer system to Sandia National Laboratories (Sandia) as part of the Department of Energy's (DOE) Accelerated Strategic Computing Initiative (ASCI) program. This system is named Red Storm and will be a distributed memory, massively parallel processor (MPP) machine built primarily out of commodity parts. The requirements presented here distill extensive architectural and design experience accumulated over a decade and a half of research, development and production operation of similar machines at Sandia. Red Storm will have an unusually high bandwidth, low latency interconnect, specially designed hardware and software reliability features, a lightweight kernel compute node operating system and the ability to rapidly switch major sections of the machine between classified and unclassified computing environments. Particular attention has been paid to architectural balance in the design of Red Storm, and it is therefore expected to achieve an atypically high fraction of its peak speed of 41 TeraOPS on real scientific computing applications. In addition, Red Storm is designed to be upgradeable to many times this initial peak capability while still retaining appropriate balance in key design dimensions. Installation of the Red Storm computer system at Sandia's New Mexico site is planned for 2004, and it is expected that the system will be operated for a minimum of five years following installation.

  19. Parallel algorithms and architecture for computation of manipulator forward dynamics

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    Parallel computation of manipulator forward dynamics is investigated. Considering three classes of algorithms for the solution of the problem, that is, the O(n), O(n^2), and O(n^3) algorithms, parallelism in the problem is analyzed. It is shown that the problem belongs to the class NC and that the time and processor bounds are O(log^2 n) and O(n^4), respectively. However, the fastest stable parallel algorithms achieve a computation time of O(n) and can be derived by parallelization of the O(n^3) serial algorithms. Parallel computation of the O(n^3) algorithms requires the development of parallel algorithms for a set of fundamentally different problems, that is, the Newton-Euler formulation, the computation of the inertia matrix, decomposition of the symmetric, positive definite matrix, and the solution of triangular systems. Parallel algorithms for this set of problems are developed which can be efficiently implemented on a unique architecture, a triangular array of n(n+2)/2 processors with a simple nearest-neighbor interconnection. This architecture is particularly suitable for VLSI and WSI implementations. The developed parallel algorithm, compared to the best serial O(n) algorithm, achieves an asymptotic speedup of more than two orders of magnitude in the computation of the forward dynamics.

  20. Performance evaluation of the SX-6 vector architecture forscientific computations

    SciTech Connect

    Oliker, Leonid; Canning, Andrew; Carter, Jonathan; Shalf, John; Skinner, David; Ethier, Stephane; Biswas, Rupak; Djomehri, Jahed; Van der Wijngaart, Rob

    2005-01-01

    The growing gap between sustained and peak performance for scientific applications is a well-known problem in high performance computing. The recent development of parallel vector systems offers the potential to reduce this gap for many computational science codes and deliver a substantial increase in computing capabilities. This paper examines the intranode performance of the NEC SX-6 vector processor, and compares it against the cache-based IBM Power3 and Power4 superscalar architectures, across a number of key scientific computing areas. First, we present the performance of a microbenchmark suite that examines many low-level machine characteristics. Next, we study the behavior of the NAS Parallel Benchmarks. Finally, we evaluate the performance of several scientific computing codes. Overall results demonstrate that the SX-6 achieves high performance on a large fraction of our application suite and often significantly outperforms the cache-based architectures. However, certain classes of applications are not easily amenable to vectorization and would require extensive algorithm and implementation reengineering to utilize the SX-6 effectively.

  1. A Component Architecture for High-Performance Scientific Computing

    SciTech Connect

    Bernholdt, D E; Allan, B A; Armstrong, R; Bertrand, F; Chiu, K; Dahlgren, T L; Damevski, K; Elwasif, W R; Epperly, T W; Govindaraju, M; Katz, D S; Kohl, J A; Krishnan, M; Kumfert, G; Larson, J W; Lefantzi, S; Lewis, M J; Malony, A D; McInnes, L C; Nieplocha, J; Norris, B; Parker, S G; Ray, J; Shende, S; Windus, T L; Zhou, S

    2004-12-14

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  2. A Component Architecture for High-Performance Scientific Computing

    SciTech Connect

    Bernholdt, David E; Allan, Benjamin A; Armstrong, Robert C; Bertrand, Felipe; Chiu, Kenneth; Dahlgren, Tamara L; Damevski, Kostadin; Elwasif, Wael R; Epperly, Thomas G; Govindaraju, Madhusudhan; Katz, Daniel S; Kohl, James A; Krishnan, Manoj Kumar; Kumfert, Gary K; Larson, J Walter; Lefantzi, Sophia; Lewis, Michael J; Malony, Allen D; McInnes, Lois C; Nieplocha, Jarek; Norris, Boyana; Parker, Steven G; Ray, Jaideep; Shende, Sameer; Windus, Theresa L; Zhou, Shujia

    2006-07-03

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  3. HiMAT onboard flight computer system architecture and qualification

    NASA Technical Reports Server (NTRS)

    Myers, A. F.; Earls, M. R.; Callizo, L. A.

    1981-01-01

    Two highly maneuverable aircraft technology (HiMAT) remotely piloted research vehicles (RPRV's) are being flight tested at NASA Dryden Flight Research Center, Edwards, California, to demonstrate and evaluate a number of technological advances applicable to future fighter aircraft. Closed-loop primary flight control is performed from a ground-based cockpit utilizing a digital computer and up/down telemetry links. A backup flight control system for emergency operation resides in one of two onboard computers. Other functions of the onboard computer system are uplink processing, downlink processing, engine control, failure detection, and redundancy management. This paper describes the architecture, functions, and flight qualification of the HiMAT onboard flight computer systems.

  4. Measurement and Information Extraction in Complex Dynamics Quantum Computation

    NASA Astrophysics Data System (ADS)

    Casati, Giulio; Montangero, Simone

    Quantum Information processing has several different applications: some of them can be performed controlling only few qubits simultaneously (e.g. quantum teleportation or quantum cryptography) [1]. Usually, the transmission of large amounts of information is performed by repeating several times the scheme implemented for few qubits. However, to exploit the advantages of quantum computation, the simultaneous control of many qubits is unavoidable [2]. This situation increases the experimental difficulties of quantum computing: maintaining quantum coherence in a large quantum system is a difficult task. Indeed a quantum computer is a many-body complex system and decoherence, due to the interaction with the external world, will eventually corrupt any quantum computation. Moreover, internal static imperfections can lead to quantum chaos in the quantum register thus destroying computer operability [3]. Indeed, as it has been shown in [4], a critical imperfection strength exists above which the quantum register thermalizes and quantum computation becomes impossible. We showed such effects on a quantum computer performing an efficient algorithm to simulate complex quantum dynamics [5,6].

  5. Quantum computation: algorithms and implementation in quantum dot devices

    NASA Astrophysics Data System (ADS)

    Gamble, John King

    In this thesis, we explore several aspects of both the software and hardware of quantum computation. First, we examine the computational power of multi-particle quantum random walks in terms of distinguishing mathematical graphs. We study both interacting and non-interacting multi-particle walks on strongly regular graphs, proving some limitations on distinguishing powers and presenting extensive numerical evidence indicating that interactions provide more distinguishing power. We then study the recently proposed adiabatic quantum algorithm for Google PageRank, and show that it exhibits power-law scaling for realistic WWW-like graphs. Turning to hardware, we next analyze the thermal physics of two nearby 2D electron gases (2DEGs), and show that an analogue of the Coulomb drag effect exists for heat transfer. In some distance and temperature regimes, this heat transfer is more significant than the phonon dissipation channels. After that, we study the dephasing of two-electron states in a single silicon quantum dot. Specifically, we consider dephasing due to the electron-phonon coupling and charge noise, separately treating orbital and valley excitations. In an ideal system, dephasing due to charge noise is strongly suppressed due to a vanishing dipole moment. However, introduction of disorder or anharmonicity leads to large effective dipole moments, and hence possibly strong dephasing. Building on this work, we next consider more realistic, structurally disordered systems. We present experiment and theory, which demonstrate energy levels that vary with quantum dot translation, implying a structurally disordered system. Finally, we turn to the issues of valley mixing and valley-orbit hybridization, which occur due to atomic-scale disorder at quantum well interfaces. We develop a new theoretical approach to study these effects, which we name the disorder-expansion technique. We demonstrate that this method successfully reproduces atomistic tight-binding techniques

  6. Applications of computational quantum mechanics

    NASA Astrophysics Data System (ADS)

    Temel, Burcin

    This original research dissertation is composed of a new numerical technique based on Chebyshev polynomials that is applied to scattering problems, a phenomenological kinetics study of CO oxidation on the RuO2 surface, and an experimental study on methanol coupling with doped metal oxide catalysts. The Minimum Error Method (MEM), a least-squares minimization method, provides an efficient and accurate alternative for solving systems of ordinary differential equations. Existing methods usually utilize matrix methods, which are computationally costly. MEM, which is based on Chebyshev polynomials as a basis set, uses recursion relationships and fast Chebyshev transforms which scale as O(N). For large basis set calculations this provides an enormous computational efficiency in the calculations. Chebyshev polynomials are also able to represent non-periodic problems very accurately. We applied MEM to elastic and inelastic scattering problems: it is more efficient and accurate than the traditionally used Kohn variational principle, and it also provides the wave function in the interaction region. Phenomenological kinetics (PK) is widely used in industry to predict the optimum conditions for a chemical reaction. PK neglects fluctuations, assumes no lateral interactions, and considers an ideal mix of reactants. The rate equations are tested by fitting the rate constants to the results of the experiments. Unfortunately, there are numerous examples where a fitted mechanism was later shown to be erroneous. We have undertaken a thorough comparison between the phenomenological equations and the results of kinetic Monte Carlo (KMC) simulations performed on the same system. The PK equations are qualitatively consistent with the KMC results but are quantitatively erroneous as a result of interplays between the adsorption and desorption events. The experimental study on methanol coupling with doped metal oxide catalysts demonstrates the doped metal oxides as a new class of catalysts

  7. Scalable Quantum Computing Over the Rainbow

    NASA Astrophysics Data System (ADS)

    Pfister, Olivier; Menicucci, Nicolas C.; Flammia, Steven T.

    2011-03-01

    The physical implementation of nontrivial quantum computing is an experimental challenge due to decoherence and the need for scalability. Recently we proved a novel theoretical scheme for realizing a scalable quantum register of very large size, entangled in a cluster state, in the optical frequency comb (OFC) defined by the eigenmodes of a single optical parametric oscillator (OPO). The classical OFC is well known as implemented by the femtosecond, carrier-envelope-phase- and mode-locked lasers which have redefined frequency metrology in recent years. The quantum OFC is a set of harmonic oscillators, or Qmodes, whose amplitude and phase quadratures are continuous variables, the manipulation of which is a mature field for one or two Qmodes. We have shown that the nonlinear optical medium of a single OPO can be engineered, in a sophisticated but already demonstrated manner, so as to entangle in constant time the OPO's OFC into a finitely squeezed, Gaussian cluster state suitable for universal quantum computing over continuous variables. Here we summarize our theoretical result and survey the ongoing experimental efforts in this direction.

  8. Dual field theories of quantum computation

    NASA Astrophysics Data System (ADS)

    Vanchurin, Vitaly

    2016-06-01

    Given two quantum states of N q-bits we are interested to find the shortest quantum circuit consisting of only one- and two-q-bit gates that would transfer one state into another. We call it the quantum maze problem for the reasons described in the paper. We argue that in a large N limit the quantum maze problem is equivalent to the problem of finding a semiclassical trajectory of some lattice field theory (the dual theory) on an (N+1)-dimensional space-time with geometrically flat, but topologically compact spatial slices. The spatial fundamental domain is an N-dimensional hyper-rhombohedron, and the temporal direction describes transitions from an arbitrary initial state to an arbitrary target state, and so the initial and final dual field theory conditions are described by these two quantum computational states. We first consider a complex Klein-Gordon field theory and argue that it can only be used to study the shortest quantum circuits which do not involve generators composed of tensor products of multiple Pauli Z matrices. Since such a situation is not generic we call it the Z-problem. On the dual field theory side the Z-problem corresponds to massless excitations of the phase (Goldstone modes) that we attempt to fix using the Higgs mechanism. The simplest dual theory which does not suffer from the massless excitation (or from the Z-problem) is the Abelian-Higgs model, which we argue can be used for finding the shortest quantum circuits. Since every trajectory of the field theory is mapped directly to a quantum circuit, the shortest quantum circuits are identified with semiclassical trajectories. We also discuss the complexity of an actual algorithm that uses a dual theory perspective for solving the quantum maze problem and compare it with a geometric approach. We argue that it might be possible to solve the problem in sub-exponential time in 2^N, but for that we must consider the Klein-Gordon theory on curved spatial geometry and/or more complicated (than N-torus

  9. Distributed sequence alignment applications for the public computing architecture.

    PubMed

    Pellicer, S; Chen, G; Chan, K C C; Pan, Y

    2008-03-01

    The public computer architecture shows promise as a platform for solving fundamental problems in bioinformatics such as global gene sequence alignment and data mining with tools such as the basic local alignment search tool (BLAST). Our implementation of these two problems on the Berkeley open infrastructure for network computing (BOINC) platform demonstrates a runtime reduction factor of 1.15 for sequence alignment and 16.76 for BLAST. While the runtime reduction factor of the global gene sequence alignment application is modest, this value is based on a theoretical sequential runtime extrapolated from the calculation of a smaller problem. Because this runtime is extrapolated from running the calculation in memory, the theoretical sequential runtime would require 37.3 GB of memory on a single system. With this in mind, the BOINC implementation not only offers the reduced runtime, but also the aggregation of the available memory of all participant nodes. If an actual sequential run of the problem were compared, a more drastic reduction in the runtime would be seen due to an additional secondary storage I/O overhead for a practical system. Despite the limitations of the public computer architecture, most notably in communication overhead, it represents a practical platform for grid- and cluster-scale bioinformatics computations today and shows great potential for future implementations. PMID:18334454

  10. Communication-efficient parallel architectures and algorithms for image computations

    SciTech Connect

    Alnuweiri, H.M.

    1989-01-01

    The main purpose of this dissertation is the design of efficient parallel techniques for image computations which require global operations on image pixels, as well as the development of parallel architectures with special communication features which can support global data movement efficiently. The class of image problems considered in this dissertation involves global operations on image pixels, and irregular (data-dependent) data movement operations. Such problems include histogramming, component labeling, proximity computations, computing the Hough Transform, computing convexity of regions and related properties such as computing the diameter and a smallest area enclosing rectangle for each region. Images with multiple figures and multiple labeled-sets of pixels are also considered. Efficient solutions to such problems involve integer sorting, graph theoretic techniques, and techniques from computational geometry. Although such solutions are not computationally intensive (they all require O(n^2) operations to be performed on an n × n image), they require global communications. The emphasis here is on developing parallel techniques for data movement, reduction, and distribution, which lead to processor-time optimal solutions for such problems on the proposed organizations. The proposed parallel architectures are based on a memory array which can be viewed as an arrangement of memory modules in a k-dimensional space such that the modules are connected to buses placed parallel to the orthogonal axes of the space, and each bus is connected to one processor or a group of processors. It will be shown that such organizations are communication-efficient and are thus highly suited to the image problems considered here, and also to several other classes of problems. The proposed organizations have p processors and O(n^2) words of memory to process n × n images.

  11. Quantum computing with quantum dots using the Heisenberg exchange interaction

    NASA Astrophysics Data System (ADS)

    Dewaele, Nick J.

    One of the most promising systems for creating a working quantum computer is three quantum dots arranged linearly in a semiconductor. One of its biggest advantages is that we are able to perform Heisenberg exchange gates on the physical qubits. These exchanges are both fast and relatively low energy, which means that they would be excellent for producing fast and accurate operations. In order to prevent leakage errors we use a three-qubit DFS to encode a logical qubit. Here we determine the theoretical time-dependent effects of applying the Heisenberg exchange gates in the DFS basis, as well as the effect of applying multiple exchange gates at the same time. We also find that applying two Heisenberg exchange gates at the same time is an effective way of implementing a leakage elimination operator.
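
    As a concrete reference for the operation being discussed: up to a global phase, a Heisenberg exchange pulse on spins i and j is exp(-i*theta*SWAP_ij/2), since S_i.S_j = SWAP_ij/2 - I/4. The sketch below builds these bare exchange unitaries on a three-spin register (the helper names are ours); it illustrates only the physical gates, not the encoded DFS logical gates or the simultaneous-pulse analysis of this work.

        import numpy as np
        from scipy.linalg import expm

        SWAP = np.array([[1, 0, 0, 0],
                         [0, 0, 1, 0],
                         [0, 1, 0, 0],
                         [0, 0, 0, 1]], dtype=complex)

        def exchange(theta, first, n=3):
            """exp(-i*theta*SWAP/2) acting on adjacent spins (first, first+1) of n spins.
            Up to a global phase this equals exp(-i*theta*S_i.S_j)."""
            op = np.eye(1, dtype=complex)
            q = 0
            while q < n:
                if q == first:
                    op = np.kron(op, SWAP); q += 2
                else:
                    op = np.kron(op, np.eye(2)); q += 1
            return expm(-0.5j * theta * op)

        # A pi pulse on dots (0, 1) swaps the first two spins up to a phase.
        U = exchange(np.pi, first=0)
        psi = np.kron(np.array([1, 0]), np.kron(np.array([0, 1]), np.array([1, 0])))  # |010>
        print(np.round(np.abs(U @ psi), 3))   # amplitude moves to |100>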

  12. The computational structural mechanics testbed architecture. Volume 2: The interface

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.

    1988-01-01

    This is the third of a set of five volumes which describe the software architecture for the Computational Structural Mechanics Testbed. Derived from NICE, an integrated software system developed at Lockheed Palo Alto Research Laboratory, the architecture is composed of the command language CLAMP, the command language interpreter CLIP, and the data manager GAL. Volumes 1, 2, and 3 (NASA CR's 178384, 178385, and 178386, respectively) describe CLAMP and CLIP and the CLIP-processor interface. Volumes 4 and 5 (NASA CR's 178387 and 178388, respectively) describe GAL and its low-level I/O. CLAMP, an acronym for Command Language for Applied Mechanics Processors, is designed to control the flow of execution of processors written for NICE. Volume 3 describes the CLIP-Processor interface and related topics. It is intended only for processor developers.

  13. Integrated command, control, communications and computation system functional architecture

    NASA Technical Reports Server (NTRS)

    Cooley, C. G.; Gilbert, L. E.

    1981-01-01

    The functional architecture for an integrated command, control, communications, and computation system applicable to the command and control portion of the NASA End-to-End Data System is described, including the downlink data processing and analysis functions required to support the uplink processes. The functional architecture is composed of four elements: (1) the functional hierarchy which provides the decomposition and allocation of the command and control functions to the system elements; (2) the key system features which summarize the major system capabilities; (3) the operational activity threads which illustrate the interrelationship between the system elements; and (4) the interfaces which illustrate those elements that originate or generate data and those elements that use the data. The interfaces also provide a description of the data and the data utilization and access techniques.

  14. The computational structural mechanics testbed architecture. Volume 2: Directives

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.

    1989-01-01

    This is the second of a set of five volumes which describe the software architecture for the Computational Structural Mechanics Testbed. Derived from NICE, an integrated software system developed at Lockheed Palo Alto Research Laboratory, the architecture is composed of the command language (CLAMP), the command language interpreter (CLIP), and the data manager (GAL). Volumes 1, 2, and 3 (NASA CR's 178384, 178385, and 178386, respectively) describe CLAMP and CLIP and the CLIP-processor interface. Volumes 4 and 5 (NASA CR's 178387 and 178388, respectively) describe GAL and its low-level I/O. CLAMP, an acronym for Command Language for Applied Mechanics Processors, is designed to control the flow of execution of processors written for NICE. Volume 2 describes the CLIP directives in detail. It is intended for intermediate and advanced users.

  15. The computational structural mechanics testbed architecture. Volume 1: The language

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.

    1988-01-01

    This is the first of a set of five volumes which describe the software architecture for the Computational Structural Mechanics Testbed. Derived from NICE, an integrated software system developed at Lockheed Palo Alto Research Laboratory, the architecture is composed of the command language CLAMP, the command language interpreter CLIP, and the data manager GAL. Volumes 1, 2, and 3 (NASA CR's 178384, 178385, and 178386, respectively) describe CLAMP and CLIP, and the CLIP-processor interface. Volumes 4 and 5 (NASA CR's 178387 and 178388, respectively) describe GAL and its low-level I/O. CLAMP, an acronym for Command Language for Applied Mechanics Processors, is designed to control the flow of execution of processors written for NICE. Volume 1 presents the basic elements of the CLAMP language and is intended for all users.

  16. Non-unitary probabilistic quantum computing circuit and method

    NASA Technical Reports Server (NTRS)

    Williams, Colin P. (Inventor); Gingrich, Robert M. (Inventor)

    2009-01-01

    A quantum circuit performing quantum computation in a quantum computer. A chosen transformation of an initial n-qubit state is probabilistically obtained. The circuit comprises a unitary quantum operator obtained from a non-unitary quantum operator, operating on an n-qubit state and an ancilla state. When operation on the ancilla state provides a success condition, computation is stopped. When operation on the ancilla state provides a failure condition, computation is performed again on the ancilla state and the n-qubit state obtained in the previous computation, until a success condition is obtained.
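
    The repeat-until-success pattern described here can be illustrated with a small state-vector simulation. The sketch below is my own illustration under the assumption of a Halmos-style unitary dilation, not the patented circuit itself; the operator A, the input state, and the retry policy are arbitrary examples of one plausible reading of the abstract.

```python
# Illustrative simulation (not the patented circuit): probabilistic application of a
# non-unitary operator A via a unitary dilation acting on the state plus one ancilla.
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)

def dilation(A):
    """Halmos unitary dilation of a contraction A (largest singular value < 1)."""
    n = A.shape[0]
    DA  = sqrtm(np.eye(n) - A.conj().T @ A)
    DAd = sqrtm(np.eye(n) - A @ A.conj().T)
    return np.block([[A, DAd], [DA, -A.conj().T]])

A = np.array([[1.0, 0.3], [0.0, 0.5]])           # some non-unitary operator (example)
A = 0.9 * A / np.linalg.norm(A, 2)               # rescale to a strict contraction
U = dilation(A)
assert np.allclose(U.conj().T @ U, np.eye(4))    # the dilation really is unitary

psi = np.array([1, 1j]) / np.sqrt(2)             # n-qubit input state (here n = 1)
state = np.kron(np.array([1.0, 0.0]), psi)       # ancilla |0> (top block) tensor psi

for attempt in range(1, 20):
    out = U @ state
    p_success = np.linalg.norm(out[:2])**2       # probability of measuring ancilla |0>
    if rng.random() < p_success:
        # success branch: system collapses to A|psi> / ||A|psi>||
        system = out[:2] / np.linalg.norm(out[:2])
        print(attempt, np.allclose(system, A @ psi / np.linalg.norm(A @ psi)))
        break
    # failure branch: keep the post-measurement n-qubit state, reset the ancilla, retry
    state = np.kron(np.array([1.0, 0.0]), out[2:] / np.linalg.norm(out[2:]))
```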

  17. Universal quantum gates for Single Cooper Pair Box based quantum computing

    NASA Technical Reports Server (NTRS)

    Echternach, P.; Williams, C. P.; Dultz, S. C.; Braunstein, S.; Dowling, J. P.

    2000-01-01

    We describe a method for achieving arbitrary 1-qubit gates and controlled-NOT gates within the context of the Single Cooper Pair Box (SCB) approach to quantum computing. Such gates are sufficient to support universal quantum computation.

  18. Universal quantum computation with metaplectic anyons

    SciTech Connect

    Cui, Shawn X.; Wang, Zhenghan E-mail: zhenghwa@microsoft.com

    2015-03-15

    We show that braidings of the metaplectic anyons X_ϵ in SO(3)_2 = SU(2)_4 with their total charge equal to the metaplectic mode Y, supplemented with projective measurements of the total charge of two metaplectic anyons, are universal for quantum computation. We conjecture that similar universal anyonic computing models can be constructed for all metaplectic anyon systems SO(p)_2 for any odd prime p ≥ 5. In order to prove universality, we find new conceptually appealing universal gate sets for qutrits and qupits.

  19. PREFACE: Quantum Information, Communication, Computation and Cryptography

    NASA Astrophysics Data System (ADS)

    Benatti, F.; Fannes, M.; Floreanini, R.; Petritis, D.

    2007-07-01

    The application of quantum mechanics to information related fields such as communication, computation and cryptography is a fast growing line of research that has been witnessing an outburst of theoretical and experimental results, with possible practical applications. On the one hand, quantum cryptography with its impact on secrecy of transmission is having its first important actual implementations; on the other hand, the recent advances in quantum optics, ion trapping, BEC manipulation, spin and quantum dot technologies allow us to put to direct test a great deal of theoretical ideas and results. These achievements have stimulated a reborn interest in various aspects of quantum mechanics, creating a unique interplay between physics, both theoretical and experimental, mathematics, information theory and computer science. In view of all these developments, it appeared timely to organize a meeting where graduate students and young researchers could be exposed to the fundamentals of the theory, while senior experts could exchange their latest results. The activity was structured as a school followed by a workshop, and took place at The Abdus Salam International Center for Theoretical Physics (ICTP) and The International School for Advanced Studies (SISSA) in Trieste, Italy, from 12-23 June 2006. The meeting was part of the activity of the Joint European Master Curriculum Development Programme in Quantum Information, Communication, Cryptography and Computation, involving the Universities of Cergy-Pontoise (France), Chania (Greece), Leuven (Belgium), Rennes1 (France) and Trieste (Italy). This special issue of Journal of Physics A: Mathematical and Theoretical collects 22 contributions from well known experts who took part in the workshop. They summarize the present day status of the research in the manifold aspects of quantum information. The issue is opened by two review articles, the first by G Adesso and F Illuminati discussing entanglement in continuous variable

  20. Job Superscheduler Architecture and Performance in Computational Grid Environments

    NASA Technical Reports Server (NTRS)

    Shan, Hongzhang; Oliker, Leonid; Biswas, Rupak

    2003-01-01

    Computational grids hold great promise in utilizing geographically separated heterogeneous resources to solve large-scale complex scientific problems. However, a number of major technical hurdles, including distributed resource management and effective job scheduling, stand in the way of realizing these gains. In this paper, we propose a novel grid superscheduler architecture and three distributed job migration algorithms. We also model the critical interaction between the superscheduler and autonomous local schedulers. Extensive performance comparisons with ideal, central, and local schemes using real workloads from leading computational centers are conducted in a simulation environment. Additionally, synthetic workloads are used to perform a detailed sensitivity analysis of our superscheduler. Several key metrics demonstrate that substantial performance gains can be achieved via smart superscheduling in distributed computational grids.
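
    The record does not give pseudocode for its migration algorithms, but the basic superscheduling idea can be sketched with a purely hypothetical policy (the class names and backlog model below are my own invention, not the paper's schemes): migrate each incoming job to the local scheduler with the earliest estimated start time.

```python
# Hypothetical sketch of a grid superscheduler (not the paper's algorithms):
# each submitted job is migrated to the site with the earliest estimated start.
from dataclasses import dataclass, field

@dataclass
class Job:
    job_id: int
    runtime: float                      # estimated runtime (hours)

@dataclass
class LocalScheduler:
    name: str
    queue: list = field(default_factory=list)

    def estimated_start(self) -> float:
        # crude backlog model: sum of runtimes already queued at this site
        return sum(j.runtime for j in self.queue)

class SuperScheduler:
    def __init__(self, sites):
        self.sites = sites

    def submit(self, job: Job) -> str:
        # migrate the job to the least-loaded site (earliest estimated start)
        target = min(self.sites, key=lambda s: s.estimated_start())
        target.queue.append(job)
        return target.name

sites = [LocalScheduler("siteA"), LocalScheduler("siteB"), LocalScheduler("siteC")]
sched = SuperScheduler(sites)
for i, rt in enumerate([4.0, 1.0, 2.0, 3.0, 0.5]):
    print(f"job {i} -> {sched.submit(Job(i, rt))}")
```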

  1. Evaluation of leading scalar and vector architectures for scientific computations

    SciTech Connect

    Simon, Horst D.; Oliker, Leonid; Canning, Andrew; Carter, Jonathan; Ethier, Stephane; Shalf, John

    2004-04-20

    The growing gap between sustained and peak performance for scientific applications is a well-known problem in high performance computing. The recent development of parallel vector systems offers the potential to reduce this gap for many computational science codes and deliver a substantial increase in computing capabilities. This project examines the performance of the cacheless vector Earth Simulator (ES) and compares it to the superscalar, cache-based IBM Power3 system. Results demonstrate that the ES is significantly faster than the Power3 architecture, highlighting the tremendous potential advantage of the ES for numerical simulation. However, vectorization of a particle-in-cell application (GTC) greatly increased the memory footprint, preventing loop-level parallelism and limiting scalability potential.

  2. The Tradeoffs of Fused Memory Hierarchies in Heterogeneous Computing Architectures

    SciTech Connect

    Spafford, Kyle L; Meredith, Jeremy S; Lee, Seyong; Li, Dong; Roth, Philip C; Vetter, Jeffrey S

    2012-01-01

    With the rise of general purpose computing on graphics processing units (GPGPU), the influence from consumer markets can now be seen across the spectrum of computer architectures. In fact, many of the high-ranking Top500 HPC systems now include these accelerators. Traditionally, GPUs have connected to the CPU via the PCIe bus, which has proved to be a significant bottleneck for scalable scientific applications. Now, a trend toward tighter integration between CPU and GPU has removed this bottleneck and unified the memory hierarchy for both CPU and GPU cores. We examine the impact of this trend for high performance scientific computing by investigating AMD's new Fusion Accelerated Processing Unit (APU) as a testbed. In particular, we evaluate the tradeoffs in performance, power consumption, and programmability when comparing this unified memory hierarchy with similar, but discrete GPUs.

  3. Hard chaos, quantum billiards, and quantum dot computers

    SciTech Connect

    Mainieri, R.; Cvitanovic, P.; Hasslacher, B.

    1996-07-01

    This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). Research was performed in analytic and computational techniques for dealing with hard chaos, especially the powerful tool of cycle expansions. This work has direct application to the understanding of electrons in nanodevices, such as junctions of quantum wires, or in arrays of dots or antidots. We developed a series of techniques for computing the properties of quantum systems with hard chaos, in particular the flow of electrons through nanodevices. These techniques are providing the insight and tools to design computers with nanoscale components. Recent efforts concentrated on understanding the effects of noise and orbit pruning in chaotic dynamical systems. We showed that most complicated chaotic systems (not just those equivalent to a finite shift) will develop branch points in their cycle expansion. Once the singularity is known to exist, it can be removed with a dramatic increase in the speed of convergence of quantities of physical interest.

  4. Double-layer-gate architecture for few-hole GaAs quantum dots.

    PubMed

    Wang, D Q; Hamilton, A R; Farrer, I; Ritchie, D A; Klochan, O

    2016-08-19

    We report the fabrication of single and double hole quantum dots using a double-layer-gate design on an undoped accumulation mode AlxGa1-xAs/GaAs heterostructure. Electrical transport measurements of a single quantum dot show varying addition energies and clear excited states. In addition, the two-level-gate architecture can also be configured into a double quantum dot with tunable inter-dot coupling. PMID:27389108

  5. Double-layer-gate architecture for few-hole GaAs quantum dots

    NASA Astrophysics Data System (ADS)

    Wang, D. Q.; Hamilton, A. R.; Farrer, I.; Ritchie, D. A.; Klochan, O.

    2016-08-01

    We report the fabrication of single and double hole quantum dots using a double-layer-gate design on an undoped accumulation mode AlxGa1-xAs/GaAs heterostructure. Electrical transport measurements of a single quantum dot show varying addition energies and clear excited states. In addition, the two-level-gate architecture can also be configured into a double quantum dot with tunable inter-dot coupling.

  6. Biomorphic Multi-Agent Architecture for Persistent Computing

    NASA Technical Reports Server (NTRS)

    Lodding, Kenneth N.; Brewster, Paul

    2009-01-01

    A multi-agent software/hardware architecture, inspired by the multicellular nature of living organisms, has been proposed as the basis of design of a robust, reliable, persistent computing system. Just as a multicellular organism can adapt to changing environmental conditions and can survive despite the failure of individual cells, a multi-agent computing system, as envisioned, could adapt to changing hardware, software, and environmental conditions. In particular, the computing system could continue to function (perhaps at a reduced but still reasonable level of performance) if one or more component(s) of the system were to fail. One of the defining characteristics of a multicellular organism is unity of purpose. In biology, the purpose is survival of the organism. The purpose of the proposed multi-agent architecture is to provide a persistent computing environment in harsh conditions in which repair is difficult or impossible. A multi-agent, organism-like computing system would be a single entity built from agents or cells. Each agent or cell would be a discrete hardware processing unit that would include a data processor with local memory, an internal clock, and a suite of communication equipment capable of both local line-of-sight communications and global broadcast communications. Some cells, denoted specialist cells, could contain such additional hardware as sensors and emitters. Each cell would be independent in the sense that there would be no global clock, no global (shared) memory, no pre-assigned cell identifiers, no pre-defined network topology, and no centralized brain or control structure. Like each cell in a living organism, each agent or cell of the computing system would contain a full description of the system encoded as genes, but in this case, the genes would be components of a software genome.
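
    As a purely hypothetical sketch (the names and fields here are invented, not taken from the record), the cell abstraction might look like the following: every agent carries the full system "genome", keeps only local state, and communicates via local or broadcast messages rather than shared memory.

```python
# Hypothetical sketch of the cell/agent abstraction described above (names invented):
# every agent carries the full "genome" (system description), has only local state,
# and exchanges messages; there is no shared memory or global clock.
from dataclasses import dataclass, field

@dataclass
class Cell:
    genome: dict                             # full system description carried by every cell
    role: str = "generic"                    # specialist cells may add sensors/emitters
    inbox: list = field(default_factory=list)

    def receive(self, msg):
        self.inbox.append(msg)

    def differentiate(self):
        # choose a role locally from the genome, e.g. after a neighbour fails
        self.role = self.genome.get("preferred_role", "generic")

def broadcast(cells, msg):
    """Global broadcast channel: every cell receives the message."""
    for c in cells:
        c.receive(msg)

genome = {"preferred_role": "router", "version": 1}
colony = [Cell(dict(genome)) for _ in range(4)]
broadcast(colony, {"type": "heartbeat"})
colony[0].differentiate()
print(colony[0].role, len(colony[1].inbox))      # router 1
```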

  7. Symmetrically private information retrieval based on blind quantum computing

    NASA Astrophysics Data System (ADS)

    Sun, Zhiwei; Yu, Jianping; Wang, Ping; Xu, Lingling

    2015-05-01

    Universal blind quantum computation (UBQC) is a new secure quantum computing protocol which allows a user Alice who does not have any sophisticated quantum technology to delegate her computing to a server Bob without leaking any privacy. Using the features of UBQC, we propose a protocol to achieve symmetrically private information retrieval, which allows a quantum limited Alice to query an item from Bob with a fully fledged quantum computer; meanwhile, the privacy of both parties is preserved. The security of our protocol is based on the assumption that malicious Alice has no quantum computer, which avoids the impossibility proof of Lo. For the honest Alice, she is almost classical and only requires minimal quantum resources to carry out the proposed protocol. Therefore, she does not need any expensive laboratory which can maintain the coherence of complicated quantum experimental setups.

  8. A Component Architecture for High-Performance Computing

    SciTech Connect

    Bernholdt, D E; Elwasif, W R; Kohl, J A; Epperly, T G W

    2003-01-21

    The Common Component Architecture (CCA) provides a means for developers to manage the complexity of large-scale scientific software systems and to move toward a ''plug and play'' environment for high-performance computing. The CCA model allows for a direct connection between components within the same process to maintain performance on inter-component calls. It is neutral with respect to parallelism, allowing components to use whatever means they desire to communicate within their parallel ''cohort.'' We will discuss in detail the importance of performance in the design of the CCA and will analyze the performance costs associated with features of the CCA.
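
    The provides/uses-port idea behind such component models can be illustrated with a toy example. This is not the actual CCA specification or its tooling, just a hypothetical same-process sketch of why direct port connection keeps inter-component calls cheap.

```python
# Hypothetical provides/uses-port sketch in the spirit of component architectures
# such as the CCA (NOT the real CCA API): components in the same process are wired
# together through ports, so an inter-component call is just a direct function call.
class Port:
    """A named interface that a component provides or uses."""
    def __init__(self, impl):
        self.impl = impl

class SolverComponent:
    def provides(self):
        # toy "solve" service: scale a vector by 1/2
        return {"solve": Port(lambda b: [x / 2.0 for x in b])}

class DriverComponent:
    def __init__(self):
        self.solve_port = None

    def connect(self, name, port):
        if name == "solve":
            self.solve_port = port

    def run(self):
        # direct call through the port: no serialization, same-process overhead only
        return self.solve_port.impl([2.0, 4.0, 6.0])

solver, driver = SolverComponent(), DriverComponent()
driver.connect("solve", solver.provides()["solve"])
print(driver.run())   # [1.0, 2.0, 3.0]
```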

  9. Novel systems and methods for quantum communication, quantum computation, and quantum simulation

    NASA Astrophysics Data System (ADS)

    Gorshkov, Alexey Vyacheslavovich

    Precise control over quantum systems can enable the realization of fascinating applications such as powerful computers, secure communication devices, and simulators that can elucidate the physics of complex condensed matter systems. However, the fragility of quantum effects makes it very difficult to harness the power of quantum mechanics. In this thesis, we present novel systems and tools for gaining fundamental insights into the complex quantum world and for bringing practical applications of quantum mechanics closer to reality. We first optimize and show equivalence between a wide range of techniques for storage of photons in atomic ensembles. We describe experiments demonstrating the potential of our optimization algorithms for quantum communication and computation applications. Next, we combine the technique of photon storage with strong atom-atom interactions to propose a robust protocol for implementing the two-qubit photonic phase gate, which is an important ingredient in many quantum computation and communication tasks. In contrast to photon storage, many quantum computation and simulation applications require individual addressing of closely-spaced atoms, ions, quantum dots, or solid state defects. To meet this requirement, we propose a method for coherent optical far-field manipulation of quantum systems with a resolution that is not limited by the wavelength of radiation. While alkali atoms are currently the system of choice for photon storage and many other applications, we develop new methods for quantum information processing and quantum simulation with ultracold alkaline-earth atoms in optical lattices. We show how multiple qubits can be encoded in individual alkaline-earth atoms and harnessed for quantum computing and precision measurements applications. We also demonstrate that alkaline-earth atoms can be used to simulate highly symmetric systems exhibiting spin-orbital interactions and capable of providing valuable insights into strongly

  10. Rapid indirect trajectory optimization on highly parallel computing architectures

    NASA Astrophysics Data System (ADS)

    Antony, Thomas

    Trajectory optimization is a field which can benefit greatly from the advantages offered by parallel computing. The current state-of-the-art in trajectory optimization focuses on the use of direct optimization methods, such as the pseudo-spectral method. These methods are favored due to their ease of implementation and large convergence regions while indirect methods have largely been ignored in the literature in the past decade except for specific applications in astrodynamics. It has been shown that the shortcomings conventionally associated with indirect methods can be overcome by the use of a continuation method in which complex trajectory solutions are obtained by solving a sequence of progressively difficult optimization problems. High performance computing hardware is trending towards more parallel architectures as opposed to powerful single-core processors. Graphics Processing Units (GPU), which were originally developed for 3D graphics rendering have gained popularity in the past decade as high-performance, programmable parallel processors. The Compute Unified Device Architecture (CUDA) framework, a parallel computing architecture and programming model developed by NVIDIA, is one of the most widely used platforms in GPU computing. GPUs have been applied to a wide range of fields that require the solution of complex, computationally demanding problems. A GPU-accelerated indirect trajectory optimization methodology which uses the multiple shooting method and continuation is developed using the CUDA platform. The various algorithmic optimizations used to exploit the parallelism inherent in the indirect shooting method are described. The resulting rapid optimal control framework enables the construction of high quality optimal trajectories that satisfy problem-specific constraints and fully satisfy the necessary conditions of optimality. The benefits of the framework are highlighted by construction of maximum terminal velocity trajectories for a hypothetical

  11. Quantum computation over the butterfly network

    SciTech Connect

    Soeda, Akihito; Kinjo, Yoshiyuki; Turner, Peter S.; Murao, Mio

    2011-07-15

    In order to investigate distributed quantum computation under restricted network resources, we introduce a quantum computation task over the butterfly network where both quantum and classical communications are limited. We consider deterministically performing a two-qubit global unitary operation on two unknown inputs given at different nodes, with outputs at two distinct nodes. By using a particular resource setting introduced by M. Hayashi [Phys. Rev. A 76, 040301(R) (2007)], which is capable of performing a swap operation by adding two maximally entangled qubits (ebits) between the two input nodes, we show that unitary operations can be performed without adding any entanglement resource, if and only if the unitary operations are locally unitary equivalent to controlled unitary operations. Our protocol is optimal in the sense that the unitary operations cannot be implemented if we relax the specifications of any of the channels. We also construct protocols for performing controlled traceless unitary operations with a 1-ebit resource and for performing global Clifford operations with a 2-ebit resource.

  12. Supporting Undergraduate Computer Architecture Students Using a Visual MIPS64 CPU Simulator

    ERIC Educational Resources Information Center

    Patti, D.; Spadaccini, A.; Palesi, M.; Fazzino, F.; Catania, V.

    2012-01-01

    The topics of computer architecture are always taught using an Assembly dialect as an example. The most commonly used textbooks in this field use the MIPS64 Instruction Set Architecture (ISA) to help students in learning the fundamentals of computer architecture because of its orthogonality and its suitability for real-world applications. This…

  13. Multiple-server Flexible Blind Quantum Computation in Networks

    NASA Astrophysics Data System (ADS)

    Kong, Xiaoqin; Li, Qin; Wu, Chunhui; Yu, Fang; He, Jinjun; Sun, Zhiyuan

    2016-06-01

    Blind quantum computation (BQC) can allow a client with limited quantum power to delegate his quantum computation to a powerful server and still keep his own data private. In this paper, we present a multiple-server flexible BQC protocol, where a client who only needs the ability to access quantum channels can delegate the computational task to a number of servers. In particular, the client's quantum computation can still be achieved even when one or more of the delegated quantum servers break down in the network. In other words, when connections to certain quantum servers are lost, clients can adjust flexibly and delegate their quantum computation to other servers. Obviously, the computation will be unsuccessful if all servers are interrupted.

  14. Multiple-server Flexible Blind Quantum Computation in Networks

    NASA Astrophysics Data System (ADS)

    Kong, Xiaoqin; Li, Qin; Wu, Chunhui; Yu, Fang; He, Jinjun; Sun, Zhiyuan

    2016-02-01

    Blind quantum computation (BQC) can allow a client with limited quantum power to delegate his quantum computation to a powerful server and still keep his own data private. In this paper, we present a multiple-server flexible BQC protocol, where a client who only needs the ability to access quantum channels can delegate the computational task to a number of servers. In particular, the client's quantum computation can still be achieved even when one or more of the delegated quantum servers break down in the network. In other words, when connections to certain quantum servers are lost, clients can adjust flexibly and delegate their quantum computation to other servers. Obviously, the computation will be unsuccessful if all servers are interrupted.

  15. Possible topological quantum computation via Khovanov homology: D-brane topological quantum computer

    NASA Astrophysics Data System (ADS)

    Vélez, Mario; Ospina, Juan

    2009-05-01

    A model of a D-Brane Topological Quantum Computer (DBTQC) is presented and supported. The model is based on four-dimensional TQFTs of the Donaldson-Witten and Seiberg-Witten kinds. It is argued that the DBTQC is able to compute Khovanov homology for knots, links and graphs. The DBTQC physically incorporates the mathematical process of categorification, according to which the invariant polynomials for knots, links and graphs, such as the Jones, HOMFLY, Tutte and Bollobás-Riordan polynomials, can be computed as the Euler characteristics of special homology complexes associated with knots, links and graphs. The DBTQC is conjectured to be a powerful universal quantum computer in the sense that it computes Khovanov homology, which is considered more powerful than the Jones polynomial.

  16. Milestones Toward Majorana-Based Quantum Computing

    NASA Astrophysics Data System (ADS)

    Aasen, David; Hell, Michael; Mishmash, Ryan V.; Higginbotham, Andrew; Danon, Jeroen; Leijnse, Martin; Jespersen, Thomas S.; Folk, Joshua A.; Marcus, Charles M.; Flensberg, Karsten; Alicea, Jason

    2016-07-01

    We introduce a scheme for preparation, manipulation, and read out of Majorana zero modes in semiconducting wires with mesoscopic superconducting islands. Our approach synthesizes recent advances in materials growth with tools commonly used in quantum-dot experiments, including gate control of tunnel barriers and Coulomb effects, charge sensing, and charge pumping. We outline a sequence of milestones interpolating between zero-mode detection and quantum computing that includes (1) detection of fusion rules for non-Abelian anyons using either proximal charge sensors or pumped current, (2) validation of a prototype topological qubit, and (3) demonstration of non-Abelian statistics by braiding in a branched geometry. The first two milestones require only a single wire with two islands, and additionally enable sensitive measurements of the system's excitation gap, quasiparticle poisoning rates, residual Majorana zero-mode splittings, and topological-qubit coherence times. These pre-braiding experiments can be adapted to other manipulation and read out schemes as well.

  17. Modern hardware architectures accelerate porous media flow computations

    NASA Astrophysics Data System (ADS)

    Kulczewski, Michal; Kurowski, Krzysztof; Kierzynka, Michal; Dohnalik, Marek; Kaczmarczyk, Jan; Borujeni, Ali Takbiri

    2012-05-01

    Investigation of rock properties, particularly porosity and permeability, which determine the transport characteristics of the medium, is crucial to reservoir engineering. Nowadays, micro-tomography (micro-CT) methods allow a wide range of petro-physical properties to be obtained. The micro-CT method facilitates visualization of pore structures and acquisition of the total porosity factor, determined by sticking together 2D slices of scanned rock and applying a proper absorption cut-off point. Proper segmentation of the 3D pore representation is important for solving the permeability of porous media. This factor is nowadays determined by means of Computational Fluid Dynamics (CFD), a popular method for analyzing problems related to fluid flows that takes advantage of numerical methods and constantly growing computing power. The recent advent of novel multi-core, many-core and graphics processing unit (GPU) hardware architectures allows scientists to benefit even more from parallel processing and new built-in features. The high level of parallel scalability offers both a decrease in time-to-solution and greater accuracy - top factors in reservoir engineering. This paper presents research results related to fluid flow simulations, particularly solving the total porosity and permeability of porous media, taking advantage of modern hardware architectures. In our approach total porosity is calculated by means of general-purpose computing on multiple GPUs. The application sticks together 2D slices of scanned rock and, by means of a marching tetrahedra algorithm, creates a 3D representation of the pores and calculates the total porosity. Experimental results are compared with data obtained via other popular methods, including Nuclear Magnetic Resonance (NMR), helium porosity and nitrogen permeability tests. CFD simulations are then performed on a large-scale high-performance hardware architecture to solve the flow and permeability of porous media. In our experiments we used Lattice Boltzmann
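
    The porosity step is simple to state in code. The sketch below (my own minimal illustration with synthetic data, not the paper's multi-GPU implementation) applies an absorption cut-off to a stack of slices and reports the pore-voxel fraction as the total porosity.

```python
# Minimal sketch (not the paper's GPU code): total porosity from a stack of
# micro-CT slices by applying an absorption cut-off and counting pore voxels.
import numpy as np

rng = np.random.default_rng(1)
# stand-in for a stack of grayscale CT slices, shape (n_slices, ny, nx)
volume = rng.normal(loc=120.0, scale=30.0, size=(64, 128, 128))

cutoff = 100.0                      # absorption cut-off separating pore from grain
pores = volume < cutoff             # boolean 3D pore map "stuck together" from slices
porosity = pores.mean()             # pore voxels / total voxels

print(f"total porosity ~ {porosity:.3f}")
```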

  18. Minimal computational-space implementation of multiround quantum protocols

    SciTech Connect

    Bisio, Alessandro; D'Ariano, Giacomo Mauro; Perinotti, Paolo; Chiribella, Giulio

    2011-02-15

    A single-party strategy in a multiround quantum protocol can be implemented by sequential networks of quantum operations connected by internal memories. Here, we provide an efficient realization in terms of computational-space resources.

  19. Semiquantum key distribution with secure delegated quantum computation

    NASA Astrophysics Data System (ADS)

    Li, Qin; Chan, Wai Hong; Zhang, Shengyu

    2016-01-01

    Semiquantum key distribution allows a quantum party to share a random key with a “classical” party who only can prepare and measure qubits in the computational basis or reorder some qubits when he has access to a quantum channel. In this work, we present a protocol where a secret key can be established between a quantum user and an almost classical user who only needs the quantum ability to access quantum channels, by securely delegating quantum computation to a quantum server. We show the proposed protocol is robust even when the delegated quantum server is a powerful adversary, and is experimentally feasible with current technology. As one party of our protocol is the most quantum-resource efficient, it can be more practical and significantly widen the applicability scope of quantum key distribution.

  20. Semiquantum key distribution with secure delegated quantum computation.

    PubMed

    Li, Qin; Chan, Wai Hong; Zhang, Shengyu

    2016-01-01

    Semiquantum key distribution allows a quantum party to share a random key with a "classical" party who only can prepare and measure qubits in the computational basis or reorder some qubits when he has access to a quantum channel. In this work, we present a protocol where a secret key can be established between a quantum user and an almost classical user who only needs the quantum ability to access quantum channels, by securely delegating quantum computation to a quantum server. We show the proposed protocol is robust even when the delegated quantum server is a powerful adversary, and is experimentally feasible with current technology. As one party of our protocol is the most quantum-resource efficient, it can be more practical and significantly widen the applicability scope of quantum key distribution. PMID:26813384

  1. Semiquantum key distribution with secure delegated quantum computation

    PubMed Central

    Li, Qin; Chan, Wai Hong; Zhang, Shengyu

    2016-01-01

    Semiquantum key distribution allows a quantum party to share a random key with a “classical” party who only can prepare and measure qubits in the computational basis or reorder some qubits when he has access to a quantum channel. In this work, we present a protocol where a secret key can be established between a quantum user and an almost classical user who only needs the quantum ability to access quantum channels, by securely delegating quantum computation to a quantum server. We show the proposed protocol is robust even when the delegated quantum server is a powerful adversary, and is experimentally feasible with current technology. As one party of our protocol is the most quantum-resource efficient, it can be more practical and significantly widen the applicability scope of quantum key distribution. PMID:26813384

  2. Fast graph operations in quantum computation

    NASA Astrophysics Data System (ADS)

    Zhao, Liming; Pérez-Delgado, Carlos A.; Fitzsimons, Joseph F.

    2016-03-01

    The connection between certain entangled states and graphs has been heavily studied in the context of measurement-based quantum computation as a tool for understanding entanglement. Here we show that this correspondence can be harnessed in the reverse direction to yield a graph data structure, which allows for more efficient manipulation and comparison of graphs than any possible classical structure. We introduce efficient algorithms for many transformation and comparison operations on graphs represented as graph states, and prove that no classical data structure can have similar performance for the full set of operations studied.

  3. Decoherence in a scalable adiabatic quantum computer

    SciTech Connect

    Ashhab, S.; Johansson, J. R.; Nori, Franco

    2006-11-15

    We consider the effects of decoherence on Landau-Zener crossings encountered in a large-scale adiabatic-quantum-computing setup. We analyze the dependence of the success probability--i.e., the probability for the system to end up in its new ground state--on the noise amplitude and correlation time. We determine the optimal sweep rate that is required to maximize the success probability. We then discuss the scaling of decoherence effects with increasing system size. We find that those effects can be important for large systems, even if they are small for each of the small building blocks.
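
    For orientation, the noiseless success probability of a single Landau-Zener crossing with minimum gap Delta and sweep rate v of the diabatic energy difference is 1 - exp(-pi*Delta^2/(2*hbar*v)). The helper below (an illustrative, convention-dependent formula; it does not reproduce the paper's noise model) makes the sweep-rate trade-off explicit.

```python
# Illustrative helper (noiseless case only; the paper's noise analysis is not reproduced):
# Landau-Zener success probability for a single avoided crossing with minimum gap `gap`
# and sweep rate `v` = d(E_diabatic1 - E_diabatic2)/dt, in units with hbar = 1.
import numpy as np

def lz_success(gap: float, v: float) -> float:
    """P(stay in the ground state) = 1 - exp(-pi * gap^2 / (2 * v))."""
    return 1.0 - np.exp(-np.pi * gap**2 / (2.0 * v))

for v in (0.01, 0.1, 1.0, 10.0):
    print(f"v = {v:5.2f}  ->  P_success = {lz_success(gap=0.2, v=v):.4f}")
```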

  4. Optically driven nanostructures as the basis for large-scale quantum computing

    NASA Astrophysics Data System (ADS)

    Tsukanov, Alexander V.

    2008-03-01

    We propose a large-scale quantum computer architecture based upon the regular arrays of dopant atoms implanted into the semiconductor host matrix. The singly-ionized pairs of donors represent charge qubits on which arbitrary quantum operations can be achieved by application of two strongly detuned laser pulses. The implementation of two-qubit operations as well as the qubit read-out utilize the intermediate circuit containing a probe electron that is able to shuttle along the array of ionized ancilla donors providing the indirect conditional coupling between the qubits. The quantum bus strategy enables us to handle the qubits connected in parallel and enhances the efficiency of the quantum information processing. We demonstrate that non-trivial multi-qubit operations in the quantum register (e.g., an entanglement generation) can be accomplished by the sequence of the optical pulses combined with an appropriate voltage gate pattern.

  5. Do multipartite correlations speed up adiabatic quantum computation or quantum annealing?

    NASA Astrophysics Data System (ADS)

    Batle, J.; Ooi, C. H. Raymond; Farouk, Ahmed; Abutalib, M.; Abdalla, S.

    2016-08-01

    Quantum correlations are thought to be the reason why certain quantum algorithms overcome their classical counterparts. Since the nature of this resource is still not fully understood, we investigate how multipartite entanglement and non-locality among qubits vary as the quantum computation runs. We find that quantum measures on the whole system cannot account for the corresponding speedup.

  6. Do multipartite correlations speed up adiabatic quantum computation or quantum annealing?

    NASA Astrophysics Data System (ADS)

    Batle, J.; Ooi, C. H. Raymond; Farouk, Ahmed; Abutalib, M.; Abdalla, S.

    2016-04-01

    Quantum correlations are thought to be the reason why certain quantum algorithms overcome their classical counterparts. Since the nature of this resource is still not fully understood, we investigate how multipartite entanglement and non-locality among qubits vary as the quantum computation runs. We find that quantum measures on the whole system cannot account for the corresponding speedup.

  7. Graph isomorphism and adiabatic quantum computing

    NASA Astrophysics Data System (ADS)

    Gaitan, Frank; Clark, Lane

    2014-03-01

    In the Graph Isomorphism (GI) problem two N-vertex graphs G and G' are given and the task is to determine whether there exists a permutation of the vertices of G that preserves adjacency and maps G --> G'. If yes (no), then G and G' are said to be isomorphic (non-isomorphic). The GI problem is an important problem in computer science and is thought to be of comparable difficulty to integer factorization. We present a quantum algorithm that solves arbitrary instances of GI, and which provides a novel approach to determining all automorphisms of a graph. The algorithm converts a GI instance to a combinatorial optimization problem that can be solved using adiabatic quantum evolution. Numerical simulation of the algorithm's quantum dynamics shows that it correctly distinguishes non-isomorphic graphs; recognizes isomorphic graphs; and finds the automorphism group of a graph. We also discuss the algorithm's experimental implementation and show how it can be leveraged to solve arbitrary instances of the NP-Complete Sub-Graph Isomorphism problem.

  8. Adiabatic Quantum Computation with Neutral Atoms

    NASA Astrophysics Data System (ADS)

    Biedermann, Grant

    2013-03-01

    We are implementing a new platform for adiabatic quantum computation (AQC)[2] based on trapped neutral atoms whose coupling is mediated by the dipole-dipole interactions of Rydberg states. Ground state cesium atoms are dressed by laser fields in a manner conditional on the Rydberg blockade mechanism,[3,4] thereby providing the requisite entangling interactions. As a benchmark we study a Quadratic Unconstrained Binary Optimization (QUBO) problem whose solution is found in the ground state spin configuration of an Ising-like model. In collaboration with Lambert Parazzoli, Sandia National Laboratories; Aaron Hankin, Center for Quantum Information and Control (CQuIC), University of New Mexico; James Chin-Wen Chou, Yuan-Yu Jau, Peter Schwindt, Cort Johnson, and George Burns, Sandia National Laboratories; Tyler Keating, Krittika Goyal, and Ivan Deutsch, Center for Quantum Information and Control (CQuIC), University of New Mexico; and Andrew Landahl, Sandia National Laboratories. This work was supported by the Laboratory Directed Research and Development program at Sandia National Laboratories
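
    The benchmark problem class is easy to state concretely: a QUBO or Ising instance asks for the spin configuration that minimizes an energy built from couplings and local fields. The brute-force sketch below (my own code, with arbitrary illustrative couplings) shows what the annealer's ground state encodes.

```python
# Toy check in the same spirit: the ground-state spin configuration of a small
# Ising instance found by brute force (the classical answer an annealer targets).
# The couplings and fields below are arbitrary illustrative values.
import itertools

J = {(0, 1): 1.0, (1, 2): -0.8, (0, 2): 0.5}   # Ising couplings J_ij
h = {0: 0.1, 1: -0.2, 2: 0.0}                  # local fields h_i

def energy(spins):
    e = sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    e += sum(hi * spins[i] for i, hi in h.items())
    return e

best = min(itertools.product([-1, +1], repeat=3), key=energy)
print("ground state spins:", best, " energy:", energy(best))
```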

  9. Quantum cryptography on multi-user network architectures

    NASA Astrophysics Data System (ADS)

    Kumavor, Patrick D.; Beal, Alan C.; Yelin, Susanne; Donkor, Eric; Wang, Bing C.

    2006-05-01

    Quantum cryptography applies the uncertainty principle and the no-cloning theorem to allow two parties to share a secret key over an ultra-secure link. Present quantum cryptography technologies provide encryption key distribution only between two users. However, practical implementations of encryption key distribution schemes require establishing secure quantum communications amongst multiple users. This paper looks at some of the advantages and drawbacks of common network topologies that could be used to send cryptographic keys across a network consisting of multiple users. These topologies are the star, ring, and bus networks. Their performances are compared and analyzed using quantum bit error rate analysis. The paper also presents an experimental demonstration of a six-user quantum key distribution network implemented on a bus topology.
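
    As a reminder of the figure of merit used here, the quantum bit error rate (QBER) is simply the fraction of mismatched bits in the sifted key. The sketch below (an illustrative BB84-style simulation with an assumed flat error probability, not the paper's network model) computes it.

```python
# Simple illustration (not the paper's topology analysis): quantum bit error rate
# from a simulated BB84-style sifted key with an assumed physical error probability.
import numpy as np

rng = np.random.default_rng(7)
n = 20000
alice_bits  = rng.integers(0, 2, n)
alice_basis = rng.integers(0, 2, n)
bob_basis   = rng.integers(0, 2, n)

p_err = 0.03                                      # assumed channel/detector error rate
flips = rng.random(n) < p_err
bob_bits = np.where(flips, 1 - alice_bits, alice_bits)

sifted = alice_basis == bob_basis                 # keep rounds with matching bases
qber = np.mean(alice_bits[sifted] != bob_bits[sifted])
print(f"sifted key length = {sifted.sum()},  QBER ~ {qber:.3f}")
```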

  10. Multiple network alignment on quantum computers

    NASA Astrophysics Data System (ADS)

    Daskin, Anmer; Grama, Ananth; Kais, Sabre

    2014-12-01

    Comparative analyses of graph-structured datasets underlie diverse problems. Examples of these problems include identification of conserved functional components (biochemical interactions) across species, structural similarity of large biomolecules, and recurring patterns of interactions in social networks. A large class of such analysis methods quantify the topological similarity of nodes across networks. The resulting correspondence of nodes across networks, also called node alignment, can be used to identify invariant subgraphs across the input graphs. Given k graphs as input, alignment algorithms use topological information to assign a similarity score to each k-tuple of nodes, with elements (nodes) drawn from each of the input graphs. Nodes are considered similar if their neighbors are also similar. An alternate, equivalent view of these network alignment algorithms is to consider the Kronecker product of the input graphs and to identify high-ranked nodes in the Kronecker product graph. Conventional methods such as PageRank and HITS (Hypertext-Induced Topic Selection) can be used for this purpose. These methods typically require computation of the principal eigenvector of a suitably modified Kronecker product matrix of the input graphs. We adopt this alternate view of the problem to address the problem of multiple network alignment. Using the phase estimation algorithm, we show that the multiple network alignment problem can be efficiently solved on quantum computers. We characterize the accuracy and performance of our method and show that it can deliver exponential speedups over conventional (non-quantum) methods.
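
    The classical core of this approach is easy to reproduce at toy scale. The sketch below (illustrative only; the quantum algorithm in the record replaces the eigenvector computation with phase estimation) forms the Kronecker product of two small adjacency matrices and extracts its principal eigenvector by power iteration to obtain pairwise node-similarity scores.

```python
# Classical analogue of the alignment step described above (toy-sized illustration):
# node-pair similarity scores from the principal eigenvector of the Kronecker product
# of two adjacency matrices, found by power iteration.
import numpy as np

A = np.array([[0, 1, 1],                      # graph G1: a triangle
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
B = np.array([[0, 1, 1, 0],                   # graph G2: a triangle with a pendant node
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

K = np.kron(A, B)                             # rows/cols index node pairs (i, j)
v = np.ones(K.shape[0]) / np.sqrt(K.shape[0])
for _ in range(200):                          # power iteration -> principal eigenvector
    v = K @ v
    v /= np.linalg.norm(v)

scores = v.reshape(A.shape[0], B.shape[0])    # scores[i, j]: similarity of G1 node i to G2 node j
print(np.round(scores, 3))
```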

  11. Algorithmic cooling and scalable NMR quantum computers

    PubMed Central

    Boykin, P. Oscar; Mor, Tal; Roychowdhury, Vwani; Vatan, Farrokh; Vrijen, Rutger

    2002-01-01

    We present here algorithmic cooling (via polarization heat bath), a powerful method for obtaining a large number of highly polarized spins in liquid nuclear-spin systems at finite temperature. Given that spin-half states represent (quantum) bits, algorithmic cooling cleans dirty bits beyond Shannon's bound on data compression, by using a set of rapidly thermal-relaxing bits. Such auxiliary bits could be implemented by using spins that rapidly get into thermal equilibrium with the environment, e.g., electron spins. Interestingly, the interaction with the environment, usually a most undesired interaction, is used here to our benefit, allowing a cooling mechanism. Cooling spins to a very low temperature without cooling the environment could lead to a breakthrough in NMR experiments, and our "spin-refrigerating" method suggests that this is possible. The scaling of NMR ensemble computers is currently one of the main obstacles to building larger-scale quantum computing devices, and our spin-refrigerating method suggests that this problem can be resolved. PMID:11904402

  12. Multiple network alignment on quantum computers

    NASA Astrophysics Data System (ADS)

    Daskin, Anmer; Grama, Ananth; Kais, Sabre

    2014-09-01

    Comparative analyses of graph-structured datasets underlie diverse problems. Examples of these problems include identification of conserved functional components (biochemical interactions) across species, structural similarity of large biomolecules, and recurring patterns of interactions in social networks. A large class of such analysis methods quantify the topological similarity of nodes across networks. The resulting correspondence of nodes across networks, also called node alignment, can be used to identify invariant subgraphs across the input graphs. Given k graphs as input, alignment algorithms use topological information to assign a similarity score to each k-tuple of nodes, with elements (nodes) drawn from each of the input graphs. Nodes are considered similar if their neighbors are also similar. An alternate, equivalent view of these network alignment algorithms is to consider the Kronecker product of the input graphs, and to identify high-ranked nodes in the Kronecker product graph. Conventional methods such as PageRank and HITS (Hypertext Induced Topic Selection) can be used for this purpose. These methods typically require computation of the principal eigenvector of a suitably modified Kronecker product matrix of the input graphs. We adopt this alternate view of the problem to address the problem of multiple network alignment. Using the phase estimation algorithm, we show that the multiple network alignment problem can be efficiently solved on quantum computers. We characterize the accuracy and performance of our method, and show that it can deliver exponential speedups over conventional (non-quantum) methods.

  13. Control aspects of quantum computing using pure and mixed states

    PubMed Central

    Schulte-Herbrüggen, Thomas; Marx, Raimund; Fahmy, Amr; Kauffman, Louis; Lomonaco, Samuel; Khaneja, Navin; Glaser, Steffen J.

    2012-01-01

    Steering quantum dynamics such that the target states solve classically hard problems is paramount to quantum simulation and computation. And beyond, quantum control is also essential to pave the way to quantum technologies. Here, important control techniques are reviewed and presented in a unified frame covering quantum computational gate synthesis and spectroscopic state transfer alike. We emphasize that it does not matter whether the quantum states of interest are pure or not. While pure states underly the design of quantum circuits, ensemble mixtures of quantum states can be exploited in a more recent class of algorithms: it is illustrated by characterizing the Jones polynomial in order to distinguish between different (classes of) knots. Further applications include Josephson elements, cavity grids, ion traps and nitrogen vacancy centres in scenarios of closed as well as open quantum systems. PMID:22946034

  14. Considerations for the extension of coherent optical processors into the quantum computing regime

    NASA Astrophysics Data System (ADS)

    Young, Rupert C. D.; Birch, Philip M.; Chatwin, Chris R.

    2016-04-01

    Previously we have examined the similarities of the quantum Fourier transform to the classical coherent optical implementation of the Fourier transform (R. Young et al, Proc SPIE Vol 87480, 874806-1, -11). In this paper, we further consider how superposition states can be generated on coherent optical wave fronts, potentially allowing coherent optical processing hardware architectures to be extended into the quantum computing regime. In particular, we propose placing the pixels of a Spatial Light Modulator (SLM) individually in a binary superposition state and illuminating them with a coherent wave front from a conventional (but low intensity) laser source in order to make a so-called `interaction free' measurement. In this way, the quantum object, i.e. the individual pixels of the SLM in their superposition states, and the illuminating wavefront would become entangled. We show that if this were possible, it would allow the extension of coherent processing architectures into the quantum computing regime and we give an example of such a processor configured to recover one of a known set of images encrypted using the well-known coherent optical processing technique of employing a random Fourier plane phase encryption mask which classically requires knowledge of the corresponding phase conjugate key to decrypt the image. A quantum optical computer would allow interrogation of all possible phase masks in parallel and so immediate decryption.

  15. Measurement-only topological quantum computation via anyonic interferometry

    SciTech Connect

    Bonderson, Parsa; Freedman, Michael; Nayak, Chetan

    2009-04-15

    We describe measurement-only topological quantum computation using both projective and interferometrical measurement of topological charge. We demonstrate how anyonic teleportation can be achieved using 'forced measurement' protocols for both types of measurement. Using this, it is shown how topological charge measurements can be used to generate the braiding transformations used in topological quantum computation, and hence that the physical transportation of computational anyons is unnecessary. We give a detailed discussion of the anyonics for implementation of topological quantum computation (particularly, using the measurement-only approach) in fractional quantum Hall systems.

  16. Examining the architecture of cellular computing through a comparative study with a computer.

    PubMed

    Wang, Degeng; Gribskov, Michael

    2005-06-22

    The computer and the cell both use information embedded in simple coding, the binary software code and the quadruple genomic code, respectively, to support system operations. A comparative examination of their system architecture as well as their information storage and utilization schemes is performed. On top of the code, both systems display a modular, multi-layered architecture, which, in the case of a computer, arises from human engineering efforts through a combination of hardware implementation and software abstraction. Using the computer as a reference system, a simplistic mapping of the architectural components between the two is easily detected. This comparison also reveals that a cell abolishes the software-hardware barrier through genomic encoding for the constituents of the biochemical network, a cell's "hardware" equivalent to the computer central processing unit (CPU). The information loading (gene expression) process acts as a major determinant of the encoded constituent's abundance, which, in turn, often determines the "bandwidth" of a biochemical pathway. Cellular processes are implemented in biochemical pathways in parallel manners. In a computer, on the other hand, the software provides only instructions and data for the CPU. A process represents just sequentially ordered actions by the CPU and only virtual parallelism can be implemented through CPU time-sharing. Whereas process management in a computer may simply mean job scheduling, coordinating pathway bandwidth through the gene expression machinery represents a major process management scheme in a cell. In summary, a cell can be viewed as a super-parallel computer, which computes through controlled hardware composition. While we have, at best, a very fragmented understanding of cellular operation, we have a thorough understanding of the computer throughout the engineering process. The potential utilization of this knowledge to the benefit of systems biology is discussed. PMID:16849179

  17. A reconfigurable gate architecture for Si/SiGe quantum dots

    SciTech Connect

    Zajac, D. M.; Hazard, T. M.; Mi, X.; Wang, K.; Petta, J. R.

    2015-06-01

    We demonstrate a reconfigurable quantum dot gate architecture that incorporates two interchangeable transport channels. One channel is used to form quantum dots, and the other is used for charge sensing. The quantum dot transport channel can support either a single or a double quantum dot. We demonstrate few-electron occupation in a single quantum dot and extract charging energies as large as 6.6 meV. Magnetospectroscopy is used to measure valley splittings in the range of 35–70 μeV. By energizing two additional gates, we form a few-electron double quantum dot and demonstrate tunable tunnel coupling at the (1,0) to (0,1) interdot charge transition.

  18. Quantum computation of multifractal exponents through the quantum wavelet transform

    SciTech Connect

    Garcia-Mata, Ignacio; Giraud, Olivier; Georgeot, Bertrand

    2009-05-15

    We study the use of the quantum wavelet transform to efficiently extract information about the multifractal exponents of multifractal quantum states. We show that, combined with quantum simulation algorithms, it enables the construction of quantum algorithms for multifractal exponents with a polynomial gain compared to classical simulations. Numerical results indicate that a rough estimate of fractality could be obtained exponentially fast. Our findings are relevant, e.g., for quantum simulations of multifractal quantum maps and of the Anderson model at the metal-insulator transition.

  19. Computer Visualization of Many-Particle Quantum Dynamics

    SciTech Connect

    Ozhigov, A. Y.

    2009-03-10

    In this paper I show the importance of computer visualization in the study of many-particle quantum dynamics. Such visualization becomes an indispensable illustrative tool for understanding the behavior of dynamic swarm-based quantum systems. It is also an important component of the corresponding simulation framework, and can simplify the study of the underlying algorithms for multi-particle quantum systems.

  20. Computer Visualization of Many-Particle Quantum Dynamics

    NASA Astrophysics Data System (ADS)

    Ozhigov, A. Y.

    2009-03-01

    In this paper I show the importance of computer visualization in the study of many-particle quantum dynamics. Such visualization becomes an indispensable illustrative tool for understanding the behavior of dynamic swarm-based quantum systems. It is also an important component of the corresponding simulation framework, and can simplify the study of the underlying algorithms for multi-particle quantum systems.

  1. HTMT-class Latency Tolerant Parallel Architecture for Petaflops Scale Computation

    NASA Technical Reports Server (NTRS)

    Sterling, Thomas; Bergman, Larry

    2000-01-01

    Computational Aero Sciences and other numerically intensive computational disciplines demand computing throughputs substantially greater than the Teraflops scale systems only now becoming available. The related fields of fluids, structures, thermal, combustion, and dynamic controls are among the interdisciplinary areas that in combination with sufficient resolution and advanced adaptive techniques may force performance requirements towards Petaflops. This will be especially true for compute-intensive models such as Navier-Stokes, or when such system models are only part of a larger design optimization computation involving many design points. Yet recent experience with conventional MPP configurations comprising commodity processing and memory components has shown that larger scale frequently results in higher programming difficulty and lower system efficiency. While important advances in system software and algorithmic techniques have had some impact on efficiency and programmability for certain classes of problems, in general it is unlikely that software alone will resolve the challenges to higher scalability. As in the past, future generations of high-end computers may require a combination of hardware architecture and system software advances to enable efficient operation at a Petaflops level. The NASA-led HTMT project has engaged the talents of a broad interdisciplinary team to develop a new strategy in high-end system architecture to deliver petaflops-scale computing in the 2004/5 timeframe. The Hybrid-Technology, MultiThreaded parallel computer architecture incorporates several advanced technologies in combination with an innovative dynamic adaptive scheduling mechanism to provide unprecedented performance and efficiency within practical constraints of cost, complexity, and power consumption. The emerging superconductor Rapid Single Flux Quantum electronics can operate at 100 GHz (the record is 770 GHz) and one percent of the power required by convention

  2. QCMPI: A parallel environment for quantum computing

    NASA Astrophysics Data System (ADS)

    Tabakin, Frank; Juliá-Díaz, Bruno

    2009-06-01

    QCMPI is a quantum computer (QC) simulation package written in Fortran 90 with parallel processing capabilities. It is an accessible research tool that permits rapid evaluation of quantum algorithms for a large number of qubits and for various "noise" scenarios. The prime motivation for developing QCMPI is to facilitate numerical examination of not only how QC algorithms work, but also to include noise, decoherence, and attenuation effects and to evaluate the efficacy of error correction schemes. The present work builds on an earlier Mathematica code QDENSITY, which is mainly a pedagogic tool. In that earlier work, although the density matrix formulation was featured, the description using state vectors was also provided. In QCMPI, the stress is on state vectors, in order to employ a large number of qubits. The parallel processing feature is implemented by using the Message-Passing Interface (MPI) protocol. A description of how to spread the wave function components over many processors is provided, along with how to efficiently describe the action of general one- and two-qubit operators on these state vectors. These operators include the standard Pauli, Hadamard, CNOT and CPHASE gates and also Quantum Fourier transformation. These operators make up the actions needed in QC. Codes for Grover's search and Shor's factoring algorithms are provided as examples. A major feature of this work is that concurrent versions of the algorithms can be evaluated with each version subject to alternate noise effects, which corresponds to the idea of solving a stochastic Schrödinger equation. The density matrix for the ensemble of such noise cases is constructed using parallel distribution methods to evaluate its eigenvalues and associated entropy. Potential applications of this powerful tool include studies of the stability and correction of QC processes using Hamiltonian based dynamics. Program summary - Program title: QCMPI; Catalogue identifier: AECS_v1_0; Program summary URL
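
    The core state-vector operation that QCMPI distributes over MPI ranks is the application of a one-qubit operator to an n-qubit amplitude vector. A small serial analogue (plain NumPy; my own sketch, not the package's Fortran/MPI code) is shown below.

```python
# Serial analogue (no MPI) of the state-vector gate application that QCMPI distributes:
# a one-qubit gate acts on a chosen qubit of an n-qubit state by reshaping the vector.
import numpy as np

def apply_one_qubit(state, gate, target, n):
    """Apply a 2x2 `gate` to qubit `target` (0 = most significant) of an n-qubit state."""
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [target]))   # contract on the target axis
    psi = np.moveaxis(psi, 0, target)                     # restore the axis ordering
    return psi.reshape(-1)

n = 3
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                                            # |000>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
for q in range(n):                                        # Hadamard on every qubit
    state = apply_one_qubit(state, H, q, n)
print(np.round(state, 3))                                 # uniform superposition, amplitude 1/sqrt(8)
```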

  3. Heterotic quantum and classical computing on convergence spaces

    NASA Astrophysics Data System (ADS)

    Patten, D. R.; Jakel, D. W.; Irwin, R. J.; Blair, H. A.

    2015-05-01

    Category-theoretic characterizations of heterotic models of computation, introduced by Stepney et al., combine computational models such as classical/quantum, digital/analog, synchronous/asynchronous, etc. to obtain increased computational power. A highly informative classical/quantum heterotic model of computation is represented by Abramsky's simple sequential imperative quantum programming language which extends the classical simple imperative programming language to encompass quantum computation. The mathematical (denotational) semantics of this classical language serves as a basic foundation upon which formal verification methods can be developed. We present a more comprehensive heterotic classical/quantum model of computation based on heterotic dynamical systems on convergence spaces. Convergence spaces subsume topological spaces but admit finer structure from which, in prior work, we obtained differential calculi in the cartesian closed category of convergence spaces allowing us to define heterotic dynamical systems, given by coupled systems of first order differential equations whose variables are functions from the reals to convergence spaces.

  4. Random matrix model of adiabatic quantum computing

    SciTech Connect

    Mitchell, David R.; Adami, Christoph; Lue, Waynn; Williams, Colin P.

    2005-05-15

    We present an analysis of the quantum adiabatic algorithm for solving hard instances of 3-SAT (an NP-complete problem) in terms of random matrix theory (RMT). We determine the global regularity of the spectral fluctuations of the instantaneous Hamiltonians encountered during the interpolation between the starting Hamiltonians and the ones whose ground states encode the solutions to the computational problems of interest. At each interpolation point, we quantify the degree of regularity of the average spectral distribution via its Brody parameter, a measure that distinguishes regular (i.e., Poissonian) from chaotic (i.e., Wigner-type) distributions of normalized nearest-neighbor spacings. We find that for hard problem instances - i.e., those having a critical ratio of clauses to variables - the spectral fluctuations typically become irregular across a contiguous region of the interpolation parameter, while the spectrum is regular for easy instances. Within the hard region, RMT may be applied to obtain a mathematical model of the probability of avoided level crossings and concomitant failure rate of the adiabatic algorithm due to nonadiabatic Landau-Zener-type transitions. Our model predicts that if the interpolation is performed at a uniform rate, the average failure rate of the quantum adiabatic algorithm, when averaged over hard problem instances, scales exponentially with increasing problem size.
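    The spectral diagnostic used here is easy to reproduce on a toy spectrum. The sketch below (not the authors' code) takes a set of eigenvalues, forms normalized nearest-neighbor spacings with a crude unit-mean-spacing unfolding, and fits the Brody distribution P_beta(s) = (beta + 1) b s^beta exp(-b s^(beta+1)) with b = Gamma((beta+2)/(beta+1))^(beta+1); beta near 0 indicates Poissonian (regular) statistics and beta near 1 indicates Wigner-type (chaotic) statistics.

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.special import gamma

      def brody_pdf(s, beta):
          """Brody nearest-neighbor spacing distribution (beta=0 Poisson, beta=1 Wigner-like)."""
          b = gamma((beta + 2.0) / (beta + 1.0)) ** (beta + 1.0)
          return (beta + 1.0) * b * s ** beta * np.exp(-b * s ** (beta + 1.0))

      def brody_parameter(eigenvalues, bins=40):
          """Estimate the Brody parameter from normalized nearest-neighbor spacings."""
          e = np.sort(eigenvalues)
          e = e[len(e) // 4: 3 * len(e) // 4]   # keep the bulk, where the density is flatter
          s = np.diff(e)
          s = s / s.mean()                      # crude unfolding: unit mean spacing
          hist, edges = np.histogram(s, bins=bins, density=True)
          centers = 0.5 * (edges[:-1] + edges[1:])
          popt, _ = curve_fit(brody_pdf, centers, hist, p0=[0.5], bounds=(0.0, 1.0))
          return popt[0]

      # Example: a GOE-like random matrix should give a value near the Wigner end
      rng = np.random.default_rng(0)
      A = rng.normal(size=(800, 800))
      A = (A + A.T) / 2
      print(brody_parameter(np.linalg.eigvalsh(A)))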

  5. Earth Science Computational Architecture for Multi-disciplinary Investigations

    NASA Astrophysics Data System (ADS)

    Parker, J. W.; Blom, R.; Gurrola, E.; Katz, D.; Lyzenga, G.; Norton, C.

    2005-12-01

    Understanding the processes underlying Earth's deformation and mass transport requires a non-traditional, integrated, interdisciplinary approach dependent on multiple space and ground based data sets, modeling, and computational tools. Currently, details of geophysical data acquisition, analysis, and modeling largely limit research to discipline domain experts. Interdisciplinary research requires a new computational architecture that is optimized to perform complex data processing of multiple solid Earth science data types in a user-friendly environment. A web-based computational framework is being developed and integrated with applications for automatic interferometric radar processing, and models for high-resolution deformation & gravity, forward models of viscoelastic mass loading over short wavelengths & complex time histories, forward-inverse codes for characterizing surface loading-response over time scales of days to tens of thousands of years, and inversion of combined space magnetic & gravity fields to constrain deep crustal and mantle properties. This framework combines an adaptation of the QuakeSim distributed services methodology with the Pyre framework for multiphysics development. The system uses a three-tier architecture, with a middle tier server that manages user projects, available resources, and security. This ensures scalability to very large networks of collaborators. Users log into a web page and have a personal project area, persistently maintained between connections, for each application. Upon selection of an application and host from a list of available entities, inputs may be uploaded or constructed from web forms and available data archives, including gravity, GPS and imaging radar data. The user is notified of job completion and directed to results posted via URLs. Interdisciplinary work is supported through easy availability of all applications via common browsers, application tutorials and reference guides, and worked examples with

  6. Trapped Ion Quantum Computation by Adiabatic Passage

    SciTech Connect

    Feng Xuni; Wu Chunfeng; Lai, C. H.; Oh, C. H.

    2008-11-07

    We propose a new universal quantum computation scheme for trapped ions in thermal motion via the technique of adiabatic passage, which incorporates the advantages of both the adiabatic passage and the model of trapped ions in thermal motion. Our scheme is immune from the decoherence due to spontaneous emission from excited states as the system in our scheme evolves along a dark state. In our scheme the vibrational degrees of freedom are not required to be cooled to their ground states because they are only virtually excited. It is shown that the fidelity of the resultant gate operation is still high even when the magnitude of the effective Rabi frequency moderately deviates from the desired value.

  7. Number Partitioning via Quantum Adiabatic Computation

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, Vadim N.; Toussaint, Udo; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We study both analytically and numerically the complexity of the adiabatic quantum evolution algorithm applied to random instances of combinatorial optimization problems. We use as an example the NP-complete set partition problem and obtain an asymptotic expression for the minimal gap separating the ground and excited states of a system during the execution of the algorithm. We show that for computationally hard problem instances the size of the minimal gap scales exponentially with the problem size. This result is in qualitative agreement with the direct numerical simulation of the algorithm for small instances of the set partition problem. We describe the statistical properties of the optimization problem that are responsible for the exponential behavior of the algorithm.
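    A minimal exact-diagonalization sketch of the quantity studied here (not the authors' derivation): build H(s) = (1 - s) H_B + s H_P for a tiny partitioning instance, with H_P = (sum_i a_i sigma_z^i)^2 and the standard transverse-field driver H_B = -sum_i sigma_x^i, and scan the interpolation for the minimum gap. Because H(s) commutes with the global spin flip and the evolution starts in its +1 sector, the gap is evaluated inside that sector.

      import numpy as np
      from functools import reduce

      def minimal_gap(numbers, points=201):
          """Minimum spectral gap of H(s) = (1-s)*Hb + s*Hp for a small instance."""
          n = len(numbers)
          I2 = np.eye(2)
          sx = np.array([[0.0, 1.0], [1.0, 0.0]])
          sz = np.diag([1.0, -1.0])

          def site(op, k):  # operator `op` on qubit k, identity elsewhere
              return reduce(np.kron, [op if i == k else I2 for i in range(n)])

          Hb = -sum(site(sx, k) for k in range(n))                 # transverse-field driver
          M = sum(a * site(sz, k) for k, a in enumerate(numbers))
          Hp = M @ M                                               # squared partition residue

          # H(s) commutes with the global flip X...X; the initial state lies in
          # its +1 eigenspace, so the relevant gap is computed in that sector.
          flip = reduce(np.kron, [sx] * n)
          w, v = np.linalg.eigh(flip)
          sector = v[:, w > 0]

          svals = np.linspace(0.0, 1.0, points)
          gaps = []
          for s in svals:
              h = sector.T @ ((1 - s) * Hb + s * Hp) @ sector
              e = np.linalg.eigvalsh(h)
              gaps.append(e[1] - e[0])
          i = int(np.argmin(gaps))
          return gaps[i], svals[i]

      print(minimal_gap([8, 7, 6, 5, 4]))  # (gap, location); hard instances shrink the gap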

  8. E-Governance and Service Oriented Computing Architecture Model

    NASA Astrophysics Data System (ADS)

    Tejasvee, Sanjay; Sarangdevot, S. S.

    2010-11-01

    E-Governance is the effective application of information and communication technology (ICT) in government processes to accomplish safe and reliable information lifecycle management. The information lifecycle involves processes such as capturing, preserving, manipulating, and delivering information. E-Governance aims to transform governance so that it is transparent, reliable, participatory, and accountable to citizens. The purpose of this paper is to propose an e-governance model focused on a Service Oriented Computing Architecture (SOCA) that combines information and services provided by the government, supports innovation, identifies the optimal way to deliver services to citizens, and can be implemented in a transparent and accountable manner. The paper also focuses on the E-government Service Manager as a key element of the service-oriented computing model, providing a dynamically extensible structural design in which every area or branch can introduce innovative services. At its heart, the paper examines a conceptual model that enables e-government communication for trade and business, citizens and government, and autonomous bodies.

  9. A scheme for efficient quantum computation with linear optics

    NASA Astrophysics Data System (ADS)

    Knill, E.; Laflamme, R.; Milburn, G. J.

    2001-01-01

    Quantum computers promise to increase greatly the efficiency of solving problems such as factoring large integers, combinatorial optimization and quantum physics simulation. One of the greatest challenges now is to implement the basic quantum-computational elements in a physical system and to demonstrate that they can be reliably and scalably controlled. One of the earliest proposals for quantum computation is based on implementing a quantum bit with two optical modes containing one photon. The proposal is appealing because of the ease with which photon interference can be observed. Until now, it suffered from the requirement for non-linear couplings between optical modes containing few photons. Here we show that efficient quantum computation is possible using only beam splitters, phase shifters, single photon sources and photo-detectors. Our methods exploit feedback from photo-detectors and are robust against errors from photon loss and detector inefficiency. The basic elements are accessible to experimental investigation with current technology.
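    On a single dual-rail qubit (one photon shared between two modes), the listed linear elements already implement arbitrary 2x2 unitaries on the mode amplitudes; that is the easy half of the construction, sketched below, while the heralded, measurement-assisted two-qubit gate that makes the scheme universal is not reproduced.

      import numpy as np

      # Dual-rail single-qubit optics: a beam splitter of angle theta and phase
      # shifters act on the two mode amplitudes as 2x2 unitaries, so their
      # products give any single-qubit gate (one common sign convention below).
      def phase_shifter(phi):
          return np.diag([1.0, np.exp(1j * phi)])

      def beam_splitter(theta):
          return np.array([[np.cos(theta), -np.sin(theta)],
                           [np.sin(theta),  np.cos(theta)]])

      # A photon injected into mode 0 leaves a 50/50 beam splitter in an equal
      # superposition of the two rails; adding phase shifters tunes the gate.
      U = phase_shifter(np.pi / 2) @ beam_splitter(np.pi / 4)
      print(np.round(U @ np.array([1.0, 0.0]), 3))
      print(np.allclose(U.conj().T @ U, np.eye(2)))   # unitary, as required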

  10. Gate sequence for continuous variable one-way quantum computation

    PubMed Central

    Su, Xiaolong; Hao, Shuhong; Deng, Xiaowei; Ma, Lingyu; Wang, Meihong; Jia, Xiaojun; Xie, Changde; Peng, Kunchi

    2013-01-01

    Measurement-based one-way quantum computation using cluster states as resources provides an efficient model to perform computation and information processing of quantum codes. Arbitrary Gaussian quantum computation can be implemented by sufficiently long single-mode and two-mode gate sequences. However, continuous variable gate sequences have not been realized so far due to an absence of cluster states larger than four submodes. Here we present the first continuous variable gate sequence consisting of a single-mode squeezing gate and a two-mode controlled-phase gate based on a six-mode cluster state. The quantum property of this gate sequence is confirmed by the fidelities and the quantum entanglement of two output modes, which depend on both the squeezing and controlled-phase gates. The experiment demonstrates the feasibility of implementing Gaussian quantum computation by means of accessible gate sequences.

  11. Demonstration of measurement-only blind quantum computing

    NASA Astrophysics Data System (ADS)

    Greganti, Chiara; Roehsner, Marie-Christine; Barz, Stefanie; Morimae, Tomoyuki; Walther, Philip

    2016-01-01

    Blind quantum computing allows for secure cloud networks of quasi-classical clients and a fully fledged quantum server. Recently, a new protocol has been proposed, which requires a client to perform only measurements. We demonstrate a proof-of-principle implementation of this measurement-only blind quantum computing, exploiting a photonic setup to generate four-qubit cluster states for computation and verification. Feasible technological requirements for the client and the device-independent blindness make this scheme very applicable for future secure quantum networks.

  12. Enhanced Fault-Tolerant Quantum Computing in d-Level Systems

    NASA Astrophysics Data System (ADS)

    Campbell, Earl T.

    2014-12-01

    Error-correcting codes protect quantum information and form the basis of fault-tolerant quantum computing. Leading proposals for fault-tolerant quantum computation require codes with an exceedingly rare property, a transversal non-Clifford gate. Codes with the desired property are presented for d-level qudit systems with prime d. The codes use n = d - 1 qudits and can detect up to ~d/3 errors. We quantify the performance of these codes for one approach to quantum computation known as magic-state distillation. Unlike prior work, we find performance is always enhanced by increasing d.

  13. Preparing ground states of quantum many-body systems on a quantum computer

    NASA Astrophysics Data System (ADS)

    Poulin, David

    2009-03-01

    The simulation of quantum many-body systems is a notoriously hard problem in condensed matter physics, but it could easily be handled by a quantum computer [4,1]. There is however one catch: while a quantum computer can naturally implement the dynamics of a quantum system, i.e. solve Schrödinger's equation, there was until now no general method to initialize the computer in a low-energy state of the simulated system. We present a quantum algorithm [5] that can prepare the ground state and thermal states of a quantum many-body system in a time proportional to the square-root of its Hilbert space dimension. This is the same scaling as required by the best known algorithm to prepare the ground state of a classical many-body system on a quantum computer [3,2]. This provides strong evidence that for a quantum computer, preparing the ground state of a quantum system is in the worst case no more difficult than preparing the ground state of a classical system. References: [1] D. Aharonov and A. Ta-Shma, Adiabatic quantum state generation and statistical zero knowledge, Proc. 35th Annual ACM Symp. on Theo. Comp. (2003), p. 20. [2] F. Barahona, On the computational complexity of Ising spin glass models, J. Phys. A: Math. Gen. 15 (1982), p. 3241. [3] C. H. Bennett, E. Bernstein, G. Brassard, and U. Vazirani, Strengths and weaknesses of quantum computing, SIAM J. Comput. 26 (1997), pp. 1510-1523, quant-ph/9701001. [4] S. Lloyd, Universal quantum simulators, Science 273 (1996), pp. 1073-1078. [5] D. Poulin and P. Wocjan, Preparing ground states of quantum many-body systems on a quantum computer, 2008, arXiv:0809.2705.

  14. Parallel Photonic Quantum Computation Assisted by Quantum Dots in One-Side Optical Microcavities

    NASA Astrophysics Data System (ADS)

    Luo, Ming-Xing; Wang, Xiaojun

    2014-07-01

    Universal quantum logic gates are important elements for a quantum computer. In contrast to previous constructions on one degree of freedom (DOF) of quantum systems, we investigate the possibility of parallel quantum computations dependent on two DOFs of photon systems. We construct deterministic hyper-controlled-not (hyper-CNOT) gates operating on the spatial-mode and the polarization DOFs of two-photon or one-photon systems by exploring the giant optical circular birefringence induced by quantum-dot spins in one-sided optical microcavities. These hyper-CNOT gates show that the quantum states of two DOFs can be viewed as independent qubits without requiring auxiliary DOFs in theory. This result can reduce the quantum resources by half for quantum applications with large qubit systems, such as the quantum Shor algorithm.

  15. Parallel photonic quantum computation assisted by quantum dots in one-side optical microcavities.

    PubMed

    Luo, Ming-Xing; Wang, Xiaojun

    2014-01-01

    Universal quantum logic gates are important elements for a quantum computer. In contrast to previous constructions on one degree of freedom (DOF) of quantum systems, we investigate the possibility of parallel quantum computations dependent on two DOFs of photon systems. We construct deterministic hyper-controlled-not (hyper-CNOT) gates operating on the spatial-mode and the polarization DOFs of two-photon or one-photon systems by exploring the giant optical circular birefringence induced by quantum-dot spins in one-sided optical microcavities. These hyper-CNOT gates show that the quantum states of two DOFs can be viewed as independent qubits without requiring auxiliary DOFs in theory. This result can reduce the quantum resources by half for quantum applications with large qubit systems, such as the quantum Shor algorithm. PMID:25030424

  16. The Brain Is both Neurocomputer and Quantum Computer

    ERIC Educational Resources Information Center

    Hameroff, Stuart R.

    2007-01-01

    In their article, "Is the Brain a Quantum Computer?", Litt, Eliasmith, Kroon, Weinstein, and Thagard (2006) criticize the Penrose-Hameroff "Orch OR" quantum computational model of consciousness, arguing instead for neurocomputation as an explanation for mental phenomena. Here I clarify and defend Orch OR, show how Orch OR and neurocomputation are…

  17. Scalable quantum computation via local control of only two qubits

    SciTech Connect

    Burgarth, Daniel; Maruyama, Koji; Murphy, Michael; Montangero, Simone; Calarco, Tommaso; Nori, Franco; Plenio, Martin B.

    2010-04-15

    We apply quantum control techniques to a long spin chain by acting only on two qubits at one of its ends, thereby implementing universal quantum computation by a combination of quantum gates on these qubits and indirect swap operations across the chain. It is shown that the control sequences can be computed and implemented efficiently. We discuss the application of these ideas to physical systems such as superconducting qubits in which full control of long chains is challenging.

  18. A quantum annealing architecture with all-to-all connectivity from local interactions.

    PubMed

    Lechner, Wolfgang; Hauke, Philipp; Zoller, Peter

    2015-10-01

    Quantum annealers are physical devices that aim at solving NP-complete optimization problems by exploiting quantum mechanics. The basic principle of quantum annealing is to encode the optimization problem in Ising interactions between quantum bits (qubits). A fundamental challenge in building a fully programmable quantum annealer is the competing requirements of fully controllable all-to-all connectivity and the quasi-locality of the interactions between physical qubits. We present a scalable architecture with full connectivity, which can be implemented with local interactions only. The input of the optimization problem is encoded in local fields acting on an extended set of physical qubits. The output is, in the spirit of topological quantum memories, redundantly encoded in the physical qubits, resulting in an intrinsic fault tolerance. Our model can be understood as a lattice gauge theory, where long-range interactions are mediated by gauge constraints. The architecture can be realized on various platforms with local controllability, including superconducting qubits, NV-centers, quantum dots, and atomic systems. PMID:26601316
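    A small sketch of the bookkeeping behind this encoding, read as a parity mapping (the helper below is hypothetical, and the geometric layout of the plaquette constraints on the 2D lattice is not reproduced): each all-to-all coupling J_ij becomes a local field on one of K = N(N-1)/2 physical qubits, and K - N + 1 local constraints enforce consistency of the parity variables.

      from itertools import combinations

      def parity_encode(J):
          """Map all-to-all Ising couplings J[(i, j)] to local fields on physical
          'parity' qubits, one per logical pair, and count the local constraints
          needed to keep the parities consistent (their lattice layout omitted)."""
          logical = sorted({i for pair in J for i in pair})
          N = len(logical)
          pairs = list(combinations(logical, 2))        # one physical qubit per pair
          fields = {pair: J.get(pair, 0.0) for pair in pairs}
          n_constraints = len(pairs) - N + 1
          return fields, n_constraints

      # Fully connected 4-spin problem -> 6 physical qubits, 3 local constraints
      J = {(0, 1): 1.0, (0, 2): -0.5, (0, 3): 0.2,
           (1, 2): 0.7, (1, 3): -1.0, (2, 3): 0.3}
      fields, n_constraints = parity_encode(J)
      print(len(fields), n_constraints)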

  19. A quantum annealing architecture with all-to-all connectivity from local interactions

    PubMed Central

    Lechner, Wolfgang; Hauke, Philipp; Zoller, Peter

    2015-01-01

    Quantum annealers are physical devices that aim at solving NP-complete optimization problems by exploiting quantum mechanics. The basic principle of quantum annealing is to encode the optimization problem in Ising interactions between quantum bits (qubits). A fundamental challenge in building a fully programmable quantum annealer is the competing requirements of fully controllable all-to-all connectivity and the quasi-locality of the interactions between physical qubits. We present a scalable architecture with full connectivity, which can be implemented with local interactions only. The input of the optimization problem is encoded in local fields acting on an extended set of physical qubits. The output is, in the spirit of topological quantum memories, redundantly encoded in the physical qubits, resulting in an intrinsic fault tolerance. Our model can be understood as a lattice gauge theory, where long-range interactions are mediated by gauge constraints. The architecture can be realized on various platforms with local controllability, including superconducting qubits, NV-centers, quantum dots, and atomic systems. PMID:26601316

  20. Computational Modeling: From Remote Sensing to Quantum Computing

    NASA Astrophysics Data System (ADS)

    Healy, Dennis

    2001-03-01

    Recent DARPA investments have contributed to significant advances in numerically sound and computationally efficient physics-based modeling, enabling a wide variety of applications of critical interest to the DoD and Industry. Specific examples may be found in a wide variety of applications ranging from the design and operation of advanced synthetic aperture radar systems to the virtual integrated prototyping of reactors and control loops for the manufacture of thin-film functional material systems. This talk will survey the development and application of well-conditioned fast operators for particular physical problems and their critical contributions to various real world problems. We'll conclude with an indication of how these methods may contribute to exploring the revolutionary potential of quantum information theory.

  1. Large scale solution assembly of quantum dot-gold nanorod architectures with plasmon enhanced fluorescence.

    PubMed

    Nepal, Dhriti; Drummy, Lawrence F; Biswas, Sushmita; Park, Kyoungweon; Vaia, Richard A

    2013-10-22

    Tailoring the efficiency of fluorescent emission via plasmon-exciton coupling requires structure control on a nanometer length scale using a high-yield fabrication route not achievable with current lithographic techniques. These systems can be fabricated using a bottom-up approach if problems of colloidal stability and low yield can be addressed. We report progress on this pathway with the assembly of quantum dots (emitter) on gold nanorods (plasmonic units) with precisely controlled spacing, quantum dot/nanorod ratio, and long-term colloidal stability, which enables the purification and encapsulation of the assembled architecture in a protective silica shell. Overall, such controllability with nanometer precision allows one to synthesize stable, complex architectures at large volume in a rational and controllable manner. The assembled architectures demonstrate photoluminescent enhancement (5×) useful for applications ranging from biological sensing to advanced optical communication. PMID:24004164

  2. Program partitioning and scheduling for NUMA computer architectures

    SciTech Connect

    Wolski, R.M.

    1994-03-01

    To effect the parallel execution of a program on a multiprocessor, each of the program's constituent computations must be assigned to a processing resource within the multiprocessor. The problem of making this assignment so that execution time is minimized (known as the mapping problem) has been shown to be NP-complete. However, heuristics based on the performance characteristics of the target multiprocessor can yield execution times that approach the minimum possible. The mapping problem can be divided into the problem of partitioning the computations into sequential threads, and the problem of scheduling those threads on the processors of the target system. This dissertation presents a logical framework and a set of heuristics that operate within the framework for the automatic partitioning and scheduling of programs at compile-time. The framework is based on the memory-node execution model which correctly captures the interaction between computations, processors, and the communication resources within a multiprocessor. The CP and HEF heuristics manipulate the features of the memory-node model to produce efficient program mappings. The effectiveness of the partitioning and scheduling techniques is investigated for Non-uniform Memory Access (NUMA) architecture types. To test the versatility of the approach, results are presented both for processors implementing strict execution semantics, and non-strict load/store semantics popular with RISC systems. The partitioner and scheduler are also used to investigate the possible advantages of multithreading (using either hardware or software), and the effectiveness of massively parallel systems, within a scientific programming context.

  3. Demonstration of a small programmable quantum computer with atomic qubits.

    PubMed

    Debnath, S; Linke, N M; Figgatt, C; Landsman, K A; Wright, K; Monroe, C

    2016-08-01

    Quantum computers can solve certain problems more efficiently than any possible conventional computer. Small quantum algorithms have been demonstrated on multiple quantum computing platforms, many specifically tailored in hardware to implement a particular algorithm or execute a limited number of computational paths. Here we demonstrate a five-qubit trapped-ion quantum computer that can be programmed in software to implement arbitrary quantum algorithms by executing any sequence of universal quantum logic gates. We compile algorithms into a fully connected set of gate operations that are native to the hardware and have a mean fidelity of 98 per cent. Reconfiguring these gate sequences provides the flexibility to implement a variety of algorithms without altering the hardware. As examples, we implement the Deutsch-Jozsa and Bernstein-Vazirani algorithms with average success rates of 95 and 90 per cent, respectively. We also perform a coherent quantum Fourier transform on five trapped-ion qubits for phase estimation and period finding with average fidelities of 62 and 84 per cent, respectively. This small quantum computer can be scaled to larger numbers of qubits within a single register, and can be further expanded by connecting several such modules through ion shuttling or photonic quantum channels. PMID:27488798

  4. Demonstration of a small programmable quantum computer with atomic qubits

    NASA Astrophysics Data System (ADS)

    Debnath, S.; Linke, N. M.; Figgatt, C.; Landsman, K. A.; Wright, K.; Monroe, C.

    2016-08-01

    Quantum computers can solve certain problems more efficiently than any possible conventional computer. Small quantum algorithms have been demonstrated on multiple quantum computing platforms, many specifically tailored in hardware to implement a particular algorithm or execute a limited number of computational paths. Here we demonstrate a five-qubit trapped-ion quantum computer that can be programmed in software to implement arbitrary quantum algorithms by executing any sequence of universal quantum logic gates. We compile algorithms into a fully connected set of gate operations that are native to the hardware and have a mean fidelity of 98 per cent. Reconfiguring these gate sequences provides the flexibility to implement a variety of algorithms without altering the hardware. As examples, we implement the Deutsch–Jozsa and Bernstein–Vazirani algorithms with average success rates of 95 and 90 per cent, respectively. We also perform a coherent quantum Fourier transform on five trapped-ion qubits for phase estimation and period finding with average fidelities of 62 and 84 per cent, respectively. This small quantum computer can be scaled to larger numbers of qubits within a single register, and can be further expanded by connecting several such modules through ion shuttling or photonic quantum channels.
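    For scale, the algorithms compiled onto this five-qubit register are easy to simulate classically. The sketch below (a plain state-vector simulation, not the trapped-ion gate compilation) runs Bernstein-Vazirani on five qubits with a phase oracle and recovers the hidden string from a single query.

      import numpy as np
      from functools import reduce

      def bernstein_vazirani(secret):
          """Recover a hidden bit string with one call to a phase oracle:
          H on all qubits, oracle phase (-1)^(s.x), H on all qubits."""
          n = len(secret)
          H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
          Hn = reduce(np.kron, [H1] * n)
          s_int = int("".join(map(str, secret)), 2)
          phases = np.array([(-1) ** bin(x & s_int).count("1") for x in range(2 ** n)])
          psi = np.zeros(2 ** n)
          psi[0] = 1.0                       # start in |00...0>
          psi = Hn @ (phases * (Hn @ psi))   # H^n, oracle, H^n
          return format(int(np.argmax(np.abs(psi))), f"0{n}b")

      print(bernstein_vazirani([1, 0, 1, 1, 0]))   # prints '10110' with certainty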

  5. Charge-transfer dynamics in multilayered PbS and PbSe quantum dot architectures

    SciTech Connect

    Xu, F.; Ma, X.; Haughn, C. R.; Doty, M. F.; Cloutier, S. G.

    2014-02-03

    We demonstrate control of the charge transfer process in PbS and PbSe quantum dot assemblies. We first demonstrate efficient charge transfer from donor quantum dots to acceptor quantum dots in a multi-layer PbSe cascade structure. Then, we assemble type-I and type-II heterostructures using both PbS and PbSe quantum dots via careful control of the band alignment. In type-I structures, photo-generated carriers are transferred and localized in the smaller bandgap (acceptor) quantum dots, resulting in a significant luminescence enhancement. In contrast, a significant luminescence quenching and shorter emission lifetime confirms an efficient separation of photo-generated carriers in the type-II architecture.

  6. Popescu-Rohrlich correlations imply efficient instantaneous nonlocal quantum computation

    NASA Astrophysics Data System (ADS)

    Broadbent, Anne

    2016-08-01

    In instantaneous nonlocal quantum computation, two parties cooperate in order to perform a quantum computation on their joint inputs, while being restricted to a single round of simultaneous communication. Previous results showed that instantaneous nonlocal quantum computation is possible, at the cost of an exponential amount of prior shared entanglement (in the size of the input). Here, we show that a linear amount of entanglement (in the size of the computation) suffices, as long as the parties share nonlocal correlations as given by the Popescu-Rohrlich box. This means that communication is not required for efficient instantaneous nonlocal quantum computation. Exploiting the well-known relation to position-based cryptography, our result also implies the impossibility of secure position-based cryptography against adversaries with nonsignaling correlations. Furthermore, our construction establishes a quantum analog of the classical communication complexity collapse under nonsignaling correlations.
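    The nonsignaling resource assumed here has a one-line operational description: on inputs a, b the box returns locally random bits x, y with x XOR y = a AND b. The sketch below samples those correlations from a single central routine, which reproduces the statistics but, of course, not the spacelike-separated, nonsignaling character of a genuine box.

      import random

      def pr_box(a, b):
          """Outputs (x, y) of a Popescu-Rohrlich box for inputs a, b in {0, 1}:
          each output is marginally uniform, yet x XOR y always equals a AND b."""
          x = random.randint(0, 1)
          y = x ^ (a & b)
          return x, y

      # The defining correlation holds on every input pair
      for a in (0, 1):
          for b in (0, 1):
              x, y = pr_box(a, b)
              assert (x ^ y) == (a & b)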

  7. Cluster State Quantum Computation and the Repeat-Until Scheme

    NASA Astrophysics Data System (ADS)

    Kwek, L. C.

    Cluster-state computation, or one-way quantum computation (1WQC), relies on an initially highly entangled state (called a cluster state) and an appropriate sequence of single-qubit measurements along different directions, together with feed-forward based on the measurement results, to realize a quantum computation process. The final result of the computation is obtained by measuring the last remaining qubits in the computational basis. In this short tutorial on cluster-state quantum computation, we describe the basic ideas of a cluster state and show how a single-qubit operation can be performed on it. Recently, we proposed a repeat-until-success (RUS) scheme that could effectively be used to realize a one-way quantum computer on a hybrid system of photons and atoms. We will briefly describe this RUS scheme and show how it can be used to entangle two distant stationary qubits.
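    The single-qubit step mentioned in the tutorial can be verified numerically: entangle the input with a |+> ancilla via a CZ gate and measure the input in a rotated basis; the ancilla is then left in X^m H Rz(theta) applied to the input, where m is the measurement outcome that feed-forward corrects. A small NumPy sketch follows (one common convention for the measurement basis; signs differ between papers).

      import numpy as np

      H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
      X = np.array([[0, 1], [1, 0]])
      CZ = np.diag([1, 1, 1, -1])

      def one_way_step(psi, theta, outcome):
          """Measure qubit 1 of the two-qubit cluster CZ(|psi>|+>) in the basis
          (|0> + (-1)^m e^{-i theta}|1>)/sqrt(2) and return the ancilla state."""
          plus = np.array([1, 1]) / np.sqrt(2)
          state = CZ @ np.kron(psi, plus)
          bra = np.array([1, (-1) ** outcome * np.exp(-1j * theta)]).conj() / np.sqrt(2)
          out = np.kron(bra, np.eye(2)) @ state          # project qubit 1
          return out / np.linalg.norm(out)

      theta = 0.7
      psi = np.array([0.6, 0.8j])                          # arbitrary input state
      target = H @ np.diag([1, np.exp(1j * theta)]) @ psi  # H * Rz(theta) on the input
      for m in (0, 1):
          out = np.linalg.matrix_power(X, m) @ one_way_step(psi, theta, m)
          print(m, abs(np.vdot(out, target)))              # ~1.0 after the X^m correction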

  8. Secure Multiparty Quantum Computation for Summation and Multiplication

    NASA Astrophysics Data System (ADS)

    Shi, Run-Hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun

    2016-01-01

    As a fundamental primitive, Secure Multiparty Summation and Multiplication can be used to build complex secure protocols for other multiparty computations, especially numerical computations. However, there is still a lack of systematic and efficient quantum methods to compute Secure Multiparty Summation and Multiplication. In this paper, we present a novel and efficient quantum approach to securely compute the summation and multiplication of multiparty private inputs, respectively. Compared to classical solutions, our proposed approach can ensure unconditional security and perfect privacy protection based on the physical principles of quantum mechanics.

  9. The Power of Qutrit Logic for Quantum Computation

    NASA Astrophysics Data System (ADS)

    Luo, Ming-Xing; Ma, Song-Ya; Chen, Xiu-Bo; Yang, Yi-Xian

    2013-08-01

    The critical advantages of quantum computation come from running in parallel, a benefit that previous multi-level extensions do not provide and that is exactly our purpose here. In this paper, general quantum computation over qutrit subsystems is reduced to qutrit gates and their controlled operations. The extension is parallelizable and integrable, with the same construction independent of the number of qutrits. Qutrit swapping, the basic operation used for control, can be integrated into quantum computers with present physical techniques. Our generalization does not enlarge the system space and is feasible for universal computation.

  10. Secure Multiparty Quantum Computation for Summation and Multiplication

    PubMed Central

    Shi, Run-hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun

    2016-01-01

    As a fundamental primitive, Secure Multiparty Summation and Multiplication can be used to build complex secure protocols for other multiparty computations, especially numerical computations. However, there is still a lack of systematic and efficient quantum methods to compute Secure Multiparty Summation and Multiplication. In this paper, we present a novel and efficient quantum approach to securely compute the summation and multiplication of multiparty private inputs, respectively. Compared to classical solutions, our proposed approach can ensure unconditional security and perfect privacy protection based on the physical principles of quantum mechanics. PMID:26792197

  11. Secure Multiparty Quantum Computation for Summation and Multiplication.

    PubMed

    Shi, Run-hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun

    2016-01-01

    As a fundamental primitive, Secure Multiparty Summation and Multiplication can be used to build complex secure protocols for other multiparty computations, especially numerical computations. However, there is still a lack of systematic and efficient quantum methods to compute Secure Multiparty Summation and Multiplication. In this paper, we present a novel and efficient quantum approach to securely compute the summation and multiplication of multiparty private inputs, respectively. Compared to classical solutions, our proposed approach can ensure unconditional security and perfect privacy protection based on the physical principles of quantum mechanics. PMID:26792197

  12. Computable measure of total quantum correlations of multipartite systems

    NASA Astrophysics Data System (ADS)

    Behdani, Javad; Akhtarshenas, Seyed Javad; Sarbishaei, Mohsen

    2016-04-01

    Quantum discord as a measure of quantum correlations cannot be easily computed for most density operators. In this paper, we present a measure of the total quantum correlations that is operationally simple and can be computed effectively for an arbitrary mixed state of a multipartite system. The measure is based on the coherence vector of the party whose quantumness is investigated as well as the correlation matrix of this party with the remainder of the system. Being able to detect the quantumness of multipartite systems, such as detecting the quantum critical points in spin chains, along with the computability of the measure, makes it a useful indicator for cases that lie outside the scope of the other known measures.
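    The two ingredients named here are straightforward to extract for a two-qubit state: the coherence (Bloch) vector of the party under scrutiny and its correlation matrix with the rest of the system. The NumPy sketch below computes both; the specific functional the authors build from them is not reproduced.

      import numpy as np

      # Pauli matrices for a single qubit
      pauli = [np.array([[0, 1], [1, 0]], dtype=complex),
               np.array([[0, -1j], [1j, 0]]),
               np.array([[1, 0], [0, -1]], dtype=complex)]
      I2 = np.eye(2)

      def coherence_and_correlation(rho):
          """Bloch vector of party A, x_i = Tr[rho (sigma_i x I)], and the
          correlation matrix T_ij = Tr[rho (sigma_i x sigma_j)] for a 2-qubit rho."""
          x = np.array([np.trace(rho @ np.kron(s, I2)).real for s in pauli])
          T = np.array([[np.trace(rho @ np.kron(si, sj)).real for sj in pauli]
                        for si in pauli])
          return x, T

      # Example: Werner state p|Phi+><Phi+| + (1-p) I/4
      bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
      p = 0.6
      rho = p * np.outer(bell, bell.conj()) + (1 - p) * np.eye(4) / 4
      x, T = coherence_and_correlation(rho)
      print(x)           # vanishing local Bloch vector
      print(np.diag(T))  # approximately [p, -p, p] for this Bell state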

  13. Noisy one-way quantum computations: The role of correlations

    SciTech Connect

    Chaves, Rafael; Melo, Fernando de

    2011-08-15

    A scheme to evaluate computation fidelities within the one-way model is developed and explored to understand the role of correlations in the quality of noisy quantum computations. The formalism is promptly applied to many computation instances and unveils that a higher amount of entanglement in the noisy resource state does not necessarily imply a better computation.

  14. Quantum computing with acceptor spins in silicon

    NASA Astrophysics Data System (ADS)

    Salfi, Joe; Tong, Mengyang; Rogge, Sven; Culcer, Dimitrie

    2016-06-01

    The states of a boron acceptor near a Si/SiO2 interface, which bind two low-energy Kramers pairs, have exceptional properties for encoding quantum information and, with the aid of strain, both heavy hole and light hole-based spin qubits can be designed. Whereas a light-hole spin qubit was introduced recently (arXiv:1508.04259), here we present analytical and numerical results proving that a heavy-hole spin qubit can be reliably initialised, rotated and entangled by electrical means alone. This is due to strong Rashba-like spin–orbit interaction terms enabled by the interface inversion asymmetry. Single qubit rotations rely on electric-dipole spin resonance (EDSR), which is strongly enhanced by interface-induced spin–orbit terms. Entanglement can be accomplished by Coulomb exchange, coupling to a resonator, or spin–orbit induced dipole–dipole interactions. By analysing the qubit sensitivity to charge noise, we demonstrate that interface-induced spin–orbit terms are responsible for sweet spots in the dephasing time T2* as a function of the top gate electric field, which are close to maxima in the EDSR strength, where the EDSR gate has high fidelity. We show that both qubits can be described using the same starting Hamiltonian, and by comparing their properties we show that the complex interplay of bulk and interface-induced spin–orbit terms allows a high degree of electrical control and makes acceptors potential candidates for scalable quantum computation in Si.

  15. Architectures of Kepler Planet Systems with Approximate Bayesian Computation

    NASA Astrophysics Data System (ADS)

    Morehead, Robert C.; Ford, Eric B.

    2015-12-01

    The distribution of period normalized transit duration ratios among Kepler’s multiple transiting planet systems constrains the distributions of mutual orbital inclinations and orbital eccentricities. However, degeneracies in these parameters tied to the underlying number of planets in these systems complicate their interpretation. To untangle the true architecture of planet systems, the mutual inclination, eccentricity, and underlying planet number distributions must be considered simultaneously. The complexities of target selection, transit probability, detection biases, vetting, and follow-up observations make it impractical to write an explicit likelihood function. Approximate Bayesian computation (ABC) offers an intriguing path forward. In its simplest form, ABC generates a sample of trial population parameters from a prior distribution to produce synthetic datasets via a physically-motivated forward model. Samples are then accepted or rejected based on how close they come to reproducing the actual observed dataset to some tolerance. The accepted samples form a robust and useful approximation of the true posterior distribution of the underlying population parameters. We build on the considerable progress from the field of statistics to develop sequential algorithms for performing ABC in an efficient and flexible manner. We demonstrate the utility of ABC in exoplanet populations and present new constraints on the distributions of mutual orbital inclinations, eccentricities, and the relative number of short-period planets per star. We conclude with a discussion of the implications for other planet occurrence rate calculations, such as eta-Earth.
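    The basic ABC loop described here is short to write down. The sketch below uses a toy Gaussian forward model and plain rejection sampling (the sequential refinements and the exoplanet forward model of the paper are not reproduced): draw parameters from the prior, simulate, and keep draws whose summary statistics fall within a tolerance of the observed summaries.

      import numpy as np

      rng = np.random.default_rng(1)

      def forward_model(params, n=500):
          """Toy stand-in for a physically motivated forward model."""
          mu, sigma = params
          return rng.normal(mu, sigma, size=n)

      def summary(data):
          return np.array([np.mean(data), np.std(data)])

      def abc_rejection(observed, n_draws=20000, tol=0.1):
          """Keep prior draws whose simulated summaries land within `tol` of the data's."""
          obs = summary(observed)
          accepted = []
          for _ in range(n_draws):
              theta = np.array([rng.uniform(-2, 2), rng.uniform(0.1, 3)])  # prior draw
              if np.linalg.norm(summary(forward_model(theta)) - obs) < tol:
                  accepted.append(theta)
          return np.array(accepted)

      observed = rng.normal(0.5, 1.2, size=500)
      posterior = abc_rejection(observed)
      print(len(posterior), posterior.mean(axis=0))  # approximate posterior mean of (mu, sigma)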

  16. Applying a cloud computing approach to storage architectures for spacecraft

    NASA Astrophysics Data System (ADS)

    Baldor, Sue A.; Quiroz, Carlos; Wood, Paul

    As sensor technologies, processor speeds, and memory densities increase, spacecraft command, control, processing, and data storage systems have grown in complexity to take advantage of these improvements and expand the possible missions of spacecraft. Spacecraft systems engineers are increasingly looking for novel ways to address this growth in complexity and mitigate associated risks. Looking to conventional computing, many solutions have been executed to solve both the problem of complexity and heterogeneity in systems. In particular, the cloud-based paradigm provides a solution for distributing applications and storage capabilities across multiple platforms. In this paper, we propose utilizing a cloud-like architecture to provide a scalable mechanism for providing mass storage in spacecraft networks that can be reused on multiple spacecraft systems. By presenting a consistent interface to applications and devices that request data to be stored, complex systems designed by multiple organizations may be more readily integrated. Behind the abstraction, the cloud storage capability would manage wear-leveling, power consumption, and other attributes related to the physical memory devices, critical components in any mass storage solution for spacecraft. Our approach employs SpaceWire networks and SpaceWire-capable devices, although the concept could easily be extended to non-heterogeneous networks consisting of multiple spacecraft and potentially the ground segment.

  17. PHENIX On-Line Distributed Computing System Architecture

    SciTech Connect

    Desmond, Edmond; Haggerty, John; Kehayias, Hyon Joo; Purschke, Martin L.; Witzig, Chris; Kozlowski, Thomas

    1997-05-22

    PHENIX is one of the two large experiments at the Relativistic Heavy Ion Collider (RHIC) currently under construction at Brookhaven National Laboratory. The detector consists of 11 sub-detectors that are further subdivided into 29 units ("granules") that can be operated independently, which includes simultaneous data taking with independent data streams and independent triggers. The detector has 250,000 channels and is read out by front end modules, where the data is buffered in a pipeline while awaiting the level trigger decision. Zero suppression and calibration is done after the level accept in custom built data collection modules (DCMs) with DSPs before the data is sent to an event builder (design throughput of 2 Gb/sec) and higher level triggers. The On-line Computing Systems Group (ONCS) has two responsibilities. Firstly it is responsible for receiving the data from the event builder, routing it through a network of workstations to consumer processes and archiving it at a data rate of 20 MB/sec. Secondly it is also responsible for the overall configuration, control and operation of the detector and data acquisition chain, which comprises the software integration for several thousand custom built hardware modules. The software must furthermore support the independent operation of the above mentioned granules, which includes the coordination of processes that run in 60-100 VME processors and workstations. ONCS has adapted the Shlaer-Mellor Object Oriented Methodology for the design of the top layer software. CORBA is used as communication layer between the distributed objects, which are implemented as asynchronous finite state machines. We will give an overview of the PHENIX online system with the main focus on the system architecture, software components and integration tasks of the On-line Computing group ONCS and report on the status of the current prototypes.

  18. Optimizing qubit resources for quantum chemistry simulations in second quantization on a quantum computer

    NASA Astrophysics Data System (ADS)

    Moll, Nikolaj; Fuhrer, Andreas; Staar, Peter; Tavernelli, Ivano

    2016-07-01

    Quantum chemistry simulations on a quantum computer suffer from the overhead needed for encoding the Fermionic problem in a system of qubits. By exploiting the block diagonality of a Fermionic Hamiltonian, we show that the number of required qubits can be reduced while the number of terms in the Hamiltonian will increase. All operations for this reduction can be performed in operator space. The scheme is conceived as a pre-computational step that would be performed prior to the actual quantum simulation. We apply this scheme to reduce the number of qubits necessary to simulate both the Hamiltonian of the two-site Fermi–Hubbard model and the hydrogen molecule. Both quantum systems can then be simulated with a two-qubit quantum computer. Despite the increase in the number of Hamiltonian terms, the scheme still remains a useful tool to reduce the dimensionality of specific quantum systems for quantum simulators with a limited number of resources.

  19. Repeat-until-success quantum computing using stationary and flying qubits

    NASA Astrophysics Data System (ADS)

    Lim, Yuan Liang; Barrett, Sean D.; Beige, Almut; Kok, Pieter; Kwek, Leong Chuan

    2006-01-01

    We introduce an architecture for robust and scalable quantum computation using both stationary qubits (e.g., single photon sources made out of trapped atoms, molecules, ions, quantum dots, or defect centers in solids) and flying qubits (e.g., photons). Our scheme solves some of the most pressing problems in existing nonhybrid proposals, which include the difficulty of scaling conventional stationary qubit approaches, and the lack of practical means for storing single photons in linear optics setups. We combine elements of two previous proposals for distributed quantum computing, namely the efficient photon-loss tolerant build up of cluster states by Barrett and Kok [Phys. Rev. A 71, 060310(R) (2005)] with the idea of repeat-until-success (RUS) quantum computing by Lim [Phys. Rev. Lett. 95, 030505 (2005)]. This idea can be used to perform eventually deterministic two qubit logic gates on spatially separated stationary qubits via photon pair measurements. Under nonideal conditions, where photon loss is a possibility, the resulting gates can still be used to build graph states for one-way quantum computing. In this paper, we describe the RUS method, present possible experimental realizations, and analyze the generation of graph states.

  20. Digitized adiabatic quantum computing with a superconducting circuit.

    PubMed

    Barends, R; Shabani, A; Lamata, L; Kelly, J; Mezzacapo, A; Las Heras, U; Babbush, R; Fowler, A G; Campbell, B; Chen, Yu; Chen, Z; Chiaro, B; Dunsworth, A; Jeffrey, E; Lucero, E; Megrant, A; Mutus, J Y; Neeley, M; Neill, C; O'Malley, P J J; Quintana, C; Roushan, P; Sank, D; Vainsencher, A; Wenner, J; White, T C; Solano, E; Neven, H; Martinis, John M

    2016-06-01

    Quantum mechanics can help to solve complex problems in physics and chemistry, provided they can be programmed in a physical device. In adiabatic quantum computing, a system is slowly evolved from the ground state of a simple initial Hamiltonian to a final Hamiltonian that encodes a computational problem. The appeal of this approach lies in the combination of simplicity and generality; in principle, any problem can be encoded. In practice, applications are restricted by limited connectivity, available interactions and noise. A complementary approach is digital quantum computing, which enables the construction of arbitrary interactions and is compatible with error correction, but uses quantum circuit algorithms that are problem-specific. Here we combine the advantages of both approaches by implementing digitized adiabatic quantum computing in a superconducting system. We tomographically probe the system during the digitized evolution and explore the scaling of errors with system size. We then let the full system find the solution to random instances of the one-dimensional Ising problem as well as problem Hamiltonians that involve more complex interactions. This digital quantum simulation of the adiabatic algorithm consists of up to nine qubits and up to 1,000 quantum logic gates. The demonstration of digitized adiabatic quantum computing in the solid state opens a path to synthesizing long-range correlations and solving complex computational problems. When combined with fault-tolerance, our approach becomes a general-purpose algorithm that is scalable. PMID:27279216

  1. Digitized adiabatic quantum computing with a superconducting circuit

    NASA Astrophysics Data System (ADS)

    Barends, R.; Shabani, A.; Lamata, L.; Kelly, J.; Mezzacapo, A.; Heras, U. Las; Babbush, R.; Fowler, A. G.; Campbell, B.; Chen, Yu; Chen, Z.; Chiaro, B.; Dunsworth, A.; Jeffrey, E.; Lucero, E.; Megrant, A.; Mutus, J. Y.; Neeley, M.; Neill, C.; O’Malley, P. J. J.; Quintana, C.; Roushan, P.; Sank, D.; Vainsencher, A.; Wenner, J.; White, T. C.; Solano, E.; Neven, H.; Martinis, John M.

    2016-06-01

    Quantum mechanics can help to solve complex problems in physics and chemistry, provided they can be programmed in a physical device. In adiabatic quantum computing, a system is slowly evolved from the ground state of a simple initial Hamiltonian to a final Hamiltonian that encodes a computational problem. The appeal of this approach lies in the combination of simplicity and generality; in principle, any problem can be encoded. In practice, applications are restricted by limited connectivity, available interactions and noise. A complementary approach is digital quantum computing, which enables the construction of arbitrary interactions and is compatible with error correction, but uses quantum circuit algorithms that are problem-specific. Here we combine the advantages of both approaches by implementing digitized adiabatic quantum computing in a superconducting system. We tomographically probe the system during the digitized evolution and explore the scaling of errors with system size. We then let the full system find the solution to random instances of the one-dimensional Ising problem as well as problem Hamiltonians that involve more complex interactions. This digital quantum simulation of the adiabatic algorithm consists of up to nine qubits and up to 1,000 quantum logic gates. The demonstration of digitized adiabatic quantum computing in the solid state opens a path to synthesizing long-range correlations and solving complex computational problems. When combined with fault-tolerance, our approach becomes a general-purpose algorithm that is scalable.
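    The digitization idea can be illustrated with a small classical simulation (a NumPy sketch, not the superconducting-circuit experiment): break the adiabatic interpolation into discrete steps, apply each step as a first-order Trotter product of the driver and problem terms, and check how much population ends up in the problem Hamiltonian's ground space.

      import numpy as np
      from functools import reduce
      from scipy.linalg import expm

      n = 4
      I2 = np.eye(2)
      X = np.array([[0.0, 1.0], [1.0, 0.0]])
      Z = np.diag([1.0, -1.0])
      site = lambda op, k: reduce(np.kron, [op if i == k else I2 for i in range(n)])

      Hb = -sum(site(X, k) for k in range(n))                       # transverse-field driver
      Hp = -sum(site(Z, k) @ site(Z, k + 1) for k in range(n - 1))  # 1D Ising problem

      steps, T = 200, 20.0
      dt = T / steps
      psi = np.ones(2 ** n) / np.sqrt(2 ** n)        # ground state of Hb
      for j in range(steps):
          s = (j + 0.5) / steps                      # interpolation parameter
          # one digitized step: exp(-i dt s Hp) * exp(-i dt (1-s) Hb)
          psi = expm(-1j * dt * s * Hp) @ (expm(-1j * dt * (1 - s) * Hb) @ psi)

      evals, evecs = np.linalg.eigh(Hp)
      ground_space = evecs[:, np.isclose(evals, evals[0])]
      print(np.linalg.norm(ground_space.conj().T @ psi) ** 2)  # final ground-space population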

  2. Holonomic quantum computation on microwave photons with all resonant interactions

    NASA Astrophysics Data System (ADS)

    Dong, Ping; Yu, Long-Bao; Zhou, Jian

    2016-08-01

    The intrinsic difficulties of holonomic quantum computation on superconducting circuits originate from the use of three levels in superconducting transmon qubits and the complicated dispersive interaction between them. Due to the limited anharmonicity of transmon qubits, the experimental realization seems to be very challenging. However, with recent experimental progress, coherent control over microwave photons in superconducting circuit cavities has been well achieved, and thus provides a promising platform for quantum information processing with photonic qubits. Here, with all-resonant inter-cavity photon–photon interactions, we propose a scheme for implementing scalable holonomic quantum computation on a circuit QED lattice. In our proposal, three cavities, connected by a SQUID, are used to encode a logical qubit. By tuning the inter-cavity photon–photon interaction, we can construct all the holonomies needed for universal quantum computation in a non-adiabatic way. Therefore, our scheme presents a promising alternative for robust quantum computation with microwave photons.

  3. Universal quantum computation with a nonlinear oscillator network

    NASA Astrophysics Data System (ADS)

    Goto, Hayato

    2016-05-01

    We theoretically show that a nonlinear oscillator network with controllable parameters can be used for universal quantum computation. The initialization is achieved by a quantum-mechanical bifurcation based on quantum adiabatic evolution, which yields a Schrödinger cat state. All the elementary quantum gates are also achieved by quantum adiabatic evolution, in which dynamical phases accompanying the adiabatic evolutions are controlled by the system parameters. Numerical simulation results indicate that high gate fidelities can be achieved, where no dissipation is assumed.

  4. Quantum Nondeterministic Computation based on Statistics Superselection Rules

    NASA Astrophysics Data System (ADS)

    Castagnoli, G.

    Quantum states which obey certain symmetry superselection rules under identical particles permutation can be interpreted as computational states satisfying corresponding Boolean predicates. Given the NP-complete problem of testing the satisfiability of a generic Boolean predicate P, we investigate the possibility of achieving quantum nondeterministic computation by deriving, from P, a physical situation in which the computational states satisfy P iff they satisfy a special fermion statistics.

  5. Real-time dynamics of lattice gauge theories with a few-qubit quantum computer.

    PubMed

    Martinez, Esteban A; Muschik, Christine A; Schindler, Philipp; Nigg, Daniel; Erhard, Alexander; Heyl, Markus; Hauke, Philipp; Dalmonte, Marcello; Monz, Thomas; Zoller, Peter; Blatt, Rainer

    2016-06-23

    Gauge theories are fundamental to our understanding of interactions between the elementary constituents of matter as mediated by gauge bosons. However, computing the real-time dynamics in gauge theories is a notorious challenge for classical computational methods. This has recently stimulated theoretical effort, using Feynman's idea of a quantum simulator, to devise schemes for simulating such theories on engineered quantum-mechanical devices, with the difficulty that gauge invariance and the associated local conservation laws (Gauss laws) need to be implemented. Here we report the experimental demonstration of a digital quantum simulation of a lattice gauge theory, by realizing (1 + 1)-dimensional quantum electrodynamics (the Schwinger model) on a few-qubit trapped-ion quantum computer. We are interested in the real-time evolution of the Schwinger mechanism, describing the instability of the bare vacuum due to quantum fluctuations, which manifests itself in the spontaneous creation of electron-positron pairs. To make efficient use of our quantum resources, we map the original problem to a spin model by eliminating the gauge fields in favour of exotic long-range interactions, which can be directly and efficiently implemented on an ion trap architecture. We explore the Schwinger mechanism of particle-antiparticle generation by monitoring the mass production and the vacuum persistence amplitude. Moreover, we track the real-time evolution of entanglement in the system, which illustrates how particle creation and entanglement generation are directly related. Our work represents a first step towards quantum simulation of high-energy theories using atomic physics experiments-the long-term intention is to extend this approach to real-time quantum simulations of non-Abelian lattice gauge theories. PMID:27337339

  6. Real-time dynamics of lattice gauge theories with a few-qubit quantum computer

    NASA Astrophysics Data System (ADS)

    Martinez, Esteban A.; Muschik, Christine A.; Schindler, Philipp; Nigg, Daniel; Erhard, Alexander; Heyl, Markus; Hauke, Philipp; Dalmonte, Marcello; Monz, Thomas; Zoller, Peter; Blatt, Rainer

    2016-06-01

    Gauge theories are fundamental to our understanding of interactions between the elementary constituents of matter as mediated by gauge bosons. However, computing the real-time dynamics in gauge theories is a notorious challenge for classical computational methods. This has recently stimulated theoretical effort, using Feynman’s idea of a quantum simulator, to devise schemes for simulating such theories on engineered quantum-mechanical devices, with the difficulty that gauge invariance and the associated local conservation laws (Gauss laws) need to be implemented. Here we report the experimental demonstration of a digital quantum simulation of a lattice gauge theory, by realizing (1 + 1)-dimensional quantum electrodynamics (the Schwinger model) on a few-qubit trapped-ion quantum computer. We are interested in the real-time evolution of the Schwinger mechanism, describing the instability of the bare vacuum due to quantum fluctuations, which manifests itself in the spontaneous creation of electron–positron pairs. To make efficient use of our quantum resources, we map the original problem to a spin model by eliminating the gauge fields in favour of exotic long-range interactions, which can be directly and efficiently implemented on an ion trap architecture. We explore the Schwinger mechanism of particle–antiparticle generation by monitoring the mass production and the vacuum persistence amplitude. Moreover, we track the real-time evolution of entanglement in the system, which illustrates how particle creation and entanglement generation are directly related. Our work represents a first step towards quantum simulation of high-energy theories using atomic physics experiments—the long-term intention is to extend this approach to real-time quantum simulations of non-Abelian lattice gauge theories.

  7. Development of The Fundamental Components of A Superconducting Qubit Quantum Computer

    NASA Astrophysics Data System (ADS)

    Bialczak, Radoslaw Radek Cezary

    Superconducting qubits have emerged as a promising architecture for building a scalable quantum computer. In this thesis we use a particular type of superconducting qubit architecture, the flux-biased phase qubit, to build and characterize the fundamental components of a quantum computer: universal quantum gates and a scalable qubit coupling architecture. A universal quantum gate allows for the construction of any arbitrary quantum computing operations, and is the analog of classical universal logic gates like the NAND gate. We build this gate using a pair of coupled flux-biased phase qubits where the coupling magnitude is fixed. We characterize this coupled qubit system and show how to construct the gate from the Hamiltonian of this two-qubit system. The universal quantum gate must also be characterized to verify that it has been constructed properly. However, to completely characterize a quantum gate, its output must be mapped out for any arbitrary input. Due to the infinite Hilbert space of qubits, such a characterization is more involved than simply obtaining a truth table, as would be done for classical computational logic. To achieve a complete characterization of a quantum gate we use a technique called quantum process tomography (QPT). We perform QPT on our universal gate, the "square-root of i-swap" gate, and for the first time in any solid state qubit architecture we completely characterize a universal quantum gate. As a result of this gate characterization, we discover that our gate performance is limited by qubit dephasing times. We are also able to measure noise correlations in the coupled qubit system using QPT. We find that by increasing the coupling strength between the qubits, we can build faster gates. This lets us get around the limits imposed by dephasing times by increasing the speed at which we can execute our universal gate. However, increasing the coupling strength of our fixed coupling scheme leads to increased errors during single qubit

  8. Symbolic Quantum Computation Simulation in SymPy

    NASA Astrophysics Data System (ADS)

    Cugini, Addison; Curry, Matt; Granger, Brian

    2010-10-01

    Quantum computing is an emerging field which aims to use quantum mechanics to solve difficult computational problems with greater efficiency than on a classical computer. There is a need to create software that i) helps newcomers to learn the field, ii) enables practitioners to design and simulate quantum circuits and iii) provides an open foundation for further research in the field. Towards these ends we have created a package, in the open-source symbolic computation library SymPy, that simulates the quantum circuit model of quantum computation using Dirac notation. This framework builds on the extant powerful symbolic capabilities of SymPy to perform its simulations in a fully symbolic manner. We use object-oriented design to abstract circuits as ordered collections of quantum gate and qubit objects. The gate objects can either be applied directly to the qubit objects or be represented as matrices in different bases. The package is also capable of performing the quantum Fourier transform and Shor's algorithm. A notion of measurement is made possible through the use of a non-commutative gate object. In this talk, we describe the software and show examples of quantum circuits on single- and multi-qubit states that involve common algorithms, gates and measurements.
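
    Since the package described here lives in SymPy's quantum module, a minimal usage sketch is shown below. It uses the public names found in current SymPy releases (Qubit, H, CNOT, qapply, measure_all); the exact API may have evolved since the talk, so treat the calls as indicative rather than authoritative.

```python
# Symbolic Bell-state preparation and measurement with SymPy's quantum module.
# Names reflect current SymPy releases and may differ from the 2010 version
# described in the talk.
from sympy.physics.quantum.qapply import qapply
from sympy.physics.quantum.qubit import Qubit, measure_all
from sympy.physics.quantum.gate import H, CNOT

# Apply H to qubit 1, then CNOT with control 1 and target 0, to |00>.
state = qapply(CNOT(1, 0) * H(1) * Qubit('00'))
print(state)                 # sqrt(2)/2*|00> + sqrt(2)/2*|11>
print(measure_all(state))    # [(|00>, 1/2), (|11>, 1/2)]
```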

  9. Parallel processing architecture for computing inverse differential kinematic equations of the PUMA arm

    NASA Technical Reports Server (NTRS)

    Hsia, T. C.; Lu, G. Z.; Han, W. H.

    1987-01-01

    In advanced robot control problems, on-line computation of the inverse Jacobian solution is frequently required. Parallel processing is an effective way to reduce computation time. A parallel processing architecture is developed for the inverse Jacobian (inverse differential kinematic equation) of the PUMA arm. The proposed pipeline/parallel algorithm can be implemented on an IC chip using systolic linear arrays. This implementation requires 27 processing cells and 25 time units. Computation time is thus significantly reduced.
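
    For reference, the computation being accelerated is the solution of J(q) q̇ = ẋ for the joint rates. The sketch below shows that step with a random placeholder Jacobian; it does not model the PUMA 560 kinematics or the systolic-array datapath.

```python
# Sketch of the inverse differential kinematics problem the architecture
# accelerates: solve J(q) * qdot = xdot for the joint rates. The Jacobian
# below is a random placeholder standing in for the 6x6 PUMA Jacobian.
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((6, 6))                        # placeholder manipulator Jacobian at some configuration q
xdot = np.array([0.1, 0.0, -0.05, 0.0, 0.02, 0.0])     # desired end-effector twist (linear, angular)

qdot = np.linalg.solve(J, xdot)                        # joint velocities; use pinv near singular configurations
print("joint rates:", np.round(qdot, 4))
```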

  10. General purpose architecture for intelligent computer-aided training

    NASA Technical Reports Server (NTRS)

    Loftin, R. Bowen (Inventor); Wang, Lui (Inventor); Baffes, Paul T. (Inventor); Hua, Grace C. (Inventor)

    1994-01-01

    An intelligent computer-aided training system having a general modular architecture is provided for use in a wide variety of training tasks and environments. It is comprised of a user interface which permits the trainee to access the same information available in the task environment and serves as a means for the trainee to assert actions to the system; a domain expert which is sufficiently intelligent to use the same information available to the trainee and carry out the task assigned to the trainee; a training session manager for examining the assertions made by the domain expert and by the trainee, evaluating such trainee assertions, and providing guidance to the trainee that is appropriate to his or her acquired skill level; a trainee model which contains a history of the trainee interactions with the system together with summary evaluative data; an intelligent training scenario generator for designing increasingly complex training exercises based on the current skill level contained in the trainee model and on any weaknesses or deficiencies that the trainee has exhibited in previous interactions; and a blackboard that provides a common fact base for communication between the other components of the system. Preferably, the domain expert contains a list of 'mal-rules' which typify errors that are usually made by novice trainees. Also preferably, the training session manager comprises an intelligent error detection means and an intelligent error handling means. The present invention utilizes a rule-based language having a control structure whereby a specific message passing protocol is utilized with respect to tasks which are procedural or step-by-step in structure. The rules can be activated by the trainee in any order to reach the solution by any valid or correct path.

  11. Quantum computing with photons: introduction to the circuit model, the one-way quantum computer, and the fundamental principles of photonic experiments

    NASA Astrophysics Data System (ADS)

    Barz, Stefanie

    2015-04-01

    Quantum physics has revolutionized our understanding of information processing and enables computational speed-ups that are unattainable using classical computers. This tutorial reviews the fundamental tools of photonic quantum information processing. The basics of theoretical quantum computing are presented and the quantum circuit model as well as measurement-based models of quantum computing are introduced. Furthermore, it is shown how these concepts can be implemented experimentally using photonic qubits, where information is encoded in the photons’ polarization.

  12. Automatic computation of quantum-mechanical bound states and wavefunctions

    NASA Astrophysics Data System (ADS)

    Ledoux, V.; Van Daele, M.

    2013-04-01

    We discuss the automatic solution of the multichannel Schrödinger equation. The proposed approach is based on the use of a CP method for which the step size is not restricted by the oscillations in the solution. Moreover, this CP method turns out to form a natural scheme for the integration of the Riccati differential equation which arises when introducing the (inverse) logarithmic derivative. A new Prüfer-type mechanism, which derives all the required information from the propagation of the inverse of the log-derivative, is introduced. It improves and refines the eigenvalue shooting process and implies that the user may specify the required eigenvalue by its index.
    Catalogue identifier: AEON_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEON_v1_0.html
    Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
    Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/license/license.html
    No. of lines in distributed program, including test data, etc.: 3822
    No. of bytes in distributed program, including test data, etc.: 119814
    Distribution format: tar.gz
    Programming language: Matlab
    Computer: Personal computer architectures
    Operating system: Windows, Linux, Mac (all systems on which Matlab can be installed)
    RAM: Depends on the problem size
    Classification: 4.3
    Nature of problem: Computation of eigenvalues and eigenfunctions of multichannel Schrödinger equations appearing in quantum mechanics.
    Solution method: A CP-based propagation scheme is used to advance the R-matrix in a shooting process. The shooting algorithm is supplemented by a Prüfer-type mechanism which allows the eigenvalues to be computed according to index: the user specifies an integer k≥0, and the code computes an approximation to the kth eigenvalue. Eigenfunctions are also available through an auxiliary routine, called after the eigenvalue has been determined.
    Restrictions: The program can only deal with non-singular problems. Additional
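
    The CP/R-matrix propagation and Prüfer indexing of the package are not reproduced here, but the sketch below shows the bare shooting idea it refines, applied to a single-channel harmonic-oscillator eigenvalue. It is written in Python for consistency with the other sketches in this listing, and the potential, units and bracketing interval are illustrative choices.

```python
# Bare-bones shooting method for a single-channel Schrödinger eigenvalue,
# illustrating the class of problem the package solves (its CP/R-matrix
# propagation and Prüfer indexing are far more sophisticated). Units are chosen
# so that -psi'' + x^2 psi = E psi has exact eigenvalues E_n = 2n + 1.
import numpy as np
from scipy.integrate import solve_ivp

L = 6.0  # the low-lying wavefunctions are negligible beyond |x| = L

def psi_at_right_boundary(E):
    """Integrate from x = -L with psi ~ 0 and return psi(+L)."""
    rhs = lambda x, y: [y[1], (x**2 - E) * y[0]]
    sol = solve_ivp(rhs, (-L, L), [0.0, 1e-6], rtol=1e-9, atol=1e-12)
    return sol.y[0, -1]

# Bisection on the energy: the ground state lies between E = 0.5 and E = 1.5,
# and the boundary value changes sign as E crosses an eigenvalue.
lo, hi = 0.5, 1.5
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if psi_at_right_boundary(lo) * psi_at_right_boundary(mid) < 0:
        hi = mid
    else:
        lo = mid
print("estimated ground-state energy:", round(0.5 * (lo + hi), 6), "(exact: 1)")
```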

  13. Architecture-Adaptive Computing Environment: A Tool for Teaching Parallel Programming

    NASA Technical Reports Server (NTRS)

    Dorband, John E.; Aburdene, Maurice F.

    2002-01-01

    Recently, networked and cluster computation have become very popular. This paper is an introduction to a new C-based parallel language for architecture-adaptive programming, aCe C. The primary purpose of aCe (Architecture-adaptive Computing Environment) is to encourage programmers to implement applications on parallel architectures by providing them the assurance that future architectures will be able to run their applications with a minimum of modification. A secondary purpose is to encourage computer architects to develop new types of architectures by providing an easily implemented software development environment and a library of test applications. This new language should be an ideal tool to teach parallel programming. In this paper, we will focus on some fundamental features of aCe C.

  14. Protecting software agents from malicious hosts using quantum computing

    NASA Astrophysics Data System (ADS)

    Reisner, John; Donkor, Eric

    2000-07-01

    We evaluate how quantum computing can be applied to security problems for software agents. Agent-based computing, which merges technological advances in artificial intelligence and mobile computing, is a rapidly growing domain, especially in applications such as electronic commerce, network management, information retrieval, and mission planning. System security is one of the more prominent research areas in agent-based computing, and the specific problem of protecting a mobile agent from a potentially hostile host is one of the most difficult of these challenges. In this work, we describe our agent model, and discuss the capabilities and limitations of classical solutions to the malicious host problem. Quantum computing may be extremely helpful in addressing the limitations of classical solutions to this problem. This paper highlights some of the areas where quantum computing could be applied to agent security.

  15. Spin-based all-optical quantum computation with quantum dots: Understanding and suppressing decoherence

    SciTech Connect

    Calarco, T.; Datta, A.; Fedichev, P.; Zoller, P.; Pazy, E.

    2003-07-01

    We present an all-optical implementation of quantum computation using semiconductor quantum dots. Quantum memory is represented by the spin of an excess electron stored in each dot. Two-qubit gates are realized by switching on trion-trion interactions between different dots. State selectivity is achieved via conditional laser excitation exploiting the Pauli exclusion principle. Readout is performed via a quantum-jump technique. We analyze the effect of the main imperfections present in real quantum dots on our scheme's performance: exciton decay, hole mixing, and phonon decoherence. We introduce an adiabatic gate procedure that allows one to circumvent these effects, and we quantitatively evaluate its fidelity.

  16. Time independent universal computing with spin chains: quantum plinko machine

    NASA Astrophysics Data System (ADS)

    Thompson, K. F.; Gokler, C.; Lloyd, S.; Shor, P. W.

    2016-07-01

    We present a scheme for universal quantum computing using XY Heisenberg spin chains. Information is encoded into packets propagating down these chains, and they interact with each other to perform universal quantum computation. A circuit using g gate blocks on m qubits can be encoded into chains of length O(g^{3+δ} m^{3+δ}) for all δ > 0 with vanishingly small error.

  17. Influence of the Inner-Shell Architecture on Quantum Yield and Blinking Dynamics in Core/Multishell Quantum Dots.

    PubMed

    Bajwa, Pooja; Gao, Feng; Nguyen, Anh; Omogo, Benard; Heyes, Colin D

    2016-03-01

    Choosing the composition of a shell for QDs is not trivial, as both the band-edge energy offset and interfacial lattice mismatch influence the final optical properties. One way to balance these competing effects is by forming multishells and/or gradient-alloy shells. However, this introduces multiple interfaces, and their relative effects on quantum yield and blinking are not yet fully understood. Here, we undertake a systematic, comparative study of the addition of inner shells of a single component versus gradient-alloy shells of cadmium/zinc chalcogenides onto CdSe cores, and then capping with a thin ZnS outer shell to form various core/multishell configurations. We show that the architecture of the inner shell between the CdSe core and the outer ZnS shell significantly influences both the quantum yield and blinking dynamics, but that these effects are not correlated: a high ensemble quantum yield does not necessarily equate to reduced blinking. Two mathematical models have been proposed to describe the blinking dynamics: the more common power-law model and a more recent multiexponential model. By binning the same data with 1 and 20 ms resolution, we show that the on times can be better described by the multiexponential model, whereas the off times can be better described by the power-law model. We discuss physical mechanisms that might explain this behavior and how it can be affected by the inner-shell architecture. PMID:26693950

  18. Integrating Computing Resources: A Shared Distributed Architecture for Academics and Administrators.

    ERIC Educational Resources Information Center

    Beltrametti, Monica; English, Will

    1994-01-01

    Development and implementation of a shared distributed computing architecture at the University of Alberta (Canada) are described. Aspects discussed include design of the architecture, users' views of the electronic environment, technical and managerial challenges, and the campuswide human infrastructures needed to manage such an integrated…

  19. Toward a Fault Tolerant Architecture for Vital Medical-Based Wearable Computing.

    PubMed

    Abdali-Mohammadi, Fardin; Bajalan, Vahid; Fathi, Abdolhossein

    2015-12-01

    Advancements in computers and electronic technologies have led to the emergence of a new generation of efficient small intelligent systems. The products of such technologies include smartphones and wearable devices, which have attracted the attention of medical applications. These products are used less in critical medical applications because of their resource constraints and failure sensitivity. This is because, without safety considerations, small integrated hardware can endanger patients' lives. Therefore, some principles must be established for constructing wearable systems in healthcare so that these concerns are addressed. Accordingly, this paper proposes an architecture for constructing wearable systems in critical medical applications. The proposed architecture is a three-tier one, supporting data flow from body sensors to the cloud. The tiers of this architecture include wearable computers, mobile computing, and mobile cloud computing. One of the features of this architecture is its potential for high fault tolerance due to the nature of its components. Moreover, the required protocols are presented to coordinate the components of this architecture. Finally, the reliability of this architecture is assessed by simulating the architecture and its components, and other aspects of the proposed architecture are discussed. PMID:26364202

  20. Quantum computation for large-scale image classification

    NASA Astrophysics Data System (ADS)

    Ruan, Yue; Chen, Hanwu; Tan, Jianing; Li, Xi

    2016-07-01

    Due to the lack of an effective quantum feature extraction method, there is currently no effective way to perform quantum image classification or recognition. In this paper, for the first time, a global quantum feature extraction method based on Schmidt decomposition is proposed. A revised quantum learning algorithm is also proposed that classifies images by computing the Hamming distance between these features. Based on experimental results from the benchmark database Caltech 101 and an analysis of the algorithm, an effective approach to large-scale image classification in the context of big data is proposed.
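
    The classical decision rule underlying the proposed classifier is nearest-neighbour assignment by Hamming distance between binary feature vectors. The sketch below shows that rule on made-up toy features; it does not implement the Schmidt-decomposition feature extraction or any quantum speedup.

```python
# Classical reference for the decision rule used above: assign a binary feature
# vector to the class of the nearest training vector in Hamming distance.
# The toy feature vectors are illustrative, not Schmidt-decomposition features.
import numpy as np

train_features = np.array([[0, 1, 1, 0, 1, 0, 0, 1],
                           [0, 1, 1, 1, 1, 0, 0, 1],
                           [1, 0, 0, 1, 0, 1, 1, 0],
                           [1, 0, 0, 1, 0, 1, 0, 0]], dtype=np.uint8)
train_labels = np.array(["faces", "faces", "airplanes", "airplanes"])

def classify(x):
    hamming = np.count_nonzero(train_features != x, axis=1)  # distance to every training vector
    return train_labels[np.argmin(hamming)]

print(classify(np.array([0, 1, 1, 1, 1, 0, 1, 1], dtype=np.uint8)))  # -> "faces"
```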

  1. Universal linear Bogoliubov transformations through one-way quantum computation

    SciTech Connect

    Ukai, Ryuji; Yoshikawa, Jun-ichi; Iwata, Noriaki; Furusawa, Akira; Loock, Peter van

    2010-03-15

    We show explicitly how to realize an arbitrary linear unitary Bogoliubov (LUBO) transformation on a multimode quantum state through homodyne-based one-way quantum computation. Any LUBO transformation can be approximated by means of a fixed, finite-sized, sufficiently squeezed Gaussian cluster state that allows for the implementation of beam splitters (in form of three-mode connection gates) and general one-mode LUBO transformations. In particular, we demonstrate that a linear four-mode cluster state is a sufficient resource for an arbitrary one-mode LUBO transformation. Arbitrary-input quantum states including non-Gaussian states could be efficiently attached to the cluster through quantum teleportation.

  2. The geometric approach to quantum correlations: computability versus reliability

    NASA Astrophysics Data System (ADS)

    Tufarelli, Tommaso; MacLean, Tom; Girolami, Davide; Vasile, Ruggero; Adesso, Gerardo

    2013-07-01

    We propose a modified metric based on the Hilbert-Schmidt norm and adopt it to define a rescaled version of the geometric measure of quantum discord. Such a measure is found not to suffer from pathological dependence on state purity. Although the employed metric is still non-contractive under quantum operations, we show that the resulting indicator of quantum correlations is in agreement with other bona fide discord measures in a number of physical examples. We present a critical assessment of the requirements of reliability versus computability when approaching the task of quantifying, or measuring, general quantum correlations in a bipartite state.

  3. Continuous-Variable Quantum Computation of Oracle Decision Problems

    NASA Astrophysics Data System (ADS)

    Adcock, Mark R. A.

    Quantum information processing is appealing due to its ability to solve certain problems quantitatively faster than classical information processing. Most quantum algorithms have been studied in discretely parameterized systems, but many quantum systems are continuously parameterized. The field of quantum optics in particular has sophisticated techniques for manipulating continuously parameterized quantum states of light, but the lack of a code-state formalism has hindered the study of quantum algorithms in these systems. To address this situation, a code-state formalism for the solution of oracle decision problems in continuously parameterized quantum systems is developed. In the infinite-dimensional case, we study continuous-variable quantum algorithms for the solution of the Deutsch-Jozsa oracle decision problem implemented within a single harmonic oscillator. Orthogonal states are used as the computational bases, and we show that, contrary to a previous claim in the literature, this implementation of quantum information processing has limitations due to a position-momentum trade-off of the Fourier transform. We further demonstrate that orthogonal encoding bases are not unique, and using the coherent states of the harmonic oscillator as the computational bases, our formalism enables quantifying

  4. Spin Glass a Bridge Between Quantum Computation and Statistical Mechanics

    NASA Astrophysics Data System (ADS)

    Ohzeki, Masayuki

    2013-09-01

    In this chapter, we show two fascinating topics lying between quantum information processing and statistical mechanics. First, we introduce an elaborate technique, the surface code, to prepare a particular quantum state with robustness against decoherence. Interestingly, the theoretical limitation of the surface code, the accuracy threshold, for restoring the quantum state has a close connection with the problem of the phase transition in a special model known as a spin glass, one of the most active research areas in statistical mechanics. The phase transition in spin glasses is an intractable problem, since we must deal with a many-body system with complicated interactions whose signs change depending on the distance between spins. Fortunately, recent progress in spin-glass theory enables us to predict the precise location of the critical point at which the phase transition occurs. This means that statistical mechanics can be used to reveal one of the most interesting parts of quantum information processing. We show how to import this special tool from statistical mechanics into the problem of the accuracy threshold in quantum computation. Second, we show another interesting technique that employs quantum nature: quantum annealing. The purpose of quantum annealing is to search for the most favored solution of a multivariable function, namely an optimization problem. The most typical instance is the traveling salesman problem, which asks for the minimum-length tour visiting all the cities. In quantum annealing, we introduce quantum fluctuations to drive a particular system with an artificial Hamiltonian, in which the ground state represents the optimal solution of the specific problem we desire to solve. Induction of the quantum fluctuations gives rise to the quantum tunneling effect, which allows nontrivial hopping from state to state. We then sketch a strategy to control the quantum fluctuations efficiently so as to reach the ground state. Such a generic framework is called
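
    As a purely classical stand-in for the optimization task that quantum annealing targets, the sketch below runs simulated annealing on a small random Ising instance, with thermal fluctuations playing the role that quantum fluctuations play in the quantum annealer. The instance and schedule are arbitrary illustrative choices.

```python
# Classical simulated annealing on a small random Ising instance, standing in
# for the optimization task that quantum annealing targets. Thermal fluctuations
# play the role that quantum fluctuations play in the quantum annealer.
import numpy as np

rng = np.random.default_rng(1)
n = 20
J = rng.standard_normal((n, n))
J = np.triu(J, 1); J = J + J.T                   # random symmetric couplings, zero diagonal

def energy(s):
    return -0.5 * s @ J @ s

s = rng.choice([-1, 1], size=n)
for T in np.geomspace(5.0, 0.01, 4000):          # slowly lowered temperature schedule
    i = rng.integers(n)
    dE = 2 * s[i] * (J[i] @ s)                   # energy change from flipping spin i
    if dE < 0 or rng.random() < np.exp(-dE / T): # Metropolis acceptance
        s[i] = -s[i]

print("final energy:", round(energy(s), 3))
```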

  5. Conduction pathways in microtubules, biological quantum computation, and consciousness.

    PubMed

    Hameroff, Stuart; Nip, Alex; Porter, Mitchell; Tuszynski, Jack

    2002-01-01

    Technological computation is entering the quantum realm, focusing attention on biomolecular information processing systems such as proteins, as presaged by the work of Michael Conrad. Protein conformational dynamics and pharmacological evidence suggest that protein conformational states, the fundamental information units ('bits') in biological systems, are governed by quantum events, and are thus perhaps akin to quantum bits ('qubits') as utilized in quantum computation. 'Real time' dynamic activities within cells are regulated by the cell cytoskeleton, particularly microtubules (MTs), which are cylindrical lattice polymers of the protein tubulin. Recent evidence shows signaling, communication and conductivity in MTs, and theoretical models have predicted both classical and quantum information processing in MTs. In this paper we show conduction pathways for electron mobility and possible quantum tunneling and superconductivity among aromatic amino acids in tubulins. The pathways within tubulin match helical patterns in the microtubule lattice structure, which lend themselves to topological quantum effects resistant to decoherence. The Penrose-Hameroff 'Orch OR' model of consciousness is reviewed as an example of the possible utility of quantum computation in MTs. PMID:11755497

  6. QDENSITY/QCWAVE: A Mathematica quantum computer simulation update

    NASA Astrophysics Data System (ADS)

    Tabakin, Frank

    2016-04-01

    The Mathematica quantum computer simulation packages QDENSITY and QCWAVE are updated for Mathematica 9-10.3. An overview is given of the new QDensity, QCWave, BTSystem and Circuits packages, which includes: (1) improved treatment of tensor products of states and density matrices, (2) major extension to include qutrit (triplet), as well as qubit (binary) and hybrid qubit/qutrit systems in the associated BTSystem package, (3) updated sample quantum computation algorithms, (4) entanglement studies, including Schmidt decomposition, entropy, mutual information, partial transposition, and calculation of the quantum discord. Examples of Bell's theorem and concurrence are also included. This update will hopefully aid in studies of QC dynamics.
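
    Two of the quantities listed in the update, the Schmidt decomposition and the entanglement entropy, are easy to reproduce for a small pure state outside Mathematica. The NumPy sketch below is only an illustration of the underlying linear algebra, not of the package's interface.

```python
# Plain NumPy illustration of two quantities the package computes: the Schmidt
# decomposition and entanglement entropy of a two-qubit pure state. This shows
# the underlying linear algebra only, not the package's Mathematica interface.
import numpy as np

theta = np.pi / 8
psi = np.array([np.cos(theta), 0, 0, np.sin(theta)], dtype=complex)  # cos|00> + sin|11>

schmidt = np.linalg.svd(psi.reshape(2, 2), compute_uv=False)  # Schmidt coefficients
probs = schmidt**2
entropy = -np.sum(probs * np.log2(probs))                     # entanglement entropy in bits

print("Schmidt coefficients:", np.round(schmidt, 4))
print("entanglement entropy:", round(float(entropy), 4))
```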

  7. Exponential rise of dynamical complexity in quantum computing through projections

    PubMed Central

    Burgarth, Daniel Klaus; Facchi, Paolo; Giovannetti, Vittorio; Nakazato, Hiromichi; Pascazio, Saverio; Yuasa, Kazuya

    2014-01-01

    The ability of quantum systems to host exponentially complex dynamics has the potential to revolutionize science and technology. Therefore, much effort has been devoted to developing protocols for computation, communication and metrology, which exploit this scaling, despite formidable technical difficulties. Here we show that the mere frequent observation of a small part of a quantum system can turn its dynamics from a very simple one into an exponentially complex one, capable of universal quantum computation. After discussing examples, we go on to show that this effect is generally to be expected: almost any quantum dynamics becomes universal once ‘observed’ as outlined above. Conversely, we show that any complex quantum dynamics can be ‘purified’ into a simpler one in larger dimensions. We conclude by demonstrating that even local noise can lead to an exponentially complex dynamics. PMID:25300692

  8. Circuit-QED-based scalable architectures for quantum information processing with superconducting qubits

    NASA Astrophysics Data System (ADS)

    Billangeon, P.-M.; Tsai, J. S.; Nakamura, Y.

    2015-03-01

    We discuss different ways of generating entanglement in the original picture of circuit QED (XcQED) and several restrictions that arise in the context of a large-scale quantum architecture. To alleviate some of the issues posed by the presence of the nonlinearities inherent to these systems, we introduce a layout for circuit QED, wherein an artificial atom is coupled to a quantized radiation field via its longitudinal degree of freedom (ZcQED). This system is akin to ion traps used in atomic physics, but it relies on fixed coupling between the atom and the resonator. We describe a scalable architecture for processing quantum information with superconducting qubits, which is free from any type of residual interaction between the atomic and photonic degrees of freedom. Tunable interactions can be realized based on sideband transitions, and the system can be operated out of the Lamb-Dicke regime, allowing it to benefit from the possibility of achieving large coupling strengths between atoms and resonators. We also discuss a readout scheme that does not require any extra circuits and allows a qubit-specific measurement of the state of the quantum register inspired by the electron shelving technique. This scheme is quantum nondemolition (QND)-like, and allows for single-shot determination of the qubit states.

  9. Quantum Computation Based on Photons with Three Degrees of Freedom.

    PubMed

    Luo, Ming-Xing; Li, Hui-Ran; Lai, Hong; Wang, Xiaojun

    2016-01-01

    Quantum systems are important resources for quantum computers. In contrast to previous encodings using quantum systems with one degree of freedom (DoF) or two DoFs, we investigate the possibility of encoding photon systems with three DoFs, consisting of the polarization DoF and two spatial DoFs. By exploiting the optical circular birefringence induced by an NV center in a diamond embedded in a photonic crystal cavity, we propose several hybrid controlled-NOT (hybrid CNOT) gates operating on two-photon or one-photon systems. These hybrid CNOT gates show that three DoFs may be encoded as independent qubits without auxiliary DoFs. Our result provides a useful way to reduce quantum simulation resources by exploring complex quantum systems for quantum applications requiring large qubit systems. PMID:27174302

  10. Quantum Computation Based on Photons with Three Degrees of Freedom

    NASA Astrophysics Data System (ADS)

    Luo, Ming-Xing; Li, Hui-Ran; Lai, Hong; Wang, Xiaojun

    2016-05-01

    Quantum systems are important resources for quantum computers. In contrast to previous encodings using quantum systems with one degree of freedom (DoF) or two DoFs, we investigate the possibility of encoding photon systems with three DoFs, consisting of the polarization DoF and two spatial DoFs. By exploiting the optical circular birefringence induced by an NV center in a diamond embedded in a photonic crystal cavity, we propose several hybrid controlled-NOT (hybrid CNOT) gates operating on two-photon or one-photon systems. These hybrid CNOT gates show that three DoFs may be encoded as independent qubits without auxiliary DoFs. Our result provides a useful way to reduce quantum simulation resources by exploring complex quantum systems for quantum applications requiring large qubit systems.

  11. Quantum Computation Based on Photons with Three Degrees of Freedom

    PubMed Central

    Luo, Ming-Xing; Li, Hui-Ran; Lai, Hong; Wang, Xiaojun

    2016-01-01

    Quantum systems are important resources for quantum computers. In contrast to previous encodings using quantum systems with one degree of freedom (DoF) or two DoFs, we investigate the possibility of encoding photon systems with three DoFs, consisting of the polarization DoF and two spatial DoFs. By exploiting the optical circular birefringence induced by an NV center in a diamond embedded in a photonic crystal cavity, we propose several hybrid controlled-NOT (hybrid CNOT) gates operating on two-photon or one-photon systems. These hybrid CNOT gates show that three DoFs may be encoded as independent qubits without auxiliary DoFs. Our result provides a useful way to reduce quantum simulation resources by exploring complex quantum systems for quantum applications requiring large qubit systems. PMID:27174302

  12. Quantum Computing: Selected Internet Resources for Librarians, Researchers, and the Casually Curious

    ERIC Educational Resources Information Center

    Cirasella, Jill

    2009-01-01

    This article presents an annotated selection of the most important and informative Internet resources for learning about quantum computing, finding quantum computing literature, and tracking quantum computing news. All of the quantum computing resources described in this article are freely available, English-language web sites that fall into one…

  13. Quantum Monte Carlo Endstation for Petascale Computing

    SciTech Connect

    David Ceperley

    2011-03-02

    CUDA GPU platform. We restructured the CPU algorithms to express additional parallelism, minimize GPU-CPU communication, and efficiently utilize the GPU memory hierarchy. Using mixed precision on GT200 GPUs and MPI for intercommunication and load balancing, we observe typical full-application speedups of approximately 10x to 15x relative to quad-core Xeon CPUs alone, while reproducing the double-precision CPU results within statistical error. We developed an all-electron quantum Monte Carlo (QMC) method for solids that does not rely on pseudopotentials, and used it to construct a primary ultra-high-pressure calibration based on the equation of state of cubic boron nitride. We computed the static contribution to the free energy with the QMC method and obtained the phonon contribution from density functional theory, yielding a high-accuracy calibration up to 900 GPa usable directly in experiment. We computed the anharmonic Raman frequency shift with QMC simulations as a function of pressure and temperature, allowing optical pressure calibration. In contrast to present experimental approaches, small systematic errors in the theoretical EOS do not increase with pressure, and no extrapolation is needed. This all-electron method is applicable to first-row solids, providing a new reference for ab initio calculations of solids and benchmarks for pseudopotential accuracy. We compared experimental and theoretical results on the momentum distribution and the quasiparticle renormalization factor in sodium. From an x-ray Compton-profile measurement of the valence-electron momentum density, we derived its discontinuity at the Fermi wavevector finding an accurate measure of the renormalization factor that we compared with quantum-Monte-Carlo and G0W0 calculations performed both on crystalline sodium and on the homogeneous electron gas. Our calculated results are in good agreement with the experiment. We have been studying the heat of formation for various Kubas complexes of molecular

  14. Blind quantum computation over a collective-noise channel

    NASA Astrophysics Data System (ADS)

    Takeuchi, Yuki; Fujii, Keisuke; Ikuta, Rikizo; Yamamoto, Takashi; Imoto, Nobuyuki

    2016-05-01

    Blind quantum computation (BQC) allows a client (Alice), who only possesses relatively poor quantum devices, to delegate universal quantum computation to a server (Bob) in such a way that Bob cannot know Alice's inputs, algorithm, and outputs. The quantum channel between Alice and Bob is noisy, and the loss over the long-distance quantum communication should also be taken into account. Here we propose to use decoherence-free subspace (DFS) to overcome the collective noise in the quantum channel for BQC, which we call DFS-BQC. We propose three variations of DFS-BQC protocols. One of them, a coherent-light-assisted DFS-BQC protocol, allows Alice to faithfully send the signal photons with a probability proportional to a transmission rate of the quantum channel. In all cases, we combine the ideas based on DFS and the Broadbent-Fitzsimons-Kashefi protocol, which is one of the BQC protocols, without degrading unconditional security. The proposed DFS-based schemes are generic and hence can be applied to other BQC protocols where Alice sends quantum states to Bob.

  15. Solving strongly correlated electron models on a quantum computer

    NASA Astrophysics Data System (ADS)

    Wecker, Dave; Hastings, Matthew B.; Wiebe, Nathan; Clark, Bryan K.; Nayak, Chetan; Troyer, Matthias

    2015-12-01

    One of the main applications of future quantum computers will be the simulation of quantum models. While the evolution of a quantum state under a Hamiltonian is straightforward (if sometimes expensive), using quantum computers to determine the ground-state phase diagram of a quantum model and the properties of its phases is more involved. Using the Hubbard model as a prototypical example, we here show all the steps necessary to determine its phase diagram and ground-state properties on a quantum computer. In particular, we discuss strategies for efficiently determining and preparing the ground state of the Hubbard model starting from various mean-field states with broken symmetry. We present an efficient procedure to prepare arbitrary Slater determinants as initial states and present the complete set of quantum circuits needed to evolve from these to the ground state of the Hubbard model. We show that, using efficient nesting of the various terms, each time step in the evolution can be performed with just O(N) gates and O(log N) circuit depth. We give explicit circuits to measure arbitrary local observables and static and dynamic correlation functions, in both the time and the frequency domains. We further present efficient nondestructive approaches to measurement that avoid the need to reprepare the ground state after each measurement and that quadratically reduce the measurement error.
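
    A useful classical cross-check for the smallest instance of this problem is exact diagonalization of the two-site Hubbard model. The sketch below builds the Hamiltonian from Jordan-Wigner fermion matrices and compares the half-filled ground-state energy with the textbook analytic result; no quantum circuits from the paper are reproduced.

```python
# Classical sanity check: the two-site Hubbard model at half filling, built from
# Jordan-Wigner fermion matrices and diagonalized exactly. This is the kind of
# reference number a quantum simulation would be checked against.
import numpy as np

t, U = 1.0, 4.0
I2, Z = np.eye(2), np.diag([1.0, -1.0])
a = np.array([[0.0, 1.0], [0.0, 0.0]])          # single-mode annihilation |0><1|

def c(p, n_modes=4):
    """Jordan-Wigner annihilation operator for mode p, ordered (0up, 0dn, 1up, 1dn)."""
    factors = [Z] * p + [a] + [I2] * (n_modes - p - 1)
    out = np.array([[1.0]])
    for f in factors:
        out = np.kron(out, f)
    return out

cs = [c(p) for p in range(4)]
ns = [op.T @ op for op in cs]                    # number operators (all matrices are real)

H = (-t * (cs[0].T @ cs[2] + cs[2].T @ cs[0] + cs[1].T @ cs[3] + cs[3].T @ cs[1])
     + U * (ns[0] @ ns[1] + ns[2] @ ns[3]))

evals, evecs = np.linalg.eigh(H)
filling = np.diag(evecs.T @ sum(ns) @ evecs)     # <N> for each eigenstate
e0 = evals[np.isclose(filling, 2.0)].min()       # ground state of the two-electron sector

print("half-filled ground-state energy:", round(e0, 6))
print("analytic two-site result:       ", round((U - np.sqrt(U**2 + 16 * t**2)) / 2, 6))
```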

  16. Milestones toward Majorana-based quantum computing

    NASA Astrophysics Data System (ADS)

    Alicea, Jason

    Experiments on nanowire-based Majorana platforms now appear poised to move beyond the preliminary problem of zero-mode detection and towards loftier goals of realizing non-Abelian statistics and quantum information applications. Using an approach that synthesizes recent materials growth breakthroughs with tools long successfully deployed in quantum-dot research, I will outline a number of relatively modest milestones that progressively bridge the gap between the current state of the art and these grand longer-term challenges. The intermediate Majorana experiments surveyed in this talk should be broadly adaptable to other approaches as well. Supported by the National Science Foundation (DMR-1341822), Institute for Quantum Information and Matter, and Walter Burke Institute at Caltech.

  17. A pseudo-spin surface-acoustic-wave quantum computer.

    PubMed

    Barnes, C H W

    2003-07-15

    A modification to the surface-acoustic-wave quantum computer is described. The use of pseudo-spin qubits is introduced as a way to simplify the fabrication and programming of the computer. A form of optical readout that relies on the electrons in each surface-acoustic-wave minimum recombining with holes in a two-dimensional hole gas is suggested as a means to measure the output. The suggested modification would allow the quantum computer to be made smaller and to operate faster. PMID:12869323

  18. Quantum reactive scattering on innovative computing platforms

    NASA Astrophysics Data System (ADS)

    Pacifici, Leonardo; Nalli, Danilo; Laganà, Antonio

    2013-05-01

    The possibility of implementing quantum reactive scattering programs on cheap platforms, originally used for graphics purposes only, has been investigated using an NVIDIA GPU. After converting the code considered from Fortran to C and deeply restructuring it to exploit the GPU's key features, significant speedups were obtained for RWAVEPR, a time-dependent quantum reactive scattering code that propagates a complex wavepacket in time. As benchmark calculations, the evaluation of the reactive probabilities of the Cl+H2 and N+N2 reactions has been considered.

  19. Playable Serious Games for Studying and Programming Computational STEM and Informatics Applications of Distributed and Parallel Computer Architectures

    ERIC Educational Resources Information Center

    Amenyo, John-Thones

    2012-01-01

    Carefully engineered playable games can serve as vehicles for students and practitioners to learn and explore the programming of advanced computer architectures to execute applications, such as high performance computing (HPC) and complex, inter-networked, distributed systems. The article presents families of playable games that are grounded in…

  20. Entanglement-Based Machine Learning on a Quantum Computer

    NASA Astrophysics Data System (ADS)

    Cai, X.-D.; Wu, D.; Su, Z.-E.; Chen, M.-C.; Wang, X.-L.; Li, Li; Liu, N.-L.; Lu, C.-Y.; Pan, J.-W.

    2015-03-01

    Machine learning, a branch of artificial intelligence, learns from previous experience to optimize performance, which is ubiquitous in various fields such as computer sciences, financial analysis, robotics, and bioinformatics. A challenge is that machine learning with the rapidly growing "big data" could become intractable for classical computers. Recently, quantum machine learning algorithms [Lloyd, Mohseni, and Rebentrost, arXiv.1307.0411] were proposed which could offer an exponential speedup over classical algorithms. Here, we report the first experimental entanglement-based classification of two-, four-, and eight-dimensional vectors to different clusters using a small-scale photonic quantum computer, which are then used to implement supervised and unsupervised machine learning. The results demonstrate the working principle of using quantum computers to manipulate and classify high-dimensional vectors, the core mathematical routine in machine learning. The method can, in principle, be scaled to larger numbers of qubits, and may provide a new route to accelerate machine learning.
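
    The quantum subroutine at the heart of such distance-based classification is overlap estimation between state vectors. The sketch below simulates the standard swap test for two single-qubit states with plain NumPy; it illustrates the primitive only and does not model the photonic experiment.

```python
# State-vector simulation of the swap test, the standard primitive for
# estimating |<a|b>|^2, which underlies distance-based quantum classification.
# Nothing specific to the photonic experiment above is modeled.
import numpy as np
from scipy.linalg import block_diag

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I4 = np.eye(4)
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])
CSWAP = block_diag(I4, SWAP)                    # swap the two data qubits if the ancilla is |1>

a = np.array([1, 0], dtype=complex)                           # |0>
b = np.array([np.cos(0.3), np.sin(0.3)], dtype=complex)       # a nearby state

state = np.kron(np.array([1, 0]), np.kron(a, b))              # ancilla |0> (x) |a> (x) |b>
state = np.kron(H, I4) @ state                                # Hadamard on the ancilla
state = CSWAP @ state
state = np.kron(H, I4) @ state

p0 = np.sum(np.abs(state[:4])**2)                             # probability the ancilla reads 0
overlap_sq = 2 * p0 - 1                                       # swap-test relation P(0) = (1 + |<a|b>|^2)/2
print("estimated |<a|b>|^2:", round(overlap_sq, 6))
print("exact     |<a|b>|^2:", round(abs(np.vdot(a, b))**2, 6))
```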

  1. Entanglement-based machine learning on a quantum computer.

    PubMed

    Cai, X-D; Wu, D; Su, Z-E; Chen, M-C; Wang, X-L; Li, Li; Liu, N-L; Lu, C-Y; Pan, J-W

    2015-03-20

    Machine learning, a branch of artificial intelligence, learns from previous experience to optimize performance, which is ubiquitous in various fields such as computer sciences, financial analysis, robotics, and bioinformatics. A challenge is that machine learning with the rapidly growing "big data" could become intractable for classical computers. Recently, quantum machine learning algorithms [Lloyd, Mohseni, and Rebentrost, arXiv.1307.0411] were proposed which could offer an exponential speedup over classical algorithms. Here, we report the first experimental entanglement-based classification of two-, four-, and eight-dimensional vectors to different clusters using a small-scale photonic quantum computer, which are then used to implement supervised and unsupervised machine learning. The results demonstrate the working principle of using quantum computers to manipulate and classify high-dimensional vectors, the core mathematical routine in machine learning. The method can, in principle, be scaled to larger numbers of qubits, and may provide a new route to accelerate machine learning. PMID:25839250

  2. An efficient FPGA architecture for integer ƞth root computation

    NASA Astrophysics Data System (ADS)

    Rangel-Valdez, Nelson; Barron-Zambrano, Jose Hugo; Torres-Huitzil, Cesar; Torres-Jimenez, Jose

    2015-10-01

    In embedded computing, it is common to find applications such as signal processing, image processing, computer graphics or data compression that might benefit from hardware implementation for the computation of integer roots of order N. However, the scientific literature lacks architectural designs that implement such operations for different values of N, using a low amount of resources. This article presents a parameterisable field programmable gate array (FPGA) architecture for an efficient Nth root calculator that uses only adders/subtractors and N location memory elements. The architecture was tested for different values of N, using 64-bit number representation. The results show a consumption of up to 10% of the logical resources of a Xilinx XC6SLX45-CSG324C device, depending on the value of N. The hardware implementation improved the performance of its corresponding software implementations by one order of magnitude. The architecture performance varies from several thousand to seven million root operations per second.
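
    As a software reference for the operation the architecture computes, the sketch below returns the integer Nth root, floor(x^(1/N)), via exponential plus binary search. It defines the function being accelerated but does not reproduce the adder/subtractor-only datapath described in the article.

```python
# Software reference for the operation the FPGA architecture computes: the
# integer Nth root, floor(x**(1/n)), via exponential and binary search. The
# hardware datapath described above (adders/subtractors only) is not modeled.
def integer_nth_root(x: int, n: int) -> int:
    """Largest r such that r**n <= x, for x >= 0 and n >= 1."""
    if x < 0 or n < 1:
        raise ValueError("x must be non-negative and n positive")
    lo, hi = 0, 1
    while hi**n <= x:          # exponential search for an upper bound
        hi *= 2
    while lo < hi - 1:         # binary search on the invariant lo**n <= x < hi**n
        mid = (lo + hi) // 2
        if mid**n <= x:
            lo = mid
        else:
            hi = mid
    return lo

x = 2**64 - 1                                     # a 64-bit operand, as in the article's tests
r = integer_nth_root(x, 5)
assert r**5 <= x < (r + 1)**5                     # self-check of the defining property
print(r, integer_nth_root(1000, 3), integer_nth_root(999, 3))   # ..., 10, 9
```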

  3. Bifurcation-based adiabatic quantum computation with a nonlinear oscillator network

    PubMed Central

    Goto, Hayato

    2016-01-01

    The dynamics of nonlinear systems qualitatively change depending on their parameters, which is called bifurcation. A quantum-mechanical nonlinear oscillator can yield a quantum superposition of two oscillation states, known as a Schrödinger cat state, via quantum adiabatic evolution through its bifurcation point. Here we propose a quantum computer comprising such quantum nonlinear oscillators, instead of quantum bits, to solve hard combinatorial optimization problems. The nonlinear oscillator network finds optimal solutions via quantum adiabatic evolution, where nonlinear terms are increased slowly, in contrast to conventional adiabatic quantum computation or quantum annealing, where quantum fluctuation terms are decreased slowly. As a result of numerical simulations, it is concluded that quantum superposition and quantum fluctuation work effectively to find optimal solutions. It is also notable that the present computer is analogous to neural computers, which are also networks of nonlinear components. Thus, the present scheme will open new possibilities for quantum computation, nonlinear science, and artificial intelligence. PMID:26899997

  4. Bifurcation-based adiabatic quantum computation with a nonlinear oscillator network

    NASA Astrophysics Data System (ADS)

    Goto, Hayato

    2016-02-01

    The dynamics of nonlinear systems qualitatively change depending on their parameters, which is called bifurcation. A quantum-mechanical nonlinear oscillator can yield a quantum superposition of two oscillation states, known as a Schrödinger cat state, via quantum adiabatic evolution through its bifurcation point. Here we propose a quantum computer comprising such quantum nonlinear oscillators, instead of quantum bits, to solve hard combinatorial optimization problems. The nonlinear oscillator network finds optimal solutions via quantum adiabatic evolution, where nonlinear terms are increased slowly, in contrast to conventional adiabatic quantum computation or quantum annealing, where quantum fluctuation terms are decreased slowly. As a result of numerical simulations, it is concluded that quantum superposition and quantum fluctuation work effectively to find optimal solutions. It is also notable that the present computer is analogous to neural computers, which are also networks of nonlinear components. Thus, the present scheme will open new possibilities for quantum computation, nonlinear science, and artificial intelligence.

  5. Bifurcation-based adiabatic quantum computation with a nonlinear oscillator network.

    PubMed

    Goto, Hayato

    2016-01-01

    The dynamics of nonlinear systems qualitatively change depending on their parameters, which is called bifurcation. A quantum-mechanical nonlinear oscillator can yield a quantum superposition of two oscillation states, known as a Schrödinger cat state, via quantum adiabatic evolution through its bifurcation point. Here we propose a quantum computer comprising such quantum nonlinear oscillators, instead of quantum bits, to solve hard combinatorial optimization problems. The nonlinear oscillator network finds optimal solutions via quantum adiabatic evolution, where nonlinear terms are increased slowly, in contrast to conventional adiabatic quantum computation or quantum annealing, where quantum fluctuation terms are decreased slowly. As a result of numerical simulations, it is concluded that quantum superposition and quantum fluctuation work effectively to find optimal solutions. It is also notable that the present computer is analogous to neural computers, which are also networks of nonlinear components. Thus, the present scheme will open new possibilities for quantum computation, nonlinear science, and artificial intelligence. PMID:26899997

  6. Memory intensive functional architecture for distributed computer control systems

    SciTech Connect

    Dimmler, D.G.

    1983-10-01

    A memory-intensive functional architecture for distributed data-acquisition, monitoring, and control systems with large numbers of nodes has been conceptually developed and applied in several large-scale and some smaller systems. This discussion concentrates on: (1) the basic architecture; (2) recent expansions of the architecture which now become feasible in view of the rapidly developing component technologies in microprocessors and functional large-scale integration circuits; and (3) implementation of some key hardware and software structures and one system implementation, a system for performing control and data acquisition for a neutron spectrometer at the Brookhaven High Flux Beam Reactor. The spectrometer is equipped with a large-area position-sensitive neutron detector.

  7. On the 'principle of the quantumness', the quantumness of Relativity, and the computational grand-unification

    SciTech Connect

    D'Ariano, Giacomo Mauro

    2010-05-04

    I will argue that the proposal of establishing operational foundations of Quantum Theory should have top priority, and that Lucien Hardy's program on Quantum Gravity should be paralleled by an analogous program on Quantum Field Theory (QFT), which needs to be reformulated, notwithstanding its experimental success. In this paper, after reviewing recently suggested operational 'principles of the quantumness', I address the problem of whether Quantum Theory and Special Relativity are unrelated theories, or instead, if the one implies the other. I show how Special Relativity can be indeed derived from causality of Quantum Theory, within the computational paradigm 'the universe is a huge quantum computer', reformulating QFT as a Quantum-Computational Field Theory (QCFT). In QCFT Special Relativity emerges from the fabric of the computational network, which also naturally embeds gauge invariance. In this scheme even the quantization rule and the Planck constant can in principle be derived as emergent from the underlying causal tapestry of space-time. In this way Quantum Theory remains the only theory operating the huge computer of the universe. Is the computational paradigm only a speculative tautology (theory as simulation of reality), or does it have a scientific value? The answer will come from Occam's razor, depending on the mathematical simplicity of QCFT. Here I will just start scratching the surface of QCFT, analyzing simple field theories, including Dirac's. The number of problems and unmotivated recipes that plague QFT strongly motivates us to undertake the QCFT project, since QCFT makes all such problems manifest, and forces a re-foundation of QFT.

  8. On the ``principle of the quantumness,'' the quantumness of Relativity, and the computational grand-unification

    NASA Astrophysics Data System (ADS)

    D'Ariano, Giacomo Mauro

    2010-05-01

    I will argue that the proposal of establishing operational foundations of Quantum Theory should have top priority, and that Lucien Hardy's program on Quantum Gravity should be paralleled by an analogous program on Quantum Field Theory (QFT), which needs to be reformulated, notwithstanding its experimental success. In this paper, after reviewing recently suggested operational "principles of the quantumness," I address the problem of whether Quantum Theory and Special Relativity are unrelated theories, or instead, if the one implies the other. I show how Special Relativity can be indeed derived from causality of Quantum Theory, within the computational paradigm "the universe is a huge quantum computer," reformulating QFT as a Quantum-Computational Field Theory (QCFT). In QCFT Special Relativity emerges from the fabric of the computational network, which also naturally embeds gauge invariance. In this scheme even the quantization rule and the Planck constant can in principle be derived as emergent from the underlying causal tapestry of space-time. In this way Quantum Theory remains the only theory operating the huge computer of the universe. Is the computational paradigm only a speculative tautology (theory as simulation of reality), or does it have a scientific value? The answer will come from Occam's razor, depending on the mathematical simplicity of QCFT. Here I will just start scratching the surface of QCFT, analyzing simple field theories, including Dirac's. The number of problems and unmotivated recipes that plague QFT strongly motivates us to undertake the QCFT project, since QCFT makes all such problems manifest, and forces a re-foundation of QFT.

  9. Architectural issues in fault-tolerant, secure computing systems

    SciTech Connect

    Joseph, M.K.

    1988-01-01

    This dissertation explores several facets of the applicability of fault-tolerance techniques to secure computer design, these being: (1) how fault-tolerance techniques can be used on unsolved problems in computer security (e.g., computer viruses and denial-of-service); (2) how fault-tolerance techniques can be used to support classical computer-security mechanisms in the presence of accidental and deliberate faults; and (3) the problems involved in designing a fault-tolerant, secure computer system (e.g., how computer security can degrade along with both the computational and fault-tolerance capabilities of a computer system). The approach taken in this research is almost as important as its results. It is different from current computer-security research in that a design paradigm for fault-tolerant computer design is used. This led to an extensive fault and error classification of many typical security threats. Throughout this work, a fault-tolerance perspective is taken. However, the author did not ignore basic computer-security technology. For some problems he investigated how to support and extend basic security mechanisms (e.g., the trusted computing base), instead of trying to achieve the same result with purely fault-tolerance techniques.

  10. Computing Entanglement Entropy in Quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Melko, Roger

    2012-02-01

    The scaling of entanglement entropy in quantum many-body wavefunctions is expected to be a fruitful resource for studying quantum phases and phase transitions in condensed matter. However, until the recent development of estimators for Renyi entropy in quantum Monte Carlo (QMC), we have been in the dark about the behaviour of entanglement in all but the simplest two-dimensional models. In this talk, I will outline the measurement techniques that allow access to the Renyi entropies in several different QMC methodologies. I will then discuss recent simulation results demonstrating the richness of entanglement scaling in 2D, including: the prevalence of the ``area law''; topological entanglement entropy in a gapped spin liquid; anomalous subleading logarithmic terms due to Goldstone modes; universal scaling at critical points; and examples of emergent conformal-like scaling in several gapless wavefunctions. Finally, I will explore the idea that ``long range entanglement'' may complement the notion of ``long range order'' for quantum phases and phase transitions which lack a conventional order parameter description.

  11. Quantum computation in the analysis of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Gomez, Richard B.; Ghoshal, Debabrata; Jayanna, Anil

    2004-08-01

    Recent research on the topic of quantum computation provides us with some quantum algorithms with higher efficiency and speedup compared to their classical counterparts. In this paper, it is our intent to provide the results of our investigation of several applications of such quantum algorithms, especially Grover's search algorithm, in the analysis of hyperspectral data. We found many parallels with Grover's method in existing data processing work that makes use of classical spectral matching algorithms. Our efforts also included the study of several methods dealing with hyperspectral image analysis work where classical computation methods involving large data sets could be replaced with quantum computation methods. The crux of the problem in computation involving a hyperspectral image data cube is to convert the large amount of data in high-dimensional space into real information. Currently, using the classical model, different time-consuming methods and steps are necessary to analyze these data, including animation, the minimum noise fraction transform, the pixel purity index algorithm, N-dimensional scatter plots, and identification of endmember spectra. If a quantum model of computation involving hyperspectral image data can be developed and formalized, it is highly likely that information retrieval from hyperspectral image data cubes would be a much easier process and the final information content would be much more meaningful and timely. In this case, dimensionality would not be a curse, but a blessing.
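
    The quantum algorithm invoked here is Grover's search. The sketch below is a plain state-vector simulation of Grover's iterations for a single marked item among N = 256 entries; it illustrates the quadratic-speedup primitive and contains nothing specific to hyperspectral data.

```python
# State-vector simulation of Grover's search, the primitive the paper proposes
# for spectral matching. Generic single-marked-item search over N = 256 entries.
import numpy as np

N, marked = 256, 137
amps = np.full(N, 1 / np.sqrt(N))                 # uniform superposition over all indices

iterations = int(round(np.pi / 4 * np.sqrt(N)))   # ~ (pi/4) sqrt(N) Grover iterations
for _ in range(iterations):
    amps[marked] *= -1                            # oracle: flip the sign of the marked item
    amps = 2 * amps.mean() - amps                 # diffusion: inversion about the mean

print("iterations:", iterations)
print("success probability:", round(float(amps[marked]**2), 4))
```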

  12. Could one make a diamond-based quantum computer?

    PubMed

    Stoneham, A Marshall; Harker, A H; Morley, Gavin W

    2009-09-01

    We assess routes to a diamond-based quantum computer, where we specifically look towards scalable devices, with at least 10 linked quantum gates. Such a computer should satisfy the DiVincenzo rules and might be used at convenient temperatures. The specific examples that we examine are based on the optical control of electron spins. For some such devices, nuclear spins give additional advantages. Since there have already been demonstrations of basic initialization and readout, our emphasis is on routes to two-qubit quantum gate operations and the linking of perhaps 10-20 such gates. We analyse the dopant properties necessary, especially centres containing N and P, and give results using simple scoping calculations for the key interactions determining gate performance. Our conclusions are cautiously optimistic: it may be possible to develop a useful quantum information processor that works above cryogenic temperatures. PMID:21832328

  13. Combining dynamical decoupling with fault-tolerant quantum computation

    SciTech Connect

    Ng, Hui Khoon; Preskill, John; Lidar, Daniel A.

    2011-07-15

    We study how dynamical decoupling (DD) pulse sequences can improve the reliability of quantum computers. We prove upper bounds on the accuracy of DD-protected quantum gates and derive sufficient conditions for DD-protected gates to outperform unprotected gates. Under suitable conditions, fault-tolerant quantum circuits constructed from DD-protected gates can tolerate stronger noise and have a lower overhead cost than fault-tolerant circuits constructed from unprotected gates. Our accuracy estimates depend on the dynamics of the bath that couples to the quantum computer and can be expressed either in terms of the operator norm of the bath's Hamiltonian or in terms of the power spectrum of bath correlations; we explain in particular how the performance of recursively generated concatenated pulse sequences can be analyzed from either viewpoint. Our results apply to Hamiltonian noise models with limited spatial correlations.
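
    A toy illustration of why decoupling pulses help is given below: averaging over a random but static qubit detuning, free evolution dephases while a single echo (pi) pulse refocuses the phase completely. Only quasi-static noise is modeled; the time-correlated bath treated by the paper's bounds is not.

```python
# Toy illustration of the benefit of dynamical decoupling: a single pi pulse
# (Hahn echo) refocuses quasi-static dephasing. Only a random static detuning
# is modeled, not the correlated bath analyzed in the paper.
import numpy as np

rng = np.random.default_rng(0)
detunings = rng.normal(0.0, 1.0, 10000)   # random but static frequency offsets, one per run
T = 2.0                                   # total evolution time

# Coherence <sigma_x> of a superposition state after time T, averaged over runs.
free = np.mean(np.cos(detunings * T))                          # no pulse: phases spread -> decay
echo = np.mean(np.cos(detunings * T / 2 - detunings * T / 2))  # pi pulse at T/2 inverts the accumulated phase

print("free evolution coherence:", round(float(free), 4))   # ~ exp(-T^2/2) ~ 0.14
print("echo coherence:          ", round(float(echo), 4))   # ~ 1.0 (static noise fully refocused)
```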

  14. Universal quantum computation with hybrid spin-Majorana qubits

    NASA Astrophysics Data System (ADS)

    Hoffman, Silas; Schrade, Constantin; Klinovaja, Jelena; Loss, Daniel

    2016-07-01

    We theoretically propose a set of universal quantum gates acting on a hybrid qubit formed by coupling a quantum-dot spin qubit and Majorana fermion qubit. First, we consider a quantum dot that is tunnel coupled to two topological superconductors. The effective spin-Majorana exchange facilitates a hybrid cnot gate for which either qubit can be the control or target. The second setup is a modular scalable network of topological superconductors and quantum dots. As a result of the exchange interaction between adjacent spin qubits, a cnot gate is implemented that acts on neighboring Majorana qubits and eliminates the necessity of interqubit braiding. In both setups, the spin-Majorana exchange interaction allows for a phase gate, acting on either the spin or the Majorana qubit, and for a swap or hybrid swap gate which is sufficient for universal quantum computation without projective measurements.

  15. Scheme for Entering Binary Data Into a Quantum Computer

    NASA Technical Reports Server (NTRS)

    Williams, Colin

    2005-01-01

    A quantum algorithm provides for the encoding of an exponentially large number of classical data bits by use of a smaller (polynomially large) number of quantum bits (qubits). The development of this algorithm was prompted by the need, heretofore not satisfied, for a means of entering real-world binary data into a quantum computer. The data format provided by this algorithm is suitable for subsequent ultrafast quantum processing of the entered data. Potential applications lie in disciplines (e.g., genomics) in which one needs to search for matches between parts of very long sequences of data. For example, the algorithm could be used to encode the N-bit-long human genome in only log₂N qubits. The resulting log₂N-qubit state could then be used for subsequent quantum data processing - for example, to perform rapid comparisons of sequences.
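
    The abstract does not spell out the encoding, but amplitude encoding is one standard way to pack N = 2^n classical values into the amplitudes of an n-qubit state; the bookkeeping looks like the hedged sketch below (the normalization step is what makes the state physical, and the paper's actual scheme may differ in detail).

    ```python
    import numpy as np

    def amplitude_encode(bits):
        """Pack a classical bit string of length N = 2**n into the amplitudes of an
        n-qubit statevector (amplitude encoding; the paper's scheme may differ)."""
        N = len(bits)
        n_qubits = int(np.log2(N))
        if 2 ** n_qubits != N:
            raise ValueError("length must be a power of two")
        amps = np.asarray(bits, dtype=float)
        norm = np.linalg.norm(amps)
        if norm == 0:
            raise ValueError("all-zero data cannot be normalized")
        return amps / norm, n_qubits

    state, n = amplitude_encode([1, 0, 1, 1, 0, 0, 1, 0])   # 8 classical bits -> 3 qubits
    print(n, np.round(state ** 2, 3))   # squared amplitudes mark the 1-positions
    ```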

  16. Reference Architecture for High Dependability On-Board Computers

    NASA Astrophysics Data System (ADS)

    Silva, Nuno; Esper, Alexandre; Zandin, Johan; Barbosa, Ricardo; Monteleone, Claudio

    2014-08-01

    The industrial process in the area of on-board computers is characterized by small production series of on-board computer (hardware and software) configuration items with little recurrence at unit or set level (e.g. computer equipment unit, set of interconnected redundant units). These small production series result in a reduced amount of statistical data related to dependability, which influences the way on-board computers are specified, designed and verified. In the context of the ESA harmonization policy for the deployment of enhanced and homogeneous industrial processes in the area of avionics embedded systems and on-board computers for the space industry, this study aimed at rationalizing the initiation phase of the development or procurement of on-board computers and at improving dependability assurance. This aim was achieved by establishing generic requirements for the procurement or development of on-board computers with a focus on well-defined reliability, availability, and maintainability requirements, as well as a generic methodology for planning, predicting and assessing the dependability of on-board computer hardware and software throughout their life cycle. The study also provides guidelines for producing evidence material and arguments to support dependability assurance of on-board computer hardware and software throughout the complete lifecycle, including an assessment of feasibility aspects of the dependability assurance process and of how the use of a computer-aided environment can contribute to on-board computer dependability assurance.

  17. Quantum algorithms for spin models and simulable gate sets for quantum computation

    NASA Astrophysics Data System (ADS)

    van den Nest, M.; Dür, W.; Raussendorf, R.; Briegel, H. J.

    2009-11-01

    We present simple mappings between classical lattice models and quantum circuits, which provide a systematic formalism to obtain quantum algorithms to approximate partition functions of lattice models in certain complex-parameter regimes. We, e.g., present an efficient quantum algorithm for the six-vertex model as well as a two-dimensional Ising-type model. We show that classically simulating these (complex-parameter) spin models is as hard as simulating universal quantum computation, i.e., BQP complete (BQP denotes bounded-error quantum polynomial time). Furthermore, our mappings provide a framework to obtain efficiently simulable quantum gate sets from exactly solvable classical models. We, e.g., show that the simulability of Valiant’s match gates can be recovered by using the solvability of the free-fermion eight-vertex model.

  18. An Invitation to the Mathematics of Topological Quantum Computation

    NASA Astrophysics Data System (ADS)

    Rowell, E. C.

    2016-03-01

    Two-dimensional topological states of matter offer a route to quantum computation that would be topologically protected against the nemesis of the quantum circuit model: decoherence. Research groups in industry, government and academic institutions are pursuing this approach. We give a mathematician's perspective on some of the advantages and challenges of this model, highlighting some recent advances. We then give a short description of how we might extend the theory to three-dimensional materials.

  19. A computationally efficient particle-simulation method suited to vector-computer architectures

    SciTech Connect

    McDonald, J.D.

    1990-01-01

    Recent interest in a National Aero-Space Plane (NASP) and various Aero-assisted Space Transfer Vehicles (ASTVs) presents the need for a greater understanding of high-speed rarefied flight conditions. Particle simulation techniques such as the Direct Simulation Monte Carlo (DSMC) method are well suited to such problems, but the high cost of computation limits the application of the methods to two-dimensional or very simple three-dimensional problems. This research re-examines the algorithmic structure of existing particle simulation methods and re-structures them to allow efficient implementation on vector-oriented supercomputers. A brief overview of the DSMC method and the Cray-2 vector computer architecture is provided, and the elements of the DSMC method that inhibit substantial vectorization are identified. One such element is the collision selection algorithm. A complete reformulation of the underlying kinetic theory shows that this may be efficiently vectorized for general gas mixtures. The mechanics of collisions are vectorizable in the DSMC method, but several optimizations are suggested that greatly enhance performance. This thesis also proposes a new mechanism for the exchange of energy between vibration and other energy modes. The developed scheme makes use of quantized vibrational states and is used in place of the Borgnakke-Larsen model. Finally, a simplified representation of physical space and boundary conditions is utilized to further reduce the computational cost of the developed method. Comparisons to solutions obtained from the DSMC method for the relaxation of internal energy modes in a homogeneous gas, as well as single- and multiple-species shock wave profiles, are presented. Additionally, a large-scale simulation of the flow about the proposed Aeroassisted Flight Experiment (AFE) vehicle is included as an example of the new computational capability of the developed particle simulation method.
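
    As a heavily simplified illustration (not the thesis's reformulation), the snippet below writes a hard-sphere acceptance-rejection test for candidate collision pairs first as a scalar loop and then as a single vectorized mask, which is the style of restructuring that suits vector architectures; all the numbers are toy data.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_pairs = 100_000
    # Relative speeds of randomly chosen candidate collision pairs (toy data).
    c_rel = rng.rayleigh(scale=300.0, size=n_pairs)      # m/s
    c_rel_max = 1.2 * c_rel.max()
    u = rng.random(n_pairs)

    # Scalar loop: accept each hard-sphere pair with probability c_rel / c_rel_max.
    accepted_loop = []
    for i in range(n_pairs):
        if u[i] < c_rel[i] / c_rel_max:
            accepted_loop.append(i)

    # Vectorized form: one boolean mask replaces the per-pair loop.
    accepted_vec = np.nonzero(u < c_rel / c_rel_max)[0]

    assert np.array_equal(np.array(accepted_loop), accepted_vec)
    print(len(accepted_vec), "pairs accepted of", n_pairs)
    ```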

  20. Computer studies of multiple-quantum spin dynamics

    SciTech Connect

    Murdoch, J.B.

    1982-11-01

    The excitation and detection of multiple-quantum (MQ) transitions in Fourier transform NMR spectroscopy is an interesting problem in the quantum mechanical dynamics of spin systems as well as an important new technique for investigation of molecular structure. In particular, multiple-quantum spectroscopy can be used to simplify overly complex spectra or to separate the various interactions between a nucleus and its environment. The emphasis of this work is on computer simulation of spin-system evolution to better relate theory and experiment.

  1. Symmetry-protected topologically ordered states for universal quantum computation

    NASA Astrophysics Data System (ADS)

    Poulsen Nautrup, Hendrik; Wei, Tzu-Chieh

    Measurement-based quantum computation (MBQC) is a model for quantum information processing utilizing only local measurements on suitably entangled resource states for the implementation of quantum gates. A complete characterization for universal resource states is still missing. It has been shown that symmetry-protected topological order (SPTO) in one dimension can be exploited for the protection of certain quantum gates in MBQC. Here we investigate whether any 2D nontrivial SPTO states can serve as a resource for MBQC. In particular, we show that the nontrivial SPTO ground state of the CZX model on the square lattice by Chen et al. [Phys. Rev. B 84, 235141 (2011)] can be reduced to a 2D cluster state by local measurement, and hence is a universal resource state. Such ground states have been generalized to qudits with symmetry action described by 3-cocycles of a finite group G of order d and shown to exhibit nontrivial SPTO. We also extend these to arbitrary lattices and show that the generalized two-dimensional plaquette states on arbitrary lattices exhibit nontrivial SPTO in terms of symmetry fractionalization and that they are universal resource states for quantum computation. SPTO states therefore can provide a new playground for measurement-based quantum computation. This work was supported in part by the National Science Foundation.

  2. Nanoscale phosphorus atom arrays created using STM for the fabrication of a silicon-based quantum computer

    NASA Astrophysics Data System (ADS)

    O'Brien, J. L.; Schofield, S. R.; Simmons, M. Y.; Clark, Robert G.; Dzurak, Andrew S.; Curson, N. J.; Kane, Bruce E.; McAlpine, N. S.; Hawley, Marilyn E.; Brown, Geoffrey W.

    2001-11-01

    Quantum computers offer the promise of formidable computational power for certain tasks. Of the various possible physical implementations of such a device, silicon based architectures are attractive for their scalability and ease of integration with existing silicon technology. These designs use either the electron or nuclear spin state of single donor atoms to store quantum information. Here we describe a strategy to fabricate an array of single phosphorus atoms in silicon for the construction of such a silicon based quantum computer. We demonstrate the controlled placement of single phosphorus bearing molecules on a silicon surface. This has been achieved by patterning a hydrogen mono-layer resist with a scanning tunneling microscope (STM) tip and exposing the patterned surface to phosphine (PH3) molecules. We also describe preliminary studies into a process to incorporate these surface phosphorus atoms into the silicon crystal at the array sites.

  3. On-Board Computing Subsystem for MIRAX: Architectural and Interface Aspects

    SciTech Connect

    Santiago, Valdivino

    2006-06-09

    This paper presents architecture and interface proposals for the different types of processing units of the MIRAX on-board computing subsystem. The MIRAX satellite payload is composed of dedicated computers, two Hard X-Ray cameras and one Soft X-Ray camera (the WFC flight spare unit from the BeppoSAX satellite). The architectures for the On-Board Computing Subsystem will take into account hardware or software solutions for event preprocessing for the CdZnTe detectors. Hardware and software interface approaches will be shown, and requirements on on-board memory storage and telemetry will also be addressed.

  4. Architectural concepts and redundancy techniques in fault-tolerant computers

    NASA Technical Reports Server (NTRS)

    Rennels, D. A.

    1974-01-01

    This paper presents a description of redundancy techniques employed in the design of fault-tolerant computers, and a discussion of the effects of functional requirements, technology constraints, and cost considerations which enter into the choice of these techniques. The STAR computer, developed at the Jet Propulsion Laboratory for long-duration planetary spacecraft missions, is discussed along with several later fault-tolerant computer designs. The class of computers described in this paper employs dynamic redundancy, i.e., the machine is divided into a set of submodules, each with standby spares; a special hard core monitor unit detects and diagnoses faults, and effects automated recovery by replacing failed parts.

  5. Opportunities for X-ray Science in Future Computing Architectures

    SciTech Connect

    Foster, Ian

    2011-02-09

    The world of computing continues to evolve rapidly. In just the past 10 years, we have seen the emergence of petascale supercomputing, cloud computing that provides on-demand computing and storage with considerable economies of scale, software-as-a-service methods that permit outsourcing of complex processes, and grid computing that enables federation of resources across institutional boundaries. These trends show no sign of slowing down. The next 10 years will surely see exascale systems, new cloud offerings, and terabit networks. This talk reviews several of these developments and discusses their potential implications for x-ray science and x-ray facilities.

  6. A Survey and Evaluation of Simulators Suitable for Teaching Courses in Computer Architecture and Organization

    ERIC Educational Resources Information Center

    Nikolic, B.; Radivojevic, Z.; Djordjevic, J.; Milutinovic, V.

    2009-01-01

    Courses in Computer Architecture and Organization are regularly included in Computer Engineering curricula. These courses are usually organized in such a way that students obtain not only a purely theoretical experience, but also a practical understanding of the topics lectured. This practical work is usually done in a laboratory using simulators…

  7. From Archi Torture to Architecture: Undergraduate Students Design and Implement Computers Using the Multimedia Logic Emulator

    ERIC Educational Resources Information Center

    Stanley, Timothy D.; Wong, Lap Kei; Prigmore, Daniel; Benson, Justin; Fishler, Nathan; Fife, Leslie; Colton, Don

    2007-01-01

    Students learn better when they both hear and do. In computer architecture courses "doing" can be difficult in small schools without hardware laboratories hosted by computer engineering, electrical engineering, or similar departments. Software solutions exist. Our success with George Mills' Multimedia Logic (MML) is the focus of this paper. MML…

  8. A Project-Based Learning Approach to Programmable Logic Design and Computer Architecture

    ERIC Educational Resources Information Center

    Kellett, C. M.

    2012-01-01

    This paper describes a course in programmable logic design and computer architecture as it is taught at the University of Newcastle, Australia. The course is designed around a major design project and has two supplemental assessment tasks that are also described. The context of the Computer Engineering degree program within which the course is…

  9. Using graph states for quantum computation and communication

    NASA Astrophysics Data System (ADS)

    Goyal, Kovid

    In this work, we describe a method to achieve fault-tolerant measurement-based quantum computation in two and three dimensions. The proposed scheme has a threshold of 7.8×10⁻³ and poly-logarithmic overhead scaling. The overhead scaling below the threshold is also studied. The scheme uses a combination of topological error correction and magic state distillation to construct a universal quantum computer on a qubit lattice. The chapters on measurement-based quantum computation are written in review form with extensive discussion and illustrative examples. In addition, we describe and analyze a family of entanglement purification protocols that provide a flexible trade-off between overhead, threshold and output quality. The protocols are studied analytically, with closed-form expressions for their threshold.

  10. Degree of quantum correlation required to speed up a computation

    NASA Astrophysics Data System (ADS)

    Kay, Alastair

    2015-12-01

    The one-clean-qubit model of quantum computation (DQC1) efficiently implements a computational task that is not known to have a classical alternative. During the computation, there is never more than a small but finite amount of entanglement present, and it is typically vanishingly small in the system size. In this paper, we demonstrate that there is nothing unexpected hidden within the DQC1 model—Grover's search, when acting on a mixed state, provably exhibits a speedup over classical, with guarantees as to the presence of only vanishingly small amounts of quantum correlations (entanglement and quantum discord)—while arguing that this is not an artifact of the oracle-based construction. We also present some important refinements in the evaluation of how much entanglement may be present in the DQC1 and how the typical entanglement of the system must be evaluated.

  11. Time-Dependent Density Functional Theory for Universal Quantum Computation

    NASA Astrophysics Data System (ADS)

    Tempel, David

    2015-03-01

    In this talk, I will discuss how the theorems of TDDFT can be applied to a class of qubit Hamiltonians that are universal for quantum computation. The theorems of TDDFT applied to universal Hamiltonians imply that single-qubit expectation values can be used as the basic variables in quantum computation and information theory, rather than wavefunctions. From a practical standpoint this opens the possibility of approximating observables of interest in quantum computations directly in terms of single-qubit quantities (i.e. as density functionals). Additionally, I will discuss how TDDFT provides an exact prescription for simulating universal Hamiltonians with other universal Hamiltonians that have different, and possibly easier-to-realize two-qubit interactions.

  12. Quantum memristors

    NASA Astrophysics Data System (ADS)

    Pfeiffer, P.; Egusquiza, I. L.; di Ventra, M.; Sanz, M.; Solano, E.

    2016-07-01

    Technology based on memristors, resistors with memory whose resistance depends on the history of the crossing charges, has lately enhanced the classical paradigm of computation with neuromorphic architectures. However, in contrast to the known quantized models of passive circuit elements, such as inductors, capacitors or resistors, the design and realization of a quantum memristor is still missing. Here, we introduce the concept of a quantum memristor as a quantum dissipative device, whose decoherence mechanism is controlled by a continuous-measurement feedback scheme, which accounts for the memory. Indeed, we provide numerical simulations showing that memory effects actually persist in the quantum regime. Our quantization method, specifically designed for superconducting circuits, may be extended to other quantum platforms, allowing for memristor-type constructions in different quantum technologies. The proposed quantum memristor is then a building block for neuromorphic quantum computation and quantum simulations of non-Markovian systems.

  13. Quantum memristors.

    PubMed

    Pfeiffer, P; Egusquiza, I L; Di Ventra, M; Sanz, M; Solano, E

    2016-01-01

    Technology based on memristors, resistors with memory whose resistance depends on the history of the crossing charges, has lately enhanced the classical paradigm of computation with neuromorphic architectures. However, in contrast to the known quantized models of passive circuit elements, such as inductors, capacitors or resistors, the design and realization of a quantum memristor is still missing. Here, we introduce the concept of a quantum memristor as a quantum dissipative device, whose decoherence mechanism is controlled by a continuous-measurement feedback scheme, which accounts for the memory. Indeed, we provide numerical simulations showing that memory effects actually persist in the quantum regime. Our quantization method, specifically designed for superconducting circuits, may be extended to other quantum platforms, allowing for memristor-type constructions in different quantum technologies. The proposed quantum memristor is then a building block for neuromorphic quantum computation and quantum simulations of non-Markovian systems. PMID:27381511

  14. Quantum memristors

    PubMed Central

    Pfeiffer, P.; Egusquiza, I. L.; Di Ventra, M.; Sanz, M.; Solano, E.

    2016-01-01

    Technology based on memristors, resistors with memory whose resistance depends on the history of the crossing charges, has lately enhanced the classical paradigm of computation with neuromorphic architectures. However, in contrast to the known quantized models of passive circuit elements, such as inductors, capacitors or resistors, the design and realization of a quantum memristor is still missing. Here, we introduce the concept of a quantum memristor as a quantum dissipative device, whose decoherence mechanism is controlled by a continuous-measurement feedback scheme, which accounts for the memory. Indeed, we provide numerical simulations showing that memory effects actually persist in the quantum regime. Our quantization method, specifically designed for superconducting circuits, may be extended to other quantum platforms, allowing for memristor-type constructions in different quantum technologies. The proposed quantum memristor is then a building block for neuromorphic quantum computation and quantum simulations of non-Markovian systems. PMID:27381511

  15. Utilizing photon number parity measurements to demonstrate quantum computation with cat-states in a cavity

    NASA Astrophysics Data System (ADS)

    Petrenko, A.; Ofek, N.; Vlastakis, B.; Sun, L.; Leghtas, Z.; Heeres, R.; Sliwa, K. M.; Mirrahimi, M.; Jiang, L.; Devoret, M. H.; Schoelkopf, R. J.

    2015-03-01

    Realizing a working quantum computer requires overcoming the many challenges that come with coupling large numbers of qubits to perform logical operations. These include improving coherence times, achieving high gate fidelities, and correcting for the inevitable errors that will occur throughout the duration of an algorithm. While impressive progress has been made in all of these areas, the difficulty of combining these ingredients to demonstrate an error-protected logical qubit, comprised of many physical qubits, still remains formidable. With its large Hilbert space, superior coherence properties, and single dominant error channel (single photon loss), a superconducting 3D resonator acting as a resource for a quantum memory offers a hardware-efficient alternative to multi-qubit codes [Leghtas et al., PRL 2013]. Here we build upon recent work on cat-state encoding [Vlastakis et al., Science 2013] and photon-parity jumps [Sun et al., 2014] by exploring the effects of sequential measurements on a cavity state. Employing a transmon qubit dispersively coupled to two superconducting resonators in a cQED architecture, we explore further the application of parity measurements to characterizing such a hybrid qubit/cat-state architecture. In so doing, we demonstrate the promise of integrating cat states as central constituents of future quantum codes.

  16. Closed timelike curves in measurement-based quantum computation

    SciTech Connect

    Dias da Silva, Raphael; Galvao, Ernesto F.; Kashefi, Elham

    2011-01-15

    Many results have been recently obtained regarding the power of hypothetical closed timelike curves (CTCs) in quantum computation. Here we show that the one-way model of measurement-based quantum computation encompasses in a natural way the CTC model proposed by Bennett, Schumacher, and Svetlichny. We identify a class of CTCs in this model that can be simulated deterministically and point to a fundamental limitation of Deutsch's CTC model which leads to predictions conflicting with those of the one-way model.

  17. Efficient computations of quantum canonical Gibbs state in phase space

    NASA Astrophysics Data System (ADS)

    Bondar, Denys I.; Campos, Andre G.; Cabrera, Renan; Rabitz, Herschel A.

    2016-06-01

    The Gibbs canonical state, as a maximum entropy density matrix, represents a quantum system in equilibrium with a thermostat. This state plays an essential role in thermodynamics and serves as the initial condition for nonequilibrium dynamical simulations. We solve a long-standing problem of computing the Gibbs state Wigner function with nearly machine accuracy by solving the Bloch equation directly in phase space. Furthermore, algorithms are provided that yield high-quality Wigner distributions for pure stationary states as well as for Thomas-Fermi and Bose-Einstein distributions. The developed numerical methods furnish a long-sought efficient computation framework for nonequilibrium quantum simulations directly in the Wigner representation.
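
    For orientation, the operator form of the Bloch equation that the paper propagates (in its Wigner phase-space representation) is the standard imaginary-time relation

    ```latex
    \frac{\partial \hat{\rho}(\beta)}{\partial \beta}
      = -\tfrac{1}{2}\,\bigl\{\hat{H},\,\hat{\rho}(\beta)\bigr\}
      = -\tfrac{1}{2}\bigl(\hat{H}\hat{\rho}(\beta) + \hat{\rho}(\beta)\hat{H}\bigr),
    \qquad \hat{\rho}(0) = \hat{1},
    ```

    whose solution is ρ(β) = e^(−βH); normalizing by Z(β) = Tr e^(−βH) gives the Gibbs state. The Wigner transform of this equation is what the algorithms integrate directly in phase space.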

  18. A learnable parallel processing architecture towards unity of memory and computing

    PubMed Central

    Li, H.; Gao, B.; Chen, Z.; Zhao, Y.; Huang, P.; Ye, H.; Liu, L.; Liu, X.; Kang, J.

    2015-01-01

    Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need of efficient information processing for the data-driven applications such as big data and Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named “iMemComp”, where memory and logic are unified with single-type devices. Leveraging nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped “iMemComp” with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such architecture eliminates the energy-hungry data movement in von Neumann computers. Compared with contemporary silicon technology, adder circuits based on “iMemComp” can improve the speed by 76.8% and the power dissipation by 60.3%, together with a 700 times aggressive reduction in the circuit area. PMID:26271243

  19. A learnable parallel processing architecture towards unity of memory and computing

    NASA Astrophysics Data System (ADS)

    Li, H.; Gao, B.; Chen, Z.; Zhao, Y.; Huang, P.; Ye, H.; Liu, L.; Liu, X.; Kang, J.

    2015-08-01

    Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need of efficient information processing for the data-driven applications such as big data and Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named “iMemComp”, where memory and logic are unified with single-type devices. Leveraging nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped “iMemComp” with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such architecture eliminates the energy-hungry data movement in von Neumann computers. Compared with contemporary silicon technology, adder circuits based on “iMemComp” can improve the speed by 76.8% and the power dissipation by 60.3%, together with a 700 times aggressive reduction in the circuit area.

  20. A learnable parallel processing architecture towards unity of memory and computing.

    PubMed

    Li, H; Gao, B; Chen, Z; Zhao, Y; Huang, P; Ye, H; Liu, L; Liu, X; Kang, J

    2015-01-01

    Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need of efficient information processing for the data-driven applications such as big data and Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named "iMemComp", where memory and logic are unified with single-type devices. Leveraging nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped "iMemComp" with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such architecture eliminates the energy-hungry data movement in von Neumann computers. Compared with contemporary silicon technology, adder circuits based on "iMemComp" can improve the speed by 76.8% and the power dissipation by 60.3%, together with a 700 times aggressive reduction in the circuit area. PMID:26271243

  1. Quantum annealing: The fastest route to quantum computation?

    NASA Astrophysics Data System (ADS)

    Smorra, C.; Blaum, K.; Bojtar, L.; Borchert, M.; Franke, K. A.; Higuchi, T.; Leefer, N.; Nagahama, H.; Matsuda, Y.; Mooser, A.; Niemann, M.; Ospelkaus, C.; Quint, W.; Schneider, G.; Sellner, S.; Tanaka, T.; Van Gorp, S.; Walz, J.; Yamazaki, Y.; Ulmer, S.

    2015-11-01

    The Baryon Antibaryon Symmetry Experiment (BASE) aims at performing a stringent test of the combined charge parity and time reversal (CPT) symmetry by comparing the magnetic moments of the proton and the antiproton with high precision. Using single particles in a Penning trap, the proton/antiproton g-factors, i.e. the magnetic moment in units of the nuclear magneton, are determined by measuring the respective ratio of the spin-precession frequency to the cyclotron frequency. The spin-precession frequency is measured by non-destructive detection of spin quantum transitions using the continuous Stern-Gerlach effect, and the cyclotron frequency is determined from the particle's motional eigenfrequencies in the Penning trap using the invariance theorem. By application of the double Penning-trap method we expect that a fractional precision of δg/g ≈ 10⁻⁹ can be achieved in our measurements. The successful application of this method to the antiproton will constitute a factor-of-1000 improvement in the fractional precision of its magnetic moment. The BASE collaboration has constructed and commissioned a new experiment at the Antiproton Decelerator (AD) of CERN. This article describes and summarizes the physical and technical aspects of this new experiment.

  2. Distributed Computing Architecture for Image-Based Wavefront Sensing and 2 D FFTs

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey S.; Dean, Bruce H.; Haghani, Shadan

    2006-01-01

    Image-based wavefront sensing (WFS) provides significant advantages over interferometric wavefront sensors, such as optical design simplicity and stability. However, the image-based approach is computationally intensive, and therefore specialized high-performance computing architectures are required in applications utilizing it. The development and testing of these high-performance computing architectures are essential to such missions as the James Webb Space Telescope (JWST), Terrestrial Planet Finder-Coronagraph (TPF-C and CorSpec), and Spherical Primary Optical Telescope (SPOT). These specialized computing architectures must perform numerous two-dimensional Fourier transforms, which necessitate an all-to-all communication when implemented on a distributed computational architecture. Several solutions for distributed computing are presented, with an emphasis on a 64-node cluster of DSPs, multiple DSP FPGAs, and an application of low-diameter graph theory. Timing results and performance analysis will be presented. The solutions offered could be applied to other all-to-all communication problems and computationally complex scientific problems.
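
    To make the all-to-all requirement concrete: a 2-D FFT factors into row-wise 1-D FFTs, a transpose, and a second round of row-wise FFTs, and when the rows are spread across processors the transpose is exactly the all-to-all exchange. A minimal single-node sketch of that factorization (illustrative only, not the flight or cluster code) is:

    ```python
    import numpy as np

    def fft2_by_rows(x):
        """2-D FFT as two passes of 1-D row FFTs separated by a transpose.
        On a distributed machine the transpose is the all-to-all exchange."""
        y = np.fft.fft(x, axis=1)       # 1-D FFTs along local rows
        y = y.T                         # transpose: the all-to-all communication step
        y = np.fft.fft(y, axis=1)       # 1-D FFTs along the (former) columns
        return y.T                      # transpose back to the original layout

    img = np.random.rand(256, 256)
    assert np.allclose(fft2_by_rows(img), np.fft.fft2(img))
    ```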

  3. Arranging computer architectures to create higher-performance controllers

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.

    1988-01-01

    Techniques for integrating microprocessors, array processors, and other intelligent devices in control systems are reviewed, with an emphasis on the (re)arrangement of components to form distributed or parallel processing systems. Consideration is given to the selection of the host microprocessor, increasing the power and/or memory capacity of the host, multitasking software for the host, array processors to reduce computation time, the allocation of real-time and non-real-time events to different computer subsystems, intelligent devices to share the computational burden for real-time events, and intelligent interfaces to increase communication speeds. The case of a helicopter vibration-suppression and stabilization controller is analyzed as an example, and significant improvements in computation and throughput rates are demonstrated.

  4. Computational nuclear quantum many-body problem: The UNEDF project

    SciTech Connect

    Fann, George I

    2013-01-01

    The UNEDF project was a large-scale collaborative effort that applied high-performance computing to the nuclear quantum many-body problem. The primary focus of the project was on constructing, validating, and applying an optimized nuclear energy density functional, which entailed a wide range of pioneering developments in microscopic nuclear structure and reactions, algorithms, high-performance computing, and uncertainty quantification. UNEDF demonstrated that close associations among nuclear physicists, mathematicians, and computer scientists can lead to novel physics outcomes built on algorithmic innovations and computational developments. This review showcases a wide range of UNEDF science results to illustrate this interplay.

  5. Bound on quantum computation time: Quantum error correction in a critical environment

    SciTech Connect

    Novais, E.; Mucciolo, Eduardo R.; Baranger, Harold U.

    2010-08-15

    We obtain an upper bound on the time available for quantum computation for a given quantum computer and decohering environment with quantum error correction implemented. First, we derive an explicit quantum evolution operator for the logical qubits and show that it has the same form as that for the physical qubits but with a reduced coupling strength to the environment. Using this evolution operator, we find the trace distance between the real and ideal states of the logical qubits in two cases. For a super-Ohmic bath, the trace distance saturates, while for Ohmic or sub-Ohmic baths, there is a finite time before the trace distance exceeds a value set by the user.

  6. Investigations in quantum computing: Causality and graph isomorphism

    NASA Astrophysics Data System (ADS)

    Beckman, David Eugene

    In this thesis I explore two different types of limits on the time complexity of quantum computation---that is, limits on how much time is required to perform a given class of quantum operations on a quantum system. Upper limits can be found by explicit construction; I explore this approach for the problem of determining whether two graphs are isomorphic. Finding lower limits, on the other hand, usually requires appeal to some fundamental principle of the operation under consideration; I use this approach to derive lower limits placed by the requirements of relativistic causality on the time required for implementation of some nonlocal quantum operations. In some situations these limits are attainable, but for other physical spacetime geometries we exhibit classes of operations which do not violate relativistic causality but which are nevertheless not implementable.

  7. Computer-Aided Design of Organic Host Architectures for Selective Chemosensors

    SciTech Connect

    Hay, Benjamin; Bryantsev, Vyacheslav S.

    2009-01-01

    Selective organic hosts provide the foundation for the development of many types of sensors. The deliberate design of host molecules with predetermined selectivity, however, remains a challenge in supramolecular chemistry. To address this issue we have developed a de novo structure-based design approach for the unbiased construction of complementary host architectures. This chapter summarizes recent progress including improvements on a computer software program, HostDesigner, specifically tailored to discover host architectures for small guest molecules. HostDesigner is capable of generating and evaluating millions of candidate structures in minutes on a desktop personal computer, allowing a user to rapidly identify three-dimensional architectures that are structurally organized for binding a targeted guest species. The efficacy of this computational methodology is illustrated with a search for cation hosts containing aliphatic ether oxygen groups and anion hosts containing urea groups.

  8. Resource Efficient Hardware Architecture for Fast Computation of Running Max/Min Filters

    PubMed Central

    Torres-Huitzil, Cesar

    2013-01-01

    Running max/min filters on rectangular kernels are widely used in many digital signal and image processing applications. Filtering with a k × k kernel requires k² − 1 comparisons per sample for a direct implementation; thus, performance scales expensively with the kernel size k. Faster computations can be achieved by kernel decomposition and using constant-time one-dimensional algorithms on custom hardware. This paper presents a hardware architecture for real-time computation of running max/min filters based on the van Herk/Gil-Werman (HGW) algorithm. The proposed architecture design uses less computation and memory resources than previously reported architectures when targeted to Field Programmable Gate Array (FPGA) devices. Implementation results show that the architecture is able to compute max/min filters, on 1024 × 1024 images with up to 255 × 255 kernels, in around 8.4 milliseconds, 120 frames per second, at a clock frequency of 250 MHz. The implementation is highly scalable for the kernel size with good performance/area tradeoff suitable for embedded applications. The applicability of the architecture is shown for local adaptive image thresholding. PMID:24288456
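
    For readers unfamiliar with the HGW trick, a 1-D reference version is sketched below: a prefix-max and a suffix-max scan within blocks of length k reduce the running max to a constant number of comparisons per sample, independent of k. This is only the textbook algorithm the hardware builds on, not the FPGA architecture itself.

    ```python
    import numpy as np

    def running_max_hgw(f, k):
        """1-D running max over windows f[i : i+k], van Herk/Gil-Werman style:
        a prefix-max and a suffix-max scan inside blocks of length k, then one
        max per output sample, i.e. O(1) comparisons per sample regardless of k."""
        n = len(f)
        pad = (-n) % k
        f = np.concatenate([f, np.full(pad, -np.inf)])   # pad to a block multiple
        blocks = f.reshape(-1, k)
        g = np.maximum.accumulate(blocks, axis=1).ravel()                    # prefix max
        h = np.maximum.accumulate(blocks[:, ::-1], axis=1)[:, ::-1].ravel()  # suffix max
        return np.maximum(h[: n - k + 1], g[k - 1 : n])

    x = np.random.randint(0, 100, 20)
    k = 5
    naive = np.array([x[i:i + k].max() for i in range(len(x) - k + 1)])
    assert np.array_equal(running_max_hgw(x, k), naive)
    ```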

  9. Resource efficient hardware architecture for fast computation of running max/min filters.

    PubMed

    Torres-Huitzil, Cesar

    2013-01-01

    Running max/min filters on rectangular kernels are widely used in many digital signal and image processing applications. Filtering with a k × k kernel requires k² − 1 comparisons per sample for a direct implementation; thus, performance scales expensively with the kernel size k. Faster computations can be achieved by kernel decomposition and using constant-time one-dimensional algorithms on custom hardware. This paper presents a hardware architecture for real-time computation of running max/min filters based on the van Herk/Gil-Werman (HGW) algorithm. The proposed architecture design uses less computation and memory resources than previously reported architectures when targeted to Field Programmable Gate Array (FPGA) devices. Implementation results show that the architecture is able to compute max/min filters, on 1024 × 1024 images with up to 255 × 255 kernels, in around 8.4 milliseconds, 120 frames per second, at a clock frequency of 250 MHz. The implementation is highly scalable for the kernel size with good performance/area tradeoff suitable for embedded applications. The applicability of the architecture is shown for local adaptive image thresholding. PMID:24288456

  10. Verifiable Measurement-Only Blind Quantum Computing with Stabilizer Testing

    NASA Astrophysics Data System (ADS)

    Hayashi, Masahito; Morimae, Tomoyuki

    2015-11-01

    We introduce a simple protocol for verifiable measurement-only blind quantum computing. Alice, a client, can perform only single-qubit measurements, whereas Bob, a server, can generate and store entangled many-qubit states. Bob generates copies of a graph state, which is a universal resource state for measurement-based quantum computing, and sends Alice each qubit of them one by one. Alice adaptively measures each qubit according to her program. If Bob is honest, he generates the correct graph state, and, therefore, Alice can obtain the correct computation result. Regarding the security, whatever Bob does, Bob cannot get any information about Alice's computation because of the no-signaling principle. Furthermore, malicious Bob does not necessarily send the copies of the correct graph state, but Alice can check the correctness of Bob's state by directly verifying the stabilizers of some copies.

  11. Scalable neutral atom quantum computing with MEMS micromirrors

    NASA Astrophysics Data System (ADS)

    Knoernschild, Caleb; Lu, Felix; Ryu, Hoon; Feng, Michael; Kim, Jungsang

    2010-03-01

    In order to realize a useful atom-based quantum computer, a means to efficiently distribute critical laser resources to multiple trap locations is essential. Optical micro-electromechanical systems (MEMS) can provide the scalability, flexibility, and stability needed to help bridge the gap between fundamental demonstrations of quantum gates and large-scale quantum computing with multiple qubits. Using controllable, broadband micromirrors, an arbitrary atom in a 1-, 2-, or 3-dimensional optical lattice can be addressed with a single laser source. It is straightforward to scale this base system to address an arbitrary set of n atoms simultaneously using n laser sources. We explore on-demand addressability of individual atoms trapped in a 1D lattice, as well as investigate the effect the micromirrors have on the laser beam quality and phase stability.

  12. Reducing the overhead for quantum computation when noise is biased

    NASA Astrophysics Data System (ADS)

    Webster, Paul; Bartlett, Stephen D.; Poulin, David

    2015-12-01

    We analyze a model for fault-tolerant quantum computation with low overhead suitable for situations where the noise is biased. The basis for this scheme is a gadget for the fault-tolerant preparation of magic states that enable universal fault-tolerant quantum computation using only Clifford gates that preserve the noise bias. We analyze the distillation of |T⟩-type magic states using this gadget at the physical level, followed by concatenation with the 15-qubit quantum Reed-Muller code, and compare our results with standard constructions. In the regime where the noise bias (the rate of Pauli Z errors relative to other single-qubit errors) is greater than a factor of 10, our scheme has lower overhead across a broad range of relevant noise rates.

  13. Indications for quantum computation requirements from comparative brain analysis

    NASA Astrophysics Data System (ADS)

    Bernroider, Gustav; Baer, Wolfgang

    2010-04-01

    Whether or not neuronal signal properties can engage 'non-trivial', i.e. functionally significant, quantum properties is the subject of an ongoing debate. Here we provide evidence that quantum coherence dynamics can play a functional role in the ion conduction mechanism, with consequences for the shape and associative character of classical membrane signals. In particular, these new perspectives predict that a specific neuronal topology (e.g. the connectivity pattern of cortical columns in the primate brain) is less important and not really required to explain abilities in perception and sensory-motor integration. Instead, this evidence is suggestive of a decisive role for the number and functional segregation of ion channel proteins that can be engaged in a particular neuronal constellation. We provide evidence from comparative brain studies and estimates of the computational capacity behind visual flight functions that is suggestive of a possible role of quantum computation in biological systems.

  14. Optical quantum computation with cavities in the intermediate coupling region

    NASA Astrophysics Data System (ADS)

    Mei, F.; Yu, Y. F.; Feng, X. L.; Zhu, S. L.; Zhang, Z. M.

    2010-07-01

    Large-scale quantum computation is currently a hot area of research. The scalable quantum computation scheme with cavities originally proposed by Duan and Kimble (Phys. Rev. Lett., 92 (2004) 127902) is further developed here to operate in the intermediate coupling region, which not only greatly relaxes experimental demands on the Purcell factor, but also eliminates the need to consider internal trade-off between cavity quality and efficiency. In our scheme, by controlling the reflectivity of the input single-photon pulse in the cavity, we can realize local atom-photon and nonlocal atom-atom controlled phase-flip (CPF) gates. We also introduce a theoretical model to analyze the performance of our scheme under practical noise. Furthermore, we show that the nonlocal CPF gate can be used to realize a quantum repeater.

  15. Optimized entanglement purification schemes for modular based quantum computers

    NASA Astrophysics Data System (ADS)

    Krastanov, Stefan; Jiang, Liang

    The choice of entanglement purification scheme strongly depends on the fidelities of quantum gates and measurements, as well as the imperfection of the initial entanglement. For instance, the purification scheme optimal at low gate fidelities may not necessarily be the optimal scheme at higher gate fidelities. We employ an evolutionary algorithm that efficiently optimizes the entanglement purification circuit for given system parameters. Such optimized purification schemes will boost the performance of entanglement purification, and consequently enhance the fidelity of teleportation-based non-local coupling gates, which are an indispensable building block for modular-based quantum computers. In addition, we study how these optimized purification schemes affect the resource overhead caused by error correction in modular-based quantum computers.

  16. A Survey of Architectural Techniques for Near-Threshold Computing

    DOE PAGESBeta

    Mittal, Sparsh

    2015-12-28

    Energy efficiency has now become the primary obstacle in scaling the performance of all classes of computing systems. Low-voltage computing and, specifically, near-threshold voltage computing (NTC), which involves operating the transistor very close to and yet above its threshold voltage, holds the promise of providing a many-fold improvement in energy efficiency. However, use of NTC also presents several challenges, such as increased parametric variation, failure rate and performance loss. Our paper surveys several recent techniques which aim to offset these challenges and fully leverage the potential of NTC. By classifying these techniques along several dimensions, we also highlight their similarities and differences. Ultimately, we hope that this paper will provide insights into state-of-the-art NTC techniques to researchers and system designers and inspire further research in this field.

  17. A Survey of Architectural Techniques for Near-Threshold Computing

    SciTech Connect

    Mittal, Sparsh

    2015-12-28

    Energy efficiency has now become the primary obstacle in scaling the performance of all classes of computing systems. Low-voltage computing and, specifically, near-threshold voltage computing (NTC), which involves operating the transistor very close to and yet above its threshold voltage, holds the promise of providing a many-fold improvement in energy efficiency. However, use of NTC also presents several challenges, such as increased parametric variation, failure rate and performance loss. Our paper surveys several recent techniques which aim to offset these challenges and fully leverage the potential of NTC. By classifying these techniques along several dimensions, we also highlight their similarities and differences. Ultimately, we hope that this paper will provide insights into state-of-the-art NTC techniques to researchers and system designers and inspire further research in this field.

  18. Final Report: Super Instruction Architecture for Scalable Parallel Computations

    SciTech Connect

    Sanders, Beverly Ann; Bartlett, Rodney; Deumens, Erik

    2013-12-23

    The most advanced methods for reliable and accurate computation of the electronic structure of molecular and nano systems are the coupled-cluster techniques. These high-accuracy methods help us to understand, for example, how biological enzymes operate and contribute to the design of new organic explosives. The ACES III software provides a modern, high-performance implementation of these methods optimized for high performance parallel computer systems, ranging from small clusters typical in individual research groups, through larger clusters available in campus and regional computer centers, all the way to high-end petascale systems at national labs, including exploiting GPUs if available. This project enhanced the ACESIII software package and used it to study interesting scientific problems.

  19. A language comparison for scientific computing on MIMD architectures

    NASA Technical Reports Server (NTRS)

    Jones, Mark T.; Patrick, Merrell L.; Voigt, Robert G.

    1989-01-01

    Choleski's method for solving banded symmetric, positive definite systems is implemented on a multiprocessor computer using three FORTRAN-based parallel programming languages: the Force, PISCES and Concurrent FORTRAN. The capabilities of the languages for expressing parallelism and their user-friendliness are discussed, including readability of the code, debugging assistance offered, and expressiveness of the languages. The performance of the different implementations is compared. It is argued that PISCES, using the Force for medium-grained parallelism, is the appropriate choice for programming Choleski's method on the multiprocessor computer, Flex/32.
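
    For context, the kernel being expressed in each language is Choleski (Cholesky) factorization followed by triangular solves; a compact serial reference version (dense rather than banded, and with none of the paper's parallel constructs) might read:

    ```python
    import numpy as np

    def cholesky_solve(A, b):
        """Solve A x = b for a symmetric positive definite A via L L^T = A."""
        n = len(A)
        L = np.zeros_like(A, dtype=float)
        for j in range(n):                       # factorization, column by column
            L[j, j] = np.sqrt(A[j, j] - L[j, :j] @ L[j, :j])
            for i in range(j + 1, n):
                L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
        y = np.zeros(n)                          # forward substitution: L y = b
        for i in range(n):
            y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
        x = np.zeros(n)                          # back substitution: L^T x = y
        for i in reversed(range(n)):
            x[i] = (y[i] - L[i + 1:, i] @ x[i + 1:]) / L[i, i]
        return x

    M = np.random.rand(6, 6)
    A = M @ M.T + 6 * np.eye(6)                  # make the test matrix SPD
    b = np.random.rand(6)
    assert np.allclose(cholesky_solve(A, b), np.linalg.solve(A, b))
    ```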

  20. Adapting the traveling salesman problem to an adiabatic quantum computer

    NASA Astrophysics Data System (ADS)

    Warren, Richard H.

    2013-04-01

    We show how to guide a quantum computer to select an optimal tour for the traveling salesman. This is significant because it opens a rapid solution method for the wide range of applications of the traveling salesman problem, which include vehicle routing, job sequencing and data clustering.
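
    The abstract does not reproduce the mapping itself; one standard way to hand the traveling salesman problem to an adiabatic/annealing machine is the position-based QUBO sketched below, where the binary variable x[c, t] is 1 if city c is visited at tour step t and penalty terms enforce a valid tour. The penalty weight and the tiny distance matrix are illustrative choices, not values from the paper.

    ```python
    import itertools
    import numpy as np

    def tsp_qubo(dist, penalty):
        """Standard position-based QUBO for the traveling salesman problem.
        Variable x[c, t] = 1 iff city c occupies tour position t.
        Returns {((c1, t1), (c2, t2)): coefficient}."""
        n = len(dist)
        Q = {}

        def add(u, v, w):
            key = (u, v) if u <= v else (v, u)
            Q[key] = Q.get(key, 0.0) + w

        # Each city appears in exactly one position: penalty*(sum_t x[c,t] - 1)^2.
        for c in range(n):
            for t in range(n):
                add((c, t), (c, t), -penalty)
            for t1, t2 in itertools.combinations(range(n), 2):
                add((c, t1), (c, t2), 2 * penalty)
        # Each position holds exactly one city: penalty*(sum_c x[c,t] - 1)^2.
        for t in range(n):
            for c in range(n):
                add((c, t), (c, t), -penalty)
            for c1, c2 in itertools.combinations(range(n), 2):
                add((c1, t), (c2, t), 2 * penalty)
        # Tour length between consecutive positions (cyclic tour).
        for t in range(n):
            for c1 in range(n):
                for c2 in range(n):
                    if c1 != c2:
                        add((c1, t), (c2, (t + 1) % n), float(dist[c1][c2]))
        return Q

    dist = np.array([[0, 2, 9], [2, 0, 6], [9, 6, 0]])
    Q = tsp_qubo(dist, penalty=3.0 * dist.max())
    print(len(Q), "QUBO terms")   # ready for an annealer or an exhaustive check
    ```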

  1. A multitasking finite state architecture for computer control of an electric powertrain

    SciTech Connect

    Burba, J.C.

    1984-01-01

    Finite state techniques provide a common design language between the control engineer and the computer engineer for event driven computer control systems. They simplify communication and provide a highly maintainable control system understandable by both. This paper describes the development of a control system for an electric vehicle powertrain utilizing finite state concepts. The basics of finite state automata are provided as a framework to discuss a unique multitasking software architecture developed for this application. The architecture employs conventional time-sliced techniques with task scheduling controlled by a finite state machine representation of the control strategy of the powertrain. The complexities of excitation variable sampling in this environment are also considered.
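
    As a language-neutral toy of the idea (the control strategy is a finite state machine, and the current state selects which time-sliced tasks run), the sketch below uses invented states, events, and task names; it is not the powertrain software described in the paper.

    ```python
    # Toy finite-state task scheduler: the control strategy is a state machine,
    # and the current state decides which periodic tasks the time-sliced
    # executive runs.  All states, events, and tasks are illustrative only.

    TRANSITIONS = {
        ("OFF",       "key_on"):    "STANDBY",
        ("STANDBY",   "drive_cmd"): "DRIVE",
        ("DRIVE",     "fault"):     "LIMP_HOME",
        ("DRIVE",     "key_off"):   "OFF",
        ("LIMP_HOME", "key_off"):   "OFF",
    }

    TASKS_BY_STATE = {
        "OFF":       [],
        "STANDBY":   ["sample_sensors"],
        "DRIVE":     ["sample_sensors", "torque_control", "thermal_monitor"],
        "LIMP_HOME": ["sample_sensors", "torque_control_derated"],
    }

    def run(events):
        state = "OFF"
        for event in events:
            state = TRANSITIONS.get((state, event), state)   # ignore invalid events
            for task in TASKS_BY_STATE[state]:               # one time slice
                print(f"[{state}] run {task}")

    run(["key_on", "drive_cmd", "fault", "key_off"])
    ```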

  2. A comparison of computer architectures for the NASA demonstration advanced avionics system

    NASA Technical Reports Server (NTRS)

    Seacord, C. L.; Bailey, D. G.; Larson, J. C.

    1979-01-01

    The paper compares computer architectures for the NASA demonstration advanced avionics system. Two computer architectures are described with an unusual approach to fault tolerance: a single spare processor can correct for faults in any of the distributed processors by taking on the role of a failed module. It was shown that the system must be viewed from a functional point of view to properly apply redundancy and achieve fault tolerance and ultra-reliability. Data are presented on complexity and mission failure probability which show that the revised version offers equivalent mission reliability at lower cost as measured by hardware and software complexity.

  3. Evaluating charge noise acting on semiconductor quantum dots in the circuit quantum electrodynamics architecture

    SciTech Connect

    Basset, J.; Stockklauser, A.; Jarausch, D.-D.; Frey, T.; Reichl, C.; Wegscheider, W.; Wallraff, A.; Ensslin, K.; Ihn, T.

    2014-08-11

    We evaluate the charge noise acting on a GaAs/GaAlAs based semiconductor double quantum dot dipole-coupled to the voltage oscillations of a superconducting transmission line resonator. The in-phase (I) and the quadrature (Q) components of the microwave tone transmitted through the resonator are sensitive to charging events in the surrounding environment of the double dot, with an optimum sensitivity of 8.5×10⁻⁵ e/√Hz. A low-frequency 1/f-type noise spectrum combined with a white noise level of 6.6×10⁻⁶ e²/Hz above 1 Hz is extracted, consistent with previous results obtained with quantum point contact charge detectors on similar heterostructures. The slope of the 1/f noise allows us to extract a lower bound for the double-dot charge qubit dephasing rate, which we compare to the one extracted from a Jaynes-Cummings Hamiltonian approach. The two rates are found to be similar, emphasizing that charge noise is the main source of dephasing in our system.

  4. An Architectural Design System Based on Computer Graphics.

    ERIC Educational Resources Information Center

    MacDonald, Stephen L.; Wehrli, Robert

    The recent developments in computer hardware and software are presented to inform architects of this design tool. Technical advancements in equipment include--(1) cathode ray tube displays, (2) light pens, (3) print-out and photo copying attachments, (4) controls for comparison and selection of images, (5) chording keyboards, (6) plotters, and (7)…

  5. COMPUTER ARCHITECTURE FOR RESEARCH IN METEOROLOGY AND ATMOSPHERIC CHEMISTRY

    EPA Science Inventory

    The study examines the feasibility of constructing a peripheral hardware module that could be attached to a mini or midsized computer to accelerate the execution of large air pollution models, such as the EPA's Regional Oxidant Model (ROM). Crucial information necessary to design...

  6. CSP: A Multifaceted Hybrid Architecture for Space Computing

    NASA Technical Reports Server (NTRS)

    Rudolph, Dylan; Wilson, Christopher; Stewart, Jacob; Gauvin, Patrick; George, Alan; Lam, Herman; Crum, Gary Alex; Wirthlin, Mike; Wilson, Alex; Stoddard, Aaron

    2014-01-01

    Research on the CHREC Space Processor (CSP) takes a multifaceted hybrid approach to embedded space computing. Working closely with the NASA Goddard SpaceCube team, researchers at the National Science Foundation (NSF) Center for High-Performance Reconfigurable Computing (CHREC) at the University of Florida and Brigham Young University are developing hybrid space computers that feature an innovative combination of three technologies: commercial-off-the-shelf (COTS) devices, radiation-hardened (RadHard) devices, and fault-tolerant computing. Modern COTS processors provide the utmost in performance and energy-efficiency but are susceptible to ionizing radiation in space, whereas RadHard processors are virtually immune to this radiation but are more expensive, larger, less energy-efficient, and generations behind in speed and functionality. By featuring COTS devices to perform the critical data processing, supported by simpler RadHard devices that monitor and manage the COTS devices, and augmented with novel uses of fault-tolerant hardware, software, information, and networking within and between COTS devices, the resulting system can maximize performance and reliability while minimizing energy consumption and cost. NASA Goddard has adopted the CSP concept and technology with plans underway to feature flight-ready CSP boards on two upcoming space missions.

  7. A fast algorithm for parallel computation of multibody dynamics on MIMD parallel architectures

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Kwan, Gregory; Bagherzadeh, Nader

    1993-01-01

    In this paper the implementation of a parallel O(log N) algorithm for computation of rigid multibody dynamics on a Hypercube MIMD parallel architecture is presented. To our knowledge, this is the first algorithm that achieves the time lower bound of O(log N) by using an optimal number of O(N) processors. However, in addition to its theoretical significance, the algorithm is also highly efficient for practical implementation on commercially available MIMD parallel architectures due to its coarse grain size and simple communication and synchronization requirements. We present a multilevel parallel computation strategy for implementation of the algorithm on a Hypercube. This strategy allows the exploitation of parallelism at several computational levels as well as maximum overlapping of computation and communication to increase the performance of parallel computation.
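
    The O(log N)-time, O(N)-processor bound rests on combining per-body quantities in a balanced, tree-structured fashion rather than sequentially. The Python sketch below shows only that generic log-depth reduction pattern, under the assumption that each pairwise combination in a round runs on its own processor; it is not the authors' multibody dynamics algorithm.

        import math

        def tree_reduce(values, combine):
            """Log-depth reduction: each round pairs up elements, and all pairwise
            combinations within a round are independent, so with one processor per
            pair a round takes constant time and ceil(log2(N)) rounds suffice."""
            vals = list(values)
            while len(vals) > 1:
                nxt = [combine(vals[i], vals[i + 1]) for i in range(0, len(vals) - 1, 2)]
                if len(vals) % 2 == 1:            # odd element carries over unchanged
                    nxt.append(vals[-1])
                vals = nxt
            return vals[0]

        # Example: combining N per-body quantities in O(log N) parallel rounds.
        N = 16
        total = tree_reduce(range(1, N + 1), lambda a, b: a + b)
        print(total, "computed in", math.ceil(math.log2(N)), "parallel rounds")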

  8. Model of the reliability analysis of the distributed computer systems with architecture "client-server"

    NASA Astrophysics Data System (ADS)

    Kovalev, I. V.; Zelenkov, P. V.; Karaseva, M. V.; Tsarev, M. Yu; Tsarev, R. Yu

    2015-01-01

    The paper considers the problem of reliability analysis for distributed computer systems with a client-server architecture. A distributed computer system is a set of hardware and software implementing the following main functions: processing, storage, transmission and protection of data. The paper focuses on the client-server variant of this architecture. It presents a scheme of the distributed computer system's functioning represented as a graph whose vertices are the functional states of the system and whose arcs are transitions from one state to another depending on the prevailing conditions. In the reliability analysis we consider indicators such as the probabilities of the system transitioning into the stopping and accident states, as well as the intensities of these transitions. The proposed model allows us to obtain expressions for the reliability parameters of the distributed computer system without any assumptions about the distribution laws of the random variables or the number of elements in the system.
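
    To make the graph-based description concrete, the sketch below encodes a small hypothetical state-transition model as a continuous-time Markov generator and computes the probabilities of eventually reaching the stopping versus the accident state. The state set, the intensity values, and the Markov assumption itself are illustrative only; the paper derives its results without assuming particular distribution laws.

        import numpy as np

        # Hypothetical state-transition graph of a client-server system:
        # states 0 = operational, 1 = degraded, 2 = stopped, 3 = accident,
        # with states 2 and 3 absorbing. Entries are transition intensities (1/h);
        # all numbers are made up for illustration.
        Q = np.array([
            [-0.11,  0.10, 0.01, 0.00],
            [ 0.20, -0.35, 0.10, 0.05],
            [ 0.00,  0.00, 0.00, 0.00],
            [ 0.00,  0.00, 0.00, 0.00],
        ])

        T = Q[:2, :2]                       # intensities among transient states
        R = Q[:2, 2:]                       # transient -> absorbing intensities
        absorb = np.linalg.solve(-T, R)     # standard absorption probabilities (-T)^-1 R
        print("P(stopping), P(accident) from the operational state:", absorb[0])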

  9. Human-competitive evolution of quantum computing artefacts by Genetic Programming.

    PubMed

    Massey, Paul; Clark, John A; Stepney, Susan

    2006-01-01

    We show how Genetic Programming (GP) can be used to evolve useful quantum computing artefacts of increasing sophistication and usefulness: firstly specific quantum circuits, then quantum programs, and finally system-independent quantum algorithms. We conclude the paper by presenting a human-competitive Quantum Fourier Transform (QFT) algorithm evolved by GP. PMID:16536889
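
    The evolutionary loop behind such results can be illustrated with a deliberately tiny example: random single-qubit circuits over a small gate set, a fitness measuring how close the circuit's unitary is to a target, and mutation-only search. Everything below (gate set, encoding, fitness, search strategy, target) is a simplified assumption for illustration, not the representation or GP operators used in the paper.

        import random
        import numpy as np

        # Toy evolutionary search for a single-qubit circuit approximating a target unitary.
        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        T = np.diag([1, np.exp(1j * np.pi / 4)])
        X = np.array([[0, 1], [1, 0]])
        GATES = {"H": H, "T": T, "X": X}
        TARGET = H @ T @ H                      # arbitrary single-qubit target

        def unitary(circuit):
            U = np.eye(2, dtype=complex)
            for name in circuit:                # gates applied left to right
                U = GATES[name] @ U
            return U

        def fitness(circuit):
            # Global-phase-insensitive overlap with the target, in [0, 1].
            return abs(np.trace(TARGET.conj().T @ unitary(circuit))) / 2

        def mutate(circuit):
            c = list(circuit)
            c[random.randrange(len(c))] = random.choice(list(GATES))
            return c

        random.seed(1)
        best = [random.choice(list(GATES)) for _ in range(5)]
        best_fit = fitness(best)
        for _ in range(2000):                   # simple (1+1) evolutionary loop
            child = mutate(best)
            f = fitness(child)
            if f >= best_fit:
                best, best_fit = child, f
        print(best, round(best_fit, 4))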

  10. Real-Time Cognitive Computing Architecture for Data Fusion in a Dynamic Environment

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.; Duong, Vu A.

    2012-01-01

    A novel cognitive computing architecture is conceptualized for processing multiple channels of multi-modal sensory data streams simultaneously, and fusing the information in real time to generate intelligent reaction sequences. This unique architecture is capable of assimilating parallel data streams that could be analog, digital, or synchronous/asynchronous, and it could be programmed to act as a knowledge synthesizer and/or an "intelligent perception" processor. In this architecture, bio-inspired models of visual-pathway and olfactory-receptor processing are combined as processing components to achieve the composite function of "searching for a source of food while avoiding the predator." The architecture is particularly suited for scene analysis from visual and odorant data.

  11. Algorithmic and architectural optimizations for computationally efficient particle filtering.

    PubMed

    Sankaranarayanan, Aswin C; Srivastava, Ankur; Chellappa, Rama

    2008-05-01

    In this paper, we analyze the computational challenges in implementing particle filtering, especially as applied to video sequences. Particle filtering is a technique used for filtering nonlinear dynamical systems driven by non-Gaussian noise processes. It has found widespread application in detection, navigation, and tracking problems. Although particle filtering methods generally yield improved results, it is difficult to achieve real-time performance. In this paper, we analyze the computational drawbacks of traditional particle filtering algorithms and present a method for implementing the particle filter using the Independent Metropolis-Hastings sampler, which is highly amenable to pipelined implementations and parallelization. We analyze implementations of the proposed algorithm and, in particular, concentrate on implementations that have minimum processing times. It is shown that the design parameters for the fastest implementation can be chosen by solving a set of convex programs. The proposed computational methodology was verified using a cluster of PCs for the application of visual tracking. We demonstrate a linear speed-up of the algorithm using the methodology proposed in the paper. PMID:18390378
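
    The appeal of the Independent Metropolis-Hastings (IMH) sampler for pipelined hardware is that proposals and their likelihoods do not depend on the previously accepted particle, so they can be generated and evaluated in batches. The Python sketch below shows one toy one-dimensional filtering step in this style; the Gaussian measurement model, the Gaussian proposal fitted to the particle cloud, and all parameters are illustrative assumptions, not the paper's video-tracking implementation.

        import numpy as np

        rng = np.random.default_rng(0)

        def imh_particle_step(particles, observation, n_steps=400):
            """One toy filtering step using an Independent Metropolis-Hastings sampler.
            The proposal is a Gaussian fit to the predicted particle cloud; with that
            choice the IMH weight reduces to the measurement likelihood, and proposals
            and likelihoods can be precomputed in independent, pipelineable batches."""
            mu, sigma = particles.mean(), particles.std() + 1e-9
            proposals = rng.normal(mu, sigma, size=n_steps)          # state-independent proposals
            weights = np.exp(-0.5 * (observation - proposals) ** 2)  # toy Gaussian likelihood

            current, w_cur, chain = proposals[0], weights[0], []
            for x, w in zip(proposals[1:], weights[1:]):
                if rng.random() < min(1.0, w / max(w_cur, 1e-300)):  # IMH acceptance test
                    current, w_cur = x, w
                chain.append(current)
            return np.array(chain[n_steps // 2:])                    # keep post-burn-in samples

        prior_particles = rng.normal(0.0, 1.0, size=500)             # predicted particle cloud
        posterior_particles = imh_particle_step(prior_particles, observation=1.2)
        print(posterior_particles.mean(), posterior_particles.std())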

  12. Computing and the electrical transport properties of coupled quantum networks

    NASA Astrophysics Data System (ADS)

    Cain, Casey Andrew

    In this dissertation, a number of investigations were conducted on ballistic quantum networks in the mesoscopic range. In this regime, the wave nature of electron transport under the influence of transverse magnetic fields leads to interesting applications for digital logic and computing circuits. The work focuses on characterizing a few main areas of interest to experimentalists working on nanostructure devices, and it is organized as a series of papers. The first paper analyzes scaling relations and normal-mode charge distributions for such circuits in both isolated and open (terminals attached) form. The second paper compares the flux-qubit nature of quantum networks to the well-established spintronics theory; the results directly contradict the conventional view of what is required for quantum computation. The third paper investigates the requirements and limitations of extending the Thevenin theorem of classical electric circuits to ballistic quantum transport. The fourth paper outlines an optimal functionally complete set of quantum circuits that can realize all sixteen Boolean logic operations on two variables.
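
    For reference, the count of sixteen two-variable operations mentioned above is simply the number of distinct truth tables on n binary inputs:

        \[
          \#\{\, f : \{0,1\}^n \to \{0,1\} \,\} \;=\; 2^{2^n},
          \qquad n = 2 \;\Rightarrow\; 2^{2^2} = 16 .
        \]

    A gate set is functionally complete for two variables when compositions of its members can realize each of these sixteen functions.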

  13. Quantum memristors

    DOE PAGES Beta

    Pfeiffer, P.; Egusquiza, I. L.; Di Ventra, M.; Sanz, M.; Solano, E.

    2016-07-06

    Technology based on memristors, resistors with memory whose resistance depends on the history of the crossing charges, has lately enhanced the classical paradigm of computation with neuromorphic architectures. However, in contrast to the known quantized models of passive circuit elements, such as inductors, capacitors or resistors, the design and realization of a quantum memristor is still missing. Here, we introduce the concept of a quantum memristor as a quantum dissipative device whose decoherence mechanism is controlled by a continuous-measurement feedback scheme, which accounts for the memory. Indeed, we provide numerical simulations showing that memory effects actually persist in the quantum regime. Our quantization method, specifically designed for superconducting circuits, may be extended to other quantum platforms, allowing for memristor-type constructions in different quantum technologies. The proposed quantum memristor is thus a building block for neuromorphic quantum computation and quantum simulations of non-Markovian systems.
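
    For contrast with the quantum device proposed above, the classical charge-controlled memristor mentioned at the start of the abstract can be summarized by v(t) = M(q(t)) i(t) with dq/dt = i(t). The Python sketch below integrates that toy classical model; all functional forms and parameter values are illustrative assumptions and are unrelated to the paper's superconducting implementation.

        import numpy as np

        def simulate_memristor(t, current, M_on=100.0, M_off=1000.0, q_scale=1e-3):
            """Toy classical memristor: v = M(q) * i, with the memory variable q
            being the time-integrated current and M interpolating between two limits."""
            q, v = 0.0, []
            dt = t[1] - t[0]
            for tk in t:
                i = current(tk)
                M = M_off + (M_on - M_off) * np.clip(q / q_scale, 0.0, 1.0)
                v.append(M * i)
                q += i * dt                       # history dependence enters here
            return np.array(v)

        t = np.linspace(0.0, 2.0, 2000)
        v = simulate_memristor(t, lambda tk: 1e-3 * np.sin(2 * np.pi * tk))
        print(v.min(), v.max())                   # asymmetric response reveals the memory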

  14. Optimizing Quantum Simulation for Heterogeneous Computing: a Hadamard Transformation Study

    NASA Astrophysics Data System (ADS)

    de Avila, Anderson B.; Schumalfuss, Murilo F.; Reiser, Renata H. S.; Pilla, Mauricio L.; Maron, Adriano K.

    2015-10-01

    The D-GM execution environment improves the distributed simulation of quantum algorithms on heterogeneous computing platforms comprising both multi-core CPUs and GPUs. The main contribution of this work is the optimization of the VirD-GM environment, carried out in three steps: (i) theoretical study and implementation of the abstractions of the Mixed Partial Process defined in the qGM model, focusing on reducing the memory consumption of multidimensional QTs; (ii) a distributed/parallel implementation of these abstractions allowing execution on clusters of GPUs; and (iii) optimizations that predict multiplications by zero-valued entries of the quantum states/transformations, reducing the number of computations. The results obtained in this work include the distributed/parallel simulation of Hadamard gates on up to 21 qubits, showing scalability with the increase in the number of computing nodes.
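
    Item (iii) amounts to skipping work whenever both amplitudes in a Hadamard butterfly are zero, which is common for sparse states. The sketch below applies a single-qubit Hadamard to a state vector with that skip; it is a generic state-vector simulator fragment written for illustration, not the qGM/VirD-GM data structures or GPU code.

        import numpy as np

        INV_SQRT2 = 1.0 / np.sqrt(2.0)

        def apply_hadamard(state, target):
            """Apply H to one qubit of a state vector, skipping amplitude pairs
            that are both zero (no contribution, so no multiplications needed)."""
            out = state.copy()
            stride = 1 << target
            for base in range(0, state.size, stride << 1):
                for k in range(base, base + stride):
                    a, b = state[k], state[k + stride]
                    if a == 0 and b == 0:
                        continue                         # zero-valued pair: skip the work
                    out[k] = INV_SQRT2 * (a + b)
                    out[k + stride] = INV_SQRT2 * (a - b)
            return out

        n = 3
        psi = np.zeros(1 << n, dtype=complex)
        psi[0] = 1.0                                     # |000>, maximally sparse
        for q in range(n):                               # H on every qubit
            psi = apply_hadamard(psi, q)
        print(np.round(psi, 3))                          # uniform superposition, 1/sqrt(8)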

  15. Computational complexity of nonequilibrium steady states of quantum spin chains

    NASA Astrophysics Data System (ADS)

    Marzolino, Ugo; Prosen, Tomaž

    2016-03-01

    We study nonequilibrium steady states (NESS) of spin chains with boundary Markovian dissipation from the computational complexity point of view. We focus on XX chains whose NESS are matrix product operators, i.e., whose coefficients in a tensor operator basis are described by transition amplitudes in an auxiliary space. Encoding quantum algorithms in the auxiliary space, we show that estimating expectations of operators that are local, in the sense that each acts on a disjoint set of a few spins and together they cover the whole system, provides answers to problems at least as hard as, and believed by many computer scientists to be much harder than, those solved by quantum computers. We draw conclusions on the hardness of the above estimations.
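
    For context, the NESS of such boundary-driven chains is conventionally defined as the fixed point of a Lindblad master equation, with jump operators L_k acting only on the boundary spins (standard background stated here for orientation, not the paper's specific notation; units with ħ = 1):

        \[
          \frac{d\rho}{dt} \;=\; -\,i\,[H,\rho]
            \;+\; \sum_{k} \Big( L_k \rho L_k^\dagger - \tfrac{1}{2}\{ L_k^\dagger L_k , \rho \} \Big),
          \qquad
          \frac{d\rho_{\mathrm{NESS}}}{dt} \;=\; 0 .
        \]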

  16. Special purpose parallel computer architecture for real-time control and simulation in robotic applications

    NASA Technical Reports Server (NTRS)

    Fijany, Amir (Inventor); Bejczy, Antal K. (Inventor)

    1993-01-01

    This is a real-time robotic controller and simulator which is a MIMD-SIMD parallel architecture for interfacing with an external host computer and providing a high degree of parallelism in computations for robotic control and simulation. It includes a host processor for receiving instructions from the external host computer and for transmitting answers to the external host computer. There are a plurality of SIMD microprocessors, each SIMD processor being a SIMD parallel processor capable of exploiting fine grain parallelism and further being able to operate asynchronously to form a MIMD architecture. Each SIMD processor comprises a SIMD architecture capable of performing two matrix-vector operations in parallel while fully exploiting parallelism in each operation. There is a system bus connecting the host processor to the plurality of SIMD microprocessors and a common clock providing a continuous sequence of clock pulses. There is also a ring structure interconnecting the plurality of SIMD microprocessors and connected to the clock for providing the clock pulses to the SIMD microprocessors and for providing a path for the flow of data and instructions between the SIMD microprocessors. The host processor includes logic for controlling the RRCS by interpreting instructions sent by the external host computer, decomposing the instructions into a series of computations to be performed by the SIMD microprocessors, using the system bus to distribute associated data among the SIMD microprocessors, and initiating activity of the SIMD microprocessors to perform the computations on the data by procedure call.

  17. Information management architecture for an integrated computing environment for the Environmental Restoration Program. Environmental Restoration Program, Volume 3, Interim technical architecture

    SciTech Connect

    Not Available

    1994-09-01

    This third volume of the Information Management Architecture for an Integrated Computing Environment for the Environmental Restoration Program--the Interim Technical Architecture (TA) (referred to throughout the remainder of this document as the ER TA)--represents a key milestone in establishing a coordinated information management environment in which information initiatives can be pursued with the confidence that redundancy and inconsistencies will be held to a minimum. This architecture is intended to be used as a reference by anyone whose responsibilities include the acquisition or development of information technology for use by the ER Program. The interim ER TA provides technical guidance at three levels. At the highest level, the technical architecture provides an overall computing philosophy or direction. At this level, the guidance does not address specific technologies or products but addresses more general concepts, such as the use of open systems, modular architectures, graphical user interfaces, and architecture-based development. At the next level, the technical architecture provides specific information technology recommendations regarding a wide variety of specific technologies. These technologies include computing hardware, operating systems, communications software, database management software, application development software, and personal productivity software, among others. These recommendations range from the adoption of specific industry or Martin Marietta Energy Systems, Inc. (Energy Systems) standards to the specification of individual products. At the third level, the architecture provides guidance regarding implementation strategies for the recommended technologies that can be applied to individual projects and to the ER Program as a whole.

  18. Effect of noise on geometric logic gates for quantum computation

    SciTech Connect

    Blais, A.; Tremblay, A.-M.S.

    2003-01-01

    We introduce the nonadiabatic, or Aharonov-Anandan, geometric phase as a tool for quantum computation and show how this phase on one qubit can be monitored by a second qubit without any dynamical contribution. We also discuss how this geometric phase could be implemented with superconducting charge qubits. While the nonadiabatic geometric phase may circumvent many of the drawbacks related to the adiabatic (Berry) version of geometric gates, we show that the effect of fluctuations of the control parameters on nonadiabatic phase gates is more severe than for the standard dynamic gates. Similarly, fluctuations also affect to a greater extent quantum gates that use the Berry phase instead of the dynamic phase.
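
    As standard background for the gate construction discussed above, the nonadiabatic (Aharonov-Anandan) geometric phase acquired by a state that returns to itself, up to a total phase φ_total, after a cyclic evolution of duration τ is the total phase minus the dynamical phase:

        \[
          \gamma_{\mathrm{AA}}
          \;=\;
          \phi_{\mathrm{total}}
          \;+\;
          \frac{1}{\hbar}\int_0^{\tau} \langle \psi(t)\,|\,H(t)\,|\,\psi(t)\rangle \, dt .
        \]

    It depends only on the closed path traced in projective Hilbert space, independent of the rate at which that path is traversed.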

  19. Universal topological quantum computation from a superconductor/Abelian quantum Hall heterostructure

    NASA Astrophysics Data System (ADS)

    Mong, Roger

    2014-03-01

    Non-Abelian anyons promise to reveal spectacular features of quantum mechanics that could ultimately provide the foundation for a decoherence-free quantum computer. A key breakthrough in the pursuit of these exotic particles originated from Read and Green's observation that the Moore-Read quantum Hall state and a (relatively simple) two-dimensional p + ip superconductor both support so-called Ising non-Abelian anyons. Here we establish a similar correspondence between the Z3 Read-Rezayi quantum Hall state and a novel two-dimensional superconductor in which charge-2e Cooper pairs are built from fractionalized quasiparticles. In particular, both phases harbor Fibonacci anyons that--unlike Ising anyons--allow for universal topological quantum computation solely through braiding. Using a variant of Teo and Kane's construction of non-Abelian phases from weakly coupled chains, we provide a blueprint for such a superconductor using Abelian quantum Hall states interlaced with an array of superconducting islands. These results imply that one can, in principle, combine well-understood and widely available phases of matter to realize non-Abelian anyons with universal braid statistics.

  20. Universal Topological Quantum Computation from a Superconductor-Abelian Quantum Hall Heterostructure

    NASA Astrophysics Data System (ADS)

    Mong, Roger S. K.; Clarke, David J.; Alicea, Jason; Lindner, Netanel H.; Fendley, Paul; Nayak, Chetan; Oreg, Yuval; Stern, Ady; Berg, Erez; Shtengel, Kirill; Fisher, Matthew P. A.

    2014-01-01

    Non-Abelian anyons promise to reveal spectacular features of quantum mechanics that could ultimately provide the foundation for a decoherence-free quantum computer. A key breakthrough in the pursuit of these exotic particles originated from Read and Green's observation that the Moore-Read quantum Hall state and a (relatively simple) two-dimensional p+ip superconductor both support so-called Ising non-Abelian anyons. Here, we establish a similar correspondence between the Z3 Read-Rezayi quantum Hall state and a novel two-dimensional superconductor in which charge-2e Cooper pairs are built from fractionalized quasiparticles. In particular, both phases harbor Fibonacci anyons that—unlike Ising anyons—allow for universal topological quantum computation solely through braiding. Using a variant of Teo and Kane's construction of non-Abelian phases from weakly coupled chains, we provide a blueprint for such a superconductor using Abelian quantum Hall states interlaced with an array of superconducting islands. Fibonacci anyons appear as neutral deconfined particles that lead to a twofold ground-state degeneracy on a torus. In contrast to a p+ip superconductor, vortices do not yield additional particle types, yet depending on nonuniversal energetics can serve as a trap for Fibonacci anyons. These results imply that one can, in principle, combine well-understood and widely available phases of matter to realize non-Abelian anyons with universal braid statistics. Numerous future directions are discussed, including speculations on alternative realizations with fewer experimental requirements.
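
    The universality claim in both abstracts above rests on standard properties of the Fibonacci anyon model, quoted here for reference rather than as results of the paper: the nontrivial anyon τ obeys the fusion rule and quantum dimension

        \[
          \tau \times \tau \;=\; 1 + \tau ,
          \qquad
          d_\tau \;=\; \frac{1+\sqrt{5}}{2} ,
        \]

    so the fusion space of n such anyons grows with the Fibonacci numbers, and braiding acts densely on that space, which is why braiding alone suffices for universal quantum computation (in contrast to Ising anyons, whose braids generate only Clifford operations).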