Science.gov

Sample records for large-scale open quantum

  1. Large Scale Quantum Simulations of Nuclear Pasta

    NASA Astrophysics Data System (ADS)

    Fattoyev, Farrukh J.; Horowitz, Charles J.; Schuetrumpf, Bastian

    2016-03-01

    Complex and exotic nuclear geometries, collectively referred to as "nuclear pasta," are expected to exist naturally in the crust of neutron stars and in supernova matter. Using a set of self-consistent microscopic nuclear energy density functionals, we present the first results of large-scale quantum simulations of pasta phases at baryon densities 0.03 < ρ < 0.10 fm⁻³ and proton fractions of 0.05 and above. These quantum simulations, in particular, allow us to study the role and impact of the nuclear symmetry energy on these pasta configurations. This work is supported in part by DOE Grants DE-FG02-87ER40365 (Indiana University) and DE-SC0008808 (NUCLEI SciDAC Collaboration).

  2. Practical recipes for the model order reduction, dynamical simulation and compressive sampling of large-scale open quantum systems

    NASA Astrophysics Data System (ADS)

    Sidles, John A.; Garbini, Joseph L.; Harrell, Lee E.; Hero, Alfred O.; Jacky, Jonathan P.; Malcomb, Joseph R.; Norman, Anthony G.; Williamson, Austin M.

    2009-06-01

    Practical recipes are presented for simulating high-temperature and nonequilibrium quantum spin systems that are continuously measured and controlled. The notion of a spin system is broadly conceived, in order to encompass macroscopic test masses as the limiting case of large-j spins. The simulation technique has three stages: first the deliberate introduction of noise into the simulation, then the conversion of that noise into an equivalent continuous measurement and control process, and finally, projection of the trajectory onto state-space manifolds having reduced dimensionality and possessing a Kähler potential of multilinear algebraic form. These state-spaces can be regarded as ruled algebraic varieties upon which a projective quantum model order reduction (MOR) is performed. The Riemannian sectional curvature of ruled Kählerian varieties is analyzed, and proved to be non-positive upon all sections that contain a rule. These manifolds are shown to contain Slater determinants as a special case, and their identity with Grassmannian varieties is demonstrated. The resulting simulation formalism is used to construct a positive P-representation for the thermal density matrix. Single-spin detection by magnetic resonance force microscopy (MRFM) is simulated, and the data statistics are shown to be those of a random telegraph signal with additive white noise. Larger-scale spin-dust models are simulated, having no spatial symmetry and no spatial ordering; the high-fidelity projection of numerically computed quantum trajectories onto low-dimensionality Kähler state-space manifolds is demonstrated. The reconstruction of quantum trajectories from sparse random projections is demonstrated, the onset of Donoho-Stodden breakdown at the Candès-Tao sparsity limit is observed, a deterministic construction for sampling matrices is given, and methods for quantum state optimization by Dantzig selection are described.
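
    The random-telegraph statistics reported here for the single-spin MRFM data are easy to sketch classically. The following minimal simulation produces a two-level telegraph signal with additive white Gaussian noise; the parameter names and values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def random_telegraph_signal(n_samples, flip_prob, noise_sigma, amplitude=1.0, seed=0):
    """Two-level random telegraph signal buried in additive white
    Gaussian noise, the data statistics described for single-spin MRFM."""
    rng = np.random.default_rng(seed)
    # Telegraph state: +/- amplitude, flipping with probability flip_prob per sample.
    flips = rng.random(n_samples) < flip_prob
    state = amplitude * np.cumprod(np.where(flips, -1.0, 1.0))
    # Additive white Gaussian measurement noise.
    return state + rng.normal(0.0, noise_sigma, n_samples)

signal = random_telegraph_signal(10_000, flip_prob=0.01, noise_sigma=0.5)
```

    A spectral estimate of such a signal shows the Lorentzian-plus-flat shape characteristic of a telegraph process embedded in white noise.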

  3. Quantum gravity and the large scale anomaly

    SciTech Connect

    Kamenshchik, Alexander Y.; Tronconi, Alessandro; Venturi, Giovanni

    2015-04-01

    The spectrum of primordial perturbations obtained by calculating the quantum gravitational corrections to the dynamics of scalar perturbations is compared with Planck 2013 and BICEP2/Keck Array public data. The quantum gravitational effects are calculated in the context of a Wheeler-DeWitt approach and have quite distinctive features. We constrain the free parameters of the theory by comparison with observations.

  4. Quantum computation for large-scale image classification

    NASA Astrophysics Data System (ADS)

    Ruan, Yue; Chen, Hanwu; Tan, Jianing; Li, Xi

    2016-10-01

    Due to the lack of an effective quantum feature extraction method, there has been no practical way to perform quantum image classification or recognition. In this paper, a global quantum feature extraction method based on Schmidt decomposition is proposed for the first time. A revised quantum learning algorithm is also proposed that classifies images by computing the Hamming distance between these features. Experimental results on the benchmark database Caltech 101, together with an analysis of the algorithm, yield an effective approach to large-scale image classification in the setting of big data.
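
    Stripped of its quantum implementation, the classification step is a minimum-Hamming-distance rule over binarized feature vectors. A classical sketch of that rule follows; the feature vectors and class names are made up for illustration and do not come from the paper:

```python
import numpy as np

def hamming_classify(query_bits, labeled_bits, labels):
    """Assign the label of the binary feature vector with minimum
    Hamming distance to the query (classical analogue of the
    quantum Hamming-distance classifier described above)."""
    distances = np.count_nonzero(labeled_bits != query_bits, axis=1)
    return labels[int(np.argmin(distances))]

# Hypothetical binarized features for three image classes.
features = np.array([[0, 1, 1, 0], [1, 1, 0, 0], [0, 0, 0, 1]])
labels = np.array(["faces", "planes", "cars"])
print(hamming_classify(np.array([0, 1, 0, 0]), features, labels))  # → faces
```

    Ties are broken in favor of the first matching class, as `argmin` returns the earliest minimum.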

  6. Large-scale quantum networks based on graphs

    NASA Astrophysics Data System (ADS)

    Epping, Michael; Kampermann, Hermann; Bruß, Dagmar

    2016-05-01

    Society depends increasingly on information exchange and communication. In the quantum world, security and privacy are built-in features of information processing. The essential ingredient for exploiting these quantum advantages is the resource of entanglement, which can be shared between two or more parties. The distribution of entanglement over large distances constitutes a key challenge for current research and development. Because losses of the transmitted quantum particles typically scale exponentially with distance, intermediate quantum repeater stations are needed. Here we show how to generalise the quantum repeater concept to the multipartite case, by describing large-scale quantum networks, i.e. network nodes and their long-distance links, consistently in the language of graphs and graph states. This unifying approach comprises both the distribution of multipartite entanglement across the network and the protection against errors via encoding. The correspondence to graph states also provides a tool for optimising the architecture of quantum networks.
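
    The exponential loss scaling that motivates repeaters can be made concrete. With standard telecom-fiber attenuation (an assumed 0.2 dB/km, not a figure from the paper), direct transmission over 1000 km is hopeless, while each segment of a repeater chain remains workable:

```python
def direct_transmission(distance_km, attenuation_db_per_km=0.2):
    """Photon survival probability over optical fiber; losses scale
    exponentially with distance (0.2 dB/km is typical telecom fiber)."""
    return 10 ** (-attenuation_db_per_km * distance_km / 10)

def segmented_transmission(distance_km, n_segments, attenuation_db_per_km=0.2):
    """Per-segment survival probability when repeater stations split the link."""
    return direct_transmission(distance_km / n_segments, attenuation_db_per_km)

p_direct = direct_transmission(1000)          # 10^-20: one photon in 10^20 survives
p_segment = segmented_transmission(1000, 10)  # 10^-2 per 100 km segment
```

    The repeater's job is then to stitch the per-segment entanglement together, which the graph-state formalism above describes for arbitrary network topologies.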

  7. Large-scale quantum photonic circuits in silicon

    NASA Astrophysics Data System (ADS)

    Harris, Nicholas C.; Bunandar, Darius; Pant, Mihir; Steinbrecher, Greg R.; Mower, Jacob; Prabhu, Mihika; Baehr-Jones, Tom; Hochberg, Michael; Englund, Dirk

    2016-08-01

    Quantum information science offers inherently more powerful methods for communication, computation, and precision measurement that take advantage of quantum superposition and entanglement. In recent years, theoretical and experimental advances in quantum computing and simulation with photons have spurred great interest in developing large photonic entangled states that challenge today's classical computers. As experiments have increased in complexity, there has been an increasing need to transition bulk optics experiments to integrated photonics platforms to control more spatial modes with higher fidelity and phase stability. The silicon-on-insulator (SOI) nanophotonics platform offers new possibilities for quantum optics, including the integration of bright, nonclassical light sources, based on the large third-order nonlinearity (χ(3)) of silicon, alongside quantum state manipulation circuits with thousands of optical elements, all on a single phase-stable chip. How large do these photonic systems need to be? Recent theoretical work on Boson Sampling suggests that even the problem of sampling from roughly 30 identical photons, having passed through an interferometer of hundreds of modes, becomes challenging for classical computers. While experiments of this size are still challenging, the SOI platform has the required component density to enable low-loss and programmable interferometers for manipulating hundreds of spatial modes. Here, we discuss the SOI nanophotonics platform for quantum photonic circuits with hundreds-to-thousands of optical elements and the associated challenges. We compare SOI to competing technologies in terms of requirements for quantum optical systems. We review recent results on large-scale quantum state evolution circuits and strategies for realizing high-fidelity heralded gates with imperfect, practical systems. Next, we review recent results on silicon photonics-based photon-pair sources and device architectures, and we discuss a path towards
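
    The classical hardness behind Boson Sampling comes from the fact that output probabilities are given by matrix permanents, whose exact evaluation is #P-hard. A minimal sketch of the exponential-time exact computation via Ryser's formula (a standard algorithm, not one taken from this paper) shows why even modest photon numbers strain classical computers:

```python
import itertools
import numpy as np

def permanent(matrix):
    """Matrix permanent via Ryser's formula, O(2^n * n^2) arithmetic:
    perm(A) = (-1)^n * sum over nonempty column subsets S of
    (-1)^|S| * prod_i (sum_{j in S} a_ij)."""
    n = matrix.shape[0]
    total = 0.0
    for subset_size in range(1, n + 1):
        for cols in itertools.combinations(range(n), subset_size):
            row_sums = matrix[:, list(cols)].sum(axis=1)
            total += (-1) ** subset_size * np.prod(row_sums)
    return (-1) ** n * total

# The permanent of the all-ones 3x3 matrix is 3! = 6.
print(permanent(np.ones((3, 3))))  # → 6.0
```

    The 2ⁿ subsets make 30-photon instances astronomically expensive classically, which is the regime the interferometers described above target.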

  8. Ferroelectric opening switches for large-scale pulsed power drivers.

    SciTech Connect

    Brennecka, Geoffrey L.; Rudys, Joseph Matthew; Reed, Kim Warren; Pena, Gary Edward; Tuttle, Bruce Andrew; Glover, Steven Frank

    2009-11-01

    Fast electrical energy storage, or Voltage-Driven Technology (VDT), has dominated fast, high-voltage pulsed power systems for the past six decades. Fast magnetic energy storage, or Current-Driven Technology (CDT), is characterized by 10,000× higher energy density than VDT and has a great number of other substantial advantages, but it has been all but neglected for all of these decades. The uniform explanation for the neglect of CDT is invariably that the industry has never been able to make an effective opening switch, which is essential for the use of CDT. Most approaches to opening switches have involved plasma of one sort or another. On a large scale, gaseous plasmas have been used as a conductor to bridge the switch electrodes; the switch opens when the current wave front propagates through to the output end of the plasma and fully magnetizes it. This is called a Plasma Opening Switch (POS). Opening can be triggered in a POS by using a magnetic field to push the plasma out of the A-K gap; this is called a Magnetically Controlled Plasma Opening Switch (MCPOS). On a small scale, depletion of electron plasmas in semiconductor devices is used to effect opening-switch behavior, but these devices operate at relatively low voltage and current compared to the hundreds of kilovolts and tens of kiloamperes of interest in pulsed power. This work investigates an entirely new approach to opening-switch technology that utilizes new materials in new ways. The new materials are ferroelectrics, and using them as opening switches stands in stark contrast to their traditional applications in optics and transducers. Emphasis is on the use of high-performance ferroelectrics, with the objective of developing an opening switch suitable for large-scale pulsed power applications. Over the course of exploring this new ground, we have discovered new behaviors and properties of these materials that were heretofore unknown. Some of

  9. Large Scale Electronic Structure Calculations using Quantum Chemistry Methods

    NASA Astrophysics Data System (ADS)

    Scuseria, Gustavo E.

    1998-03-01

    This talk will address our recent efforts in developing fast, linear-scaling electronic structure methods for large-scale applications. Of special importance is our fast multipole method (FMM) [M. C. Strain, G. E. Scuseria, and M. J. Frisch, Science 271, 51 (1996)] for achieving linear scaling for the quantum Coulomb problem (GvFMM), the traditional bottleneck in quantum chemistry calculations based on Gaussian orbitals. Fast quadratures [R. E. Stratmann, G. E. Scuseria, and M. J. Frisch, Chem. Phys. Lett. 257, 213 (1996)], combined with methods that avoid Hamiltonian diagonalization [J. M. Millam and G. E. Scuseria, J. Chem. Phys. 106, 5569 (1997)], have resulted in density functional theory (DFT) programs that can be applied to systems containing many hundreds of atoms and, depending on computational resources or level of theory, to many thousands of atoms [A. D. Daniels, J. M. Millam, and G. E. Scuseria, J. Chem. Phys. 107, 425 (1997)]. Three solutions for the diagonalization bottleneck will be analyzed and compared: a conjugate-gradient density matrix search (CGDMS), a Hamiltonian polynomial expansion of the density matrix, and a pseudo-diagonalization method. Besides DFT, our near-field exchange method [J. C. Burant, G. E. Scuseria, and M. J. Frisch, J. Chem. Phys. 105, 8969 (1996)] for linear-scaling Hartree-Fock calculations will be discussed. Based on these improved capabilities, we have also developed programs to obtain vibrational frequencies (via analytic energy second derivatives) and excitation energies (through time-dependent DFT) of large molecules like porphin or C_70. Our GvFMM has been extended to periodic systems [K. N. Kudin and G. E. Scuseria, Chem. Phys. Lett., in press], and progress towards a Gaussian-based DFT and HF program for polymers and solids will be reported. Last, we will discuss our progress on a Laplace-transformed O(N²) second-order perturbation theory (MP2) method.
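
    The flavor of these diagonalization-free approaches can be conveyed with McWeeny purification, a classic density-matrix iteration related to (but not identical to) the CGDMS and polynomial-expansion methods of the talk. The toy Hamiltonian and all parameter values below are illustrative assumptions:

```python
import numpy as np

def mcweeny_density(h, mu, spread, n_iter=60):
    """Idempotent one-particle density matrix without diagonalization via
    McWeeny purification, P -> 3P^2 - 2P^3.  The initial guess is a linear
    map of H that places occupied levels (below mu) above 1/2, so the
    iteration drives their occupations to 1 and the rest to 0."""
    n = h.shape[0]
    p = 0.5 * (np.eye(n) - (h - mu * np.eye(n)) / spread)
    for _ in range(n_iter):
        p = 3 * p @ p - 2 * p @ p @ p
    return p

# Tight-binding-like 3-site toy Hamiltonian (eigenvalues -sqrt(2), 0, sqrt(2)).
h = np.array([[0.0, -1.0, 0.0],
              [-1.0, 0.0, -1.0],
              [0.0, -1.0, 0.0]])
p = mcweeny_density(h, mu=-0.5, spread=4.0)   # one level below mu -> trace 1
```

    Every operation is a matrix multiply, which is what makes sparse, linear-scaling implementations possible; production codes also determine `mu` and the spectral bounds on the fly rather than by inspection.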

  10. Open TG-GATEs: a large-scale toxicogenomics database

    PubMed Central

    Igarashi, Yoshinobu; Nakatsu, Noriyuki; Yamashita, Tomoya; Ono, Atsushi; Ohno, Yasuo; Urushidani, Tetsuro; Yamada, Hiroshi

    2015-01-01

    Toxicogenomics focuses on assessing the safety of compounds using gene expression profiles. Gene expression signatures from large toxicogenomics databases are expected to perform better than small databases in identifying biomarkers for the prediction and evaluation of drug safety based on a compound's toxicological mechanisms in animal target organs. Over the past 10 years, the Japanese Toxicogenomics Project consortium (TGP) has been developing a large-scale toxicogenomics database consisting of data from 170 compounds (mostly drugs) with the aim of improving and enhancing drug safety assessment. Most of the data generated by the project (e.g. gene expression, pathology, lot number) are freely available to the public via Open TG-GATEs (Toxicogenomics Project-Genomics Assisted Toxicity Evaluation System). Here, we provide a comprehensive overview of the database, including both gene expression data and metadata, with a description of experimental conditions and procedures used to generate the database. Open TG-GATEs is available from http://toxico.nibio.go.jp/english/index.html. PMID:25313160

  11. Distribution of entanglement in large-scale quantum networks.

    PubMed

    Perseguers, S; Lapeyre, G J; Cavalcanti, D; Lewenstein, M; Acín, A

    2013-09-01

    The concentration and distribution of quantum entanglement is an essential ingredient in emerging quantum information technologies. Much theoretical and experimental effort has been expended in understanding how to distribute entanglement in one-dimensional networks. However, as experimental techniques in quantum communication develop, protocols for multi-dimensional systems become essential. Here, we focus on recent theoretical developments in protocols for distributing entanglement in regular and complex networks, with particular attention to percolation theory and network-based error correction.
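
    The percolation picture mentioned above can be sketched numerically: treat each short link as successfully converted into a perfect entangled pair with probability p, and ask whether the open links span the network. A minimal bond-percolation check on a square lattice, with all parameter values illustrative:

```python
import random

def percolates(size, p_bond, rng):
    """True if open bonds connect the top row to the bottom row of a
    size x size square lattice (union-find with path halving)."""
    parent = list(range(size * size))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    def union(a, b):
        parent[find(a)] = find(b)
    for r in range(size):
        for c in range(size):
            i = r * size + c
            if c + 1 < size and rng.random() < p_bond:   # open bond to the right
                union(i, i + 1)
            if r + 1 < size and rng.random() < p_bond:   # open bond downward
                union(i, i + size)
    top = {find(c) for c in range(size)}
    bottom = {find((size - 1) * size + c) for c in range(size)}
    return bool(top & bottom)

rng = random.Random(1)
# The bond-percolation threshold on the square lattice is 1/2: well above
# it, a spanning cluster (a long-distance entanglement path) almost always forms.
hits = sum(percolates(20, 0.7, rng) for _ in range(50))
```

    Entanglement-percolation protocols exploit exactly this threshold behavior, sometimes beating it by transforming the lattice before conversion.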

  12. Green chemistry for large-scale synthesis of semiconductor quantum dots.

    PubMed

    Liu, Jin-Hua; Fan, Jun-Bing; Gu, Zheng; Cui, Jing; Xu, Xiao-Bo; Liang, Zhi-Wu; Luo, Sheng-Lian; Zhu, Ming-Qiang

    2008-05-20

    Large-scale synthesis of semiconductor nanocrystals or quantum dots (QDs) with high concentration and high yield, achieved by simultaneously increasing the precursor concentrations, is introduced. This synthetic route, conducted in diesel, has produced gram-scale CdSe semiconductor quantum dots (under the optimal scaled-up conditions, the one-pot yield of QDs is up to 9.6 g). The reaction is conducted in open air and at relatively low temperatures of 190-230 °C, in the absence of expensive organic phosphine ligands, aliphatic amines and octadecene. This is genuinely green chemistry, avoiding the high energy cost of high-temperature reactions and all nonessential toxic chemicals except Cd, the essential building block of the QDs. PMID:18399665

  13. Cryogenic Linear Ion Trap for Large-Scale Quantum Simulations

    NASA Astrophysics Data System (ADS)

    Pagano, Guido; Hess, Paul; Kaplan, Harvey; Birckelbaw, Eric; Hernandez, Micah; Lee, Aaron; Smith, Jake; Zhang, Jiehang; Monroe, Christopher

    2016-05-01

    Ions confined in RF Paul traps are a useful tool for quantum simulation of long-range spin-spin interaction models. As the system size increases, classical simulation methods become incapable of modeling the exponentially growing Hilbert space, necessitating quantum simulation for precise predictions. Current experiments are limited to fewer than 30 qubits by collisions with background gas that regularly destroy the ion crystal. We present progress toward the construction of a cryogenic ion trap apparatus, which uses differential cryopumping to reduce the vacuum pressure to a level where such collisions do not occur. This should allow robust trapping of about 100 ion qubits in a single chain with long lifetimes. Such a long chain will provide a platform to investigate simultaneous cooling of various vibrational modes and will enable quantum simulations that outperform their classical counterparts. Our apparatus will provide a powerful test-bed to investigate a large variety of Hamiltonians, including spin-1 and spin-1/2 systems with Ising or XY interactions. This work is supported by the ARO Atomic Physics Program, the AFOSR MURI on Quantum Measurement and Verification, the IC Fellowship Program and the NSF Physics Frontier Center at JQI.
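
    The spin-1/2 models in question typically take the long-range transverse-field Ising form H = Σ_{i<j} J₀/|i−j|^α σˣᵢσˣⱼ + B Σᵢ σᶻᵢ. The exact-diagonalization sketch below, with made-up parameter values, is precisely the classical approach whose exponential cost motivates the quantum simulator:

```python
import numpy as np

def long_range_ising(n, alpha=1.0, j0=1.0, b=0.5):
    """Dense Hamiltonian of the long-range transverse-field Ising chain
    (illustrative parameters; trapped-ion couplings follow an approximate
    power law J_ij ~ J0/|i-j|^alpha with tunable alpha)."""
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])
    eye = np.eye(2)
    def site_op(op, i):
        out = np.array([[1.0]])
        for k in range(n):
            out = np.kron(out, op if k == i else eye)
        return out
    h = np.zeros((2 ** n, 2 ** n))
    for i in range(n):
        h += b * site_op(sz, i)
        for j in range(i + 1, n):
            h += (j0 / abs(i - j) ** alpha) * site_op(sx, i) @ site_op(sx, j)
    return h

h6 = long_range_ising(6)                 # 64 x 64: easy
energies = np.linalg.eigvalsh(h6)        # 100 spins would need a 2^100 matrix
```

    Doubling the chain length squares the Hilbert-space dimension, which is why a 100-ion chain sits far beyond any classical diagonalization.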

  14. Large scale obscuration and related climate effects open literature bibliography

    SciTech Connect

    Russell, N.A.; Geitgey, J.; Behl, Y.K.; Zak, B.D.

    1994-05-01

    Large scale obscuration and related climate effects of nuclear detonations first became a matter of concern in connection with the so-called "Nuclear Winter Controversy" in the early 1980s. Since then, the world has changed. Nevertheless, concern remains about the atmospheric effects of nuclear detonations, but the source of concern has shifted. Now it focuses less on global, and more on regional effects and their resulting impacts on the performance of electro-optical and other defense-related systems. This bibliography reflects the modified interest.

  15. Large scale quantum walks by means of optical fiber cavities

    NASA Astrophysics Data System (ADS)

    Boutari, J.; Feizpour, A.; Barz, S.; Di Franco, C.; Kim, M. S.; Kolthammer, W. S.; Walmsley, I. A.

    2016-09-01

    We demonstrate a platform for implementing quantum walks that overcomes many of the barriers associated with photonic implementations. We use coupled fiber-optic cavities to implement time-bin encoded walks in an integrated system. We show that this platform can achieve very low losses combined with high-fidelity operations, enabling an unprecedentedly large number of steps in a passive system, as required for scenarios with multiple walkers. Furthermore, the platform is reconfigurable, enabling variation of the coin, and readily extends to multidimensional lattices. We demonstrate variation of the coin bias experimentally for three different values.
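
    A time-bin walk of this kind is modeled by alternating a coin rotation with a coin-conditioned shift. Below is a minimal simulation of a discrete-time walk on the line with a tunable coin bias; the initial state and bias value are illustrative, not those of the experiment:

```python
import numpy as np

def quantum_walk(n_steps, coin_bias=0.5):
    """Discrete-time quantum walk on a line.  psi[position, coin] holds
    amplitudes; coin 0 moves left, coin 1 moves right.  coin_bias plays
    the role of the variable beam-splitter reflectivity of the coin."""
    n_pos = 2 * n_steps + 1
    psi = np.zeros((n_pos, 2), dtype=complex)
    psi[n_steps, 0] = 1 / np.sqrt(2)       # symmetric initial coin state
    psi[n_steps, 1] = 1j / np.sqrt(2)
    rho, tau = np.sqrt(coin_bias), np.sqrt(1 - coin_bias)
    coin = np.array([[rho, tau], [tau, -rho]], dtype=complex)  # unitary coin
    for _ in range(n_steps):
        psi = psi @ coin.T                 # apply coin at every position
        shifted = np.zeros_like(psi)
        shifted[:-1, 0] = psi[1:, 0]       # coin 0 steps left
        shifted[1:, 1] = psi[:-1, 1]       # coin 1 steps right
        psi = shifted
    return np.sum(np.abs(psi) ** 2, axis=1)  # position distribution

probs = quantum_walk(20, coin_bias=0.5)
```

    Unlike a classical random walk, the resulting distribution is double-peaked and spreads ballistically; changing `coin_bias` reshapes it, which is the reconfigurability the platform demonstrates.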

  16. Vision for single flux quantum very large scale integrated technology

    NASA Astrophysics Data System (ADS)

    Silver, Arnold; Bunyk, Paul; Kleinsasser, Alan; Spargo, John

    2006-05-01

    Single flux quantum (SFQ) electronics is extremely fast and has very low on-chip power dissipation. SFQ VLSI is an excellent candidate for high-performance computing and other applications requiring extremely high-speed signal processing. Despite this, SFQ technology has generally not been accepted for system implementation. We argue that this is due, at least in part, to the use of outdated tools to produce SFQ circuits and chips. Assuming the use of tools equivalent to those employed in the semiconductor industry, we estimate the density of Josephson junctions, circuit speed, and power dissipation that could be achieved with SFQ technology. Today, CMOS lithography is at 90-65 nm with about 20 layers. Assuming equivalent technology, aggressively increasing the current density above 100 kA/cm² to achieve junction speeds of approximately 1000 GHz, and reducing device footprints by converting device profiles from planar to vertical, one could expect to integrate about 250 million Josephson junctions per cm² into SFQ digital circuits. This should enable circuit operation with clock frequencies above 200 GHz and place approximately 20,000 gates within a radius of one clock period. As a result, complete microprocessors, including integrated memory registers, could be fabricated on a single chip. This technology was exported from the United States in accordance with the US Department of Commerce Export Administration Regulations (EAR) for ultimate destination in the United Kingdom. Diversion contrary to US law is prohibited.
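
    The abstract's figures are mutually consistent under simple assumptions, as a back-of-envelope check shows. The on-chip signal speed of roughly c/3 and the figure of roughly 100 junctions per gate are our assumptions for the sketch, not values from the abstract:

```python
import math

f_clock = 200e9      # clock frequency, Hz (from the abstract)
v_signal = 1e8       # on-chip signal speed, ~c/3 (assumed)
jj_density = 250e6   # Josephson junctions per cm^2 (from the abstract)
jj_per_gate = 100    # junctions per logic gate (assumed)

# Distance a signal travels in one clock period, in cm.
radius_cm = (v_signal / f_clock) * 100          # 0.05 cm = 0.5 mm
junctions = jj_density * math.pi * radius_cm ** 2
gates = junctions / jj_per_gate                 # ~2e4, matching "20,000 gates"
```

    Under these assumptions the circle one clock period across holds about 2 million junctions, i.e. on the order of 20,000 gates, consistent with the abstract's estimate.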

  17. Large-scale structure from quantum fluctuations in the early universe

    SciTech Connect

    Michael Turner

    2000-05-25

    A better understanding of the formation of large-scale structure in the Universe is arguably the most pressing question in cosmology. The most compelling and promising theoretical paradigm, Inflation + Cold Dark Matter, holds that the density inhomogeneities that seeded the formation of structure in the Universe originated from quantum fluctuations arising during inflation, and that the bulk of the dark matter exists as slowly moving elementary particles (cold dark matter) left over from the earliest, fiery moments. Large redshift surveys (such as the SDSS and 2dF) and high-resolution measurements of CBR anisotropy (to be made by the MAP and Planck Surveyor satellites) have the potential to decisively test Inflation + Cold Dark Matter and to open a window to the very early Universe and fundamental physics.

  18. HTS cables open the window for large-scale renewables

    NASA Astrophysics Data System (ADS)

    Geschiere, A.; Willén, D.; Piga, E.; Barendregt, P.

    2008-02-01

    In a realistic approach to future energy consumption, both the effects of sustainable power sources and the effects of growing welfare with increased use of electricity need to be considered. These factors lead to an increased transfer of electric energy over the networks. A dominant part of the energy need will come from expanded large-scale renewable sources. To use them efficiently across Europe, large energy transits between different countries are required. Bottlenecks in the existing infrastructure will be avoided by strengthening the network. For environmental reasons, more infrastructure will be built underground. Nuon is studying HTS technology as a component to solve these challenges. This technology offers a tremendously large power transport capacity as well as the possibility to reduce short-circuit currents, making integration of renewables easier. Furthermore, power transport will be possible at lower voltage levels, giving the opportunity to upgrade the existing network while re-using it. This will result in large cost savings while meeting future energy challenges. In a 6 km backbone structure in Amsterdam, Nuon wants to install a 50 kV HTS Triax cable for a significant increase of the transport capacity, while developing its capabilities. Nevertheless, several barriers have to be overcome.

  19. Semiconductor nanocrystal quantum dot synthesis approaches towards large-scale industrial production for energy applications

    DOE PAGES

    Hu, Michael Z.; Zhu, Ting

    2015-12-04

    This study reviews the experimental synthesis and engineering developments that focus on various green approaches and large-scale production routes for quantum dots. Fundamental process engineering principles are illustrated. In contrast to the small-scale hot-injection method, our discussion focuses on the non-injection route, which can be scaled up with engineering stirred-tank reactors. In addition, applications that demand quantum dots as "commodity" chemicals are discussed, including solar cells and solid-state lighting.

  20. Oscillatory barrier-assisted Langmuir-Blodgett deposition of large-scale quantum dot monolayers

    NASA Astrophysics Data System (ADS)

    Xu, Shicheng; Dadlani, Anup L.; Acharya, Shinjita; Schindler, Peter; Prinz, Fritz B.

    2016-03-01

    Depositing continuous, large-scale quantum dot films with low pinhole density is an unavoidable but nontrivial step in studying their properties for applications in catalysis, electronic devices, and optoelectronics. This rising interest in high-quality quantum dot films has provided research impetus to improve the deposition technique. We show that by incorporating oscillatory barriers into the commonly used Langmuir-Blodgett method, large-scale monolayers of quantum dots with full coverage over up to several millimeters can be achieved. With the assistance of the perturbation provided by the oscillatory barriers, the film relaxes towards thermal equilibrium, a physical process supported by molecular dynamics simulation. In addition, the time evolution of the dilatational moduli gives a clear indication of the film morphology and its stability.

  1. Direct measurement of large-scale quantum states via expectation values of non-Hermitian matrices

    NASA Astrophysics Data System (ADS)

    Bolduc, Eliot; Gariepy, Genevieve; Leach, Jonathan

    2016-01-01

    In quantum mechanics, predictions are made by way of calculating expectation values of observables, which take the form of Hermitian operators. Non-Hermitian operators, however, are not necessarily devoid of physical significance, and they can play a crucial role in the characterization of quantum states. Here we show that the expectation values of a particular set of non-Hermitian matrices, which we call column operators, directly yield the complex coefficients of a quantum state vector. We provide a definition of the state vector in terms of measurable quantities by decomposing these column operators into observables. The technique we propose renders very-large-scale quantum states significantly more accessible in the laboratory, as we demonstrate by experimentally characterizing a 100,000-dimensional entangled state. This represents an improvement of two orders of magnitude with respect to previous phase-and-amplitude characterizations of discrete entangled states.
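
    The scheme is easy to verify numerically: for the column operator C_j = |0⟩⟨j|, the expectation value ⟨ψ|C_j|ψ⟩ equals conj(ψ₀)ψ_j, so the full coefficient vector is recovered up to a global phase. A small sketch follows (dimension and seed are arbitrary; in the laboratory each C_j would be decomposed into Hermitian observables, which this sketch does not model):

```python
import numpy as np

rng = np.random.default_rng(7)
dim = 8
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)                      # random normalized state

def column_operator(j, dim):
    """Non-Hermitian column operator C_j = |0><j|."""
    op = np.zeros((dim, dim), dtype=complex)
    op[0, j] = 1.0
    return op

# <psi|C_j|psi> = conj(psi_0) * psi_j for every j.
expectations = np.array(
    [np.vdot(psi, column_operator(j, dim) @ psi) for j in range(dim)]
)
# expectations[0] = |psi_0|^2, so dividing by its square root recovers the
# state in the gauge where the first coefficient is real and positive.
reconstructed = expectations / np.sqrt(expectations[0].real)
```

    The reconstruction assumes the reference coefficient ψ₀ is nonzero; experimentally one would choose a basis state with appreciable population as the reference.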

  4. On the large-scale structures formed by wakes of open cosmic strings

    NASA Technical Reports Server (NTRS)

    Hara, Tetsuya; Morioka, Shoji; Miyoshi, Shigeru

    1990-01-01

    Large-scale structures of the universe have been variously described as sheetlike, filamentary, cellular, bubble-like or spongelike. Recently, cosmic strings have become one of the viable candidates for a galaxy formation scenario, and some of the large-scale structures seem to be simply explained by open cosmic strings. According to this scenario, sheets are wakes: traces of moving open cosmic strings where dark matter and baryonic matter have accumulated. Filaments are intersections of such wakes, and high-density regions are places where three wakes intersect almost orthogonally. The wakes formed at t_eq, the epoch when the matter density equals the radiation density, have the largest surface density among all wakes. If we assume that there is one open cosmic string per horizon, then it can be explained that the typical distances among wakes, filaments and clusters are approximately 10² Mpc. This model does not exclude structure on much larger scales: open cosmic strings may move even now and accumulate cold dark matter along their traces, although the surface density is much smaller than that of the wakes formed at t_eq. From this model, it is expected that a typical high-density region will have extended features, such as six filaments and three sheets, and be surrounded by eight empty regions (voids). Here, the authors are mainly concerned with such structures and have made numerical simulations of the formation of such large-scale structures.

  5. Bridging the Gap between Quantum Mechanics and Large-Scale Atomistic Simulation

    SciTech Connect

    Moriarty, J A

    2004-08-16

    The prospect of modeling across disparate length and time scales to achieve a predictive multiscale description of real materials properties has attracted widespread research interest in the last decade. To be sure, the challenges in such multiscale modeling are many, and in demanding cases, such as mechanical properties or dynamic phase transitions, multiple bridges extending from the atomic level all the way to the continuum level must be built. Although often overlooked in this process, one of the most fundamental and important problems in multiscale modeling is that of bridging the gap between first-principles quantum mechanics, from which true predictive power for real materials emanates, and the large-scale atomistic simulation of thousands or millions of atoms, which is usually essential to describe the complex atomic processes that link to higher length and time scales. For example, to model single-crystal plasticity at micron length scales via dislocation-dynamics simulations that evolve the detailed dislocation microstructure requires accurate large-scale atomistic information on the mobility and interaction of individual dislocations. Similarly, modeling the kinetics of structural phase transitions requires linking accurate large-scale atomistic information on nucleation processes with higher length and time scale growth processes.

  6. Robust predictions for the large-scale cosmological power deficit from primordial quantum nonequilibrium

    NASA Astrophysics Data System (ADS)

    Colin, Samuel; Valentini, Antony

    2016-04-01

    The de Broglie-Bohm pilot-wave formulation of quantum theory allows the existence of physical states that violate the Born probability rule. Recent work has shown that in pilot-wave field theory on expanding space relaxation to the Born rule is suppressed for long-wavelength field modes, resulting in a large-scale power deficit ξ(k) which for a radiation-dominated expansion is found to have an approximate inverse-tangent dependence on k (assuming that the width of the initial distribution is smaller than the width of the initial Born-rule distribution and that the initial quantum states are evenly-weighted superpositions of energy states). In this paper, we show that the functional form of ξ(k) is robust under changes in the initial nonequilibrium distribution — subject to the limitation of a subquantum width — as well as under the addition of an inflationary era at the end of the radiation-dominated phase. In both cases, the predicted deficit ξ(k) remains an inverse-tangent function of k. Furthermore, with the inflationary phase the dependence of the fitting parameters on the number of superposed pre-inflationary energy states is comparable to that found previously. Our results indicate that, for the assumed broad class of initial conditions, an inverse-tangent power deficit is likely to be a fairly general and robust signature of quantum relaxation in the early universe.
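    The inverse-tangent deficit described above can be illustrated with a toy fit. The parametrization ξ(k) ∝ arctan(c1·k + c2) and all parameter values below are hypothetical placeholders, not the published fitting function; the sketch only shows how such a form would be fitted to a measured deficit curve.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical inverse-tangent form for the power deficit xi(k):
# suppressed at small k (large scales), approaching 1 at large k.
def xi(k, c1, c2):
    return (2.0 / np.pi) * np.arctan(c1 * k + c2)

# Synthetic "measured" deficit generated from the same form plus noise.
rng = np.random.default_rng(0)
k = np.linspace(0.1, 10.0, 200)
data = xi(k, 1.5, 0.2) + rng.normal(0.0, 0.01, k.size)

# Least-squares fit recovers the generating parameters (c1, c2)
popt, pcov = curve_fit(xi, k, data, p0=[1.0, 0.0])
print(popt)
```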

  7. Integrated Technologies for Large-Scale Trapped-Ion Quantum Information Processing

    NASA Astrophysics Data System (ADS)

    Sorace-Agaskar, C.; Bramhavar, S.; Kharas, D.; Mehta, K. K.; Loh, W.; Panock, R.; Bruzewicz, C. D.; McConnell, R.; Ram, R. J.; Sage, J. M.; Chiaverini, J.

    2016-05-01

    Atomic ions trapped and controlled using electromagnetic fields hold great promise for practical quantum information processing due to their inherent coherence properties and controllability. However, to realize this promise, the ability to maintain and manipulate large-scale systems is required. We present progress toward the development of, and proof-of-principle demonstrations and characterization of, several technologies that can be integrated with ion-trap arrays on-chip to enable such scaling to practically useful sizes. Of particular use are integrated photonic elements for routing and focusing light throughout a chip without the need for free-space optics. The integration of CMOS electronics and photo-detectors for on-chip control and readout, and methods for monolithic fabrication and wafer-scale integration to incorporate these capabilities into tile-able 2D ion-trap array cells, are also explored.

  8. Large-scale quantum mechanical simulations of high-Z metals

    SciTech Connect

    Yang, L H; Hood, R; Pask, J; Klepeis, J

    2007-01-03

    High-Z metals constitute a particular challenge for large-scale ab initio calculations, as they require high resolution due to the presence of strongly localized states and require many eigenstates to be computed due to the large number of electrons and need to accurately resolve the Fermi surface. Here, we report recent findings on high-Z materials, using an efficient massively parallel planewave implementation on some of the largest computational architectures currently available. We discuss the particular architectures employed and methodological advances required to harness them effectively. We present a pair-correlation function for U, calculated using quantum molecular dynamics, and discuss relaxations of Pu atoms in the vicinity of defects in aged and alloyed Pu. We find that the self-irradiation associated with aging has a negligible effect on the compressibility of Pu relative to other factors such as alloying.
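    The pair-correlation function reported above for U is a standard quantity of molecular dynamics analysis. The following is a generic histogram estimator of g(r) for atoms in a cubic periodic box, a minimal sketch and not the implementation used in the paper.

```python
import numpy as np

def pair_correlation(positions, box, dr, r_max):
    """Radial distribution function g(r) for N atoms in a cubic periodic box."""
    n = len(positions)
    rho = n / box**3                          # number density
    edges = np.arange(0.0, r_max + dr, dr)
    hist = np.zeros(len(edges) - 1)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box * np.round(d / box)          # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        hist += np.histogram(r, bins=edges)[0]
    r_mid = 0.5 * (edges[:-1] + edges[1:])
    shell = 4.0 * np.pi * r_mid**2 * dr       # spherical shell volumes
    # each unordered pair counted once, so multiply by 2 to normalize per atom
    return r_mid, 2.0 * hist / (n * rho * shell)
```

For uncorrelated (ideal-gas) positions the estimator returns g(r) ≈ 1 at all r, which is a convenient sanity check.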

  9. Large Scale Synthesis and Light Emitting Fibers of Tailor-Made Graphene Quantum Dots

    PubMed Central

    Park, Hun; Hyun Noh, Sung; Hye Lee, Ji; Jun Lee, Won; Yun Jaung, Jae; Geol Lee, Seung; Hee Han, Tae

    2015-01-01

    Graphene oxide (GO), which is an oxidized form of graphene, has a mixed structure consisting of graphitic crystallites of sp2 hybridized carbon and amorphous regions. In this work, we present a straightforward route for preparing graphene-based quantum dots (GQDs) by extraction of the crystallites from the amorphous matrix of the GO sheets. GQDs with controlled functionality are readily prepared by varying the reaction temperature, which results in precise tunability of their optical properties. Here, it was concluded that the tunable optical properties of GQDs are a result of the different fraction of chemical functionalities present. The synthesis approach presented in this paper provides an efficient strategy for achieving large-scale production and long-time optical stability of the GQDs, and the hybrid assembly of GQD and polymer has potential applications as photoluminescent fibers or films. PMID:26383257

  10. Comparison of the KAMELEON fire model to large-scale open pool fire data

    SciTech Connect

    Nicolette, V.F.; Gritzo, L.A.; Holen, J.; Magnussen, B.F.

    1994-06-01

    A comparison of the KAMELEON Fire model to large-scale open pool fire experimental data is presented. The model was used to calculate large-scale JP-4 pool fires with and without wind, and with and without large objects in the fire. The effect of wind and large objects on the fire environment is clearly seen. For the pool fire calculations without any object in the fire, excellent agreement is seen in the location of the oxygen-starved region near the pool center. Calculated flame temperatures are about 200--300 K higher than measured. This results in higher heat fluxes back to the fuel pool and higher fuel evaporation rates (by a factor of 2). Fuel concentrations at lower elevations and peak soot concentrations are in good agreement with data. For pool fire calculations with objects, similar trends in the fire environment are observed. Excellent agreement is seen in the distribution of the heat flux around a cylindrical calorimeter in a rectangular pool with wind effects. The magnitude of the calculated heat flux to the object is high by a factor of 2 relative to the test data, due to the higher temperatures calculated. For the case of a large flat plate adjacent to a circular pool, excellent qualitative agreement is seen in the predicted and measured flame shapes as a function of wind.
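    The factor-of-2 heat-flux overprediction is consistent with the 200-300 K temperature overprediction, since radiative flux scales as T^4 (Stefan-Boltzmann law). A quick check, using an assumed illustrative flame temperature rather than a number from the paper:

```python
SIGMA = 5.670e-8                   # Stefan-Boltzmann constant, W m^-2 K^-4

T_measured = 1100.0                # K, assumed typical measured flame temperature
T_calculated = T_measured + 200.0  # lower end of the reported 200-300 K offset

# Ratio of radiated fluxes at the calculated vs. measured temperatures
flux_ratio = (SIGMA * T_calculated**4) / (SIGMA * T_measured**4)
print(f"flux ratio ~ {flux_ratio:.2f}")  # ~1.95 at +200 K; ~2.6 at +300 K
```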

  11. Quantum Celestial Mechanics: Large-scale Gravitational Quantization States in Galaxies and the Universe

    NASA Astrophysics Data System (ADS)

    Preston, Howard G.; Potter, Franklin

    2006-03-01

    We report a new theory of celestial mechanics for gravitationally bound systems based upon a gravitational wave equation derived from the general relativistic Hamilton-Jacobi equation. The single ad hoc assumption is that the large-scale physical properties depend only on the ratio of the bound system's total angular momentum to its total mass. The theory predicts quantization states for the Solar System and for galaxies. The galactic quantization determines the energy and angular momentum eigenstates without requiring dark matter, and predicts expressions for the galactic disk rotation velocity, the baryonic Tully-Fisher relation, the MOND acceleration parameter, the large-angle gravitational lensing, and the shape, stability and number of arms in spiral galaxies. Applied to the universe, the theory has a repulsive effective gravitational potential that predicts a new Hubble relation and explains the observed apparent acceleration of distant supernovae with the matter/energy density of the universe at the critical density with only about 5% matter content. We suggest a laboratory experiment with a torsion bar near a rotating mass. This theory is not quantum gravity.

  12. Neural ensemble communities: open-source approaches to hardware for large-scale electrophysiology.

    PubMed

    Siegle, Joshua H; Hale, Gregory J; Newman, Jonathan P; Voigts, Jakob

    2015-06-01

    One often-overlooked factor when selecting a platform for large-scale electrophysiology is whether or not a particular data acquisition system is 'open' or 'closed': that is, whether or not the system's schematics and source code are available to end users. Open systems have a reputation for being difficult to acquire, poorly documented, and hard to maintain. With the arrival of more powerful and compact integrated circuits, rapid prototyping services, and web-based tools for collaborative development, these stereotypes must be reconsidered. We discuss some of the reasons why multichannel extracellular electrophysiology could benefit from open-source approaches and describe examples of successful community-driven tool development within this field. In order to promote the adoption of open-source hardware and to reduce the need for redundant development efforts, we advocate a move toward standardized interfaces that connect each element of the data processing pipeline. This will give researchers the flexibility to modify their tools when necessary, while allowing them to continue to benefit from the high-quality products and expertise provided by commercial vendors. PMID:25528614

  13. Low Pressure Seeder Development for PIV in Large Scale Open Loop Wind Tunnels

    NASA Astrophysics Data System (ADS)

    Schmit, Ryan

    2010-11-01

    A low-pressure seeding technique for Particle Image Velocimetry (PIV) in large-scale wind tunnel facilities was developed at the Subsonic Aerodynamic Research Laboratory (SARL) at Wright-Patterson Air Force Base. The SARL facility is an open-loop tunnel with a 7 by 10 foot octagonal test section that has 56% optical access; the Mach number varies from 0.2 to 0.5. A low-pressure seeder sprayer was designed and tested in the inlet of the wind tunnel. The sprayer was designed to produce an even and uniform distribution of seed while reducing the seeder's influence on the test section. A ViCount Compact 5000 with Smoke Oil 180 was used as the seeding material. The results show that this low-pressure seeder produces somewhat streaky seeding, but excellent PIV images are nonetheless obtained.

  14. Neural ensemble communities: Open-source approaches to hardware for large-scale electrophysiology

    PubMed Central

    Siegle, Joshua H.; Hale, Gregory J.; Newman, Jonathan P.; Voigts, Jakob

    2014-01-01

    One often-overlooked factor when selecting a platform for large-scale electrophysiology is whether or not a particular data acquisition system is “open” or “closed”: that is, whether or not the system’s schematics and source code are available to end users. Open systems have a reputation for being difficult to acquire, poorly documented, and hard to maintain. With the arrival of more powerful and compact integrated circuits, rapid prototyping services, and web-based tools for collaborative development, these stereotypes must be reconsidered. We discuss some of the reasons why multichannel extracellular electrophysiology could benefit from open-source approaches and describe examples of successful community-driven tool development within this field. In order to promote the adoption of open-source hardware and to reduce the need for redundant development efforts, we advocate a move toward standardized interfaces that connect each element of the data processing pipeline. This will give researchers the flexibility to modify their tools when necessary, while allowing them to continue to benefit from the high-quality products and expertise provided by commercial vendors. PMID:25528614

  15. Large-scale diversity patterns of cephalopods in the Atlantic open ocean and deep sea.

    PubMed

    Rosa, Rui; Dierssen, Heidi M; Gonzalez, Liliana; Seibel, Brad A

    2008-12-01

    Although the oceans cover 70% of the Earth's surface and the open ocean is by far the largest ecosystem on the planet, our knowledge regarding diversity patterns of pelagic fauna is very scarce. Here, we examine large-scale latitudinal and depth-related patterns of pelagic cephalopod richness in the Atlantic Ocean in relation to ambient thermal and productive energy availability. Diversity, across 17 biogeochemical regions in the open ocean, does not decline monotonically with latitude, but is positively correlated to the availability of oceanic resources. Mean net primary productivity (NPP), determined from ocean color satellite imagery, explains 37% of the variance in species richness. Outside the poles, the range in NPP explains over 40% of the variability. This suggests that cephalopods are well adapted to the spatial patchiness and seasonality of open-ocean resources. Pelagic richness is also correlated to sea surface temperature, with maximum richness occurring around 15 degrees C and decreasing with both colder and warmer temperatures. Both pelagic and benthos-associated diversities decline sharply from sublittoral and epipelagic regions to the slope and bathypelagic habitats and then steadily to abyssal depths. Thus, higher energy availability at shallow depths seems to promote diversification rates. This strong depth-related trend in diversity also emphasizes the greater influence of the sharp vertical thermal gradient than the smoother and more seasonal horizontal (latitudinal) one on marine diversity.
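    The "explains 37% of the variance" statement is the coefficient of determination R², which for a simple linear regression equals the squared Pearson correlation. A minimal illustration on synthetic data (the values below are invented, not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(1)
npp = rng.uniform(100.0, 1000.0, 17)                # 17 regions, synthetic NPP
richness = 0.05 * npp + rng.normal(0.0, 15.0, 17)   # synthetic species richness

# R^2 = squared Pearson correlation for a one-predictor linear model
r = np.corrcoef(npp, richness)[0, 1]
r_squared = r ** 2
print(f"variance explained: {r_squared:.0%}")
```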

  16. An Open-Source Galaxy Redshift Survey Simulator for next-generation Large Scale Structure Surveys

    NASA Astrophysics Data System (ADS)

    Seljak, Uros

    Galaxy redshift surveys produce three-dimensional maps of the galaxy distribution. On large scales these maps trace the underlying matter fluctuations in a relatively simple manner, so that the properties of the primordial fluctuations along with the overall expansion history and growth of perturbations can be extracted. The BAO standard ruler method to measure the expansion history of the universe using galaxy redshift surveys is thought to be robust to observational artifacts and understood theoretically with high precision. These same surveys can offer a host of additional information, including a measurement of the growth rate of large scale structure through redshift space distortions, the possibility of measuring the sum of neutrino masses, tighter constraints on the expansion history through the Alcock-Paczynski effect, and constraints on the scale-dependence and non-Gaussianity of the primordial fluctuations. Extracting this broadband clustering information hinges on both our ability to minimize and subtract observational systematics to the observed galaxy power spectrum, and our ability to model the broadband behavior of the observed galaxy power spectrum with exquisite precision. Rapid development on both fronts is required to capitalize on WFIRST's data set. We propose to develop an open-source computational toolbox that will propel development in both areas by connecting large scale structure modeling and instrument and survey modeling with the statistical inference process. We will use the proposed simulator to both tailor perturbation theory and fully non-linear models of the broadband clustering of WFIRST galaxies and discover novel observables in the non-linear regime that are robust to observational systematics and able to distinguish between a wide range of spatial and dynamic biasing models for the WFIRST galaxy redshift survey sources. 
We have demonstrated the utility of this approach in a pilot study of the SDSS-III BOSS galaxies, in which we

  17. Open source database of images DEIMOS: extension for large-scale subjective image quality assessment

    NASA Astrophysics Data System (ADS)

    Vítek, Stanislav

    2014-09-01

    DEIMOS (Database of Images: Open Source) is an open-source database of images and video sequences for testing, verification and comparison of various image and/or video processing techniques such as compression, reconstruction and enhancement. This paper deals with an extension of the database that allows large-scale web-based subjective image quality assessment. The extension implements both an administrative and a client interface. The proposed system is aimed mainly at mobile communication devices and takes advantage of HTML5 technology, meaning that participants do not need to install any application: assessment can be performed using a web browser. The assessment campaign administrator can select images from the large database and then apply rules defined by various test procedure recommendations. The standard test procedures may be fully customized and saved as a template. Alternatively, the administrator can define a custom test using images from the pool and other components, such as evaluation forms and ongoing questionnaires. The image sequence is delivered to the online client, e.g. a smartphone or tablet, as a fully automated assessment sequence, or the viewer can decide on the timing of the assessment if required. Environmental data and viewing conditions (e.g. illumination, vibrations, GPS coordinates, etc.) may be collected and subsequently analyzed.

  18. Large-Scale 1:1 Computing Initiatives: An Open Access Database

    ERIC Educational Resources Information Center

    Richardson, Jayson W.; McLeod, Scott; Flora, Kevin; Sauers, Nick J.; Kannan, Sathiamoorthy; Sincar, Mehmet

    2013-01-01

    This article details the spread and scope of large-scale 1:1 computing initiatives around the world. What follows is a review of the existing literature around 1:1 programs followed by a description of the large-scale 1:1 database. Main findings include: 1) the XO and the Classmate PC dominate large-scale 1:1 initiatives; 2) if professional…

  19. NWChem: A comprehensive and scalable open-source solution for large scale molecular simulations

    NASA Astrophysics Data System (ADS)

    Valiev, M.; Bylaska, E. J.; Govind, N.; Kowalski, K.; Straatsma, T. P.; Van Dam, H. J. J.; Wang, D.; Nieplocha, J.; Apra, E.; Windus, T. L.; de Jong, W. A.

    2010-09-01

    The latest release of NWChem delivers an open-source computational chemistry package with extensive capabilities for large scale simulations of chemical and biological systems. Utilizing a common computational framework, diverse theoretical descriptions can be used to provide the best solution for a given scientific problem. Scalable parallel implementations and modular software design enable efficient utilization of current computational architectures. This paper provides an overview of NWChem focusing primarily on the core theoretical modules provided by the code and their parallel performance.
    Program summary
    Program title: NWChem
    Catalogue identifier: AEGI_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGI_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Open Source Educational Community License
    No. of lines in distributed program, including test data, etc.: 11 709 543
    No. of bytes in distributed program, including test data, etc.: 680 696 106
    Distribution format: tar.gz
    Programming language: Fortran 77, C
    Computer: all Linux based workstations and parallel supercomputers, Windows and Apple machines
    Operating system: Linux, OS X, Windows
    Has the code been vectorised or parallelized?: Code is parallelized
    Classification: 2.1, 2.2, 3, 7.3, 7.7, 16.1, 16.2, 16.3, 16.10, 16.13
    Nature of problem: Large-scale atomistic simulations of chemical and biological systems require efficient and reliable methods for ground and excited solutions of the many-electron Hamiltonian, analysis of the potential energy surface, and dynamics.
    Solution method: Ground and excited solutions of the many-electron Hamiltonian are obtained utilizing density-functional theory, many-body perturbation approaches, and coupled cluster expansion. These solutions, or a combination thereof with classical descriptions, are then used to analyze the potential energy surface and perform dynamical simulations.
    Additional comments: Full

  20. The large-scale digital cell analysis system: an open system for nonperturbing live cell imaging.

    PubMed

    Davis, Paul J; Kosmacek, Elizabeth A; Sun, Yuansheng; Ianzini, Fiorenza; Mackey, Michael A

    2007-12-01

    The Large-Scale Digital Cell Analysis System (LSDCAS) was designed to provide a highly extensible open source live cell imaging system. Analysis of cell growth data has demonstrated a lack of perturbation in cells imaged using LSDCAS, through reference to cell growth data from cells growing in CO2 incubators. LSDCAS consists of data acquisition, data management and data analysis software, and is currently a Core research facility at the Holden Comprehensive Cancer Center at the University of Iowa. Using LSDCAS analysis software, this report and others show that although phase-contrast imaging has no apparent effect on cell growth kinetics and viability, fluorescent image acquisition in the cell lines tested caused a measurable level of growth perturbation using LSDCAS. This report describes the current design of the system, reasons for the implemented design, and details its basic functionality. The LSDCAS software runs on the GNU/Linux operating system, and provides easy to use, graphical programs for data acquisition and quantitative analysis of cells imaged with phase-contrast or fluorescence microscopy (alone or in combination), and complete source code is freely available under the terms of the GNU Public Software License at the project website (http://lsdcas.engineering.uiowa.edu). PMID:18045324

  1. Repurposing of open data through large scale hydrological modelling - hypeweb.smhi.se

    NASA Astrophysics Data System (ADS)

    Strömbäck, Lena; Andersson, Jafet; Donnelly, Chantal; Gustafsson, David; Isberg, Kristina; Pechlivanidis, Ilias; Strömqvist, Johan; Arheimer, Berit

    2015-04-01

    Hydrological modelling demands large amounts of spatial data, such as soil properties, land use, topography, lakes and reservoirs, ice and snow coverage, water management (e.g. irrigation patterns and regulations), meteorological data and observed water discharge in rivers. By using such data, the hydrological model will in turn provide new data that can be used for new purposes (i.e. re-purposing). This presentation will give an example of how readily available open data from public portals have been re-purposed by using the Hydrological Predictions for the Environment (HYPE) model in a number of large-scale model applications covering numerous subbasins and rivers. HYPE is a dynamic, semi-distributed, process-based, and integrated catchment model. The model output is launched as new Open Data at the web site www.hypeweb.smhi.se to be used for (i) Climate change impact assessments on water resources and dynamics; (ii) The European Water Framework Directive (WFD) for characterization and development of measure programs to improve the ecological status of water bodies; (iii) Design variables for infrastructure constructions; (iv) Spatial water-resource mapping; (v) Operational forecasts (1-10 days and seasonal) on floods and droughts; (vi) Input to oceanographic models for operational forecasts and marine status assessments; (vii) Research. The following regional domains have been modelled so far with different resolutions (number of subbasins within brackets): Sweden (37 000), Europe (35 000), Arctic basin (30 000), La Plata River (6 000), Niger River (800), Middle-East North-Africa (31 000), and the Indian subcontinent (6 000). The Hype web site provides several interactive web applications for exploring results from the models. The user can explore an overview of various water variables for historical and future conditions. Moreover the user can explore and download historical time series of discharge for each basin and explore the performance of the model

  2. Escript: Open Source Environment For Solving Large-Scale Geophysical Joint Inversion Problems in Python

    NASA Astrophysics Data System (ADS)

    Gross, Lutz; Altinay, Cihan; Fenwick, Joel; Smith, Troy

    2014-05-01

    inversion and appropriate solution schemes in escript. We will also give a brief introduction into escript's open framework for defining and solving geophysical inversion problems. Finally we will show some benchmark results to demonstrate the computational scalability of the inversion method across a large number of cores and compute nodes in a parallel computing environment. References: - L. Gross et al. (2013): Escript Solving Partial Differential Equations in Python Version 3.4, The University of Queensland, https://launchpad.net/escript-finley - L. Gross and C. Kemp (2013) Large Scale Joint Inversion of Geophysical Data using the Finite Element Method in escript. ASEG Extended Abstracts 2013, http://dx.doi.org/10.1071/ASEG2013ab306 - T. Poulet, L. Gross, D. Georgiev, J. Cleverley (2012): escript-RT: Reactive transport simulation in Python using escript, Computers & Geosciences, Volume 45, 168-176. http://dx.doi.org/10.1016/j.cageo.2011.11.005.

  3. User Friendly Open GIS Tool for Large Scale Data Assimilation - a Case Study of Hydrological Modelling

    NASA Astrophysics Data System (ADS)

    Gupta, P. K.

    2012-08-01

    Open source software (OSS) coding has tremendous advantages over proprietary software, primarily fuelled by high-level programming languages (Java, C++, Python etc.) and open source geospatial libraries (GDAL/OGR, GEOS, GeoTools etc.). Quantum GIS (QGIS) is a popular open source GIS package, which is licensed under the GNU GPL and is written in C++. It allows users to perform specialised tasks by creating plugins in C++ and Python. This research article emphasises exploiting this capability of QGIS to build and implement plugins across multiple platforms using the easy-to-learn Python programming language. In the present study, a tool has been developed to assimilate large spatio-temporal datasets such as national-level gridded rainfall, temperature, topographic (digital elevation model, slope, aspect), landuse/landcover and multi-layer soil data for input into hydrological models. At present this tool has been developed for the Indian subcontinent. An attempt is also made to use popular scientific and numerical libraries to create custom applications for digital inclusion. In hydrological modelling, calibration and validation are important steps that are carried out repetitively for the same study region. The developed tool is user friendly and can be used efficiently for these repetitive processes by reducing the time required for data management and handling. Moreover, it was found that the developed tool can easily assimilate large datasets in an organised manner.
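    A typical assimilation step of the kind described, reducing a gridded rainfall field to per-basin means for model input, can be sketched as follows. This is a hypothetical illustration in plain NumPy; the plugin's actual code and data structures are not shown here.

```python
import numpy as np

def basin_mean_rainfall(grid, basin_ids):
    """Average a gridded rainfall field over each basin.

    grid      : 2-D array of rainfall values (e.g. one day of gridded data)
    basin_ids : 2-D integer array of the same shape labelling each cell's
                basin (0 = outside any basin)
    """
    means = {}
    for b in np.unique(basin_ids):
        if b == 0:
            continue
        means[int(b)] = float(grid[basin_ids == b].mean())
    return means

# Toy 3x3 grid with two basins and one unassigned corner
grid = np.array([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0],
                 [7.0, 8.0, 9.0]])
ids = np.array([[1, 1, 2],
                [1, 2, 2],
                [0, 0, 2]])
print(basin_mean_rainfall(grid, ids))
```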

  4. Opening the Black Box: Prospects for Using International Large-Scale Assessments to Explore Classroom Effects

    ERIC Educational Resources Information Center

    Schmidt, William H.; Burroughs, Nathan A.

    2013-01-01

    In this article, the authors review International Large-Scale Assessment (ILSA)-based research over the last several decades, with specific attention on cross-national analysis of mean differences between and variation within countries in mathematics education. They discuss the role of sampling design and "opportunity to learn" (OTL)…

  5. The implementation of universal quantum memory and gates based on large-scale diamond surface

    NASA Astrophysics Data System (ADS)

    Qi, Xiao-Ning; Zhang, Yong

    2016-08-01

    Nitrogen-vacancy (NV) centers implanted beneath the diamond surface have been demonstrated to be effective platforms for quantum control and read-out. In this paper, an NV center entangled with a collective ensemble of fluorine nuclei is reduced to a Jaynes-Cummings (JC) model. Based on this system, we discuss the implementation of quantum state storage and single-qubit quantum gates.
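    The Jaynes-Cummings model invoked here has, in its standard textbook form (cavity frequency \(\omega_c\), qubit splitting \(\omega_0\), coupling \(g\); this is the generic Hamiltonian, not notation taken from the paper):

```latex
H_{\mathrm{JC}} = \hbar\omega_c\, a^{\dagger}a
  + \tfrac{1}{2}\hbar\omega_0\, \sigma_z
  + \hbar g \left( a\,\sigma_{+} + a^{\dagger}\sigma_{-} \right)
```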

  6. Giant photon gain in large-scale quantum dot-circuit QED systems

    NASA Astrophysics Data System (ADS)

    Agarwalla, Bijay Kumar; Kulkarni, Manas; Mukamel, Shaul; Segal, Dvira

    2016-09-01

    Motivated by recent experiments on the generation of coherent light in engineered hybrid quantum systems, we investigate gain in a microwave photonic cavity coupled to quantum dot structures and develop concrete directions for achieving a giant amplification in photon transmission. We propose two architectures for scaling up the electronic gain medium: (i) N double-quantum-dot systems and (ii) M quantum dots arranged in series, akin to a quantum cascade laser setup. In both setups, the fermionic reservoirs are voltage biased, and the quantum dots are coupled to a single-mode cavity. Optical amplification is explained based on a sum rule for the transmission function, and it is determined by an intricate competition between two different processes: the charge-density response in the gain medium and cavity losses to the input and output ports. The same design principle is also responsible for a corresponding giant amplification in other photonic observables, the mean photon number and the emission spectrum, thereby realizing a quantum device that behaves as a giant microwave amplifier.

  7. Free energies of binding from large-scale first-principles quantum mechanical calculations: application to ligand hydration energies.

    PubMed

    Fox, Stephen J; Pittock, Chris; Tautermann, Christofer S; Fox, Thomas; Christ, Clara; Malcolm, N O J; Essex, Jonathan W; Skylaris, Chris-Kriton

    2013-08-15

    Schemes of increasing sophistication for obtaining free energies of binding have been developed over the years, where configurational sampling is used to include the all-important entropic contributions to the free energies. However, the quality of the results will also depend on the accuracy with which the intermolecular interactions are computed at each molecular configuration. In this context, the energy change associated with the rearrangement of electrons (electronic polarization and charge transfer) upon binding is a very important effect. Classical molecular mechanics force fields do not take this effect into account explicitly, and polarizable force fields and semiempirical quantum or hybrid quantum-classical (QM/MM) calculations are increasingly employed (at higher computational cost) to compute intermolecular interactions in free-energy schemes. In this work, we investigate the use of large-scale quantum mechanical calculations from first-principles as a way of fully taking into account electronic effects in free-energy calculations. We employ a one-step free-energy perturbation (FEP) scheme from a molecular mechanical (MM) potential to a quantum mechanical (QM) potential as a correction to thermodynamic integration calculations within the MM potential. We use this approach to calculate relative free energies of hydration of small aromatic molecules. Our quantum calculations are performed on multiple configurations from classical molecular dynamics simulations. The quantum energy of each configuration is obtained from density functional theory calculations with a near-complete psinc basis set on over 600 atoms using the ONETEP program.
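    The one-step MM-to-QM free-energy perturbation described above is the Zwanzig exponential average evaluated over MM-sampled configurations. A minimal sketch on synthetic energies (the numbers below are invented, not the paper's data):

```python
import numpy as np

def fep_mm_to_qm(e_mm, e_qm, kT):
    """One-step Zwanzig FEP estimate of the MM -> QM free-energy correction.

    e_mm, e_qm : potential energies of the same MM-sampled configurations
                 evaluated with the MM and QM potentials (same units as kT)
    """
    dE = np.asarray(e_qm) - np.asarray(e_mm)
    # Delta A = -kT ln < exp(-dE/kT) >_MM; shift by dE.min() for stability
    shift = dE.min()
    return shift - kT * np.log(np.mean(np.exp(-(dE - shift) / kT)))

# Sanity check: a constant potential offset must be recovered exactly
e_mm = np.random.default_rng(2).normal(0.0, 1.0, 1000)
e_qm = e_mm + 3.7
print(fep_mm_to_qm(e_mm, e_qm, kT=0.596))  # 3.7
```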

  8. Dissipative quantum computing with open quantum walks

    SciTech Connect

    Sinayskiy, Ilya; Petruccione, Francesco

    2014-12-04

    An open quantum walk approach to the implementation of a dissipative quantum computing scheme is presented. The formalism is demonstrated for the example of an open quantum walk implementation of a 3 qubit quantum circuit consisting of 10 gates.

  9. Coupling slot-waveguide cavities for large-scale quantum optical devices.

    PubMed

    Su, Chun-Hsu; Hiscocks, Mark P; Gibson, Brant C; Greentree, Andrew D; Hollenberg, Lloyd C L; Ladouceur, François

    2011-03-28

    By offering effective modal volumes significantly less than a cubic wavelength, slot-waveguide cavities provide a new inroad into strong atom-photon coupling in the visible regime. Here we explore two-dimensional arrays of coupled slot cavities that underpin designs for novel quantum emulators and polaritonic quantum phase transition devices. Specifically, we investigate the lateral coupling characteristics of diamond-air and GaP-air slot waveguides using numerically assisted coupled-mode theory, and the longitudinal coupling properties via distributed Bragg reflectors using mode-propagation simulations. We find that slot-waveguide cavities in the Fabry-Perot arrangement can be coupled and effectively treated with a tight-binding description, and are a suitable platform for realizing Jaynes-Cummings-Hubbard physics.
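The tight-binding description the authors arrive at is the Jaynes-Cummings-Hubbard model; in standard notation (our transcription, not an equation from the paper), the array Hamiltonian reads:

```latex
H_{\mathrm{JCH}} = \sum_i \Big[ \omega\, a_i^\dagger a_i
  + \epsilon\, \sigma_i^{+}\sigma_i^{-}
  + g \big( a_i^\dagger \sigma_i^{-} + a_i \sigma_i^{+} \big) \Big]
  - \kappa \sum_{\langle i,j \rangle} \big( a_i^\dagger a_j + a_j^\dagger a_i \big)
```

with $\omega$ the cavity frequency, $\epsilon$ the emitter transition energy, $g$ the atom-photon coupling, and $\kappa$ the inter-cavity hopping rate, which the lateral and longitudinal coupling calculations above are designed to extract.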

  10. Large scale solution assembly of quantum dot-gold nanorod architectures with plasmon enhanced fluorescence.

    PubMed

    Nepal, Dhriti; Drummy, Lawrence F; Biswas, Sushmita; Park, Kyoungweon; Vaia, Richard A

    2013-10-22

    Tailoring the efficiency of fluorescent emission via plasmon-exciton coupling requires structure control on a nanometer length scale using a high-yield fabrication route not achievable with current lithographic techniques. These systems can be fabricated using a bottom-up approach if problems of colloidal stability and low yield can be addressed. We report progress on this pathway with the assembly of quantum dots (emitters) on gold nanorods (plasmonic units) with precisely controlled spacing, quantum dot/nanorod ratio, and long-term colloidal stability, which enables the purification and encapsulation of the assembled architecture in a protective silica shell. Overall, such controllability with nanometer precision allows one to synthesize stable, complex architectures in large volumes in a rational and controllable manner. The assembled architectures demonstrate photoluminescence enhancement (5×) useful for applications ranging from biological sensing to advanced optical communication.

  11. Large scale two-dimensional arrays of magnesium diboride superconducting quantum interference devices

    SciTech Connect

    Cybart, Shane A. Dynes, R. C.; Wong, T. J.; Cho, E. Y.; Beeman, J. W.; Yung, C. S.; Moeckly, B. H.

    2014-05-05

    Magnetic field sensors based on two-dimensional arrays of superconducting quantum interference devices were constructed from magnesium diboride thin films. Each array contained over 30 000 Josephson junctions fabricated by ion damage of 30 nm weak links through an implant mask defined by nano-lithography. Current-biased devices exhibited very large voltage modulation as a function of magnetic field, with amplitudes as high as 8 mV.

  12. Time-dependent simulations of large-scale quantum mechanical processes

    SciTech Connect

    Collins, L. A.

    2002-01-01

    Time-dependent linear and nonlinear equations govern the evolution of an extensive set of physical systems and processes describing, to enumerate just a few, Bose-Einstein condensates; soliton propagation in optical and photonic band-gap fibers; quantum control of atomic and molecular collisions and reactions; highly compressed liquids; and dense and ultracold plasmas. While the media vary substantially, the basic computational procedures have many common features. We focus on the nonlinear Schrödinger equation and discuss two powerful approaches to its propagation: the Arnoldi/Lanczos (AL) and Real Space Product Formula (RSPF) methods. Both provide efficient systematic approximations to the short-time exponential propagator that moves the solution between time steps. We implement the former in a discrete variable representation (DVR), in both spatial-grid and finite-element forms, and the latter on a spatial mesh with a finite-difference representation of the kinetic energy operator. Both approaches require O(N) operations to propagate the wavefunction between time steps and handle multidimensional systems. We shall also draw connections with Liouville formulations used in quantum molecular dynamics simulations of large collections of atoms and molecules. After briefly outlining these formulations, we shall discuss some of the varied applications.
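The short-time product-formula idea above can be sketched with a first-order split-operator propagator for the 1D nonlinear Schrödinger equation. This illustration uses an FFT-based kinetic step for brevity (the RSPF of the abstract uses a real-space finite-difference kinetic operator instead, and a Strang splitting would give second-order accuracy):

```python
import numpy as np

def split_step_nlse(psi, dx, dt, g, steps):
    """First-order split-operator propagation of
        i psi_t = -(1/2) psi_xx + g |psi|^2 psi.
    Each step applies the nonlinear phase in real space, then the exact
    kinetic propagator in Fourier space; both substeps are unitary, so
    the norm is conserved to machine precision."""
    k = 2 * np.pi * np.fft.fftfreq(psi.size, d=dx)   # spectral wavenumbers
    kinetic = np.exp(-0.5j * dt * k**2)              # exp(-i dt k^2 / 2)
    for _ in range(steps):
        psi = psi * np.exp(-1j * dt * g * np.abs(psi)**2)  # nonlinear phase
        psi = np.fft.ifft(kinetic * np.fft.fft(psi))       # kinetic step
    return psi

x = np.linspace(-10, 10, 256, endpoint=False)
dx = x[1] - x[0]
psi0 = np.exp(-x**2).astype(complex)
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)        # unit norm
psi = split_step_nlse(psi0, dx, dt=1e-3, g=1.0, steps=100)
norm = np.sum(np.abs(psi)**2) * dx                   # conserved
```

Like the AL and RSPF schemes, each step costs O(N log N) here (O(N) for the finite-difference variant) and extends naturally to multidimensional grids.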

  13. Electrostatic embedding in large-scale first principles quantum mechanical calculations on biomolecules.

    PubMed

    Fox, Stephen J; Pittock, Chris; Fox, Thomas; Tautermann, Christofer S; Malcolm, Noj; Skylaris, Chris-Kriton

    2011-12-14

    Biomolecular simulations with atomistic detail are often required to describe interactions with chemical accuracy for applications such as the calculation of free energies of binding or chemical reactions in enzymes. Force fields are typically used for this task, but these rely on extensive parameterisation, which in some cases can lead to limited accuracy and transferability, for example for ligands with unusual functional groups. These limitations can be overcome with first-principles calculations using methods such as density functional theory (DFT), but at a much higher computational cost. The use of electrostatic embedding can significantly reduce this cost by representing a portion of the simulated system in terms of highly localised charge distributions. These classical charge distributions are electrostatically coupled with the quantum system and represent the effect of the environment in which the quantum system is embedded. In this paper we describe and evaluate such an embedding scheme, in which the polarisation of the electronic density by the embedding charges occurs self-consistently during the calculation of the density. We have implemented this scheme in a linear-scaling DFT program, as our aim is to treat entire biomolecules (such as proteins) and large portions of the solvent with DFT. We test this approach in the calculation of interaction energies of ligands with biomolecules and solvent and investigate under what conditions these can be obtained with the same level of accuracy as when the entire system is described by DFT, for a variety of neutral and charged species. PMID:22168680

  14. Openwebglobe - AN Open Source Sdk for Creating Large-Scale Virtual Globes on a Webgl Basis

    NASA Astrophysics Data System (ADS)

    Loesch, B.; Christen, M.; Nebiker, S.

    2012-07-01

    This paper introduces the OpenWebGlobe project (www.openwebglobe.org) and the OpenWebGlobe SDK (Software Development Kit), an open source virtual globe environment using WebGL. Unlike other (web-based) 3d geovisualisation technologies and toolkits, the OpenWebGlobe SDK not only supports the content authoring and web visualization aspects, but also the data processing functionality for generating multi-terabyte terrain, image, map and 3d point cloud data sets in high-performance and cloud-based parallel computing environments. The OpenWebGlobe architecture is described and the paper outlines the processing and viewer functionality provided by the OpenWebGlobe SDK. It then discusses the generation and updating of a global 3d base map using OpenStreetMap data and finally presents two showcases employing the technology: a) implementing an interactive national 3d geoportal incorporating high-resolution national geodata sets, and b) implementing a 3d geoinformation service supporting the real-time incorporation of 3d point cloud data.

  15. Automated tracing of open-field coronal structures for an optimized large-scale magnetic field reconstruction

    NASA Astrophysics Data System (ADS)

    Uritsky, V. M.; Davila, J. M.; Jones, S. I.

    2014-12-01

    Solar Probe Plus and Solar Orbiter will provide detailed measurements in the inner heliosphere magnetically connected with the topologically complex and eruptive solar corona. Interpretation of these measurements will require accurate reconstruction of the large-scale coronal magnetic field. In a related presentation by S. Jones et al., we argue that such reconstruction can be performed using photospheric extrapolation methods constrained by white-light coronagraph images. Here, we present the image-processing component of this project dealing with an automated segmentation of fan-like coronal loop structures. In contrast to the existing segmentation codes designed for detecting small-scale closed loops in the vicinity of active regions, we focus on the large-scale geometry of the open-field coronal features observed at significant radial distances from the solar surface. The coronagraph images used for the loop segmentation are transformed into a polar coordinate system and undergo radial detrending and initial noise reduction. The preprocessed images are subject to an adaptive second order differentiation combining radial and azimuthal directions. An adjustable thresholding technique is applied to identify candidate coronagraph features associated with the large-scale coronal field. A blob detection algorithm is used to extract valid features and discard noisy data pixels. The obtained features are interpolated using higher-order polynomials which are used to derive empirical directional constraints for magnetic field extrapolation procedures based on photospheric magnetograms.
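The detection stages described above (polar remapping, radial detrending, second-order differentiation, adaptive thresholding) can be caricatured in a few lines. This is a toy sketch on synthetic data, not the authors' code; the function name and the 2-sigma threshold are our own choices:

```python
import numpy as np

def detect_radial_features(polar_img, nsigma=2.0):
    """Toy version of the segmentation stage: input is a coronagraph
    image already remapped to polar coordinates (rows = radius,
    cols = position angle).
    1) radial detrend: subtract the mean brightness at each radius,
    2) second derivative across azimuth to enhance thin radial features,
    3) adaptive threshold at nsigma standard deviations."""
    detrended = polar_img - polar_img.mean(axis=1, keepdims=True)
    d2 = np.gradient(np.gradient(detrended, axis=1), axis=1)
    score = -d2            # bright ridges have negative azimuthal curvature
    thresh = score.mean() + nsigma * score.std()
    return score > thresh

# Synthetic image: background noise plus one bright radial streak
img = np.random.default_rng(0).normal(0.0, 0.1, (100, 180))
img[:, 30] += 5.0
mask = detect_radial_features(img)
frac_on_streak = mask[:, 30].mean()   # most streak pixels are recovered
```

A production pipeline would add the noise reduction, blob-based feature validation, and polynomial fitting steps the abstract mentions.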

  16. Symbolic Formulation of Large-scale Open-loop Multibody Systems for Vibration Analysis Using Absolute Joint Coordinates

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Chen, Xuedong; Luo, Xin; Huang, Qingjiu

    A novel symbolic formulation is presented to model dynamics of large-scale open-loop holonomic multibody systems, by using absolute joint coordinates and via matrix transformation, instead of solving constraint equations. The resulting minimal set of second-order linear ordinary differential equations (ODEs) can be used for linear vibration analysis and control directly. The ODEs are generated in three steps. Firstly, a set of linearized ODEs are formulated in terms of absolute coordinates without considering any constraint. Secondly, an overall transform matrix representing constraint topology for the entire constrained system is generated. Finally, matrices for a minimal set of ODEs for the open-loop holonomic multibody system are obtained via matrix transformation. The correctness and efficiency of the presented algorithm are verified by numerical experiments on various cases of holonomic multibody systems with different open-loop topologies, including chain topology and tree topology. It is indicated that the proposed method can significantly improve efficiency without losing computational accuracy.
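The three-step procedure above reduces, for the linearized case, to a congruence transformation of the unconstrained mass and stiffness matrices by the overall constraint-topology transform matrix. A minimal numeric sketch (the toy system and constraint are hypothetical):

```python
import numpy as np

def reduce_odes(M, K, T):
    """Minimal-coordinate ODE matrices via an overall transform matrix T
    mapping minimal coordinates q to absolute coordinates x = T q:
    the unconstrained linearized system  M x'' + K x = 0  projects to
        (T^T M T) q'' + (T^T K T) q = 0."""
    return T.T @ M @ T, T.T @ K @ T

# Toy open-loop chain: two unit masses; the second body is rigidly
# attached to the first, so the single minimal coordinate is q = x1.
M = np.eye(2)
K = np.array([[2.0, -1.0],
              [-1.0, 1.0]])
T = np.array([[1.0],
              [1.0]])              # x1 = q, x2 = q
Mr, Kr = reduce_odes(M, K, T)      # 1x1 reduced matrices
```

The resulting second-order linear ODEs in q can then be used directly for vibration analysis, as the abstract describes, without solving constraint equations at run time.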

  17. Lars Onsager Prize Talk: 1+1d conformal field theories as natural languages for asymptotically large-scale quantum computing

    NASA Astrophysics Data System (ADS)

    Friedan, Daniel

    2010-03-01

    An abstract argument is offered that the ideal physical systems for asymptotically large-scale quantum computers are near-critical quantum circuits, critical in the bulk, whose bulk universality classes are described by 1+1d conformal field theories. One in particular -- the Monster conformal field theory -- is especially ideal, because all of its bulk couplings are irrelevant.

  18. Agent-based Large-Scale Emergency Evacuation Using Real-Time Open Government Data

    SciTech Connect

    Lu, Wei; Liu, Cheng; Bhaduri, Budhendra L

    2014-01-01

    The open government initiatives have provided tremendous data resources for transportation systems and emergency services in urban areas. This paper proposes a traffic simulation framework that uses high-temporal-resolution demographic data and real-time open government data for evacuation planning and operation. A comparison study using real-world data from Seattle, Washington is conducted to evaluate the framework's accuracy and evacuation efficiency. The successful simulations of the selected area prove the concept of taking advantage of open government data, open-source data, and high-resolution demographic data in the emergency management domain. Two aspects of the parameters are considered in this study: the user equilibrium (UE) conditions of the traffic assignment model (simple non-UE vs. iterative UE) and the temporal resolution of the data (daytime vs. nighttime). Evacuation arrival rate, average travel time, and computation time are adopted as measures of effectiveness (MOEs) for the evacuation performance analysis. The temporal resolution of the demographic data has significant impacts on urban transportation dynamics during evacuation scenarios. Better evacuation performance estimation can be achieved by integrating both non-UE and UE scenarios. The new framework shows flexibility in implementing different evacuation strategies and accuracy in evacuation performance. The use of this framework can be extended to day-to-day traffic assignment to support daily traffic operations.

  19. Novel approach for extinguishing large-scale coal fires using gas-liquid foams in open pit mines.

    PubMed

    Lu, Xinxiao; Wang, Deming; Qin, Botao; Tian, Fuchao; Shi, Guangyi; Dong, Shuaijun

    2015-12-01

    Coal fires are a serious threat to worker safety and safe production in open pit mines. Coal fire sources are hidden and numerous, and large-area cavities are prevalent in the coal seam after the coal has burned, making conventional extinguishing technology difficult to apply. Foams are considered an efficient means of fire extinguishment in these large-scale workplaces. A novel foam preparation method is introduced, and an original design of a cavitation jet device is proposed to add foaming agent stably. Jet cavitation occurs when the water flow rate and pressure ratio reach specified values. Through a self-built foaming system, high-performance foams are produced and then infused into the blast drilling holes at a large flow rate. Without complicated operation, this system is found to be very suitable for extinguishing large-scale coal fires. Field application shows that foam generated using the proposed key technology achieves a good fire extinguishing effect. The temperature reduction using foams is 6-7 times greater than with water, and the CO concentration is reduced from 9.43 to 0.092‰ in the drilling hole. The coal fires are controlled successfully in open pit mines, ensuring normal production as well as the safety of personnel and equipment.

  20. Sustainability of Open-Source Software Organizations as Underpinning for Sustainable Interoperability on Large Scales

    NASA Astrophysics Data System (ADS)

    Fulker, D. W.; Gallagher, J. H. R.

    2015-12-01

    OPeNDAP's Hyrax data server is an open-source framework fostering interoperability via easily-deployed Web services. Compatible with solutions listed in the (PA001) session description—federation, rigid standards and brokering/mediation—the framework can support tight or loose coupling, even with dependence on community-contributed software. Hyrax is a Web-services framework with a middleware-like design and a handler-style architecture that together reduce the interoperability challenge (for N datatypes and M user contexts) to an O(N+M) problem, similar to brokering. Combined with an open-source ethos, this reduction makes Hyrax a community tool for gaining interoperability. E.g., in its response to the Big Earth Data Initiative (BEDI), NASA references OPeNDAP-based interoperability. Assuming its suitability, the question becomes: how sustainable is OPeNDAP, a small not-for-profit that produces open-source software, i.e., has no software sales? In other words, if geoscience interoperability depends on OPeNDAP and similar organizations, are those entities in turn sustainable? Jim Collins (in Good to Great) highlights three questions that successful companies can answer (paraphrased here): What is your passion? Where is your world-class excellence? What drives your economic engine? We attempt to shed light on OPeNDAP sustainability by examining these. Passion: OPeNDAP has a focused passion for improving the effectiveness of scientific data sharing and use, as deeply-cooperative community endeavors. Excellence: OPeNDAP has few peers in remote, scientific data access. Skills include computer science with experience in data science, (operational, secure) Web services, and software design (for servers and clients, where the latter vary from Web pages to standalone apps and end-user programs). Economic Engine: OPeNDAP is an engineering services organization more than a product company, despite software being key to OPeNDAP's reputation. In essence, provision of
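The O(N+M) reduction mentioned in this abstract comes from routing every datatype through one common data model: N format handlers plus M response services replace N×M pairwise converters. A schematic sketch (all names here are hypothetical, not Hyrax's actual API):

```python
# N datatype handlers: each maps its native format into one shared,
# in-memory data model (here, a toy dict of variable names).
def netcdf_handler(raw):
    return {"vars": raw.split(",")}

def hdf_handler(raw):
    return {"vars": raw.split(";")}

# M response services: each renders the shared model for one user context.
def json_service(model):
    return str(model["vars"])

def csv_service(model):
    return ",".join(model["vars"])

HANDLERS = {"nc": netcdf_handler, "h5": hdf_handler}
SERVICES = {"json": json_service, "csv": csv_service}

def respond(fmt, raw, out):
    """Any of the N x M (format, output) combinations is served by
    composing one handler with one service -- N + M components total."""
    return SERVICES[out](HANDLERS[fmt](raw))

out = respond("h5", "t;p;rh", "csv")
```

Adding a new datatype (or a new output context) then means writing one new component rather than M (or N) new converters, which is what makes the architecture broker-like.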

  1. Leveraging human oversight and intervention in large-scale parallel processing of open-source data

    NASA Astrophysics Data System (ADS)

    Casini, Enrico; Suri, Niranjan; Bradshaw, Jeffrey M.

    2015-05-01

    The popularity of cloud computing, along with the increased availability of cheap storage, has led to the need to process and transform large volumes of open-source data in parallel. One way to handle such extensive volumes of information properly is to take advantage of distributed computing frameworks like MapReduce. Unfortunately, an entirely automated approach that excludes human intervention is often unpredictable and error prone. Highly accurate data processing and decision-making can be achieved by supporting an automatic process through human collaboration, in a variety of environments such as warfare, cyber security and threat monitoring. Although this mutual participation seems easily exploitable, human-machine collaboration in the field of data analysis presents several challenges. First, due to the asynchronous nature of human intervention, it is necessary to verify that once a correction is made, all the necessary reprocessing is done along the chain. Second, it is often necessary to minimize the amount of reprocessing in order to optimize the usage of resources due to limited availability. In order to address these strict requirements, this paper introduces improvements to an innovative approach for human-machine collaboration in the processing of large amounts of open-source data in parallel.

  2. Open source large-scale high-resolution environmental modelling with GEMS

    NASA Astrophysics Data System (ADS)

    Baarsma, Rein; Alberti, Koko; Marra, Wouter; Karssenberg, Derek

    2016-04-01

    Many environmental, topographic and climate data sets are freely available at a global scale, creating the opportunity to run environmental models for every location on Earth. Collection of the data necessary to do this and the consequent conversion into a useful format is very demanding, however, not to mention the computational demand of a model itself. We developed GEMS (Global Environmental Modelling System), an online application to run environmental models on various scales directly in your browser and share the results with other researchers. GEMS is open-source and uses open-source platforms including Flask, Leaflet, GDAL, MapServer and the PCRaster-Python modelling framework to process spatio-temporal models in real time. With GEMS, users can write, run, and visualize the results of dynamic PCRaster-Python models in a browser. GEMS uses freely available global data to feed the models, and automatically converts the data to the relevant model extent and data format. Currently available data includes the SRTM elevation model, a selection of monthly vegetation data from MODIS, land use classifications from GlobCover, historical climate data from WorldClim, HWSD soil information from WorldGrids, population density from SEDAC and near real-time weather forecasts, most with a ±100m resolution. Furthermore, users can add other or their own datasets using a web coverage service or a custom data provider script. With easy access to a wide range of base datasets and without the data preparation that is usually necessary to run environmental models, building and running a model becomes a matter of hours. Furthermore, it is easy to share the resulting maps, time-series data or model scenarios with other researchers through a web mapping service (WMS). GEMS can be used to provide open access to model results. Additionally, environmental models in GEMS can be employed by users with no extensive experience with writing code, which is for example valuable for using models

  3. Partition-of-unity finite-element method for large scale quantum molecular dynamics on massively parallel computational platforms

    SciTech Connect

    Pask, J E; Sukumar, N; Guney, M; Hu, W

    2011-02-28

    Over the course of the past two decades, quantum mechanical calculations have emerged as a key component of modern materials research. However, the solution of the required quantum mechanical equations is a formidable task and this has severely limited the range of materials systems which can be investigated by such accurate, quantum mechanical means. The current state of the art for large-scale quantum simulations is the planewave (PW) method, as implemented in the now-ubiquitous VASP, ABINIT, and QBox codes, among many others. However, since the PW method uses a global Fourier basis, with strictly uniform resolution at all points in space, and in which every basis function overlaps every other at every point, it suffers from substantial inefficiencies in calculations involving atoms with localized states, such as first-row and transition-metal atoms, and requires substantial nonlocal communications in parallel implementations, placing critical limits on scalability. In recent years, real-space methods such as finite differences (FD) and finite elements (FE) have been developed to address these deficiencies by reformulating the required quantum mechanical equations in a strictly local representation. However, while addressing both resolution and parallel-communications problems, such local real-space approaches have been plagued by one key disadvantage relative to planewaves: excessive degrees of freedom (grid points, basis functions) needed to achieve the required accuracies. And so, despite critical limitations, the PW method remains the standard today. In this work, we show for the first time that this key remaining disadvantage of real-space methods can in fact be overcome: by building known atomic physics into the solution process using modern partition-of-unity (PU) techniques in finite element analysis. Indeed, our results show order-of-magnitude reductions in basis size relative to state-of-the-art planewave-based methods. The method developed here is

  4. Design of a large-scale vertical open-structure cylindrical shield employing magnetic shaking

    NASA Astrophysics Data System (ADS)

    Sasada, Ichiro; Paperno, Eugene; Koide, Hiroyuki

    2000-05-01

    The shield developed consists of four concentric magnetic shells positioned on the outer surfaces of paper pipes of ˜2.7 m length, ˜1 cm thickness, and with outer diameters of 67, 72, 82.2, and 97.4 cm, respectively. The first (innermost) shell is a Permalloy shell of 2.1 mm thickness and 1.8 m length. The second, third, and fourth shells are made of ˜50 mm wide, ˜22 μm thick Metglas 2705M amorphous ribbons. The second shell, which is a 2.2 m long helical structure, consists of 48 layers of Metglas ribbon divided into four equal sections by ˜1 cm thick flexible Styrofoam sheets. The third shell, 2.43 m in length, and fourth shell, 2.7 m in length, consist of 26 and 30 layers, respectively. A thin polyethylene film is tightly wound on each section of the second shell as well as on the third and fourth shells. It increases the friction between the Metglas ribbons and prevents them from sliding down; there is no foreign material in between the layers of the ribbon. All shells are enclosed by toroidal coils which are used to demagnetize the Permalloy shell and to apply magnetic shaking to the amorphous magnetic shells. The gross weight of the shield is ˜400 kg including ˜65 kg of Permalloy and ˜110 kg of Metglas. A ˜10^5 transverse shielding factor and a relatively large ˜380 axial shielding factor, despite the effect of the openings, are achieved for a 10 μT external field in the extremely low frequency region. The measured shaking leakage and magnetic noise field strengths at the shield's center are less than 1 nT. At these low field strengths, it is possible to operate highly sensitive SQUID magnetometers for biomagnetic measurements.

  5. Using CyberShake Workflows to Manage Big Seismic Hazard Data on Large-Scale Open-Science HPC Resources

    NASA Astrophysics Data System (ADS)

    Callaghan, S.; Maechling, P. J.; Juve, G.; Vahi, K.; Deelman, E.; Jordan, T. H.

    2015-12-01

    The CyberShake computational platform, developed by the Southern California Earthquake Center (SCEC), is an integrated collection of scientific software and middleware that performs 3D physics-based probabilistic seismic hazard analysis (PSHA) for Southern California. CyberShake integrates large-scale and high-throughput research codes to produce probabilistic seismic hazard curves for individual locations of interest and hazard maps for an entire region. A recent CyberShake calculation produced about 500,000 two-component seismograms for each of 336 locations, resulting in over 300 million synthetic seismograms in a Los Angeles-area probabilistic seismic hazard model. CyberShake calculations require a series of scientific software programs. Early computational stages produce data used as inputs by later stages, so we describe CyberShake calculations using a workflow definition language. Scientific workflow tools automate and manage the input and output data and enable remote job execution on large-scale HPC systems. To satisfy the requests of broad impact users of CyberShake data, such as seismologists, utility companies, and building code engineers, we successfully completed CyberShake Study 15.4 in April and May 2015, calculating a 1 Hz urban seismic hazard map for Los Angeles. We distributed the calculation between the NSF Track 1 system NCSA Blue Waters, the DOE Leadership-class system OLCF Titan, and USC's Center for High Performance Computing. This study ran for over 5 weeks, burning about 1.1 million node-hours and producing over half a petabyte of data. The CyberShake Study 15.4 results doubled the maximum simulated seismic frequency from 0.5 Hz to 1.0 Hz as compared to previous studies, representing a factor of 16 increase in computational complexity. We will describe how our workflow tools supported splitting the calculation across multiple systems. We will explain how we modified CyberShake software components, including GPU implementations and

  6. Why we need a large-scale open metadata initiative in health informatics - a vision paper on open data models for clinical phenotypes.

    PubMed

    Dugas, Martin

    2013-01-01

    Clinical phenotypes are very complex and not well described. For instance, more than 100,000 biomedical concepts are needed to describe the clinical properties of patients. At present, information systems dealing with clinical phenotype data are based on secret, heterogeneous and incompatible data models. This is the root cause of the well-known grand challenge of semantic interoperability in healthcare: data exchange and analysis between medical information systems has major limitations. This problem slows down medical progress and wastes the time of health care professionals. A large-scale open metadata initiative can foster exchange, discussion and consensus regarding data models for clinical phenotypes. This would be an important contribution to improving information systems in healthcare and to solving the grand challenge of semantic interoperability. PMID:23920688

  7. Three-Dimensional Architecture at Chip Level for Large-Scale-Integration of Superconducting Quantum Electronic Devices

    NASA Astrophysics Data System (ADS)

    Göppl, Martin; Kurpiers, Philipp; Wallraff, Andreas

    We propose a novel way to realize three-dimensional circuit QED systems at chip level. System components such as qubits, transmission lines, capacitors, inductors or cross-overs can be implemented as suspended, electromagnetically shielded and, optionally, hermetically sealed structures. Compared to known state-of-the-art devices, the volumes of dielectrics penetrated by electromagnetic fields can be drastically reduced. Our intention is to harness process technologies for very-large-scale integration, reliably applied and improved over decades in the micro-sensor and semiconductor industries, for the realization of highly integrated circuit QED systems. Process capabilities are demonstrated by fabricating first exploratory devices using the back-end-of-line part of a commercial 180 nm CMOS foundry process in conjunction with HF vapor-phase release etching.

  8. Large-scale GaP-on-diamond integrated photonics platform for NV center-based quantum information

    NASA Astrophysics Data System (ADS)

    Gould, Michael; Chakravarthi, Srivatsa; Christen, Ian R.; Thomas, Nicole; Dadgostar, Shabnam; Song, Yuncheng; Lee, Minjoo Larry; Hatami, Fariba; Fu, Kai-Mei C.

    2016-03-01

    We present chip-scale transmission measurements for three key components of a GaP-on-diamond integrated photonics platform: waveguide-coupled disk resonators, directional couplers, and grating couplers. We also present proof-of-principle measurements demonstrating nitrogen-vacancy (NV) center emission coupled into selected devices. The demonstrated device performance, uniformity and yield place the platform in a strong position to realize measurement-based quantum information protocols utilizing the NV center in diamond.

  9. GATECloud.net: a platform for large-scale, open-source text processing on the cloud.

    PubMed

    Tablan, Valentin; Roberts, Ian; Cunningham, Hamish; Bontcheva, Kalina

    2013-01-28

    Cloud computing is increasingly being regarded as a key enabler of the 'democratization of science', because on-demand, highly scalable cloud computing facilities enable researchers anywhere to carry out data-intensive experiments. In the context of natural language processing (NLP), algorithms tend to be complex, which makes their parallelization and deployment on cloud platforms a non-trivial task. This study presents a new, unique, cloud-based platform for large-scale NLP research--GATECloud.net. It enables researchers to carry out data-intensive NLP experiments by harnessing the vast, on-demand compute power of the Amazon cloud. Important infrastructural issues are dealt with by the platform, completely transparently for the researcher: load balancing, efficient data upload and storage, deployment on the virtual machines, security and fault tolerance. We also include a cost-benefit analysis and usage evaluation. PMID:23230155

  10. Early signatures of large-scale field line opening. Multi-wavelength analysis of features connected with a "halo" CME event

    NASA Astrophysics Data System (ADS)

    Pohjolainen, S.; Vilmer, N.; Khan, J. I.; Hillaris, A. E.

    2005-04-01

    A fast "halo"-type coronal mass ejection (CME) associated with a two-ribbon flare, GOES class M 1.3, was observed on February 8, 2000. Soft X-ray and EUV images revealed several loop ejections and one wave-like moving front that started from a remote location, away from the flare core region. A radio type-II burst was observed near the trajectory of the moving soft X-ray front, although association with the CME itself cannot be ruled out. Large-scale dimmings were observed in EUV and soft X-rays, both in the form of disappearing transequatorial loops. We can pinpoint the time and the location of the first large-scale field-line opening by tracing the electron propagation paths above the active region and along the transequatorial loop system, in which large-scale mass depletion later took place. The immediate start of a type-IV burst (interpreted as an upward moving structure) which was located over a soft X-ray dimming region, confirms that the CME had lifted off. We compare these signatures with those of another halo CME event observed on May 2, 1998, and discuss the possible connections with the "magnetic breakout" model.

  12. Large scale infrared imaging of tissue micro arrays (TMAs) using a tunable Quantum Cascade Laser (QCL) based microscope.

    PubMed

    Bassan, Paul; Weida, Miles J; Rowlette, Jeremy; Gardner, Peter

    2014-08-21

Chemical imaging in the field of vibrational spectroscopy is developing into a promising tool to complement digital histopathology. Applications include screening of biopsy tissue via automated recognition of tissue/cell type and disease state based on the chemical information in the spectrum. For integration into clinical practice, data acquisition needs to be sped up so that a rack-based system can be implemented in which specimens are rapidly imaged, competing with current visible-light scanners that can scan hundreds of slides overnight. Fourier transform infrared (FTIR) imaging with focal plane array (FPA) detectors is currently the state-of-the-art instrumentation for infrared absorption chemical imaging; however, recent developments in broadly tunable mid-IR lasers make them the most promising candidate for next-generation microscopes. In this paper we test a prototype quantum cascade laser (QCL) based spectral imaging microscope with a focus on discrete frequency chemical imaging. We demonstrate how a protein chemical image of the amide I band (1655 cm⁻¹) of a 2 × 2.4 cm² breast tissue microarray (TMA) containing over 200 cores can be measured in 9 min. This result indicates that applications requiring chemical images at a few key wavelengths would be ideally served by laser-based microscopes. PMID:24965124

  13. Large-scale atomistic and quantum-mechanical simulations of a Nafion membrane: Morphology, proton solvation and charge transport

    PubMed Central

    Komarov, Pavel V; Khokhlov, Alexei R

    2013-01-01

Atomistic and first-principles molecular dynamics simulations are employed to investigate structure formation in a hydrated Nafion membrane and the solvation and transport of protons in the water channel of the membrane. For water/Nafion systems containing more than 4 million atoms, it is found that the observed microphase-segregated morphology can be classified as bicontinuous: both the majority (hydrophobic) and minority (hydrophilic) subphases are 3D continuous and organized in an irregular ordered pattern, largely similar to that known for a bicontinuous double-diamond structure. The characteristic size of the connected hydrophilic channels is about 25–50 Å, depending on the water content. A thermodynamic decomposition of the potential of mean force and the calculated spectral densities of the hindered translational motions of cations reveal that the ion association observed with decreasing temperature is largely an entropic effect related to the loss of low-frequency modes. Based on the results from the atomistic simulation of the morphology of Nafion, we developed a realistic model of an ion-conducting hydrophilic channel within the Nafion membrane and studied it with quantum molecular dynamics. Extensive 120 ps-long density functional theory (DFT)-based simulations of charge migration in the 1200-atom model of the nanochannel, consisting of Nafion chains and water molecules, allowed us to observe the bimodality of the van Hove autocorrelation function, which provides direct evidence of the Grotthuss bond-exchange (hopping) mechanism as a significant contributor to the proton conductivity. PMID:24205452

  14. Large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Doolin, B. F.

    1975-01-01

    Classes of large scale dynamic systems were discussed in the context of modern control theory. Specific examples discussed were in the technical fields of aeronautics, water resources and electric power.

  15. A new technology of CO2 supplementary for microalgae cultivation on large scale - A spraying absorption tower coupled with an outdoor open runway pond.

    PubMed

    Zhang, Chun-Dan; Li, Wei; Shi, Yun-Hai; Li, Yuan-Guang; Huang, Jian-Ke; Li, Hong-Xia

    2016-06-01

An effective CO2 supply system, a spraying absorption tower combined with an outdoor open raceway pond (ORWP), was developed for photoautotrophic microalgae cultivation. The microalgae yield, productivity and CO2 fixation efficiency were investigated and compared with those of the bubbling method. The maximum biomass yield and productivity reached 0.927 g L⁻¹ and 0.114 g L⁻¹ day⁻¹, respectively. The CO2 fixation efficiency by microalgae with the spraying tower reached 50%, versus only 11.17% for the bubbling method. Pure CO2 can be used in the spraying absorption tower, and the gas flow rate was only about one third of that in bubbling cultivation. These results show that this new, quantifiably controlled CO2 supply method can meet the requirements of large-scale microalgae cultivation.

  16. MausDB: An open source application for phenotype data and mouse colony management in large-scale mouse phenotyping projects

    PubMed Central

    Maier, Holger; Lengger, Christoph; Simic, Bruno; Fuchs, Helmut; Gailus-Durner, Valérie; Hrabé de Angelis, Martin

    2008-01-01

    Background Large-scale, comprehensive and standardized high-throughput mouse phenotyping has been established as a tool of functional genome research by the German Mouse Clinic and others. In all these projects, vast amounts of data are continuously generated and need to be stored, prepared for data-mining procedures and eventually be made publicly available. Thus, central storage and integrated management of mouse phenotype data, genotype data, metadata and linked external data are highly important. Requirements most probably depend on the individual mouse housing unit or project and the demand for either very specific individual database solutions or very flexible solutions that can be easily adapted to local demands. Not every group has the resources and/or the know-how to develop software for this purpose. A database application has been developed for the German Mouse Clinic in order to meet all requirements mentioned above. Results We present MausDB, the German Mouse Clinic web-based database application that integrates standard mouse colony management, phenotyping workflow scheduling features and mouse phenotyping result data management. It links mouse phenotype data with genotype data, metadata and external data such as public web databases, which is a prerequisite for comprehensive data analysis and mining. We describe how this can be achieved with a lean and user-friendly system built on open standards. Conclusion MausDB is suited for large-scale, high-throughput phenotyping facilities but can also be used exclusively for mouse colony management within smaller units or projects. The system is successfully used as the primary mouse and data management tool of the German Mouse Clinic and other mouse facilities. We offer MausDB to the scientific community as open source software to provide a system for storage of data from functional genomics projects in a well-structured, easily accessible form. PMID:18366799

  17. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but will unfortunately also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied in detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  18. Perturbative approach to Markovian open quantum systems

    NASA Astrophysics Data System (ADS)

    Li, Andy C. Y.; Petruccione, F.; Koch, Jens

    2014-05-01

    The exact treatment of Markovian open quantum systems, when based on numerical diagonalization of the Liouville super-operator or averaging over quantum trajectories, is severely limited by Hilbert space size. Perturbation theory, standard in the investigation of closed quantum systems, has remained much less developed for open quantum systems where a direct application to the Lindblad master equation is desirable. We present such a perturbative treatment which will be useful for an analytical understanding of open quantum systems and for numerical calculation of system observables which would otherwise be impractical.
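The "numerical diagonalization of the Liouville super-operator" that this abstract contrasts with perturbation theory can be sketched in a few lines. The following is a minimal illustration for a driven, damped qubit; the Hamiltonian, jump operator and parameter values are assumptions chosen for demonstration, not taken from the paper:

```python
import numpy as np

# Driven, damped qubit (illustrative parameters): H = 0.5 * Omega * sigma_x,
# with a single Lindblad jump operator sqrt(gamma) * sigma_minus.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # lowering operator
I2 = np.eye(2, dtype=complex)

H = 0.5 * sx               # drive (hbar = 1, Omega = 1)
L_op = np.sqrt(0.2) * sm   # decay at rate gamma = 0.2

def liouvillian(H, L):
    """Lindblad superoperator acting on column-stacked rho (order='F')."""
    comm = -1j * (np.kron(I2, H) - np.kron(H.T, I2))
    LdL = L.conj().T @ L
    diss = (np.kron(L.conj(), L)
            - 0.5 * np.kron(I2, LdL)
            - 0.5 * np.kron(LdL.T, I2))
    return comm + diss

Lsup = liouvillian(H, L_op)
evals, evecs = np.linalg.eig(Lsup)   # exact diagonalization (4 x 4 here)

# The steady state is the eigenvector whose eigenvalue is (numerically) zero.
idx = np.argmin(np.abs(evals))
rho_ss = evecs[:, idx].reshape(2, 2, order='F')
rho_ss = rho_ss / np.trace(rho_ss)
print(np.real(np.diag(rho_ss)))      # steady-state populations
```

For a Hilbert space of dimension n the superoperator is n² × n², which is exactly the Hilbert-space-size limitation the abstract refers to.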

  19. Quasiequilibria in open quantum systems

    SciTech Connect

    Walls, Jamie D.

    2010-03-15

In this work, the steady state or quasiequilibrium resulting from periodically modulating the Liouvillian of an open quantum system, L̂(t), is investigated. It is shown that differences between the quasiequilibrium and the instantaneous equilibrium occur due to nonadiabatic contributions from the gauge field connecting the instantaneous eigenstates of L̂(t) to a fixed basis. These nonadiabatic contributions are shown to result in an additional rotation and/or depolarization for a single spin-1/2 in a time-dependent magnetic field, and to affect the thermal mixing of two coupled spins interacting with a time-dependent magnetic field.

  20. Duality quantum algorithm efficiently simulates open quantum systems

    NASA Astrophysics Data System (ADS)

    Wei, Shi-Jie; Ruan, Dong; Long, Gui-Lu

    2016-07-01

Because of inevitable coupling with the environment, nearly all practical quantum systems are open systems, whose evolution is not necessarily unitary. In this paper, we propose a duality quantum algorithm for simulating the Hamiltonian evolution of an open quantum system. In contrast to the unitary evolution in a usual quantum computer, the evolution operator in a duality quantum computer is a linear combination of unitary operators. In this duality quantum algorithm, the time evolution of the open quantum system is realized using Kraus operators, which are naturally implemented in a duality quantum computer. This duality quantum algorithm has two distinct advantages over existing quantum simulation algorithms with unitary evolution operations. First, the query complexity of the algorithm is O(d³), in contrast to O(d⁴) for existing unitary simulation algorithms, where d is the dimension of the open quantum system. Second, by using a truncated Taylor series of the evolution operators, this duality quantum algorithm provides an exponential improvement in precision compared with previous unitary simulation algorithms.
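The Kraus-operator evolution that the algorithm implements can be illustrated with a small classical simulation. The amplitude-damping channel below is an assumed example of a Kraus map, not the paper's duality-computer realization:

```python
import numpy as np

# Amplitude-damping channel: Kraus operators K0, K1 satisfying
# K0^dag K0 + K1^dag K1 = I (trace preservation).
p = 0.3  # decay probability per step (illustrative)
K0 = np.array([[1, 0], [0, np.sqrt(1 - p)]], dtype=complex)
K1 = np.array([[0, np.sqrt(p)], [0, 0]], dtype=complex)

def apply_channel(rho, kraus):
    """rho -> sum_k K_k rho K_k^dag: a completely positive, trace-preserving map."""
    return sum(K @ rho @ K.conj().T for K in kraus)

rho = np.array([[0, 0], [0, 1]], dtype=complex)  # start in the excited state
for _ in range(5):
    rho = apply_channel(rho, [K0, K1])

print(rho[1, 1].real)  # excited-state population decays as (1 - p)**5
```

Any Markovian open-system step can be written in this operator-sum form; the duality computer's contribution is implementing the (generally nonunitary) sum of terms directly as a linear combination of unitaries.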

  4. Large Scale IR Evaluation

    ERIC Educational Resources Information Center

    Pavlu, Virgil

    2008-01-01

    Today, search engines are embedded into all aspects of digital world: in addition to Internet search, all operating systems have integrated search engines that respond even as you type, even over the network, even on cell phones; therefore the importance of their efficacy and efficiency cannot be overstated. There are many open possibilities for…

  5. Control of the quantum open system via quantum generalized measurement

    SciTech Connect

    Zhang Ming; Zhu Xiaocai; Li Xingwei; Hu Dewen; Dai Hongyi

    2006-03-15

For any specified pure state of a quantum open system, we can construct a kind of quantum generalized measurement (QGM) such that the state of the system after measurement is deterministically collapsed into the specified pure state from any initial state. In other words, any pure state of a quantum open system is reachable by QGM. Subsequently, whether the qubit is density-matrix controllable is discussed in the case of pure dephasing. Our results reveal that combining QGM with coherent control enhances the ability to control the quantum open system. Furthermore, it is found that the ability to perform QGM on the quantum open system, combined with coherent control and the conditions of a decoherence-free subspace, allows us to suppress quantum decoherence.

  6. Mechanism for quantum speedup in open quantum systems

    NASA Astrophysics Data System (ADS)

    Liu, Hai-Bin; Yang, W. L.; An, Jun-Hong; Xu, Zhen-Yu

    2016-02-01

The quantum speed limit (QSL) time for an open system characterizes the most efficient response of the system to environmental influences. Previous results showed that non-Markovianity governs the quantum speedup. By studying the dynamics of a dissipative two-level system, we reveal that the non-Markovian effect is only the dynamical manifestation of the quantum speedup, while the formation of system-environment bound states is its essential cause. Attributing the quantum speedup to this energy-spectrum character supplies another vital path for experiments to identify when quantum speedup shows up, without any dynamical calculations. The potential experimental observation of our quantum speedup mechanism in the circuit QED system is discussed. Our results may be of both theoretical and experimental interest in exploring the ultimate QSL in realistic environments, and may open new perspectives for devising active quantum speedup devices.

  7. Open Quantum Walks: a short introduction

    NASA Astrophysics Data System (ADS)

    Sinayskiy, Ilya; Petruccione, Francesco

    2013-06-01

The concept of open quantum walks (OQWs), quantum walks driven exclusively by the interaction with the external environment, is reviewed. OQWs are formulated as discrete completely positive maps on graphs. The basic properties of OQWs are summarised, and new examples of OQWs on ℤ and their simulation by means of quantum trajectories are presented.
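A minimal simulation of an OQW on ℤ, in the spirit of the examples reviewed: the walker hops left or right via transition operators acting on a 2-dimensional internal space. The specific operators below are an illustrative choice satisfying the trace-preservation condition B_L†B_L + B_R†B_R = I, not necessarily those of the paper:

```python
import numpy as np

# Transition operators on the 2-dim internal space, chosen so that
# B_L^dag B_L + B_R^dag B_R = I (the OQW map is trace preserving).
B_R = np.array([[1, 1], [0, 0]], dtype=complex) / np.sqrt(2)
B_L = np.array([[0, 0], [1, -1]], dtype=complex) / np.sqrt(2)

n_steps = 20
sites = 2 * n_steps + 1            # lattice sites -n_steps .. n_steps
rho = [np.zeros((2, 2), dtype=complex) for _ in range(sites)]
rho[n_steps] = np.eye(2) / 2       # walker at the origin, maximally mixed coin

for _ in range(n_steps):
    new = [np.zeros((2, 2), dtype=complex) for _ in range(sites)]
    for i in range(sites):
        if i + 1 < sites:          # contribution hopping left from site i+1
            new[i] += B_L @ rho[i + 1] @ B_L.conj().T
        if i - 1 >= 0:             # contribution hopping right from site i-1
            new[i] += B_R @ rho[i - 1] @ B_R.conj().T
    rho = new

probs = np.array([np.trace(r).real for r in rho])
print(probs.sum())                 # total probability is conserved (~ 1)
```

Tracing out the internal state gives the site-occupation distribution; unlike a unitary quantum walk, the statistics here are generated entirely by the dissipative maps B_L, B_R.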

  8. Repeated interactions in open quantum systems

    SciTech Connect

    Bruneau, Laurent; Joye, Alain; Merkli, Marco

    2014-07-15

    Analyzing the dynamics of open quantum systems has a long history in mathematics and physics. Depending on the system at hand, basic physical phenomena that one would like to explain are, for example, convergence to equilibrium, the dynamics of quantum coherences (decoherence) and quantum correlations (entanglement), or the emergence of heat and particle fluxes in non-equilibrium situations. From the mathematical physics perspective, one of the main challenges is to derive the irreversible dynamics of the open system, starting from a unitary dynamics of the system and its environment. The repeated interactions systems considered in these notes are models of non-equilibrium quantum statistical mechanics. They are relevant in quantum optics, and more generally, serve as a relatively well treatable approximation of a more difficult quantum dynamics. In particular, the repeated interaction models allow to determine the large time (stationary) asymptotics of quantum systems out of equilibrium.

  9. Large-scale fibre-array multiplexing

    SciTech Connect

    Cheremiskin, I V; Chekhlova, T K

    2001-05-31

The possibility of creating a fibre multiplexer/demultiplexer with large-scale multiplexing, without any basic restrictions on the number of channels or the spectral spacing between them, is shown. The operating capacity of a fibre multiplexer based on a four-fibre array ensuring a spectral spacing of 0.7 pm (≈ 10 GHz) between channels is demonstrated. (laser applications and other topics in quantum electronics)

  10. Quantum Entanglement and Quantum Discord in Gaussian Open Systems

    SciTech Connect

    Isar, Aurelian

    2011-10-03

In the framework of the theory of open systems based on completely positive quantum dynamical semigroups, we give a description of continuous-variable quantum entanglement and quantum discord for a system consisting of two noninteracting modes embedded in a thermal environment. Entanglement and discord are used to quantify the quantum correlations of the system. For all values of the temperature of the thermal reservoir, an initially separable Gaussian state remains separable for all times. In the case of an entangled initial Gaussian state, entanglement suppression (entanglement sudden death) takes place for non-zero temperatures of the environment. Only for a zero-temperature thermal bath does the initial entangled state remain entangled for all finite times. We analyze the time evolution of the Gaussian quantum discord, which is a measure of all quantum correlations in the bipartite state, including entanglement, and show that quantum discord decays asymptotically in time under the effect of the thermal bath.
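The entanglement-sudden-death behaviour described here can be reproduced with a short covariance-matrix calculation. The conventions, rates and thermal-channel form below are illustrative assumptions rather than the paper's exact model: evolve a two-mode squeezed vacuum under a Markovian thermal channel and track the smallest symplectic eigenvalue of the partially transposed covariance matrix, which signals entanglement while it stays below 1:

```python
import numpy as np

def tmsv_cov(r):
    """Covariance matrix of a two-mode squeezed vacuum (vacuum variance = 1)."""
    c, s = np.cosh(2 * r), np.sinh(2 * r)
    Z = np.diag([1.0, -1.0])
    return np.block([[c * np.eye(2), s * Z],
                     [s * Z, c * np.eye(2)]])

def min_sympl_eig_pt(sigma):
    """Smallest symplectic eigenvalue of the partial transpose (PPT test)."""
    P = np.diag([1.0, 1.0, 1.0, -1.0])        # partial transpose flips p_2
    J = np.array([[0.0, 1.0], [-1.0, 0.0]])
    Omega = np.block([[J, np.zeros((2, 2))], [np.zeros((2, 2)), J]])
    st = P @ sigma @ P
    return np.min(np.abs(np.linalg.eigvals(1j * Omega @ st)))

# Markovian thermal channel on the covariance matrix (rate g and mean thermal
# occupation n_bar are illustrative):
#   sigma(t) = e^{-g t} sigma(0) + (1 - e^{-g t}) (2 n_bar + 1) I
sigma0, g, n_bar = tmsv_cov(0.5), 1.0, 0.5
for t in [0.0, 0.2, 1.0]:
    w = np.exp(-g * t)
    sigma_t = w * sigma0 + (1 - w) * (2 * n_bar + 1) * np.eye(4)
    nu = min_sympl_eig_pt(sigma_t)
    print(t, nu, "entangled" if nu < 1 else "separable")
```

With n_bar = 0 (a zero-temperature bath) nu approaches 1 only asymptotically under this channel, matching the abstract's statement that entanglement survives all finite times at zero temperature, while any n_bar > 0 produces sudden death at a finite time.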

  11. Large-scale synthesis of high quality InP quantum dots in a continuous flow-reactor under supercritical conditions.

    PubMed

    Ippen, Christian; Schneider, Benjamin; Pries, Christopher; Kröpke, Stefan; Greco, Tonino; Holländer, Andreas

    2015-02-27

    The synthesis of indium phosphide quantum dots (QDs) in toluene under supercritical conditions was carried out in a macroscopic continuous flow reaction system. The results of first experiments are reported in comparison with analogous reactions in octadecene. The reaction system is described and details are provided about special procedures that are enabled by the continuous flow system for the screening of reaction conditions. The produced QDs show very narrow emission peaks with full width at half maximum down to 45 nm and reasonable photoluminescence quantum yields. The subsequent purification process is facilitated by the ease of removal of toluene, and the productivity of the system is increased by high temperature and high pressure conditions.

  12. Large scale synthesis of graphene quantum dots (GQDs) from waste biomass and their use as an efficient and selective photoluminescence on-off-on probe for Ag(+) ions.

    PubMed

    Suryawanshi, Anil; Biswal, Mandakini; Mhamane, Dattakumar; Gokhale, Rohan; Patil, Shankar; Guin, Debanjan; Ogale, Satishchandra

    2014-10-21

    Graphene quantum dots (GQDs) are synthesized from bio-waste and are further modified to produce amine-terminated GQDs (Am-GQDs) which have higher dispersibility and photoluminescence intensity than those of GQDs. A strong fluorescence quenching of Am-GQDs (switch-off) is observed for a number of metal ions, but only for the Ag(+) ions is the original fluorescence regenerated (switch-on) upon addition of L-cysteine.

  13. Quantum computing Hyper Terahertz Facility opens

    NASA Astrophysics Data System (ADS)

    Singh Chadha, Kulvinder

    2016-01-01

    A new facility has opened at the University of Surrey to use terahertz radiation for quantum computing. The Hyper Terahertz Facility (HTF) is a joint collaboration between the University of Surrey and the National Physical Laboratory (NPL).

  14. Open-System Quantum Annealing in Mean-Field Models with Exponential Degeneracy*

    NASA Astrophysics Data System (ADS)

    Kechedzhi, Kostyantyn; Smelyanskiy, Vadim N.

    2016-04-01

Real-life quantum computers are inevitably affected by intrinsic noise resulting in dissipative nonunitary dynamics realized by these devices. We consider an open-system quantum annealing algorithm optimized for such a realistic analog quantum device, which takes advantage of noise-induced thermalization and relies on incoherent quantum tunneling at finite temperature. We theoretically analyze the performance of this algorithm for a p-spin model that allows for a mean-field quasiclassical solution and, at the same time, demonstrates a first-order phase transition and exponential degeneracy of states, characteristics typical of spin glasses. We demonstrate that the finite-temperature effects introduced by the noise are particularly important for the dynamics in the presence of the exponential degeneracy of metastable states. We determine the optimal regime of the open-system quantum annealing algorithm for this model and find that it can outperform simulated annealing in a range of parameters. Large-scale multiqubit quantum tunneling is instrumental for the quantum speedup in this model, which is possible because of the unusual nonmonotonic temperature dependence of the quantum-tunneling action in this model, where the most efficient transition rate corresponds to zero temperature. This model calculation is the first analytically tractable example in which an open-system quantum annealing algorithm outperforms simulated annealing, and it can, in principle, be realized using an analog quantum computer.

  15. Zeno dynamics in quantum open systems

    PubMed Central

    Zhang, Yu-Ran; Fan, Heng

    2015-01-01

The quantum Zeno effect shows that frequent observations can slow down or even stop the unitary time evolution of an unstable quantum system. This effect can also be regarded as a physical consequence of the statistical indistinguishability of neighboring quantum states. The accessibility of quantum Zeno dynamics under unitary time evolution can be quantitatively estimated by the quantum Zeno time in terms of Fisher information. In this work, we investigate the accessibility of quantum Zeno dynamics in quantum open systems by calculating the noisy Fisher information when a trace-preserving and completely positive map is assumed. We first study the consequences of non-Markovian noise on the quantum Zeno effect and give the exact forms of the dissipative Fisher information and the quantum Zeno time. Then, for the operator-sum representation, an achievable upper bound on the quantum Zeno time is given with the help of results from noisy quantum metrology. Significantly, noise that reduces the accuracy of entanglement-enhanced parameter estimation can, conversely, be favorable for the accessibility of quantum Zeno dynamics of entangled states. PMID:26099840
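The basic Zeno slowdown underlying this work can be demonstrated in a few lines of Python for a pure-state qubit with ideal projective measurements; this is an illustrative toy model, not the paper's open-system treatment:

```python
import numpy as np

def survival(theta, n):
    """P(qubit still found in |0>) when a rotation by total angle theta
    is split into n segments, each followed by a projection onto |0>."""
    step = theta / n
    U = np.array([[np.cos(step / 2), -np.sin(step / 2)],
                  [np.sin(step / 2),  np.cos(step / 2)]])  # rotation about y
    P0 = np.array([[1.0, 0.0], [0.0, 0.0]])                # projector onto |0>
    psi = np.array([1.0, 0.0])
    p = 1.0
    for _ in range(n):
        psi = U @ psi
        p *= psi[0] ** 2               # probability of the |0> outcome
        psi = P0 @ psi
        nrm = np.linalg.norm(psi)
        if nrm == 0:                   # state fully flipped; survival is zero
            return 0.0
        psi = psi / nrm                # renormalize after the projection
    return p

# A pi rotation flips the qubit if left alone, but frequent measurement
# freezes it in |0>: the quantum Zeno effect.
print(survival(np.pi, 2), survival(np.pi, 10), survival(np.pi, 100))
```

The survival probability is cos^(2n)(theta / 2n), which tends to 1 as n grows: infinitely frequent observation stops the evolution entirely.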

  17. Structural and optical characteristics of graphene quantum dots size-controlled and well-aligned on a large scale by polystyrene-nanosphere lithography

    NASA Astrophysics Data System (ADS)

    Duck Oh, Si; Kim, Jungkil; Lee, Dae Hun; Kim, Ju Hwan; Jang, Chan Wook; Kim, Sung; Choi, Suk-Ho

    2016-01-01

Graphene quantum dots (GQDs) are one of the most attractive graphene nanostructures due to their potential optoelectronic device applications, but it remains a challenge to accurately control the size and arrangement of GQDs. In this report, we fabricate well-aligned GQDs over a large area by polystyrene (PS) nanosphere (NS) lithography and study their structural and optical properties. Single-layer graphene grown on a Cu foil by chemical vapour deposition is patterned by reactive ion etching employing aligned PS-NS arrays as an etching mask. The size (d) of the GQDs is controlled from 75 to 23 nm by varying the etching time, as confirmed by scanning electron microscopy and atomic force microscopy. This method works well for both rigid and flexible target substrates, and even for multilayer graphene formed by piling up single layers. The absorption peak of the GQDs is blue-shifted with respect to that of a graphene sheet, and is shifted sequentially to higher energies as d is reduced, consistent with the quantum confinement effect (QCE). The Raman D-to-G band intensity ratio shows an almost monotonic increase with decreasing d, resulting from the dominant contribution of the edge states at the periphery of smaller GQDs. The G-band frequency shows a three-step size dependence: an initial increase, interim saturation, and final decrease with decreasing d, thought to be caused by competition between the QCE and an edge-induced strain effect.

  19. Large-scale nanophotonic phased array.

    PubMed

    Sun, Jie; Timurdogan, Erman; Yaacobi, Ami; Hosseini, Ehsan Shah; Watts, Michael R

    2013-01-10

    Electromagnetic phased arrays at radio frequencies are well known and have enabled applications ranging from communications to radar, broadcasting and astronomy. The ability to generate arbitrary radiation patterns with large-scale phased arrays has long been pursued. Although it is extremely expensive and cumbersome to deploy large-scale radiofrequency phased arrays, optical phased arrays have a unique advantage in that the much shorter optical wavelength holds promise for large-scale integration. However, the short optical wavelength also imposes stringent requirements on fabrication. As a consequence, although optical phased arrays have been studied with various platforms and recently with chip-scale nanophotonics, all of the demonstrations so far are restricted to one-dimensional or small-scale two-dimensional arrays. Here we report the demonstration of a large-scale two-dimensional nanophotonic phased array (NPA), in which 64 × 64 (4,096) optical nanoantennas are densely integrated on a silicon chip within a footprint of 576 μm × 576 μm with all of the nanoantennas precisely balanced in power and aligned in phase to generate a designed, sophisticated radiation pattern in the far field. We also show that active phase tunability can be realized in the proposed NPA by demonstrating dynamic beam steering and shaping with an 8 × 8 array. This work demonstrates that a robust design, together with state-of-the-art complementary metal-oxide-semiconductor technology, allows large-scale NPAs to be implemented on compact and inexpensive nanophotonic chips. In turn, this enables arbitrary radiation pattern generation using NPAs and therefore extends the functionalities of phased arrays beyond conventional beam focusing and steering, opening up possibilities for large-scale deployment in applications such as communication, laser detection and ranging, three-dimensional holography and biomedical sciences, to name just a few.
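    The beam steering described above comes down to applying a linear phase ramp across the array elements. A minimal numpy sketch (with an assumed pitch, wavelength and array size chosen for illustration, not the fabricated device's values) shows how the far-field array factor forms a steered main lobe:

    ```python
    import numpy as np

    # Far-field array factor of a small optical phased array (illustrative
    # sketch, not the device in the paper): uniform amplitudes, programmable
    # per-element phases.
    N = 8                      # 8 x 8 array, as in the dynamic-steering demo
    pitch = 3e-6               # antenna pitch in metres (assumed)
    wavelength = 1.55e-6       # telecom wavelength (assumed)
    k = 2 * np.pi / wavelength

    # Steer the main lobe to angle theta0 with a linear phase ramp.
    theta0 = np.deg2rad(5.0)
    n = np.arange(N)
    phase = -k * pitch * np.sin(theta0) * n       # per-column phase shift

    # Evaluate the 1-D array factor over observation angles.
    theta = np.linspace(-np.pi / 12, np.pi / 12, 2001)
    af = np.abs(np.exp(1j * (k * pitch * np.sin(theta)[:, None] * n + phase)).sum(axis=1))

    peak_angle = theta[np.argmax(af)]             # main lobe sits near theta0
    ```

    The peak of the array factor lands at the programmed steering angle, with magnitude N when all elements add in phase.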

  20. Quantum-chemistry based calibration of the alkali metal cation series (Li(+)-Cs(+)) for large-scale polarizable molecular mechanics/dynamics simulations.

    PubMed

    Dudev, Todor; Devereux, Mike; Meuwly, Markus; Lim, Carmay; Piquemal, Jean-Philip; Gresh, Nohad

    2015-02-15

    The alkali metal cations in the series Li(+)-Cs(+) act as major partners in a diversity of biological processes and in bioinorganic chemistry. In this article, we present the results of their calibration in the context of the SIBFA polarizable molecular mechanics/dynamics procedure. It relies on quantum-chemistry (QC) energy-decomposition analyses of their monoligated complexes with representative O-, N-, S-, and Se- ligands, performed with the aug-cc-pVTZ(-f) basis set at the Hartree-Fock level. Close agreement with QC is obtained for each individual contribution, even though the calibration involves only a limited set of cation-specific parameters. This agreement is preserved in tests on polyligated complexes with four and six O- ligands, water and formamide, indicating the transferability of the procedure. Preliminary extensions to density functional theory calculations are reported.

  1. Femtosecond laser ablation of highly oriented pyrolytic graphite: a green route for large-scale production of porous graphene and graphene quantum dots

    NASA Astrophysics Data System (ADS)

    Russo, Paola; Hu, Anming; Compagnini, Giuseppe; Duley, Walter W.; Zhou, Norman Y.

    2014-01-01

    Porous graphene (PG) and graphene quantum dots (GQDs) are attracting attention due to their potential applications in photovoltaics, catalysis, and bio-related fields. We present a novel way for mass production of these promising materials. The femtosecond laser ablation of highly oriented pyrolytic graphite (HOPG) is employed for their synthesis. Porous graphene (PG) layers were found to float at the water-air interface, while graphene quantum dots (GQDs) were dispersed in the solution. The sheets consist of one to six stacked layers of spongy graphene, which form an irregular 3D porous structure that displays pores with an average size of 15-20 nm. Several characterization techniques have confirmed the porous nature of the collected layers. The analyses of the aqueous solution confirmed the presence of GQDs with dimensions of about 2-5 nm. It is found that the formation of both PG and GQDs depends on the fs-laser ablation energy. At laser fluences less than 12 J cm-2, no evidence of either PG or GQDs is detected. However, polyynes with six and eight carbon atoms per chain are found in the solution. For laser energies in the 20-30 J cm-2 range, these polyynes disappeared, while PG and GQDs were found at the water-air interface and in the solution, respectively. The origin of these materials can be explained based on the mechanisms for water breakdown and coal gasification. The absence of PG and GQDs, after the laser ablation of HOPG in liquid nitrogen, confirms the proposed mechanisms.

  2. Femtosecond laser ablation of highly oriented pyrolytic graphite: a green route for large-scale production of porous graphene and graphene quantum dots.

    PubMed

    Russo, Paola; Hu, Anming; Compagnini, Giuseppe; Duley, Walter W; Zhou, Norman Y

    2014-02-21

    Porous graphene (PG) and graphene quantum dots (GQDs) are attracting attention due to their potential applications in photovoltaics, catalysis, and bio-related fields. We present a novel way for mass production of these promising materials. The femtosecond laser ablation of highly oriented pyrolytic graphite (HOPG) is employed for their synthesis. Porous graphene (PG) layers were found to float at the water-air interface, while graphene quantum dots (GQDs) were dispersed in the solution. The sheets consist of one to six stacked layers of spongy graphene, which form an irregular 3D porous structure that displays pores with an average size of 15-20 nm. Several characterization techniques have confirmed the porous nature of the collected layers. The analyses of the aqueous solution confirmed the presence of GQDs with dimensions of about 2-5 nm. It is found that the formation of both PG and GQDs depends on the fs-laser ablation energy. At laser fluences less than 12 J cm(-2), no evidence of either PG or GQDs is detected. However, polyynes with six and eight carbon atoms per chain are found in the solution. For laser energies in the 20-30 J cm(-2) range, these polyynes disappeared, while PG and GQDs were found at the water-air interface and in the solution, respectively. The origin of these materials can be explained based on the mechanisms for water breakdown and coal gasification. The absence of PG and GQDs, after the laser ablation of HOPG in liquid nitrogen, confirms the proposed mechanisms.

  4. Adiabatic Quantum Search in Open Systems

    NASA Astrophysics Data System (ADS)

    Wild, Dominik S.; Gopalakrishnan, Sarang; Knap, Michael; Yao, Norman Y.; Lukin, Mikhail D.

    2016-10-01

    Adiabatic quantum algorithms represent a promising approach to universal quantum computation. In isolated systems, a key limitation to such algorithms is the presence of avoided level crossings, where gaps become extremely small. In open quantum systems, the fundamental robustness of adiabatic algorithms remains unresolved. Here, we study the dynamics near an avoided level crossing associated with the adiabatic quantum search algorithm, when the system is coupled to a generic environment. At zero temperature, we find that the algorithm remains scalable provided the noise spectral density of the environment decays sufficiently fast at low frequencies. By contrast, higher order scattering processes render the algorithm inefficient at any finite temperature regardless of the spectral density, implying that no quantum speedup can be achieved. Extensions and implications for other adiabatic quantum algorithms will be discussed.
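    The avoided crossing discussed above can be made concrete with the standard closed-system adiabatic Grover Hamiltonian H(s) = (1-s)(I - |ψ0⟩⟨ψ0|) + s(I - |m⟩⟨m|), whose minimum gap scales as 1/√N. The sketch below checks this toy model numerically (it is not the paper's open-system analysis):

    ```python
    import numpy as np

    # Minimum spectral gap of the adiabatic Grover search Hamiltonian
    # H(s) = (1-s)(I - |psi0><psi0|) + s(I - |m><m|).
    N = 64                                  # database size
    psi0 = np.full(N, 1 / np.sqrt(N))       # uniform initial state
    m = 0                                   # marked item
    P0 = np.outer(psi0, psi0)
    Pm = np.zeros((N, N)); Pm[m, m] = 1.0
    I = np.eye(N)

    s_grid = np.linspace(0, 1, 501)
    gaps = []
    for s in s_grid:
        H = (1 - s) * (I - P0) + s * (I - Pm)
        w = np.linalg.eigvalsh(H)
        gaps.append(w[1] - w[0])            # ground to first-excited gap

    min_gap = min(gaps)                     # avoided crossing at s = 1/2
    ```

    The minimum gap equals 1/√N at s = 1/2, which is the avoided level crossing whose robustness to an environment the paper analyzes.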

  5. Quantum Simulation for Open-System Dynamics

    NASA Astrophysics Data System (ADS)

    Wang, Dong-Sheng; de Oliveira, Marcos Cesar; Berry, Dominic; Sanders, Barry

    2013-03-01

    Simulations are essential for predicting and explaining properties of physical and mathematical systems yet so far have been restricted to classical and closed quantum systems. Although forays have been made into open-system quantum simulation, the strict algorithmic aspect has not been explored yet is necessary to account fully for resource consumption to deliver bounded-error answers to computational questions. An open-system quantum simulator would encompass classical and closed-system simulation and also solve outstanding problems concerning, e.g. dynamical phase transitions in non-equilibrium systems, establishing long-range order via dissipation, verifying the simulatability of open-system dynamics on a quantum Turing machine. We construct an efficient autonomous algorithm for designing an efficient quantum circuit to simulate many-body open-system dynamics described by a local Hamiltonian plus decoherence due to separate baths for each particle. The execution time and number of gates for the quantum simulator both scale polynomially with the system size. DSW funded by USARO. MCO funded by AITF and Brazilian agencies CNPq and FAPESP through Instituto Nacional de Ciencia e Tecnologia-Informacao Quantica (INCT-IQ). DWB funded by ARC Future Fellowship (FT100100761). BCS funded by AITF, CIFAR, NSERC and USARO.
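    The kind of open-system dynamics such a simulator targets can, for very small systems, be integrated classically from the Lindblad master equation. The sketch below is a generic single-qubit example with an assumed drive and dephasing rate (not the authors' circuit construction), illustrating trace preservation and purity loss:

    ```python
    import numpy as np

    # One qubit driven by H = (1/2) sigma_x while dephasing through a single
    # Lindblad operator L = sqrt(g) sigma_z, integrated with 4th-order RK.
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    H = 0.5 * sx
    g = 0.2                                  # dephasing rate (assumed)

    def lindblad_rhs(rho):
        # drho/dt = -i[H, rho] + g (sz rho sz - rho)   (since sz^2 = I)
        return -1j * (H @ rho - rho @ H) + g * (sz @ rho @ sz - rho)

    rho = np.array([[1, 0], [0, 0]], dtype=complex)  # start in |0><0|
    dt, steps = 1e-3, 5000
    for _ in range(steps):
        k1 = lindblad_rhs(rho)
        k2 = lindblad_rhs(rho + 0.5 * dt * k1)
        k3 = lindblad_rhs(rho + 0.5 * dt * k2)
        k4 = lindblad_rhs(rho + dt * k3)
        rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    trace = np.trace(rho).real               # stays 1: trace-preserving map
    purity = np.trace(rho @ rho).real        # decays below 1 under dephasing
    ```

    Classical integration like this scales exponentially with particle number, which is exactly why the paper's polynomial-resource quantum algorithm matters.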

  6. Very Large Scale Integration (VLSI).

    ERIC Educational Resources Information Center

    Yeaman, Andrew R. J.

    Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…

  7. Galaxy clustering on large scales.

    PubMed

    Efstathiou, G

    1993-06-01

    I describe some recent observations of large-scale structure in the galaxy distribution. The best constraints come from two-dimensional galaxy surveys and studies of angular correlation functions. Results from galaxy redshift surveys are much less precise but are consistent with the angular correlations, provided the distortions in mapping between real-space and redshift-space are relatively weak. The galaxy two-point correlation function, rich-cluster two-point correlation function, and galaxy-cluster cross-correlation function are all well described on large scales (≳ 20 h^-1 Mpc, where the Hubble constant H0 = 100 h km s^-1 Mpc^-1; 1 pc = 3.09 × 10^16 m) by the power spectrum of an initially scale-invariant, adiabatic, cold-dark-matter Universe with Γ = Ωh ≈ 0.2. I discuss how this fits in with the Cosmic Background Explorer (COBE) satellite detection of large-scale anisotropies in the microwave background radiation and other measures of large-scale structure in the Universe.

  8. Metazoan meiofauna in deep-sea canyons and adjacent open slopes: A large-scale comparison with focus on the rare taxa

    NASA Astrophysics Data System (ADS)

    Bianchelli, S.; Gambi, C.; Zeppilli, D.; Danovaro, R.

    2010-03-01

    Metazoan meiofaunal abundance, total biomass, nematode size and the richness of taxa were investigated along bathymetric gradients (from the shelf break down to ca. 5000-m depth) in six submarine canyons and on five adjacent open slopes in three deep-sea regions. The investigated areas were distributed along more than 2500 km, from the Portuguese to the Catalan and South Adriatic margins. The Portuguese and Catalan margins displayed the highest abundances, biomass and richness of taxa, while the lowest values were observed in the Central Mediterranean Sea. The comparison between canyons and the nearby open slopes showed a lack of significant differences in terms of meiofaunal abundance and biomass at any sampling depth. In most canyons and on most slopes, meiofaunal variables did not display consistent bathymetric patterns. Conversely, we found that the different topographic features were apparently responsible for significant differences in the abundance and distribution of the rare meiofaunal taxa (i.e. taxa accounting for <1% of total meiofaunal abundance). Several taxa belonging to the temporary meiofauna, such as larvae/juveniles of Priapulida, Holothuroidea, Ascidiacea and Cnidaria, were encountered exclusively on open slopes, while others (including the Tanaidacea and Echinoidea larvae) were found exclusively in canyon sediments. The results reported here indicate that, at large spatial scales, differences in deep-sea meiofaunal abundance and biomass are controlled not only by the available food sources, but also by region- or habitat-specific topographic features, which apparently play a key role in the distribution of rare benthic taxa.

  9. Quantum state engineering in hybrid open quantum systems

    NASA Astrophysics Data System (ADS)

    Joshi, Chaitanya; Larson, Jonas; Spiller, Timothy P.

    2016-04-01

    We investigate the possibility of generating nonclassical states in light-matter coupled noisy quantum systems, namely, the anisotropic Rabi and Dicke models. In these hybrid quantum systems, the competing influence of coherent internal dynamics and environment-induced dissipation drives the system into nonequilibrium steady states (NESSs). Explicitly, for the anisotropic Rabi model, the steady state is given by an incoherent mixture of two states of opposite parities, but as each parity state displays light-matter entanglement, we find that the full state is entangled as well. Furthermore, as a natural extension of the anisotropic Rabi model to an infinite spin subsystem, we next explore the NESS of the anisotropic Dicke model. The NESS of this linearized Dicke model is also an inseparable state of light and matter. With the aim of enriching the dynamics beyond the sustainable entanglement found for the NESS of these hybrid quantum systems, we also propose an all-optical feedback strategy for quantum state protection and for establishing quantum control in these systems. Our present work further elucidates the relevance of such hybrid open quantum systems for potential applications in quantum architectures.

  10. Open quantum systems and error correction

    NASA Astrophysics Data System (ADS)

    Shabani Barzegar, Alireza

    Quantum effects can be harnessed to manipulate information in a desired way. Quantum systems designed for this purpose suffer from harmful interactions with their surrounding environment and from inaccuracies in control forces. Engineering methods to combat errors in quantum devices is therefore in high demand. In this thesis, I focus on realistic formulations of quantum error correction methods, where a realistic formulation is one that incorporates experimental challenges. This thesis is presented in two parts: open quantum systems and quantum error correction. Chapters 2 and 3 cover open quantum system theory, since it is essential to first study a noise process before contemplating methods to cancel its effect. In the second chapter, I present the non-completely positive formulation of quantum maps. Most of these results are published in [Shabani and Lidar, 2009b,a], except a subsection on the geometric characterization of the positivity domain of a quantum map. The real-time formulation of the dynamics is the topic of the third chapter. After introducing the concept of the Markovian regime, a new post-Markovian quantum master equation is derived, published in [Shabani and Lidar, 2005a]. The quantum error correction part is presented in chapters 4 through 7. In chapter 4, we introduce a generalized theory of decoherence-free subspaces and subsystems (DFSs), which do not require accurate initialization (published in [Shabani and Lidar, 2005b]). In chapter 5, we present a semidefinite program optimization approach to quantum error correction that yields codes and recovery procedures that are robust against significant variations in the noise channel. Our approach allows us to optimize the encoding, recovery, or both, and is amenable to approximations that significantly improve computational cost while retaining fidelity (see [Kosut et al., 2008] for a published version). Chapter 6 is devoted to a theory of quantum error correction (QEC

  11. A study on large scale cultivation of Microcystis aeruginosa under open raceway pond at semi-continuous mode for biodiesel production.

    PubMed

    Ashokkumar, Veeramuthu; Agila, Elango; Salam, Zainal; Ponraj, Mohanadoss; Din, Mohd Fadhil Md; Ani, Farid Nasir

    2014-11-01

    The study explores upstream and downstream processing of Microcystis aeruginosa for biodiesel production. The alga was isolated from a temple tank, acclimatized, and successfully mass cultivated in an open raceway pond in semi-continuous mode. A two-step combined process was designed that harvested 99.3% of the biomass; the daily dry biomass productivity was recorded at up to 28 g m^-2 day^-1. The lipid extraction was optimized to achieve a 21.3% yield; the physicochemical properties were characterized, showing 11.7% FFA, an iodine value of 72%, and 99.2% ester content. The lipid was transesterified by a two-step simultaneous process to produce biodiesel in 90.1% yield; the calorific value of the biodiesel was 38.8 MJ/kg. The physicochemical properties of the biodiesel were further characterized and found to be within the limits of the American ASTM D6751 standard. Based on the areal and volumetric biomass productivity estimates, M. aeruginosa can yield 84.1 tons of dry biomass ha^-1 year^-1.
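    As a back-of-envelope check, the reported areal productivity is consistent with the annual yield figure if one assumes roughly 300 productive days per year (our assumption; the abstract does not state the number of productive days):

    ```python
    # Convert the reported daily areal productivity into an annual per-hectare
    # yield, assuming ~300 productive days per year (assumption, not stated).
    daily = 28.0                # g dry biomass per m^2 per day
    m2_per_ha = 10_000          # square metres per hectare
    days = 300                  # assumed productive days per year
    tons_per_ha_year = daily * m2_per_ha * days / 1e6   # g -> tonnes
    ```

    This gives 84 t ha^-1 year^-1, close to the 84.1 t figure quoted in the abstract.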

  13. An integrated pipeline of open source software adapted for multi-CPU architectures: use in the large-scale identification of single nucleotide polymorphisms.

    PubMed

    Jayashree, B; Hanspal, Manindra S; Srinivasan, Rajgopal; Vigneshwaran, R; Varshney, Rajeev K; Spurthi, N; Eshwar, K; Ramesh, N; Chandra, S; Hoisington, David A

    2007-01-01

    The large amounts of EST sequence data available from a single species of an organism, as well as for several species within a genus, provide an easy source for the identification of intra- and interspecies single nucleotide polymorphisms (SNPs). In the case of model organisms, the available data are extensive, given the degree of redundancy in the deposited EST data. There are several available bioinformatics tools that can be used to mine these data; however, using them requires a certain level of expertise: the tools have to be run sequentially, with accompanying format conversions, and steps like clustering and assembly of sequences become time-intensive jobs even for moderately sized datasets. We report here a pipeline of open source software, extended to run on multiple CPU architectures, that can be used to mine large EST datasets for SNPs and to identify restriction sites for assaying the SNPs, so that cost-effective CAPS assays can be developed for SNP genotyping in genetics and breeding applications. At the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT), the pipeline has been implemented to run on a Paracel high-performance system consisting of four dual AMD Opteron processors running Linux with MPICH. The pipeline can be accessed through user-friendly web interfaces at http://hpc.icrisat.cgiar.org/PBSWeb and is available on request for academic use. We have validated the developed pipeline by mining chickpea ESTs for interspecies SNPs, developing CAPS assays for SNP genotyping, and confirming the restriction digestion pattern at the sequence level.
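    The SNP-to-CAPS idea at the heart of the pipeline can be illustrated with a toy example (hypothetical sequences and a single example restriction enzyme, not ICRISAT's actual data or tools): find polymorphic columns in an alignment, then test whether a known restriction site spans the SNP so digestion can distinguish the genotypes:

    ```python
    # Toy SNP mining + CAPS check on a tiny hand-made alignment.
    aligned = [
        "ATGGAATTCA",   # genotype A carries GAATTC (EcoRI site)
        "ATGGAATTCA",
        "ATGGATTTCA",   # genotype B: SNP A->T at position 5 destroys the site
    ]

    def find_snps(seqs):
        """Return alignment positions where the sequences disagree."""
        return [i for i in range(len(seqs[0]))
                if len({s[i] for s in seqs}) > 1]

    snps = find_snps(aligned)                       # polymorphic columns

    ECORI = "GAATTC"                                # example 6-cutter site
    def site_overlaps_snp(seq, pos, site=ECORI):
        """True if an occurrence of the restriction site spans position pos."""
        for start in range(max(0, pos - len(site) + 1), pos + 1):
            if seq[start:start + len(site)] == site:
                return True
        return False

    # EcoRI cuts genotype A but not genotype B -> a scoreable CAPS marker.
    caps_candidate = site_overlaps_snp(aligned[0], snps[0])
    ```

    The real pipeline does the same conceptual steps at scale: cluster and assemble ESTs, call SNPs from the alignments, then screen candidate restriction sites for CAPS assay design.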

  14. Quantum localization in open chaotic systems.

    PubMed

    Ryu, Jung-Wan; Hur, G; Kim, Sang Wook

    2008-09-01

    We study a quasibound state of a delta-kicked rotor with absorbing boundaries, focusing on the nature of dynamical localization in open quantum systems. The localization lengths ξ of lossy quasibound states located near the absorbing boundaries decrease as they approach the boundary, while the corresponding decay rates Γ are dramatically enhanced. We find the relation ξ ∼ Γ^(-1/2) and explain it based upon finite-time diffusion, which can also be applied to a random unitary operator model. We conjecture that this idea is valid for systems exhibiting both diffusion in classical dynamics and exponential localization in quantum mechanics.
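    The underlying model is the standard quantum kicked rotor with amplitude removed beyond a momentum boundary. A minimal numpy sketch (with illustrative parameters chosen by us, not the paper's) shows the norm leaking into the continuum while the surviving state stays exponentially localized:

    ```python
    import numpy as np

    # Delta-kicked rotor with absorbing momentum boundaries: one period is a
    # kick (diagonal in angle) followed by free rotation (diagonal in
    # momentum); amplitude beyond the boundary is removed each period.
    hbar = 1.0
    K = 5.0                       # kick strength (classically chaotic regime)
    M = 256                       # momentum basis size
    n = np.arange(M) - M // 2     # momentum quantum numbers
    theta = 2 * np.pi * np.arange(M) / M

    U_kick = np.exp(-1j * K * np.cos(theta) / hbar)
    U_free = np.exp(-1j * hbar * n**2 / 2)
    absorb = np.abs(n) <= 50      # absorbing boundaries at |n| = 50

    psi = np.zeros(M, dtype=complex)
    psi[M // 2] = 1.0             # start at p = 0
    for _ in range(300):
        psi = np.fft.fft(U_kick * np.fft.ifft(psi))   # kick in angle space
        psi = U_free * psi                            # free evolution
        psi = np.where(absorb, psi, 0)                # absorb at boundaries

    survival = np.sum(np.abs(psi)**2)   # < 1: probability decays at rate Gamma
    prob = np.abs(psi)**2 / survival
    ipr = 1 / np.sum(prob**2)           # participation ratio ~ localization length
    ```

    The participation ratio stays far below the basis size, which is the dynamical localization that, combined with the absorbing boundary, produces the ξ ∼ Γ^(-1/2) relation studied in the paper.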

  15. Semiclassical wave functions for open quantum billiards.

    PubMed

    Lackner, Fabian; Březinová, Iva; Burgdörfer, Joachim; Libisch, Florian

    2013-08-01

    We present a semiclassical approximation to the scattering wave function Ψ(r,k) for an open quantum billiard, which is based on the reconstruction of the Feynman path integral. We demonstrate its remarkable numerical accuracy for the open rectangular billiard and show that the convergence of the semiclassical wave function to the full quantum state is controlled by the mean path length or, equivalently, the dwell time for a given scattering state. In the numerical implementation, a cutoff in the maximum path length or, equivalently, in the maximum dwell time τ_max implies a finite energy resolution ΔE ~ τ_max^(-1). Possible applications include leaky billiards and systems in which decoherence is present. PMID:24032910

  16. Open quantum systems and random matrix theory

    SciTech Connect

    Mulhall, Declan

    2014-10-15

    A simple model for open quantum systems is analyzed with random matrix theory (RMT). The system is coupled to the continuum in a minimal way. In this paper we examine the effect of opening the system on the level statistics; in particular, the level spacing, width distribution and Δ3(L) statistic are examined as a function of the strength of this coupling. The usual super-radiant state is observed, and it is seen that as it forms, the level spacing and Δ3(L) statistic exhibit the signatures of missed levels.
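    Minimal coupling to the continuum is conventionally modeled by an effective non-Hermitian Hamiltonian H_eff = H - (i/2) g V V^T, with H drawn from the GOE and V a single channel vector. The sketch below (a generic illustration with our own parameters, not necessarily the paper's exact ensemble) reproduces the super-radiant width collapse as the coupling grows:

    ```python
    import numpy as np

    # Effective-Hamiltonian model of an open system: H_eff = H - (i/2) g V V^T.
    rng = np.random.default_rng(0)
    N = 200
    A = rng.normal(size=(N, N))
    H = (A + A.T) / np.sqrt(2 * N)          # GOE matrix, semicircle radius ~2
    V = rng.normal(size=N)
    V /= np.linalg.norm(V)                   # one open decay channel

    def widths(g):
        Heff = H - 0.5j * g * np.outer(V, V)
        ev = np.linalg.eigvals(Heff)
        return np.sort(-2 * ev.imag)         # decay widths, ascending

    weak = widths(0.05)
    strong = widths(50.0)

    # Sum rule: total width equals g for any coupling (V is normalized).
    # At strong coupling one super-radiant state carries nearly all of it,
    # leaving N-1 narrow "trapped" states whose spacings look like a
    # spectrum with a missed level.
    frac_weak = weak[-1] / weak.sum()
    frac_strong = strong[-1] / strong.sum()
    ```

    At weak coupling the widths are spread over many states, while at strong coupling the largest width absorbs almost the entire sum: the super-radiance transition described in the abstract.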

  17. Displacement of large-scale open solar magnetic fields from the zone of active longitudes and the heliospheric storm of November 3-10, 2004: 2. "Explosion" of singularity and dynamics of sunspot formation and energy release

    NASA Astrophysics Data System (ADS)

    Ivanov, K. G.

    2010-12-01

    A more detailed scenario of one stage (August-November 2004) of the quasibiennial MHD process "Origination ... and dissipation of the four-sector structure of the solar magnetic field" during the decline phase of cycle 23 has been constructed. It has been indicated that the following working hypothesis on the propagation of an MHD disturbance westward (in the direction of solar rotation) and eastward (toward the zone of active longitudes) with the displacement of the large-scale open solar magnetic field (LOSMF) from this zone can be constructed based on LOSMF model representations and data on sunspot formation, flares, active filaments, and coronal ejections as well as on the estimated contribution of sporadic energy release to the flare luminosity and kinetic energy of ejections: (1) The "explosion" of the LOSMF singularity and the formation in the explosion zone of an anemone active region (AR), which produced the satellite sunspot formation that continued west and east of the "anemone," represented a powerful and energy-intensive source of MHD processes at this stage. (2) This resulted in the origination of two "governing" large-scale MHD processes, which regulated various usual manifestations of solar activity: the fast LOSMF along the neutral line in the solar atmosphere, strongly affecting the zone of active longitudes, and the slow LOSMF in the outer layers of the convection zone. The fronts of these processes were identified by powerful (about 10^31 erg) coronal ejections. (3) The collision of a wave reflected from the zone of active longitudes with the eastern front of the hydromagnetic impulse of the convection zone resulted in an increase in LOSMF magnetic fluxes, origination of an active sector boundary in the zone of active longitudes, shear-convergent motions, and generation and destabilization of the flare-productive AR 10696 responsible for the heliospheric storm of November 3-10, 2004.

  18. Large scale cluster computing workshop

    SciTech Connect

    Dane Skow; Alan Silverman

    2002-12-23

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters of a scale of 1000s of processors, to be used by 100s to 1000s of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and, by implication, to identify areas where some investment of money or effort is likely to be needed; (2) to compare and record experiences gained with such tools; (3) to produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP; and (4) to identify and connect groups with similar interests within HENP and the larger clustering community.

  19. Large Scale Magnetostrictive Valve Actuator

    NASA Technical Reports Server (NTRS)

    Richard, James A.; Holleman, Elizabeth; Eddleman, David

    2008-01-01

    Marshall Space Flight Center's Valves, Actuators and Ducts Design and Development Branch developed a large scale magnetostrictive valve actuator. The potential advantages of this technology are faster, more efficient valve actuators that consume less power, provide precise position control, and deliver higher flow rates than conventional solenoid valves. Magnetostrictive materials change dimensions when a magnetic field is applied; this property is referred to as magnetostriction. Magnetostriction is caused by the alignment of the magnetic domains in the material's crystalline structure with the applied magnetic field lines. Typically, the material changes shape by elongating in the axial direction and constricting in the radial direction, resulting in no net change in volume. All hardware and testing is complete. This paper will discuss: the potential applications of the technology; an overview of the as-built actuator design; problems that were uncovered during development testing; a review of the test data and an evaluation of weaknesses of the design; and areas for improvement for future work. This actuator holds promise as a low power, high load, proportionally controlled actuator for valves requiring 440 to 1500 newtons load.

  20. Large Scale Nanolaminate Deformable Mirror

    SciTech Connect

    Papavasiliou, A; Olivier, S; Barbee, T; Miles, R; Chang, K

    2005-11-30

    This work concerns the development of a technology that uses Nanolaminate foils to form light-weight, deformable mirrors that are scalable over a wide range of mirror sizes. While MEMS-based deformable mirrors and spatial light modulators have considerably reduced the cost and increased the capabilities of adaptive optic systems, there has not been a way to utilize the advantages of lithography and batch-fabrication to produce large-scale deformable mirrors. This technology is made scalable by using fabrication techniques and lithography that are not limited to the sizes of conventional MEMS devices. Like many MEMS devices, these mirrors use parallel plate electrostatic actuators. This technology replicates that functionality by suspending a horizontal piece of nanolaminate foil over an electrode by electroplated nickel posts. This actuator is attached, with another post, to another nanolaminate foil that acts as the mirror surface. Most MEMS devices are produced with integrated circuit lithography techniques that are capable of very small line widths, but are not scalable to large sizes. This technology is very tolerant of lithography errors and can use coarser, printed circuit board lithography techniques that can be scaled to very large sizes. These mirrors use small, lithographically defined actuators and thin nanolaminate foils allowing them to produce deformations over a large area while minimizing weight. This paper will describe a staged program to develop this technology. First-principles models were developed to determine design parameters. Three stages of fabrication will be described starting with a 3 x 3 device using conventional metal foils and epoxy to a 10-across all-metal device with nanolaminate mirror surfaces.
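    The parallel-plate electrostatic actuation described above obeys the classic spring-versus-electrostatics balance, with pull-in at one third of the gap. A short sketch with illustrative (assumed) dimensions, not the nanolaminate device's actual geometry, shows the pull-in voltage and a stable deflection below it:

    ```python
    import numpy as np

    # Parallel-plate electrostatic actuator: a plate of stiffness k suspended
    # over a gap g0 pulls in when deflection reaches g0/3, at the classic
    # voltage V_pi = sqrt(8 k g0^3 / (27 eps0 A)). All dimensions assumed.
    eps0 = 8.854e-12          # vacuum permittivity, F/m
    A = (200e-6) ** 2         # electrode area: 200 um square (assumed)
    g0 = 2e-6                 # initial gap, m (assumed)
    k = 10.0                  # effective spring constant, N/m (assumed)

    V_pullin = np.sqrt(8 * k * g0**3 / (27 * eps0 * A))

    def deflection(V):
        """Static deflection x solving k x = eps0 A V^2 / (2 (g0 - x)^2),
        by fixed-point iteration (converges on the stable branch)."""
        x = 0.0
        for _ in range(200):
            x = eps0 * A * V**2 / (2 * k * (g0 - x) ** 2)
        return x

    x_half = deflection(0.5 * V_pullin)   # stable, well below g0 / 3
    ```

    Operating below the pull-in voltage keeps the actuator on the stable branch, which is why such mirrors are driven over only a fraction of the electrode gap.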

  1. Large-Scale Information Systems

    SciTech Connect

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  2. Introducing Large-Scale Innovation in Schools

    NASA Astrophysics Data System (ADS)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-08-01

    Education reform initiatives tend to promise higher effectiveness in classrooms especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  3. Evolution of Quantum Entanglement in Open Systems

    SciTech Connect

    Isar, A.

    2010-08-04

    In the framework of the theory of open systems based on completely positive quantum dynamical semigroups, we give a description of the continuous-variable entanglement for a system consisting of two uncoupled harmonic oscillators interacting with a thermal environment. Using the Peres-Simon necessary and sufficient criterion for separability of two-mode Gaussian states, we show that for some values of the diffusion coefficient, dissipation constant and temperature of the environment, the state keeps its initial type for all times: separable or entangled. In other cases, entanglement generation, entanglement sudden death or a periodic collapse and revival of entanglement take place.
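
    The Peres-Simon criterion mentioned above can be evaluated directly from the covariance matrix of a two-mode Gaussian state. Below is a minimal sketch, assuming the convention with vacuum covariance I/2, so the state is entangled exactly when the smallest symplectic eigenvalue of the partially transposed covariance matrix drops below 1/2; the function names are my own, not from the paper.

```python
import numpy as np

def min_symplectic_eigenvalue_pt(A, B, C):
    """Smallest symplectic eigenvalue of the partially transposed
    covariance matrix V = [[A, C], [C.T, B]] of a two-mode Gaussian
    state (convention: vacuum covariance = I/2)."""
    V = np.block([[A, C], [C.T, B]])
    # Partial transposition flips the sign of det C (Simon's criterion)
    delta_tilde = np.linalg.det(A) + np.linalg.det(B) - 2 * np.linalg.det(C)
    nu_sq = (delta_tilde - np.sqrt(delta_tilde**2 - 4 * np.linalg.det(V))) / 2
    return np.sqrt(nu_sq)

def is_entangled(A, B, C, tol=1e-12):
    # Peres-Simon: entangled iff nu_tilde_minus < 1/2
    return min_symplectic_eigenvalue_pt(A, B, C) < 0.5 - tol

# Two-mode squeezed vacuum with squeezing r: entangled for any r > 0,
# with nu_tilde_minus = exp(-2r)/2
r = 0.5
A = B = 0.5 * np.cosh(2 * r) * np.eye(2)
C = 0.5 * np.sinh(2 * r) * np.diag([1.0, -1.0])
print(is_entangled(A, B, C))  # True

vac = 0.5 * np.eye(2)
print(is_entangled(vac, vac, np.zeros((2, 2))))  # vacuum is separable: False
```

    For states evolving under a thermal environment as in the abstract, one would recompute the covariance blocks at each time and track whether the minimal symplectic eigenvalue crosses 1/2.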

  4. Colloquium: Non-Markovian dynamics in open quantum systems

    NASA Astrophysics Data System (ADS)

    Breuer, Heinz-Peter; Laine, Elsi-Mari; Piilo, Jyrki; Vacchini, Bassano

    2016-04-01

    The dynamical behavior of open quantum systems plays a key role in many applications of quantum mechanics, examples ranging from fundamental problems, such as the environment-induced decay of quantum coherence and relaxation in many-body systems, to applications in condensed matter theory, quantum transport, quantum chemistry, and quantum information. In close analogy to a classical Markovian stochastic process, the interaction of an open quantum system with a noisy environment is often modeled phenomenologically by means of a dynamical semigroup with a corresponding time-independent generator in Lindblad form, which describes a memoryless dynamics of the open system typically leading to an irreversible loss of characteristic quantum features. However, in many applications open systems exhibit pronounced memory effects and a revival of genuine quantum properties such as quantum coherence, correlations, and entanglement. Here recent theoretical results on the rich non-Markovian quantum dynamics of open systems are discussed, paying particular attention to the rigorous mathematical definition, to the physical interpretation and classification, as well as to the quantification of quantum memory effects. The general theory is illustrated by a series of physical examples. The analysis reveals that memory effects of the open system dynamics reflect characteristic features of the environment which opens a new perspective for applications, namely, to exploit a small open system as a quantum probe signifying nontrivial features of the environment it is interacting with. This Colloquium further explores the various physical sources of non-Markovian quantum dynamics, such as structured environmental spectral densities, nonlocal correlations between environmental degrees of freedom, and correlations in the initial system-environment state, in addition to developing schemes for their local detection. Recent experiments addressing the detection, quantification, and control of
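
    The memoryless Lindblad semigroup that the Colloquium contrasts with non-Markovian dynamics can be illustrated on a single decaying qubit. The sketch below (conventions and names are my own, not from the Colloquium) builds the time-independent generator as a matrix on column-stacked density matrices and exponentiates it, exhibiting the irreversible exponential loss of excitation.

```python
import numpy as np
from scipy.linalg import expm

def liouvillian(H, L, gamma):
    """Lindblad generator as a matrix acting on vec(rho),
    using the column-stacking identity vec(A X B) = kron(B.T, A) vec(X)."""
    I = np.eye(H.shape[0])
    LdL = L.conj().T @ L
    comm = -1j * (np.kron(I, H) - np.kron(H.T, I))        # -i[H, rho]
    diss = gamma * (np.kron(L.conj(), L)                   # L rho L^dag
                    - 0.5 * np.kron(I, LdL)                # -1/2 L^dag L rho
                    - 0.5 * np.kron(LdL.T, I))             # -1/2 rho L^dag L
    return comm + diss

# Qubit decay: basis |e> = (1,0), |g> = (0,1), jump operator sigma_minus
H = np.zeros((2, 2))
L = np.array([[0.0, 0.0], [1.0, 0.0]])    # |g><e|
gamma, t = 1.0, 1.0
rho0 = np.diag([1.0, 0.0]).astype(complex)  # start excited
vec = expm(liouvillian(H, L, gamma) * t) @ rho0.flatten('F')
rho_t = vec.reshape(2, 2, order='F')
print(rho_t[0, 0].real)  # memoryless decay: exp(-gamma t) ~ 0.3679
```

    A non-Markovian generator would instead carry a memory kernel or time-dependent rates, which is precisely the regime the Colloquium classifies and quantifies.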

  5. Quantum arrival time for open systems

    SciTech Connect

    Yearsley, J. M.

    2010-07-15

    We extend previous work on the arrival time problem in quantum mechanics, in the framework of decoherent histories, to the case of a particle coupled to an environment. The usual arrival time probabilities are related to the probability current, so we explore the properties of the current for general open systems that can be written in terms of a master equation of the Lindblad form. We specialize to the case of quantum Brownian motion, and show that after a time of order the localization time, the current becomes positive. We show that the arrival time probabilities can then be written in terms of a positive operator-valued measure (POVM), which we compute. We perform a decoherent histories analysis including the effects of the environment and show that time-of-arrival probabilities are decoherent for a generic state after a time much greater than the localization time, but that there is a fundamental limitation on the accuracy δt with which they can be specified, which obeys Eδt ≫ ℏ. We confirm that the arrival time probabilities computed in this way agree with those computed via the current, provided there is decoherence. We thus find that the decoherent histories formulation of quantum mechanics provides a consistent explanation for the emergence of the probability current as the classical arrival time distribution, and a systematic rule for deciding when probabilities may be assigned.

  6. Experiences from Participants in Large-Scale Group Practice of the Maharishi Transcendental Meditation and TM-Sidhi Programs and Parallel Principles of Quantum Theory, Astrophysics, Quantum Cosmology, and String Theory: Interdisciplinary Qualitative Correspondences

    NASA Astrophysics Data System (ADS)

    Svenson, Eric Johan

    Participants on the Invincible America Assembly in Fairfield, Iowa, and neighboring Maharishi Vedic City, Iowa, practicing the Maharishi Transcendental Meditation (TM) and TM-Sidhi programs in large groups, submitted written experiences that they had had during, and in some cases shortly after, their daily practice of the TM and TM-Sidhi programs. Participants were instructed to include in their written experiences only what they observed and to leave out interpretation and analysis. These experiences were then read by the author and compared with principles and phenomena of modern physics, particularly with quantum theory, astrophysics, quantum cosmology, and string theory as well as defining characteristics of higher states of consciousness as described by Maharishi Vedic Science. In all cases, particular principles or phenomena of physics and qualities of higher states of consciousness appeared qualitatively quite similar to the content of the given experience. These experiences are presented in an Appendix, in which the corresponding principles and phenomena of physics are also presented. These physics "commentaries" on the experiences were written largely in layman's terms, without equations, and, in nearly every case, with clear reference to the corresponding sections of the experiences to which a given principle appears to relate. An abundance of similarities were apparent between the subjective experiences during meditation and principles of modern physics. A theoretic framework for understanding these rich similarities may begin with Maharishi's theory of higher states of consciousness provided herein. We conclude that the consistency and richness of detail found in these abundant similarities warrants the further pursuit and development of such a framework.

  7. Large scale molecular simulations of nanotoxicity.

    PubMed

    Jimenez-Cruz, Camilo A; Kang, Seung-gu; Zhou, Ruhong

    2014-01-01

    The widespread use of nanomaterials in biomedical applications has been accompanied by an increasing interest in understanding their interactions with tissues, cells, and biomolecules, and in particular, on how they might affect the integrity of cell membranes and proteins. In this mini-review, we present a summary of some of the recent studies on this important subject, especially from the point of view of large scale molecular simulations. The carbon-based nanomaterials and noble metal nanoparticles are the main focus, with additional discussions on quantum dots and other nanoparticles as well. The driving forces for adsorption of fullerenes, carbon nanotubes, and graphene nanosheets onto proteins or cell membranes are found to be mainly hydrophobic interactions and the so-called π-π stacking (between aromatic rings), while for the noble metal nanoparticles the long-range electrostatic interactions play a bigger role. More interestingly, there is also growing evidence showing that nanotoxicity can have implications for the de novo design of nanomedicine. For example, the endohedral metallofullerenol Gd@C₈₂(OH)₂₂ is shown to inhibit tumor growth and metastasis by inhibiting enzyme MMP-9, and graphene is illustrated to disrupt bacteria cell membranes by insertion/cutting as well as destructive extraction of lipid molecules. These recent findings have provided a better understanding of nanotoxicity at the molecular level and also suggested therapeutic potential by using the cytotoxicity of nanoparticles against cancer or bacteria cells.

  8. Quantum game theory and open access publishing

    NASA Astrophysics Data System (ADS)

    Hanauske, Matthias; Bernius, Steffen; Dugall, Berndt

    2007-08-01

    The digital revolution of the information age, and in particular the sweeping changes in scientific communication brought about by computing and novel communication technology, have made high-grade scientific information globally available for free. The arXiv, for example, is the leading scientific communication platform, mainly for mathematics and physics, to which everyone in the world has free access. While in some scientific disciplines open access has been successfully realized, other disciplines (e.g. the humanities and social sciences) remain on the traditional path, even though many scientists belonging to these communities approve of the open access principle. In this paper we try to explain these different publication patterns by using a game theoretical approach. Based on the assumption that the main goal of scientists is the maximization of their reputation, we model different possible game settings, namely a zero-sum game, the prisoners' dilemma case and a version of the stag hunt game, that show the dilemma of scientists belonging to "non-open access communities". From an individual perspective, they have no incentive to deviate from the Nash equilibrium of traditional publishing. By extending the model using the quantum game theory approach, it can be shown that if the strength of entanglement exceeds a certain value, the scientists will overcome the dilemma and cease to publish only traditionally in all three settings.

  9. Fast coherent manipulation of quantum states in open systems.

    PubMed

    Song, Jie; Zhang, Zi-Jing; Xia, Yan; Sun, Xiu-Dong; Jiang, Yong-Yuan

    2016-09-19

    We present a method to manipulate quantum states in open systems. It is shown that a high-fidelity quantum state may be generated by designing an additional Hamiltonian without the rotating wave approximation. Moreover, we find that a coherent transfer is possible using quantum feedback control even when feedback parameters and noise strength cannot be exactly controlled. Our results demonstrate the feasibility of constructing shortcuts to adiabatic passage beyond the rotating wave approximation in open systems. PMID:27661905

  10. Identification of open quantum systems from observable time traces

    DOE PAGES

    Zhang, Jun; Sarovar, Mohan

    2015-05-27

    Estimating the parameters that dictate the dynamics of a quantum system is an important task for quantum information processing and quantum metrology, as well as fundamental physics. In this paper we develop a method for parameter estimation for Markovian open quantum systems using a temporal record of measurements on the system. The method is based on system realization theory and generalizes our previous work on identification of Hamiltonian parameters.
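
    The realization-theory flavour of such identification can be conveyed with a toy example: from a uniformly sampled scalar time trace, a Hankel-matrix (eigensystem-realization-style) estimate recovers the poles of the underlying dynamics, here the decay rate of a damped oscillation. This is only an illustrative stand-in for the paper's method, and every name below is my own.

```python
import numpy as np

def era_poles(y, n_modes, dt):
    """Eigensystem-realization-style pole estimate from a uniformly
    sampled scalar time trace y[k] (noise-free toy example)."""
    m = len(y) // 2
    H0 = np.array([[y[i + j] for j in range(m)] for i in range(m)])      # Hankel
    H1 = np.array([[y[i + j + 1] for j in range(m)] for i in range(m)])  # shifted
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :n_modes], s[:n_modes], Vt[:n_modes]
    # Shift invariance gives the discrete-time system matrix
    A = np.diag(s**-0.5) @ U.T @ H1 @ Vt.T @ np.diag(s**-0.5)
    return np.log(np.linalg.eigvals(A)) / dt   # continuous-time poles

gamma, omega, dt = 0.5, 3.0, 0.05
t = dt * np.arange(80)
y = np.exp(-gamma * t) * np.cos(omega * t)   # simulated observable time trace
poles = era_poles(y, 2, dt)
print(np.round(sorted(p.real for p in poles), 4))  # both real parts ~ -0.5 = -gamma
```

    The recovered complex pole pair also carries the oscillation frequency ±ω in its imaginary parts; in the open-system setting the analogous poles encode the Lindbladian rates being estimated.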

  11. A stochastic approach to open quantum systems.

    PubMed

    Biele, R; D'Agosta, R

    2012-07-11

    Stochastic methods are ubiquitous in a variety of fields, ranging from physics to economics and mathematics. In many cases, in the investigation of natural processes, stochasticity arises every time one considers the dynamics of a system in contact with a somewhat bigger system, an environment with which it is considered to be in thermal equilibrium. Any small fluctuation of the environment has some random effect on the system. In physics, stochastic methods have been applied to the investigation of phase transitions, thermal and electrical noise, thermal relaxation, quantum information, Brownian motion and so on. In this review, we will focus on the so-called stochastic Schrödinger equation. This is useful as a starting point to investigate the dynamics of open quantum systems capable of exchanging energy and momentum with an external environment. We discuss in some detail the general derivation of a stochastic Schrödinger equation and some of its recent applications to spin thermal transport, thermal relaxation, and Bose-Einstein condensation. We thoroughly discuss the advantages of this formalism with respect to the more common approach in terms of the reduced density matrix. The applications discussed here constitute only a few examples of a much wider range of applicability.
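
    The trajectory idea behind such stochastic approaches can be prototyped in a few lines. The sketch below uses the quantum-jump flavour of wavefunction unravelings, a close cousin of the stochastic Schrödinger equation reviewed here, for a single decaying qubit; all names and parameter values are my own. Averaging over trajectories recovers the reduced-density-matrix result, the exponential decay of the excited population.

```python
import numpy as np

def jump_trajectories(gamma=1.0, t_final=1.0, dt=1e-3, n_traj=2000, seed=0):
    """Monte Carlo wavefunction for a qubit decaying |e> -> |g> at rate
    gamma. Returns the trajectory-averaged excited-state population."""
    rng = np.random.default_rng(seed)
    n_steps = int(round(t_final / dt))
    excited = 0.0
    for _ in range(n_traj):
        psi = np.array([1.0 + 0j, 0.0])           # start in |e>
        for _ in range(n_steps):
            p_jump = gamma * abs(psi[0])**2 * dt  # jump probability this step
            if rng.random() < p_jump:
                psi = np.array([0.0 + 0j, 1.0])   # quantum jump to |g>
            else:
                # no-jump evolution under H_eff = -(i gamma / 2)|e><e|
                psi[0] *= np.exp(-0.5 * gamma * dt)
                psi /= np.linalg.norm(psi)        # renormalize the state
        excited += abs(psi[0])**2
    return excited / n_traj

print(jump_trajectories())  # ~ exp(-1) = 0.368, up to Monte Carlo error
```

    Each trajectory stores only a state vector rather than a density matrix, which is the memory advantage the review attributes to stochastic unravelings.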

  12. Automating large-scale reactor systems

    SciTech Connect

    Kisner, R.A.

    1985-01-01

    This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig.

  13. Spectroscopic studies in open quantum systems

    PubMed

    Rotter; Persson; Pichugin; Seba

    2000-07-01

    The Hamiltonian H of an open quantum system is non-Hermitian. Its complex eigenvalues E(R) are the poles of the S matrix and provide both the energies and widths of the states. We illustrate the interplay between Re(H) and Im(H) by means of the different interference phenomena between two neighboring resonance states. Level repulsion may occur along the real or imaginary axis (the latter is called resonance trapping). In any case, the eigenvalues of the two states avoid crossing in the complex plane. We then calculate the poles of the S matrix and the corresponding wave functions for a rectangular microwave resonator with a scatterer as a function of the area of the resonator as well as of the degree of opening to a waveguide. The calculations are performed by using the method of exterior complex scaling. Re(H) and Im(H) cause changes in the structure of the wave functions which are permanent, as a rule. The resonance picture obtained from the microwave resonator shows all the characteristic features known from the study of many-body systems in spite of the absence of two-body forces. The effects arising from the interplay between resonance trapping and level repulsion along the real axis are not involved in the statistical theory (random matrix theory).

  14. Large scale mechanical metamaterials as seismic shields

    NASA Astrophysics Data System (ADS)

    Miniaci, Marco; Krushynska, Anastasiia; Bosia, Federico; Pugno, Nicola M.

    2016-08-01

    Earthquakes represent one of the most catastrophic natural events affecting mankind. At present, a universally accepted risk mitigation strategy for seismic events remains to be proposed. Most approaches are based on vibration isolation of structures rather than on the remote shielding of incoming waves. In this work, we propose a novel approach to the problem and discuss the feasibility of a passive isolation strategy for seismic waves based on large-scale mechanical metamaterials, including, for the first time, numerical analysis of both surface and guided waves, soil dissipation effects, and full 3D simulations. The study focuses on realistic structures that can be effective in frequency ranges of interest for seismic waves, and optimal design criteria are provided, exploring different metamaterial configurations, combining phononic crystals and locally resonant structures and different ranges of mechanical properties. Dispersion analysis and full-scale 3D transient wave transmission simulations are carried out on finite size systems to assess the seismic wave amplitude attenuation in realistic conditions. Results reveal that both surface and bulk seismic waves can be considerably attenuated, making this strategy viable for the protection of civil structures against seismic risk. The proposed remote shielding approach could open up new perspectives in the field of seismology and in related areas of low-frequency vibration damping or blast protection.

  15. A relativistic signature in large-scale structure

    NASA Astrophysics Data System (ADS)

    Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David

    2016-09-01

    In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales, even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so that there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether the observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.

  16. Is the universe homogeneous on large scale?

    NASA Astrophysics Data System (ADS)

    Zhu, Xingfen; Chu, Yaoquan

    Whether the distribution of matter in the universe is homogeneous or fractal on large scales has been vigorously debated in observational cosmology recently. Pietronero and his co-workers have strongly advocated that the fractal behaviour in the galaxy distribution extends to the largest scales observed (≈ 1000 h⁻¹ Mpc) with fractal dimension D ≈ 2. Most cosmologists who hold to the standard model, however, insist that the universe is homogeneous on large scales. The answer to whether the universe is homogeneous on large scales must await the results of the next generation of galaxy redshift surveys.

  17. Large-scale regions of antimatter

    SciTech Connect

    Grobov, A. V. Rubin, S. G.

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  18. Reduced Operator Approximation for Modelling Open Quantum Systems

    NASA Astrophysics Data System (ADS)

    Werpachowska, A.

    2015-06-01

    We present the reduced operator approximation: a simple, physically transparent and computationally efficient method of modelling open quantum systems. It employs the Heisenberg picture of the quantum dynamics, which allows us to focus on the system degrees of freedom in a natural and easy way. We describe different variants of the method, low- and high-order in the system-bath interaction operators, defining them for either general quantum harmonic oscillator baths or specialising them for independent baths with Lorentzian spectral densities. Its wide applicability is demonstrated on the examples of systems coupled to different baths (with varying system-bath interaction strength and bath memory length), and compared with the exact pseudomode and the popular quantum state diffusion approach. The method captures the decoherence of the system interacting with the bath, while conserving the total energy. Our results suggest that quantum coherence effects persist in open quantum systems for much longer times than previously thought.

  19. Dynamics of open bosonic quantum systems in coherent state representation

    SciTech Connect

    Dalvit, D. A. R.; Berman, G. P.; Vishik, M.

    2006-01-15

    We consider the problem of decoherence and relaxation of open bosonic quantum systems from a perspective alternative to the standard master equation or quantum trajectories approaches. Our method is based on the dynamics of expectation values of observables evaluated in a coherent state representation. We examine a model of a quantum nonlinear oscillator with a density-density interaction with a collection of environmental oscillators at finite temperature. We derive the exact solution for dynamics of observables and demonstrate a consistent perturbation approach.

  20. Quantum Information Biology: From Theory of Open Quantum Systems to Adaptive Dynamics

    NASA Astrophysics Data System (ADS)

    Asano, Masanari; Basieva, Irina; Khrennikov, Andrei; Ohya, Masanori; Tanaka, Yoshiharu; Yamato, Ichiro

    This chapter reviews quantum(-like) information biology (QIB). Here biology is treated widely as even covering cognition and its derivatives: psychology and decision making, sociology, and behavioral economics and finances. QIB provides an integrative description of information processing by bio-systems at all scales of life: from proteins and cells to cognition, ecological and social systems. Mathematically QIB is based on the theory of adaptive quantum systems (which covers also open quantum systems). Ideologically QIB is based on the quantum-like (QL) paradigm: complex bio-systems process information in accordance with the laws of quantum information and probability. This paradigm is supported by plenty of statistical bio-data collected at all bio-scales. QIB reflects two fundamental principles: (a) adaptivity; and (b) openness (bio-systems are fundamentally open). In addition, quantum adaptive dynamics provides the most general possible mathematical representation of these principles.

  1. Investigating non-Markovian dynamics of quantum open systems

    NASA Astrophysics Data System (ADS)

    Chen, Yusui

    Quantum open systems coupled to non-Markovian environments have recently attracted widespread interest for their important applications in quantum information processing and quantum dissipative systems. New phenomena induced by the non-Markovian environment have been discovered in a variety of research areas ranging from quantum optics and quantum decoherence to condensed matter physics. However, the study of non-Markovian quantum open systems is known to be a difficult problem, due to the technical complexity of deriving the fundamental equation of motion and to elusive conceptual issues involving non-equilibrium dynamics for a strongly coupled environment. The main purpose of this thesis is to introduce several new techniques for solving quantum open systems, including a systematic approach to dealing with non-Markovian master equations starting from a generic quantum-state diffusion (QSD) equation. In the first part of this thesis, we briefly introduce the non-Markovian quantum-state diffusion approach, and illustrate some pronounced non-Markovian quantum effects through numerical investigation on a cavity-QED model. Then we extend the non-Markovian QSD theory to an interesting model where the environment has a hierarchical structure, and find the exact non-Markovian QSD equation of this model system. We observe the generation of quantum entanglement due to the interplay between the non-Markovian environment and the cavity. In the second part, we show an innovative method to obtain the exact non-Markovian master equations for a set of generic quantum open systems based on the corresponding non-Markovian QSD equations. Multiple-qubit systems and multilevel systems are discussed in detail as two typical examples. Particularly, we derive the exact master equation for a model consisting of a three-level atom coupled to an optical cavity and controlled by an external laser field. Additionally, we discuss in a more general context the mathematical similarity between the multiple

  2. A Cloud Computing Platform for Large-Scale Forensic Computing

    NASA Astrophysics Data System (ADS)

    Roussev, Vassil; Wang, Liqiang; Richard, Golden; Marziale, Lodovico

    The timely processing of massive digital forensic collections demands the use of large-scale distributed computing resources and the flexibility to customize the processing performed on the collections. This paper describes MPI MapReduce (MMR), an open implementation of the MapReduce processing model that outperforms traditional forensic computing techniques. MMR provides linear scaling for CPU-intensive processing and super-linear scaling for indexing-related workloads.
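
    The map/shuffle/reduce pattern that MMR parallelizes over MPI can be illustrated with a toy, single-process sketch; this is not the MMR API, just the processing model it implements, using word counting as the classic example.

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    # map: emit (key, value) pairs -- here, one count per word
    return [(w.lower(), 1) for w in doc.split()]

def shuffle(pairs):
    # shuffle: group all emitted values by key
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reduce_phase(groups):
    # reduce: fold each key's values into a single result
    return {k: sum(vs) for k, vs in groups.items()}

docs = ["the quick fox", "the lazy dog", "the fox"]
counts = reduce_phase(shuffle(chain.from_iterable(map_phase(d) for d in docs)))
print(counts["the"], counts["fox"])  # 3 2
```

    In a forensic workload, the map phase would run hashing or indexing over evidence fragments on many nodes, and the shuffle/reduce phases would merge the per-node results.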

  3. Detecting quantum speedup in closed and open systems

    NASA Astrophysics Data System (ADS)

    Xu, Zhen-Yu

    2016-07-01

    We construct a general measure for detecting the quantum speedup in both closed and open systems. The speed measure is based on the changing rate of the position of quantum states on a manifold with appropriate monotone Riemannian metrics. Any increase in speed is a clear signature of dynamical speedup. To clarify the mechanisms for quantum speedup, we first introduce the concept of longitudinal and transverse types of speedup: the former stems from the time evolution process itself with fixed initial conditions, while the latter is a result of adjusting initial conditions. We then apply the proposed measure to several typical closed and open quantum systems, illustrating that quantum coherence (or entanglement) and the memory effect of the environment together can become resources for longitudinally or transversely accelerating dynamical evolution under specific conditions and assumptions.

  4. Large-scale motions in the universe

    SciTech Connect

    Rubin, V.C.; Coyne, G.V.

    1988-01-01

    The present conference on the large-scale motions of the universe discusses topics on the problems of two-dimensional and three-dimensional structures, large-scale velocity fields, the motion of the local group, small-scale microwave fluctuations, ab initio and phenomenological theories, and properties of galaxies at high and low Z. Attention is given to the Pisces-Perseus supercluster, large-scale structure and motion traced by galaxy clusters, distances to galaxies in the field, the origin of the local flow of galaxies, the peculiar velocity field predicted by the distribution of IRAS galaxies, the effects of reionization on microwave background anisotropies, the theoretical implications of cosmological dipoles, and n-body simulations of a universe dominated by cold dark matter.

  5. Survey on large scale system control methods

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1987-01-01

    Problems inherent in large scale systems, such as power networks, communication networks, and economic or ecological systems, were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large scale systems, and tools specific to this class are being sought to design control systems that can guarantee more stability and better performance. Among several survey papers, reference was found to a thorough investigation of decentralized control methods. Especially helpful was the classification made of the different existing approaches to dealing with large scale systems. A very similar classification is used here, even though the papers surveyed differ somewhat from those reviewed in other papers. Special attention is given to the applicability of the existing methods to controlling large mechanical systems such as large space structures. Some recent developments are added to this survey.

  6. Large Scale Shape Optimization for Accelerator Cavities

    SciTech Connect

    Akcelik, Volkan; Lee, Lie-Quan; Li, Zenghai; Ng, Cho; Xiao, Li-Ling; Ko, Kwok; /SLAC

    2011-12-06

    We present a shape optimization method for designing accelerator cavities with large scale computations. The objective is to find the best accelerator cavity shape with the desired spectral response, such as with the specified frequencies of resonant modes, field profiles, and external Q values. The forward problem is the large scale Maxwell equation in the frequency domain. The design parameters are the CAD parameters defining the cavity shape. We develop scalable algorithms with a discrete adjoint approach and use the quasi-Newton method to solve the nonlinear optimization problem. Two realistic accelerator cavity design examples are presented.
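
    The discrete-adjoint plus quasi-Newton recipe described above can be demonstrated on a toy forward problem: solve A(p)u = b, measure the misfit to a target field, and get the gradient with a single adjoint solve before handing both to L-BFGS-B. This is a schematic sketch under my own assumptions (a linear 3x3 "Maxwell" stand-in with one design parameter), not the paper's cavity solver.

```python
import numpy as np
from scipy.optimize import minimize

A0 = np.diag([2.0, 3.0, 4.0])   # fixed part of the discretized operator
A1 = np.eye(3)                   # sensitivity dA/dp of the design parameter
b = np.ones(3)
p_true = 0.7
u_target = np.linalg.solve(A0 + p_true * A1, b)   # desired field profile

def objective_and_gradient(p):
    A = A0 + p[0] * A1
    u = np.linalg.solve(A, b)           # forward solve
    r = u - u_target                    # misfit
    lam = np.linalg.solve(A.T, r)       # one adjoint solve, independent of #params
    grad = -lam @ (A1 @ u)              # dJ/dp = -lambda^T (dA/dp) u
    return 0.5 * r @ r, np.array([grad])

res = minimize(objective_and_gradient, x0=[0.0], jac=True,
               method='L-BFGS-B', bounds=[(0.0, 5.0)],
               options={'ftol': 1e-14, 'gtol': 1e-10})
print(round(res.x[0], 4))  # recovers p_true, i.e. approximately 0.7
```

    The point of the adjoint trick is visible in the gradient line: one extra linear solve yields the derivative with respect to every CAD parameter at once, which is what makes the approach scalable to large Maxwell systems.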

  7. Approximation of reachable sets for coherently controlled open quantum systems: Application to quantum state engineering

    NASA Astrophysics Data System (ADS)

    Li, Jun; Lu, Dawei; Luo, Zhihuang; Laflamme, Raymond; Peng, Xinhua; Du, Jiangfeng

    2016-07-01

    Precisely characterizing and controlling realistic quantum systems under noise is a challenging frontier in quantum sciences and technologies. In developing reliable controls for open quantum systems, one is often confronted with a lack of knowledge about the system's controllability. The purpose of this paper is to give a numerical approach to this problem, that is, to approximately compute the reachable set of states for coherently controlled quantum Markovian systems. The approximation consists of setting both upper and lower bounds on the system's reachable region of states. Furthermore, we apply our reachability analysis to the control of the relaxation dynamics of a two-qubit nuclear magnetic resonance spin system. We implement some experimental tasks of quantum state engineering in this open system at near-optimal performance in terms of purity: e.g., increasing polarization and preparing pseudopure states. These results demonstrate the usefulness of our theory and show interesting and promising applications of environment-assisted quantum dynamics.

  8. Large-Scale Spacecraft Fire Safety Tests

    NASA Technical Reports Server (NTRS)

    Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier; Toth, Balazs; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Jomaas, Grunde

    2014-01-01

    An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environment conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical-scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to Earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), nine smaller (5 x 30 cm) samples will be tested to evaluate NASA's material flammability screening tests.

  9. Periodic Scarred States in Open Quantum Dots as Evidence of Quantum Darwinism

    NASA Astrophysics Data System (ADS)

    Burke, A. M.; Akis, R.; Day, T. E.; Speyer, Gil; Ferry, D. K.; Bennett, B. R.

    2010-04-01

    Scanning gate microscopy (SGM) is used to image scar structures in an open quantum dot, which is created in an InAs quantum well by electron-beam lithography and wet etching. The scanned images demonstrate periodicities in magnetic field that correlate to those found in the conductance fluctuations. Simulations have shown that these magnetic transform images bear a strong resemblance to actual scars found in the dot that replicate through the modes in direct agreement with quantum Darwinism.

  10. Periodic scarred States in open quantum dots as evidence of quantum Darwinism.

    PubMed

    Burke, A M; Akis, R; Day, T E; Speyer, Gil; Ferry, D K; Bennett, B R

    2010-04-30

    Scanning gate microscopy (SGM) is used to image scar structures in an open quantum dot, which is created in an InAs quantum well by electron-beam lithography and wet etching. The scanned images demonstrate periodicities in magnetic field that correlate to those found in the conductance fluctuations. Simulations have shown that these magnetic transform images bear a strong resemblance to actual scars found in the dot that replicate through the modes in direct agreement with quantum Darwinism.

  11. Open quantum dots—probing the quantum to classical transition

    NASA Astrophysics Data System (ADS)

    Ferry, D. K.; Burke, A. M.; Akis, R.; Brunner, R.; Day, T. E.; Meisels, R.; Kuchar, F.; Bird, J. P.; Bennett, B. R.

    2011-04-01

    Quantum dots provide a natural system in which to study both quantum and classical features of transport. As a closed testbed, they offer a very rich set of eigenstates. When coupled to the environment through a pair of quantum point contacts, each of which passes several modes, the original quantum environment evolves into a set of decoherent and coherent states, which classically would compose a mixed phase space. The manner of this breakup is governed strongly by Zurek's decoherence theory, and the remaining coherent states possess all the properties of his pointer states. These states are naturally studied via traditional magnetotransport at low temperatures. More recently, we have used scanning gate (conductance) microscopy to probe the nature of the coherent states, and have shown that families of states exist through the spectrum in a manner consistent with quantum Darwinism. In this review, we discuss the nature of the various states, how they are formed, and the signatures that appear in magnetotransport and general conductance studies.

  12. Sensitivity analysis for large-scale problems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.

  13. ARPACK: Solving large scale eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Lehoucq, Rich; Maschhoff, Kristi; Sorensen, Danny; Yang, Chao

    2013-11-01

    ARPACK is a collection of Fortran77 subroutines designed to solve large-scale eigenvalue problems. The package is designed to compute a few eigenvalues and corresponding eigenvectors of a general n by n matrix A. It is most appropriate for large sparse or structured matrices A, where structured means that a matrix-vector product w ← Av requires order n rather than order n² floating-point operations.
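SciPy's sparse eigensolvers wrap this ARPACK code, so the package's intended use (a few eigenpairs of a large sparse matrix, touched only through matrix-vector products) can be demonstrated without writing Fortran. The test matrix below is the 1-D discrete Laplacian, chosen because its spectrum is known in closed form.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh   # symmetric ARPACK driver (dsaupd/dseupd)

# Compute a few extremal eigenpairs of a large sparse matrix, the use case
# ARPACK targets. Test problem: the n x n 1-D discrete Laplacian, whose
# eigenvalues 2 - 2*cos(k*pi/(n+1)), k = 1..n, are known in closed form.
n = 2000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

# Shift-invert around sigma = 0 makes ARPACK converge quickly to the
# smallest eigenvalues (one sparse factorization plus mat-vec products).
vals, vecs = eigsh(A, k=4, sigma=0.0)

exact = 2.0 - 2.0 * np.cos(np.arange(1, 5) * np.pi / (n + 1))
```

The computed `vals` agree with `exact` to roughly machine precision, while only k = 4 of the 2000 eigenpairs are ever formed.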

  14. A Large Scale Computer Terminal Output Controller.

    ERIC Educational Resources Information Center

    Tucker, Paul Thomas

    This paper describes the design and implementation of a large scale computer terminal output controller which supervises the transfer of information from a Control Data 6400 Computer to a PLATO IV data network. It discusses the cost considerations leading to the selection of educational television channels rather than telephone lines for…

  15. Management of large-scale technology

    NASA Technical Reports Server (NTRS)

    Levine, A.

    1985-01-01

    Two major themes are addressed in this assessment of the management of large-scale NASA programs: (1) how a high-technology agency managed a decade marked by a rapid expansion of funds and manpower in the first half and an almost as rapid contraction in the second; and (2) how NASA combined central planning and control with decentralized project execution.

  16. Evaluating Large-Scale Interactive Radio Programmes

    ERIC Educational Resources Information Center

    Potter, Charles; Naidoo, Gordon

    2009-01-01

    This article focuses on the challenges involved in conducting evaluations of interactive radio programmes in South Africa with large numbers of schools, teachers, and learners. It focuses on the role such large-scale evaluation has played during the South African radio learning programme's development stage, as well as during its subsequent…

  17. Controlling open quantum systems: tools, achievements, and limitations

    NASA Astrophysics Data System (ADS)

    Koch, Christiane P.

    2016-06-01

    The advent of quantum devices, which exploit the two essential elements of quantum physics, coherence and entanglement, has sparked renewed interest in the control of open quantum systems. Successful implementations face the challenge of preserving relevant nonclassical features at the level of device operation. A major obstacle is decoherence, which is caused by interaction with the environment. Optimal control theory is a tool that can be used to identify control strategies in the presence of decoherence. Here we review recent advances in optimal control methodology that allow typical tasks in device operation for open quantum systems to be tackled and discuss examples of relaxation-optimized dynamics. Optimal control theory is also a useful tool to exploit the environment for control. We discuss examples and point out possible future extensions.
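As a minimal illustration of the optimal-control machinery such reviews survey, the sketch below numerically optimizes a piecewise-constant pulse driving a two-level system from |0⟩ to |1⟩. It is a closed-system toy (decoherence omitted), and all parameters are illustrative; an open-system version would replace the unitary propagation with a Lindblad propagator.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.linalg import expm

# Toy pulse optimization in the spirit of quantum optimal control: find a
# piecewise-constant control u(t) on H(t) = u(t) * sigma_x that takes |0>
# to |1>. Closed system only; an open-system variant would propagate a
# density matrix under a Lindblad equation instead. All parameters are
# illustrative.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
N, T = 20, 1.0                        # number of slices, total pulse time
dt = T / N
psi0 = np.array([1, 0], dtype=complex)
target = np.array([0, 1], dtype=complex)

def infidelity(u):
    psi = psi0
    for uj in u:                      # propagate slice by slice
        psi = expm(-1j * uj * sx * dt) @ psi
    return 1.0 - abs(target.conj() @ psi) ** 2

res = minimize(infidelity, 0.1 * np.ones(N), method="BFGS")
```

With only a sigma_x control the optimum is any pulse whose area reaches pi/2, so the optimizer drives the infidelity to essentially zero.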

  18. Geometric phase for open quantum systems and stochastic unravelings

    SciTech Connect

    Bassi, Angelo; Ippoliti, Emiliano

    2006-06-15

    We analyze the geometric phase for an open quantum system when computed by resorting to a stochastic unraveling of the reduced density matrix (quantum jump approach or stochastic Schrödinger equations). We show that the resulting phase strongly depends on the type of unraveling used for the calculations: as such, this phase is not a geometric object since it depends on nonphysical parameters, which are not related to the path followed by the density matrix during the evolution of the system.

  19. Central limit theorem for reducible and irreducible open quantum walks

    NASA Astrophysics Data System (ADS)

    Sadowski, Przemysław; Pawela, Łukasz

    2016-07-01

    In this work we aim at proving central limit theorems for open quantum walks on ℤ^d. We study the case when there are various classes of vertices in the network. In particular, we investigate two ways of distributing the vertex classes in the network. First, we assign the classes in a regular pattern. Secondly, we assign each vertex a random class with a translation-invariant distribution. For each way of distributing vertex classes, we obtain an appropriate central limit theorem, illustrated by numerical examples. These theorems may have application in the study of complex systems in quantum biology and dissipative quantum computation.
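The flavor of such walks can be reproduced numerically. The sketch below evolves a homogeneous open quantum walk on ℤ (a single vertex class) with an illustrative Kraus pair chosen by hand, tracking the position distribution; the linear growth of its variance is what the central limit theorems formalize.

```python
import numpy as np

# Deterministic evolution of an open quantum walk on Z: the walker carries
# a 2-level internal state, and the unnormalized density blocks rho[x]
# evolve as rho'[x] = Bp rho[x-1] Bp^† + Bm rho[x+1] Bm^†. The Kraus pair
# (Bp, Bm) below is an illustrative hand-picked choice, not from the paper.
def rot(a):
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

theta = 0.3
Bp = rot(0.7) @ np.diag([np.cos(theta), np.sin(theta)])
Bm = rot(-0.2) @ np.diag([np.sin(theta), np.cos(theta)])
# completeness Bp^† Bp + Bm^† Bm = I guarantees trace preservation
assert np.allclose(Bp.conj().T @ Bp + Bm.conj().T @ Bm, np.eye(2))

T = 200
size = 2 * T + 1                       # positions -T..T
rho = np.zeros((size, 2, 2), dtype=complex)
rho[T] = np.diag([0.5, 0.5])           # start at the origin, mixed internal state

for _ in range(T):
    new = np.zeros_like(rho)
    new[1:] += Bp @ rho[:-1] @ Bp.conj().T    # step right
    new[:-1] += Bm @ rho[1:] @ Bm.conj().T    # step left
    rho = new

p = np.einsum("xii->x", rho).real      # position distribution after T steps
x = np.arange(-T, T + 1)
mean, var = p @ x, p @ x**2 - (p @ x) ** 2
```

The distribution stays normalized by the Kraus completeness relation, and `var` grows roughly linearly in the number of steps, as the Gaussian limit predicts.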

  20. Large-scale Advanced Propfan (LAP) program

    NASA Technical Reports Server (NTRS)

    Sagerser, D. A.; Ludemann, S. G.

    1985-01-01

    The propfan is an advanced propeller concept which maintains the high efficiencies traditionally associated with conventional propellers at the higher aircraft cruise speeds associated with jet transports. The large-scale advanced propfan (LAP) program extends the research done on 2 ft diameter propfan models to a 9 ft diameter article. The program includes design, fabrication, and testing of both an eight bladed, 9 ft diameter propfan, designated SR-7L, and a 2 ft diameter aeroelastically scaled model, SR-7A. The LAP program is complemented by the propfan test assessment (PTA) program, which takes the large-scale propfan and mates it with a gas generator and gearbox to form a propfan propulsion system and then flight tests this system on the wing of a Gulfstream 2 testbed aircraft.

  1. Fractals and cosmological large-scale structure

    NASA Technical Reports Server (NTRS)

    Luo, Xiaochun; Schramm, David N.

    1992-01-01

    Observations of galaxy-galaxy and cluster-cluster correlations as well as other large-scale structure can be fit with a 'limited' fractal with dimension D of about 1.2. This is not a 'pure' fractal out to the horizon: the distribution shifts from power law to random behavior at some large scale. If the observed patterns and structures are formed through an aggregation growth process, the fractal dimension D can serve as an interesting constraint on the properties of the stochastic motion responsible for limiting the fractal structure. In particular, it is found that the observed fractal should have grown from two-dimensional sheetlike objects such as pancakes, domain walls, or string wakes. This result is generic and does not depend on the details of the growth process.

  2. Condition Monitoring of Large-Scale Facilities

    NASA Technical Reports Server (NTRS)

    Hall, David L.

    1999-01-01

    This document provides a summary of the research conducted for the NASA Ames Research Center under grant NAG2-1182 (Condition-Based Monitoring of Large-Scale Facilities). The information includes copies of view graphs presented at NASA Ames in the final Workshop (held during December of 1998), as well as a copy of a technical report provided to the COTR (Dr. Anne Patterson-Hine) subsequent to the workshop. The material describes the experimental design, collection of data, and analysis results associated with monitoring the health of large-scale facilities. In addition to this material, a copy of the Pennsylvania State University Applied Research Laboratory data fusion visual programming tool kit was also provided to NASA Ames researchers.

  3. Large-scale instabilities of helical flows

    NASA Astrophysics Data System (ADS)

    Cameron, Alexandre; Alexakis, Alexandros; Brachet, Marc-Étienne

    2016-10-01

    Large-scale hydrodynamic instabilities of periodic helical flows of a given wave number K are investigated using three-dimensional Floquet numerical computations. In the Floquet formalism the unstable field is expanded in modes of different spatial periodicity. This allows us (i) to clearly distinguish large from small scale instabilities and (ii) to study modes of wave number q of arbitrarily large-scale separation q ≪ K. Different flows are examined, including flows that exhibit small-scale turbulence. The growth rate σ of the most unstable mode is measured as a function of the scale separation q/K ≪ 1 and the Reynolds number Re. It is shown that the growth rate follows the scaling σ ∝ q if an AKA effect [Frisch et al., Physica D: Nonlinear Phenomena 28, 382 (1987), 10.1016/0167-2789(87)90026-1] is present, or a negative eddy viscosity scaling σ ∝ q² in its absence. This holds both for the Re ≪ 1 regime, where previously derived asymptotic results are verified, and for Re = O(1), which is beyond their range of validity. Furthermore, for values of Re above a critical value Re_Sc beyond which small-scale instabilities are present, the growth rate becomes independent of q and the energy of the perturbation at large scales decreases with scale separation. The nonlinear behavior of these large-scale instabilities is also examined in the nonlinear regime, where the largest scales of the system are found to be the most dominant energetically. These results are interpreted by low-order models.
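The two scalings can be told apart by the slope of log σ versus log q. The snippet below does this on synthetic growth rates, a stand-in for values measured from the Floquet computations, for the negative-eddy-viscosity case.

```python
import numpy as np

# Distinguishing sigma ~ q (AKA effect) from sigma ~ q^2 (negative eddy
# viscosity) from measured growth rates: fit the slope of log(sigma) vs
# log(q). The "measurements" here are synthetic, standing in for growth
# rates extracted from the Floquet computations.
q = np.logspace(-3, -1, 20)          # scale separations q/K << 1
sigma = 0.8 * q**2                   # synthetic eddy-viscosity-type data
slope = np.polyfit(np.log(q), np.log(sigma), 1)[0]   # ~2 => sigma ~ q^2
```

A fitted slope near 1 would instead indicate the AKA-effect scaling σ ∝ q.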

  4. Large-scale neuromorphic computing systems

    NASA Astrophysics Data System (ADS)

    Furber, Steve

    2016-10-01

    Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.

  5. Large-scale neuromorphic computing systems.

    PubMed

    Furber, Steve

    2016-10-01

    Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers. PMID:27529195

  6. Large-Scale Visual Data Analysis

    NASA Astrophysics Data System (ADS)

    Johnson, Chris

    2014-04-01

    Modern high performance computers have speeds measured in petaflops and handle data set sizes measured in terabytes and petabytes. Although these machines offer enormous potential for solving very large-scale realistic computational problems, their effectiveness will hinge upon the ability of human experts to interact with their simulation results and extract useful information. One of the greatest scientific challenges of the 21st century is to effectively understand and make use of the vast amount of information being produced. Visual data analysis will be among our most important tools in helping to understand such large-scale information. Our research at the Scientific Computing and Imaging (SCI) Institute at the University of Utah has focused on innovative, scalable techniques for large-scale 3D visual data analysis. In this talk, I will present state-of-the-art visualization techniques, including scalable visualization algorithms and software, cluster-based visualization methods, and innovative visualization techniques applied to problems in computational science, engineering, and medicine. I will conclude with an outline of future high performance visualization research challenges and opportunities.

  7. Large scale processes in the solar nebula.

    NASA Astrophysics Data System (ADS)

    Boss, A. P.

    Most proposed chondrule formation mechanisms involve processes occurring inside the solar nebula, so the large scale (roughly 1 to 10 AU) structure of the nebula is of general interest for any chondrule-forming mechanism. Chondrules and Ca, Al-rich inclusions (CAIs) might also have been formed as a direct result of the large scale structure of the nebula, such as passage of material through high temperature regions. While recent nebula models do predict the existence of relatively hot regions, the maximum temperatures in the inner planet region may not be high enough to account for chondrule or CAI thermal processing, unless the disk mass is considerably greater than the minimum mass necessary to restore the planets to solar composition. Furthermore, it does not seem to be possible to achieve both rapid heating and rapid cooling of grain assemblages in such a large scale furnace. However, if the accretion flow onto the nebula surface is clumpy, as suggested by observations of variability in young stars, then clump-disk impacts might be energetic enough to launch shock waves which could propagate through the nebula to the midplane, thermally processing any grain aggregates they encounter, and leaving behind a trail of chondrules.

  8. Dissipation and entropy production in open quantum systems

    NASA Astrophysics Data System (ADS)

    Majima, H.; Suzuki, A.

    2010-11-01

    A microscopic description of an open system is generally expressed by the Hamiltonian of the form: Htot = Hsys + Henviron + Hsys-environ. We developed a microscopic theory of entropy and derived a general formula, the so-called "entropy-Hamiltonian relation" (EHR), that connects the entropy of the system to the interaction Hamiltonian represented by Hsys-environ for a nonequilibrium open quantum system. To derive the EHR formula, we mapped the open quantum system to the representation space of the Liouville-space formulation, or thermo field dynamics (TFD), and thus worked on the representation space L := H ⊗ H̃, where H denotes the ordinary Hilbert space and H̃ the tilde Hilbert space conjugate to H. We show that the natural transformation (mapping) of nonequilibrium open quantum systems is accomplished within the theoretical structure of TFD. By using the obtained EHR formula, we also derived the equation of motion for the distribution function of the system. We demonstrated that by knowing the microscopic description of the interaction, namely, the specific form of Hsys-environ on the representation space L, the EHR formula enables us to evaluate the entropy of the system and to gain some information about entropy for nonequilibrium open quantum systems.

  9. Periodic thermodynamics of open quantum systems.

    PubMed

    Brandner, Kay; Seifert, Udo

    2016-06-01

    The thermodynamics of quantum systems coupled to periodically modulated heat baths and work reservoirs is developed. By identifying affinities and fluxes, the first and the second law are formulated consistently. In the linear response regime, entropy production becomes a quadratic form in the affinities. Specializing to Lindblad dynamics, we identify the corresponding kinetic coefficients in terms of correlation functions of the unperturbed dynamics. Reciprocity relations follow from symmetries with respect to time reversal. The kinetic coefficients can be split into a classical and a quantum contribution subject to an additional constraint, which follows from a natural detailed balance condition. This constraint implies universal bounds on efficiency and power of quantum heat engines. In particular, we show that Carnot efficiency cannot be reached whenever quantum coherence effects are present, i.e., when the Hamiltonian used for work extraction does not commute with the bare system Hamiltonian. For illustration, we specialize our universal results to a driven two-level system in contact with a heat bath of sinusoidally modulated temperature.
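A back-of-the-envelope version of the driven two-level example can be integrated directly: the sketch below evolves a Lindblad master equation whose bath rates follow a sinusoidally modulated temperature. The frequencies, rates, and modulation are illustrative choices, not the paper's parameter set.

```python
import numpy as np

# Forward-Euler integration of a Lindblad equation for a two-level system
# coupled to a bath of sinusoidally modulated temperature (illustrative
# parameters). Basis: index 0 = ground, index 1 = excited.
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma_minus: |g><e|
sp = sm.conj().T                                 # sigma_plus:  |e><g|
w0, gamma = 1.0, 0.2                             # level splitting, bare rate

def nbar(t):
    temp = 0.5 + 0.3 * np.sin(0.4 * t)           # modulated bath temperature
    return 1.0 / (np.exp(w0 / temp) - 1.0)       # thermal occupation

def D(L, rho):                                   # Lindblad dissipator
    return L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)

def rhs(t, rho):
    H = w0 * np.diag([0.0, 1.0]).astype(complex)
    n = nbar(t)
    return (-1j * (H @ rho - rho @ H)
            + gamma * (n + 1) * D(sm, rho)       # emission
            + gamma * n * D(sp, rho))            # absorption

rho = np.diag([0.0, 1.0]).astype(complex)        # start in the excited state
dt = 0.005
for k in range(4000):                            # evolve to t = 20
    rho = rho + dt * rhs(k * dt, rho)

pop_e = rho[1, 1].real                           # excited-state population
```

The dissipator is traceless, so the integration preserves the trace of ρ, and the excited-state population relaxes toward a quasi-thermal value that breathes with the modulated temperature.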

  10. Periodic thermodynamics of open quantum systems

    NASA Astrophysics Data System (ADS)

    Brandner, Kay; Seifert, Udo

    2016-06-01

    The thermodynamics of quantum systems coupled to periodically modulated heat baths and work reservoirs is developed. By identifying affinities and fluxes, the first and the second law are formulated consistently. In the linear response regime, entropy production becomes a quadratic form in the affinities. Specializing to Lindblad dynamics, we identify the corresponding kinetic coefficients in terms of correlation functions of the unperturbed dynamics. Reciprocity relations follow from symmetries with respect to time reversal. The kinetic coefficients can be split into a classical and a quantum contribution subject to an additional constraint, which follows from a natural detailed balance condition. This constraint implies universal bounds on efficiency and power of quantum heat engines. In particular, we show that Carnot efficiency cannot be reached whenever quantum coherence effects are present, i.e., when the Hamiltonian used for work extraction does not commute with the bare system Hamiltonian. For illustration, we specialize our universal results to a driven two-level system in contact with a heat bath of sinusoidally modulated temperature.

  12. Large-scale flow experiments for managing river systems

    USGS Publications Warehouse

    Konrad, C.P.; Olden, J.D.; Lytle, D.A.; Melis, T.S.; Schmidt, J.C.; Bray, E.N.; Freeman, Mary C.; Gido, K.B.; Hemphill, N.P.; Kennard, M.J.; McMullen, L.E.; Mims, M.C.; Pyron, M.; Robinson, C.T.; Williams, J.G.

    2011-01-01

    Experimental manipulations of streamflow have been used globally in recent decades to mitigate the impacts of dam operations on river systems. Rivers are challenging subjects for experimentation, because they are open systems that cannot be isolated from their social context. We identify principles to address the challenges of conducting effective large-scale flow experiments. Flow experiments have both scientific and social value when they help to resolve specific questions about the ecological action of flow with a clear nexus to water policies and decisions. Water managers must integrate new information into operating policies for large-scale experiments to be effective. Modeling and monitoring can be integrated with experiments to analyze long-term ecological responses. Experimental design should include spatially extensive observations and well-defined, repeated treatments. Large-scale flow manipulations are only a part of dam operations that affect river systems. Scientists can ensure that experimental manipulations continue to be a valuable approach for the scientifically based management of river systems. © 2011 by American Institute of Biological Sciences. All rights reserved.

  13. Large-scale flow experiments for managing river systems

    USGS Publications Warehouse

    Konrad, Christopher P.; Olden, Julian D.; Lytle, David A.; Melis, Theodore S.; Schmidt, John C.; Bray, Erin N.; Freeman, Mary C.; Gido, Keith B.; Hemphill, Nina P.; Kennard, Mark J.; McMullen, Laura E.; Mims, Meryl C.; Pyron, Mark; Robinson, Christopher T.; Williams, John G.

    2011-01-01

    Experimental manipulations of streamflow have been used globally in recent decades to mitigate the impacts of dam operations on river systems. Rivers are challenging subjects for experimentation, because they are open systems that cannot be isolated from their social context. We identify principles to address the challenges of conducting effective large-scale flow experiments. Flow experiments have both scientific and social value when they help to resolve specific questions about the ecological action of flow with a clear nexus to water policies and decisions. Water managers must integrate new information into operating policies for large-scale experiments to be effective. Modeling and monitoring can be integrated with experiments to analyze long-term ecological responses. Experimental design should include spatially extensive observations and well-defined, repeated treatments. Large-scale flow manipulations are only a part of dam operations that affect river systems. Scientists can ensure that experimental manipulations continue to be a valuable approach for the scientifically based management of river systems.

  14. Parametric representation of open quantum systems and cross-over from quantum to classical environment

    PubMed Central

    Calvani, Dario; Cuccoli, Alessandro; Gidopoulos, Nikitas I.; Verrucchi, Paola

    2013-01-01

    The behavior of most physical systems is affected by their natural surroundings. A quantum system with an environment is referred to as open, and its study varies according to the classical or quantum description adopted for the environment. We propose an approach to open quantum systems that allows us to follow the cross-over from quantum to classical environments; to achieve this, we devise an exact parametric representation of the principal system, based on generalized coherent states for the environment. The method is applied to the Heisenberg star with frustration, where the quantum character of the environment varies with the couplings entering the Hamiltonian H. We find that when the star is in an eigenstate of H, the central spin behaves as if it were in an effective magnetic field, pointing in the direction set by the environmental coherent-state angle variables (θ, φ), and broadened according to their quantum probability distribution. Such distribution is independent of φ, whereas, as a function of θ, it is seen to get narrower as the quantum character of the environment is reduced, collapsing into a Dirac-δ function in the classical limit. In such limit, because φ is left undetermined, the Von Neumann entropy of the central spin remains finite; in fact, it is equal to the entanglement of the original fully quantum model, a result that establishes a relation between this latter quantity and the Berry phase characterizing the dynamics of the central spin in the effective magnetic field. PMID:23572581
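The last point, a finite entropy arising because φ is left undetermined, can be checked numerically: averaging the pure spin state |n(θ, φ)⟩ over φ cancels the off-diagonal terms and leaves a mixed state with the binary entropy of cos²(θ/2). The angle below is an arbitrary illustrative choice.

```python
import numpy as np

# Check of the classical-limit entropy argument: average the central-spin
# pure state |n(theta, phi)> over the undetermined angle phi. Off-diagonal
# terms cancel, leaving a mixed state whose Von Neumann entropy is finite.
theta = 1.1
phis = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)

def ket(theta, phi):
    # spin-1/2 coherent state pointing along (theta, phi)
    return np.array([np.cos(theta / 2.0),
                     np.exp(1j * phi) * np.sin(theta / 2.0)])

rho = np.mean([np.outer(ket(theta, p), ket(theta, p).conj()) for p in phis],
              axis=0)
evals = np.linalg.eigvalsh(rho)
S = -np.sum(evals * np.log(evals))               # Von Neumann entropy
```

The averaged ρ is exactly diag(cos²(θ/2), sin²(θ/2)), so S is strictly positive for any θ away from the poles.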

  15. Parametric representation of open quantum systems and cross-over from quantum to classical environment.

    PubMed

    Calvani, Dario; Cuccoli, Alessandro; Gidopoulos, Nikitas I; Verrucchi, Paola

    2013-04-23

    The behavior of most physical systems is affected by their natural surroundings. A quantum system with an environment is referred to as open, and its study varies according to the classical or quantum description adopted for the environment. We propose an approach to open quantum systems that allows us to follow the cross-over from quantum to classical environments; to achieve this, we devise an exact parametric representation of the principal system, based on generalized coherent states for the environment. The method is applied to the s = 1/2 Heisenberg star with frustration, where the quantum character of the environment varies with the couplings entering the Hamiltonian H. We find that when the star is in an eigenstate of H, the central spin behaves as if it were in an effective magnetic field, pointing in the direction set by the environmental coherent-state angle variables (θ, ϕ), and broadened according to their quantum probability distribution. Such distribution is independent of ϕ, whereas, as a function of θ, it is seen to get narrower as the quantum character of the environment is reduced, collapsing into a Dirac-δ function in the classical limit. In such limit, because ϕ is left undetermined, the Von Neumann entropy of the central spin remains finite; in fact, it is equal to the entanglement of the original fully quantum model, a result that establishes a relation between this latter quantity and the Berry phase characterizing the dynamics of the central spin in the effective magnetic field.

  16. Turbulent large-scale structure effects on wake meandering

    NASA Astrophysics Data System (ADS)

    Muller, Y.-A.; Masson, C.; Aubrun, S.

    2015-06-01

    This work studies the effects of large-scale turbulent structures on wake meandering using Large Eddy Simulations (LES) over an actuator disk. Other potential sources of wake meandering, such as the instability mechanisms associated with tip vortices, are not treated in this study. A crucial element of efficient, pragmatic and successful simulations of large-scale turbulent structures in the Atmospheric Boundary Layer (ABL) is the generation of the stochastic turbulent atmospheric flow. This is an essential capability since one source of wake meandering is these large - larger than the turbine diameter - turbulent structures. The unsteady wind turbine wake in the ABL is simulated using a combination of LES and actuator disk approaches. In order to dedicate the large majority of the available computing power to the wake, the ABL ground region of the flow is not part of the computational domain. Instead, mixed Dirichlet/Neumann boundary conditions are applied at all the computational surfaces except at the outlet. Prescribed values for the Dirichlet contribution of these boundary conditions are provided by a stochastic turbulent wind generator. This makes it possible to simulate large-scale turbulent structures - larger than the computational domain - leading to an efficient simulation technique for wake meandering. Since the stochastic wind generator includes shear, turbulence production is included in the analysis without the necessity of resolving the flow near the ground. The classical Smagorinsky sub-grid model is used. The resulting numerical methodology has been implemented in OpenFOAM. Comparisons with experimental measurements in porous-disk wakes have been undertaken, and the agreement is good. While the temporal resolution of experimental measurements is high, the spatial resolution is often too low. LES numerical results provide a more complete spatial description of the flow. They tend to demonstrate that inflow low-frequency content - or large-scale turbulent structures - is

  17. Eigenvalue problem of the Liouvillian of open quantum systems

    SciTech Connect

    Hatano, Naomichi; Petrosky, Tomio

    2015-03-10

    It is argued that the Liouvillian that appears in the Liouville-von Neumann equation for open quantum systems can have complex eigenvalues. Attention is paid to the question of whether the Liouvillian has eigenvalues that are not given by differences of Hamiltonian eigenvalues.
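    The claim is easy to check numerically for a concrete Lindblad-form generator. The sketch below is not the authors' construction; the Hamiltonian, jump operator and rate are illustrative. It builds the vectorized Liouvillian of a damped qubit and shows that its spectrum contains genuinely complex eigenvalues alongside the zero eigenvalue of the steady state:

```python
import numpy as np

# Pauli matrices and a damped-qubit Lindblad generator (illustrative parameters).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)  # lowering (jump) operator
I2 = np.eye(2)

H = 0.5 * sz + 0.3 * sx   # qubit Hamiltonian (assumed couplings)
gamma = 0.2               # damping rate (assumed)

def liouvillian(H, L, gamma):
    """Column-stacking vectorization: d vec(rho)/dt = Lmat @ vec(rho)."""
    comm = -1j * (np.kron(I2, H) - np.kron(H.T, I2))
    LdL = L.conj().T @ L
    diss = gamma * (np.kron(L.conj(), L)
                    - 0.5 * np.kron(I2, LdL)
                    - 0.5 * np.kron(LdL.T, I2))
    return comm + diss

Lmat = liouvillian(H, sm, gamma)
evals = np.linalg.eigvals(Lmat)
# One eigenvalue is zero (the steady state); the others have negative real
# parts (decay rates) and imaginary parts shifted away from the bare Bohr
# frequencies by the dissipation.
print(np.sort_complex(evals))
```

    The vectorization convention used here is vec(A X B) = (Bᵀ ⊗ A) vec(X) with column stacking; any consistent convention gives the same spectrum.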

  18. Large-scale brightenings associated with flares

    NASA Technical Reports Server (NTRS)

    Mandrini, Cristina H.; Machado, Marcos E.

    1992-01-01

    It is shown that large-scale brightenings (LSBs) associated with solar flares, similar to the 'giant arches' discovered by Svestka et al. (1982) in images obtained by the SMM HXIS hours after the onset of two-ribbon flares, can also occur in association with confined flares in complex active regions. For these events, a link between the LSB and the underlying flare is clearly evident from the active-region magnetic field topology. The implications of these findings are discussed within the framework of the interacting loops of flares and the giant arch phenomenology.

  19. Large scale phononic metamaterials for seismic isolation

    SciTech Connect

    Aravantinos-Zafiris, N.; Sigalas, M. M.

    2015-08-14

    In this work, we numerically examine structures that could be characterized as large scale phononic metamaterials. These novel structures could have band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, thus raising the belief that they could be serious candidates for seismic isolation structures. Different and easy to fabricate structures were examined made from construction materials such as concrete and steel. The well-known finite difference time domain method is used in our calculations in order to calculate the band structures of the proposed metamaterials.
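    The band-gap idea can be illustrated in one dimension without FDTD: for a periodic two-layer medium, the standard transfer-matrix (Rytov) dispersion relation cos(qa) = g(ω) has a gap wherever |g(ω)| > 1. The sketch below is a simplified 1D stand-in for the paper's calculations, with typical handbook values (assumed) for concrete and steel:

```python
import numpy as np

# Periodic concrete/steel bilayer: dispersion relation for longitudinal waves,
#   cos(q * a) = cos(k1 d1) cos(k2 d2)
#                - 0.5 * (Z1/Z2 + Z2/Z1) * sin(k1 d1) sin(k2 d2),
# where k_i = w / c_i and Z_i = rho_i * c_i are the acoustic impedances.
rho1, c1, d1 = 2400.0, 3500.0, 1.0   # concrete: density [kg/m^3], speed [m/s], thickness [m]
rho2, c2, d2 = 7850.0, 5900.0, 1.0   # steel (assumed typical values)
Z1, Z2 = rho1 * c1, rho2 * c2

freqs = np.linspace(1.0, 3000.0, 5000)   # Hz
w = 2 * np.pi * freqs
g = (np.cos(w * d1 / c1) * np.cos(w * d2 / c2)
     - 0.5 * (Z1 / Z2 + Z2 / Z1) * np.sin(w * d1 / c1) * np.sin(w * d2 / c2))

in_gap = np.abs(g) > 1   # no propagating Bloch wave at these frequencies
print(freqs[in_gap][0], freqs[in_gap][-1])
```

    The large impedance mismatch between the two materials is what opens wide gaps; the same mechanism, in 3D and at seismic frequencies, motivates the metre-scale structures examined in the abstract.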

  20. Large-scale dynamics and global warming

    SciTech Connect

    Held, I. M.

    1993-02-01

    Predictions of future climate change raise a variety of issues in large-scale atmospheric and oceanic dynamics. Several of these are reviewed in this essay, including the sensitivity of the circulation of the Atlantic Ocean to increasing freshwater input at high latitudes; the possibility of greenhouse cooling in the southern oceans; the sensitivity of monsoonal circulations to differential warming of the two hemispheres; the response of midlatitude storms to changing temperature gradients and increasing water vapor in the atmosphere; and the possible importance of positive feedback between the mean winds and eddy-induced heating in the polar stratosphere.

  1. Neutrinos and large-scale structure

    SciTech Connect

    Eisenstein, Daniel J.

    2015-07-15

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments to measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we now have securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.

  2. Experimental Simulations of Large-Scale Collisions

    NASA Technical Reports Server (NTRS)

    Housen, Kevin R.

    2002-01-01

    This report summarizes research on the effects of target porosity on the mechanics of impact cratering. Impact experiments conducted on a centrifuge provide direct simulations of large-scale cratering on porous asteroids. The experiments show that large craters in porous materials form mostly by compaction, with essentially no deposition of material into the ejecta blanket that is a signature of cratering in less-porous materials. The ratio of ejecta mass to crater mass is shown to decrease with increasing crater size or target porosity. These results are consistent with the observation that large closely-packed craters on asteroid Mathilde appear to have formed without degradation to earlier craters.

  3. Large-Scale PV Integration Study

    SciTech Connect

    Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris

    2011-07-29

    This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.

  4. Local gravity and large-scale structure

    NASA Technical Reports Server (NTRS)

    Juszkiewicz, Roman; Vittorio, Nicola; Wyse, Rosemary F. G.

    1990-01-01

    The magnitude and direction of the observed dipole anisotropy of the galaxy distribution can in principle constrain the amount of large-scale power present in the spectrum of primordial density fluctuations. This paper confronts the data, provided by a recent redshift survey of galaxies detected by the IRAS satellite, with the predictions of two cosmological models with very different levels of large-scale power: the biased Cold Dark Matter dominated model (CDM) and a baryon-dominated model (BDM) with isocurvature initial conditions. Model predictions are investigated for the Local Group peculiar velocity, v(R), induced by mass inhomogeneities distributed out to a given radius, R, for R less than about 10,000 km/s. Several convergence measures for v(R) are developed, which can become powerful cosmological tests when deep enough samples become available. For the present data sets, the CDM and BDM predictions are indistinguishable at the 2 sigma level and both are consistent with observations. A promising discriminant between cosmological models is the misalignment angle between v(R) and the apex of the dipole anisotropy of the microwave background.

  5. Large-scale synthesis of Cu2SnS3 and Cu1.8S hierarchical microspheres as efficient counter electrode materials for quantum dot sensitized solar cells

    NASA Astrophysics Data System (ADS)

    Xu, Jun; Yang, Xia; Wong, Tai-Lun; Lee, Chun-Sing

    2012-09-01

    Exploration of new catalytic semiconductors with novel structures as counter electrode materials is a promising approach to improve performances of quantum dot sensitized solar cells (QDSSCs). In this work, nearly mono-disperse tetragonal Cu2SnS3 (CTS) and rhombohedral Cu1.8S hierarchical microspheres with nanometer-to-micrometer dimensions have been synthesized respectively via a simple solvothermal approach. These microspheres are also demonstrated as efficient counter electrode materials in solar cells using ZnO/ZnSe/CdSe nanocables as photoanode and polysulfide (Sn2-/S2-) solution as electrolyte. While copper sulfide is regarded as one of the most effective counter electrode materials in QDSSCs, we demonstrate the CTS microspheres to show higher electrocatalytic activity for the reduction of polysulfide electrolyte than the Cu1.8S microspheres. This contributes to obvious enhancement of photocurrent density (JSC) and fill factor (FF). Power conversion efficiency (PCE) is significantly enhanced from 0.25% for the cell using a pure FTO (SnO2:F) glass as counter electrode, to 3.65 and 4.06% for the cells using counter electrodes of FTO glasses coated respectively with Cu1.8S and CTS microspheres.

  6. Keldysh field theory for driven open quantum systems

    NASA Astrophysics Data System (ADS)

    Sieberer, L. M.; Buchhold, M.; Diehl, S.

    2016-09-01

    Recent experimental developments in diverse areas—ranging from cold atomic gases to light-driven semiconductors to microcavity arrays—bring into focus systems located at the interface of quantum optics, many-body physics and statistical mechanics. They have in common that coherent and driven–dissipative quantum dynamics occur on an equal footing, creating genuine non-equilibrium scenarios without immediate counterpart in equilibrium condensed matter physics. This concerns both their non-thermal stationary states and their many-body time evolution. It is a challenge for theory to identify novel instances of universal emergent macroscopic phenomena, which are tied unambiguously and in an observable way to the microscopic drive conditions. In this review, we discuss some recent results in this direction. Moreover, we provide a systematic introduction to the open-system Keldysh functional integral approach, which is the proper technical tool to accomplish a merger of quantum optics and many-body physics, and extends the power of modern quantum field theory to driven open quantum systems.

  7. Keldysh field theory for driven open quantum systems.

    PubMed

    Sieberer, L M; Buchhold, M; Diehl, S

    2016-09-01

    Recent experimental developments in diverse areas - ranging from cold atomic gases to light-driven semiconductors to microcavity arrays - bring into focus systems located at the interface of quantum optics, many-body physics and statistical mechanics. They have in common that coherent and driven-dissipative quantum dynamics occur on an equal footing, creating genuine non-equilibrium scenarios without immediate counterpart in equilibrium condensed matter physics. This concerns both their non-thermal stationary states and their many-body time evolution. It is a challenge for theory to identify novel instances of universal emergent macroscopic phenomena, which are tied unambiguously and in an observable way to the microscopic drive conditions. In this review, we discuss some recent results in this direction. Moreover, we provide a systematic introduction to the open-system Keldysh functional integral approach, which is the proper technical tool to accomplish a merger of quantum optics and many-body physics, and extends the power of modern quantum field theory to driven open quantum systems. PMID:27482736

  10. Dynamical and thermodynamical control of Open Quantum Walks

    NASA Astrophysics Data System (ADS)

    Petruccione, Francesco; Sinayskiy, Ilya

    2014-03-01

    Over the last few years dynamical properties and limit distributions of Open Quantum Walks (OQWs), quantum walks driven by dissipation, have been intensely studied [S. Attal et al., J. Stat. Phys. 147, Issue 4, 832 (2012)]. For some particular cases of OQWs central limit theorems have been proven [S. Attal, N. Guillotin, C. Sabot, ``Central Limit Theorems for Open Quantum Random Walks,'' to appear in Annales Henri Poincaré]. However, only recently has the connection between the rich dynamical behavior of OQWs and the corresponding microscopic system-environment models been established. The microscopic derivation of an OQW as a reduced system dynamics on a 2-node graph [I. Sinayskiy, F. Petruccione, Open Syst. Inf. Dyn. 20, 1340007 (2013)] and its generalization to arbitrary graphs allow one to explain the dependence of the dynamical behavior of the OQW on the temperature and the coupling to the environment. For thermal environments we observe Gaussian behaviour, whereas at zero temperature population trapping and ``soliton''-like behaviour are possible. Physical realizations of OQWs in quantum optical setups will also be presented. This work is based on research supported by the South African Research Chair Initiative of the Department of Science and Technology and National Research Foundation.

  11. Homogeneous Open Quantum Random Walks on a Lattice

    NASA Astrophysics Data System (ADS)

    Carbone, Raffaella; Pautrat, Yan

    2015-09-01

    We study open quantum random walks (OQRWs) for which the underlying graph is a lattice, and the generators of the walk are homogeneous in space. Using the results recently obtained in Carbone and Pautrat (Ann Henri Poincaré, 2015), we study the quantum trajectory associated with the OQRW, which is described by a position process and a state process. We obtain a central limit theorem and a large deviation principle for the position process. We study in detail the case of homogeneous OQRWs on the lattice, with internal space.
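    The position/state quantum trajectory described here is straightforward to simulate. A minimal sketch on the one-dimensional lattice with a 2-dimensional internal space, using an illustrative pair of transition operators satisfying the completeness condition B₊†B₊ + B₋†B₋ = I (not the paper's general setting):

```python
import numpy as np

rng = np.random.default_rng(0)

# Homogeneous OQRW on the integers. Transition operators (illustrative choice):
Bp = np.array([[1, 1], [0, 1]], dtype=complex) / np.sqrt(3)   # step +1
Bm = np.array([[1, 0], [-1, 1]], dtype=complex) / np.sqrt(3)  # step -1
# Completeness: Bp† Bp + Bm† Bm = I, so the step probabilities below sum to 1.

def trajectory(n_steps, rho0):
    """Unravel the OQRW as a Markov chain on (position, internal state)."""
    x, rho = 0, rho0.copy()
    for _ in range(n_steps):
        p_plus = np.trace(Bp @ rho @ Bp.conj().T).real
        if rng.random() < p_plus:
            x, rho = x + 1, Bp @ rho @ Bp.conj().T / p_plus
        else:
            x, rho = x - 1, Bm @ rho @ Bm.conj().T / (1 - p_plus)
    return x

rho0 = np.eye(2, dtype=complex) / 2
positions = np.array([trajectory(200, rho0) for _ in range(500)])
# Central limit behaviour: the empirical position distribution is
# approximately Gaussian for long walks.
print(positions.mean(), positions.std())
```

    The state process renormalizes the internal density matrix after each jump, which is exactly the "quantum trajectory" whose position marginal the central limit theorem addresses.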

  12. Systematic renormalization of the effective theory of Large Scale Structure

    NASA Astrophysics Data System (ADS)

    Akbar Abolhasani, Ali; Mirbabayi, Mehrdad; Pajer, Enrico

    2016-05-01

    A perturbative description of Large Scale Structure is a cornerstone of our understanding of the observed distribution of matter in the universe. Renormalization is an essential and defining step to make this description physical and predictive. Here we introduce a systematic renormalization procedure, which neatly associates counterterms to the UV-sensitive diagrams order by order, as it is commonly done in quantum field theory. As a concrete example, we renormalize the one-loop power spectrum and bispectrum of both density and velocity. In addition, we present a series of results that are valid to all orders in perturbation theory. First, we show that while systematic renormalization requires temporally non-local counterterms, in practice one can use an equivalent basis made of local operators. We give an explicit prescription to generate all counterterms allowed by the symmetries. Second, we present a formal proof of the well-known general argument that the contribution of short distance perturbations to large scale density contrast δ and momentum density π(k) scale as k^2 and k, respectively. Third, we demonstrate that the common practice of introducing counterterms only in the Euler equation when one is interested in correlators of δ is indeed valid to all orders.

  13. Theory of short periodic orbits for partially open quantum maps.

    PubMed

    Carlo, Gabriel G; Benito, R M; Borondo, F

    2016-07-01

    We extend the semiclassical theory of short periodic orbits [M. Novaes et al., Phys. Rev. E 80, 035202(R) (2009)PLEEE81539-375510.1103/PhysRevE.80.035202] to partially open quantum maps, which correspond to classical maps where the trajectories are partially bounced back due to a finite reflectivity R. These maps are representative of a class that has many experimental applications. The open scar functions are conveniently redefined, providing a suitable tool for the investigation of this kind of system. Our theory is applied to the paradigmatic partially open tribaker map. We find that the set of periodic orbits that belongs to the classical repeller of the open map (R=0) is able to support the set of long-lived resonances of the partially open quantum map in a perturbative regime. By including the most relevant trajectories outside of this set, the validity of the approximation is extended to a broad range of R values. Finally, we identify the details of the transition from qualitatively open to qualitatively closed behavior, providing an explanation in terms of short periodic orbits. PMID:27575138

  15. Theory of short periodic orbits for partially open quantum maps

    NASA Astrophysics Data System (ADS)

    Carlo, Gabriel G.; Benito, R. M.; Borondo, F.

    2016-07-01

    We extend the semiclassical theory of short periodic orbits [M. Novaes et al., Phys. Rev. E 80, 035202(R) (2009), 10.1103/PhysRevE.80.035202] to partially open quantum maps, which correspond to classical maps where the trajectories are partially bounced back due to a finite reflectivity R. These maps are representative of a class that has many experimental applications. The open scar functions are conveniently redefined, providing a suitable tool for the investigation of this kind of system. Our theory is applied to the paradigmatic partially open tribaker map. We find that the set of periodic orbits that belongs to the classical repeller of the open map (R = 0) is able to support the set of long-lived resonances of the partially open quantum map in a perturbative regime. By including the most relevant trajectories outside of this set, the validity of the approximation is extended to a broad range of R values. Finally, we identify the details of the transition from qualitatively open to qualitatively closed behavior, providing an explanation in terms of short periodic orbits.

  16. Engineering management of large scale systems

    NASA Technical Reports Server (NTRS)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

    The organization of high-technology and engineering problem solving has given rise to an emerging concept. Reasoning principles for integrating traditional engineering problem solving with system theory, management sciences, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long range perspective. Long range planning has a great potential to improve productivity by using a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces in promoting the organization of engineering problems. Aspects of systems engineering that provide an understanding of management of large scale systems are broadly covered here. Due to the focus and application of research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized but are considered.

  17. Large scale study of tooth enamel

    SciTech Connect

    Bodart, F.; Deconninck, G.; Martin, M.Th.

    1981-04-01

    Human tooth enamel contains traces of foreign elements. The presence of these elements is related to the history and the environment of the human body and can be considered as the signature of perturbations which occur during the growth of a tooth. A map of the distribution of these traces on a large scale sample of the population will constitute a reference for further investigations of environmental effects. One hundred eighty samples of teeth were first analysed using PIXE, backscattering and nuclear reaction techniques. The results were analysed using statistical methods. Correlations between O, F, Na, P, Ca, Mn, Fe, Cu, Zn, Pb and Sr were observed and cluster analysis was in progress. The techniques described in the present work have been developed in order to establish a method for the exploration of very large samples of the Belgian population.

  18. Batteries for Large Scale Energy Storage

    SciTech Connect

    Soloveichik, Grigorii L.

    2011-07-15

    In recent years, with the deployment of renewable energy sources, advances in electrified transportation, and development in smart grids, the markets for large-scale stationary energy storage have grown rapidly. Electrochemical energy storage methods are strong candidate solutions due to their high energy density, flexibility, and scalability. This review provides an overview of mature and emerging technologies for secondary and redox flow batteries. New developments in the chemistry of secondary and flow batteries as well as regenerative fuel cells are also considered. Advantages and disadvantages of current and prospective electrochemical energy storage options are discussed. The most promising technologies in the short term are high-temperature sodium batteries with β”-alumina electrolyte, lithium-ion batteries, and flow batteries. Regenerative fuel cells and lithium metal batteries with high energy density require further research to become practical.

  19. Large-scale databases of proper names.

    PubMed

    Conley, P; Burgess, C; Hage, D

    1999-05-01

    Few tools for research in proper names have been available--specifically, there is no large-scale corpus of proper names. Two corpora of proper names were constructed, one based on U.S. phone book listings, the other derived from a database of Usenet text. Name frequencies from both corpora were compared with human subjects' reaction times (RTs) to the proper names in a naming task. Regression analysis showed that the Usenet frequencies contributed to predictions of human RT, whereas phone book frequencies did not. In addition, semantic neighborhood density measures derived from the HAL corpus were compared with the subjects' RTs and found to be a better predictor of RT than was frequency in either corpus. These new corpora are freely available on line for download. Potentials for these corpora range from using the names as stimuli in experiments to using the corpus data in software applications. PMID:10495803
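    The regression logic of the comparison can be sketched with synthetic data; the real corpora and reaction times are those described in the abstract, and all numbers below are made up for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: naming RT (ms) driven by log frequency in one corpus,
# while a second corpus's frequencies are unrelated to RT.
n = 300
log_freq_a = rng.normal(5.0, 1.5, n)                    # informative predictor
log_freq_b = rng.normal(5.0, 1.5, n)                    # uninformative predictor
rt = 650 - 20 * log_freq_a + rng.normal(0, 25, n)       # RT depends on corpus A only

def r_squared(x, y):
    """Ordinary least squares R^2 for a single predictor with intercept."""
    A = np.column_stack([np.ones_like(y), x])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1 - resid.var() / y.var()

r2_a = r_squared(log_freq_a, rt)
r2_b = r_squared(log_freq_b, rt)
print(r2_a, r2_b)  # the informative corpus explains far more RT variance
```

    This is the shape of the test reported in the abstract: whichever corpus frequency yields the larger R² (here corpus A by construction) is the better predictor of human RT.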

  20. Large-scale simulations of reionization

    SciTech Connect

    Kohler, Katharina; Gnedin, Nickolay Y.; Hamilton, Andrew J.S.; /JILA, Boulder

    2005-11-01

    We use cosmological simulations to explore the large-scale effects of reionization. Since reionization is a process that involves a large dynamic range--from galaxies to rare bright quasars--we need to be able to cover a significant volume of the universe in our simulation without losing the important small scale effects from galaxies. Here we have taken an approach that uses clumping factors derived from small scale simulations to approximate the radiative transfer on the sub-cell scales. Using this technique, we can cover a simulation size up to 1280 h^-1 Mpc with 10 h^-1 Mpc cells. This allows us to construct synthetic spectra of quasars similar to observed spectra of SDSS quasars at high redshifts and compare them to the observational data. These spectra can then be analyzed for HII region sizes, the presence of the Gunn-Peterson trough, and the Lyman-α forest.
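    The clumping factor that encodes the sub-cell physics is simply C = <n²>/<n>² ≥ 1 for a density field n; it measures how much unresolved inhomogeneity boosts two-body processes such as recombination relative to a uniform medium of the same mean density. A sketch on a synthetic lognormal field (not the paper's simulation data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative small-scale density field: lognormal with unit log-variance.
# For this distribution the exact clumping factor is exp(sigma^2) = e.
density = rng.lognormal(mean=0.0, sigma=1.0, size=64**3)

def clumping_factor(n):
    """C = <n^2> / <n>^2; equals 1 only for a perfectly uniform field."""
    return np.mean(n**2) / np.mean(n)**2

C = clumping_factor(density)
print(C)  # close to e ~ 2.718 for this field
```

    In the approach described above, C computed from high-resolution small boxes multiplies the recombination term in each coarse 10 h^-1 Mpc cell, approximating the unresolved radiative transfer.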

  1. Large scale water lens for solar concentration.

    PubMed

    Mondol, A S; Vogel, B; Bastian, G

    2015-06-01

    Properties of large scale water lenses for solar concentration were investigated. These lenses were built from readily available materials: normal tap water and hyper-elastic linear low-density polyethylene foil. With the lenses exposed to sunlight, the focal lengths and light intensities in the focal spot were measured and calculated. Their optical properties were modeled with raytracing software based on the lens shape. We have achieved a good match of experimental and theoretical data by considering the wavelength-dependent concentration factor, absorption and focal length. The change in light concentration as a function of water volume was examined via the resulting load on the foil and the corresponding change of shape. The latter was extracted from images and modeled by a finite element simulation. PMID:26072893
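    For orientation, the thin-lens estimate for such a plano-convex water lens is f = R/(n − 1), with n ≈ 1.33 for water in the visible; the radius of curvature below is an assumed value for illustration, not the paper's measured geometry:

```python
# Thin-lens focal length of a plano-convex water lens (one curved surface):
#   f = R / (n - 1)
n_water = 1.33   # refractive index of water, visible light (approximate)
R = 0.5          # radius of curvature of the sagging foil in metres (assumed)

focal_length = R / (n_water - 1)
print(focal_length)  # metres
```

    As the abstract notes, the real lens shape is load-dependent and wavelength matters, which is why the authors use raytracing plus finite elements rather than this single-number estimate.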

  2. Large scale structures in transitional pipe flow

    NASA Astrophysics Data System (ADS)

    Hellström, Leo; Ganapathisubramani, Bharathram; Smits, Alexander

    2015-11-01

    We present a dual-plane snapshot POD analysis of transitional pipe flow at a Reynolds number of 3440, based on the pipe diameter. The time-resolved high-speed PIV data were simultaneously acquired in two planes, a cross-stream plane (2D-3C) and a streamwise plane (2D-2C) on the pipe centerline. The two light sheets were orthogonally polarized, allowing particles situated in each plane to be viewed independently. In the snapshot POD analysis, the modal energy is based on the cross-stream plane, while the POD modes are calculated using the dual-plane data. We present results on the emergence and decay of the energetic large scale motions during transition to turbulence, and compare these motions to those observed in fully developed turbulent flow. Supported under ONR Grant N00014-13-1-0174 and ERC Grant No. 277472.
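    Snapshot POD itself reduces to an SVD of the mean-subtracted snapshot matrix, with modal energy fractions given by the normalized squared singular values. A self-contained sketch on synthetic data standing in for the PIV fields (the two planted modes and the noise level are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Snapshot matrix X: each column is one mean-subtracted velocity snapshot.
n_points, n_snapshots = 400, 60
x = np.linspace(0, 2 * np.pi, n_points)
t = np.linspace(0, 1, n_snapshots)
X = (np.outer(np.sin(x), np.cos(8 * np.pi * t))          # dominant coherent mode
     + 0.3 * np.outer(np.sin(2 * x), np.sin(8 * np.pi * t))  # weaker mode
     + 0.05 * rng.normal(size=(n_points, n_snapshots)))      # measurement noise
X = X - X.mean(axis=1, keepdims=True)

# POD modes are the left singular vectors; energies are squared singular values.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
energy = s**2 / np.sum(s**2)
print(energy[:4])  # the two planted modes dominate the energy ranking
```

    In the dual-plane variant described above, the energies come from the cross-stream plane while the modes are built from both planes; the SVD step is the same.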

  3. Challenges in large scale distributed computing: bioinformatics.

    SciTech Connect

    Disz, T.; Kubal, M.; Olson, R.; Overbeek, R.; Stevens, R.; Mathematics and Computer Science; Univ. of Chicago; The Fellowship for the Interpretation of Genomes

    2005-01-01

    The amount of genomic data available for study is increasing at a rate similar to that of Moore's law. This deluge of data is challenging bioinformaticians to develop newer, faster and better algorithms for analysis and examination of this data. The growing availability of large scale computing grids coupled with high-performance networking is challenging computer scientists to develop better, faster methods of exploiting parallelism in these biological computations and deploying them across computing grids. In this paper, we describe two computations that are required to be run frequently and which require large amounts of computing resource to complete in a reasonable time. The data for these computations are very large and the sequential computational time can exceed thousands of hours. We show the importance and relevance of these computations, the nature of the data and parallelism and we show how we are meeting the challenge of efficiently distributing and managing these computations in the SEED project.

  4. The challenge of large-scale structure

    NASA Astrophysics Data System (ADS)

    Gregory, S. A.

    1996-03-01

    The tasks that I have assumed for myself in this presentation include three separate parts. The first, appropriate to the particular setting of this meeting, is to review the basic work of the founding of this field; the appropriateness comes from the fact that W. G. Tifft made immense contributions that are not often realized by the astronomical community. The second task is to outline the general tone of the observational evidence for large scale structures. (Here, in particular, I cannot claim to be complete. I beg forgiveness from any workers who are left out by my oversight for lack of space and time.) The third task is to point out some of the major aspects of the field that may represent the clues by which some brilliant sleuth will ultimately figure out how galaxies formed.

  5. Grid sensitivity capability for large scale structures

    NASA Technical Reports Server (NTRS)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.

  6. Large-Scale Astrophysical Visualization on Smartphones

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.

    2011-07-01

    Nowadays, digital sky surveys and long-duration, high-resolution numerical simulations using high performance computing and grid systems produce multidimensional astrophysical datasets on the order of several petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover, educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., the formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.

  7. Large scale water lens for solar concentration.

    PubMed

    Mondol, A S; Vogel, B; Bastian, G

    2015-06-01

    Properties of large scale water lenses for solar concentration were investigated. These lenses were built from readily available materials: normal tap water and hyper-elastic linear low-density polyethylene foil. With the lenses exposed to sunlight, the focal lengths and light intensities in the focal spot were measured and calculated. Their optical properties were modeled with ray-tracing software based on the lens shape. We have achieved a good match of experimental and theoretical data by considering the wavelength-dependent concentration factor, absorption and focal length. The change in light concentration as a function of water volume was examined via the resulting load on the foil and the corresponding change of shape. The latter was extracted from images and modeled by a finite element simulation.
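    As a rough order-of-magnitude check on such a lens, a thin-lens estimate for a plano-convex shape gives f = R/(n-1). The refractive index of water and the radius of curvature below are illustrative assumptions for the sketch, not values reported in the paper:

```python
def planoconvex_focal_length(radius_m, n=1.333):
    """Thin-lens estimate f = R / (n - 1) for a plano-convex lens.

    n = 1.333 is the approximate refractive index of water at visible
    wavelengths; a real foil lens deviates from the ideal spherical shape.
    """
    return radius_m / (n - 1.0)

# Illustrative: a water-filled foil lens whose curved surface has R = 0.5 m.
f = planoconvex_focal_length(0.5)
print(f"focal length ~ {f:.2f} m")
```

Because n - 1 is only about 1/3 for water, the focal length comes out near three times the radius of curvature, which is why such lenses are physically large for a given concentration.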

  8. The XMM Large Scale Structure Survey

    NASA Astrophysics Data System (ADS)

    Pierre, Marguerite

    2005-10-01

    We propose to complete, by an additional 5 deg2, the XMM-LSS Survey region overlying the Spitzer/SWIRE field. This field already has CFHTLS and Integral coverage, and will encompass about 10 deg2. The resulting multi-wavelength medium-depth survey, which complements XMM and Chandra deep surveys, will provide a unique view of large-scale structure over a wide range of redshift, and will show active galaxies in the full range of environments. The complete coverage by optical and IR surveys provides high-quality photometric redshifts, so that cosmological results can quickly be extracted. In the spirit of a Legacy survey, we will make the raw X-ray data immediately public. Multi-band catalogues and images will also be made available on short time scales.

  9. Large-scale sequential quadratic programming algorithms

    SciTech Connect

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
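    The core of one SQP iteration can be illustrated on a toy equality-constrained problem. The sketch below is a minimal Newton-KKT step in pure Python, not the reduced-Hessian, MINOS-based algorithm of the thesis; for a quadratic objective with a linear constraint, a single step reaches the minimizer.

```python
def solve_linear(M, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(M)]  # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            r = M[i][k] / M[k][k]
            M[i] = [a - r * m for a, m in zip(M[i], M[k])]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# One SQP step for:  minimize x^2 + y^2  subject to  x + y - 1 = 0,
# starting from (x, y) = (1, 0).  The QP subproblem's KKT system is
#   [ H   A^T ] [ d ]   [ -grad f ]
#   [ A   0   ] [ l ] = [ -c      ]
# with Hessian H = 2I and constraint Jacobian A = [1, 1].
x, y = 1.0, 0.0
grad = [2.0 * x, 2.0 * y]
c = x + y - 1.0
kkt = [[2.0, 0.0, 1.0],
       [0.0, 2.0, 1.0],
       [1.0, 1.0, 0.0]]
dx, dy, lam = solve_linear(kkt, [-grad[0], -grad[1], -c])
x, y = x + dx, y + dy
print(x, y)  # quadratic objective, linear constraint: one step suffices
```

Large-scale SQP methods differ from this sketch precisely in how they avoid forming and factoring the full KKT matrix, e.g. by maintaining only a quasi-Newton approximation to the reduced Hessian as described above.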

  10. Supporting large-scale computational science

    SciTech Connect

    Musick, R

    1998-10-01

    A study has been carried out to determine the feasibility of using commercial database management systems (DBMSs) to support large-scale computational science. Conventional wisdom in the past has been that DBMSs are too slow for such data. Several events over the past few years have muddied the clarity of this mindset: (1) several commercial DBMS systems have demonstrated storage and ad-hoc query access to terabyte data sets; (2) several large-scale science teams, such as EOSDIS [NAS91], high energy physics [MM97] and human genome [Kin93], have adopted (or make frequent use of) commercial DBMS systems as the central part of their data management scheme; (3) several major DBMS vendors have introduced their first object-relational products (ORDBMSs), which have the potential to support large, array-oriented data; and (4) in some cases, performance is a moot issue, in particular if the performance of legacy applications is not reduced while new, albeit slow, capabilities are added to the system. The basic assessment is still that DBMSs do not scale to large computational data. However, many of the reasons have changed, and there is an expiration date attached to that prognosis. This document expands on this conclusion, identifies the advantages and disadvantages of various commercial approaches, and describes the studies carried out in exploring this area. The document is meant to be brief, technical and informative, rather than a motivational pitch. The conclusions within are very likely to become outdated within the next 5-7 years, as market forces will have a significant impact on the state of the art in scientific data management over the next decade.

  11. Supporting large-scale computational science

    SciTech Connect

    Musick, R., LLNL

    1998-02-19

    Business needs have driven the development of commercial database systems since their inception. As a result, there has been a strong focus on supporting many users, minimizing the potential corruption or loss of data, and maximizing performance metrics like transactions per second, or TPC-C and TPC-D results. It turns out that these optimizations have little to do with the needs of the scientific community, and in particular have little impact on improving the management and use of large-scale high-dimensional data. At the same time, there is an unanswered need in the scientific community for many of the benefits offered by a robust DBMS. For example, tying an ad-hoc query language such as SQL together with a visualization toolkit would be a powerful enhancement to current capabilities. Unfortunately, there has been little emphasis or discussion in the VLDB community on this mismatch over the last decade. The goal of the paper is to identify the specific issues that need to be resolved before large-scale scientific applications can make use of DBMS products. This topic is addressed in the context of an evaluation of commercial DBMS technology applied to the exploration of data generated by the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The paper describes the data being generated for ASCI as well as current capabilities for interacting with and exploring this data. The attraction of applying standard DBMS technology to this domain is discussed, as well as the technical and business issues that currently make this an infeasible solution.

  12. Linear Response Theory for Thermally Driven Quantum Open Systems

    NASA Astrophysics Data System (ADS)

    Jakšić, V.; Ogata, Y.; Pillet, C.-A.

    2006-05-01

    This note is a continuation of our recent paper [V. Jakšić, Y. Ogata, and C.-A. Pillet, The Green-Kubo formula and Onsager reciprocity relations in quantum statistical mechanics, Commun. Math. Phys., in press] where we have proven the Green-Kubo formula and the Onsager reciprocity relations for heat fluxes in thermally driven quantum open systems. In this note we extend the derivation of the Green-Kubo formula to heat and charge fluxes and discuss some other generalizations of the model and results of [V. Jakšić, Y. Ogata, and C.-A. Pillet, The Green-Kubo formula and Onsager reciprocity relations in quantum statistical mechanics, Commun. Math. Phys., in press].

  13. Fluctuations of work in nearly adiabatically driven open quantum systems.

    PubMed

    Suomela, S; Salmilehto, J; Savenko, I G; Ala-Nissila, T; Möttönen, M

    2015-02-01

    We extend the quantum jump method to nearly adiabatically driven open quantum systems in a way that allows for an accurate account of the external driving in the system-environment interaction. Using this framework, we construct the corresponding trajectory-dependent work performed on the system and derive the integral fluctuation theorem and the Jarzynski equality for nearly adiabatic driving. We show that such identities hold as long as the stochastic dynamics and work variable are consistently defined. We numerically study the emerging work statistics for a two-level quantum system and find that the conventional diabatic approximation is unable to capture some prominent features arising from driving, such as the continuity of the probability density of work. Our results reveal the necessity of using accurate expressions for the drive-dressed heat exchange in future experiments probing jump time distributions. PMID:25768477

  14. Large-scale calculations of gas phase thermochemistry: Enthalpy of formation, standard entropy, and heat capacity

    NASA Astrophysics Data System (ADS)

    Ghahremanpour, Mohammad M.; van Maaren, Paul J.; Ditz, Jonas C.; Lindh, Roland; van der Spoel, David

    2016-09-01

    Large scale quantum calculations for molar enthalpy of formation (ΔfH0), standard entropy (S0), and heat capacity (CV) are presented. A large data set may help to evaluate quantum thermochemistry tools in order to uncover possible hidden shortcomings and to find experimental data that might need to be reinvestigated; indeed, we list and annotate approximately 200 problematic thermochemistry measurements. Quantum methods systematically underestimate S0 for flexible molecules in the gas phase if only a single (minimum energy) conformation is taken into account. This problem can be tackled in principle by performing thermochemistry calculations for all stable conformations [Zheng et al., Phys. Chem. Chem. Phys. 13, 10885-10907 (2011)], but this is not practical for large molecules. We observe that the deviation of composite quantum thermochemistry recipes from experimental S0 corresponds roughly to the Boltzmann equation (S = R ln Ω), where R is the gas constant and Ω the number of possible conformations. This allows an empirical correction of the calculated entropy for molecules with multiple conformations. With the correction we find an RMSD from experiment of ≈13 J/mol K for 1273 compounds. This paper also provides predictions of ΔfH0, S0, and CV for well over 700 compounds for which no experimental data could be found in the literature. Finally, in order to facilitate the analysis of thermodynamic properties by others we have implemented a new tool, obthermo, in the OpenBabel program suite [O'Boyle et al., J. Cheminf. 3, 33 (2011)], including a table of reference atomization energy values for popular thermochemistry methods.
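    The empirical correction described above, S = R ln Ω, is straightforward to evaluate. The conformation counts below are illustrative, not taken from the paper's data set:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def conformational_entropy_correction(n_conformations):
    """Boltzmann-style correction S = R ln(Omega) for Omega stable conformations."""
    return R * math.log(n_conformations)

# Correction for 1, 3, and 10 accessible conformations (illustrative values).
for omega in (1, 3, 10):
    print(omega, round(conformational_entropy_correction(omega), 2))
```

A rigid molecule (Ω = 1) gets no correction, while each order of magnitude in conformation count adds roughly 19 J/mol K, comparable to the ≈13 J/mol K RMSD quoted above.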

  15. Global food insecurity. Treatment of major food crops with elevated carbon dioxide or ozone under large-scale fully open-air conditions suggests recent models may have overestimated future yields

    PubMed Central

    Long, Stephen P; Ainsworth, Elizabeth A; Leakey, Andrew D.B; Morgan, Patrick B

    2005-01-01

    Predictions of yield for the globe's major grain and legume arable crops suggest that, with a moderate temperature increase, production may increase in the temperate zone, but decline in the tropics. In total, global food supply may show little change. This security comes from inclusion of the direct effect of rising carbon dioxide (CO2) concentration, [CO2], which significantly stimulates yield by decreasing photorespiration in C3 crops and transpiration in all crops. Evidence for a large response to [CO2] is largely based on studies made within chambers at small scales, which would be considered unacceptable for standard agronomic trials of new cultivars or agrochemicals. Yet, predictions of the globe's future food security are based on such inadequate information. Free-Air Concentration Enrichment (FACE) technology now allows investigation of the effects of rising [CO2] and ozone on field crops under fully open-air conditions at an agronomic scale. Experiments with rice, wheat, maize and soybean show smaller increases in yield than anticipated from studies in chambers. Experiments with increased ozone show large yield losses (20%), which are not accounted for in projections of global food security. These findings suggest that current projections of global food security are overoptimistic. The fertilization effect of CO2 is less than that used in many models, while rising ozone will cause large yield losses in the Northern Hemisphere. Unfortunately, FACE studies have been limited in geographical extent and interactive effects of CO2, ozone and temperature have yet to be studied. Without more extensive study of the effects of these changes at an agronomic scale in the open air, our ever-more sophisticated models will continue to have feet of clay. PMID:16433090

  16. Large-Scale Statistics for Cu Electromigration

    NASA Astrophysics Data System (ADS)

    Hauschildt, M.; Gall, M.; Hernandez, R.

    2009-06-01

    Even after the successful introduction of Cu-based metallization, the electromigration failure risk has remained one of the important reliability concerns for advanced process technologies. The observation of strong bimodality for the electron up-flow direction in dual-inlaid Cu interconnects has added complexity, but is now widely accepted. The failure voids can occur either within the via ("early" mode) or within the trench ("late" mode). More recently, bimodality has been reported also in down-flow electromigration, leading to very short lifetimes due to small, slit-shaped voids under vias. For a more thorough investigation of these early failure phenomena, specific test structures were designed based on the Wheatstone Bridge technique. The use of these structures enabled an increase of the tested sample size close to 675,000, allowing a direct analysis of electromigration failure mechanisms at the single-digit ppm regime. Results indicate that down-flow electromigration exhibits bimodality at very small percentage levels, not readily identifiable with standard testing methods. The activation energy for the down-flow early failure mechanism was determined to be 0.83±0.02 eV. Within the small error bounds of this large-scale statistical experiment, this value is deemed to be significantly lower than the usually reported activation energy of 0.90 eV for electromigration-induced diffusion along Cu/SiCN interfaces. Due to the advantages of the Wheatstone Bridge technique, we were also able to expand the experimental temperature range down to 150 °C, coming quite close to typical operating conditions up to 125 °C. As a result of the lowered activation energy, we conclude that the down-flow early failure mode may control the chip lifetime at operating conditions. The slit-like character of the early failure void morphology also raises concerns about the validity of the Blech-effect for this mechanism. A very small amount of Cu depletion may cause failure even before a
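    The practical consequence of a lowered activation energy can be sketched with a purely thermal Arrhenius acceleration factor, using the 0.83 eV value and the 150 °C / 125 °C temperatures quoted above. This is a simplified sketch that ignores any current-density dependence (as in the full Black's-equation model), so the numbers are illustrative only:

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def acceleration_factor(ea_ev, t_use_c, t_stress_c):
    """Arrhenius thermal acceleration between stress and use temperatures.

    Returns the factor by which lifetime at the use temperature exceeds
    lifetime at the (hotter) stress temperature, for activation energy ea_ev.
    """
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp(ea_ev / K_B * (1.0 / t_use - 1.0 / t_stress))

# Extrapolate from a 150 C stress test down to 125 C operating conditions.
af = acceleration_factor(0.83, 125.0, 150.0)
print(f"lifetime at 125 C ~ {af:.1f}x the 150 C test lifetime")
```

A lower Ea compresses this factor: the same stress-test result extrapolates to a shorter operating lifetime than the conventional 0.90 eV would predict, which is why the early failure mode can dominate at use conditions.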

  17. Multi-point observations of large scale perturbations on the open-closed field line boundary during a geomagnetic storm, as observed by the Van Allen Probes and geostationary satellites

    NASA Astrophysics Data System (ADS)

    Grande, Manuel; MacDonald, Elizabeth; Dixon, Patrick

    We discuss a series of lobe entry events observed by the twin Van Allen Probes spacecraft between 0200 and 0515 UTC during the November 14th 2012 geomagnetic storm. During the events Dst was below -100 nT with the IMF being strongly southward (Bz = -15 nT) and eastward (By = 20 nT). The events occurred in the southern hemisphere flank between 0400 and 0635 local time and at altitudes between 5.6 and 6.2 RE, and were characterized by significantly diminished electron and ion fluxes and a corresponding strong, highly stretched magnetic field. Both spacecraft crossed into the lobe five times with durations of 3-10 minutes. Four of the events were seen by both Van Allen Probes nearly simultaneously despite separations of up to 45 minutes of local time. In all cases the more tailward satellite sees the boundary crossing first. The lobe was also encountered at the same time by the LANL geosynchronous satellites, both at dawn in the northern hemisphere and at dusk in the southern hemisphere. These multi-spacecraft observations are used to constrain the spatial and temporal extent of the open/closed field line boundary and to compare this topology to that predicted by a range of magnetic field models. Significant accelerated field-aligned oxygen signatures were measured by the HOPE low energy plasma instrument aboard the probes. Using the multi-point measurements we will examine the source of this acceleration and its role in inner magnetosphere ion dynamics.

  18. Multi-point observations of large scale perturbations on the open/closed field line boundary during a geomagnetic storm, as observed by the Van Allen Probes and geostationary satellites

    NASA Astrophysics Data System (ADS)

    Dixon, Paddy; Grande, Manuel; MacDonald, Elizabeth; Skoug, Ruth; Reeves, Geoff; Thomsen, Michelle; Funsten, Herbert; Zou, Shasha; Glocer, Alex; Jia, Xianzhe

    2014-05-01

    We discuss a series of lobe entry events observed by the twin Van Allen Probes spacecraft between 0200 and 0515 UTC during the November 14th 2012 geomagnetic storm. During the events Dst was below -100 nT with the IMF being strongly southward (Bz = -15 nT) and eastward (By = 20 nT). The events occurred in the southern hemisphere flank between 0400 and 0635 local time and at altitudes between 5.6 and 6.2 RE, and were characterized by significantly diminished electron and ion fluxes and a corresponding strong, highly stretched magnetic field. Both spacecraft crossed into the lobe five times with durations of 3-10 minutes. Four of the events were seen by both Van Allen Probes nearly simultaneously despite separations of up to 45 minutes of local time. In all cases the more tailward satellite sees the boundary crossing first. The lobe was also encountered at the same time by the LANL geosynchronous satellites, both at dawn in the northern hemisphere and at dusk in the southern hemisphere. These multi-spacecraft observations are used to constrain the spatial and temporal extent of the open/closed field line boundary and to compare this topology to that predicted by a range of magnetic field models. Significant accelerated field-aligned oxygen signatures were measured by the HOPE low energy plasma instrument aboard the probes. Using the multi-point measurements we will examine the source of this acceleration and its role in inner magnetosphere ion dynamics.

  19. Large scale digital atlases in neuroscience

    NASA Astrophysics Data System (ADS)

    Hawrylycz, M.; Feng, D.; Lau, C.; Kuan, C.; Miller, J.; Dang, C.; Ng, L.

    2014-03-01

    Imaging in neuroscience has revolutionized our understanding of brain structure, architecture and, increasingly, function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining these data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and, in addition to atlases of the human brain, includes high-quality brain atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging as well as gene expression data, modern digital atlases use probabilistic and multimodal techniques, as well as sophisticated visualization software, to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project of a genome-wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.

  20. Food appropriation through large scale land acquisitions

    NASA Astrophysics Data System (ADS)

    Rulli, Maria Cristina; D'Odorico, Paolo

    2014-05-01

    The increasing demand for agricultural products and the uncertainty of international food markets have recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of lack of modern technology. It is expected that in the long run large scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crop yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show how up to 300-550 million people could be fed by crops grown in the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190-370 million people could be supported by this land without closing the yield gap. These numbers raise some concern because the food produced in the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested in the acquired land could ensure food security for the local populations.

  1. Large-scale carbon fiber tests

    NASA Technical Reports Server (NTRS)

    Pride, R. A.

    1980-01-01

    A realistic release of carbon fibers was established by burning a minimum of 45 kg of carbon fiber composite aircraft structural components in each of five large scale, outdoor aviation jet fuel fire tests. This release was quantified by several independent assessments with various instruments developed specifically for these tests. The most likely values for the mass of single carbon fibers released ranged from 0.2 percent of the initial mass of carbon fiber for the source tests (zero wind velocity) to a maximum of 0.6 percent of the initial carbon fiber mass for dissemination tests (5 to 6 m/s wind velocity). Mean fiber lengths for fibers greater than 1 mm in length ranged from 2.5 to 3.5 mm. Mean diameters ranged from 3.6 to 5.3 micrometers which was indicative of significant oxidation. Footprints of downwind dissemination of the fire released fibers were measured to 19.1 km from the fire.

  2. Large-scale clustering of cosmic voids

    NASA Astrophysics Data System (ADS)

    Chan, Kwan Chuen; Hamaus, Nico; Desjacques, Vincent

    2014-11-01

    We study the clustering of voids using N-body simulations and simple theoretical models. The excursion-set formalism describes fairly well the abundance of voids identified with the watershed algorithm, although the void formation threshold required is quite different from the spherical collapse value. The void cross bias bc is measured and its large-scale value is found to be consistent with the peak-background split results. A simple fitting formula for bc is found. We model the void auto-power spectrum taking into account the void biasing and exclusion effect. A good fit to the simulation data is obtained for voids with radii ≳ 30 Mpc h⁻¹, especially when the void biasing model is extended to 1-loop order. However, the best-fit bias parameters do not agree well with the peak-background split results. Being able to fit the void auto-power spectrum is particularly important not only because it is the direct observable in galaxy surveys, but also because our method enables us to treat the bias parameters as nuisance parameters, which are sensitive to the techniques used to identify voids.

  3. Simulations of Large Scale Structures in Cosmology

    NASA Astrophysics Data System (ADS)

    Liao, Shihong

    Large-scale structures are powerful probes for cosmology. Due to the long range and non-linear nature of gravity, the formation of cosmological structures is a very complicated problem. The only known viable solution is cosmological N-body simulations. In this thesis, we use cosmological N-body simulations to study structure formation, particularly dark matter haloes' angular momenta and dark matter velocity field. The origin and evolution of angular momenta is an important ingredient for the formation and evolution of haloes and galaxies. We study the time evolution of the empirical angular momentum - mass relation for haloes to offer a more complete picture about its origin, dependences on cosmological models and nonlinear evolutions. We also show that haloes follow a simple universal specific angular momentum profile, which is useful in modelling haloes' angular momenta. The dark matter velocity field will become a powerful cosmological probe in the coming decades. However, theoretical predictions of the velocity field rely on N-body simulations and thus may be affected by numerical artefacts (e.g. finite box size, softening length and initial conditions). We study how such numerical effects affect the predicted pairwise velocities, and we propose a theoretical framework to understand and correct them. Our results will be useful for accurately comparing N-body simulations to observational data of pairwise velocities.

  4. Curvature constraints from large scale structure

    NASA Astrophysics Data System (ADS)

    Di Dio, Enea; Montanari, Francesco; Raccanelli, Alvise; Durrer, Ruth; Kamionkowski, Marc; Lesgourgues, Julien

    2016-06-01

    We modified the CLASS code in order to include relativistic galaxy number counts in spatially curved geometries; we present the formalism and study the effect of relativistic corrections on the estimation of spatial curvature. The new version of the code is now publicly available. Using a Fisher matrix analysis, we investigate how measurements of the spatial curvature parameter ΩK with future galaxy surveys are affected by relativistic effects, which influence observations of the large scale galaxy distribution. These effects include contributions from cosmic magnification, Doppler terms and terms involving the gravitational potential. As an application, we consider angle- and redshift-dependent power spectra, which are especially well suited for model-independent cosmological constraints. We compute results for a representative deep, wide spectroscopic survey; these show the impact of relativistic corrections on the estimation of the spatial curvature parameter. We show that constraints on the curvature parameter may be strongly biased if, in particular, cosmic magnification is not included in the analysis. Other relativistic effects turn out to be subdominant in the studied configuration. We analyze how the shift in the estimated best-fit value for the curvature and other cosmological parameters depends on the magnification bias parameter, and find that significant biases are to be expected if this term is not properly considered in the analysis.
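    The Fisher-matrix machinery used for such forecasts can be illustrated on a toy two-parameter model: a straight line with Gaussian errors, not the CLASS-based galaxy-count analysis of the paper. The Fisher matrix is F_ij = Σ (∂μ/∂θ_i)(∂μ/∂θ_j)/σ², and marginalized parameter errors are the square roots of the diagonal of F⁻¹:

```python
import math

# Toy Fisher forecast: data model mu(x) = a + b*x with Gaussian errors sigma.
x_pts = [0.0, 0.5, 1.0, 1.5, 2.0]
sigma = 0.1

# Derivatives of the model w.r.t. parameters (a, b): d mu/da = 1, d mu/db = x.
F = [[0.0, 0.0], [0.0, 0.0]]
for x in x_pts:
    grad = [1.0, x]
    for i in range(2):
        for j in range(2):
            F[i][j] += grad[i] * grad[j] / sigma**2

# Invert the 2x2 Fisher matrix; marginalized errors come from diag(F^-1).
det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
cov = [[F[1][1] / det, -F[0][1] / det],
       [-F[1][0] / det, F[0][0] / det]]
err_a, err_b = math.sqrt(cov[0][0]), math.sqrt(cov[1][1])
print(err_a, err_b)
```

In a survey forecast the same structure holds, with the model derivatives taken with respect to cosmological parameters such as ΩK; a neglected systematic (like cosmic magnification) shifts the best-fit values rather than merely widening these error bars.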

  5. Backscatter in Large-Scale Flows

    NASA Astrophysics Data System (ADS)

    Nadiga, Balu

    2009-11-01

    Downgradient mixing of potential-vorticity and its variants are commonly employed to model the effects of unresolved geostrophic turbulence on resolved scales. This is motivated by the (inviscid and unforced) particle-wise conservation of potential-vorticity and the mean forward or down-scale cascade of potential enstrophy in geostrophic turbulence. By examining the statistical distribution of the transfer of potential enstrophy from mean or filtered motions to eddy or sub-filter motions, we find that the mean forward cascade results from the forward-scatter being only slightly greater than the backscatter. Downgradient mixing ideas do not recognize such equitable mean-eddy or large scale-small scale interactions and consequently model only the mean effect of the forward cascade; the importance of capturing the effects of backscatter (the forcing of resolved scales by unresolved scales) is only beginning to be recognized. While recent attempts to model the effects of backscatter on resolved scales have taken a stochastic approach, our analysis suggests that these effects are amenable to being modeled deterministically.

  6. Large-scale wind turbine structures

    NASA Technical Reports Server (NTRS)

    Spera, David A.

    1988-01-01

    The purpose of this presentation is to show how structural technology was applied in the design of modern wind turbines, which were recently brought to an advanced stage of development as sources of renewable power. Wind turbine structures present many difficult problems because they are relatively slender and flexible; subject to vibration and aeroelastic instabilities; acted upon by loads which are often nondeterministic; operated continuously with little maintenance in all weather; and dominated by life-cycle cost considerations. Progress in horizontal-axis wind turbine (HAWT) development was paced by progress in the understanding of structural loads, modeling of structural dynamic response, and designing of innovative structural components. During the past 15 years a series of large HAWTs was developed. This has culminated in the recent completion of the world's largest operating wind turbine, the 3.2 MW Mod-5B power plant installed on the island of Oahu, Hawaii. Some of the applications of structures technology to wind turbines will be illustrated by referring to the Mod-5B design. First, a video overview will be presented to provide familiarization with the Mod-5B project and the important components of the wind turbine system. Next, the structural requirements for large-scale wind turbines will be discussed, emphasizing the difficult fatigue-life requirements. Finally, the procedures used to design the structure will be presented, including the use of the fracture mechanics approach for determining allowable fatigue stresses.
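
    The fracture-mechanics approach to allowable fatigue stresses can be sketched with a Paris-law crack-growth integration: the fatigue life is the number of cycles for a crack to grow from its initial size to the critical size under a given stress range. All constants below are illustrative assumptions, not Mod-5B design values.

```python
import math

# Paris-law fatigue sketch: da/dN = C * (dK)^m with dK = Y * dS * sqrt(pi * a).
C_mat, m, Y = 1e-12, 3.0, 1.12   # assumed material constants, geometry factor
dS = 50.0                        # stress range, MPa (assumed)
a0, ac = 1.0e-3, 2.0e-2          # initial and critical crack lengths, m

def cycles_to_failure(dS):
    # midpoint-rule integration of dN = da / (C * (Y * dS * sqrt(pi * a))**m)
    steps = 10_000
    da = (ac - a0) / steps
    a, n = a0 + da / 2, 0.0
    for _ in range(steps):
        n += da / (C_mat * (Y * dS * math.sqrt(math.pi * a)) ** m)
        a += da
    return n

print(f"{cycles_to_failure(dS):.3e} cycles")
```

Inverting this relation (choosing dS so that the cycle count exceeds the design life) is what sets the allowable fatigue stress in such an analysis.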

  7. Large-scale wind turbine structures

    NASA Astrophysics Data System (ADS)

    Spera, David A.

    1988-05-01

    The purpose of this presentation is to show how structural technology was applied in the design of modern wind turbines, which were recently brought to an advanced stage of development as sources of renewable power. Wind turbine structures present many difficult problems because they are relatively slender and flexible; subject to vibration and aeroelastic instabilities; acted upon by loads which are often nondeterministic; operated continuously with little maintenance in all weather; and dominated by life-cycle cost considerations. Progress in horizontal-axis wind turbine (HAWT) development was paced by progress in the understanding of structural loads, modeling of structural dynamic response, and designing of innovative structural components. During the past 15 years a series of large HAWTs was developed. This has culminated in the recent completion of the world's largest operating wind turbine, the 3.2 MW Mod-5B power plant installed on the island of Oahu, Hawaii. Some of the applications of structures technology to wind turbines will be illustrated by referring to the Mod-5B design. First, a video overview will be presented to provide familiarization with the Mod-5B project and the important components of the wind turbine system. Next, the structural requirements for large-scale wind turbines will be discussed, emphasizing the difficult fatigue-life requirements. Finally, the procedures used to design the structure will be presented, including the use of the fracture mechanics approach for determining allowable fatigue stresses.

  8. Open Quantum Systems with Applications to Precision Measurements

    NASA Astrophysics Data System (ADS)

    Tieri, David

    A spectrally pure coherent light source is an important component in precision measurement applications, such as an atomic clock. The more spectrally pure the coherent light source, or the narrower the linewidth of its power spectrum, the better for atomic clock experiments. A coherent light source, such as a laser, is intrinsically an open quantum system, meaning that it gains and loses energy from an external environment. The aim of this thesis is to study various open quantum systems in an attempt to discover a scheme in which an extremely spectrally pure coherent light source might be realized. Therefore, this thesis begins by introducing the two main approaches to treating open quantum systems, the quantum master equation approach, and the quantum Langevin equation approach. In addition to deriving these from first principles, many of the solution methods to these approaches are given and then demonstrated using computer simulations. These include the quantum jump algorithm, the quantum state diffusion algorithm, the cumulant expansion method, and the method of c-number Langevin equations. Using these methods, the theory of the crossover between lasing and steady state superradiance is presented. It is shown that lasing and steady state superradiance might be demonstrated in the same physical system, but in different parameter regimes. The parameter space between these two extreme limits is explored, and the benefits and drawbacks of operating a system at a given set of parameters, i.e., to achieve the most spectrally pure light source, are discussed. We also consider the phase stability of a laser that is locked to a cavity QED system comprised of atoms with an ultra-narrow optical transition. Although the atomic motion introduces Doppler broadening, the standing wave nature of the cavity causes saturated absorption, which can be used to achieve an extremely high degree of phase stabilization. 
The inhomogeneity introduced by finite atomic velocities can

  9. An informal paper on large-scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Ho, Y. C.

    1975-01-01

    Large scale systems are defined as systems requiring more than one decision maker to control the system. Decentralized control and decomposition are discussed for large scale dynamic systems. Information and many-person decision problems are analyzed.

  10. Control landscapes for observable preparation with open quantum systems

    SciTech Connect

    Wu Rebing; Pechen, Alexander; Rabitz, Herschel; Hsieh, Michael; Tsou, Benjamin

    2008-02-15

    A quantum control landscape is defined as the observable as a function(al) of the system control variables. Such landscapes were introduced to provide a basis to understand the increasing number of successful experiments controlling quantum dynamics phenomena. This paper extends the concept to encompass the broader context of the environment having an influence. For the case that the open system dynamics are fully controllable, it is shown that the control landscape for open systems can be lifted to the analysis of an equivalent auxiliary landscape of a closed composite system that contains the environmental interactions. This inherent connection can be analyzed to provide relevant information about the topology of the original open system landscape. Application to the optimization of an observable expectation value reveals the same landscape simplicity observed in former studies on closed systems. In particular, no false suboptimal traps exist in the system control landscape when seeking to optimize an observable, even in the presence of complex environments. Moreover, a quantitative study of the control landscape of a system interacting with a thermal environment shows that the enhanced controllability attainable with open dynamics significantly broadens the range of the achievable observable values over the control landscape.

  11. Sensitivity technologies for large scale simulation.

    SciTech Connect

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms, and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing codes; equally important, the work focused on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real-time performance is achieved using novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint-based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. 
The hybrid automatic differentiation method was applied to a first
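
    The direct-versus-adjoint trade-off can be illustrated on a steady linear system A(p)u = b with objective J = cᵀu: the direct method costs one extra solve per parameter, while the adjoint method costs one extra (transposed) solve per objective, independent of the number of parameters. A small sketch with assumed random data (not the report's applications):

```python
import numpy as np

# Direct vs adjoint sensitivities for A(p) u = b, J(p) = c^T u(p),
# with A(p) = A0 + p * dA. All data are random assumptions for illustration.
rng = np.random.default_rng(1)
n = 5
A0 = rng.normal(size=(n, n)) + 5.0 * np.eye(n)  # well-conditioned base matrix
dA = rng.normal(size=(n, n))                    # dA/dp
b, c = rng.normal(size=n), rng.normal(size=n)
p = 0.1

def J(p):
    return c @ np.linalg.solve(A0 + p * dA, b)

u = np.linalg.solve(A0 + p * dA, b)
# direct (forward) method: one extra solve per parameter
du = np.linalg.solve(A0 + p * dA, -dA @ u)
dJ_direct = c @ du
# adjoint method: one extra transposed solve per objective, any # of parameters
lam = np.linalg.solve((A0 + p * dA).T, c)
dJ_adjoint = -lam @ (dA @ u)
print(dJ_direct, dJ_adjoint)   # the two gradients agree
```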

  12. International space station. Large scale integration approach

    NASA Astrophysics Data System (ADS)

    Cohen, Brad

    The International Space Station is the most complex large scale integration program in development today. The approach developed for specification, subsystem development, and verification lays a firm basis on which future programs of this nature can be built. International Space Station is composed of many critical items, hardware and software, built by numerous International Partners, NASA Institutions, and U.S. Contractors and is launched over a period of five years. Each launch creates a unique configuration that must be safe, survivable, operable, and support ongoing assembly (assemblable) to arrive at the assembly-complete configuration in 2003. The approach to integrating each module into a viable spacecraft while continuing the assembly is a challenge in itself. Added to this challenge are the severe schedule constraints and lack of an "Iron Bird", which prevents assembly and checkout of each on-orbit configuration prior to launch. This paper will focus on the following areas: 1) Specification development process, explaining how the requirements and specifications were derived using a modular concept driven by launch vehicle capability. Each module is composed of components of subsystems versus completed subsystems. 2) Approach to stage (each stage consists of the launched module added to the current on-orbit spacecraft) specifications. Specifically, how each launched module and stage ensures support of the current and future elements of the assembly. 3) Verification approach, which, due to the schedule constraints, is primarily analysis supported by testing. Specifically, how the interfaces are ensured to mate and function on-orbit when they cannot be mated before launch. 4) Lessons learned. Where can we improve this complex system design and integration task?

  13. Large Scale Flame Spread Environmental Characterization Testing

    NASA Technical Reports Server (NTRS)

    Clayman, Lauren K.; Olson, Sandra L.; Gokoglu, Suleyman A.; Brooker, John E.; Ferkul, Paul V.; Kacher, Henry F.

    2013-01-01

    Under the Advanced Exploration Systems (AES) Spacecraft Fire Safety Demonstration Project (SFSDP), as a risk mitigation activity in support of the development of a large-scale fire demonstration experiment in microgravity, flame-spread tests were conducted in normal gravity on thin, cellulose-based fuels in a sealed chamber. The primary objective of the tests was to measure pressure rise in a chamber as sample material, burning direction (upward/downward), total heat release, heat release rate, and heat loss mechanisms were varied between tests. A Design of Experiments (DOE) method was imposed to produce an array of tests from a fixed set of constraints and a coupled response model was developed. Supplementary tests were run without experimental design to additionally vary select parameters such as initial chamber pressure. The starting chamber pressure for each test was set below atmospheric to prevent chamber overpressure. Bottom ignition, or upward propagating burns, produced rapid acceleratory turbulent flame spread. Pressure rise in the chamber increases as the amount of fuel burned increases, mainly because of the larger amount of heat generation and, to a much smaller extent, due to the increase in the number of moles of gas. Top ignition, or downward propagating burns, produced a steady flame spread with a very small flat flame across the burning edge. Steady-state pressure is achieved during downward flame spread as the pressure rises and plateaus. This indicates that the heat generation by the flame matches the heat loss to surroundings during the longer, slower downward burns. One heat loss mechanism included mounting a heat exchanger directly above the burning sample in the path of the plume to act as a heat sink and more efficiently dissipate the heat due to the combustion event. This proved an effective means for chamber overpressure mitigation for those tests producing the most total heat release and thus was determined to be a feasible mitigation
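
    For a sealed chamber, the dominant heat-generation contribution to pressure rise follows from constant-volume ideal-gas heating, ΔP = (γ − 1)Q/V. A minimal estimate with assumed chamber values (heat losses and the much smaller mole-number change are neglected, consistent with the abstract's ordering of effects; the numbers are illustrative, not the test conditions):

```python
# Constant-volume pressure rise from heat release in a sealed chamber:
# dU = Q = n * Cv * dT and P * V = n * R * T give dP = (gamma - 1) * Q / V.
gamma = 1.4        # ratio of specific heats for air
V = 0.1            # chamber volume, m^3 (assumed)
Q = 5.0e3          # total heat release, J (assumed)
dP = (gamma - 1.0) * Q / V
print(f"pressure rise ~ {dP / 1000:.0f} kPa")
```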

  14. Synchronization of coupled large-scale Boolean networks

    SciTech Connect

    Li, Fangfei

    2014-03-15

    This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm for large-scale Boolean networks is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.
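
    Complete synchronization of a drive-response (master-slave) Boolean pair can be checked by exhaustive simulation over all initial states. A toy 3-node sketch of the definition; the update rule and the drive coupling are arbitrary illustrative choices, not the paper's aggregation example:

```python
from itertools import product

# Toy drive-response Boolean pair: the response network is updated with the
# drive network's state, so every initial-state pair reaches complete
# synchronization within one step.
def f(x):
    a, b, c = x
    return (b and c, a or c, not a)   # arbitrary 3-node update rule

def synchronized(x0, y0, horizon=10):
    x, y = x0, y0
    for _ in range(horizon):
        if x == y:
            return True
        x, y = f(x), f(x)   # response copies the drive's update (coupling)
    return x == y

all_states = list(product([False, True], repeat=3))
assert all(synchronized(x, y) for x in all_states for y in all_states)
print("complete synchronization for all 64 initial-state pairs")
```

Partial synchronization would instead require agreement only on a subset of nodes, which the same exhaustive check covers by comparing the relevant components.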

  15. Large-scale coherent structures as drivers of combustion instability

    SciTech Connect

    Schadow, K.C.; Gutmark, E.; Parr, T.P.; Parr, D.M.; Wilson, K.J.

    1987-06-01

    The role of flow coherent structures as drivers of combustion instabilities in a dump combustor was studied. Results of nonreacting tests in air and water flows as well as combustion experiments in a diffusion flame and dump combustor are discussed to provide insight into the generation process of large-scale structures in the combustor flow and their interaction with the combustion process. It is shown that the flow structures, or vortices, are formed by interaction between the flow instabilities and the chamber acoustic resonance. When these vortices dominate the reacting flow, the combustion is confined to their cores, leading to periodic heat release, which may result in the driving of high amplitude pressure oscillations. Such oscillations are typical of combustion instabilities under certain operating conditions. The basic understanding of the interaction between flow dynamics and the combustion process opens up the possibility for rational control of combustion-induced pressure oscillations. 42 references.
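
    The driving mechanism described here, periodic heat release feeding acoustic oscillations, is captured by the Rayleigh criterion: oscillations are amplified when the pressure and heat-release fluctuations are in phase over a cycle, i.e. the cycle integral of p'q' is positive. A numerical sketch with unit-amplitude sinusoids:

```python
import math

# Rayleigh criterion sketch: acoustic driving when pressure fluctuation p'
# and heat-release fluctuation q' are in phase over one period.
N = 1000
dt = 2 * math.pi / N

def rayleigh_index(phase):
    # discretized integral of p'(t) * q'(t) over one period,
    # with p' = sin(t) and q' = sin(t + phase)
    return sum(math.sin(i * dt) * math.sin(i * dt + phase) for i in range(N)) * dt

print(rayleigh_index(0.0))        # +pi: in phase, oscillations are driven
print(rayleigh_index(math.pi))    # -pi: out of phase, oscillations are damped
```

Rational control schemes of the kind mentioned in the abstract aim to push this phase relationship toward the damping side.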

  16. Black hole jets without large-scale net magnetic flux

    NASA Astrophysics Data System (ADS)

    Parfrey, Kyle; Giannios, Dimitrios; Beloborodov, Andrei M.

    2015-01-01

    We propose a scenario for launching relativistic jets from rotating black holes, in which small-scale magnetic flux loops, sustained by disc turbulence, are forced to inflate and open by differential rotation between the black hole and the accretion flow. This mechanism does not require a large-scale net magnetic flux in the accreting plasma. Estimates suggest that the process could operate effectively in many systems, and particularly naturally and efficiently when the accretion flow is retrograde. We present the results of general-relativistic force-free electrodynamic simulations demonstrating the time evolution of the black hole's magnetosphere, the cyclic formation of jets, and the effect of magnetic reconnection. The jets are highly variable on time-scales ~10-10^3 rg/c, where rg is the black hole's gravitational radius. The reconnecting current sheets observed in the simulations may be responsible for the hard X-ray emission from accreting black holes.

  17. Large scale land use cartography of special areas

    SciTech Connect

    Amico, F.D.; Maccarone, D.; Pandiscia, G.V.

    1996-11-01

    On 06 October 1993 an aerial remote sensing mission was flown over the "Mounts of the Sila" area, using a DAEDALUS ATM multispectral scanner, in the framework of the TELAER project, supported by I.A.S.M. (Istituto per l'Assistenza e lo Sviluppo del Mezzogiorno). The study area is inside the National Park of Calabria, well known for its coniferous forests. The collected imagery was used to produce a large scale land use cartography, at the scale of 1 to 5000, extracting information on natural and anthropical vegetation from the multispectral images, with the aid of stereo photos acquired simultaneously. 5 refs., 1 fig., 1 tab.

  18. Boundary driven open quantum many-body systems

    SciTech Connect

    Prosen, Tomaž

    2014-01-08

    In this lecture course I outline a simple paradigm of non-equilibrium quantum statistical physics, namely we shall study quantum lattice systems with local, Hamiltonian (conservative) interactions which are coupled to the environment via incoherent processes only at the system's boundaries. This is arguably the simplest nontrivial context where one can study far from equilibrium steady states and their transport properties. We shall formulate the problem in terms of a many-body Markovian master equation (the so-called Lindblad equation, and some of its extensions, e.g. the Redfield equation). The lecture course consists of two main parts: Firstly, and most extensively, we shall present the canonical Liouville-space many-body formalism, the so-called 'third quantization', and show how it can be implemented to solve bi-linear open many-particle problems, the key paradigmatic examples being the XY spin 1/2 chains or quasi-free bosonic (or harmonic) chains. Secondly, we shall outline several recent approaches to exactly solvable open quantum interacting many-body problems, such as the anisotropic Heisenberg (XXZ) spin chain or the fermionic Hubbard chain.

  19. Large-Scale Graphene Film Deposition for Monolithic Device Fabrication

    NASA Astrophysics Data System (ADS)

    Al-shurman, Khaled

    Since 1958, the concept of the integrated circuit (IC) has achieved great technological developments and helped in shrinking electronic devices. Nowadays, an IC consists of more than a million compacted transistors. The majority of current ICs use silicon as the semiconductor material. According to Moore's law, the number of transistors built on a microchip can double every two years. However, silicon device manufacturing is reaching its physical limits: as circuitry shrinks toward seven nanometers, quantum effects such as tunneling can no longer be controlled. Hence, there is an urgent need for a new platform material to replace Si. Graphene is considered a promising material with enormous potential applications in many electronic and optoelectronic devices due to its superior properties. There are several techniques to produce graphene films. Among these techniques, chemical vapor deposition (CVD) offers a very convenient method to fabricate large-area graphene films. Though the CVD method is suitable for large-area growth of graphene, the film must then be transferred to silicon-based substrates. Furthermore, the graphene films thus achieved are, in fact, not single crystalline. Also, graphene fabrication utilizing Cu and Ni at high growth temperature contaminates the substrate that holds the Si CMOS circuitry, as well as the CVD chamber. So, lowering the deposition temperature is another technological milestone for the successful adoption of graphene in integrated circuit fabrication. In this research, direct large-scale graphene film fabrication on silicon-based platforms (i.e., SiO2 and Si3N4) at low temperature was achieved. With a focus on low-temperature graphene growth, hot-filament chemical vapor deposition (HF-CVD) was utilized to synthesize graphene film using 200 nm thick nickel film. 
Raman spectroscopy was utilized to examine graphene formation on the bottom side of the Ni film

  20. Introducing Large-Scale Innovation in Schools

    ERIC Educational Resources Information Center

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-01-01

    Education reform initiatives tend to promise higher effectiveness in classrooms especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school…

  1. Quantum algorithm for simulating the dynamics of an open quantum system

    SciTech Connect

    Wang Hefeng; Ashhab, S.; Nori, Franco

    2011-06-15

    In the study of open quantum systems, one typically obtains the decoherence dynamics by solving a master equation. The master equation is derived using knowledge of some basic properties of the system, the environment, and their interaction: One basically needs to know the operators through which the system couples to the environment and the spectral density of the environment. For a large system, it could become prohibitively difficult to even write down the appropriate master equation, let alone solve it on a classical computer. In this paper, we present a quantum algorithm for simulating the dynamics of an open quantum system. On a quantum computer, the environment can be simulated using ancilla qubits with properly chosen single-qubit frequencies and with properly designed coupling to the system qubits. The parameters used in the simulation are easily derived from the parameters of the system + environment Hamiltonian. The algorithm is designed to simulate Markovian dynamics, but it can also be used to simulate non-Markovian dynamics provided that this dynamics can be obtained by embedding the system of interest into a larger system that obeys Markovian dynamics. We estimate the resource requirements for the algorithm. In particular, we show that for sufficiently slow decoherence a single ancilla qubit could be sufficient to represent the entire environment, in principle.
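
    The ancilla idea can be illustrated with a collision model: the environment is replaced by a stream of fresh ancilla qubits, each interacting briefly with the system, which reproduces Markovian amplitude damping. A toy density-matrix sketch (an illustration of the ancilla-as-environment concept, not the paper's algorithm; the interaction strength is an assumption):

```python
import numpy as np

# Collision-model sketch: excited-state population decays as cos(theta)**(2n)
# after n collisions with fresh ground-state ancillas, i.e. Markovian damping.
theta = 0.05
c, s = np.cos(theta), np.sin(theta)
# partial-swap-type interaction on |system> x |ancilla>, basis |00>,|01>,|10>,|11>
U = np.array([[1, 0, 0, 0],
              [0, c, s, 0],
              [0, -s, c, 0],
              [0, 0, 0, 1]], dtype=float)

rho_sys = np.array([[0.0, 0.0], [0.0, 1.0]])   # system starts excited
ancilla = np.array([[1.0, 0.0], [0.0, 0.0]])   # each ancilla starts in ground state
for _ in range(200):
    rho = U @ np.kron(rho_sys, ancilla) @ U.T
    # trace out the ancilla (it is discarded and replaced by a fresh one)
    rho_sys = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

p_excited = rho_sys[1, 1]
print(p_excited)   # cos(theta)**400: exponential-like decay
```

Choosing different ancilla states or interaction unitaries changes the effective dissipator, which is the freedom the quantum algorithm exploits when engineering the ancilla frequencies and couplings.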

  2. Dynamical gauge effects in an open quantum network

    NASA Astrophysics Data System (ADS)

    Zhao, Jianshi; Price, Craig; Liu, Qi; Gemelke, Nathan

    2016-05-01

    We describe new experimental techniques for simulation of high-energy field theories based on an analogy between open thermodynamic systems and effective dynamical gauge-fields following SU(2) × U(1) Yang-Mills models. By coupling near-resonant laser-modes to atoms moving in a disordered optical environment, we create an open system which exhibits a non-equilibrium phase transition between two steady-state behaviors, exhibiting scale-invariant behavior near the transition. By measuring transport of atoms through the disordered network, we observe two distinct scaling behaviors, corresponding to the classical and quantum limits for the dynamical gauge field. This behavior is loosely analogous to dynamical gauge effects in quantum chromodynamics, and can be mapped onto generalized open problems in theoretical understanding of quantized non-Abelian gauge theories. Additionally, the scaling behavior can be understood from the geometric structure of the gauge potential and linked to the measure of information in the local disordered potential, reflecting an underlying holographic principle. We acknowledge support from NSF Award No.1068570, and the Charles E. Kaufman Foundation.

  3. Scar and antiscar quantum effects in open chaotic systems.

    PubMed

    Kaplan, L

    1999-05-01

    We predict and numerically observe strong periodic orbit effects in the properties of weakly open quantum systems with a chaotic classical limit. Antiscars lead to a large number of exponentially narrow isolated resonances when the single-channel (or tunneling) opening is located on a short unstable orbit of the closed system; the probability to remain in the system at long times is thus exponentially enhanced over the random matrix theory prediction. The distribution of resonance widths and the probability to remain are quantitatively given in terms of only the stability matrix of the orbit on which the opening is placed. The long-time remaining probability density is nontrivially distributed over the available phase space; it can be enhanced or suppressed near orbits other than the one on which the lead is located, depending on the periods and classical actions of these other orbits. These effects of the short periodic orbits on quantum decay rates have no classical counterpart, and first appear on time scales much larger than the Heisenberg time of the system. All the predictions are quantitatively compared with numerical data. PMID:11969492

  4. Continuous Time Open Quantum Random Walks and Non-Markovian Lindblad Master Equations

    NASA Astrophysics Data System (ADS)

    Pellegrini, Clément

    2014-02-01

    A new type of quantum random walk, called the Open Quantum Random Walk, has been developed and studied in Attal et al. (Open quantum random walks, preprint) and (Central limit theorems for open quantum random walks, preprint). In this article we present a natural continuous time extension of these Open Quantum Random Walks. This continuous time version is obtained by taking a continuous time limit of the discrete time Open Quantum Random Walks. This approximation procedure is based on some adaptation of Repeated Quantum Interactions Theory (Attal and Pautrat in Annales Henri Poincaré Physique Théorique 7:59-104, 2006) coupled with the use of correlated projectors (Breuer in Phys Rev A 75:022103, 2007). The limit evolutions obtained this way give rise to a particular type of quantum master equations. These equations appeared originally in the non-Markovian generalization of the Lindblad theory (Breuer in Phys Rev A 75:022103, 2007). We also investigate the continuous time limits of the quantum trajectories associated with Open Quantum Random Walks. We show that the limit evolutions in this context are described by jump stochastic differential equations. Finally we present a physical example which can be described in terms of Open Quantum Random Walks and their associated continuous time limits.
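
    In the discrete-time setting being extended here, an Open Quantum Random Walk on the integers carries a density matrix per site and moves it with Kraus operators B (left) and C (right) satisfying B†B + C†C = I. A sketch with a standard Hadamard-like choice of operators (an illustration of the discrete walk, not the article's continuous-time construction):

```python
import numpy as np

# Discrete-time Open Quantum Random Walk on Z: each site holds a 2x2 internal
# density matrix; B moves it left, C moves it right, with B†B + C†C = I.
B = np.array([[1.0, 1.0], [0.0, 0.0]]) / np.sqrt(2)   # step-left operator
C = np.array([[0.0, 0.0], [1.0, -1.0]]) / np.sqrt(2)  # step-right operator
assert np.allclose(B.conj().T @ B + C.conj().T @ C, np.eye(2))

state = {0: np.array([[1.0, 0.0], [0.0, 0.0]])}  # walker at origin, internal |0>
for _ in range(10):
    new = {}
    for pos, rho in state.items():
        for op, step in ((B, -1), (C, +1)):
            new[pos + step] = new.get(pos + step, 0.0) + op @ rho @ op.conj().T
    state = new

probs = {pos: float(np.trace(rho).real) for pos, rho in state.items()}
print(sum(probs.values()))   # total probability stays 1: the map is trace-preserving
```

The continuous-time limit studied in the article is obtained by scaling such operators with the time step, which produces the Lindblad-type master equations mentioned in the abstract.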

  5. Large-scale assembly of colloidal particles

    NASA Astrophysics Data System (ADS)

    Yang, Hongta

    This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films by doctor blade coating are covered in this study. The first topic describes the invention of large-area and low-cost color reflective displays. This invention is inspired by heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from Bragg diffraction of visible light from the three-dimensional highly ordered air cavities. The colors can be easily changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample color changes back to the brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals. 
Capillary condensation of a condensable vapor in the interconnected macropores leads to the

  6. Population generation for large-scale simulation

    NASA Astrophysics Data System (ADS)

    Hannon, Andrew C.; King, Gary; Morrison, Clayton; Galstyan, Aram; Cohen, Paul

    2005-05-01

    Computer simulation is used to research phenomena ranging from the structure of the space-time continuum to population genetics and future combat.1-3 Multi-agent simulations in particular are now commonplace in many fields.4, 5 By modeling populations whose complex behavior emerges from individual interactions, these simulations help to answer questions about effects where closed form solutions are difficult to solve or impossible to derive.6 To be useful, simulations must accurately model the relevant aspects of the underlying domain. In multi-agent simulation, this means that the modeling must include both the agents and their relationships. Typically, each agent can be modeled as a set of attributes drawn from various distributions (e.g., height, morale, intelligence and so forth). Though these can interact - for example, agent height is related to agent weight - they are usually independent. Modeling relations between agents, on the other hand, adds a new layer of complexity, and tools from graph theory and social network analysis are finding increasing application.7, 8 Recognizing the role and proper use of these techniques, however, remains the subject of ongoing research. We recently encountered these complexities while building large scale social simulations.9-11 One of these, the Hats Simulator, is designed to be a lightweight proxy for intelligence analysis problems. Hats models a "society in a box" consisting of many simple agents, called hats. Hats gets its name from the classic spaghetti western, in which the heroes and villains are known by the color of the hats they wear. The Hats society also has its heroes and villains, but the challenge is to identify which color hat they should be wearing based on how they behave. There are three types of hats: benign hats, known terrorists, and covert terrorists. Covert terrorists look just like benign hats but act like terrorists. Population structure can make covert hat identification significantly more
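
    The population-generation recipe described here splits into independent per-agent attribute draws plus a random relationship graph. A minimal Hats-flavored sketch; the role names, proportions, and average degree are illustrative assumptions, not the simulator's actual parameters:

```python
import random

# Hats-style synthetic population: attribute distributions per agent, plus a
# sparse Erdos-Renyi relationship graph layered on top.
random.seed(42)
N = 1000
ROLES = ["benign", "known_terrorist", "covert_terrorist"]
WEIGHTS = [0.97, 0.01, 0.02]   # mostly benign hats (assumed proportions)

agents = [{
    "id": i,
    "skill": random.gauss(0.0, 1.0),                    # independent attribute draw
    "role": random.choices(ROLES, weights=WEIGHTS)[0],
} for i in range(N)]

# random relationship graph with ~5 ties per agent on average
p_tie = 5.0 / N
edges = [(i, j) for i in range(N) for j in range(i + 1, N)
         if random.random() < p_tie]
print(len(agents), "agents,", len(edges), "ties")
```

Structured social-network generators (e.g. small-world or community models) would replace the uniform tie probability here, which is where the graph-theoretic tools mentioned in the abstract come in.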

  7. Designing and Probing Open Quantum Systems: Quantum Annealing, Excitonic Energy Transfer, and Nonlinear Fluorescence Spectroscopy

    NASA Astrophysics Data System (ADS)

    Perdomo, Alejandro

    The 20th century saw the first revolution of quantum mechanics, setting the rules for our understanding of light, matter, and their interaction. The 21st century is focused on using these quantum mechanical laws to develop technologies which allows us to solve challenging practical problems. One of the directions is the use quantum devices which promise to surpass the best computers and best known classical algorithms for solving certain tasks. Crucial to the design of realistic devices and technologies is to account for the open nature of quantum systems and to cope with their interactions with the environment. In the first part of this dissertation, we show how to tackle classical optimization problems of interest in the physical sciences within one of these quantum computing paradigms, known as quantum annealing (QA). We present the largest implementation of QA on a biophysical problem (six different experiments with up to 81 superconducting quantum bits). Although the cases presented here can be solved on a classical computer, we present the first implementation of lattice protein folding on a quantum device under the Miyazawa-Jernigan model. This is the first step towards studying optimization problems in biophysics and statistical mechanics using quantum devices. In the second part of this dissertation, we focus on the problem of excitonic energy transfer. We provide an intuitive platform for engineering exciton transfer dynamics and we show that careful consideration of the properties of the environment leads to opportunities to engineer the transfer of an exciton. Since excitons in nanostructures are proposed for use in quantum information processing and artificial photosynthetic designs, our approach paves the way for engineering a wide range of desired exciton dynamics. 
Finally, we develop the theory for a two-dimensional electronic spectroscopic technique based on fluorescence (2DFS) and challenge previous theoretical results claiming its equivalence to

  8. No large scale curvature perturbations during the waterfall phase transition of hybrid inflation

    SciTech Connect

    Abolhasani, Ali Akbar; Firouzjahi, Hassan

    2011-03-15

    In this paper the possibility of generating large scale curvature perturbations induced from the entropic perturbations during the waterfall phase transition of the standard hybrid inflation model is studied. We show that whether or not appreciable amounts of large scale curvature perturbations are produced during the waterfall phase transition depends crucially on the competition between the classical and the quantum mechanical backreactions to terminate inflation. If one considers only the classical evolution of the system, we show that the highly blue-tilted entropy perturbations induce highly blue-tilted large scale curvature perturbations during the waterfall phase transition which dominate over the original adiabatic curvature perturbations. However, we show that the quantum backreactions of the waterfall field inhomogeneities produced during the phase transition dominate completely over the classical backreactions. The cumulative quantum backreactions of very small scale tachyonic modes terminate inflation very efficiently and shut off the curvature perturbation evolution during the waterfall phase transition. This indicates that the standard hybrid inflation model is safe under large scale curvature perturbations during the waterfall phase transition.

  9. Multitree Algorithms for Large-Scale Astrostatistics

    NASA Astrophysics Data System (ADS)

    March, William B.; Ozakin, Arkadas; Lee, Dongryeol; Riegel, Ryan; Gray, Alexander G.

    2012-03-01

    this number every week, resulting in billions of objects. At such scales, even linear-time analysis operations present challenges, particularly since statistical analyses are inherently interactive processes, requiring that computations complete within some reasonable human attention span. The quadratic (or worse) runtimes of straightforward implementations quickly become unbearable. Examples of applications. These analysis subroutines occur ubiquitously in astrostatistical work. We list just a few examples. The need to cross-match objects across different catalogs has led to various algorithms, which at some point perform an AllNN computation. 2-point and higher-order spatial correlations form the basis of spatial statistics, and are utilized in astronomy to compare the spatial structures of two datasets, such as an observed sample and a theoretical sample, forming the basis for two-sample hypothesis testing. Friends-of-friends clustering is often used to identify halos in data from astrophysical simulations. Minimum spanning tree properties have also been proposed as statistics of large-scale structure. Comparison of the distributions of different kinds of objects requires accurate density estimation, for which KDE is the overall statistical method of choice. The prediction of redshifts from optical data requires accurate regression, for which kernel regression is a powerful method. The identification of objects of various types in astronomy, such as stars versus galaxies, requires accurate classification, for which KDA is a powerful method. Overview. In this chapter, we will briefly sketch the main ideas behind recent fast algorithms which achieve, for example, linear runtimes for pairwise-distance problems, or similarly dramatic reductions in computational growth.
In some cases, the runtime orders for these algorithms are mathematically provable statements, while in others we have only conjectures backed by experimental observations for the time being.
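    For the AllNN (all-nearest-neighbors) computation mentioned above, a single space-partitioning tree already replaces the quadratic brute-force scan with roughly O(N log N) work. A minimal sketch on synthetic data, using SciPy's kd-tree rather than the authors' multitree code:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 1.0, size=(1000, 2))  # synthetic "catalog" positions

tree = cKDTree(points)
# k=2 because each point's nearest neighbor in its own tree is itself (distance 0);
# column 1 holds the true nearest *other* point.
dists, idx = tree.query(points, k=2)
nn_dist, nn_idx = dists[:, 1], idx[:, 1]
```

The same tree also supports range queries (`query_ball_point`), which is the building block for pair-counting in 2-point correlation estimators.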

  10. Dissipation equation of motion approach to open quantum systems

    NASA Astrophysics Data System (ADS)

    Yan, YiJing; Jin, Jinshuang; Xu, Rui-Xue; Zheng, Xiao

    2016-08-01

    This paper presents a comprehensive account of the dissipaton-equation-of-motion (DEOM) theory for open quantum systems. This newly developed theory treats not only the quantum dissipative systems of primary interest, but also the hybrid environment dynamics that are experimentally measurable. Despite the fact that DEOM recovers the celebrated hierarchical-equations-of-motion (HEOM) formalism, these two approaches have some fundamental differences. To show these differences, we also scrutinize the HEOM construction via its root in the influence functional path integral formalism. We conclude that many unique features of DEOM are beyond the reach of the HEOM framework. The new DEOM approach renders a statistical quasi-particle picture to account for the environment, which can be either bosonic or fermionic. The review covers the DEOM construction, the physical meanings of dynamical variables, the underlying theorems and dissipaton algebra, and recent numerical advancements for efficient DEOM evaluations of various problems. We also address the issue of high-order many-dissipaton truncations with respect to the invariance principle of quantum mechanics in the Schrödinger versus Heisenberg prescriptions. DEOM serves as a universal tool for characterizing stationary and dynamic properties of system-and-bath interferences, as highlighted with its real-time evaluation of both linear and nonlinear current noise spectra of nonequilibrium electronic transport.

  11. Python for large-scale electrophysiology.

    PubMed

    Spacek, Martin; Blanche, Tim; Swindale, Nicholas

    2008-01-01

    Electrophysiology is increasingly moving towards highly parallel recording techniques which generate large data sets. We record extracellularly in vivo in cat and rat visual cortex with 54-channel silicon polytrodes, under time-locked visual stimulation, from localized neuronal populations within a cortical column. To help deal with the complexity of generating and analysing these data, we used the Python programming language to develop three software projects: one for temporally precise visual stimulus generation ("dimstim"); one for electrophysiological waveform visualization and spike sorting ("spyke"); and one for spike train and stimulus analysis ("neuropy"). All three are open source and available for download (http://swindale.ecc.ubc.ca/code). The requirements and solutions for these projects differed greatly, yet we found Python to be well suited for all three. Here we present our software as a showcase of the extensive capabilities of Python in neuroscience.

  12. Ectopically tethered CP190 induces large-scale chromatin decondensation

    NASA Astrophysics Data System (ADS)

    Ahanger, Sajad H.; Günther, Katharina; Weth, Oliver; Bartkuhn, Marek; Bhonde, Ramesh R.; Shouche, Yogesh S.; Renkawitz, Rainer

    2014-01-01

    Insulator-mediated alteration in higher-order chromatin and/or nucleosome organization is an important aspect of epigenetic gene regulation. Recent studies have suggested a key role for CP190 in such processes. In this study, we analysed the effects of ectopically tethered insulator factors on chromatin structure and found that CP190 induces large-scale decondensation when targeted to a condensed lacO array in mammalian and Drosophila cells. In contrast, dCTCF alone is unable to cause such decondensation; however, when CP190 is present, dCTCF recruits it to the lacO array and mediates chromatin unfolding. The CP190-induced opening of chromatin may not be correlated with transcriptional activation, as binding of CP190 does not enhance luciferase activity in reporter assays. We propose that CP190 may mediate histone modification and chromatin remodelling activity to induce an open chromatin state by its direct recruitment or targeting by a DNA binding factor such as dCTCF.

  13. Large scale remote sensing for environmental monitoring of infrastructure.

    PubMed

    Whelan, Matthew J; Fuchs, Michael P; Janoyan, Kerop D

    2008-07-01

    Recent developments in wireless sensor technology afford the opportunity to rapidly and easily deploy large-scale, low-cost, and low-power sensor networks across relatively sizeable environmental regions. Furthermore, the advancement of increasingly smaller and less expensive wireless hardware is further complemented by the rapid development of open-source software components. These software protocols allow for interfacing with the hardware to program and configure the onboard processing and communication settings. In general, a wireless sensor network topology consists of an array of microprocessor boards, referred to as motes, which can engage in two-way communication among each other as well as with a base station that relays the mote data to a host computer. The information can then be either logged and displayed on the local host or directed to an http server for network monitoring remote from the site. A number of wireless sensor products are available that offer off-the-shelf network hardware as well as sensor solutions for environmental monitoring that are compatible with the TinyOS open-source software platform. This paper presents an introduction to wireless sensing and to the use of external antennas for increasing the antenna radiation intensity and shaping signal directivity for monitoring applications requiring larger mote-to-mote communication distances.

  14. Large-scale solvothermal synthesis of fluorescent carbon nanoparticles

    NASA Astrophysics Data System (ADS)

    Ku, Kahoe; Lee, Seung-Wook; Park, Jinwoo; Kim, Nayon; Chung, Haegeun; Han, Chi-Hwan; Kim, Woong

    2014-09-01

    The large-scale production of high-quality carbon nanomaterials is highly desirable for a variety of applications. We demonstrate a novel synthetic route to the production of fluorescent carbon nanoparticles (CNPs) in large quantities via a single-step reaction. The simple heating of a mixture of benzaldehyde, ethanol and graphite oxide (GO) with residual sulfuric acid in an autoclave produced 7 g of CNPs with a quantum yield of 20%. The CNPs can be dispersed in various organic solvents; hence, they are easily incorporated into polymer composites in forms such as nanofibers and thin films. Additionally, we observed that the GO present during the CNP synthesis was reduced. The reduced GO (RGO) was sufficiently conductive (σ ≈ 282 S m-1) such that it could be used as an electrode material in a supercapacitor; in addition, it can provide excellent capacitive behavior and high-rate capability. This work will contribute greatly to the development of efficient synthetic routes to diverse carbon nanomaterials, including CNPs and RGO, that are suitable for a wide range of applications.

  15. Probes of large-scale structure in the universe

    NASA Technical Reports Server (NTRS)

    Suto, Yasushi; Gorski, Krzysztof; Juszkiewicz, Roman; Silk, Joseph

    1988-01-01

    A general formalism is developed which shows that the gravitational instability theory for the origin of the large-scale structure of the universe is now capable of critically confronting observational results on cosmic background radiation angular anisotropies, large-scale bulk motions, and large-scale clumpiness in the galaxy counts. The results indicate that presently advocated cosmological models will have considerable difficulty in simultaneously explaining the observational results.

  16. A Large Scale Virtual Gas Sensor Array

    NASA Astrophysics Data System (ADS)

    Ziyatdinov, Andrey; Fernández-Diaz, Eduard; Chaudry, A.; Marco, Santiago; Persaud, Krishna; Perera, Alexandre

    2011-09-01

    This paper presents a virtual sensor array that allows the user to generate synthetic gas sensor data while controlling a wide variety of the characteristics of the sensor array response: an arbitrary number of sensors, support for multi-component gas mixtures, and full control of the noise in the system, such as sensor drift or sensor aging. The artificial sensor array response is inspired by the response of 17 polymeric sensors to three analytes over 7 months. The main trends in the synthetic gas sensor array, such as sensitivity, diversity, drift, and sensor noise, are user controlled. Sensor sensitivity is modeled by an optionally linear or nonlinear (spline-based) method. The data-generation toolbox is implemented in the open-source R language for statistical computing and can be freely accessed as an educational resource or benchmarking reference. The software package permits the design of scenarios with a very large number of sensors (over 10000 sensels), which are employed in the test and benchmarking of neuromorphic models in the Bio-ICT European project NEUROCHEM.
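    The sensitivity-plus-drift-plus-noise structure described above can be imitated in a few lines. The curve shape, rates, and function names below are illustrative assumptions (and in Python, not the toolbox's actual R API):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical nonlinear sensitivity curve: response vs. concentration,
# given by a few control points and interpolated between them (a crude
# stand-in for the spline-based model described in the paper).
ctrl_c = np.array([0.0, 0.2, 0.5, 1.0])   # concentrations
ctrl_r = np.array([0.0, 0.35, 0.7, 0.9])  # saturating responses

def sensor_response(conc, drift_rate=0.01, noise_sd=0.005, t=0.0):
    """Interpolated sensitivity + linear drift in time + additive noise."""
    base = np.interp(conc, ctrl_c, ctrl_r)
    return base + drift_rate * t + rng.normal(0.0, noise_sd, size=np.shape(conc))

conc = np.linspace(0.0, 1.0, 50)
r0 = sensor_response(conc, t=0.0)
r_late = sensor_response(conc, t=30.0)  # after 30 time units the baseline has drifted up
```

Stacking many such responses with per-sensor control points would give a drifting, noisy multi-sensor array of the kind the toolbox generates.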

  17. Optimal Wind Energy Integration in Large-Scale Electric Grids

    NASA Astrophysics Data System (ADS)

    Albaijat, Mohammad H.

    The major concern in electric grid operation is operating in the most economical and reliable fashion to ensure affordability and continuity of electricity supply. This dissertation investigates challenges that affect electric grid reliability and economic operation: 1. Congestion of transmission lines, 2. Transmission line expansion, 3. Large-scale wind energy integration, and 4. Optimal placement of Phasor Measurement Units (PMUs) for highest electric grid observability. Performing congestion analysis aids in evaluating the required increase of transmission line capacity in electric grids. However, capacity expansion must be evaluated with methods that ensure optimal electric grid operation, so that it enables grid operators to provide low-cost electricity while maintaining reliable operation of the electric grid. Because congestion affects the reliability of delivering power and increases its cost, congestion analysis in electric grid networks is an important subject. Consequently, next-generation electric grids require novel methodologies for studying and managing congestion. We suggest a novel method of long-term congestion management in large-scale electric grids. Owing to the complexity and size of transmission systems and the competitive nature of current grid operation, it is important for electric grid operators to determine how much transmission line capacity to add. The traditional questions requiring answers are "where" to add, "how much transmission line capacity" to add, and "at which voltage level". Because of electric grid deregulation, transmission line expansion is more complicated, as the building of new transmission lines is now open to investors whose main interest is to generate revenue. Adding new transmission capacity will help the system to relieve the transmission system congestion, create

  18. Minimal evolution time and quantum speed limit of non-Markovian open systems.

    PubMed

    Meng, Xiangyi; Wu, Chengjun; Guo, Hong

    2015-01-01

    We derive a sharp bound as the quantum speed limit (QSL) for the minimal evolution time of open quantum systems in the non-Markovian strong-coupling regime with initial mixed states, considering the effects of both the renormalized Hamiltonian and the dissipator. For a non-Markovian open quantum system, the possible evolution time between two arbitrary states is not unique; among this set, we find that the minimal one and its QSL can decrease more steeply by adjusting the coupling strength of the dissipator, which thus provides potential improvements of efficiency in many areas of quantum physics and quantum information.

  19. Explorative Function in Williams Syndrome Analyzed through a Large-Scale Task with Multiple Rewards

    ERIC Educational Resources Information Center

    Foti, F.; Petrosini, L.; Cutuli, D.; Menghini, D.; Chiarotti, F.; Vicari, S.; Mandolesi, L.

    2011-01-01

    This study aimed to evaluate spatial function in subjects with Williams syndrome (WS) by using a large-scale task with multiple rewards and comparing the spatial abilities of WS subjects with those of mental age-matched control children. In the present spatial task, WS participants had to explore an open space to search for nine rewards placed in…

  20. Agile in Large-Scale Development Workshop: Coaching, Transitioning and Practicing

    NASA Astrophysics Data System (ADS)

    Nilsson, Thomas; Larsson, Andreas

    Agile in large-scale and complex development presents its own set of problems in practice, transitioning, and coaching. This workshop aims at bringing people interested in this topic together to share tools, techniques and insights. The workshop will follow the increasingly popular "lightning talk + open space" format.

  1. Using Large-Scale Databases in Evaluation: Advances, Opportunities, and Challenges

    ERIC Educational Resources Information Center

    Penuel, William R.; Means, Barbara

    2011-01-01

    Major advances in the number, capabilities, and quality of state, national, and transnational databases have opened up new opportunities for evaluators. Both large-scale data sets collected for administrative purposes and those collected by other researchers can provide data for a variety of evaluation-related activities. These include (a)…

  2. A novel computational approach towards the certification of large-scale boson sampling

    NASA Astrophysics Data System (ADS)

    Huh, Joonsuk

    Recent proposals of boson sampling and the corresponding experiments exhibit the possible disproof of the extended Church-Turing thesis. Furthermore, the application of boson sampling to molecular computation has been suggested theoretically. Until now, however, only small-scale experiments with a few photons have been successfully performed. Boson sampling experiments with 20-30 photons are expected to reveal the computational superiority of the quantum device. A novel theoretical proposal for large-scale boson sampling using microwave photons is highly promising due to the deterministic photon sources and the scalability. Therefore, a certification protocol for large-scale boson sampling experiments should be presented to complete the exciting story. We propose, in this presentation, a computational protocol towards the certification of large-scale boson sampling. The correlations of paired photon modes and the time-dependent characteristic functional with its Fourier components can show the fingerprint of large-scale boson sampling. This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2015R1A6A3A04059773), the ICT R&D program of MSIP/IITP [2015-019, Fundamental Research Toward Secure Quantum Communication] and a Mueunjae Institute for Chemistry (MIC) postdoctoral fellowship.
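    The computational hardness at stake here comes from the matrix permanent: each boson sampling output probability is proportional to the squared permanent of a submatrix of the interferometer unitary, which is why classical verification becomes infeasible at 20-30 photons. A minimal exact evaluator via Ryser's formula, usable only for small photon numbers and unrelated to the certification protocol of the abstract:

```python
import numpy as np
from itertools import combinations

def permanent(A):
    """Exact matrix permanent via Ryser's formula (O(2^n) terms; small n only)."""
    n = A.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            # Product over rows of the row-sums restricted to column subset `cols`.
            total += (-1) ** k * np.prod(A[:, list(cols)].sum(axis=1))
    return (-1) ** n * total
```

For an n-photon event one would apply this to the n x n submatrix selected by the occupied input and output modes of the interferometer unitary.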

  3. Safeguards instruments for Large-Scale Reprocessing Plants

    SciTech Connect

    Hakkila, E.A.; Case, R.S.; Sonnier, C.

    1993-06-01

    Between 1987 and 1992 a multi-national forum known as LASCAR (Large Scale Reprocessing Plant Safeguards) met to assist the IAEA in development of effective and efficient safeguards for large-scale reprocessing plants. The US provided considerable input for safeguards approaches and instrumentation. This paper reviews and updates instrumentation of importance in measuring plutonium and uranium in these facilities.

  4. The Challenge of Large-Scale Literacy Improvement

    ERIC Educational Resources Information Center

    Levin, Ben

    2010-01-01

    This paper discusses the challenge of making large-scale improvements in literacy in schools across an entire education system. Despite growing interest and rhetoric, there are very few examples of sustained, large-scale change efforts around school-age literacy. The paper reviews 2 instances of such efforts, in England and Ontario. After…

  5. Towards a Theory of Metastability in Open Quantum Dynamics.

    PubMed

    Macieszczak, Katarzyna; Guţă, Mădălin; Lesanovsky, Igor; Garrahan, Juan P

    2016-06-17

    By generalizing concepts from classical stochastic dynamics, we establish the basis for a theory of metastability in Markovian open quantum systems. Partial relaxation into long-lived metastable states, distinct from the asymptotic stationary state, is a manifestation of a separation of time scales due to a splitting in the spectrum of the generator of the dynamics. We show here how to exploit this spectral structure to obtain a low-dimensional approximation to the dynamics in terms of motion in a manifold of metastable states constructed from the low-lying eigenmatrices of the generator. We argue that the metastable manifold is in general composed of disjoint states, noiseless subsystems, and decoherence-free subspaces.
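    The spectral mechanism described above, a few slow eigenvalues of the generator separated by a gap from the fast rest, can be illustrated with a classical stochastic analogue. The 4-state Markov generator below (two "wells" connected by a rare hopping rate) is a toy stand-in for the quantum Lindbladian of the paper:

```python
import numpy as np

# Generator L (columns sum to zero, acting on probability column vectors)
# for a 4-state chain: wells {0,1} and {2,3} with fast intra-well rates
# and a rare inter-well rate.
fast, slow = 1.0, 1e-3
L = np.zeros((4, 4))
for i, j, r in [(0, 1, fast), (1, 0, fast), (2, 3, fast), (3, 2, fast),
                (1, 2, slow), (2, 1, slow)]:
    L[j, i] += r   # rate i -> j
    L[i, i] -= r

evals = np.sort(np.linalg.eigvals(L).real)[::-1]
# evals[0] ~ 0 (stationary state), evals[1] ~ -O(slow): the metastable
# exchange mode, evals[2:] ~ -O(fast): the separation of time scales.
```

The low-lying eigenvector here (the slow mode) distinguishes the two wells, the classical counterpart of the low-lying eigenmatrices used to build the metastable manifold.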

  6. Challenges and advances in large-scale DFT calculations on GPUs

    NASA Astrophysics Data System (ADS)

    Kulik, Heather

    2014-03-01

    Recent advances in reformulating electronic structure algorithms for stream processors such as graphical processing units have made DFT calculations on systems comprising up to O(10^3) atoms feasible. Simulations on such systems that previously required half a week on traditional processors can now be completed in only half an hour. Here, we leverage these GPU-accelerated quantum chemistry methods to investigate large-scale quantum mechanical features in protein structure, mechanochemical depolymerization, and the nucleation and growth of heterogeneous nanoparticle structures. In each case, large-scale and rapid evaluation of electronic structure properties is critical for unearthing previously poorly understood properties and mechanistic features of these systems. We will also discuss outstanding challenges in the use of Gaussian localized-basis-set codes on GPUs pertaining to limitations in basis set size and how we circumvent such challenges to computational efficiency with systematic, physics-based error corrections to basis set incompleteness.

  7. Large-scale sequencing and analytical processing of ESTs.

    PubMed

    Mitreva, Makedonka; Mardis, Elaine R

    2009-01-01

    Expressed sequence tags (ESTs) have proven to be one of the most rapid and cost-effective routes to gene discovery for eukaryotic genomes. Furthermore, their multipurpose uses, such as in probe design for microarrays, determining alternative splicing, verifying open reading frames, and confirming exon/intron and gene boundaries, to name a few, further justify their inclusion in many genomic characterization projects. Hence, there has been a constant increase in the number of ESTs deposited into the dbEST division of GenBank. This trend also correlates with ever-improving molecular techniques for obtaining biological material, performing RNA extraction, and constructing cDNA libraries, and predominantly with ever-evolving sequencing chemistry and instrumentation, as well as with decreased sequencing costs. This chapter describes large-scale sequencing of ESTs on two distinct platforms: the ABI 3730xl and the 454 Life Sciences GS20 sequencers, and the corresponding processes of sequence extraction, processing, and submissions to public databases. While the conventional 3730xl sequencing process is described, starting with the plating of an already-existing cDNA library, the section on 454 GS20 pyrosequencing also includes a method for generating full-length cDNA sequences. With appropriate bioinformatics tools, these platforms, used independently or coupled together, provide a powerful combination for comprehensive exploration of an organism's transcriptome.

  8. Terminology of Large-Scale Waves in the Solar Atmosphere

    NASA Astrophysics Data System (ADS)

    Vršnak, Bojan

    2005-03-01

    This is the fourth in a series of essays on terms used in solar-terrestrial physics that are thought to be in need of clarification. Terms are identified and essays are commissioned by a committee chartered by Division II (Sun and Heliosphere) of the International Astronomical Union. Terminology Committee members include Ed Cliver (chair), Jean-Louis Bougeret, Hilary Cane, Takeo Kosugi, Sara Martin, Rainer Schwenn, and Lidia van Driel-Gesztelyi. Authors are asked to review the origins of terms and their current usage/misusage. The goals are to inform the community and to open a discussion. The following article by Bojan Vršnak focuses on terms used to describe large-scale waves in the solar atmosphere, an area of research that has been given great impetus by the images of waves from the Extreme ultraviolet Imaging Telescope (EIT) on board the Solar and Heliospheric Observatory (SOHO). The committee welcomes suggestions for other terms to address in this forum.

  9. Large Scale Integrated Photonics for Twenty-First Century Information Technologies

    NASA Astrophysics Data System (ADS)

    Beausoleil, Raymond G.

    2014-08-01

    In this paper, we will review research done by the Large-Scale Integrated Photonics group at HP Laboratories, and in particular we will discuss applications of optical resonances in dielectric microstructures and nanostructures to future classical and quantum information technologies. Our goal is to scale photonic technologies over the next decade in much the same way as electronics over the past five, thereby establishing a Moore's Law for optics.

  10. Non-Markovian dynamics of open quantum systems

    NASA Astrophysics Data System (ADS)

    Fleming, Chris H.

    An open quantum system is a quantum system that interacts with some environment whose degrees of freedom have been coarse grained away. This model describes non-equilibrium processes more general than scattering-matrix formulations. Furthermore, the microscopically-derived environment provides a model of noise, dissipation and decoherence far more general than Markovian (white noise) models. The latter are fully characterized by Lindblad equations and can be motivated phenomenologically. Non-Markovian processes consistently account for backreaction with the environment and can incorporate effects such as finite temperature and spatial correlations. We consider linear systems with bilinear coupling to the environment, or quantum Brownian motion, and nonlinear systems with weak coupling to the environment. For linear systems we provide exact solutions with analytical results for a variety of spectral densities. Furthermore, we point out an important mathematical subtlety which led to incorrect master-equation coefficients in earlier derivations, given nonlocal dissipation. For nonlinear systems we provide perturbative solutions by translating the formalism of canonical perturbation theory into the context of master equations. It is shown that unavoidable degeneracy causes an unfortunate reduction in accuracy between perturbative master equations and their solutions. We also extend the famous theorem of Lindblad, Gorini, Kossakowski and Sudarshan on complete positivity to non-Markovian master equations. Our application is primarily to model atoms interacting via a common electromagnetic field. The electromagnetic field contains correlations in both space and time, which are related to its relativistic (photon-mediated) nature. As such, atoms residing in the same field experience different environmental effects depending upon their relative position and orientation. 
Our more accurate solutions were necessary to assess sudden death of entanglement at zero temperature.
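    The Markovian limit invoked above is, as the abstract notes, fully characterized by Lindblad equations. For reference, the general GKLS (Gorini-Kossakowski-Lindblad-Sudarshan) form of such a master equation (a standard result, not specific to this dissertation) is

```latex
\frac{d\rho}{dt} = -\frac{i}{\hbar}\,[H,\rho]
  + \sum_k \gamma_k \left( L_k \rho L_k^{\dagger}
  - \tfrac{1}{2}\,\{ L_k^{\dagger} L_k,\, \rho \} \right),
```

with Hamiltonian H, jump operators L_k, and non-negative rates γ_k; the non-Markovian master equations studied in the dissertation generalize this structure to time-dependent, possibly temporarily negative coefficients.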

  11. Large scale stochastic spatio-temporal modelling with PCRaster

    NASA Astrophysics Data System (ADS)

    Karssenberg, Derek; Drost, Niels; Schmitz, Oliver; de Jong, Kor; Bierkens, Marc F. P.

    2013-04-01

    software from the eScience Technology Platform (eSTeP), developed at the Netherlands eScience Center. This will allow us to scale up to hundreds of machines, with thousands of compute cores. A key requirement is not to change the user experience of the software. PCRaster operations and the use of the Python framework classes should work in a similar manner on machines ranging from a laptop to a supercomputer. This enables a seamless transfer of models from small machines, where model development is done, to large machines used for large-scale model runs. Domain specialists from a large range of disciplines, including hydrology, ecology, sedimentology, and land use change studies, currently use the PCRaster Python software within research projects. Applications include global scale hydrological modelling and error propagation in large-scale land use change models. The software runs on MS Windows, Linux operating systems, and OS X.

  12. Cloud-based large-scale air traffic flow optimization

    NASA Astrophysics Data System (ADS)

    Cao, Yi

    The ever-increasing traffic demand makes the efficient use of airspace an imperative mission, and this paper presents an effort in response to this call. Firstly, a new aggregate model, called the Link Transmission Model (LTM), is proposed, which models the nationwide traffic as a network of flight routes identified by origin-destination pairs. The traversal time of a flight route is assumed to be the mode of the distribution of historical flight records, and the mode is estimated by using Kernel Density Estimation. As this simplification abstracts away physical trajectory details, the complexity of modeling is drastically decreased, resulting in efficient traffic forecasting. The predictive capability of LTM is validated against recorded traffic data. Secondly, a nationwide traffic flow optimization problem with airport and en route capacity constraints is formulated based on LTM. The optimization problem aims at alleviating traffic congestion with minimal global delays. This problem is intractable due to millions of variables. A dual decomposition method is applied to decompose the large-scale problem such that the subproblems are solvable. However, the whole problem is still computationally expensive to solve, since each subproblem is a smaller integer programming problem that pursues integer solutions. Solving an integer programming problem is known to be far more time-consuming than solving its linear relaxation. In addition, sequential execution on a standalone computer leads to linear runtime increase when the problem size increases. To address the computational efficiency problem, a parallel computing framework is designed which accommodates concurrent executions via multithreaded programming. The multithreaded version is compared with its monolithic version to show decreased runtime. Finally, an open-source cloud computing framework, Hadoop MapReduce, is employed for better scalability and reliability. 
This framework is an "off-the-shelf" parallel computing model.
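    The mode-of-the-distribution step in the LTM can be sketched in a few lines. The data below are hypothetical traversal times, and the hand-rolled Gaussian KDE is a stand-in for whatever estimator the dissertation actually uses:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical historical traversal times (minutes) for one O-D route:
# most flights near 190 min, plus a slow tail from reroutes and holding.
times = np.concatenate([rng.normal(190, 8, 800), rng.normal(240, 15, 80)])

def kde_mode(samples, bandwidth=5.0, grid_points=500):
    """Mode estimate: argmax of a Gaussian kernel density evaluated on a grid."""
    grid = np.linspace(samples.min(), samples.max(), grid_points)
    dens = np.exp(-0.5 * ((grid[:, None] - samples[None, :]) / bandwidth) ** 2).sum(axis=1)
    return grid[np.argmax(dens)]

mode = kde_mode(times)
```

Unlike the sample mean, the mode ignores the slow tail, which is presumably why it was chosen as the route's representative traversal time.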

  13. Critical relaxation with overdamped quasiparticles in open quantum systems

    NASA Astrophysics Data System (ADS)

    Lang, Johannes; Piazza, Francesco

    2016-09-01

    We study the late-time relaxation following a quench in an open quantum many-body system. We consider the open Dicke model, describing the infinite-range interactions between N atoms and a single, lossy electromagnetic mode. We show that the dynamical phase transition at a critical atom-light coupling is characterized by the interplay between reservoir-driven and intrinsic relaxation processes in the absence of number conservation. Above the critical coupling, small fluctuations in the occupation of the dominant quasiparticle mode start to grow in time, while the quasiparticle lifetime remains finite due to losses. Near the critical interaction strength, we observe a crossover between exponential and power-law 1/τ relaxation, the latter driven by collisions between quasiparticles. For a quench exactly to the critical coupling, the power-law relaxation extends to infinite times, but the finite lifetime of quasiparticles prevents aging from appearing in two-time response and correlation functions. We predict our results to be accessible to quench experiments with ultracold bosons in optical resonators.

  14. Quantum internet using code division multiple access.

    PubMed

    Zhang, Jing; Liu, Yu-xi; Ozdemir, Sahin Kaya; Wu, Re-Bing; Gao, Feifei; Wang, Xiang-Bin; Yang, Lan; Nori, Franco

    2013-01-01

    A crucial open problem in large-scale quantum networks is how to efficiently transmit quantum data among many pairs of users via a common data-transmission medium. We propose a solution by developing a quantum code division multiple access (q-CDMA) approach in which quantum information is chaotically encoded to spread its spectral content, and then decoded via chaos synchronization to separate different sender-receiver pairs. In comparison to other existing approaches, such as frequency division multiple access (FDMA), the proposed q-CDMA can greatly increase the information rates per channel used, especially for very noisy quantum channels.

  15. Quantum internet using code division multiple access

    PubMed Central

    Zhang, Jing; Liu, Yu-xi; Özdemir, Şahin Kaya; Wu, Re-Bing; Gao, Feifei; Wang, Xiang-Bin; Yang, Lan; Nori, Franco

    2013-01-01

    A crucial open problem in large-scale quantum networks is how to efficiently transmit quantum data among many pairs of users via a common data-transmission medium. We propose a solution by developing a quantum code division multiple access (q-CDMA) approach in which quantum information is chaotically encoded to spread its spectral content, and then decoded via chaos synchronization to separate different sender-receiver pairs. In comparison to other existing approaches, such as frequency division multiple access (FDMA), the proposed q-CDMA can greatly increase the information rates per channel used, especially for very noisy quantum channels. PMID:23860488

  16. Distribution probability of large-scale landslides in central Nepal

    NASA Astrophysics Data System (ADS)

    Timilsina, Manita; Bhandary, Netra P.; Dahal, Ranjan Kumar; Yatabe, Ryuichi

    2014-12-01

    Large-scale landslides in the Himalaya are defined as huge, deep-seated landslide masses that occurred in the geological past. They are widely distributed in the Nepal Himalaya. The steep topography and high local relief provide high potential for such failures, whereas the dynamic geology and adverse climatic conditions play a key role in the occurrence and reactivation of such landslides. The major geoscientific problems related to such large-scale landslides are 1) difficulties in their identification and delineation, 2) their role as sources of small-scale failures, and 3) their reactivation. Only a few scientific publications concerning large-scale landslides in Nepal have appeared. In this context, the identification and quantification of large-scale landslides and their potential distribution are crucial. Therefore, this study explores the distribution of large-scale landslides in the Lesser Himalaya. It provides simple guidelines to identify large-scale landslides based on their typical characteristics and using a 3D schematic diagram. Based on the spatial distribution of landslides, geomorphological/geological parameters and logistic regression, an equation for the large-scale landslide distribution is also derived. The equation is validated by applying it to another area, where the area under the receiver operating curve of the landslide distribution probability is 0.699 and the distribution probability values explain > 65% of existing landslides. Therefore, the regression equation can be applied to areas of the Lesser Himalaya of central Nepal with similar geological and geomorphological conditions.
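
The logistic-regression step described above can be sketched as follows. The predictors, coefficients, and synthetic labels here are hypothetical stand-ins for the study's geomorphological/geological parameters, and the area under the receiver operating curve is computed with the rank-comparison identity.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical predictors standing in for the study's parameters;
# the landslide presence labels are synthetic.
n = 400
slope = rng.uniform(10, 50, n)            # slope angle (deg)
relief = rng.uniform(200, 1500, n)        # local relief (m)
logit_true = 0.08 * (slope - 30) + 0.002 * (relief - 800)
y = (rng.random(n) < 1 / (1 + np.exp(-logit_true))).astype(float)

# Fit logistic regression by plain gradient descent on standardized inputs.
X = np.column_stack([np.ones(n),
                     (slope - slope.mean()) / slope.std(),
                     (relief - relief.mean()) / relief.std()])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

# Area under the receiver operating curve via the rank-comparison identity:
# the probability that a random landslide cell outranks a random stable cell.
scores = X @ w
pos, neg = scores[y == 1], scores[y == 0]
auc = (pos[:, None] > neg[None, :]).mean()
```

An AUC of about 0.7, as reported for the validation area, means the fitted probability ranks a randomly chosen landslide location above a randomly chosen stable location about 70% of the time.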

  17. Positive Tensor Network Approach for Simulating Open Quantum Many-Body Systems.

    PubMed

    Werner, A H; Jaschke, D; Silvi, P; Kliesch, M; Calarco, T; Eisert, J; Montangero, S

    2016-06-10

    Open quantum many-body systems play an important role in quantum optics and condensed matter physics, and capture phenomena like transport, the interplay between Hamiltonian and incoherent dynamics, and topological order generated by dissipation. We introduce a versatile and practical method to numerically simulate one-dimensional open quantum many-body dynamics using tensor networks. It is based on representing mixed quantum states in a locally purified form, which guarantees that positivity is preserved at all times. Moreover, the approximation error is controlled with respect to the trace norm. Hence, this scheme overcomes various obstacles of the known numerical open-system evolution schemes. To exemplify the functioning of the approach, we study both stationary states and transient dissipative behavior, for various open quantum systems ranging from few to many bodies.

  18. Positive Tensor Network Approach for Simulating Open Quantum Many-Body Systems

    NASA Astrophysics Data System (ADS)

    Werner, A. H.; Jaschke, D.; Silvi, P.; Kliesch, M.; Calarco, T.; Eisert, J.; Montangero, S.

    2016-06-01

    Open quantum many-body systems play an important role in quantum optics and condensed matter physics, and capture phenomena like transport, the interplay between Hamiltonian and incoherent dynamics, and topological order generated by dissipation. We introduce a versatile and practical method to numerically simulate one-dimensional open quantum many-body dynamics using tensor networks. It is based on representing mixed quantum states in a locally purified form, which guarantees that positivity is preserved at all times. Moreover, the approximation error is controlled with respect to the trace norm. Hence, this scheme overcomes various obstacles of the known numerical open-system evolution schemes. To exemplify the functioning of the approach, we study both stationary states and transient dissipative behavior, for various open quantum systems ranging from few to many bodies.
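
The positivity guarantee of the locally purified form can be seen already in a dense, non-tensor-network toy example: if a mixed state is stored as rho = X X†, any content of X, including a truncated purification rank, still yields a positive semidefinite rho. The dimensions below are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Store a mixed state in (dense) locally purified form: rho = X X^dagger.
# Positivity is then automatic, whatever the (truncated) tensor X contains.
dim, rank = 8, 3                     # system dimension, purification rank
X = rng.normal(size=(dim, rank)) + 1j * rng.normal(size=(dim, rank))
rho = X @ X.conj().T
rho /= np.trace(rho).real            # normalize to unit trace

eigvals = np.linalg.eigvalsh(rho)    # all >= 0 by construction
```

In the tensor-network setting of the paper the same identity is applied site by site, which is what lets truncation errors be controlled in trace norm without ever producing a non-physical state.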

  19. Statistical analysis of mesoscale rainfall: Dependence of a random cascade generator on large-scale forcing

    NASA Technical Reports Server (NTRS)

    Over, Thomas M.; Gupta, Vijay K.

    1994-01-01

    Under the theory of independent and identically distributed random cascades, the probability distribution of the cascade generator determines the spatial and the ensemble properties of spatial rainfall. Three sets of radar-derived rainfall data in space and time are analyzed to estimate the probability distribution of the generator. A detailed comparison between instantaneous scans of spatial rainfall and simulated cascades using the scaling properties of the marginal moments is carried out. This comparison highlights important similarities and differences between the data and the random cascade theory. Differences are quantified and measured for the three datasets. Evidence is presented to show that the scaling properties of the rainfall can be captured to the first order by a random cascade with a single parameter. The dependence of this parameter on forcing by the large-scale meteorological conditions, as measured by the large-scale spatial average rain rate, is investigated for these three datasets. The data show that this dependence can be captured by a one-to-one function. Since the large-scale average rain rate can be diagnosed from the large-scale dynamics, this relationship demonstrates an important linkage between the large-scale atmospheric dynamics and the statistical cascade theory of mesoscale rainfall. Potential application of this research to parameterization of runoff from the land surface and regional flood frequency analysis is briefly discussed, and open problems for further research are presented.
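
A minimal sketch of the one-parameter random cascade idea: starting from a large-scale average rain rate, each subdivision multiplies the field by i.i.d. copies of a generator W with E[W] = 1. The "beta model" generator used below (W = 1/p with probability p, else 0) is one common single-parameter choice, assumed here for illustration rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_cascade(mean_rate, levels, p):
    """2-D multiplicative cascade: each level subdivides every pixel into
    2 x 2 children and multiplies by i.i.d. W with E[W] = 1, so the
    ensemble mean stays at the large-scale average rain rate."""
    field = np.full((1, 1), float(mean_rate))
    for _ in range(levels):
        field = np.kron(field, np.ones((2, 2)))     # subdivide each pixel
        w = (rng.random(field.shape) < p) / p       # 'beta model' generator
        field = field * w
    return field

field = random_cascade(mean_rate=5.0, levels=6, p=0.8)   # 64 x 64 rain field
```

Intermittency (the fraction of dry pixels) is controlled by the single parameter p; the paper's finding is that the corresponding cascade parameter can be tied one-to-one to the large-scale average rain rate.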

  20. Resummation for Nonequilibrium Perturbation Theory and Application to Open Quantum Lattices

    NASA Astrophysics Data System (ADS)

    Li, Andy C. Y.; Petruccione, F.; Koch, Jens

    2016-04-01

    Lattice models of fermions, bosons, and spins have long served to elucidate the essential physics of quantum phase transitions in a variety of systems. Generalizing such models to incorporate driving and dissipation has opened new vistas to investigate nonequilibrium phenomena and dissipative phase transitions in interacting many-body systems. We present a framework for the treatment of such open quantum lattices based on a resummation scheme for the Lindblad perturbation series. Employing a convenient diagrammatic representation, we utilize this method to obtain relevant observables for the open Jaynes-Cummings lattice, a model of special interest for open-system quantum simulation. We demonstrate that the resummation framework allows us to reliably predict observables for both finite and infinite Jaynes-Cummings lattices with different lattice geometries. The resummation of the Lindblad perturbation series can thus serve as a valuable tool in validating open quantum simulators, such as circuit-QED lattices, currently being investigated experimentally.

  1. Observing the Progressive Decoherence of the "Meter" in a Quantum Measurement

    SciTech Connect

    Brune, M.; Hagley, E.; Dreyer, J.; Maitre, X.; Maali, A.; Wunderlich, C.; Raimond, J.M.; Haroche, S.

    1996-12-01

    A mesoscopic superposition of quantum states involving radiation fields with classically distinct phases was created and its progressive decoherence observed. The experiment involved Rydberg atoms interacting one at a time with a few-photon coherent field trapped in a high-Q microwave cavity. The mesoscopic superposition was the equivalent of an "atom + measuring apparatus" system in which the "meter" was pointing simultaneously towards two different directions: a "Schrödinger cat." The decoherence phenomenon transforming this superposition into a statistical mixture was observed while it unfolded, providing a direct insight into a process at the heart of quantum measurement. © 1996 The American Physical Society.

  2. Polymer Physics of the Large-Scale Structure of Chromatin.

    PubMed

    Bianco, Simona; Chiariello, Andrea Maria; Annunziatella, Carlo; Esposito, Andrea; Nicodemi, Mario

    2016-01-01

    We summarize the picture emerging from recently proposed models of polymer physics describing the general features of chromatin large scale spatial architecture, as revealed by microscopy and Hi-C experiments. PMID:27659986

  3. Large-scale anisotropy of the cosmic microwave background radiation

    NASA Technical Reports Server (NTRS)

    Silk, J.; Wilson, M. L.

    1981-01-01

    Inhomogeneities in the large-scale distribution of matter inevitably lead to the generation of large-scale anisotropy in the cosmic background radiation. The dipole, quadrupole, and higher order fluctuations expected in an Einstein-de Sitter cosmological model have been computed. The dipole and quadrupole anisotropies are comparable to the measured values, and impose important constraints on the allowable spectrum of large-scale matter density fluctuations. A significant dipole anisotropy is generated by the matter distribution on scales greater than approximately 100 Mpc. The large-scale anisotropy is insensitive to the ionization history of the universe since decoupling, and cannot easily be reconciled with a galaxy formation theory that is based on primordial adiabatic density fluctuations.

  4. Polymer Physics of the Large-Scale Structure of Chromatin.

    PubMed

    Bianco, Simona; Chiariello, Andrea Maria; Annunziatella, Carlo; Esposito, Andrea; Nicodemi, Mario

    2016-01-01

    We summarize the picture emerging from recently proposed models of polymer physics describing the general features of chromatin large scale spatial architecture, as revealed by microscopy and Hi-C experiments.

  5. Needs, opportunities, and options for large scale systems research

    SciTech Connect

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26-27, 1984 in Pittsburgh with nine panel members and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  6. Large scale anomalies in the microwave background: causation and correlation.

    PubMed

    Aslanyan, Grigor; Easther, Richard

    2013-12-27

    Most treatments of large scale anomalies in the microwave sky are a posteriori, with unquantified look-elsewhere effects. We contrast these with physical models of specific inhomogeneities in the early Universe which can generate these apparent anomalies. Physical models predict correlations between candidate anomalies and the corresponding signals in polarization and large scale structure, reducing the impact of cosmic variance. We compute the apparent spatial curvature associated with large-scale inhomogeneities and show that it is typically small, allowing for a self-consistent analysis. As an illustrative example we show that a single large plane wave inhomogeneity can contribute to low-l mode alignment and odd-even asymmetry in the power spectra, and that the best-fit model accounts for a significant part of the claimed odd-even asymmetry. We argue that this approach can be generalized to provide a more quantitative assessment of potential large scale anomalies in the Universe.

  7. Large-scale studies of marked birds in North America

    USGS Publications Warehouse

    Tautin, J.; Metras, L.; Smith, G.

    1999-01-01

    The first large-scale, co-operative, studies of marked birds in North America were attempted in the 1950s. Operation Recovery, which linked numerous ringing stations along the east coast in a study of autumn migration of passerines, and the Preseason Duck Ringing Programme in prairie states and provinces, conclusively demonstrated the feasibility of large-scale projects. The subsequent development of powerful analytical models and computing capabilities expanded the quantitative potential for further large-scale projects. Monitoring Avian Productivity and Survivorship, and Adaptive Harvest Management are current examples of truly large-scale programmes. Their exemplary success and the availability of versatile analytical tools are driving changes in the North American bird ringing programme. Both the US and Canadian ringing offices are modifying operations to collect more and better data to facilitate large-scale studies and promote a more project-oriented ringing programme. New large-scale programmes such as the Cornell Nest Box Network are on the horizon.

  8. A study of MLFMA for large-scale scattering problems

    NASA Astrophysics Data System (ADS)

    Hastriter, Michael Larkin

    This research is centered on computational electromagnetics with a focus on solving large-scale problems accurately in a timely fashion using first-principles physics. Error control of the translation operator in 3-D is shown. A parallel implementation of the multilevel fast multipole algorithm (MLFMA) was studied with respect to parallel efficiency and scaling. The large-scale scattering program (LSSP), based on the ScaleME library, was used to solve ultra-large-scale problems including a 200λ sphere with 20 million unknowns. As these large-scale problems were solved, techniques were developed to accurately estimate the memory requirements. Careful memory management is needed in order to solve these massive problems. The study of MLFMA in large-scale problems revealed significant errors that stemmed from inconsistencies in constants used by different parts of the algorithm. These were fixed to produce the most accurate data possible for large-scale surface scattering problems. Data was calculated on a missile-like target using both high-frequency methods and MLFMA. This data was compared and analyzed to determine possible strategies to increase data acquisition speed and accuracy through hybridization of multiple computation methods.

  9. Large-scale motions in a plane wall jet

    NASA Astrophysics Data System (ADS)

    Gnanamanickam, Ebenezer; Latim, Jonathan; Bhatt, Shibani

    2015-11-01

    The dynamical significance of large-scale motions in turbulent boundary layers has been the focus of several recent studies, primarily concerning canonical flows: zero-pressure-gradient boundary layers and flows within pipes and channels. This work presents an investigation of the large-scale motions in a boundary layer that serves as the prototypical flow field for flows with large-scale mixing and reactions, the plane wall jet. An experimental investigation is carried out in a plane wall jet facility designed to operate at friction Reynolds numbers Reτ > 1000, which allows for the development of a significant logarithmic region. The streamwise turbulent intensity across the boundary layer is decomposed into small-scale (less than one integral length scale δ) and large-scale components. The small-scale energy has a peak in the near-wall region associated with the near-wall turbulent cycle, as in canonical boundary layers. However, the large-scale eddies dominate, carrying significantly higher energy than the small scales across almost the entire boundary layer, even at the low to moderate Reynolds numbers under consideration. The large scales also appear to amplitude- and frequency-modulate the smaller scales across the entire boundary layer.

  10. Evidencing `Tight Bound States' in the Hydrogen Atom:. Empirical Manipulation of Large-Scale XD in Violation of QED

    NASA Astrophysics Data System (ADS)

    Amoroso, Richard L.; Vigier, Jean-Pierre

    2013-09-01

    In this work we extend Vigier's recent theory of `tight bound state' (TBS) physics and propose empirical protocols to test not only for their putative existence, but also for the claim that their existence, if demonstrated, provides the first empirical evidence of string theory, because it occurs in the context of large-scale extra dimensionality (LSXD) cast in a unique M-Theoretic vacuum corresponding to the new Holographic Anthropic Multiverse (HAM) cosmological paradigm. Physicists generally consider spacetime as a stochastic foam containing a zero-point field (ZPF) from which virtual particles, restricted by the quantum uncertainty principle (to the Planck time), wink in and out of existence. According to the extended de Broglie-Bohm-Vigier causal stochastic interpretation of quantum theory, spacetime and the matter embedded within it are created, annihilated, and recreated as a virtual locus of reality with a continuous quantum evolution (de Broglie matter waves) governed by a pilot wave, a `super quantum potential' extended in HAM cosmology to be synonymous with a `force of coherence' inherent in the Unified Field, UF. We consider this backcloth to be a covariant polarized vacuum of the (generally ignored by contemporary physicists) Dirac type. We discuss open questions of the physics of point particles (fermionic nilpotent singularities). We propose a new set of experiments to test for TBS in a Dirac covariant polarized vacuum LSXD hyperspace, suggestive of a recently tested special case of the Lorentz transformation put forth by Kowalski and Vigier. These protocols reach far beyond the recent battery of atomic spectral violations of QED performed through NIST.

  11. Developments in large-scale coastal flood hazard mapping

    NASA Astrophysics Data System (ADS)

    Vousdoukas, Michalis I.; Voukouvalas, Evangelos; Mentaschi, Lorenzo; Dottori, Francesco; Giardino, Alessio; Bouziotas, Dimitrios; Bianchi, Alessandra; Salamon, Peter; Feyen, Luc

    2016-08-01

    Coastal flooding related to marine extreme events has severe socioeconomic impacts, and even though the latter are projected to increase under the changing climate, there is a clear deficit of information and predictive capacity related to coastal flood mapping. The present contribution reports on efforts towards a new methodology for mapping coastal flood hazard at European scale, combining (i) the contribution of waves to the total water level; (ii) improved inundation modeling; and (iii) an open, physics-based framework which can be constantly upgraded, whenever new and more accurate data become available. Four inundation approaches of gradually increasing complexity and computational costs were evaluated in terms of their applicability to large-scale coastal flooding mapping: static inundation (SM); a semi-dynamic method, considering the water volume discharge over the dykes (VD); the flood intensity index approach (Iw); and the model LISFLOOD-FP (LFP). A validation test performed against observed flood extents during the Xynthia storm event showed that SM and VD can lead to an overestimation of flood extents by 232 and 209 %, while Iw and LFP showed satisfactory predictive skill. Application at pan-European scale for the present-day 100-year event confirmed that static approaches can overestimate flood extents by 56 % compared to LFP; however, Iw can deliver results of reasonable accuracy in cases when reduced computational costs are a priority. Moreover, omitting the wave contribution in the extreme total water level (TWL) can result in a ˜ 60 % underestimation of the flooded area. The present findings have implications for impact assessment studies, since combination of the estimated inundation maps with population exposure maps revealed differences in the estimated number of people affected within the 20-70 % range.

  12. Experimental control of the transition from Markovian to non-Markovian dynamics of open quantum systems

    NASA Astrophysics Data System (ADS)

    Liu, Bi-Heng; Li, Li; Huang, Yun-Feng; Li, Chuan-Feng; Guo, Guang-Can; Laine, Elsi-Mari; Breuer, Heinz-Peter; Piilo, Jyrki

    2011-12-01

    Realistic quantum mechanical systems are always exposed to an external environment. This often induces Markovian processes in which the system loses information to its surroundings. However, many quantum systems exhibit non-Markovian behaviour with a flow of information from the environment back to the system. The environment usually consists of a large number of degrees of freedom that are difficult to control, but some sophisticated schemes for reservoir engineering have been developed. The control of open systems plays a decisive role, for example, in proposals for entanglement generation and dissipative quantum computation, in the design of quantum memories, and in quantum metrology. Here we report an all-optical experiment which allows one to drive the open system from the Markovian to the non-Markovian regime, to control the information flow between the system and the environment, and to determine the degree of non-Markovianity by measurements on the open system.
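
A standard witness for the information backflow described above is the trace distance between two system states: it is non-increasing under Markovian dynamics, so any revival signals non-Markovianity (the Breuer-Laine-Piilo criterion). The toy dephasing model below, with an oscillating coherence factor, is an illustrative assumption, not the photonic system of the experiment.

```python
import numpy as np

def trace_distance(r1, r2):
    """D(r1, r2) = (1/2) * sum of |eigenvalues| of the Hermitian r1 - r2."""
    return 0.5 * np.abs(np.linalg.eigvalsh(r1 - r2)).sum()

# Toy dephasing channel: coherences are multiplied by c(t).  The oscillating
# factor makes |c(t)| revive, i.e. information flows back to the system.
t = np.linspace(0, 6, 300)
c = np.exp(-0.3 * t) * np.cos(2 * t)

def rho(sign, ct):
    """Evolved states for the initial pair |+> (sign=+1) and |-> (sign=-1)."""
    return 0.5 * np.array([[1.0, sign * ct], [sign * ct, 1.0]])

D = np.array([trace_distance(rho(+1, ct), rho(-1, ct)) for ct in c])
backflow = np.diff(D)[np.diff(D) > 0].sum()   # > 0 flags non-Markovianity
```

For this pair of states D(t) = |c(t)| exactly, so every revival of the coherence shows up directly as an increase of the trace distance.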

  13. Revealing electronic open quantum systems with subsystem TDDFT.

    PubMed

    Krishtal, Alisa; Pavanello, Michele

    2016-03-28

    Open quantum systems (OQSs) are perhaps the most realistic systems one can approach through simulations. In recent years, describing OQSs with Density Functional Theory (DFT) has been a prominent avenue of research with most approaches based on a density matrix partitioning in conjunction with an ad-hoc description of system-bath interactions. We propose a different theoretical approach to OQSs based on partitioning of the electron density. Employing the machinery of subsystem DFT (and its time-dependent extension), we provide a novel way of isolating and analyzing the various terms contributing to the coupling between the system and the surrounding bath. To illustrate the theory, we provide numerical simulations on a toy system (a molecular dimer) and on a condensed phase system (solvated excimer). The simulations show that non-Markovian dynamics in the electronic system-bath interactions are important in chemical applications. For instance, we show that the superexchange mechanism of transport in donor-bridge-acceptor systems is a non-Markovian interaction between the donor-acceptor (OQS) with the bridge (bath) which is fully characterized by real-time subsystem time-dependent DFT. PMID:27036438

  14. Dynamics of quantum tomography in an open system

    NASA Astrophysics Data System (ADS)

    Uchiyama, Chikako

    2015-06-01

    In this study, we provide a way to describe the dynamics of quantum tomography in an open system with a generalized master equation, considering a case where the relevant system under tomographic measurement is influenced by the environment. We apply this to spin tomography because such situations typically occur in μSR (muon spin rotation/relaxation/resonance) experiments where microscopic features of the material are investigated by injecting muons as probes. As a typical example to describe the interaction between muons and a sample material, we use a spin-boson model where the relevant spin interacts with a bosonic environment. We describe the dynamics of a spin tomogram using a time-convolutionless type of generalized master equation that enables us to describe short time scales and/or low-temperature regions. Through numerical evaluation for the case of Ohmic spectral density with an exponential cutoff, a clear interdependency is found between the time evolution of elements of the density operator and a spin tomogram. The formulation in this paper may provide important fundamental information for the analysis of results from, for example, μSR experiments on short time scales and/or in low-temperature regions using spin tomography.
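
For orientation, the sketch below evolves a single spin's density operator under a simple Markovian Lindblad equation with pure dephasing and tracks ⟨σx⟩, the kind of quantity a spin tomogram is built from. The time-convolutionless equation used in the paper reduces to such a time-local form only in the Markovian limit, and the rates and Hamiltonian here are hypothetical.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

omega, gamma = 1.0, 0.1            # hypothetical precession and dephasing rates
H = 0.5 * omega * sz
L = np.sqrt(gamma) * sz            # pure-dephasing jump operator

def lindblad(r):
    """Markovian Lindblad generator (illustrative stand-in for the paper's
    time-convolutionless generalized master equation)."""
    comm = -1j * (H @ r - r @ H)
    diss = L @ r @ L.conj().T - 0.5 * (L.conj().T @ L @ r + r @ L.conj().T @ L)
    return comm + diss

rho = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)   # spin prepared along +x
dt, steps = 0.01, 500
sx_t = []
for _ in range(steps):                                  # fourth-order Runge-Kutta
    k1 = lindblad(rho)
    k2 = lindblad(rho + 0.5 * dt * k1)
    k3 = lindblad(rho + 0.5 * dt * k2)
    k4 = lindblad(rho + dt * k3)
    rho = rho + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    sx_t.append(np.trace(sx @ rho).real)
```

For this model the closed form is ⟨σx⟩(t) = exp(-2γt) cos(ωt), so the numerical trajectory, and hence the tomogram element it feeds, can be checked directly.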

  15. Revealing electronic open quantum systems with subsystem TDDFT

    NASA Astrophysics Data System (ADS)

    Krishtal, Alisa; Pavanello, Michele

    2016-03-01

    Open quantum systems (OQSs) are perhaps the most realistic systems one can approach through simulations. In recent years, describing OQSs with Density Functional Theory (DFT) has been a prominent avenue of research with most approaches based on a density matrix partitioning in conjunction with an ad-hoc description of system-bath interactions. We propose a different theoretical approach to OQSs based on partitioning of the electron density. Employing the machinery of subsystem DFT (and its time-dependent extension), we provide a novel way of isolating and analyzing the various terms contributing to the coupling between the system and the surrounding bath. To illustrate the theory, we provide numerical simulations on a toy system (a molecular dimer) and on a condensed phase system (solvated excimer). The simulations show that non-Markovian dynamics in the electronic system-bath interactions are important in chemical applications. For instance, we show that the superexchange mechanism of transport in donor-bridge-acceptor systems is a non-Markovian interaction between the donor-acceptor (OQS) with the bridge (bath) which is fully characterized by real-time subsystem time-dependent DFT.

  16. Rapid Swept-Wavelength External Cavity Quantum Cascade Laser for Open Path Sensing

    SciTech Connect

    Brumfield, Brian E.; Phillips, Mark C.

    2015-07-01

    A rapidly tunable external cavity quantum cascade laser system is used for open path sensing. The system permits acquisition of transient absorption spectra over a 125 cm-1 tuning range in less than 0.01 s.

  17. Open-loop quantum control as a resource for secure communications

    NASA Astrophysics Data System (ADS)

    Pastorello, Davide

    2016-05-01

    Properties of the unitary time evolution of quantum systems can be applied to define quantum cryptographic protocols. The dynamics of a qubit can be exploited as a data encryption/decryption procedure by means of timed measurements, and implementing an open-loop control scheme over the qubit increases the robustness of a protocol employing this principle.

  18. EINSTEIN'S SIGNATURE IN COSMOLOGICAL LARGE-SCALE STRUCTURE

    SciTech Connect

    Bruni, Marco; Hidalgo, Juan Carlos; Wands, David

    2014-10-10

    We show how the nonlinearity of general relativity generates a characteristic non-Gaussian signal in cosmological large-scale structure that we calculate at all perturbative orders in a large-scale limit. Newtonian gravity and general relativity provide complementary theoretical frameworks for modeling large-scale structure in ΛCDM cosmology; a relativistic approach is essential to determine initial conditions, which can then be used in Newtonian simulations studying the nonlinear evolution of the matter density. Most inflationary models in the very early universe predict an almost Gaussian distribution for the primordial metric perturbation, ζ. However, we argue that it is the Ricci curvature of comoving-orthogonal spatial hypersurfaces, R, that drives structure formation at large scales. We show how the nonlinear relation between the spatial curvature, R, and the metric perturbation, ζ, translates into a specific non-Gaussian contribution to the initial comoving matter density that we calculate for the simple case of an initially Gaussian ζ. Our analysis shows the nonlinear signature of Einstein's gravity in large-scale structure.

  19. Toward Improved Support for Loosely Coupled Large Scale Simulation Workflows

    SciTech Connect

    Boehm, Swen; Elwasif, Wael R; Naughton, III, Thomas J; Vallee, Geoffroy R

    2014-01-01

    High-performance computing (HPC) workloads are increasingly leveraging loosely coupled large-scale simulations. Unfortunately, most large-scale HPC platforms, including Cray/ALPS environments, are designed for the execution of long-running jobs based on coarse-grained launch capabilities (e.g., one MPI rank per core on all allocated compute nodes). This assumption limits capability-class workload campaigns that require large numbers of discrete or loosely coupled simulations, and where time-to-solution is an untenable pacing issue. This paper describes the challenges related to the support of the fine-grained launch capabilities that are necessary for the execution of loosely coupled large-scale simulations on Cray/ALPS platforms. More precisely, we present the details of an enhanced runtime system to support this use case, and report initial results from early testing on systems at Oak Ridge National Laboratory.
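
The contrast between coarse-grained job launch and a fine-grained task farm can be illustrated, far from the Cray/ALPS setting, with a thread-pool sketch in which many short, independent simulations share one "allocation"; the task function and counts are hypothetical.

```python
import concurrent.futures
import math

def simulate(case_id, steps=10_000):
    """Stand-in for one short, discrete simulation in a loosely coupled
    campaign (hypothetical; the paper's workloads are MPI jobs)."""
    x = 0.0
    for i in range(steps):
        x += math.sin(case_id + i * 1e-3)
    return case_id, x

# Fine-grained launch: many small tasks share one pool of workers instead
# of each paying the cost of a full coarse-grained job launch.
with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
    results = dict(pool.map(simulate, range(32)))
```

The runtime system described in the paper plays the role of the pool here, scheduling discrete simulations inside a single node allocation rather than submitting each one as its own job.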

  20. Do Large-Scale Topological Features Correlate with Flare Properties?

    NASA Astrophysics Data System (ADS)

    DeRosa, Marc L.; Barnes, Graham

    2016-05-01

    In this study, we aim to identify whether the presence or absence of particular topological features in the large-scale coronal magnetic field are correlated with whether a flare is confined or eruptive. To this end, we first determine the locations of null points, spine lines, and separatrix surfaces within the potential fields associated with the locations of several strong flares from the current and previous sunspot cycles. We then validate the topological skeletons against large-scale features in observations, such as the locations of streamers and pseudostreamers in coronagraph images. Finally, we characterize the topological environment in the vicinity of the flaring active regions and identify the trends involving their large-scale topologies and the properties of the associated flares.

  1. Acoustic Studies of the Large Scale Ocean Circulation

    NASA Technical Reports Server (NTRS)

    Menemenlis, Dimitris

    1999-01-01

    Detailed knowledge of ocean circulation and its transport properties is prerequisite to an understanding of the earth's climate and of important biological and chemical cycles. Results from two recent experiments, THETIS-2 in the Western Mediterranean and ATOC in the North Pacific, illustrate the use of ocean acoustic tomography for studies of the large scale circulation. The attraction of acoustic tomography is its ability to sample and average the large-scale oceanic thermal structure, synoptically, along several sections, and at regular intervals. In both studies, the acoustic data are compared to, and then combined with, general circulation models, meteorological analyses, satellite altimetry, and direct measurements from ships. Both studies provide complete regional descriptions of the time-evolving, three-dimensional, large scale circulation, albeit with large uncertainties. The studies raise serious issues about existing ocean observing capability and provide guidelines for future efforts.

  2. Coupling between convection and large-scale circulation

    NASA Astrophysics Data System (ADS)

    Becker, T.; Stevens, B. B.; Hohenegger, C.

    2014-12-01

    The ultimate drivers of convection - radiation, tropospheric humidity and surface fluxes - are altered both by the large-scale circulation and by convection itself. A quantity to which all drivers of convection contribute is the moist static energy (or the gross moist stability). Therefore, a variance analysis of the moist static energy budget in radiative-convective equilibrium helps in understanding the interaction of precipitating convection and the large-scale environment. In addition, this method provides insights concerning the impact of convective aggregation on this coupling. As a starting point, the interaction is analyzed with a general circulation model, but a model intercomparison study using a hierarchy of models is planned. Effective coupling parameters will be derived from cloud resolving models and these will in turn be related to assumptions used to parameterize convection in large-scale models.

  3. Human pescadillo induces large-scale chromatin unfolding.

    PubMed

    Zhang, Hao; Fang, Yan; Huang, Cuifen; Yang, Xiao; Ye, Qinong

    2005-06-01

    The human pescadillo gene encodes a protein with a BRCT domain. Pescadillo plays an important role in DNA synthesis, cell proliferation and transformation. Since BRCT domains have been shown to induce large-scale chromatin unfolding, we tested the role of Pescadillo in the regulation of large-scale chromatin unfolding. To this end, we isolated the coding region of Pescadillo from human mammary MCF10A cells. Compared with the reported sequence, the isolated Pescadillo contains an in-frame deletion from amino acid 580 to 582. Targeting Pescadillo to an amplified, lac operator-containing chromosome region in the mammalian genome results in large-scale chromatin decondensation. This unfolding activity maps to the BRCT domain of Pescadillo. These data provide a new clue to understanding the vital role of Pescadillo.

  4. Large-Scale Density Functional Theory Transition State Searching in Enzymes.

    PubMed

    Lever, Greg; Cole, Daniel J; Lonsdale, Richard; Ranaghan, Kara E; Wales, David J; Mulholland, Adrian J; Skylaris, Chris-Kriton; Payne, Mike C

    2014-11-01

    Linear-scaling quantum mechanical density functional theory calculations have been applied to study the rearrangement of chorismate to prephenate in large-scale models of the Bacillus subtilis chorismate mutase enzyme. By treating up to 2000 atoms at a consistent quantum mechanical level of theory, we obtain an unbiased, almost parameter-free description of the transition state geometry and energetics. The activation energy barrier is calculated to be lowered by 10.5 kcal mol(-1) in the enzyme, compared with the equivalent reaction in water, which is in good agreement with experiment. Natural bond orbital analysis identifies a number of active site residues that are important for transition state stabilization in chorismate mutase. This benchmark study demonstrates that linear-scaling density functional theory techniques are capable of simulating entire enzymes at the ab initio quantum mechanical level of accuracy.

  5. Magnetic Helicity and Large Scale Magnetic Fields: A Primer

    NASA Astrophysics Data System (ADS)

    Blackman, Eric G.

    2015-05-01

    Magnetic fields of laboratory, planetary, stellar, and galactic plasmas commonly exhibit significant order on large temporal or spatial scales compared to the otherwise random motions within the hosting system. Such ordered fields can be measured in the case of planets, stars, and galaxies, or inferred indirectly by the action of their dynamical influence, such as jets. Whether large scale fields are amplified in situ or a remnant from previous stages of an object's history is often debated for objects without a definitive magnetic activity cycle. Magnetic helicity, a measure of twist and linkage of magnetic field lines, is a unifying tool for understanding large scale field evolution for both mechanisms of origin. Its importance stems from its two basic properties: (1) magnetic helicity is typically better conserved than magnetic energy; and (2) the magnetic energy associated with a fixed amount of magnetic helicity is minimized when the system relaxes this helical structure to the largest scale available. Here I discuss how magnetic helicity has come to help us understand the saturation of and sustenance of large scale dynamos, the need for either local or global helicity fluxes to avoid dynamo quenching, and the associated observational consequences. I also discuss how magnetic helicity acts as a hindrance to turbulent diffusion of large scale fields, and thus a helper for fossil remnant large scale field origin models in some contexts. I briefly discuss the connection between large scale fields and accretion disk theory as well. The goal here is to provide a conceptual primer to help the reader efficiently penetrate the literature.

  6. Quantum Fisher information flow and non-Markovian processes of open systems

    SciTech Connect

    Lu Xiaoming; Wang Xiaoguang; Sun, C. P.

    2010-10-15

    We establish an information-theoretic approach for quantitatively characterizing the non-Markovianity of open quantum processes. Here, the quantum Fisher information (QFI) flow provides a measure to statistically distinguish Markovian from non-Markovian processes. A basic relation between the QFI flow and non-Markovianity is unveiled for the quantum dynamics of open systems. For a class of time-local master equations, an exact analytic solution shows that, for each fixed time, the QFI flow decomposes into additive subflows according to the different dissipative channels.
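
For orientation, the criterion described in this abstract can be written compactly. The notation below is a sketch following the standard QFI-flow formulation, not text from the record:

```latex
% F_\theta(t) is the quantum Fisher information of a parameter \theta
% encoded in the evolving open-system state \rho_\theta(t).
\mathcal{I}(t) \;\equiv\; \frac{\partial F_\theta(t)}{\partial t},
\qquad
\mathcal{I}(t) \le 0 \;\;\forall t \;\;\text{(Markovian)},
\qquad
\exists\, t:\ \mathcal{I}(t) > 0 \;\;\text{(non-Markovian)}.
% For time-local master equations the flow decomposes channel by channel,
% \mathcal{I}(t) = \sum_i \gamma_i(t)\, \mathcal{J}_i(t),
% where \gamma_i(t) are the decay rates; a temporarily negative rate can
% make its subflow positive, signalling information backflow.
```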

  7. The three-point function as a probe of models for large-scale structure

    SciTech Connect

    Frieman, J.A.; Gaztanaga, E.

    1993-06-19

    The authors analyze the consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime. Several variations of the standard Ω = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, R_p ~ 20 h^-1 Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower, et al. The authors show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
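
For reference, the hierarchical three-point amplitudes discussed here are conventionally defined as follows (standard definitions added for orientation, not quoted from the record):

```latex
% \zeta_{123} is the connected three-point correlation function,
% \xi_{ij} the two-point function between points i and j,
% and \delta the density contrast.
Q_3 \;\equiv\; \frac{\zeta_{123}}{\xi_{12}\,\xi_{23} + \xi_{23}\,\xi_{31} + \xi_{31}\,\xi_{12}},
\qquad
S_3 \;\equiv\; \frac{\langle \delta^3 \rangle}{\langle \delta^2 \rangle^{2}} .
```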

  8. Clearing and Labeling Techniques for Large-Scale Biological Tissues

    PubMed Central

    Seo, Jinyoung; Choe, Minjin; Kim, Sung-Yon

    2016-01-01

    Clearing and labeling techniques for large-scale biological tissues enable simultaneous extraction of molecular and structural information with minimal disassembly of the sample, facilitating the integration of molecular, cellular and systems biology across different scales. Recent years have witnessed an explosive increase in the number of such methods and their applications, reflecting heightened interest in organ-wide clearing and labeling across many fields of biology and medicine. In this review, we provide an overview and comparison of existing clearing and labeling techniques and discuss challenges and opportunities in the investigations of large-scale biological systems. PMID:27239813

  9. Survey of decentralized control methods. [for large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Athans, M.

    1975-01-01

    An overview is presented of the types of problems that are being considered by control theorists in the area of dynamic large scale systems with emphasis on decentralized control strategies. Approaches that deal directly with decentralized decision making for large scale systems are discussed. It is shown that future advances in decentralized system theory are intimately connected with advances in the stochastic control problem with nonclassical information pattern. The basic assumptions and mathematical tools associated with the latter are summarized, and recommendations concerning future research are presented.

  10. Corridors Increase Plant Species Richness at Large Scales

    SciTech Connect

    Damschen, Ellen I.; Haddad, Nick M.; Orrock, John L.; Tewksbury, Joshua J.; Levey, Douglas J.

    2006-09-01

    Habitat fragmentation is one of the largest threats to biodiversity. Landscape corridors, which are hypothesized to reduce the negative consequences of fragmentation, have become common features of ecological management plans worldwide. Despite their popularity, there is little evidence documenting the effectiveness of corridors in preserving biodiversity at large scales. Using a large-scale replicated experiment, we showed that habitat patches connected by corridors retain more native plant species than do isolated patches, that this difference increases over time, and that corridors do not promote invasion by exotic species. Our results support the use of corridors in biodiversity conservation.

  11. Large-scale superfluid vortex rings at nonzero temperatures

    NASA Astrophysics Data System (ADS)

    Wacks, D. H.; Baggaley, A. W.; Barenghi, C. F.

    2014-12-01

    We numerically model experiments in which large-scale vortex rings—bundles of quantized vortex loops—are created in superfluid helium by a piston-cylinder arrangement. We show that the presence of a normal-fluid vortex ring together with the quantized vortices is essential to explain the coherence of these large-scale vortex structures at nonzero temperatures, as observed experimentally. Finally we argue that the interaction of superfluid and normal-fluid vortex bundles is relevant to recent investigations of superfluid turbulence.

  12. Quantum Chemistry on Quantum Computers: A Polynomial-Time Quantum Algorithm for Constructing the Wave Functions of Open-Shell Molecules.

    PubMed

    Sugisaki, Kenji; Yamamoto, Satoru; Nakazawa, Shigeaki; Toyota, Kazuo; Sato, Kazunobu; Shiomi, Daisuke; Takui, Takeji

    2016-08-18

    Quantum computers can efficiently perform full configuration interaction (FCI) calculations of atoms and molecules by using the quantum phase estimation (QPE) algorithm. Because the success probability of QPE depends on the overlap between the approximate and exact wave functions, efficient methods to prepare initial guess wave functions accurate enough to have sufficiently large overlap with the exact ones are highly desired. Here, we propose a quantum algorithm to construct a wave function consisting of one configuration state function, which is suitable as the initial guess wave function in QPE-based FCI calculations of open-shell molecules, based on the addition theorem of angular momentum. The proposed quantum algorithm enables us to prepare a wave function consisting of an exponential number of Slater determinants using only a polynomial number of quantum operations. PMID:27499026
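
The overlap-success relation this abstract relies on can be illustrated numerically: in ideal QPE, measuring the exact ground-state energy succeeds with probability equal to the squared overlap between the trial and exact states. The Hamiltonian and noise level below are arbitrary illustrations, not from the record:

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical small "Hamiltonian": any Hermitian matrix serves the sketch.
H = rng.normal(size=(4, 4))
H = (H + H.T) / 2
evals, evecs = np.linalg.eigh(H)
exact_ground = evecs[:, 0]          # exact ground-state eigenvector

# A trial (initial-guess) state: the exact ground state plus noise,
# mimicking an approximate wave function.
trial = exact_ground + 0.3 * rng.normal(size=4)
trial /= np.linalg.norm(trial)

# Ideal-QPE success probability = |<trial|exact>|^2, the overlap
# the abstract refers to.
p_success = abs(np.vdot(trial, exact_ground)) ** 2
print(f"success probability ~ {p_success:.3f}")
```

The better the initial guess, the closer `p_success` is to 1, which is why preparing accurate open-shell guess states matters.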

  14. Linearity versus complete positivity of the evolution of open quantum systems

    NASA Astrophysics Data System (ADS)

    Ceballos, Russell R.

    The title may be a bit misleading. Perhaps, "On the Complete Positivity of Reduced Quantum Dynamics," would be a more fitting title. Determining whether or not completely positive (CP) maps are required to describe open system quantum dynamics is an extremely important issue concerning the fundamental mathematical foundations of QM, as well as many other areas of physics. It has typically been believed that only CP maps actually describe the dynamical evolution of open quantum systems; however, there has been speculation as to whether this is a strict constraint on the mathematical and physical structure of stochastic quantum dynamical maps. The objective of this thesis is to demonstrate that given a particular unitary operator, an initial system state, a final system state, and the dimension of the environment state, there exists no CP map with a composite system-environment, product initial state that is compatible with the given constraints on the reduced quantum dynamics of the system under investigation.

  15. Large scale in vitro experiment system for 2 GHz exposure.

    PubMed

    Iyama, Takahiro; Ebara, Hidetoshi; Tarusawa, Yoshiaki; Uebayashi, Shinji; Sekijima, Masaru; Nojima, Toshio; Miyakoshi, Junji

    2004-12-01

    A beam-formed radiofrequency (RF) exposure-incubator employing a horn antenna, a dielectric lens, and a culture case in an anechoic chamber is developed for large scale in vitro studies. The combination of an open-type RF exposure source and a culture case through which RF is transmitted realizes a uniform electric field (+/-1.5 dB) over a 300 x 300 mm area that accommodates 49 culture dishes of 35 mm diameter. This large culture dish area enables simultaneous RF exposure of a large number of cells or various cell lines. The RF exposure source operates at 2142.5 MHz, corresponding to the middle frequency of the downlink band of the International Mobile Telecommunication 2000 (IMT-2000) cellular system. The dielectric lens, which has a gain of 7 dB, focuses RF energy in the direction of the culture case and provides a uniform electric field. The culture case is sealed and connected via ducts to the main unit for environmental control, located outside the anechoic chamber. The temperature at the center of the tray, which holds the culture dishes in the culture room, is maintained at 37.0 +/- 0.2 degrees C by air circulation. In addition, the appropriate CO2 density and humidity supplied to the culture case provide stable long-term culture conditions. Specific absorption rate (SAR) dosimetry is performed using an electric field measurement technique and the Finite Difference Time Domain (FDTD) calculation method. The results indicate that the mean SAR of the culture fluid at the bottom of the 49 (7 x 7 array) culture dishes used in the in vitro experiments is 0.175 W/kg for an antenna input power of 1 W, and the standard deviation of the SAR distribution is 59%. When only 25 culture dishes (5 x 5 array) are evaluated, the mean SAR is 0.139 W/kg for the same antenna input power and the standard deviation of the SAR distribution is 47%.
The proliferation of the H4 cell line in 72 h in a pair of RF exposure-incubators reveals that the culture conditions are equivalent to
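
Two small conversions are implicit in the numbers above: the +/-1.5 dB field uniformity is a logarithmic bound on a field quantity (so the linear ratio uses a divisor of 20), and mean SAR is reported per watt of antenna input, which suggests linear scaling with input power. A minimal sketch, assuming that linear scaling:

```python
def field_ratio_from_db(db):
    """Convert a field-strength deviation in dB to a linear ratio.

    E is a field (not power) quantity, hence the divisor of 20.
    """
    return 10 ** (db / 20)

def mean_sar(input_power_w, sar_per_watt=0.175):
    """Mean SAR (W/kg) for the 49-dish case, assuming the linear scaling
    with antenna input power implied by 0.175 W/kg at 1 W."""
    return sar_per_watt * input_power_w

hi = field_ratio_from_db(+1.5)   # upper edge of the uniformity window
lo = field_ratio_from_db(-1.5)   # lower edge
print(f"field uniformity window: {lo:.2f}x to {hi:.2f}x nominal")
print(f"mean SAR at 2 W input: {mean_sar(2.0):.3f} W/kg")
```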

  16. Large-Scale Machine Learning for Classification and Search

    ERIC Educational Resources Information Center

    Liu, Wei

    2012-01-01

    With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…

  17. Newton Methods for Large Scale Problems in Machine Learning

    ERIC Educational Resources Information Center

    Hansen, Samantha Leigh

    2014-01-01

    The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…

  18. The Large-Scale Structure of Scientific Method

    ERIC Educational Resources Information Center

    Kosso, Peter

    2009-01-01

    The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of…

  19. Potential and issues in large scale flood inundation modelling

    NASA Astrophysics Data System (ADS)

    Di Baldassarre, Giuliano; Brandimarte, Luigia; Dottori, Francesco; Mazzoleni, Maurizio; Yan, Kun

    2015-04-01

    Recent years have seen growing research interest in large-scale flood inundation modelling. Nowadays, modelling tools and datasets allow for analyzing flooding processes at regional, continental and even global scales with an increasing level of detail. As a result, several research works have already addressed this topic using different methodologies of varying complexity. The potential of these studies is certainly enormous. Large-scale flood inundation modelling can provide valuable information in areas where little information and few studies were previously available. It can provide a consistent framework for a comprehensive assessment of flooding processes in the basins of the world's large rivers, as well as of the impacts of future climate scenarios. To make the most of this potential, we believe it is necessary, on the one hand, to understand the strengths and limitations of the existing methodologies and, on the other hand, to discuss the possibilities and implications of using large-scale flood models for operational flood risk assessment and management. Where should researchers put their effort in order to develop useful and reliable methodologies and outcomes? How can the information coming from large-scale flood inundation studies be used by stakeholders? How should we use this information where previous higher-resolution studies exist, or where official studies are available?

  20. Global smoothing and continuation for large-scale molecular optimization

    SciTech Connect

    More, J.J.; Wu, Zhijun

    1995-10-01

    We discuss the formulation of optimization problems that arise in the study of distance geometry, ionic systems, and molecular clusters. We show that continuation techniques based on global smoothing are applicable to these molecular optimization problems, and we outline the issues that must be resolved in the solution of large-scale molecular optimization problems.

  1. DESIGN OF LARGE-SCALE AIR MONITORING NETWORKS

    EPA Science Inventory

    The potential effects of air pollution on human health have received much attention in recent years. In the U.S. and other countries, there are extensive large-scale monitoring networks designed to collect data to inform the public of exposure risks to air pollution. A major crit...

  2. International Large-Scale Assessments: What Uses, What Consequences?

    ERIC Educational Resources Information Center

    Johansson, Stefan

    2016-01-01

    Background: International large-scale assessments (ILSAs) are a much-debated phenomenon in education. Increasingly, their outcomes attract considerable media attention and influence educational policies in many jurisdictions worldwide. The relevance, uses and consequences of these assessments are often the focus of research scrutiny. Whilst some…

  3. Large Scale Survey Data in Career Development Research

    ERIC Educational Resources Information Center

    Diemer, Matthew A.

    2008-01-01

    Large scale survey datasets have been underutilized but offer numerous advantages for career development scholars, as they contain numerous career development constructs with large and diverse samples that are followed longitudinally. Constructs such as work salience, vocational expectations, educational expectations, work satisfaction, and…

  4. Current Scientific Issues in Large Scale Atmospheric Dynamics

    NASA Technical Reports Server (NTRS)

    Miller, T. L. (Compiler)

    1986-01-01

    Topics in large scale atmospheric dynamics are discussed. Aspects of atmospheric blocking, the influence of transient baroclinic eddies on planetary-scale waves, cyclogenesis, the effects of orography on planetary scale flow, small scale frontal structure, and simulations of gravity waves in frontal zones are discussed.

  5. Large-scale drift and Rossby wave turbulence

    NASA Astrophysics Data System (ADS)

    Harper, K. L.; Nazarenko, S. V.

    2016-08-01

    We study drift/Rossby wave turbulence described by the large-scale limit of the Charney-Hasegawa-Mima equation. We define the zonal and meridional regions as Z := {k : |k_y| > √3 k_x} and M := {k : |k_y| < √3 k_x} respectively, where k = (k_x, k_y) lies in a plane perpendicular to the magnetic field, with k_x along the isopycnals and k_y along the plasma density gradient. We prove that the only types of resonant triads allowed are M ↔ M + Z and Z ↔ Z + Z. Therefore, if the spectrum of weak large-scale drift/Rossby turbulence is initially in Z it will remain in Z indefinitely. We present a generalised Fjørtoft argument to find transfer directions for the quadratic invariants in the two-dimensional k-space. Using direct numerical simulations, we test and confirm our theoretical predictions for weak large-scale drift/Rossby turbulence, and establish qualitative differences with cases when turbulence is strong. We demonstrate that the qualitative features of the large-scale limit survive when the typical turbulent scale is only moderately greater than the Larmor/Rossby radius.
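
The sector split quoted in this abstract is a simple geometric test on the wavevector. A minimal sketch, taking |k_x| so the test is symmetric in both half-planes (an assumption for illustration; points exactly on the critical boundary are left unclassified):

```python
import math

SQRT3 = math.sqrt(3)

def region(kx, ky):
    """Classify a wavevector k = (kx, ky) as zonal ('Z') or meridional ('M').

    Follows the sector definitions quoted in the abstract:
    Z = {k : |ky| > sqrt(3) kx},  M = {k : |ky| < sqrt(3) kx}.
    Returns None on the boundary itself.
    """
    lhs, rhs = abs(ky), SQRT3 * abs(kx)
    if lhs > rhs:
        return 'Z'
    if lhs < rhs:
        return 'M'
    return None

# A steep (ky-dominated) wavevector is zonal; a kx-aligned one is meridional.
print(region(0.1, 1.0))  # 'Z'
print(region(1.0, 0.1))  # 'M'
```

Under the abstract's triad rules (M ↔ M + Z and Z ↔ Z + Z), a spectrum that starts entirely in Z can only scatter into Z, which is the invariance claim stated above.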

  6. Moon-based Earth Observation for Large Scale Geoscience Phenomena

    NASA Astrophysics Data System (ADS)

    Guo, Huadong; Liu, Guang; Ding, Yixing

    2016-07-01

    The capability of Earth observation for large, global-scale natural phenomena needs to be improved, and new observing platforms are expected. In recent years we have studied the concept of the Moon as an Earth observation platform. Compared with man-made satellite platforms, Moon-based Earth observation can obtain multi-spherical, full-band, active and passive information, and offers the following advantages: a large observation range, variable view angles, long-term continuous observation, and an extra-long life cycle, with the characteristics of longevity, consistency, integrity, stability and uniqueness. Moon-based Earth observation is suitable for monitoring large-scale geoscience phenomena, including large-scale atmosphere change, large-scale ocean change, large-scale land-surface dynamic change, and solid-earth dynamic change. For the purpose of establishing a Moon-based Earth observation platform, we plan to study the following five aspects: mechanisms and models of Moon-based observation of macroscopic Earth-science phenomena; optimization of sensor parameters and methods for Moon-based Earth observation; site selection and environment for Moon-based Earth observation; the Moon-based Earth observation platform itself; and a fundamental scientific framework for Moon-based Earth observation.

  8. A bibliographical survey of large-scale systems

    NASA Technical Reports Server (NTRS)

    Corliss, W. R.

    1970-01-01

    A limited, partly annotated bibliography was prepared on the subject of large-scale system control. Approximately 400 references are divided into thirteen application areas, such as large societal systems and large communication systems. A first-author index is provided.

  9. Resilience of Florida Keys coral communities following large scale disturbances

    EPA Science Inventory

    The decline of coral reefs in the Caribbean over the last 40 years has been attributed to multiple chronic stressors and episodic large-scale disturbances. This study assessed the resilience of coral communities in two different regions of the Florida Keys reef system between 199...

  10. Lessons from Large-Scale Renewable Energy Integration Studies: Preprint

    SciTech Connect

    Bird, L.; Milligan, M.

    2012-06-01

    In general, large-scale integration studies in Europe and the United States find that high penetrations of renewable generation are technically feasible with operational changes and increased access to transmission. This paper describes other key findings such as the need for fast markets, large balancing areas, system flexibility, and the use of advanced forecasting.

  11. Large-Scale Networked Virtual Environments: Architecture and Applications

    ERIC Educational Resources Information Center

    Lamotte, Wim; Quax, Peter; Flerackers, Eddy

    2008-01-01

    Purpose: Scalability is an important research topic in the context of networked virtual environments (NVEs). This paper aims to describe the ALVIC (Architecture for Large-scale Virtual Interactive Communities) approach to NVE scalability. Design/methodology/approach: The setup and results from two case studies are shown: a 3-D learning environment…

  12. Large-scale data analysis using the Wigner function

    NASA Astrophysics Data System (ADS)

    Earnshaw, R. A.; Lei, C.; Li, J.; Mugassabi, S.; Vourdas, A.

    2012-04-01

    Large-scale data are analysed using the Wigner function. It is shown that the 'frequency variable' provides important information, which is lost with other techniques. The method is applied to 'sentiment analysis' in data from social networks and also to financial data.

  13. Ecosystem resilience despite large-scale altered hydro climatic conditions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Climate change is predicted to increase both drought frequency and duration, and when coupled with substantial warming, will establish a new hydroclimatological paradigm for many regions. Large-scale, warm droughts have recently impacted North America, Africa, Europe, Amazonia, and Australia result...

  14. Large-scale societal changes and intentionality - an uneasy marriage.

    PubMed

    Bodor, Péter; Fokas, Nikos

    2014-08-01

    Our commentary focuses on juxtaposing the proposed science of intentional change with facts and concepts pertaining to the level of large populations or changes on a worldwide scale. Although we find a unified evolutionary theory promising, we think that long-term and large-scale, scientifically guided - that is, intentional - social change is not only impossible, but also undesirable. PMID:25162863

  15. Implicit solution of large-scale radiation diffusion problems

    SciTech Connect

    Brown, P N; Graziani, F; Otero, I; Woodward, C S

    2001-01-04

    In this paper, we present an efficient solution approach for fully implicit, large-scale, nonlinear radiation diffusion problems. The fully implicit approach is compared to a semi-implicit solution method. Accuracy and efficiency are shown to be better for the fully implicit method on both one- and three-dimensional problems with tabular opacities taken from the LEOS opacity library.
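
As a minimal illustration of the fully implicit approach this record compares against semi-implicit methods, a backward-Euler step for diffusion solves a linear system at each time step and remains stable even for very large time steps. This is a linear 1-D sketch with constant diffusivity, an assumed simplification: the paper's problems are nonlinear radiation diffusion with tabular opacities.

```python
import numpy as np

def implicit_diffusion_step(u, D, dx, dt):
    """One backward-Euler (fully implicit) step of u_t = D u_xx on a 1-D
    grid with fixed (Dirichlet) boundary values: solve (I - dt*D*L) u_new = u.
    """
    n = len(u)
    r = D * dt / dx**2
    A = np.eye(n)
    for i in range(1, n - 1):
        A[i, i - 1] = -r
        A[i, i] = 1 + 2 * r
        A[i, i + 1] = -r
    return np.linalg.solve(A, u)

# Unconditional stability: even with r = D*dt/dx^2 = 100, far beyond the
# explicit stability limit r <= 1/2, the solution stays bounded.
u0 = np.zeros(11)
u0[5] = 1.0  # initial hot spot
u1 = implicit_diffusion_step(u0, D=1.0, dx=0.1, dt=1.0)
```

In production codes the dense solve would be replaced by a tridiagonal or Newton-Krylov solver, which is where the efficiency comparison in the abstract lives.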

  16. Mixing Metaphors: Building Infrastructure for Large Scale School Turnaround

    ERIC Educational Resources Information Center

    Peurach, Donald J.; Neumerski, Christine M.

    2015-01-01

    The purpose of this analysis is to increase understanding of the possibilities and challenges of building educational infrastructure--the basic, foundational structures, systems, and resources--to support large-scale school turnaround. Building educational infrastructure often exceeds the capacity of schools, districts, and state education…

  17. Simulation and Analysis of Large-Scale Compton Imaging Detectors

    SciTech Connect

    Manini, H A; Lange, D J; Wright, D M

    2006-12-27

    We perform simulations of two types of large-scale Compton imaging detectors. The first type uses silicon and germanium detector crystals, and the second type uses silicon and CdZnTe (CZT) detector crystals. The simulations use realistic detector geometry and parameters. We analyze the performance of each type of detector, and we present results using receiver operating characteristics (ROC) curves.

  18. US National Large-scale City Orthoimage Standard Initiative

    USGS Publications Warehouse

    Zhou, G.; Song, C.; Benjamin, S.; Schickler, W.

    2003-01-01

The early procedures and algorithms for national digital orthophoto generation in the National Digital Orthophoto Program (NDOP) were based on earlier USGS mapping operations, such as field control, aerotriangulation (developed in the early 1920's), the quarter-quadrangle-centered tiling (3.75 minutes of longitude and latitude in geographic extent), 1:40,000 aerial photographs, and 2.5D digital elevation models. However, large-scale city orthophotos produced with these early procedures have disclosed many shortcomings, e.g., ghost images, occlusion, and shadow. Thus, providing the technical base (algorithms, procedures) and the experience needed for large-scale city digital orthophoto creation is essential for the near-future national deployment of large-scale digital orthophotos and for the revision of the Standards for National Large-scale City Digital Orthophoto in the NDOP. This paper reports our initial research results as follows: (1) high-precision 3D city DSM generation through LIDAR data processing; (2) spatial object/feature extraction using surface material information and high-accuracy 3D DSM data; (3) 3D city model development; (4) algorithm development for generation of DTM-based and DBM-based orthophotos; (5) true orthophoto generation by merging DBM-based and DTM-based orthophotos; and (6) automatic mosaicking by optimizing and combining imagery from many perspectives.

  19. Considerations for Managing Large-Scale Clinical Trials.

    ERIC Educational Resources Information Center

    Tuttle, Waneta C.; And Others

    1989-01-01

    Research management strategies used effectively in a large-scale clinical trial to determine the health effects of exposure to Agent Orange in Vietnam are discussed, including pre-project planning, organization according to strategy, attention to scheduling, a team approach, emphasis on guest relations, cross-training of personnel, and preparing…

  20. CACHE Guidelines for Large-Scale Computer Programs.

    ERIC Educational Resources Information Center

    National Academy of Engineering, Washington, DC. Commission on Education.

    The Computer Aids for Chemical Engineering Education (CACHE) guidelines identify desirable features of large-scale computer programs including running cost and running-time limit. Also discussed are programming standards, documentation, program installation, system requirements, program testing, and program distribution. Lists of types of…

  1. Over-driven control for large-scale MR dampers

    NASA Astrophysics Data System (ADS)

    Friedman, A. J.; Dyke, S. J.; Phillips, B. M.

    2013-04-01

    As semi-active electro-mechanical control devices increase in scale for use in real-world civil engineering applications, their dynamics become increasingly complicated. Control designs that are able to take these characteristics into account will be more effective in achieving good performance. Large-scale magnetorheological (MR) dampers exhibit a significant time lag in their force-response to voltage inputs, reducing the efficacy of typical controllers designed for smaller scale devices where the lag is negligible. A new control algorithm is presented for large-scale MR devices that uses over-driving and back-driving of the commands to overcome the challenges associated with the dynamics of these large-scale MR dampers. An illustrative numerical example is considered to demonstrate the controller performance. Via simulations of the structure using several seismic ground motions, the merits of the proposed control strategy to achieve reductions in various response parameters are examined and compared against several accepted control algorithms. Experimental evidence is provided to validate the improved capabilities of the proposed controller in achieving the desired control force levels. Through real-time hybrid simulation (RTHS), the proposed controllers are also examined and experimentally evaluated in terms of their efficacy and robust performance. The results demonstrate that the proposed control strategy has superior performance over typical control algorithms when paired with a large-scale MR damper, and is robust for structural control applications.

  2. The Role of Plausible Values in Large-Scale Surveys

    ERIC Educational Resources Information Center

    Wu, Margaret

    2005-01-01

    In large-scale assessment programs such as NAEP, TIMSS and PISA, students' achievement data sets provided for secondary analysts contain so-called "plausible values." Plausible values are multiple imputations of the unobservable latent achievement for each student. In this article it has been shown how plausible values are used to: (1) address…
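The way secondary analysts use plausible values can be sketched with Rubin's combining rules for multiple imputations: compute the statistic once per plausible value, then pool. The numbers below are illustrative, not NAEP/TIMSS/PISA data:

```python
def pool_plausible_values(estimates, variances):
    """Pool M per-imputation estimates and sampling variances (Rubin's rules)."""
    m = len(estimates)
    point = sum(estimates) / m                      # pooled point estimate
    within = sum(variances) / m                     # average sampling variance
    between = sum((e - point) ** 2 for e in estimates) / (m - 1)
    total = within + (1.0 + 1.0 / m) * between      # total variance, including
    return point, total                             # imputation uncertainty

# Five plausible-value estimates of a group mean, each with its own
# sampling variance (illustrative numbers only)
estimates = [502.1, 498.7, 500.4, 503.2, 499.6]
variances = [4.0, 4.2, 3.9, 4.1, 4.0]
point, total = pool_plausible_values(estimates, variances)
```

Treating a single plausible value as the "true" score skips the between-imputation term and understates the uncertainty, which is one of the misuses such articles warn against.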

  3. Large-Scale Environmental Influences on Aquatic Animal Health

    EPA Science Inventory

    In the latter portion of the 20th century, North America experienced numerous large-scale mortality events affecting a broad diversity of aquatic animals. Short-term forensic investigations of these events have sometimes characterized a causative agent or condition, but have rare...

  4. Large-Scale Innovation and Change in UK Higher Education

    ERIC Educational Resources Information Center

    Brown, Stephen

    2013-01-01

    This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ…

  5. Efficient On-Demand Operations in Large-Scale Infrastructures

    ERIC Educational Resources Information Center

    Ko, Steven Y.

    2009-01-01

    In large-scale distributed infrastructures such as clouds, Grids, peer-to-peer systems, and wide-area testbeds, users and administrators typically desire to perform "on-demand operations" that deal with the most up-to-date state of the infrastructure. However, the scale and dynamism present in the operating environment make it challenging to…

  6. Assuring Quality in Large-Scale Online Course Development

    ERIC Educational Resources Information Center

    Parscal, Tina; Riemer, Deborah

    2010-01-01

    Student demand for online education requires colleges and universities to rapidly expand the number of courses and programs offered online while maintaining high quality. This paper outlines two universities respective processes to assure quality in large-scale online programs that integrate instructional design, eBook custom publishing, Quality…

  7. Cosmic strings and the large-scale structure

    NASA Technical Reports Server (NTRS)

    Stebbins, Albert

    1988-01-01

    A possible problem for cosmic string models of galaxy formation is presented. If very large voids are common and if loop fragmentation is not much more efficient than presently believed, then it may be impossible for string scenarios to produce the observed large-scale structure with Omega sub 0 = 1 and without strong environmental biasing.

  8. Extracting Useful Semantic Information from Large Scale Corpora of Text

    ERIC Educational Resources Information Center

    Mendoza, Ray Padilla, Jr.

    2012-01-01

    Extracting and representing semantic information from large scale corpora is at the crux of computer-assisted knowledge generation. Semantic information depends on collocation extraction methods, mathematical models used to represent distributional information, and weighting functions which transform the space. This dissertation provides a…

  9. Improving the Utility of Large-Scale Assessments in Canada

    ERIC Educational Resources Information Center

    Rogers, W. Todd

    2014-01-01

    Principals and teachers do not use large-scale assessment results because the lack of distinct and reliable subtests prevents identifying strengths and weaknesses of students and instruction, the results arrive too late to be used, and principals and teachers need assistance to use the results to improve instruction so as to improve student…

  10. Real-time transport in open quantum systems from PT-symmetric quantum mechanics

    NASA Astrophysics Data System (ADS)

    Elenewski, Justin E.; Chen, Hanning

    2014-08-01

    Nanoscale electronic transport is of intense technological interest, with applications ranging from semiconducting devices and molecular junctions to charge migration in biological systems. Most explicit theoretical approaches treat transport using a combination of density functional theory (DFT) and nonequilibrium Green's functions. This is a static formalism, with dynamic response properties accommodated only through complicated extensions. To circumvent this limitation, the carrier density may be propagated using real-time time-dependent DFT (RT-TDDFT), with boundary conditions corresponding to an open quantum system. Complex absorbing potentials can emulate outgoing particles at the simulation boundary, although these do not account for introduction of charge density. It is demonstrated that the desired positive particle flux is afforded by a class of PT-symmetric generating potentials that are characterized by anisotropic transmission resonances. These potentials add density every time a particle traverses the cell boundary, and may be used to engineer a continuous pulse train for incident packets. This is a first step toward developing a complete transport formalism unique to RT-TDDFT.

  11. Networks of silicon nanowires: A large-scale atomistic electronic structure analysis

    SciTech Connect

    Keleş, Ümit; Bulutay, Ceyhun; Liedke, Bartosz; Heinig, Karl-Heinz

    2013-11-11

Networks of silicon nanowires possess intriguing electronic properties surpassing the predictions based on quantum confinement of individual nanowires. Employing large-scale atomistic pseudopotential computations, as yet unexplored branched nanostructures are investigated at the subsystem level as well as in full assembly. The end product is a simple but versatile expression for the bandgap and band edge alignments of multiply-crossing Si nanowires for various diameters, numbers of crossings, and wire orientations. Further progress along this line can potentially topple the bottom-up approach for Si nanowire networks to a top-down design by starting with functionality and leading to an enabling structure.

  12. A survey on routing protocols for large-scale wireless sensor networks.

    PubMed

    Li, Changle; Zhang, Hanxiao; Hao, Binbin; Li, Jiandong

    2011-01-01

other metrics. Finally, some open issues in routing protocol design for large-scale wireless sensor networks are discussed and conclusions are drawn. PMID:22163808

  13. Ultra-large-scale Cosmology in Next-generation Experiments with Single Tracers

    NASA Astrophysics Data System (ADS)

    Alonso, David; Bull, Philip; Ferreira, Pedro G.; Maartens, Roy; Santos, Mário G.

    2015-12-01

Future surveys of large-scale structure will be able to measure perturbations on the scale of the cosmological horizon, and so could potentially probe a number of novel relativistic effects that are negligibly small on sub-horizon scales. These effects leave distinctive signatures in the power spectra of clustering observables and, if measurable, would open a new window on relativistic cosmology. We quantify the size and detectability of the effects for the most relevant future large-scale structure experiments: spectroscopic and photometric galaxy redshift surveys, intensity mapping surveys of neutral hydrogen, and radio continuum surveys. Our forecasts show that next-generation experiments, reaching out to redshifts z ≃ 4, will not be able to detect previously undetected general-relativistic effects by using individual tracers of the density field, although the contribution of weak lensing magnification on large scales should be clearly detectable. We also perform a rigorous joint forecast for the detection of primordial non-Gaussianity through the excess power it produces in the clustering of biased tracers on large scales, finding that uncertainties of σ(f_NL) ∼ 1-2 should be achievable. We study the level of degeneracy of these large-scale effects with several tracer-dependent nuisance parameters, quantifying the minimal priors on the latter that are needed for an optimal measurement of the former. Finally, we discuss the systematic effects that must be mitigated to achieve this level of sensitivity, and some alternative approaches that should help to improve the constraints. The computational tools developed to carry out this study, which requires the full-sky computation of the theoretical angular power spectra for O(100) redshift bins, as well as realistic models of the luminosity function, are publicly available at http://intensitymapping.physics.ox.ac.uk/codes.html.

  14. Do large-scale assessments measure students' ability to integrate scientific knowledge?

    NASA Astrophysics Data System (ADS)

    Lee, Hee-Sun

    2010-03-01

Large-scale assessments are used as means to diagnose the current status of student achievement in science and compare students across schools, states, and countries. For efficiency, multiple-choice items and dichotomously-scored open-ended items are pervasively used in large-scale assessments such as the Trends in International Math and Science Study (TIMSS). This study investigated how well these items measure secondary school students' ability to integrate scientific knowledge. This study collected responses of 8400 students to 116 multiple-choice and 84 open-ended items and applied an Item Response Theory analysis based on the Rasch Partial Credit Model. Results indicate that most multiple-choice items and dichotomously-scored open-ended items can be used to determine whether students have normative ideas about science topics, but cannot measure whether students integrate multiple pieces of relevant science ideas. Only when the scoring rubric is redesigned to capture subtle nuances of students' open-ended responses do open-ended items become a valid and reliable tool for assessing students' knowledge integration ability.
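The Rasch Partial Credit Model used in the analysis assigns each polytomous item a set of step difficulties. A minimal sketch of its category probabilities, with illustrative parameter values rather than the study's estimates:

```python
import math

def pcm_probs(theta, deltas):
    """Category probabilities for one item under the Rasch Partial Credit Model.

    theta     -- student ability
    deltas[k] -- step difficulty for moving from category k to k+1
    Returns P(X = 0), ..., P(X = len(deltas)).
    """
    # psi_j = sum_{k<=j} (theta - delta_k), with psi_0 = 0 by convention
    psis = [0.0]
    for d in deltas:
        psis.append(psis[-1] + (theta - d))
    denom = sum(math.exp(p) for p in psis)
    return [math.exp(p) / denom for p in psis]

# A 3-category (0/1/2) open-ended item: an easy first step, a hard second step
probs = pcm_probs(theta=0.5, deltas=[-1.0, 1.2])
```

Redesigning a rubric, as the study describes, amounts to scoring more categories per item, i.e., estimating more step difficulties per item.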

  15. Energetics and Structural Characterization of the large-scale Functional Motion of Adenylate Kinase

    NASA Astrophysics Data System (ADS)

    Formoso, Elena; Limongelli, Vittorio; Parrinello, Michele

    2015-02-01

    Adenylate Kinase (AK) is a signal transducing protein that regulates cellular energy homeostasis balancing between different conformations. An alteration of its activity can lead to severe pathologies such as heart failure, cancer and neurodegenerative diseases. A comprehensive elucidation of the large-scale conformational motions that rule the functional mechanism of this enzyme is of great value to guide rationally the development of new medications. Here using a metadynamics-based computational protocol we elucidate the thermodynamics and structural properties underlying the AK functional transitions. The free energy estimation of the conformational motions of the enzyme allows characterizing the sequence of events that regulate its action. We reveal the atomistic details of the most relevant enzyme states, identifying residues such as Arg119 and Lys13, which play a key role during the conformational transitions and represent druggable spots to design enzyme inhibitors. Our study offers tools that open new areas of investigation on large-scale motion in proteins.

  16. Equivalence of matrix product ensembles of trajectories in open quantum systems.

    PubMed

    Kiukas, Jukka; Guţă, Mădălin; Lesanovsky, Igor; Garrahan, Juan P

    2015-07-01

The equivalence of thermodynamic ensembles is at the heart of statistical mechanics and central to our understanding of equilibrium states of matter. Recently, a formal connection has been established between the dynamics of open quantum systems and statistical mechanics in an extra dimension: an open system dynamics generates a matrix product state (MPS) encoding all possible quantum jump trajectories, which allows one to construct generating functions akin to partition functions. For dynamics generated by a Lindblad master equation, the corresponding MPS is a so-called continuous MPS which encodes the set of continuous measurement records terminated at some fixed total observation time. Here, we show that if one instead terminates trajectories after a fixed total number of quantum jumps, e.g., emission events into the environment, the associated MPS is discrete. The continuous and discrete MPS correspond to different ensembles of quantum trajectories, one characterized by total time, the other by total number of quantum jumps. Hence, they give rise to quantum versions of different thermodynamic ensembles, akin to "grand canonical" and "isobaric," but for trajectories. Here, we prove that these trajectory ensembles are equivalent in a suitable limit of long time or large number of jumps. This is in direct analogy to equilibrium statistical mechanics where equivalence between ensembles is only strictly established in the thermodynamic limit. An intrinsic quantum feature is that the equivalence holds only for all observables that commute with the number of quantum jumps. PMID:26274149

  17. Atomistic Origin of Brittle Failure of Boron Carbide from Large-Scale Reactive Dynamics Simulations: Suggestions toward Improved Ductility.

    PubMed

    An, Qi; Goddard, William A

    2015-09-01

    Ceramics are strong, but their low fracture toughness prevents extended engineering applications. In particular, boron carbide (B(4)C), the third hardest material in nature, has not been incorporated into many commercial applications because it exhibits anomalous failure when subjected to hypervelocity impact. To determine the atomistic origin of this brittle failure, we performed large-scale (∼200,000  atoms/cell) reactive-molecular-dynamics simulations of shear deformations of B(4)C, using the quantum-mechanics-derived reactive force field simulation. We examined the (0001)/⟨101̅0⟩ slip system related to deformation twinning and the (011̅1̅)/⟨1̅101⟩ slip system related to amorphous band formation. We find that brittle failure in B(4)C arises from formation of higher density amorphous bands due to fracture of the icosahedra, a unique feature of these boron based materials. This leads to negative pressure and cavitation resulting in crack opening. Thus, to design ductile materials based on B(4)C we propose alloying aimed at promoting shear relaxation through intericosahedral slip that avoids icosahedral fracture.

  18. Atomistic Origin of Brittle Failure of Boron Carbide from Large-Scale Reactive Dynamics Simulations: Suggestions toward Improved Ductility

    NASA Astrophysics Data System (ADS)

    An, Qi; Goddard, William A.

    2015-09-01

Ceramics are strong, but their low fracture toughness prevents extended engineering applications. In particular, boron carbide (B4C), the third hardest material in nature, has not been incorporated into many commercial applications because it exhibits anomalous failure when subjected to hypervelocity impact. To determine the atomistic origin of this brittle failure, we performed large-scale (∼200,000 atoms/cell) reactive-molecular-dynamics simulations of shear deformations of B4C, using the quantum-mechanics-derived reactive force field simulation. We examined the (0001)/⟨101̅0⟩ slip system related to deformation twinning and the (011̅1̅)/⟨1̅101⟩ slip system related to amorphous band formation. We find that brittle failure in B4C arises from formation of higher density amorphous bands due to fracture of the icosahedra, a unique feature of these boron based materials. This leads to negative pressure and cavitation resulting in crack opening. Thus, to design ductile materials based on B4C we propose alloying aimed at promoting shear relaxation through intericosahedral slip that avoids icosahedral fracture.

  19. Quantum dynamical field theory for nonequilibrium phase transitions in driven open systems

    NASA Astrophysics Data System (ADS)

    Marino, Jamir; Diehl, Sebastian

    2016-08-01

    We develop a quantum dynamical field theory for studying phase transitions in driven open systems coupled to Markovian noise, where nonlinear noise effects and fluctuations beyond semiclassical approximations influence the critical behavior. We systematically compare the diagrammatics, the properties of the renormalization group flow, and the structure of the fixed points of the quantum dynamical field theory and of its semiclassical counterpart, which is employed to characterize dynamical criticality in three-dimensional driven-dissipative condensates. As an application, we perform the Keldysh functional renormalization of a one-dimensional driven open Bose gas, where a tailored diffusion Markov noise realizes an analog of quantum criticality for driven-dissipative condensation. We find that the associated nonequilibrium quantum phase transition does not map into the critical behavior of its three-dimensional classical driven counterpart.

  20. Real-time simulation of large-scale floods

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

Given the complexity of real-time water conditions, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.

  1. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    SciTech Connect

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
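The low-rank role played by the prototype vectors can be sketched with a Nyström-style kernel approximation on synthetic data. This is our own illustration of the general technique, not the authors' PVM implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                        # all (labeled + unlabeled) points
prototypes = X[rng.choice(200, size=20, replace=False)]  # m << n prototype vectors

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between rows of A and rows of B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

K_nm = rbf_kernel(X, prototypes)       # n x m cross-kernel
K_mm = rbf_kernel(prototypes, prototypes)  # m x m prototype kernel
# Nystrom-style low-rank approximation of the full n x n kernel matrix:
# storage and solves now scale with m rather than n
K_approx = K_nm @ np.linalg.pinv(K_mm) @ K_nm.T
```

The approximation quality depends on how well the prototypes cover the data, which is why the abstract emphasizes the criteria for choosing them.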

  2. The Large Scale Synthesis of Aligned Plate Nanostructures

    PubMed Central

    Zhou, Yang; Nash, Philip; Liu, Tian; Zhao, Naiqin; Zhu, Shengli

    2016-01-01

    We propose a novel technique for the large-scale synthesis of aligned-plate nanostructures that are self-assembled and self-supporting. The synthesis technique involves developing nanoscale two-phase microstructures through discontinuous precipitation followed by selective etching to remove one of the phases. The method may be applied to any alloy system in which the discontinuous precipitation transformation goes to completion. The resulting structure may have many applications in catalysis, filtering and thermal management depending on the phase selection and added functionality through chemical reaction with the retained phase. The synthesis technique is demonstrated using the discontinuous precipitation of a γ′ phase, (Ni, Co)3Al, followed by selective dissolution of the γ matrix phase. The production of the nanostructure requires heat treatments on the order of minutes and can be performed on a large scale making this synthesis technique of great economic potential. PMID:27439672

  3. Electron drift in a large scale solid xenon

    DOE PAGES

    Yoo, J.; Jaskierny, W. F.

    2015-08-21

A study of charge drift in a large scale optically transparent solid xenon is reported. A pulsed high power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using a 8.7 cm long electrode in both the liquid and solid phases of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs, while the drift speed in the solid phase (157 K) is 0.397 ± 0.006 cm/μs at 900 V/cm over 8.0 cm of uniform electric field. Furthermore, it is demonstrated that the electron drift speed in solid xenon is a factor of two faster than that in liquid xenon.
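The reported speeds imply the following drift times over the 8.0 cm region of uniform field, and a quick check of the "factor two" claim:

```python
# Reported drift speeds (cm/us) from the abstract
length_cm = 8.0
v_liquid = 0.193          # liquid xenon, 163 K
v_solid = 0.397           # solid xenon, 157 K

t_liquid = length_cm / v_liquid   # drift time in liquid, microseconds (~41.5 us)
t_solid = length_cm / v_solid     # drift time in solid, microseconds (~20.2 us)
ratio = v_solid / v_liquid        # the "factor two" speed-up (~2.06)
```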

  4. Electron drift in a large scale solid xenon

    SciTech Connect

    Yoo, J.; Jaskierny, W. F.

    2015-08-21

A study of charge drift in a large scale optically transparent solid xenon is reported. A pulsed high power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using a 8.7 cm long electrode in both the liquid and solid phases of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs, while the drift speed in the solid phase (157 K) is 0.397 ± 0.006 cm/μs at 900 V/cm over 8.0 cm of uniform electric field. Furthermore, it is demonstrated that the electron drift speed in solid xenon is a factor of two faster than that in liquid xenon.

  5. Large scale meteorological influence during the Geysers 1979 field experiment

    SciTech Connect

    Barr, S.

    1980-01-01

    A series of meteorological field measurements conducted during July 1979 near Cobb Mountain in Northern California reveals evidence of several scales of atmospheric circulation consistent with the climatic pattern of the area. The scales of influence are reflected in the structure of wind and temperature in vertically stratified layers at a given observation site. Large scale synoptic gradient flow dominates the wind field above about twice the height of the topographic ridge. Below that there is a mixture of effects with evidence of a diurnal sea breeze influence and a sublayer of katabatic winds. The July observations demonstrate that weak migratory circulations in the large scale synoptic meteorological pattern have a significant influence on the day-to-day gradient winds and must be accounted for in planning meteorological programs including tracer experiments.

  6. GAIA: A WINDOW TO LARGE-SCALE MOTIONS

    SciTech Connect

    Nusser, Adi; Branchini, Enzo; Davis, Marc E-mail: branchin@fis.uniroma3.it

    2012-08-10

    Using redshifts as a proxy for galaxy distances, estimates of the two-dimensional (2D) transverse peculiar velocities of distant galaxies could be obtained from future measurements of proper motions. We provide the mathematical framework for analyzing 2D transverse motions and show that they offer several advantages over traditional probes of large-scale motions. They are completely independent of any intrinsic relations between galaxy properties; hence, they are essentially free of selection biases. They are free from homogeneous and inhomogeneous Malmquist biases that typically plague distance indicator catalogs. They provide additional information to traditional probes that yield line-of-sight peculiar velocities only. Further, because of their 2D nature, fundamental questions regarding vorticity of large-scale flows can be addressed. Gaia, for example, is expected to provide proper motions of at least bright galaxies with high central surface brightness, making proper motions a likely contender for traditional probes based on current and future distance indicator measurements.

  7. Lagrangian space consistency relation for large scale structure

    SciTech Connect

    Horn, Bart; Hui, Lam; Xiao, Xiao E-mail: lh399@columbia.edu

    2015-09-01

    Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias and Riotto and Peloso and Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. The simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space.

  8. The workshop on iterative methods for large scale nonlinear problems

    SciTech Connect

    Walker, H.F.; Pernice, M.

    1995-12-01

The aim of the workshop was to bring together researchers working on large scale applications with numerical specialists of various kinds. Applications that were addressed included reactive flows (combustion and other chemically reacting flows, tokamak modeling), porous media flows, cardiac modeling, chemical vapor deposition, image restoration, macromolecular modeling, and population dynamics. Numerical areas included Newton iterative (truncated Newton) methods, Krylov subspace methods, domain decomposition and other preconditioning methods, large scale optimization and optimal control, and parallel implementations and software. This report offers a brief summary of workshop activities and information about the participants. Interested readers are encouraged to look into an online proceedings available at http://www.usi.utah.edu/logan.proceedings. There, the material offered here is augmented with hypertext abstracts that include links to locations such as speakers' home pages, PostScript copies of talks and papers, cross-references to related talks, and other information about topics addressed at the workshop.

  9. The Large Scale Synthesis of Aligned Plate Nanostructures

    NASA Astrophysics Data System (ADS)

    Zhou, Yang; Nash, Philip; Liu, Tian; Zhao, Naiqin; Zhu, Shengli

    2016-07-01

We propose a novel technique for the large-scale synthesis of aligned-plate nanostructures that are self-assembled and self-supporting. The synthesis technique involves developing nanoscale two-phase microstructures through discontinuous precipitation followed by selective etching to remove one of the phases. The method may be applied to any alloy system in which the discontinuous precipitation transformation goes to completion. The resulting structure may have many applications in catalysis, filtering and thermal management depending on the phase selection and added functionality through chemical reaction with the retained phase. The synthesis technique is demonstrated using the discontinuous precipitation of a γ′ phase, (Ni, Co)3Al, followed by selective dissolution of the γ matrix phase. The production of the nanostructure requires heat treatments on the order of minutes and can be performed on a large scale making this synthesis technique of great economic potential.

  10. Large Scale Deformation of the Western U.S. Cordillera

    NASA Technical Reports Server (NTRS)

    Bennett, Richard A.

    2002-01-01

    The overall objective of the work that was conducted was to understand the present-day large-scale deformations of the crust throughout the western United States and in so doing to improve our ability to assess the potential for seismic hazards in this region. To address this problem, we used a large collection of Global Positioning System (GPS) networks which spans the region to precisely quantify present-day large-scale crustal deformations in a single uniform reference frame. Our results can roughly be divided into an analysis of the GPS observations to infer the deformation field across and within the entire plate boundary zone and an investigation of the implications of this deformation field regarding plate boundary dynamics.

  11. Large Scale Deformation of the Western US Cordillera

    NASA Technical Reports Server (NTRS)

    Bennett, Richard A.

    2001-01-01

    Destructive earthquakes occur throughout the western US Cordillera (WUSC), not just within the San Andreas fault zone. But because we do not understand the present-day large-scale deformations of the crust throughout the WUSC, our ability to assess the potential for seismic hazards in this region remains severely limited. To address this problem, we are using a large collection of Global Positioning System (GPS) networks which spans the WUSC to precisely quantify present-day large-scale crustal deformations in a single uniform reference frame. Our work can roughly be divided into an analysis of the GPS observations to infer the deformation field across and within the entire plate boundary zone and an investigation of the implications of this deformation field regarding plate boundary dynamics.

  12. Startup of large-scale projects casts spotlight on IGCC

    SciTech Connect

    Swanekamp, R.

    1996-06-01

    With several large-scale plants cranking up this year, integrated coal gasification/combined cycle (IGCC) appears poised for growth. The technology may eventually help coal reclaim its former prominence in new plant construction, but developers worldwide are eyeing other feedstocks--such as petroleum coke or residual oil. Of the so-called advanced clean-coal technologies, integrated gasification/combined cycle (IGCC) appears to be having a defining year. Of three large-scale demonstration plants in the US, one is well into startup, a second is expected to begin operating in the fall, and a third should start up by the end of the year; worldwide, over a dozen more projects are in the works. In Italy, for example, several large projects using petroleum coke or refinery residues as feedstocks are proceeding, apparently on a project-finance basis.

  13. Considerations of large scale impact and the early Earth

    NASA Technical Reports Server (NTRS)

    Grieve, R. A. F.; Parmentier, E. M.

    1985-01-01

    Bodies which have preserved portions of their earliest crust indicate that large scale impact cratering was an important process in early surface and upper crustal evolution. Large impact basins form the basic topographic, tectonic, and stratigraphic framework of the Moon, and impact was responsible for the characteristics of the second order gravity field and upper crustal seismic properties. The Earth's crustal evolution during the first 800 my of its history is conjectural. The lack of a very early crust may indicate that thermal and mechanical instabilities resulting from intense mantle convection and/or bombardment inhibited crustal preservation. Whatever the case, the potential effects of large scale impact have to be considered in models of early Earth evolution. Preliminary models of the evolution of a large terrestrial impact basin were derived and discussed in detail.

  14. The large-scale anisotropy with the PAMELA calorimeter

    NASA Astrophysics Data System (ADS)

    Karelin, A.; Adriani, O.; Barbarino, G.; Bazilevskaya, G.; Bellotti, R.; Boezio, M.; Bogomolov, E.; Bongi, M.; Bonvicini, V.; Bottai, S.; Bruno, A.; Cafagna, F.; Campana, D.; Carbone, R.; Carlson, P.; Casolino, M.; Castellini, G.; De Donato, C.; De Santis, C.; De Simone, N.; Di Felice, V.; Formato, V.; Galper, A.; Koldashov, S.; Koldobskiy, S.; Krut'kov, S.; Kvashnin, A.; Leonov, A.; Malakhov, V.; Marcelli, L.; Martucci, M.; Mayorov, A.; Menn, W.; Mergé, M.; Mikhailov, V.; Mocchiutti, E.; Monaco, A.; Mori, N.; Munini, R.; Osteria, G.; Palma, F.; Panico, B.; Papini, P.; Pearce, M.; Picozza, P.; Ricci, M.; Ricciarini, S.; Sarkar, R.; Simon, M.; Scotti, V.; Sparvoli, R.; Spillantini, P.; Stozhkov, Y.; Vacchi, A.; Vannuccini, E.; Vasilyev, G.; Voronov, S.; Yurkin, Y.; Zampa, G.; Zampa, N.

    2015-10-01

    The large-scale anisotropy (or the so-called star-diurnal wave) has been studied using the calorimeter of the space-borne experiment PAMELA. The cosmic ray anisotropy has been obtained for the Southern and Northern hemispheres simultaneously in the equatorial coordinate system for the time period 2006-2014. The dipole amplitude and phase have been measured for energies 1-20 TeV n^-1.

  15. Report on large scale molten core/magnesia interaction test

    SciTech Connect

    Chu, T.Y.; Bentz, J.H.; Arellano, F.E.; Brockmann, J.E.; Field, M.E.; Fish, J.D.

    1984-08-01

    A molten core/material interaction experiment was performed at the Large-Scale Melt Facility at Sandia National Laboratories. The experiment involved the release of 230 kg of core melt, heated to 2923 K, into a magnesia brick crucible. Descriptions of the facility, the melting technology, as well as results of the experiment, are presented. Preliminary evaluations of the results indicate that magnesia brick can be a suitable material for core ladle construction.

  16. Analysis plan for 1985 large-scale tests. Technical report

    SciTech Connect

    McMullan, F.W.

    1983-01-01

    The purpose of this effort is to assist DNA in planning for large-scale (upwards of 5000 tons) detonations of conventional explosives in the 1985 and beyond time frame. Primary research objectives were to investigate potential means to increase blast duration and peak pressures. This report identifies and analyzes several candidate explosives. It examines several charge designs and identifies advantages and disadvantages of each. Other factors including terrain and multiburst techniques are addressed as are test site considerations.

  17. Simulating Weak Lensing by Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    Vale, Chris; White, Martin

    2003-08-01

    We model weak gravitational lensing of light by large-scale structure using ray tracing through N-body simulations. The method is described with particular attention paid to numerical convergence. We investigate some of the key approximations in the multiplane ray-tracing algorithm. Our simulated shear and convergence maps are used to explore how well standard assumptions about weak lensing hold, especially near large peaks in the lensing signal.

  18. The Phoenix series large scale LNG pool fire experiments.

    SciTech Connect

    Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.

    2010-12-01

    The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the safety of the public and property from accidental, and even more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data is much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards from a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) were conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate and therefore the physics and hazards of large LNG spills and fires.
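    The flame-height scaling referred to above is conventionally expressed through a nondimensional heat release rate. One widely used form is Heskestad's correlation, shown here for illustration only; the fits produced by this test series may differ:

    ```latex
    % Nondimensional heat release rate for a pool fire of diameter D,
    % with heat release rate \dot{Q}, ambient density \rho_\infty,
    % specific heat c_p, and ambient temperature T_\infty
    Q^{*} = \frac{\dot{Q}}{\rho_{\infty}\, c_{p}\, T_{\infty}\, \sqrt{g D}\; D^{2}}

    % Heskestad's correlation for mean flame height L_f
    \frac{L_f}{D} = 3.7\,(Q^{*})^{2/5} - 1.02
    ```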

  19. Large-Scale Optimization for Bayesian Inference in Complex Systems

    SciTech Connect

    Willcox, Karen; Marzouk, Youssef

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their

  20. The Large-scale Structure of Scientific Method

    NASA Astrophysics Data System (ADS)

    Kosso, Peter

    2009-01-01

    The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of scientific method can reveal the global interconnectedness of scientific knowledge that is an essential part of what makes science scientific.

  1. Space transportation booster engine thrust chamber technology, large scale injector

    NASA Technical Reports Server (NTRS)

    Schneider, J. A.

    1993-01-01

    The objective of the Large Scale Injector (LSI) program was to deliver a 21 inch diameter, 600,000 lbf thrust class injector to NASA/MSFC for hot fire testing. The hot fire test program would demonstrate the feasibility and integrity of the full scale injector, including combustion stability, chamber wall compatibility (thermal management), and injector performance. The 21 inch diameter injector was delivered in September of 1991.

  2. Large-Scale Weather Disturbances in Mars’ Southern Extratropics

    NASA Astrophysics Data System (ADS)

    Hollingsworth, Jeffery L.; Kahre, Melinda A.

    2015-11-01

    Between late autumn and early spring, Mars’ middle and high latitudes within its atmosphere support strong mean thermal gradients between the tropics and poles. Observations from both the Mars Global Surveyor (MGS) and Mars Reconnaissance Orbiter (MRO) indicate that this strong baroclinicity supports intense, large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). These extratropical weather disturbances are key components of the global circulation. Such wave-like disturbances act as agents in the transport of heat and momentum, and generalized scalar/tracer quantities (e.g., atmospheric dust, water-vapor and ice clouds). The character of large-scale, traveling extratropical synoptic-period disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a moderately high-resolution Mars global climate model (Mars GCM). This Mars GCM imposes interactively lifted and radiatively active dust based on a threshold value of the surface stress. The model exhibits a reasonable "dust cycle" (i.e., globally averaged, the atmosphere is dustier during southern spring and summer). Compared to their northern-hemisphere counterparts, southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather are examined. Simulations that adopt Mars’ full topography, compared to simulations that utilize synthetic topographies emulating key large-scale features of the southern middle latitudes, indicate that Mars’ transient barotropic/baroclinic eddies are highly influenced by the great impact basins of this hemisphere (e.g., Argyre and Hellas). The occurrence of a southern storm zone in late winter and early spring appears to be anchored to the western hemisphere via orographic influences from the Tharsis highlands, and the Argyre

  3. Multivariate Clustering of Large-Scale Scientific Simulation Data

    SciTech Connect

    Eliassi-Rad, T; Critchlow, T

    2003-06-13

    Simulations of complex scientific phenomena involve the execution of massively parallel computer programs. These simulation programs generate large-scale data sets over the spatio-temporal space. Modeling such massive data sets is an essential step in helping scientists discover new information from their computer simulations. In this paper, we present a simple but effective multivariate clustering algorithm for large-scale scientific simulation data sets. Our algorithm utilizes the cosine similarity measure to cluster the field variables in a data set. Field variables include all variables except the spatial (x, y, z) and temporal (time) variables. The exclusion of the spatial dimensions is important since "similar" characteristics could be located (spatially) far from each other. To scale our multivariate clustering algorithm for large-scale data sets, we take advantage of the geometrical properties of the cosine similarity measure. This allows us to reduce the modeling time from O(n^2) to O(n x g(f(u))), where n is the number of data points, f(u) is a function of the user-defined clustering threshold, and g(f(u)) is the number of data points satisfying f(u). We show that on average g(f(u)) is much less than n. Finally, even though spatial variables do not play a role in building clusters, it is desirable to associate each cluster with its correct spatial region. To achieve this, we present a linking algorithm for connecting each cluster to the appropriate nodes of the data set's topology tree (where the spatial information of the data set is stored). Our experimental evaluations on two large-scale simulation data sets illustrate the value of our multivariate clustering and linking algorithms.
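    The threshold-based grouping step described in this record can be sketched in plain Python. This is an illustrative, hypothetical implementation (`threshold_cluster` and the greedy centroid strategy are my own simplifications); it omits the paper's geometric speedup that reduces the comparison count to g(f(u)):

    ```python
    import math

    def cosine_similarity(a, b):
        """Cosine of the angle between two equal-length vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    def threshold_cluster(points, threshold):
        """Greedy single-pass clustering: assign each point to the first
        cluster whose centroid is within the cosine-similarity threshold,
        otherwise start a new cluster."""
        clusters = []  # each cluster is a list of points
        for p in points:
            placed = False
            for c in clusters:
                centroid = [sum(col) / len(c) for col in zip(*c)]
                if cosine_similarity(p, centroid) >= threshold:
                    c.append(p)
                    placed = True
                    break
            if not placed:
                clusters.append([p])
        return clusters
    ```

    For example, with threshold 0.9, the nearly parallel vectors [1, 0] and [0.9, 0.1] fall into one cluster while the orthogonal [0, 1] starts its own.
    
    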

  4. Multivariate Clustering of Large-Scale Simulation Data

    SciTech Connect

    Eliassi-Rad, T; Critchlow, T

    2003-03-04

    Simulations of complex scientific phenomena involve the execution of massively parallel computer programs. These simulation programs generate large-scale data sets over the spatiotemporal space. Modeling such massive data sets is an essential step in helping scientists discover new information from their computer simulations. In this paper, we present a simple but effective multivariate clustering algorithm for large-scale scientific simulation data sets. Our algorithm utilizes the cosine similarity measure to cluster the field variables in a data set. Field variables include all variables except the spatial (x, y, z) and temporal (time) variables. The exclusion of the spatial dimensions is important since "similar" characteristics could be located (spatially) far from each other. To scale our multivariate clustering algorithm for large-scale data sets, we take advantage of the geometrical properties of the cosine similarity measure. This allows us to reduce the modeling time from O(n^2) to O(n x g(f(u))), where n is the number of data points, f(u) is a function of the user-defined clustering threshold, and g(f(u)) is the number of data points satisfying the threshold f(u). We show that on average g(f(u)) is much less than n. Finally, even though spatial variables do not play a role in building a cluster, it is desirable to associate each cluster with its correct spatial region. To achieve this, we present a linking algorithm for connecting each cluster to the appropriate nodes of the data set's topology tree (where the spatial information of the data set is stored). Our experimental evaluations on two large-scale simulation data sets illustrate the value of our multivariate clustering and linking algorithms.

  5. Large-scale Alfvén vortices

    SciTech Connect

    Onishchenko, O. G.; Horton, W.; Scullion, E.; Fedun, V.

    2015-12-15

    A new type of large-scale vortex structure of dispersionless Alfvén waves in collisionless plasma is investigated. It is shown that Alfvén waves can propagate in the form of Alfvén vortices of finite characteristic radius, characterised by magnetic flux ropes carrying orbital angular momentum. The structure of the toroidal and radial velocity, the fluid and magnetic field vorticity, and the longitudinal electric current in the plane orthogonal to the external magnetic field are discussed.

  6. Relic vector field and CMB large scale anomalies

    SciTech Connect

    Chen, Xingang; Wang, Yi E-mail: yw366@cam.ac.uk

    2014-10-01

    We study the most general effects of relic vector fields on the inflationary background and density perturbations. Such effects are observable if the number of inflationary e-folds is close to the minimum requirement to solve the horizon problem. We show that this can potentially explain two CMB large scale anomalies: the quadrupole-octopole alignment and the quadrupole power suppression. We discuss its effect on the parity anomaly. We also provide an analytical template for more detailed data comparison.

  7. Large-scale Alfvén vortices

    NASA Astrophysics Data System (ADS)

    Onishchenko, O. G.; Pokhotelov, O. A.; Horton, W.; Scullion, E.; Fedun, V.

    2015-12-01

    A new type of large-scale vortex structure of dispersionless Alfvén waves in collisionless plasma is investigated. It is shown that Alfvén waves can propagate in the form of Alfvén vortices of finite characteristic radius, characterised by magnetic flux ropes carrying orbital angular momentum. The structure of the toroidal and radial velocity, the fluid and magnetic field vorticity, and the longitudinal electric current in the plane orthogonal to the external magnetic field are discussed.

  8. Supporting large scale applications on networks of workstations

    NASA Technical Reports Server (NTRS)

    Cooper, Robert; Birman, Kenneth P.

    1989-01-01

    Distributed applications on networks of workstations are an increasingly common way to satisfy computing needs. However, existing mechanisms for distributed programming exhibit poor performance and reliability as application size increases. Extension of the ISIS distributed programming system to support large scale distributed applications by providing hierarchical process groups is discussed. Incorporation of hierarchy in the program structure and exploitation of this to limit the communication and storage required in any one component of the distributed system is examined.

  9. Practical algorithms to facilitate large-scale first-principles molecular dynamics

    NASA Astrophysics Data System (ADS)

    Gygi, François; Duchemin, Ivan; Donadio, Davide; Galli, Giulia

    2009-07-01

    Running First-Principles Molecular Dynamics (FPMD) simulations on large parallel platforms presents a number of practical challenges related to the large size of the datasets generated during simulations and to the lack of flexibility of parallel FPMD codes for the implementation of new algorithms. In this paper, we present two approaches implemented in the Qbox code, that alleviate these problems and facilitate large-scale FPMD simulations. We first describe a parallel I/O strategy based on MPI-IO that fully exploits the performance of parallel file systems. We also describe the implementation of a client-server interface that enables coupled quantum-classical computations in which the quantum simulation run by Qbox is "driven" by another application. This feature can be used to couple FPMD simulations with other simulation methods such as Path Integral Monte Carlo, thermodynamic integration, or replica exchange dynamics.
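    The client-server coupling described above, in which one application drives a simulation through a command/reply loop, can be sketched generically. The snippet below is a hypothetical sketch that substitutes a trivial echo process for the simulation engine; Qbox's actual interface and command set differ:

    ```python
    import subprocess
    import sys

    # Hypothetical stand-in for a simulation engine: it reads commands on
    # stdin and replies on stdout, one line per command.
    engine = subprocess.Popen(
        [sys.executable, "-c",
         "import sys\n"
         "for line in sys.stdin:\n"
         "    sys.stdout.write('done ' + line)\n"
         "    sys.stdout.flush()"],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

    def drive(command):
        """Send one command to the engine and block until its reply."""
        engine.stdin.write(command + "\n")
        engine.stdin.flush()
        return engine.stdout.readline().strip()

    reply = drive("run 10")   # e.g. a driver asking for 10 MD steps
    engine.stdin.close()
    engine.wait()
    ```

    The essential design point is that the driver owns the control flow: the engine never advances except in response to an explicit command, which is what allows methods like replica exchange to interleave their own logic between simulation steps.
    
    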

  10. Geospatial Optimization of Siting Large-Scale Solar Projects

    SciTech Connect

    Macknick, J.; Quinby, T.; Caulfield, E.; Gerritsen, M.; Diffendorfer, J.; Haines, S.

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
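    A multi-criteria siting evaluation of the kind described can be illustrated with a simple weighted-sum ranking. This is a hypothetical sketch (`site_score`, `rank_sites`, and the criteria names are invented for illustration); the report's actual optimization algorithm is more involved:

    ```python
    def site_score(criteria, weights):
        """Weighted-sum suitability score for one candidate site.
        `criteria` maps criterion name -> normalized value in [0, 1];
        `weights` maps criterion name -> user-defined importance."""
        total_w = sum(weights.values())
        return sum(weights[k] * criteria[k] for k in weights) / total_w

    def rank_sites(sites, weights):
        """Rank candidate sites (name -> criteria dict) by weighted score,
        best first."""
        scored = {name: site_score(c, weights) for name, c in sites.items()}
        return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

    # Example: two candidate sites scored on solar resource and slope,
    # with the user weighting solar resource twice as heavily.
    ranked = rank_sites(
        {"A": {"solar": 0.9, "slope": 0.8},
         "B": {"solar": 0.5, "slope": 0.6}},
        {"solar": 2.0, "slope": 1.0})
    ```

    Making the weights user-supplied rather than fixed is what lets such a tool accommodate the differing stakeholder priorities the report highlights.
    
    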

  11. Homogenization of Large-Scale Movement Models in Ecology

    USGS Publications Warehouse

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
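    The contrast drawn above between Fickian and ecological diffusion can be written compactly. This is the standard textbook formulation, not quoted from the paper: Fickian diffusion organizes flux along gradients, whereas ecological diffusion places the motility coefficient inside both spatial derivatives, so movement depends only on local habitat:

    ```latex
    % Fickian diffusion: average movement organized along gradients
    \frac{\partial u}{\partial t} = \nabla \cdot \bigl( \mu(\mathbf{x})\, \nabla u \bigr)

    % Ecological diffusion: motility \mu(\mathbf{x}) reflects local
    % information and sits inside both spatial derivatives
    \frac{\partial u}{\partial t} = \Delta \bigl( \mu(\mathbf{x})\, u \bigr)
    ```

    A consequence of the second form is that animals accumulate where motility is low (equilibria satisfy mu(x) u = const), which is why the two models predict different large-scale dispersal in heterogeneous habitat.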

  12. Dispersal Mutualism Incorporated into Large-Scale, Infrequent Disturbances.

    PubMed

    Parker, V Thomas

    2015-01-01

    Because of their influence on succession and other community interactions, large-scale, infrequent natural disturbances also should play a major role in mutualistic interactions. Using field data and experiments, I test whether mutualisms have been incorporated into large-scale wildfire by whether the outcomes of a mutualism depend on disturbance. In this study a seed dispersal mutualism is shown to depend on infrequent, large-scale disturbances. A dominant shrubland plant (Arctostaphylos species) produces seeds that make up a persistent soil seed bank and requires fire to germinate. In post-fire stands, I show that seedlings emerging from rodent caches dominate sites experiencing higher fire intensity. Field experiments show that rodents (Peromyscus californicus, P. boylii) do cache Arctostaphylos fruit and bury most seed caches to a sufficient depth to survive a killing heat pulse that a fire might drive into the soil. While the rodent dispersal and caching behavior itself has not changed compared to other habitats, the environmental transformation caused by wildfire converts the caching burial of seed from a dispersal process to a plant fire adaptive trait, and provides the context for stimulating subsequent life history evolution in the plant host.

  13. Large scale anisotropy of UHECRs for the Telescope Array

    SciTech Connect

    Kido, E.

    2011-09-22

    The origin of Ultra High Energy Cosmic Rays (UHECRs) is one of the most interesting questions in astroparticle physics. Despite the efforts of previous measurements, there is as yet no consensus on either the origin or the mechanism of UHECR generation and propagation. In this context, the Telescope Array (TA) experiment is expected to play an important role as the largest detector in the northern hemisphere, consisting of an array of surface particle detectors (SDs), fluorescence detectors (FDs), and other important calibration devices. We searched for large scale anisotropy using SD data of TA. UHECRs are expected to be restricted to the GZK horizon when the composition of UHECRs is protons, so the observed arrival directions are expected to exhibit local large scale anisotropy if UHECR sources are astrophysical objects. We used the SD data set from 11 May 2008 to 7 September 2010 to search for large-scale anisotropy. The discrimination power between large-scale structure (LSS) and isotropy is not yet sufficient, but the TA statistics are expected to discriminate between them at about the 95% confidence level on average in the near future.

  14. How Large Scale Flows May Influence Solar Activity

    NASA Technical Reports Server (NTRS)

    Hathaway, D. H.

    2004-01-01

    Large scale flows within the solar convection zone are the primary drivers of the Sun's magnetic activity cycle and play important roles in shaping the Sun's magnetic field. Differential rotation amplifies the magnetic field through its shearing action and converts poloidal field into toroidal field. Poleward meridional flow near the surface carries magnetic flux that reverses the magnetic poles at about the time of solar maximum. The deeper, equatorward meridional flow can carry magnetic flux back toward the lower latitudes where it erupts through the surface to form tilted active regions that convert toroidal fields into oppositely directed poloidal fields. These axisymmetric flows are themselves driven by large scale convective motions. The effects of the Sun's rotation on convection produce velocity correlations that can maintain both the differential rotation and the meridional circulation. These convective motions can also influence solar activity directly by shaping the magnetic field pattern. While considerable theoretical advances have been made toward understanding these large scale flows, outstanding problems in matching theory to observations still remain.

  15. Robust regression for large-scale neuroimaging studies.

    PubMed

    Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand

    2015-05-01

    Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms using as an example an imaging genetics study with 392 subjects. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies. PMID:25731989
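    Robust regression of the kind advocated here is commonly implemented as iteratively reweighted least squares (IRLS) with Huber weights. The sketch below, in pure Python with hypothetical names (it is not the authors' pipeline), shows the idea for a single regressor:

    ```python
    def huber_weights(residuals, delta=1.345):
        """Huber IRLS weights: 1 inside the threshold, delta/|r| outside,
        so outlying residuals are progressively down-weighted."""
        return [1.0 if abs(r) <= delta else delta / abs(r) for r in residuals]

    def robust_line_fit(x, y, delta=1.345, iters=20):
        """Fit y ~ a + b*x by iteratively reweighted least squares."""
        a, b = 0.0, 0.0
        for _ in range(iters):
            r = [yi - (a + b * xi) for xi, yi in zip(x, y)]
            # Robust scale estimate: median absolute residual (guard 0)
            mad = sorted(abs(ri) for ri in r)[len(r) // 2] or 1.0
            w = huber_weights([ri / (1.4826 * mad) for ri in r], delta)
            # Weighted least-squares normal equations for a line
            sw = sum(w)
            sx = sum(wi * xi for wi, xi in zip(w, x))
            sy = sum(wi * yi for wi, yi in zip(w, y))
            sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
            sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
            b = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
            a = (sy - b * sx) / sw
        return a, b
    ```

    On data lying on y = 2x + 1 with one gross outlier, the Huber fit stays near the true line where ordinary least squares would be pulled far off, which is the behavior the study exploits to avoid false positives in large cohorts.
    
    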

  16. Robust regression for large-scale neuroimaging studies.

    PubMed

    Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand

    2015-05-01

    Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms using as an example an imaging genetics study with 392 subjects. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies.

  17. A visualization framework for large-scale virtual astronomy

    NASA Astrophysics Data System (ADS)

    Fu, Chi-Wing

Motivated by advances in modern positional astronomy, this research attempts to digitally model the entire Universe through computer graphics technology. Our first challenge is space itself. The gigantic size of the Universe makes it impossible to put everything into a typical graphics system at its own scale. The graphics rendering process can easily fail because of limited computational precision. The second challenge is that the enormous amount of data could slow down the graphics; we need clever techniques to speed up the rendering. Third, since the Universe is dominated by empty space, objects are widely separated; this makes navigation difficult. We attempt to tackle these problems through various techniques designed to extend and optimize the conventional graphics framework, including the following: power homogeneous coordinates for large-scale spatial representations, generalized large-scale spatial transformations, and rendering acceleration via environment caching and object disappearance criteria. Moreover, we implemented an assortment of techniques for modeling and rendering a variety of astronomical bodies, ranging from the Earth up to faraway galaxies, and attempted to visualize cosmological time; a method we call the Lightcone representation was introduced to visualize the whole space-time of the Universe at a single glance. In addition, several navigation models were developed to handle the large-scale navigation problem. Our final results include a collection of visualization tools, two educational animations appropriate for planetarium audiences, and rendering techniques that advance the state of the art and can be transferred to practice in digital planetarium systems.

  18. Line segment extraction for large scale unorganized point clouds

    NASA Astrophysics Data System (ADS)

    Lin, Yangbin; Wang, Cheng; Cheng, Jun; Chen, Bili; Jia, Fukai; Chen, Zhonggui; Li, Jonathan

    2015-04-01

    Line segment detection in images is already a well-investigated topic, although it has received considerably less attention in 3D point clouds. Benefiting from current LiDAR devices, large-scale point clouds are becoming increasingly common. Most human-made objects have flat surfaces. Line segments that occur where pairs of planes intersect give important information regarding the geometric content of point clouds, which is especially useful for automatic building reconstruction and segmentation. This paper proposes a novel method that is capable of accurately extracting plane intersection line segments from large-scale raw scan points. The 3D line-support region, namely, a point set near a straight linear structure, is extracted simultaneously. The 3D line-support region is fitted by our Line-Segment-Half-Planes (LSHP) structure, which provides a geometric constraint for a line segment, making the line segment more reliable and accurate. We demonstrate our method on the point clouds of large-scale, complex, real-world scenes acquired by LiDAR devices. We also demonstrate the application of 3D line-support regions and their LSHP structures on urban scene abstraction.
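The core geometric step above, extracting the line where two fitted planes meet, reduces to a cross product of the plane normals plus a closed-form point on the line. A minimal sketch of that standard formula (not the authors' LSHP pipeline):

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def plane_intersection(n1, d1, n2, d2):
    """Intersection line of planes n1.x = d1 and n2.x = d2.
    Returns (point_on_line, direction); raises if planes are parallel."""
    u = cross(n1, n2)                     # line direction, perp. to both normals
    uu = sum(c * c for c in u)            # |u|^2; zero iff planes are parallel
    if uu == 0:
        raise ValueError("planes are parallel")
    m = tuple(d1 * b - d2 * a for a, b in zip(n1, n2))   # d1*n2 - d2*n1
    p = tuple(c / uu for c in cross(m, u))               # lies on both planes
    return p, u

# Two walls x = 1 and y = 2 meet along a vertical line through (1, 2, 0)
p, u = plane_intersection((1, 0, 0), 1.0, (0, 1, 0), 2.0)
```

In a real pipeline each plane's (normal, offset) pair would come from a least-squares fit to a planar point cluster, and the infinite line would then be clipped to the extent of the supporting points to obtain a segment.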

  19. Impact of Large-scale Geological Architectures On Recharge

    NASA Astrophysics Data System (ADS)

    Troldborg, L.; Refsgaard, J. C.; Engesgaard, P.; Jensen, K. H.

Geological and hydrogeological data constitute the basis for assessment of groundwater flow patterns and recharge zones. The accessibility and applicability of hard geological data is often a major obstacle in deriving plausible conceptual models. Nevertheless, focus is often on parameter uncertainty caused by the effect of geological heterogeneity due to lack of hard geological data, thus neglecting the possibility of alternative conceptualizations of the large-scale geological architecture. For a catchment in the eastern part of Denmark we have constructed different geological models based on different conceptualizations of the major geological trends and facies architecture. The geological models are equally plausible in a conceptual sense and they are all calibrated to well head and river flow measurements. Comparison of differences in recharge zones and, subsequently, well protection zones emphasizes the importance of assessing large-scale geological architecture in hydrological modeling on a regional scale in a non-deterministic way. Geostatistical modeling carried out in a transitional probability framework shows the possibility of assessing multiple realizations of large-scale geological architecture from a combination of soft and hard geological information.

  20. Dispersal Mutualism Incorporated into Large-Scale, Infrequent Disturbances

    PubMed Central

    Parker, V. Thomas

    2015-01-01

Because of their influence on succession and other community interactions, large-scale, infrequent natural disturbances also should play a major role in mutualistic interactions. Using field data and experiments, I test whether mutualisms have been incorporated into large-scale wildfire by whether the outcomes of a mutualism depend on disturbance. In this study a seed dispersal mutualism is shown to depend on infrequent, large-scale disturbances. A dominant shrubland plant (Arctostaphylos species) produces seeds that make up a persistent soil seed bank and requires fire to germinate. In post-fire stands, I show that seedlings emerging from rodent caches dominate sites experiencing higher fire intensity. Field experiments show that rodents (Peromyscus californicus, P. boylii) do cache Arctostaphylos fruit and bury most seed caches to a sufficient depth to survive a killing heat pulse that a fire might drive into the soil. While the rodent dispersal and caching behavior itself has not changed compared to other habitats, the environmental transformation caused by wildfire converts the caching burial of seed from a dispersal process to a plant fire adaptive trait, and provides the context for stimulating subsequent life history evolution in the plant host. PMID:26151560

  1. Reliability assessment for components of large scale photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Ahadi, Amir; Ghadimi, Noradin; Mirabbasi, Davar

    2014-10-01

Photovoltaic (PV) systems have significantly shifted from independent power generation systems to large-scale, grid-connected generation systems in recent years. The power output of PV systems is affected by the reliability of various components in the system. This study proposes an analytical approach to evaluate the reliability of large-scale, grid-connected PV systems. The fault tree method with an exponential probability distribution function is used to analyze the components of large-scale PV systems. The system is considered in the various sequential and parallel fault combinations in order to find all realistic ways in which the top or undesired events can occur. Additionally, it can identify areas that the planned maintenance should focus on. By monitoring the critical components of a PV system, it is possible not only to improve the reliability of the system, but also to optimize the maintenance costs. The latter is achieved by informing the operators about the status of the system components. The flexibility of the approach in monitoring applications allows it to be used to ensure secure operation of the system. The implementation demonstrates that the proposed method is effective and efficient and can conveniently incorporate more system maintenance plans and diagnostic strategies.
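With the exponential distribution used here, a component's reliability at time t is R(t) = exp(-λt), and fault tree gates combine reliabilities multiplicatively: an OR gate (series dependence) multiplies reliabilities, an AND gate (redundancy) multiplies failure probabilities. A minimal sketch with made-up failure rates; the component structure is illustrative, not taken from the study:

```python
import math

def reliability(lam, t):
    """Exponential component reliability R(t) = exp(-lambda * t)."""
    return math.exp(-lam * t)

def series(rs):
    """OR gate in the fault tree: the system fails if ANY component fails."""
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel(rs):
    """AND gate: the system fails only if ALL redundant components fail."""
    q = 1.0
    for r in rs:
        q *= (1.0 - r)
    return 1.0 - q

# Hypothetical PV plant over one year (failure rates in failures/hour):
t = 8760.0
inverter = reliability(1e-5, t)                 # single central inverter
strings = parallel([reliability(2e-6, t)] * 3)  # 3 redundant PV strings
system = series([inverter, strings])            # both subsystems must work
```

Even this toy example reproduces the qualitative conclusion of the abstract: the non-redundant inverter dominates the top-event probability, so it is where monitoring and planned maintenance pay off most.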

  2. Dispersal Mutualism Incorporated into Large-Scale, Infrequent Disturbances.

    PubMed

    Parker, V Thomas

    2015-01-01

Because of their influence on succession and other community interactions, large-scale, infrequent natural disturbances also should play a major role in mutualistic interactions. Using field data and experiments, I test whether mutualisms have been incorporated into large-scale wildfire by whether the outcomes of a mutualism depend on disturbance. In this study a seed dispersal mutualism is shown to depend on infrequent, large-scale disturbances. A dominant shrubland plant (Arctostaphylos species) produces seeds that make up a persistent soil seed bank and requires fire to germinate. In post-fire stands, I show that seedlings emerging from rodent caches dominate sites experiencing higher fire intensity. Field experiments show that rodents (Peromyscus californicus, P. boylii) do cache Arctostaphylos fruit and bury most seed caches to a sufficient depth to survive a killing heat pulse that a fire might drive into the soil. While the rodent dispersal and caching behavior itself has not changed compared to other habitats, the environmental transformation caused by wildfire converts the caching burial of seed from a dispersal process to a plant fire adaptive trait, and provides the context for stimulating subsequent life history evolution in the plant host. PMID:26151560

  3. Multiresolution comparison of precipitation datasets for large-scale models

    NASA Astrophysics Data System (ADS)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

Gridded precipitation datasets are crucial for driving large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products along with ground observations provide another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin plate spline smoothing algorithms (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, results provide an assessment of possible applications for various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the products comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.
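Product-versus-gauge comparisons of this kind typically reduce to a few verification statistics per site and timescale. A minimal sketch computing bias, RMSE and Pearson correlation; the numbers are illustrative, not data from the study:

```python
import math

def verify(product, gauge):
    """Bias, RMSE and Pearson correlation of a gridded product vs. gauges."""
    n = len(gauge)
    bias = sum(p - g for p, g in zip(product, gauge)) / n
    rmse = math.sqrt(sum((p - g) ** 2 for p, g in zip(product, gauge)) / n)
    pm, gm = sum(product) / n, sum(gauge) / n
    cov = sum((p - pm) * (g - gm) for p, g in zip(product, gauge))
    var_p = sum((p - pm) ** 2 for p in product)
    var_g = sum((g - gm) ** 2 for g in gauge)
    corr = cov / math.sqrt(var_p * var_g)
    return bias, rmse, corr

# Illustrative monthly precipitation totals (mm) at five gauge sites
gauge   = [30.0, 55.0, 80.0, 20.0, 65.0]
product = [35.0, 50.0, 90.0, 25.0, 60.0]
bias, rmse, corr = verify(product, gauge)
```

Bias separates systematic over- or under-catch from the random error captured by RMSE, while correlation measures the spatial/temporal coherence the abstract credits to CaPA; computing the three per timescale is what makes a multiresolution comparison possible.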

  4. Large-scale data mining pilot project in human genome

    SciTech Connect

    Musick, R.; Fidelis, R.; Slezak, T.

    1997-05-01

This whitepaper briefly describes a new, aggressive effort in large-scale data mining at Livermore National Labs. The implications of `large-scale` will be clarified in a later section. In the short term, this effort will focus on several mission-critical questions of the Genome project. We will adapt current data mining techniques to the Genome domain, quantify the accuracy of inference results, and lay the groundwork for a more extensive effort in large-scale data mining. A major aspect of the approach is that we will be supported by a fully-staffed data warehousing effort in the human Genome area. The long term goal is a strong applications-oriented research program in large-scale data mining. The tools and skill set gained will be directly applicable to a wide spectrum of tasks involving large spatial and multidimensional data. This includes applications in ensuring non-proliferation, stockpile stewardship, enabling Global Ecology (Materials Database Industrial Ecology), advancing the Biosciences (Human Genome Project), and supporting data for others (Battlefield Management, Health Care).

  5. Developing A Large-Scale, Collaborative, Productive Geoscience Education Network

    NASA Astrophysics Data System (ADS)

    Manduca, C. A.; Bralower, T. J.; Egger, A. E.; Fox, S.; Ledley, T. S.; Macdonald, H.; Mcconnell, D. A.; Mogk, D. W.; Tewksbury, B. J.

    2012-12-01

Over the past 15 years, the geoscience education community has grown substantially and developed broad and deep capacity for collaboration and dissemination of ideas. While this community is best viewed as emergent from complex interactions among changing educational needs and opportunities, we highlight the role of several large projects in the development of a network within this community. In the 1990s, three NSF projects came together to build a robust web infrastructure to support the production and dissemination of on-line resources: On The Cutting Edge (OTCE), Earth Exploration Toolbook, and Starting Point: Teaching Introductory Geoscience. Along with the contemporaneous Digital Library for Earth System Education, these projects engaged geoscience educators nationwide in exploring professional development experiences that produced lasting on-line resources, collaborative authoring of resources, and models for web-based support for geoscience teaching. As a result, a culture developed in the 2000s in which geoscience educators anticipated that resources for geoscience teaching would be shared broadly and that collaborative authoring would be productive and engaging. By this time, a diverse set of examples demonstrated the power of the web infrastructure in supporting collaboration, dissemination and professional development. Building on this foundation, more recent work has expanded both the size of the network and the scope of its work. Many large research projects initiated collaborations to disseminate resources supporting educational use of their data. Research results from the rapidly expanding geoscience education research community were integrated into the Pedagogies in Action website and OTCE. Projects engaged faculty across the nation in large-scale data collection and educational research. The Climate Literacy and Energy Awareness Network and OTCE engaged community members in reviewing the expanding body of on-line resources. Building Strong

  6. Data management strategies for multinational large-scale systems biology projects.

    PubMed

    Wruck, Wasco; Peuker, Martin; Regenbrecht, Christian R A

    2014-01-01

Good accessibility of publicly funded research data is essential to secure an open scientific system and eventually becomes mandatory [Wellcome Trust will Penalise Scientists Who Don't Embrace Open Access. The Guardian 2012]. By the use of high-throughput methods in many research areas from physics to systems biology, large data collections are increasingly important as raw material for research. Here, we present strategies worked out by international and national institutions targeting open access to publicly funded research data via incentives or obligations to share data. Funding organizations such as the British Wellcome Trust have therefore developed data sharing policies and request commitment to data management and sharing in grant applications. Increased citation rates are a profound argument for sharing publication data. Pre-publication sharing might be rewarded by a data citation credit system via digital object identifiers (DOIs), which have initially been in use for data objects. Besides policies and incentives, good practice in data management is indispensable. However, appropriate systems for data management of large-scale projects, for example in systems biology, are hard to find. Here, we give an overview of a selection of open-source data management systems that have been employed successfully in large-scale projects.

  7. Rise and fall of quantum and classical correlations in open-system dynamics

    SciTech Connect

    Khasin, Michael; Kosloff, Ronnie

    2007-07-15

Interacting quantum systems evolving from an uncorrelated composite initial state generically develop quantum correlations--entanglement. As a consequence, a local description of interacting quantum systems is impossible as a rule. A unitarily evolving (isolated) quantum system generically develops extensive entanglement: the magnitude of the generated entanglement will increase without bounds with the effective Hilbert space dimension of the system. It is conceivable that coupling of the interacting subsystems to local dephasing environments will restrict the generation of entanglement to such an extent that the evolving composite system may be considered as approximately disentangled. This conjecture is addressed in the context of some common models of a bipartite system with linear and nonlinear interactions and local coupling to dephasing environments. Analytical and numerical results obtained imply that the conjecture is generally false. Open dynamics of the quantum correlations is compared to the corresponding evolution of the classical correlations and a qualitative difference is found.
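The effect of a local dephasing environment can be illustrated on a single qubit: under a pure-dephasing Lindblad term, populations are untouched while coherences decay exponentially. A minimal sketch (Euler integration of dρ/dt = γ(σz ρ σz − ρ), with the Hamiltonian set to zero for clarity; this is a toy model, not the bipartite systems studied in the paper):

```python
def dephase(rho, gamma, t, steps=2000):
    """Euler-integrate d(rho)/dt = gamma*(sz rho sz - rho) for a qubit.
    rho = (a, b, c, d) lists the 2x2 density-matrix entries row by row.
    Populations a, d are preserved; coherences b, c decay as exp(-2*gamma*t)."""
    a, b, c, d = rho
    dt = t / steps
    for _ in range(steps):
        # sz rho sz flips the sign of the off-diagonal elements, so the
        # generator acts as -2*gamma on b and c and as zero on a and d.
        b += -2.0 * gamma * b * dt
        c += -2.0 * gamma * c * dt
    return (a, b, c, d)

# Start in the maximally coherent state |+><+|
rho0 = (0.5, 0.5, 0.5, 0.5)
gamma, t = 1.0, 1.0
a, b, c, d = dephase(rho0, gamma, t)
```

After t = 1 at γ = 1 the coherences have dropped to 0.5·exp(−2) ≈ 0.068 while the populations remain 0.5: dephasing suppresses coherence (and hence the capacity to build entanglement through it) without transferring any population.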

  8. Non-Markovian correlation functions for open quantum systems

    NASA Astrophysics Data System (ADS)

    Jin, Jinshuang; Karlewski, Christian; Marthaler, Michael

    2016-08-01

    Beyond the conventional quantum regression theorem, a general formula for non-Markovian correlation functions of arbitrary system operators both in the time- and frequency-domain is given. We approach the problem by transforming the conventional time-non-local master equation into dispersed time-local equations-of-motion. The validity of our approximations is discussed and we find that the non-Markovian terms have to be included for short times. While calculations of the density matrix at short times suffer from the initial value problem, a correlation function has a well defined initial state. The resulting formula for the non-Markovian correlation function has a simple structure and is as convenient in its application as the conventional quantum regression theorem for the Markovian case. For illustrations, we apply our method to investigate the spectrum of the current fluctuations of interacting quantum dots contacted with two electrodes. The corresponding non-Markovian characteristics are demonstrated.

  9. Foundational perspectives on causality in large-scale brain networks

    NASA Astrophysics Data System (ADS)

    Mannino, Michael; Bressler, Steven L.

    2015-12-01

    A profusion of recent work in cognitive neuroscience has been concerned with the endeavor to uncover causal influences in large-scale brain networks. However, despite the fact that many papers give a nod to the important theoretical challenges posed by the concept of causality, this explosion of research has generally not been accompanied by a rigorous conceptual analysis of the nature of causality in the brain. This review provides both a descriptive and prescriptive account of the nature of causality as found within and between large-scale brain networks. In short, it seeks to clarify the concept of causality in large-scale brain networks both philosophically and scientifically. This is accomplished by briefly reviewing the rich philosophical history of work on causality, especially focusing on contributions by David Hume, Immanuel Kant, Bertrand Russell, and Christopher Hitchcock. We go on to discuss the impact that various interpretations of modern physics have had on our understanding of causality. Throughout all this, a central focus is the distinction between theories of deterministic causality (DC), whereby causes uniquely determine their effects, and probabilistic causality (PC), whereby causes change the probability of occurrence of their effects. We argue that, given the topological complexity of its large-scale connectivity, the brain should be considered as a complex system and its causal influences treated as probabilistic in nature. We conclude that PC is well suited for explaining causality in the brain for three reasons: (1) brain causality is often mutual; (2) connectional convergence dictates that only rarely is the activity of one neuronal population uniquely determined by another one; and (3) the causal influences exerted between neuronal populations may not have observable effects. A number of different techniques are currently available to characterize causal influence in the brain. Typically, these techniques quantify the statistical

  10. Foundational perspectives on causality in large-scale brain networks.

    PubMed

    Mannino, Michael; Bressler, Steven L

    2015-12-01

    A profusion of recent work in cognitive neuroscience has been concerned with the endeavor to uncover causal influences in large-scale brain networks. However, despite the fact that many papers give a nod to the important theoretical challenges posed by the concept of causality, this explosion of research has generally not been accompanied by a rigorous conceptual analysis of the nature of causality in the brain. This review provides both a descriptive and prescriptive account of the nature of causality as found within and between large-scale brain networks. In short, it seeks to clarify the concept of causality in large-scale brain networks both philosophically and scientifically. This is accomplished by briefly reviewing the rich philosophical history of work on causality, especially focusing on contributions by David Hume, Immanuel Kant, Bertrand Russell, and Christopher Hitchcock. We go on to discuss the impact that various interpretations of modern physics have had on our understanding of causality. Throughout all this, a central focus is the distinction between theories of deterministic causality (DC), whereby causes uniquely determine their effects, and probabilistic causality (PC), whereby causes change the probability of occurrence of their effects. We argue that, given the topological complexity of its large-scale connectivity, the brain should be considered as a complex system and its causal influences treated as probabilistic in nature. We conclude that PC is well suited for explaining causality in the brain for three reasons: (1) brain causality is often mutual; (2) connectional convergence dictates that only rarely is the activity of one neuronal population uniquely determined by another one; and (3) the causal influences exerted between neuronal populations may not have observable effects. A number of different techniques are currently available to characterize causal influence in the brain. Typically, these techniques quantify the statistical

  11. Robust large-scale parallel nonlinear solvers for simulations.

    SciTech Connect

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, the Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write
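Broyden's method, as described above, replaces the Jacobian with an approximation B that is corrected after every step by the rank-one update B ← B + ((Δf − BΔx)Δxᵀ)/(ΔxᵀΔx). A minimal dense two-variable sketch (illustrative only; the report's limited-memory, large-scale variant is more involved):

```python
def broyden(F, x, B, tol=1e-10, max_iter=200):
    """Solve F(x) = 0 in two variables with Broyden's 'good' method."""
    fx = F(x)
    for _ in range(max_iter):
        # Solve B * dx = -fx by Cramer's rule (2x2 system)
        det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
        dx = [(-fx[0] * B[1][1] + fx[1] * B[0][1]) / det,
              (-fx[1] * B[0][0] + fx[0] * B[1][0]) / det]
        x = [x[0] + dx[0], x[1] + dx[1]]
        fnew = F(x)
        if max(abs(v) for v in fnew) < tol:
            return x
        # Rank-one update: B += ((df - B dx) dx^T) / (dx . dx)
        df = [fnew[0] - fx[0], fnew[1] - fx[1]]
        Bdx = [B[0][0] * dx[0] + B[0][1] * dx[1],
               B[1][0] * dx[0] + B[1][1] * dx[1]]
        denom = dx[0] * dx[0] + dx[1] * dx[1]
        for i in range(2):
            for j in range(2):
                B[i][j] += (df[i] - Bdx[i]) * dx[j] / denom
        fx = fnew
    return x

# x^2 + y^2 = 4 and x*y = 1, started from (2, 0.5) with B = identity:
# no Jacobian is ever evaluated, only F itself.
F = lambda x: [x[0] ** 2 + x[1] ** 2 - 4.0, x[0] * x[1] - 1.0]
sol = broyden(F, [2.0, 0.5], [[1.0, 0.0], [0.0, 1.0]])
```

The identity starting matrix stands in for an unavailable or inaccurate Jacobian; each update spends only one extra function evaluation, which is the trade-off the report's comparison against Newton-based solvers is about.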

  12. TOPOLOGY OF A LARGE-SCALE STRUCTURE AS A TEST OF MODIFIED GRAVITY

    SciTech Connect

    Wang Xin; Chen Xuelei; Park, Changbom

    2012-03-01

    The genus of the isodensity contours is a robust measure of the topology of a large-scale structure, and it is relatively insensitive to nonlinear gravitational evolution, galaxy bias, and redshift-space distortion. We show that the growth of density fluctuations is scale dependent even in the linear regime in some modified gravity theories, which opens a new possibility of testing the theories observationally. We propose to use the genus of the isodensity contours, an intrinsic measure of the topology of the large-scale structure, as a statistic to be used in such tests. In Einstein's general theory of relativity, density fluctuations grow at the same rate on all scales in the linear regime, and the genus per comoving volume is almost conserved as structures grow homologously, so we expect that the genus-smoothing-scale relation is basically time independent. However, in some modified gravity models where structures grow with different rates on different scales, the genus-smoothing-scale relation should change over time. This can be used to test the gravity models with large-scale structure observations. We study the cases of the f(R) theory, DGP braneworld theory as well as the parameterized post-Friedmann models. We also forecast how the modified gravity models can be constrained with optical/IR or redshifted 21 cm radio surveys in the near future.

  13. Towards a self-consistent halo model for the nonlinear large-scale structure

    NASA Astrophysics Data System (ADS)

    Schmidt, Fabian

    2016-03-01

    The halo model is a theoretically and empirically well-motivated framework for predicting the statistics of the nonlinear matter distribution in the Universe. However, current incarnations of the halo model suffer from two major deficiencies: (i) they do not enforce the stress-energy conservation of matter; (ii) they are not guaranteed to recover exact perturbation theory results on large scales. Here, we provide a formulation of the halo model (EHM) that remedies both drawbacks in a consistent way, while attempting to maintain the predictivity of the approach. In the formulation presented here, mass and momentum conservation are guaranteed on large scales, and results of the perturbation theory and the effective field theory can, in principle, be matched to any desired order on large scales. We find that a key ingredient in the halo model power spectrum is the halo stochasticity covariance, which has been studied to a much lesser extent than other ingredients such as mass function, bias, and profiles of halos. As written here, this approach still does not describe the transition regime between perturbation theory and halo scales realistically, which is left as an open problem. We also show explicitly that, when implemented consistently, halo model predictions do not depend on any properties of low-mass halos that are smaller than the scales of interest.

  14. Large-scale Scanning Transmission Electron Microscopy (Nanotomy) of Healthy and Injured Zebrafish Brain.

    PubMed

    Kuipers, Jeroen; Kalicharan, Ruby D; Wolters, Anouk H G; van Ham, Tjakko J; Giepmans, Ben N G

    2016-05-25

Large-scale 2D electron microscopy (EM), or nanotomy, is the tissue-wide application of nanoscale resolution electron microscopy. We and others previously applied large-scale EM to human skin, pancreatic islets, tissue culture and whole zebrafish larvae(1-7). Here we describe a universally applicable method for tissue-scale scanning EM for unbiased detection of sub-cellular and molecular features. Nanotomy was applied to investigate the healthy and a neurodegenerative zebrafish brain. Our method is based on standardized EM sample preparation protocols: fixation with glutaraldehyde and osmium, followed by epoxy-resin embedding, ultrathin sectioning and mounting of ultrathin sections on one-hole grids, followed by post-staining with uranyl and lead. Large-scale 2D EM mosaic images are acquired using a scanning EM connected to an external large-area scan generator using scanning transmission EM (STEM). Large-scale EM images are typically ~5-50 gigapixels in size, and best viewed using zoomable HTML files, which can be opened in any web browser, similar to online geographical HTML maps. This method can be applied to (human) tissue, cross sections of whole animals as well as tissue culture(1-5). Here, zebrafish brains were analyzed in a non-invasive neuronal ablation model. We visualize within a single dataset tissue, cellular and subcellular changes which can be quantified in various cell types including neurons and microglia, the brain's macrophages. In addition, nanotomy facilitates the correlation of EM with light microscopy (CLEM)(8) on the same tissue, as large surface areas previously imaged using fluorescent microscopy can subsequently be subjected to large-area EM, resulting in the nano-anatomy (nanotomy) of tissues. In all, nanotomy allows unbiased detection of features at EM level in a tissue-wide quantifiable manner.
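The zoomable-HTML viewing described here follows the usual web-map pattern: the gigapixel mosaic is cut into fixed-size tiles at successively halved resolutions until one tile remains. A minimal sketch of the tile pyramid arithmetic; the tile size and mosaic dimensions are illustrative assumptions, not values from the protocol:

```python
import math

def pyramid_levels(width, height, tile=256):
    """Zoom levels for a tiled web viewer: halve the resolution per level
    until the whole image fits in one tile (as in web map pyramids).
    Returns a list of (width, height, tile_count) per level, full-res first."""
    levels = []
    w, h = width, height
    while True:
        cols = math.ceil(w / tile)
        rows = math.ceil(h / tile)
        levels.append((w, h, cols * rows))
        if cols == 1 and rows == 1:
            break
        w = max(1, (w + 1) // 2)  # round up so no pixels are dropped
        h = max(1, (h + 1) // 2)
    return levels

# A ~50-gigapixel nanotomy mosaic, e.g. 250,000 x 200,000 pixels
levels = pyramid_levels(250_000, 200_000)
total_tiles = sum(t for _, _, t in levels)
```

Because the browser only ever fetches the handful of tiles in view at the current zoom level, a mosaic far too large to open as a single file remains responsive, which is what makes the "online geographical map" viewing style practical for EM data.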

  15. A Survey on Routing Protocols for Large-Scale Wireless Sensor Networks

    PubMed Central

    Li, Changle; Zhang, Hanxiao; Hao, Binbin; Li, Jiandong

    2011-01-01

and other metrics. Finally, some open issues in routing protocol design for large-scale wireless sensor networks are discussed and conclusions are proposed. PMID:22163808

  16. Large-scale Scanning Transmission Electron Microscopy (Nanotomy) of Healthy and Injured Zebrafish Brain

    PubMed Central

    Kuipers, Jeroen; Kalicharan, Ruby D.; Wolters, Anouk H. G.

    2016-01-01

Large-scale 2D electron microscopy (EM), or nanotomy, is the tissue-wide application of nanoscale resolution electron microscopy. We and others previously applied large-scale EM to human skin, pancreatic islets, tissue culture and whole zebrafish larvae1-7. Here we describe a universally applicable method for tissue-scale scanning EM for unbiased detection of sub-cellular and molecular features. Nanotomy was applied to investigate the healthy and a neurodegenerative zebrafish brain. Our method is based on standardized EM sample preparation protocols: fixation with glutaraldehyde and osmium, followed by epoxy-resin embedding, ultrathin sectioning and mounting of ultrathin sections on one-hole grids, followed by post-staining with uranyl and lead. Large-scale 2D EM mosaic images are acquired using a scanning EM connected to an external large-area scan generator using scanning transmission EM (STEM). Large-scale EM images are typically ~5-50 gigapixels in size, and best viewed using zoomable HTML files, which can be opened in any web browser, similar to online geographical HTML maps. This method can be applied to (human) tissue, cross sections of whole animals as well as tissue culture1-5. Here, zebrafish brains were analyzed in a non-invasive neuronal ablation model. We visualize within a single dataset tissue, cellular and subcellular changes which can be quantified in various cell types including neurons and microglia, the brain's macrophages. In addition, nanotomy facilitates the correlation of EM with light microscopy (CLEM)8 on the same tissue, as large surface areas previously imaged using fluorescent microscopy can subsequently be subjected to large-area EM, resulting in the nano-anatomy (nanotomy) of tissues. In all, nanotomy allows unbiased detection of features at EM level in a tissue-wide quantifiable manner. PMID:27285162

  18. Solving large scale structure in ten easy steps with COLA

    SciTech Connect

    Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J.

    2013-06-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_sun/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_sun/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.
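
    The comoving-frame idea at the heart of COLA, integrating only the residual motion about a trajectory that is known analytically, can be illustrated on a toy problem. The sketch below applies a leapfrog integrator with a deliberately coarse timestep first to the full motion of a unit-frequency harmonic oscillator, then to the residual about an approximate solution of slightly wrong frequency (standing in for the LPT trajectory). This is a simplified analogue, not the authors' N-body scheme; the frequencies and timestep are illustrative assumptions:

    ```python
    import numpy as np

    def leapfrog(accel, x0, v0, dt, n_steps):
        """Kick-drift-kick leapfrog for x'' = accel(x, t)."""
        xs = [x0]
        x, v, t = x0, v0, 0.0
        for _ in range(n_steps):
            v_half = v + 0.5 * dt * accel(x, t)
            x = x + dt * v_half
            t += dt
            v = v_half + 0.5 * dt * accel(x, t)
            xs.append(x)
        return np.array(xs)

    dt, n = 0.5, 20                      # deliberately coarse timestep
    ts = np.arange(n + 1) * dt
    exact = np.cos(ts)                   # true motion: x'' = -x, x(0)=1, v(0)=0

    # Direct integration of the full motion
    x_direct = leapfrog(lambda x, t: -x, 1.0, 0.0, dt, n)

    # COLA-style: integrate only the residual d about an approximate
    # trajectory x_a = cos(w_a t), with w_a close to the true frequency.
    # d'' = x'' - x_a'' = -(x_a + d) + w_a^2 x_a = -d + (w_a^2 - 1) cos(w_a t)
    w_a = 0.99
    accel_res = lambda d, t: -d + (w_a**2 - 1.0) * np.cos(w_a * t)
    d = leapfrog(accel_res, 0.0, 0.0, dt, n)
    x_cola = np.cos(w_a * ts) + d

    err_direct = np.max(np.abs(x_direct - exact))
    err_cola = np.max(np.abs(x_cola - exact))
    ```

    Because the integration error acts on the small residual rather than on the full trajectory, the comoving-frame run stays accurate at a timestep where direct integration has already accumulated a visible phase error, which is the same mechanism that lets COLA use ~10 timesteps.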

  19. LARGE-SCALE CO2 TRANSPORTATION AND DEEP OCEAN SEQUESTRATION

    SciTech Connect

    Hamid Sarv

    1999-03-01

    Technical and economic feasibility of large-scale CO2 transportation and ocean sequestration at depths of 3000 meters or greater was investigated. Two options were examined for transporting and disposing of the captured CO2. In one case, CO2 was pumped from a land-based collection center through long pipelines laid on the ocean floor. Another case considered oceanic tanker transport of liquid carbon dioxide to an offshore floating structure for vertical injection to the ocean floor. In the latter case, a novel concept based on subsurface towing of a 3000-meter pipe and attaching it to the offshore structure was considered. Budgetary cost estimates indicate that for distances greater than 400 km, tanker transportation and offshore injection through a 3000-meter vertical pipe provide the best method for delivering liquid CO2 to deep ocean floor depressions. For shorter distances, CO2 delivery by parallel-laid subsea pipelines is more cost-effective. Estimated costs for 500-km transport and storage at a depth of 3000 meters by subsea pipelines and tankers were 1.5 and 1.4 dollars per ton of stored CO2, respectively. At these prices, the economics of ocean disposal are highly favorable. Future work should focus on addressing technical issues that are critical to the deployment of a large-scale CO2 transportation and disposal system. Pipe corrosion, structural design of the transport pipe, and dispersion characteristics of sinking CO2 effluent plumes have been identified as areas that require further attention. Our planned activities in the next phase include laboratory-scale corrosion testing, structural analysis of the pipeline, analytical and experimental simulations of CO2 discharge and dispersion, and a conceptual economic and engineering evaluation of large-scale implementation.

  1. Large-Scale Hybrid Motor Testing. Chapter 10

    NASA Technical Reports Server (NTRS)

    Story, George

    2006-01-01

    Hybrid rocket motors can be successfully demonstrated at a small scale virtually anywhere. Many suitcase-sized portable test stands have been assembled to demonstrate hybrids and to show audiences the safety of hybrid rockets. These small show motors and small laboratory-scale motors can give comparative burn-rate data for development of different fuel/oxidizer combinations; however, the questions always asked when hybrids are mentioned for large-scale applications are: how do they scale, and has it been shown in a large motor? To answer those questions, large-scale motor testing is required to verify the hybrid motor at its true size. The necessity of conducting large-scale hybrid rocket motor tests to validate the burn rate from small motors to application size has been documented in several places. Comparison of small-scale hybrid data with larger-scale data indicates that the fuel burn rate goes down with increasing port size, even at the same oxidizer flux. This trend holds for conventional hybrid motors with forward oxidizer injection and HTPB-based fuels. While the reason this occurs would make a great paper, study or thesis, it is not thoroughly understood at this time. Potential causes include the fact that, since hybrid combustion is boundary-layer driven, the larger port sizes reduce the interaction (radiation, mixing and heat transfer) from the core region of the port. This chapter focuses on some of the large, prototype-sized testing of hybrid motors. The largest motors tested have been AMROC's 250K-lbf thrust motor at Edwards Air Force Base and the Hybrid Propulsion Demonstration Program's 250K-lbf thrust motor at Stennis Space Center. Numerous smaller tests were performed to support the burn rate, stability and scaling concepts that went into the development of those large motors.

  2. Statistical analysis of large-scale neuronal recording data

    PubMed Central

    Reed, Jamie L.; Kaas, Jon H.

    2010-01-01

    Relating stimulus properties to the response properties of individual neurons and neuronal networks is a major goal of sensory research. Many investigators implant electrode arrays in multiple brain areas and record from chronically implanted electrodes over time to answer a variety of questions. Technical challenges related to analyzing large-scale neuronal recording data are not trivial. Several analysis methods traditionally used by neurophysiologists do not account for dependencies in the data that are inherent in multi-electrode recordings. In addition, when neurophysiological data are not best modeled by the normal distribution and when the variables of interest may not be linearly related, extensions of the linear modeling techniques are recommended. A variety of methods exist to analyze correlated data, even when data are not normally distributed and the relationships are nonlinear. Here we review expansions of the Generalized Linear Model designed to address these data properties. Such methods are used in other research fields, and the application to large-scale neuronal recording data will enable investigators to determine the variable properties that convincingly contribute to the variances in the observed neuronal measures. Standard measures of neuron properties such as response magnitudes can be analyzed using these methods, and measures of neuronal network activity such as spike timing correlations can be analyzed as well. We have done just that in recordings from 100-electrode arrays implanted in the primary somatosensory cortex of owl monkeys. Here we illustrate how one example method, Generalized Estimating Equations analysis, is a useful method to apply to large-scale neuronal recordings. PMID:20472395
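
    One concrete consequence of ignoring within-array dependence is that naive standard errors come out too small. The sketch below simulates a cluster-level covariate with correlated observations within each electrode group and compares ordinary least-squares standard errors against a cluster-robust (sandwich) estimate, which is the working-independence special case of the GEE machinery the review describes; the simulated data and group structure are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_groups, per_group = 40, 10
    beta_true = 2.0

    # Cluster-level covariate plus a cluster random effect induces
    # within-group correlation, as in multi-electrode recordings.
    xg = rng.normal(size=n_groups)
    x = np.repeat(xg, per_group)
    groups = np.repeat(np.arange(n_groups), per_group)
    y = (beta_true * x
         + np.repeat(rng.normal(size=n_groups), per_group)   # shared group effect
         + rng.normal(size=n_groups * per_group))             # independent noise

    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    XtX_inv = np.linalg.inv(X.T @ X)

    # Naive OLS variance: sigma^2 (X'X)^-1 (assumes independent errors)
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se_naive = np.sqrt(np.diag(sigma2 * XtX_inv))

    # Cluster-robust sandwich: (X'X)^-1 [sum_g X_g' e_g e_g' X_g] (X'X)^-1
    meat = np.zeros((2, 2))
    for g in range(n_groups):
        idx = groups == g
        s = X[idx].T @ resid[idx]
        meat += np.outer(s, s)
    se_robust = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))
    ```

    On data like these the robust standard error for the slope is substantially larger than the naive one, which is exactly the correction GEE-style methods make when observations within an electrode array are dependent.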

  3. Statistical Modeling of Large-Scale Scientific Simulation Data

    SciTech Connect

    Eliassi-Rad, T; Baldwin, C; Abdulla, G; Critchlow, T

    2003-11-15

    With the advent of massively parallel computer systems, scientists are now able to simulate complex phenomena (e.g., explosions of stars). Such scientific simulations typically generate large-scale data sets over the spatio-temporal space. Unfortunately, the sheer sizes of the generated data sets make efficient exploration of them impossible. Constructing queriable statistical models is an essential step in helping scientists glean new insight from their computer simulations. We define queriable statistical models to be descriptive statistics that (1) summarize and describe the data within a user-defined modeling error, and (2) are able to answer complex range-based queries over the spatio-temporal dimensions. In this chapter, we describe systems that build queriable statistical models for large-scale scientific simulation data sets. In particular, we present our Ad-hoc Queries for Simulation (AQSim) infrastructure, which reduces the data storage requirements and query access times by (1) creating and storing queriable statistical models of the data at multiple resolutions, and (2) evaluating queries on these models of the data instead of the entire data set. Within AQSim, we focus on three simple but effective statistical modeling techniques. AQSim's first modeling technique (called the univariate mean modeler) computes the "true" (unbiased) mean of systematic partitions of the data. AQSim's second technique (called the univariate goodness-of-fit modeler) applies the Anderson-Darling goodness-of-fit method to systematic partitions of the data. Finally, AQSim's third technique (called the multivariate clusterer) utilizes the cosine similarity measure to cluster the data into similar groups. Our experimental evaluations on several scientific simulation data sets illustrate the value of using these statistical models on large-scale simulation data sets.
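
    The simplest of the three models, the univariate mean modeler, amounts to precomputing per-partition summaries once and answering range queries from those summaries instead of scanning the raw data. A minimal sketch under illustrative assumptions (the partition size and synthetic data are not from AQSim):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.normal(size=10_000)           # stand-in for one simulation variable
    block = 100                              # systematic partition size

    # Model: per-partition count and mean (an unbiased summary of each block)
    counts = np.full(len(data) // block, block)
    means = data.reshape(-1, block).mean(axis=1)

    def query_mean(lo_block, hi_block):
        """Approximate mean over partitions [lo_block, hi_block) from the model,
        without touching the raw data."""
        c = counts[lo_block:hi_block]
        m = means[lo_block:hi_block]
        return float((c * m).sum() / c.sum())

    approx = query_mean(10, 60)
    exact = float(data[10 * block:60 * block].mean())
    ```

    When a query is aligned to partition boundaries the model answers it exactly; misaligned queries incur a bounded modeling error at the partition edges, which is the storage/accuracy trade-off the chapter describes.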

  4. Infectious diseases in large-scale cat hoarding investigations.

    PubMed

    Polak, K C; Levy, J K; Crawford, P C; Leutenegger, C M; Moriello, K A

    2014-08-01

    Animal hoarders accumulate animals in over-crowded conditions without adequate nutrition, sanitation, and veterinary care. As a result, animals rescued from hoarding frequently have a variety of medical conditions including respiratory infections, gastrointestinal disease, parasitism, malnutrition, and other evidence of neglect. The purpose of this study was to characterize the infectious diseases carried by clinically affected cats and to determine the prevalence of retroviral infections among cats in large-scale cat hoarding investigations. Records were reviewed retrospectively from four large-scale seizures of cats from failed sanctuaries from November 2009 through March 2012. The number of cats seized in each case ranged from 387 to 697. Cats were screened for feline leukemia virus (FeLV) and feline immunodeficiency virus (FIV) in all four cases and for dermatophytosis in one case. A subset of cats exhibiting signs of upper respiratory disease or diarrhea had been tested for infections by PCR and fecal flotation for treatment planning. Mycoplasma felis (78%), calicivirus (78%), and Streptococcus equi subspecies zooepidemicus (55%) were the most common respiratory infections. Feline enteric coronavirus (88%), Giardia (56%), Clostridium perfringens (49%), and Tritrichomonas foetus (39%) were most common in cats with diarrhea. The seroprevalence of FeLV and FIV were 8% and 8%, respectively. In the one case in which cats with lesions suspicious for dermatophytosis were cultured for Microsporum canis, 69/76 lesional cats were culture-positive; of these, half were believed to be truly infected and half were believed to be fomite carriers. Cats from large-scale hoarding cases had high risk for enteric and respiratory infections, retroviruses, and dermatophytosis. Case responders should be prepared for mass treatment of infectious diseases and should implement protocols to prevent transmission of feline or zoonotic infections during the emergency response and when

  5. Improving Design Efficiency for Large-Scale Heterogeneous Circuits

    NASA Astrophysics Data System (ADS)

    Gregerson, Anthony

    Despite increases in logic density, many Big Data applications must still be partitioned across multiple computing devices in order to meet their strict performance requirements. Among the most demanding of these applications is high-energy physics (HEP), which uses complex computing systems consisting of thousands of FPGAs and ASICs to process the sensor data created by experiments at particle accelerators such as the Large Hadron Collider (LHC). Designing such computing systems is challenging due to the scale of the systems, the exceptionally high-throughput and low-latency performance constraints that necessitate application-specific hardware implementations, the requirement that algorithms are efficiently partitioned across many devices, and the possible need to update the implemented algorithms during the lifetime of the system. In this work, we describe our research to develop flexible architectures for implementing such large-scale circuits on FPGAs. In particular, this work is motivated by (but not limited in scope to) high-energy physics algorithms for the Compact Muon Solenoid (CMS) experiment at the LHC. To make efficient use of logic resources in multi-FPGA systems, we introduce Multi-Personality Partitioning, a novel form of the graph partitioning problem, and present partitioning algorithms that can significantly improve resource utilization on heterogeneous devices while also reducing inter-chip connections. To reduce the high communication costs of Big Data applications, we also introduce Information-Aware Partitioning, a partitioning method that analyzes the data content of application-specific circuits, characterizes their entropy, and selects circuit partitions that enable efficient compression of data between chips. We employ our information-aware partitioning method to improve the performance of the hardware validation platform for evaluating new algorithms for the CMS experiment. Together, these research efforts help to improve the efficiency

  6. Large-scale molten core/material interaction experiments

    SciTech Connect

    Chu, T.Y.

    1984-01-01

    The paper describes the facility and melting technology for large-scale molten core/material interaction experiments being carried out at Sandia National Laboratories. The facility is the largest of its kind anywhere. It is capable of producing core melts of up to 500 kg at a temperature of 3000 K. Results of a recent experiment involving the release of 230 kg of core melt into a magnesia brick crucible are discussed in detail. Data on the thermal and mechanical responses of the magnesia brick, heat flux partitioning, melt penetration, and gas and aerosol generation are presented.

  7. Laser Welding of Large Scale Stainless Steel Aircraft Structures

    NASA Astrophysics Data System (ADS)

    Reitemeyer, D.; Schultz, V.; Syassen, F.; Seefeld, T.; Vollertsen, F.

    In this paper a welding process for large-scale stainless steel structures is presented. The process was developed according to the requirements of an aircraft application: stringers are welded onto a skin sheet in a T-joint configuration. The 0.6 mm thick parts are welded with a thin-disc laser; seam lengths up to 1920 mm are demonstrated. The welding process causes angular distortions of the skin sheet, which are compensated by a subsequent laser straightening process. Based on a model, straightening process parameters matching the induced welding distortion are predicted. The process combination is successfully applied to stringer-stiffened specimens.

  8. Large-scale genotoxicity assessments in the marine environment.

    PubMed

    Hose, J E

    1994-12-01

    There are a number of techniques for detecting genotoxicity in the marine environment, and many are applicable to large-scale field assessments. Certain tests can be used to evaluate responses in target organisms in situ while others utilize surrogate organisms exposed to field samples in short-term laboratory bioassays. Genotoxicity endpoints appear distinct from traditional toxicity endpoints, but some have chemical or ecotoxicologic correlates. One versatile end point, the frequency of anaphase aberrations, has been used in several large marine assessments to evaluate genotoxicity in the New York Bight, in sediment from San Francisco Bay, and following the Exxon Valdez oil spill.

  9. Locally Biased Galaxy Formation and Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    Narayanan, Vijay K.; Berlind, Andreas A.; Weinberg, David H.

    2000-01-01

    We examine the influence of the morphology-density relation and a wide range of simple models for biased galaxy formation on statistical measures of large-scale structure. We contrast the behavior of local biasing models, in which the efficiency of galaxy formation is determined by the density, geometry, or velocity dispersion of the local mass distribution, with that of nonlocal biasing models, in which galaxy formation is modulated coherently over scales larger than the galaxy correlation length. If morphological segregation of galaxies is governed by a local morphology-density relation, then the correlation function of E/S0 galaxies should be steeper and stronger than that of spiral galaxies on small scales, as observed, while on large scales the E/S0 and spiral galaxies should have correlation functions with the same shape but different amplitudes. Similarly, all of our local bias models produce scale-independent amplification of the correlation function and power spectrum in the linear and mildly nonlinear regimes; only a nonlocal biasing mechanism can alter the shape of the power spectrum on large scales. Moments of the biased galaxy distribution retain the hierarchical pattern of the mass moments, but biasing alters the values and scale dependence of the hierarchical amplitudes S3 and S4. Pair-weighted moments of the galaxy velocity distribution are sensitive to the details of the bias prescription even if galaxies have the same local velocity distribution as the underlying dark matter. The nonlinearity of the relation between galaxy density and mass density depends on the biasing prescription and the smoothing scale, and the scatter in this relation is a useful diagnostic of the physical parameters that determine the bias. While the assumption that galaxy formation is governed by local physics leads to some important simplifications on large scales, even local biasing is a multifaceted phenomenon whose impact cannot be described by a single parameter or

  10. Why large-scale seasonal streamflow forecasts are feasible

    NASA Astrophysics Data System (ADS)

    Bierkens, M. F.; Candogan Yossef, N.; Van Beek, L. P.

    2011-12-01

    Seasonal forecasts of precipitation and temperature, using either statistical or dynamical prediction, have been around for almost two decades. The skill of these forecasts differs in both space and time, with the highest skill in areas heavily influenced by SST anomalies such as El Niño, or in areas where land surface properties have a major impact on, e.g., monsoon strength, such as the vegetation cover of the Sahel region or the snow cover of the Tibetan plateau. However, the skill of seasonal forecasts is limited in most regions, with anomaly correlation coefficients varying between 0.2 and 0.5 for 1-3 month precipitation totals. This raises the question of whether seasonal hydrological forecasting is feasible. Here, we make the case that it is. Using the example of statistical forecasts of NAO strength and related precipitation anomalies over Europe, we show that the skill of large-scale streamflow forecasts is generally much higher than that of the precipitation forecasts themselves, provided that the initial state of the system is accurately estimated. In the latter case, even the precipitation climatology can produce skillful results. This is due to the inertia of the hydrological system rooted in the storage of soil moisture, groundwater and snow pack, as corroborated by a recent study using snow observations for seasonal streamflow forecasting in the western US. These examples suggest that for accurate seasonal hydrological forecasting, correct state estimation is more important than accurate seasonal meteorological forecasts. However, large-scale estimation of hydrological states is difficult, and validation of large-scale hydrological models often reveals large biases in, e.g., streamflow estimates. Fortunately, as shown with a validation study of the global model PCR-GLOBWB, these biases are of less importance when seasonal forecasts are evaluated in terms of their ability to reproduce anomalous flows and extreme events, i.e. by anomaly correlations or categorical quantile
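
    The evaluation metric the abstract relies on, the anomaly correlation coefficient, scores a forecast only on its departures from climatology, so a forecast cannot earn skill merely by reproducing the seasonal cycle. A minimal sketch on synthetic monthly data (the series and skill levels are illustrative assumptions):

    ```python
    import numpy as np

    def anomaly_correlation(forecast, observed, climatology):
        """Correlation of forecast and observed anomalies about a climatology."""
        fa = forecast - climatology
        oa = observed - climatology
        return float(np.corrcoef(fa, oa)[0, 1])

    rng = np.random.default_rng(2)
    months = np.arange(120)
    clim = 10 + 5 * np.sin(2 * np.pi * months / 12)   # seasonal climatology
    anom = rng.normal(0, 2, size=120)                 # true anomalies
    observed = clim + anom

    skillful = clim + 0.7 * anom + rng.normal(0, 1, size=120)   # partial skill
    unskillful = clim + rng.normal(0, 2, size=120)              # no anomaly skill

    acc_skillful = anomaly_correlation(skillful, observed, clim)
    acc_unskillful = anomaly_correlation(unskillful, observed, clim)
    raw_unskillful = float(np.corrcoef(unskillful, observed)[0, 1])
    ```

    The contrast between `raw_unskillful` (high, because both series share the seasonal cycle) and `acc_unskillful` (near zero) is why anomaly correlations, rather than raw correlations, are used to judge seasonal streamflow forecasts.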

  11. Towards large scale production and separation of carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Alvarez, Noe T.

    Since their discovery, carbon nanotubes (CNTs) have boosted research on and applications of nanotechnology; however, many applications of CNTs remain inaccessible because they depend upon large-scale CNT production and separations. Type, chirality and diameter control of CNTs determine many of their physical properties, and such control is still not accessible. This thesis studies the fundamentals of scalable selective reactions of HiPCo CNTs as well as the early phase of routes toward an inexpensive approach for large-scale CNT production. In the growth part, this thesis covers a complete wet-chemistry process of catalyst and catalyst-support deposition for growth of vertically aligned (VA) CNTs. A wet-chemistry preparation process is of significant importance for CNT synthesis through chemical vapor deposition (CVD). CVD is by far the most suitable and inexpensive process for large-scale CNT production when compared to other common processes such as laser ablation and arc discharge. However, its potential has been limited by low-yielding and difficult preparation processes for the catalyst and its support, which has reduced its competitiveness. The wet-chemistry process takes advantage of current nanoparticle technology to deposit the catalyst and the catalyst support as a thin film of nanoparticles, making the protocol simple compared to electron-beam evaporation and sputtering processes. In the CNT selective-reactions part, this thesis studies UV irradiation of individually dispersed HiPCo CNTs, which generates auto-selective reactions in the liquid phase with good control over their diameter and chirality. This technique is ideal for large-scale, continuous-process separations of CNTs by diameter and type. Additionally, an innovative, simple catalyst deposition through abrasion is demonstrated: simple friction between the catalyst and the substrate deposits a high enough density of metal catalyst particles for successful CNT growth. This simple approach has

  12. Novel algorithm of large-scale simultaneous linear equations.

    PubMed

    Fujiwara, T; Hoshi, T; Yamamoto, S; Sogabe, T; Zhang, S-L

    2010-02-24

    We review our recently developed methods of solving large-scale simultaneous linear equations and applications to electronic structure calculations both in one-electron theory and many-electron theory. This is the shifted COCG (conjugate orthogonal conjugate gradient) method based on the Krylov subspace, and the most important issue for applications is the shift equation and the seed switching method, which greatly reduce the computational cost. The applications to nano-scale Si crystals and the double orbital extended Hubbard model are presented.

  13. Generation of Large-Scale Winds in Horizontally Anisotropic Convection.

    PubMed

    von Hardenberg, J; Goluskin, D; Provenzale, A; Spiegel, E A

    2015-09-25

    We simulate three-dimensional, horizontally periodic Rayleigh-Bénard convection, confined between free-slip horizontal plates and rotating about a distant horizontal axis. When both the temperature difference between the plates and the rotation rate are sufficiently large, a strong horizontal wind is generated that is perpendicular to both the rotation vector and the gravity vector. The wind is turbulent, large-scale, and vertically sheared. Horizontal anisotropy, engendered here by rotation, appears necessary for such wind generation. Most of the kinetic energy of the flow resides in the wind, and the vertical turbulent heat flux is much lower on average than when there is no wind. PMID:26451558

  14. Large Scale Composite Manufacturing for Heavy Lift Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Stavana, Jacob; Cohen, Leslie J.; Houseal, Keth; Pelham, Larry; Lort, Richard; Zimmerman, Thomas; Sutter, James; Western, Mike; Harper, Robert; Stuart, Michael

    2012-01-01

    Risk reduction for large-scale composite manufacturing is an important goal in producing lightweight components for heavy-lift launch vehicles. NASA and an industry team successfully employed a building-block approach using low-cost Automated Tape Layup (ATL) of autoclave and Out-of-Autoclave (OoA) prepregs. Several large, curved sandwich panels were fabricated at HITCO Carbon Composites. The aluminum honeycomb core sandwich panels are segments of a 1/16th arc of a 10-meter cylindrical barrel. Lessons learned highlight the manufacturing challenges of producing lightweight composite structures such as fairings for heavy-lift launch vehicles.

  15. Search for Large Scale Anisotropies with the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Bonino, R.; Pierre Auger Collaboration

    The Pierre Auger Observatory studies the nature and the origin of Ultra High Energy Cosmic Rays (> 3×10^18 eV). Completed at the end of 2008, it has been operating continuously for more than six years. Using data collected from 1 January 2004 until 31 March 2009, we search for large-scale anisotropies with two complementary analyses in different energy windows. No significant anisotropies are observed, resulting in bounds on the first-harmonic amplitude at the 1% level at EeV energies.
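
    The standard tool behind such bounds is a first-harmonic (Rayleigh) analysis of the right-ascension distribution of arrival directions. A minimal sketch on synthetic events with a small injected dipole; the amplitude, event count and sampling scheme are illustrative assumptions, not Auger data:

    ```python
    import numpy as np

    def first_harmonic(phi):
        """Rayleigh first-harmonic amplitude and phase of angles phi (radians)."""
        a = 2.0 * np.mean(np.cos(phi))
        b = 2.0 * np.mean(np.sin(phi))
        return np.hypot(a, b), np.arctan2(b, a)

    rng = np.random.default_rng(3)
    r_true = 0.05                  # injected dipole amplitude (illustrative)

    # Rejection-sample phi from a pdf proportional to 1 + r_true*cos(phi)
    phi = rng.uniform(0, 2 * np.pi, size=2_000_000)
    keep = rng.uniform(0, 1 + r_true, size=phi.size) < 1 + r_true * np.cos(phi)
    phi = phi[keep]

    amp, phase = first_harmonic(phi)   # recovers roughly r_true and phase 0
    ```

    For N isotropic events the measured amplitude fluctuates at the sqrt(2/N) level, so with millions of events per-cent-level dipoles become detectable, which is why the quoted bounds sit at the 1% level.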

  16. Large-Scale periodic solar velocities: An observational study

    NASA Technical Reports Server (NTRS)

    Dittmer, P. H.

    1977-01-01

    Observations of large-scale solar velocities were made using the mean field telescope and Babcock magnetograph of the Stanford Solar Observatory. Observations were made in the magnetically insensitive ion line at 5124 Å, with light from the center (limb) of the disk right (left) circularly polarized, so that the magnetograph measures the difference in wavelength between center and limb. Computer calculations are made of the wavelength difference produced by global pulsations for spherical harmonics up to second order and of the signal produced by displacing the solar image relative to the polarizing optics or diffraction grating.

  17. Evaluation of uncertainty in large-scale fusion metrology

    NASA Astrophysics Data System (ADS)

    Zhang, Fumin; Qu, Xinghua; Wu, Hongyan; Ye, Shenghua

    2008-12-01

    Uncertainty expression for conventional-scale measurement is well established; however, owing to the variety of error sources, it is still hard to obtain the uncertainty of large-scale instruments by common methods. In this paper, the uncertainty is evaluated by Monte Carlo simulation. The point clouds created by this method are shown through computer visualization, and a point-by-point analysis is made. Thus, in fusion measurement, apart from the uncertainty of every instrument being expressed directly, the contribution of each error source to the overall uncertainty becomes easy to calculate. Finally, an application of this method to measuring a tunnel component is given.
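
    The Monte Carlo approach amounts to repeatedly perturbing each instrument reading by its assumed error distribution and examining the spread of the fused result as a point cloud. A minimal sketch for a single polar observation (distance plus angle fused into a 2D coordinate); the instrument sigmas are illustrative assumptions, not values from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Nominal polar measurement and assumed instrument uncertainties
    d, theta = 50.0, np.deg2rad(30.0)          # metres, radians
    sigma_d, sigma_t = 0.005, np.deg2rad(0.002)

    n = 200_000
    d_mc = rng.normal(d, sigma_d, n)
    t_mc = rng.normal(theta, sigma_t, n)
    x_mc = d_mc * np.cos(t_mc)                 # point cloud of the fused coordinate
    y_mc = d_mc * np.sin(t_mc)

    # Monte Carlo spread of x vs. first-order (GUM-style) propagation:
    # Var(x) ≈ (dx/dd)^2 sigma_d^2 + (dx/dtheta)^2 sigma_t^2
    mc_sx = x_mc.std()
    lin_sx = np.sqrt((np.cos(theta) * sigma_d) ** 2
                     + (d * np.sin(theta) * sigma_t) ** 2)
    ```

    For a nearly linear measurement function the two estimates agree closely; the value of the Monte Carlo point cloud is that it remains valid when the fusion model is strongly nonlinear and also exposes the contribution of each error source, by re-running with one sigma at a time set to zero.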

  18. Large-scale sodium spray fire code validation (SOFICOV) test

    SciTech Connect

    Jeppson, D.W.; Muhlestein, L.D.

    1985-01-01

    A large-scale sodium spray fire code validation test was performed in the HEDL 850 m³ Containment System Test Facility (CSTF) as part of the Sodium Spray Fire Code Validation (SOFICOV) program. Six hundred fifty-eight kilograms of sodium was sprayed into an air atmosphere over a period of 2400 s. The sodium spray droplet sizes and spray pattern distribution were estimated. The containment atmosphere temperature and pressure response, containment wall temperature response, and sodium reaction rate with oxygen were measured. These results are compared to post-test predictions made using the SPRAY and NACOM computer codes.

  19. Large-Scale Compton Imaging for Wide-Area Surveillance

    SciTech Connect

    Lange, D J; Manini, H A; Wright, D M

    2006-03-01

    We study the performance of a large-scale Compton imaging detector placed in a low-flying aircraft, used to search wide areas for rad/nuc threat sources. In this paper we investigate the performance potential of equipping aerial platforms with gamma-ray detectors that have photon sensitivity up to a few MeV. We simulate the detector performance, and present receiver operating characteristics (ROC) curves for a benchmark scenario using a ¹³⁷Cs source. The analysis uses a realistic environmental background energy spectrum and includes air attenuation.

  20. Frequency domain multiplexing for large-scale bolometer arrays

    SciTech Connect

    Spieler, Helmuth

    2002-05-31

    The development of planar fabrication techniques for superconducting transition-edge sensors has brought large-scale arrays of 1000 pixels or more to the realm of practicality. This raises the problem of reading out a large number of sensors with a tractable number of connections. A possible solution is frequency-domain multiplexing. I summarize basic principles, present various circuit topologies, and discuss design trade-offs, noise performance, cross-talk and dynamic range. The design of a practical device and its readout system is described with a discussion of fabrication issues, practical limits and future prospects.

  1. Radiative shocks on large scale lasers. Preliminary results

    NASA Astrophysics Data System (ADS)

    Leygnac, S.; Bouquet, S.; Stehle, C.; Barroso, P.; Batani, D.; Benuzzi, A.; Cathala, B.; Chièze, J.-P.; Fleury, X.; Grandjouan, N.; Grenier, J.; Hall, T.; Henry, E.; Koenig, M.; Lafon, J. P. J.; Malka, V.; Marchet, B.; Merdji, H.; Michaut, C.; Poles, L.; Thais, F.

    2001-05-01

    Radiative shocks, whose structure is strongly influenced by the radiation field, are present in various astrophysical objects (circumstellar envelopes of variable stars, supernovae, ...). Their modeling is very difficult and will thus benefit from experimental information. This approach is now possible using large-scale lasers. Preliminary experiments were performed with the nanosecond LULI laser at Ecole Polytechnique (France) in 2000. A radiative shock was obtained in a low-pressure xenon cell. The preparation of such experiments and their interpretation are carried out using analytical calculations and numerical simulations.

  2. Solving Large-scale Eigenvalue Problems in SciDACApplications

    SciTech Connect

    Yang, Chao

    2005-06-29

    Large-scale eigenvalue problems arise in a number of DOE applications. This paper provides an overview of the recent development of eigenvalue computation in the context of two SciDAC applications. We emphasize the importance of Krylov subspace methods and point out their limitations. We discuss the value of alternative approaches that are more amenable to the use of preconditioners, and report the progress made using multi-level algebraic sub-structuring techniques to speed up eigenvalue calculations. In addition to methods for linear eigenvalue problems, we also examine new approaches to solving two types of non-linear eigenvalue problems arising from SciDAC applications.

  3. Large-scale genotoxicity assessments in the marine environment.

    PubMed

    Hose, J E

    1994-12-01

    There are a number of techniques for detecting genotoxicity in the marine environment, and many are applicable to large-scale field assessments. Certain tests can be used to evaluate responses in target organisms in situ while others utilize surrogate organisms exposed to field samples in short-term laboratory bioassays. Genotoxicity endpoints appear distinct from traditional toxicity endpoints, but some have chemical or ecotoxicologic correlates. One versatile endpoint, the frequency of anaphase aberrations, has been used in several large marine assessments to evaluate genotoxicity in the New York Bight, in sediment from San Francisco Bay, and following the Exxon Valdez oil spill. PMID:7713029

  4. [National Strategic Promotion for Large-Scale Clinical Cancer Research].

    PubMed

    Toyama, Senya

    2016-04-01

    The number of clinical research studies conducted by clinical cancer study groups has been decreasing this year in Japan. The stated reason is the abolition of donations to the groups from pharmaceutical companies after the Diovan scandal. But I suppose the fundamental problem is that a government-supported large-scale clinical cancer study system for evidence-based medicine (EBM) has not been fully established. An urgent establishment of such a system, based on a national strategy, is needed for cancer patients and for the promotion of public health.

  5. Enabling Large-Scale Biomedical Analysis in the Cloud

    PubMed Central

    Lin, Ying-Chih; Yu, Chin-Sheng; Lin, Yen-Jen

    2013-01-01

    Recent progress in high-throughput instrumentation has led to an astonishing growth in both the volume and the complexity of biomedical data collected from various sources. These planet-size data bring serious challenges to storage and computing technologies. Cloud computing is an alternative for cracking this nut because it addresses storage and high-performance computing on large-scale data concurrently. This work briefly introduces data-intensive computing systems and summarizes existing cloud-based resources in bioinformatics. These developments and applications should facilitate biomedical research by making the vast amount of diverse data meaningful and usable. PMID:24288665

  6. Large-scale deformation associated with ridge subduction

    USGS Publications Warehouse

    Geist, E.L.; Fisher, M.A.; Scholl, D. W.

    1993-01-01

    Continuum models are used to investigate the large-scale deformation associated with the subduction of aseismic ridges. Formulated in the horizontal plane using thin viscous sheet theory, these models measure the horizontal transmission of stress through the arc lithosphere accompanying ridge subduction. Modelling was used to compare the Tonga arc and Louisville ridge collision with the New Hebrides arc and d'Entrecasteaux ridge collision, which have disparate arc-ridge intersection speeds but otherwise similar characteristics. Models of both systems indicate that diffuse deformation (low values of the effective stress-strain exponent n) are required to explain the observed deformation. -from Authors

  7. A multilevel optimization of large-scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.; Sundareshan, M. K.

    1976-01-01

    A multilevel feedback control scheme is proposed for optimization of large-scale systems composed of a number of (not necessarily weakly coupled) subsystems. Local controllers are used to optimize each subsystem, ignoring the interconnections. Then, a global controller may be applied to minimize the effect of interconnections and improve the performance of the overall system. At the cost of suboptimal performance, this optimization strategy ensures invariance of suboptimality and stability of the systems under structural perturbations whereby subsystems are disconnected and again connected during operation.

  8. Large-Scale Purification of Peroxisomes for Preparative Applications.

    PubMed

    Cramer, Jana; Effelsberg, Daniel; Girzalsky, Wolfgang; Erdmann, Ralf

    2015-09-01

    This protocol is designed for large-scale isolation of highly purified peroxisomes from Saccharomyces cerevisiae using two consecutive density gradient centrifugations. Instructions are provided for harvesting up to 60 g of oleic acid-induced yeast cells for the preparation of spheroplasts and generation of organellar pellets (OPs) enriched in peroxisomes and mitochondria. The OPs are loaded onto eight continuous 36%-68% (w/v) sucrose gradients. After centrifugation, the peak peroxisomal fractions are determined by measurement of catalase activity. These fractions are subsequently pooled and subjected to a second density gradient centrifugation using 20%-40% (w/v) Nycodenz. PMID:26330621

  9. QuTiP: An open-source Python framework for the dynamics of open quantum systems

    NASA Astrophysics Data System (ADS)

    Johansson, J. R.; Nation, P. D.; Nori, Franco

    2012-08-01

    We present an object-oriented open-source framework for solving the dynamics of open quantum systems written in Python. Arbitrary Hamiltonians, including time-dependent systems, may be built up from operators and states defined by a quantum object class, and then passed on to a choice of master equation or Monte Carlo solvers. We give an overview of the basic structure for the framework before detailing the numerical simulation of open system dynamics. Several examples are given to illustrate the build-up of a complete calculation. Finally, we measure the performance of our library against that of current implementations. The framework described here is particularly well suited to the fields of quantum optics, superconducting circuit devices, nanomechanics, and trapped ions, while also being ideal for use in classroom instruction.
    Catalogue identifier: AEMB_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMB_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public License, version 3
    No. of lines in distributed program, including test data, etc.: 16 482
    No. of bytes in distributed program, including test data, etc.: 213 438
    Distribution format: tar.gz
    Programming language: Python
    Computer: i386, x86-64
    Operating system: Linux, Mac OSX, Windows
    RAM: 2+ Gigabytes
    Classification: 7
    External routines: NumPy (http://numpy.scipy.org/), SciPy (http://www.scipy.org/), Matplotlib (http://matplotlib.sourceforge.net/)
    Nature of problem: Dynamics of open quantum systems.
    Solution method: Numerical solution of the Lindblad master equation or the Monte Carlo wave function method.
    Restrictions: Problems must meet the criteria for using the master equation in Lindblad form.
    Running time: A few seconds up to several tens of minutes, depending on the size of the underlying Hilbert space.
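    As a library-free illustration of what such a master-equation solver integrates, the following sketch steps the Lindblad equation for a single decaying two-level system using plain NumPy and forward Euler (QuTiP's `mesolve` would handle this in one call; the parameter values here are arbitrary):

```python
import numpy as np

# Two-level system with basis |e> = (1, 0), |g> = (0, 1)
sm = np.array([[0, 0], [1, 0]], dtype=complex)   # sigma_minus = |g><e|
sp = sm.conj().T                                  # sigma_plus
sz = np.array([[1, 0], [0, -1]], dtype=complex)

omega, gamma = 1.0, 0.2
H = 0.5 * omega * sz                              # qubit Hamiltonian (hbar = 1)

def lindblad_rhs(rho):
    """Right-hand side of the Lindblad master equation, one decay channel."""
    unitary = -1j * (H @ rho - rho @ H)
    dissipator = gamma * (sm @ rho @ sp - 0.5 * (sp @ sm @ rho + rho @ sp @ sm))
    return unitary + dissipator

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # start in the excited state
dt, T = 1e-3, 5.0
for _ in range(int(T / dt)):                      # forward-Euler time stepping
    rho = rho + dt * lindblad_rhs(rho)

excited_pop = rho[0, 0].real   # should track exp(-gamma * t)
```

For this simple model the excited-state population decays as exp(-γt), which the Euler trajectory reproduces to within the step-size error, and the evolution preserves the trace of the density matrix.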

  10. Energy Exchange in Driven Open Quantum Systems at Strong Coupling

    NASA Astrophysics Data System (ADS)

    Carrega, Matteo; Solinas, Paolo; Sassetti, Maura; Weiss, Ulrich

    2016-06-01

    The time-dependent energy transfer in a driven quantum system strongly coupled to a heat bath is studied within an influence functional approach. Exact formal expressions for the statistics of energy dissipation into the different channels are derived. The general method is applied to the driven dissipative two-state system. It is shown that the energy flows obey a balance relation, and that, for strong coupling, the interaction may constitute the major dissipative channel. Results in analytic form are presented for the particular value K = 1/2 of strong Ohmic dissipation. The energy flows show interesting behaviors, including driving-induced coherences and quantum stochastic resonances. It is found that these general characteristics persist for K near 1/2.

  11. Arts Students and Quantum Theory in an Open University History of Science Course.

    ERIC Educational Resources Information Center

    Lawless, Clive

    1982-01-01

    In an Open University History of Science course, a unit was written to provide basic information on quantum theory for students with arts and social science backgrounds, in order to enable these students to handle the Bohr-Einstein debate. An evaluation of the unit showed that it achieved its purpose. (Author/MLW)

  12. Linear-algebraic bath transformation for simulating complex open quantum systems

    NASA Astrophysics Data System (ADS)

    Huh, Joonsuk; Mostame, Sarah; Fujita, Takatoshi; Yung, Man-Hong; Aspuru-Guzik, Alán

    2014-12-01

    In studying open quantum systems, the environment is often approximated as a collection of non-interacting harmonic oscillators, a configuration also known as the star-bath model. It is also well known that the star-bath can be transformed into a nearest-neighbor interacting chain of oscillators. The chain-bath model has been widely used in renormalization group approaches. The transformation can be obtained by recursion relations or orthogonal polynomials. Based on a simple linear algebraic approach, we propose a bath partition strategy to reduce the system-bath coupling strength. As a result, the non-interacting star-bath is transformed into a set of weakly coupled multiple parallel chains. The transformed bath model allows complex problems to be practically implemented on quantum simulators, and it can also be employed in various numerical simulations of open quantum dynamics.
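    The star-to-chain mapping that this work generalizes can be written as a Lanczos tridiagonalization of the diagonal bath Hamiltonian, started from the normalized coupling vector. Below is a minimal sketch (illustrative only: the bath frequencies and couplings are invented, and this is the standard single-chain mapping, not the authors' multiple-parallel-chain partition):

```python
import numpy as np

def star_to_chain(omega, g):
    """Map a star bath (frequencies omega, couplings g) to a chain bath.

    Returns chain site energies `alpha` and nearest-neighbour couplings
    `beta`; the system couples only to the first chain site, with
    strength ||g||.  This is the Lanczos tridiagonalization of
    diag(omega) started from the normalized coupling vector.
    """
    n = len(omega)
    H = np.diag(omega)
    V = np.zeros((n, n))
    V[:, 0] = g / np.linalg.norm(g)
    alpha, beta = np.zeros(n), np.zeros(n - 1)
    for j in range(n):
        w = H @ V[:, j]
        alpha[j] = V[:, j] @ w
        # full reorthogonalization against all previous Lanczos vectors
        w = w - V[:, :j + 1] @ (V[:, :j + 1].T @ w)
        if j < n - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    return alpha, beta

rng = np.random.default_rng(1)
omega = np.sort(rng.uniform(0.1, 2.0, 8))   # invented star-bath frequencies
g = rng.uniform(0.1, 1.0, 8)                # invented system-bath couplings
alpha, beta = star_to_chain(omega, g)
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)  # chain Hamiltonian
```

Because the transformation is unitary on the bath, the tridiagonal chain Hamiltonian reproduces the star-bath spectrum exactly.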

  13. Fast resonator reset in circuit QED using open quantum system optimal control

    NASA Astrophysics Data System (ADS)

    Boutin, Samuel; Andersen, Christian Kraglund; Venkatraman, Jayameenakshi; Blais, Alexandre

    Practical implementations of quantum information processing require repetitive qubit readout. In circuit QED, where readout is performed using a resonator dispersively coupled to the qubits, the measurement repetition rate is limited by the resonator reset time. This reset is usually performed passively by waiting several resonator decay times. Alternatively, it was recently shown that a simple pulse sequence allows the reset time to be decreased to twice the resonator decay time. In this work, we show how to further optimize the ring-down pulse sequence by using optimal control theory for open quantum systems. Using a new implementation of the open GRAPE algorithm that is well suited to large Hilbert spaces, we find active resonator reset procedures that are faster than a single resonator decay time. Simple quantum speed limits for this kind of active reset process will be discussed.

  14. The spectrum and properties of the scattering cross section of electrons in open spherical quantum dots

    SciTech Connect

    Tkach, N. V.; Seti, Ju.

    2009-03-15

    In the effective mass approximation in the model of rectangular potentials, the scattering cross section of electrons in an open spherical quantum dot is calculated for the first time. It is shown that, for such a nanosystem with a barrier of several monolayers, the experimental measurements of the scattering cross section allow adequate identification of the resonance energies and the widths of resonance states in the low-energy region of the quasi-stationary electron spectrum. It is also shown that, for an open spherical quantum dot with a low-strength potential barrier, the adequate spectral parameters of the quasi-stationary spectrum are the generalized resonance energies and widths determined via the probability of an electron being inside the quantum dot.

  15. Large-scale Direct Targeting for Drug Repositioning and Discovery.

    PubMed

    Zheng, Chunli; Guo, Zihu; Huang, Chao; Wu, Ziyin; Li, Yan; Chen, Xuetong; Fu, Yingxue; Ru, Jinlong; Ali Shar, Piar; Wang, Yuan; Wang, Yonghua

    2015-01-01

    A system-level identification of direct drug-target interactions is vital to drug repositioning and discovery. However, obtaining such interactions by biological means on a large scale remains challenging and expensive even nowadays. The available computational models mainly focus on predicting indirect interactions, or direct interactions on a small scale. To address these problems, in this work a novel algorithm termed weighted ensemble similarity (WES) has been developed to identify direct drug targets based on a large-scale set of 98,327 drug-target relationships. WES includes: (1) identifying the key ligand structural features that are highly related to the pharmacological properties in an ensemble framework; (2) determining a drug's affiliation with a target by evaluating the overall (ensemble) similarity rather than a single ligand judgment; and (3) integrating the standardized ensemble similarities (Z scores) by Bayesian network and multivariate kernel approaches to make predictions. All of this leads WES to predict direct drug targets with external and experimental test accuracies of 70% and 71%, respectively. This shows that the WES method provides a potential in silico model for drug repositioning and discovery.

  16. Strong CP Violation in Large Scale Magnetic Fields

    SciTech Connect

    Faccioli, P.; Millo, R.

    2007-11-19

    We explore the possibility of improving on the present experimental bounds on strong CP violation by studying processes in which the smallness of θ is compensated by the presence of some other very large scale. In particular, we study the response of the θ vacuum to large-scale magnetic fields, whose correlation lengths can be as large as the size of galaxy clusters. We find that, if strong interactions break CP, an external magnetic field would induce an electric vacuum polarization along the same direction. As a consequence, u,d-bar and d,u-bar quarks would accumulate in opposite regions of space, giving rise to an electric dipole moment. We estimate the magnitude of this effect both at T = 0 and for 0

  17. Alignment of quasar polarizations with large-scale structures

    NASA Astrophysics Data System (ADS)

    Hutsemékers, D.; Braibant, L.; Pelgrims, V.; Sluse, D.

    2014-12-01

    We have measured the optical linear polarization of quasars belonging to Gpc scale quasar groups at redshift z ~ 1.3. Out of 93 quasars observed, 19 are significantly polarized. We found that quasar polarization vectors are either parallel or perpendicular to the directions of the large-scale structures to which they belong. Statistical tests indicate that the probability that this effect can be attributed to randomly oriented polarization vectors is on the order of 1%. We also found that quasars with polarization perpendicular to the host structure preferentially have large emission line widths while objects with polarization parallel to the host structure preferentially have small emission line widths. Considering that quasar polarization is usually either parallel or perpendicular to the accretion disk axis depending on the inclination with respect to the line of sight, and that broader emission lines originate from quasars seen at higher inclinations, we conclude that quasar spin axes are likely parallel to their host large-scale structures. Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under program ID 092.A-0221. Table 1 is available in electronic form at http://www.aanda.org

  18. Maestro: An Orchestration Framework for Large-Scale WSN Simulations

    PubMed Central

    Riliskis, Laurynas; Osipov, Evgeny

    2014-01-01

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulations and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123

  19. Ecohydrological modeling for large-scale environmental impact assessment.

    PubMed

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Abouali, Mohammad; Herman, Matthew R; Esfahanian, Elaheh; Hamaamin, Yaseen A; Zhang, Zhen

    2016-02-01

    Ecohydrological models are frequently used to assess the biological integrity of unsampled streams. These models vary in complexity and scale, and their utility depends on their final application. Tradeoffs are usually made in model scale, where large-scale models are useful for determining broad impacts of human activities on biological conditions, and regional-scale (e.g. watershed or ecoregion) models provide stakeholders greater detail at the individual stream reach level. Given these tradeoffs, the objective of this study was to develop large-scale stream health models with reach level accuracy similar to regional-scale models, thereby allowing for impact assessments and improved decision-making capabilities. To accomplish this, four measures of biological integrity (Ephemeroptera, Plecoptera, and Trichoptera taxa (EPT), Family Index of Biotic Integrity (FIBI), Hilsenhoff Biotic Index (HBI), and fish Index of Biotic Integrity (IBI)) were modeled based on four thermal classes (cold, cold-transitional, cool, and warm) of streams that broadly dictate the distribution of aquatic biota in Michigan. The Soil and Water Assessment Tool (SWAT) was used to simulate streamflow and water quality in seven watersheds and the Hydrologic Index Tool was used to calculate 171 ecologically relevant flow regime variables. Unique variables were selected for each thermal class using a Bayesian variable selection method. The variables were then used in development of adaptive neuro-fuzzy inference systems (ANFIS) models of EPT, FIBI, HBI, and IBI. ANFIS model accuracy improved when accounting for stream thermal class rather than developing a global model. PMID:26595397

  20. Simulating the large-scale structure of HI intensity maps

    NASA Astrophysics Data System (ADS)

    Seehars, Sebastian; Paranjape, Aseem; Witzemann, Amadeus; Refregier, Alexandre; Amara, Adam; Akeret, Joel

    2016-03-01

    Intensity mapping of neutral hydrogen (HI) is a promising observational probe of cosmology and large-scale structure. We present wide-field simulations of HI intensity maps based on N-body simulations of a 2.6 Gpc/h box with 2048^3 particles (particle mass 1.6 × 10^11 Msolar/h). Using a conditional mass function to populate the simulated dark matter density field with halos below the mass resolution of the simulation (10^8 Msolar/h < Mhalo < 10^13 Msolar/h), we assign HI to those halos according to a phenomenological halo-to-HI mass relation. The simulations span a redshift range of 0.35 ≲ z ≲ 0.9 in redshift bins of width Δz ≈ 0.05 and cover a quarter of the sky at an angular resolution of about 7'. We use the simulated intensity maps to study the impact of non-linear effects and redshift space distortions on the angular clustering of HI. Focusing on the autocorrelations of the maps, we apply and compare several estimators for the angular power spectrum and its covariance. We verify that these estimators agree with analytic predictions on large scales and study the validity of approximations based on Gaussian random fields, particularly in the context of the covariance. We discuss how our results and the simulated maps can be useful for planning and interpreting future HI intensity mapping surveys.

  1. Large scale floodplain mapping using a hydrogeomorphic method

    NASA Astrophysics Data System (ADS)

    Nardi, F.; Yan, K.; Di Baldassarre, G.; Grimaldi, S.

    2013-12-01

    Floodplain landforms are clearly distinguishable from adjacent hillslopes, being the trace of the severe floods that shaped the terrain. Digital topography therefore intrinsically contains the floodplain information, and this work presents the results of applying a DEM-based large-scale hydrogeomorphic floodplain delineation method. The proposed approach, based on the integration of terrain analysis algorithms in a GIS framework, automatically identifies the potentially frequently saturated zones of riparian areas by analysing the maximum flood flow heights associated with stream network nodes with respect to the surrounding uplands. Flow heights are estimated by imposing a Leopold's law that scales with the contributing area. The presented case studies include the floodplain maps of large river basins covering the entire Italian territory, which are also used for calibrating the Leopold scaling parameters, as well as additional large international river basins with different climatic and geomorphic characteristics, laying the basis for the use of this approach in global floodplain mapping. The proposed tool could be useful for detecting hydrological change, since it can easily provide maps to verify the impact of floods on human activities and, vice versa, how human activities have changed floodplain areas at large scale.
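    The flow-height criterion can be sketched for a single stream node and a toy valley cross-section. The scaling coefficients a and b below are invented for illustration only (the paper calibrates the Leopold parameters against mapped floodplains of Italian basins):

```python
import numpy as np

# Toy 1-D valley cross-section: elevation above the channel (m) per cell
elevation = np.array([5.0, 2.5, 1.2, 0.4, 0.0, 0.3, 1.0, 2.8, 6.0])

# Leopold-type scaling of flood flow height with contributing area A (km^2):
# h = a * A**b, with a and b invented here for illustration.
a, b = 0.1, 0.3
A = 500.0          # contributing area at this stream node (invented)
h = a * A ** b     # maximum flood flow height at the node

# Floodplain cells: terrain lying at or below the flow height above the channel
floodplain = elevation <= h
```

Repeating this test for every stream network node of a DEM, with A taken from a flow-accumulation raster, yields the hydrogeomorphic floodplain mask.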

  2. Large-scale network-level processes during entrainment

    PubMed Central

    Lithari, Chrysa; Sánchez-García, Carolina; Ruhnau, Philipp; Weisz, Nathan

    2016-01-01

    Visual rhythmic stimulation evokes a robust power increase exactly at the stimulation frequency, the so-called steady-state response (SSR). Localization of visual SSRs normally shows a very focal modulation of power in visual cortex and led to the treatment and interpretation of SSRs as a local phenomenon. Given the brain network dynamics, we hypothesized that SSRs have additional large-scale effects on the brain functional network that can be revealed by means of graph theory. We used rhythmic visual stimulation at a range of frequencies (4–30 Hz), recorded MEG and investigated source level connectivity across the whole brain. Using graph theoretical measures we observed a frequency-unspecific reduction of global density in the alpha band “disconnecting” visual cortex from the rest of the network. Also, a frequency-specific increase of connectivity between occipital cortex and precuneus was found at the stimulation frequency that exhibited the highest resonance (30 Hz). In conclusion, we showed that SSRs dynamically re-organized the brain functional network. These large-scale effects should be taken into account not only when attempting to explain the nature of SSRs, but also when used in various experimental designs. PMID:26835557

  3. Scalable WIM: effective exploration in large-scale astrophysical environments.

    PubMed

    Li, Yinggang; Fu, Chi-Wing; Hanson, Andrew J

    2006-01-01

    Navigating through large-scale virtual environments such as simulations of the astrophysical Universe is difficult. The huge spatial range of astronomical models and the dominance of empty space make it hard for users to travel across cosmological scales effectively, and the problem of wayfinding further impedes the user's ability to acquire reliable spatial knowledge of astronomical contexts. We introduce a new technique called the scalable world-in-miniature (WIM) map as a unifying interface to facilitate travel and wayfinding in a virtual environment spanning gigantic spatial scales: Power-law spatial scaling enables rapid and accurate transitions among widely separated regions; logarithmically mapped miniature spaces offer a global overview mode when the full context is too large; 3D landmarks represented in the WIM are enhanced by scale, positional, and directional cues to augment spatial context awareness; a series of navigation models are incorporated into the scalable WIM to improve the performance of travel tasks posed by the unique characteristics of virtual cosmic exploration. The scalable WIM user interface supports an improved physical navigation experience and assists pragmatic cognitive understanding of a visualization context that incorporates the features of large-scale astronomy.

  4. Large-scale network-level processes during entrainment.

    PubMed

    Lithari, Chrysa; Sánchez-García, Carolina; Ruhnau, Philipp; Weisz, Nathan

    2016-03-15

    Visual rhythmic stimulation evokes a robust power increase exactly at the stimulation frequency, the so-called steady-state response (SSR). Localization of visual SSRs normally shows a very focal modulation of power in visual cortex and led to the treatment and interpretation of SSRs as a local phenomenon. Given the brain network dynamics, we hypothesized that SSRs have additional large-scale effects on the brain functional network that can be revealed by means of graph theory. We used rhythmic visual stimulation at a range of frequencies (4-30 Hz), recorded MEG and investigated source level connectivity across the whole brain. Using graph theoretical measures we observed a frequency-unspecific reduction of global density in the alpha band "disconnecting" visual cortex from the rest of the network. Also, a frequency-specific increase of connectivity between occipital cortex and precuneus was found at the stimulation frequency that exhibited the highest resonance (30 Hz). In conclusion, we showed that SSRs dynamically re-organized the brain functional network. These large-scale effects should be taken into account not only when attempting to explain the nature of SSRs, but also when used in various experimental designs. PMID:26835557

  5. Exploring Cloud Computing for Large-scale Scientific Applications

    SciTech Connect

    Lin, Guang; Han, Binh; Yin, Jian; Gorton, Ian

    2013-06-27

    This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often just require a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high performance hardware with low latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a system biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.

  6. Very sparse LSSVM reductions for large-scale data.

    PubMed

    Mall, Raghvendra; Suykens, Johan A K

    2015-05-01

    Least squares support vector machines (LSSVMs) have been widely applied for classification and regression, with performance comparable to SVMs. The LSSVM model lacks sparsity and is unable to handle large-scale data due to computational and memory constraints. A primal fixed-size LSSVM (PFS-LSSVM) introduces sparsity using the Nyström approximation with a set of prototype vectors (PVs). The PFS-LSSVM model solves an overdetermined system of linear equations in the primal. However, this solution is not the sparsest. We investigate the sparsity-error tradeoff by introducing a second level of sparsity. This is done by means of L0-norm-based reductions that iteratively sparsify the LSSVM and PFS-LSSVM models. The exact choice of the cardinality of the initial PV set is then not important, as the final model is highly sparse. The proposed method overcomes the problems of memory constraints and high computational costs, resulting in highly sparse reductions of LSSVM models. The approximations allow both models to scale to large datasets. Experiments on real-world classification and regression data sets from the UCI repository illustrate that these approaches achieve sparse models without a significant tradeoff in errors.
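    The Nyström step underlying PFS-LSSVM can be sketched as follows. This is a minimal illustration assuming an RBF kernel and randomly chosen PVs on invented data; the paper's L0-norm reduction stage is not shown:

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between rows of X and rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_features(X, pv, gamma=1.0):
    """Nystrom feature map built from a set of prototype vectors (PVs)."""
    lam, U = np.linalg.eigh(rbf(pv, pv, gamma))
    lam = np.clip(lam, 1e-12, None)          # guard tiny/negative eigenvalues
    return rbf(X, pv, gamma) @ U / np.sqrt(lam)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # invented data
pv = X[rng.choice(len(X), 50, replace=False)]  # 50 PVs for 200 points

Phi = nystrom_features(X, pv)   # n x m primal features; Phi @ Phi.T ~ K
# An LSSVM in the primal then reduces to ridge regression on Phi,
# i.e. an overdetermined linear system in only m unknowns.
exact = nystrom_features(X, X)  # with all points as PVs the map is exact
```

With fewer PVs than data points the feature map only approximates the full kernel matrix, which is exactly the sparsity-error tradeoff the paper investigates.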

  7. Large-scale anisotropy in stably stratified rotating flows.

    PubMed

    Marino, R; Mininni, P D; Rosenberg, D L; Pouquet, A

    2014-08-01

    We present results from direct numerical simulations of the Boussinesq equations in the presence of rotation and/or stratification, both in the vertical direction. The runs are forced isotropically and randomly at small scales and have spatial resolutions of up to 1024^3 grid points and Reynolds numbers of ≈1000. We first show that solutions with negative energy flux and inverse cascades develop in rotating turbulence, whether or not stratification is present. However, the purely stratified case is characterized instead by an early-time, highly anisotropic transfer to large scales with almost zero net isotropic energy flux. This is consistent with previous studies that observed the development of vertically sheared horizontal winds, although only at substantially later times. However, and unlike previous works, when sufficient scale separation is allowed between the forcing scale and the domain size, the kinetic energy displays a perpendicular (horizontal) spectrum with power-law behavior compatible with ~k_⊥^(-5/3), including in the absence of rotation. In this latter purely stratified case, such a spectrum is the result of a direct cascade of the energy contained in the large-scale horizontal wind, as is evidenced by a strong positive flux of energy in the parallel direction at all scales including the largest resolved scales.

  8. Large-scale anisotropy in stably stratified rotating flows

    SciTech Connect

    Marino, R.; Mininni, P. D.; Rosenberg, D. L.; Pouquet, A.

    2014-08-28

    We present results from direct numerical simulations of the Boussinesq equations in the presence of rotation and/or stratification, both in the vertical direction. The runs are forced isotropically and randomly at small scales and have spatial resolutions of up to $1024^3$ grid points and Reynolds numbers of $\approx 1000$. We first show that solutions with negative energy flux and inverse cascades develop in rotating turbulence, whether or not stratification is present. However, the purely stratified case is characterized instead by an early-time, highly anisotropic transfer to large scales with almost zero net isotropic energy flux. This is consistent with previous studies that observed the development of vertically sheared horizontal winds, although only at substantially later times. However, and unlike previous works, when sufficient scale separation is allowed between the forcing scale and the domain size, the total energy displays a perpendicular (horizontal) spectrum with power law behavior compatible with $\sim k_\perp^{-5/3}$, including in the absence of rotation. In this latter purely stratified case, such a spectrum is the result of a direct cascade of the energy contained in the large-scale horizontal wind, as is evidenced by a strong positive flux of energy in the parallel direction at all scales including the largest resolved scales.

  9. Large-scale anisotropy in stably stratified rotating flows

    DOE PAGES

    Marino, R.; Mininni, P. D.; Rosenberg, D. L.; Pouquet, A.

    2014-08-28

    We present results from direct numerical simulations of the Boussinesq equations in the presence of rotation and/or stratification, both in the vertical direction. The runs are forced isotropically and randomly at small scales and have spatial resolutions of up to $1024^3$ grid points and Reynolds numbers of $\approx 1000$. We first show that solutions with negative energy flux and inverse cascades develop in rotating turbulence, whether or not stratification is present. However, the purely stratified case is characterized instead by an early-time, highly anisotropic transfer to large scales with almost zero net isotropic energy flux. This is consistent with previous studies that observed the development of vertically sheared horizontal winds, although only at substantially later times. However, and unlike previous works, when sufficient scale separation is allowed between the forcing scale and the domain size, the total energy displays a perpendicular (horizontal) spectrum with power law behavior compatible with $\sim k_\perp^{-5/3}$, including in the absence of rotation. In this latter purely stratified case, such a spectrum is the result of a direct cascade of the energy contained in the large-scale horizontal wind, as is evidenced by a strong positive flux of energy in the parallel direction at all scales including the largest resolved scales.

  10. Maestro: an orchestration framework for large-scale WSN simulations.

    PubMed

    Riliskis, Laurynas; Osipov, Evgeny

    2014-01-01

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123
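
    The closing point, benchmarking cloud infrastructure to pick a VM type with the best performance/cost balance, amounts to a simple minimization over benchmark results. A hedged sketch, with made-up instance names, throughputs and prices:

```python
# Hypothetical benchmark results: simulated WSN events/sec and hourly price per VM type.
bench = {
    "small":  {"events_per_sec": 1200, "usd_per_hour": 0.05},
    "medium": {"events_per_sec": 2600, "usd_per_hour": 0.10},
    "large":  {"events_per_sec": 3000, "usd_per_hour": 0.40},
}

def cost_of_run(stats, total_events):
    # Dollars to simulate `total_events` events on one VM of this type.
    hours = total_events / stats["events_per_sec"] / 3600
    return hours * stats["usd_per_hour"]

def best_vm(bench, total_events):
    # Pick the VM type with the lowest total cost for the job.
    return min(bench, key=lambda k: cost_of_run(bench[k], total_events))

print(best_vm(bench, 10**9))  # the raw-fastest VM is not necessarily the cheapest
```

    Note that the fastest instance is not selected here: its throughput advantage does not offset its price, which is exactly the performance/cost tradeoff the benchmarking step is meant to expose.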

  11. Large-scale magnetic fields in magnetohydrodynamic turbulence.

    PubMed

    Alexakis, Alexandros

    2013-02-22

    High Reynolds number magnetohydrodynamic turbulence in the presence of zero-flux large-scale magnetic fields is investigated as a function of the magnetic field strength. For a variety of flow configurations, the energy dissipation rate ε follows the scaling ε ∝ U_rms^3/ℓ even when the large-scale magnetic field energy is twenty times larger than the kinetic energy. A further increase of the magnetic energy showed a transition to the ε ∝ U_rms^2 B_rms/ℓ scaling, implying that, at this point, magnetic shear becomes more efficient than the velocity fluctuations at cascading the energy. Strongly helical configurations form nonturbulent helicity condensates that deviate from these scalings. Weak turbulence scaling was absent from the investigation. Finally, the magnetic energy spectra support the Kolmogorov spectrum k^(-5/3), while kinetic energy spectra are closer to the Iroshnikov-Kraichnan spectrum k^(-3/2), as observed in the solar wind.

  12. The combustion behavior of large scale lithium titanate battery.

    PubMed

    Huang, Peifeng; Wang, Qingsong; Li, Ke; Ping, Ping; Sun, Jinhua

    2015-01-01

    Safety concerns remain a major obstacle to the large-scale application of lithium batteries, and knowledge of battery combustion behavior is limited. To investigate the combustion behavior of large-scale lithium batteries, three 50 Ah Li(NixCoyMnz)O2/Li4Ti5O12 batteries at different states of charge (SOC) were heated until they caught fire. The variation in flame size is depicted to characterize the combustion behavior directly, while the mass loss rate, temperature and heat release rate are used to analyze the underlying reactions. Based on these observations, the combustion process is divided into three basic stages; it becomes more complicated at higher SOC, with sudden ejections of smoke, because the Li(NixCoyMnz)O2 material undergoes a phase change from a layered to a spinel structure. The critical ignition temperatures are 112-121 °C on the anode tab and 139-147 °C on the upper surface for all cells, but the heating time and combustion time become shorter with increasing SOC. The results indicate that the battery fire hazard increases with SOC. Internal short circuits and the Li+ distribution are identified as the main causes of this difference. PMID:25586064

  13. Measuring Cosmic Expansion and Large Scale Structure with Destiny

    NASA Technical Reports Server (NTRS)

    Benford, Dominic J.; Lauer, Tod R.

    2007-01-01

    Destiny is a simple, direct, low-cost mission to determine the properties of dark energy by obtaining a cosmologically deep supernova (SN) type Ia Hubble diagram and by measuring the large-scale mass power spectrum over time. Its science instrument is a 1.65 m space telescope, featuring a near-infrared survey camera/spectrometer with a large field of view. During its first two years, Destiny will detect, observe, and characterize ~3000 SN Ia events over the redshift interval 0.4 < z < 1.7. It will then survey 1000 square degrees to measure the large-scale mass power spectrum. The combination of surveys is much more powerful than either technique on its own, and will have over an order of magnitude greater sensitivity than will be provided by ongoing ground-based projects.

  14. Exact-Differential Large-Scale Traffic Simulation

    SciTech Connect

    Hanai, Masatoshi; Suzumura, Toyotaro; Theodoropoulos, Georgios; Perumalla, Kalyan S

    2015-01-01

    Analyzing large-scale traffic by simulation requires repeated execution with many variations of scenarios or parameters. Such repeated execution involves substantial redundancy, because the change from one scenario to the next is usually minor, for example, blocking a single road or changing the speed limit on a few roads. In this paper, we propose a new redundancy reduction technique, called exact-differential simulation, which simulates only the changed parts of later scenarios while producing exactly the same results as a full simulation. The paper consists of two main efforts: (i) the key idea and algorithm of exact-differential simulation, and (ii) a method for building large-scale traffic simulation on top of it. In experiments with a Tokyo traffic simulation, exact-differential simulation reduces elapsed time by a factor of 7.26 on average, and by a factor of 2.26 even in the worst case, compared with whole simulation.
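
    The redundancy-reduction idea can be illustrated with a toy cache: recompute only the parts of a scenario whose parameters changed, and reuse exact cached results for the rest. Real traffic simulations have interactions between roads that the paper's algorithm must track; this sketch treats roads as independent purely for illustration, and all names are hypothetical.

```python
def simulate_road(length_km, speed_kmh):
    # Stand-in for an expensive per-road computation: travel time in minutes.
    return 60.0 * length_km / speed_kmh

class DifferentialSim:
    def __init__(self):
        self.cache = {}  # road_id -> (params, result)

    def run(self, scenario):
        # scenario: {road_id: (length_km, speed_kmh)}
        # Recompute a road only if its parameters differ from the cached run,
        # so later scenarios reuse exact results from earlier ones.
        results, recomputed = {}, 0
        for road, params in scenario.items():
            if self.cache.get(road, (None, None))[0] != params:
                self.cache[road] = (params, simulate_road(*params))
                recomputed += 1
            results[road] = self.cache[road][1]
        return results, recomputed

sim = DifferentialSim()
base = {"r1": (10.0, 50.0), "r2": (5.0, 60.0), "r3": (8.0, 40.0)}
_, n0 = sim.run(base)                 # first run computes every road
changed = dict(base, r2=(5.0, 30.0))  # lower one speed limit
out, n1 = sim.run(changed)            # only the changed road is redone
print(n0, n1, out["r2"])
```

    The cached and recomputed results are bitwise identical to a full re-run, which is the "exact" part of the technique; the saving comes from `n1` being much smaller than `n0` when scenario changes are minor.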

  15. Ecohydrological modeling for large-scale environmental impact assessment.

    PubMed

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Abouali, Mohammad; Herman, Matthew R; Esfahanian, Elaheh; Hamaamin, Yaseen A; Zhang, Zhen

    2016-02-01

    Ecohydrological models are frequently used to assess the biological integrity of unsampled streams. These models vary in complexity and scale, and their utility depends on their final application. Tradeoffs are usually made in model scale: large-scale models are useful for determining broad impacts of human activities on biological conditions, while regional-scale (e.g. watershed or ecoregion) models provide stakeholders greater detail at the individual stream reach level. Given these tradeoffs, the objective of this study was to develop large-scale stream health models with reach-level accuracy similar to regional-scale models, thereby allowing for impact assessment and improved decision-making capabilities. To accomplish this, four measures of biological integrity (Ephemeroptera, Plecoptera, and Trichoptera taxa (EPT), Family Index of Biotic Integrity (FIBI), Hilsenhoff Biotic Index (HBI), and fish Index of Biotic Integrity (IBI)) were modeled based on four thermal classes (cold, cold-transitional, cool, and warm) of streams that broadly dictate the distribution of aquatic biota in Michigan. The Soil and Water Assessment Tool (SWAT) was used to simulate streamflow and water quality in seven watersheds, and the Hydrologic Index Tool was used to calculate 171 ecologically relevant flow regime variables. Unique variables were selected for each thermal class using a Bayesian variable selection method. The variables were then used in the development of adaptive neuro-fuzzy inference system (ANFIS) models of EPT, FIBI, HBI, and IBI. ANFIS model accuracy improved when accounting for stream thermal class rather than developing a global model.

  16. Detecting differential protein expression in large-scale population proteomics

    SciTech Connect

    Ryu, Soyoung; Qian, Weijun; Camp, David G.; Smith, Richard D.; Tompkins, Ronald G.; Davis, Ronald W.; Xiao, Wenzhong

    2014-06-17

    Mass spectrometry-based high-throughput quantitative proteomics shows great potential in clinical biomarker studies, identifying and quantifying thousands of proteins in biological samples. However, methods are needed to appropriately handle issues unique to mass spectrometry data in order to detect as many biomarker proteins as possible. One issue is that different mass spectrometry experiments generate quite different total numbers of quantified peptides, which can result in more missing peptide abundances in an experiment with a smaller total number of quantified peptides. Another issue is that the quantification of peptides is sometimes absent, especially for less abundant peptides, and such missing values carry information about the peptide abundance. Here, we propose a Significance Analysis for Large-scale Proteomics Studies (SALPS) that handles missing peptide intensity values caused by these two mechanisms. Our model shows robust performance on both simulated data and proteomics data from a large clinical study. Because variation in patients' sample quality and drift in instrument performance are unavoidable in clinical studies performed over the course of several years, we believe that our approach will be useful for analyzing large-scale clinical proteomics data.

  17. The combustion behavior of large scale lithium titanate battery

    PubMed Central

    Huang, Peifeng; Wang, Qingsong; Li, Ke; Ping, Ping; Sun, Jinhua

    2015-01-01

    Safety concerns remain a major obstacle to the large-scale application of lithium batteries, and knowledge of battery combustion behavior is limited. To investigate the combustion behavior of large-scale lithium batteries, three 50 Ah Li(NixCoyMnz)O2/Li4Ti5O12 batteries at different states of charge (SOC) were heated until they caught fire. The variation in flame size is depicted to characterize the combustion behavior directly, while the mass loss rate, temperature and heat release rate are used to analyze the underlying reactions. Based on these observations, the combustion process is divided into three basic stages; it becomes more complicated at higher SOC, with sudden ejections of smoke, because the Li(NixCoyMnz)O2 material undergoes a phase change from a layered to a spinel structure. The critical ignition temperatures are 112-121 °C on the anode tab and 139-147 °C on the upper surface for all cells, but the heating time and combustion time become shorter with increasing SOC. The results indicate that the battery fire hazard increases with SOC. Internal short circuits and the Li+ distribution are identified as the main causes of this difference. PMID:25586064

  18. Large scale reconstruction of the solar coronal magnetic field

    NASA Astrophysics Data System (ADS)

    Amari, T.; Aly, J.-J.; Chopin, P.; Canou, A.; Mikic, Z.

    2014-10-01

    It is now becoming necessary to access the global magnetic structure of the solar low corona on a large scale in order to understand its physics, and more particularly the conditions of energization of the magnetic fields and the multiple connections between distant active regions (ARs) which may trigger eruptive events in an almost coordinated way. Various vector magnetographs, either on board spacecraft or ground-based, currently make it possible to obtain vector synoptic maps, composite magnetograms made of multiple interacting ARs, and full disk magnetograms. We present a method recently developed for reconstructing the global solar coronal magnetic field as a nonlinear force-free magnetic field in spherical geometry, generalizing our previous results in Cartesian geometry. This method is implemented in the new code XTRAPOLS, which thus appears as an extension of our active-region-scale code XTRAPOL. We apply our method by performing a reconstruction at a specific time for which we have a set of composite data consisting of a vector magnetogram provided by SDO/HMI, embedded in a larger full disk vector magnetogram provided by the same instrument, in turn embedded in a synoptic map provided by SOLIS. It turns out to be possible to access the large-scale structure of the corona and its energetic contents, and also the AR scale, at which we recover the presence of a twisted flux rope in equilibrium.

  19. THE LARGE-SCALE MAGNETIC FIELDS OF THIN ACCRETION DISKS

    SciTech Connect

    Cao Xinwu; Spruit, Hendrik C. E-mail: henk@mpa-garching.mpg.de

    2013-03-10

    Large-scale magnetic field threading an accretion disk is a key ingredient in the jet formation model. The most attractive scenario for the origin of such a large-scale field is the advection of the field by the gas in the accretion disk from the interstellar medium or a companion star. However, it is realized that outward diffusion of the accreted field is fast compared with the inward accretion velocity in a geometrically thin accretion disk if the value of the Prandtl number P_m is around unity. In this work, we revisit this problem considering the angular momentum of the disk to be removed predominantly by magnetically driven outflows. The radial velocity of the disk is significantly increased due to the presence of the outflows. Using a simplified model for the vertical disk structure, we find that even moderately weak fields can cause sufficient angular momentum loss via a magnetic wind to balance outward diffusion. There are two equilibrium points, one at low field strengths corresponding to a plasma β at the midplane of order several hundred, and one for strong accreted fields, β ≈ 1. We surmise that the first is relevant for the accretion of weak, possibly external, fields through the outer parts of the disk, while the latter could explain the tendency, observed in full three-dimensional numerical simulations, of strong flux bundles at the centers of disks to stay confined in spite of the strong magnetorotational instability turbulence surrounding them.

  20. Online education in a large scale rehabilitation institution.

    PubMed

    Mazzoleni, M Cristina; Rognoni, Carla; Pagani, Marco; Imbriani, Marcello

    2012-01-01

    Large-scale, multi-venue institutions face problems when delivering education to their healthcare staff. The present study evaluates the feasibility of relying on e-learning for at least part of the training of the Salvatore Maugeri Foundation healthcare staff. The paper reports the results of delivering e-learning courses to the personnel over a span of 7 months, in order to assess the attitude toward online course attendance, the proportion of online to traditional education administered, and the economic sustainability of the online delivery process. 37% of the total healthcare staff attended online courses, and 46% of nurses proved to be very active users. The ratios between total number of credits and total number of courses for online and traditional education are 18268/5 and 20354/96, respectively. These results show that e-learning is not a niche tool used (or usable) by only a limited number of people. Economic sustainability, assessed via personnel work hours saved, has been demonstrated. When distance learning is appropriate, online education is an effective, sustainable, and well-accepted means of supporting and promoting healthcare staff education in a large-scale institution. PMID:22491113

  1. High Speed Networking and Large-scale Simulation in Geodynamics

    NASA Technical Reports Server (NTRS)

    Kuang, Weijia; Gary, Patrick; Seablom, Michael; Truszkowski, Walt; Odubiyi, Jide; Jiang, Weiyuan; Liu, Dong

    2004-01-01

    Large-scale numerical simulation has been one of the most important approaches to understanding global geodynamical processes. In this approach, peta-scale floating point operations (pflops) are often required to carry out a single physically meaningful numerical experiment. For example, to model convective flow in the Earth's core and generation of the geomagnetic field (geodynamo), simulation over one magnetic free-decay time (approximately 15000 years) with a modest resolution of 150 in each of the three spatial dimensions would require approximately 0.2 pflops. If such a numerical model is used to predict geomagnetic secular variation over decades and longer, with e.g. an ensemble Kalman filter assimilation approach, approximately 30 (and perhaps more) independent simulations of similar scale would be needed for one data assimilation analysis. Obviously, such a simulation would require an enormous computing resource that exceeds the capacity of any single facility currently at our disposal. One solution is to utilize a very fast network (e.g. 10 Gb/s optical networks) and available middleware (e.g. Globus Toolkit) to allocate available but often heterogeneous resources for such large-scale computing efforts. At NASA GSFC, we are experimenting with such an approach by networking several clusters for geomagnetic data assimilation research. We shall present our initial testing results at the meeting.
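
    The abstract's resource estimate can be checked with back-of-envelope arithmetic: one run of ~0.2 peta floating point operations times ~30 ensemble members gives ~6 peta operations per assimilation analysis. A quick sketch (the sustained rate below is an assumed figure, not from the source):

```python
# Back-of-envelope compute budget using the numbers quoted in the abstract:
# one geodynamo run ~0.2 peta floating-point operations, ~30 ensemble members
# per assimilation analysis. The sustained rate is an assumed figure.
PETA = 1e15

ops_per_run = 0.2 * PETA
ensemble_members = 30
total_ops = ops_per_run * ensemble_members   # operations for one analysis

sustained_flops = 1e12                        # assumed 1 Tflop/s sustained
hours = total_ops / sustained_flops / 3600
print(total_ops / PETA, round(hours, 1))      # ~6 peta-ops, under two hours at the assumed rate
```

    At an assumed sustained teraflop, a single analysis is tractable on one cluster; it is the repetition of such analyses, and higher resolutions, that motivates the networked multi-cluster approach described above.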

  2. Large-Scale Low-Boom Inlet Test Overview

    NASA Technical Reports Server (NTRS)

    Hirt, Stefanie

    2011-01-01

    This presentation provides a high-level overview of the Large-Scale Low-Boom Inlet Test and was presented at the Fundamental Aeronautics 2011 Technical Conference. In October 2010, a low-boom supersonic inlet concept with flow control was tested in the 8'x6' supersonic wind tunnel at NASA Glenn Research Center (GRC). The primary objectives of the test were to evaluate the stability and operability of a large-scale low-boom supersonic inlet concept by acquiring performance and flowfield validation data, and to evaluate simple, passive, bleedless inlet boundary layer control options. During this effort two models were tested: a dual-stream inlet intended to model potential flight hardware, and a single-stream design to study a zero-degree external cowl angle and to permit surface flow visualization of the vortex generator flow control on the internal centerbody surface. The tests were conducted by a team of researchers from NASA GRC, Gulfstream Aerospace Corporation, the University of Illinois at Urbana-Champaign, and the University of Virginia.

  3. Large-scale Direct Targeting for Drug Repositioning and Discovery

    PubMed Central

    Zheng, Chunli; Guo, Zihu; Huang, Chao; Wu, Ziyin; Li, Yan; Chen, Xuetong; Fu, Yingxue; Ru, Jinlong; Ali Shar, Piar; Wang, Yuan; Wang, Yonghua

    2015-01-01

    A system-level identification of drug-target direct interactions is vital to drug repositioning and discovery. However, large-scale experimental identification remains challenging and expensive even today. The available computational models mainly focus on predicting indirect interactions, or direct interactions on a small scale. To address these problems, in this work a novel algorithm termed weighted ensemble similarity (WES) has been developed to identify direct drug targets based on a large-scale set of 98,327 drug-target relationships. WES includes: (1) identifying the key ligand structural features that are highly related to the pharmacological properties, in an ensemble framework; (2) determining a drug's affiliation with a target by evaluating the overall (ensemble) similarity rather than a single-ligand judgment; and (3) integrating the standardized ensemble similarities (Z scores) by Bayesian network and multivariate kernel approaches to make predictions. All of this leads WES to predict direct drug targets with external and experimental test accuracies of 70% and 71%, respectively. This shows that the WES method provides a potential in silico model for drug repositioning and discovery. PMID:26155766
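
    Step (2) of WES, judging a drug by its overall (ensemble) similarity to all of a target's ligands rather than by a single best match, and the Z-score standardization of step (3), can be sketched roughly as follows. This is an illustrative reconstruction with synthetic fingerprints, not the published model:

```python
import numpy as np

def tanimoto(a, b):
    # Tanimoto similarity between binary fingerprints.
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def ensemble_z(query, target_ligands, background):
    # Ensemble similarity: mean similarity of the query to ALL of a target's
    # known ligands (not a single best match), standardized as a Z score
    # against similarities to a background set of unrelated ligands.
    s = np.mean([tanimoto(query, l) for l in target_ligands])
    bg = np.array([tanimoto(query, l) for l in background])
    return (s - bg.mean()) / (bg.std() + 1e-12)

rng = np.random.default_rng(1)
motif = rng.random(64) < 0.5                                    # shared structural bits
ligands = [motif ^ (rng.random(64) < 0.05) for _ in range(20)]  # ligands near the motif
background = [rng.random(64) < 0.3 for _ in range(200)]         # unrelated ligands
query = motif ^ (rng.random(64) < 0.05)                         # drug sharing the motif
z = ensemble_z(query, ligands, background)
print(z > 3.0)  # query scores far above background => predicted direct target
```

    Standardizing against a background distribution makes scores comparable across targets whose ligand sets differ in size and diversity, which is the point of using Z scores before the Bayesian integration step.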

  4. Topographically Engineered Large Scale Nanostructures for Plasmonic Biosensing

    PubMed Central

    Xiao, Bo; Pradhan, Sangram K.; Santiago, Kevin C.; Rutherford, Gugu N.; Pradhan, Aswini K.

    2016-01-01

    We demonstrate that a nanostructured metal thin film can achieve enhanced transmission efficiency and sharp resonances, and we use a large-scale, high-throughput nanofabrication technique for the plasmonic structures. The fabrication technique combines the features of nanoimprint and soft lithography to topographically construct metal thin films with nanoscale patterns. Metal nanogratings developed using this method show significantly enhanced optical transmission (up to a one-order-of-magnitude enhancement) and sharp resonances with a full width at half maximum (FWHM) of ~15 nm in the zero-order transmission using an incoherent white light source. These nanostructures are sensitive to the surrounding environment, and the resonance shifts as the refractive index changes. We derive an analytical method using a spatial Fourier transformation to understand the enhancement phenomenon and the sensing mechanism. Real-time monitoring of protein-protein interactions in microfluidic cells integrated with these nanostructures is demonstrated to be effective for biosensing. The perpendicular transmission configuration and large-scale structures provide a feasible platform for label-free surface plasmon resonance (SPR) sensing without sophisticated optical instrumentation. PMID:27072067

  5. Extending large-scale forest inventories to assess urban forests.

    PubMed

    Corona, Piermaria; Agrimi, Mariagrazia; Baffetta, Federica; Barbati, Anna; Chiriacò, Maria Vincenza; Fattorini, Lorenzo; Pompei, Enrico; Valentini, Riccardo; Mattioli, Walter

    2012-03-01

    Urban areas are continuously expanding today, extending their influence over an increasingly large proportion of woods and trees located in or near urban and urbanizing areas, the so-called urban forests. Although these forests have the potential to significantly improve the quality of the urban environment and the well-being of the urban population, data to quantify the extent and characteristics of urban forests are still lacking or fragmentary on a large scale. In this regard, an expansion of the domain of multipurpose forest inventories like National Forest Inventories (NFIs) towards urban forests would be required. To this end, it would be convenient to exploit the same sampling scheme applied in NFIs to assess the basic features of urban forests. This paper considers approximately unbiased estimators of the abundance and coverage of urban forests, together with estimators of the corresponding variances, which can be obtained from the first phase of most large-scale forest inventories. A simulation study is carried out to check the performance of the considered estimators under various situations involving the spatial distribution of urban forests over the study area. An application is worked out on data from the Italian NFI.
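
    The kind of first-phase estimator the paper studies can be illustrated with a toy Monte Carlo check: classify sample points in a study area and estimate urban forest coverage by the sample proportion, with the usual variance estimator. All numbers below are invented for the check, and the paper's actual estimators for NFI sampling schemes are more elaborate.

```python
# Monte Carlo check that the sample-proportion estimator of coverage is
# (approximately) unbiased. The "urban forest" is a toy 0.3 x 0.3 patch
# inside a unit-square study area, so true coverage is 0.09.
import random

def in_forest(x, y):
    return 0.2 <= x < 0.5 and 0.4 <= y < 0.7

def estimate(n, rng):
    hits = sum(in_forest(rng.random(), rng.random()) for _ in range(n))
    p = hits / n
    var = p * (1 - p) / (n - 1)   # usual variance estimator for a proportion
    return p, var

rng = random.Random(42)
estimates = [estimate(500, rng)[0] for _ in range(400)]
mean_p = sum(estimates) / len(estimates)
print(round(mean_p, 3))  # close to the true coverage 0.09
```

    Averaging many repeated first-phase samples recovers the true coverage, which is the unbiasedness property the paper's simulation study examines under various spatial distributions of the forest patches.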

  6. The combustion behavior of large scale lithium titanate battery.

    PubMed

    Huang, Peifeng; Wang, Qingsong; Li, Ke; Ping, Ping; Sun, Jinhua

    2015-01-14

    Safety concerns remain a major obstacle to the large-scale application of lithium batteries, and knowledge of battery combustion behavior is limited. To investigate the combustion behavior of large-scale lithium batteries, three 50 Ah Li(NixCoyMnz)O2/Li4Ti5O12 batteries at different states of charge (SOC) were heated until they caught fire. The variation in flame size is depicted to characterize the combustion behavior directly, while the mass loss rate, temperature and heat release rate are used to analyze the underlying reactions. Based on these observations, the combustion process is divided into three basic stages; it becomes more complicated at higher SOC, with sudden ejections of smoke, because the Li(NixCoyMnz)O2 material undergoes a phase change from a layered to a spinel structure. The critical ignition temperatures are 112-121 °C on the anode tab and 139-147 °C on the upper surface for all cells, but the heating time and combustion time become shorter with increasing SOC. The results indicate that the battery fire hazard increases with SOC. Internal short circuits and the Li+ distribution are identified as the main causes of this difference.

  7. The combustion behavior of large scale lithium titanate battery

    NASA Astrophysics Data System (ADS)

    Huang, Peifeng; Wang, Qingsong; Li, Ke; Ping, Ping; Sun, Jinhua

    2015-01-01

    Safety concerns remain a major obstacle to the large-scale application of lithium batteries, and knowledge of battery combustion behavior is limited. To investigate the combustion behavior of large-scale lithium batteries, three 50 Ah Li(NixCoyMnz)O2/Li4Ti5O12 batteries at different states of charge (SOC) were heated until they caught fire. The variation in flame size is depicted to characterize the combustion behavior directly, while the mass loss rate, temperature and heat release rate are used to analyze the underlying reactions. Based on these observations, the combustion process is divided into three basic stages; it becomes more complicated at higher SOC, with sudden ejections of smoke, because the Li(NixCoyMnz)O2 material undergoes a phase change from a layered to a spinel structure. The critical ignition temperatures are 112-121 °C on the anode tab and 139-147 °C on the upper surface for all cells, but the heating time and combustion time become shorter with increasing SOC. The results indicate that the battery fire hazard increases with SOC. Internal short circuits and the Li+ distribution are identified as the main causes of this difference.

  8. The effective field theory of cosmological large scale structures

    SciTech Connect

    Carrasco, John Joseph M.; Hertzberg, Mark P.; Senatore, Leonardo

    2012-09-20

    Large scale structure surveys will likely become the next leading cosmological probe. In our universe, matter perturbations are large on short distances and small at long scales, i.e. strongly coupled in the UV and weakly coupled in the IR. To make precise analytical predictions on large scales, we develop an effective field theory formulated in terms of an IR effective fluid characterized by several parameters, such as speed of sound and viscosity. These parameters, determined by the UV physics described by the Boltzmann equation, are measured from N-body simulations. We find that the speed of sound of the effective fluid is c_s^2 ≈ 10^-6 c^2 and that the viscosity contributions are of the same order. The fluid describes all the relevant physics at long scales k and permits a manifestly convergent perturbative expansion in the size of the matter perturbations δ(k) for all the observables. As an example, we calculate the correction to the power spectrum at order δ(k)^4. As a result, the predictions of the effective field theory are found to be in much better agreement with observation than standard cosmological perturbation theory, already reaching percent precision at this order up to a relatively short scale k ≃ 0.24 h Mpc^-1.

  9. [Privacy and public benefit in using large scale health databases].

    PubMed

    Yamamoto, Ryuichi

    2014-01-01

    In Japan, large-scale health databases such as the National Database of health insurance claims and health checkups (NDB) and the Japanese Sentinel project have been constructed within the past few years, but several legal issues must be resolved to strike an adequate balance between privacy and the public benefit of using them. The NDB operates under the act on healthcare for elderly persons, which says nothing about using the database for the general public benefit; researchers using it are therefore forced to devote so much attention to anonymization and information security that the research itself may be hindered. The Japanese Sentinel project is a national project for detecting adverse drug reactions using large, distributed clinical databases of major hospitals. Although patients give broad prospective consent to such uses for the public good, the use of insufficiently anonymized data remains under discussion. Generally speaking, research conducted for public benefit does not infringe patients' privacy, but vague and complex legislative requirements on personal data protection can obstruct it. Medical science cannot progress without clinical information, so legislation that is simple and clear for both researchers and patients is strongly needed. In Japan, a specific act for balancing privacy and public benefit is now under discussion. The author recommends that researchers, including those in the field of pharmacology, pay attention to, participate in the discussion of, and make suggestions regarding such acts and regulations.

  10. Power suppression at large scales in string inflation

    SciTech Connect

    Cicoli, Michele; Downes, Sean; Dutta, Bhaskar E-mail: sddownes@physics.tamu.edu

    2013-12-01

    We study a possible origin of the anomalous suppression of the power spectrum at large angular scales in the cosmic microwave background within the framework of explicit string inflationary models where inflation is driven by a closed string modulus parameterizing the size of the extra dimensions. In this class of models the apparent power loss at large scales is caused by the background dynamics which involves a sharp transition from a fast-roll power law phase to a period of Starobinsky-like slow-roll inflation. An interesting feature of this class of string inflationary models is that the number of e-foldings of inflation is inversely proportional to the string coupling to a positive power. Therefore once the string coupling is tuned to small values in order to trust string perturbation theory, enough e-foldings of inflation are automatically obtained without the need of extra tuning. Moreover, in the less tuned cases the sharp transition responsible for the power loss takes place just before the last 50-60 e-foldings of inflation. We illustrate these general claims in the case of Fibre Inflation where we study the strength of this transition in terms of the attractor dynamics, finding that it induces a pivot from a blue to a redshifted power spectrum which can explain the apparent large scale power loss. We compute the effects of this pivot for example cases and demonstrate how magnitude and duration of this effect depend on model parameters.

  11. NIC-based Reduction Algorithms for Large-scale Clusters

    SciTech Connect

    Petrini, F; Moody, A T; Fernandez, J; Frachtenberg, E; Panda, D K

    2004-07-30

    Efficient algorithms for reduction operations across a group of processes are crucial for good performance in many large-scale, parallel scientific applications. While previous algorithms limit processing to the host CPU, we utilize the programmable processors and local memory available on modern cluster network interface cards (NICs) to explore a new dimension in the design of reduction algorithms. In this paper, we present the benefits and challenges, design issues and solutions, analytical models, and experimental evaluations of a family of NIC-based reduction algorithms. Performance and scalability evaluations were conducted on the ASCI Linux Cluster (ALC), a 960-node, 1920-processor machine at Lawrence Livermore National Laboratory, which uses the Quadrics QsNet interconnect. We find NIC-based reductions on modern interconnects to be more efficient than host-based implementations in both scalability and consistency. In particular, at large-scale--1812 processes--NIC-based reductions of small integer and floating-point arrays provided respective speedups of 121% and 39% over the host-based, production-level MPI implementation.
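    The pairwise combining tree underlying such reduction algorithms can be sketched in a few lines. The following is a hypothetical pure-Python model of the combining pattern only (not the NIC-offloaded implementation evaluated in the paper), assuming a power-of-two process count:

    ```python
    # Sketch: recursive-doubling reduction tree over n "processes".
    # At each round, half the remaining ranks fold their partner's value in,
    # so the result reaches rank 0 after log2(n) rounds.
    def tree_reduce(values, op):
        """Combine `values` pairwise in log2(n) rounds, as a reduction tree would."""
        vals = list(values)
        n = len(vals)
        assert n & (n - 1) == 0, "sketch assumes a power-of-two process count"
        step = 1
        while step < n:
            for i in range(0, n, 2 * step):
                vals[i] = op(vals[i], vals[i + step])  # exchange with partner rank
            step *= 2
        return vals[0]  # rank 0 holds the final result

    print(tree_reduce(range(8), lambda a, b: a + b))  # 28
    ```

    The point of the tree shape is that the number of communication rounds grows logarithmically with the process count, which is what makes offloading each round to the NIC worthwhile at scale.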

  12. Evaluating Unmanned Aerial Platforms for Cultural Heritage Large Scale Mapping

    NASA Astrophysics Data System (ADS)

    Georgopoulos, A.; Oikonomou, C.; Adamopoulos, E.; Stathopoulou, E. K.

    2016-06-01

    Large-scale mapping of limited areas, especially cultural heritage sites, is a demanding task. Optical and non-optical sensors, such as LiDAR units, have been miniaturized to sizes and weights that unmanned aerial platforms can lift. At the same time, there is increasing emphasis on solutions that give users faster and cheaper access to 3D information. Considering the multitude of platforms and cameras, and the advancement of algorithms in conjunction with the increase in available computing power, this challenge deserves and indeed receives further investigation. This paper attempts a short review of today's UAS technologies, followed by a discussion of their applicability and advantages, which depend on their widely varying specifications. The available on-board cameras are also compared and evaluated for large-scale mapping. Furthermore, a thorough analysis, review, and experimentation with different software implementations of Structure from Motion and Multiple View Stereo algorithms, able to process dense and mostly unordered sequences of digital images, is conducted and presented. As a test data set, we use rich optical and thermal data from both fixed-wing and multi-rotor platforms, acquired with different cameras over an archaeological excavation with adverse height variations. Dense 3D point clouds, digital terrain models and orthophotos have been produced and evaluated for their radiometric as well as metric qualities.

  13. Halo detection via large-scale Bayesian inference

    NASA Astrophysics Data System (ADS)

    Merson, Alexander I.; Jasche, Jens; Abdalla, Filipe B.; Lahav, Ofer; Wandelt, Benjamin; Jones, D. Heath; Colless, Matthew

    2016-08-01

    We present a proof-of-concept of a novel and fully Bayesian methodology designed to detect haloes of different masses in cosmological observations subject to noise and systematic uncertainties. Our methodology combines the previously published Bayesian large-scale structure inference algorithm, HAmiltonian Density Estimation and Sampling algorithm (HADES), and a Bayesian chain rule (the Blackwell-Rao estimator), which we use to connect the inferred density field to the properties of dark matter haloes. To demonstrate the capability of our approach, we construct a realistic galaxy mock catalogue emulating the wide-area 6-degree Field Galaxy Survey, which has a median redshift of approximately 0.05. Application of HADES to the catalogue provides us with accurately inferred three-dimensional density fields and corresponding quantification of uncertainties inherent to any cosmological observation. We then use a cosmological simulation to relate the amplitude of the density field to the probability of detecting a halo with mass above a specified threshold. With this information, we can sum over the HADES density field realisations to construct maps of detection probabilities and demonstrate the validity of this approach within our mock scenario. We find that the probability of successful detection of haloes in the mock catalogue increases as a function of the signal to noise of the local galaxy observations. Our proposed methodology can easily be extended to account for more complex scientific questions and is a promising novel tool to analyse the cosmic large-scale structure in observations.
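    The final step described above, summing over density-field realisations to build detection-probability maps, amounts to a Monte Carlo average of a calibrated detection probability. A minimal sketch follows; the calibration function `p_detect` is a hypothetical logistic stand-in for the simulation-derived relation, and the Gaussian "realisations" stand in for actual HADES posterior samples:

    ```python
    import numpy as np

    # 200 hypothetical posterior samples of a 32x32 density field (stand-ins
    # for HADES realisations; real fields would be 3D and non-Gaussian).
    rng = np.random.default_rng(0)
    realisations = rng.normal(size=(200, 32, 32))

    def p_detect(delta, delta_c=1.5, width=0.3):
        """Hypothetical calibration: P(halo above mass threshold | density).
        A logistic curve rising around an overdensity delta_c."""
        return 1.0 / (1.0 + np.exp(-(delta - delta_c) / width))

    # Marginalise over posterior samples: average the per-sample probabilities.
    prob_map = p_detect(realisations).mean(axis=0)
    print(prob_map.shape)  # (32, 32)
    ```

    Because each realisation carries the inference uncertainty, the averaged map automatically propagates that uncertainty into the detection probabilities.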

  14. Large scale CMB anomalies from thawing cosmic strings

    NASA Astrophysics Data System (ADS)

    Ringeval, Christophe; Yamauchi, Daisuke; Yokoyama, Jun'ichi; Bouchet, François R.

    2016-02-01

    Cosmic strings formed during inflation are expected to be either diluted over super-Hubble distances, i.e., invisible today, or to have crossed our past light cone very recently. We discuss the latter situation, in which a few strings imprint their signature in the Cosmic Microwave Background (CMB) anisotropies after recombination. Being almost frozen in the Hubble flow, these strings are quasi-static and evade almost all of the previously derived constraints on their tension while still being able to source large-scale anisotropies in the CMB sky. Using a local variance estimator on thousands of numerically simulated Nambu-Goto all-sky maps, we compute the expected signal and show that it can mimic a dipole modulation at large angular scales while being negligible at small angles. Interestingly, such a scenario generically produces one cold spot from the thawing of a cosmic string loop. Mixed with anisotropies of inflationary origin, we find that a few strings of tension Gμ = O(1) × 10^-6 match the amplitude of the dipole modulation reported in the Planck satellite measurements and could be at the origin of other large-scale anomalies.

  15. Very large-scale motions in a turbulent pipe flow

    NASA Astrophysics Data System (ADS)

    Lee, Jae Hwa; Jang, Seong Jae; Sung, Hyung Jin

    2011-11-01

    Direct numerical simulation of a turbulent pipe flow at ReD=35000 was performed to investigate the spatially coherent structures associated with very large-scale motions. The corresponding friction Reynolds number, based on pipe radius R, is R+=934, and the computational domain length is 30R. The computed mean flow statistics agree well with previous DNS data at ReD=44000 and 24000. Inspection of the instantaneous fields and the two-point correlation of the streamwise velocity fluctuations showed that very long meandering motions exceeding 25R exist in the logarithmic and wake regions, and that the streamwise length scale increases almost linearly up to y/R ~ 0.3, whereas the structures in a turbulent boundary layer only reach the edge of the log-layer. Time-resolved instantaneous fields revealed that hairpin packet-like structures grow through continuous stretching along the streamwise direction and create the very large-scale structures, which meander in the spanwise direction, consistent with the earlier conceptual model of Kim & Adrian (1999). This work was supported by the Creative Research Initiatives of NRF/MEST of Korea (No. 2011-0000423).
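    The two-point correlation diagnostic used above to measure streamwise length scales can be sketched on synthetic data. The FFT-based estimator below assumes a periodic streamwise direction (as in the DNS domain); the signal itself is synthetic, not a DNS field:

    ```python
    import numpy as np

    def two_point_corr(u):
        """R(dx) = <u'(x) u'(x+dx)> / <u'^2>, periodic in x (FFT-based).
        By the Wiener-Khinchin theorem, the circular autocorrelation is the
        inverse transform of the power spectrum."""
        up = u - u.mean()                      # velocity fluctuation u'
        spec = np.abs(np.fft.rfft(up)) ** 2    # power spectrum
        r = np.fft.irfft(spec, n=up.size)      # circular autocorrelation
        return r / r[0]                        # normalise so R(0) = 1

    # Synthetic "streamwise velocity": a long wave plus small-scale noise.
    x = np.linspace(0, 2 * np.pi, 512, endpoint=False)
    u = np.sin(4 * x) + 0.1 * np.random.default_rng(1).normal(size=x.size)
    R = two_point_corr(u)
    print(R[0])  # 1.0 by construction
    ```

    In practice the streamwise extent of a coherent motion is read off from where R drops below some threshold; the paper's observation of correlations persisting beyond 25R is what motivates the 30R domain length.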

  16. IP over optical multicasting for large-scale video delivery

    NASA Astrophysics Data System (ADS)

    Jin, Yaohui; Hu, Weisheng; Sun, Weiqiang; Guo, Wei

    2007-11-01

    In IPTV systems, multicasting will play a crucial role in the delivery of high-quality video services, as it can significantly improve bandwidth efficiency. However, the scalability and signal quality of current IPTV can barely compete with existing broadcast digital TV systems, since it is difficult to implement large-scale multicasting with end-to-end guaranteed quality of service (QoS) in a packet-switched IP network. The China 3TNet project aimed to build a high-performance broadband trial network to support large-scale concurrent streaming media and interactive multimedia services. The innovative idea of 3TNet is that an automatically switched optical network (ASON) with the capability of dynamic point-to-multipoint (P2MP) connections replaces the conventional IP multicasting network in the transport core, while the edge remains an IP multicasting network. In this paper, we introduce the network architecture and discuss challenges in such IP over optical multicasting for video delivery.

  17. A study of synthetic large scales in turbulent boundary layers

    NASA Astrophysics Data System (ADS)

    Duvvuri, Subrahmanyam; Luhar, Mitul; Barnard, Casey; Sheplak, Mark; McKeon, Beverley

    2013-11-01

    Synthetic spanwise-constant spatio-temporal disturbances are excited in a turbulent boundary layer through a spatially impulsive patch of dynamic wall-roughness. The downstream flow response is studied through hot wire anemometry, pressure measurements at the wall and direct measurements of wall-shear-stress made using a novel micro-machined capacitive floating element sensor. These measurements are phase-locked to the input perturbation to recover the synthetic large-scale motion and characterize its structure and wall signature. The phase relationship between the synthetic large scale and small scale activity provides further insights into the apparent amplitude modulation effect between them, and the dynamics of wall-bounded turbulent flows in general. Results from these experiments will be discussed in the context of the critical-layer behavior revealed by the resolvent analysis of McKeon & Sharma (J Fluid Mech, 2010), and compared with similar earlier work by Jacobi & McKeon (J Fluid Mech, 2011). Model predictions are shown to be in broad agreement with experiments. The support of AFOSR grant #FA 9550-12-1-0469, Resnick Institute Graduate Research Fellowship (S.D.) and Sandia Graduate Fellowship (C.B.) are gratefully acknowledged.

  18. Large-scale mapping of mutations affecting zebrafish development

    PubMed Central

    Geisler, Robert; Rauch, Gerd-Jörg; Geiger-Rudolph, Silke; Albrecht, Andrea; van Bebber, Frauke; Berger, Andrea; Busch-Nentwich, Elisabeth; Dahm, Ralf; Dekens, Marcus PS; Dooley, Christopher; Elli, Alexandra F; Gehring, Ines; Geiger, Horst; Geisler, Maria; Glaser, Stefanie; Holley, Scott; Huber, Matthias; Kerr, Andy; Kirn, Anette; Knirsch, Martina; Konantz, Martina; Küchler, Axel M; Maderspacher, Florian; Neuhauss, Stephan C; Nicolson, Teresa; Ober, Elke A; Praeg, Elke; Ray, Russell; Rentzsch, Brit; Rick, Jens M; Rief, Eva; Schauerte, Heike E; Schepp, Carsten P; Schönberger, Ulrike; Schonthaler, Helia B; Seiler, Christoph; Sidi, Samuel; Söllner, Christian; Wehner, Anja; Weiler, Christian; Nüsslein-Volhard, Christiane

    2007-01-01

    Background Large-scale mutagenesis screens in the zebrafish employing the mutagen ENU have isolated several hundred mutant loci that represent putative developmental control genes. In order to realize the potential of such screens, systematic genetic mapping of the mutations is necessary. Here we report on a large-scale effort to map the mutations generated in mutagenesis screening at the Max Planck Institute for Developmental Biology by genome scanning with microsatellite markers. Results We have selected a set of microsatellite markers and developed methods and scoring criteria suitable for efficient, high-throughput genome scanning. We have used these methods to successfully obtain a rough map position for 319 mutant loci from the Tübingen I mutagenesis screen and subsequent screening of the mutant collection. For 277 of these the corresponding gene is not yet identified. Mapping was successful for 80 % of the tested loci. By comparing 21 mutation and gene positions of cloned mutations we have validated the correctness of our linkage group assignments and estimated the standard error of our map positions to be approximately 6 cM. Conclusion By obtaining rough map positions for over 300 zebrafish loci with developmental phenotypes, we have generated a dataset that will be useful not only for cloning of the affected genes, but also to suggest allelism of mutations with similar phenotypes that will be identified in future screens. Furthermore this work validates the usefulness of our methodology for rapid, systematic and inexpensive microsatellite mapping of zebrafish mutations. PMID:17212827

  19. Large-scale climatic control on European precipitation

    NASA Astrophysics Data System (ADS)

    Lavers, David; Prudhomme, Christel; Hannah, David

    2010-05-01

    Precipitation variability has a significant impact on society. Sectors such as agriculture and water resources management are reliant on predictable and reliable precipitation supply with extreme variability having potentially adverse socio-economic impacts. Therefore, understanding the climate drivers of precipitation is of human relevance. This research examines the strength, location and seasonality of links between precipitation and large-scale Mean Sea Level Pressure (MSLP) fields across Europe. In particular, we aim to evaluate whether European precipitation is correlated with the same atmospheric circulation patterns or if there is a strong spatial and/or seasonal variation in the strength and location of centres of correlations. The work exploits time series of gridded ERA-40 MSLP on a 2.5˚×2.5˚ grid (0˚N-90˚N and 90˚W-90˚E) and gridded European precipitation from the Ensemble project on a 0.5°×0.5° grid (36.25˚N-74.25˚N and 10.25˚W-24.75˚E). Monthly Spearman rank correlation analysis was performed between MSLP and precipitation. During winter, a significant MSLP-precipitation correlation dipole pattern exists across Europe. Strong negative (positive) correlation located near the Icelandic Low and positive (negative) correlation near the Azores High pressure centres are found in northern (southern) Europe. These correlation dipoles resemble the structure of the North Atlantic Oscillation (NAO). The reversal in the correlation dipole patterns occurs at the latitude of central France, with regions to the north (British Isles, northern France, Scandinavia) having a positive relationship with the NAO, and regions to the south (Italy, Portugal, southern France, Spain) exhibiting a negative relationship with the NAO. In the lee of mountain ranges of eastern Britain and central Sweden, correlation with North Atlantic MSLP is reduced, reflecting a reduced influence of westerly flow on precipitation generation as the mountains act as a barrier to moist
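    The monthly Spearman rank correlation analysis described above can be sketched at a single grid cell with synthetic series standing in for the ERA-40 MSLP and ENSEMBLES precipitation data; the NAO-like negative coupling below is illustrative only:

    ```python
    import numpy as np

    def rankdata(a):
        """Simple ranks 1..n (no tie handling; adequate for continuous fields)."""
        ranks = np.empty(a.size)
        ranks[np.argsort(a)] = np.arange(1, a.size + 1)
        return ranks

    def spearman(x, y):
        """Spearman rank correlation = Pearson correlation of the ranks."""
        rx, ry = rankdata(x), rankdata(y)
        rx -= rx.mean()
        ry -= ry.mean()
        return (rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry))

    # Synthetic stand-ins: 120 winter months of MSLP at a pressure centre and
    # precipitation at one grid cell with an NAO-like negative coupling.
    rng = np.random.default_rng(2)
    mslp = rng.normal(size=120)
    precip = -0.7 * mslp + rng.normal(size=120)
    print(round(spearman(mslp, precip), 2))  # strongly negative
    ```

    Repeating this per grid cell and per calendar month, against each MSLP grid point, yields the correlation maps whose dipole structure the abstract relates to the NAO.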

  20. Nearly incompressible fluids: hydrodynamics and large scale inhomogeneity.

    PubMed

    Hunana, P; Zank, G P; Shaikh, D

    2006-08-01

    A system of hydrodynamic equations in the presence of large-scale inhomogeneities for a high plasma beta solar wind is derived. The theory is derived under the assumption of low turbulent Mach number and is developed for the flows where the usual incompressible description is not satisfactory and a full compressible treatment is too complex for any analytical studies. When the effects of compressibility are incorporated only weakly, a new description, referred to as "nearly incompressible hydrodynamics," is obtained. The nearly incompressible theory, was originally applied to homogeneous flows. However, large-scale gradients in density, pressure, temperature, etc., are typical in the solar wind and it was unclear how inhomogeneities would affect the usual incompressible and nearly incompressible descriptions. In the homogeneous case, the lowest order expansion of the fully compressible equations leads to the usual incompressible equations, followed at higher orders by the nearly incompressible equations, as introduced by Zank and Matthaeus. With this work we show that the inclusion of large-scale inhomogeneities (in this case time-independent and radially symmetric background solar wind) modifies the leading-order incompressible description of solar wind flow. We find, for example, that the divergence of velocity fluctuations is nonsolenoidal and that density fluctuations can be described to leading order as a passive scalar. Locally (for small lengthscales), this system of equations converges to the usual incompressible equations and we therefore use the term "locally incompressible" to describe the equations. This term should be distinguished from the term "nearly incompressible," which is reserved for higher-order corrections. Furthermore, we find that density fluctuations scale with Mach number linearly, in contrast to the original homogeneous nearly incompressible theory, in which density fluctuations scale with the square of Mach number. 
Inhomogeneous nearly